The 6 Best Apps to Improve Your Problem-Solving Skills
Want to improve your problem-solving skills and become more solution-oriented in your daily routine? Here are some apps to try.
Your ability to solve problems is a valuable skill you cannot do without if you want to succeed in your career, business, and life. Most people learn to solve problems through exposure to challenging situations that force them to find solutions quickly, but not everyone gets that kind of practice, and not everyone ends up skilled at effective problem-solving as a result. Fortunately, there is an easy way to improve your problem-solving skills using technology: playing brain games on your phone. Here are six game apps you can use to develop problem-solving skills while having fun.
Lumosity is a brain-training app that helps you improve your mental skills. It offers activities designed to improve memory, flexibility, information-processing speed, and concentration, making it a great tool for developing problem-solving capabilities.
Lumosity was launched in 2007 and had over 70 million users as of January 2015. The app is available in English, French, Spanish, and German.
Download: Lumosity for Android | iOS (Free, in-app purchases available)
Happify is a company that works to enhance personal, organizational, and healthcare effectiveness by improving the emotional health of its users.
The Happify app incorporates scientific experiments into gaming activities designed to improve resilience and mindfulness and tackle health conditions like mood disorders, depression, anxiety, severe pain, and insomnia. Thus, it is a great healthcare software platform for improving your mental and physical conditions.
Download: Happify for Android | iOS (Free, in-app purchases available)
Launched by Elevate Labs in 2014, Elevate is a brain game app that focuses on improving its users' reading, writing, speaking, listening, and math skills. It is also one of the best Android apps for helping you solve math problems.
If you already possess the skills mentioned above, you may wonder whether the Elevate brain training app is worth your time. The truth is, there is always room for improvement, so it pays to keep developing these skills. As you pass each assessment in the training sessions, the difficulty level increases, letting you test whether your abilities are basic or strong.
Download: Elevate for Android | iOS (Free, in-app purchases available)
NeuroNation is a cognitive training website and app that launched publicly in 2011. Since then, over 10 million people have used it. The app focuses on improving users' cognitive abilities, such as thinking, learning, understanding, and remembering, through specialized training activities.
Although developed in Germany, the app is available in eight languages: English, French, Spanish, Italian, Portuguese, Russian, Turkish, and German. It also enjoys widespread use, especially among German healthcare practitioners.
Download: NeuroNation for Android | iOS (Free, in-app purchases available)
The Peak brain training app is designed to sharpen cognitive skills with the help of short, interactive games. To get started, you set goals in the areas you want to improve, such as mental processing, emotional strength, linguistic skills, recollection, concentration, and problem-solving.
Once you complete this stage, a virtual coach will be assigned to guide you through the program, and you will be given an assessment. Immediately after you finish each assessment, you will receive feedback based on your result.
You can start with the free basic version, which offers a limited number of randomly selected daily exercises. With the paid version, Peak Pro, you get unlimited access to more than 40 exercises, along with detailed feedback and personalized training sessions.
Download: Peak for Android | iOS (Free, in-app purchases available)
6. New York Times Crossword
The New York Times Crossword is a daily puzzle published by the renowned New York Times newspaper on its website and mobile apps. The puzzle dates back to Sunday, February 15, 1942, when the first one was published.
Several freelance constructors create the puzzles, which get harder as the week progresses: the simplest run on Mondays and the hardest on Saturdays. The Sunday crossword is a 21×21 grid, whereas the daily crossword is a 15×15 grid. Crosswords and other free puzzle games greatly improve critical thinking, learning, and reasoning abilities.
Download: New York Times Crossword for Android | iOS (Free, in-app purchases available)
Have Fun While Improving Your Problem-Solving Skills
Developing cognitive abilities, emotional well-being, and problem-solving skills no longer has to be a challenge. Thanks to these brain game apps, you can build and improve your mental and emotional abilities more easily, more quickly, and while having fun.
Best Apps for Problem Solving: Top Picks for Effective Solutions
In today’s fast-paced and technology-driven world, problem-solving skills are becoming increasingly crucial for personal and professional success. It is important to recognize that problem-solving involves more than just finding answers to a problem; it’s about understanding the problem, identifying potential solutions, and making informed decisions. In this regard, problem-solving apps have the potential to drastically improve one’s abilities in a variety of areas, as well as provide educational benefits.
These apps come in many forms, addressing everything from math and science problems to enhancing attention and concentration. They offer accessibility and support for users seeking to bolster their problem-solving capabilities. In addition to educational applications, many of these apps are also tailored to address challenges in industries like business and mental health or even to provide career guidance.
- Problem-solving apps cover a wide range of topics and can improve both personal and professional success.
- Educational benefits are a significant aspect of these apps, as they deal with various subjects such as math, science, and concentration.
- Apps that provide support for problem-solving extend beyond education, addressing challenges in business, mental health, and career guidance.
Recognizing Problem-Solving Apps
Today’s technology landscape is filled with mobile apps that aim to address various challenges we face daily. Recognizing essential problem-solving apps for iOS and Android devices can significantly improve our efficiency and simplify our lives.
- Lumosity: This app is designed to improve mental skills with engaging activities that target memory, flexibility, information-processing speed, and concentration levels. Lumosity is well suited to individuals looking to enhance their problem-solving capabilities.
- Braingle: Braingle stands out for its focus on mental sharpness and reasoning through riddles and visual illusions. Compared with other apps, Braingle offers a unique approach to problem-solving, relying on reasoning puzzles rather than memory and reaction-based tasks.
- Educurious: This website offers supplemental apps that aim to turn students into "developing experts" by connecting them with real-world mentors and providing problem-based learning activities.
- Photomath: An app that simplifies mathematical problem-solving, Photomath allows users to scan a math problem with their device's camera and receive instant solutions, explanations, and step-by-step instructions.
By incorporating these problem-solving apps into daily routines, users can enhance their mental abilities and find solutions to everyday challenges more effectively. Focusing on the right technology and investing time in useful software will undoubtedly improve one’s overall problem-solving skills.
Educational Benefits of Problem-Solving Apps
Boosting Learning Skills
Problem-solving apps provide a variety of educational benefits to users, aiding in the development of crucial learning skills. These apps target different aspects of learning, such as memory, reading, writing, and listening, by presenting engaging challenges and activities. As users navigate through these tasks, they gain valuable insights and ideas that contribute to their overall understanding of an array of subjects.
Apps such as Lumosity take a personalized approach to learning, offering tailored activities to improve memory, attention, speed, and problem-solving skills. This adaptability allows users to progress at their own pace while receiving appropriate guidance and support.
Enhancing Critical Thinking
Problem-solving apps also play a vital role in enhancing critical thinking abilities. By offering various challenges and exercises, these apps motivate users to employ creative thinking, logical reasoning, and decision-making skills. Through continuous practice and application, the users develop a deeper understanding of concepts and improve their ability to analyze and evaluate scenarios.
Moreover, apps like Educurious, which connects students with real-world mentors and incorporates a Common Core-aligned curriculum, help students build critical thinking and problem-based learning skills in line with 21st-century technology demands.
In conclusion, problem-solving apps offer numerous educational benefits, from boosting learning skills to enhancing critical thinking. As users engage with these apps, they become more confident in their learning abilities, paving the way for higher academic achievement and lifelong learning.
Applications in Math and Science
In this digital age, there are numerous apps and websites available to help students develop problem-solving skills in math and science. These resources provide interactive, engaging, and adaptive platforms to enhance their educational experience.
Apps for Math Problems
From basic calculations to more complex topics like algebra, calculus, and word problems, math apps offer an excellent way to empower students with the tools they need for success.
One such resource is Mathway, which caters to a wide range of mathematical topics. Mathway enables students to input math problems, offering step-by-step solutions and explanations to further their understanding. The app even has a graphing feature for visual learners.
Another engaging resource is Moose Math, a free app built around math games. These games help younger students refine math skills such as counting, addition, and subtraction, awarding points for completed challenges.
Applications for Science Tasks
When it comes to science, students need a comprehensive understanding of various concepts across physics, chemistry, and biology. Several apps can help with this intricate learning process.
Brilliant offers hands-on, interactive lessons that build quantitative skills across math and science. The platform covers core topics like algebraic functions and quadratics, and even computer science concepts. Brilliant is designed to help students dive deep into problem-solving by breaking down complex topics and providing in-depth examples.
To assist students with their homework, websites like Educators Technology offer curated selections of problem-solver apps. These apps not only tackle math problems but also reinforce understanding of scientific concepts.
In conclusion, utilizing these various apps and online tools can significantly improve students’ problem-solving abilities in both math and science domains, paving the way for academic success.
Enhancing Attention and Concentration
A critical aspect of improving problem-solving skills involves enhancing one’s attention and concentration. Numerous apps are designed to target these cognitive abilities, allowing individuals to perform tasks efficiently and manage their time effectively.
One popular app that aims to maximize attention span is Lumosity . Developed by a team of game designers and scientists, Lumosity offers a range of interactive games and training exercises. These games are specifically tailored to improve memory, processing speed, attention span, and overall cognitive ability. By engaging in these activities regularly, users can strengthen their focus and address their weaknesses.
In addition to Lumosity, other apps are well-regarded for their positive influence on attention and concentration. For instance, Calm Sage lists several brain training apps to help improve memory and cognitive function. These apps provide fun, challenging exercises that test users’ problem-solving skills and logical thinking abilities while also identifying areas of improvement. Engaging in these activities can foster perseverance, allowing individuals to tackle tasks with greater determination and success.
To ensure that users can effectively manage their time, it is essential to incorporate strategies that promote enhanced attention and concentration. By utilizing apps like Lumosity and those mentioned on Calm Sage, individuals can train their brains to focus on tasks more effectively and allocate their time more efficiently. Ultimately, these tools can lead to meaningful improvements in one’s ability to approach complex problem-solving scenarios with confidence and clarity.
Support and Help within Apps
When it comes to problem-solving apps, efficient support and help features are crucial for users to navigate through the platform and find the solutions they need. A good app will provide diverse support mechanisms, whether it’s tutorials for first-time users, FAQs to answer common questions, or customer service to address specific concerns.
In-app purchases often play a significant part in enhancing app experience. They might offer advanced features or additional resources, allowing users to unlock their full potential when solving problems. However, it’s essential for the app developers to offer a clear and transparent payment system, helping users to make informed decisions on whether the additional content is worth the investment.
Problem-solving apps rely on user reviews and feedback to constantly improve their features and functionalities. Therefore, it’s essential to have an efficient way for users to communicate their experiences, suggestions, and issues. Developers should ensure that they actively monitor feedback and provide prompt responses to users who might need assistance with the app.
When it comes to navigating through an app, a well-designed interface and smooth user experience will keep users engaged and motivated to solve problems. This includes logical menu structures, consistent design elements, and clear labeling for different sections or features. Visual aids, such as color-coding or iconography, can further help users find their way around the app, streamlining the overall problem-solving process.
By addressing these aspects, problem-solving apps can create a holistic experience with clear solutions and support mechanisms in place. When users feel empowered to access the help they need, it enables them to tackle challenges effectively, enhancing their overall problem-solving experience.
Problem-Solving Apps for Business
Applications for Business Challenges
In today’s fast-paced business environment, companies face various challenges, such as improving customer service, addressing operational inefficiencies, and managing resources effectively. With the help of innovative mobile apps, businesses can tackle these issues and find effective solutions.
Lumosity is a prime example of a problem-solving app designed to improve mental skills. By enhancing memory, flexibility, and information processing speed, this web app can indirectly contribute to the development of employees’ problem-solving capabilities.
Mobile applications are becoming increasingly useful in improving customer services by providing quicker query resolution and 24/7 support. AI-based chatbots, often embedded in mobile apps, can help businesses respond to customer queries and questions more efficiently, resulting in better customer satisfaction.
In addressing business operations, many organizations turn to Microsoft Power Apps to identify and solve problems. Power Apps lets teams develop custom applications tailored to specific business needs without extensive coding experience. By streamlining processes and automating manual tasks, these apps can significantly improve operational efficiency.
Furthermore, numerous apps on the market solve everyday problems faced by both businesses and individuals. For example, Google Play offers 2.56 million mobile apps, while the App Store provides access to 1.85 million apps. Among these vast selections, businesses can surely find applications that cater to their specific requirements, from project management to financial planning.
In summary, as businesses traverse the ever-evolving landscape of challenges, adopting problem-solving applications can undoubtedly provide valuable assistance in finding the most effective and efficient solutions along their path to success.
Mental Health Support through Apps
In today’s fast-paced world, finding support and solutions for mental health issues is crucial. Numerous apps have been developed to help individuals cope with and manage their anxiety and depression. These digital tools offer a variety of approaches to maintaining mental well-being, from cognitive training exercises to resources for professional guidance.
Apps for Anxiety
Anxiety can manifest in different ways, but common symptoms include constant worrying, restlessness, and even physical symptoms like rapid heartbeat or shortness of breath. The following apps aim to provide support and techniques for managing anxiety:
- Headspace: This popular meditation app teaches mindfulness techniques, which have been found effective in managing anxiety and reducing negative, repetitive thinking.
- MindShift: Designed specifically for anxiety, MindShift provides resources and tools to help users develop healthy coping strategies and face their fears. The app embraces Cognitive Behavioral Therapy (CBT) principles, widely considered an effective approach to anxiety disorders.
Apps for Depression
Depression can be a debilitating condition resulting in persistent sadness, loss of interest in daily activities, and even physical symptoms like lack of energy or changes in appetite. The following apps offer support and solutions for those experiencing depression:
- Lumosity: This brain-training app focuses on cognitive exercises that stimulate different areas of the brain and encourage users to develop healthy cognitive habits. By improving memory, attention, and problem-solving skills, Lumosity can help individuals coping with depression maintain their mental abilities and gain a stronger sense of control.
- Elevate: Like Lumosity, Elevate is a cognitive training app aimed at improving focus, memory, and comprehension through engaging games and activities. Regular use can lead to better mental clarity, which may help alleviate some depressive symptoms.
- BetterHelp: This platform connects users with licensed therapists, offering a convenient way to access professional mental health support. BetterHelp provides therapy sessions through phone, video, or messaging, making it easier for those experiencing depression to receive the guidance they need.
Using apps for mental health support can be an effective and accessible way to manage anxiety and depression. It is important, however, to remember that these apps are not a substitute for professional help but can serve as valuable supplementary tools in one’s mental health journey.
Popular Puzzle and Brain Games
Memory-Boosting Puzzle Games
A variety of memory-boosting puzzle games are available for those who wish to sharpen their cognitive skills. These games are designed to challenge the brain and improve memory, logic, and problem-solving abilities. Some popular memory-boosting puzzle games include:
- Lumosity: This app offers over 40 puzzles and games that train memory, logic, and math skills for a well-rounded mental workout. It features specific challenges for attention, flexibility, problem-solving, language, math, speed, memory, and more.
- Grindstone: A strategy puzzle game in which players plan each move carefully to complete levels efficiently, encouraging the development of critical thinking and planning skills.
- Monument Valley: This beautiful, captivating game requires players to manipulate the environment to progress through an M.C. Escher-inspired world, enhancing spatial reasoning and creativity.
The New York Times Crossword
The New York Times Crossword is a classic app that has stood the test of time, providing avid fans with daily crossword puzzles to stimulate their brains and expand their vocabularies. The puzzles range in difficulty, offering varying levels of challenge for both new and experienced solvers. The app is easily accessible on both Android and iOS devices, enabling players to indulge in a moment of problem-solving fun anytime and anywhere.
By engaging in these popular puzzle and brain games, players can keep their minds sharp and refine their problem-solving skills. These activities not only provide a fun and engaging form of entertainment but also promote cognitive growth and development.
Career Guidance through Problem-Solving Apps
In today’s competitive job market, individuals seeking career success must continually hone their problem-solving skills. By utilizing problem-solving apps, they can sharpen their cognitive abilities, find solutions to challenges, and stay on the right path to achievement. In this section, we will discuss some of the best apps that are designed to help improve problem-solving skills.
Braingle is a unique app that pushes the limits of mental sharpness through the use of riddles and visual illusions. By presenting different types of puzzles, Braingle encourages users to strengthen their reasoning and analytical skills, which could be beneficial in various aspects of career growth.
Another outstanding app is Lumosity, specifically designed to enhance cognitive function. It offers various activities that focus on memory, flexibility, information-processing speed, and concentration. Incorporating Lumosity into one's routine can ultimately lead to the better problem-solving capabilities necessary for career advancement.
The third app, Elevate, is an award-winning brain training program offering a wide array of exercises and games aimed at improving cognitive abilities critical for effective problem-solving. With its progress-tracking feature, users can monitor their improvement over time.
In addition to using these apps, individuals should practice problem-solving strategies in the workplace. Asana recommends a four-step approach: identify the problem, gather information, formulate a plan, and execute the solution. Following this process can help resolve professional issues efficiently.
By using these problem-solving apps and adopting a methodical approach to tackling career challenges, individuals can pave the way for continuous growth and achievement. It is vital to remember that enhancing one’s problem-solving skills is a journey, requiring dedication and persistent effort.
Researching the Efficacy of Lumosity
What we did
Lumos Labs conducted a randomized study of Lumosity brain training and published the results in a peer-reviewed research journal.
In it, half of the 4,715 participants who completed the study trained on Lumosity for fifteen minutes a day, five days per week, while the other half did online crossword puzzles as an active control.
What we found
After 10 weeks, Lumosity users improved more than the control group on our assessments of working memory, short term memory, processing speed, problem solving, fluid reasoning, and overall cognitive function.
These results are promising, but more research is needed to determine the connection between improved assessment scores and everyday tasks in participants' lives.
Future research should address the risk of inadvertent experimenter bias and the risk of attrition bias in this study, as both the Lumosity and crossword groups had approximately 50% attrition rate. As with all scientific research, there is also a risk of publication bias.
Improve Your Problem-Solving Skills With These Apps
From learning how to work a remote desktop to keeping small children occupied while writing emails, on-the-job problem solving during the COVID-19 pandemic is at an all-time high.
Hopefully, you’re successfully dealing with these challenges and more. That said, if you’re looking to improve your problem-solving skills, there are apps for that. Below is a short list of mobile apps that can make you a better problem solver.
This popular app includes games that concentrate on improving the user’s memory, problem-solving ability, attention span, and creativity. Games are constantly changing and typically involve completing timed challenges.
Fit Brains Trainer
This problem-solving app has 10 groups of games that are meant to trigger different areas of the mind. Focusing on memory and concentration, this app asks users to complete tasks from each group on a daily basis. Users are shown their progress using a color-coded graph.
CogniFit Brain Fitness
Created in part by neuroscientists, this app is also meant to enhance problem-solving by boosting memory and focus. More important than ever right now, the app has a social aspect, letting users challenge friends who are also on the app. CogniFit Brain Fitness also adjusts task difficulty to match the user's profile and provides tips based on performance. Investing 20 to 30 minutes every other day can significantly improve problem-solving ability.
Eidetic is an app designed to boost memory using a ‘spaced repetition’ system that can help users remember things like keywords, credit card numbers, and internet passwords. Alerts help to keep users on track, as periodic memory tests are essential to the process.
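Spaced repetition itself is a simple scheduling idea: the gap before the next review grows each time you recall an item successfully and shrinks when you forget it. As a rough sketch only (Eidetic's actual algorithm and parameters are not public, so the doubling rule and 60-day cap below are assumptions for illustration), a minimal scheduler might look like this:

```python
from datetime import date, timedelta

def next_review(interval_days: int, recalled: bool) -> int:
    """Return the next review interval in days.

    Doubles the interval after a successful recall (capped at 60 days)
    and resets to one day after a failed recall. The doubling factor
    and the cap are illustrative guesses, not Eidetic's real parameters.
    """
    if recalled:
        return min(interval_days * 2, 60)
    return 1

# An item recalled correctly three times: intervals grow 1 -> 2 -> 4 -> 8.
interval = 1
for _ in range(3):
    interval = next_review(interval, recalled=True)
print(interval)  # 8

# If the last review happened today, schedule the next one accordingly.
next_due = date.today() + timedelta(days=interval)
```

Real systems such as the SM-2 algorithm adjust the interval by a per-item "ease" factor rather than a fixed doubling, but the reset-on-failure behavior is the same basic idea.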
This app helps to preserve mental sharpness and enhance reasoning through the use of riddles and visual illusions. As you can probably tell from that description, it is quite different from other problem-solving apps, most of which are based on memory and reaction-based tasks. This app does include a social aspect, allowing users to play riddle-based games against friends and family.
Not the Hole Story
If you have a fondness for solving difficult riddles, then Not the Hole Story is a must-have app for your phone. Stuffed with unique riddles and a simple-to-use interface, the app presents riddles that you unlock as you work through a book. You get hints as you go, and if you give up, the solutions are revealed. Not the Hole Story will motivate you to expand your thinking and challenge your brain.
This entertaining brain training app is built around the story of two animated characters moving through a grassy field. Personal Zen focuses mostly on lowering anxiety and training the brain to concentrate on the positive. According to the developer, using the app for 10 minutes each day produces optimal results.
We Can Connect You to Inspiring Career Opportunities
At NSC, we often connect folks to fun, inspiring, and engaging career opportunities, many of which revolve around problem-solving. Please contact us today to find an opportunity that suits your needs.
What to Read Next
- Privacy Overview
- Strictly Necessary Cookies
Strictly Necessary Cookie should be enabled at all times so that we can save your preferences for cookie settings.
If you disable this cookie, we will not be able to save your preferences. This means that every time you visit this website you will need to enable or disable cookies again.
7 Must-Have Apps, Tools, and Resources That Develop Critical Thinking Skills
Gone are the days when children were expected to memorize facts and recite them on demand. Instead, the hope is that children will develop critical thinking skills so that they can analyze situations, think about different outcomes, and present well-reasoned conclusions. A number of apps, tools, and resources can help children develop critical thinking skills. Here is a list of the top seven.
- Guess the Code
While this app is presented as a game, it is actually a great way for children to look at patterns and sequences, and try to figure them out. The app generates different color combinations, and it is up to the user to decipher the pattern and enter the next color.
- SimplePhysics
Physics can be a daunting subject, but this new way of approaching it makes physics more of a hands-on subject that gets students to engage actively in problem-solving. SimplePhysics provides games and puzzles that test the limits of students’ critical thinking skills.
- A Clockwork Brain
This app has a range of games in such categories as memory, attention, language, reasoning, and dexterity. Critical thinking is strengthened as children must work quickly to solve the problems before moving on to more challenging puzzles.
- Civilization VI
Now in its sixth iteration, this modern computer game is not just fun; it’s actually a place for children (and adults) to use the full extent of their imagination and critical thinking skills. The game starts with the dawn of humanity, and it is up to the player to guide a civilization through each time period. Users must decide what it takes for a culture to evolve, and this is no easy task.
- Whooo’s Reading
One of the most important steps in the journey towards critical thinking is the ability to read and interact with books. Reading is more than just memorizing letter combinations. It is about understanding the motives behind characters and the importance of setting. It is about connecting plot developments with real life and making connections between the two. To help students engage more when reading, Whooo’s Reading is a program that works to connect students with books at a deeper level. As a result of this program, students often increase their love of reading and, as a bonus, do better on reading exams.
- Edmodo
Social media has become pervasive in today’s culture, and while platforms like Twitter and Instagram can lead to more harm than good for many youth, social media can be used to an educator’s advantage. Edmodo provides a platform for students and teachers to engage in collaborative projects that help to foster creative thinking skills. It is a tool that can be used to bring students’ ideas together.
- Highlights Every Day
This app is a nostalgic treat for anyone who eagerly awaited their monthly Highlights magazine subscription. Updated for today’s technological world, Highlights Every Day is an app that features engaging puzzles, stories, and videos.
Critical thinking should be fostered because it creates students who actively engage in the world around them. It prepares children for a world where they will become adults and will need to navigate life. Critical thinking skills can be developed in fun, creative settings through the use of these apps, tools, and resources.
Apps To Improve Problem Solving And Reasoning
- June 3, 2021
Critical thinking is a key skill in today’s learning and is growing in importance with each passing day. It is a cognitive faculty that embraces several conceptual skills, including synthesising, assessing, analysing, applying and evaluating information. These skills are essential for solving maths and reasoning problems in exams, answering comprehension questions properly, evaluating options in multiple-choice papers and, of course, making crucial decisions later in life.
While we recommend minimal use of iPads, tablets and PCs with children in primary school, whether we like it or not, technology increasingly shapes the way information is presented to our children, as well as the way in which they learn at school.
Moreover, in a digitally focused world replete with amateurish content and fake news, the ability to critically assess propositions and make informed decisions becomes essential to learners’ cognitive and intellectual well-being. While these may, on the surface, seem like problems for adults, they are increasingly becoming an ‘information diet’ for children and will most certainly form a large part of their lives as they move towards maturity.
Critical thinking allows students to make informed and rational decisions, as it forces them to dive deeper and analyse things logically. What a ‘surface thinker’ would accept as common fact, a critical thinker would often treat as a provocative statement that invites reflection and investigation. The purpose of today’s post is to introduce you to a list of our favourite iPad apps you can use with your kids to boost their critical thinking skills. We invite you to check them out below – the Android versions of these apps will be introduced at a later date.
Original Research Article
The Mental Models Training App: Enhancing Verbal Reasoning Through a Cognitive Training Mobile Application
- 1 Department of Psychology, Georgetown University, Washington, DC, United States
- 2 Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
Introduction: Reasoning is a complex form of human cognition whose nature has long been debated. While a number of neurocognitive mechanisms for deductive reasoning have been offered, one of the most prominent accounts is Mental Model Theory (MMT). According to MMT, humans are able to manipulate and represent information for reasoning and problem solving by leveraging the brain’s evolved visuospatial resources. Thus, when solving deductive reasoning problems, reasoners build “mental models” of the essential pieces of information conveyed in the premises, with their relations to each other represented spatially—even when the information contained within a reasoning problem is not intrinsically spatial. Crucially, taking a spatially-based approach, such as building mental models, supports higher accuracy on deductive reasoning problems. However, no study has empirically tested whether explicitly training this mental modeling ability leads to improved deductive reasoning performance.
Method: Therefore, we designed the Mental Models Training App, a cognitive training mobile application which requires participants to complete increasingly difficult reasoning problems while using an external mental modeling tool. In this preregistered study ( https://osf.io/4b7kn ), we conducted a between-subjects experiment ( N = 301) which compared the Mental Models Training App to 3 distinct control conditions in order to examine which specific components (if any) of the training were causally responsible for improved reasoning performance.
Results: Results demonstrate that, when compared to a passive control condition, the Mental Models Training App led to improvements in adults’ verbal deductive reasoning performance both during and after the training intervention. However, contrary to our preregistered hypotheses, the training-induced improvements were not significantly larger than the effects of the active control conditions—one which included adaptive practice of the reasoning problems, and one which included adaptive practice as well as a spatial alphabetization control task.
Discussion: Therefore, while the present results demonstrate the ability of the Mental Models Training App to enhance verbal deductive reasoning, they do not support the hypothesis that directly training participants’ mental modeling ability yields improved performance beyond the effects of adaptive practice of reasoning. Future research should examine the long-term effects of repeated usage of the Mental Models Training App, as well as transfer effects to other forms of reasoning. Finally, we present the Mental Models Training App as a free mobile application available on the Apple App store ( https://apps.apple.com/us/app/mental-models-training/id1664939931 ), in the hope that this translational research may be utilized by the general public to improve their reasoning ability.
Complex human thinking and reasoning is a recent evolutionary arrival. The primate brain evolved to interact with objects in space rather than interact with complex logic structures, so a great deal of the cerebral cortex is devoted to visuospatial and motor processing ( Byrne and Johnson-Laird, 1989 ; Waltz et al., 1999 ; Byrne et al., 2007 ; Kravitz et al., 2011 ). According to a prominent account in cognitive science—mental model theory (MMT)—human reasoning and problem-solving co-opts previously evolved neural machinery for visuospatial and motor processing to internally represent and manipulate information ( Johnson-Laird, 1980 , 2010 ; Tversky, 1991 ; Wai et al., 2009 ). In other words, people form internal, spatially arranged “mental models” of relevant information, suggesting a connection between mental modeling ability and spatial cognition (e.g., related pieces of information are close together in space and unrelated pieces of information are far apart). Consistent with this perspective, emerging work indicates that spatial cognition is a malleable neurocognitive resource that supports deductive verbal reasoning ( Collins and Gentner, 1987 ; Johnson-Laird, 2010 ; Uttal et al., 2013a , b ; Cortes et al., 2022 ). The well-established role of mental modeling as a form of spatial cognition that supports verbal reasoning suggests that, if mental modeling can be trained through explicit spatialization of information, verbal reasoning performance can be enhanced. The goal of the present study was to train mental modeling using a mobile application and test for improvements in verbal deductive reasoning performance.
Mental model theory has been highly influential in the cognitive and brain sciences for several decades ( Johnson-Laird, 1980 ; Byrne and Johnson-Laird, 1989 ; Goodwin and Johnson-Laird, 2005 ), and this literature has described mental modeling as a resource that generalizes across multiple forms of reasoning. Deductive verbal reasoning, for example, is supported by the formation and manipulation of mental models ( Knauff and Johnson-Laird, 2002 ; Goodwin and Johnson-Laird, 2005 ; Knauff, 2009 ). In a deductive verbal reasoning problem, one must deduce whether a conclusion logically follows from premises (e.g., Premise 1: The dog is better than the cat/Premise 2: The cat is better than the frog/Conclusion: The dog is better than the frog). In such an example, a reasoner might represent the better option as above a worse option, “spatializing” the concept of goodness, which is not inherently spatial. Several theories of human reasoning suggest that these sorts of problems, often called linear syllogisms, are solved using internal representations which are spatially ordered ( De Soto et al., 1965 ; Huttenlocher, 1968 ; Byrne and Johnson-Laird, 1989 ; Khemlani and Johnson-Laird, 2012 ; Ragni and Knauff, 2013 ). Notably, the extent to which reasoners are able to apply such mental models is associated with variability in task performance; building superior mental models has been associated with higher accuracy on deductive reasoning tasks ( Galotti et al., 1986 ; Roberts, 2000 ; Schaeken et al., 2014 ). However, no study has empirically tested whether it is possible to explicitly train this mental modeling ability.
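To make the spatialized strategy concrete, here is a minimal, hypothetical sketch (illustrative only, not code from the study) of a mental-model solver for linear syllogisms. The helper names `build_model` and `follows` are our own; the sketch arranges the items into a vertical array (the "mental model"), with better items placed higher, and then reads conclusions off that array. It assumes the premises form a determinate chain, as in the dog/cat/frog example.

```python
# Hypothetical sketch of the mental-model strategy for linear syllogisms.
# A premise like ("dog", "cat") means "the dog is better than the cat".
# The solver "spatializes" goodness: it arranges items into a vertical
# array (the mental model) and reads the conclusion off that array.

def build_model(premises):
    """Return items ordered top-to-bottom, plus the entailed relations."""
    items, beats = [], {}
    for a, b in premises:
        beats.setdefault(a, set()).add(b)
        beats.setdefault(b, set())
        for x in (a, b):
            if x not in items:
                items.append(x)
    # transitive closure: if a > b and b > c, then a > c
    changed = True
    while changed:
        changed = False
        for a in items:
            for b in list(beats[a]):
                for c in beats[b]:
                    if c not in beats[a]:
                        beats[a].add(c)
                        changed = True
    # in a determinate chain, an item outranks everything below it, so
    # sorting by how many items it outranks yields the spatial array
    items.sort(key=lambda x: -len(beats[x]))
    return items, beats

def follows(premises, conclusion):
    """True if 'X is better than Y' holds in the constructed model."""
    _, beats = build_model(premises)
    x, y = conclusion
    return y in beats.get(x, set())

premises = [("dog", "cat"), ("cat", "frog")]
model, _ = build_model(premises)
print(model)                               # ['dog', 'cat', 'frog']
print(follows(premises, ("dog", "frog")))  # True
```

Note that, like a reasoner who constructs only a single model, this sketch makes no attempt to enumerate alternative models for indeterminate premise sets; it simply answers from the one array it builds.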
Although mental model training is thus-far untested, there is reason to believe that mental modeling can be improved through targeted interventions. For instance, many other visuospatial and motor cognitive resources are trainable and show transfer to untrained reasoning tasks ( Adkins et al., 2006 ; Forgeard et al., 2008 ; Sanchez, 2012 ; Frick and Möhring, 2016 ; Lowrie et al., 2017 ). Educational psychology has also shown promise for training spatial cognition, which is thought to support mental modeling during reasoning ( Byrne and Johnson-Laird, 1989 ; Johnson-Laird, 2004 ; Knauff, 2009 ). Meta-analytic evidence indicates that training on a range of spatial tasks leads to improvement on the trained abilities and may yield transfer to untrained STEM-related tasks ( Uttal et al., 2013a ). Emerging research has highlighted neural and behavioral changes during verbal reasoning following participation in spatially focused curricula in real-world classrooms ( Cortes et al., 2022 ). While encouraging, other spatial training studies have failed to produce lasting transfer ( Mix and Cheng, 2012 ; Xu and LeFevre, 2016 ). Notably, none of this work has tested whether it is possible to directly train the mental modeling resource itself, and whether this would lead to improved verbal deductive reasoning performance.
Training efforts to improve spatial thinking reflect a growing emphasis within psychology and neuroscience to use cognitive training programs to improve general cognitive ability (GCA; Sala and Gobet, 2019 ). Generally, these training paradigms follow a similar logic: If Tasks X, Y, and Z require Cognitive Skill A—and Cognitive Skill A influences GCA—then training on Tasks X, Y, and/or Z can transfer to improve GCA. In other words, enhancing a domain-general cognitive ability can be achieved through domain-specific training ( Taatgen, 2021 ).
Most of these cognitive training efforts have focused on working memory ( Jaeggi et al., 2008 ; Shipstead et al., 2012a , b ). This is not surprising given the extensive literature demonstrating the strong positive relationship between working memory and a range of cognitive abilities (e.g., executive function, fluid intelligence, verbal reasoning, and mathematical achievement; Daneman and Carpenter, 1980 ; Kyllonen and Christal, 1990 ; Engle et al., 1999 ). Some of this work is promising, but in many cases, working memory trainings have been unable to achieve appreciable effect sizes, do not demonstrate sustained and/or transferable effects, and have failed to replicate ( Shipstead et al., 2012a , b ; Melby-Lervåg and Hulme, 2013 ; Redick et al., 2013 ). Indeed, robust meta-analyses have provided strong evidence that past cognitive training efforts—including but not limited to working memory paradigms—do not yield transfer for GCA or its component abilities ( Sala and Gobet, 2019 ).
Although substantial evidence has highlighted the role of working memory in verbal reasoning ( Kyllonen and Christal, 1990 ; Klauer, 1997 ; Ruff et al., 2003 ), the lack of successful working memory training effects suggests that targeted training of other cognitive abilities may be worth investigating. Mental modeling is a cognitive ability that draws on working memory ( Ruff et al., 2003 ; Ragni and Knauff, 2013 )—as virtually all cognitive abilities do—but has direct, mechanistic ties to spatial cognition and verbal reasoning, and may therefore yield larger effects than efforts to train working memory broadly. Given the evidence for mental modeling as a reasoning-general mechanism, the present study was devised to test whether targeting this specific cognitive ability can produce sustained improvements in reasoning (a domain-general cognitive ability).
If mental modeling is indeed a viable subject of cognitive training, there are important considerations regarding how to conduct such a training. Key components of successful cognitive training paradigms include: adaptive training (e.g., attuned to each individual’s performance; Kelly et al., 2014 ), increases in problem difficulty ( Wickens et al., 2013 ), and performance feedback after each problem ( Larson, 1984 ). For mental models training in particular, one promising direction is to externalize reasoners’ internal mental representations—that is, to “build” visible manifestations of the internal spatial representations of complex mental models during the reasoning process. The use of external spatialization tools may afford reasoners better insight into model accuracy through concrete visualization while also reducing burdens on working memory. Educational psychology research suggests that spatial tools allow individuals to better process abstract concepts through concrete visualization, and that these external representations can be measured and compared through established methods ( Hay et al., 2008 ). However, it is important that these tools are as simple and colorless as possible, as visual imagery can actually impede the reasoning process ( Knauff and Johnson-Laird, 2002 ). Research on multimedia learning (e.g., translating verbal content into visual images to improve learning) provides support for this notion, as overly complex visual environments during learning can lead to extraneous cognitive processing that distracts from the core processes of the learning paradigm, therefore impeding optimal instructional outcomes ( Mayer, 2009 , 2014 ; Makransky et al., 2019 ).
Successful efforts at mental modeling training via a simple smartphone application could make such trainings far more accessible, given the ubiquity of such devices ( Poushter, 2016 ). However, most “brain training” mobile applications are not empirically validated by scientific research before being released to the public—and when these apps are scientifically tested, many of them turn out to be completely ineffective at enhancing cognition ( Owen et al., 2010 ; Rabipour and Raz, 2012 ). This has resulted in a general distrust of “brain training” apps by the scientific community ( Simons et al., 2016 ), as well as legal sanctions against certain apps, such as the FTC’s action against Lumosity for deceptive advertising ( Bainbridge and Mayer, 2018 ).
Therefore, we designed the Mental Models Training App, which requires participants to adaptively complete increasingly difficult reasoning problems while using a spatial modeling tool to construct external mental models. The present study tests whether this app-based training improves verbal deductive reasoning, as measured by the Multidimensional Relational Reasoning Test (MRRT; Cortes et al., 2021 ). We compared the Mental Models Training App to several control conditions (see Methods) in order to examine which specific components (if any) were causally responsible for improved reasoning performance. Positive effects of the training would provide support for the MMT by demonstrating a causal role of mental modeling ability in verbal deductive reasoning, while also demonstrating the efficacy of a free mobile app that anyone can use to enhance their own reasoning ability. This research is part of a larger effort to translate basic science into applied tools that have the potential to benefit the general public ( Wethington and Dunifon, 2012 ). This study was preregistered on the Open Science Framework. 1
2. Materials and methods
A total of 382 participants were recruited through Prolific ( Palan and Schitter, 2018 ), and compensated $37.50 for their participation in the full study (i.e., $15 per hour for 2.5 total hours). Participation was limited to adults ages 18–35 living in the United States who spoke English as their first language and had not participated in any prior studies from our laboratory. Substantial data removal is standard in online data collection ( Buhrmester et al., 2011 ; Allahbakhsh et al., 2013 ; Palan and Schitter, 2018 ), and was anticipated in the present study. We included four attention check items (e.g., please select “True”) throughout the study to screen for participants who were not properly attending to the questions (e.g., rushing through and clicking answers). Thirteen participants were removed for missing a total of two or more attention checks across both sessions, 50 participants were lost due to an error during data collection (they were sent the wrong survey link), and 18 participants were removed because they did not complete the entire study. Therefore, the final sample included 301 participants (57.8% Female, 38.5% Male, 3.7% Other; mean age = 27.4 years, SD = 7.3; 63.2% Caucasian, 7.3% Asian, 12.6% African American, 5.6% Hispanic; 0% Native American, 11.3% Mixed Race/Other; Total Years of Education: 48.1% 16+ years, 37.5% 13–15 years, 12.9% 12 years, 1.4% 0–11 years; Total Household Income: 19.3% Less than $30,000, 18.3% $30,000–$50,000, 17.9% $50,001–$70,000, 21.6% $70,001–$100,000, 14.3% $100,001–$150,000, 4.3% $150,001–$250,000, 4.3% More than $250,000). All study procedures were approved by the Georgetown University Institutional Review Board, and all participants provided informed written consent before participation.
2.2. Study design and procedure
A full visual depiction of the study design and procedure can be found in Figure 1 . During the pretest, participants first completed 45 items from the MRRT ( Cortes et al., 2021 ), a measure of verbal deductive reasoning which served as the main outcome measure of the study. After completing the MRRT, participants completed additional measures not analyzed in the present study, with the demographics survey always administered at the end. The entire pretest took approximately 1 h. The following day (24 h later), participants were randomized into one of the four experimental conditions (see Experimental Conditions section and Figure 2 for a full description of each condition). The timing of the interventions was participant-dependent, as the training application was adaptive to performance in all conditions (except Condition 0, in which participants received no intervention); however, the overall average completion time was approximately 32 min. After completing their respective version of the mobile training application, participants were provided a mandatory 10-min break. Then, all participants completed an appropriately counterbalanced version of the MRRT as a posttest measure of verbal deductive reasoning (to measure change in performance from pretest). The posttest took approximately 30 min. All participants completed the entire study on their iPhones.
Figure 1 . Study design and procedure. Full visual depiction of the study design, cognitive measures administered, sample sizes at each timepoint (for each group), and complete timing information for the length of tasks/interventions administered as well as the break between each session.
Figure 2 . Key components of each condition. Full visual presentation of the app interface for each condition (Left), as well of the key training components of each condition (Right). The app screenshots (Left) represent one cycle from one level, however the design and structure was the same across all 4 levels of the training (as well as each of the 3+ cycles in each level) in each condition. Complete screenshots of the entire instructions section and training levels within each condition can be found at https://osf.io/a8zyn/ .
2.3. Verbal deductive reasoning
Verbal deductive reasoning was measured with the MRRT (available for use at https://osf.io/qfvp2/ ; Cortes et al., 2021 ). Within each MRRT problem, 2–3 premises and a conclusion were presented (e.g., “Premise 1: Tim is above and to the right of John/Premise 2: Bob is above and to the right of Tim/Conclusion: John is below and to the left of Bob”) and participants were instructed to respond with “True” if the conclusion necessarily follows from the premises or “False” if the conclusion could possibly be false (i.e., if it is clearly false from the information in the premises or if the solution is indeterminate). Participants were given up to 90 s to complete each problem and were instructed to solve every problem in their head without the use of pencil/paper or their fingers. The problems in the MRRT were systematically varied along the following stimulus properties: Number of Premises (2 or 3), Number of Dimensions (1 or 2), Relation Type (Spatial or Non-spatial), and Solution (True, False, or Indeterminate). The MRRT was used during pretest, training, and posttest—each implementation contained a different set of names (all two-syllable male names from ranks 50–100 in the list of popular names from the 1990s 2 ) in order to prevent participants from seeing repeated problems while preserving (and matching) the underlying stimulus qualities. Two different versions of the MRRT were created (A and B) for the pretest and posttest, both of which contained 45 problems with the same stimulus properties and overall average problem difficulty (72% accuracy), but with different specific names and wording—these versions were counterbalanced across all participants, equally across each of the conditions. For example, half of the participants in each condition completed version A in the pretest and version B in the posttest, while the other half completed version B in the pretest and version A in the posttest.
The version of the MRRT in the training was divided into levels based on stimulus properties (number of premises and number of dimensions) which have been empirically proven to impact problem difficulty (for more details, see Experimental Conditions, Figure 3 , and Cortes et al., 2021 ). The full stimuli for version A, version B, and the training version of MRRT can be found at https://osf.io/a8zyn/ .
Figure 3 . Levels within the mental models training. Full description of the problem types included in each level of the training app in Conditions 1–3. The MRRT problems in these levels were empirically proven to be increasingly difficult ( Cortes et al., 2021 ). The normative average accuracy was 80% for the problems in level 1, 73% for the problems in level 2, 72% for the problems in level 3, and 66% for the problems in level 4.
2.4. Experimental conditions
2.4.1. Condition 0: Passive control group—no intervention
In order to control for the practice effects of completing the MRRT during the pretest and test the effects of each condition against a truly passive control group, Condition 0 was implemented such that participants did not complete any intervention (i.e., they did not download any training app) and simply completed the MRRT posttest 24 h after they completed the pretest.
2.4.2. Similarity across all conditions (excluding Condition 0)
All training conditions (Conditions 1–3) were completed by participants on their iPhones through the TestFlight application, which allowed participants to download a specific version of the training app using a condition-specific password provided by the researchers. Upon opening the app, participants entered their Prolific ID number along with the condition-specific password. The title of the app (“Reasoning Training”), the instructions provided about the reasoning problems (e.g., “Welcome to the Reasoning Training app. This app is designed to help you improve your reasoning skills. The training will get increasingly difficult as you go on, and it is very important that you follow the instructions so that the training is effective.”), and the overall structure of the app (adaptive reasoning training with increasingly difficult problems) were kept exactly the same across all conditions (see Figure 2 ) to create a uniform participant experience and ensure that any group differences were related to specific and intentional differences created between conditions. Within each app, participants were instructed to solve all problems in their head and were given optional 3-min breaks between each level of the training. Participants had 90 total seconds to solve each problem—75 s to view the premises and reason about them, and once participants pressed the “conclusion” button, the conclusion would appear and participants had 15 s to respond “Yes” for necessarily true or “No” for not necessarily true. The purpose of this problem timing was to ensure that participants fully solved the problems and processed all of the premise information, rather than focusing solely on the conclusion and using process of elimination. In Condition 3, this ensured that participants fully constructed a mental model before attempting to solve the problem. After each problem, participants received feedback on whether they answered the problem correctly or incorrectly (“Correct” vs. “Incorrect”).
In all training conditions, participants completed the same 4 levels of increasingly difficult MRRT problems (see Figure 3 ). The verbal deductive reasoning problems in these levels were empirically proven to be increasingly difficult based on normative accuracy data ( Cortes et al., 2021 ; Figure 3 ). Level 1 contained two premise, one dimensional problems with both non-spatial and spatial wording (average accuracy = 80%); Level 2 contained two premise, two dimensional problems with both non-spatial and spatial wording (average accuracy = 73%); Level 3 contained three premise, one dimensional problems with both non-spatial and spatial wording (average accuracy = 72%); and Level 4 contained three premise, two dimensional problems with both non-spatial and spatial wording (average accuracy = 66%). See Figure 3 for full details of each level and the exact problems within each level. Within each level, participants had to complete 3 successful cycles to advance to the next level. A successful cycle entailed completing two reasoning problems in a row with the correct answer—some of the components within the cycles differed based on condition (see Figure 2 and the Condition 1–3 sections below). At the end of the app, participants were redirected to a survey which included a mandatory 10-min break, followed by the posttest MRRT. Complete screenshots of the entire instructions section and training levels for each condition (1–3) can be found at https://osf.io/a8zyn/ .
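The level-and-cycle progression described above can be sketched as follows. This is an illustrative reconstruction, not the app's actual code; `answer_problem` is a hypothetical callback that returns whether the participant answered a problem at the given level correctly.

```python
# Illustrative sketch of the training's progression logic as described:
# 4 levels of increasing difficulty; within each level, a participant
# advances after 3 successful cycles, where a cycle succeeds only when
# two problems in a row are answered correctly. `answer_problem(level)`
# is a hypothetical callback returning True for a correct answer.

def run_training(answer_problem, n_levels=4, cycles_needed=3):
    for level in range(1, n_levels + 1):
        successful_cycles = 0
        while successful_cycles < cycles_needed:
            # one cycle: two consecutive problems, both must be correct
            first = answer_problem(level)
            second = answer_problem(level)
            if first and second:
                successful_cycles += 1
    return "training complete"

# A simulated participant who always answers correctly finishes after
# 4 levels x 3 cycles x 2 problems = 24 problems.
print(run_training(lambda level: True))  # training complete
```

Because cycles repeat until three succeed, total training length is participant-dependent, which matches the paper's note that intervention timing varied with performance.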
2.4.3. Condition 1: Active control group—adaptive practice
In order to control for the effects of practicing verbal deductive reasoning problems in a mobile application, Condition 1 was designed identically to Conditions 2 and 3, except that no spatial tool was included in the training. Participants still received instructions for solving reasoning problems, the problem timing remained the same, correct/incorrect feedback was still provided after each problem, and the levels still advanced in the same increasingly difficult manner. However, the cycles within each level included only 2 successive reasoning problems (see Figure 2), and there was no mention or usage of a spatial tool at any point in the training.
2.4.4. Condition 2: Active control group—adaptive practice with spatial alphabetization tool
In order to control for the visual, spatial, and motor processes engaged by using a spatial tool during the reasoning training, Condition 2 matched the design of Condition 3 but provided participants with a spatial alphabetization tool (Figure 2) instead of the spatial modeling tool. In the instructions section of the app, participants were introduced to the spatial alphabetization tool and instructed to “arrange the names below in a horizontal line, alphabetically from left to right” (see Figure 2). Participants were instructed to create several different spatial structures throughout the training, depending on the number of names in the premises (e.g., horizontal line, vertical line, triangle, square), and the direction of alphabetization (e.g., left to right, right to left, top to bottom, bottom to top, clockwise, counterclockwise) was evenly distributed across the training.
A key difference from Condition 3 is that, during the levels of the training, participants in Condition 2 were provided with the spatial alphabetization tool after each reasoning problem, using a different set of names than those shown in the previous problem. This design ensured that participants were not distracted during the reasoning problems (i.e., dividing their attention in counterproductive ways) and that they could not use the alphabetization tool to create mental models during the reasoning problems or retrospectively after solving them. Accordingly, participants in Condition 2 completed cycles with the following components: (1) complete a reasoning problem without a tool, (2) alphabetize a separate list of names in the specified spatial configuration and alphabetical direction, (3) for non-spatial problems, verbally explain how they used the spatial alphabetization tool to arrange the names into the alphabetized shape, (4) complete a new reasoning problem, and (5) alphabetize another separate list of names in the specified spatial configuration and alphabetical direction (see Figure 2). As in all other conditions, participants had to complete 3 successful cycles to advance from one level to the next. At the beginning of each level, participants were shown an example of how the tool could be used to spatially alphabetize the names from the type of problems included in that level (Figure 2).
Typical responses to the verbal explanation prompt for non-spatial problems in Condition 2 included: “I put them alphabetically from left to right,” “I arranged the circles alphabetically from bottom to top in a vertical line,” and “I placed the names alphabetically in a triangle starting lower left and clockwise.” The prevalence of these sorts of responses suggested that the spatial alphabetization tool was generally used as intended. In addition, thorough visual inspection of the alphabetized shapes created throughout the training by participants in this condition confirmed that the spatial alphabetization tool was used as intended.
2.4.5. Condition 3: Experimental group—the Mental Models Training App
The defining feature of the Mental Models Training App (Condition 3) was that it provided participants with a spatial modeling tool to create external mental models while solving increasingly difficult reasoning problems in the app’s levels. The spatialization tool was introduced during the instructions section of the app, wherein participants were shown (1) a visual example of how the tool could be used to represent reasoning problems in a spatial manner, (2) how to tap in the workspace to create pre-labeled tokens for each of the names in a reasoning problem, (3) how to move the tokens around within the workspace to create a mental model for a reasoning problem, and (4) an example reasoning problem in which they could use the tool to create a mental model and solve the problem. After completing the instructions, participants began level 1 of the training.
Within each level of the Mental Models Training App (Condition 3), participants completed cycles with the following structure: (1) complete a problem using the spatialization tool to create mental models of the names in the premises, (2) for non-spatial problems, verbally explain how they solved the problem using the spatialization tool, (3) complete a new problem without the use of the spatialization tool, and (4) use the tool to spatially explain how they solved the previous problem (see Figure 2 ). The goal of this process was to teach participants how to construct mental models externally in a 2-dimensional space and encourage the internalization of this process. As in all other conditions, participants had to complete 3 successful cycles to advance from one level to the next. At the beginning of each level, participants were shown an example mental model for the corresponding type of problems included in that level ( Figure 2 ).
Typical responses to the verbal explanation prompt for non-spatial problems in Condition 3 included: “I used the tool similar to above and below to rank the level of excitement,” “I placed those who were more patient further to the right than those who were less patient,” and “I used the visual tool to show the hierarchy.” The prevalence of these sorts of responses suggested that the spatialization tool was generally used as intended. In addition, thorough visual inspection of the mental models created throughout the training by participants in this condition confirmed that the mental modeling tool was utilized as intended.
2.5. Analytic strategy
In order to assess the effects of each training condition on reasoning performance (i.e., MRRT accuracy and RT) from pretest to posttest, we conducted a series of mixed-effects models testing for condition-by-time interactions. Mixed-effects models are appropriate when several repeated measurements or observations (Level 1) are nested within a higher level of data (Level 2; Longford, 1995; Goldstein, 2011). In the present study, stimulus properties of the MRRT (number of dimensions, number of premises, spatial vs. non-spatial wording, true vs. false solution) and timepoint (pretest, posttest) were modeled as Level 1 variables, and each participant’s demographic variables (age, gender, income, and education) and condition assignment (Condition 0, 1, 2, or 3) were modeled as Level 2 variables. Because we were interested in examining the condition-by-time effects on MRRT accuracy and RT, we performed separate mixed-effects models for these two dependent variables. The condition-by-time effect on accuracy was investigated using mixed-effects logistic regression because accuracy was a binary variable (i.e., each individual response was either correct or incorrect). RT models were estimated via mixed-effects linear regression. All models estimated fixed effects only, given that the high number of variables included made random slope estimation computationally infeasible (Bell et al., 2019). All mixed-effects models were fit using the glmer (for accuracy) and lmer (for RT) functions in R (De Boeck et al., 2011; Lee and Grimm, 2018; Verzani, 2014). Significance tests were two-sided.
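As a sketch of what the condition-by-time interaction term captures in the accuracy model: on the odds scale, it is the ratio of the pretest-to-posttest change in odds of a correct response in a training condition to the same change in the control condition, and the fitted interaction coefficient b satisfies exp(b) = that ratio. The cell accuracies below are hypothetical illustrations, not the study's values.

```python
# Illustrative computation of a condition-by-time odds ratio from four
# (hypothetical) cell accuracies: pre/post for control and a trained group.
def odds(p):
    """Convert an accuracy (proportion correct) to odds of a correct response."""
    return p / (1 - p)

def condition_by_time_or(pre_ctrl, post_ctrl, pre_trained, post_trained):
    """Ratio of the trained group's pre-to-post odds ratio to the control's."""
    return (odds(post_trained) / odds(pre_trained)) / (odds(post_ctrl) / odds(pre_ctrl))

# Hypothetical accuracies (NOT the study's data):
ratio = condition_by_time_or(0.70, 0.71, 0.70, 0.78)
print(round(ratio, 2))   # -> 1.45
```

In the fitted logistic model, exponentiating the condition-by-time coefficient recovers exactly this kind of ratio, which is how the odds ratios reported in the Results can be read.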
3.1. Descriptive statistics for pretest variables
Descriptive statistics for all variables measured at pretest (separated by condition) can be found in Table 1. No variables differed significantly across conditions, indicating that each condition contained cognitively and demographically equivalent participants at the start of the experiment (before the various training conditions were administered). This result provides confidence that any training-related effects are likely due to the training conditions rather than extraneous characteristics of the sample in each condition.
Table 1 . Descriptive statistics for pretest measures across conditions.
3.2. Effects of training conditions on reasoning performance
We ran two mixed-effects models (Model 1: accuracy, mixed-effects logistic regression; Model 2: RT, mixed-effects linear regression) to examine whether each of the training conditions (1–3) significantly improved MRRT performance from pretest to posttest, using the passive control condition with no intervention (condition 0) as the reference factor level. All models controlled for stimulus properties of the MRRT problems (relation type, premises, dimensions, and solution) and demographic characteristics of the participants (age, gender, income bracket, and total education). Results indicated significant condition-by-time effects of all three conditions (1–3) on MRRT accuracy (Table 2) and RT (Table 3). Condition 1 (adaptive practice) showed the largest training effects relative to condition 0 (passive control): participants in condition 1 were 1.46 times more likely to provide the correct response, in 3.26 fewer seconds. Participants in condition 2 (alphabetize spatial tool) were 1.31 times more likely to provide the correct response, in 1.98 fewer seconds, compared to condition 0. Participants in condition 3 (mental models training) were 1.35 times more likely to provide the correct response, in 2.22 fewer seconds, compared to condition 0. Bar graphs of the mean accuracy and RT for each condition at each timepoint can be found in Figures 4, 5, respectively. Additional models comparing the effects between the training app conditions (condition 3 vs. condition 1, condition 2 vs. condition 1, condition 3 vs. condition 2) revealed no significant differences in the size of the training effects among conditions 1–3 on accuracy (all p > 0.38) or RT (all p > 0.07).
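For reference, the reported odds ratios map back to the logistic model's interaction coefficients via b = ln(OR); the snippet below is simple arithmetic on the values reported above, not a re-analysis.

```python
import math

# Condition-by-time odds ratios reported above, converted to the
# corresponding log-odds interaction coefficients b = ln(OR).
for condition, odds_ratio in [(1, 1.46), (2, 1.31), (3, 1.35)]:
    print(condition, round(math.log(odds_ratio), 3))
# -> 1 0.378
#    2 0.27
#    3 0.3
```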
Table 2 . Mixed-effects logistic regression model for condition-by-time effects on accuracy (fixed effects).
Table 3 . Mixed-effects linear regression model for condition-by-time effects on RT (fixed effects).
Figure 4 . Mean accuracy at each timepoint across all conditions. Condition 0, no intervention; Condition 1, adaptive practice; Condition 2, alphabetize spatial tool; Condition 3, mental models training.
Figure 5 . Mean reaction time (seconds) at each timepoint across all conditions. Condition 0, no intervention; Condition 1, adaptive practice; Condition 2, alphabetize spatial tool; Condition 3, mental models training.
3.3. Within-training differences between conditions
Next, we examined differences in performance within the training app across conditions 1–3 (condition 0 was not included because it did not include the app intervention). Participants in condition 1 (adaptive practice) completed the training in an average of 21.93 min, which was significantly shorter (about half as long) than the average completion time in condition 2 (alphabetize spatial tool; 38.43 min) and condition 3 (mental models training; 37.55 min; Table 4). This was not surprising given that condition 1 contained half as many training components as conditions 2 and 3 (see Figure 2). For this reason, the remaining within-training analyses focus on the number of problems completed within the training levels, which directly tracks how many cycles participants needed to complete successfully before advancing to the following level (i.e., how well they were performing within each level).
Table 4 . Total training time and number of problems completed during the app training across conditions 1–3.
The total number of reasoning problems completed in the training was not significantly different across conditions (Table 4). However, in level 3 of the training, participants in condition 3 (mental models training) completed significantly fewer problems (mean of 8.47 problems, or 4.1 successful cycles) than both condition 2 (alphabetize spatial tool; mean of 11.07 problems, or 5.53 successful cycles) and condition 1 (adaptive practice; mean of 12.21 problems, or 6.11 successful cycles; Table 4). Completing fewer problems indicated better performance within a training level: because 3 successful cycles (each comprising two consecutive correct reasoning problems) were required to advance from each level, the higher the number of problems completed within a level, the more problems a participant answered incorrectly. In sum, participants in the Mental Models Training App condition answered fewer problems incorrectly (i.e., performed better) in level 3, which contained three-premise, one-dimensional reasoning problems, compared to the active control conditions. There were no significant differences in the number of problems completed in any other level (Table 4), though the differences in progression through the training can be visualized in Figure 6, which contains a bar graph of the mean number of problems completed during the training across conditions 1–3.
Figure 6 . Mean number of problems completed in each level of the training across conditions 1–3. Condition 1, adaptive practice; Condition 2, alphabetize spatial tool; Condition 3, mental models training.
3.4. Exploratory analyses
Based on the finding that participants in Condition 3 showed improved performance on 3-premise problems in level 3 of the mobile training app (compared to Conditions 1 and 2), we conducted exploratory analyses testing for a significant three-way condition-by-time-by-premises interaction on reasoning performance (examining the posttest training effects in Condition 3 as compared to the other conditions). Results indicated no significant condition-by-time-by-premises interaction for Condition 3 compared to: Condition 0 (Accuracy: Odds Ratio = 1.08, CI = 0.66–1.27, p = 0.602; RT: estimated effect = −0.30 s, CI = −1.96 to 2.56, p = 0.795), Condition 1 (Accuracy: Odds Ratio = 1.17, CI = 0.60–1.15, p = 0.262; RT: estimated effect = 0.65 s, CI = −2.90 to 1.60, p = 0.572), or Condition 2 (Accuracy: Odds Ratio = 0.92, CI = 0.81–1.44, p = 0.593; RT: estimated effect = 0.24 s, CI = −2.21 to 1.73, p = 0.813).
The present study provides empirical evidence that a mental model-based cognitive training mobile application (“The Mental Models Training App”) significantly improved verbal deductive reasoning performance, as indicated by increased accuracy and reduced reaction time on the MRRT (Cortes et al., 2021), compared to a passive control group which received no intervention. However, contrary to our preregistered hypotheses, the training-induced improvements in the Mental Models Training App condition were not significantly different from the improvements in both of the active control conditions of the app intervention: one which included adaptive practice of the MRRT (condition 1), and the other which included adaptive practice as well as an alphabetize spatial tool control task (condition 2). Notably, the adaptive practice training (condition 1) led to the nominally largest improvements in reasoning performance, despite taking roughly half the time (~22 min) of the mental models training and the alphabetize spatial tool control training (~38 min). These results demonstrate that simply practicing reasoning problems within any version of the mobile app led to improved reasoning performance immediately after completing the training.
We did not find evidence for an additive benefit of the spatialization tool, or of a closely matched control version of that tool, for improving reasoning performance after the training. In line with prior research on cognitive training (Schubert et al., 2014), it is possible that practice-based training (the adaptive practice condition) is more effective than strategy-based training (the mental models condition) at improving reasoning performance in the short term (i.e., after one session). Relatedly, the additional cognitive demands of the mental models training (i.e., creating visualizations of mental models between and during trials) may have produced fatigue effects that were not present in the adaptive practice condition (which took half the time to complete and did not involve any multi-tasking between problems). Future research should examine the long-term effects of repeated usage of the Mental Models Training App: if the intervention were completed multiple times across several weeks, and posttest performance were measured on the scale of months rather than minutes, the Mental Models Training App might prove most effective at promoting long-term retention of improvements and overall strategy changes compared to basic practice in the control condition. Therefore, while the present results demonstrate the ability of the Mental Models Training App to enhance verbal reasoning, they do not support the mental models theory-based hypothesis that directly training participants’ mental modeling ability yields improved performance beyond the effects of adaptive, increasingly difficult practice of reasoning problems in a cognitive training mobile application.
However, we did find evidence that the spatial modeling tool directly improved performance during the mobile training app. Specifically, participants in the Mental Models Training App condition completed level 3 of the training (three-premise, one-dimensional problems) with significantly fewer total attempts (an average of 8.47 problems completed, compared to 11.07 and 12.21 problems in the two control conditions). Previous research on deductive verbal reasoning has found that the single most impactful stimulus factor on problem difficulty is the number of premises (Cortes et al., 2021). In particular, the increase from two premises to three premises results in a 10% reduction in accuracy (Cortes et al., 2021), due to the additional demands a third premise places on working memory (Klauer, 1997; Johnson-Laird, 2001; Goodwin and Johnson-Laird, 2005). In the present data, access to the spatial modeling tool during the training eliminated this difficulty effect (a 0% change in difficulty, compared to 10% in prior data; see Figure 6), indicating that externalizing mental models improved adaptation when reasoning became more difficult, perhaps by reducing working memory load during reasoning. However, it should be noted that this within-training improvement on three-premise problems did not transfer to posttest reasoning performance.
Future research should test for transfer effects of the Mental Models Training App to other kinds of reasoning, such as causal (Waldmann and Hagmayer, 2013; Khemlani et al., 2014), temporal (Kelly et al., 2020), categorical (Copeland, 2006), and visuospatial reasoning (Elliott and Tyler, 1986; Waschl et al., 2017), all of which are theorized to be supported by the mental modeling resource (Johnson-Laird, 1980, 2004, 2010; Goel et al., 2000; Khemlani and Johnson-Laird, 2012; Ragni and Knauff, 2013; Khemlani et al., 2014; Johnson-Laird et al., 2017). Moreover, research should examine the effects of the intervention on different age groups, such as older adults, for whom cognitive training has yielded the most substantial benefits (Willis et al., 2006; Kueider et al., 2012), or younger children, for whom milestones along the developmental cascade are significantly predictive of future cognitive abilities (Piaget, 1952; Gibson, 1988; Bornstein et al., 2013; Adolph and Tamis-LeMonda, 2014; Libertus et al., 2016). Given recent evidence demonstrating transfer effects from spatially enriched education to verbal deductive reasoning (Cortes et al., 2022), it is possible that an intervention which directly trains spatial scanning ability, a core spatial cognitive process known to support reasoning (Knauff, 2009), may be more effective at producing post-training reasoning performance enhancements than an intervention which directly trains participants’ reasoning (such as the Mental Models Training App). Future research should compare the effects of spatial and reasoning training on posttest reasoning performance within the same sample.
Finally, we present the Mental Models Training App as a free mobile application (available on the Apple App store 4 ), in the hope that it may be useful for individuals seeking to improve their reasoning ability.
Data availability statement
The datasets presented in this study can be found in online repositories. The data and code for this study can be found in the Open Science Framework ( https://osf.io/a8zyn/ ).
Ethics statement
The studies involving human participants were reviewed and approved by the Georgetown University Institutional Review Board. The patients/participants provided their written informed consent to participate in this study.
Author contributions
AG, RC, and AW: conceptualization and writing—review and editing. RC: methodology, formal analysis, investigation, data curation, visualization, project administration, and funding acquisition. RC and AW: writing—original draft preparation. AG: supervision. All authors contributed to the article and approved the submitted version.
Funding
This research was supported by grants to AG from the National Science Foundation (DRL-1420481, EHR-1661065, and EHR-1920682) and by the John Templeton Foundation to AG and AW [ID 61114]. RC was supported by a National Science Foundation Graduate Research Fellowship and by the Patrick Healy Graduate Fellowship from Georgetown University.
Acknowledgments
We acknowledge Joseph Marino for developing and coding the Mental Models Training App in Swift and helping publish the app on the Apple App Store. We also acknowledge Sangeet Khemlani for his help in designing increasingly difficult levels of reasoning problems within the training.
Conflict of interest
RC and AG are the developers of intellectual property owned by Georgetown University related to the Mental Models Training App technology that is described in this manuscript. Although the app is free to download, it includes advertisements and in-app purchases that have the potential to generate revenue for RC, AG, and Georgetown University. Google manages all aspects of the advertisements including content and placement. Furthermore, in-app advertisements do not necessarily represent the data presented in the app.
The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
1. ^ https://osf.io/4b7kn
2. ^ https://www.ssa.gov
3. ^ https://osf.io/a8zyn
4. ^ https://apps.apple.com/us/app/mental-models-training/id1664939931
Adkins, D. L., Boychuk, J., Remple, M. S., and Kleim, J. A. (2006). Motor training induces experience-specific patterns of plasticity across motor cortex and spinal cord. J. Appl. Physiol. 101, 1776–1782. doi: 10.1152/japplphysiol.00515.2006
Adolph, K. E., and Tamis-LeMonda, C. S. (2014). The costs and benefits of development: the transition from crawling to walking. Child Dev. Perspect. 8, 187–192. doi: 10.1111/cdep.12085
Allahbakhsh, M., Benatallah, B., Ignjatovic, A., Motahari-Nezhad, H. R., Bertino, E., and Dustdar, S. (2013). Quality control in crowdsourcing systems: issues and directions. IEEE Internet Comput. 17, 76–81. doi: 10.1109/MIC.2013.20
Bainbridge, K., and Mayer, R. E. (2018). Shining the light of research on Lumosity. J. Cogn. Enhanc. 2, 43–62. doi: 10.1007/s41465-017-0040-5
Bell, A., Fairbrother, M., and Jones, K. (2019). Fixed and random effects models: making an informed choice. Qual. Quant. 53, 1051–1074. doi: 10.1007/s11135-018-0802-x
Bornstein, M. H., Hahn, C.-S., and Suwalsky, J. T. D. (2013). Physically developed and exploratory young infants contribute to their own long-term academic achievement. Psychol. Sci. 24, 1906–1917. doi: 10.1177/0956797613479974
Buhrmester, M., Kwang, T., and Gosling, S. D. (2011). Amazon’s mechanical Turk. Perspect. Psychol. Sci. 6, 3–5. doi: 10.1177/1745691610393980
Byrne, P., Becker, S., and Burgess, N. (2007). Remembering the past and imagining the future: a neural model of spatial memory and imagery. Psychol. Rev. 114, 340–375. doi: 10.1037/0033-295X.114.2.340
Byrne, R. M. J., and Johnson-Laird, P. N. (1989). Spatial reasoning. J. Mem. Lang. 28, 564–575. doi: 10.1016/0749-596X(89)90013-2
Collins, A., and Gentner, D. (1987). “How people construct mental models” in Cultural Models in Language and Thought . eds. D. Holland and N. Quinn (Cambridge, England: Cambridge University Press), 243–265. doi: 10.1017/CBO9780511607660.011
Copeland, D. E. (2006). Theories of categorical reasoning and extended syllogisms. Think. Reason. 12, 379–412. doi: 10.1080/13546780500384772
Cortes, R. A., Peterson, E. G., Kraemer, D. J. M., Kolvoord, R. A., Uttal, D. H., Dinh, N., et al. (2022). Transfer from spatial education to verbal reasoning and prediction of transfer from learning-related neural change. Sci. Adv. 8:eabo3555. doi: 10.1126/sciadv.abo3555
Cortes, R. A., Weinberger, A. B., Colaizzi, G. A., Porter, G. F., Dyke, E. L., Keaton, H. O., et al. (2021). What makes mental modeling difficult? Normative data for the multidimensional relational reasoning task. Front. Psychol. 12:668256. doi: 10.3389/fpsyg.2021.668256
Daneman, M., and Carpenter, P. A. (1980). Individual differences in working memory and reading. J. Verb. Learn. Verb. Behav. 19, 450–466. doi: 10.1016/S0022-5371(80)90312-6
De Boeck, P., Bakker, M., Zwitser, R., Nivard, M., Hofman, A., Tuerlinckx, F., et al. (2011). The estimation of item response models with the lmer function from the lme4 package in R. J. Stat. Softw. 39:i12. doi: 10.18637/jss.v039.i12
De Soto, C. B., London, M., and Handel, S. (1965). Social reasoning and spatial paralogic. J. Pers. Soc. Psychol. 2, 513–521. doi: 10.1037/h0022492
Elliott, C. D., and Tyler, S. T. (1986). British ability scales profiles of children with reading difficulties. Educ. Child Psychol. 3, 80–89.
Engle, R. W., Tuholski, S. W., Laughlin, J. E., and Conway, A. R. A. (1999). Working memory, short-term memory, and general fluid intelligence: a latent-variable approach. J. Exp. Psychol. Gen. 128, 309–331. doi: 10.1037/0096-3445.128.3.309
Forgeard, M., Winner, E., Norton, A., and Schlaug, G. (2008). Practicing a musical instrument in childhood is associated with enhanced verbal ability and nonverbal reasoning. PLoS One 3:e3566. doi: 10.1371/journal.pone.0003566
Frick, A., and Möhring, W. (2016). A matter of balance: motor control is related to Children’s spatial and proportional reasoning skills. Front. Psychol. 6:2049. doi: 10.3389/fpsyg.2015.02049
Galotti, K. M., Baron, J., and Sabini, J. P. (1986). Individual differences in syllogistic reasoning: deduction rules or mental models? J. Exp. Psychol. Gen. 115, 16–25. doi: 10.1037/0096-3445.115.1.16
Gibson, E. J. (1988). Exploratory behavior in the development of perceiving, acting, and the acquiring of knowledge. Annu. Rev. Psychol. 39, 1–42. doi: 10.1146/annurev.ps.39.020188.000245
Goel, V., Buchel, C., Frith, C., and Dolan, R. J. (2000). Dissociation of mechanisms underlying syllogistic reasoning. NeuroImage 12, 504–514. doi: 10.1006/NIMG.2000.0636
Goldstein, H. (2011). Multilevel Statistical Models (Vol. 922) . Hoboken, NJ: John Wiley & Sons.
Goodwin, G. P., and Johnson-Laird, P. N. (2005). Reasoning about relations. Psychol. Rev. 112, 468–493. doi: 10.1037/0033-295X.112.2.468
Hay, D., Kinchin, I., and Lygo-Baker, S. (2008). Making learning visible: the role of concept mapping in higher education. Stud. High. Educ. 33, 295–311. doi: 10.1080/03075070802049251
Huttenlocher, J. (1968). Constructing spatial images: a strategy in reasoning. Psychol. Rev. 75, 550–560. doi: 10.1037/h0026748
Jaeggi, S. M., Buschkuehl, M., Jonides, J., and Perrig, W. J. (2008). Improving fluid intelligence with training on working memory. Proc. Natl. Acad. Sci. 105, 6829–6833. doi: 10.1073/pnas.0801268105
Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cogn. Sci. 4, 71–115. doi: 10.1207/s15516709cog0401_4
Johnson-Laird, P. N. (2001). Mental models and deduction. Trends Cogn. Sci. 5, 434–442. doi: 10.1016/S1364-6613(00)01751-4
Johnson-Laird, P. N. (2004). “The history of mental models.” in Psychology of Reasoning: Theoretical and Historical Perspectives . eds. K. Manktelow and M. Chung (London, England: Psychology Press), 189–365. doi: 10.4324/9780203506936
Johnson-Laird, P. N., Goodwin, G. P., and Khemlani, S. S. (2017). “Mental models and reasoning” in The Routledge International Handbook of Thinking and Reasoning . eds. L. J. Ball and V. A. Thompson (Abingdon, England: Routledge/Taylor & Francis Group), 346–365.
Johnson-Laird, P. N. (2010). Mental models and human reasoning. Proc. Natl. Acad. Sci. U. S. A. 107, 18243–18250. doi: 10.1073/pnas.1012933107
Kelly, L. J., Khemlani, S., and Johnson-Laird, P. N. (2020). Reasoning about durations. J. Cogn. Neurosci. 32, 2103–2116. doi: 10.1162/jocn_a_01621
2 Challenges When Children Solve Mathematical Word Problems
An analysis of additive reasoning word problems from a developmental perspective.
Updated November 22, 2023 | Reviewed by Abigail Fagan
- Word problems that can be solved by the same arithmetic operation have varying difficulty.
- Problems that involve the inverse relation between addition and subtraction are more challenging.
- Problems that involve thinking about relations are harder than those that involve thinking about quantities.
- Teachers should recognize the cognitive demands of different kinds of word problems.
Solving word problems is a key component of the math curriculum in primary schools. Making sense of a word problem requires only basic language skills, so why do children still find certain word problems more difficult than others with similar linguistic demands (Briars & Larkin, 1984; Carpenter & Moser, 1984; De Corte & Verschaffel, 1987; Kintsch & Greeno, 1985; Nunes & Bryant, 1996; Riley et al., 1983; Verschaffel et al., 2020)? Here I will share some views on this issue from a developmental perspective, with a focus on additive reasoning problems.
Preschool children can develop initial thinking about addition and subtraction based on their everyday experiences (e.g., their own physical actions or observations) of putting something in a set (addition) and taking away something from a set (subtraction) (Piaget, 1952). Children often use these “schemes of action” to solve math word problems. Therefore, Combine problems (e.g., “John has four pencils and Steven has three. How many do they have altogether?”) are easy for children because they can solve the problems by imagining two groups of pencils joined together.
However, the difficulty of Change problems depends on where the unknown quantity is located in the question. Take the following problem as an example – "Susan had eight oranges and then she gave five of them away. How many did she have left?" This question should not be challenging for children because they can use the "take things away" action scheme to solve it.
By contrast, a Change problem becomes more difficult if it involves an unknown starting quantity (e.g., Jerry had some cookies; he gave Alice seven and he has five left. How many did he have before he gave cookies to Alice?). This problem describes a situation in which the quantity decreases, yet its unknown initial state must be found by addition, so there is a conflict between the decrease in quantity and the operation of addition. Children have to understand the inverse relation between subtraction and addition to solve the problem, a concept that is difficult for some children to master (Bisanz et al., 2009; Bryant et al., 1999; Canobi et al., 2003; Ching, 2023; Ching & Nunes, 2017; Gilmore & Papadatou-Pastou, 2009; Nunes et al., 2015; Robinson, 2017; Verschaffel et al., 2012).
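The contrast between the two Change problems can be made concrete with a minimal numeric sketch. The numbers come from the Susan and Jerry examples above; the function names are purely illustrative:

```python
# Two "Change" problems with a decreasing quantity (start - change = end),
# differing only in which term is unknown.

def unknown_end(start, change):
    # "Susan had 8 oranges and gave 5 away. How many did she have left?"
    # The "take away" action scheme matches the operation: subtract.
    return start - change

def unknown_start(change, end):
    # "Jerry had some cookies; he gave Alice 7 and has 5 left.
    #  How many did he have at first?"
    # The quantity in the story DECREASES, yet the unknown start is
    # recovered by ADDING: addition undoes the subtraction.
    return end + change

print(unknown_end(8, 5))    # 3
print(unknown_start(7, 5))  # 12
```

The mismatch between the story's action (giving away) and the required operation (adding) is exactly where the inverse-relation difficulty arises.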
Gérard Vergnaud (1982) contends that the three types of meanings represented by natural numbers can also influence the levels of difficulty of word problems. These meanings include (1) quantities, (2) transformations, and (3) relations. Consider the following two problems. The first problem involves a quantity and a transformation, while the second problem concerns a combination of two transformations.
- Sophia had seven stickers (quantity). She played a game and lost three stickers (transformation). How many stickers did she have after the game?
- Alice played two games of marbles. She won seven in the first game (transformation) and lost three in the second game (transformation). What happened, counting the two games together?
Research has shown that combining transformations is more difficult than combining a quantity and a transformation (e.g., Brown, 1981; Vergnaud, 1982). At about seven years of age, children achieve about 80% correct responses on the first problem, but they reach a comparable level of success on the second only two years later. According to Vergnaud, children's thinking has to go beyond natural numbers when they need to combine transformations.
Natural numbers are counting numbers. In a Change problem with an unknown end state, for example, children can count the number of stickers a person had before the game, count off and take away the stickers lost during the game, and find out how many are left. In the Alice problem, by contrast, if children count the marbles that Alice won in the first game, they need to count them as "one more, two more, three more," and so on. They are therefore not actually counting marbles, but the relation of the number she now has to the number she had to start with – the transformations are now relations, which are more difficult for children to grasp than simple quantities.
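Vergnaud's distinction can be sketched numerically, using the numbers from the Sophia and Alice problems above (a sketch only; the variable names are illustrative):

```python
# Combining a quantity with a transformation vs. combining two transformations.

# Sophia: a quantity (7 stickers) plus a transformation (-3) yields a new
# quantity. The answer is still a count of objects.
sophia_after = 7 + (-3)

# Alice: transformations (+7, -3) combine into another transformation, i.e.
# a relation ("four more than she started with"), not a count of marbles.
alice_net = (+7) + (-3)

print(sophia_after)  # 4 stickers (a quantity)
print(alice_net)     # +4 (a relation to the unknown starting amount)
```

The arithmetic is identical in both cases; what differs is what the result refers to, which is why the second problem is conceptually harder.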
Findings that Compare problems are more difficult for children than Combine and Change problems may be explained by the same reason: these problems require children to quantify relations . Consider this example: "Jason has five tickets. Harry has nine tickets. How many more tickets does Harry have than Jason?" The question concerns neither a quantity (i.e., Jason's or Harry's tickets) nor a transformation (no one lost or gained tickets). Instead, it is about the relation between the two quantities.
Most preschool children can rightly point out that Harry has more tickets, but the majority cannot quantify the relation, or the difference, between the two. Therefore, learning to use numbers to represent quantities and learning to use numbers to quantify relations are not the same, even when the same numbers are involved. Relations are more abstract and more difficult for children. Thompson (1993) argues that the ability to think of numbers as measures of relations at a young age serves as a foundation for understanding algebra.
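A minimal sketch of the Jason/Harry example makes the point explicit (the variable names are illustrative):

```python
# A Compare problem asks about the relation between two quantities,
# not about either quantity itself.
jason = 5   # Jason's tickets (a quantity)
harry = 9   # Harry's tickets (a quantity)

# "How many more tickets does Harry have?" quantifies the relation.
difference = harry - jason

print(difference)  # 4
```

The answer, 4, is not a count of anyone's tickets; it is a measure of the relation between the two counts, which is what young children find hard to grasp.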
In summary, word problems that can be solved by the same arithmetic operation but belong to different problem types have varying difficulty. Here I have reviewed two kinds of problems that are challenging for children: those that involve the inverse relation between addition and subtraction, and those that involve thinking about relations. Teachers should recognize the intellectual demands of each type of problem from a psychological perspective, and design assessments and organize teaching activities that help children handle the relations involved in each problem, such as schema-based instruction (e.g., Fuchs et al., 2010; Jitendra et al., 2007; Jitendra & Hoff, 1996).
Bisanz, J., Watchorn, R. P. D., Piatt, C., & Sherman, J. (2009). On “understanding” children's developing use of inversion. Mathematical Thinking and Learning, 11, 10-24. http://dx.doi.org/10.1080/10986060802583907
Briars, D. J., & Larkin, J. H. (1984). An integrated model of skill in solving elementary word problems. Cognition and Instruction, 1 , 245–296
Brown, M. (1981). Number operations. In K. Hart (Ed.), Children’s Understanding of Mathematics: 11-16 (pp. 23-47). Windsor, UK: NFER-Nelson
Bryant, P, Christie, C, & Rendu, A. (1999). Children's understanding of the relation between addition and subtraction: Inversion, identity and decomposition. Journal of Experimental Child Psychology, 74 , 194-212. doi:10.1006/jecp.1999.2517
Canobi, K. H. (2005). Individual differences in children’s addition and subtraction knowledge. Cognitive Development, 19 , 81–93. doi:10.1016/j.cogdev.2003.10.001
Carpenter, T. P., & Moser, J. M. (1984). The acquisition of addition and subtraction concepts in grades one through three. Journal for Research in Mathematics Education, 15 , 179–202
Ching, B. H.-H. (2023). Inhibitory control and visuospatial working memory contribute to 5-year-old children’s use of quantitative inversion . Learning and Instruction , 83 , Article 101714. https://doi.org/10.1016/j.learninstruc.2022.101714
Ching, B. H.-H., & Nunes, T. (2017). The importance of additive reasoning in children's mathematical achievement: A longitudinal study. Journal of Educational Psychology, 109, 477-508. http://dx.doi.org/10.1037/edu0000154
De Corte, E., & Verschaffel, L. (1987). The effect of semantic structure on first graders’ solution strategies of elementary addition and subtraction word problems. Journal for Research in Mathematics Education, 18 , 363-381
Fuchs, L. S., Zumeta, R. O., Schumacher, R. F., Powell, S. R., Seethaler, P. M., Hamlett, C. L., & Fuchs, D. (2010). The effects of schema-broadening instruction on second graders’ word problem performance and their ability to represent word problems with algebraic equations: A randomized control study. Elementary School Journal, 110 , 440-463. doi: 10.1086/651191
Gilmore, C. K., & Papadatou-Pastou, M. (2009). Patterns of individual differences in conceptual understanding and arithmetical skills: A meta-analysis. Mathematical Thinking and Learning, 11 , 25–40. https://doi.org/10.1080/1098600802583923 .
Jitendra, A. K., Griffin, C. C., Haria, P., Leh, J., Adams, A., & Kaduvettoor, A. (2007). A comparison of single and multiple strategy instruction on third-grade students' mathematical problem solving. Journal of Educational Psychology, 99 , 115-127. doi:10.1037/0022-0618.104.22.168
Jitendra, A. K., & Hoff, K. (1996). The effects of schema-based instruction on the mathematical word-problem-solving performance of students with learning disabilities. Journal of Learning Disabilities, 29 , 422-431. doi: 10.1177/002221949602900410
Kintsch, W., & Greeno, J. G. (1985). Understanding and solving word arithmetic problems. Psychological Review, 92 , 109–129. https://doi.org/10.1037/0033-295X.92.1.109
Nunes, T., & Bryant, P. E. (1996). Children doing mathematics . Oxford, United Kingdom: Blackwell.
Piaget, J. (1952). The Child's Conception of Number . London: Routledge & Kegan Paul.
Riley, M. S., Greeno, J. G., & Heller, J. I. (1983). Development of children’s problem–solving ability in arithmetic. In H. P. Ginsburg (Ed.), The development of mathematical thinking (pp. 153–196). New York: Academic Press
Robinson, K. M. (2017). The understanding of additive and multiplicative arithmetic concepts. In D. C. Geary, D. Berch, R. Oschsendorf, & K. M. Koepke (Eds.), Mathematical cognition and learning: Vol. 3. Acquisition of complex arithmetic skills and higher-order mathematics concepts (pp. 21-46). https://doi.org/10.1016/B978-0-12-805086-6.00002-3 Elsevier Academic Press.
Thompson, P. W. (1993). Quantitative reasoning, complexity, and additive structures. Educational Studies in Mathematics, 3, 165–208. http://dx.doi.org/10.1007/BF01273861
Vergnaud, G. (1982). A classification of cognitive tasks and operations of thought involved in addition and subtraction problems. In T. P. Carpenter, J. M. Moser & R. T. A (Eds.), Addition and subtraction: A cognitive perspective (pp. 60-67). Hillsdale (NJ): Lawrence Erlbaum.
Verschaffel, L., Bryant, P., & Torbeyns, J. (2012). Mathematical inversion: Introduction. Educational Studies in Mathematics, 79 , 327 – 334. doi:10.1007/s10649-012-9381-2
Verschaffel, L., Schukajlow, S., Star, J., & Van Dooren, W. (2020). Word problems in mathematics education: A survey. ZDM, 52, 1-16. https://doi.org/10.1007/s11858-020-01130-4
Boby Ching, Ph.D., is an Associate Professor of Educational Psychology at University of Macau.
OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.
Reuters could not independently verify the capabilities of Q* claimed by the researchers.
'VEIL OF IGNORANCE'
[Photo: Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo]
Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.
Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.
Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.
In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.
"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.
A day later, the board fired Altman.
Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker
Designed for iPhone
- 2.6 • 61 Ratings
Reason is the premier libertarian media and research organization, which advocates "free minds and free markets" through critically acclaimed print, online, and video journalism and top-tier public policy research. With the Reason iPhone App, you can stay up to date with the latest news and views from the Reason staff wherever you are! Compatible with the iPhone, iPhone 3GS, iPhone 4, iPhone 4S, and iPod Touch, the Reason App downloads and syncs Reason's content to your device in real time. And best of all, it's FREE!
- Read: The latest Reason.com columns, Hit & Run posts, Brickbats, and Reason.org content.
- Watch: High quality, full-screen Reason.tv videos.
- Save: Your favorite Reason content in a separate, easily accessible tab.
- E-mail: Reason's stories and videos to your friends instantly within the Reason iPhone App.
- Donate: With the Reason iPhone App's secure donations tab, you can help support free minds and free markets even when you're on the go.
Designed and developed exclusively for the iPhone and the iPod Touch. Programming done by Peter Snyder [email protected], http://home.peteresnyder.com
This app has been updated by Apple to display the Apple Watch app icon. Added "Articles" tab into the Reason.com section
Ratings and Reviews
Needs an update.
Love the website on my computer, so the app is important for my phone... but it needs an update for the new generation of iPhones. While they're at it, what I need is a smooth way to search for articles, with ads not in the search results (keep them in the articles, please).
Not reason magazine
This is some dude’s blog… mixed with some links to legit Reason articles anyone can access for free at Reason’s website or Instagram. No content in the videos, articles, or the .org part of the app anymore… so I am guessing Reason shut down the use of the original content the blogger doesn’t have permission to redistribute. Point is… this is not Reason Magazine… it’s just a way for an industrious Libertarian to get his blog out there. Not really a bad place to view all the articles in one place and the blogs are clearly marked… but it’s worth paying the $1.99 to gain access to the real Reason Magazine app.
Decent, but bugged.
An easy way to access articles, for sure. However, links in articles don’t work (this includes both links to full versions of the articles being read and links to external sources). You click them and get no response.
Discover. Collaborate. Create. On-the-go music studios
Reason Compact Your pocket music studio
Go creative and make amazing beats and melodies now! Maybe you’re humming on your first beat ever or maybe you’re thinking about your next album. With Reason Compact you’ll have a powerful and easy-to-use music studio right in your pocket, ready to sketch down ideas whenever inspiration strikes. Take your tracks further with Reason, the desktop studio.
Figure Create beats in seconds
Create an addictive beat or lay down a beefy bass line while waiting for the bus to arrive. Truly designed for mobile use, Figure gives you drums, bass and lead synths, all controlled through an incredibly easy to use interface that gets you sounding great in seconds.
Take Record your ideas anywhere
Sing, hum, rap, strum. Capture your musical ideas—anytime, anywhere. Take is the creative vocal recorder that lets you catch inspiration when it strikes. With a single tap you’ll be recording, beatboxing, overdubbing, riffing, writing and trying out song ideas. Import your Reason beat as a backing track and record your idea on top. Export and continue when you get back to Reason.
As a Reason user, you can forget about the downsides of music production. Forget malfunctioning modules and confusing connections. Reason's cables don't tangle. Forget about steep learning curves and menus within menus. Reason is so direct you'll learn it in minutes. And forget the tedious process of gathering all the different disks and soundbanks needed to load up a song.
With Reason, picking up where you left off, be it last night or last month, is as simple as turning the power on. When you save your music, your whole studio setup is stored along with it. You can even include your samples, loops and drum kits in the Reason file, for easy web publishing or email distribution to other Reason users. For once, total recall is truly total.
And so is the sound. The audio quality is everything you would expect from the people behind ReCycle and ReBirth. But pristine sound quality is only half the story; the instruments and effects in Reason are loaded with character and attitude. Reason will not just impress, but inspire you.
Reason for Windows
- Trial version
- V 6.0.1
- Security Status
Your virtual studio rack
Serious music production requires serious tools. Reason provides all you need to set up your own virtual music studio on your computer: a synthesizer, arpeggiator, sequencer, and groove mixer.
Reason's synthesizer comes with six oscillator types, four different filters, a step sequencer, and a modulation matrix. The monophonic arpeggiator can play in multiple modes and includes a pattern section. I was also very much impressed with Reason's sequencer, which features multiple lanes, vector automation, and dedicated device types. The 32-channel mixer completes a set of powerful tools.
Reason is not for beginners, and this is obvious the moment you open the application. However, if you have some knowledge of music production, you'll welcome the wealth of tools available and the flexibility they give you when working with music and sounds.
Reason can easily be integrated with other software and hardware you already use, making it a versatile choice.
If you're seriously into music production, Reason is the tool you need.
What's new in Reason 6:
- Audio recording: Capture your ideas with a minimum of mouse clicks and menu actions. Version 6 introduces unlimited hard disk tracks and rock-solid audio recording in Reason.
- More sounds: The included Factory Sound Bank has been expanded with thousands of new patches and loops for Reason's instruments, and hundreds of patches for the new Pulveriser, The Echo, and Alligator effects. Finding your sound has never been easier.
- Time stretch and audio transpose: With our now-legendary non-destructive time stretch, you can actually record first and pick your tempo later. And with the brand-new audio transpose, you can even record first and change the key later. Change the tempo of your song, and the audio follows right along. Non-destructively change the pitch with a mouse click.

For more information, full details of Reason 6 are available here.
- Complete music production tool
- Integrates with other hardware and software
- Not for beginners
Also available on other platforms
- Reason for Android
- Reason for Mac
Reason for PC
User reviews about Reason
Comfortable to use. Excellent sound, comfortable to use, cool design, works fast, good effects. Pros: easy to use, very logical button positions
Do not download. I downloaded this; it says free of spyware, but it's full of adware. Do not download! Pros: Well, if you want an unnecessary toolbar added to your browser, and games you don't really need, then go ahead. Cons: It's not really Reason 3.0, plus a whole bunch of pop-ups appear, and it takes forever to close them all, and I'm running on a cable modem. (Sheesh.) I'm not saying all software on this site is like that, but this one is definitely a no-no...
- Read all reviews
Market-Leading Music Production Software
Cubase—Creating Music, Your Way
MAGIX Music Maker Production Suite
A recording studio in your own home
Sequencer for live or studio sessions
Cubase Pro 10
Get creative with Cubase Pro 10
Alternatives to Reason
A free and powerful music production tool
MAGIX Music Maker
A beginner-friendly beat maker
A free program for Windows, by Reflection IT
Free CD to MP3 Converter
Free software to convert CD files into MP3 format
Guitar FX BOX
Free Alternative to Voxengo Tube Amp: Guitar Effects Program
Nokia Ovi Player
Free music player for personal computers
VUPlayer CD Player
An audio player for nearly every format
SRS Audio Sandbox
Advanced audio enhancement tool for personal computers
Manage your iPod with this lighter alternative to iTunes
Music Editing Master
Robust music-editing software for home users
Play karaoke in the comfort of your own home
Virtual DJ Studio
DJ and Karaoke software with its own customer-request app
MP3myMP3 Free Sound Recorder
Record any sound your computer makes, including internet streams
Free music and video player for personal computers
Laws concerning the use of this software vary from country to country. We do not encourage or condone the use of this program if it is in violation of these laws.
At Softonic, we scan all the files hosted on our platform to assess and avoid any potential harm to your device. Our team performs checks each time a new file is uploaded and periodically reviews files to confirm or update their status. This comprehensive process allows us to set a status for any downloadable file as follows:
It’s extremely likely that this software program is clean.
What does this mean?
We have scanned the file and URLs associated with this software program in more than 50 of the world's leading antivirus services; no possible threat has been detected.
This software program is potentially malicious or may contain unwanted bundled software.
Why is the software program still available?
Based on our scan system, we have determined that these flags are possibly false positives.
What is a false positive?
It means a benign program is wrongfully flagged as malicious due to an overly broad detection signature or algorithm used in an antivirus program.
It’s highly probable this software program is malicious or contains unwanted bundled software.
Why is this software program no longer available in our Catalog?
Based on our scan system, we have determined that these flags are likely to be real positives.
Your review for Reason
What do you think about Reason? Do you recommend it? Why?