How to be a Better Troubleshooter in Your Laboratory

For scientists, the ability to troubleshoot experiments is a valuable skill to develop: it allows you to work as an independent and responsible researcher (Roberts, 2001).

Although important, troubleshooting is rarely taught as a subject in undergraduate molecular biology courses. This article therefore provides some useful steps for troubleshooting problems in the molecular biology laboratory.

Some common steps for troubleshooting problems in the lab are (Gerstein, 2004):

1. Identify the problem

2. List all possible explanations

3. Collect the data

4. Eliminate some possible explanations

5. Check with experimentation

6. Identify the cause

Below, we present two different scenarios and how this troubleshooting process could be applied. While these scenarios are very specific, this troubleshooting approach can be used broadly across other experiments in your lab.
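To make the elimination logic of steps 2 through 6 concrete, here is a minimal sketch in Python; the hypotheses and the evidence flags are illustrative placeholders, not part of the original article:

# Steps 2-6 in miniature: list explanations, record whether the collected
# evidence still allows each one, eliminate the rest, report what remains.
hypotheses = {
    "expired or improperly stored kit": False,      # ruled out by collected data
    "modified or incorrect PCR procedure": False,   # ruled out by notebook check
    "degraded or insufficient DNA template": True,  # still consistent with data
}

remaining = [cause for cause, possible in hypotheses.items() if possible]

if len(remaining) == 1:
    print(f"Likely cause: {remaining[0]}")
else:
    print(f"Still open: {remaining} -- collect more data or run new tests")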

Example 1: No PCR Product Detected

1. Identify the Problem

First, you need to identify the problem, but try not to define the cause yet. In this example, let’s say you don’t see any PCR product on the agarose gel. The DNA ladder is visible, so there is no problem with the gel electrophoresis system.

Now, you identify that the problem is your PCR reaction. Remember, we’re not looking at the cause just yet.

2. List All Possible Explanations

After you identify the problem, list all the possible causes for the issue (in this example, your PCR reaction). Start with the obvious causes. These could be each of the ingredients in your PCR master mix: Taq DNA polymerase, MgCl₂, buffer, dNTPs, primers, and your DNA template. After listing the obvious causes, include the causes that might have escaped your attention, such as the equipment and the PCR procedure.

3. Collect the Data

Start by collecting data for the easiest explanations to check. First, confirm that the PCR equipment works properly; you can ask other scientists in your laboratory whether they have encountered similar problems. If the equipment is fine, go ahead and collect data for the other explanations.

If you ran your samples with all the proper control reactions, check whether your positive control (a reaction using a known DNA vector as the template) produced a band on your gel.

Storage and Conditions

Next, check the expiration date of the PCR kit you used, and confirm that the kit was stored according to your vendor’s instructions.

To collect data about your procedure, check your laboratory notebook for the procedure that you used in the experiment and compare it with the manufacturer’s instructions. Write down all the modifications you made during this experiment, and note any possible missed steps.

4. Eliminate Explanations

Based on the data you collected, eliminate the explanations that you have determined are not the cause. For example, if your positive control worked and your kit had not expired and was properly stored, you can eliminate the kit as a possible cause. If you also confirm that you didn’t modify the PCR procedure, eliminate the procedure as well.

5. Check with Experimentation

Recheck your list and design an experiment to test the remaining explanations. For example, to test whether your DNA template is the cause, run the DNA samples on a gel to check for degradation, and measure the DNA concentration to confirm that you used enough template for your PCR reaction.

6. Identify the Cause

In the last step, set aside the explanations you’ve ruled out and identify the one remaining as the cause. Using information from the experiment you just ran (in step 5), plan how you’ll fix the problem and redo your experiment.

If this issue is something that could arise again, you might want to find ways to reduce future errors. For example, rather than making your own master mix, you could use a premade master mix.


Example 2: No Clones Growing on the Agar Plate

1. Identify the Problem

First, check all the transformation plates and see whether colonies are growing on your control plates. If your control plates have colonies but your experimental plates do not, then the problem is the transformation of your plasmid DNA.

2. List All Possible Explanations

Having identified that the problem is not the competent cells, list the possible explanations for your failed cloning: your plasmid, the antibiotic, or the temperature during the heat-shock procedure.

3. Collect the Data

If you included controls in your transformation, your positive control plate should be cells transformed with an uncut control plasmid. This plate should contain many colonies. If only a few colonies are growing on this plate, the efficiency of the competent cells may be too low.

To find out if the antibiotic is the cause, check that you used the correct antibiotic for selection, and at the recommended concentration.

To see if an incorrect temperature during the heat shock could be the cause, find out whether the water bath was at 42°C.

4. Eliminate Explanations

Now you can start eliminating some of the possible explanations. For example, suppose your data show that many colonies grew on your positive control plate. This means that your competent cells were efficient.

You also found that you used the correct antibiotic at the recommended concentration for selection. You can therefore eliminate the antibiotic as a possible cause.

Moving on to the temperature during the heat shock, you found that the water bath was at 42°C. Therefore, the procedure was not the problem, and it can be eliminated from your list.

Now the only possible cause is your plasmid DNA.

5. Check with Experimentation

To test whether your plasmid is the problem, check that it is intact using gel electrophoresis and that its concentration is not too low. In addition, check your ligation by sequencing your plasmid to make sure the insert DNA is present. Make sure you follow the instructions in your transformation protocol regarding the amount of plasmid to use.

6. Identify the Cause

For the last step, gather all the information from the experiments you ran. For example, suppose your sequencing data showed no problem with the ligation, but you saw a faint band on the gel and found that the concentration of your plasmid was too low. You have identified the cause of your failed transformation: the plasmid DNA concentration was too low.

To better organize your troubleshooting process, below is a checklist you can use:

Troubleshooting checklist in a laboratory

Gerstein, A. S. (2004). Molecular biology problem solver: A laboratory guide. John Wiley & Sons.

Oelker, S. (2012). LibGuides: Biological sciences: FAQs.

Roberts, L. M. (2001). Developing experimental design and troubleshooting skills in an advanced biochemistry lab. Biochemistry and Molecular Biology Education, 29(1), 10–15.


The Scientific Method
Introduction

The scientific method is a cycle of steps that scientists use to investigate questions about the world:

  • Make an observation.
  • Ask a question.
  • Form a hypothesis, or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.

A Problem-Solving Experiment

Using Beer’s Law to Find the Concentration of Tartrazine

The Science Teacher—January/February 2022 (Volume 89, Issue 3)

By Kevin Mason, Steve Schieffer, Tara Rose, and Greg Matthias


A problem-solving experiment is a learning activity that uses experimental design to solve an authentic problem. It combines two evidence-based teaching strategies: problem-based learning and inquiry-based learning. The use of problem-based learning and scientific inquiry as an effective pedagogical tool in the science classroom has been well established and strongly supported by research (Akinoglu and Tandogan 2007; Areepattamannil 2012; Furtak, Seidel, and Iverson 2012; Inel and Balim 2010; Merritt et al. 2017; Panasan and Nuangchalerm 2010; Wilson, Taylor, and Kowalski 2010).

Floyd James Rutherford, the founder of the American Association for the Advancement of Science (AAAS) Project 2061, once stated: “To separate conceptually scientific content from scientific inquiry is to make it highly probable that the student will properly understand neither” (1964, p. 84). A more recent study using randomized control trials showed that teachers who used an inquiry- and problem-based pedagogy for seven months improved student performance in math and science (Bando, Nashlund-Hadley, and Gertler 2019). A problem-solving experiment uses problem-based learning by posing an authentic or meaningful problem for students to solve, and inquiry-based learning by requiring students to design an experiment to collect and analyze data to solve the problem.

In the problem-solving experiment described in this article, students used Beer’s Law to collect and analyze data to determine if a person consumed a hazardous amount of tartrazine (Yellow Dye #5) for their body weight. The students used their knowledge of solutions, molarity, dilutions, and Beer’s Law to design their own experiment and calculate the amount of tartrazine in a yellow sports drink (or citrus-flavored soda).

According to the Next Generation Science Standards, energy is defined as “a quantitative property of a system that depends on the motion and interactions of matter and radiation with that system” (NGSS Lead States 2013). Interactions of matter and radiation can be some of the most challenging for students to observe, investigate, and conceptually understand. As a result, students need opportunities to observe and investigate these interactions. Light is one example of radiation that interacts with matter.

Light is electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle. When light interacts with matter, it can be reflected at the surface, absorbed by the matter, or transmitted through the matter (Figure 1). When a single beam of light enters a substance perpendicularly (at a 90° angle to the surface), the amount of reflection is minimal, so the light will either be absorbed by the substance or be transmitted through it. When a given wavelength of light shines into a solution, the amount of light absorbed depends on the identity of the substance, the thickness of the container, and the concentration of the solution.
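For reference, absorbance and transmittance are related logarithmically. In standard notation, with $I_0$ the incident intensity and $I$ the transmitted intensity:

$$T = \frac{I}{I_0}, \qquad A = \log_{10}\!\left(\frac{I_0}{I}\right) = -\log_{10} T$$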

Figure 1. Light interacting with matter. (Retrieved from https://etorgerson.files.wordpress.com/2011/05/light-reflect-refract-absorb-label.jpg)

Beer’s Law states that the amount of light absorbed is directly proportional to the thickness and concentration of a solution. Beer’s Law is also sometimes known as the Beer-Lambert Law. A solution of higher concentration will absorb more light and transmit less light (Figure 2). Similarly, if the solution is placed in a thicker container that requires the light to pass through a greater distance, then the solution will absorb more light and transmit less light.
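Written out with the symbols defined in Figure 3, Beer’s Law and its rearrangement for an unknown concentration are:

$$A = \varepsilon b C \quad\Longrightarrow\quad C = \frac{A}{\varepsilon b}$$

In this activity, the students effectively determine the product $\varepsilon b$ empirically, as the slope of a calibration line, rather than looking up a tabulated molar absorptivity.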

Figure 2. Light transmitted through a solution. (Retrieved from https://media.springernature.com/original/springer-static/image/chp%3A10.1007%2F978-3-319-57330-4_13/MediaObjects/432946_1_En_13_Fig4_HTML.jpg)

Figure 3. Definitions of key terms.

Absorbance (A) – the process of light energy being captured by a substance

Beer’s Law (Beer-Lambert Law) – the absorbance (A) of light is directly proportional to the molar absorptivity (ε), thickness (b), and concentration (C) of the solution (A = εbC)

Concentration (C) – the amount of solute dissolved per amount of solution

Cuvette – a container used to hold a sample to be tested in a spectrophotometer

Energy (E) – a quantitative property of a system that depends on motion and interactions of matter and radiation with that system (NGSS Lead States 2013).

Intensity (I) – the amount or brightness of light

Light – electromagnetic radiation that is detectable to the human eye and exhibits properties of both a wave and a particle

Molar Absorptivity (ε) – a property that represents the amount of light absorbed by a given substance per molarity of the solution and per centimeter of thickness (M⁻¹ cm⁻¹)

Molarity (M) – the number of moles of solute per liter of solution (mol/L)

Reflection – the process of light energy bouncing off the surface of a substance

Spectrophotometer – a device used to measure the absorbance of light by a substance

Tartrazine – widely used food and liquid dye

Transmittance (T) – the process of light energy passing through a substance

The amount of light absorbed by a solution can be measured using a spectrophotometer. The solution of a given concentration is placed in a small container called a cuvette. The cuvette has a known thickness that can be held constant during the experiment. It is also possible to obtain cuvettes of different thicknesses to study the effect of thickness on the absorption of light. The key definitions of the terms related to Beer’s Law and the learning activity presented in this article are provided in Figure 3.

Overview of the problem-solving experiment

In the problem presented to students, a 140-pound athlete drinks two bottles of yellow sports drink every day (Figure 4; see Online Connections). When she starts to notice a rash on her skin, she reads the label of the sports drink and notices that it contains a yellow dye known as tartrazine. While tartrazine is safe to drink, it may produce some potential side effects in large amounts, including rashes, hives, or swelling. The students must design an experiment to determine the concentration of tartrazine in the yellow sports drink and the number of milligrams of tartrazine in two bottles of the sports drink.

While a sports drink may have many ingredients, the vast majority of them, such as sugar or electrolytes, are colorless when dissolved in water. The dyes added to the sports drink are responsible for its color. Food manufacturers may use different dyes to color sports drinks to the desired color. Red dye #40 (allura red), blue dye #1 (brilliant blue), yellow dye #5 (tartrazine), and yellow dye #6 (sunset yellow) are the four most common dyes or colorants in sports drinks and many other commercial food products (Stevens et al. 2015). The concentration of the dye in the sports drink affects the amount of light absorbed.

In this problem-solving experiment, the students used the previously studied concept of Beer’s Law—using serial dilutions and absorbance—to find the concentration (molarity) of tartrazine in the sports drink. Based on the evidence, the students then determined if the person had exceeded the maximum recommended daily allowance of tartrazine, given in mg/kg of body mass. The learning targets for this problem-solving experiment are shown in Figure 5 (see Online Connections).

Pre-laboratory experiences

A problem-solving experiment is a form of guided inquiry, which generally requires some prerequisite knowledge and experience. In this activity, the students needed prior knowledge of and experience with Beer’s Law and the techniques for using Beer’s Law to determine an unknown concentration. Prior to the activity, students learned how Beer’s Law relates absorbance to concentration, as well as how to use the dilution equation M₁V₁ = M₂V₂ to determine the concentrations of dilutions. The students had a general understanding of molarity and of using dimensional analysis to convert units.
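As a quick worked example of the dilution equation (the volumes here are illustrative): to prepare 10 mL of a 1 × 10⁻³ M dilution from the 0.01 M stock,

$$V_1 = \frac{M_2 V_2}{M_1} = \frac{(1 \times 10^{-3}\ \text{M})(10\ \text{mL})}{0.01\ \text{M}} = 1.0\ \text{mL},$$

so 1.0 mL of stock diluted to a total volume of 10 mL gives the desired concentration.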

The techniques for using Beer’s Law were introduced in part through a laboratory experiment using various concentrations of copper sulfate. A known concentration of copper sulfate was provided, and the students followed a procedure to prepare dilutions. Students learned the technique for choosing the wavelength that provided the maximum absorbance for the solution to be tested (λmax), which is important for Beer’s Law to create a linear relationship between absorbance and solution concentration. Students graphed the absorbance of each concentration in a spreadsheet as a scatterplot and added a linear trend line. Through class discussion, the teacher checked for understanding in using the equation of the line to determine the concentration of an unknown copper sulfate solution.

After the students graphed the data, they discussed how the R² value related to the data set used to construct the graph. After completing this experiment, the students were comfortable making dilutions from a stock solution, calculating concentrations, and using the spectrophotometer and Beer’s Law to determine an unknown concentration.

Introducing the problem

After the initial experiment on Beer’s Law, the problem-solving experiment was introduced. The problem presented to students is shown in Figure 4 (see Online Connections). A problem-solving experiment provides students with a valuable opportunity to collaborate with other students in designing an experiment and solving a problem. For this activity, the students were assigned to heterogeneous or mixed-ability laboratory groups. Groups should be diversified based on gender; research has shown that gender diversity among groups improves academic performance, while racial diversity has no significant effect (Hansen, Owan, and Pan 2015). It is also important to support students with special needs when assigning groups. The mixed-ability groups were assigned intentionally to place students with special needs with a peer who has the academic ability and disposition to provide support. In addition, some students may need additional accommodations or modifications for this learning activity, such as an outlined lab report, a shortened lab report format, or extended time to complete the analysis. All students were required to wear chemical-splash goggles and gloves, and use caution when handling solutions and glass apparatuses.

Designing the experiment

During this activity, students worked in lab groups to design their own experiment to solve a problem. The teacher used small-group and whole-class discussions to help students understand the problem. Students discussed what information was provided and what they need to know and do to solve the problem. In planning the experiment, the teacher did not provide a procedure and intentionally provided only minimal support to the students as needed. The students designed their own experimental procedure, which encouraged critical thinking and problem solving. The students needed to be allowed to struggle to some extent. The teacher provided some direction and guidance by posing questions for students to consider and answer for themselves. Students were also frequently reminded to review their notes and the previous experiment on Beer’s Law to help them better use their resources to solve the problem. The use of heterogeneous or mixed-ability groups also helped each group be more self-sufficient and successful in designing and conducting the experiment.

Students created a procedure for their experiment with the teacher providing suggestions or posing questions to enhance the experimental design, if needed. Safety was addressed during this consultation to correct safety concerns in the experimental design or provide safety precautions for the experiment. Students needed to wear splash-proof goggles and gloves throughout the experiment. In a few cases, students realized some opportunities to improve their experimental design during the experiment. This was allowed with the teacher’s approval, and the changes to the procedure were documented for the final lab report.

Conducting the experiment

A sample of the sports drink and a 0.01 M stock solution of tartrazine were provided to the students. There are many sports drinks available, but it is recommended to check the ingredients to verify that tartrazine (yellow dye #5) is the only colorant added; this prevents other colorants from affecting the spectroscopy results in the experiment. A citrus-flavored soda could also be used as an alternative, because many sodas contain tartrazine as well. It is important to note that tartrazine is considered safe to drink, but it may produce some potential side effects in large amounts, including rashes, hives, or swelling. A list of the materials needed for this problem-solving experiment is shown in Figure 6 (see Online Connections).

This problem-solving experiment required students to create dilutions of known concentrations of tartrazine as a reference to determine the unknown concentration of tartrazine in a sports drink. To create the dilutions, the students were provided with a 0.01 M stock solution of tartrazine. The teacher purchased powdered tartrazine, available from numerous vendors, to create the stock solution. The 0.01 M stock solution was prepared by weighing 0.534 g of tartrazine and dissolving it in enough distilled water to make a 100 ml solution. Yellow food coloring could be used as an alternative, but it would take some research to determine its concentration. Since students have previously explored the experimental techniques, they should know to prepare dilutions that are somewhat darker and somewhat lighter in color than the yellow sports drink sample. Students should use five dilutions for best results.
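The 0.534 g figure can be checked against the molar mass of tartrazine (C₁₆H₉N₄Na₃O₉S₂, approximately 534.4 g/mol):

$$m = M \times V \times \text{MW} = (0.01\ \text{mol/L})(0.100\ \text{L})(534.4\ \text{g/mol}) \approx 0.534\ \text{g}$$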

Typically, a good range for the yellow sports drink is standard dilutions from 1 × 10⁻³ M to 1 × 10⁻⁵ M. The teacher may need to caution students that a dilution that is too dark will not yield good results and will lower the R² value. Students who used very dark dilutions often realized that eliminating that data point created a better linear trendline, as long as doing so didn’t reduce the data set to fewer than four points. Some students even tried to use the 0.01 M stock solution without any dilution; this was much too dark. The students needed to make substantial dilutions to get the solutions into the range of the sports drink.

After the dilutions were created, the absorbance of each dilution was measured using a spectrophotometer. A Vernier SpectroVis (~$400) spectrophotometer was used to measure the absorbance of the prepared dilutions with known concentrations. The students adjusted the spectrophotometer to use different wavelengths of light and selected the wavelength with the highest absorbance reading. The same wavelength was then used for each measurement of absorbance. A wavelength of about 425 nanometers (nm), near tartrazine’s absorbance maximum, provided an accurate measurement and a good linear relationship. After measuring the absorbance of the dilutions of known concentrations, the students measured the absorbance of the sports drink with an unknown concentration of tartrazine using the spectrophotometer at the same wavelength. If a spectrophotometer is not available, a color comparison can be used as a low-cost alternative for completing this problem-solving experiment (Figure 7; see Online Connections).

Analyzing the results

After completing the experiment, the students graphed the absorbance and known tartrazine concentrations of the dilutions on a scatterplot to create a linear trendline. In this experiment, absorbance was the dependent variable, which should be graphed on the y-axis. Some students mistakenly reversed the axes on the scatterplot. Next, the students used the graph to find the equation for the line. Then, the students solved for the unknown concentration (molarity) of tartrazine in the sports drink, given the linear equation and the absorbance of the sports drink measured experimentally.
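A minimal sketch of this analysis in Python, with hypothetical absorbance readings standing in for student data; the polyfit slope and intercept play the same role as the spreadsheet trendline:

import numpy as np

# Known tartrazine dilutions (M) and measured absorbances (illustrative values)
conc = np.array([2e-5, 4e-5, 6e-5, 8e-5, 1e-4])
absorbance = np.array([0.11, 0.21, 0.32, 0.43, 0.53])

# Linear trendline A = slope*C + intercept, plus R^2 for goodness of fit
slope, intercept = np.polyfit(conc, absorbance, 1)
r_squared = np.corrcoef(conc, absorbance)[0, 1] ** 2

# Solve the linear equation for the unknown sports-drink concentration
a_drink = 0.27  # measured absorbance of the drink (illustrative)
c_drink = (a_drink - intercept) / slope
print(f"R^2 = {r_squared:.4f}, drink concentration = {c_drink:.2e} M")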

To answer the question posed in the problem, the students also calculated the maximum amount of tartrazine that could be safely consumed by a 140-pound person, using the information given in the problem. A common error in solving the problem was not converting the units of volume given in the problem from ounces to liters. With the molarity and the volume in liters, the students then calculated the mass of tartrazine consumed per day in milligrams. A sample of the graph and calculations from one student group is shown in Figure 8. Finally, based on their calculations, the students answered the question posed in the original problem and determined whether the person’s daily consumption of tartrazine exceeded the threshold for safe consumption. In this case, the students concluded that the person did NOT consume more than the allowable daily limit of tartrazine.
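A sketch of this final comparison in Python; the bottle size, measured molarity, and safe daily allowance below are assumed stand-ins for the values given in Figure 4, not the article’s actual numbers:

# Unit conversions
OZ_TO_L = 0.0295735            # liters per US fluid ounce
LB_TO_KG = 0.453592            # kilograms per pound
MW_TARTRAZINE = 534.4          # g/mol

volume_l = 2 * 20 * OZ_TO_L    # two 20 oz bottles per day (assumed size)
molarity = 4.0e-5              # M, from the Beer's Law analysis (illustrative)
mg_consumed = molarity * volume_l * MW_TARTRAZINE * 1000  # g -> mg

body_mass_kg = 140 * LB_TO_KG
limit_mg = 7.5 * body_mass_kg  # assumed 7.5 mg/kg/day allowance

print(f"Consumed {mg_consumed:.1f} mg/day vs. limit {limit_mg:.0f} mg/day")
# -> roughly 25 mg vs. 476 mg: under the limit, matching the students' conclusion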

Figure 8. Sample graph and calculations from a student group.

Communicating the results

After conducting the experiment, students reported their results in a written laboratory report that included the following sections: title, purpose, introduction, hypothesis, materials and methods, data and calculations, conclusion, and discussion. The laboratory report was assessed using the scoring rubric shown in Figure 9 (see Online Connections). In general, the students did very well on this problem-solving experiment. Students typically scored a three or higher on each criterion of the rubric. Throughout the activity, the students successfully demonstrated their ability to design an experiment, collect data, perform calculations, solve a problem, and effectively communicate those results.

This activity is authentic problem-based learning in science, as the true concentration of tartrazine in the sports drink was not provided by the teacher or known by the students. The students were generally somewhat biased, as they assumed the experiment would show that the person exceeded the recommended maximum consumption of tartrazine. Some students struggled with reporting that the recommended limit was far higher than the amount in the two sports drinks consumed by the person each day. This allows for a great discussion about the use of scientific methods and evidence to provide unbiased answers to meaningful questions and problems.

The most common errors in this problem-solving experiment were calculation errors, most often in calculating the concentrations of the dilutions (perhaps because the concentrations were very small). There were also several common errors in communicating the results in the laboratory report. In some cases, students did not provide enough background information in the introduction of the report. When communicating the results, some students also failed to reference specific data from the experiment. Finally, in the discussion section, some students expressed doubts about the results, not because there was an obvious error, but because they did not believe the amount consumed could be so much less than the recommended consumption limit of tartrazine.

The scientific study and investigation of energy and matter are salient topics addressed in the Next Generation Science Standards (Figure 10; see Online Connections). In a chemistry classroom, students should have multiple opportunities to observe and investigate the interaction of energy and matter. In this problem-solving experiment students used Beer’s Law to collect and analyze data to determine if a person consumed an amount of tartrazine that exceeded the maximum recommended daily allowance. The students correctly concluded that the person in the problem did not consume more than the recommended daily amount of tartrazine for their body weight.

In this activity students learned to work collaboratively to design an experiment, collect and analyze data, and solve a problem. These skills extend beyond any one science subject or class. Through this activity, students had the opportunity to do real-world science to solve a problem without a previously known result. The process of designing an experiment may be difficult for some students who are accustomed to being given an experimental procedure in their previous science classroom experiences. However, because students sometimes struggled to design their own experiment and perform the calculations, they also learned to persevere in collecting and analyzing data to solve a problem, which is a valuable life lesson for all students.

Online Connections

The Beer-Lambert Law at Chemistry LibreTexts: https://bit.ly/3lNpPEi

Beer’s Law – Theoretical Principles: https://teaching.shu.ac.uk/hwb/chemistry/tutorials/molspec/beers1.htm

Beer’s Law at Illustrated Glossary of Organic Chemistry: http://www.chem.ucla.edu/~harding/IGOC/B/beers_law.html

Beer Lambert Law at Edinburgh Instruments: https://www.edinst.com/blog/the-beer-lambert-law/

Beer’s Law Lab at PhET Interactive Simulations: https://phet.colorado.edu/en/simulation/beers-law-lab

Figure 4. Problem-solving experiment problem statement: https://bit.ly/3pAYHtj

Figure 5. Learning targets: https://bit.ly/307BHtb

Figure 6. Materials list: https://bit.ly/308a57h

Figure 7. The use of color comparison as a low-cost alternative: https://bit.ly/3du1uyO

Figure 9. Summative performance-based assessment rubric: https://bit.ly/31KoZRj

Figure 10. Connecting to the Next Generation Science Standards : https://bit.ly/3GlJnY0

Kevin Mason is Professor of Education at the University of Wisconsin–Stout, Menomonie, WI; Steve Schieffer is a chemistry teacher at Amery High School, Amery, WI; Tara Rose is a chemistry teacher at Amery High School, Amery, WI; and Greg Matthias is Assistant Professor of Education at the University of Wisconsin–Stout, Menomonie, WI.

Akinoglu, O., and R. Tandogan. 2007. The effects of problem-based active learning in science education on students’ academic achievement, attitude and concept learning. Eurasia Journal of Mathematics, Science, and Technology Education 3 (1): 77–81.

Areepattamannil, S. 2012. Effects of inquiry-based science instruction on science achievement and interest in science: Evidence from Qatar. The Journal of Educational Research 105 (2): 134–146.

Bando, R., E. Nashlund-Hadley, and P. Gertler. 2019. Effect of inquiry and problem-based pedagogy on learning: Evidence from 10 field experiments in four countries. National Bureau of Economic Research Working Paper 26280.

Furtak, E., T. Seidel, and H. Iverson. 2012. Experimental and quasi-experimental studies of inquiry-based science teaching: A meta-analysis. Review of Educational Research 82 (3): 300–329.

Hansen, Z., H. Owan, and J. Pan. 2015. The impact of group diversity on class performance. Education Economics 23 (2): 238–258.

Inel, D., and A. Balim. 2010. The effects of using problem-based learning in science and technology teaching upon students’ academic achievement and levels of structuring concepts. Pacific Forum on Science Learning and Teaching 11 (2): 1–23.

Merritt, J., M. Lee, P. Rillero, and B. Kinach. 2017. Problem-based learning in K–8 mathematics and science education: A literature review. The Interdisciplinary Journal of Problem-based Learning 11 (2).

NGSS Lead States. 2013. Next Generation Science Standards: For states, by states. Washington, DC: National Academies Press.

Panasan, M., and P. Nuangchalerm. 2010. Learning outcomes of project-based and inquiry-based learning activities. Journal of Social Sciences 6 (2): 252–255.

Rutherford, F.J. 1964. The role of inquiry in science teaching. Journal of Research in Science Teaching 2 (2): 80–84.

Stevens, L.J., J.R. Burgess, M.A. Stochelski, and T. Kuczek. 2015. Amounts of artificial food dyes and added sugars in foods and sweets commonly consumed by children. Clinical Pediatrics 54 (4): 309–321.

Wilson, C., J. Taylor, and S. Kowalski. 2010. The relative effects and equity of inquiry-based and commonplace science teaching on students’ knowledge, reasoning, and argumentation. Journal of Research in Science Teaching 47 (3): 276–301.


America’s Lab Report: Investigations in High School Science (National Academies Press, 2006)

Chapter 3: Laboratory Experiences and Student Learning

In this chapter, the committee first identifies and clarifies the learning goals of laboratory experiences and then discusses research evidence on attainment of those goals. The review of research evidence draws on three major strands of research: (1) cognitive research illuminating how students learn; (2) studies that examine laboratory experiences that stand alone, separate from the flow of classroom science instruction; and (3) research projects that sequence laboratory experiences with other forms of science instruction. We propose the phrase “integrated instructional units” to describe these research and design projects that integrate laboratory experiences within a sequence of science instruction. In the following section of this chapter, we present design principles for laboratory experiences derived from our analysis of these multiple strands of research and suggest that laboratory experiences designed according to these principles are most likely to accomplish their learning goals. Next we consider the role of technology in supporting student learning from laboratory experiences. The chapter concludes with a summary.

GOALS FOR LABORATORY EXPERIENCES

Laboratories have been purported to promote a number of goals for students, most of which are also the goals of science education in general (Lunetta, 1998; Hofstein and Lunetta, 1982). The committee commissioned a paper to examine the definition and goals of laboratory experiences (Millar, 2004) and also considered research reviews on laboratory education that have identified and discussed learning goals (Anderson, 1976; Hofstein and Lunetta, 1982; Lazarowitz and Tamir, 1994; Shulman and Tamir, 1973). While these inventories of goals vary somewhat, a core set remains fairly consistent. Building on these commonly stated goals, the committee developed a comprehensive list of goals for or desired outcomes of laboratory experiences:

Enhancing mastery of subject matter. Laboratory experiences may enhance student understanding of specific scientific facts and concepts and of the way in which these facts and concepts are organized in the scientific disciplines.

Developing scientific reasoning. Laboratory experiences may promote a student’s ability to identify questions and concepts that guide scientific investigations; to design and conduct scientific investigations; to develop and revise scientific explanations and models; to recognize and analyze alternative explanations and models; and to make and defend a scientific argument. Making a scientific argument includes such abilities as writing, reviewing information, using scientific language appropriately, constructing a reasoned argument, and responding to critical comments.

Understanding the complexity and ambiguity of empirical work. Interacting with the unconstrained environment of the material world in laboratory experiences may help students concretely understand the inherent complexity and ambiguity of natural phenomena. Laboratory experiences may help students learn to address the challenges inherent in directly observing and manipulating the material world, including troubleshooting equipment used to make observations, understanding measurement error, and interpreting and aggregating the resulting data.

Developing practical skills. In laboratory experiences, students may learn to use the tools and conventions of science. For example, they may develop skills in using scientific equipment correctly and safely, making observations, taking measurements, and carrying out well-defined scientific procedures.

Understanding of the nature of science. Laboratory experiences may help students to understand the values and assumptions inherent in the development and interpretation of scientific knowledge, such as the idea that science is a human endeavor that seeks to understand the material world and that scientific theories, models, and explanations change over time on the basis of new evidence.

Cultivating interest in science and interest in learning science. As a result of laboratory experiences that make science “come alive,” students may become interested in learning more about science and see it as relevant to everyday life.

Developing teamwork abilities. Laboratory experiences may also promote a student’s ability to collaborate effectively with others in carrying out complex tasks, to share the work of the task, to assume different roles at different times, and to contribute and respond to ideas.

Although most of these goals were derived from previous research on laboratory experiences and student learning, the committee identified the new goal of “understanding the complexity and ambiguity of empirical work” to reflect the unique nature of laboratory experiences. Students’ direct encounters with natural phenomena in laboratory science courses are inherently more ambiguous and messy than the representations of these phenomena in science lectures, textbooks, and mathematical formulas (Millar, 2004). The committee thinks that developing students’ ability to recognize this complexity and develop strategies for sorting through it is an essential goal of laboratory experiences. Unlike the other goals, which coincide with the goals of science education more broadly and may be advanced through lectures, reading, or other forms of science instruction, laboratory experiences may be the only way to advance the goal of helping students understand the complexity and ambiguity of empirical work.

RECENT DEVELOPMENTS IN RESEARCH AND DESIGN OF LABORATORY EXPERIENCES

In reviewing evidence on the extent to which students may attain the goals of laboratory experiences listed above, the committee identified a recent shift in the research. Historically, laboratory experiences have been separate from the flow of classroom science instruction and often lacked clear learning goals. Because this approach remains common today, we refer to these isolated interactions with natural phenomena as “typical” laboratory experiences. Reflecting this separation, researchers often engaged students in one or two experiments or other science activities and then conducted assessments to determine whether their understanding of the science concept underlying the activity had increased. Some studies directly compared measures of student learning following laboratory experiences with measures of student learning following lectures, discussions, videotapes, or other methods of science instruction in an effort to determine which modes of instruction were most effective.

Over the past 10 years, some researchers have shifted their focus. Assuming that the study of the natural world requires opportunities to directly encounter that world, investigators are integrating laboratory experiences and other forms of instruction into instructional sequences in order to help students progress toward science learning goals. These studies draw on principles of learning derived from the rapid growth in knowledge from cognitive research to address the question of how to design science instruction, including laboratory experiences, in order to support student learning.

Given the complexity of these teaching and learning sequences, the committee struggled with how best to describe them. Initially, the committee used the term “science curriculum units.” However, that term failed to convey the importance of integration in this approach to sequencing laboratory experiences with other forms of teaching and learning. The research reviewed by the committee indicated that these curricula not only integrate laboratory experiences in the flow of science instruction, but also integrate student learning about both the concepts and processes of science. To reflect these aspects of the new approach, the committee settled on the term “integrated instructional units” in this report.

The following sections briefly describe principles of learning derived from recent research in the cognitive sciences and their application in design of integrated instructional units.

Principles of Learning Informing Integrated Instructional Units

Recent research and development of integrated instructional units that incorporate laboratory experiences are based on a large and growing body of cognitive research. This research has led to development of a coherent and multifaceted theory of learning that recognizes that prior knowledge, context, language, and social processes play critical roles in cognitive development and learning (National Research Council, 1999). Taking each of these factors into account, the National Research Council (NRC) report How People Learn identifies four critical principles that support effective learning environments (Glaser, 1994; National Research Council, 1999), and a more recent NRC report, How Students Learn , considers these principles as they relate specifically to science (National Research Council, 2005). These four principles are summarized below.

Learner-Centered Environments

The emerging integrated instructional units are designed to be learner-centered. This principle is based on research showing that effective instruction begins with what learners bring to the setting, including cultural practices and beliefs, as well as knowledge of academic content. Taking students’ preconceptions into account is particularly critical in science instruction. Students come to the classroom with conceptions of natural phenomena that are based on their everyday experiences in the world. Although these conceptions are often reasonable and can provide satisfactory everyday explanations to students, they do not always match scientific explanations and break down in ways that students often fail to notice. Teachers face the challenge of engaging with these intuitive ideas, some of which are more firmly rooted than others, in order to help students move toward a more scientific understanding. In this way, understanding scientific knowledge often requires a change in—not just an addition to—what students notice and understand about the world (National Research Council, 2005).

Knowledge-Centered Environments

The developing integrated instructional units are based on the principle that learning is enhanced when the environment is knowledge-centered. That is, the laboratory experiences and other instruction included in integrated instructional units are designed to help students learn with understanding, rather than simply acquiring sets of disconnected facts and skills (National Research Council, 1999).

In science, the body of knowledge with which students must engage includes accepted scientific ideas about natural phenomena as well as an understanding of what it means to “do science.” These two aspects of science are reflected in the goals of laboratory experiences, which include mastery of subject matter (accepted scientific ideas about phenomena) and several goals related to the processes of science (understanding the complexity of empirical work, development of scientific reasoning). Research on student thinking about science shows a progression of ideas about scientific knowledge and how it is justified. At the first stage, students perceive scientific knowledge as right or wrong. Later, students characterize discrepant ideas and evidence as “mere opinion.” Eventually, students recognize scientific knowledge as being justified by evidence derived through rigorous research. Several studies have shown that a large proportion of high school students are at the first stage in their views of scientific knowledge (National Research Council, 2005).

Knowledge-centered environments encourage students to reflect on their own learning progress (metacognition). Learning is facilitated when individuals identify, monitor, and regulate their own thinking and learning. To be effective problem solvers and learners, students need to determine what they already know and what else they need to know in any given situation, including when things are not going as expected. For example, students with better developed metacognitive strategies will abandon an unproductive problem-solving strategy very quickly and substitute a more productive one, whereas students with less effective metacognitive skills will continue to use the same strategy long after it has failed to produce results (Gobert and Clement, 1999). The basic metacognitive strategies include: (1) connecting new information to former knowledge, (2) selecting thinking strategies deliberately, and (3) monitoring one’s progress during problem solving.

A final aspect of knowledge-centered learning, which may be particularly relevant to integrated instructional units, is that the practices and activities in which people engage while learning shape what they learn. Transfer (the ability to apply learning in varying situations) is made possible to the extent that knowledge and learning are grounded in multiple contexts. Transfer is more difficult when a concept is taught in a limited set of contexts or through a limited set of activities. By encountering the same concept at work in multiple contexts (such as in laboratory experiences and in discussion), students can develop a deeper understanding of the concept and how it can be used as well as the ability to transfer what has been learned in one context to others (Bransford and Schwartz, 2001).

Assessment to Support Learning

Another important principle of learning that has informed development of integrated instructional units is that assessment can be used to support learning. Cognitive research has shown that feedback is fundamental to learning, but feedback opportunities are scarce in most classrooms. This research indicates that formative assessments provide students with opportunities to revise and improve the quality of their thinking while also making their thinking apparent to teachers, who can then plan instruction accordingly. Assessments must reflect the learning goals of the learning environment. If the goal is to enhance understanding and the applicability of knowledge, it is not sufficient to provide assessments that focus primarily on memory for facts and formulas. The ThinkerTools science instructional unit discussed in the following section incorporates this principle, including formative self-assessment tools that help students advance toward several of the goals of laboratory experiences.

Community-Centered Environments

Research has shown that learning is enhanced in a community setting, when students and teachers share norms that value knowledge and participation (see Cobb et al., 2001). Such norms increase people’s opportunities and motivation to interact, receive feedback, and learn. Learning is enhanced when students have multiple opportunities to articulate their ideas to peers and to hear and discuss others’ ideas. A community-centered classroom environment may not be organized in traditional ways. For example, in science classrooms, the teacher is often the sole authority and arbiter of scientific knowledge, placing students in a relatively passive role (Lemke, 1990). Such an organization may promote students’ view that scientific knowledge is a collection of facts about the world, authorized by expert scientists and irrelevant to students’ own experience. The instructional units discussed below have attempted to restructure the social organization of the classroom and encourage students and the teacher to interact and learn from each other.

Design of Integrated Instructional Units

The learning principles outlined above have begun to inform design of integrated instructional units that include laboratory experiences with other types of science learning activities. These integrated instructional units were developed through research programs that tightly couple research, design, and implementation in an iterative process. The research programs are beginning to document the details of student learning, development, and interaction when students are given systematic support—or scaffolding—in carefully structured social and cognitive activities. Scaffolding helps to guide students’ thinking, so that they can gradually take on more autonomy in carrying out various parts of the activities. Emerging research on these integrated instructional units provides guidance about how to design effective learning environments for real-world educational settings (see Linn, Davis, and Bell, 2004a; Cobb et al., 2003; Design-Based Research Collective, 2003).

Integrated instructional units interweave laboratory experiences with other types of science learning activities, including lectures, reading, and discussion. Students are engaged in framing research questions, designing and executing experiments, gathering and analyzing data, and constructing arguments and conclusions as they carry out investigations. Diagnostic, formative assessments are embedded into the instructional sequences and can be used to gauge students’ developing understanding and to promote their self-reflection on their thinking.

With respect to laboratory experiences, these instructional units share two key features. The first is that specific laboratory experiences are carefully selected on the basis of research-based ideas of what students are likely to learn from them. For example, any particular laboratory activity is likely to contribute to learning only if it engages students’ current thinking about the target phenomena and is likely to make them critically evaluate their ideas in relation to what they see during the activity. The second is that laboratory experiences are explicitly linked to and integrated with other learning activities in the unit. The assumption behind this second feature is that just because students do a laboratory activity, they may not necessarily understand what they have done. Nascent research on integrated instructional units suggests that both framing a particular laboratory experience ahead of time and following it with activities that help students make sense of the experience are crucial in using a laboratory experience to support science learning. This “integration” approach draws on earlier research showing that intervention and negotiation with an authority, usually a teacher, was essential to help students make meaning out of their laboratory activities (Driver, 1995).

Examples of Integrated Instructional Units

Scaling Up Chemistry That Applies

Chemistry That Applies (CTA) is a 6–8 week integrated instructional unit designed to help students in grades 8–10 understand the law of conservation of matter. Created by researchers at the Michigan Department of Education (Blakeslee et al., 1993), this instructional unit was one of only a few curricula that were highly rated by the American Association for the Advancement of Science Project 2061 in its study of middle school science curricula (Kesidou and Roseman, 2002). Student groups explore four chemical reactions: burning, rusting, the decomposition of water, and the volcanic reaction of baking soda and vinegar. They cause these reactions to happen, obtain and record data in individual notebooks, analyze the data, and use evidence-based arguments to explain the data.

The instructional unit engages the students in a carefully structured sequence of hands-on laboratory investigations interwoven with other forms of instruction (Lynch, 2004). Student understanding is “pressed” through many experiences with the reactions and by group and individual pressures to make meaning of these reactions. For example, video transcripts indicate that students engaged in “science talk” during teacher demonstrations and during student experiments.

Researchers at George Washington University, in a partnership with Montgomery County public schools in Maryland, are currently conducting a five-year study of the feasibility of scaling up effective integrated instructional units, including CTA (Lynch, Kuipers, Pyke, and Szesze, in press). In 2001-2002, CTA was implemented in five highly diverse middle schools that were matched with five comparison schools using traditional curriculum materials in a quasi-experimental research design. All 8th graders in the five CTA schools, a total of about 1,500 students, participated in the CTA curriculum, while all 8th graders in the matched schools used the science curriculum materials normally available. Students were given pre- and posttests.

In 2002-2003, the study was replicated in the same five pairs of schools. In both years, students who participated in the CTA curriculum scored significantly higher than comparison students on a posttest, and their average scores showed higher levels of fluency with the concept of conservation of matter (Lynch, 2004). However, because the concept is so difficult, most students in both the treatment and control groups still held misconceptions, and few had a flexible, fully scientific understanding of the conservation of matter. All subgroups of students who were engaged in the CTA curriculum—including low-income students (eligible for free and reduced-price meals), black and Hispanic students, English language learners, and students eligible for special educational services—scored significantly higher than students in the control group on the posttest (Lynch and O’Donnell, 2005). The effect sizes were largest among three subgroups considered at risk for low science achievement: Hispanic students, low-income students, and English language learners.
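For readers unfamiliar with the term, an effect size is a standardized difference between group means. The sketch below computes Cohen’s d, one common effect size measure; the scores are invented purely for illustration and are not data from the CTA study.

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference between two groups, using the pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical posttest scores for one subgroup (not actual study data)
treatment_scores = [72, 68, 75, 80, 66, 74, 71, 78]
control_scores = [61, 65, 58, 70, 63, 60, 67, 62]
print(f"Cohen's d: {cohens_d(treatment_scores, control_scores):.2f}")
```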

Based on these encouraging results, CTA was scaled up to include about 6,000 8th graders in 20 schools in 2003-2004 and 12,000 8th graders in 37 schools in 2004-2005 (Lynch and O’Donnell, 2005).

ThinkerTools

The ThinkerTools instructional unit is a sequence of laboratory experiences and other learning activities that, in its initial version, yielded substantial gains in students’ understanding of Newton’s laws of motion (White, 1993). Building on these positive results, ThinkerTools was expanded to focus not only on mastery of these laws of motion but also on scientific reasoning and understanding of the nature of science (White and Frederiksen, 1998). In the 10-week unit, students were guided to reflect on their own thinking and learning while they carried out a series of investigations. The integrated instructional unit was designed to help them learn about science processes as well as about the subject of force and motion. The instructional unit supports students as they formulate hypotheses, conduct empirical investigations, work with conceptually analogous computer simulations, and refine a conceptual model for the phenomena. Across the series of investigations, the integrated instructional unit introduces increasingly complex concepts. Formative assessments are integrated throughout the instructional sequence in ways that allow students to self-assess and reflect on core aspects of inquiry and epistemological dimensions of learning.

Researchers investigated the impact of ThinkerTools in 12 7th, 8th, and 9th grade classrooms with 3 teachers and 343 students. The researchers evaluated students’ developing understanding of scientific investigations using a pre-post inquiry test. In this assessment, students were engaged in a thought experiment that asked them to conceptualize, design, and think through a hypothetical research study. Gains in scores for students in the reflective self-assessment classes and control classrooms were compared. Results were also broken out for students categorized as high and low achieving, based on performance on a standardized test administered before the intervention. Students in the reflective self-assessment classes exhibited greater gains on a test of investigative skills. This was especially true for low-achieving students. The researchers further analyzed specific components of the associated scientific processes—formulating hypotheses, designing an experiment, predicting results, drawing conclusions from hypothetical results, and relating those conclusions back to the original hypotheses. Students in the reflective self-assessment classes did better on all of these components than those in control classrooms, especially on the more difficult components (drawing conclusions and relating them to the original hypotheses).

Computer as Learning Partner

Beginning in 1980, a large group of technologists, classroom teachers, and education researchers developed the Computer as Learning Partner (CLP) integrated instructional unit. Over 10 years, the team developed and tested eight versions of a 12-week unit on thermodynamics. Each year, a cohort of about 300 8th grade students participated in a sequence of teaching and learning activities focused primarily on a specific learning goal—enhancing students’ understanding of the difference between heat and temperature (Linn, 1997). The project engaged students in a sequence of laboratory experiences supported by computers, discussions, and other forms of science instruction. For example, computer images and words prompted students to make predictions about heat and conductivity and to perform experiments using temperature-sensitive probes to confirm or refute their predictions. Students were given tasks related to scientific phenomena affecting their daily lives—such as how to keep a drink cold for lunch or selecting appropriate clothing for hiking in the mountains—as a way to motivate their interest and curiosity. Teachers played an important role in carrying out the curriculum, asking students to critique their own and each other’s investigations and encouraging them to reflect on their own thinking.

Over 10 years of study and revision, the integrated instructional unit proved increasingly effective in achieving its stated learning goals. Before the sequenced instruction was introduced, only 3 percent of middle school students could adequately explain the difference between heat and temperature. Eight versions later, about half of the students participating in CLP could explain this difference, a roughly fifteenfold improvement over the baseline. In addition, nearly 100 percent of students who participated in the final version of the instructional unit demonstrated understanding of conductors (Linn and Songer, 1991). By comparison, only 25 percent of a group of undergraduate chemistry students at the University of California at Berkeley could adequately explain the difference between heat and temperature. A longitudinal study comparing high school seniors who participated in the thermodynamics unit in middle school with seniors who had received more traditional middle school science instruction found a 50 percent improvement in CLP students’ performance in distinguishing between heat and temperature (Linn and Hsi, 2000).

Participating in the CLP instructional unit also increased students’ interest in science. Longitudinal studies of CLP participants revealed that, among those who went on to take high school physics, over 90 percent thought science was relevant to their lives, and 60 percent could provide examples of scientific phenomena in their daily lives. By comparison, only 60 percent of high school physics students who had not participated in the unit during middle school thought science was relevant to their lives, and only 30 percent could give such examples (Linn and Hsi, 2000).

EFFECTIVENESS OF LABORATORY EXPERIENCES

Description of the Literature Review

The committee’s review of the literature on the effectiveness of laboratory experiences considered studies of typical laboratory experiences and emerging research focusing on integrated instructional units. In reviewing both bodies of research, we aim to specify how laboratory experiences can further each of the science learning goals outlined at the beginning of this chapter.

Limitations of the Research

Our review was complicated by weaknesses in the earlier research on typical laboratory experiences, isolated from the stream of instruction (Hofstein and Lunetta, 1982). First, the investigators did not agree on a precise definition of the “laboratory” experiences under study. Second, many studies were weak in the selection and control of variables. Investigators failed to examine or report important variables relating to student abilities and attitudes; for example, they did not record students’ prior laboratory experiences. They also gave too little attention to extraneous factors that might affect student outcomes, such as instruction outside the laboratory. Third, the studies of typical laboratory experiences usually involved a small group of students with little diversity, making it difficult to generalize the results to the large, diverse population of U.S. high schools today. Fourth, investigators did not give enough attention to the adequacy of the instruments used to measure student outcomes. For example, paper-and-pencil tests that focus on mastery of subject matter, the most frequently used form of assessment, do not capture student attainment of all of the goals we have identified. Such tests cannot measure student progress toward goals that may be unique to laboratory experiences, such as developing scientific reasoning, understanding the complexity and ambiguity of empirical work, and developing practical skills.

Finally, most of the available research on typical laboratory experiences does not fully describe these activities. Few studies have examined teacher behavior, the classroom learning environment, or variables identifying teacher-student interaction. In addition, few recent studies have focused on laboratory manuals—both what is in them and how they are used. Research on the intended design of laboratory experiences, their implementation, and whether the implementation resembles the initial design would provide the understanding needed to guide improvements in laboratory instruction. However, only a few studies of typical laboratory experiences have measured the effectiveness of particular laboratory experiences in terms of both the extent to which their activities match those that the teacher intended and the extent to which the students’ learning matches the learning objectives of the activity (Tiberghien, Veillard, Le Marchal, Buty, and Millar, 2000).

We also found weaknesses in the evolving research on integrated instructional units. First, these new units tend to be hothouse projects; researchers work intensively with teachers to construct atypical learning environments. While some have been developed and studied over a number of years and iterations, they usually involve relatively small samples of students. Only now are some of these efforts expanding to a scale that will allow robust generalizations about their value and how best to implement them. Second, these integrated instructional units have not been designed specifically to contrast some version of laboratory or practical experience with a lack of such experience. Rather, they assume that educational interventions are complex, systemic “packages” (Salomon, 1996) involving many interactions that may influence specific outcomes, and that science learning requires some opportunities for direct engagement with natural phenomena. Researchers commonly aim to document the complex interactions between and among students, teachers, laboratory materials, and equipment in an effort to develop profiles of successful interventions (Cobb et al., 2003; Collins, Joseph, and Bielaczyc, 2004; Design-Based Research Collective, 2003). These newer studies focus on how to sequence laboratory experiences and other forms of science instruction to support students’ science learning.

Scope of the Literature Search

A final note on the review of research: the scope of our study did not allow for an in-depth review of all of the individual studies of laboratory education conducted over the past 30 years. Fortunately, three major reviews of the literature from the 1970s, 1980s, and 1990s are available (Lazarowitz and Tamir, 1994; Lunetta, 1998; Hofstein and Lunetta, 2004). The committee relied on these reviews in our analysis of studies published before 1994. To identify studies published between 1994 and 2004, the committee searched electronic databases.

To supplement the database search, the committee commissioned three experts to review the nascent body of research on integrated instructional units (Bell, 2005; Duschl, 2004; Millar, 2004). We also invited researchers who are currently developing, revising, and studying the effectiveness of integrated instructional units to present their findings at committee meetings (Linn, 2004; Lynch, 2004).

All of these activities yielded few studies that focused on the high school level and were conducted in the United States. For this reason, the committee expanded the range of the literature considered to include some studies targeted at middle school and some international studies. We included studies at the elementary through postsecondary levels as well as studies of teachers’ learning in our analysis. In drawing conclusions from studies that were not conducted at the high school level, the committee took into consideration the extent to which laboratory experiences in high school differ from those in elementary and postsecondary education. Developmental differences among students, the organizational structure of schools, and the preparation of teachers are a few of the many factors that vary by school level and that the committee considered in making inferences from the available research. Similarly, when deliberating on studies conducted outside the United States, we considered differences in the science curriculum, the organization of schools, and other factors that might influence the outcomes of laboratory education.

Mastery of Subject Matter

Evidence from Research on Typical Laboratory Experiences

Claims that typical laboratory experiences help students master science content rest largely on the argument that opportunities to directly interact with, observe, and manipulate materials will help students to better grasp difficult scientific concepts. It is believed that these experiences will force students to confront their misunderstandings about phenomena and shift toward more scientific understanding.

Despite these claims, there is almost no direct evidence that typical laboratory experiences that are isolated from the flow of science instruction are particularly valuable for learning specific scientific content (Hofstein and Lunetta, 1982, 2004; Lazarowitz and Tamir, 1994). White (1996) points out that many major reviews of science education from the 1960s and 1970s indicate that laboratory work does little to improve understanding of science content as measured by paper-and-pencil tests, and later studies from the 1980s and early 1990s do not challenge this view. Other studies indicate that typical laboratory experiences are no more effective in helping students master science subject matter than demonstrations in high school biology (Coulter, 1966), demonstration and discussion (Yager, Engen, and Snider, 1969), or viewing filmed experiments in chemistry (Ben-Zvi, Hofstein, Kempa, and Samuel, 1976). In contrast to most of the research, a single comparative study (Freedman, 2002) found that students who received regular laboratory instruction over the course of a school year performed better on a test of physical science knowledge than a control group of students who took a similar physical science course without laboratory activities.

Clearly, most of the evidence does not support the argument that typical laboratory experiences lead to improved learning of science content. More specifically, concrete experiences with phenomena alone do not appear to force students to confront their misunderstandings and reevaluate their own assumptions. For example, VandenBerg, Katu, and Lunetta (1994) reported, on the basis of clinical studies with individual students, that hands-on activities with introductory electricity materials facilitated students’ understanding of the relationships among circuit elements and variables. The carefully selected practical activities created conceptual conflict in students’ minds—a first step toward changing their naïve ideas about electricity. However, the students remained unable to develop a fully scientific mental model of a circuit system. The authors suggested that greater engagement with conceptual organizers, such as analogies and concept maps, could have helped students develop more scientific understandings of basic electricity. Several researchers, including Dupin and Joshua (1987), have reported similar findings. Studies indicate that students often hold beliefs so intensely that even their observations in the laboratory are strongly influenced by those beliefs (Champagne, Gunstone, and Klopfer, 1985, cited in Lunetta, 1998; Linn, 1997). Students tend to adjust their observations to fit their current beliefs rather than change their beliefs in the face of conflicting observations.

Evidence from Research on Integrated Instructional Units

Current integrated instructional units build on earlier studies that found integration of laboratory experiences with other instructional activities enhanced mastery of subject matter (Dupin and Joshua, 1987; White and Gunstone, 1992, cited in Lunetta, 1998). A recent review of these and other studies concluded (Hofstein and Lunetta, 2004, p. 33):

When laboratory experiences are integrated with other metacognitive learning experiences such as “predict-observe-explain” demonstrations (White and Gunstone, 1992) and when they incorporate the manipulation of ideas instead of simply materials and procedures, they can promote the learning of science.

Integrated instructional units often focus on complex science topics that are difficult for students to understand. Their design is based on research on students’ intuitive conceptions of a science topic and how those conceptions differ from scientific conceptions. Students’ ideas often do not match the scientific understanding of a phenomenon and, as noted previously, these intuitive notions are resistant to change. For this reason, the sequenced units incorporate instructional activities specifically designed to confront intuitive conceptions and provide an environment in which students can construct normative conceptions. The role of laboratory experiences is to emphasize the discrepancies between students’ intuitive ideas about the topic and scientific ideas, as well as to support their construction of normative understanding. In order to help students link formal, scientific concepts to real phenomena, these units include a sequence of experiences that will push them to question their intuitive and often inaccurate ideas.

Emerging studies indicate that exposure to these integrated instructional units leads to demonstrable gains in student mastery of a number of science topics in comparison to more traditional approaches. In physics, these subjects include Newtonian mechanics (Wells, Hestenes, and Swackhamer, 1995; White, 1993); thermodynamics (Songer and Linn, 1991); electricity (Shaffer and McDermott, 1992); optics (Bell and Linn, 2000; Reiner, Pea, and Shulman, 1995); and matter (Lehrer, Schauble, Strom, and Pligge, 2001; Smith, Maclin, Grosslight, and Davis, 1997; Snir, Smith, and Raz, 2003). Integrated instructional units in biology have enhanced student mastery of genetics (Hickey, Kindfield, Horwitz, and Christie, 2003) and natural selection (Reiser et al., 2001). A chemistry unit has led to gains in student understanding of stoichiometry (Lynch, 2004). Many, but not all, of these instructional units combine computer-based simulations of the phenomena under study with direct interactions with these phenomena. The role of technology in providing laboratory experiences is described later in this chapter.

Developing Scientific Reasoning

While philosophers of science now agree that there is no single scientific method, they do agree that a number of reasoning skills are critical to research across the natural sciences. These reasoning skills include identifying questions and concepts that guide scientific investigations, designing and conducting scientific investigations, developing and revising scientific explanations and models, recognizing and analyzing alternative explanations and models, and making and defending a scientific argument. It is not necessarily the case that these skills are sequenced in a particular way or used in every scientific investigation. Instead, they are representative of the abilities that both scientists and students need to investigate the material world and make meaning out of those investigations. Research on children’s and adults’ scientific reasoning (see the review by Zimmerman, 2000) suggests that effective experimentation is difficult for most people and not learned without instructional support.

Early research on the development of investigative skills suggested that students could learn aspects of scientific reasoning through typical laboratory instruction in college-level physics (Reif and St. John, 1979, cited in Hofstein and Lunetta, 1982) and in high school and college biology (Raghubir, 1979; Wheatley, 1975, cited in Hofstein and Lunetta, 1982).

More recent research, however, suggests that high school and college science teachers often emphasize laboratory procedures, leaving little time for discussion of how to plan an investigation or interpret its results (Tobin, 1987; see Chapter 4). Taken as a whole, the evidence indicates that typical laboratory work promotes only a few aspects of the full process of scientific reasoning—making observations and organizing, communicating, and interpreting data gathered from these observations. Typical laboratory experiences appear to have little effect on more complex aspects of scientific reasoning, such as the capacity to formulate research questions, design experiments, draw conclusions from observational data, and make inferences (Klopfer, 1990, cited in White, 1996).

Research developing from studies of integrated instructional units indicates that laboratory experiences can play an important role in developing all aspects of scientific reasoning, including the more complex aspects, if the laboratory experiences are integrated with small group discussion, lectures, and other forms of science instruction. With carefully designed instruction that incorporates opportunities to conduct investigations and reflect on the results, students as young as 4th and 5th grade can develop sophisticated scientific thinking (Lehrer and Schauble, 2004; Metz, 2004). Kuhn and colleagues have shown that 5th graders can learn to experiment effectively, albeit in carefully controlled domains and with extended supervised practice (Kuhn, Schauble, and Garcia-Mila, 1992). Explicit instruction on the purposes of experiments appears necessary to help 6th grade students design them well (Schauble, Glaser, Duschl, Schulze, and John, 1995). These studies suggest that laboratory experiences must be carefully designed to support the development of scientific reasoning.

Given the difficulty most students have with reasoning scientifically, a number of instructional units have focused on this goal. Evidence from several studies indicates that, with the appropriate scaffolding provided in these units, students can successfully reason scientifically. They can learn to design experiments (Schauble et al., 1995; White and Frederiksen, 1998), make predictions (Friedler, Nachmias, and Linn, 1990), and interpret and explain data (Bell and Linn, 2000; Coleman, 1998; Hatano and Inagaki, 1991; Meyer and Woodruff, 1997; Millar, 1998; Rosebery, Warren, and Conant, 1992; Sandoval and Millwood, 2005). Engagement with these instructional units has been shown to improve students’ abilities to recognize discrepancies between predicted and observed outcomes (Friedler et al., 1990) and to design good experiments (Dunbar, 1993; Kuhn et al., 1992; Schauble et al., 1995; Schauble, Klopfer, and Raghavan, 1991).

Integrated instructional units seem especially beneficial in developing scientific reasoning skills among lower ability students (White and Frederiksen, 1998).

Recently, research has focused on an important element of scientific reasoning—the ability to construct scientific arguments. Developing, revising, and communicating scientific arguments is now recognized as a core scientific practice (Driver, Newton, and Osborne, 2000; Duschl and Osborne, 2002). Laboratory experiences play a key role in instructional units designed to enhance students’ argumentation abilities, because they provide both the impetus and the data for constructing scientific arguments. Such efforts have taken many forms. For example, researchers working with young Haitian-speaking students in Boston used the students’ own interests to develop scientific investigations. Students designed an investigation to determine which school drinking fountain had the best-tasting water. The students designed data collection protocols, collected and analyzed their data, and then argued about their findings (Rosebery et al., 1992). The Knowledge Integration Environment project asked middle school students to examine a common set of evidence to debate competing hypotheses about light propagation. Overall, most students learned the scientific concept (that light goes on forever), although those who made better arguments learned more than their peers (Bell and Linn, 2000). These and other examples (e.g., Sandoval and Millwood, 2005) show that students in middle and high school can learn to argue scientifically, by learning to coordinate theoretical claims with evidence taken from their laboratory investigations.

Developing Practical Skills

Science educators and researchers have long claimed that learning practical laboratory skills is one of the important goals for laboratory experiences and that such skills may be attainable only through such experiences (White, 1996; Woolnough, 1983). However, development of practical skills has been measured in research less frequently than mastery of subject matter or scientific reasoning. Such practical outcomes deserve more attention, especially for laboratory experiences that are a critical part of vocational or technical training in some high school programs. When a primary goal of a program or course is to train students for jobs in laboratory settings, they must have the opportunity to learn to use and read sophisticated instruments and carry out standardized experimental procedures. The critical questions about acquiring these skills through laboratory experiences may not be whether laboratory experiences help students learn them, but how the experiences can be constructed so as to be most effective in teaching such skills.

Some research indicates that typical laboratory experiences specifically focused on learning practical skills can help students progress toward other goals. For example, one study found that students were often deficient in the simple skills needed to successfully carry out typical laboratory activities, such as using instruments to make measurements and collect accurate data (Bryce and Robertson, 1985). Other studies indicate that helping students to develop relevant instrumentation skills in controlled “prelab” activities can reduce the probability that important measurements in a laboratory experience will be compromised due to students’ lack of expertise with the apparatus (Beasley, 1985; Singer, 1977). This research suggests that development of practical skills may increase the probability that students will achieve the intended results in laboratory experiences. Achieving the intended results of a laboratory activity is a necessary, though not sufficient, step toward effectiveness in helping students attain laboratory learning goals.

Some research on typical laboratory experiences indicates that girls handle laboratory equipment less frequently than boys, and that this tendency is associated with less interest in science and less self-confidence in science ability among girls (Jovanovic and King, 1998). It is possible that helping girls to develop instrumentation skills may help them to participate more actively and enhance their interest in learning science.

Studies of integrated instructional units have not examined the extent to which engagement with these units may enhance practical skills in using laboratory materials and equipment. This reflects an instructional emphasis on helping students to learn scientific ideas with real understanding and on developing their skills at investigating scientific phenomena, rather than on particular laboratory techniques, such as taking accurate measurements or manipulating equipment. There is no evidence to suggest that students do not learn practical skills through integrated instructional units, but to date researchers have not assessed such practical skills.

Understanding the Nature of Science

Throughout the past 50 years, studies of students’ epistemological beliefs about science consistently show that most of them have naïve views about the nature of scientific knowledge and how such knowledge is constructed and evaluated by scientists over time (Driver, Leach, Millar, and Scott, 1996; Lederman, 1992). The general public’s understanding of science is similarly inaccurate. Firsthand experience with science is often seen as a key way to advance students’ understanding of and appreciation for the conventions of science. Laboratory experiences are considered the primary mechanism for providing firsthand experience and are therefore assumed to improve students’ understanding of the nature of science.

Research on student understanding of the nature of science provides little evidence of improvement with science instruction (Lederman, 1992; Driver et al., 1996). Although much of this research historically did not examine details of students’ laboratory experiences, it often included very large samples of science students and thus arguably captured typical laboratory experiences (research from the late 1950s through the 1980s is reviewed by Lederman, 1992). There appear to be developmental trends in students’ understanding of the relations between experimentation and theory-building. Younger students tend to believe that experiments yield direct answers to questions; during middle and high school, students shift to a vague notion of experiments being tests of ideas. Only a small number of students appear to leave high school with a notion of science as model-building and experimentation, in an ongoing process of testing and revision (Driver et al., 1996; Carey and Smith, 1993; Smith et al., 2000). The conclusion that most experts draw from these results is that the isolated nature and rote procedural focus of typical laboratory experiences inhibits students from developing robust conceptions of the nature of science. Consequently, some have argued that the nature of science must be an explicit target of instruction (Khishfe and Abd-El-Khalick, 2002; Lederman, Abd-El-Khalick, Bell, and Schwartz, 2002).

As discussed above, there is reasonable evidence that integrated instructional units help students to learn processes of scientific inquiry. However, such instructional units do not appear, on their own, to help students develop robust conceptions of the nature of science. One large-scale study of a widely available inquiry-oriented curriculum, in which integrated instructional units were an explicit feature, showed no significant change in students’ ideas about the nature of science after a year’s instruction (Meichtry, 1993). Students engaged in the BGuILE science instructional unit showed no gains in understanding the nature of science from their participation, and they seemed not even to see their experience in the unit as necessarily related to professional science (Sandoval and Morrison, 2003). These findings and others have led to the suggestion that the nature of science must be an explicit target of instruction (Lederman et al., 2002).

There is evidence from the ThinkerTools science instructional unit that by engaging in reflective self-assessment on their own scientific investigations, students gained a more sophisticated understanding of the nature of science than matched control classes who used the curriculum without the ongoing monitoring and evaluation of their own and others’ research (White and Frederiksen, 1998). Students who engaged in the reflective assessment process “acquire knowledge of the forms that scientific laws, models, and theories can take, and of how the development of scientific theories is related to empirical evidence” (White and Frederiksen, 1998, p. 92). Students who participated in the laboratory experiences and other learning activities in this unit using the reflective assessment process were less likely to “view scientific theories as immutable and never subject to revision” (White and Frederiksen, 1998, p. 72). Instead, they saw science as meaningful and explicable. The ThinkerTools findings support the idea that attention to nature of science issues should be an explicit part of integrated instructional units, although even with such attention it remains difficult to change students’ ideas (Khishfe and Abd-El-Khalick, 2002).

A survey of several integrated instructional units found that they seem to bridge the “language gap” between science in school and scientific practice (Duschl, 2004). The units give students “extended opportunities to explore the relationship between evidence and explanation,” helping them not only to develop new knowledge (mastery of subject matter), but also to evaluate claims of scientific knowledge, reflecting a deeper understanding of the nature of science (Duschl, 2004). The available research leaves open the question of whether or not these experiences help students to develop an explicit, reflective conceptual framework about the nature of science.

Cultivating Interest in Science and Interest in Learning Science

Studies of the effect of typical laboratory experiences on student interest are much rarer than those focusing on student achievement or other cognitive outcomes (Hofstein and Lunetta, 2004; White, 1996). The number of studies that address interest, attitudes, and other affective outcomes has decreased over the past decade, as researchers have focused almost exclusively on cognitive outcomes (Hofstein and Lunetta, 2004). Among the few studies available, the evidence is mixed. Some studies indicate that laboratory experiences lead to more positive attitudes (Renner, Abraham, and Birnie, 1985; Denny and Chennell, 1986). Other studies show no relation between laboratory experiences and affect (Ato and Wilkinson, 1986; Freedman, 2002), and still others report laboratory experiences turned students away from science (Holden, 1990; Shepardson and Pizzini, 1993).

There are, however, two apparent weaknesses in studies of interest and attitude (Hofstein and Lunetta, 1982). One is that researchers often do not carefully define interest and how it should be measured. Consequently, it is unclear if students simply reported liking laboratory activities more than other classroom activities, or if laboratory activities engendered more interest in science as a field, or in taking science courses, or something else. Similarly, studies may report increased positive attitudes toward science from students’ participation in laboratory experiences, without clear description of what attitudes were measured, how large the changes were, or whether changes persisted over time.

Student Perceptions of Typical Laboratory Experiences

Students’ perceptions of laboratory experiences may affect their interest and engagement in science, and some studies have examined those perceptions. Researchers have found that students often do not have clear ideas about the general or specific purposes of their work in typical science laboratory activities (Chang and Lederman, 1994) and that their understanding of the goals of lessons frequently does not match their teachers’ goals for the same lessons (Hodson, 1993; Osborne and Freyberg, 1985; Wilkenson and Ward, 1997). When students do not understand the goals of experiments or laboratory investigations, negative consequences for learning occur (Schauble et al., 1995). In fact, students often do not make important connections between the purpose of a typical laboratory investigation and the design of the experiments. They do not connect the experiment with what they have done earlier, and they do not note the discrepancies among their own concepts, the concepts of their peers, and those of the science community (Champagne et al., 1985; Eylon and Linn, 1988; Tasker, 1981). As White (1998) notes, “to many students, a ‘lab’ means manipulating equipment but not manipulating ideas.” Thus, in considering how laboratory experiences may contribute to students’ interest in science and to other learning goals, their perceptions of those experiences must be considered.

A series of studies using the Science Laboratory Environment Inventory (SLEI) has demonstrated links between students’ perceptions of laboratory experiences and student outcomes (Fraser, McRobbie, and Giddings, 1993; Fraser, Giddings, and McRobbie, 1995; Henderson, Fisher, and Fraser, 2000; Wong and Fraser, 1995). The SLEI, which has been validated cross-nationally, measures five dimensions of the laboratory environment: student cohesiveness, open-endedness, integration, rule clarity, and material environment (see Table 3-1 for a description of each scale). Using the SLEI, researchers have studied students’ perceptions of chemistry and biology laboratories in several countries, including the United States. All five dimensions appear to be positively related to student attitudes, although the relation of open-endedness to attitudes seems to vary with student population. In some populations, open-endedness is negatively related to attitudes (Fraser et al., 1995) and to some cognitive outcomes (Henderson et al., 2000).

TABLE 3-1 Descriptive Information for the Science Laboratory Environment Inventory

Research using the SLEI indicates that positive student attitudes are particularly strongly associated with cohesiveness (the extent to which students know, help, and are supportive of one another) and integration (the extent to which laboratory activities are integrated with nonlaboratory and theory classes) (Fraser et al., 1995; Wong and Fraser, 1995). Integration also shows a positive relation to students’ cognitive outcomes (Henderson et al., 2000; McRobbie and Fraser, 1993).
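Findings like these rest on correlational analyses of survey scores. As a minimal sketch of the underlying computation, the following code calculates a Pearson correlation between an SLEI scale and an attitude measure; the per-student scores are fabricated for illustration only and do not come from the studies cited above.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-student scores: SLEI "integration" scale vs. an attitude scale
integration = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.0, 3.6]
attitude = [3.0, 4.3, 2.6, 4.4, 3.5, 2.9, 4.1, 3.3]
print(f"r = {pearson_r(integration, attitude):.2f}")
```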

Students’ interest and attitudes have been measured less often than other goals of laboratory experiences in studies of integrated instructional units. When evidence is available, it suggests that students who participate in these units show greater interest in and more positive attitudes toward science. For example, in a study of ThinkerTools, completion of projects was used as a measure of student interest. The rate of submitting completed projects was higher for students in the ThinkerTools curriculum than for those in traditional instruction, across all grades and ability levels (White and Frederiksen, 1998). This study also found that students’ ongoing evaluation of their own and other students’ thinking increased motivation and self-confidence in their individual ability: students who participated in this ongoing evaluation not only turned in their final project reports more frequently, but they were also less likely to turn in reports that were identical to their research partner’s.

Participation in the ThinkerTools instructional unit appears to change students’ attitudes toward learning science. After completing the integrated instructional unit, fewer students indicated that “being good at science” was a result of inherited traits, and fewer agreed with the statement, “In general, boys tend to be naturally better at science than girls.” In addition, more students indicated that they preferred taking an active role in learning science, rather than simply being told the correct answer by the teacher (White and Frederiksen, 1998).

Researchers measured students’ engagement and motivation to master the complex topic of conservation of matter as part of the study of CTA. Students who participated in the CTA curriculum had higher levels of basic engagement (active participation in activities) and were more likely to focus on learning from the activities than students in the control group (Lynch et al., in press). This positive effect on engagement was especially strong among low-income students. The researchers speculate, “perhaps as a result of these changes in engagement and motivation, they learned more than if they had received the standard curriculum” (Lynch et al., in press).

Students who participated in CLP during middle school, when surveyed years later as high school seniors, were more likely to report that science is relevant to their lives than students who did not participate (Linn and Hsi, 2000). Further research is needed to illuminate which aspects of this instructional unit contribute to increased interest.

Developing Teamwork Abilities

Teamwork and collaboration appear in research on typical laboratory experiences in two ways. First, working in groups is seen as a way to enhance student learning, usually with reference to literature on cooperative learning or to the importance of providing opportunities for students to discuss their ideas. Second and more recently, attention has focused on the ability to work in groups as an outcome itself, with laboratory experiences seen as an ideal opportunity to develop these skills. The focus on teamwork as an outcome is usually linked to arguments that this is an essential skill for workers in the 21st century (Partnership for 21st Century Skills, 2003).

There is considerable evidence that collaborative work can help students learn, especially if students with high ability work with students with low ability (Webb and Palincsar, 1996). Collaboration seems especially helpful to lower ability students, but only when they work with more knowledgeable peers (Webb, Nemer, Chizhik, and Sugrue, 1998). Building on this research, integrated instructional units engage students in small-group collaboration as a way to encourage them to connect what they know (either from their own experiences or from prior instruction) to their laboratory experiences. Often, individual students disagree about prospective answers to the questions under investigation or the best way to approach them, and collaboration encourages students to articulate and explain their reasoning. A number of studies suggest that such collaborative investigation is effective in helping students to learn targeted scientific concepts (Coleman, 1998; Roschelle, 1992).

Extant research lacks specific assessment of the kinds of collaborative skills that might be learned by individual students through laboratory work. The assumption appears to be that if students collaborate and such collaborations are effective in supporting their conceptual learning, then they are probably learning collaborative skills, too.

Overall Effectiveness of Laboratory Experiences

The two bodies of research—the earlier research on typical laboratory experiences and the emerging research on integrated instructional units—yield different findings about the effectiveness of laboratory experiences in advancing the goals identified by the committee. In general, the nascent body of research on integrated instructional units offers the promise that laboratory experiences embedded in a larger stream of science instruction can be more effective in advancing these goals than are typical laboratory experiences (see Table 3-2).

Research on the effectiveness of typical laboratory experiences is methodologically weak and fragmented. The limited evidence available suggests that typical laboratory experiences, by themselves, are neither better nor worse than other methods of science instruction for helping students master science subject matter. However, more recent research indicates that integrated instructional units enhance students’ mastery of subject matter. Studies have demonstrated increases in student mastery of complex topics in physics, chemistry, and biology.

Typical laboratory experiences appear, based on the limited research available, to support some aspects of scientific reasoning; however, typical laboratory experiences alone are not sufficient for promoting more sophisticated scientific reasoning abilities, such as asking appropriate questions, designing experiments, and drawing inferences. Research on integrated instructional units provides evidence that the laboratory experiences and other forms of instruction they include promote development of several aspects of scientific reasoning, including the ability to ask appropriate questions, design experiments, and draw inferences.

TABLE 3-2 Attainment of Educational Goals in Typical Laboratory Experiences and Integrated Instructional Units

The evidence indicates that typical laboratory experiences do little to increase students’ understanding of the nature of science. In contrast, some studies find that participating in integrated instructional units that are designed specifically with this goal in mind enhances understanding of the nature of science.

The available research suggests that typical laboratory experiences can play a role in enhancing students’ interest in science and in learning science. There is evidence that engagement with the laboratory experiences and other learning activities included in integrated instructional units enhances students’ interest in science and motivation to learn science.

In sum, the evolving research on integrated instructional units provides evidence of increases in students’ understanding of subject matter, development of scientific reasoning, and interest in science, compared with students who received more traditional forms of science instruction. Studies conducted to date also suggest that the units are effective in helping diverse groups of students attain these three learning goals. In contrast, the earlier research on typical laboratory experiences indicates that such typical laboratory experiences are neither better nor worse than other forms of science instruction in supporting student mastery of subject matter. Typical laboratory experiences appear to aid in development of only some aspects of scientific reasoning, and they appear to play a role in enhancing students’ interest in science and in learning science.

Due to a lack of available studies, the committee was unable to draw conclusions about the extent to which either typical laboratory experiences or laboratory experiences incorporated into integrated instructional units might advance the other goals identified at the beginning of this chapter—enhancing understanding of the complexity and ambiguity of empirical work, acquiring practical skills, and developing teamwork skills.

PRINCIPLES FOR DESIGN OF EFFECTIVE LABORATORY EXPERIENCES

The three bodies of research we have discussed—research on how people learn, research on typical laboratory experiences, and developing research on how students learn in integrated instructional units—yield information that promises to inform the design of more effective laboratory experiences.

The committee considers the emerging evidence sufficient to suggest four general principles that can help laboratory experiences achieve the goals outlined above. It must be stressed, however, that research to date has not described in much detail how these principles can be implemented or how each principle might relate to each of the educational goals of laboratory experiences.

Clearly Communicated Purposes

Effective laboratory experiences have clear learning goals that guide the design of the experience. Ideally these goals are clearly communicated to students. Without a clear understanding of the purposes of a laboratory activity, students seem not to get much from it. Conversely, when teachers clearly communicate the purposes of a laboratory activity, students seem capable of understanding them and carrying them out. There seems to be no compelling evidence that particular purposes are more understandable to students than others.

Sequenced into the Flow of Instruction

Effective laboratory experiences are thoughtfully sequenced into the flow of classroom science instruction. That is, they are explicitly linked to what has come before and what will come after. A common theme in reviews of laboratory practice in the United States is that laboratory experiences are presented to students as isolated events, unconnected with other aspects of classroom work. In contrast, integrated instructional units embed laboratory experiences with other activities that build on the laboratory experiences and push students to reflect on and better understand these experiences. The way a particular laboratory experience is integrated into a flow of activities should be guided by the goals of the overall sequence of instruction and of the particular laboratory experience.

Integrated Learning of Science Concepts and Processes

Research in the learning sciences (National Research Council, 1999, 2001) strongly implies that conceptual understanding, scientific reasoning, and practical skills are three capabilities that are not mutually exclusive. An educational program that partitions the teaching and learning of content from the teaching and learning of process is likely to be ineffective in helping students develop scientific reasoning skills and an understanding of science as a way of knowing. The research on integrated instructional units, all of which intertwine exploration of content with process through laboratory experiences, suggests that integration of content and process promotes attainment of several goals identified by the committee.

Ongoing Discussion and Reflection

Laboratory experiences are more likely to be effective when they focus students on discussing the activities they have done and reflecting on the meaning they can make from them, rather than on the laboratory activities themselves. Crucially, the focus of laboratory experiences and the surrounding instructional activities should not simply be on confirming presented ideas, but on developing explanations to make sense of patterns of data. Teaching strategies that encourage students to articulate their hypotheses about phenomena prior to experimentation and then to reflect on their ideas after experimentation are demonstrably more successful at supporting student attainment of the goals of mastering subject matter, developing scientific reasoning, and increasing interest in science and science learning. At the same time, opportunities for ongoing discussion and reflection could potentially support students in developing teamwork skills.

COMPUTER TECHNOLOGIES AND LABORATORY EXPERIENCES

From scales to microscopes, technology in many forms plays an integral role in most high school laboratory experiences. Over the past two decades, personal computers have enabled the development of software specifically designed to help students learn science, and the Internet is an increasingly used tool for science learning and for science itself. This section examines the role that computer technologies now play, and may someday play, in science learning in relation to laboratory experiences. Certain uses of computer technology can be seen as laboratory experiences themselves, according to the committee’s definition, to the extent that they allow students to interact with data drawn directly from the world. Other uses, less clearly laboratory experiences in themselves, provide certain features that aid science learning.

Computer Technologies Designed to Support Learning

Researchers and science educators have developed a number of software programs to support science learning in various ways. In this section, we summarize what we see as the main ways in which computer software can support science learning through providing or augmenting laboratory experiences.

Scaffolded Representations of Natural Phenomena

Perhaps the most common form of science education software consists of programs that enable students to interact with carefully crafted models of natural phenomena that are difficult to observe in the real world and have historically proven difficult for students to understand. Such programs are able to show conceptual interrelationships and connections between theoretical constructs and natural phenomena through the use of multiple, linked representations. For example, velocity can be linked to acceleration and position in ways that make the interrelationships understandable to students (Roschelle, Kaput, and Stroup, 2000). Chromosome genetics can be linked to changes in pedigrees and populations (Horwitz, 1996). Molecular chemical representations can be linked to chemical equations (Kozma, 2003).

In the ThinkerTools integrated instructional unit, abstracted representations of force and motion are provided to help students “see” such ideas as force, acceleration, and velocity in two dimensions (White, 1993; White and Frederiksen, 1998). Objects in the ThinkerTools microworld are represented as simple, uniformly sized “dots” to avoid students becoming confused about the idea of center of mass. Students use the microworld to solve various problems of motion in one or two dimensions, using the computer keyboard to apply forces to dots to move them along specified paths. Part of the key to the software’s guidance is that it provides representations of forces and accelerations that students can see change in response to their actions. A “dot trace,” for example, shows students how applying more force affects an object’s acceleration in a predictable way. A “vector cross” represents the individual components of forces applied in two dimensions in a way that helps students to link those forces to an object’s motion.
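The description above implies a simple underlying physics: applying a force changes a dot’s velocity, and position is updated at each time step. The following sketch is our own minimal illustration of such a microworld, not the actual ThinkerTools code; all names and parameters are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Dot:
    """A uniformly sized dot in a frictionless 2-D microworld."""
    x: float = 0.0
    y: float = 0.0
    vx: float = 0.0
    vy: float = 0.0
    mass: float = 1.0
    trace: list = field(default_factory=list)  # positions sampled each step

    def apply_impulse(self, fx, fy, dt=0.1):
        # F = m*a: a brief force changes velocity in proportion to force
        self.vx += (fx / self.mass) * dt
        self.vy += (fy / self.mass) * dt

    def step(self, dt=0.1):
        # With no force applied, velocity stays constant (Newton's first law)
        self.x += self.vx * dt
        self.y += self.vy * dt
        self.trace.append((round(self.x, 2), round(self.y, 2)))

dot = Dot()
for t in range(10):
    if t < 3:
        dot.apply_impulse(fx=1.0, fy=0.0)  # press "right" three times
    dot.step()
# Like a dot trace: spacing widens while force is applied, then stays even
print(dot.trace)
```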

ThinkerTools is but one example of this type of interactive, representational software. Other programs have been developed to help students reason about motion (Roschelle, 1992), electricity (Gutwill, Fredericksen, and White, 1999), heat and temperature (Linn, Bell, and Hsi, 1998), genetics (Horwitz and Christie, 2000), and chemical reactions (Kozma, 2003). These programs differ substantially from one another in how they represent their target phenomena, as there are substantial differences in the topics themselves and in the problems that students are known to have in understanding them. They share, however, a common approach to solving a similar set of problems—how to represent natural phenomena that are otherwise invisible in ways that help students make their own thinking explicit and guide them to normative scientific understanding.

When used as a supplement to hands-on laboratory experiences within integrated instructional units, these representations can support students’ conceptual change (e.g., Linn et al., 1998; White and Frederiksen, 1998). For example, students working through the ThinkerTools curriculum always experiment with objects in the real world before they work with the computer tools. The goals of the laboratory experiences are to provide some experience with the phenomena under study and some initial ideas that can then be explored on the computer.

Structured Simulations of Inaccessible Phenomena

Various types of simulations of phenomena represent another form of technology for science learning. These simulations allow students to explore and observe phenomena that are too expensive, infeasible, or even dangerous to interact with directly. Strictly speaking, a computer simulation is a program that simulates a particular phenomenon by running a computational model whose behavior can sometimes be changed by modifying input parameters to the model. For example, the GenScope program provides a set of linked representations of genetics and genetics phenomena that would otherwise be unavailable for study to most students (Horwitz and Christie, 2000). The software represents alleles, chromosomes, and family pedigrees, and links representations across levels in ways that enable students to trace inherited traits to specific genetic differences. The software uses an underlying Mendelian model of genetic inheritance to govern its behavior. As with the representations described above, embedding the use of the software in a carefully thought out curriculum sequence is crucial to supporting student learning (Hickey et al., 2000).
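GenScope’s internal design is not spelled out here, so the following is only a sketch of the sort of Mendelian model that could govern such a simulation (not GenScope’s actual code): each parent contributes one allele per gene at random, and a dominant allele masks a recessive one in the phenotype.

```python
# Illustrative single-gene Mendelian inheritance model.
import random
from collections import Counter

def cross(parent1: str, parent2: str) -> str:
    """One offspring genotype from two single-gene genotypes, e.g. 'Aa'."""
    return "".join(sorted(random.choice(p) for p in (parent1, parent2)))

def phenotype(genotype: str) -> str:
    # An uppercase (dominant) allele masks the recessive one.
    return "dominant" if any(a.isupper() for a in genotype) else "recessive"

random.seed(0)
offspring = [cross("Aa", "Aa") for _ in range(10_000)]
print(Counter(phenotype(g) for g in offspring))
# Roughly 7,500 dominant to 2,500 recessive: the classic 3:1 ratio
```

Linking a model like this to pedigree and population views, as GenScope does, is what lets students trace an inherited trait back to a specific allele.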

Another example in biology is the BGuILE project (Reiser et al., 2001). The investigators created a series of structured simulations allowing students to investigate problems of evolution by natural selection. In the Galapagos finch environment, for example, students can examine a carefully selected set of data from the island of Daphne Major to explain a historical case of natural selection. Strictly speaking, the BGuILE software does not consist of simulations because it does not “run” a model; rather, from a student’s perspective, it simulates either Daphne Major or laboratory experiments on tuberculosis bacteria. Studies show that students can learn from the BGuILE environments when these environments are embedded in a well-organized curriculum (Sandoval and Reiser, 2004). They also show that successful implementation of such technology-supported curricula relies heavily on teachers (Tabak, 2004).

Structured Interactions with Complex Phenomena and Ideas

The examples discussed here share a crucial feature: the representations built into the software and the interface tools provided for learners are intended to help them learn in very specific ways. A great number of such tools have been developed over the last quarter of a century, and many have been shown to produce impressive learning gains for students at the secondary level. Besides the ones mentioned, other tools are designed to structure specific scientific reasoning skills, such as prediction (Friedler et al., 1990) and the coordination of claims with evidence (Bell and Linn, 2000; Sandoval, 2003). Most of these efforts integrate students’ work on the computer with more direct laboratory experiences. Rather than treating these representations and simulations as a way to replace laboratory experiences, the most successful instructional sequences integrate them with a series of empirical laboratory investigations. These sequences of science instruction focus students’ attention on developing a shared interpretation of both the representations and the real laboratory experiences in small groups (Bell, 2005).

Computer Technologies Designed to Support Science

Advances in computer technologies have had a tremendous impact on how science is done and on what scientists can study. These changes are vast, and summarizing them is well beyond the scope of the committee’s charge. We found, however, that some innovations in scientific practice, especially uses of the Internet, are beginning to be applied to secondary science education. With respect to future laboratory experiences, perhaps the most significant advance in many scientific fields is the aggregation of large, varied data sets into Internet-accessible databases. These databases are most commonly built for specific scientific communities, but some researchers are creating and studying new, learner-centered interfaces to allow access by teachers and schools. These research projects build on instructional design principles illuminated by the integrated instructional units discussed above.

One example is the Center for Embedded Networked Sensing (CENS), a National Science Foundation Science and Technology Center investigating the development and deployment of large-scale sensor networks embedded in physical environments. CENS is currently working on ecosystem monitoring, seismology, contaminant flow transport, and marine microbiology. As sensor networks come on line, making data available, science educators at the center are developing middle school curricula that include web-based tools to enable students to explore the same data sets that the professional scientists are exploring (Pea, Mills, and Takeuchi, 2004).

The interfaces professional scientists use to access such databases tend to be too inflexible and technical for students to use successfully (Bell, 2005). Bounding the space of possible data under consideration, supporting appropriate considerations of theory, and promoting understanding of the norms used in the visualization can help support students in developing a shared understanding of the data. With such support, students can develop both conceptual understanding and understanding of the data analysis process. Focusing students on causal explanation and argumentation based on the data analysis process can help them move from a descriptive, phenomenological view of science to one that considers theoretical issues of cause (Bell, 2005).

Further research and evaluation of the educational benefit of student interaction with large scientific databases are clearly needed. Still, such efforts will certainly expand over time, and, as they change notions of what it means to conduct scientific experiments, they are also likely to change what it means to conduct a school laboratory.

The committee identified a number of science learning goals that have been attributed to laboratory experiences. Our review of the evidence on attainment of these goals revealed a recent shift in research, reflecting some movement in laboratory instruction. Historically, laboratory experiences have been disconnected from the flow of classroom science lessons. We refer to these separate laboratory experiences as typical laboratory experiences. Reflecting this separation, researchers often engaged students in one or two experiments or other science activities and then conducted assessments to determine whether their understanding of the science concept underlying the activity had increased. Some studies compared the outcomes of these separate laboratory experiences with the outcomes of other forms of science instruction, such as lectures or discussions.

Over the past 10 years, researchers studying laboratory education have shifted their focus. Drawing on principles of learning derived from the cognitive sciences, they have asked how to sequence science instruction, including laboratory experiences, in order to support students’ science learning. We refer to these instructional sequences as “integrated instructional units.” Integrated instructional units connect laboratory experiences with other types of science learning activities, including lectures, reading, and discussion. Students are engaged in framing research questions, making observations, designing and executing experiments, gathering and analyzing data, and constructing scientific arguments and explanations.

The two bodies of research on typical laboratory experiences and integrated instructional units, including laboratory experiences, yield different findings about the effectiveness of laboratory experiences in advancing the science learning goals identified by the committee. The earlier research on typical laboratory experiences is weak and fragmented, making it difficult to draw precise conclusions. The weight of the evidence from research focused on the goals of developing scientific reasoning and enhancing student interest in science showed slight improvements in both after students participated in typical laboratory experiences. Research focused on the goal of student mastery of subject matter indicates that typical laboratory experiences are no more or less effective than other forms of science instruction (such as reading, lectures, or discussion).

Studies conducted to date on integrated instructional units indicate that the laboratory experiences, together with the other forms of instruction included in these units, show greater effectiveness for these same three goals (compared with students who received more traditional forms of science instruction): improving students’ mastery of subject matter, increasing development of scientific reasoning, and enhancing interest in science. Integrated instructional units also appear to be effective in helping diverse groups of students progress toward these three learning goals. A major limitation of the research on integrated instructional units, however, is that most of the units have been used in small numbers of science classrooms. Only a few studies have addressed the challenge of implementing, and studying the effectiveness of, integrated instructional units on a wide scale.

Due to a lack of available studies, the committee was unable to draw conclusions about the extent to which either typical laboratory experiences or integrated instructional units might advance the other goals identified at the beginning of this chapter: enhancing understanding of the complexity and ambiguity of empirical work, acquiring practical skills, and developing teamwork skills. Further research is needed to clarify how laboratory experiences might be designed to promote attainment of these goals.

The committee considers the evidence sufficient to identify four general principles that can help laboratory experiences achieve the learning goals we have outlined. Laboratory experiences are more likely to achieve their intended learning goals if (1) they are designed with clear learning outcomes in mind, (2) they are thoughtfully sequenced into the flow of classroom science instruction, (3) they are designed to integrate learning of science content with learning about the processes of science, and (4) they incorporate ongoing student reflection and discussion.

Computer software and the Internet have enabled development of several tools that can support students’ science learning, including representations of complex phenomena, simulations, and student interaction with large scientific databases. Representations and simulations are most successful in supporting student learning when they are integrated in an instructional sequence that also includes laboratory experiences. Researchers are currently developing tools to support student interaction with—and learning from—large scientific databases.

Anderson, R.O. (1976). The experience of science: A new perspective for laboratory teaching . New York: Columbia University, Teachers College Press.

Ato, T., and Wilkinson, W. (1986). Relationships between the availability and use of science equipment and attitudes to both science and sources of scientific information in Benue State, Nigeria. Research in Science and Technological Education , 4 , 19-28.

Beasley, W.F. (1985). Improving student laboratory performance: How much practice makes perfect? Science Education , 69 , 567-576.

Bell, P. (2005). The school science laboratory: Considerations of learning, technology, and scientific practice . Paper prepared for the Committee on High School Science Laboratories: Role and Vision. Available at: http://www7.nationalacademies.org/bose/July_12-13_2004_High_School_Labs_Meeting_Agenda.html [accessed June 2005].

Bell, P., and Linn, M.C. (2000). Scientific arguments as learning artifacts: Designing for learning from the web with KIE. International Journal of Science Education , 22 (8), 797-817.

Ben-Zvi, R., Hofstein, A., Kampa, R.F., and Samuel, D. (1976). The effectiveness of filmed experiments in high school chemical education. Journal of Chemical Education, 53, 518-520.

Blakeslee, T., Bronstein, L., Chapin, M., Hesbitt, D., Peek, Y., Thiele, E., and Vellanti, J. (1993). Chemistry that applies . Lansing: Michigan Department of Education. Available at: http://www.ed-web2.educ.msu.edu/CCMS/secmod/Cluster3.pdf [accessed Feb. 2005].

Bransford, J.D., and Schwartz, D.L. (2001). Rethinking transfer: A simple proposal with multiple implications. In A. Iran-Nejad, and P.D. Pearson (Eds.), Review of research in education (pp. 61-100). Washington, DC: American Educational Research Association.

Bryce, T.G.K., and Robertson, I.J. (1985). What can they do: A review of practical assessment in science. Studies in Science Education , 12 , 1-24.

Carey, S., and Smith, C. (1993). On understanding the nature of scientific knowledge. Educational Psychologist , 28 , 235-251.

Champagne, A.B., Gunstone, R.F., and Klopfer, L.E. (1985). Instructional consequences of students’ knowledge about physical phenomena. In L.H.T. West and A.L. Pines (Eds.), Cognitive structure and conceptual change (pp. 61-68). New York: Academic Press.

Chang, H.P., and Lederman, N.G. (1994). The effect of levels of co-operation within physical science laboratory groups on physical science achievement. Journal of Research in Science Teaching , 31 , 167-181.

Cobb, P., Confrey, J., diSessa, A., Lehrer, R., and Schauble, L. (2003). Design experiments in educational research. Educational Researcher , 32 (1), 9-13.

Cobb, P., Stephan, M., McClain, K., and Gavemeijer, K. (2001). Participating in classroom mathematical practices. Journal of the Learning Sciences , 10 , 113-164.

Coleman, E.B. (1998). Using explanatory knowledge during collaborative problem solving in science. Journal of the Learning Sciences , 7 (3, 4), 387-427.

Collins, A., Joseph, D., and Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences , 13 (1), 15-42.

Coulter, J.C. (1966). The effectiveness of inductive laboratory demonstration and deductive laboratory in biology. Journal of Research in Science Teaching , 4 , 185-186.

Denny, M., and Chennell, F. (1986). Exploring pupils’ views and feelings about their school science practicals: Use of letter-writing and drawing exercises. Educational Studies , 12 , 73-86.

Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher , 32 (1), 5-8.

Driver, R. (1995). Constructivist approaches to science teaching. In L.P. Steffe and J. Gale (Eds.), Constructivism in education (pp. 385-400). Hillsdale, NJ: Lawrence Erlbaum.

Driver, R., Leach, J., Millar, R., and Scott, P. (1996). Young people’s images of science . Buckingham, UK: Open University Press.

Driver, R., Newton, P., and Osborne, J. (2000). Establishing the norms of scientific argumentation in classrooms. Science Education , 84 , 287-312.

Dunbar, K. (1993). Concept discovery in a scientific domain. Cognitive Science , 17 , 397-434.

Dupin, J.J., and Joshua, S. (1987). Analogies and “modeling analogies” in teaching: Some examples in basic electricity. Science Education , 73 , 791-806.

Duschl, R.A. (2004). The HS lab experience: Reconsidering the role of evidence, explanation and the language of science . Paper prepared for the Committee on High School Science Laboratories: Role and Vision, July 12-13, National Research Council, Washington, DC. Available at: http://www7.nationalacademies.org/bose/July_12-13_2004_High_School_Labs_Meeting_Agenda.html [accessed July 2005].

Duschl, R.A., and Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Studies in Science Education , 38 , 39-72.

Eylon, B., and Linn, M.C. (1988). Learning and instruction: An examination of four research perspectives in science education. Review of Educational Research , 58 (3), 251-301.

Fraser, B.J., Giddings, G.J., and McRobbie, C.J. (1995). Evolution and validation of a personal form of an instrument for assessing science laboratory classroom environments. Journal of Research in Science Teaching , 32 , 399-422.

Fraser, B.J., McRobbie, C.J., and Giddings, G.J. (1993). Development and cross-national validation of a laboratory classroom environment instrument for senior high school science. Science Education , 77 , 1-24.

Freedman, M.P. (2002). The influence of laboratory instruction on science achievement and attitude toward science across gender differences. Journal of Women and Minorities in Science and Engineering , 8 , 191-200.

Friedler, Y., Nachmias, R., and Linn, M.C. (1990). Learning scientific reasoning skills in microcomputer-based laboratories. Journal of Research in Science Teaching , 27 (2), 173-192.

Glaser, R. (1994). Learning theory and instruction. In G. d’Ydewalle, P. Eelen, and P. Bertelson (Eds.), International perspectives on science, volume 2: The state of the art (pp. 341-357). Hove, England: Erlbaum.

Gobert, J., and Clement, J. (1999). The effects of student-generated diagrams versus student-generated summaries on conceptual understanding of spatial, causal, and dynamic knowledge in plate tectonics. Journal of Research in Science Teaching , 36 (1), 39-53.

Gutwill, J.P., Fredericksen, J.R., and White, B.Y. (1999). Making their own connections: Students’ understanding of multiple models in basic electricity. Cognition and Instruction , 17 (3), 249-282.

Hatano, G., and Inagaki, K. (1991). Sharing cognition through collective comprehension activity. In L.B. Resnick, J.M. Levine, and S.D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 331-348). Washington, DC: American Psychological Association.

Henderson, D., Fisher, D., and Fraser, B. (2000). Interpersonal behavior, laboratory learning environments, and student outcomes in senior biology classes. Journal of Research in Science Teaching , 37 , 26-43.

Hickey, D.T., Kindfield, A.C.H., Horwitz, P., and Christie, M.A. (2000). Integrating instruction, assessment, and evaluation in a technology-based genetics environment: The GenScope follow-up study. In B.J. Fishman and S.F. O’Connor-Divelbiss (Eds.), Proceedings of the International Conference of the Learning Sciences (pp. 6-13). Mahwah, NJ: Lawrence Erlbaum.

Hickey, D.T., Kindfield, A.C., Horwitz, P., and Christie, M.A. (2003). Integrating curriculum, instruction, assessment, and evaluation in a technology-supported genetics environment. American Educational Research Journal , 40 (2), 495-538.

Hodson, D. (1993). Philosophic stance of secondary school science teachers, curriculum experiences, and children’s understanding of science: Some preliminary findings. Interchange , 24 , 41-52.

Hofstein, A., and Lunetta, V.N. (1982). The role of the laboratory in science teaching: Neglected aspects of research. Review of Educational Research , 52 (2), 201-217.

Hofstein, A., and Lunetta, V.N. (2004). The laboratory in science education: Foundations for the twenty-first century. Science Education , 88 , 28-54.

Holden, C. (1990). Animal rights activism threatens dissection. Science, 250, 751.

Horwitz, P. (1996). Linking models to data: Hypermodels for science education. High School Journal, 79(2), 148-156.

Horwitz, P., and Christie, M.A. (2000). Computer-based manipulatives for teaching scientific reasoning: An example. In M.J. Jacobson and R.B. Kozma (Eds.), Innovations in science and mathematics education: Advanced designs for technologies of learning (pp. 163-191). Mahwah, NJ: Lawrence Erlbaum.

Jovanovic, J., and King, S.S. (1998). Boys and girls in the performance-based science classroom: Who’s doing the performing? American Educational Research Journal , 35 (3), 477-496.

Kesidou, S., and Roseman, J. (2002). How well do middle school science programs measure up? Findings from Project 2061’s curriculum review. Journal of Research in Science Teaching , 39 (6), 522-549.

Khishfe, R., and Abd-El-Khalick, F. (2002). Influence of explicit and reflective versus implicit inquiry-oriented instruction on sixth graders’ views of nature of science. Journal of Research in Science Teaching , 39 (7), 551-578.

Klopfer, L.E. (1990). Learning scientific enquiry in the student laboratory. In E. Hegarty-Hazel (Ed.), The student laboratory and the science curriculum (pp. 95-118). London, England: Routledge.

Kozma, R.B. (2003). The material features of multiple representations and their cognitive and social affordances for science understanding. Learning and Instruction , 13 , 205-226.

Kuhn, D., Schauble, L., and Garcia-Mila, M. (1992). Cross-domain development of scientific reasoning. Cognition and Instruction , 9 (4), 285-327.

Lazarowitz, R., and Tamir, P. (1994). Research on using laboratory instruction in science. In D.L. Gabel (Ed.), Handbook of research on science teaching and learning (pp. 94-130). New York: Macmillan.

Lederman, N.G. (1992). Students’ and teachers’ conceptions of the nature of science: A review of the research. Journal of Research in Science Teaching , 29 (4), 331-359.

Lederman, N.G., Abd-El-Khalick, F., Bell, R.L., and Schwartz, R.S. (2002). Views of nature of science questionnaire: Toward valid and meaningful assessment of learners’ conceptions of nature of science. Journal of Research in Science Teaching , 39 (6), 497-521.

Lehrer, R., and Schauble, L. (2004). Scientific thinking and science literacy: Supporting development in learning contexts. In W. Damon, R. Lerner, K. Anne Renninger, and E. Sigel (Eds.), Handbook of child psychology, sixth edition, volume four: Child psychology in practice . Hoboken, NJ: John Wiley & Sons.

Lehrer, R., Schauble, L., Strom, D., and Pligge, M. (2001). Similarity of form and substance: Modeling material kind. In S.M. Carver and D. Klahr (Eds.), Cognition and instruction: Twenty-five years of progress . Mahwah, NJ: Lawrence Erlbaum.

Lemke, J. (1990). Talking science: Language, learning, and values . Norwood, NJ: Ablex.

Linn, M.C. (1997). The role of the laboratory in science learning. Elementary School Journal , 97 , 401-417.

Linn, M.C. (2004). High school science laboratories: How can technology contribute? Presentation to the Committee on High School Science Laboratories: Role and Vision. June. Available at: http://www7.nationalacademies.org/bose/June_3-4_2004_High_School_Labs_Meeting_Agenda.html [accessed April 2005].

Linn, M.C., Bell, P., and Hsi, S. (1998). Using the Internet to enhance student understanding of science: The knowledge integration environment. Interactive Learning Environments , 6 (1-2), 4-38.

Linn, M.C., Davis, E., and Bell, P. (2004a). Inquiry and technology. In M.C. Linn, E. Davis, and P. Bell, (Eds.), Internet environments for science education . Mahwah, NJ: Lawrence Erlbaum.

Linn, M.C., Davis, E., and Bell, P. (Eds.). (2004b). Internet environments for science education . Mahwah, NJ: Lawrence Erlbaum.

Linn, M.C., and Hsi, S. (2000). Computers, teachers, peers . Mahwah, NJ: Lawrence Erlbaum.

Linn, M.C., and Songer, B. (1991). Teaching thermodynamics to middle school children: What are appropriate cognitive demands? Journal of Research in Science Teaching , 28 (10), 885-918.

Lunetta, V.N. (1998). The school science laboratory. In B.J. Fraser and K.G. Tobin (Eds.), International handbook of science education (pp. 249-262). London, England: Kluwer Academic.

Lynch, S. (2004). What are the effects of highly rated, lab-based curriculum materials on diverse learners? Presentation to the Committee on High School Science Laboratories: Role and Vision. July 12. Available at: http://www7.nationalacademies.org/bose/July_12-13_2004_High_School_Labs_Meeting_Agenda.html [accessed Oct. 2004].

Lynch, S., Kuipers, J., Pyke, C., and Szesze, M. (In press). Examining the effects of a highly rated science curriculum unit on diverse populations: Results from a planning grant. Journal of Research in Science Teaching.

Lynch, S., and O’Donnell, C. (2005). The evolving definition, measurement, and conceptualization of fidelity of implementation in scale-up of highly rated science curriculum units in diverse middle schools. Paper presented at the annual meeting of the American Educational Research Association, April 7, Montreal, Canada.

McRobbie, C.J., and Fraser, B.J. (1993). Associations between student outcomes and psychosocial science environment. Journal of Educational Research , 87 , 78-85.

Meichtry, Y.J. (1993). The impact of science curricula on student views about the nature of science. Journal of Research in Science Teaching , 30 (5), 429-443.

Metz, K.E. (2004). Children’s understanding of scientific inquiry: Their conceptualization of uncertainty in investigations of their own design. Cognition and Instruction , 22 (2), 219-290.

Meyer, K., and Woodruff, E. (1997). Consensually driven explanation in science teaching. Science Education , 80 , 173-192.

Millar, R. (1998). Rhetoric and reality: What practical work in science education is really for. In J. Wellington (Ed.), Practical work in school science: Which way now? (pp. 16-31). London, England: Routledge.

Millar, R. (2004). The role of practical work in the teaching and learning of science . Paper prepared for the Committee on High School Science Laboratories: Role and Vision. Available at: http://www7.nationalacademies.org/bose/June3-4_2004_High_School_Labs_Meeting_Agenda.html [accessed April 2005].

National Research Council. (1999). How people learn: Brain, mind, experience, and school . Committee on Developments in the Science of Learning, J.D. Bransford, A.L. Brown, and R.R. Cocking (Eds.). Washington, DC: National Academy Press.

National Research Council. (2001). Eager to learn: Educating our preschoolers . Committee on Early Childhood Pedagogy. B.T. Bowman, M.S. Donovan, and M.S. Burns (Eds.). Commission on Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

National Research Council. (2005). Systems for state science assessment . Committee on Test Design for K-12 Science Achievement, M.R. Wilson and M.W. Bertenthal (Eds.). Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

Osborne, R., and Freyberg, P. (1985). Learning in science: The implications of children’s science . London, England: Heinemann.

Partnership for 21st Century Skills. (2003). Learning for the 21st century . Washington, DC: Author. Available at: http://www.21stcenturyskills.org/reports/learning.asp [accessed April 2005].

Pea, R., Mills, M., and Takeuchi, L. (Eds.). (2004). Making SENS: Science education networks of sensors. Report from an OMRON-sponsored workshop of the Media-X Program at Stanford University, October 3. Stanford, CA: Stanford Center for Innovations in Learning. Available at: http://www.makingsens.stanford.edu/index.html [accessed May 2005].

Raghubir, K.P. (1979). The laboratory investigative approach to science instruction. Journal of Research in Science Teaching , 16 , 13-18.

Reif, F., and St. John, M. (1979) Teaching physicists thinking skills in the laboratory. American Journal of Physics , 47 (11), 950-957.

Reiner, M., Pea, R.D., and Shulman, D.J. (1995). Impact of simulator-based instruction on diagramming in geometrical optics by introductory physics students. Journal of Science Education and Technology , 4 (3), 199-225.

Reiser, B.J., Tabak, I., Sandoval, W.A., Smith, B.K., Steinmuller, F., and Leone, A.J. (2001). BGuILE: Strategic and conceptual scaffolds for scientific inquiry in biology classrooms. In S.M. Carver and D. Klahr (Eds.), Cognition and instruction: Twenty-five years of progress (pp. 263-305). Mahwah, NJ: Lawrence Erlbaum.

Renner, J.W., Abraham, M.R., and Birnie, H.H. (1985). Secondary school students’ beliefs about the physics laboratory. Science Education, 69, 649-663.

Roschelle, J. (1992). Learning by collaborating: Convergent conceptual change. Journal of the Learning Sciences , 2 (3), 235-276.

Roschelle, J., Kaput, J., and Stroup, W. (2000). SimCalc: Accelerating students’ engagement with the mathematics of change. In M.J. Jacobsen and R.B. Kozma (Eds). Learning the sciences of the 21st century: Research, design, and implementing advanced technology learning environments (pp. 47-75). Hillsdale, NJ: Lawrence Erlbaum.

Rosebery, A.S., Warren, B., and Conant, F.R. (1992). Appropriating scientific discourse: Findings from language minority classrooms. Journal of the Learning Sciences , 2 (1), 61-94.

Salomon, G. (1996). Studying novel learning environments as patterns of change. In S. Vosniadou, E. De Corte, R. Glaser, and H. Mandl (Eds.), International perspectives on the design of technology-supported learning environments (pp. 363-377). Mahwah, NJ: Lawrence Erlbaum.

Sandoval, W.A. (2003). Conceptual and epistemic aspects of students’ scientific explanations. Journal of the Learning Sciences , 12 (1), 5-51.

Sandoval, W.A., and Millwood, K.A. (2005). The quality of students’ use of evidence in written scientific explanations. Cognition and Instruction , 23 (1), 23-55.

Sandoval, W.A., and Morrison, K. (2003). High school students’ ideas about theories and theory change after a biological inquiry unit. Journal of Research in Science Teaching , 40 (4), 369-392.

Sandoval, W.A., and Reiser, B.J. (2004). Explanation-driven inquiry: Integrating conceptual and epistemic supports for science inquiry. Science Education , 88 , 345-372.

Schauble, L., Glaser, R., Duschl, R.A., Schulze, S., and John, J. (1995). Students’ understanding of the objectives and procedures of experimentation in the science classroom. Journal of the Learning Sciences , 4 (2), 131-166.

Schauble, L., Klopfer, L.E., and Raghavan, K. (1991). Students’ transition from an engineering model to a science model of experimentation. Journal of Research in Science Teaching , 28 (9), 859-882.

Shaffer, P.S., and McDermott, L.C. (1992). Research as a guide for curriculum development: An example from introductory electricity. Part II: Design of instructional strategies. American Journal of Physics , 60 (11), 1003-1013.

Shepardson, D.P., and Pizzini, E.L. (1993). A comparison of student perceptions of science activities within three instructional approaches. School Science and Mathematics , 93 , 127-131.

Shulman, L.S., and Tamir, P. (1973). Research on teaching in the natural sciences. In R.M.W. Travers (Ed.), Second handbook of research on teaching . Chicago: Rand-McNally.

Singer, R.N. (1977). To err or not to err: A question for the instruction of psychomotor skills. Review of Educational Research , 47 , 479-489.

Smith, C.L., Maclin, D., Grosslight, L., and Davis, H. (1997). Teaching for understanding: A study of students’ pre-instruction theories of matter and a comparison of the effectiveness of two approaches to teaching about matter and density. Cognition and Instruction , 15 , 317-394.

Smith, C.L., Maclin, D., Houghton, C., and Hennessey, M. (2000). Sixth-grade students’ epistemologies of science: The impact of school science experiences on epistemological development. Cognition and Instruction, 18, 349-422.

Snir, J., Smith, C.L., and Raz, G. (2003). Linking phenomena with competing underlying models: A software tool for introducing students to the particulate model of matter. Science Education , 87 (6), 794-830.

Songer, N.B., and Linn, M.C. (1991). How do students’ views of science influence knowledge integration? Journal of Research in Science Teaching , 28 (9), 761-784.

Tabak, I. (2004). Synergy: a complement to emerging patterns of distributed scaffolding. Journal of the Learning Sciences , 13 (3), 305-335.

Tasker, R. (1981). Children’s views and classroom experiences. Australian Science Teachers’ Journal , 27 , 33-37.

Tiberghien, A., Veillard, L., Le Marechal, J.-F., Buty, C., and Millar, R. (2000). An analysis of labwork tasks used in science teaching at upper secondary school and university levels in several European countries. Science Education , 85 , 483-508.

Tobin, K. (1987). Forces which shape the implemented curriculum in high school science and mathematics. Teaching and Teacher Education , 3 (4), 287-298.

VandenBerg, E., Katu, N., and Lunetta, V.N. (1994). The role of “experiments” in conceptual change . Paper presented at the annual meeting of the National Association for Research in Science Teaching, Anaheim, CA.

Webb, N.M., Nemer, K.M., Chizhik, A.W., and Sugrue, B. (1998). Equity issues in collaborative group assessment: Group composition and performance. American Educational Research Journal , 35 (4), 607-652.

Webb, N.M., and Palincsar, A.S. (1996). Group processes in the classroom. In D.C. Berliner and R.C. Calfee (Eds.), Handbook of educational psychology (pp. 841-873). New York: Macmillan.

Wells, M., Hestenes, D., and Swackhamer, G. (1995). A modeling method for high school physics instruction. American Journal of Physics , 63 (7), 606-619.

Wheatley, J.H. (1975). Evaluating cognitive learning in the college science laboratory. Journal of Research in Science Teaching, 12, 101-109.

White, B.Y. (1993). ThinkerTools: Causal models, conceptual change, and science education. Cognition and Instruction , 10 (1), 1-100.

White, B.Y., and Frederiksen, J.R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction , 16 (1), 3-118.

White, R.T. (1996). The link between the laboratory and learning. International Journal of Science Education , 18 , 761-774.

White, R.T., and Gunstone, R.F. (1992). Probing understanding . London, England: Falmer.

Wilkenson, J.W., and Ward, M. (1997). The purpose and perceived effectiveness of laboratory work in secondary schools. Australian Science Teachers’ Journal , 43-55.

Wong, A.F.L., and Fraser, B.J. (1995). Cross-validation in Singapore of the science laboratory environment inventory. Psychological Reports , 76 , 907-911.

Woolnough, B.E. (1983). Exercises, investigations and experiences. Physics Education , 18 , 60-63.

Yager, R.E., Engen, J.B., and Snider, C.F. (1969). Effects of the laboratory and demonstration method upon the outcomes of instruction in secondary biology. Journal of Research in Science Teaching , 5 , 76-86.

Zimmerman, C. (2000). The development of scientific reasoning skills. Developmental Review , 20 , 99-149.

Laboratory experiences as a part of most U.S. high school science curricula have been taken for granted for decades, but they have rarely been carefully examined. What do they contribute to science learning? What can they contribute to science learning? What is the current status of labs in our nation’s high schools as a context for learning science? This book looks at a range of questions about how laboratory experiences fit into U.S. high schools:

  • What is effective laboratory teaching?
  • What does research tell us about learning in high school science labs?
  • How should student learning in laboratory experiences be assessed?
  • Do all students have access to laboratory experiences?
  • What changes need to be made to improve laboratory experiences for high school students?
  • How can school organization contribute to effective laboratory teaching?

With increased attention to the U.S. education system and student outcomes, no part of the high school curriculum should escape scrutiny. This timely book investigates factors that influence a high school laboratory experience, looking closely at what currently takes place and what the goals of those experiences are and should be. Science educators, school administrators, policy makers, and parents will all benefit from a better understanding of the need for laboratory experiences to be an integral part of the science curriculum—and how that can be accomplished.


Critical thinking in the lab (and beyond)

David Read


How to alter existing activities to foster scientific skills

Although many of us associate chemistry education with the laboratory, there remains a lack of evidence that correlates student learning with practical work. It is vital we continue to improve our understanding of how students learn from practical work, and we should devise methods that maximise the benefits. Jon-Marc Rodriguez and Marcy Towns, researchers at Purdue University, US, recently outlined an approach to modify existing practical activities to promote critical thinking in students, supporting enhanced learning. [1]



After an experiment, rather than asking a question, task students with plotting a graph; it’ll induce critical thinking and engagement with science practices

Jon-Marc and Marcy focused on critical thinking as a skill needed for successful engagement with the eight ‘science practices’. These practices come from a 2012 framework for science education published by the US National Research Council. The eight practices are: asking questions; developing and using models; planning and carrying out investigations; analysing and interpreting data; using mathematics and computational thinking; constructing explanations; engaging in argument from evidence; and obtaining, evaluating and communicating information. Such skills are widely viewed as integral to an effective chemistry programme. Practising scientists use multiple tools simultaneously when addressing a question, and well-designed practical activities that give students the opportunity to engage with numerous science practices will promote students’ scientific development.

The Purdue researchers chose to examine a traditional laboratory experiment on acid-base titrations because of its ubiquity in chemistry teaching. They characterised the pre- and post-lab questions associated with this experiment in terms of their alignment with the eight science practices. They found that only two of ten pre- and post-lab questions elicited engagement with science practices, demonstrating the limitations of the traditional approach. Notably, the pre-lab questions included numerous calculations that were not considered to promote engagement with science practices: students could answer them algorithmically, with no consideration of the significance of their answers.

Next, Jon-Marc and Marcy modified the experiment and rewrote the pre- and post-lab questions in order to foster engagement with the science practices. They drew on recent research that recommends minimising the amount of information given to students and helping them develop a general understanding of the underlying theory. [2] The modified set of questions was smaller, with a greater emphasis on conceptual understanding. The questions probed aspects such as the suitability of the method and the central question behind the experiment; they were more open and gave greater scope for developing critical thinking.


In taking an existing protocol and reframing it in terms of science practices, the authors demonstrate an approach instructors can use to adapt their existing activities to promote critical thinking. Using this approach, instructors do not have to spend excessive time creating new activities. Additionally, instructors will have the opportunity to research the impact of their approach on student learning in the teaching laboratory.

Teaching tips

Question phrasing and the steps students should go through to get an answer are instrumental in inducing critical thinking and engagement with science practices. As noted above, simple calculation-based questions do not prompt students to consider the significance of the values calculated. Questions should:

  • refer to an event, observation or phenomenon;
  • ask students to perform a calculation or demonstrate a relationship between variables;
  • ask students to provide a consequence or interpretation (not a restatement) in some form (eg a diagram or graph) based on their results, in the context of the event, observation or phenomenon.

This is more straightforward than it might first seem. The example question Jon-Marc and Marcy give requires students to calculate percentage errors for two titration techniques before discussing the relative accuracy of the methods. Students have to use their data to explain which method was more accurate, prompting a much higher level of engagement than a simple calculation.
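As a concrete illustration (with invented numbers, not data from the study), a question in this mould might give students an accepted concentration and the results from two techniques, ask for the percentage errors, and then ask which method is more accurate and why:

```python
# Hypothetical titration results for a percentage-error question; the
# values and labels below are invented for illustration.
def percent_error(measured: float, accepted: float) -> float:
    return abs(measured - accepted) / accepted * 100

accepted_molarity = 0.1000   # mol/L, the "true" value given to students
results = {
    "method A (burette)": 0.0982,   # mol/L
    "method B (dropper)": 0.1074,   # mol/L
}

for method, measured in results.items():
    print(f"{method}: {percent_error(measured, accepted_molarity):.1f}% error")
# method A (burette): 1.8% error
# method B (dropper): 7.4% error
```

The calculation itself is routine; the critical thinking comes in the follow-up, where students must use the two error values as evidence for a claim about the methods.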

As pre-lab preparation, ask students to consider an experimental procedure and then explain in a couple of sentences what methods are going to be used and the rationale for their use. As part of their pre-lab, the Purdue University research team asked students to devise a scientific (‘research’) question that could be answered using the data collected. They then asked students to evaluate and modify their own questions as part of the post-lab, supporting the development of investigative skills. It would be straightforward to incorporate this approach into any practical activity.

Finally, ask students to evaluate a mock response from another student about an aspect of the theory (eg ‘acids react with bases because acids like to donate protons and bases like to accept them’). This elicits critical thinking that can engage every student, with scope to stretch the more able.

These approaches can help students develop a more sophisticated view of chemistry and the higher order skills that will serve them well whatever their future destination.

[1] J-M G Rodriguez and M H Towns, J. Chem. Educ., 2018, 95, 2141 (DOI: 10.1021/acs.jchemed.8b00683)

[2] H Y Agustian and M K Seery, Chem. Educ. Res. Pract., 2017, 18, 518 (DOI: 10.1039/C7RP00140A)




6.1.1: Practice Problems- Solution Concentration


PROBLEM \(\PageIndex{1}\)

Explain what changes and what stays the same when 1.00 L of a solution of NaCl is diluted to 1.80 L.

Answer: The number of moles of solute stays the same in a dilution; the concentration and the volume change.
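A short numerical check makes the point. The starting concentration below is an assumption made for illustration only, since the problem does not give one:

```python
# Dilution: solvent is added, so moles of solute are fixed and
# c1 * V1 = c2 * V2. The 1.00 M starting concentration is assumed
# for this example; the problem itself leaves it unspecified.
c1, v1 = 1.00, 1.00   # mol/L, L
v2 = 1.80             # L, final volume after dilution

moles = c1 * v1       # unchanged by the dilution
c2 = moles / v2
print(f"{moles:.2f} mol before and after; concentration drops to {c2:.3f} M")
# 1.00 mol before and after; concentration drops to 0.556 M
```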

PROBLEM \(\PageIndex{2}\)

What does it mean when we say that a 200-mL sample and a 400-mL sample of a solution of salt have the same molarity? In what ways are the two samples identical? In what ways are these two samples different?

Answer: The two samples contain the same proportion of moles of salt to liters of solution, but they have different numbers of actual moles.

PROBLEM \(\PageIndex{3}\)

Determine the molarity for each of the following solutions:

(a) 0.444 mol of CoCl₂ in 0.654 L of solution
(b) 98.0 g of phosphoric acid, H₃PO₄, in 1.00 L of solution
(c) 0.2074 g of calcium hydroxide, Ca(OH)₂, in 40.00 mL of solution
(d) 10.5 kg of Na₂SO₄·10H₂O in 18.60 L of solution
(e) 7.0 × 10⁻³ mol of I₂ in 100.0 mL of solution
(f) 1.8 × 10⁴ mg of HCl in 0.075 L of solution
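No answers accompany this problem, so as a check on your own work, here is a small helper (a sketch, using standard molar masses) that computes molarity from either moles or mass; parts (a) and (b) are worked:

```python
# Molarity = moles of solute / liters of solution.
def molarity(volume_l, moles=None, mass_g=None, molar_mass=None):
    if moles is None:
        moles = mass_g / molar_mass   # convert mass to moles first
    return moles / volume_l

# (a) 0.444 mol of CoCl2 in 0.654 L of solution
print(f"(a) {molarity(0.654, moles=0.444):.3f} M")                   # 0.679 M
# (b) 98.0 g of H3PO4 (molar mass 97.99 g/mol) in 1.00 L of solution
print(f"(b) {molarity(1.00, mass_g=98.0, molar_mass=97.99):.2f} M")  # 1.00 M
```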

PROBLEM \(\PageIndex{4}\)

Determine the molarity of each of the following solutions:

(a) 1.457 mol KCl in 1.500 L of solution
(b) 0.515 g of H₂SO₄ in 1.00 L of solution
(c) 20.54 g of Al(NO₃)₃ in 1575 mL of solution
(d) 2.76 kg of CuSO₄·5H₂O in 1.45 L of solution
(e) 0.005653 mol of Br₂ in 10.00 mL of solution
(f) 0.000889 g of glycine, C₂H₅NO₂, in 1.05 mL of solution

Answers: (b) 5.25 × 10⁻³ M; (c) 6.122 × 10⁻² M; (f) 1.13 × 10⁻² M

PROBLEM \(\PageIndex{5}\)

Calculate the number of moles and the mass of the solute in each of the following solutions:

(a) 2.00 L of 18.5 M H₂SO₄, concentrated sulfuric acid
(b) 100.0 mL of 3.8 × 10⁻⁵ M NaCN, the minimum lethal concentration of sodium cyanide in blood serum
(c) 5.50 L of 13.3 M H₂CO, the formaldehyde used to “fix” tissue samples
(d) 325 mL of 1.8 × 10⁻⁶ M FeSO₄, the minimum concentration of iron sulfate detectable by taste in drinking water

Answers:

(a) 37.0 mol H₂SO₄; 3.63 × 10³ g H₂SO₄

(b) 3.8 × 10⁻⁶ mol NaCN; 1.9 × 10⁻⁴ g NaCN

(c) 73.2 mol H₂CO; 2.20 kg H₂CO

(d) 5.9 × 10⁻⁷ mol FeSO₄; 8.9 × 10⁻⁵ g FeSO₄

PROBLEM \(\PageIndex{6}\)

Calculate the molarity of each of the following solutions:

(a) 0.195 g of cholesterol, C₂₇H₄₆O, in 0.100 L of serum, the average concentration of cholesterol in human serum
(b) 4.25 g of NH₃ in 0.500 L of solution, the concentration of NH₃ in household ammonia
(c) 1.49 kg of isopropyl alcohol, C₃H₇OH, in 2.50 L of solution, the concentration of isopropyl alcohol in rubbing alcohol
(d) 0.029 g of I₂ in 0.100 L of solution, the solubility of I₂ in water at 20 °C

Answers: (a) 5.04 × 10⁻³ M; (d) 1.1 × 10⁻³ M

PROBLEM \(\PageIndex{7}\)

There is about 1.0 g of calcium, as Ca²⁺, in 1.0 L of milk. What is the molarity of Ca²⁺ in milk?

PROBLEM \(\PageIndex{8}\)

What volume of a 1.00-M Fe(NO₃)₃ solution can be diluted to prepare 1.00 L of a solution with a concentration of 0.250 M?

PROBLEM \(\PageIndex{9}\)

If 0.1718 L of a 0.3556-M C₃H₇OH solution is diluted to a concentration of 0.1222 M, what is the volume of the resulting solution?

PROBLEM \(\PageIndex{10}\)

What volume of a 0.33-M C₁₂H₂₂O₁₁ solution can be diluted to prepare 25 mL of a solution with a concentration of 0.025 M?

PROBLEM \(\PageIndex{11}\)

What is the concentration of the NaCl solution that results when 0.150 L of a 0.556-M solution is allowed to evaporate until the volume is reduced to 0.105 L?

PROBLEM \(\PageIndex{12}\)

What is the molarity of the diluted solution when each of the following solutions is diluted to the given final volume?

  • 1.00 L of a 0.250-M solution of Fe(NO₃)₃ is diluted to a final volume of 2.00 L
  • 0.5000 L of a 0.1222-M solution of C₃H₇OH is diluted to a final volume of 1.250 L
  • 2.35 L of a 0.350-M solution of H₃PO₄ is diluted to a final volume of 4.00 L
  • 22.50 mL of a 0.025-M solution of C₁₂H₂₂O₁₁ is diluted to 100.0 mL

PROBLEM \(\PageIndex{13}\)

What is the final concentration of the solution produced when 225.5 mL of a 0.09988-M solution of Na₂CO₃ is allowed to evaporate until the solution volume is reduced to 45.00 mL?

PROBLEM \(\PageIndex{14}\)

A 2.00-L bottle of a solution of concentrated HCl was purchased for the general chemistry laboratory. The solution contained 868.8 g of HCl. What is the molarity of the solution?

PROBLEM \(\PageIndex{15}\)

An experiment in a general chemistry laboratory calls for a 2.00- M solution of HCl. How many mL of 11.9 M HCl would be required to make 250 mL of 2.00 M HCl?

PROBLEM \(\PageIndex{16}\)

What volume of a 0.20-M K₂SO₄ solution contains 57 g of K₂SO₄?

PROBLEM \(\PageIndex{17}\)

The US Environmental Protection Agency (EPA) places limits on the quantities of toxic substances that may be discharged into the sewer system. Limits have been established for a variety of substances, including hexavalent chromium, which is limited to 0.50 mg/L. If an industry is discharging hexavalent chromium as potassium dichromate (K₂Cr₂O₇), what is the maximum permissible molarity of that substance?

Answer: 4.8 × 10⁻⁶ M
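The stated answer can be verified in a few lines; the only chemistry required is that each formula unit of K₂Cr₂O₇ carries two chromium atoms:

```python
# 0.50 mg/L of hexavalent chromium, delivered as K2Cr2O7 (2 Cr per unit).
cr_limit_g_per_l = 0.50e-3     # g of Cr per liter
molar_mass_cr = 52.00          # g/mol
mol_cr = cr_limit_g_per_l / molar_mass_cr   # mol Cr per liter
mol_k2cr2o7 = mol_cr / 2                    # mol dichromate per liter
print(f"{mol_k2cr2o7:.1e} M")               # 4.8e-06 M, matching the answer
```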

Contributors

Paul Flowers (University of North Carolina - Pembroke), Klaus Theopold (University of Delaware) and Richard Langley (Stephen F. Austin State University) with contributing authors.  Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/[email protected] ).

  • Adelaide Clark, Oregon Institute of Technology
8.3 Elastic and Inelastic Collisions

Section Learning Objectives

By the end of this section, you will be able to do the following:

  • Distinguish between elastic and inelastic collisions
  • Solve collision problems by applying the law of conservation of momentum

Teacher Support

The learning objectives in this section will help your students master the following standards:

  • (C) calculate the mechanical energy of, power generated within, impulse applied to, and momentum of a physical system;
  • (D) demonstrate and apply the laws of conservation of energy and conservation of momentum in one dimension.

Section Key Terms

  • elastic collision
  • inelastic collision
  • point masses

Elastic and Inelastic Collisions

When objects collide, they can either stick together or bounce off one another, remaining separate. In this section, we’ll cover these two different types of collisions, first in one dimension and then in two dimensions.

In an elastic collision, the objects separate after impact and don’t lose any of their kinetic energy. Kinetic energy is the energy of motion and is covered in detail elsewhere. The law of conservation of momentum is very useful here, and it can be used whenever the net external force on a system is zero. Figure 8.6 shows an elastic collision where momentum is conserved.

An animation of an elastic collision between balls can be seen by watching this video. It replicates the elastic collisions between balls of varying masses.

Perfectly elastic collisions can happen only with subatomic particles. Everyday observable examples of perfectly elastic collisions don’t exist; some kinetic energy is always lost, as it is converted into heat by friction. However, collisions between everyday objects are almost perfectly elastic when they occur with objects and surfaces that are nearly frictionless, such as two steel blocks on ice.

Now, to solve problems involving one-dimensional elastic collisions between two objects, we can use the equation for conservation of momentum. First, the equation for conservation of momentum for two objects in a one-dimensional collision is

p_1 + p_2 = p'_1 + p'_2.

Substituting the definition of momentum p = mv for each initial and final momentum, we get

m_1 v_1 + m_2 v_2 = m_1 v'_1 + m_2 v'_2,

where the primes (') indicate values after the collision; in some texts, you may see i for initial (before collision) and f for final (after collision). The equation assumes that the mass of each object does not change during the collision.
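Because each worked example below reduces to rearranging this equation for a single unknown, a short code sketch can make the bookkeeping concrete. The following Python snippet is my illustration, not part of the original text; the function name and sample numbers are invented. It solves for the final velocity of object 2 when the masses and the other three velocities are known:

```python
def final_velocity_2(m1, m2, v1, v2, v1_prime):
    """Solve m1*v1 + m2*v2 = m1*v1' + m2*v2' for v2'.

    Assumes a one-dimensional collision with no net external force,
    so that total momentum is conserved and masses are unchanged.
    """
    total_momentum = m1 * v1 + m2 * v2            # momentum before the collision
    return (total_momentum - m1 * v1_prime) / m2  # whatever remains belongs to object 2

# Illustrative numbers: a 0.5-kg cart at +2 m/s strikes a 1.0-kg cart
# at rest and rebounds at -2/3 m/s (the elastic-collision result).
print(final_velocity_2(0.5, 1.0, 2.0, 0.0, -2.0 / 3.0))  # ≈ 1.33 m/s
```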

Watch Physics

Momentum: ice skater throws a ball.

This video covers an elastic collision problem in which we find the recoil velocity of an ice skater who throws a ball straight forward. To clarify, Sal is using the equation

m_ball v_ball + m_skater v_skater = m_ball v'_ball + m_skater v'_skater.

  • R_x + R_y = 0
  • A_x + A_y = A→
  • A_x + B_y = B_x + A_y
  • A_x + B_x = R_x

Now, let us turn to the second type of collision. An inelastic collision is one in which kinetic energy is not conserved. A perfectly inelastic collision (also sometimes called completely or maximally inelastic) is one in which the objects stick together after impact, and the maximum amount of kinetic energy is lost. In an inelastic collision, the forces between the colliding objects convert some of the kinetic energy into other forms of energy, such as potential energy or thermal energy; the concepts of energy are discussed more thoroughly elsewhere. Figure 8.7 shows an example of an inelastic collision: two objects with equal masses head toward each other at equal speeds and stick together, coming to rest. Momentum is conserved, but kinetic energy is not; some of the energy of motion is converted to thermal energy, or heat.

Since the two objects stick together after colliding, they move together at the same speed. This lets us simplify the conservation of momentum equation from

m_1 v_1 + m_2 v_2 = m_1 v'_1 + m_2 v'_2

to

m_1 v_1 + m_2 v_2 = (m_1 + m_2) v'

for inelastic collisions, where v' is the final velocity for both objects as they are stuck together, either in motion or at rest.
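As a quick numeric illustration (mine, not the book's; the sample masses and speeds are arbitrary), the sketch below computes the shared final velocity after a perfectly inelastic collision:

```python
def stuck_together_velocity(m1, v1, m2, v2):
    """Common final velocity v' after a perfectly inelastic collision.

    Conservation of momentum with a single final velocity:
    m1*v1 + m2*v2 = (m1 + m2) * v'
    """
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Equal masses with equal and opposite velocities, as in Figure 8.7:
print(stuck_together_velocity(1.0, 3.0, 1.0, -3.0))  # 0.0 m/s; the pair stops
```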

[BL] [OL] Review the concept of internal energy. Ask students what they understand by the words elastic and inelastic.

[AL] Start a discussion about collisions. Ask students to give examples of elastic and inelastic collisions.

Introduction to Momentum

This video reviews the definitions of momentum and impulse. It also covers an example of using conservation of momentum to solve a problem involving an inelastic collision between a car with constant velocity and a stationary truck. Note that Sal accidentally gives the unit for impulse as joules; it is actually N⋅s or kg⋅m/s.

Grasp Check

How would the final velocity of the car-plus-truck system change if the truck had some initial velocity moving in the same direction as the car? What if the truck were moving in the opposite direction of the car initially? Why?

  • If the truck was initially moving in the same direction as the car, the final velocity would be greater. If the truck was initially moving in the opposite direction of the car, the final velocity would be smaller.
  • If the truck was initially moving in the same direction as the car, the final velocity would be smaller. If the truck was initially moving in the opposite direction of the car, the final velocity would be greater.
  • The direction in which the truck was initially moving would not matter. If the truck was initially moving in either direction, the final velocity would be smaller.
  • The direction in which the truck was initially moving would not matter. If the truck was initially moving in either direction, the final velocity would be greater.

Ice Cubes and Elastic Collisions

In this activity, you will observe an elastic collision by sliding an ice cube into another ice cube on a smooth surface, so that a negligible amount of energy is converted to heat.

  • Several ice cubes (The ice must be in the form of cubes.)
  • A smooth surface
  • Find a few ice cubes that are about the same size and a smooth kitchen tabletop or a table with a glass top.
  • Place the ice cubes on the surface several centimeters away from each other.
  • Flick one ice cube toward a stationary ice cube and observe the path and velocities of the ice cubes after the collision. Try to avoid edge-on collisions and collisions with rotating ice cubes.
  • Explain the speeds and directions of the ice cubes using momentum.
Grasp Check

Which type of collision best describes the collision you observed between the two ice cubes?

  • perfectly elastic
  • perfectly inelastic
  • nearly perfectly elastic
  • nearly perfectly inelastic

Tips For Success

Here’s a trick for remembering which collisions are elastic and which are inelastic: elastic describes a bouncy material, so when objects bounce off one another and separate after the collision, it is an elastic collision. When they don’t, the collision is inelastic.

Solving Collision Problems

The Khan Academy videos referenced in this section show examples of elastic and inelastic collisions in one dimension. In one-dimensional collisions, the incoming and outgoing velocities are all along the same line. But what about collisions, such as those between billiard balls, in which objects scatter to the side? These are two-dimensional collisions, and just as we did with two-dimensional forces, we will solve these problems by first choosing a coordinate system and separating the motion into its x and y components.

One complication with two-dimensional collisions is that the objects might rotate before or after their collision. For example, if two ice skaters hook arms as they pass each other, they will spin in circles. We will not consider such rotation until later, and so for now, we arrange things so that no rotation is possible. To avoid rotation, we consider only the scattering of point masses —that is, structureless particles that cannot rotate or spin.

We start by assuming that F_net = 0, so that momentum p is conserved. The simplest collision is one in which one of the particles is initially at rest. The best choice for a coordinate system is one with an axis parallel to the velocity of the incoming particle, as shown in Figure 8.8. Because momentum is conserved, the components of momentum along the x- and y-axes, denoted p_x and p_y, will also be conserved. With the chosen coordinate system, p_y is initially zero and p_x is the momentum of the incoming particle.

Now, we will take the conservation of momentum equation, p_1 + p_2 = p'_1 + p'_2, and break it into its x and y components.

Along the x-axis, the equation for conservation of momentum is

p_1x + p_2x = p'_1x + p'_2x.

In terms of masses and velocities, this equation is

m_1 v_1x + m_2 v_2x = m_1 v'_1x + m_2 v'_2x.

But because particle 2 is initially at rest, this equation becomes

m_1 v_1x = m_1 v'_1x + m_2 v'_2x.

The components of the velocities along the x-axis have the form v cos θ. Because particle 1 initially moves along the x-axis, we find v_1x = v_1. Conservation of momentum along the x-axis gives the equation

m_1 v_1 = m_1 v'_1 cos θ_1 + m_2 v'_2 cos θ_2,

where θ_1 and θ_2 are as shown in Figure 8.8.

Along the y-axis, the equation for conservation of momentum is

p_1y + p_2y = p'_1y + p'_2y,

or, in terms of masses and velocities, m_1 v_1y + m_2 v_2y = m_1 v'_1y + m_2 v'_2y. But v_1y is zero, because particle 1 initially moves along the x-axis. Because particle 2 is initially at rest, v_2y is also zero. The equation for conservation of momentum along the y-axis becomes

0 = m_1 v'_1y + m_2 v'_2y.

The components of the velocities along the y-axis have the form v sin θ. Therefore, conservation of momentum along the y-axis gives the following equation:

0 = m_1 v'_1 sin θ_1 + m_2 v'_2 sin θ_2.
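To see this component bookkeeping in action, here is a short Python check (my own sketch; the post-collision numbers are invented but chosen to be physically consistent for equal masses) confirming that the x and y momentum sums balance:

```python
import math

def momentum_components(m, v, theta_deg):
    """Return (p_x, p_y) for mass m moving at speed v at angle
    theta_deg, measured counterclockwise from the x-axis."""
    theta = math.radians(theta_deg)
    return m * v * math.cos(theta), m * v * math.sin(theta)

# Particle 1 (1 kg) comes in at 2 m/s along x; particle 2 (1 kg) is at rest.
# For an elastic collision of equal masses, one valid outcome is scattering
# at +/-45 degrees with speeds v1*cos(45°) and v1*sin(45°).
v1 = 2.0
before = momentum_components(1.0, v1, 0.0)
after_1 = momentum_components(1.0, v1 * math.cos(math.radians(45)), 45.0)
after_2 = momentum_components(1.0, v1 * math.sin(math.radians(45)), -45.0)

p_after = (after_1[0] + after_2[0], after_1[1] + after_2[1])
print(before)                               # (2.0, 0.0)
print(tuple(round(p, 6) for p in p_after))  # (2.0, 0.0) as well
```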

Review conservation of momentum and the equations derived in the previous sections of this chapter. Say that in the problems of this section, all objects are assumed to be point masses. Explain point masses.

Virtual Physics

Collision lab.

In this simulation, you will investigate collisions on an air hockey table. Place checkmarks next to the momentum vectors and momenta diagram options. Experiment with changing the masses of the balls and the initial speed of ball 1. How does this affect the momentum of each ball? What about the total momentum? Next, experiment with changing the elasticity of the collision. You will notice that collisions have varying degrees of elasticity, ranging from perfectly elastic to perfectly inelastic.

If you wanted to maximize the velocity of ball 2 after impact, how would you change the settings for the masses of the balls, the initial speed of ball 1, and the elasticity setting? Why? Hint—Placing a checkmark next to the velocity vectors and removing the momentum vectors will help you visualize the velocity of ball 2, and pressing the More Data button will let you take readings.

  • Maximize the mass of ball 1 and initial speed of ball 1; minimize the mass of ball 2; and set elasticity to 50 percent.
  • Maximize the mass of ball 2 and initial speed of ball 1; minimize the mass of ball 1; and set elasticity to 100 percent.
  • Maximize the mass of ball 1 and initial speed of ball 1; minimize the mass of ball 2; and set elasticity to 100 percent.
  • Maximize the mass of ball 2 and initial speed of ball 1; minimize the mass of ball 1; and set elasticity to 50 percent.

Worked Example

Calculating velocity: inelastic collision of a puck and a goalie.

Find the recoil velocity of a 70 kg ice hockey goalie who catches a 0.150-kg hockey puck slapped at him at a velocity of 35 m/s. Assume that the goalie is at rest before catching the puck, and friction between the ice and the puck-goalie system is negligible (see Figure 8.9 ).

Momentum is conserved because the net external force on the puck-goalie system is zero. Therefore, we can use conservation of momentum to find the final velocity of the puck and goalie system. Note that the initial velocity of the goalie is zero and that the final velocity of the puck and goalie are the same.

For an inelastic collision, conservation of momentum is

m_1 v_1 + m_2 v_2 = (m_1 + m_2) v',

where v' is the velocity of both the goalie and the puck after impact. Because the goalie is initially at rest, we know v_2 = 0. This simplifies the equation to

m_1 v_1 = (m_1 + m_2) v'.

Solving for v' yields

v' = [m_1 / (m_1 + m_2)] v_1.

Entering known values in this equation, we get

v' = [0.150 kg / (70 kg + 0.150 kg)] × 35 m/s ≈ 0.0748 m/s.

This recoil velocity is small and in the same direction as the puck’s original velocity.
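The same calculation in code (an illustrative sketch repeating the helper from earlier, not part of the original solution):

```python
def stuck_together_velocity(m1, v1, m2, v2):
    # Perfectly inelastic collision: m1*v1 + m2*v2 = (m1 + m2) * v'
    return (m1 * v1 + m2 * v2) / (m1 + m2)

# Puck: 0.150 kg at 35 m/s; goalie: 70 kg, initially at rest.
v_prime = stuck_together_velocity(0.150, 35.0, 70.0, 0.0)
print(f"{v_prime:.4f} m/s")  # 0.0748 m/s, i.e., 7.48e-2 m/s
```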

Calculating Final Velocity: Elastic Collision of Two Carts

Two hard, steel carts collide head-on and then ricochet off each other in opposite directions on a frictionless surface (see Figure 8.10 ). Cart 1 has a mass of 0.350 kg and an initial velocity of 2 m/s. Cart 2 has a mass of 0.500 kg and an initial velocity of −0.500 m/s. After the collision, cart 1 recoils with a velocity of −4 m/s. What is the final velocity of cart 2?

Since the surface is frictionless, F_net = 0, and we can use conservation of momentum to find the final velocity of cart 2.

As before, the equation for conservation of momentum for a one-dimensional elastic collision in a two-object system is

m_1 v_1 + m_2 v_2 = m_1 v'_1 + m_2 v'_2.

The only unknown in this equation is v'_2. Solving for v'_2 and substituting known values into the previous equation yields

v'_2 = (m_1 v_1 + m_2 v_2 − m_1 v'_1) / m_2 = [(0.350 kg)(2 m/s) + (0.500 kg)(−0.500 m/s) − (0.350 kg)(−4 m/s)] / (0.500 kg) = 3.70 m/s.

The final velocity of cart 2 is large and positive, meaning that it is moving to the right after the collision.
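Checking with the final_velocity_2 sketch from earlier (again my illustration, not the book's):

```python
def final_velocity_2(m1, m2, v1, v2, v1_prime):
    # Solve m1*v1 + m2*v2 = m1*v1' + m2*v2' for v2'.
    return (m1 * v1 + m2 * v2 - m1 * v1_prime) / m2

# Cart 1: 0.350 kg at 2 m/s; cart 2: 0.500 kg at -0.500 m/s;
# cart 1 recoils at -4 m/s after the elastic collision.
print(round(final_velocity_2(0.350, 0.500, 2.0, -0.500, -4.0), 3))  # 3.7 m/s
```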

Calculating Final Velocity in a Two-Dimensional Collision

Suppose the following experiment is performed (Figure 8.11). An object of mass 0.250 kg (m_1) is slid on a frictionless surface into a dark room, where it strikes an initially stationary object of mass 0.400 kg (m_2). The 0.250-kg object emerges from the room at an angle of 45° with its incoming direction. The speed of the 0.250-kg object is originally 2 m/s and is 1.50 m/s after the collision. Calculate the magnitude and direction of the velocity (v'_2 and θ_2) of the 0.400-kg object after the collision.

Momentum is conserved because the surface is frictionless. We chose the coordinate system so that the initial velocity is parallel to the x -axis, and conservation of momentum along the x - and y -axes applies.

Everything is known in these equations except v'_2 and θ_2, which we need to find. We can find two unknowns because we have two independent equations: the equations describing the conservation of momentum in the x and y directions.

First, we’ll solve both conservation of momentum equations (m_1 v_1 = m_1 v'_1 cos θ_1 + m_2 v'_2 cos θ_2 and 0 = m_1 v'_1 sin θ_1 + m_2 v'_2 sin θ_2) for v'_2 sin θ_2.

For conservation of momentum along the x-axis, let’s substitute sin θ_2/tan θ_2 for cos θ_2 so that terms may cancel out later on. This comes from rearranging the trigonometric identity tan θ = sin θ/cos θ. This gives us

m_1 v_1 = m_1 v'_1 cos θ_1 + m_2 v'_2 (sin θ_2 / tan θ_2).

Solving for v'_2 sin θ_2 yields

v'_2 sin θ_2 = (m_1 v_1 − m_1 v'_1 cos θ_1) (tan θ_2 / m_2).

For conservation of momentum along the y-axis, solving for v'_2 sin θ_2 yields

v'_2 sin θ_2 = −(m_1 v'_1 sin θ_1) / m_2.

Since both expressions equal v'_2 sin θ_2, we can set them equal to one another, yielding

(m_1 v_1 − m_1 v'_1 cos θ_1) (tan θ_2 / m_2) = −(m_1 v'_1 sin θ_1) / m_2.

Solving this equation for tan θ_2, we get

tan θ_2 = (v'_1 sin θ_1) / (v'_1 cos θ_1 − v_1).

Entering known values into the previous equation gives

tan θ_2 = [(1.50 m/s)(sin 45°)] / [(1.50 m/s)(cos 45°) − 2 m/s] ≈ −1.13, so θ_2 ≈ −48.5°.

Since angles are defined as positive in the counterclockwise direction, m_2 is scattered to the right.

We’ll use the conservation of momentum along the y-axis equation to solve for v'_2:

v'_2 = −(m_1 v'_1 sin θ_1) / (m_2 sin θ_2).

Entering known values into this equation gives

v'_2 = −[(0.250 kg)(1.50 m/s)(sin 45°)] / [(0.400 kg) sin(−48.5°)] ≈ 0.886 m/s.

Either equation for the x- or y-axis could have been used to solve for v'_2, but the equation for the y-axis is easier because it has fewer terms.
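For readers who want to verify the arithmetic, this short Python sketch (mine, not part of the original example) solves the same two component equations numerically:

```python
import math

# Known values from the worked example above.
m1, m2 = 0.250, 0.400        # kg
v1 = 2.0                     # m/s; m2 is initially at rest
v1_prime = 1.50              # m/s, speed of m1 after the collision
theta1 = math.radians(45.0)  # scattering angle of m1

# Momentum components that m2 must carry away, from conservation in x and y:
p2x = m1 * v1 - m1 * v1_prime * math.cos(theta1)
p2y = -m1 * v1_prime * math.sin(theta1)

v2_prime = math.hypot(p2x, p2y) / m2         # speed of m2 after the collision
theta2 = math.degrees(math.atan2(p2y, p2x))  # angle of m2 from the x-axis

print(f"v2' ≈ {v2_prime:.3f} m/s at θ2 ≈ {theta2:.1f}°")  # ≈ 0.886 m/s at -48.5°
```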

Practice Problems

In an elastic collision, an object with momentum 25 kg⋅m/s collides with another that has a momentum of 35 kg⋅m/s. The first object’s momentum changes to 10 kg⋅m/s. What is the final momentum of the second object?

  • 10 kg⋅m/s
  • 20 kg⋅m/s
  • 35 kg⋅m/s
  • 50 kg⋅m/s
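Conservation of momentum makes this a one-line check (a sketch of mine):

```python
p1, p2 = 25.0, 35.0   # kg·m/s, momenta before the collision
p1_final = 10.0       # kg·m/s, momentum of the first object afterward
p2_final = (p1 + p2) - p1_final
print(p2_final)       # 50.0 kg·m/s
```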

Check Your Understanding

What is an elastic collision?

  • An elastic collision is one in which the objects after impact are deformed permanently.
  • An elastic collision is one in which the objects after impact lose some of their internal kinetic energy.
  • An elastic collision is one in which the objects after impact do not lose any of their internal kinetic energy.
  • An elastic collision is one in which the objects after impact become stuck together and move with a common velocity.
Under what conditions are perfectly elastic collisions possible?

  • Perfectly elastic collisions are not possible.
  • Perfectly elastic collisions are possible only with subatomic particles.
  • Perfectly elastic collisions are possible only when the objects stick together after impact.
  • Perfectly elastic collisions are possible if the objects and surfaces are nearly frictionless.

What is the equation for conservation of momentum for two objects in a one-dimensional collision?

  • p_1 + p'_1 = p_2 + p'_2
  • p_1 + p_2 = p'_1 + p'_2
  • p_1 − p_2 = p'_1 − p'_2
  • p_1 + p_2 + p'_1 + p'_2 = 0

Use the Check Your Understanding questions to assess whether students master the learning objectives of this section. If students are struggling with a specific objective, the assessment will help identify which objective is causing the problem and direct students to the relevant content.


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-physics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/physics/pages/1-introduction
  • Authors: Paul Peter Urone, Roger Hinrichs
  • Publisher/website: OpenStax
  • Book title: Physics
  • Publication date: Mar 26, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/physics/pages/1-introduction
  • Section URL: https://openstax.org/books/physics/pages/8-3-elastic-and-inelastic-collisions

© Jan 19, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


What Is a Fishbone Diagram? | Templates & Examples

Published on January 2, 2023 by Tegan George. Revised on January 29, 2024.

A fishbone diagram is a problem-solving approach that uses a fish-shaped diagram to model possible root causes of problems and troubleshoot possible solutions. It is also called an Ishikawa diagram, after its creator, Kaoru Ishikawa, as well as a herringbone diagram or cause-and-effect diagram.

Fishbone diagrams are often used in root cause analysis , to troubleshoot issues in quality management or product development. They are also used in the fields of nursing and healthcare, or as a brainstorming and mind-mapping technique many students find helpful.


A fishbone diagram is easy to draw, or you can use a template for an online version.

  • Your fishbone diagram starts out with an issue or problem. This is the “head” of the fish, summarized in a few words or a small phrase.
  • Next, draw a long arrow, which serves as the fish’s backbone.
  • From here, you’ll draw the first “bones” directly from the backbone, in the shape of small diagonal lines going right-to-left. These represent the most likely or overarching causes of your problem.
  • Branching off from each of these first bones, create smaller bones containing contributing information and necessary detail.
  • When finished, your fishbone diagram should give you a wide-view idea of what the root causes of the issue you’re facing could be, allowing you to rank them or choose which could be most plausible.
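Conceptually, a fishbone diagram is just a small tree: one problem at the head and categories of causes branching off the spine. If it helps to see that structure explicitly, here is a minimal Python sketch (entirely my own illustration; the problem and causes are invented) that stores and prints one:

```python
# A fishbone diagram as data: the "head" (problem) plus "bones"
# (cause categories), each holding smaller contributing causes.
fishbone = {
    "problem": "Website is down",
    "causes": {
        "Infrastructure": ["server overload", "expired TLS certificate"],
        "Code": ["bad deploy", "unhandled exception"],
        "External": ["DNS outage", "upstream API failure"],
    },
}

print(f"Problem: {fishbone['problem']}")
for category, causes in fishbone["causes"].items():
    print(f"  {category}")
    for cause in causes:
        print(f"    - {cause}")
```

Ranking or pruning the causes then becomes a matter of editing the lists, which mirrors the final step above.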


There are no built-in fishbone diagram templates in Microsoft programs, but we’ve made a few free ones for you to use that you can download below. Alternatively, you can make one yourself using the following steps:

  • In a fresh document, go to Insert > Shapes
  • Draw a long arrow from left to right, and add a text box on the right-hand side. These serve as the backbone and the head of the fish.
  • Next, add lines jutting diagonally from the backbone. These serve as the ribs, or the contributing factors to the main problem.
  • Next, add horizontal lines jutting from each central line. These serve as the potential causes of the problem.

Lastly, add text boxes to label each element of the diagram.

You can try your hand at filling one in yourself using the various blank fishbone diagram templates below, in the following formats:

Fishbone diagram template Excel

Download our free Excel template below!


Fishbone diagram template Word

Download our free Word template below!


Fishbone diagram template PowerPoint

Download our free PowerPoint template below!


Fishbone diagrams are used in a variety of settings, both academic and professional. They are particularly popular in healthcare settings, especially nursing, and in group brainstorming study sessions. In the business world, they are a frequently used tool for quality assurance and human resources professionals.

Fishbone diagram example #1: Climate change

Let’s start with an everyday example: what are the main causes of climate change?


Fishbone diagram example #2: Healthcare and nursing

Fishbone diagrams are often used in nursing and healthcare to diagnose patients with unclear symptoms, or to streamline processes or fix ongoing problems. For example: why have surveys shown a decrease in patient satisfaction?


Fishbone diagram example #3: Quality assurance

QA professionals also use fishbone diagrams to troubleshoot usability issues, such as: why is the website down?


Fishbone diagram example #4: HR

Lastly, an HR example: why are employees leaving the company?


Fishbone diagrams come with advantages and disadvantages.

Advantages

  • Great tool for brainstorming and mind-mapping, either individually or in a group project.
  • Can help identify causal relationships and clarify relationships between variables.
  • Constant iteration of “why” questions really drills down to root problems and elegantly simplifies even complex issues.

Disadvantages

  • Can lead to incorrect or inconsistent conclusions if the wrong assumptions are made about root causes or the wrong variables are prioritized.
  • Fishbone diagrams are best suited to short phrases or simple ideas—they can get cluttered and confusing easily.
  • Best used in the exploratory research phase, since they cannot provide true answers, only suggestions.

Frequently asked questions about fishbone diagrams

Fishbone diagrams have a few different names that are used interchangeably, including herringbone diagram, cause-and-effect diagram, and Ishikawa diagram.

These are all ways to refer to the same thing: a problem-solving approach that uses a fish-shaped diagram to model possible root causes of problems and troubleshoot solutions.

Fishbone diagrams (also called herringbone diagrams, cause-and-effect diagrams, and Ishikawa diagrams) are most popular in fields of quality management. They are also commonly used in nursing and healthcare, or as a brainstorming technique for students.

Cite this Scribbr article


George, T. (2024, January 29). What Is a Fishbone Diagram? | Templates & Examples. Scribbr. Retrieved February 15, 2024, from https://www.scribbr.com/research-process/fishbone-diagram/


Clin Microbiol Rev. 2004 Jul; 17(3).


Competency Assessment in the Clinical Microbiology Laboratory

Susan E. Sharp

Department of Pathology, Kaiser Permanente and Pathology Regional Laboratory, Oregon Health Science University, Portland, Oregon 97230 (1); Department of Microbiology, CompuNet Clinical Laboratories and Wright State University, Moraine, Ohio 45459 (2)

B. Laurel Elder

The laboratory comprises an invaluable part of the total health care provided to patients. Competency assessment is one method by which we can verify that our employees are competent to perform laboratory testing and report accurate and timely results. To derive the greatest benefit from the inclusion of competency assessment in the laboratory, we must be sure that we are addressing areas where our efforts can be best utilized to optimize patient care. To be competent, an employee must know how to perform a test, must have the ability to perform the test, must be able to perform the test properly without supervision, and know when there is a problem with the test that must be solved. In some cases, competency assessment protocols may demonstrate areas of competence but can fail to disclose incompetence. For example, challenges of low-complexity tasks (such as reading the technical procedure manual) are inferior to challenges that measure understanding and execution of a protocol, and poorly designed competency challenges will probably not detect substandard laboratory performance. Thus, if we are to receive the greatest benefit from our competency assessment programs, which may be time-consuming for the supervisors and the staff as well, we must not only meet the letter of the law but also find a way to make these assessments meaningful, instructive, and able to detect areas of concern. As we address competency assessment in our laboratories, we must understand that when done properly, competency assessment will reward our organizations and assist us in providing the best possible care to our patients.

INTRODUCTION: HISTORY AND OVERVIEW OF CLIA ’67 AND ’88

Few regulations for laboratory testing existed before the late 1960s. However, soon after the introduction of Medicare and Medicaid in the mid-1960s, a decades-long and continuing effort by the U.S. Government to regulate costs and ensure a high quality of health care ensued. To see that the system was not abused financially and that the quality of laboratory results was high, in 1967 Congress passed the federal Clinical Laboratory Improvement Act (CLIA '67) ( 1 ). The Health Care Finance Administration, now the Center for Medicare and Medicaid Services, was created as part of the Department of Health and Human Services to oversee the enforcement of the CLIA '67 regulations as well as to oversee the Medicare and Medicaid programs. However, CLIA '67 required only hospitals and large clinical laboratories to adhere to strict quality control, proficiency testing, test performance, and personnel standards. Each testing facility had to have a certificate and was subject to a compliance inspection every other year. CLIA '67 affected only laboratories engaged in interstate commerce and covered approximately 12,000 laboratories (mainly commercial and hospital). With the exception of a few states, this left laboratories located in physicians' offices or other small health care facilities largely unregulated.

Prior to 1988, fewer than 10% of all clinical laboratories were required by the government to meet minimum quality standards, and a significant percentage of patient testing performed in laboratories was not subject to minimum quality standards ( 8 ). Concerns raised by the media about the quality of cytology testing services, especially Pap smears, were a major catalyst behind passage of the Clinical Laboratory Improvement Amendments of 1988 (CLIA '88). A series of articles that appeared in the Wall Street Journal in the 1980s reported on the deaths of women from uterine and ovarian cancer whose Pap smears had been misread, exposed “PAP mills,” and called into question the quality of laboratories in general ( 3 , 5 , 19 ).

Congress held hearings at which people who had been harmed by laboratory errors testified. These hearings revealed serious deficiencies in the quality of work from physician office laboratories and in Pap smear testing results (R. D. Feld, M. Schwabbauer, and J. D. Olson, 2001, The Clinical Laboratory Improvement Act [CLIA] and the physician's office laboratory; Virtual Hospital, University of Iowa College of Medicine [www.vh.org/adult/provider/pathology/CLIA/CLIAHP.html]). In 1988, Congress once again responded to public concerns about the quality of laboratory testing by passing CLIA '88. CLIA '88 expanded the laboratory standards set by CLIA '67 and extended them to include any facility performing a clinical test. Currently, under CLIA '88, all ∼170,000 clinical laboratories, including physician office laboratories, are regulated.

CLIA '88 greatly broadened the definition of a laboratory. CLIA '88 defines a laboratory as “a place where materials derived from the human body are examined for the purpose of providing information for the diagnosis, prevention or treatment of any disease or impairment of, or assessment of the health of human beings. Laboratories may be located in hospitals, freestanding facilities or physician offices” ( 11 ). For the first time, federal laboratory regulation was site neutral. The level of regulation was determined by the complexity of the tests performed by the laboratory rather than where the laboratory was located. Physician office laboratories, dialysis units, health fairs, and nursing homes were all covered under the new law, along with other previously exempt and nonexempt laboratories. The CLIA '88 regulation unified and replaced past standards with a single set of requirements that applied to all laboratory testing of human specimens. Standards for laboratory personnel, quality control (QC), and quality assurance were established based on test complexity and potential harm to the patient. The regulations also established application procedures and fees for CLIA registration as well as enforcement procedures and sanctions applicable when laboratories fail to meet standards.

The purpose of CLIA '88 is to ensure that all laboratory testing, wherever performed, is done accurately and according to good scientific practices and to provide assurance to the public that access to safe, accurate laboratory testing is available. The ability to make this assurance has become even more urgent as knowledge of the impact of medical errors has reached both the medical and public arenas ( 13 ). One of the essential components identified as necessary to ensure high-quality test results for patients was employee training and competency. Thus, CLIA '88 set forth requirements for performance and documentation of initial personnel training and ongoing assessment of competency ( 11 ).

The following section outlines the sections of CLIA '88 that pertain to personnel training and competency assessment. As stated above, current governmental mandates make it necessary to assess the competency of all laboratory workers who handle patient specimens. The mandates are specific in what must be assessed; however, they do allow for considerable discretion on how to implement some of these specific assessments in a laboratory setting.

CLIA '88 outlines six areas that must be included as part of a laboratory competency assessment program; these are (i) direct observation of routine patient test performance; (ii) monitoring the recording and reporting of test results; (iii) review of intermediate test results, QC records, proficiency testing results, and preventive maintenance records; (iv) direct observation of performance of instrument maintenance and function checks; (v) assessment of test performance through testing previously analyzed specimens, internal blind testing samples, or external proficiency testing samples; and (vi) assessment of problem-solving skills ( 11 ).

To measure compliance with the CLIA '88 regulations, the College of American Pathologists (CAP) conducted a study in 1996 (CAP QProbes program) to survey employee competence assessment practices in departments of pathology and laboratory medicine (12). The goals of the study were to measure institutional competency assessment practices, to assess the compliance of each institution with its own practices, and to determine the competency of specimen-processing personnel. This three-part study consisted of a questionnaire concerning current competency assessment practices, evaluation of compliance with these practices using personnel records, and a written appraisal of the competence of five specimen-processing staff members per institution. The study surveyed a total of 552 institutions that participated in the CAP 1996 QProbes program (12). Their results showed that 89.2% of institutions had a written competency plan and that of those, 90.3% used their plan for microbiology. Approximately 98% of institutions reported reviewing employee competence at least annually; this consisted of direct observation in 87.5% of laboratories surveyed, review of test or QC results in 77.4%, review of instrument preventive maintenance in 60%, written testing in 52.2%, and other methods of assessment in 20.8%. When measuring adherence to the laboratory's own competence plan, it was found that the percentage of laboratory employees who complied was 89.7% when assessed using direct observation, 85.8% when assessed by reviewing QC and patient test results, 78% when assessed by reviewing instrument records, and 74% when assessed using written testing; 90.4% of new employees were assessed as indicated per policy, and 90% of employees were found to have responded satisfactorily to a written competency assessment regarding specimen processing. Failure to comply with the laboratory's own competence plan ranged from ca. 1 to 6.4%, and employees who failed competency assessment were not allowed to continue their usual work in 8.6% of institutions.

This study concluded that opportunities for improvement in employee competency assessments were numerous. Toward these improvements, the CAP provided several suggestions, including that direct observation can be used for assessing technical skills (as can patient and QC specimens), judgment and analytical decision-making processes, and teaching and training of personnel. The CAP also noted that communication, judgment, and analytical decision making are essential skills that are rarely evaluated but that when they are evaluated, written testing should be used since interpretation of these skills using direct observation is highly subjective. In addition, the CAP recommended that laboratory employees who fail an assessment should not be allowed to perform these tasks if the competency assessment is a valid test of their skills, knowledge, and abilities. The CAP also concluded that written testing was the one method of evaluation with the poorest compliance; thus, it did not recommend that written testing be used as an element of a competency assessment plan unless it can be performed consistently or is used as part of an assessment of communication and judgment skills.

The CAP QProbe suggested that “opportunities for improvement in employee competency assessment are numerous” ( 12 ), and our own experiences in presenting workshops on this topic at the American Society for Microbiology general meetings confirm that many laboratories continue to struggle with the design of a competency assessment program. The following is intended to provide guidance to supervisory personnel in clinical microbiology laboratories in the development and implementation of an effective competency assessment program and is taken, in part, from the 2003 Cumitech entitled Competency Assessment in the Clinical Microbiology Laboratory ( 4 ).

Competency assessment in the clinical laboratory, as mandated in U.S. law since 1988 as part of CLIA '88, is published in the Federal Register as part of the Code of Federal Regulations (CFR). The CFR defines the requirements for initial training verification, initial competency assessment, and ongoing competency assessments of laboratory personnel (11). As a brief explanation of the regulation titles, the number “42” indicates “Public Health,” CFR stands for “Code of Federal Regulations,” “493” indicates “Laboratory Requirements,” and the numbers “1445” or “1451” are the section standards. These standards were enacted on 28 February 1992, amended on 19 January 1993, and revised on 1 October 2002. They can be accessed online at www.gpoaccess.gov/cfr/index.html. Included below are the pertinent CFRs relating to competency assessments in the clinical laboratory.

Code of Federal Regulations—42CFR493.1445. Standard: Laboratory Director Responsibilities

“Ensure that prior to testing patient's specimens, all personnel have the appropriate education and experience, receive the appropriate training for the type and complexity of the services offered, and have demonstrated that they can perform all testing operations reliably to provide and report accurate results.

“Ensure that policies and procedures are established for monitoring individuals who conduct pre-analytical, analytical, and post-analytical phases of testing to assure that they are competent and maintain their competency to process specimens, perform test procedures and report test results promptly and proficiently, and whenever necessary, identify needs for remedial training or continuing education to improve skills.

“Specify, in writing, the responsibilities and duties of each consultant and each supervisor, as well as each person engaged in the performance of the pre-analytical, analytical, and post-analytical phases of testing. This should identify which examinations and procedures each individual is authorized to perform, whether supervision is required for specimen processing, test performance or result reporting and whether supervisory or director review is required prior to reporting patient test results.”

Code of Federal Regulations—42CFR493.1451. Standard: Technical Supervisor Responsibilities

“The technical supervisor is responsible for identifying training needs and assuring that each individual performing tests receives regular in-service training and education appropriate for the type and complexity of the laboratory services performed.

“The technical supervisor is responsible for evaluating the competency of all testing personnel and assuring that the staff maintain their competency to perform test procedures and report test results promptly, accurately and proficiently. The procedures for evaluation of the staff must include, but are not limited to—

“1. Direct observation of routine patient test performance, including patient preparation, if applicable, specimen handling, processing and testing.

2. Monitoring the recording and reporting of test results.

3. Review of intermediate test results or worksheets, quality control records, proficiency testing results, and preventive maintenance records.

4. Direct observation of performance of instrument maintenance and function checks.

5. Assessment of test performance through testing previously analyzed specimens, internal blind testing samples or external proficiency testing samples.

6. Assessment of problem solving skills.”

ACCREDITATION

The three most widely used CMS-approved accreditation programs are the Laboratory Accreditation Program from the CAP, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), and COLA, formerly known as the Commission on Office Laboratory Accreditation. Although each organization's testing requirements are at least equivalent to those of CLIA '88, they have somewhat different testing standards and philosophies in reaching the goal of quality laboratory testing. The CAP and the JCAHO have guidelines that include several items dealing with initial training and competency assessment of laboratory personnel as a requirement for laboratory certification or accreditation. The requirements for competency assessment by each of these organizations are discussed below.

College of American Pathologists

The CAP survey checklists currently include questions pertaining to CLIA '88 and assessment of competency for laboratory personnel (CAP 2003, Commission on Laboratory Accreditation, Laboratory Accreditation Program, Laboratory General Checklist: http://www.cap.org/apps/docs/laboratory_accreditation/checklists/checklistftp.html). These questions are included in the GENERAL area of the laboratory checklists in the PERSONNEL section. Specific questions, as well as “Notes” and “Commentary,” contained in the 2003 CAP checklists are indicated below. As a point of explanation, CAP guidelines are divided into “Phase I” and “Phase II” deficiencies. These deficiencies are defined by CAP as follows: “Deficiencies to Phase I questions do not seriously affect the quality of patient care or significantly endanger the welfare of a laboratory worker. If a laboratory is cited with a Phase I deficiency, a written response to the CAP is required, but supportive documentation of deficiency correction is not needed. Deficiencies to Phase II questions may seriously affect the quality of patient care or the health and safety of hospital or laboratory personnel. All Phase II deficiencies must be corrected before accreditation is granted by the CLA. Correction requires both a plan of action and supporting documentation that the plan has been implemented.” The CAP guidelines that address competency assessment are included in Table 1. CAP guidelines can be accessed at www.cap.org.

Table 1. CAP guidelines addressing competency assessment

The Joint Commission on Accreditation of Health Care Organizations

The JCAHO began evaluating hospital laboratory services in 1979. Since 1995, clinical laboratories surveyed using JCAHO standards have been deemed to be certifiable under CLIA '88 requirements. The current JCAHO laboratory standards include competency assessment of personnel under the Human Resources requirements and mandate that the organization provide for competent staff either through traditional employer-employee arrangements or through contractual arrangements with other entities or persons (Joint Commission on Accreditation of Health Care Organizations, 2003, 2004 Laboratory Standards: http://www.jcaho.org). JCAHO requires an initial review of credentials and qualifications of employees; it also requires that experience, education, and abilities be confirmed during orientation. JCAHO also mandates that the organization provide ongoing in-service and other education and training to increase staff knowledge of specific work-related issues and perform ongoing, periodic competence assessment to evaluate the continuing abilities of staff members to perform throughout their association with the organization (http://www.jcaho.org). The specific JCAHO standards involving competency assessment are indicated in Table 2.

Table 2. JCAHO standards regarding competency assessment

ELEMENTS OF A COMPETENCY ASSESSMENT PROGRAM

For a laboratory to comply with federal regulations and national accrediting agencies' guidelines, a system must be in place that will allow verification of the initial training of staff and assessment of competence twice in the first year of employment and annually thereafter. Although CLIA '88 defines what must be tested in order to assess competence in laboratory employees, it does not specifically spell out how to do this assessment. This is reflected in a study by Christian et al., who interviewed a sample of 20 laboratories, including hospital, blood bank, commercial reference, physician office, and independent laboratories from 12 states (2). They found that assessing the competence of laboratory personnel was a complex issue reflecting the dynamics and environment of each unique laboratory. Their research found no consistent method of implementation of competency assessment. This is because there are many approaches and tools that can be utilized to meet the federal regulations. Four additional articles, specifically targeting competency assessment in clinical microbiology, have been published and can be reviewed prior to designing a competency assessment program for a microbiology laboratory (4, 14, 15, 18). In addition, tools and programs for use in laboratory competency assessment have also been included in publications concerning laboratory disciplines other than microbiology (6, 7, 9, 10, 20). One must also keep in mind that parts of a competency assessment program may be intimidating to some employees, some of whom may feel that it could jeopardize their relationship with coworkers. Care must be taken to assure the staff that the purpose of these programs, although required to meet governmental and accreditation agency requirements, is to identify areas where improvements can be made to ensure quality patient care.

As stated above, there are six areas that must be included as part of a competency assessment program: (i) direct observation of routine patient test performance; (ii) monitoring the recording and reporting of test results; (iii) review of intermediate test results, QC records, proficiency testing results, and preventive maintenance records; (iv) direct observation of performance of instrument maintenance and function checks; (v) assessment of test performance through testing previously analyzed specimens, internal blind testing samples, or external proficiency testing samples; and (vi) assessment of problem-solving skills (11). Ways to include each of the above six areas in a competency assessment program are discussed in greater detail in the following sections and summarized in Table 3. An example of a competency assessment form for bacteriology is reprinted from Cumitech 39 (4) and included in Fig. 1; a partially completed form is included in Fig. 2 as an example of how this form can be used. The reader is referred to Cumitech 39 for additional examples of competency assessment forms (4).

Fig. 1. Example of how the six areas of required CLIA competency assessment can be addressed and documented. FQ, fluoroquinolones. Reprinted from reference 4 with permission.

Fig. 2. Example of how the assessment form can be used for documentation of competency.

Table 3. Summary of competency assessment

The six areas that must be included as part of a competency assessment program are discussed in detail in the following sections.
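Purely as an illustration (the data structure and field names below are my own, not a mandated or Cumitech format), the six required areas map naturally onto a per-employee checklist that a supervisor could track:

```python
# The six CLIA '88 competency assessment areas (42CFR493.1451).
CLIA_AREAS = [
    "Direct observation of routine patient test performance",
    "Monitoring the recording and reporting of test results",
    "Review of intermediate results, QC, proficiency testing, and maintenance records",
    "Direct observation of instrument maintenance and function checks",
    "Testing of previously analyzed, blind, or external proficiency samples",
    "Assessment of problem-solving skills",
]

def new_assessment(employee):
    """Blank competency record: one entry per required area (None = not yet documented)."""
    return {"employee": employee, "areas": {area: None for area in CLIA_AREAS}}

record = new_assessment("Example Tech")
record["areas"][CLIA_AREAS[0]] = "Observed blood culture workup; met all checklist items"
pending = [area for area, note in record["areas"].items() if note is None]
print(f"{len(pending)} of {len(CLIA_AREAS)} areas still to document this period")
```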

Direct Observation of Routine Patient Test Performance

Direct observation is the actual observation of work as it is being performed by laboratory staff. These observations are not limited to test performance but include all processes in which the employee is involved, including specimen collection, preparation of the specimen for laboratory testing, and the actual testing of the specimen. Direct observation can be the most time-consuming way to monitor employee competency (particularly when the laboratory is large), and the areas to monitor should be carefully selected to maximize gains from the time spent in the process. For example, areas which involve a higher-than-average degree of decision making, which may have a major impact on patient care if performed incorrectly, or which have been found over time to have a greater degree of employee variability might all be good prospects for direct observation. Smaller laboratories with only a few staff members may find direct observation to be less onerous, and these laboratories can be more inclusive in the areas chosen for observation. Elder and Sharp provide an example that utilizes a statement included in the laboratory's competency assessment program indicating that a certain percentage of routine work is observed through direct visual evaluation. This can be followed by either a specific listing of tests to be observed or a general listing of tests that may be included in the direct-observation portion of competency assessment ( 4 ). McCaskey and LaRocco utilized direct observation in employee competency assessment of processing and reporting of new positive blood cultures, reading and reporting of positive routine cultures, automated identification procedures, susceptibility testing, rapid antigen testing, direct smears and fluorescent smears, as well as a large variety of biochemical testing performed ( 15 ). They also included a variety of checks while performing direct observation, including adherence to written protocols, accurate interpretation of test reactions, and appropriate notification of results, as well as many others. McCarter and Robinson utilized direct observation to assess safety and specimen-processing procedures in mycobacteriology and QC ( 14 ). One must keep in mind that CLIA '88 mandates that “at least” routine patient test performance, as discussed here, and performance of instrument maintenance and function checks (see below) be assessed by direct observation.

Monitoring the Recording and Reporting of Test Results

Elder and Sharp indicate that monitoring the recording and reporting of test results requires a review of results for the proper and correct recording and reporting of patient testing ( 4 ). This is most easily accomplished either by documentation of observation of an employee writing or entering patient test results on report forms or into the computer or by a review of worksheets with computer entries for appropriate recording of patient results. This review can be done at the time a final report is verified (before the results have been released) or after verification through comparison of worksheets and computer printouts. McCarter and Robinson reviewed worksheets and patient records in bacteriology to assess blood culture competency. This method was also applied in their laboratory to selected areas of the mycobacteriology, mycology, virology and serology sections ( 14 ).

Review of Intermediate Test Results or Worksheets, QC Records, Proficiency Testing Results, and Preventive Maintenance Records

Review of results and records may also be accomplished by directly observing an employee when writing or entering preliminary patient test results onto report forms or into the computer or by reviewing worksheets or computer entries for appropriate recording of preliminary patient results ( 4 ). Unless all worksheets or reports are going to be reviewed, effort should again be taken to ensure that the time spent reviewing test recording and reporting provides the best assessment of competency (e.g., review of positive cultures, review of results from critical specimens, and review of worksheets from culture types with complicated workups). Supervisor (or designee) review of QC records, proficiency testing results, and preventative maintenance records is most easily performed as a documented review of previous data entries, as is already routinely performed in laboratories to meet the QC requirements for accreditation ( 4 ).

Direct Observation of Performance of Instrument Maintenance and Function Checks

Direct observation must be done when employees are performing maintenance procedures and checks of instruments. Documentation of these observations is necessary for competency assessment and cannot be performed by an alternative method ( 4 , 11 ). This should be assessed for each piece of equipment that the person being assessed is trained to operate ( 4 ). McCaskey and LaRocco utilized direct observation in all activities related to instrument monitoring, maintenance, and function checks, while McCarter and Robinson utilized direct observation for instrument function checks for RPR (Rapid Plasma Reagin) testing in the serology section ( 14 , 15 ).

Assessment of Test Performance through Testing Previously Analyzed Specimens, Internal Blind Testing Samples, or External Proficiency Testing Samples

Blind retesting of previously analyzed specimens can be used as an assessment in a number of different areas of the laboratory, such as appropriate setup based on the source of the unknown organisms, correct identification of unknown organisms, appropriate titers of infectious-diseases serologies, testing and reporting of antimicrobial susceptibility results, and many more ( 4 ). In addition to using previously analyzed specimens, performing testing on unknown samples or split samples as part of a proficiency testing program or as part of an internal quality assurance program can serve to meet this requirement ( 4 ). Optimally, each employee is assigned at least one proficiency testing sample that applies to each area included in his or her scope of responsibility per competency evaluation period ( 15 ). Utilizing internal blind unknown samples prepared by the supervisory staff from known organisms, seeded specimens, or previously analyzed samples can accomplish this goal. As another example, McCarter and Robinson utilized previously analyzed specimens to assess competency for agglutination and enzyme immunoassay testing in the serology section. Employees were expected to retrieve specimens from coded samples maintained at −70°C and incorporate them into their daily testing ( 14 ).

Assessment of Problem-Solving Skills

Assessment of problem-solving skills may be accomplished in several ways ( 4 ). Examples include (i) asking employees to respond orally or in writing to simulated technical or procedural problems (perhaps in the form of case studies) and (ii) asking employees to document actual problem-solving issues that they have handled in the laboratory within the last year.

A specific example of a problem-solving skill as utilized by a microbiology technologist is outlined as follows. A situation arose in which cultures from two patients, processed for mycobacteria on the same day, both grew Mycobacterium tuberculosis. One of the patients (patient A) was smear positive with numerous acid-fast bacilli, while the other patient (patient B) was smear negative for acid-fast bacilli. The culture from patient A was positive after 10 days of incubation, while the culture from patient B was positive after 18 days of incubation. The technologist (Tech 1) noticed this situation and questioned whether patient B's sample may have been contaminated by the smear-positive sample from patient A. It was decided, after consultation with the supervisor, that both M. tuberculosis isolates would be sent for molecular testing to determine if they were in fact the same organism. Tech 1 discussed the situation with the less experienced technologist (Tech 2) who initially processed the specimens, in order to determine how this might have happened. No obvious reason was identified. Tech 1 and the supervisor decided that competency assessment might shed some light on the situation, and Tech 1 was assigned to carry out direct observation of Tech 2 as she processed specimens for mycobacterial smear and culture. While carrying out this observation, Tech 1 found that Tech 2 was not capping specimen transfer tubes after adding a patient's sample prior to transferring specimen from the next patient. Tech 1 discussed this with the supervisor, and both believed that this break in protocol may have led to the suspected contamination (which was subsequently confirmed by molecular testing). Due to this deviation from the standard protocol by Tech 2, the supervisor decided that direct observations were warranted for all the Mycobacterium-processing technologists to ensure that proper techniques were being adhered to by everyone. In this instance, the problem-solving skills of Tech 1 led to competency assessment by direct observation of Tech 2, which solved the issue at hand and assisted the laboratory in improving the quality of future results from the mycobacteriology laboratory.

The above example was taken, in part, from the American Society for Microbiology's Division C web site on Competency Assessment (www.asm.org/Division/c/competency.htm; accessed 21 December 2003; reprinted with permission). This site also includes other examples of problem solving as well as other issues dealing with competency assessment in the clinical microbiology laboratory.

Laboratory employees solve problems very often but are frequently not aware that they are doing so. Encouraging employees to document problem-solving situations as they occur during the year (rather than once a year when summarizing competency assessments) will facilitate this portion of the assessment process. McCarter and Robinson required at least three problem-solving examples per year per employee (14), while McCaskey and LaRocco required five separate written examples of problem-solving skills per competency evaluation period (15). Further, McCaskey and LaRocco required each employee to address four areas in his or her problem-solving examples: (i) identify the problem, (ii) perform and document the steps taken to correct the problem, (iii) resolve the problem by adhering to and correctly applying hospital and departmental procedures, and (iv) if resolution is not possible, document the reason why a resolution could not be reached and indicate suggestions for further action that may contribute to resolution of the problem (15).

Both McCaskey and LaRocco (15) and McCarter and Robinson (14) utilized written tests to assess the individual's scope of knowledge in a specific area. However, the use of examinations (written or practical), although aiding the process of competency assessment, will not completely satisfy the regulatory requirements or provide a complete look at an employee's competence (14, 15; Virtual Hospital [www.vh.org/adult/provider/pathology/CLIA/CLIAHP.html]). Written examinations can be particularly useful in providing problem-solving scenarios but are generally unable to comprehensively reflect the many different facets of knowledge and judgment that must be used by employees in job performance. The CAP does not highly recommend written testing, since it was the method of evaluation with the poorest compliance; the CAP recommends that written testing not be used as an element of a competency assessment plan unless it can be performed consistently (12).

DEVELOPMENT OF A COMPETENCY PROGRAM

The initial task of developing a competency assessment program can seem daunting, but it can be approached in a number of different ways. The steps taken to define the program should be included as part of the laboratory's competency program procedure. The steps commonly performed during the development of a program are discussed in the following sections.

Define Areas Requiring Competency Assessment

One of the most time-consuming portions of program development is identification of the areas requiring competency assessment, and this necessitates analysis of like tasks and skills (4). For example, identification of an isolate from a blood agar plate will be done in a similar fashion regardless of the source (e.g., urine, blood, or tissue) of the specimen. Therefore, it is not necessary to assess individual competency in each work area or division in the laboratory. On the other hand, the ability to assess whether an organism needs to be identified will vary from source to source. Similarly, the performance, recording, and QC of simple latex tests will not vary considerably from kit to kit and may be adequately assessed through evaluation of the employee's performance with any one of several different kits. Performance of this first step in program development must be done in sufficient detail that it is clear (to the person performing the assessment as well as to an inspector) what will be assessed, but with consideration of the similarity between many laboratory tasks. Organization of the areas to be assessed may be performed by bench assignment (respiratory specimens, stool specimens, etc.) or by test type (biochemical test, serologic test, etc.). As an example, areas requiring competency assessment for the anaerobe bench might include culture setup, selection of appropriate organisms for identification, identification of organisms, utilization of the anaerobic chamber, reporting of test results, and notification of critical values. One approach is to emphasize areas or methods in the design of the competency assessment program that are either problem-prone or at high risk for error. Data from the laboratory's quality improvement processes (reviews of amended reports, incident reports, etc.) may be helpful in making this determination.
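The outcome of this step can be made auditable by recording the area definitions for each bench in a simple, reviewable structure. Below is a minimal sketch in Python, under the assumption that a laboratory chooses to keep such definitions in code or configuration rather than on paper; the bench name and its areas are illustrative, taken from the anaerobe bench example above.

```python
# Minimal sketch: areas requiring competency assessment, organized by
# bench assignment. The bench name and its areas are illustrative only,
# taken from the anaerobe bench example in the text.
ASSESSMENT_AREAS = {
    "anaerobe bench": [
        "culture setup",
        "selection of appropriate organisms for identification",
        "identification of organisms",
        "utilization of the anaerobic chamber",
        "reporting of test results",
        "notification of critical values",
    ],
    # Other benches (respiratory specimens, stool specimens, etc.) or
    # test types (biochemical, serologic, etc.) would be added here.
}

def checklist(bench: str) -> list[str]:
    """Return the areas to assess for a given bench (empty if unknown)."""
    return ASSESSMENT_AREAS.get(bench, [])

if __name__ == "__main__":
    for area in checklist("anaerobe bench"):
        print(f"[ ] {area}")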

McCaskey and LaRocco used a team-based approach to define ongoing activities in QC and quality assurance that could easily be included in the competency program (15). They drafted lists of all tests and procedures for each subspecialty and developed a program in which an employee participated in the process by selecting items from the test lists and scheduling the exercise to take place with an observer at a mutually convenient time. They felt that this participation created a more cooperative spirit between the observer and the person being evaluated and helped to eliminate negative associations with the competency assessment exercises. With this approach, care must be taken that employees are not always calling on their friends to act as observers, which could bias the results of the program. These authors also stratified their activities into categories (category 1, 2, 3, or 4). Category 1 items were competencies deemed most critical for patient care, and employees were to be evaluated in all category 1 items during each evaluation period. In the other categories, the employee could choose from several items for inclusion in the evaluation process (15). Similar to McCaskey and LaRocco, McCarter and Robinson created forms based on procedure-oriented tasks for each specialty area to be used in their competency assessment program (14). Competency assessment must also be specific for each job description; this must be taken into account when defining areas requiring competency assessment, and competencies specific for each position within the microbiology laboratory must be included (12).

One of the challenges for any laboratory in establishing a competency assessment program is defining the extent of assessment that will be performed in each area once training is completed (4). Is it adequate to observe an employee work up one blood culture, or do 5 or 10 blood culture workups need to be observed? Should an employee be asked to demonstrate his or her ability to solve problems in each area of the laboratory, or is it sufficient to document problem-solving skills in only two or three key areas? Each laboratory will need to determine the extent of assessment in a way that best fits its size and complexity. For example, observing five anaerobic culture workups for a successful competency assessment might prove impossible in a small laboratory where only one or two anaerobic cultures are performed per week, and equally difficult in a large laboratory where multiple technologists perform anaerobic cultures. In this situation, instead of observing individuals performing anaerobic culture workup, direct observation of competency might be achieved through the use of a practical examination. Plates with important anaerobes and mixed organisms could be prepared and used to observe the employee's subsequent workup. If all employees performing anaerobic cultures were tested at the same time, the setup time would be reduced (4).

A helpful approach to deciding how much to include in a competency assessment is to make this goal an integral part of other activities already occurring routinely in the laboratory (4). For example, performing competency assessments during routine review of QC records, review of positive-culture worksheets by a supervisor or designee, and review of results of proficiency testing surveys in which employees have participated are ways to incorporate the competency assessment program into the daily activities of the laboratory and lessen the workload associated with mandated competency assessments (4).

Identify Methods of Competency Assessment

The methods used in competency assessment should initially be driven by what is required by CLIA '88 as listed in the CFR for routine patient test performance (observation, review, proficiency testing, etc.).

Each type of assessment does not need to be performed for each area being assessed, and the type of assessment tool selected should be based on whether it will provide an accurate reflection of employee competency (4). As part of this process, it is very helpful to define what will be considered a successful demonstration of competency. This definition may differ considerably between an employee who has just been trained in a new area and is demonstrating competency for the first time and an employee who is demonstrating ongoing competency. For example, criteria established for an employee being evaluated following initial training in the anaerobe area will primarily utilize direct observation to assess the employee's ability to correctly follow the laboratory procedure while inoculating and incubating specimens for anaerobic culture; to identify inappropriate specimens for anaerobic culture; to demonstrate or describe the procedure followed when inappropriate specimens are received in the laboratory; to appropriately follow laboratory procedures while interpreting, working up, and reporting the results of anaerobic cultures; and to perform all required maintenance of the anaerobic chamber. In contrast, an evaluation of ongoing competency in the area of anaerobes for an experienced employee could be performed by a combination of several of the following: direct observation of the employee's workup of several cultures, indicating no deviations from written procedures; daily supervisor review of employee worksheets of positive cultures, indicating that the employee correctly selects appropriate identification and susceptibility tests and has followed the critical-value policy correctly; demonstration by the employee of the required maintenance for the anaerobic chamber; demonstration (through documentation from actual examples or through a practical examination) of the employee's ability to correctly identify and resolve problem situations with anaerobic cultures; or the use of proficiency testing samples to assess the ability of the employee to correctly identify anaerobic bacterial pathogens (4).
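One way to keep such criteria consistent and auditable is to encode them separately for initial and ongoing assessment. The following is a minimal sketch under the same assumptions as the earlier sketch; the criterion names are illustrative paraphrases of the anaerobe example, and, as noted above, a real program might require only a combination of several of the ongoing items.

```python
# Minimal sketch: criteria for a successful demonstration of competency,
# kept separately for initial versus ongoing assessment. Names are
# illustrative paraphrases of the anaerobe example in the text.
ANAEROBE_CRITERIA = {
    "initial": {
        "direct observation of inoculation and incubation",
        "identification of inappropriate specimens",
        "interpretation, workup, and reporting per procedure",
        "anaerobic chamber maintenance",
    },
    "ongoing": {
        "direct observation of several culture workups",
        "supervisor review of positive-culture worksheets",
        "demonstration of chamber maintenance",
        "documented or examined problem resolution",
        "proficiency testing samples",
    },
}

def unmet(phase: str, documented: set[str]) -> set[str]:
    """Return the criteria in a phase not yet documented as met.

    For ongoing competency the text requires only a combination of
    several items, so a laboratory might instead check that a minimum
    number of criteria have been met.
    """
    return ANAEROBE_CRITERIA[phase] - documented
```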

Problem solving, as already mentioned, could also be documented by employees throughout the year. One suggestion is to provide employees with a notebook, which will fit in a lab coat pocket, that can be used for documentation as situations occur and that will then be turned in to the manager at a scheduled time (4). This notebook could also include a schedule of other required elements of annual competency assessment (e.g., observed instrument maintenance) that the employee would be responsible for scheduling with a supervisor or designee. Use of such a notebook also helps place responsibility for part of the competency assessment with the employee. CLIA '88 does not make clear the number of assessments that must be performed in this area, only that assessment must be done. Each individual laboratory will have to determine the number of competency assessments it will require in this area or the areas in which problem solving must occur.

Determine Who Will Perform Competency Assessment

Part of the written procedure for a competency program should include how competency assessment is determined and who will be allowed to perform the assessment. Although CLIA '88 states that the supervisor is responsible for competency assessment, it does not state that all assessments must be performed by the supervisor. Supervisors may choose to designate certain employees (e.g., lead technologists or employees with several years of documented successful competency) to assist with assessments (14, 15). These employees may be authorized to perform assessment in only a few tests or in multiple laboratory areas. In addition, these employees can perform competency assessment of supervisory personnel who also perform patient testing (4). The ability of certain staff members to serve as assessors of the competency of other employees should be documented on their own competency assessment, e.g., “This employee has demonstrated competency in the area of {…} within the laboratory and is capable of assessing the competency of others in this area” (4). In this way, it is obvious to an inspector that a qualified employee performed the competency assessment.

Define the Documentation of Competency Assessment

A variety of manual and computerized tools are available for documentation of competency assessment, and examples of these are included in selected references (4, 6, 7, 9, 14-18; ASM Division C website [www.asm.org/Division/c/competency.html]; and in Antimicrobial Susceptibility Testing—a self-study program, Department of Health and Human Services and the CDC Foundation, 2002 [www.aphl.org/ast.cfm]). There are also a variety of commercially available manual guides (4, 16) and web-based or software systems (SoftETC [www.soft-etc.com]; Comptec-ASCP [www.asco.org]; Media Lab, Inc. [www.medialabinc.net]; GramStain-Tutor [medical.software-directory.com]; and ExamManager [www.exammanager.com]) available to assist in the development of a laboratory competency assessment program. Unless a decision is made to utilize one of these systems, easily used forms will have to be developed for documentation and to provide evidence of who was evaluated, what was evaluated, how it was evaluated, when it was evaluated, who performed the evaluation, what was done if problems were identified, whether the employee is authorized to perform and release results independently or whether review of the work is required before results are released, and whether the employee can serve as a competency assessor for other employees. Since the medical director is ultimately responsible under CLIA '88 for determining who will be allowed to work in the laboratory and what testing they can perform with or without supervision, it is prudent for the medical director either to review and sign the employee competency documentation or to delegate this task in writing to the supervisor or other appropriate personnel. Competency assessment records and forms should be retained for the entire time an individual is employed at the laboratory. Once an individual is no longer employed, discussions with Human Resources personnel can determine the appropriate length of time that competency records should be maintained for that facility.
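The evidentiary fields listed above map naturally onto a simple record structure. Here is a minimal sketch, assuming a laboratory develops its own forms rather than adopting one of the commercial systems named above; all field names and example values are illustrative.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CompetencyRecord:
    """One documented competency assessment; illustrative sketch only.

    Fields mirror the evidence the text says a form must capture: who
    was evaluated, what and how, when, by whom, what was done about any
    problems, and the resulting authorizations.
    """
    employee: str
    area_assessed: str                      # what was evaluated
    method: str                             # how it was evaluated
    assessed_on: date                       # when it was evaluated
    assessor: str                           # who performed the evaluation
    problems_identified: Optional[str] = None
    corrective_action: Optional[str] = None
    may_release_results_independently: bool = False
    may_assess_others: bool = False
    signed_by_director_or_delegate: bool = False

# Illustrative use, echoing the mycobacteriology example earlier in the text.
record = CompetencyRecord(
    employee="Tech 2",
    area_assessed="mycobacterial specimen processing",
    method="direct observation",
    assessed_on=date(2004, 3, 1),   # hypothetical date
    assessor="Tech 1",
    problems_identified="transfer tubes left uncapped between patients",
    corrective_action="retraining followed by repeat observation",
)
```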

REMEDIATION

The goal of competency assessment is to identify potential problems with employee performance and to address these issues before they affect patient care. Thus, performance and documentation of remediation is a critical component of the competency assessment process and is required by both CAP and JCAHO. Unless an employee has been deliberately negligent in the performance of his or her work, remediation should not be punitive but should, instead, be educational, and it should always be directed at improving performance (4). Employees who recognize that their mistakes will be addressed with the aim of performance improvement will be far more likely to seek assistance and admit problems than those who fear embarrassment, disciplinary action, or termination (D. Marx, 2001, Patient safety and the “just culture”: a primer for health care executives, Columbia University [http://www.mers.tm.net/support/Marx_Primer.pdf]).

A number of approaches can be taken to remedy problems identified through the competency assessment process, and some of these are outlined below (4). Since problems may develop because of the system rather than the employee, the first step is to analyze the problem so that the proper remediation can be identified and implemented. Analysis of the problem starts with looking at the protocols used for laboratory practice. The protocols should be clear and concise; if they are inadequate or confusing, this may account for the employee's competency failure. In proficiency testing, it should be ensured that the sample used as an unknown is adequate and that a problem with the sample itself is not what caused the competency failure. Also, the tools used for evaluation of competency should be clear, so that a consistent standard is applied to all employees.

If the above protocols are deemed sufficient and are not the cause of the competency failure, then one needs to identify the problem the employee is having. Is it a methodology problem? Did the employee not perform the test correctly (i.e., did he or she not follow procedure)? Did the employee not understand the purpose or background of the test (i.e., is he or she unable to solve problems or relate the test to the clinical situation)? Did the employee not understand the components of the test or instrument being used? Was the employee unable to resolve QC problems? Or did the employee perform correctly but make an error in documentation?

If necessary, an appropriate remedial action should be selected (4). First, discussion of the procedure with the employee is warranted to assess whether further action is necessary based on the employee's verbal response. This step may be all that is necessary to identify the reason for the competency failure. Discussion of the procedure in a quality assurance-QC meeting with all employees could help everyone understand how this type of error can be avoided. Additional actions that can be taken with an employee who fails competency include having the employee reread the procedure and discuss it with the supervisor to clarify any misinterpretations, having the employee produce a flow chart to assist him or her in properly performing a procedure, having the employee observe another trained and competent employee, having the employee practice the failed procedure with known specimens, or having the employee correctly retest the same specimen with the procedure that originally failed. Reinstitution of formal training may be necessary if the above opportunities fail to show that the employee is competent. Regardless of the method selected for remediation, it is necessary to repeat the competency assessment once remediation has been completed in order to document successful attainment of competency. As a last resort, it may be necessary to permanently remove an employee from selected duties and reassign him or her to another work area.

When an error or failure of competency was noted by McCaskey and LaRocco, corrective action was required within 30 days of the finding at their institution (15). If the corrective action did not resolve the failure, the employee was not allowed to perform patient testing in that area until he or she had completed further remedial action and had his or her competency reevaluated and determined to be acceptable. Similarly, McCarter and Robinson did not permit employees who failed competency assessment to perform testing in that area until corrective action was determined (14). Following corrective action, the employee was reevaluated, and if the corrective action had been effective, the employee was considered to be competent. If the corrective action was not effective, the individual was not permitted to perform testing in the affected area until remedial training was successfully completed. In general, remediation should be instituted as quickly as possible after identification of a potential problem with employee competency. Each situation can be assessed initially to determine the extent of the problem and to determine whether the employee understands the situation that has occurred, as well as the way it should have been handled. Based on this initial assessment, a decision can be made about whether the employee should be allowed to continue to work independently in the area while further remediation or competency assessment (for example, direct observation) is carried out or whether the employee's work should be restricted until remediation and competence are fully documented.
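The remediation flow described above behaves like a small state machine: a failure triggers corrective action within a defined window, a reevaluation follows, and testing privileges stay restricted until competency is redemonstrated. A minimal sketch follows, assuming a laboratory wishes to track this flow explicitly; the 30-day window follows McCaskey and LaRocco's practice, and the state names are illustrative.

```python
from datetime import date, timedelta
from enum import Enum, auto

class Status(Enum):
    COMPETENT = auto()
    CORRECTIVE_ACTION = auto()  # remediation underway
    RESTRICTED = auto()         # may not perform patient testing in the area

# Corrective action required within 30 days of the finding, per
# McCaskey and LaRocco's institution (15).
CORRECTIVE_ACTION_WINDOW = timedelta(days=30)

def corrective_action_due(finding_date: date) -> date:
    """Date by which corrective action must be completed."""
    return finding_date + CORRECTIVE_ACTION_WINDOW

def after_reevaluation(current: Status, passed: bool) -> Status:
    """Advance the workflow after remediation is reevaluated.

    Effective corrective action restores competency; otherwise testing
    in the affected area stays restricted until remedial training is
    successfully completed and reassessed.
    """
    if current in (Status.CORRECTIVE_ACTION, Status.RESTRICTED):
        return Status.COMPETENT if passed else Status.RESTRICTED
    return current
```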

QUALITY RESULTS

A formal, defined competency program provides the laboratory with a valuable tool for identifying and correcting issues of employee competency. Just as valuable is the use of competency assessment as an ongoing part of the laboratory's quality assurance program to assist managers and supervisors in ensuring that high-quality results are reported. Competency assessment is an integral part of problem analysis and becomes a key tool in ensuring that errors identified through the quality assurance processes are prevented from recurring. Competency assessment procedures can help to identify problems occurring in the technical aspects of laboratory practice and to assess performance deficiencies before they develop into major problems (7, 9, 10, 18).

Competency assessment is also an opportunity to provide continuing education and performance feedback to employees and to document valuable objective information for performance evaluations (15). It can and should be used as a positive experience, one that benefits both employees and employers by helping to ensure that assigned tasks are performed competently.

Is There a Problem in the Laboratory?

Alexander Nicolai Wendt

Department of Psychology, Universität Heidelberg, Heidelberg, Germany

Problem-solving research in the field of psychology has been closely linked to laboratory investigations throughout its development. However, there is a questionable conceptual assumption underlying this commitment to the laboratory, namely the assumption that one can reduce all problem-solving behavior to a cognitive mechanism. When this assumption is examined from a phenomenological standpoint, doubts about its foundations emerge. For when we consider the experiential conditions that characterize a problematic situation, we come to identify several phenomenal aspects that are not taken into account in this approach. A phenomenologically revised notion of the problem therefore demands a modification of the scope of empirical research. First, this paper investigates the configuration of the laboratory as an arena of experience based on Lewin’s field theory. This investigation identifies instructions as a key component of the laboratory. Second, a phenomenological description proposes a novel understanding of the problem. In this part it is shown that it is wrong to presume that problematic situations can be evoked arbitrarily by instructions. Finally, further considerations help outline the empirical requirements for exhaustive research. They call for novel paradigms in empirical psychology, such as live streaming, which are more faithful to the phenomenology of problems.

Introduction

From an everyday point of view, problem-solving appears to be an entirely common occurrence. It seems as if the emergence of any kind of goal or urge brings with it the need for a corresponding solution. Whether selecting a menu for dinner or playing a friendly game of chess, participating in a marathon or working on a mathematical equation, the “goal-directedness” (Ohlsson, 2012) of these activities entails the idea of a solution – which is usually seen as the key feature of problems. This intuitive first look into everyday life favors agreeing with Popper’s notorious credo, “all life is problem-solving” (Popper, 1999).

But is the possibility of there being a solution or mere goal-directedness really sufficient to pose a problem in the first place? One cannot answer such a question from an everyday point of view alone, because this view is already linguistically committed to a vaguely holistic use of the word “problem.” Yet, everyday language indicates some excess of phenomenal meaning by talking about the “pressure” people feel in addressing a problem, the “trouble” caused by a problem, or the habit of “problematizing” something in order to defamiliarize common sense. When saying “I hope this is no problem,” it ought to be implicitly clear that there is no interest in formal goal-directedness but rather some emotional involvement. Equally, talking about a “huge problem” does not indicate an extraordinary goal but an issue of utmost relevance. These colloquialisms hint at a specificity of problems which does not concur with the global notion of “all life is problem-solving.”

For psychology as an empirical discipline, which has sought to investigate problem-solving in the tradition of the classical “human problem solving” approach of Newell and Simon (1972), this ambivalence does not seem to pose an immediate conflict either. Firstly, the theoretical construct of problem-solving can be determined by a functionalist formula, such as Newell and Simon’s idea of the “task environment”: problem-solving here refers to the transformation of an initial state into a goal state by overcoming barriers (for a detailed discussion of the cognitivist paradigm of problem-solving research, see Wendt, 2017a). Secondly, this operationalization is investigated on the basis of empirical data. Thus, although it is unclear what a problem really is, empirical research can still investigate problem-solving, since it makes no such essential claim but rather aims to validate an abstract construct. This is the point where the investigation of constructs disconnects from actual experience, since the investigation itself does not seek to contribute to the understanding of factual problems.

As long as the everyday conception of problems appeared to be captured by the trivial “all life is problem-solving,” psychological problem-solving research did not have to justify its construct by drawing on a functionalist concept. In other words, while problems are taken to be self-evident occurrences, such psychological research intuitively appears to be relevant for the entirety of everyday life. Once one gazes into the phenomenal complexity of problems, however, it becomes apparent that not all life is problem-solving and that no alternative concept is ready to serve as an everyday frame for the psychological construct. Instead, only a thorough investigation of the experiential qualities of problems can determine the actual relevance of psychological investigations.

Various keen and skeptical psychologists have noticed this issue (such as Getzels, 1982; Quesada et al., 2005; Ohlsson, 2012; Funke, 2014). These researchers prefer the ambition of investigating actual experience over the mere elaboration of constructs. Dörner and Funke (2017) have recently highlighted the importance of recovering problem-solving research’s original motivation of engaging with “complexity and uncertainty in the world” (8). The existence of such contributions shows that the scientific field of psychology cannot be represented accurately by means of a single methodological conviction. Rather, the discipline is characterized by a constitutive controversy which nurtures its progress.

So far, there has been little interest in the meaning of instructions, tasks, or demand effects for the experiential constitution of problems, owing to the ostensibly ubiquitous nature of problems (“all life is problem solving”). Phenomenological analysis, however, might be able to show that laboratories sometimes fail to provide the peculiar requirements for experiencing problems. In light of this realization, the aims of problem-solving research can be readjusted: either the research interest favors “tasks” over “problems” as an adequate subject matter, or explorations into less obtrusive empirical methods, such as live streaming, are undertaken.

In the following sections, three steps are undertaken. In the first step (see section “Situations in the Lab”), the situation given to an experimental subject in the arena of the laboratory is examined. This section’s question is whether it is possible to experience a genuine problem in such a situation. In the second step (see section “The Phenomenology of the Problem”), these considerations lead to a phenomenology of the problem. It is asked which experiential qualities are paramount when having a problem. In the last step (see section “Empirical Approaches”), new opportunities to investigate such phenomenologically authentic problems will be explored.

Situations in the Lab

The aim of the first step is to determine the experiential conditions that subjects are faced with when participating in a laboratory investigation. Psychological research has not been oblivious to such effects. On the contrary, various reflections on its own methodology have yielded an advanced understanding of subjects’ behavior in laboratories. However, it is necessary to orient these findings toward those types of situations which are presumed to be problems. The question is whether or not the task environment of laboratories actually hosts the opportunity for a subject to have a problem.

Experience is necessarily situated and having a problem is one way to be situated. Before exploring the nature of problematic situations, however, it is helpful to investigate the status quo of contemporary psychological research in the field of problem-solving, namely the laboratory as an arena, i.e., as a spatially arranged and therefore socially standardized situation. The status of the laboratory as the classical data source and site of investigation in experimental psychology has been subject to continuous skepticism due to an apparent lack of external validity (e.g., Anderson et al., 1999), especially when it comes to inferences made about other contexts from the observations that are made in the laboratory.

Yet, this controversy about the place of generalization as a structural equivalence between the laboratory and the rest of the field is at most indirectly relevant to the present matter. Moreover, this line of argument does not scrutinize the situational singularity of everyday life, since the notion of external validity supposes that good laboratory research is to be judged in accordance with some prevalent normative set of rules. This idea can easily be refuted because it does not emancipate itself from the pre-phenomenological natural attitude. Instead, the basic understanding of the variety of situations applied here has to refuse any prior hierarchy.

In order to grasp the peculiarities of the laboratory, the description should include two approaches. First, there are several properties of the laboratory which are themselves a concern for psychological methodology, specifically for the systematic investigation of the psychological structure underlying the discipline’s data sources. By way of example, an important effect on the behavior of subjects has been labeled “demand characteristics” by Orne (1962). In that study, Orne observed that the subjects’ actions, motivations, and perceptions about the experiment play an active role in the observed behavior. Throughout communication about the experimental situation, such as recruitment, the experimenter’s instructions, or the laboratory’s setup, the subjects perceive “demand cues” which provoke a change of attitude toward the situation.

There are two ways to comprehend these characteristics (Sharpe and Whelton, 2016). Either they are “artifacts,” which means that one sees them as a contamination of an experiment’s reliability, that they can be present or absent in the laboratory, and that the researcher is able to eliminate them. Or they are a “discovery,” which means that they are elements of each and every laboratory situation. As the authors write, “as artifacts, demand characteristics are restricted to those subjects who can express the experimenter’s hypothesis. But, as a discovery, demand characteristics highlight the importance of ascertaining the experimental hypothesis as understood by the research subject” (Sharpe and Whelton, 2016, 360). If we accept the second view, for theoretical reasons developed in what follows, demand characteristics as a discovery do not entail the assumption that experimental subjects try to understand the particular hypothesis of the experiment in which they are currently partaking. Instead, one can understand being part of an experiment as a state which influences the participant’s consciousness in general.

Moreover, the present descriptive attempt can subsume a different group of effects recognized by empirical psychology under the label “task instructions.” Di Mascio et al. (2016, 1) clarify this terminology as follows: “Task instructions may work by providing a motive for problem-solving. Organized cognition—such as that involved in problem-solving—is motivated. Motives are valued goals for the problem-solving activity that can guide the duration and direction of attentional effort.” Hence, different task instructions tend to cause different behavioral responses in the experimental situation even if the remaining setup is constant. This variety can be explained as a result of behavior’s context-dependency (Braem et al., 2017). There are various ways to produce effects of task instructions, such as wording, placement, or elaboration of the instruction.

In the further context of psychologically recognized occurrences in the laboratory situation, effects of compliance (Asch, 1951), social desirability (Crowne and Marlowe, 1960), experimenter expectancies (Rosenthal and Rubin, 1978), the Hawthorne effect (Landsberger, 1958), and framing (Tversky and Kahneman, 1981) all become relevant. Together they illuminate the remarkable impact of the laboratory as an arena or situation on the experimental subjects. However, as empirical effects, they do not by themselves carry any implications, because they merely indicate probabilities of occurrence. This is why it is necessary to integrate these empirical findings into the context of a second, theoretically framed inspection of the laboratory as an arena for investigation.

Such a framework, with its function of describing the situational peculiarities of the laboratory, is Lewin’s field theory. Without subscribing to his topological or vector-psychological attempts to explain behavior and experience more geometrico, one can still utilize the descriptive faculties of Lewin’s field theory as a useful aid to explicating situations. As a driving motivation behind the field-theoretical approach, Lewin states: “The psychological environment has to be regarded functionally as a part of one interdependent field, the life-space, the other part of which is the person” (Lewin, 1939, 878). Based on this assumption, it is possible to explicate the laboratory in its genuinely situational features and to show how it brings about the empirical effects discovered in experimental psychology.

In his major work, Lewin (1936) lays out a conceptual framework by which to understand the totality of facts that determine the behavior of an individual at a certain moment as the “psychological life space” (12) of this individual. However, by leaving no room for any speculative entities, Lewin also concedes that it is impossible to give a “complete representation of even one given situation,” since this “would presuppose the solution of all psychological problems” (82). Instead, Lewin provides a framework of empirical concepts which explains the relation between person and environment as the life space that constitutes the experienced situation. This framework successfully illuminates the decisive factors which characterize the laboratory situation.

Life space is constituted by the individual’s “life situation” as well as “momentary situation” (23), and “the specific problem with which we have to deal in a given case determines whether it is the life situation or the momentary situation which comes more strongly into the foreground” (24). However, the life space does not contain the physical, social, or conceptual world as it is, but merely “to the extent and in the manner in which they affect the individual in his momentary state” (ibid.). Therefore, Lewin employs the terms “quasi-physical,” “quasi-social,” and “quasi-conceptual” facts, even though “a change in the quasi-physical facts in the life space of the person is often the result of an objective change in the physical environment” (27). This contingency between the life space’s content and the physical and social occurrences and concepts which surround it provides a shift of perspective: from the idea of the laboratory situation as one might conceptualize it from the methodological point of view to what the laboratory situation means within the subject’s life space. This meaning depends on the situation’s content. Lewin argues:

“Nevertheless, the content is in no way irrelevant, but is of greatest importance for psychological dynamics. Whether, for instance, an actual goal refers to a present or a future event, whether this event is thought of as something that definitely exists, or as something that is only possible or highly improbable – all this forms an essential characteristic of a goal. Differences in time index and in existential characteristics of the content imply a qualitative difference in the psychological facts themselves, that is, they have formally the position of properties of the psychological facts.” (38).

Thus, the goal that a person has does not depend on its conceptualization in the experimental design, but on what it means in the individual life space. This thought becomes clear when considering Lewin’s notion of “alien influences,” i.e., “influences from outside on the psychological life space” (70). The life space is embedded in a hull of influences that do not immediately determine psychological behavior. Yet, these influences impact the boundary points of the life space by, e.g., connecting it to other regions. The instruction given in the laboratory is seen as such an “alien influence”:

“Every act of influencing another person, whether in laboratory experiment or in everyday life, consists in creating such a hull, one which affects the boundary points of the life space and thereby the life space itself in a certain way.” (75).

Hence, the critical point is the notion of experimental instruction, since this notion is not necessarily itself a quasi-social fact. Yet, it can create such a quasi-fact by its impact on the individual’s life space. Talking about the Zeigarnik effect, Lewin (1940, 19) states that “the instruction to recall given by the experimenter sets up a quasi-need.” This “quasi-need” is the possible impact of the instruction on the experimental subject’s life space. As he says, “it is able to impose certain patterns of action and to build up certain quasi needs” (Lewin, 1936, 192), but there is no identical representation of the instruction’s content.

At this point it is important to relate Lewin’s theoretical reflections to the psychological effects encountered in the arena of the laboratory, because the isolated notion of quasi-needs still allows for alternative meanings. With respect to the research interest of problem-solving investigations, the criterion must be that this quasi-need serves to problematize the situation, if the claim is that psychology investigates actual problem-solving. Any comment on this concern, however, still requires the phenomenology of the problem as its reference.

Psychology’s task is to recover the problem in life space as its subject matter instead of trying to induce problems. This task neither implies an entire loss of the credibility of past research, nor does it encourage a divergence from established empirical paradigms. In any case, real problem-solving research needs to turn to, and properly address, the subjective situational conditions of what it means to have a problem. For Lewin, therefore, the concern of valid empirical research is a matter of adequacy: “the validity of sociopsychological experiments should be judged not by the properties of isolated events or single individuals within the field but mainly by whether or not the properties of the social group or the social situation as a whole are adequately represented” (Lewin, 1939, 893). Concerning problem-solving, the “social situation as a whole” has to be read as a reference to the entire experience of a problem, rather than to the formal and mechanistic definition of problems predominant in contemporary research.

The contingency between instruction and quasi-need might constitute the most pronounced example of the influence of the laboratory situation on experience. It is this setup as a whole which occasions the change within the life space. The reason for this change is that “experimentation in the laboratory occurs, socially speaking, on an island quite isolated from the life of society” (Lewin, 1944, 168). Such special status affects the individual in a peculiar way. It serves as a comparative instance for the life space once this individual is involved in a problem.

Notwithstanding, one might wonder whether the Lewinean considerations about the state of the life space in the laboratory situation coincide with the methodological considerations about the external validity of empirical research in general. This might be true to a certain extent because, formally, the singularity of the laboratory setting calls all generalizing inferences into question. However, this formal relation is only a consequence of an underlying experiential structure. Such considerations concern what Lewin calls “Geschehnistypen” (types of event) in his earlier German writings (Lewin, 1927). When investigating the experience of problems, psychologists who worry about external validity alone would ignore the actual meaning of what a problem is.

Compared with this, the Lewinean starting point is independent of the distinction between field and laboratory. This means that the experience itself is initially examined without asking where it occurs empirically. If and only if the arena of experience affects the constitution of experience is there a reason to analyze it. In other words, in cases other than problems, Lewinean field theory might not necessarily raise doubts about the laboratory as the site of experience. In the light of such reflections, it becomes clear that thinking about the laboratory as a situation can only be relevant because it matters to the “Geschehnistyp” of a problem. Consequently, the issue of generalizability is derivative.

The first step has resulted in a critical view of the laboratory as a field of experience within the life space of experimental subjects. While its setup might be circumstantial for the investigation of other phenomena, it is due to the peculiar nature of problems that the laboratory setup as a situation becomes conflated with the peculiarities of this particular subject matter. Lewin’s field theory offers a possibility of explaining how this conflation comes about: tasks impose quasi-needs onto the subjects of investigation which inhibit the independent development of a genuine stance toward the situation. However, Lewin’s theory cannot account for the following step. Field theory is committed to the naturalist view of New Positivism about external occurrences, a view which does not permit the necessary phenomenological reflection on the experiential conditions of a problem.

The Phenomenology of the Problem

Understanding that the artificial setup of the laboratory infringes on the authentic development of a problematic situation provides the necessary premise for the second step, investigating the experiential qualities of authentic problems. The Lewinean analysis, as it were, renders the formal conditions of the problem as a situation in life space, which stipulate a descriptive examination of the respective content. Therefore, the aim of the second step is to acquire access to the experiential dimension of having a problem. Consequently, a shift from the outward perspective of field theory to the inquiry into features which are intrinsic to the experience of problems is paramount.

To ask about the fundamental experiential qualities of a problem or a problematic situation ultimately calls for a phenomenological analysis. In his essay on phenomenology and psychology, Husserl (1917) pointed out that psychology relies on a naturalistic view of the phenomenon in question, while phenomenology seeks to inquire into the experiential conditions of such a view. This means that phenomenology does not investigate factual problems but rather the state of consciousness which establishes the experience of a problem. Inquiries into some of the features which qualify this experience have already been carried out in the history of phenomenological thought. Nevertheless, sketching out the phenomenology of the problem serves the purpose of demonstrating the complexity of experiential conditions involved in having a problem, as is required for a full critique of contemporary psychology.

Before adopting the phenomenological attitude, the psychological tradition of thought about “human problem solving,” which has up until now only been cursorily elaborated, should be mentioned in passing. One can trace its historical development back to the Gestaltist movement in the early 20th century (Goldstone and Pizlo, 2009), but the major development originates from the approaches of the 1960s and 1970s to the computational simulation of problem-solving, especially by Newell and Simon (1972). This paradigm of cognitive psychology has remained dominant into the 21st century and has inspired various more specific studies in its wake.

However, it cannot pass unnoticed that the cognitive sciences have experienced substantial paradigmatic shifts of their own since the emergence of theories of symbolic information-processing in the 1950s’ “cognitive revolution.” Thompson (2007, 4) distinguishes three major “approaches to the study of the mind,” namely “cognitivism, connectionism, and embodied dynamicism.” It is therefore questionable whether the earliest (phenomenological) critique directed against the classical cognitivist concepts (such as Dreyfus, 1972) still remains relevant to later and modern advances of experimental psychology.

What are the specifications of this cognitivist paradigm of “human problem solving,” which is to be criticized in the following section? First of all, it subscribes to the “Mechano-Representationalist Approach” (Hutto, 2008). The two predominant elements of this approach are, respectively, a mechanist understanding of the mind and a representationalist epistemology. Moreover, its explanations are functionalist, such as the infamous idea of the problem as a relation between an initial and a goal state. Ultimately, problem finding, problem classification, problem space, and problem quality (Wendt, 2017a) are the most pertinent research topics. Upon examination of some recent psychological contributions to this tradition of problem-solving research, it can nonetheless be shown that fundamental concerns still have not been resolved, despite critical advances in recent decades. Recent theoretical contributions to the field of problem-solving research certainly do address some flaws of prior theories, but they do not venture to lay out a genuinely new and totalist approach.

On the one hand, there is a growing number of attempts to advance cognitive architectures, such as Soar (Laird et al., 1987), ACT-R (Anderson, 1993), and Icarus (Langley and Trivedi, 2013). But despite dealing with conceptual problems, for example by integrating “depth-first search” to compensate for the flaws of heuristic searches, or by treating “the physical side of problem solving as a central tenet” (Langley and Rogers, 2005) in order to establish a structure of embodiment, these attempts are still committed to mechanistic, externalist, and representationalist premises which do not concur with the phenomenology of the problem (Wendt, 2017b).

On the other hand, there are more versatile approaches that reintegrate different perspectives from psychology. Weisberg (2015) elaborates on the relation between two psychological traditions of thought about insight as an impetus to problem-solving, namely the “special process” view following Gestaltist ideas and the “business-as-usual” view following cognitivism. Yet this controversial potential is located between two well-established psychological schools of thought which have equally been the object of extensive critique. Weisberg’s own proposal to integrate both views does not open up a new perspective, but accumulates the weak spots of its predecessors.

Similarly, with a return to concepts of Gestalt psychology, Yoshimi (2017) searches for a “phenomenology of problem-solving.” Still, approaches like this are mainly iterations of prior conceptual deficiencies. Nevertheless, Yoshimi’s idea to relate problem-solving to a “field of consciousness” draws attention to the fruitful tendency of shifting our reflective efforts toward the experiential conditions of the problematic situation at hand. The crucial phenomenological step to recover these conditions will thus be to acknowledge the situational autonomy of the problem, separating it analytically from the process and the action of solving.

But what does it really mean to say that I have a problem? First of all, there are two necessary conditions. Not all problems I face are my problems, even if they are problems that impact upon me. When talking about a “mathematical problem,” a “problem of chess,” or a “problem for science,” I do not necessarily have a problem myself. This initial consideration already clarifies an important difference which psychological problem-solving research ought to take into account. So in which cases is it justified to speak of having a problem? A problem is always a problem for someone. When I face a “mathematical problem” in a college exam, however, the task is considered a problem even before I know what the problem is. Arguably, it might be justified to speak of a problem in this case, because it will turn out to be a problem for every single student once it is presented to them, but there is also a custom of talking about “mathematical problems” even in their mere potentiality. In this way one can speak of “mathematical problems nobody will ever discover.”

The difference between these problems and my problems is the experiential quality of mineness, which should be distinguished from for-me-ness (Guillot, 2017). In the former case, the problem entails a sense of ownership or intrinsic possession, whereas in the latter case there can also be alien ownership. The imaginary “mathematical problem nobody will ever discover” belongs to an abstract and effectively empty third-person agency or, if anything, to mathematics: it is a problem for mathematics as a collective subject, rather than necessarily the problem of me or of any individual practicing mathematician. However, it would be inadequate to speak of mathematics (or any other impersonal entity) as if it had experiences. The use of the word “problem” to describe such cases is not literally correct but rather stands as a metonym. To speak more precisely, one should say a “mathematical task,” a “configuration of chess,” or a “topic for science.” These might turn out to be someone’s problem, but they are not someone’s problem in the emphatic sense of the word used above. Yet, when my problem actually is a “mathematical problem,” the adjective serves to qualify the domain of the problem. This ambivalence might tempt psychologists to assume that any kind of so-called problem can make me have a problem, i.e., place me in a problematic situation.

Another person’s problem can also be a problem for me whilst not being my problem. Playing a friendly game of chess with my niece, I can understand that the current situation is her problem, but the subjective character of mineness might still not apply to me. It can be a problem for me because I am conscious of the situation as a problem for someone else, but this does not mean that my experience of the situation can be seen as an authentic form of problem-solving, since it is not my problem. Hence, mineness is a necessary condition for authentically experiencing a problem, but it is not a sufficient condition, since the subjective character of mineness is also intrinsic to various other experiences. Nevertheless, the distinction between mineness and for-me-ness bears great utility in discarding situations that only appear to be my problems. It leads the way toward understanding the problem as a mode or “type of situation” (Dreyfus, 2004, 237) and, analogously, problem-solving as a form of what Dreyfus calls “skillful coping” in the tradition of the Heideggerian notion of “circumspection” (see, for example, Dreyfus and Dreyfus, 1988, 219). On the suggestion of phenomenology, empirical psychology should ultimately replace the ubiquitous assumption that “all life is problem-solving” with a subtler distinction, such as Dreyfus’ “skillful coping.”

Apart from instantiating mineness with a sense of ownership, my problem is a problem insofar as it is happening to me, is occurring to me, or is befalling me. In this sense, my problem affects me. Waldenfels (2015) utilizes the Greek term “pathos” to subsume these phenomena under a single semantic rubric. Such events display themselves in the form of eruptions, explosive noise, or sudden highlights of experience. The pathos of a situation is manifested in the affect by an alien object or the appeal or plea by another person. When a problem occurs, I am existentially exposed to this alterity.

Although it is my problem, it is constituted neither by a subjective act or cognition nor by an external process alone; rather, it is an event which occurs only insofar as it occurs to someone. The two necessary conditions for problems, mineness and pathos, intersect essentially at this point. The pathos of being affected by a problem can be experienced as anxiety, being puzzled, or even as paralysis, a symptom that resembles the experiences described by Gurwitsch: “somehow overwhelmed and overpowered by actual experience imposing itself upon them by a force of constraint from which they cannot emancipate themselves” (Gurwitsch, 1949, 179). Pathos is qualified by a peculiar temporal impression of being delayed: my problems occur because my solutions come too late, just as – in this sense – the occurrence of the problem came too early.

This delay reveals the second side of the phenomenon of pathos. Waldenfels adopts the term “responsivity” from Goldstein (1934) to describe how the occurrence inevitably demands a response. However, this response should not be understood in the tradition of behaviorist stimulus-response contingencies, but as an answer to the affordances of pathos (Waldenfels, 1994). Moreover, responses are neither mere effects of causes nor results evoked by goal-directed behavior. Instead, situations are essentially open. This means that they demand a certain response but do not enforce it. In this field at the intersection of normality and anomaly, the subject’s responsibility emerges (in the sense of responsibility for the other, as proposed by Levinas, 1986).

Mineness, as a noetic quality of experience, and the dyad of pathos and response are two distinct necessary conditions of problems. More specifically, the two necessary conditions of mineness and pathos-response provide a general situational frame which is ontologically connected with the questionability of life. With Sartre, this questionability can be anthropologically identified as “the question as a human attitude” (Sartre, 1956, 7) and bears an ontological relation to negativity: “Thus in posing a question, a certain negative element is introduced into the world” (23). Adopting a key formula from Fales, phenomenology can assert that “where there is no question there is no problem” (Fales, 1943, 69), but there are other, more peculiar, sufficient conditions for experiencing a problem.

This having been said, I do not have a problem to call my own just because I received a task or imagined a situation step by step. Rather, when I have a problem, I find myself in a predicament that requires my response because an existential change in my life is possible, although it is not inevitable, as it would be in the case of a catastrophe. To give a response to my problem is my responsibility, but this is equally true for opportunities, challenges, and fatalities. While mineness, pathos, and response should be seen as characteristics of certain situations of experience, the peculiar phenomenal features of problems are more specific.

The situation I encounter is essentially problematic because it is just as essentially solvable. This first sufficient condition of a problem entails neither the existence of a solution nor the comprehension of one’s own goals. The functionalist idea that the goal state has to be given in order to perform problem-solving is equivalent to the “conviction that the commonsense knowledge problem must be solvable, since human beings have obviously solved it” (Dreyfus and Dreyfus, 1988, 223). Instead, the solvability of a problem resides in the felt feasibility of releasing the pressure inherent in the problem by means of a solution. When I have a problem, I approach what is happening to me “as if” (Vaihinger, 1965) there were a solution, as a matter of fiction. It is possible that I experience solvability where there is no known solution; conversely, a situation with an otherwise known solution may well be experienced as a fatality – instead of a problem – if I do not find it solvable.

This fictional aspect of the experience of a problem relates to the possibility of “detach[ing] […] from the given situation and look[ing] at the latter from a distance” (Gurwitsch, 1949, 180). Gurwitsch calls this ability “thematization” in a Husserlian tradition, “meaning hereby disengagement and disclosure of factors which previously to the operation in question are present to consciousness in a rather implicit form” (ibid., 187f.). Still, this fictional characteristic of solvability does not entail illusion. Rather, solvability originates from my ability to hope that the problem can be solved.

Following Marcel, hope should be understood as “constituting our being’s veritable response” (Marcel, 1942, 30). Hope, on this account, is a response arising from being captivated by an anticipation. Marcel points out that this anticipation goes beyond the naturalist conviction that the future cannot be given, but only conceived. For solvability, it does not matter that things might not turn out as hoped; this is why an impasse does not stop the process of problem-solving – the problem prevails independently of the practical attempts that are made to solve it. Furthermore, hoping means not accepting the given situation, making “thematization” possible by a “domesticating of circumstances” (40).

Nevertheless, Marcel highlights that hope is no mere groundless optimism. It is always threatened by the “temptation to despair” (36), which thereby creates the field of tension out of which solvability can arise. Already at this point, it can be shown how the formalist idea of a given state and a goal state does not apply to the subjective nature of problems. Hope means believing that there will not be an obstacle in my way, even if the experimental setup might include obstacles. Marcel sketches out this sense of “obstacle” as putting conditions in front of my hope and setting a limit to the process in which I can triumph over all disappointments. A truly hopeless person concedes altogether to the circumstances and no longer has a problem. Through hope, solvability is essentially connected with mineness as well: the more the problem relates to myself, the greater the incentive to hope. Consequently, between reasons for hoping which are exterior to myself and the hope for salvation, the second feature of problems emerges.

The situation I encounter is essentially problematic because it is oppressive. Oppressiveness is the root of the motivational experience of the problem. The psychological construct of motivation relates to oppressiveness as a resonance from the person, but it does not determine the problem when I am befallen by the pathos. Unlike challenges, opportunities, and fatalities, a problem is oppressive in an inevitable sense because it constitutes an oppression to me, i.e., it oppresses in a way that is specifically directed against my will. It is through this connection that I pay attention to any goals that might or might not determine subsequent “goal-directed” actions of problem-solving.

Just as the impression of solvability originates from the ability to hope, to experience oppressiveness requires the ability to want. Wanting, however, is no relation to the object of will but, as various phenomenologists claim, to its value. Stein distinguishes values from “things” as the two possible contents of experience: “there are those [contents of experience; A. N. W.] ideally corresponding to experiences of content alien to the I [ichfremd], and others which are adequately grasped by an experience including the I [ichlich]. On the one side are “things,” on the other, e.g., values” (Stein, 1922, 15). Now, following Scheler, the perception of values is neither sensation in the form of my senses perceiving things nor imagination, but it is a matter of feelings, for which he adopts the term “logique du coeur” from Pascal. Scheler says: “Feeling originally directs at a proper class of objects, which are the values” (Scheler, 1916, 265).

My problem is not oppressive for me simply because the material constellation intimidates me; rather, its oppressiveness depends on how the values which are relevant in the situation affect me and stimulate my interest. I may experience a situation which poses no physical or social threat to me but nevertheless causes me to have a tremendous problem, such as when I am confronted with nothing but my own consciousness. Some laboratory setups, on the other hand, might not be able to create actual oppressiveness because they carry no such importance within the values of their participants. Yet, when thinking about what was expressed by the sweating and stuttering participants in the Milgram experiments (see Sharpe and Whelton, 2016), there is no reason to assume that oppressiveness cannot be evoked in laboratories.

Drawing on Pfänder, oppressiveness can be understood as “the feeling connected with the imagination which decides whether what is imagined is a goal or not” (Pfänder, 1900, 39). Thus, Pfänder concludes that for something to be a goal means neither having certain goal-like characteristics nor aiming at a certain effect, but being the object of someone’s striving. Inversely, in the case of a problem, the pulling force behind this striving is the oppressiveness. The formalist idea of a goal state appears valid as a description of what a problem is only if the notion of the problem already presupposes the practical purpose of problem-solving – a constraint which Wertheimer was willing to admit when reflecting on the validity of Gestaltist problem-solving research. Nerney, drawing on Duncker, calls these types of situations “problems with constructed foundations” (Nerney, 1979, 59). The concept of the goal is too rigid and too intellectualist to match the phenomenal quality of oppressiveness. Dreyfus, drawing on Merleau-Ponty’s critique of representationalism, emphasizes the same insight: “skillful coping does not require any representation of a goal. It can be purposive without the agent entertaining a purpose” (Dreyfus, 2004, 241). Instead of goals, the phenomenology of the problem is concerned with will and with striving.

Unlike in the case of a challenge, this oppressiveness is (more or less) urgent and dangerous for the subject. It constitutes an atmosphere within the situation that demands that the subject solve and thereby overcome it. Although novelty-seeking people might sometimes be pleased by this atmosphere, they will nonetheless strive for a solution. Atmosphere, in this context, is best understood as a “transmodal affordance” in the neo-phenomenological sense of Böhme (2001). Drawing on Böhme, Griffero (2014) takes an atmosphere to be “the specific emotional quality of a given ‘lived space’” (37), not a merely inward state of the subject but an aspect of the situation itself, which is evoked by the values the situation contains. Thus, an atmosphere is an affordance because it “ecologically invites” certain meanings, especially “tertiary qualities or affective (and therefore atmospheric) ones that permeate the space in which they are perceived” (47). Ultimately, this notion of atmosphere allows the Lewinean concept of life space to emancipate itself from Gestaltist limits, complementing it with phenomenal depth.

My problem is oppressive because it makes me have certain feelings about values that are important to me. A laboratory full of computers does not achieve this by itself; it requires a certain atmosphere that captivates me. In my problem, the inevitability of pathos appears as oppressiveness: I have the feeling that I must face this situation, for some reason grounded in my momentary situation or in my life situation. There are probably occurrences that are oppressive to almost everyone, such as the outbreak of a war. Taking this notion of oppressiveness seriously is an important task for experimental design if psychologists really intend to observe problem-solving. The crucial notion of atmosphere connects to the third and last feature of problems.

The situation I encounter is essentially problematic because it has a problem horizon. The phenomenological notion of horizon is constitutive for perception since “the object of (transcendent) perception is characterized by its adumbrational givenness” (Zahavi, 1997, 304). It is due to this form of “inner horizon” that I perceive an object: the aspects of the object which are absent from my current view are part of its totality. Thus, “in order for a perception to be a perception-of-an-object, it must be permeated by a horizontal intentionality which intends the absent profiles” (ibid., 305).

In addition to the “inner horizon,” there is an “outer horizon” which embeds the individual acts of experience within a holistic experience of the world. Gurwitsch summarizes: “The outer horizon comprises things at the moment not actually perceived but referred to as perceivable” (Gurwitsch, 2010, 359); and he goes on to say, “[w]ith the experience of pointing references to the outer horizon, we are at the phenomenological root and origin of the awareness we have of the world as a universal all-embracing background, context, or horizon at every moment of conscious life. Whatever material object is chosen as our theme, we perceive it within that all-embracing horizon and as pertaining to the world” (ibid.).

Now, when I face my problem, the horizon and background of my experiences change due to my having this problem. Most importantly, certain things become relevant because they are related to my problem. My perspective on the objects of my experience changes as well. It may be that I become more prone to detect solutions that are available on the horizon, or that I try to evade further problems. Either way, my problem’s horizon is different from my prior experience’s horizon, and it would change again if my problem turned into a challenge or a fatality.

This horizonal intentionality of my experience facilitates an understanding of the phenomenological core of “problematizing” and of the psychological term “problem finding” (see, for example, Getzels and Csikszentmihalyi, 1976). Whereas the experiential features of solvability and oppressiveness are implicit in the immediate experience of a problem, the state of the problem horizon can be approached voluntarily. This does not necessarily cause the emergence of a problem, but it is possible to change my perspective and my willingness to encounter a problem. The demand characteristics of psychological laboratories might be related to this possibility.

The very characteristics of the problem horizon, however, are determined by the relevance of the contents of the horizon. The notion of relevance has been elaborated by Schütz:

“[T]he articulation of the field into theme and horizon is imposed by the emergence of some unfamiliar experience, by a shift of the accent of reality from one province to another, and so on, it is characteristic of the system of intrinsic topical relevances that we may or may not direct our attention to the indications implicit in the paramount theme – indications which have the form of inner or outer horizontal structurizations or forms of topical relevances – that is, we may or may not transform these horizontal surroundings into thematic data.” (Schütz, 2011, 111).

In order to explain this idea of relevance, Schütz applies a conception of “typicality.” Experiences are recognized or chosen as typical based on a comparison with the individual stock of knowledge. In this way, a “system of relevance” is given: “it [sc. this system of relevance; the author] is responsible to determinate the characteristics that are selected as typical” (da Costa, 2014, 68). Thus, when I have a problem, my horizonal intentionality favors the occurrence of experiences that are relevant to the problem I face, because they appear to be typical of such a situation. Similarly, as Dreyfus and Dreyfus point out by drawing on Heidegger, “the pragmatic background plays a crucial role in determining relevance” (1988, 228). In other words, a problem’s horizon is not intellectually imposed onto one’s perception of the world but, on the contrary, originates from our existential engagement with the world. Ultimately, the problem horizon is characterized by the similarity of relevant aspects among problems, which exists due to their common essential features, solvability and oppressiveness.

The notion of the horizon, however, is static, whereas the dynamics of experience in relation to the horizon should rather be identified as “perspectivity.” Graumann employs Lewin’s notion of “locomotion,” the movement within the life space, to identify the “perspectival (horizontal) structure of […] all cognitive experience” (Graumann, 1990, 8). He says: “An aspect is not a sharply bounded part of something, nor is the horizon a fixed limit, but the line of transition from the perceived to the perceivable, from the known to the knowable, from the actual to the possible, from the given to the new” (ibid., 9). The horizon is no passive object of experience, no surplus of the foreground, but is originally interwoven with the perspective of my experience, thereby helping to constitute the problem.

Hence, my problem happens (pathos) to me (mineness) and demands a response, while changing the perspective of my experience and making it accessible to aspects of the horizon which are relevant for solvability and oppressiveness. These atmospheric conditions enable problem-solving and should be considered in any psychological investigation. Ultimately, an exemplary problem can help to demonstrate the interplay of the five phenomenal features which have been introduced (namely mineness, pathos-response, solvability, oppressiveness, and the problem horizon). Yet a written example bears the risk of suggesting aspects that appear relevant only because of the way they are explicated. This is why it is paramount to focus on the experiential, not the linguistic, qualities of the example.

Arriving early in the night at a foreign country’s airport gate, in a city I had never visited before, I had to switch terminals for my connecting flight. After asking the staff for directions, I left the building to search for a shuttle bus to the terminal from which I would board my next plane the following morning. A taxi driver approached me to say that the terminals would be closed at night, recommending that I take a hotel room instead. This was the moment my problem struck me. I found myself in the middle of a tense situation that had already developed without any engagement on my part. This delay of occurrence and the retardation of my action framed a “constraint from which I could not emancipate myself.” The problem was clearly mine, and this appears even more evident when thinking about the first step of my solution, which was searching for somebody with the same problem – somebody who also experienced “mineness.”

Yet this solution and its path are not part of the problem itself but already belong to the problem-solving situation. The problematic atmosphere, however, was present from the very beginning of my encounter. Although I was befallen by the situation’s pathos and identified by the experience’s mineness, I did not find myself in an intractable fatality. Beyond the constraint, I sensed a vague impression that I would not surrender to this state – that there was a solution – rather than asking whether there was one. This impression of hope was not directed at a goal but accompanied the oppressive feeling of imminent danger and uncertainty. I wanted my initial plans to remain in place, I wanted a secure solution, and I wanted certainty. Clearly, my perspective changed in that very moment, bringing me from a vague state of relaxation after a transatlantic flight into the full presence of a Caribbean airport. Suddenly, the strangers around me emerged in my horizon as possible sources of reassurance, and knowing the time became an urgent need. I was entirely captivated by the immediate situation.

My capacity to adapt resulted from this change of experience. Even so, this state did not depend on the availability of a solution but merely on the impression of the problem’s solvability. Had this feature not been present, it would have been an entirely different situation, one in which I would have despaired instead of hoped. Whether these types of experience are possible in the laboratory is the question that follows.

The second step of the investigation provided evidence for a reformed understanding of problems. Instead of the functionalist formula of interrelated states, drawing on the problem as a “type of situation” allows us to attend to the subtler conditions of having a problem. For instance, once a subject is deprived of hope for the solvability of a task, her subsequent behavior can hardly be interpreted as problem-solving. Instead, she might be coping with a fatality – a different type of situation. Attending to this difference will not only do justice to the phenomenal depth of problematic situations and eliminate the fallacy of taking tasks for problems; it also opens room for new experimental parameters of substantial predictive validity, since it suggests novel investigations of the difference between, e.g., solving a problem and taking up a challenge as ways of skillful coping.

Empirical Approaches

The field-theoretical and phenomenological considerations call for a third and final step. In order to advise a change in empirical research, it is paramount to propose concrete alternatives. The previous analyses provide rudimentary gauges for the validity of empirical approaches to investigating subjects in problematic situations. Drawing on this reference and after discussing the limits of possible empirical research, four ideas for phenomenologically sobered psychological research are presented. This way, taking the phenomenology of the problem into account, it becomes possible to assess whether there is a problem in the laboratory.

The characteristics of the laboratory situation as determined by field-theoretical reflections relate to the five experiential features of problems. Yet there is no simple causal relation between the laboratory setup and the subjective experiences. It is, however, possible to recognize whether or not the concept of the laboratory situation is directed at the relevant features of the problem. While it is quite clear that the mere instruction to have a problem does not take into account the atmospheric peculiarities which give rise to the experience of a problem, it is equally obvious that it remains possible to have a problem even in the artificial context of a laboratory.

The experimental situation as a region of life space is characterized by the configuration of quasi-facts altered by alien influences, such as experimental instruction. Across the variety of experimental setups, this instruction can be an introduction, a tutorial, or an affordance – in its simplest form, by stating “Problem: …” But none of these alternatives directly targets the phenomenal features of the problem. They usually express no systematic effort to befall the experimental subject (pathos) or to favor an experiential identification (mineness). Instead, the instruction is presented and elaborated continuously, and it usually does not convey the characteristic delay which is an essential part of the experience of a problem in inner time-consciousness. This means that there is a “qualitative difference in the psychological facts themselves” (Lewin) between tasks which are presented this way and authentic problems. Consequently, any actual existential relevance of the situation can only originate as a random factor from the experimental subject’s life situation, e.g., when the specific content has some biographical relevance, but not from experimental manipulation or instruction.

It is a curious conclusion from these circumstances that empirical research effectively depends on the extraordinary behavior shown by experimental subjects in the laboratory, such as demand characteristics. This is epistemologically comprehensible because these forms of behavior are entwined with the artificial experimental setup. In other words, the common experimental instruction resorts to forms of behavior which correlate with the laboratory situation. Without the experimenter’s efforts to evoke actual problems, research depends on the participants’ decision or habit to engage with the experimental setup. If they had a real problem, however, the problem’s oppressiveness would make them engage with the situation genuinely.

Experimental setups that do intend to evoke actual problematic experiences have to entertain different measures that actually strive to make the subjects have a problem. Cover stories are not sufficient for this purpose: they seek to conceal the actual experimental hypothesis but do not contribute to creating a problematic atmosphere. Lyons’ (1970) analysis of the “hidden dialog” reveals the implicit arrangements underlying the cover story. It would be naïve to assume that the experimental manipulation depends solely on the “experimentalist’s power grab” (ibid., 25). Problem-solving research needs to engage with its subjects’ experimental situation in order to create problems.

Furthermore, adopting commonly reported everyday problems does not serve this purpose either, since the laboratory, as shown by field theory, is an entirely different form of life space. “On an island quite isolated from the life of society,” in Lewin’s picturesque phrasing, different things become relevant. Nevertheless, there are four exemplary directions of empirical research that could augment empirical designs so that actual problem-solving research becomes possible. Ultimately and in general, the empirical questions which arise in light of the phenomenology of the problem invite one to reflect on empirical methodology.

Among the conceivable alternative methodological approaches, the first is simply to note that there are already experimental paradigms, such as ethnomethodology, which indirectly include the intention to cause a problem by disrupting or defying the participants’ routines. In ethnomethodology’s technique of “breaching experiments” (on which see Kew, 1986), puzzling and disorganized situations are provoked which occasionally succeed in creating pathos. Following the phenomenology of the problem, this might not be enough to evoke the experience of an actual problem, but it indicates, by way of example, fruitful resources which psychology should use. Yet, to date, the discipline has hardly engaged with ethnomethodology. This example also illustrates that spirited and creative experimental design, guided by phenomenologically valid purposes, can contribute to amplifying the empirical validity of laboratory research.

A second methodological approach is the application of unobtrusive forms of observation, which avoids the effects of instruction. A recent development in media psychology hints that there is remarkable potential for this form of research using digital data sources, such as live streaming. Live streaming is a user-created form of content on video hosting websites, transmitted in real time (for a detailed investigation of live streaming’s feasibility for empirical psychology, see Wendt, 2017b). It is remarkable to what extent these broadcasts resemble laboratory setups based on video capture, for example in usability research. Another striking similarity concerns the traditional contents of problem-solving research, since live streams primarily feature video gaming. Furthermore, the data collected by this method matches the concept of Naturally Occurring Data Sets (NODS) and Big Data (Goldstone and Lupyan, 2016).

An example can help to elucidate the potential of research employing live streaming. As Graumann (1969) points out, problematic situations are closely linked to the experience of frustration. Phenomenologically speaking, this observation is entwined with what has been discussed as the origin of solvability in hope: in the experience of frustration, hope for the problem’s solvability fades. However, genuinely frustrating experiences can rarely be encountered in the laboratory. Such experiences are characterized by manifold emotional reactions and expressions identified in the phenomenology of feelings, including some that folk psychology would not suspect, such as joy (see Elpidorou, 2017). This seems to be partly because these experiences and reactions are not investigated, owing to the predominance of intellectualist theory in psychology, and partly because the laboratory setup restricts the natural emergence of such experiences. In live streaming, on the other hand, subjects freely experience and express frustration. It is reasonable to assume that the causes lie in the setting: streamers may perceive an incentive to express themselves intuitively in view of their voluntary exposure to an audience, and their interactions often take place in situations that are frustrating by nature, such as difficult or unfair games.
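
To make the data-mining potential concrete, here is a minimal sketch of how stream-chat transcripts might be screened for candidate expressions of frustration. Everything in it is an illustrative assumption rather than part of the cited proposals: the plain-text transcript files, the hard-coded keyword list, and the directory layout are all hypothetical.

    import re
    from collections import Counter
    from pathlib import Path

    # Illustrative word list only; a real study would have to derive and
    # validate its categories, not borrow them from a hard-coded lexicon.
    FRUSTRATION_MARKERS = ["ugh", "argh", "unfair", "impossible", "give up"]

    def tally_markers(transcript_dir: str) -> Counter:
        """Count candidate frustration expressions across chat transcripts.

        Assumes a hypothetical export format: one plain-text file per
        stream session, one chat utterance per line.
        """
        counts = Counter()
        for path in Path(transcript_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8").lower()
            for marker in FRUSTRATION_MARKERS:
                counts[marker] += len(re.findall(r"\b%s\b" % re.escape(marker), text))
        return counts

    if __name__ == "__main__":
        # Prints a tally such as Counter({'unfair': 12, 'ugh': 9, ...}).
        print(tally_markers("transcripts"))

Such a tally could at best flag material for closer analysis; whether an utterance actually manifests the fading of hope described above remains a matter for phenomenological interpretation.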

It is arguably possible to manipulate frustrating conditions in the laboratory as well, namely when something is frustrating in the etymological sense of the word (Lat. fraus – harm): not only may a person be frustrated but, seen from a situational standpoint, their frustration may primordially emerge from harm being inflicted upon their present state, e.g., by the instruction to solve a task which actually has no material solution (so that the natural expectation to perform achievable tasks is harmed). But even when setting an unsolvable task, it remains unlikely that one will observe in a laboratory what can be seen easily and frequently in a live stream: subjects shouting, quitting, or even getting violent under potentially standardizable observation conditions – occurrences which would mean the end of most contemporary laboratory investigations. Despite overt limitations to the interpretability of live streaming as a data source, these brief considerations of a handy example demonstrate how much the situational setup matters when investigating problems. In live streaming, courses of action unfold which are highly unlikely or institutionally impossible to manufacture in the same way in a laboratory.

Nevertheless, there are two methodological concerns here. On the one hand, it is not safe to say that all laboratory effects, such as demand characteristics, occur only in the laboratory. Sociology seems to favor the idea that there are further conditions for this biased behavior; Cooley’s (1902) concept of the looking-glass self highlights the anthropological dimension of such effects, which seem to originate in the most elementary act of self-perception. On the other hand, the similarity with traditional problem-solving tasks does not provide a sufficient conceptual foundation for considering the phenomenology of the problem. Since the subject matter of the standardized and most accessible live streams is video games, research on live streaming also requires the implementation of game theory in order to explore the depth of these gaming experiences.

Third, there are possibilities to enhance the experimental setup, namely by making use of forms of communication which comprise atmospheric complexity. Certainly, this is no easy task. On the contrary, it requires a sort of additional mastery from the experimenter. However, there are several ways to approach this matter which do not all include the necessity of active creative production in laboratory setups. One favorable way to increase the credibility and impact of experimental instructions is indirectly hinted at in one of Lewin’s thoughts:

“The most complete and concrete descriptions of situations are those which writers such as Dostoevski have given us. These descriptions have attained what the statistical characterizations have most notably lacked, namely, a picture that shows in a definite way how the different facts in an individual’s environment are related to each other and to the individual himself. The whole situation is presented with its specific structure. This means that the single factors of the situation are not given as characteristics which can be arbitrarily combined in a ‘summative’ way.” (Lewin, 1936, 13).

Empirical research has long rejected the implementation of artistic production into experimental design for fear of conflating itself with folk psychology. Yet the controlled and directed use of well-investigated effects of such descriptions should suit even the concepts of cognitivist operationalism. Another concern might be the lack of control that this type of investigation would seem to imply, but this apparent control has always been founded on a reductionist notion of language that disregards the complexity of natural languages. No instruction whatsoever can be freed of this effect.

The fourth approach for phenomenologically sobered empirical research is the pending rejuvenation of ecological psychology. In the second half of the 20th century, there were several attempts to investigate the peculiarities of the situated subject. While behaviorist and cognitivist research favored the relation between system and environment, phenomenology explored the notion and experience of the situation (Schott, 1991). Research in this field may have been neglected in the 21st century due to the recent preference for quantitative methods, but with the current rise of mixed-method approaches, it is worth reconsidering these traditions of thought as viable once more.

Ultimately, these methodological considerations must remain mere proposals without a decisive criterion for their implementation. Such a criterion, however, is a matter of methodological discussion in the empirical sciences, one that principally involves the question of introspection. Giving no more than a prospective glance at the necessary deliberation, it can be said that whether the phenomenology of the problem will be considered meaningful in the discipline depends on the relevance accorded to subjective experience in psychology. In spite of this, the prevalent customs of discourse among psychologists do not invite confidence about the probability of this change of view.

The orthodox form of Husserlian phenomenology has itself dismissed the connection to introspectionism and descriptive psychology, on the grounds that both entail an empirical notion which does not provide insight into the transcendental constitution of consciousness. However, less rigorous approaches within the phenomenological movement have considered the connection between the two domains. The exploration of pre-reflective experience calls for some empirical correlate, which might be found in a form of introspection unlike the classical, highly language-focused attempts such as those of the Würzburg psychologists around Külpe. The investigation of states as subtle as atmospheres could be a most promising direction for this research. In light of this consideration, the matter of the problem emerges at the very center of contemporary discourse about introspection. As an example of the recent relevance of this debate, Petitmengin and Bitbol (2009) adopted a Brentanian view to defend introspection; their practical considerations resemble the ideas of the Würzburg psychologists insofar as they recommend the training of specialists in introspection.

Regardless, the relation between introspection and the phenomenology of the problem calls for future reflection. Another important matter at stake is the discriminative exploration of the problem. Once the ubiquitous credo “all life is problem-solving” has been defied, the question should be raised as to what situational alternatives populate the life space. Challenges, fatalities, and opportunities are modes of the situation worth considering, and their examination might bear noteworthy potential – especially for problem-solving research. A different approach is to draw on Fales’ phenomenology of the question. He distinguishes four principal categories of questions: “Questions are either a matter of knowledge, or of belief, or of taste, or of action” (Fales, 1943, 60). These categories of the question can be used to distinguish different types of situations, such as problems. Based on a more complex notion of the situation, psychological investigations will succeed in discriminating the experience of a person attending to a task, facing a challenge, or solving a problem, up to the point of making valid predictions about their subsequent behavior.

For passionate experimental psychologists, it might turn out to be a problem that laboratories rarely foster the experience of problems for experimental subjects. The common types of situations which are created in order to investigate problem-solving are mere tasks and do not let the experiential features of authentic problems occur. Far from being a catastrophe, and certainly being more than a challenge, these circumstances should inspire more creative approaches to carrying out experimental psychology. To inspire advances in experimental methodology, the next step must be to establish a continuous, constructive, and reciprocal dialog between empirical and phenomenological study and discourse.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

We acknowledge the financial support of the Deutsche Forschungsgemeinschaft and Ruprecht-Karls-Universität Heidelberg within the funding program Open Access Publishing.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Anderson, C. A., Lindsay, J. J., and Bushman, B. J. (1999). Research in the psychological laboratory: truth or triviality? Curr. Dir. Psychol. Sci. 8, 3–9. doi: 10.1111/1467-8721.00002

Anderson, J. R. (1993). Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum.

Asch, S. E. (1951). “Effects of group pressure on the modification and distortion of judgments,” in Groups, Leadership and Men , ed. H. Guetzkow (Pittsburgh, PA: Carnegie Press), 177–190.

Böhme, G. (2001). Aisthetik: Vorlesungen über Ästhetik als allgemeine Wahrnehmungslehre. München: Fink.

Braem, S., Liefooghe, B., De Houwer, J., Brass, M., and Abrahamse, E. L. (2017). There are limits to the effects of task instructions: making the automatic effects of task instructions context-specific takes practice. J. Exp. Psychol 43, 394–403. doi: 10.1037/xlm0000310

Cooley, C. H. (1902). Human Nature and the Social Order. New York, NY: C. Scribner’s sons.

Crowne, D. P., and Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. J. Consult. Psychol. 24, 349–354. doi: 10.1037/h0047358

da Costa, T. (2014). Between relevance systems and typification structures: Alfred Schutz on habitual possessions. Phenomenol. Mind, 66–72.

Di Mascio, R., Kalyuga, S., and Sweller, J. (2016). The effect of wording and placement of task instructions on problem-solving creativity. J. Creat. Behav. 1–19. doi: 10.1002/jocb.157

Dörner, D., and Funke, J. (2017). Complex problem solving: what it is and what it is not. Front. Psychol. 8:1153. doi: 10.3389/fpsyg.2017.01153

Dreyfus, H. (1972). What Computers Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press.

Dreyfus, H. (2004). “Merleau-Ponty and recent cognitive science,” in Skillful Coping: Essays on the Phenomenology of Everyday Perception and action , eds H. L. Dreyfus and M. A. Wrathall (Oxford: Oxford University Press), 231–248. doi: 10.1017/CCOL0521809894.006

Dreyfus, H., and Dreyfus, S. (1988). “Making a mind versus modeling the brain,” in Skillful Coping: Essays on the Phenomenology of Everyday Perception and Action, eds H. L. Dreyfus and M. A. Wrathall (Oxford: Oxford University Press), 205–230.

Elpidorou, A. (2017). Emotions in early Sartre: the primacy of frustration. Midwest Stud. Philos. 41, 241–259. doi: 10.1111/misp.12075

Fales, W. (1943). Phenomenology of questions. Philos. Phenomenol. Res. 4, 60–75. doi: 10.2307/2103005

Funke, J. (2014). “Problem solving: What are the important questions?,” in Proceedings of the 36th Annual Conference of the Cognitive Science Society , eds P. Bello, M. Guarini, M. McShane, and B. Scassellati (Austin, TX: Cognitive Science Society), 493–498.

Getzels, J. W. (1982). “The problem of the problem,” in New Directions for Methodology of Social and Behavioral Science: Question Framing and Response Consistency , Vol. 11, ed. H. Hogarth (San Francisco, CA: Jossey Bass), 37–49.

Getzels, J. W., and Csikszentmihalyi, M. (1976). The Creative Vision: A Longitudinal Study of Problem Finding in Art. New York, NY: John Wiley & Sons.

Goldstein, K. (1934). Der Aufbau des Organismus. Einführung in die Biologie unter besonderer Berücksichtigung der Erfahrungen am kranken Menschen. Haag: Nijhoff. doi: 10.1007/978-94-017-7141-2

Goldstone, R. L., and Lupyan, G. (2016). Discovering psychological principles by mining naturally occurring data sets. Topics Cognit. Sci. 8, 548–568. doi: 10.1111/tops.12212

Goldstone, R. L., and Pizlo, Z. (2009). New perspectives on human problem solving. J. Probl. Solving 2, 1–5. doi: 10.7771/1932-6246.1055

Graumann, C. F. (1969). Motivation. Bern-Stuttgart: Akademische Verlagsgesellschaft.

Graumann, C. F. (1990). “Perspectival structure and dynamics in dialogues,” in The Dynamics of Dialogue , eds I. Markova, and K. Foppa (New York, NY: Springer), 105-127.

Griffero, T. (2014). Atmospheres and lived space. Stud. Phaenomenol. 14, 29–51.

Guillot, M. (2017). I me mine: on a confusion concerning the subjective character of experience. Rev. Philos. Psychol. 8, 23–53. doi: 10.1007/s13164-016-0313-4

Gurwitsch, A. (1949). Gelb-Goldstein’s concept of “concrete” and “categorial” attitude and the phenomenology of ideation. Philos. Phenomenol. Res. 10, 172–196. doi: 10.2307/2104073

Gurwitsch, A. (2010). “The thematic field,” in The Collected Works of Aron Gurwitsch (1901-1973) , ed. R. Zaner (Dordrecht: Springer), 301–365. doi: 10.1007/978-90-481-3346-8_10

Husserl, E. (1917). “Phänomenologie und Psychologie,” in Freiheit und Gnade , ed. E. Stein (Freiburg: Herder), 195–230.

Hutto, D. D. (2008). Articulating and understanding the phenomenological manifesto. Abstracta 4, 10–19.

Kew, F. (1986). Playing the game: an ethnomethodological perspective. Int. Rev. Sociol. Sport 21, 305–322.

Laird, J. E., Newell, A., and Rosenbloom, P. S. (1987). Soar: an architecture for general intelligence. Artif. Intell. 33, 1–64. doi: 10.1016/0004-3702(87)90050-6

Landsberger, H. A. (1958). Hawthorne Revisited. Ithaca, NY: Cornell University.

Langley, P., and Rogers, S. (2005). “An extended theory of human problem solving,” in Proceedings of the Twenty-Seventh Annual Meeting of the Cognitive Science Society, Stresa, 27.

Langley, P., and Trivedi, N. (2013). Elaborations on a theory of human problem solving. Adv. Cognit. Syst. 3, 1–12.

Levinas, E. (1986). “The Trace of the Other,” in Deconstruction in Context , ed. M. Taylor (Chicago: University of Chicago Press), 345–359.

Lewin, K. (1927). Gesetz und Experiment in der Psychologie. Symposium 1, 375–421.

Lewin, K. (1936). Principles of Topological Psychology. New York, NY: McGraw-Hill Book Company. doi: 10.1037/10019-000

Lewin, K. (1939). Field theory and experiment in social psychology: concepts and methods. Am. J. Soc. 44, 868–896. doi: 10.1086/218177

Lewin, K. (1940). “Formalization and progress in psychology,” in Field Theory in Social Sciences , ed. K. Lewin (New York, NY: Harper & Brothers).

Lewin, K. (1944). “Constructs in psychology and psychological ecology,” in Field Theory in Social Sciences , ed. K. Lewin (New York, NY: Harper & Brothers).

Lyons, J. (1970). The hidden dialogue in experimental research. J. Phenomenol. Psychol. 1, 19–29.

Marcel, G. (1942). “Sketch of a phenomenology and a metaphysic of hope,” in Homo viator. Introduction to a Metaphysic of Hope , ed. G. Marcel (Indiana: St. Augustine’s Press).

Nerney, G. (1979). The gestalt of problem-solving: an interpretation of Max Wertheimer’s “Productive Thinking”. J. Phenomenol. Psychol. 10, 56–80. doi: 10.1163/156916279X00059

Newell, A., and Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.

Ohlsson, S. (2012). The problems with problem solving: reflections on the rise, current status, and possible future of a cognitive research paradigm. J. Probl. Solving 5, 101–128. doi: 10.7771/1932-6246.1144

Orne, M. T. (1962). On the social psychology of the psychological experiment: with particular reference to demand characteristics and their implications. Am. Psychol. 17, 776–783. doi: 10.1037/h0043424

Petitmengin, C., and Bitbol, M. (2009). Listening from within. J. Conscious. Stud. 16, 363–404.

Pfänder, A. (1900). Phänomenologie des Wollens. Eine psychologische Analyse. Leipzig: Johann Ambrosius Barth.

Popper, K. R. (1999). All Life is Problem Solving. Hove: Psychology Press.

Quesada, J., Kintsch, W., and Gomez, E. (2005). Complex problem-solving: a field in search of a definition? Theor. Issues Ergonomics Sci. 6, 5–33. doi: 10.1080/14639220512331311553

Rosenthal, R., and Rubin, D. B. (1978). Interpersonal expectancy effects: the first 345 studies. Behav. Brain Sci. 1, 377–386. doi: 10.1017/S0140525X00075506

Sartre, J.-P. (1956). Being and Nothingness. New York, NY: Washington Square Press.

Scheler, M. (1916). Der Formalismus in der Ethik und die materiale Wertethik. Halle: Max Niemeyer.

Schott, E. (1991). Psychologie der Situation. Humanwissenschaftliche Vergleiche. Heidelberg: Asanger.

Schütz, A. (2011). “Reflections on the problem of relevance,” in Collected Papers V: Phenomenology and the Social Sciences (Dordrecht: Springer), 93–199.

Sharpe, D., and Whelton, W. J. (2016). Frightened by an old scarecrow: the remarkable resilience of demand characteristics. Rev. Gen. Psychol. 20, 349–368. doi: 10.1037/gpr0000087

Stein, E. (1922). Beiträge zur philosophischen Begründung der Psychologie und der Geisteswissenschaften. Jahrbuch Philos. phänomenol. Forschung 5, 1–283.

Thompson, E. (2007). Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Harvard University Press.

Tversky, A., and Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science 211, 453–458. doi: 10.1126/science.7455683

Vaihinger, H. (1965). The Philosophy of “As If”: A System of the Theoretical, Practical, and Religious Fictions of Mankind. London: Routledge and Kegan Paul.

Waldenfels, B. (1994). Response und Responsivität in der Psychologie. J. Psychol. 2, 71–80.

Waldenfels, B. (2015). Sozialität und Alterität: Modi sozialer Erfahrung. Frankfurt am Main: Suhrkamp Verlag.

Weisberg, R. W. (2015). Toward an integrated theory of insight in problem solving. Thinking & Reasoning 21, 5–39. doi: 10.1080/13546783.2014.886625

Wendt, A. N. (2017a). On the benefit of a phenomenological revision of problem solving. J. Phenomenol. Psychol. 48, 240–258. doi: 10.1163/15691624-12341330

Wendt, A. N. (2017b). The empirical potential of Live Streaming beyond cognitive psychology. J. Dynamic Decis. Mak. 3, 1–9.

Yoshimi, J. (2017). The phenomenology of problem solving. Grazer Philosophische Stud. 94, 391–409. doi: 10.1163/18756735-09403006

Zahavi, D. (1997). Horizontal intentionality and transcendental intersubjectivity. Tijdschrift voor Filosofie 59, 304–321.

Keywords : problem-solving, phenomenological psychology, field theory, demand characteristics, live streaming

Citation: Wendt AN (2018) Is There a Problem in the Laboratory? Front. Psychol. 9:2443. doi: 10.3389/fpsyg.2018.02443

Received: 29 September 2017; Accepted: 19 November 2018; Published: 05 December 2018.

Copyright © 2018 Wendt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Alexander Nicolai Wendt, [email protected]

How to improve your problem solving skills and build effective problem solving strategies

Effective problem solving is all about using the right process and following a plan tailored to the issue at hand. Recognizing your team or organization has an issue isn’t enough to come up with effective problem solving strategies. 

To truly understand a problem and develop appropriate solutions, you will want to follow a solid process, take the necessary problem solving steps, and bring all of your problem solving skills to the table.

We’ll first guide you through the seven step problem solving process you and your team can use to effectively solve complex business challenges. We’ll also look at what problem solving strategies you can employ with your team when looking for a way to approach the process. We’ll then discuss the problem solving skills you need to be more effective at solving problems, complete with an activity from the SessionLab library you can use to develop that skill in your team.

Let’s get to it! 

In this article, we’ll cover:

  • What is a problem solving process?
  • What are the problem solving steps I need to follow?
  • Problem solving strategies
  • What skills do I need to be an effective problem solver?
  • How can I improve my problem solving skills?

What is a problem solving process?

Solving problems is like baking a cake. You can go straight into the kitchen without a recipe or the right ingredients and do your best, but the end result is unlikely to be very tasty!

Using a process to bake a cake allows you to use the best ingredients without waste, collect the right tools, account for allergies, decide whether it is a birthday or wedding cake, and then bake efficiently and on time. The result is a better cake that is fit for purpose, tastes better and has created less mess in the kitchen. Also, it should have chocolate sprinkles. Having a step by step process to solve organizational problems allows you to go through each stage methodically and ensure you are trying to solve the right problems and select the most appropriate, effective solutions.

What are the problem solving steps I need to follow? 

All problem solving processes go through a number of steps in order to move from identifying a problem to resolving it.

Depending on your problem solving model and who you ask, there can be anything between four and nine problem solving steps you should follow in order to find the right solution. Whatever framework you and your group use, there are some key items that should be addressed in order to have an effective process.

We’ve looked at problem solving processes from sources such as the American Society for Quality and their four step approach, and Mediate’s six step process. By reflecting on those and our own problem solving processes, we’ve come up with a sequence of seven problem solving steps we feel best covers everything you need in order to effectively solve problems.

The seven step problem solving process:

1. Problem identification 

The first stage of any problem solving process is to identify the problem or problems you might want to solve. Effective problem solving strategies always begin by giving a group the scope to articulate what they believe the problem to be and then coming to some consensus over which problem to approach first. Problem solving activities used at this stage often focus on creating frank, open discussion so that potential problems can be brought to the surface.

2. Problem analysis 

Though this step is not a million miles from problem identification, problem analysis deserves to be considered separately. It can often be an overlooked part of the process and is instrumental when it comes to developing effective solutions.

The process of problem analysis means ensuring that the problem you are seeking to solve is the right problem. As part of this stage, you may look deeper and try to find the root cause of a specific problem at a team or organizational level.

Remember that problem solving strategies should not only be focused on putting out fires in the short term but developing long term solutions that deal with the root cause of organizational challenges. 

Whatever your approach, analyzing a problem is crucial in being able to select an appropriate solution and the problem solving skills deployed in this stage are beneficial for the rest of the process and ensuring the solutions you create are fit for purpose.

3. Solution generation

Once your group has nailed down the particulars of the problem you wish to solve, you want to encourage a free flow of ideas connecting to solving that problem. This can take the form of problem solving games that encourage creative thinking or problem solving activities designed to produce working prototypes of possible solutions. 

The key to ensuring the success of this stage of the problem solving process is to encourage quick, creative thinking and create an open space where all ideas are considered. The best solutions can come from unlikely places and by using problem solving techniques that celebrate invention, you might come up with solution gold. 

4. Solution development

No solution is likely to be perfect right out of the gate. It’s important to discuss and develop the solutions your group has come up with over the course of following the previous problem solving steps in order to arrive at the best possible solution. Problem solving games used in this stage involve lots of critical thinking, measuring potential effort and impact, and looking at possible solutions analytically. 

During this stage, you will often ask your team to iterate and improve upon your frontrunning solutions and develop them further. Remember that problem solving strategies always benefit from a multitude of voices and opinions, and be careful not to let ego get involved when it comes to choosing which solutions to develop and take further.

Finding the best solution is the goal of all problem solving workshops and here is the place to ensure that your solution is well thought out, sufficiently robust and fit for purpose. 

5. Decision making 

Nearly there! Once your group has reached consensus and selected a solution that applies to the problem at hand you have some decisions to make. You will want to work on allocating ownership of the project, figure out who will do what, how the success of the solution will be measured and decide the next course of action.

The decision making stage is a part of the problem solving process that can get missed or taken for granted. Fail to properly allocate roles and plan out how a solution will actually be implemented, and it is less likely to be successful in solving the problem.

Have clear accountabilities, actions, timeframes, and follow-ups. Make these decisions and set clear next-steps in the problem solving workshop so that everyone is aligned and you can move forward effectively as a group. 

Ensuring that you plan for the roll-out of a solution is one of the most important problem solving steps. Without adequate planning or oversight, it can prove impossible to measure success or iterate further if the problem was not solved. 
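
If it helps to make this concrete, here is a minimal sketch of what a decision record from a workshop could look like in code. The field names and the example entry are purely illustrative assumptions for this article, not a SessionLab feature or template.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ActionItem:
        """One decision from a problem solving workshop, with clear ownership."""
        action: str                 # what will be done
        owner: str                  # who is accountable
        due: date                   # the agreed timeframe
        success_measure: str        # how the group will know the solution worked
        follow_ups: list = field(default_factory=list)

    # Example: a (hypothetical) decision captured at the end of a workshop.
    item = ActionItem(
        action="Pilot the reworked onboarding flow with five customers",
        owner="Dana",
        due=date(2025, 6, 30),
        success_measure="Average time-to-first-value drops below two days",
        follow_ups=["Review pilot results at the July team retro"],
    )
    print(f"{item.due} – {item.owner}: {item.action}")

The point is not the code but the discipline it encodes: every decision leaves the room with an owner, a date, and a measure attached.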

6. Solution implementation 

This is what we were waiting for! All problem solving strategies have the end goal of implementing a solution and solving a problem in mind. 

Remember that in order for any solution to be successful, you need to help your group through all of the previous problem solving steps thoughtfully. Only then can you ensure that you are solving the right problem but also that you have developed the correct solution and can then successfully implement and measure the impact of that solution.

Project management and communication skills are key here – your solution may need to adjust when out in the wild or you might discover new challenges along the way.

7. Solution evaluation 

So you and your team developed a great solution to a problem and have a gut feeling it’s been solved. Work done, right? Wrong. All problem solving strategies benefit from evaluation, consideration, and feedback. You might find that the solution does not work for everyone, might create new problems, or is potentially so successful that you will want to roll it out to larger teams or as part of other initiatives.

None of that is possible without taking the time to evaluate the success of the solution you developed in your problem solving model and adjust if necessary.

Remember that the problem solving process is often iterative and it can be common to not solve complex issues on the first try. Even when this is the case, you and your team will have generated learning that will be important for future problem solving workshops or in other parts of the organization. 

It’s worth underlining how important record keeping is throughout the problem solving process. If a solution didn’t work, you need the data and records to see why that was the case. If you go back to the drawing board, notes from the previous workshop can help save time. Data and insight are invaluable at every stage of the problem solving process, and this one is no different.
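
For teams who want to keep those records in a structured, searchable form, here is a small illustrative sketch that appends timestamped notes for each of the seven steps to a simple log file. The file format and the use of the step names as labels are assumptions made for this example, not a prescribed tool.

    import json
    from datetime import datetime, timezone

    # The seven steps from this article, used to label session records.
    STEPS = [
        "problem identification", "problem analysis", "solution generation",
        "solution development", "decision making", "solution implementation",
        "solution evaluation",
    ]

    def log_step(notes_file: str, step: str, notes: str) -> None:
        """Append one timestamped record for a step to a JSON-lines log."""
        if step not in STEPS:
            raise ValueError(f"Unknown step: {step}")
        record = {
            "step": step,
            "notes": notes,
            "logged_at": datetime.now(timezone.utc).isoformat(),
        }
        with open(notes_file, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example: capturing why a solution was rejected, for the next session.
    log_step("workshop_notes.jsonl", "solution evaluation",
             "Pilot failed for enterprise customers; revisit root cause analysis.")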

Problem solving strategies

Problem solving strategies are methods of approaching and facilitating the process of problem-solving with a set of techniques, actions, and processes. Different strategies are more effective if you are trying to solve broad problems, such as achieving higher growth, versus more focused problems, like how to improve your customer onboarding process.

Broadly, the problem solving steps outlined above should be included in any problem solving strategy though choosing where to focus your time and what approaches should be taken is where they begin to differ. You might find that some strategies ask for the problem identification to be done prior to the session or that everything happens in the course of a one day workshop.

The key similarity is that all good problem solving strategies are structured and designed. Four hours of open discussion is never going to be as productive as a four-hour workshop designed to lead a group through a problem solving process.

Good problem solving strategies are tailored to the team, organization and problem you will be attempting to solve. Here are some example problem solving strategies you can learn from or use to get started.

Use a workshop to lead a team through a group process

Often, the first step to solving problems or organizational challenges is bringing a group together effectively. Most teams have the tools, knowledge, and expertise necessary to solve their challenges – they just need some guidance in how to leverage those skills and a structure and format that allows people to focus their energies.

Facilitated workshops are one of the most effective ways of solving problems of any scale. By designing and planning your workshop carefully, you can tailor the approach and scope to best fit the needs of your team and organization. 

Problem solving workshop

  • Creating a bespoke, tailored process
  • Tackling problems of any size
  • Building in-house workshop capability and encouraging its use

Workshops are an effective strategy for solving problems. By using tried and tested facilitation techniques and methods, you can design and deliver a workshop that is perfectly suited to the unique variables of your organization. You may only have the capacity for a half-day workshop and so need a problem solving process to match.

By using our session planner tool and importing methods from our library of 700+ facilitation techniques, you can create the right problem solving workshop for your team. It might be that you want to encourage creative thinking or look at things from a new angle to unblock your group’s approach to problem solving. By tailoring your workshop design to the purpose, you can help ensure great results.

One of the main benefits of a workshop is the structured approach to problem solving. Not only does this mean that the workshop itself will be successful, but many of the methods and techniques will help your team improve their working processes outside of the workshop. 

We believe that workshops are one of the best tools you can use to improve the way your team works together. Start with a problem solving workshop and then see what team building, culture or design workshops can do for your organization!

Run a design sprint

Great for: 

  • aligning large, multi-discipline teams
  • quickly designing and testing solutions
  • tackling large, complex organizational challenges and breaking them down into smaller tasks

By using design thinking principles and methods, a design sprint is a great way of identifying, prioritizing and prototyping solutions to long term challenges that can help solve major organizational problems with quick action and measurable results.

Some familiarity with design thinking is useful, though not integral, and this strategy can really help a team align if there is some discussion around which problems should be approached first. 

The stage-based structure of the design sprint is also very useful for teams new to design thinking.  The inspiration phase, where you look to competitors that have solved your problem, and the rapid prototyping and testing phases are great for introducing new concepts that will benefit a team in all their future work. 

It can be common for teams to look inward for solutions and so looking to the market for solutions you can iterate on can be very productive. Instilling an agile prototyping and testing mindset can also be great when helping teams move forwards – generating and testing solutions quickly can help save time in the long run and is also pretty exciting!

Break problems down into smaller issues

Organizational challenges and problems are often complicated and large scale in nature. Sometimes, trying to resolve such an issue in one swoop is simply unachievable or overwhelming. Try breaking down such problems into smaller issues that you can work on step by step. You may not be able to solve the problem of churning customers off the bat, but you can work with your team to identify smaller effort but high impact elements and work on those first.

This problem solving strategy can help a team generate momentum, prioritize and get some easy wins. It’s also a great strategy to employ with teams who are just beginning to learn how to approach the problem solving process. If you want some insight into a way to employ this strategy, we recommend looking at our design sprint template below!

Use guiding frameworks or try new methodologies

Some problems are best solved by introducing a major shift in perspective or by using new methodologies that encourage your team to think differently.

Props and tools such as MethodKit, which uses a card-based toolkit for facilitation, or LEGO Serious Play can be great ways to engage your team and find an inclusive, democratic problem solving strategy. Remember that play and creativity are great tools for achieving change, and whatever the challenge, engaging your participants can be very effective where other strategies may have failed.

LEGO Serious Play

  • Improving core problem solving skills
  • Thinking outside of the box
  • Encouraging creative solutions

LEGO Serious Play is a problem solving methodology designed to get participants thinking differently by using 3D models and kinesthetic learning styles. By physically building LEGO models based on questions and exercises, participants are encouraged to think outside of the box and create their own responses. 

Collaborative LEGO Serious Play exercises are also used to encourage communication and build problem solving skills in a group. By using this problem solving process, you can often help different kinds of learners and personality types contribute and unblock organizational problems with creative thinking.

Problem solving strategies like LEGO Serious Play are super effective at helping a team solve more skills-based problems such as communication between teams or a lack of creative thinking. Some problems are not suited to LEGO Serious Play and require a different problem solving strategy.

Card Decks and Method Kits

  • New facilitators or non-facilitators 
  • Approaching difficult subjects with a simple, creative framework
  • Engaging those with varied learning styles

Card decks and method kits are great tools for those new to facilitation or for whom facilitation is not the primary role. Card decks such as The Emotional Culture Deck can be used for complete workshops and, in many cases, can be used right out of the box. MethodKit has a variety of kits designed for scenarios ranging from personal development through to personas and global challenges, so you can find the right deck for your particular needs.

Having an easy to use framework that encourages creativity or a new approach can take some of the friction or planning difficulties out of the workshop process and energize a team in any setting. Simplicity is the key with these methods. Ensuring everyone on your team can get involved and engage with the process as quickly as possible can really contribute to the success of your problem solving strategy.

Source external advice

Looking to peers, experts and external facilitators can be a great way of approaching the problem solving process. Your team may not have the necessary expertise, insights or experience to tackle some issues, or you might simply benefit from a fresh perspective. Some problems may require bringing together an entire team, while for others, coaching managers or team members individually might be the right approach. Remember that not all problems are best resolved in the same manner.

If you’re a solo entrepreneur, peer groups, coaches and mentors can also be invaluable at not only solving specific business problems, but in providing a support network for resolving future challenges. One great approach is to join a Mastermind Group and link up with like-minded individuals and all grow together. Remember that however you approach the sourcing of external advice, do so thoughtfully, respectfully and honestly. Reciprocate where you can and prepare to be surprised by just how kind and helpful your peers can be!

Mastermind Group

  • Solo entrepreneurs or small teams with low capacity
  • Peer learning and gaining outside expertise
  • Getting multiple external points of view quickly

Problem solving in large organizations with lots of skilled team members is one thing, but how about if you work for yourself or in a very small team without the capacity to get the most from a design sprint or LEGO Serious Play session? 

A mastermind group – sometimes known as a peer advisory board – is where a group of people come together to support one another in their own goals, challenges, and businesses. Each participant comes to the group with their own purpose and the other members of the group will help them create solutions, brainstorm ideas, and support one another. 

Mastermind groups are very effective in creating an energized, supportive atmosphere that can deliver meaningful results. Learning from peers from outside of your organization or industry can really help unlock new ways of thinking and drive growth. Access to the experience and skills of your peers can be invaluable in helping fill the gaps in your own ability, particularly in young companies.

A mastermind group is a great solution for solo entrepreneurs, small teams, or for organizations that feel that external expertise or fresh perspectives will be beneficial for them. It is worth noting that Mastermind groups are often only as good as the participants and what they can bring to the group. Participants need to be committed, engaged and understand how to work in this context. 

Coaching and mentoring

  • Focused learning and development
  • Filling skills gaps
  • Working on a range of challenges over time

Receiving advice from a business coach or building a mentor/mentee relationship can be an effective way of resolving certain challenges. The one-to-one format of most coaching and mentor relationships can really help solve the challenges those individuals are having and benefit the organization as a result.

A great mentor can be invaluable when it comes to spotting potential problems before they arise and coming to understand a mentee very well has a host of other business benefits. You might run an internal mentorship program to help develop your team’s problem solving skills and strategies or as part of a large learning and development program. External coaches can also be an important part of your problem solving strategy, filling skills gaps for your management team or helping with specific business issues. 

Now that we’ve explored the problem solving process and the steps you will want to go through in order to have an effective session, let’s look at the skills you and your team need to be more effective problem solvers.

Problem solving skills are highly sought after, whatever industry or team you work in. Organizations are keen to employ people who are able to approach problems thoughtfully and find strong, realistic solutions. Whether you are a facilitator, a team leader or a developer, being an effective problem solver is a skill you’ll want to develop.

Problem solving skills form a whole suite of techniques and approaches that an individual uses to not only identify problems but to discuss them productively before then developing appropriate solutions.

Here are some of the most important problem solving skills everyone from executives to junior staff members should learn. We’ve also included an activity or exercise from the SessionLab library that can help you and your team develop that skill. 

If you’re running a workshop or training session to try and improve problem solving skills in your team, try using these methods to supercharge your process!

Problem solving skills checklist

Active listening

Active listening is one of the most important skills anyone who works with people can possess. In short, active listening is a technique used to not only better understand what is being said by an individual, but also to be more aware of the underlying message the speaker is trying to convey. When it comes to problem solving, active listening is integral for understanding the position of every participant and to clarify the challenges, ideas and solutions they bring to the table.

Some active listening skills include:

  • Paying complete attention to the speaker.
  • Removing distractions.
  • Avoiding interruptions.
  • Taking the time to fully understand before preparing a rebuttal.
  • Responding respectfully and appropriately.
  • Demonstrating attentiveness and positivity with an open posture, eye contact and a smile or nod where appropriate, showing that you are listening and encouraging the speaker to continue.
  • Being aware and respectful of feelings; judging the situation and responding appropriately. You can disagree without being disrespectful.
  • Observing body language.
  • Paraphrasing what was said in your own words, either mentally or verbally.
  • Remaining neutral.
  • Reflecting and taking a moment before responding.
  • Asking deeper questions based on what is said and clarifying points where necessary.
Active Listening   #hyperisland   #skills   #active listening   #remote-friendly   This activity supports participants to reflect on a question and generate their own solutions using simple principles of active listening and peer coaching. It’s an excellent introduction to active listening but can also be used with groups that are already familiar with it. Participants work in groups of three and take turns being: “the subject”, the listener, and the observer.

Analytical skills

All problem solving models require strong analytical skills, particularly during the beginning of the process and when it comes to analyzing how solutions have performed.

Analytical skills are primarily focused on performing an effective analysis by collecting, studying and parsing data related to a problem or opportunity. 

It often involves spotting patterns, being able to see things from different perspectives and using observable facts and data to make suggestions or produce insight. 

Analytical skills are also important at every stage of the problem solving process, and by having these skills, you can ensure that any ideas or solutions you create are backed up analytically and have been sufficiently thought out.

Nine Whys   #innovation   #issue analysis   #liberating structures   With breathtaking simplicity, you can rapidly clarify for individuals and a group what is essentially important in their work. You can quickly reveal when a compelling purpose is missing in a gathering and avoid moving forward without clarity. When a group discovers an unambiguous shared purpose, more freedom and more responsibility are unleashed. You have laid the foundation for spreading and scaling innovations with fidelity.

Collaboration

Trying to solve problems on your own is difficult. Being able to collaborate effectively, with a free exchange of ideas, to delegate and be a productive member of a team is hugely important to all problem solving strategies.

Remember that whatever your role, collaboration is integral, and in a problem solving process, you are all working together to find the best solution for everyone. 

Marshmallow challenge with debriefing   #teamwork   #team   #leadership   #collaboration   In eighteen minutes, teams must build the tallest free-standing structure out of 20 sticks of spaghetti, one yard of tape, one yard of string, and one marshmallow. The marshmallow needs to be on top. The Marshmallow Challenge was developed by Tom Wujec, who has done the activity with hundreds of groups around the world. Visit the Marshmallow Challenge website for more information. This version has an extra debriefing question added with sample questions focusing on roles within the team.

Communication  

Being an effective communicator means being empathetic, clear and succinct, asking the right questions, and demonstrating active listening skills throughout any discussion or meeting. 

In a problem solving setting, you need to communicate well in order to progress through each stage of the process effectively. As a team leader, it may also fall to you to facilitate communication between parties who may not see eye to eye. Effective communication also means helping others to express themselves and be heard in a group.

Bus Trip   #feedback   #communication   #appreciation   #closing   #thiagi   #team   This is one of my favourite feedback games. I use Bus Trip at the end of a training session or a meeting, and I use it all the time. The game creates a massive amount of energy with lots of smiles, laughs, and sometimes even a teardrop or two.

Creativity

Creative problem solving skills can be some of the best tools in your arsenal. Thinking creatively, being able to generate lots of ideas and coming up with out of the box solutions is useful at every step of the process.

The kinds of problems you will likely discuss in a problem solving workshop are often difficult to solve, and by approaching things in a fresh, creative manner, you can often create more innovative solutions.

Having practical creative skills is also a boon when it comes to problem solving. If you can help create quality design sketches and prototypes in record time, it can help bring a team to alignment more quickly or provide a base for further iteration.

The paper clip method   #sharing   #creativity   #warm up   #idea generation   #brainstorming   The power of brainstorming. A training for project leaders, creativity training, and to catalyse getting new solutions.

Critical thinking

Critical thinking is one of the fundamental problem solving skills you’ll want to develop when working on developing solutions. Critical thinking is the ability to analyze, rationalize and evaluate while being aware of personal bias, outlying factors and remaining open-minded.

Defining and analyzing problems without deploying critical thinking skills can mean you and your team go down the wrong path. Developing solutions to complex issues requires critical thinking too – ensuring your team considers all possibilities and evaluates them rationally.

Agreement-Certainty Matrix   #issue analysis   #liberating structures   #problem solving   You can help individuals or groups avoid the frequent mistake of trying to solve a problem with methods that are not adapted to the nature of their challenge. The combination of two questions makes it possible to easily sort challenges into four categories: simple, complicated, complex , and chaotic .  A problem is simple when it can be solved reliably with practices that are easy to duplicate.  It is complicated when experts are required to devise a sophisticated solution that will yield the desired results predictably.  A problem is complex when there are several valid ways to proceed but outcomes are not predictable in detail.  Chaotic is when the context is too turbulent to identify a path forward.  A loose analogy may be used to describe these differences: simple is like following a recipe, complicated like sending a rocket to the moon, complex like raising a child, and chaotic is like the game “Pin the Tail on the Donkey.”  The Liberating Structures Matching Matrix in Chapter 5 can be used as the first step to clarify the nature of a challenge and avoid the mismatches between problems and solutions that are frequently at the root of chronic, recurring problems.

Data analysis 

Though it shares lots of space with general analytical skills, data analysis skills are something you want to cultivate in their own right in order to be an effective problem solver.

Being good at data analysis doesn’t just mean being able to find insights from data, but also selecting the appropriate data for a given issue, interpreting it effectively and knowing how to model and present that data. Depending on the problem at hand, it might also include a working knowledge of specific data analysis tools and procedures. 

Having a solid grasp of data analysis techniques is useful if you’re leading a problem solving workshop, but if you’re not an expert, don’t worry. Bring people into the group who have this skill set and help your team be more effective as a result.

Decision making

All problems need a solution and all solutions require that someone make the decision to implement them. Without strong decision making skills, teams can become bogged down in discussion and less effective as a result. 

Making decisions is a key part of the problem solving process. It’s important to remember that decision making is not restricted to the leadership team. Every staff member makes decisions every day and developing these skills ensures that your team is able to solve problems at any scale. Remember that making decisions does not mean leaping to the first solution but weighing up the options and coming to an informed, well thought out solution to any given problem that works for the whole team.

Lightning Decision Jam (LDJ)   #action   #decision making   #problem solving   #issue analysis   #innovation   #design   #remote-friendly   The problem with anything that requires creative thinking is that it’s easy to get lost—lose focus and fall into the trap of having useless, open-ended, unstructured discussions. Here’s the most effective solution I’ve found: Replace all open, unstructured discussion with a clear process. What to use this exercise for: Anything which requires a group of people to make decisions, solve problems or discuss challenges. It’s always good to frame an LDJ session with a broad topic, here are some examples: The conversion flow of our checkout Our internal design process How we organise events Keeping up with our competition Improving sales flow

Dependability

Most complex organizational problems require multiple people to be involved in delivering the solution. Ensuring that the team and organization can depend on you to take the necessary actions and communicate where necessary is key to ensuring problems are solved effectively.

Being dependable also means working to deadlines and to brief. It is often a matter of creating trust in a team so that everyone can depend on one another to complete the agreed actions in the agreed time frame so that the team can move forward together. Being undependable can create problems of friction and can limit the effectiveness of your solutions so be sure to bear this in mind throughout a project. 

Team Purpose & Culture   #team   #hyperisland   #culture   #remote-friendly   This is an essential process designed to help teams define their purpose (why they exist) and their culture (how they work together to achieve that purpose). Defining these two things will help any team to be more focused and aligned. With support of tangible examples from other companies, the team members work as individuals and a group to codify the way they work together. The goal is a visual manifestation of both the purpose and culture that can be put up in the team’s work space.

Emotional intelligence

Emotional intelligence is an important skill for any successful team member, whether communicating internally or with clients or users. In the problem solving process, emotional intelligence means being attuned to how people are feeling and thinking, communicating effectively and being self-aware of what you bring to a room. 

There are often differences of opinion when working through problem solving processes, and it can be easy to let things become impassioned or combative. Developing your emotional intelligence means being empathetic to your colleagues and managing your own emotions throughout the problem and solution process. Be kind, be thoughtful and put your points across with care and attention.

Being emotionally intelligent is a skill for life and by deploying it at work, you can not only work efficiently but empathetically. Check out the emotional culture workshop template for more!

Facilitation

As we’ve clarified in our facilitation skills post, facilitation is the art of leading people through processes towards agreed-upon objectives in a manner that encourages participation, ownership, and creativity by all those involved. While facilitation is a set of interrelated skills in itself, the broad definition of facilitation can be invaluable when it comes to problem solving. Leading a team through a problem solving process is made more effective if you improve and utilize facilitation skills – whether you’re a manager, team leader or external stakeholder.

The Six Thinking Hats   #creative thinking   #meeting facilitation   #problem solving   #issue resolution   #idea generation   #conflict resolution   The Six Thinking Hats are used by individuals and groups to separate out conflicting styles of thinking. They enable and encourage a group of people to think constructively together in exploring and implementing change, rather than using argument to fight over who is right and who is wrong.

Flexibility 

Being flexible is a vital skill when it comes to problem solving. This does not mean immediately bowing to pressure or changing your opinion quickly: instead, being flexible is all about seeing things from new perspectives, receiving new information and factoring it into your thought process.

Flexibility is also important when it comes to rolling out solutions. It might be that other organizational projects have greater priority or require the same resources as your chosen solution. Being flexible means understanding needs and challenges across the team and being open to shifting or arranging your own schedule as necessary. Again, this does not mean immediately making way for other projects. It’s about articulating your own needs, understanding the needs of others and being able to come to a meaningful compromise.

The Creativity Dice   #creativity   #problem solving   #thiagi   #issue analysis   Too much linear thinking is hazardous to creative problem solving. To be creative, you should approach the problem (or the opportunity) from different points of view. You should leave a thought hanging in mid-air and move to another. This skipping around prevents premature closure and lets your brain incubate one line of thought while you consciously pursue another.

Honesty

Working in any group can lead to unconscious elements of groupthink or situations in which you may not wish to be entirely honest. Disagreeing with the opinions of the executive team or wishing to save the feelings of a coworker can be tricky to navigate, but being honest is absolutely vital when it comes to developing effective solutions and ensuring your voice is heard.

Remember that being honest does not mean being brutally candid. You can deliver your honest feedback and opinions thoughtfully and without creating friction by using other skills such as emotional intelligence. 

Explore your Values   #hyperisland   #skills   #values   #remote-friendly   Your Values is an exercise for participants to explore what their most important values are. It’s done in an intuitive and rapid way to encourage participants to follow their intuitive feeling rather than over-thinking and finding the “correct” values. It is a good exercise to use to initiate reflection and dialogue around personal values.

Initiative 

The problem solving process is multi-faceted and requires different approaches at certain points of the process. Taking the initiative to bring problems to the attention of the team, collect data or lead the solution creating process is always valuable. You might even road-test your own small scale solutions or brainstorm before a session. Taking initiative is particularly effective if you have a good deal of knowledge in that area or have ownership of a particular project and want to get things kickstarted.

That said, be sure to remember to honor the process and work in service of the team. If you are asked to own one part of the problem solving process and you don’t complete that task because your initiative leads you to work on something else, that’s not an effective method of solving business challenges.

15% Solutions   #action   #liberating structures   #remote-friendly   You can reveal the actions, however small, that everyone can do immediately. At a minimum, these will create momentum, and that may make a BIG difference.  15% Solutions show that there is no reason to wait around, feel powerless, or fearful. They help people pick it up a level. They get individuals and the group to focus on what is within their discretion instead of what they cannot change.  With a very simple question, you can flip the conversation to what can be done and find solutions to big problems that are often distributed widely in places not known in advance. Shifting a few grains of sand may trigger a landslide and change the whole landscape.

Impartiality

A particularly useful problem solving skill for product owners or managers is the ability to remain impartial throughout much of the process. In practice, this means treating all points of view and ideas brought forward in a meeting equally and ensuring that your own areas of interest or ownership are not favored over others. 

There may be a stage in the process where a decision maker has to weigh the cost and ROI of possible solutions against the company roadmap, but even then, they should ensure that the decision made is based on merit and not personal opinion.

Empathy map   #frame insights   #create   #design   #issue analysis   An empathy map is a tool to help a design team to empathize with the people they are designing for. You can make an empathy map for a group of people or for a persona. To be used after doing personas when more insights are needed.

Leadership

Being a good leader means getting a team aligned, energized and focused around a common goal. In the problem solving process, strong leadership helps ensure that the process is efficient, that any conflicts are resolved and that a team is managed in the direction of success.

It’s common for managers or executives to assume this role in a problem solving workshop, though it’s important that the leader maintains impartiality and does not bulldoze the group in a particular direction. Remember that good leadership means working in service of the purpose and team and ensuring the workshop is a safe space for employees of any level to contribute. Take a look at our leadership games and activities post for more exercises and methods to help improve leadership in your organization.

Leadership Pizza   #leadership   #team   #remote-friendly   This leadership development activity offers a self-assessment framework for people to first identify what skills, attributes and attitudes they find important for effective leadership, and then assess their own development and initiate goal setting.

Mediation

In the context of problem solving, mediation is important in keeping a team engaged, happy and free of conflict. When leading or facilitating a problem solving workshop, you are likely to run into differences of opinion. Depending on the nature of the problem, certain issues may be brought up that are emotive in nature.

Being an effective mediator means helping people on either side of such a divide be heard, listen to one another, and find common ground and a resolution. Mediating skills are useful for leaders and managers in many situations, and the problem solving process is no different.

Conflict Responses   #hyperisland   #team   #issue resolution   A workshop for a team to reflect on past conflicts, and use them to generate guidelines for effective conflict handling. The workshop uses the Thomas-Killman model of conflict responses to frame a reflective discussion. Use it to open up a discussion around conflict with a team.

Planning 

Solving organizational problems is much more effective when following a process or problem solving model. Planning skills are vital in order to structure, deliver and follow-through on a problem solving workshop and ensure your solutions are intelligently deployed.

Planning skills include the ability to organize tasks and a team, plan and design the process and take into account any potential challenges. Taking the time to plan carefully can save time and frustration later in the process and is valuable for ensuring a team is positioned for success.

3 Action Steps   #hyperisland   #action   #remote-friendly   This is a small-scale strategic planning session that helps groups and individuals to take action toward a desired change. It is often used at the end of a workshop or programme. The group discusses and agrees on a vision, then creates some action steps that will lead them towards that vision. The scope of the challenge is also defined, through discussion of the helpful and harmful factors influencing the group.

Prioritization

As organizations grow, the scale and variation of the problems they face multiplies. Your team is likely to face numerous challenges in different areas, so having the skills to analyze and prioritize becomes very important, particularly for those in leadership roles.

A thorough problem solving process is likely to deliver multiple solutions and you may have several different problems you wish to solve simultaneously. Prioritization is the ability to measure the importance, value, and effectiveness of those possible solutions and choose which to enact and in what order. The process of prioritization is integral in ensuring the biggest challenges are addressed with the most impactful solutions.

Impact and Effort Matrix   #gamestorming   #decision making   #action   #remote-friendly   In this decision-making exercise, possible actions are mapped based on two factors: effort required to implement and potential impact. Categorizing ideas along these lines is a useful technique in decision making, as it obliges contributors to balance and evaluate suggested actions before committing to them.

Project management

Some problem solving skills are utilized in a workshop or ideation phases, while others come in useful when it comes to decision making. Overseeing an entire problem solving process and ensuring its success requires strong project management skills. 

While project management incorporates many of the other skills listed here, it is important to note the distinction of considering all of the factors of a project and managing them successfully. Being able to negotiate with stakeholders, manage tasks, time and people, consider costs and ROI, and tie everything together is massively helpful when going through the problem solving process. 

Record keeping

Working out meaningful solutions to organizational challenges is only one part of the process.  Thoughtfully documenting and keeping records of each problem solving step for future consultation is important in ensuring efficiency and meaningful change. 

For example, some problems may be lower priority than others but can be revisited in the future. If the team has ideated on solutions and found some are not up to the task, record those so you can rule them out and avoid repeating work. Keeping records of the process also helps you improve and refine your problem solving model next time around!

Personal Kanban   #gamestorming   #action   #agile   #project planning   Personal Kanban is a tool for organizing your work to be more efficient and productive. It is based on agile methods and principles.

Research skills

Conducting research to support both the identification of problems and the development of appropriate solutions is important for an effective process. Knowing where to go to collect research, how to conduct research efficiently, and identifying which pieces of research are relevant are all things a good researcher can do well.

In larger groups, not everyone has to demonstrate this ability in order for a problem solving workshop to be effective. That said, having people with research skills involved in the process, particularly if they have existing area knowledge, can help ensure that the solutions developed are backed by data that supports their intention. Remember that being able to deliver the results of research efficiently and in a way the team can easily understand is also important. The best data in the world is only as effective as how it is delivered and interpreted.

Customer experience map   #ideation   #concepts   #research   #design   #issue analysis   #remote-friendly   Customer experience mapping is a method of documenting and visualizing the experience a customer has as they use the product or service. It also maps out their responses to their experiences. To be used when there is a solution (even in a conceptual stage) that can be analyzed.

Risk management

Managing risk is an often overlooked part of the problem solving process. Solutions are often developed with the intention of reducing exposure to risk or solving issues that create risk but sometimes, great solutions are more experimental in nature and as such, deploying them needs to be carefully considered. 

Managing risk means acknowledging that there may be risks associated with more out of the box solutions or trying new things, but that this must be measured against the possible benefits and other organizational factors. 

Be informed, get the right data and stakeholders in the room and you can appropriately factor risk into your decision making process. 

Decisions, Decisions…   #communication   #decision making   #thiagi   #action   #issue analysis   When it comes to decision-making, why are some of us more prone to take risks while others are risk-averse? One explanation might be the way the decision and options were presented.  This exercise, based on Kahneman and Tversky’s classic study , illustrates how the framing effect influences our judgement and our ability to make decisions . The participants are divided into two groups. Both groups are presented with the same problem and two alternative programs for solving them. The two programs both have the same consequences but are presented differently. The debriefing discussion examines how the framing of the program impacted the participant’s decision.

Team-building 

No single person is as good at problem solving as a team. Building an effective team and helping them come together around a common purpose is one of the most important problem solving skills, doubly so for leaders. By bringing a team together and helping them work efficiently, you pave the way for team ownership of a problem and the development of effective solutions. 

In a problem solving workshop, it can be tempting to jump right into the deep end, though taking the time to break the ice, energize the team and align them with a game or exercise will pay off over the course of the day.

Remember that you will likely go through the problem solving process multiple times over an organization’s lifespan and building a strong team culture will make future problem solving more effective. It’s also great to work with people you know, trust and have fun with. Working on team building in and out of the problem solving process is a hallmark of successful teams that can work together to solve business problems.

9 Dimensions Team Building Activity   #ice breaker   #teambuilding   #team   #remote-friendly   9 Dimensions is a powerful activity designed to build relationships and trust among team members. There are 2 variations of this icebreaker. The first version is for teams who want to get to know each other better. The second version is for teams who want to explore how they are working together as a team.

Time management 

The problem solving process is designed to lead a team from identifying a problem through to delivering a solution and evaluating its effectiveness. Without effective time management skills or timeboxing of tasks, it can be easy for a team to get bogged down or be inefficient.

By using a problem solving model and carefully designing your workshop, you can allocate time efficiently and trust that the process will deliver the results you need in a good timeframe.

Time management also comes into play when it comes to rolling out solutions, particularly those that are experimental in nature. Having a clear timeframe for implementing and evaluating solutions is vital for ensuring their success and being able to pivot if necessary.

Improving your skills at problem solving is often a career-long pursuit, though there are methods you can use to make the learning process more efficient and to supercharge your problem solving skillset.

Remember that the skills you need to be a great problem solver have a large overlap with those skills you need to be effective in any role. Investing time and effort to develop your active listening or critical thinking skills is valuable in any context. Here are 7 ways to improve your problem solving skills.

Share best practices

Remember that your team is an excellent source of skills, wisdom, and techniques and that you should all take advantage of one another where possible. Best practices that one team has for solving problems, conducting research or making decisions should be shared across the organization. If you have in-house staff that have done active listening training or are data analysis pros, have them lead a training session. 

Your team is one of your best resources. Create space and internal processes for the sharing of skills so that you can all grow together. 

Ask for help and attend training

Once you’ve figured out you have a skills gap, the next step is to take action to fill that skills gap. That might be by asking your superior for training or coaching, or liaising with team members with that skill set. You might even attend specialized training for certain skills – active listening or critical thinking, for example, are business-critical skills that are regularly offered as part of a training scheme.

Whatever method you choose, remember that taking action of some description is necessary for growth. Whether that means practicing, getting help, attending training or doing some background reading, taking active steps to improve your skills is the way to go.

Learn a process 

Problem solving can be complicated, particularly when attempting to solve large problems for the first time. Using a problem solving process helps give structure to your problem solving efforts and focus on creating outcomes, rather than worrying about the format. 

Tools such as the seven-step problem solving process above are effective because they not only feature steps that will help a team solve problems, they also develop skills along the way. Each step asks people to engage with the process using different skills and, in doing so, helps the team learn and grow together. Group processes of varying complexity and purpose can also be found in the SessionLab library of facilitation techniques. Using a tried and tested process can really help ease the learning curve, both for those leading the process and for those participating in it.

Effective teams make decisions about where they should and shouldn’t expend additional effort. By using a problem solving process, you can focus on the things that matter, rather than stumbling towards a solution haphazardly. 

Create a feedback loop

Some skills gaps are more obvious than others. It’s possible that your perception of your active listening skills differs from those of your colleagues. 

It’s valuable to create a system where team members can provide feedback in an ordered and friendly manner so they can all learn from one another. Only by identifying areas of improvement can you then work to improve them. 

Remember that feedback systems require oversight and consideration so that they don’t turn into a place to complain about colleagues. Design the system intelligently so that you encourage the creation of learning opportunities, rather than encouraging people to list their pet peeves.

Practice

While practice might not make perfect, it does make the problem solving process easier. If you are having trouble with critical thinking, don’t shy away from doing it. Get involved where you can and stretch those muscles as regularly as possible.

Problem solving skills come more naturally to some than to others and that’s okay. Take opportunities to get involved and see where you can practice your skills in situations outside of a workshop context. Try collaborating in other circumstances at work or conduct data analysis on your own projects. You can often develop those skills you need for problem solving simply by doing them. Get involved!

Use expert exercises and methods

Learn from the best. Our library of 700+ facilitation techniques is full of activities and methods that help develop the skills you need to be an effective problem solver. Check out our templates to see how to approach problem solving and other organizational challenges in a structured and intelligent manner.

There is no single approach to improving problem solving skills, but by using the techniques employed by others you can learn from their example and develop processes that have seen proven results. 

Try new ways of thinking and change your mindset

Using tried and tested exercises that you know well can help deliver results, but you do run the risk of missing out on the learning opportunities offered by new approaches. As with the problem solving process, changing your mindset can remove blockages and be used to develop your problem solving skills.

Most teams have members with mixed skill sets and specialties. Mix people from different teams and share skills and different points of view. Teach your customer support team how to use design thinking methods or help your developers with conflict resolution techniques. Try switching perspectives with facilitation techniques like Flip It! or by using new problem solving methodologies or models. Give design thinking, Liberating Structures or LEGO Serious Play a try if you want a new approach. You will find that framing problems in new ways and using existing skills in new contexts can be hugely useful for personal development and for improving your skillset. It’s also a lot of fun to try new things. Give it a go!

Learn from others

Encountering business challenges and needing to find appropriate solutions is not unique to your organization. Lots of very smart people have developed methods, theories and approaches to help develop problem solving skills and create effective solutions. Learn from them!

Books like The Art of Thinking Clearly, Think Smarter, or Thinking, Fast and Slow are great places to start, though it’s also worth looking at blogs related to organizations facing similar problems to yours, or browsing for success stories. Seeing how Dropbox massively increased growth and working backward can help you see the skills or approach you might be lacking to solve that same problem. Learning from others by reading their stories or approaches can be time-consuming but ultimately rewarding.

Rest and recharge

A tired, distracted mind is not in the best position to learn new skills. It can be tempting to burn the candle at both ends and develop problem solving skills outside of work. Absolutely use your time effectively and take opportunities for self-improvement, though remember that rest is hugely important and that without letting your brain rest, you cannot be at your most effective.

Creating distance between yourself and the problem you might be facing can also be useful. By letting an idea sit, you can find that a better one presents itself or you can develop it further. Take regular breaks when working and create a space for downtime. Remember that working smarter is preferable to working harder and that self-care is important for any effective learning or improvement process.


Over to you

Now that we’ve explored some of the key problem solving skills and the problem solving steps necessary for an effective process, you’re ready to begin developing more effective solutions and leading problem solving workshops.

Need more inspiration? Check out our post on problem solving activities you can use when guiding a group towards a great solution in your next workshop or meeting. Have questions? Do you have a great problem solving technique you use with your team? Get in touch in the comments below. We’d love to chat!



A Guide for Solving Your Lab Math Problems


Math is an important part of lab life, from making solutions to calculating protein concentrations, and miscalculations can cause mayhem for your experiments. Therefore it is important that your math is right, or you could spend weeks trying to figure out what’s going wrong in your experiments.

I was hopeless at remembering how to do even simple calculations, so I kept a cheat sheet in the back of my lab book that I referred to on a regular basis.

I want to make your life easier too, and that’s why I’ve put together some of the key (in my opinion) calculations important for a molecular biologist.

Making up solutions

Making up solutions is the routine chore that everyone avoids until absolutely necessary. There are three key equations that you will need in order to make up simple solutions.

1.  Calculating moles. If you need to make up a solution where you know the desired concentration (molarity) and volume then you first need to calculate the number of moles in that solution.

The calculation for this is simple:

  n = M × V

That is: moles = molarity (concentration in mol/L) × volume (in litres)

2.  Once you’ve got the moles, you can then work out the mass required using the following equation:

  m = n × MW

That is: mass (in grams) = moles × molecular weight (in g mol⁻¹).
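If you make solutions often, it can be worth scripting this. Below is a minimal Python sketch that chains the two equations above; the function name and example values are my own for illustration, not from any standard library.

```python
def grams_to_weigh(molarity, volume_litres, mol_weight):
    """Mass of solute (in g) to weigh out: combines n = M x V with m = n x MW."""
    moles = molarity * volume_litres   # n (mol) = M (mol/L) x V (L)
    return moles * mol_weight          # m (g) = n (mol) x MW (g/mol)

# Example: 500 mL of 0.15 M NaCl (MW = 58.44 g/mol) -> ~4.38 g
print(grams_to_weigh(0.15, 0.5, 58.44))
```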

3.  The two above equations will enable you to make solutions from scratch but what if you want to make a solution where you already have a stock solution of a higher concentration?

Diluting stock solutions is really simple and can be achieved using the following calculation:

V₁ × C₁ = V₂ × C₂

Where:

V₁ = volume of stock solution required

C₁ = concentration of stock solution

V₂ = volume of final solution

C₂ = concentration of final solution

So in the case where we need to find the volume required from our stock solution, we rearrange the equation so that:

V₁ = (V₂ × C₂) / C₁

The specific units aren’t important, as long as both volumes are in the same units and both concentrations are in the same units.
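As a quick sanity check for dilutions, a small helper like the following can be handy. This is just a sketch of the rearranged equation above; the guard against impossible dilutions is an optional extra of my own.

```python
def stock_volume(c_final, v_final, c_stock):
    """V1 = (V2 x C2) / C1. Both volumes share one unit; both concentrations another."""
    if c_stock < c_final:
        raise ValueError("Stock must be more concentrated than the final solution")
    return (v_final * c_final) / c_stock

# Example: 50 mL of 1x buffer from a 10x stock -> 5.0 mL stock, topped up to 50 mL
print(stock_volume(c_final=1, v_final=50, c_stock=10))
```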

Calculating concentrations of DNA or RNA

The simplest way to calculate DNA or RNA concentration is using a spectrophotometer. For a 1 cm path length (this is the width of the cuvette – most cuvettes have a standard width of 1 cm), dsDNA at a concentration of 50 µg/mL and RNA at a concentration of 40 µg/mL each have an optical density of 1 at 260 nm. This means that by measuring the optical density of a solution of DNA or RNA at 260 nm, we can determine the concentration in our sample using the following calculations:

dsDNA concentration (in µg/mL) = 50 × OD₂₆₀ × dilution factor

RNA concentration (in µg/mL) = 40 × OD₂₆₀ × dilution factor

The dilution factor is the dilution of the solution measured compared to the ‘stock’ solution of your DNA. It’s a good idea to dilute your DNA for measuring the OD for several reasons: firstly, so you don’t use up all of your stock; secondly, DNA can be quite viscous at high concentrations, which can introduce pipetting errors; and finally, you want to aim for an OD between 0.1 and 1 in order to get the most accurate quantitation (this is the linear range for most spectrophotometers). As 260 nm is in the UV spectrum, you need to use a specialised UV cuvette.
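Putting the formulas together, a short script can convert a reading straight to a stock concentration. A minimal sketch, assuming the standard conversion factors above apply (i.e. a 1 cm path length); the names are my own:

```python
# Conversion factors for a 1 cm path length, in ug/mL per OD260 unit
OD260_FACTORS = {"dsDNA": 50, "RNA": 40}

def nucleic_acid_concentration(od260, dilution_factor, kind="dsDNA"):
    """Concentration (ug/mL) of the undiluted stock from a diluted OD260 reading."""
    return OD260_FACTORS[kind] * od260 * dilution_factor

# Example: dsDNA diluted 1:20 for measurement, reading OD260 = 0.25
print(nucleic_acid_concentration(0.25, 20))          # 250.0 ug/mL
print(nucleic_acid_concentration(0.25, 20, "RNA"))   # 200.0 ug/mL
```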

Calculating purity of DNA or RNA

As much as we all like to feel like perfectionists in the lab, solutions of DNA and RNA contain contaminants that can affect the optical density at 260 nm. Therefore it’s a good idea to test the purity of your DNA and RNA by measuring for contaminants. The main contaminant of extracted DNA and RNA is protein, which absorbs at 280 nm. To check for the presence of contaminating proteins in your sample, simply measure the OD at 280 nm and calculate the ratio OD₂₆₀/OD₂₈₀. Pure DNA should have a ratio of ~1.8, while pure RNA should have a ratio of ~2.0.

There are other contaminants that absorb at different wavelengths and can be measured to determine how pure your sample is, but proteins tend to be the main culprits.
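If you log your readings, a tiny helper can flag suspect samples automatically. This is only a rough sketch; the 1.7 cut-off below is a common rule of thumb for DNA rather than a hard threshold, and the function name is my own.

```python
def a260_a280_ratio(od260, od280):
    """OD260/OD280 ratio: ~1.8 suggests pure DNA, ~2.0 suggests pure RNA."""
    return od260 / od280

ratio = a260_a280_ratio(0.45, 0.25)
print(f"A260/A280 = {ratio:.2f}")   # 1.80 -- consistent with pure DNA
if ratio < 1.7:
    print("Possible protein contamination -- consider re-purifying")
```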

Converting units

Sometimes it’s easy to forget how many picograms are in a gram, so it is handy to have a simple reference to check just to make sure. The easiest way to remember is that the difference between adjacent units is 10³:

  • 1 g = 10³ mg (milligrams)
  • 1 g = 10⁶ µg (micrograms)
  • 1 g = 10⁹ ng (nanograms)
  • 1 g = 10¹² pg (picograms)

So in order to convert picograms to grams, you need to multiply by 10⁻¹².
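Unit conversions are also easy to script. The sketch below encodes each unit as its power of ten relative to grams, so any pair of units can be converted with a single subtraction of exponents; the dictionary and function names are my own.

```python
# Exponent of each mass unit relative to grams
UNIT_EXPONENTS = {"g": 0, "mg": -3, "ug": -6, "ng": -9, "pg": -12}

def convert_mass(value, from_unit, to_unit):
    """Convert between metric mass units using powers of ten."""
    return value * 10 ** (UNIT_EXPONENTS[from_unit] - UNIT_EXPONENTS[to_unit])

print(convert_mass(2500, "pg", "ng"))   # 2.5
print(convert_mass(0.5, "g", "ug"))     # 500000.0
```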

Ligation calculations

For an optimal ligation reaction you want a 1:1 molar ratio of insert to vector, although alternative ratios can be tested (such as 1:3 and 3:1). To calculate the amount of insert required for a 1:1 ratio, use the following equation:

  (kb of insert / kb of vector) × ng of vector = ng of insert required
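The same equation generalises to other molar ratios by multiplying through. A minimal Python sketch, with a hypothetical molar_ratio parameter added for ratios other than 1:1:

```python
def insert_ng_required(vector_ng, insert_kb, vector_kb, molar_ratio=1.0):
    """ng of insert needed for a given insert:vector molar ratio."""
    return molar_ratio * (insert_kb / vector_kb) * vector_ng

# Example: 50 ng of a 3.0 kb vector with a 0.9 kb insert
print(insert_ng_required(50, 0.9, 3.0))                 # 15.0 ng at 1:1
print(insert_ng_required(50, 0.9, 3.0, molar_ratio=3))  # 45.0 ng at 3:1
```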

Calculating cell concentration using a haemocytometer

Seeding cells at the correct density is important for the consistency of experiments, as well as for maintaining a healthy stock of cells. Calculating cell concentration is simple with the use of a haemocytometer. Simply apply the cell solution to the haemocytometer and count the number of cells in a 1 mm × 1 mm square (made up of 5 × 5 small squares).

Then use the following calculation:

number of cells × dilution factor × 10⁴ = cells per mL

I would usually use a dilution of 1:1 with trypan blue (a vital stain that will stain any dead cells blue), but you can alter this depending on how concentrated your sample is. You want to count roughly between 40 and 70 cells in order to get an accurate reading.
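To avoid slips when back-calculating, you might wrap the haemocytometer formula in a function. A sketch under the assumptions above: one 1 mm × 1 mm square corresponds to 0.1 µL (hence the factor of 10⁴), and a 1:1 dilution in trypan blue means a dilution factor of 2.

```python
def cells_per_ml(cells_counted, dilution_factor=2):
    """Cells/mL from one 1 mm x 1 mm haemocytometer square (0.1 uL volume)."""
    return cells_counted * dilution_factor * 1e4

# Example: 55 cells counted after a 1:1 dilution in trypan blue
print(f"{cells_per_ml(55):.2e} cells/mL")   # 1.10e+06
```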


Haemocytometer grid.

For a quick reference guide, download and print out my simple cheat sheet here to ensure all the necessary calculations are close at hand.


Leave a Comment Cancel Reply

You must be logged in to post a comment.

This site uses Akismet to reduce spam. Learn how your comment data is processed .

Forgot your password?

Lost your password? Please enter your email address. You will receive mail with link to set new password.

Back to login


26 Good Examples of Problem Solving (Interview Answers)

By Biron Clark

Published: November 15, 2023

Employers like to hire people who can solve problems and work well under pressure. A job rarely goes 100% according to plan, so hiring managers will be more likely to hire you if you seem like you can handle unexpected challenges while staying calm and logical in your approach.

But how do they measure this?

They’re going to ask you interview questions about these problem solving skills, and they might also look for examples of problem solving on your resume and cover letter. So coming up, I’m going to share a list of examples of problem solving, whether you’re an experienced job seeker or recent graduate.

Then I’ll share sample interview answers to, “Give an example of a time you used logic to solve a problem?”

Problem-Solving Defined

Problem-solving is the ability to identify a problem, prioritize it based on gravity and urgency, analyze the root cause, gather relevant information, develop and evaluate viable solutions, decide on the most effective and logical solution, and plan and execute its implementation.

Problem-solving also involves critical thinking, communication, listening, creativity, research, data gathering, risk assessment, continuous learning, decision-making, and other soft and technical skills.

Solving problems not only prevents losses or damages but also boosts your self-confidence and reputation when you execute solutions successfully. The spotlight shines on you when people see you handle issues with ease and savvy despite the challenges, and your potential to be a future leader who can take on more significant roles and tackle bigger setbacks shines through. Problem-solving is a skill you can master by learning from others and acquiring wisdom from their experiences and your own.

It takes a village to come up with solutions, but a good problem solver can steer the team towards the best choice and implement it to achieve the desired result.


Examples of Problem Solving Scenarios in the Workplace

  • Correcting a mistake at work, whether it was made by you or someone else
  • Overcoming a delay at work through problem solving and communication
  • Resolving an issue with a difficult or upset customer
  • Overcoming issues related to a limited budget, and still delivering good work through the use of creative problem solving
  • Overcoming a scheduling/staffing shortage in the department to still deliver excellent work
  • Troubleshooting and resolving technical issues
  • Handling and resolving a conflict with a coworker
  • Solving any problems related to money, customer billing, accounting and bookkeeping, etc.
  • Taking initiative when another team member overlooked or missed something important
  • Taking initiative to meet with your superior to discuss a problem before it became potentially worse
  • Solving a safety issue at work or reporting the issue to those who could solve it
  • Using problem solving abilities to reduce/eliminate a company expense
  • Finding a way to make the company more profitable through new service or product offerings, new pricing ideas, promotion and sale ideas, etc.
  • Changing how a process, team, or task is organized to make it more efficient
  • Using creative thinking to come up with a solution that the company hasn’t used before
  • Performing research to collect data and information to find a new solution to a problem
  • Boosting a company or team’s performance by improving some aspect of communication among employees
  • Finding a new piece of data that can guide a company’s decisions or strategy better in a certain area

Problem Solving Examples for Recent Grads/Entry Level Job Seekers

  • Coordinating work between team members in a class project
  • Reassigning a missing team member’s work to other group members in a class project
  • Adjusting your workflow on a project to accommodate a tight deadline
  • Speaking to your professor to get help when you were struggling or unsure about a project
  • Asking classmates, peers, or professors for help in an area of struggle
  • Talking to your academic advisor to brainstorm solutions to a problem you were facing
  • Researching solutions to an academic problem online, via Google or other methods
  • Using problem solving and creative thinking to obtain an internship or other work opportunity during school after struggling at first

You can share all of the examples above when you’re asked questions about problem solving in your interview. As you can see, even if you have no professional work experience, it’s possible to think back to problems and unexpected challenges that you faced in your studies and discuss how you solved them.

Interview Answers to “Give an Example of an Occasion When You Used Logic to Solve a Problem”

Now, let’s look at some sample interview answers to, “Give me an example of a time you used logic to solve a problem,” since you’re likely to hear this interview question in all sorts of industries.

Example Answer 1:

At my current job, I recently solved a problem where a client was upset about our software pricing. They had misunderstood the sales representative who explained pricing originally, and when their package renewed for its second month, they called to complain about the invoice. I apologized for the confusion and then spoke to our billing team to see what type of solution we could come up with. We decided that the best course of action was to offer a long-term pricing package that would provide a discount. This not only solved the problem but got the customer to agree to a longer-term contract, which means we’ll keep their business for at least one year now, and they’re happy with the pricing. I feel I got the best possible outcome and the way I chose to solve the problem was effective.

Example Answer 2:

In my last job, I had to do quite a bit of problem solving related to our shift scheduling. We had four people quit within a week and the department was severely understaffed. I coordinated a ramp-up of our hiring efforts, I got approval from the department head to offer bonuses for overtime work, and then I found eight employees who were willing to do overtime this month. I think the key problem solving skills here were taking initiative, communicating clearly, and reacting quickly to solve this problem before it became an even bigger issue.

Example Answer 3:

In my current marketing role, my manager asked me to come up with a solution to our declining social media engagement. I assessed our current strategy and recent results, analyzed what some of our top competitors were doing, and then came up with an exact blueprint we could follow this year to emulate our best competitors but also stand out and develop a unique voice as a brand. I feel this is a good example of using logic to solve a problem because it was based on analysis and observation of competitors, rather than guessing or quickly reacting to the situation without reliable data. I always use logic and data to solve problems when possible. The project turned out to be a success and we increased our social media engagement by an average of 82% by the end of the year.

Answering Questions About Problem Solving with the STAR Method

When you answer interview questions about problem solving scenarios, or if you decide to demonstrate your problem solving skills in a cover letter (which is a good idea any time the job description mentions problem solving as a necessary skill), I recommend using the STAR method to tell your story.

STAR stands for: Situation, Task, Action, Result.

It’s a simple way of walking the listener or reader through the story in a way that will make sense to them. So before jumping in and talking about the problem that needed solving, make sure to describe the general situation: What job/company were you working at? When was this? Then, describe the task at hand and the problem that needed solving. After this, describe the course of action you chose and why. Ideally, show that you evaluated all the information you could given the time you had, and made a decision based on logic and fact.

Finally, describe a positive result you got.

Whether you’re answering interview questions about problem solving or writing a cover letter, you should only choose examples where you got a positive result and successfully solved the issue.

Example answer:

Situation: We had an irate client who was a social media influencer and had impossible delivery time demands we could not meet. She spoke negatively about us in her vlog and asked her followers to boycott our products.

Task: To develop an official statement to explain our company’s side, clarify the issue, and prevent it from getting out of hand.

Action: I drafted a statement that balanced empathy, understanding, and utmost customer service with facts, logic, and fairness. It was direct, simple, succinct, and phrased to highlight our brand values while addressing the issue in a logical yet sensitive way. We also tapped our influencer partners to subtly and indirectly share their positive experiences with our brand so we could counter the negative content being shared online.

Result: We got the results we worked for through proper communication and a positive and strategic campaign. The irate client agreed to have a dialogue with us. She apologized to us, and we reaffirmed our commitment to delivering quality service to all. We assured her that she could reach out to us anytime regarding her purchases and that we’d gladly accommodate her requests whenever possible. She also retracted her negative statements in her vlog and urged her followers to keep supporting our brand.

What Are Good Outcomes of Problem Solving?

Whenever you answer interview questions about problem solving or share examples of problem solving in a cover letter, you want to be sure you’re sharing a positive outcome.

Below are good outcomes of problem solving:

  • Saving the company time or money
  • Making the company money
  • Pleasing/keeping a customer
  • Obtaining new customers
  • Solving a safety issue
  • Solving a staffing/scheduling issue
  • Solving a logistical issue
  • Solving a company hiring issue
  • Solving a technical/software issue
  • Making a process more efficient and faster for the company
  • Creating a new business process to make the company more profitable
  • Improving the company’s brand/image/reputation
  • Getting the company positive reviews from customers/clients

Every employer wants to make more money, save money, and save time. If you can assess your problem solving experience and think about how you’ve helped past employers in those three areas, then that’s a great start. That’s where I recommend you begin looking for stories of times you had to solve problems.

Tips to Improve Your Problem Solving Skills

Throughout your career, you’re going to get hired for better jobs and earn more money if you can show employers that you’re a problem solver. So to improve your problem solving skills, I recommend always analyzing a problem and situation before acting. When discussing problem solving with employers, you never want to sound like you rush or make impulsive decisions. They want to see fact-based or data-based decisions when you solve problems.

Next, to get better at solving problems, analyze the outcomes of past solutions you came up with. You can recognize what works and what doesn’t. Think about how you can get better at researching and analyzing a situation, but also how you can get better at communicating and at deciding which people in the organization to talk to and “pull in” to help you if needed.

Finally, practice staying calm even in stressful situations. Take a few minutes to walk outside if needed. Step away from your phone and computer to clear your head. A work problem is rarely so urgent that you cannot take five minutes to think (with the possible exception of safety problems), and you’ll get better outcomes if you solve problems by acting logically instead of rushing to react in a panic.

You can use all of the ideas above to describe your problem solving skills when asked interview questions about the topic. If you say that you do the things above, employers will be impressed when they assess your problem solving ability.

If you practice the tips above, you’ll be ready to share detailed, impressive stories and problem solving examples that will make hiring managers want to offer you the job. Every employer appreciates a problem solver, whether solving problems is a requirement listed on the job description or not. And you never know which hiring manager or interviewer will ask you about a time you solved a problem, so you should always be ready to discuss this when applying for a job.

Related interview questions & answers:

  • How do you handle stress?
  • How do you handle conflict?
  • Tell me about a time when you failed



Our methods

Look for important problems, instead of ordinary ones

There are two reasons to look for important problems.

If you want a great career as an employee in a commercial organization, you will be promoted faster and farther by solving important problems, rather than problems of lesser significance. 

If you want to create a great venture, it is easier to find customers, investors, allies, and collaborators by solving important problems. If the problem is important, customers will want to buy the solution with little if any persuasion, and you can usually charge a premium price.

But aren’t all problems important to someone?

No! Below we outline our methodology for identifying important problems and, as you will see, you can find evidence that allows you to judge importance objectively and dependably.

Of course, any attempt to solve an important problem is always an experiment, and any experiment can fail. But it is always better to proceed with more information, rather than less; always better to create a well-considered experiment, rather than one doomed to fail.

Our methodology

1. Start looking for important problems in domains you find personally interesting

The domain can be any topic (a technology, a marketplace, an activity, an intellectual pursuit, or any other area you can define), but you need to find the topic inherently interesting; otherwise you will not bring the intensity of focus, or the commitment of purpose, that is necessary for success.

Be aware that the most common mistake in problem analysis is to leap into action and try to solve the first problem you encounter, whether it’s important or not. It’s a common human tendency; resist it and continue with the process!

2. Document as many examples of important problems as you can find

An important problem is one that is mission critical to those affected. In other words, the problem is seriously and adversely affecting a primary goal of an individual or organization. Important problems are usually:

  • Repeatedly and urgently discussed
  • Systemic, affecting many different outcomes
  • Long-standing
  • Mission-critical

You will find it relatively easy to identify important problems, since they are almost always actively and repeatedly discussed in the curated media relevant to your domain. Look at both the business and the specialized media. For reliable information, it is essential that you use curated media; social media can be a starting point, but it is no substitute.

Do not reject a problem because you cannot immediately or easily think of a solution.

3. Select several problems for further investigation

By examining more than one important problem, you increase the likelihood that you will find a problem you really want to solve, and one you feel you thoroughly understand.

4. Analyze the problem by scale, context, history and failures

For each chosen problem, conduct the following analysis before you attempt to develop a solution. There is a natural tendency to leap into action prematurely; resist the urge and continue with the process! 

Scale of the problem

You need to confirm the importance of the problem by documenting exactly who it is important to. Who has already complained about this problem and how many times did they complain? Did they complain loudly?

You need to: 

  • Document the number and characteristics of those affected. See Note 1.
  • Determine whether it is their most important problem, or the second most important. If it is the thirty-third most important problem, those affected should fix it themselves. See Note 2.
  • Remember that, generally speaking, commercial organizations are better customers than individuals. See Note 3.
  • Confirm that solving the problem would produce global sales of at least one billion dollars in the near to medium term.

Context of the Problem

What are the causes and effects of the problem?

You need to ask:

  • Is your problem caused by another, deeper problem? If so, this makes your problem a secondary one. However, if your problem causes other problems, it might be a primary problem: the source, or root, of the others. If you solved a primary problem, you would then contribute to solving other problems; the effect of the solution is therefore amplified. See Note 4.
  • What is the wider context? Are there a few competitors, or many? Does the aging of the population greatly affect your problem, or not? Does the legal structure impede or facilitate the problem? Is changing technology relevant or not?

Research the history of the problem

The history of the problem provides the insight necessary for an excellent solution. See Note 5.

You need to ask: 

  • How long has the problem been recognized?
  • Does the problem appear to be growing in importance?
  • Has the scale of the problem changed?
  • Has there been a change in those affected by the problem? 
  • Have the causes or effects of the problem changed?
  • Have the circumstances and conditions affecting the problem changed over time?
  • Has the primacy of the problem changed over time?
  • Have there been previous attempts to solve the problem?

Analyze past failures 

An essential part of the research is the documentation of past failures to solve the problem. You need to fully describe as many attempts to solve the problem as you can find. You need to know who made them and why they failed; otherwise you are not learning from the mistakes of others and may repeat them.

  • Why did the attempt fail? You are looking for a specific mistake, one that you can take action to correct.
  • Identifying that mistake is essential, since it tells you where to start your own research for a solution. See Note 5.

Note 1. Scale: Who is the problem important to?

Who finds a problem important is not always immediately clear; it may be that the importance lies with a less obvious customer. For example, a problem might be more important to the government and its regulators than to the consumer who may be forced to buy the solution. Mandated air bags are such an example.

Note 2. Scale: Relative Importance

To say that someone finds an issue important does not tell you enough about a problem to take action with confidence. Most people and organizations have more than one important challenge and you need to know how relatively important that issue is in context of their other problems. Is it your customers’ most or least important challenge? Is this the issue they want solved above all others or is it a high priority, with other considerations? Is it important, but not pressingly urgent? The higher the relative importance, the greater the commercial opportunity.

Note 3. Scale: Individuals or Organization

There are two categories of problems: Business and Consumer. Determining the importance is different for each category. Solutions that solve business problems are more likely to achieve faster and greater success than solutions that solve consumer problems.

Business Problems (Business to Business)

A problem is important to commercial customers for one of two reasons: either the problem has a substantial effect on the organization’s current profitability, or on its future profitability. There are two ways to substantially increase profit: lower costs or increase sales.

For many organizations, a small increase in profitability is not worth the effort of implementing a solution. What constitutes a substantial profit increase varies with the scale of the enterprise in question. For a large enterprise, a substantial increase might need to be at least $100 million, while for a smaller enterprise it might be $100,000. It will depend on the view of the owner(s)/shareholder(s).

Additionally, a problem becomes important for commercial customers if it threatens future profitability. Examples include an anticipated increase in costs or competition, or changes in technology and the marketplace that alter business dynamics.

A problem may also be important if it prevents a company from using existing or new technology fully, thereby limiting its future profitability. 

Consumer Problems

It is more difficult to identify an important problem for consumers than for businesses, because the problems that are important to consumers arise for many different reasons. For example, consumers often find it important to save money or time, to communicate effortlessly, or to avoid boredom. Smartphones drive demand for the entertainment and gaming industries by allowing consumers to communicate and avoid boredom; ridesharing services allow consumers to save time and money.

A consumer business by definition sells a product to the consumer. [If you receive a good or service without paying, you are a user, but not a customer.]

Note that although Facebook and Google appeal to consumers, they are not consumer businesses: they sell advertising and consumer-insight services to other businesses.

Note 4. Context: Primary or Secondary Problems

Rarely is the key problem immediately obvious, and frequently the obvious problem is neither the root cause nor the source of urgency. Ventures and innovations have failed when the wrong problem was identified, even though a real and important problem existed.

Note 5. History and Failure

There are only two ways to change the future, only two ways to solve socially and economically important problems: you can proceed either by luck or by design. Without delay, hesitation, or careful research, you can generate multiple potential solutions, calling it brainstorming or customer discovery or pivoting. But that is essentially taking pot shots at the future, hoping something will stick.

Or you can proceed with logic and evidence, making sure that you fully understand the problem before you begin to solve it. While success cannot be guaranteed, you are dramatically improving its likelihood. Where is the evidence that you need to increase the likelihood that your imagined solution will actually work? Where is the evidence for future effect? Of course, the evidence for the future lies in the past, where all evidence ultimately resides. Abstain from historical analysis, and you are doomed to repeat the mistakes of the past; abstain from historical analysis, and you embrace the goddess of luck.

Unless you understand the history of the problem, you do not understand it. And what you do not understand you cannot solve, except by luck. The more difficult the problem, the greater its importance, the more rigorous and comprehensive the analysis must be. The longer the problem has been outstanding, the farther back you must look.

Your historical analysis must document how long the problem has been outstanding, how long it has been recognized [it could be outstanding but not recognized], how its scale may have changed, how its causes and effects may have changed, how its environment may have changed and whether it has evolved from a primary to a secondary problem or the reverse. Moreover, you must document all previous attempts to solve the problem. You must probe for the mistakes that others have made. You must ask who is mistaken about what, and why were they mistaken. You are learning from the mistakes of others. You are using their experiences to improve your world. This is no more or less than aggressive historical analysis.

It is a mistake to merely assume that an important problem is unsolved because we just “do not yet know enough”. While we always need to learn more, failure analysis directs us to those areas most in need of learning, where the fruits of the learning are magnified. And all too often we already “know” how to solve the problem, but multiple layers of mistakes interfere with the application of that knowledge. Effective innovators become students of error to avoid their own mistakes and to realize solutions of power and consequence.

Here are some of the most common mistakes.

  • Failing to use all the information that is available because you have neither the skill nor patience to search for it.
  • Disregarding information from another discipline because you fail to respect it or do not understand its vocabulary.
  • Disregarding information because you disapprove of the person or organization that generated it.
  • Failing to recognize that previously unavailable information has now been created.
  • Leaping to conclusions because you do not have the patience to be thoughtful.
  • Overvaluing certain categories of information because it is the kind of information with which you are most comfortable. For example, the mathematician who loves numbers above all else.
  • Undervaluing certain categories of information with which you are uncomfortable. For example, the visionary generalist who shrinks from rigour or detail.
  • Failing to understand the scientific method, that all attempts at innovation are experiments. And all experiments can fail.
  • Failing to recognize that an experiment failed because it was so poorly designed that it was doomed to fail.
  • Failing to recognize that an experiment apparently succeeded only because it was poorly designed, lacking proper controls.
  • Failing to recognize that an experiment succeeded only once, and has never been replicated.
  • Examining the most recent failure, instead of all of them.
  • Failing to view a problem comprehensively, from all angles and aspects.
  • Believing an important piece of information to be true without checking.
  • Using a new tool without carefully considering whether it is applicable to your problem or not.
  • Using an old tool without carefully considering whether it is applicable to your problem or not, only because you are familiar and comfortable with it.
  • Failing to consider an approach because it is not consistent with the approaches that brought you previous success [the prisoner-of-success syndrome].
  • Failing to recognize the assumptions of your mind and failing to challenge them.
  • Failing to explore a logical avenue of potential solution because of one or more of the above mistakes.
  • Failing to be both imaginative and rigorous at the same time.

The goal of this historical review of mistakes is to find an actionable mistake. This is a mistake [or set of mistakes] that shows the first step you need to take to move toward a solution. This is the launch point into your own research and development. It may be to explore a gap in previous attempts to solve the problem, a gap that was missed because of the mistakes. Or it might be an avenue of exploration that results from a correct understanding of true information. It may be a cross-disciplinary approach that professional bias previously aborted. The launch points are as varied as the mistakes themselves. But mistakes first tell you what not to do, and then tell you what you should do.


Find the AI Approach That Fits the Problem You’re Trying to Solve

By George Westerman, Sam Ransbotham, and Chiara Farronato


Five questions to help leaders discover the right analytics tool for the job.

AI moves quickly, but organizations change much more slowly. What works in a lab may be wrong for your company right now. If you know the right questions to ask, you can make better decisions, regardless of how fast technology changes. You can work with your technical experts to use the right tool for the right job. Then each solution today becomes a foundation to build further innovations tomorrow. But without the right questions, you’ll be starting your journey in the wrong place.

Leaders everywhere are rightly asking how generative AI can benefit their businesses. However, as impressive as generative AI is, it’s only one of many advanced data science and analytics techniques. While the world is focusing on generative AI, a better approach is to understand how to use the range of available analytics tools to address your company’s needs. Which analytics tool fits the problem you’re trying to solve? And how do you avoid choosing the wrong one? You don’t need to know the deep details of each analytics tool at your disposal, but you do need to know enough to envision what’s possible and to ask technical experts the right questions.

  • George Westerman is a Senior Lecturer at the MIT Sloan School of Management and founder of the Global Opportunity Forum in MIT’s Office of Open Learning.
  • Sam Ransbotham is a Professor of Business Analytics at the Boston College Carroll School of Management. He co-hosts the “Me, Myself, and AI” podcast.
  • Chiara Farronato is the Glenn and Mary Jane Creamer Associate Professor of Business Administration at Harvard Business School and co-principal investigator at the Platform Lab at Harvard’s Digital Design Institute (D^3). She is also a fellow at the National Bureau of Economic Research (NBER) and the Center for Economic Policy Research (CEPR).

