Compare problem solving and planning agents


Mumbai University > Computer Engineering > Sem 7 > Artificial Intelligence

Marks: 5 Marks

Year: Dec 2015, May 2016

Problem Solving vs. Planning: A simple planning agent is very similar to a problem-solving agent in that it constructs plans that achieve its goals and then executes them. The limitations of the problem-solving approach motivate the design of planning systems.

To solve a planning problem using a state-space search approach we would let the:

  • initial state = initial situation
  • goal-test predicate = goal-state description
  • successor function = computed from the set of operators
  • once a goal is found, the solution plan is the sequence of operators on the path from the start node to the goal node
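As a rough sketch of this mapping (all names and the STRIPS-style operator encoding here are illustrative assumptions, not from the source), a planning problem can be handed to an ordinary breadth-first search:

```python
# A breadth-first search over world states, where each operator is a
# STRIPS-style tuple: (name, preconditions, add-list, delete-list).
from collections import deque

def successors(state, operators):
    """Yield (operator_name, next_state) for every applicable operator."""
    for name, pre, add, delete in operators:
        if pre <= state:                     # preconditions hold in state
            yield name, (state - delete) | add

def search_plan(initial, goal, operators):
    """Return the operator sequence on the path from start to goal."""
    start = frozenset(initial)
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, plan = frontier.popleft()
        if set(goal) <= state:               # goal-test predicate
            return plan
        for name, nxt in successors(state, operators):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

ops = [("go_store", {"at_home"}, {"at_store"}, {"at_home"}),
       ("buy_milk", {"at_store"}, {"have_milk"}, set())]
print(search_plan({"at_home"}, {"have_milk"}, ops))  # ['go_store', 'buy_milk']
```

Note that the search uses each operator only as a black box for generating successors, which is exactly the limitation discussed next.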

In search, operators are used simply to generate successor states; we cannot look "inside" an operator to see how it is defined. The goal-test predicate is likewise a "black box" that only tests whether a state is a goal. The search cannot use properties of how a goal is defined to reason about finding a path to that goal. Hence the representation is weak, and all the work falls on the algorithm.

Planning is considered different from problem solving because of the difference in the way they represent states, goals, actions, and the differences in the way they construct action sequences.

Remember the search-based problem solver had four basic elements:

  • Representation of actions: programs that generate successor-state descriptions; these programs stand in for the actions.
  • Representation of states: every state description is complete, because a complete description of the initial state is given and actions are represented by a program that produces complete state descriptions.
  • Representation of goals: a problem-solving agent's only information about its goal is the goal test and the heuristic function.
  • Representation of plans: in problem solving, a solution is a sequence of actions.

Consider a simple problem: "Get a quart of milk, a bunch of bananas, and a variable-speed cordless drill." As a problem-solving exercise we need to specify:

Initial state: the agent is at home without any of the objects it wants.

Operator Set: everything the agent can do.

Heuristic function: the number of items that have not yet been acquired.
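A sketch of that heuristic (the item names and set encoding are illustrative):

```python
# The shopping problem's heuristic: count the goal items not yet acquired.
GOAL_ITEMS = {"milk", "bananas", "drill"}

def heuristic(state):
    """state is the set of items the agent has acquired so far."""
    return len(GOAL_ITEMS - state)

print(heuristic(set()))              # 3: nothing acquired yet
print(heuristic({"milk", "drill"}))  # 1: only the bananas remain
```

As the critique below stresses, a heuristic like this can only rank candidate states; it cannot rule any action out.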


Problems with Problem solving agent:

  • The actual branching factor would be in the thousands or millions. The heuristic evaluation function can only rank states by how close they appear to be to the goal; it cannot eliminate actions from consideration. The agent generates guesses by considering actions, the evaluation function ranks those guesses, and the agent picks the best one, but it then has no idea what to try next and starts guessing again.
  • It considers sequences of actions beginning from the initial state. The agent is therefore forced to decide what to do in the initial state first, where the possible choices are to go to any of the next places. But until the agent decides how to acquire the objects, it cannot decide where to go.

Planning "opens up" the operator and goal representations. There are three key ideas behind planning:

  • "open up" the representations of states, goals, and operators so that a reasoner can select actions more intelligently when they are needed;
  • let the planner add actions to the plan wherever they are needed, rather than only in an incremental sequence starting from the initial state;
  • exploit the fact that most parts of the world are independent of most other parts, which makes it feasible to take a conjunctive goal and solve it with a divide-and-conquer strategy.
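The divide-and-conquer idea can be sketched as follows (a deliberately naive version, reusing the illustrative STRIPS-style operator tuples; it assumes the subgoals do not interfere with one another):

```python
# Naive divide-and-conquer over a conjunctive goal: achieve each conjunct
# with one applicable operator, then chain the sub-plans. Operators are
# (name, preconditions, add-list, delete-list); this only works when
# achieving one subgoal does not clobber another.
def apply_op(state, op):
    _, pre, add, delete = op
    assert pre <= state, "operator not applicable"
    return (state - delete) | add

def achieve(state, goal_fact, operators):
    """Find one applicable operator whose add-list contains goal_fact."""
    for op in operators:
        name, pre, add, _ = op
        if goal_fact in add and pre <= state:
            return op
    return None

def divide_and_conquer(state, goals, operators):
    plan = []
    for g in goals:                  # solve each conjunct in turn
        if g in state:
            continue
        op = achieve(state, g, operators)
        if op is None:
            return None
        state = apply_op(state, op)
        plan.append(op[0])
    return plan

ops = [("buy_milk", {"at_store"}, {"have_milk"}, set()),
       ("buy_bananas", {"at_store"}, {"have_bananas"}, set())]
print(divide_and_conquer(frozenset({"at_store"}),
                         ["have_milk", "have_bananas"], ops))
# ['buy_milk', 'buy_bananas']
```

Because planning can see inside the operators (preconditions and effects), it can attack each conjunct directly instead of blindly searching from the initial state.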


Problem Solving Agents in Artificial Intelligence

In this post, we will talk about problem-solving agents in Artificial Intelligence, which are a kind of goal-based agent. Because the direct mapping from states to actions of a simple reflex agent is too large to store for a complex environment, we use goal-based agents that can consider future actions and the desirability of their outcomes.


Problem Solving Agents

Problem Solving Agents decide what to do by finding a sequence of actions that leads to a desirable state or solution.

An agent may need to plan ahead when the best course of action is not immediately visible: it may need to think through a series of moves that will lead it to its goal state. Such an agent is known as a problem-solving agent, and the computation it does is known as search.

The problem-solving agent follows a four-phase problem-solving process:

  • Goal Formulation: the first and most basic phase. Based on its current situation and performance measure, the agent adopts a specific target/goal that demands some activity to reach it.
  • Problem Formulation: one of the fundamental steps in problem solving, which determines what states and actions should be considered to reach the goal.
  • Search: after goal and problem formulation, the agent simulates sequences of actions and looks for a sequence that reaches the goal. This process is called search, and such a sequence is called a solution. The agent may have to simulate many sequences that do not reach the goal, but eventually it will either find a solution or find that no solution is possible. A search algorithm takes a problem as input and returns a sequence of actions.
  • Execution: the agent executes the actions recommended by the search algorithm, one at a time. This final stage is known as the execution phase.
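The four phases can be sketched as a single loop (all helper names here are hypothetical, supplied as functions):

```python
# The agent yields its actions one at a time (the execution phase);
# formulate_goal, formulate_problem, and search are supplied as functions.
def problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    goal = formulate_goal(percept)               # 1. goal formulation
    problem = formulate_problem(percept, goal)   # 2. problem formulation
    plan = search(problem)                       # 3. search -> solution
    yield from (plan or [])                      # 4. execution

# toy usage: the percept is a number, the goal is to reach 5 by "inc" steps
steps = problem_solving_agent(
    2,
    formulate_goal=lambda p: 5,
    formulate_problem=lambda p, g: (p, g),
    search=lambda prob: ["inc"] * (prob[1] - prob[0]),
)
print(list(steps))  # ['inc', 'inc', 'inc']
```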

Problems and Solutions

Before we move into the problem formulation phase, we must first define a problem in terms of problem solving agents.

A formal definition of a problem consists of five components:

Initial State

It is the agent's starting state. For example, if a taxi agent needs to travel to a location (B) but is currently at location (A), the problem's initial state is location (A).

Actions

It is a description of the possible actions available to the agent. Given a state s, Actions(s) returns the set of actions that can be executed in s. Each of these actions is said to be applicable in s.

Transition Model

It describes what each action does. It is specified by a function Result(s, a) that returns the state that results from doing action a in state s.

The initial state, actions, and transition model together define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. The state space forms a graph in which the nodes are states and the links between the nodes are actions.

Goal Test

It determines whether a given state is a goal state. Sometimes there is an explicit list of potential goal states, and the test merely verifies whether the given state is one of them. Sometimes the goal is expressed via an abstract property rather than an explicitly enumerated set of states.

Path Cost

It assigns a numerical cost to each path that leads to the goal. The problem-solving agent chooses a cost function that matches its performance measure. Remember that the optimal solution has the lowest path cost of all solutions.
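The five components can be bundled into a single structure. The following is a minimal sketch (the `Problem` dataclass and the two-location taxi example are illustrative, not an API from the source):

```python
# One structure holding all five components; the taxi example is the
# two-location world mentioned above.
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class Problem:
    initial_state: Any
    actions: Callable[[Any], Iterable[Any]]        # Actions(s)
    result: Callable[[Any, Any], Any]              # Result(s, a)
    is_goal: Callable[[Any], bool]                 # goal test
    action_cost: Callable[[Any, Any, Any], float]  # cost of s --a--> s'

taxi = Problem(
    initial_state="A",
    actions=lambda s: ["go_B"] if s == "A" else ["go_A"],
    result=lambda s, a: "B" if a == "go_B" else "A",
    is_goal=lambda s: s == "B",
    action_cost=lambda s, a, s2: 1,
)
print(taxi.is_goal(taxi.result(taxi.initial_state, "go_B")))  # True
```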

Example Problems

The problem-solving approach has been applied in a wide range of settings. There are two kinds of problems:

  • Standardized/Toy Problems: their purpose is to demonstrate or exercise various problem-solving techniques. They can be described concisely and precisely, making them suitable benchmarks for researchers comparing the performance of algorithms.
  • Real-world Problems: problems whose solutions people actually need. Unlike toy problems, they do not have a single agreed-upon description, but we can give a general formulation of the issue.

Some Standardized/Toy Problems

Vacuum world problem.

Consider a vacuum cleaner agent: it can move left or right, and its job is to suck up the dirt from the floor.

The state space graph for the two-cell vacuum world.

The vacuum world’s problem can be stated as follows:

States: A world state specifies which objects are in which cells. The objects in the vacuum world are the agent and the dirt. In the simple two-cell version the agent can be in either of the two cells, and each cell can contain dirt or not, so there are 2×2×2 = 8 states. In general, a vacuum environment with n cells has n·2^n states.

Initial State: Any state can be specified as the starting point.

Actions: In the two-cell world there are three actions: Suck, move Left, and move Right. A two-dimensional multi-cell world requires more movement actions.

Transition Model: Suck removes any dirt from the agent's cell. In the multi-cell world, Forward moves the agent one cell forward in the direction it is facing unless it hits a wall, in which case the action has no effect; Backward moves the agent in the opposite direction, while TurnRight and TurnLeft rotate it by 90°.

Goal States: The states in which every cell is clean.

Action Cost: Each action costs 1.
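As a sketch (the state encoding and function names are my own, not from the source), the two-cell vacuum world can be formulated as above and solved by breadth-first search:

```python
# State = (agent_cell, dirt), where dirt is a tuple of booleans per cell.
from collections import deque

def result(state, action):
    loc, dirt = state
    if action == "Left":
        return (max(loc - 1, 0), dirt)
    if action == "Right":
        return (min(loc + 1, len(dirt) - 1), dirt)
    # Suck: clean the agent's current cell
    return (loc, tuple(False if i == loc else d for i, d in enumerate(dirt)))

def solve(state):
    """Breadth-first search for a shortest plan that cleans every cell."""
    frontier, seen = deque([(state, [])]), {state}
    while frontier:
        s, plan = frontier.popleft()
        if not any(s[1]):                   # goal: no dirt anywhere
            return plan
        for a in ("Suck", "Left", "Right"):
            n = result(s, a)
            if n not in seen:
                seen.add(n)
                frontier.append((n, plan + [a]))

print(solve((0, (True, True))))  # ['Suck', 'Right', 'Suck']
```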

8-Puzzle Problem

In a sliding-tile puzzle, a number of tiles (sometimes called blocks or pieces) are arranged in a grid with one or more blank spaces so that some of the tiles can slide into a blank space. One variant is the Rush Hour puzzle, in which cars and trucks slide around a 6×6 grid in an attempt to free a car from the traffic jam. Perhaps the best-known variant is the 8-puzzle (see the figure below), which consists of a 3×3 grid with eight numbered tiles and one blank space; the 15-puzzle is played on a 4×4 grid. The object is to reach a specified goal state, such as the one shown on the right of the figure. The standard formulation of the 8-puzzle is as follows:

STATES: A state description specifies the location of each of the tiles.

INITIAL STATE: Any state can be designated as the initial state. (Note that a parity property partitions the state space: any given goal can be reached from exactly half of the possible initial states.)

ACTIONS: Although in the physical world it is a tile that slides, the simplest way to describe the actions is to think of the blank space moving Left, Right, Up, or Down. If the blank is at an edge or corner, not all actions are applicable.

TRANSITION MODEL: Maps a state and an action to the resulting state; for example, applying Left to the start state in the figure below swaps the 5 and the blank.

A typical instance of the 8-puzzle

GOAL STATE: Although any state could be the goal, we typically specify a goal state with the numbers in order, as in the figure above; the goal test identifies whether we have reached it.

ACTION COST: Each action costs 1.
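A minimal sketch of the ACTIONS and TRANSITION MODEL above (the state encoding is an assumption: a tuple of nine tiles in row-major order, with 0 as the blank):

```python
# The blank (0) moves Left/Right/Up/Down; moves off the 3x3 grid are
# inapplicable, which actions() filters out.
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def actions(state):
    """state is a tuple of 9 tiles in row-major order, 0 = blank."""
    b = state.index(0)
    acts = []
    if b % 3 > 0: acts.append("Left")
    if b % 3 < 2: acts.append("Right")
    if b >= 3:    acts.append("Up")
    if b <= 5:    acts.append("Down")
    return acts

def result(state, action):
    b = state.index(0)
    t = b + MOVES[action]
    s = list(state)
    s[b], s[t] = s[t], s[b]      # slide the tile into the blank
    return tuple(s)

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)   # blank in the centre
print(actions(start))                  # ['Left', 'Right', 'Up', 'Down']
print(result(start, "Left"))           # the 5 and the blank swap
```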

Problem Solving in Artificial Intelligence

A reflex agent in AI maps states directly to actions. When such an agent fails because the state-to-action mapping is too large to store or compute, the problem is handed off to a problem-solving process, which breaks the large problem into smaller subproblems and solves them one by one; the combined sequence of actions produces the desired outcome.

Depending on the problem and its working domain, different types of problem-solving agents are defined and used at the atomic level, with no internal state visible to the problem-solving algorithm. A problem-solving agent works by precisely defining problems and their solutions. We can therefore say that problem solving is the part of artificial intelligence that encompasses techniques such as trees, B-trees, and heuristic algorithms for solving a problem.

We can also say that a problem-solving agent is a result-driven agent that always focuses on satisfying its goals.

There are basically three types of problems in artificial intelligence:

1. Ignorable: solution steps can be ignored.

2. Recoverable: solution steps can be undone.

3. Irrecoverable: solution steps cannot be undone.

Steps of problem solving in AI: The problems AI tackles are directly associated with human activities, so we solve them in a finite number of well-defined steps, much as a person would.

These are the following steps which require to solve a problem :

  • Problem definition: a detailed specification of the inputs and of what counts as an acceptable solution.
  • Problem analysis: analyse the problem thoroughly.
  • Knowledge representation: collect detailed information about the problem and define all applicable techniques.
  • Problem solving: select the best technique.

Components to formulate the associated problem: 

  • Initial State: the state from which the agent starts working toward the specified goal.
  • Actions: the set of all possible actions the agent can take from a given state.
  • Transition model: describes the state that results from performing each action in a given state.
  • Goal test: determines whether the goal has been achieved by the transitions so far; once it has, the agent stops acting and moves on to determining the cost of achieving the goal.
  • Path cost: assigns a numerical cost to achieving the goal, covering all hardware, software, and human effort involved.


Innovative Assessment of Collaboration, pp. 65–80

Assessing Collaborative Problem Solving Through Conversational Agents

  • Arthur C. Graesser
  • Nia Dowell
  • Danielle Clewley
  • First Online: 05 April 2017


Part of the Methodology of Educational Measurement and Assessment book series (MEMA)

Communication is a core component of collaborative problem solving and its assessment. Advances in computational linguistics and discourse science have made it possible to analyze conversation on multiple levels of language and discourse in different educational settings. Most of these advances have focused on tutoring contexts in which a student and a tutor collaboratively solve problems, but there has also been some progress in analyzing conversations in small groups. Naturalistic patterns of collaboration in one-on-one tutoring and in small groups have also been compared with theoretically ideal patterns. Conversation-based assessment is currently being applied to measure various competencies, such as literacy, mathematics, science, reasoning, and collaborative problem solving. One conversation-based assessment approach is to design computerized conversational agents that interact with the human in natural language. This chapter reports research that uses one or more agents to assess human competencies while the humans and agents collaboratively solve problems or answer difficult questions. AutoTutor holds a collaborative dialogue in natural language and concurrently assesses student performance. The agent converses through a variety of dialogue moves: questions, short feedback, pumps for information, hints, prompts for specific words, corrections, assertions, summaries, and requests for summaries. Trialogues are conversations between the human and two computer agents that play different roles (e.g., peer, tutor, expert). Trialogues are being applied in both training and assessment contexts on particular skills and competencies. Agents are currently being developed at Educational Testing Service for assessments of individuals on various competencies, including the Programme for International Student Assessment 2015 assessment of collaborative problem solving.

  • Conversational agents
  • Collaborative problem solving
  • Intelligent tutoring systems




Graesser, A.C., Dowell, N., Clewley, D. (2017). Assessing Collaborative Problem Solving Through Conversational Agents. In: von Davier, A., Zhu, M., Kyllonen, P. (eds) Innovative Assessment of Collaboration. Methodology of Educational Measurement and Assessment. Springer, Cham. https://doi.org/10.1007/978-3-319-33261-1_5


Published: 05 April 2017

Publisher: Springer, Cham

Print ISBN: 978-3-319-33259-8

Online ISBN: 978-3-319-33261-1


COMMENTS

  1. Compare and Contrast problem solving agent and planning agent.

    Compare and Contrast problem solving agent and planning agent. Mumbai University > Computer Engineering > Sem 7 > Artificial Intelligence. Marks: 5 Marks. Year: Dec 2015, May 2016

  2. Problem Solving Vs. Planning

    Problem Solving vs. Planning: A simple planning agent is very similar to problem-solving agents in that it constructs plans that achieve its goals, and then executes them. The limitations of the problem-solving approach motivate the design of planning systems. To solve a planning problem using a state-space search approach we would let the:

  3. PDF Planning vs. problem solving Foundations of Artificial Intelligence

    Planning vs. Problem-Solving. Basic difference: Explicit, logic-based representation. States/Situations: Through descriptions of the world by logical formulae vs. data structures! The agent can explicitly think about it and communicate. Goal conditions as logical formulae vs. goal test (black box)! The agent can also reflect on its goals.

  4. PDF 11

    Therefore, the problem-solving agent lacks autonomy; it requires a human to supply a heuristic function for each new problem. On the other hand, if a planning agent has access to an explicit representation of the goal as a conjunction of subgoals, then it can use a single domain-independent heuristic: the number of unsatisfied conjuncts.

  5. 6 Ways to Build Planning Agents

    While both Search and Planning agents aim to find a sequence of actions leading to a goal state, Planning agents typically work with more complex representations of actions and states. Deterministic Classical Planning Agents: These agents operate best in discrete, deterministic environments.

  6. PDF 3 SOLVING PROBLEMS BY SEARCHING

    3 SOLVING PROBLEMS BY SEARCHING. In which we see how an agent can find a sequence of actions that achieves its goals, when no single action will do. The simplest agents discussed in Chapter 2 were the reflex agents, which base their actions on a direct mapping from states to actions.

  7. (PDF) Planning and Problem Solving

    Abstract. Problem solving is the process of developing a sequence of actions to achieve a goal. This broad definition admits all goal-directed artificial intelligence programs to the ranks of ...

  8. PDF Hierarchical Paradigm and STRIPS

    Planning agent is different from problem-solving agent in: representation of goals, states, actions; use of explicit, logical representations enables more sensible deliberations of potential solutions; way it searches for solutions (just an example of complicated planning/problem solving). Problem Solver Characteristics. Search-based problem solver:

  9. Distributed Problem Solving and Planning

    We consider that problem to be a distributed planning problem, where each agent must formulate plans for what it will do that take into account (sufficiently well) the plans of other agents. In this paper, we characterize the variations of distributed problem solving and distributed planning, and summarize some of the basic techniques that have ...

  10. Planning and Problem Solving

    Problem solving is finding a plan for a task in an abstract domain. A problem is difficult if an appropriate sequence of steps to solve it is not known. This chapter discusses the overall planning task from an AI perspective and introduces some related terminology.

  11. Evaluating planners, plans, and planning agents

    The field of AI planning has become increasingly concerned with questions of evaluation. The papers in this special issue represent several different approaches to evaluation. The first two papers present detailed analytic and experimental comparisons ...

  12. PDF Distributed Problem Solving and Planning

    Sometimes the problem the agents are solving is to construct a plan. And often, even if the agents are solving other kinds of problems, they also have to solve planning problems as well. That is, how the agents should plan to work together---decompose problems into subproblems, allocate these subproblems, exchange

  13. Planning and Problem Solving

    Problem solving is finding a plan for a task in an abstract domain. A problem is difficult if an appropriate sequence of steps to solve it is not known. This chapter discusses the overall planning task from an AI perspective and introduces some related terminology. The chapter shows how these general principles are embodied in the seminal work ...

  14. PDF 3. Knowledge Representation, Reasoning, and Planning

    Planners' Differences: Use logic to represent goals, states, and actions. Searches are executed differently. Use of partial plans and planning algorithms to achieve a solution. In some ways more "intelligent" than a problem solving agent. Introduction Simple Planning Agent Planning vs. Problem Solving Planning Using Situation Calculus

  15. AI Qual Summary: Planning

    The process of developing a sequence of actions to solve a problem. This encompasses all goal-directed AI programs; it is a very general concept. Deciding upon a course of action before acting. A plan is a representation of a course of action. A finished plan is a linear or partially ordered sequence of operators.

  16. PDF Planning

    Planning • Planning vs problem solving • Situation calculus • Plan-space planning We are going to switch gears a little bit now. In the first section of the class, we talked about problem solving, and search in general, then we did logical representations. The motivation that I gave for doing the problem solving stuff

  17. PDF Solving problems by searching

    Problem-solving agents; problem formulation; example problems; basic search algorithms. ... Can be used to compare performance. Examples: 8-puzzle, 8-queens problem, cryptarithmetic, vacuum ... planning movements of automatic circuit board drills.

  18. Planning Agents

    Rational agents embedded in a realistically complicated world (e.g., human beings) may devote more time to epistemic cognition than to practical cognition, but even for such agents, epistemic cognition is in an important sense subservient to practical cognition. Keywords: Rational Agent, Planning Problem, Practical Cognition, Plan Execution.

  19. Problem Solving Agents in Artificial Intelligence

    The problem solving agent follows this four phase problem solving process: Goal Formulation: This is the first and most basic phase in problem solving. It arranges specific steps to establish a target/goal that demands some activity to reach it. AI agents are now used to formulate goals. Problem Formulation: It is one of the fundamental steps ...

  20. Problem Solving in Artificial Intelligence

    1. Ignorable: In which solution steps can be ignored. 2. Recoverable: In which solution steps can be undone. 3. Irrecoverable: Solution steps cannot be undone. Steps of problem-solving in AI: The problem of AI is directly associated with the nature of humans and their activities.

  21. comparison

    Planning (aka automated planning or AI planning) is the process of searching for a plan, which is a sequence of actions that bring the agent/world from an initial state to one or more goal states, or a policy, which is a function from states to actions (or probability distributions over actions).

  22. Planning Agent

    The planning problem formulated by the proactive manager may not always be solvable; for instance, the goal state may only be accomplished by modifying those variables that the assistant cannot access, or none of the assistant's actions have effects that can lead to the specified goal state.

  23. Assessing Collaborative Problem Solving Through Conversational Agents

    The elephant in the room continues to be the question of how conversation-based assessments with computer agents compare to human-to-human collaborations on the same problem-solving tasks. The tasks would of course need to be the same in AH (agents with human) and HH (human with humans). Otherwise, the comparison is noncommensurate and invalid.
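As an aside, the domain-independent heuristic mentioned in result 4 (counting unsatisfied goal conjuncts) is simple enough to sketch. The snippet below is a minimal illustration, not taken from any of the cited sources; the state/goal representation as sets of ground-fact tuples and all names are illustrative assumptions.

```python
# Minimal sketch of the "number of unsatisfied conjuncts" heuristic
# from result 4. States and goals are represented here as sets of
# ground facts (plain tuples); this representation is an assumption.

def unsatisfied_conjuncts(state, goal):
    """Count the goal conjuncts not yet satisfied in `state`.

    Lower values mean the state is closer to the goal, so this can
    serve as a domain-independent heuristic for a planning agent.
    """
    return len(goal - state)

# Illustrative blocks-world-style facts.
state = {("on", "A", "table"), ("on", "B", "table"), ("clear", "A")}
goal = {("on", "A", "B"), ("on", "B", "table")}

print(unsatisfied_conjuncts(state, goal))  # 1: only ("on", "A", "B") is unmet
```

Because the heuristic inspects the goal's internal structure (its conjuncts) rather than treating the goal test as a black box, it needs no per-problem human input, which is exactly the contrast with search-based problem solvers drawn in result 4.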