Problem solving. Solving problems by searching.

Introduction.

This chapter describes one kind of goal-based agent called a problem-solving agent. Problem-solving agents decide what to do by finding sequences of actions that lead to desirable states.

We start by defining precisely the elements that constitute a “problem” and its “solution”, and give several examples to illustrate these definitions. We then describe several general-purpose search algorithms that can be used to solve these problems and compare the advantages of each algorithm.

These algorithms are uninformed, in the sense that they are given no information about the problem other than its definition. Informed search algorithms, by contrast, have some idea of where to look for solutions.

Problem-Solving Agents.

Imagine an agent in the city of Arad, Romania, enjoying a touring holiday. The agent's performance measure contains many factors: it wants to improve its suntan, improve its Romanian, take in the sights, enjoy the nightlife, avoid hangovers, and so on.

Now, suppose the agent has a non-refundable ticket to fly out of Bucharest the following day. In that case, it makes sense for the agent to adopt the goal of getting to Bucharest.

Goals help organize behavior by limiting the objectives that the agent is trying to achieve.

Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. We will consider a goal to be a set of world states, exactly those states in which the goal is satisfied. The agent's task is to find out which sequence of actions will get it to a goal state. Before it can do this, it needs to decide what sorts of actions and states to consider.
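
As a tiny illustration of this definition (the state names below are assumed for the example, not taken from the text), the goal of being in Bucharest can be represented literally as a set of world states, and the goal test is just a membership check:

    # A goal represented as the set of world states that satisfy it
    # (illustrative sketch; states are simply city names here).
    goal = {"Bucharest"}

    def goal_test(state):
        # A state satisfies the goal exactly when it belongs to the goal set.
        return state in goal

    goal_test("Arad")       # False
    goal_test("Bucharest")  # True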

Problem formulation is the process of deciding what actions and states to consider, given a goal. Our agent has now adopted the goal of driving to Bucharest and is considering which of the three roads out of Arad to take: one toward Sibiu, one toward Timisoara, and one toward Zerind.

None of these achieves the goal, so unless the agent is very familiar with the geography of Romania, it will not know which road to follow. In other words, the agent will not know which of its possible actions is best, because it does not know enough about the state that results from taking each action. If the agent has no additional knowledge, then it is stuck. The best it can do is choose one of the actions at random.

But suppose the agent has a map of Romania, either on paper or in its memory. The point of a map is to provide the agent with information about the states it might get itself into and the actions it can take. The agent can use this information to consider subsequent stages of a hypothetical journey via each of the three towns, hoping to find a journey that eventually gets to Bucharest. Once it has found a path on the map from Arad to Bucharest, it can achieve its goal by carrying out the driving actions that correspond to the legs of the journey. In general, an agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value and then choosing the best sequence.
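
To make this concrete, here is a minimal Python sketch of the idea: the map is stored as a graph of cities, and a simple breadth-first search examines hypothetical journeys from Arad until one reaches Bucharest. The road fragment below follows the usual textbook map but is illustrative only and omits several cities; the function name find_route is an assumption, not something defined in the text.

    from collections import deque

    # A small fragment of the Romanian road map (illustrative; only some
    # of the cities and roads are included here).
    road_map = {
        "Arad": ["Sibiu", "Timisoara", "Zerind"],
        "Zerind": ["Arad", "Oradea"],
        "Oradea": ["Zerind", "Sibiu"],
        "Timisoara": ["Arad", "Lugoj"],
        "Lugoj": ["Timisoara", "Mehadia"],
        "Mehadia": ["Lugoj"],
        "Sibiu": ["Arad", "Oradea", "Fagaras", "Rimnicu Vilcea"],
        "Fagaras": ["Sibiu", "Bucharest"],
        "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
        "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
        "Bucharest": ["Fagaras", "Pitesti"],
    }

    def find_route(start, goal):
        # Breadth-first search over the map; returns a list of cities or None.
        frontier = deque([[start]])   # paths still waiting to be extended
        visited = {start}
        while frontier:
            path = frontier.popleft()
            city = path[-1]
            if city == goal:
                return path
            for neighbour in road_map[city]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    frontier.append(path + [neighbour])
        return None

    find_route("Arad", "Bucharest")
    # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']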

This process of looking for such a sequence is called search. A search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the actions it recommends can be carried out. This is called the execution phase. Thus, we have a simple “formulate, search, execute” design for the agent, as shown in Figure 1.

After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do (typically, the first action of the sequence) and then removing that step from the sequence. Once the solution has been executed, the agent will formulate a new goal.

The agent design in Figure 1 assumes that the environment is static, because formulating and solving the problem is done without paying attention to any changes that might be occurring in the environment. The agent design also assumes that the initial state is known; knowing it is easiest if the environment is observable.

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    inputs: percept, a percept
    static: seq, an action sequence, initially empty
            state, some description of the current world state
            goal, a goal, initially null
            problem, a problem formulation

    state $\leftarrow$ UPDATE-STATE(state, percept)
    if seq is empty then do
        goal $\leftarrow$ FORMULATE-GOAL(state)
        problem $\leftarrow$ FORMULATE-PROBLEM(state, goal)
        seq $\leftarrow$ SEARCH(problem)
    action $\leftarrow$ FIRST(seq)
    seq $\leftarrow$ REST(seq)
    return action

Figure 1: A simple problem-solving agent.

The agent first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over. Note that while it is executing the sequence it ignores its percepts: it assumes that the solution it has found will always work.
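
For readers who prefer running code, the following is a minimal Python sketch of the same formulate-search-execute loop. It is only an illustration of Figure 1, not a standard library API: update_state, formulate_goal, formulate_problem and search are hypothetical callables supplied by the caller, standing in for UPDATE-STATE, FORMULATE-GOAL, FORMULATE-PROBLEM and SEARCH.

    class SimpleProblemSolvingAgent:
        # A sketch of the pseudocode in Figure 1. The four helper callables
        # are hypothetical and must be supplied by the caller.
        def __init__(self, update_state, formulate_goal, formulate_problem, search):
            self.update_state = update_state
            self.formulate_goal = formulate_goal
            self.formulate_problem = formulate_problem
            self.search = search
            self.seq = []        # action sequence, initially empty
            self.state = None    # description of the current world state

        def __call__(self, percept):
            # Revise the world-state description in light of the new percept.
            self.state = self.update_state(self.state, percept)
            if not self.seq:
                # No plan left: formulate a new goal and problem, then search.
                goal = self.formulate_goal(self.state)
                problem = self.formulate_problem(self.state, goal)
                self.seq = self.search(problem) or []
            if not self.seq:
                return None      # search failed; return a null action
            # Execute the plan one action per call, removing each used step.
            action, self.seq = self.seq[0], self.seq[1:]
            return action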
