Types of Agents.

There are four basic kinds of agent program that embody the principles underlying almost all intelligent systems:

1] Simple reflex agents.

2] Model-based reflex agents.

3] Goal-based agents.

4] Utility-based agents.

1] Simple reflex agents.

These agents select actions on the basis of the current percept, ignoring the rest of the percept history. Example: the vacuum agent whose agent function is tabulated in figure (3) is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt. The agent program for this agent, shown in figure (6), returns an action.

function REFLEX-VACUUM-AGENT([location, status]) returns an action

if status = Dirty then return Suck

else if location = A then return Right

else if location = B then return Left

Figure (6). The agent program for a simple reflex agent in the two-state vacuum environment. This program implements the agent function tabulated in figure (3).

Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. In other words, some processing is done on the visual input to establish the condition we call "the car in front is braking". This then triggers some established connection in the agent program to the action "initiate braking". We call such a connection a condition-action rule, written as:

If car in front is braking then initiate braking.
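The vacuum agent tabulated above is itself just a small set of such condition-action rules hard-wired into a program. As a rough sketch in Python (the location, status, and action names here are illustrative choices, not fixed by the figure):

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-state vacuum world.

    `percept` is a (location, status) pair, e.g. ("A", "Dirty").
    The decision uses only the current percept, never the history.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

# Example: the agent is at square A.
print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```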

A more general and flexible approach is first to build a general-purpose interpreter for condition-action rules and then to create rule sets for specific task environments. Figure (7) gives the structure of this general program in schematic form, showing how the condition-action rules allow the agent to make the connection from percept to action.

We use rectangles to denote the current internal state of the agent's decision process and ovals to represent the background information used in the process.


Figure (7). Schematic diagram of a simple reflex agent.

The agent program, which is very simple, is shown in figure (8).

The INTERPRET-INPUT function generates an abstracted description of the current state from the percept, and the RULE-MATCH function returns the first rule in the set of rules that matches the given state description.

Note that the description in terms of "rules" and "matching" is purely conceptual; actual implementations can be as simple as a collection of logic gates implementing a Boolean circuit. The agent in figure (8) will work only if the correct decision can be made on the basis of the current percept alone, that is, only if the environment is fully observable.

function SIMPLE-REFLEX-AGENT(percept) returns an action

static: rules, a set of condition-action rules

state $\leftarrow$ INTERPRET-INPUT(percept)

rule $\leftarrow$ RULE-MATCH(state, rules)

action $\leftarrow$ RULE-ACTION[rule]

return action

Figure (8). A simple reflex agent. It acts according to a rule whose condition matches the current state, as defined by the percept. Simple reflex agents have very limited intelligence.
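This program translates naturally into Python. The sketch below makes two simplifying assumptions: rules are an ordered list of (condition, action) pairs, and INTERPRET-INPUT is supplied by the caller (here just the identity function):

```python
def simple_reflex_agent(rules, interpret_input):
    """Return an agent function that acts by rule matching.

    `rules` is an ordered list of (condition, action) pairs, where each
    condition is a predicate over the abstracted state description.
    """
    def agent(percept):
        state = interpret_input(percept)       # INTERPRET-INPUT
        for condition, action in rules:        # RULE-MATCH: first rule
            if condition(state):               # whose condition holds
                return action                  # RULE-ACTION
        return None                            # no rule matched

    return agent

# Illustrative rule set for the braking example.
rules = [(lambda s: s.get("car_in_front_braking"), "initiate_braking")]
driver = simple_reflex_agent(rules, interpret_input=lambda p: p)
print(driver({"car_in_front_braking": True}))  # -> initiate_braking
```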

2] Model-based reflex agents.

Model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept. The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.

For other driving tasks, such as changing lanes, the agent needs to keep track of where the other cars are when it can't see them all at once. Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program.

First, we need some information about how the world evolves independently of the agent. Example: an overtaking car generally will be closer behind than it was a moment ago.

Second, we need some information about how the agent's own actions affect the world. Example: when the agent turns the steering wheel clockwise, the car turns to the right; or after driving for five minutes northbound on the freeway, one is usually about five miles north of where one was five minutes ago.

This knowledge about "how the world works", whether implemented in simple Boolean circuits or in complete scientific theories, is called a model of the world. An agent that uses such a model is called a model-based agent.
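In code, a model of the world can be as simple as a transition function that predicts the next state from the current state and the agent's action. A toy sketch for the driving example (the state representation is invented purely for illustration):

```python
def transition_model(state, action):
    """A toy 'how the world works' model for the driving example.

    Predicts the car's northward position after one 5-minute step,
    assuming roughly one mile per minute when driving north.
    """
    next_state = dict(state)
    if action == "drive_north_5_min":
        next_state["miles_north"] = state["miles_north"] + 5
    return next_state

print(transition_model({"miles_north": 0}, "drive_north_5_min"))
# -> {'miles_north': 5}
```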

Figure (9) gives the structure of the reflex agent with internal state, showing how the current percept is combined with the old internal state to generate the updated description of the current state.


Figure (9). A model-based reflex agent.

The agent program is shown in figure (10).

function REFLEX-AGENT-WITH-STATE(percept) returns an action

static: state, a description of the current world state

rules, a set of condition-action rules

action, the most recent action, initially none

state $\leftarrow$ UPDATE-STATE(state, action, percept)

rule $\leftarrow$ RULE-MATCH(state, rules)

action $\leftarrow$ RULE-ACTION[rule]

return action

Figure (10). A model-based reflex agent. It keeps track of the current state of the world using an internal model, and then chooses an action in the same way as the simple reflex agent. The function UPDATE-STATE is responsible for creating the new internal state description.
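A Python rendering of this program might look as follows. UPDATE-STATE is sketched here as folding the latest percept into a state dictionary; the text leaves its details open, so this is only one possible choice:

```python
def model_based_reflex_agent(rules, update_state):
    """Return an agent that maintains internal state across percepts."""
    state = {}       # description of the current world state
    action = None    # most recent action, initially none

    def agent(percept):
        nonlocal state, action
        state = update_state(state, action, percept)  # UPDATE-STATE
        for condition, act in rules:                  # RULE-MATCH
            if condition(state):
                action = act                          # RULE-ACTION
                return action
        action = None
        return action

    return agent

# Illustrative: remember that a car was once seen behind us, even
# after it leaves the current percept (partial observability).
def update_state(state, action, percept):
    new = dict(state)
    new.update(percept)
    if percept.get("car_behind"):
        new["car_was_behind"] = True
    return new

agent = model_based_reflex_agent(
    [(lambda s: s.get("car_was_behind"), "check_mirror")], update_state)
print(agent({"car_behind": True}))   # -> check_mirror
print(agent({"car_behind": False}))  # state remembers -> check_mirror
```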

3] Goal-based agents.

Knowing about the current state of the environment is not always enough to decide what to do. Example: at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable. Example: being at the passenger's destination. The agent program can combine this with information about the results of possible actions (the same information as was used to update internal state in the reflex agent) in order to choose actions that achieve the goal. Figure (11) shows the goal-based agent's structure.

Sometimes goal-based action selection is straightforward, when goal satisfaction results immediately from a single action. Sometimes it is trickier, when the agent has to consider long sequences of twists and turns to find a way to achieve the goal. Search and planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.

A goal-based agent is less efficient but more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified. If it starts to rain, the agent can update its knowledge of how effectively its brakes will operate; this will automatically cause all of the relevant behaviors to be altered to suit the new conditions.


Figure (11). A goal-based agent.
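As a minimal sketch of goal-based action selection, the following Python uses breadth-first search over a toy road network (a stand-in for the taxi's route-finding problem; all names and the map are invented for illustration) to find an action sequence that reaches the goal:

```python
from collections import deque

def goal_based_agent(state, goal_test, successors):
    """Breadth-first search for the shortest action sequence to a goal.

    `successors(state)` yields (action, next_state) pairs.
    """
    frontier = deque([(state, [])])
    visited = {state}
    while frontier:
        current, plan = frontier.popleft()
        if goal_test(current):
            return plan
        for action, nxt in successors(current):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable

# Toy road network: at the junction choose left/right/straight.
roads = {"junction": [("left", "A"), ("straight", "B"), ("right", "C")],
         "B": [("straight", "destination")]}
plan = goal_based_agent("junction", lambda s: s == "destination",
                        lambda s: roads.get(s, []))
print(plan)  # -> ['straight', 'straight']
```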

4] Utility-based agents.

Goals alone are not really enough to generate high-quality behavior in most environments. Example: there are many action sequences that will get the taxi to its destination, but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between 'happy' and 'unhappy' states, whereas a more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent if they could be achieved. The customary terminology is to say that if one world state is preferred to another, then it has higher utility for the agent. A utility function maps a state onto a real number, which describes the associated degree of happiness. A complete specification of the utility function allows rational decisions in two kinds of cases where goals are inadequate.

First, when there are conflicting goals, only some of which can be achieved (example: speed and safety), the utility function specifies the appropriate trade-off.

Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals. The utility-based agent structure is shown in figure (12).


Figure (12). A model-based, utility-based agent.
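This trade-off can be sketched as choosing the action with the highest expected utility: weight each outcome's utility by its probability and pick the best action. The probabilities and utilities below are invented purely for illustration:

```python
def expected_utility_action(actions, outcomes, utility):
    """Pick the action with the highest expected utility.

    `outcomes(action)` returns (probability, state) pairs;
    `utility(state)` maps a state to a real number.
    """
    def eu(action):
        return sum(p * utility(s) for p, s in outcomes(action))
    return max(actions, key=eu)

# Illustrative taxi trade-off between speed and safety.
outcomes = {
    "fast_route": [(0.90, "on_time"), (0.10, "accident")],
    "safe_route": [(0.99, "slightly_late"), (0.01, "accident")],
}
utility = {"on_time": 100, "slightly_late": 80, "accident": -1000}.get

best = expected_utility_action(
    ["fast_route", "safe_route"], outcomes.get, utility)
print(best)  # -> safe_route (EU 69.2 beats fast_route's EU of -10)
```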
