What are Agent and Environment in AI?

An AI system is composed of an agent and the environment in which it acts.

What are Agent and Environment?

An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.

  • A human agent has sensory organs such as eyes, ears, nose, tongue, and skin as sensors, and other organs such as hands, legs, and mouth as effectors.
  • A robotic agent has cameras and infrared range finders as sensors, and various motors and actuators as effectors.
  • A software agent has encoded bit strings as its percepts and actions.

Approaches to building AI systems are commonly grouped along two axes: acting like humans versus acting rationally, and thinking like humans versus thinking rationally.

Agent Terminology

  • Performance Measure of Agent − It is the criterion that determines how successful an agent is.
  • Behavior of Agent − It is the action that the agent performs after any given sequence of percepts.
  • Percept − It is the agent’s perceptual input at a given instant.
  • Percept Sequence − It is the history of all that an agent has perceived to date.
  • Agent Function − It is a map from the percept sequence to an action.
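
To make the agent-function idea concrete, the sketch below implements a tiny table-driven agent program in Python: it records the percept sequence and looks the whole sequence up in a hand-written table. The percepts, actions, lookup table, and fallback action are made-up placeholders, not part of any standard library.

```python
# A minimal sketch of an agent function: a map from percept sequences to actions.
# The percepts, actions, and lookup table are made-up placeholders.

def make_table_driven_agent(table):
    """Return an agent program that looks up its whole percept history in a table."""
    percept_sequence = []                      # history of everything perceived so far

    def agent_program(percept):
        percept_sequence.append(percept)
        # The agent function maps the entire percept sequence to an action.
        return table.get(tuple(percept_sequence), "do-nothing")

    return agent_program

# A hand-written table for a two-step toy scenario.
toy_table = {
    ("dirty",): "suck",
    ("dirty", "clean"): "move",
}

agent = make_table_driven_agent(toy_table)
print(agent("dirty"))   # -> suck
print(agent("clean"))   # -> move
```

A literal lookup table grows with every possible percept history, which is why the agent types described later replace it with rules, internal models, goals, or utilities.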

What is an Ideal Rational Agent?

An ideal rational agent is one that is capable of performing the actions expected to maximize its performance measure, on the basis of −

  • Its percept sequence
  • Its built-in knowledge base

The rationality of an agent depends on the following −

  • The performance measure, which determines the degree of success.
  • The agent’s percept sequence to date.
  • The agent’s prior knowledge about the environment.
  • The actions that the agent can carry out.

A rational agent always performs the right action, where the right action means the action that causes the agent to be most successful, given the percept sequence. The problem the agent solves is characterized by its Performance Measure, Environment, Actuators, and Sensors (PEAS).
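
As an illustration of a PEAS description, here is one possible characterization of a self-driving taxi written as a plain Python dictionary; the entries are examples, not an exhaustive specification.

```python
# An illustrative PEAS description for a self-driving taxi (entries are examples only).
peas_taxi = {
    "performance_measure": ["safe trip", "fast trip", "legal driving", "passenger comfort"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators": ["steering", "accelerator", "brake", "signal", "horn"],
    "sensors": ["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
}

for component, examples in peas_taxi.items():
    print(f"{component}: {', '.join(examples)}")
```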

The Structure of Intelligent Agents

An agent’s structure can be viewed as −

  • Agent = Architecture + Agent Program
  • Architecture = the machinery that an agent executes on.
  • Agent Program = an implementation of an agent function.
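
The architecture/agent-program split can be sketched as a small run loop: the architecture reads percepts from sensors and forwards the chosen actions to effectors, while the agent program only decides what to do. The run_agent helper and the sensor/effector callables below are hypothetical stand-ins, not an established API.

```python
# Sketch of the architecture / agent-program split (all names are hypothetical).

def run_agent(agent_program, read_sensor, send_to_effector, steps=3):
    """The 'architecture': repeatedly perceive, decide, and act."""
    for _ in range(steps):
        percept = read_sensor()            # sensors deliver a percept
        action = agent_program(percept)    # the agent program implements the agent function
        send_to_effector(action)           # effectors carry out the chosen action

# A trivial stand-in environment, just for demonstration.
percepts = iter(["dirty", "clean", "clean"])
run_agent(
    agent_program=lambda p: "suck" if p == "dirty" else "move",
    read_sensor=lambda: next(percepts),
    send_to_effector=lambda a: print("action:", a),
)
```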

Simple Reflex Agents

  • They choose actions based only on the current percept.
  • They are rational only if a correct decision can be made on the basis of the current percept alone.
  • This requires the environment to be fully observable.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.
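
A simple reflex agent is straightforward to sketch as a list of condition-action rules applied to the current percept only. The vacuum-cleaner-style percepts, rules, and actions below are illustrative assumptions.

```python
# Simple reflex agent: condition-action rules applied to the current percept only.
# The toy vacuum-cleaner percepts, rules, and actions are illustrative assumptions.

RULES = [
    (lambda percept: percept["status"] == "dirty", "suck"),
    (lambda percept: percept["location"] == "A", "move-right"),
    (lambda percept: percept["location"] == "B", "move-left"),
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):        # the first matching condition-action rule fires
            return action
    return "do-nothing"

print(simple_reflex_agent({"location": "A", "status": "dirty"}))  # -> suck
print(simple_reflex_agent({"location": "A", "status": "clean"}))  # -> move-right
```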

Rationality

Rationality is the state of being reasonable, sensible, and having good judgment.

Model-Based Reflex Agents

They use a model of the world to choose their actions. They maintain an internal state.

Model − knowledge about “how things happen in the world”.

Internal State − It is a representation of the unobserved aspects of the current state, based on the percept history.

Updating the state requires information about −

  • How the world evolves.
  • How the agent’s actions affect the world.
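
A rough sketch of a model-based reflex agent is shown below: it keeps an internal state, updates it using the percept and a very simple model of how the world evolves and how its own actions change it, and only then picks an action. The state fields, percept format, and update rules are invented for illustration.

```python
# Model-based reflex agent sketch: an internal state plus a (trivial) world model.
# The state fields, percept format, and update rules are invented for illustration.

class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {"location": "A", "known_dirty": set()}   # internal state
        self.last_action = None

    def update_state(self, percept):
        # Model of "how my actions affect the world": sucking cleaned the square I was on.
        if self.last_action == "suck":
            self.state["known_dirty"].discard(self.state["location"])
        # Model of "how the world evolves": dirt stays put until it is sucked up.
        self.state["location"] = percept["location"]
        if percept["status"] == "dirty":
            self.state["known_dirty"].add(percept["location"])

    def program(self, percept):
        self.update_state(percept)
        if self.state["location"] in self.state["known_dirty"]:
            action = "suck"
        else:
            action = "move-right" if self.state["location"] == "A" else "move-left"
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent.program({"location": "A", "status": "dirty"}))   # -> suck
print(agent.program({"location": "A", "status": "clean"}))   # -> move-right
```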

Goal-Based Agents

They choose their actions in order to achieve goals. The goal-based approach is more flexible than the simple reflex approach, since the knowledge supporting a decision is represented explicitly and can therefore be modified.

Goal − It is the description of desirable situations.
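
A goal-based agent can be sketched as choosing whichever action's predicted result satisfies the goal. The one-dimensional world model, actions, and goal test below are assumptions made purely for the example.

```python
# Goal-based agent sketch: pick the action whose predicted result satisfies the goal.
# The one-dimensional world model, actions, and goal test are made-up assumptions.

def predicted_result(state, action):
    """A toy model of how each action would change the state."""
    return state + {"step-forward": 1, "step-back": -1, "stay": 0}[action]

def goal_based_agent(state, goal_test, actions):
    for action in actions:
        if goal_test(predicted_result(state, action)):
            return action          # an action whose predicted outcome reaches the goal
    return actions[0]              # no action reaches the goal; fall back to a default

# Example: the goal is to be at position 3, starting from position 2.
print(goal_based_agent(2,
                       goal_test=lambda s: s == 3,
                       actions=["step-forward", "step-back", "stay"]))  # -> step-forward
```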

Utility-Based Agents

They choose actions based on preference (utility) for each state.

Goals are inadequate when −

  • There are conflicting goals, out of which only a few can be achieved.
  • Goals have some uncertainty of being achieved and you need to weigh the likelihood of success against the importance of a goal.
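
A utility-based agent can be sketched as choosing the action with the highest expected utility, which is exactly how conflicting or uncertain goals get weighed against each other. The outcome probabilities and utility values below are invented for the example.

```python
# Utility-based agent sketch: choose the action with the highest expected utility.
# The outcome probabilities and utility values are invented for the example.

def utility_based_agent(actions, outcomes, utility):
    """outcomes[action] is a list of (probability, resulting_state) pairs."""
    def expected_utility(action):
        return sum(p * utility(state) for p, state in outcomes[action])
    return max(actions, key=expected_utility)

# Example: a fast route that sometimes jams versus a slower but reliable route.
outcomes = {
    "fast-route": [(0.7, "arrived-early"), (0.3, "stuck-in-traffic")],
    "safe-route": [(1.0, "arrived-on-time")],
}
utility = {"arrived-early": 10, "arrived-on-time": 8, "stuck-in-traffic": 0}.get

print(utility_based_agent(["fast-route", "safe-route"], outcomes, utility))  # -> safe-route
```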


4 Types of Artificial Intelligence Environments

1-Fully Observable vs. Partially Observable

A fully observable AI environment gives the agent access to all the information required to complete the target task; image recognition, for example, operates in fully observable domains. Partially observable environments, such as those encountered in self-driving vehicle scenarios, force the agent to solve the problem with only partial information.

2-Competitive vs. Collaborative

Competitive AI environments pit AI agents against each other in order to optimize a specific outcome. Games such as Go or chess are examples of competitive AI environments. Collaborative AI environments rely on cooperation between multiple AI agents. Self-driving vehicles cooperating to avoid collisions, or smart-home sensors interacting with one another, are examples of collaborative AI environments.

3-Discrete vs. Continuous

Discrete AI environments are those in which a finite (although possibly very large) set of possibilities determines the final outcome of the task. Chess is also classified as a discrete AI problem. Continuous AI environments rely on unknown and rapidly changing data sources. Vision systems in drones or self-driving cars operate in continuous AI environments.

4-Deterministic vs. Stochastic

Deterministic AI environments are those in which the outcome can be determined based on a specific state. In other words, deterministic environments ignore uncertainty. Most real-world AI environments are not deterministic; instead, they can be classified as stochastic. Self-driving vehicles are a classic example of stochastic AI processes.
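
The distinction can be sketched with two toy transition functions: a deterministic one that always returns the same next state for a given state and action, and a stochastic one that samples the outcome. The slip probability and state representation are made-up assumptions.

```python
import random

# Deterministic transition: the same state and action always give the same next state.
def deterministic_step(position, action):
    return position + (1 if action == "forward" else -1)

# Stochastic transition: the outcome is sampled, so the agent must handle uncertainty.
def stochastic_step(position, action):
    intended = deterministic_step(position, action)
    # With 20% probability (a made-up figure) the action "slips" and does nothing.
    return intended if random.random() < 0.8 else position

print(deterministic_step(0, "forward"))   # always 1
print(stochastic_step(0, "forward"))      # usually 1, sometimes 0
```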