Belief-Desire-Intention Model

Table of Contents


Book

  • Prologue
    • Multiagent Systems and Distributed Artificial Intelligence
      • Intelligent Agents that Interact
      • Challenging Issues
      • Applications
      • Rationales for Multiagent Systems
    • A Guide to This Book
      • The Chapters
      • The Exercises
      • A Few Pointers to Further Readings
      • References
  • Part I: Basic Themes
    • 1 Intelligent Agents
      • 1.1 Introduction
      • 1.2 What Are Agents?
        • 1.2.1 Examples of Agents
        • 1.2.2 Intelligent Agents
        • 1.2.3 Agents and Objects
        • 1.2.4 Agents and Expert Systems
      • 1.3 Abstract Architectures for Intelligent Agents
        • 1.3.1 Purely Reactive Agents
        • 1.3.2 Perception
        • 1.3.3 Agents with State
      • 1.4 Concrete Architectures for Intelligent Agents
        • 1.4.1 Logic-based Architectures
        • 1.4.2 Reactive Architectures
        • 1.4.3 Belief-Desire-Intention Architectures
        • 1.4.4 Layered Architectures
      • 1.5 Agent Programming Languages
        • 1.5.1 Agent-Oriented Programming
        • 1.5.2 Concurrent METATEM
      • 1.6 Conclusions
      • 1.7 Exercises
      • 1.8 References
    • 2 Multiagent Systems and Societies of Agents
      • 2.1 Introduction
        • 2.1.1 Motivations
        • 2.1.2 Characteristics of Multiagent Environments
      • 2.2 Agent Communications
        • 2.2.1 Coordination
        • 2.2.2 Dimensions of Meaning
        • 2.2.3 Message Types
        • 2.2.4 Communication Levels
        • 2.2.5 Speech Acts
        • 2.2.6 Knowledge Query and Manipulation Language (KQML)
        • 2.2.7 Knowledge Interchange Format (KIF)
        • 2.2.8 Ontologies
        • 2.2.9 Other Communication Protocols
      • 2.3 Agent Interaction Protocols
        • 2.3.1 Coordination Protocols
        • 2.3.2 Cooperation Protocols
        • 2.3.3 Contract Net
        • 2.3.4 Blackboard Systems
        • 2.3.5 Negotiation
        • 2.3.6 Multiagent Belief Maintenance
        • 2.3.7 Market Mechanisms
      • 2.4 Societies of Agents
      • 2.5 Conclusions
      • 2.6 Exercises
      • 2.7 References
    • 3 Distributed Problem Solving and Planning
      • 3.1 Introduction
      • 3.2 Example Problems
      • 3.3 Task Sharing
        • 3.3.1 Task Sharing in the Tower of Hanoi (ToH) Problem
        • 3.3.2 Task Sharing in Heterogeneous Systems
        • 3.3.3 Task Sharing for Distributed Sensor Network Establishment (DSNE)
        • 3.3.4 Task Sharing for Interdependent Tasks
      • 3.4 Result Sharing
        • 3.4.1 Functionally Accurate Cooperation
        • 3.4.2 Shared Repositories and Negotiated Search
        • 3.4.3 Distributed Constrained Heuristic Search
        • 3.4.4 Organizational Structuring
        • 3.4.5 Communication Strategies
        • 3.4.6 Task Structures
      • 3.5 Distributed Planning
        • 3.5.1 Centralized Planning for Distributed Plans
        • 3.5.2 Distributed Planning for Centralized Plans
        • 3.5.3 Distributed Planning for Distributed Plans
      • 3.6 Distributed Plan Representations
      • 3.7 Distributed Planning and Execution
        • 3.7.1 Post-Planning Coordination
        • 3.7.2 Pre-Planning Coordination
        • 3.7.3 Interleaved Planning, Coordination, and Execution
        • 3.7.4 Runtime Plan Coordination Without Communication
      • 3.8 Conclusions
      • 3.9 Exercises
      • 3.10 References
    • 4 Search Algorithms for Agents
      • 4.1 Introduction
      • 4.2 Constraint Satisfaction
        • 4.2.1 Definition of a Constraint Satisfaction Problem
        • 4.2.2 Filtering Algorithm
        • 4.2.3 Hyper-Resolution-Based Consistency Algorithm
        • 4.2.4 Asynchronous Backtracking
        • 4.2.5 Asynchronous Weak-Commitment Search
      • 4.3 Path-Finding Problem
        • 4.3.1 Definition of a Path-Finding Problem
        • 4.3.2 Asynchronous Dynamic Programming
        • 4.3.3 Learning Real-Time A*
        • 4.3.4 Real-Time A*
        • 4.3.5 Moving Target Search
        • 4.3.6 Real-Time Bidirectional Search
        • 4.3.7 Real-Time Multiagent Search
      • 4.4 Two-Player Games
        • 4.4.1 Formalization of Two-Player Games
        • 4.4.2 Minimax Procedure
        • 4.4.3 Alpha-Beta Pruning
      • 4.5 Conclusions
      • 4.6 Exercises
      • 4.7 References
    • 5 Distributed Rational Decision Making
      • 5.1 Introduction
      • 5.2 Evaluation Criteria
        • 5.2.1 Social Welfare
        • 5.2.2 Pareto Efficiency
        • 5.2.3 Individual Rationality
        • 5.2.4 Stability
        • 5.2.5 Computational Efficiency
        • 5.2.6 Distribution and Communication Efficiency
      • 5.3 Voting
        • 5.3.1 Truthful Voters
        • 5.3.2 Strategic (Insincere) Voters
      • 5.4 Auctions
        • 5.4.1 Auction Settings
        • 5.4.2 Auction Protocols
        • 5.4.3 Efficiency of the Resulting Allocation
        • 5.4.4 Revenue Equivalence and Non-Equivalence
        • 5.4.5 Bidder Collusion
        • 5.4.6 Lying Auctioneer
        • 5.4.7 Bidders Lying in Non-Private-Value Auctions
        • 5.4.8 Undesirable Private Information Revelation
        • 5.4.9 Roles of Computation in Auctions
      • 5.5 Bargaining
        • 5.5.1 Axiomatic Bargaining Theory
        • 5.5.2 Strategic Bargaining Theory
        • 5.5.3 Computation in Bargaining
      • 5.6 General Equilibrium Market Mechanisms
        • 5.6.1 Properties of General Equilibrium
        • 5.6.2 Distributed Search for a General Equilibrium
        • 5.6.3 Speculative Strategies in Equilibrium Markets
      • 5.7 Contract Nets
        • 5.7.1 Task Allocation Negotiation
        • 5.7.2 Contingency Contracts and Leveled Commitment Contracts
      • 5.8 Coalition Formation
        • 5.8.1 Coalition Formation Activity 1: Coalition Structure Generation
        • 5.8.2 Coalition Formation Activity 2: Optimization within a Coalition
        • 5.8.3 Coalition Formation Activity 3: Payoff Division
      • 5.9 Conclusions
      • 5.10 Exercises
      • 5.11 References
    • 6 Learning in Multiagent Systems
      • 6.1 Introduction
      • 6.2 A General Characterization
        • 6.2.1 Principal Categories
        • 6.2.2 Differencing Features
        • 6.2.3 The Credit-Assignment Problem
      • 6.3 Learning and Activity Coordination
        • 6.3.1 Reinforcement Learning
        • 6.3.2 Isolated, Concurrent Reinforcement Learners
        • 6.3.3 Interactive Reinforcement Learning of Coordination
      • 6.4 Learning about and from Other Agents
        • 6.4.1 Learning Organizational Roles
        • 6.4.2 Learning in Market Environments
        • 6.4.3 Learning to Exploit an Opponent
      • 6.5 Learning and Communication
        • 6.5.1 Reducing Communication by Learning
        • 6.5.2 Improving Learning by Communication
      • 6.6 Conclusions
      • 6.7 Exercises
      • 6.8 References
    • 7 Computational Organization Theory
      • 7.1 Introduction
        • 7.1.1 What Is an Organization?
        • 7.1.2 What Is Computational Organization Theory?
        • 7.1.3 Why Take a Computational Approach?
      • 7.2 Organizational Concepts Useful in Modeling Organizations
        • 7.2.1 Agent and Agency
        • 7.2.2 Organizational Design
        • 7.2.3 Task
        • 7.2.4 Technology
      • 7.3 Dynamics
      • 7.4 Methodological Issues
        • 7.4.1 Virtual Experiments and Data Collection
        • 7.4.2 Validation and Verification
        • 7.4.3 Computational Frameworks
      • 7.5 Conclusions
      • 7.6 Exercises
      • 7.7 References
    • 8 Formal Methods in DAI: Logic-Based Representation and Reasoning
      • 8.1 Introduction
      • 8.2 Logical Background
        • 8.2.1 Basic Concepts
        • 8.2.2 Propositional and Predicate Logic
        • 8.2.3 Modal Logic
        • 8.2.4 Deontic Logic
        • 8.2.5 Dynamic Logic
        • 8.2.6 Temporal Logic
      • 8.3 Cognitive Primitives
        • 8.3.1 Knowledge and Beliefs
        • 8.3.2 Desires and Goals
        • 8.3.3 Intentions
        • 8.3.4 Commitments
        • 8.3.5 Know-How
        • 8.3.6 Sentential and Hybrid Approaches
        • 8.3.7 Reasoning with Cognitive Concepts
      • 8.4 BDI Implementations
        • 8.4.1 Abstract Architecture
        • 8.4.2 Practical System
      • 8.5 Coordination
        • 8.5.1 Architecture
        • 8.5.2 Specification Language
        • 8.5.3 Common Coordination Relationships
      • 8.6 Communications
        • 8.6.1 Semantics
        • 8.6.2 Ontologies
      • 8.7 Social Primitives
        • 8.7.1 Teams and Organizational Structure
        • 8.7.2 Mutual Beliefs and Joint Intentions
        • 8.7.3 Social Commitments
        • 8.7.4 Group Know-How and Intentions
      • 8.8 Tools and Systems
        • 8.8.1 Direct Implementations
        • 8.8.2 Partial Implementations
        • 8.8.3 Traditional Approaches
      • 8.9 Conclusions
      • 8.10 Exercises
      • 8.11 References
    • 9 Industrial and Practical Applications of DAI
      • 9.1 Introduction
      • 9.2 Why Use DAI in Industry?
      • 9.3 Overview of the Industrial Life-Cycle
      • 9.4 Where in the Life Cycle Are Agents Used?
        • 9.4.1 Questions that Matter
        • 9.4.2 Agents in Product Design
        • 9.4.3 Agents in Planning and Scheduling
        • 9.4.4 Agents in Real-Time Control
      • 9.5 How Does Industry Constrain the Life Cycle of an Agent-Based System?
        • 9.5.1 Requirements, Positioning, and Specification
        • 9.5.2 Design: The Conceptual Context
        • 9.5.3 Design: The Process
        • 9.5.4 System Implementation
        • 9.5.5 System Operation
      • 9.6 Development Tools
      • 9.7 Conclusions
      • 9.8 Exercises
      • 9.9 References
  • Part II: Related Themes
    • 10 Groupware and Computer Supported Cooperative Work
      • 10.1 Introduction
        • 10.1.1 Well-Known Groupware Examples
      • 10.2 Basic Definitions
        • 10.2.1 Groupware
        • 10.2.2 Computer Supported Cooperative Work
      • 10.3 Aspects of Groupware
        • 10.3.1 Keepers
        • 10.3.2 Coordinators
        • 10.3.3 Communicators
        • 10.3.4 Team-Agents
        • 10.3.5 Agent Models
        • 10.3.6 An Example of Aspect Analysis of a Groupware
      • 10.4 Multi-Aspect Groupware
        • 10.4.1 Chautauqua — A Multi-Aspect System
      • 10.5 Social and Group Issues in Designing Groupware Systems
      • 10.6 Supporting Technologies and Theories
        • 10.6.1 Keepers
        • 10.6.2 Coordinators
        • 10.6.3 Communicators
        • 10.6.4 Team-Agents
      • 10.7 Other Taxonomies of Groupware
        • 10.7.1 Space/Time Matrix
        • 10.7.2 Application Area
      • 10.8 Groupware and Internet
        • 10.8.1 Internet as Infrastructure
        • 10.8.2 Internet as Presumed Software
      • 10.9 Conclusions
        • 10.9.1 Incorporating Communicators into Keepers
        • 10.9.2 Incorporating Keepers and Communicators into Coordinators
        • 10.9.3 Future Research on Agents
        • 10.9.4 Future Research on Keepers
      • 10.10 Exercises
      • 10.11 References
    • 11 Distributed Models for Decision Support
      • 11.1 Introduction
      • 11.2 Decision Support Systems
        • 11.2.1 The Decision Support Problem
        • 11.2.2 Knowledge-Based Decision Support
        • 11.2.3 Distributed Decision Support Models
      • 11.3 An Agent Architecture for Distributed Decision Support Systems
        • 11.3.1 Information Model
        • 11.3.2 Knowledge Model
        • 11.3.3 Control Model
      • 11.4 Application Case Studies
        • 11.4.1 Environmental Emergency Management
        • 11.4.2 Energy Management
        • 11.4.3 Road Traffic Management
      • 11.5 Conclusions
      • 11.6 Exercises
      • 11.7 References
    • 12 Concurrent Programming for DAI
      • 12.1 Introduction
      • 12.2 Defining Multiagent Systems
      • 12.3 Actors
        • 12.3.1 Semantics of Actors
        • 12.3.2 Equivalence of Actor Systems
        • 12.3.3 Actors and Concurrent Programming
      • 12.4 Representing Agents as Actors
        • 12.4.1 Mobility of Actors
        • 12.4.2 Resource Model
      • 12.5 Agent Ensembles
        • 12.5.1 Customizing Execution Contexts
        • 12.5.2 Interaction Protocols
        • 12.5.3 Coordination
        • 12.5.4 Naming and Groups
      • 12.6 Related Work
      • 12.7 Conclusions
      • 12.8 Exercises
      • 12.9 References
    • 13 Distributed Control Algorithms for AI
      • 13.1 Introduction
        • 13.1.1 Model of Computation
        • 13.1.2 Complexity Measures
        • 13.1.3 Examples of Distributed Architectures in AI
      • 13.2 Graph Exploration
        • 13.2.1 Depth-First Search
        • 13.2.2 Pseudo-Fast Exploration: the Echo Algorithm
        • 13.2.3 Searching for Connectivity Certificates
      • 13.3 Termination Detection
        • 13.3.1 Problem Definition
        • 13.3.2 Tracing Algorithms
        • 13.3.3 Probe Algorithms
      • 13.4 Distributed Arc Consistency and the Constraint Satisfaction Problem (CSP)
        • 13.4.1 Constraint Satisfaction and Arc Consistency
        • 13.4.2 The AC4 Algorithm
        • 13.4.3 The Distributed AC4 Algorithm
        • 13.4.4 Termination Detection
        • 13.4.5 Partitioning for Multiprocessor Computers
        • 13.4.6 Distributed Constraint Satisfaction Algorithm
      • 13.5 Distributed Graph Processing
        • 13.5.1 The Problem: Loop Cutset
        • 13.5.2 Distributed Execution of the Algorithm
        • 13.5.3 Complexity and Conclusions
      • 13.6 Conclusions
      • 13.7 Exercises
      • 13.8 References

Notes

loop:
  - perceive                      P = perceive()
  - revise knowledge              B = belief_revision(B, P)
  - reason about knowledge
    to obtain current desires     D = option_selection(B)
  - reconsider intentions
    based on desires and beliefs  I = intention_selection(B, D, I)
  - planning                      A = action_selection(B, I)
  - act and communicate           execute(A)
  +-----+      +--------+     +-+
  |     |------>belief  |----->B|-----------+
  |  E  |      |revision|     +++           |
  |  N  |      +--------+      |            |
  |  V  |                      |            |
  |  I  |          +-----------+            |
  |  R  |          |                        |
  |  O  |          |                        |
  |  N  |     +----v----+     +-+     +-----v---+
  |  M  |     |intention<-----|D<-----|option   |
  |  E  |     |selection|     +-+     |selection|
  |  N  |     +----^----+             +-----^---+
  |  T  |          |                        |
  |     |          |                        |
  |     |          +-----------+            |
  |     |                      |            |
  |     |                      |            |
  |     |     +---------+     +v+           |
  |     <-----|action   <-----|I|-----------+
  |     |     |selection|     +-+
  +-----+     +---------+

Desires and intentions share their elements, since intention selection "filters" the desires the agent has selected. The intentions in a given deliberative cycle are therefore a subset of the desires.
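
A minimal Python sketch of this deliberation cycle. The stub bodies and the toy Env class are invented for illustration; only the control flow and the intentions-are-a-subset-of-desires property come from the notes above.

  # All component functions are illustrative stubs for a concrete agent.
  def belief_revision(beliefs, percept):
      return beliefs | {percept}                # stub: accumulate percepts

  def option_selection(beliefs):
      return {("achieve", b) for b in beliefs}  # stub: one desire per belief

  def intention_selection(beliefs, desires, intentions):
      # 'Filter' the desires: commit to at most one option, so the
      # intentions of a cycle are always a subset of its desires.
      return set(sorted(desires)[:1])

  def action_selection(beliefs, intentions):
      return next(iter(intentions), None)       # stub: act on any intention

  class Env:
      """Toy environment: percepts are tick counts, actions are printed."""
      def __init__(self):
          self.t = 0
      def perceive(self):
          self.t += 1
          return self.t
      def execute(self, action):
          print("executing:", action)

  def bdi_step(env, beliefs, intentions):
      percept = env.perceive()                             # P = perceive()
      beliefs = belief_revision(beliefs, percept)          # B = belief_revision(B, P)
      desires = option_selection(beliefs)                  # D = option_selection(B)
      intentions = intention_selection(beliefs, desires, intentions)
      assert intentions <= desires                         # I is a subset of D
      env.execute(action_selection(beliefs, intentions))   # act and communicate
      return beliefs, intentions

  env, beliefs, intentions = Env(), set(), set()
  for _ in range(3):
      beliefs, intentions = bdi_step(env, beliefs, intentions)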

Wikipedia

The BDI model provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans.

Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it).

A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
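
A hedged sketch of this separation: plan selection is a lookup in a plan library, while execution advances already-active plans one step per cycle. The library contents and the one-step-per-cycle policy are invented for this example.

  # Plan selection (deliberation) is kept apart from plan execution.
  PLAN_LIBRARY = {"make_tea": ["boil_water", "steep", "pour"]}
  active_plans = []                   # pairs of [steps, program counter]

  def select_plan(intention):
      """Deliberate: choose a plan for an intention, but do not run it yet."""
      if intention in PLAN_LIBRARY:
          active_plans.append([PLAN_LIBRARY[intention], 0])

  def execute_one_step():
      """Execute: advance each currently active plan by a single step."""
      for plan in active_plans[:]:
          steps, pc = plan
          print("doing:", steps[pc])
          plan[1] += 1
          if plan[1] == len(steps):
              active_plans.remove(plan)

  select_plan("make_tea")             # deliberation phase
  while active_plans:                 # execution phase, one step per cycle
      execute_one_step()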

Bratman

The BDI model implements Bratman's notions of belief, desire and (in particular) intention. Intention and desire are both pro-attitudes (mental attitudes concerned with action), but intention is distinguished as a conduct-controlling pro-attitude.

He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which the agent is already committed.

Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans.

The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed.

The BDI software model is closely associated with intelligent agents, but does not, of itself, ensure all the characteristics associated with such agents. For example, it allows agents to have private beliefs, but does not force them to be private.

It also has nothing to say about agent communication. Ultimately, the BDI software model is an attempt to solve a problem that has more to do with plans and planning (the choice and execution thereof) than it has to do with the programming of intelligent agents.

Beliefs

Beliefs represent the informational state of the agent, in other words its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Using the term belief rather than knowledge recognizes that what an agent believes may not necessarily be true (and in fact may change in the future).

Beliefset

Beliefs are often stored in a database (sometimes called a belief base or a belief set), although that is an implementation decision.
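
A small illustration of such a belief base with inference rules: ground facts are stored as strings, and if-then rules are applied by forward chaining until no new belief can be derived. The fact and rule formats are invented for this sketch.

  # Naive forward chaining over a belief base; formats invented for this sketch.
  def forward_chain(facts, rules):
      """Apply (premises, conclusion) rules until no new belief appears."""
      beliefs = set(facts)
      changed = True
      while changed:
          changed = False
          for premises, conclusion in rules:
              if premises <= beliefs and conclusion not in beliefs:
                  beliefs.add(conclusion)
                  changed = True
      return beliefs

  rules = [
      ({"raining"}, "ground_wet"),
      ({"raining", "have_umbrella"}, "stay_dry"),
  ]
  print(forward_chain({"raining", "have_umbrella"}, rules))
  # -> {'raining', 'have_umbrella', 'ground_wet', 'stay_dry'} (set order varies)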

Desires

Desires represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: find the best price, go to the party or become rich.

Goals

A goal is a desire that has been adopted for active pursuit by the agent. Usage of the term goals adds the further restriction that the set of active desires must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home - even though they could both be desirable.
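
The consistency restriction might be enforced as below: a desire is adopted as a goal only if it does not conflict with the goals already adopted. The conflict table is an invented example.

  # A desire becomes a goal only if the goal set stays consistent.
  CONFLICTS = {("go_to_party", "stay_home"), ("stay_home", "go_to_party")}

  def adopt_goal(goals, desire):
      if any((desire, g) in CONFLICTS for g in goals):
          return goals                 # inconsistent: leave the desire unadopted
      return goals | {desire}

  goals = adopt_goal(set(), "go_to_party")
  goals = adopt_goal(goals, "stay_home")   # rejected: conflicts with the party
  print(goals)                             # -> {'go_to_party'}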

Intentions

Intentions represent the deliberative state of the agent - what the agent has chosen to do. Intentions are desires to which the agent has to some extent committed. In implemented systems, this means the agent has begun executing a plan.

Plans

Plans are sequences of actions (recipes or knowledge areas) that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.
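
The hierarchical structure can be sketched as follows: a plan step is either a primitive action or the name of another plan, so detail is filled in only when a sub-plan is actually reached. The plan contents are invented for this example.

  # A step is either a primitive action or the name of another plan,
  # so sub-plans are expanded only when execution reaches them.
  PLANS = {
      "go_for_a_drive": ["find_car_keys", "start_car", "drive"],
      "find_car_keys": ["check_pockets", "check_kitchen_table"],
  }

  def execute_plan(name):
      for step in PLANS[name]:
          if step in PLANS:            # the step is itself a plan: recurse
              execute_plan(step)
          else:                        # primitive action: perform it
              print("doing:", step)

  execute_plan("go_for_a_drive")
  # doing: check_pockets, check_kitchen_table, start_car, drive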

Events

These are triggers for reactive activity by the agent. An event may update beliefs, trigger plans or modify goals. Events may be generated externally and received by sensors or integrated systems. Additionally, events may be generated internally to trigger decoupled updates or plans of activity.
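
A hedged sketch of event handling, with each event kind dispatched to update beliefs, trigger a plan, or modify the goal set; the event kinds and handler bodies are invented for illustration.

  # One event may update beliefs, trigger a plan, or modify goals, whether
  # it arrives from a sensor or is raised internally. Kinds are invented.
  beliefs, goals, triggered_plans = set(), set(), []

  def handle_event(kind, payload):
      if kind == "percept":            # external: a sensor reading
          beliefs.add(payload)
      elif kind == "alarm":            # external or internal trigger
          triggered_plans.append("respond_to_" + payload)
      elif kind == "goal_adopted":     # internal: modify the goal set
          goals.add(payload)

  handle_event("percept", "door_open")
  handle_event("alarm", "intruder")
  handle_event("goal_adopted", "secure_house")
  print(beliefs, goals, triggered_plans)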

BDI Interpreter

This section defines an idealized BDI interpreter that provides the basis of the PRS lineage of BDI systems:

  1. initialize-state
  2. repeat
    1. options: option-generator(event-queue)
    2. selected-options: deliberate(options)
    3. update-intentions(selected-options)
    4. execute()
    5. get-new-external-events()
    6. drop-unsuccessful-attitudes()
    7. drop-impossible-attitudes()
  3. end repeat
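
A direct transcription of this idealized interpreter into Python. Every method on the stub agent is a placeholder; only the order of the calls comes from the pseudocode above, and the cycle count is bounded so the sketch halts.

  # Transcription of the interpreter loop; all agent methods are stubs.
  class StubAgent:
      def __init__(self):
          self.event_queue, self.intentions = [], []
      def initialize_state(self):            self.event_queue = ["boot"]
      def option_generator(self, events):    return list(events)
      def deliberate(self, options):         return options[:1]
      def update_intentions(self, selected): self.intentions += selected
      def execute(self):                     print("acting on", self.intentions)
      def get_new_external_events(self):     self.event_queue = []
      def drop_unsuccessful_attitudes(self): pass
      def drop_impossible_attitudes(self):   pass

  def bdi_interpreter(agent, max_cycles=3):  # bounded for the sketch
      agent.initialize_state()
      for _ in range(max_cycles):
          options = agent.option_generator(agent.event_queue)
          selected = agent.deliberate(options)
          agent.update_intentions(selected)
          agent.execute()
          agent.get_new_external_events()
          agent.drop_unsuccessful_attitudes()
          agent.drop_impossible_attitudes()

  bdi_interpreter(StubAgent())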

Limitations and Criticisms

The BDI software model is one example of a reasoning architecture for a single rational agent, and one concern in a broader multi-agent system. This section bounds the scope of concerns for the BDI software model, highlighting known limitations of the architecture.

  • Learning: BDI agents lack any specific mechanisms within the architecture to learn from past behavior and adapt to new situations[6][7].
  • Three Attitudes: Classical decision theory and planning research question the necessity of having all three attitudes; distributed AI research questions whether the three attitudes are sufficient[1].
  • Logics: The multi-modal logics that underlie BDI (which do not have complete axiomatizations and are not efficiently computable) have little relevance in practice[1][8].
  • Multiple Agents: In addition to not explicitly supporting learning, the framework may not be well suited to learning behavior. Further, the BDI model does not explicitly describe mechanisms for interaction with other agents and integration into a multi-agent system[9].
  • Explicit Goals: Most BDI implementations do not have an explicit representation of goals[10].
  • Lookahead: The architecture does not have (by design) any lookahead deliberation or forward planning. This may not be desirable because adopted plans may use up limited resources, actions may not be reversible, task execution may take longer than forward planning, and actions may have undesirable side effects if unsuccessful[11].