Declarative Preferences in Reactive BDI Agents

Abstract

Current agent architectures implementing the belief-desire-intention (BDI) model consider agents that respond reactively to internal and external events by selecting the first available plan. Priority between plans is hard-coded in the program, so the reasons why a certain plan is preferred remain in the programmer’s mind. Recent works that attempt to include explicit preferences in BDI agents treat preferences essentially as a rationale for planning tasks performed at run-time, thus disrupting the reactive nature of agents. In this paper we propose a method to include declarative preferences (i.e. preferences concerning states of affairs) in the agent program, and to use them in a manner that preserves reactivity. To achieve this, the plan-prioritization step is performed offline, by (a) generating all possible outcomes of situated plan executions, (b) selecting a relevant subset of situation/outcome couplings as a representative summary for each plan, and (c) sorting the plans by evaluating these summaries against the agent’s preferences. The task of generating outcomes under several conditions is performed by translating the agent’s procedural knowledge into an ASP program based on the discrete event calculus.
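
The offline pipeline in steps (a)–(c) can be illustrated with a small, self-contained sketch. The Python code below is only an illustration under simplifying assumptions: plans, situations, and preferences are toy dictionaries, and outcome generation is a stub standing in for the ASP/discrete-event-calculus encoding described in the paper; all names (Plan, generate_outcomes, summarize, prioritize, and the example fluents) are hypothetical, not the paper’s actual formalism.

```python
# Hypothetical sketch of offline plan prioritization with declarative
# preferences. Names and the toy outcome model are illustrative only.

from dataclasses import dataclass
from itertools import product


@dataclass
class Plan:
    name: str
    effects: dict  # outcome fluents the plan tends to bring about


# (a) Generate outcomes of situated plan executions.
# A "situation" is a truth assignment over context fluents; the paper
# derives outcomes from an ASP / discrete event calculus encoding instead.
def generate_outcomes(plan, context_fluents):
    outcomes = []
    for values in product([True, False], repeat=len(context_fluents)):
        situation = dict(zip(context_fluents, values))
        outcome = {**situation, **plan.effects}  # toy model: effects hold, context persists
        outcomes.append((situation, outcome))
    return outcomes


# (b) Select a relevant subset of situation/outcome couplings as a summary,
# here by projecting outcomes onto the fluents the preferences talk about.
def summarize(outcomes, relevant_fluents):
    summary = set()
    for _situation, outcome in outcomes:
        summary.add(frozenset((f, outcome[f]) for f in relevant_fluents if f in outcome))
    return summary


# (c) Sort plans by evaluating summaries through declarative preferences,
# encoded here as numeric weights over desired fluent values.
def preference_score(summary, preferences):
    return sum(preferences.get(literal, 0)
               for projected in summary for literal in projected)


def prioritize(plans, context_fluents, relevant_fluents, preferences):
    return sorted(
        plans,
        key=lambda p: preference_score(
            summarize(generate_outcomes(p, context_fluents), relevant_fluents),
            preferences),
        reverse=True)


if __name__ == "__main__":
    plans = [
        Plan("go_by_car",  {"arrived": True, "low_emissions": False}),
        Plan("go_by_bike", {"arrived": True, "low_emissions": True}),
    ]
    prefs = {("arrived", True): 2, ("low_emissions", True): 1}
    order = prioritize(plans, ["raining"], ["arrived", "low_emissions"], prefs)
    print([p.name for p in order])  # plan priority computed offline
```

Running the sketch prints the plan order computed offline (here go_by_bike before go_by_car), which a reactive agent could then consult at run-time without performing any additional planning.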

Publication
In International Conference on Principles and Practice of Multi-Agent Systems
Giovanni Sileno
Assistant Professor
Tom van Engers
Full Professor (FDR)

I conduct research on AI & Law, with a particular focus on normative reasoning. With a track record in AI & Law research going back to 1983, I have worked on both knowledge-driven and data-driven AI approaches.