  • Print publication year: 2015
  • Online publication date: November 2015

1 - Introduction to Practical Reasoning

Summary

Practical reasoning of the kind described by philosophers since Aristotle (384–322 BC) is identified as goal-based reasoning that works by finding a sequence of actions that leads toward or reaches an agent's goal. Practical reasoning, as described in this book, is used by an agent to select an action from a set of available alternative actions the agent sees as open in its given circumstances. A practical reasoning agent can be a human, an animal, or an artificial agent such as a software program or a robot. Once an action is selected as the best or most practical means of achieving the goal in the given situation, the agent draws the conclusion that it should go ahead and carry out this action. Such an inference is fallible so long as the agent's knowledge base remains open to new information. It is an important aspect of goal-based practical reasoning that if an agent learns that its circumstances or its goals have changed, so that a different action might now be the best one available, it can (and perhaps should) "change its mind."

In computer science, practical reasoning is more likely to be known as means-end reasoning (where an end is taken to mean a goal), goal-based reasoning, or goal-directed reasoning (Russell and Norvig, 1995, 259). Practical reasoning is fundamental to artificial intelligence (Reed and Norman, 2003), where it is called means-end analysis (Simon, 1981). In goal-based problem solving, a solution is found by searching for a sequence of actions, drawn from the available means, that solves the problem. An intelligent goal-seeking agent needs to receive information about its external circumstances by means of sensors and store it in its memory. There are differences of opinion about how practical goal-based reasoning should be modeled. One issue is whether it should be seen as a merely instrumental form of reasoning, or whether it should also be based on values. Many automated systems of practical reasoning for multi-agent deliberation (Gordon and Richter, 2002; Atkinson et al., 2004a, 2004b; Rahwan and Amgoud, 2006) take values into account.
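The search for a sequence of actions that reaches a goal can be sketched in code. The following is a minimal illustration, not any particular system from the literature: a breadth-first search over states, where the function names (`plan`, `actions`) and the toy domain (an agent stepping along a line of positions) are invented here for the example.

```python
from collections import deque

def plan(initial_state, goal_test, actions):
    """Breadth-first search for a sequence of actions leading from
    initial_state to a state satisfying goal_test.

    `actions` maps a state to a list of (action_name, next_state)
    pairs, i.e., the means the agent sees as open in that state.
    Returns a list of action names, or None if no plan exists.
    """
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for name, next_state in actions(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, path + [name]))
    return None  # goal unreachable from the initial state

# Toy domain: the agent starts at position 0 and wants to reach
# position 3, moving one step left or right along positions 0..3.
def moves(s):
    result = []
    if s < 3:
        result.append(("right", s + 1))
    if s > 0:
        result.append(("left", s - 1))
    return result

print(plan(0, lambda s: s == 3, moves))  # ['right', 'right', 'right']
```

Note that the fallibility of practical inference shows up naturally here: if the goal test or the available moves change (new information arrives), rerunning `plan` may select a different sequence of actions.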