Intelligent Agents

Intelligent Agents Project at IBM T.J. Watson Research
Research on Intelligent agents for IBM

Application of Intelligent Agents
An Adobe PDF file that focuses on some of the advantages of an agent

An Outline of Intelligent Agents
This website covers colleges and their research on intelligent agents

An Online List of all Agents
This site has a great list of agents and an interactive agent you can "talk" to.

Agents 101
The Basics

"Is there an Agent in Your Future"
What agents can do in the future

Conference on Autonomous Agents
A link to a site that allows developers of agents and the public to interact once a year.

"Where do Intelligent Agents come from?"
The title speaks for itself.

Every link you might ever need
A pre-compiled list of links about Intelligent Agents

"Intelligent Agents: A Primer"
A great site that covers the details of what agents are and how they work


An agent is a computer system situated in an environment that operates free of any dominant controller and acts without direct human intervention. Autonomous agents are software entities capable of independent action in dynamic, unpredictable environments. Agents are one of the most important areas of research and development in computer science today, and they are currently being applied in domains as diverse as computer games and interactive cinema, information retrieval and filtering, user interface design, and industrial process control.

Agent-based software has been called the "next significant breakthrough in software development," and agents are the focus of strong interest in both computer science and artificial intelligence.

Agents are used in applications ranging from e-mail filters to air traffic control. Other examples include:

*  Music and movie recommendation systems

*  News filters

*  Literary filters

*  Web page pop-up blockers

*  Search engines that provide automated "bots"

A good agent has the following characteristics: (1) you can communicate with it clearly, (2) it can act as well as suggest topics and answers, (3) it can act without supervision, and (4) it can use experience to help you.

In other words, "It must be communicative: able to understand your goals, preferences and constraints. It must be capable: able to take options rather than simply provide advice. It must be autonomous: able to act without the user being in control the whole time. And it should be adaptive: able to learn from experience about both its tasks and about its user's preferences."
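
To make these four properties concrete, the sketch below expresses them as a hypothetical agent interface. The class and method names are illustrative assumptions, not taken from any existing agent framework.

```python
# A hedged sketch: the four characteristics of a good agent as an abstract
# interface. IntelligentAgent and its method names are hypothetical.
from abc import ABC, abstractmethod

class IntelligentAgent(ABC):
    @abstractmethod
    def communicate(self, goals, preferences, constraints):
        """Communicative: understand the user's goals, preferences and constraints."""

    @abstractmethod
    def act(self, situation):
        """Capable: take action on the user's behalf rather than simply give advice."""

    @abstractmethod
    def run(self):
        """Autonomous: operate without the user being in control the whole time."""

    @abstractmethod
    def learn(self, experience):
        """Adaptive: learn from experience about its tasks and the user's preferences."""
```

A concrete agent, such as an e-mail filter, would subclass this interface and supply real implementations of each method.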

Critics argue that a third party might be able to take control of an agent and provide information that benefits them rather than the consumer. Others believe that a human supervisor should be put in place to monitor some of the agent's actions, although software itself might serve as that supervisor for the system.

In order for the system to be adaptive, behavior can be assessed in a number of ways. The simplest is to group users based on some set of features and then assume similarity within each group. This can work fairly well: a new user fills out a questionnaire that allows the system (using a statistical clustering algorithm) to figure out to which cluster of other users this user belongs, and the preferences associated with that cluster are then assumed to work for this user.
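
As a concrete illustration of this clustering approach, here is a minimal sketch using scikit-learn's KMeans on made-up questionnaire answers; the data, feature meanings, and number of clusters are all hypothetical.

```python
# Minimal sketch: cluster existing users by questionnaire answers, then assign
# a new user to the nearest cluster and reuse that cluster's preferences.
# All data below is invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one existing user's questionnaire answers (e.g., interests rated 0-5).
existing_users = np.array([
    [5, 1, 0, 4],
    [4, 0, 1, 5],
    [0, 5, 4, 1],
    [1, 4, 5, 0],
])

# Statistical clustering step: group the existing users into two clusters.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(existing_users)

# A new user fills out the same questionnaire; the system finds the closest
# cluster and assumes that cluster's preferences also apply to this user.
new_user = np.array([[5, 0, 1, 4]])
cluster = model.predict(new_user)[0]
print(f"New user assigned to cluster {cluster}; apply that cluster's preferences.")
```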

How They Work:

Intelligent agents can be designed with any of several technologies, all of which use some combination of statistical operations, artificial intelligence, machine learning, inference, neural networks, and information technologies. Agent systems are not plug and play: they need to be trained or taught, and most require examples of right answers or rules for appropriate behavior.

Typically, an agent system is implemented in several stages. First, one develops rules or training data. Then, one either trains the agents by giving them rules such as, “If Maes writes an article, then get the whole article and notify me by flashing the new-articles icon when it arrives,” or by giving them a large set of examples with the “right” answers included. Once the agent system performs satisfactorily on the training data, it is ready to work on test data to make sure that it can extend what it has learned to unknown material. A last step, but a continuing one, at least in theory, is to evaluate performance at regular intervals. Agents should learn over time, and their performance should improve as they adapt to the user’s needs, as well as to the kinds of information they navigate.
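
The staged process described above can be sketched in a few lines of code. The rule format, the Maes example, and the test articles below are hypothetical illustrations, not part of any particular agent system.

```python
# Minimal sketch of a rule-trained filtering agent: define rules, then check
# its behaviour on unseen "test" articles before relying on it.
from dataclasses import dataclass, field

@dataclass
class ArticleFilterAgent:
    rules: list = field(default_factory=list)  # (predicate, action) pairs

    def add_rule(self, predicate, action):
        # Stage 2: "train" the agent by giving it an explicit rule.
        self.rules.append((predicate, action))

    def handle(self, article):
        # Apply every rule whose predicate matches the incoming article.
        return [action(article) for predicate, action in self.rules if predicate(article)]

agent = ArticleFilterAgent()
agent.add_rule(
    predicate=lambda a: a["author"] == "Maes",
    action=lambda a: f"Fetch full article and flash the new-articles icon for '{a['title']}'",
)

# Stage 3: test on articles the agent has not seen before.
test_articles = [
    {"author": "Maes", "title": "Agents that Reduce Work"},
    {"author": "Smith", "title": "Unrelated Paper"},
]
for article in test_articles:
    print(article["title"], "->", agent.handle(article) or "ignored")
```

A learning-based variant would replace the hand-written rules with a model trained on a large set of labelled examples, and the final, ongoing stage would track how well its suggestions match the user's actual choices over time.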

The questions involved in this process:

Trust
All these issues raise the question of trust and delegation of authority, in many different forms:

  1. Will the agent do what I want or will it misinterpret my instructions?
  2. Will it distribute my credit information to unauthorized people?
  3. Will it act effectively or will the limited agent capabilities be exceeded, resulting in poorly negotiated bargains or poorly retrieved information?
  4. In complex, high-pressure situations, can we trust an agent system to offer the best potential solutions or will it miss something important?
  5. How much can we trust an agent-based system to automate our actions? Do we want them to check with us for every initiative they take? Can we delegate authority little by little, so that we edge into a partnership with them? Will they hijack our work, as MS Word does in renumbering my outlines?
  6. Do we trust someone else’s agent to be who it says it is? Can we develop widely accepted and secure ways of verifying an agent’s identity?