In our research group, we think of agents as a programming abstraction that extends the object-oriented programming style. Agents are differentiated from ordinary objects in that we think of them as independent entities, each of which does one task well. Agents also have communication with other agents at the forefront of their design. Finally, an agent has a notion of where it is running and can use this information to its advantage.
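As a rough sketch of this abstraction (the interface and method names below are invented for illustration, not Metaglue's actual API), an agent in this style might look like the following Java interface:

    // Illustrative sketch of the agent abstraction; not Metaglue's real API.
    public interface Agent {
        // Each agent is an independent entity that does one task well.
        void performTask();

        // Communication with other agents is at the forefront of the design.
        void receiveMessage(String fromAgent, String message);

        // An agent knows where it is running and can exploit that knowledge.
        String getLocation();
    }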
The Metaglue agent architecture was built with several advantages in mind, described in turn below. For more information, I invite you to look at the Metaglue website.
It is built to support both synchronous and asynchronous communication among distributed agents: an agent can block for a reply when it needs one, or fire off an event and keep working. This provides fast communication across the network.
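To make the distinction concrete, here is a minimal Java sketch; the CameraAgent interface is hypothetical, and Metaglue's actual remote-call machinery is not shown. A synchronous call waits for the remote agent's answer, while an asynchronous call returns immediately and the reply is collected later.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class CommunicationDemo {
        // Hypothetical remote agent; imagine status() crossing the network.
        interface CameraAgent {
            String status();
        }

        public static void main(String[] args) throws Exception {
            CameraAgent camera = () -> "recording";  // stand-in for a remote proxy

            // Synchronous: the caller waits for the remote agent's answer.
            System.out.println("sync reply: " + camera.status());

            // Asynchronous: the caller keeps working, collects the answer later.
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<String> pending = pool.submit(camera::status);
            System.out.println("caller keeps working...");
            System.out.println("async reply: " + pending.get());
            pool.shutdown();
        }
    }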
The Metaglue agent architecture provides mechanisms for resource discovery and management: it holds a catalogue of all the agents available in a particular space. In the software design meeting scenario described here, this is important because the application can query for the meeting-capture devices available in the room and start the most appropriate one. For instance, if a room has only audio recording available, it starts audio recording; if a room has both audio and video recording available, it starts video recording.
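The selection logic amounts to a preference-ordered query over the catalogue. Here is a minimal Java sketch; queryCatalogue and the Device record are invented stand-ins for the real catalogue interface:

    import java.util.List;
    import java.util.Optional;

    public class CaptureSelection {
        record Device(String name, String capability) {}

        // Stand-in for querying the space's agent catalogue.
        static List<Device> queryCatalogue() {
            return List.of(new Device("room-mic", "audio"),
                           new Device("room-camera", "video"));
        }

        public static void main(String[] args) {
            List<Device> devices = queryCatalogue();

            // Prefer video capture when available; otherwise fall back to audio.
            Optional<Device> choice = devices.stream()
                    .filter(d -> d.capability().equals("video"))
                    .findFirst()
                    .or(() -> devices.stream()
                            .filter(d -> d.capability().equals("audio"))
                            .findFirst());

            choice.ifPresent(d -> System.out.println("starting " + d.name()));
        }
    }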
The Metaglue agent architecture also provides robust recovery mechanisms for failed components: if an agent fails, Metaglue restarts it. For instance, if the video agent dies for any reason, the architecture brings it back up.
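The idea behind such recovery can be illustrated with a simple watchdog loop; isAlive and restart below are invented for the sketch, and Metaglue's real recovery machinery is more involved:

    public class Watchdog {
        // Hypothetical view of an agent as seen by the recovery mechanism.
        interface ManagedAgent {
            boolean isAlive();
            void restart();
            String name();
        }

        // Poll the agent periodically and restart it if it has died.
        static void monitor(ManagedAgent agent) throws InterruptedException {
            while (true) {
                if (!agent.isAlive()) {
                    System.out.println("restarting " + agent.name());
                    agent.restart();
                }
                Thread.sleep(5000);  // check every five seconds
            }
        }
    }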
Metaglue has built-in persistent storage. Agents can save information to a database as they run. If an agent dies and is restarted, it can recover its old state from the information stored in the database.
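As a minimal sketch of the save-and-restore idea, using a properties file as a stand-in for Metaglue's database-backed store (the real API differs):

    import java.io.*;
    import java.util.Properties;

    public class PersistentAgentState {
        private static final File STORE = new File("video-agent.state");
        private final Properties state = new Properties();

        // Called when the agent (re)starts: recover any saved state.
        void restore() throws IOException {
            if (STORE.exists()) {
                try (InputStream in = new FileInputStream(STORE)) {
                    state.load(in);
                }
            }
        }

        // Called as the agent runs: checkpoint a piece of its state.
        void save(String key, String value) throws IOException {
            state.setProperty(key, value);
            try (OutputStream out = new FileOutputStream(STORE)) {
                state.store(out, "agent checkpoint");
            }
        }

        public static void main(String[] args) throws IOException {
            PersistentAgentState agent = new PersistentAgentState();
            agent.restore();  // picks up where a dead instance left off
            agent.save("recording", "true");
        }
    }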
The Metaglue agent architecture also provides support for multimodal interaction through speech, gesture, and graphical user interfaces. We saw some of this support in the previous diagrams. For instance, to add speech to an agent, the user need only provide a grammar describing a set of expected utterances and a handler for speech-input events.
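As an illustration of how little the user must supply, the sketch below pairs a small grammar with a handler method. The grammar follows the Java Speech Grammar Format (JSGF); the handler wiring is hypothetical rather than Metaglue's exact speech API:

    public class SpeechEnabledAgent {
        // A small JSGF-style grammar describing the expected utterances.
        static final String GRAMMAR =
            "#JSGF V1.0;\n" +
            "grammar meeting;\n" +
            "public <command> = (start | stop) (audio | video) recording;";

        // Handler for speech-input events, invoked on a matching utterance.
        void onSpeech(String utterance) {
            System.out.println("heard: " + utterance);
            // ... dispatch to the appropriate capture agent ...
        }

        public static void main(String[] args) {
            SpeechEnabledAgent agent = new SpeechEnabledAgent();
            // In the real system the recognizer loads GRAMMAR and calls
            // onSpeech; here we simulate a recognized utterance.
            agent.onSpeech("start video recording");
        }
    }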