wiki:CommunicationInterface

The information on this page represents the state of discussion in spring 2009. Since then the ideas described here have been implemented as a concrete architecture, consisting of the three components FAtiMA, CMION and SAMGAR.

3.1 FAtiMA

Maintains high-level memory; carries out cognitive appraisal; manages goals and affective states; generates plans (action sequences); monitors plan outcomes. The actions carried out by FAtiMA are high-level, for example “move to table”, which is passed to ION (3.2).

3.2 ION (Java)

Contains four sub-systems:

3.2.1 Commands translator and interpreter:

Acts as an interface performing two main tasks:

1. Receives commands from FAtiMA, e.g. “move to table”. This sequence of actions is fed into the competency manager (3.2.2).
2. Receives feedback from the competency manager about the success/failure of a particular command, interprets messages passed from Level 2 through the message receiver (3.2.4), and reports them to FAtiMA.

Communication between ION (3.2) and FAtiMA (3.1) will be via socket messages.
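For illustration, a minimal sketch of the FAtiMA-to-ION link in Java, assuming plain TCP with newline-delimited text messages; the host, port and message strings are assumptions, not the actual protocol:

import java.io.*;
import java.net.Socket;

public class FatimaLink {
    public static void main(String[] args) throws IOException {
        // Hypothetical port and message format; this page specifies nothing
        // beyond "socket messages".
        try (Socket socket = new Socket("localhost", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("move to table");     // high-level action from FAtiMA
            String feedback = in.readLine();  // success/failure reported by ION
            System.out.println("ION reports: " + feedback);
        }
    }
}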

3.2.2 Competency Manager:

The competency manager accepts action commands from the commands translator and interpreter (3.2.1) and formulates them into the action sequence required to execute the command. This formulation involves mapping the competencies required to execute the particular command and creating an XML file with those competencies.

Example 1: mapping a command like “move to table” to Level 2 in XML:

<Navigation>
  <Go-to place="Table"/>
  <emote value="100">Happy</emote>
  <emote value="100">Confidence</emote>
</Navigation>

Example 2: expressive commands can also be passed explicitly when required:

<Behaviour>
  <Behave value="100">WhatBehaviour</Behave>
  <emote value="100">Happy</emote>
  <emote value="100">Confidence</emote>
  <mode value="1">Awareness</mode>
</Behaviour>
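As an illustration of the mapping step, the following is a minimal sketch of how the competency manager might build the Example 1 message in Java. The element names follow the examples above; the class and method names (CompetencyManager, mapMoveTo) are hypothetical:

import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.*;

public class CompetencyManager {
    public String mapMoveTo(String place) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element nav = doc.createElement("Navigation");
        doc.appendChild(nav);

        Element goTo = doc.createElement("Go-to");   // navigation competency
        goTo.setAttribute("place", place);
        nav.appendChild(goTo);

        for (String feeling : new String[] {"Happy", "Confidence"}) {
            Element emote = doc.createElement("emote");
            emote.setAttribute("value", "100");      // intensity, as in Example 1
            emote.setTextContent(feeling);
            nav.appendChild(emote);
        }

        StringWriter out = new StringWriter();       // serialise to a string
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}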

Note: Level 3 will contain only abstract information about the companion and the available competencies. For example, Level 3 will be aware whether the companion can navigate, but not care how the task is carried out. In the case of a robot, navigation will be carried out by Level 2 differently than on a handheld system.
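A hypothetical sketch of this abstraction: Level 3 sees only an abstract navigation competency, while each platform supplies its own Level 2 implementation. All names here are illustrative:

public interface NavigationCompetency {
    boolean isAvailable();   // Level 3 only asks whether the companion can navigate
    void goTo(String place); // how navigation happens is hidden inside Level 2
}

class RobotNavigation implements NavigationCompetency {
    public boolean isAvailable() { return true; }
    public void goTo(String place) {
        // drive the wheels, avoid obstacles, etc.
    }
}

class HandheldNavigation implements NavigationCompetency {
    public boolean isAvailable() { return true; }
    public void goTo(String place) {
        // e.g. display directions on screen instead of physically moving
    }
}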

3.2.3 Message sender:

Sends messages in XML format to Level 2 Competencies execution/monitoring (2.1).

3.2.4 Message receiver:

Receives messages in XML format from Level 2 Competencies execution/monitoring (2.1) and passes them to the commands translator and interpreter (3.2.1). The message structure will be similar to that shown in Example 1.

2.1 Competencies execution/monitoring:

Will be responsible for executing and monitoring the competencies, and also for monitoring the Level 2 affective system. SAMGAR will provide functionality to stop/pause a competency, and also to recognise errors within competencies and report them to Level 3 via 2.1.1.
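A hypothetical sketch of what such a stoppable/pausable competency and its error reporting might look like; SAMGAR's actual interfaces are not specified on this page and may differ:

public interface Competency {
    void start();
    void pause();
    void stop();
    void setMonitor(CompetencyMonitor monitor); // who to tell about errors
}

interface CompetencyMonitor {
    // Called when a competency detects an internal error, so the failure
    // can be reported up to Level 3 via the module in 2.1.1.
    void onError(Competency source, String description);
}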

2.1.1 Message encryption/decryption:

This module will be responsible for encoding/decoding messages in XML format to be sent to, or received from, Level 3 (3.2.4). For example, in the receive case, the incoming XML message is first decoded in order to call the required competencies with the required parameter values.
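A minimal sketch of the receive case, parsing a Navigation message like Example 1 and dispatching its parts; the class name and the dispatch targets in the comments are illustrative, not the actual module API:

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class MessageDecoder {
    public void decode(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList children = doc.getDocumentElement().getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            if (!(children.item(i) instanceof Element)) continue;
            Element e = (Element) children.item(i);
            if (e.getTagName().equals("Go-to")) {
                // e.g. Movement.go(current, e.getAttribute("place"))
            } else if (e.getTagName().equals("emote")) {
                // e.g. update affect with e.getTextContent()
                // and e.getAttribute("value")
            }
        }
    }
}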

2.1.2 Local emotional/affective system:

Will represent the local affective state of the companion. It will also take into account Level 2 memory, which can hold affective state, for example frustration due to repeated failed attempts to complete a given task.

Note: these affective states will be different from the Level 3 affective states represented by the OCC model in FAtiMA.
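A minimal sketch, assuming frustration is simply a counter in Level 2 memory that rises with repeated failures; the increments and the threshold are illustrative assumptions:

public class LocalAffect {
    private int frustration = 0; // Level 2 affective state, 0..100

    public void taskFailed()    { frustration = Math.min(100, frustration + 20); }
    public void taskSucceeded() { frustration = Math.max(0, frustration - 10); }

    public boolean isFrustrated() { return frustration > 60; } // illustrative threshold
}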

2.2 Blackboard/Memory:

This unit can be perceived as Level 2 memory, which holds the current state of the system as well as important static information such as the locations of people, objects, etc.

This unit can be implemented as a singleton class where a common knowledge base, the "blackboard", is iteratively updated by competencies, allowing them to share data with each other. For example, an image captured by the camera can be placed on the blackboard and then used by the face-detection and colour-recognition competencies, as in the sketch below.
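A minimal sketch of such a singleton blackboard, assuming string keys and arbitrary values; a real implementation might add typed entries or change notifications:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class Blackboard {
    private static final Blackboard INSTANCE = new Blackboard();
    private final Map<String, Object> entries = new ConcurrentHashMap<>();

    private Blackboard() {}                      // no outside instantiation

    public static Blackboard getInstance() { return INSTANCE; }

    public void put(String key, Object value) { entries.put(key, value); }
    public Object get(String key)             { return entries.get(key); }
}

A camera competency could then call Blackboard.getInstance().put("CameraImage", image), and the face-detection competency could read the same entry back.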

Considering Example 1, where an XML message with the embedded command “move to table” is passed, it can be decoded to generate a sequence of competencies with the required parameters, using information on the blackboard; see the table below.

XML message (3.2.3):

<Navigation>
  <Go-to place="Table"/>
  <emote value="100">Happy</emote>
  <emote value="100">Confidence</emote>
</Navigation>

Competencies sequence (2.1.1):

Movement.go(current, Table)
Visual.Screen(HappyFace)
Confidence += 100

Blackboard (2.2):

Location:Current: 150, 200
Location:Table: 200, 300
Confidence: 20
Obstacle: False

While the sequence is being executed, the competencies will update the blackboard in case of dynamic events; for example, if an obstacle is in the way (setting Obstacle: True on the blackboard), the navigation competency can re-plan.
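A hypothetical sketch of that re-planning behaviour, reusing the Blackboard sketch above; the movement methods are platform-specific placeholders:

public class ObstacleAwareNavigation {
    public void goTo(String place) {
        while (!reached(place)) {
            if (Boolean.TRUE.equals(Blackboard.getInstance().get("Obstacle"))) {
                replan(place);                                   // pick a new route
                Blackboard.getInstance().put("Obstacle", false); // clear the flag
            }
            stepTowards(place); // one motion increment
        }
    }

    private boolean reached(String place)  { return false; } // platform-specific
    private void replan(String place)      { }               // platform-specific
    private void stepTowards(String place) { }               // platform-specific
}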

1.1 Level1:

Will contain programs to execute the competencies, tied to the resources of the particular platform.

The communication medium between Level 1 (1.1) and Level 2 can be embedded inside each competency; for example, for movement, Greta will use BML, the iCat some other format, and robots will tie in with the API software provided for the robot.
