Changes between Initial Version and Version 1 of CommunicationInterface

05/20/2009 05:36:25 PM



3.1 FAtiMA: Maintains high-level memory; carries out cognitive appraisal; manages goals and affective states; generates plans (action sequences); and monitors plan outcomes. The actions carried out by FAtiMA are high level, for example "move to table", which is passed to ION (3.2).
3.2 ION (Java): Contains four sub-systems:
3.2.1 Commands translator and interpreter: Acts as an interface performing two main tasks:
1. Receives commands from FAtiMA, e.g. "move to table". These commands are fed into the competency manager (3.2.2).
2. Receives feedback from the competency manager about the success/failure of a particular command; it also interprets messages passed from Level 2 through the message receiver (3.2.4) and reports them to FAtiMA.
The communication medium between ION (3.2) and FAtiMA (3.1) will be socket messages.
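As a rough illustration, the socket link could look like the sketch below: FAtiMA writes a high-level action as a line of text and ION replies with a status line. The class name, line-based format, acknowledgement text, and threading are assumptions for illustration; the design only specifies that socket messages are used.

```java
import java.io.*;
import java.net.*;

public class SocketLinkSketch {

    /** Send one command to a toy ION-side echo server and return its reply. */
    static String roundTrip(String command) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {    // any free port
            Thread ion = new Thread(() -> {                  // ION side (3.2)
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    // Acknowledge the received command back to FAtiMA
                    out.println("OK " + in.readLine());
                } catch (IOException e) { throw new UncheckedIOException(e); }
            });
            ion.start();

            // FAtiMA side (3.1): send the high-level action, wait for feedback
            try (Socket s = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println(command);
                String reply = in.readLine();
                ion.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("move to table"));      // OK move to table
    }
}
```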
3.2.2 Competency manager: The competency manager accepts the action command from the commands translator and interpreter (3.2.1) and formulates it into the action sequence required to execute the command. This formulation involves mapping the competencies required to execute a particular command and creating an XML file with the required competencies.
Example 1: Mapping a command like "move to table" to Level 2 in XML:
<Go-to place>Table</Go-to place>
<emote value=100>Happy</emote>
<emote value=100>Confidence</emote>
Example 2: Expressive commands can also be passed explicitly as required:
<Behave value=100>What Behaviour</Behave>
<emote value=100>Happy</emote>
<emote value=100>Confidence</emote>
<mode value=1>Awareness</mode>
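The mapping step in 3.2.2 could be sketched as below: a high-level command string is matched and expanded into the XML message of Example 1. The class name, method name, and the simple prefix match are illustrative assumptions, not part of the design.

```java
public class CompetencyManagerSketch {

    /** Map a high-level FAtiMA command to the XML message sent to Level 2. */
    static String toXml(String command) {
        if (command.startsWith("move to ")) {
            String place = command.substring("move to ".length());
            // Capitalise to match the Example 1 convention ("Table")
            place = Character.toUpperCase(place.charAt(0)) + place.substring(1);
            return "<Go-to place>" + place + "</Go-to place>\n"
                 + "<emote value=100>Happy</emote>\n"
                 + "<emote value=100>Confidence</emote>";
        }
        throw new IllegalArgumentException("no competency mapping for: " + command);
    }

    public static void main(String[] args) {
        System.out.println(toXml("move to table"));
    }
}
```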
Note: Level 3 will hold only abstract information about the companion and the available competencies. For example, Level 3 will be aware that the companion can navigate but will not care how the task is carried out; in the case of a robot, navigation will be carried out by Level 2 differently than on a handheld system.
3.2.3 Message sender: Sends the message in XML format to the Level 2 competencies execution/monitoring module (2.1).
3.2.4 Message receiver: Receives messages in XML format from the Level 2 competencies execution/monitoring module (2.1) and passes them to the commands translator and interpreter (3.2.1). The message structure will be similar to that in Example 1.
2.1 Competencies execution/monitoring: Will be responsible for executing and monitoring the competencies and also for monitoring the Level 2 affective system. SAMGAR will provide functionality to stop/pause a competency and to recognise errors within competencies and report them to Level 3 via 2.1.1.
2.1.1 Message encryption/decryption: This module will be responsible for encrypting/decrypting messages in XML format to be sent to/received from Level 3 (3.2.4). For example, in the receive case, the XML message is first decrypted in order to call the required competencies with the required parameter values.
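The receive path could be sketched as follows: the (decrypted) message is scanned for the Go-to element and turned into a competency call with parameters. A regex is used here because elements such as <Go-to place> contain a space and are not standard XML; all class and method names are illustrative assumptions.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MessageDecoderSketch {

    /** Turn a decrypted Level 3 message into a competency call string. */
    static String toCompetencyCall(String xml) {
        Matcher m = Pattern.compile("<Go-to place>(.*?)</Go-to place>").matcher(xml);
        if (m.find()) {
            // Destination comes from the message; "current" comes from Level 2 state
            return "Movement.go(current, " + m.group(1) + ")";
        }
        throw new IllegalArgumentException("no Go-to element in message");
    }

    public static void main(String[] args) {
        String msg = "<Go-to place>Table</Go-to place>\n"
                   + "<emote value=100>Happy</emote>";
        System.out.println(toCompetencyCall(msg)); // Movement.go(current, Table)
    }
}
```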
2.2.2 Local emotional/affective system: Will represent the local affective state of the companions. It will also take into account Level 2 memory, which can hold affective state, for example frustration due to repeated failed attempts to complete a given task.
Note: These affective states will be different from the Level 3 affective states represented by the OCC model in FAtiMA.
2.2 Blackboard/Memory: This unit can be seen as the Level 2 memory, which holds the current state of the system and also important static information such as the locations of people, objects, etc.
This unit can be implemented as a singleton class in which a common knowledge base, the "blackboard", is iteratively updated by competencies, allowing them to share data with each other. For example, an image captured by the camera can be placed on the blackboard and used by both the face-detection and colour-recognition competencies.
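A minimal sketch of such a singleton, assuming a simple string-keyed store (the key names and method names are assumptions; a concurrent map is used so several competencies can update it safely):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class Blackboard {
    private static final Blackboard INSTANCE = new Blackboard();
    private final Map<String, Object> entries = new ConcurrentHashMap<>();

    private Blackboard() {}                        // no outside instantiation

    /** Every competency shares the same instance. */
    public static Blackboard getInstance() { return INSTANCE; }

    public void put(String key, Object value) { entries.put(key, value); }
    public Object get(String key) { return entries.get(key); }

    public static void main(String[] args) {
        Blackboard bb = Blackboard.getInstance();
        // One competency writes shared data...
        bb.put("Location:Table", new int[]{200, 300});
        bb.put("Obstacle", false);
        // ...another competency reads it back.
        System.out.println(bb.get("Obstacle"));    // false
    }
}
```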
Considering Example 1, where an XML message with the embedded command "move to table" is passed, it can be decrypted into a sequence of competencies with the required parameters using information on the blackboard; see the table below.
XML message (3.2.3):
<Go-to place>Table</Go-to place>
<emote value=100>Happy</emote>
<emote value=100>Confidence</emote>

Competencies sequence (2.1.1):
Movement.go(current, Table)

Blackboard (2.2):
Location:Current: 150, 200
Location:Table: 200, 300
Confidence: 20
Obstacle: False
While the sequence is being executed, the competencies will update the blackboard in response to dynamic events. For example, if an obstacle is in the way (set Obstacle: True on the blackboard), the navigation competence can re-plan.
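This re-planning behaviour could be sketched as below: a navigation step checks the shared Obstacle flag and re-plans when another competency has set it. The blackboard is modelled as a plain map here for brevity, and the action strings and method names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class ReplanSketch {

    /** Decide the navigation competence's next action from the blackboard. */
    static String nextAction(Map<String, Object> blackboard) {
        boolean obstacle = Boolean.TRUE.equals(blackboard.get("Obstacle"));
        return obstacle ? "re-plan route to Table" : "continue to Table";
    }

    public static void main(String[] args) {
        Map<String, Object> blackboard = new HashMap<>();
        blackboard.put("Obstacle", false);
        System.out.println(nextAction(blackboard));  // continue to Table

        // A sensing competency detects an obstacle and updates the blackboard:
        blackboard.put("Obstacle", true);
        System.out.println(nextAction(blackboard));  // re-plan route to Table
    }
}
```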
1.1 Level 1: Will contain the programs that execute the competencies tied to resources on the particular platform.
The communication medium between Level 1 (1.1) and Level 2 can be embedded inside the competence. For Movement, for example, Greta will use BML, the iCat some other format, and robots will tie into the API software provided with the robot.