
Competency Interfaces

Text to speech

Non-verbal sounds

Gesture execution (robot)

Lip sync

Facial expression (robot)

Move limb

Follow person

Gaze/head movement

Expressive body behaviour

Grasp/Place object

User recognition

I've used YARP bottles containing a string, with additional arguments where needed, to denote each message.

Input message bottles go to a port named: /faceident-ctrl

  • "train" ID (string, int) : Train for this user
  • "detect" (string) : Switch to detection mode
  • "save" sessionname (string,string) : Save the detected faces
  • "load" sessionname (string,string) : Load previously detected faces
  • "clear" (string) : Clears all faces
  • "idle" (string) : Switch to idle mode, mostly frees up cpu
  • "errorthresh" value (string,float) : Set the error threshold (default 0.2)

Output messages come from a port named: /faceident

  • "user appeared" ID confidence (string, int, float) : A user has entered the view of the camera
  • "user disappeared" ID (string, int) : The user has left the camera view

Object recognition

Face detection

Gesture recognition

Affect recognition

Body tracking

Speech recognition

Non-verbal sounds

Localisation

Locate/find person

Obstacle avoidance

Locate object

User proxemic distance

Power management

Competency execution/monitoring