
Master Plan AsTeRICS Ergo


Master Plan

This page describes the big picture of AsTeRICS Ergo (see fig. 1).

Input modalities

The UI of AsTeRICS Ergo mainly consists of big buttons that can already be operated easily with many alternative input devices. In some cases, however, it is necessary to add functionality to better support certain input modalities. The following list provides examples:

  • Face tracking (Camera Mouse)
    • Add a configuration panel for camera settings and mouse settings
  • Buttons (FABI) + Scanning
    • Add scanning support to the UI (a minimal scanning sketch follows after this list)
    • Learn buttons (keys) and assign them to scanning actions
  • Lipmouse (FLipMouse)
    • Add a configuration panel for important FLipMouse settings
    • Add a link to the documentation, ...
  • Voice recognition
    • Integrate the Google Web Speech API to add voice recognition and speech synthesis (a minimal sketch follows after this list).
    • Integrate voice recognition actions into the UI, e.g. by showing codes on UI buttons that can be addressed directly by voice commands.
  • Eyetracking
    • Add hovering (dwell) support for UI elements (a minimal dwell sketch follows after this list)
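
For the scanning item above, the following is a minimal sketch of how linear scanning over the button grid could work: the highlight advances at a fixed interval and any key press (acting as the switch, e.g. sent by FABI) activates the highlighted button. The class names, selector, interval value and function names are illustrative assumptions, not existing AsTeRICS Ergo code.

```typescript
const SCAN_INTERVAL_MS = 1500; // assumed dwell time per scanned element

function startScanning(buttons: HTMLElement[]): void {
  let highlighted = 0;
  buttons[highlighted].classList.add('scan-highlight');

  // Advance the highlight cyclically through the grid buttons.
  window.setInterval(() => {
    buttons[highlighted].classList.remove('scan-highlight');
    highlighted = (highlighted + 1) % buttons.length;
    buttons[highlighted].classList.add('scan-highlight');
  }, SCAN_INTERVAL_MS);

  // Any key press (the "switch") selects the currently highlighted button.
  document.addEventListener('keydown', () => buttons[highlighted].click());
}

// Usage (assumed selector):
// startScanning(Array.from(document.querySelectorAll<HTMLElement>('.ergo-grid-button')));
```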
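
For the voice recognition item, the sketch below shows one way the Web Speech API (speech synthesis plus the Chrome-prefixed webkitSpeechRecognition recognizer) could be wired to codes shown on the UI buttons. The data-voice-code attribute, the minimal typings and the helper names are assumptions made for illustration, not existing AsTeRICS Ergo code.

```typescript
// Minimal typings so the sketch stands alone without extra type packages.
interface RecognitionAlternative { transcript: string; }
interface RecognitionEventLike {
  results: { length: number; [index: number]: { [index: number]: RecognitionAlternative } };
}
interface RecognitionLike {
  continuous: boolean;
  lang: string;
  onresult: (event: RecognitionEventLike) => void;
  start(): void;
}
// Chrome exposes the recognizer with a webkit prefix.
declare const webkitSpeechRecognition: new () => RecognitionLike;

// Speech synthesis: read a short confirmation aloud.
function speak(text: string): void {
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// Speech recognition: match the spoken phrase against the code shown on a button.
function startVoiceControl(): void {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true;
  recognition.lang = 'en-US';

  recognition.onresult = (event) => {
    const spoken = event.results[event.results.length - 1][0].transcript.trim().toLowerCase();
    const button = document.querySelector<HTMLElement>(`[data-voice-code="${spoken}"]`);
    if (button) {
      speak(`Selected ${spoken}`); // spoken feedback via speech synthesis
      button.click();
    }
  };

  recognition.start();
}
```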
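
For the eyetracking item, a minimal sketch of dwell-based hovering: since most eye trackers emulate the mouse pointer, keeping the pointer over a button for a dwell time can trigger the activation without a physical press. The dwell time, class name and selector are assumed values.

```typescript
const DWELL_TIME_MS = 1000; // assumed dwell time before activation

function enableDwellActivation(button: HTMLElement): void {
  let dwellTimer: number | undefined;

  button.addEventListener('mouseenter', () => {
    button.classList.add('dwell-active'); // visual feedback while dwelling
    dwellTimer = window.setTimeout(() => button.click(), DWELL_TIME_MS);
  });

  button.addEventListener('mouseleave', () => {
    button.classList.remove('dwell-active');
    if (dwellTimer !== undefined) window.clearTimeout(dwellTimer);
  });
}

// Usage (assumed selector):
// document.querySelectorAll<HTMLElement>('.ergo-grid-button').forEach(enableDwellActivation);
```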