Architecture


Bouvet Development Kit has an architecture separated into several parts. The development kit is deliberately decoupled from Unity so that later Unity updates do not break it. Let's go through each part of the main architecture shown in the picture below; see the color coding of the different classes and structs in the bottom left.

[Image: main architecture diagram with color-coded classes and structs]

The Structs and Helper classes are described on their own pages. SpatialMappingObserver also has its own page.

Application starts - InputManager

InputManager is the main class of the application. It is the developer's access point to the different input methods and the place to attach custom classes to the action listeners of BDK.
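
As a minimal sketch of that pattern, assuming the namespace Bouvet.DevelopmentKit.Input and an event named OnInputDown (illustrative names; check your BDK version for the exact API), a custom handler could look like this:

```csharp
using Bouvet.DevelopmentKit.Input;
using UnityEngine;

public class MyInputHandler : MonoBehaviour
{
    // Assigned in the Inspector; InputManager is the access point to BDK input.
    [SerializeField] private InputManager inputManager;

    private void OnEnable()
    {
        // OnInputDown is an assumed event name, used here for illustration.
        inputManager.OnInputDown += HandleInputDown;
    }

    private void OnDisable()
    {
        inputManager.OnInputDown -= HandleInputDown;
    }

    private void HandleInputDown(InputSource source)
    {
        // BDK input events pass along the InputSource that raised them.
        Debug.Log($"Input down from: {source}");
    }
}
```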

InputManager holds a reference to InputSettings, which can be edited in the Inspector to toggle which input methods should be active:

[Image: InputSettings toggles in the Unity Inspector]
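
In code, those Inspector toggles amount to a set of booleans that InputManagerInternal later branches on. The field names below are assumptions mirroring the toggles, not the verified members of InputSettings:

```csharp
// Hypothetical shape of the settings object; actual member names may differ.
[System.Serializable]
public class InputSettingsSketch
{
    public bool UseHandTracking = true;
    public bool UseVoiceRecognition = false;
    public bool UseEyeTracking = false;
    public bool UseHeadTracking = true;
}
```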

Based on these settings, InputManager sets up InputManagerInternal, which in turn sets up the different input methods: hand tracking, voice recognition, and eye tracking.

Generating input methods - InputManagerInternal

Hand tracking

Based on the settings in InputSettings, InputManagerInternal can set up hand tracking. Here is a short description of the different classes set up:

  • HandGestureListenerInternal: This class sets up the internal event functions for dealing with hand tracking input. It also sets up HandGestureListener.
  • HandGestureListener: This class calls many of the event functions for dealing with hand tracking input. It also sets up HandJointController.
  • HandJointController: This class updates each individual joint in both hands every frame.

If hand tracking is enabled in InputSettings, these three classes trigger the following events from InputManager when their requirements are met:

Each of these events contains an InputSource as part of its parameters.
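
Continuing the MyInputHandler sketch from above, a component could react to a manipulation gesture like this. OnManipulationStarted and OnManipulationEnded are assumed event names; substitute the hand-tracking events your BDK version actually exposes:

```csharp
private void SubscribeToHandEvents()
{
    inputManager.OnManipulationStarted += HandleManipulationStarted;
    inputManager.OnManipulationEnded += HandleManipulationEnded;
}

private void HandleManipulationStarted(InputSource source)
{
    // The InputSource parameter identifies which hand raised the event.
    Debug.Log($"Manipulation started: {source}");
}

private void HandleManipulationEnded(InputSource source)
{
    Debug.Log($"Manipulation ended: {source}");
}
```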

Voice recognition

Based on the settings in InputSettings, InputManagerInternal can set up voice recognition. Here is a short description of the different classes set up:

  • KeywordListenerInternal: This class sets up the internal event functions for dealing with voice recognition input. It also sets up KeywordListener.
  • KeywordListener: This class calls the internal event functions for dealing with voice input. It holds a ConcurrentDictionary that maps key phrases to their connected actions; when it recognizes a spoken key phrase, it invokes the corresponding action.

If voice recognition is enabled and a command is recognized, the KeywordListener will invoke the corresponding action and trigger the following event:

This event contains an InputSource as part of its parameters.
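
Conceptually, that dictionary works like the sketch below. The class and method names here are illustrative only, not BDK's actual API for registering phrases:

```csharp
using System;
using System.Collections.Concurrent;

public class KeywordMappingSketch
{
    // Models the phrase-to-action dictionary that KeywordListener holds.
    private readonly ConcurrentDictionary<string, Action> keywordActions =
        new ConcurrentDictionary<string, Action>();

    public void RegisterPhrase(string phrase, Action action)
    {
        keywordActions.TryAdd(phrase, action);
    }

    // Called when the speech recognizer reports a recognized phrase.
    public void OnPhraseRecognized(string phrase)
    {
        if (keywordActions.TryGetValue(phrase, out Action action))
        {
            action.Invoke();
        }
    }
}
```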

Eye tracking

Based on the settings in InputSettings, InputManagerInternal can set up eye tracking. Here is a short description of the different classes set up:

  • EyeGazeListenerInternal: This class sets up the internal event functions for dealing with eye tracking input. It also sets up EyeGazeListener.
  • EyeGazeListener: This class calls the internal event functions for dealing with eye tracking input.

If eye tracking is enabled and the user's eyes are properly calibrated, the EyeGazeListener will invoke the corresponding actions and trigger the following events:

Each of these events contains an InputSource as part of its parameters.
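
A hedged example of consuming gaze events, again as a continuation of the MyInputHandler sketch; OnGazeEnter and OnGazeExit are assumed event names:

```csharp
private void SubscribeToGazeEvents()
{
    inputManager.OnGazeEnter += HandleGazeEnter;
    inputManager.OnGazeExit += HandleGazeExit;
}

private void HandleGazeEnter(InputSource source)
{
    // Fires when the user's gaze lands on a gaze-enabled object.
    Debug.Log($"Gaze entered: {source}");
}

private void HandleGazeExit(InputSource source)
{
    Debug.Log($"Gaze left: {source}");
}
```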

Head tracking

Based on the settings in InputSettings, head tracking functionality will be enabled. If head tracking is enabled, a script called HololensTransformUpdate is added to the HoloLens GameObject. This script calls the following event in InputManager:

This event contains an InputSource as part of its parameters.
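
For completeness, a head-pose consumer might look like the fragment below; OnHololensTransformUpdated is an assumed event name, and the InputSource is presumed to describe the headset's pose:

```csharp
private void SubscribeToHeadTracking()
{
    inputManager.OnHololensTransformUpdated += HandleHeadPose;
}

private void HandleHeadPose(InputSource source)
{
    // Fires as the HoloLens moves; the InputSource describes the head pose.
    Debug.Log($"Head pose updated: {source}");
}
```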