Multimodal Interaction in Smart Environments: A Model-based Runtime System for Ubiquitous User Interfaces
The increasing popularity of computers in all areas of life raises new challenges for computer scientists and developers across all areas of computing technology. Networked resources form smart environments that integrate devices and appliances with sensors and actuators, making an ongoing paradigm shift towards ubiquitous computing visible. With this growing pervasiveness of computing technology, user interfaces need to convey, and at the same time hide, an increasing complexity. In this work the term Ubiquitous User Interface (UUI) is coined to denote user interfaces that support multiple users interacting with a set of applications via multiple modalities, using different devices and in various contexts. The creation of such user interfaces raises new requirements for their development and runtime handling. The utilization of models and modeling technologies is a promising approach to handling the increasing complexity of current software. This thesis describes a model-based approach that combines executable user interface models with a runtime architecture to handle UUIs. Executable models identify the common building blocks of dynamic, self-contained models that integrate design-time and runtime aspects. Bridging the gap between design-time and runtime models allows design information to be used for runtime decisions and for reasoning about interaction and presentation semantics. Based on the concept of executable models, a set of metamodels is introduced that picks up design-time features of current user interface description languages and integrates additional runtime aspects such as state information and dynamic behavior. The defined metamodels range from context, service, and task models to abstract and concrete interaction models, and define the aspects of UUIs at different levels of abstraction. Mappings between the models allow the exchange of information for state synchronization and data exchange.
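The idea of an executable model that carries both design-time structure and runtime state, with mappings propagating state changes between models, can be illustrated with a minimal sketch. All class, state, and mapping names here are illustrative assumptions for the sake of the example, not elements of the thesis's actual metamodels:

```python
# Sketch of an "executable" model element: design-time structure
# (name) and runtime state live in the same object, and registered
# mappings synchronize state changes into other models.

class TaskNode:
    STATES = ("inactive", "enabled", "running", "done")  # assumed lifecycle

    def __init__(self, name):
        self.name = name          # design-time information
        self.state = "inactive"   # runtime information
        self.mappings = []        # callbacks into other models

    def transition(self, new_state):
        assert new_state in self.STATES
        self.state = new_state
        for notify in self.mappings:   # propagate via mappings
            notify(self, new_state)

# A hypothetical mapping that synchronizes task state into a concrete
# interaction model (represented here as widget visibility flags).
widgets = {}

def task_to_widget_mapping(task, state):
    widgets[task.name] = "shown" if state in ("enabled", "running") else "hidden"

login = TaskNode("login")
login.mappings.append(task_to_widget_mapping)
login.transition("enabled")   # widgets["login"] becomes "shown"
```

The point of the sketch is that the same object is both a design artifact (its name and place in a task hierarchy) and a runtime artifact (its current state), so runtime components can reason over design information directly.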
The integration of these concepts into the Multi-Access Service Platform, an architecture for interpreting the models, provides a novel approach to utilizing user interface models for the creation and handling of Ubiquitous User Interfaces at runtime. The platform provides components that support shaping the user interface according to interaction device specifics, multimodal interaction, user interface distribution, and dynamic adaptation of the user interface to context information. The integration of the stateful user interface models with the outside world is addressed by projecting the model state onto UUI presentations and by stimulating state transitions within the models based on user input. Integrating distribution, fusion, and adaptation models bridges real-world needs and the modeled user interface definition. Various interaction devices are supported to convey the internal state of the user interface model via a multimodal presentation distributed across multiple interaction resources. The implementation of the runtime architecture has been integrated into a smart home environment as part of the Service Centric Home project and has served as the implementation platform for several multimodal home applications. Case studies have been conducted to evaluate the developed concepts. The realization of various executable models supported their combination into a complex net of models at runtime and demonstrated the feasibility of the developed approach.
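The projection/stimulation cycle described above, in which model state is rendered to modality-specific presentations and user input is fed back as state transitions, can be sketched as follows. The class, element, event, and modality names are hypothetical illustrations, not part of the Multi-Access Service Platform's API:

```python
# Sketch of the projection/stimulation cycle: the runtime projects
# model state onto presentations for different modalities, and user
# input events stimulate transitions back in the model.

class InteractionModel:
    def __init__(self):
        self.elements = {"submit": "enabled"}   # element -> runtime state

    def stimulate(self, element, event):
        # A user input event triggers a state transition in the model.
        if event == "press" and self.elements.get(element) == "enabled":
            self.elements[element] = "triggered"

def project(model, modality):
    # Project the current model state to a modality-specific presentation.
    if modality == "gui":
        return [f"<button id='{e}' state='{s}'/>" for e, s in model.elements.items()]
    if modality == "voice":
        return [f"say: {e} is {s}" for e, s in model.elements.items()]

model = InteractionModel()
gui = project(model, "gui")         # render for a graphical device
model.stimulate("submit", "press")  # user input stimulates the model
gui = project(model, "gui")         # re-project the updated state
```

Because every presentation is derived from the same model state, the sketch also hints at how a presentation could be distributed: projecting the same model for several modalities or devices keeps them synchronized through the shared state.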