In recent years, tremendous interest in the analysis of human behavior has emerged, driven by the goal of better understanding and meeting people's needs and demands. Understanding how and why behavior occurs is fundamental to eliciting effective behavioral change for healthy living, currently a primary goal for governments, organizations and communities in the development of healthier societies. Digital technologies such as smartphones and wearables can now support a new approach to the observation of human behavior. These technologies can sense and measure, continuously and unobtrusively, in the wild and for massive cohorts of people, the digital traces generated by each person's behavior during everyday interaction with digital devices. My research investigates the development of novel methods that build upon people-generated daily-life digital traces to accurately identify, quantify and mine the physical, emotional, cognitive and social aspects of behavior.
Sensors may simply happen to be discovered as available in the user's current context. In fact, there is a tendency towards an increased availability of sensors readily deployed by users themselves (e.g., smartphones, sensor-equipped gadgets, smart objects, smart clothing) or integrated into living environments (e.g., sensors for climate control, security or entertainment). In the general case, many of these sensors have no associated activity models that would allow their use for activity recognition, as they are deployed for other purposes. However, most of this sensing equipment could be used for activity recognition, since it is in principle capable of measuring human behavior (e.g., body motion). Part of my research seeks the development of models and architectures that opportunistically exploit information from sensors placed on the user's body, in the environment, and from other sources of information to recognize human behavior.
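The opportunistic idea above can be illustrated with a minimal sketch: a recognizer that is not tied to a fixed sensor setup, but instead uses whichever modalities happen to be discoverable at runtime and for which an activity model exists. All names, the toy centroid values, and the nearest-centroid vote are illustrative assumptions, not part of any specific system.

```python
# Toy per-activity feature centroids (mean acceleration magnitude, in g),
# keyed by sensor modality. In practice these models would be learned.
CENTROIDS = {
    "wrist_accel": {"walking": 1.8, "sitting": 0.2},
    "phone_accel": {"walking": 1.5, "sitting": 0.1},
}

def discover_sensors(environment):
    """Return the discovered modalities for which an activity model exists."""
    return [s for s in environment if s in CENTROIDS]

def recognize(environment, readings):
    """Nearest-centroid vote across all opportunistically available sensors."""
    votes = {}
    for sensor in discover_sensors(environment):
        value = readings[sensor]
        best = min(CENTROIDS[sensor], key=lambda a: abs(CENTROIDS[sensor][a] - value))
        votes[best] = votes.get(best, 0) + 1
    if not votes:
        return None  # no usable sensors in the current context
    return max(votes, key=votes.get)

# Only the phone is present in this context; the wrist sensor is absent, and
# the thermostat is discovered but has no activity model, so it is skipped.
activity = recognize(["phone_accel", "smart_thermostat"], {"phone_accel": 1.4})
```

The point of the sketch is that the recognizer degrades gracefully: it exploits whatever subset of modeled sensors the current context offers, rather than requiring a predefined deployment.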
The inference of human behavior normally requires the analysis of multiple sources of information, or heterogeneous sensor data. To accurately gain knowledge from these data, scalable and efficient aggregation mechanisms are required. My investigation deals with the definition and implementation of sophisticated models that leverage the information captured through multi-sensor configurations in an efficient and collaborative fashion. The devised fusion methods are also designed to support the unsupervised dynamic adaptation and autonomous evolution of the recognition systems, so as to cope with short-term changes and long-term trends in the sensor infrastructure.
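One common family of aggregation mechanisms is decision-level fusion, sketched below under illustrative assumptions: each sensor contributes a class probability distribution, and the aggregator combines them with per-sensor weights. Because the combination is computed over whichever sensors are currently active, sensors can join or leave at runtime, which loosely mirrors the adaptation to a changing sensor infrastructure described above. The sensor names, weights and probabilities are all hypothetical.

```python
def fuse(per_sensor_probs, weights):
    """Weighted average of class distributions from the currently active sensors."""
    fused = {}
    total = sum(weights[s] for s in per_sensor_probs)
    for sensor, probs in per_sensor_probs.items():
        w = weights[sensor] / total  # normalize over active sensors only
        for activity, p in probs.items():
            fused[activity] = fused.get(activity, 0.0) + w * p
    return fused

# Two heterogeneous sources report soft class estimates for the same window.
readings = {
    "accelerometer": {"walking": 0.7, "running": 0.3},
    "gps_speed":     {"walking": 0.4, "running": 0.6},
}
weights = {"accelerometer": 2.0, "gps_speed": 1.0}  # accelerometer trusted more
fused = fuse(readings, weights)
decision = max(fused, key=fused.get)
```

A weight update rule (e.g., down-weighting sensors whose outputs drift from the consensus) would be one simple way to obtain the unsupervised adaptation mentioned above, though richer fusion models are of course possible.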
Technology nowadays progresses in ways hardly imaginable before. This technological evolution empowers applications with extraordinary new characteristics and remarkably improves the user experience. To support this, apps are continuously updated, devices undergo timely maintenance, and systems are frequently upgraded. Likewise, real-world context-aware systems require constant adaptation to ensure seamless, efficient and lifelong usage. An example of this adaptation is the autonomous reconfiguration of the sensor ecology. The training of newly incorporated sensors should be performed without the involvement of a system designer, which would otherwise limit the approach to predefined sensor setups and deployments. This must also happen without user involvement. To fulfill these requirements, the most reasonable approach is to use the knowledge of the existing recognition system to instruct the new sensors in the activity-awareness tasks. This is accomplished through a process in which the original activity recognition system transfers its knowledge to the newcomer untrained sensor. My investigation deals with the definition and development of multimodal transfer learning methods that operate at runtime, with low overhead and without user or system designer intervention. This kind of approach serves to automatically translate activity recognition capabilities from an existing system to an untrained one, even across different sensor modalities. This is of key interest to support sensor replacements as part of equipment maintenance, sensor additions in system upgrades, and the exploitation of sensors that happen to be available in the user environment.
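The transfer process described above can be sketched as a teacher-student loop: the trained recognition system ("teacher") labels time windows using the existing sensor, and those labels are used to train a model for a co-located, untrained "student" sensor, with no user or designer in the loop. The threshold teacher, the nearest-centroid student, and the paired readings below are all toy assumptions for illustration.

```python
def teacher_predict(old_sensor_value):
    """Existing recognition system (assumed already trained on the old sensor)."""
    return "walking" if old_sensor_value > 1.0 else "sitting"

def train_student(paired_windows):
    """Learn per-activity centroids for the new sensor from teacher labels only."""
    sums, counts = {}, {}
    for old_value, new_value in paired_windows:
        label = teacher_predict(old_value)  # no user or designer intervention
        sums[label] = sums.get(label, 0.0) + new_value
        counts[label] = counts.get(label, 0) + 1
    return {activity: sums[activity] / counts[activity] for activity in sums}

def student_predict(centroids, new_value):
    """The new sensor's own model, usable once training has converged."""
    return min(centroids, key=lambda a: abs(centroids[a] - new_value))

# Concurrent (old sensor, new sensor) readings collected during normal use.
windows = [(1.8, 9.0), (1.6, 8.5), (0.2, 1.0), (0.3, 1.5)]
centroids = train_student(windows)
label = student_predict(centroids, 8.8)  # new sensor now recognizes on its own
```

Note that the two sensors never need to share a feature space, which is what makes this kind of scheme attractive across different modalities: the only coupling is that both observe the same activity at the same time.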
The practice of medicine and public health may profit greatly from the use of mobile devices. Mobile devices such as "phablets" or "wearables" can be used for the collection of community and clinical health data, the delivery of healthcare information to practitioners, researchers and patients, the real-time monitoring of patient vital signs, and the direct provision of care. Apps developed for these purposes are normally based on ad-hoc solutions, and little work has been done so far on systems standardization and the development of mobile health frameworks. The mHealthDroid initiative was devised to contribute in this respect. mHealthDroid is an open-source mobile framework designed to facilitate the rapid and easy development of mHealth and biomedical applications. The framework is devised to leverage the potential of mobile devices such as smartphones or tablets, wearable sensors and portable biomedical devices, while bringing together heterogeneous platforms and multimodal sensors, including both research and commercial systems. The framework comprises an extensive set of modules and libraries for sensor data acquisition, data management, remote storage, signal processing, machine learning, multidimensional data visualization, intelligent recommendations and multimedia guidelines, among other features. A more holistic approach is followed in Mining Minds, a novel open framework that builds on the core ideas of the digital health and wellness paradigms to enable the provision of personalized healthcare and wellness support. Mining Minds embraces some of the currently most prominent digital technologies, ranging from Big Data and Cloud Computing to Wearables and the Internet of Things, as well as state-of-the-art concepts and methods such as Context-Awareness, Knowledge Bases and Analytics.