Lakmal Meegahapola is a Postdoctoral Researcher at ETH Zurich, advised by Prof. Catherine Jutzeler. He received his PhD from EPFL in 2023, advised by Prof. Daniel Gatica-Perez. His research lies at the intersection of mobile and wearable sensing, machine learning and deep learning, and human-computer interaction, with a focus on applications in digital health. Previously, he was a Research Intern at Google Research and Nokia Bell Labs, and a Visiting Researcher at the Mobile Systems Group of the University of Cambridge, advised by Prof. Cecilia Mascolo. Prior to his PhD, he was a Research Engineer at Singapore Management University, advised by Prof. Archan Misra, and received his bachelor's degree in computer science and engineering from the University of Moratuwa, Sri Lanka. He has won multiple awards for his research, including being named a finalist (top four) for the "Gaetano Borriello Outstanding Student Award" at ACM UbiComp 2023 and receiving a "Distinguished Paper Award" from the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT, UbiComp) in 2023.
Mental and behavioral factors significantly influence an individual's well-being and health. Therefore, the use of smartphone and wearable sensors for health sensing has become increasingly popular in both clinical and non-clinical settings. However, developing generalized machine learning (ML) models that operate effectively across diverse real-world contexts remains challenging due to geographical and temporal variations in sensor data, along with issues related to data labeling, dataset standardization, and reproducibility. Hence, my research focuses on developing robust ML models capable of accommodating variations across countries and time periods, using complex sensor data characterized by high dimensionality, noise, and context sensitivity. First, we conducted a comprehensive study spanning eight countries (Italy, Denmark, UK, Mongolia, China, India, Paraguay, and Mexico), collecting multimodal sensor and self-report data from over 650 participants over one month. Analysis of this dataset revealed challenges in model generalization across countries, particularly in mood inference, complex activity recognition, and social context inference tasks. Country-specific models proved effective for mood inference and activity recognition, while multi-country models excelled in social context inference. However, the lack of model generalization persisted even among geographically proximate European countries. To address this challenge, I developed M3BAT, a multi-branch deep learning architecture for unsupervised domain adaptation of multimodal sensor data. Using gradient reversal layers and modality-specific branches, this approach demonstrated up to a 12% performance improvement across multiple datasets without requiring labeled data from target domains, showing the promise of domain adaptation for multimodal sensing.
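To give a sense of the gradient reversal mechanism mentioned above: a gradient reversal layer (GRL) acts as the identity in the forward pass, but flips (and optionally scales) the gradient in the backward pass, so the feature extractor is pushed to produce features that *confuse* a domain classifier. The sketch below is a minimal numpy illustration of this idea with hand-computed gradients; all variable names, shapes, and the toy loss are illustrative, not taken from M3BAT itself.

```python
import numpy as np

def grl_forward(x):
    # Forward pass of a gradient reversal layer: identity.
    return x

def grl_backward(grad_output, lam=1.0):
    # Backward pass: negate the incoming gradient, scaled by lambda.
    return -lam * grad_output

# Toy setup: a linear "feature extractor" feeding a linear "domain classifier".
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))      # 4 samples, 3 input dimensions
W_f = rng.normal(size=(3, 2))    # feature-extractor weights
W_d = rng.normal(size=(2, 1))    # domain-classifier weights

h = x @ W_f                      # features
d = grl_forward(h) @ W_d         # domain logits (GRL is identity forward)

# For a simple loss L = sum(d), the gradient w.r.t. the features h
# is W_d^T broadcast over samples.
grad_h = np.ones_like(d) @ W_d.T

# Through the GRL, the feature extractor receives the *reversed* gradient,
# so gradient descent on it ascends the domain-classification loss.
grad_h_seen_by_extractor = grl_backward(grad_h, lam=0.5)
print(np.allclose(grad_h_seen_by_extractor, -0.5 * grad_h))  # True
```

In a full deep-learning framework this is usually implemented as a custom autograd function; per-modality branches would each apply such a layer before a shared domain classifier.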