<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects |</title><link>https://www.fabricionarcizo.com/projects/</link><atom:link href="https://www.fabricionarcizo.com/projects/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sun, 19 May 2024 00:00:00 +0000</lastBuildDate><image><url>https://www.fabricionarcizo.com/media/icon_hu_da05098ef60dc2e7.png</url><title>Projects</title><link>https://www.fabricionarcizo.com/projects/</link></image><item><title>AffectAI - Dynamic Affective Profile Modeling and Profile-Based Multimodal Emotion Recognition and MMLLM Integration for Hybrid Meetings</title><link>https://www.fabricionarcizo.com/projects/affect-ai/</link><pubDate>Wed, 05 Nov 2025 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/projects/affect-ai/</guid><description>&lt;p&gt;This project investigates how artificial intelligence can enhance human-centered interaction in virtual and hybrid meeting environments by enabling systems to better understand users&amp;rsquo; emotional and cognitive states. It focuses on developing intelligent models that interpret subtle behavioral and physiological signals, allowing technology to respond more naturally and effectively to users during collaborative activities.&lt;/p&gt;
&lt;p&gt;The project explores integrating multimodal data sources (e.g., visual, behavioral, and physiological signals) to capture dynamic aspects of the user experience that are often overlooked in current systems. By leveraging recent advances in machine learning, particularly in multimodal and large-scale models, the project aims to move beyond static user representations and instead model users&amp;rsquo; evolving states during real interactions.&lt;/p&gt;
&lt;p&gt;The project has three main objectives:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Investigate how user states evolve over time in collaborative settings and how these variations can be captured through multimodal signals.&lt;/li&gt;
&lt;li&gt;Develop machine learning approaches that can model these dynamics in a robust and adaptive way, improving the interpretation of user behavior and emotional context.&lt;/li&gt;
&lt;li&gt;Design and validate intelligent system components that integrate these capabilities into real-world meeting environments, enhancing interaction quality and collaboration outcomes.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This research addresses a growing need in digital collaboration technologies, where existing solutions often lack awareness of user engagement, emotional context, and interpersonal dynamics. While current systems focus primarily on audio and video communication, they do not fully capture the richness of human interaction, leading to reduced engagement and inefficiencies in remote collaboration.&lt;/p&gt;
&lt;p&gt;By introducing more context-aware and adaptive capabilities, this project aims to improve the overall user experience in virtual and hybrid meetings. The expected outcomes include more intuitive and responsive systems that support better communication, reduce misunderstandings, and foster more effective collaboration across diverse teams.&lt;/p&gt;
&lt;p&gt;Importantly, the project adopts a human-centered and ethical approach, ensuring that user data is handled responsibly and that the developed technologies promote transparency, inclusivity, and trust. The goal is not only to improve technical performance but also to contribute to the development of AI systems that align with human values and real-world needs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This project is conducted by an Industrial Postdoc researcher,
, who has received a grant of DKK 2.7 million from
and
. Meisam works with the Cybernetic Science and Engineering team at
, and at the
in the
, a space and a group of people with a common interest in research and education at the crossroads of machine learning, psychophysiology, neuroscience, and cognition.&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Using Machine Learning to Identify Communal Worldwide Hand Gestures for Virtual and Hybrid Meetings Context</title><link>https://www.fabricionarcizo.com/projects/gestsense/</link><pubDate>Sat, 25 Nov 2023 00:00:00 +0000</pubDate><guid>https://www.fabricionarcizo.com/projects/gestsense/</guid><description>&lt;p&gt;This project explores how machine learning can support a hand gesture vocabulary that promotes global standardization and inclusivity. It investigates hand gesture recognition technology that allows users to communicate and control devices using natural, intuitive hand movements without touching anything. This technology can enhance user experience, safety, hygiene, and accessibility, especially for companies with international employees.&lt;/p&gt;
&lt;p&gt;The project has three main objectives:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Investigate how users from diverse backgrounds use hand gestures in virtual and hybrid meetings and which hand gestures they prefer for specific actions.&lt;/li&gt;
&lt;li&gt;Train machine learning models to identify the most common and consistent hand gestures among cross-cultural users for controlling a given function in the interactive system or device.&lt;/li&gt;
&lt;li&gt;Propose a universal hand gesture dictionary that can support global standardization of new collaboration products and systems that use this technology, fostering understanding and well-being among users who work in international teams.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Hand gesture recognition technology has significant potential in business meetings and collaboration products, as demand for meetings and e-learning is growing worldwide due to changes in work and study modes. Many industries are adopting this technology, and some applications have already been launched, including the Zoom platform and certain collaborative business cameras. However, there is still room for improvement and innovation, as there is no shared standard vocabulary for hand gestures, and some gestures may carry different or offensive meanings in different cultures. Therefore, it is important to consider the cultural significance of gestures and to create a conscious, communal vocabulary that is universally understood and accepted.&lt;/p&gt;
&lt;p&gt;To create hand gesture recognition products that can be used by global users from diverse backgrounds, a high recognition rate alone is not enough. These products also need to provide a positive user experience, avoiding any embarrassment, misunderstanding, or offense that might discourage users from adopting the technology. Therefore, there is a need for a standardized hand gesture vocabulary that achieves universal understanding, inclusivity, and acceptability. By conducting cross-cultural user studies, a hand gesture vocabulary can be carefully constructed to suit the needs and preferences of users from different cultures. This can increase consumer confidence and the market potential of the products, as well as advance the state of the art in hand gesture recognition for virtual and hybrid meeting contexts.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This project is conducted by an Industrial Ph.D. student,
, who has received a grant of DKK 2.0 million from
and
. Elizabete works with the Video Technology team at
, a leading company in collaboration business products, and studies at the
, a renowned institution for research and education in information technology.&lt;/strong&gt;&lt;/p&gt;</description></item></channel></rss>