The College of Engineering and Information Technology (COEIT) Research Day Poster Sessions are scheduled from 9:00 am to 3:00 pm in the UC Ballroom. The full agenda for the event is available here.
See below for more information about the poster presentations by our faculty and students. Topics and abstracts are presented in thematic areas.
Accessibility and Human-centered Technology | AI Foundations and Applications | Bioengineering/Biomimetics | Education | Environment and Sustainability | Healthcare | Manufacturing | Physical Systems | Security and Privacy | Software Engineering
Accessibility and Human-centered Technology
Kirk Crawford and Foad Hamidi
Communication in Disabled and Neurodivergent LGBTQIA+ Romantic Relationships
Communication is essential for fostering emotional intimacy; however, there is limited research on how disabled and neurodivergent LGBTQIA+ partners develop accessible communication strategies collaboratively over time and, specifically, how intersectional identities influence relational communication within these partnerships. This research seeks to address this gap by exploring how disabled and neurodivergent LGBTQIA+ individuals with limited or fluctuating communication abilities and their partners navigate communication accessibility in their daily lives. To investigate these experiences, we are conducting a longitudinal diary study and co-design sessions with LGBTQIA+ couples and multi-partner relationships. The ongoing diary study, which lasts three months, aims to capture communication strategies and adaptations as partners negotiate daily routines and everyday life. Following the diary study, we will invite partners to individual and group co-design sessions to reflect on emerging themes from their diaries, discuss current technologies and research on LGBTQIA+, disability, and neurodivergent issues that affect communication, and engage in creative activities to envision new methods for supporting communication in their relationships.
Hasan Mahmud Prottoy, Yaxing Yao, and Foad Hamidi
Infrastructuring for Access to Online Subscription-Based Services in Bangladesh
As online access to entertainment, education, news, and information is increasingly mediated through subscription-based services, it is important to study how inequities in access impact users in low- and middle-income countries (LMICs), and what infrastructuring strategies they employ to overcome obstacles. In this presentation, we present findings from an interview study with 22 participants from Bangladesh who use and share online subscription-based services. Due to the limited availability of, and restrictions on, formal international payment methods, along with procedural difficulties and infrastructural challenges in Bangladesh, we found the emergence of a distinct informal ecosystem of accessing, sharing, and using subscription-based services. We report a detailed analysis of the adoption, sharing practices, and dynamics of sharing online subscription-based services in Bangladesh that builds on and extends previous HCI literature on informality, informal marketplaces, intermediaries, and media sharing in the Global South. Our findings show how a vibrant and growing user base of subscription-based online services is using creative and sometimes risky ways to gain access to media and information through informal intermediaries and administrators. Finally, we discuss potential directions for practice and policy innovations, including facilitating international payments for online services and encouraging platforms to reconsider their policies and service delivery mechanisms to better support users in the Global South.
Jennifer Posada
Reflection at Your Fingertips: Reducing Effort in Chronic Condition Data Engagement
The objective of this study is to design, develop, and evaluate a system to reduce the barriers users managing multiple chronic conditions face in the various stages of the stage-based model of personal informatics systems. The barriers addressed include lack of time, sparse data, lack of context, and lack of technical knowledge. Lack of time specifically is understood within the context of managing a chronic condition and how it limits available resources for activities like tracking. In the first phase of this study, I collected biodata from myself using a wearable tracker and gathered contextual data using ecological momentary assessment via the Beiwe app across measures like mood, energy, focus, and food. The LLM GPT-4o was used for data analysis in the data integration phase, enabling timely processing of data from multiple inputs (a typical barrier to integration) and reflection over insights that might otherwise have been missed due to lack of time or technical knowledge. Barriers like lack of time, sparse data, and lack of context that had an impact on data reflection served as a source of inspiration for the following prototype. In phase two, using autobiographical design, a prototype was developed for a customizable shortcut on a mobile device to mitigate barriers to data collection caused by lack of time due to symptoms related to chronic illness. The prototype was designed and iterated to meet users' needs for customizability during the crucial data preparation stage, given the dynamic nature of learning what triggers flare-ups and adjusting what information will be tracked, while also reducing the time and steps needed to collect data. The next phase will be a field test of this prototype with other users who manage chronic conditions, to understand its utility for reducing barriers to reflection and to receive feedback on the design.
Agnny Vannessa Morant, Krystal Zhang, Erin Higgins, Foad Hamidi, and Maria Sanchez
The Digital Fabrication/Assistive Technology Project
The Digital Fabrication/Assistive Technology project is co-sponsored by the University of Maryland, Baltimore County's Engineering and Computing Education program and the National Science Foundation. Advised by graduate student Erin Higgins, lab director Foad Hamidi, and director of information technology Dr. Maria Sanchez, the project aims to provide 3D-printed assistive technologies to individuals with disabilities and to research the impacts that makerspaces may have on communities in need of assistive technologies. Throughout the fall semester of 2024, over 30 prints for the project failed with various errors during the fabrication process, including 7 failed print attempts for the paper money brailler device. These failures occurred because there was too much filament support, and removing the filament would break the device. Redesigns with a hinge were incorporated, and the design was printed in different positions. Ultimately, success came from printing the money brailler with its original design but with the support removed. This poster also includes plans for the future of the project.
Krystal Yangmengzi Zhang, Erin Higgins, Vannessa Morant, and Foad Hamidi
What’s Next for DIY-AT? Exploring the Lasting Impact of a State-Supported Assistive Technology Program
Do-It-Yourself (DIY) methods for creating assistive technology (AT) have the potential to empower individuals with disabilities by fostering customization and accessibility. However, challenges such as technical skill requirements, financial constraints, and difficulties in collaboration can limit participation. In partnership with the Maryland Department of Disabilities, we established a state-funded DIY-AT program to explore how structured support can facilitate the adoption and development of 3D-printed AT devices. This longitudinal study examines the program’s ongoing impact by assessing participant experiences, engagement with AT customization, and shifts in perceptions of state-supported initiatives. Initial findings identified an ecosystem of stakeholders working together to expand the definitions and uses of AT. In this follow-up study, semi-structured interviews with program participants provided insights into device usability, personal independence, and the effectiveness of public initiatives in meeting accessibility needs. While participants acknowledged the program’s role in improving daily functionality, they also highlighted limitations related to customization, equitable access to resources, and long-term sustainability. These findings contribute to a broader discussion on integrating DIY-AT into public services, offering recommendations for program refinement, educational outreach, and advancements in digital fabrication technologies to enhance accessibility.
AI Foundations and Applications
Shamita Nandeeswaran, Josephine Namayanja, and Lydia Fletcher
A Comprehensive Assessment of Open Science Platforms: Through the Lens of Interdisciplinary Research in Polar Science
Open science aims to increase the accessibility of scientific research artifacts such as data, software, models, and publications to the broader community. In turn, this enables collaboration and reproducibility of research. The growing efforts in supportive tools, platforms, guidelines, and other activities facilitate the open science movement. However, open science remains an open challenge in bridging certain specialized research communities, a characteristic of interdisciplinary research. Oftentimes, scientific research outcomes from interdisciplinary efforts are not fully aligned with existing open science tools/platforms and the participating communities thereof. Even so, the constraints on these tools/platforms, such as data sizes, computing capabilities, support for DOI registration, utility cost, and long-term sustainability of platforms, to mention a few, further limit their effective utilization. This research study aims to identify how to increase accessibility to polar science research outcomes, thereby promoting reachability to a niche community intersecting polar science and artificial intelligence, to enable knowledge sharing and collaboration. Our proposed approach entails conducting a comprehensive assessment of open science platforms through a feature-based micro-mapping process to identify platforms that align with the needs for open knowledge sharing and collaboration for interdisciplinary research in polar science and artificial intelligence. Our preliminary findings confirm that there are limited tools/platforms to effectively promote open science among the target niche community of polar science and computing disciplines such as artificial intelligence.
More so, this study aims to define features associated with 'openness' at different stages in the research life cycle process, which will be measured against the adherence of open science tools/platforms to the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. The overall outcomes of this research project will serve as stepping stones for best practices in advancing open science efforts among interdisciplinary niche research communities.
ASM Mobarak Hossain, Neil Kamlesh Advani, and Md Osman Gani
Advancing Wheelchair Accessible Navigation through Crowd-Sourced Data: Development of the MyPath System and A Computational Model for Accessibility
Accessible navigation for individuals with mobility challenges in urban environments is crucial for ensuring equal access to public spaces. Various systems have been proposed to facilitate efficient travel for individuals with restricted mobility, yet no existing system leverages real-time wheelchair user data to provide truly accessible navigation. Our approach utilizes crowd-sourced data, updating accessibility information dynamically to enhance navigation experiences. This research explores the potential of real-time, user-generated data in mapping mobility barriers, identifying accessible routes, and improving navigation assistance. Through literature reviews, case studies, and interviews with wheelchair users, we examine the strengths and limitations of crowd-sourced accessibility data, including its efficiency, completeness, and openness to all users. We propose strategies to optimize data collection, processing, and integration into navigation systems to enhance accessibility. The MyPath system collects trip data directly from users' smartphones, classifies surface types using vibration data, and considers user preferences, demographic information, weather conditions, and time of day to recommend the most accessible paths. We have recently expanded MyPath to provide real-time, turn-by-turn navigation for wheelchair users, allowing them to follow step-by-step directions along accessible routes. Users can now report obstacles such as stairs, steep slopes, or construction in real time, and instant notifications are sent to alert others, ensuring up-to-date accessibility information. The entire UMBC campus has been fully mapped, making it the first comprehensive wheelchair-accessible navigation system in this environment. By integrating real-time, crowd-sourced accessibility data with direct user feedback, MyPath represents a significant advancement in wheelchair navigation technology.
This research demonstrates how data-driven, community-powered approaches can enhance urban mobility, independence, and inclusivity for individuals with disabilities.
AI-Driven Culinary Innovation: Leveraging LLMs to Develop a Chemically Aware Recipe Generation Model
While the use of chatbots such as ChatGPT, Gemini, and Llama has become increasingly common, their ability to generate recipes and cooking directions has proven limited. Chatbots are capable of producing very convincing text that looks like a recipe, but they lack the understanding of the chemistry needed to generate innovative, safe, and chemically aware recipes. By incorporating food-chemistry principles and nutritional needs, our model leverages retrieval-augmented generation (RAG) to address the limitations of current Large Language Models (LLMs) in recipe generation.
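The RAG pattern described above can be sketched with a toy example. Everything concrete here is an illustrative assumption: the knowledge-base snippets, the naive keyword retriever, and the stubbed `generate` function stand in for a real system that would retrieve from curated food-chemistry sources and prompt an LLM with the retrieved context.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
KNOWLEDGE_BASE = [
    "Maillard reaction: browning occurs when amino acids and sugars are heated above 140 C.",
    "Emulsification: lecithin in egg yolk binds oil and water into a stable sauce.",
    "Leavening: baking soda needs an acid such as buttermilk to release carbon dioxide.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: -len(q & set(doc.lower().split())))
    return ranked[:k]

def generate(query: str) -> str:
    """Stub: a real system would prompt an LLM with the retrieved context."""
    context = " ".join(retrieve(query))
    return f"[context: {context}] Draft recipe step for: {query}"

answer = generate("why does baking soda need buttermilk in pancakes")
```

Grounding the generation step in retrieved chemistry passages, rather than the model's parametric memory alone, is what lets the system reason about reactions it would otherwise only imitate.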
Muhammad Hasan Ferdous and Md Osman Gani
DCD: Decomposition-based Causal Discovery from Autocorrelated and Non-Stationary Temporal Data
Multivariate time series data across domains such as finance, climate science, and healthcare often contain complex causal structures influenced by long-term trends, seasonal variations, and short-term residual effects. While existing causal discovery methods have shown promise in identifying causal relationships in time series data, they often struggle when faced with non-stationary components and autocorrelated effects, leading to incorrect causal attributions and spurious causal relationships. This paper introduces a novel decomposition-based causal discovery framework that systematically separates time series data into its fundamental components, including trend, seasonality, and residuals, to improve causal discovery. By applying stationarity tests to trend components, kernel-based dependence measures to seasonal components, and constraint-based causal discovery algorithms to residual components, we construct component-specific causal graphs that are later integrated into a comprehensive causal graph. Our framework effectively disentangles long-term and short-term causal effects, thereby reducing spurious causal associations while preserving essential dependencies. Through comprehensive evaluations on synthetic benchmark datasets and real-world climate data, we demonstrate that our approach more accurately identifies the true causal structure from underlying data, effectively distinguishing true causal relationships from spurious correlations in the presence of non-stationarity and autocorrelation. Compared to state-of-the-art causal discovery methods, our framework achieves superior performance in identifying meaningful causal relationships, enhancing both the interpretability and reliability of causal structure in complex multivariate time series.
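The decomposition step above can be illustrated with a simple moving-average decomposition; the paper's actual decomposition method may differ, so this is a sketch of the idea rather than the DCD implementation. Each component would then feed its own analysis (stationarity tests for trend, kernel dependence measures for seasonality, constraint-based search for residuals).

```python
import numpy as np

def decompose(series, period):
    """Split a 1-D series into trend, seasonal, and residual components
    via a simple moving-average decomposition (illustrative stand-in
    for the framework's decomposition stage)."""
    n = len(series)
    # Trend: centered moving average over one seasonal period
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    # Seasonality: average the detrended values at each phase of the cycle
    detrended = series - trend
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, n // period + 1)[:n]
    # Residual: whatever trend and seasonality do not explain
    residual = series - trend - seasonal
    return trend, seasonal, residual

t = np.arange(120, dtype=float)
x = 0.05 * t + np.sin(2 * np.pi * t / 12) \
    + 0.1 * np.random.default_rng(0).normal(size=120)
trend, seasonal, residual = decompose(x, period=12)
```

By construction the three components sum back to the original series, so no information is lost before the component-specific causal analyses are run.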
Alabi Jamiu Ahmed and Vandana Janeja
Detecting Deepfakes in Conversations: Addressing the Challenges of Multi-Speaker Audio Deepfake Detection
The increasing sophistication of audio deepfakes, generated by AI techniques such as text-to-speech (TTS) synthesis and voice conversion (VC), poses significant challenges in various sectors, including media, telecommunications, and cybersecurity. While current research has made substantial progress in detecting single-speaker deepfakes, the detection of deepfakes in multi-speaker environments, such as conversations, debates, and meetings, remains largely underexplored. The primary research gaps in this field lie in the lack of multi-speaker datasets and the inability of existing models to handle the complexities of overlapping speech, speaker diarization, and real-world acoustic conditions. Our work addresses these critical gaps by highlighting the need for multi-speaker audio deepfake detection and the creation of datasets specifically designed to simulate real-world conversations. We propose the development of new datasets that include speech overlaps, multi-speaker interactions, and background noise, which are essential for training models capable of detecting deepfakes in dynamic conversational settings. Additionally, we emphasize the importance of improving detection models to account for the challenges of multi-speaker environments and adapting current evaluation metrics to better assess performance in these contexts. Through this study, our work contributes to the advancement of deepfake detection technologies and aims to secure multi-speaker communication channels from the growing threat posed by AI-generated fake audio content. Our future work will focus on addressing the gaps highlighted here.
Uzma Hasan and Md Osman Gani
DKC: Data-driven and Knowledge-guided Causal Discovery Using a Tailored Scoring Criterion
This study presents a novel causal discovery algorithm called DKC that leverages both observational data and prior knowledge for effective learning of causal graphs. Traditional causal discovery methods often rely solely on data, limiting their effectiveness when data is scarce, noisy, or when dealing with complex causal graphs. Moreover, they do not efficiently integrate or leverage relevant prior knowledge. Our approach aims to address these challenges of data-driven causal discovery by efficiently incorporating different types of causal priors during the search. A key contribution is that we propose a novel score function to facilitate adherence to both data and prior knowledge, including hard and soft constraints. The proposed method DKC follows a three-step procedure: i) estimate a topological order, ii) rank edges based on their likelihood, and iii) perform a causal search using the proposed score function to balance model fit, complexity, and adherence to constraints. We provide theoretical guarantees of the score's consistency, ensuring convergence to the true causal structure as sample size increases. Through extensive experiments on multiple datasets of varied network sizes, our approach demonstrates superior performance over state-of-the-art methods in scenarios both with and without prior knowledge. This method's ability to harmonize knowledge constraints with data-driven discovery makes it highly applicable to fields requiring robust causal inference, such as biology, medicine, climate science, and the social sciences.
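The three-step search hinges on a score that trades off model fit, complexity, and prior adherence. The sketch below is a hypothetical stand-in for that score, not DKC's published criterion: it combines a linear-Gaussian log-likelihood, a BIC-style complexity penalty, and a soft penalty for edges that violate required or forbidden priors.

```python
import numpy as np

def prior_aware_score(data, graph, required=frozenset(), forbidden=frozenset(),
                      soft_penalty=2.0):
    """Score a candidate DAG (higher is better). `graph` maps each node
    index to its tuple of parent indices; `required`/`forbidden` are sets
    of (parent, child) edges encoding prior knowledge. Illustrative only."""
    n, _ = data.shape
    loglik, n_params = 0.0, 0
    for child, parents in graph.items():
        y = data[:, child]
        if parents:
            # Linear-Gaussian fit of the child on its parents
            X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
        else:
            resid = y - y.mean()
        var = resid.var() + 1e-12
        loglik += -0.5 * n * (np.log(2 * np.pi * var) + 1)
        n_params += len(parents) + 1
    bic_penalty = 0.5 * n_params * np.log(n)
    # Soft penalties: missing required edges and present forbidden edges
    edges = {(p, c) for c, ps in graph.items() for p in ps}
    prior_penalty = soft_penalty * (len(required - edges) + len(edges & forbidden))
    return loglik - bic_penalty - prior_penalty

rng = np.random.default_rng(1)
x0 = rng.normal(size=500)
x1 = 2.0 * x0 + 0.1 * rng.normal(size=500)
data = np.column_stack([x0, x1])
true_graph = {0: (), 1: (0,)}    # X0 -> X1
empty_graph = {0: (), 1: ()}
```

On this toy data the graph containing the true edge scores higher than the empty graph, and a required-edge prior further penalizes graphs that omit it, which is the balancing behavior the score function is meant to provide.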
Tamil Selvan Gurunathan, Muhammad Shehrose Raza, Aswin Kumar Janakiraman, Md Azim Khan, Biplab Pal, and Aryya Gangopadhyay
Edge LLMs for Real-Time Contextual Understanding with Ground Robots
In this research, we introduce a new framework that combines Edge Large Language Models (LLMs) with robotic platforms to facilitate real-time decision-making and contextual understanding in challenging environments with zero visibility. By integrating LLMs directly onto edge devices, our system allows autonomous robots to analyze multi-modal sensor inputs, such as mmWave radar and thermal cameras, enabling real-time decision-making without the need for remote servers. This framework is designed for low-latency inference while adhering to strict computational limits, making it ideal for dynamic situations like search and rescue, tactical operations, and disaster response. Our experiments showcase the system’s capability to navigate, identify threats, and prioritize essential tasks, such as providing medical assistance, all while achieving high semantic accuracy. The proposed system significantly surpasses traditional methods, including Few-Shot Learning and Prompt Engineering, and supports scalable deployment in resource-limited environments. Our approach enhances real-time situational awareness, boosts autonomous navigation, and minimizes reliance on high-bandwidth communication links, offering a robust solution for autonomous robotic systems in complex and resource-constrained settings.
Seraj Mostafa, Chenxi Wang, Jia Yue, Yuta Hozumi, and Jianwu Wang
Enhancing Satellite Object Localization with Dilated Convolutions and Attention-aided Spatial Pooling
Object localization in satellite imagery is particularly challenging due to the high variability of objects, lower spatial resolution, and interference from noise and dominant features such as clouds and city lights. These obstacles can lead to inaccurate object detection, which is critical when analyzing climate-related issues and extracting accurate Earth informatics. In this research, we focus on three satellite datasets, Upper Atmospheric Gravity Waves (GW), mesospheric Bores (Bore), and Ocean Eddies (OE), each presenting its own unique challenges. These challenges include the variability in the scale and appearance of the main object patterns, where the size, shape and feature extent of objects of interest can differ significantly. To address these challenges, we introduce YOLO-DcAP, an enhanced version of YOLOv5 designed to improve object localization in these complex scenarios. YOLO-DcAP incorporates a Multi-Dilated Residual Block (MDRC) to capture multi-scale features and an Attention-aided Spatial Pooling (AaSP) module to focus on globally relevant spatial regions, enhancing feature extraction. These structural improvements help to better localize objects in satellite imagery. Experimental results demonstrate that YOLO-DcAP significantly outperforms state-of-the-art models under challenging conditions across all three satellite datasets, highlighting the generalizability of the approach.
Don Engel, Avi Donaty, and Jacob Rubinstein
Exploring Virtual Photogrammetry Techniques and Applications For Advancement of Digital Twin Generation
We explore challenges and opportunities in the use of photogrammetry for digital twin generation; namely, we aim to improve our understanding of what makes good input data for photogrammetry and to quantify the different traits of various photogrammetry processes. We propose the use of virtual photogrammetry, utilizing synthetic 2D images from pre-existing 3D models as input, to aid in this goal. Our approach aims to create a pipeline for generating datasets of synthetic images which can be used to evaluate and improve camera pose/intrinsics estimation as well as to assess the impact of errors on 3D reconstruction accuracy. By leveraging the advantages of this synthetic data, we aim to evaluate the resilience and accuracy of photogrammetry systems, leading to higher-quality results from non-virtual photogrammetry in the future.
Md Azim Khan and Aryya Gangopadhyay
Frequency-Domain Gating for Multimodal Mixture-of-Experts (MoE) Models
Spiking Neural Networks (SNNs) have emerged as an energy-efficient alternative to traditional Deep Neural Networks (DNNs) for decision-making tasks, particularly in neuromorphic computing. In this work, we introduce a Frequency-Domain SNN Router for Mixture-of-Experts (MoE) models, leveraging Fourier and Cosine Transforms (DFT/DCT) to enhance expert selection across different modalities. Unlike conventional DNN-based gating networks, our approach applies frequency transformations to input data, allowing the SNN router to process spectral information for more robust expert assignments. We evaluate our Frequency-SNN Router against a DNN-based router across multimodal datasets: lidar, thermal, radar, IR, RGB, and audio. Experimental results demonstrate that frequency-domain SNN routing improves expert diversity, reduces expert selection bias, and enhances robustness to noise, particularly in vision and multimodal learning tasks. We further analyze the trade-offs in accuracy, latency, power consumption, and inference stability, showing that SNN-based gating incurs higher computational latency but offers improved power efficiency, making it a suitable candidate for low-power edge AI applications.
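The spectral front end can be illustrated with a toy router. Here a real FFT magnitude stands in for the paper's DFT/DCT features, and a plain linear softmax gate stands in for the SNN router, so everything except the transform-before-gating idea is an illustrative assumption.

```python
import numpy as np

def frequency_gate(x, weights):
    """Route an input to one of several experts using spectral features.
    `weights` is a (n_freq_bins, n_experts) gating matrix; a trained SNN
    would replace this linear gate in the described system."""
    spectrum = np.abs(np.fft.rfft(x))      # frequency-domain features
    logits = spectrum @ weights            # one score per expert
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5 * np.arange(64) / 64)  # a pure 5-cycle tone
W = rng.normal(size=(33, 4))                    # 64-point rFFT -> 33 bins, 4 experts
expert, probs = frequency_gate(x, W)
```

Because the gate sees the spectrum rather than raw samples, two inputs that differ only by a time shift produce identical routing decisions, one intuition for why spectral gating can be more robust to noise.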
Emam Hossain and Md Osman Gani
Identification of Causal Representation in Variational Autoencoders
Causal Representation Learning (CRL) seeks to uncover latent structures in data that capture causal relationships, enabling more interpretable and robust machine learning models. A key challenge in CRL is ensuring that interventions on observed variables are consistently reflected in the learned latent space, preserving causal dependencies and allowing for meaningful counterfactual reasoning. In this work, we propose a Variational Autoencoder (VAE)-based framework that systematically analyzes the effects of interventions on both the input data and the latent space, as well as how these changes propagate to model outputs. To improve identifiability and robustness, we introduce novel constraints that enforce consistency between latent causal structures and observed interventions. Our approach effectively separates independent causal mechanisms, maintaining their modularity and facilitating structured reasoning about interventions. We evaluate our method on the pendulum and flow datasets, demonstrating that it accurately captures causal dependencies and improves intervention efficacy. Our empirical results show that the proposed framework significantly enhances reconstruction accuracy under interventional shifts, outperforming conventional methods that lack structured intervention modeling. These findings underscore the importance of structured interventions in learning high-quality causal representations, ultimately advancing the development of interpretable and generalizable causal models. This work contributes to the broader effort of bridging deep learning and causality, providing a principled approach to causal representation learning in complex environments.
Tai Akinlosotu
Journalistic Detection Tools for Audio Deepfakes
This work discusses the effectiveness of various tools used to detect audio deepfakes based on their capabilities, strengths, limitations, and accessibility. Audio deepfakes pose a significant challenge in misinformation and fraud, and thus detection systems with high accuracy are required. To test detection accuracy, a diverse set of audio samples was utilized, comprising real speech, text-to-speech (TTS) synthesized speech, and voice conversion (VC) modified audio. These samples were tested with free deepfake detection software, for example, Resemble Detect and Play HT Voice Classifier, in order to compare their effectiveness and accuracy. Both software packages were tested with varying levels of success and were evaluated for ease of use, accuracy, and ability to perform real-time analysis. The research also documented instances of false positives and false negatives, highlighting the vulnerabilities of such software as well as its limitations in achieving very high detection accuracy. This research also involved utilizing the Iris dataset in Google Colab to import and process files into an Excel sheet for analysis using Python code. All sources were referenced in a scholarly format to maintain academic integrity and avoid plagiarism, as well as contributing to the growing field of deepfake detection. These findings underscore the need to continue developing audio deepfake detection in order to decrease the potential risk of AI-manipulated audio. More recent studies focus on the benevolent applications of audio deepfakes and how they may be used across many different industries, including entertainment, video games, and healthcare.
Mohana Kundurthi, Tamilselvan Gurunathan, Md Azim Khan, Muhammad Shehrose Raza, and Aryya Gangopadhyay
Real-time Object Tracking on the Edge
This paper presents a real-time detection, tracking, counting, and distance estimation framework deployed on the Boston Dynamics Spot robot, equipped with RGB and thermal cameras. Leveraging edge computing devices such as the NVIDIA Jetson Nano, the system autonomously processes data in dynamic terrains with minimal human intervention. A custom-trained YOLOv8 model, fine-tuned on a unique dataset tailored for military applications, is integrated with the StrongSORT algorithm for object tracking. Additionally, a novel geometric calculation methodology enables precise angle estimation and spatial mapping, enhancing situational awareness. The framework’s capabilities include live visualization of detected objects, area-based counting, and mapping within an operational environment using ROS-based tools. Field demonstrations conducted at the Army Research Lab validate the system’s effectiveness in processing complex thermal and RGB data in real-time. The proposed solution offers significant potential for enhancing autonomous robotic deployments in mission-critical applications.
Prakhar Dixit and Tim Oates
SBI-RAG: Enhancing Math Word Problem Solving for Students through Schema-Based Instruction and Retrieval-Augmented Generation
Many students struggle with math word problems (MWPs), often finding it difficult to identify key information and select the appropriate mathematical operations. Schema-based instruction (SBI) is an evidence-based strategy that helps students categorize problems based on their structure, improving problem-solving accuracy. Building on this, we propose a Schema-Based Instruction Retrieval-Augmented Generation (SBI-RAG) framework that incorporates a large language model (LLM). Our approach emphasizes step-by-step reasoning by leveraging schemas to guide solution generation. We evaluate its performance on the GSM8K dataset, comparing it with GPT-4 and GPT-3.5 Turbo, and introduce a "reasoning score" metric to assess solution quality. Our findings suggest that SBI-RAG enhances reasoning clarity and facilitates a more structured problem-solving process, potentially providing educational benefits for students.
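The schema-first pipeline can be sketched with a toy classifier. The schema names and keyword lists below are illustrative assumptions rather than the paper's taxonomy or classifier; in SBI-RAG the predicted schema would select a schema-specific prompt that guides the retrieval and step-by-step generation stages.

```python
# Toy keyword-based schema classifier for math word problems
# (illustrative stand-in for the schema-identification step).
SCHEMA_KEYWORDS = {
    "change": ["gave", "received", "lost", "now"],
    "group": ["altogether", "in total", "combined", "total"],
    "compare": ["more than", "fewer than", "difference"],
}

def classify_schema(problem: str) -> str:
    """Pick the schema whose keywords best match the problem text."""
    text = problem.lower()
    scores = {schema: sum(kw in text for kw in kws)
              for schema, kws in SCHEMA_KEYWORDS.items()}
    return max(scores, key=scores.get)

p = "Sam had 3 apples and received 4 more. How many does Sam have now?"
# classify_schema(p) -> "change"; that label would then select a
# change-schema prompt template for the LLM's solution generation.
```

Categorizing the problem before generation is what gives the framework its structure: the model is steered toward the operation pattern (start/change/result, part/part/whole, and so on) instead of free-form guessing.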
Jumman Hossain, Emon Dey, Snehalraj Chugh, Masud Ahmed, MS Anwar, Abu-Zaher Faridee, Jason Hoppes, Theron Trout, Anjon Basak, Rafidh Chowdhury, Rishabh Mistry, Hyun Kim, Jade Freeman, Niranjan Suri, Adrienne Raglin, Carl Busart, Timothy Gregory, Anuradha Ravi, and Nirmalya Roy
SERN: Simulation-Enhanced Realistic Navigation for Multi-Agent Robotic Systems in Contested Environments
The increasing deployment of autonomous systems in complex environments necessitates efficient communication and task completion among multiple agents. This paper presents SERN (Simulation-Enhanced Realistic Navigation), a novel framework integrating virtual and physical environments for real-time collaborative decision-making in multi-robot systems. SERN addresses key challenges in asset deployment and coordination through our bi-directional AuroraXR ROS Bridge communication framework. Our approach advances the SOTA through: accurate real-world representation in virtual environments using the Unity high-fidelity simulator; synchronization of physical and virtual robot movements; efficient ROS data distribution between remote locations; and integration of SOTA semantic segmentation for enhanced environmental perception. Our evaluations show a 15% to 24% improvement in latency and up to a 15% increase in processing efficiency compared to traditional ROS setups. Real-world and virtual simulation experiments with multiple robots demonstrate synchronization accuracy, achieving less than 5 cm positional error and under 2° rotational error. These results highlight SERN's potential to enhance situational awareness and multi-agent coordination in diverse, contested environments.
TAVIC-DAS: Task and Channel-Aware Variable-Rate Image Compression for Distributed Autonomous System
In network-constrained environments, distributed multi-agent systems—such as UGVs and UAVs—must communicate effectively to support computationally demanding scene perception tasks like semantic and instance segmentation. These tasks are challenging because they require high accuracy even when using low-quality images, and network limitations restrict the amount of data that can be transmitted between agents. To overcome these challenges, we propose TAVIC-DAS, which performs task- and channel-aware variable-rate image compression to enable distributed task execution and minimize communication latency by transmitting compressed images. TAVIC-DAS introduces a novel image compression and decompression framework (distributed across agents) that integrates channel parameters such as RSSI and data rate into a task-specific “semantic segmentation” DNN to generate ROI maps: masks that assign a high pixel density to objects of interest and a low density to the surrounding pixels within an image. Additionally, to accommodate agents with limited computational resources, TAVIC-DAS incorporates resource-aware model quantization. We evaluated TAVIC-DAS on platforms such as the ROSMaster X3 and Jetson Xavier, communicating over a low-frequency proprietary Doodle radio operating at 915 MHz. The experimental results show that TAVIC-DAS achieves approximately 7.62% higher PSNR and is about 6.39% more resource efficient than state-of-the-art techniques.
Haishi Bai
Unified Management Experience Across Cloud-Edge Continuum with Orchestration as Code
Cloud platforms have demonstrated remarkable capabilities in securely and efficiently managing large-scale computing resources. However, extending cloud practices to the edge presents significant challenges due to the heterogeneous, multi-vendor, and dynamic nature of the edge computing landscape. Standardization efforts face adoption barriers, full-stack solutions fragment the ecosystem rather than unifying it, and point-to-point integrations and custom solutions are costly and difficult to scale. This paper introduces a novel Orchestration as Code approach, which explicitly models toolchain orchestration concerns as code. Building upon an orchestration object model, we define three key capability pillars to address workload management challenges: state-seeking, workflow orchestration, and information graphs. Together, these pillars establish the foundation for a generic, extensible orchestration engine that enables seamless, end-to-end workload management across the cloud-to-edge continuum.
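The state-seeking pillar resembles the reconcile loops of declarative orchestrators: repeatedly diff the desired state against the observed state and apply only what differs. A minimal sketch under that assumption (names are hypothetical; this is not the paper's engine):

```python
def reconcile(desired, observed, apply_fn):
    """Drive observed state toward desired state, applying only the diffs."""
    changed = []
    for key, want in desired.items():
        if observed.get(key) != want:
            apply_fn(key, want)      # e.g., deploy or update a workload at an edge site
            observed[key] = want
            changed.append(key)
    return changed

desired = {"site-a": "app:v2", "site-b": "app:v1"}
observed = {"site-a": "app:v1"}          # site-b has nothing deployed yet
changed = reconcile(desired, observed, lambda site, spec: None)
```

Because the loop is idempotent, re-running it after convergence applies nothing, which is what makes state-seeking robust to dynamic, heterogeneous edge environments.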
Benjamin Kale and Meilin Yu
Complex Owl Geometry Smoothing and Meshing for High-fidelity Numerical Simulation
Small-scale unmanned aerial vehicles (UAVs) face a number of obstacles that larger, well-established modes of flight do not. Their smaller size and oftentimes lower speed mean that they typically operate in flow regimes where viscous forces have an increased effect on flight behavior. When the traditional fixed wing is brought into these flow regimes, its performance suffers: lift to drag ratios decrease because viscous forces are more dominant, and current solutions to the challenges of navigating transient flow phenomena may no longer be optimal. The purpose of this project is to investigate how these challenges are addressed in nature. In particular, the fluid flow characteristics around the wing structure of a common barn owl will be analyzed. To this end, a model of a barn owl, generated via lidar, was initially obtained from the University of Bristol, after which it underwent various stages of preprocessing. The model was smoothed to reduce its complexity to a level which would save on computational cost while retaining the dominant characteristic features of the wing. This was achieved using a variety of smoothing techniques, but primarily a signal processing approach known as Taubin filtering, which essentially applies a low-pass filter to the data, attenuating structures smaller than those associated with a certain passband frequency. In cases where undesired structures remained, particularly at the trailing edge of the wing, which, for the purposes of simulation, was required to taper to a single curve, polynomial regression was used to impose desired conditions. After geometry generation is complete, the surface will be tessellated with a fully quadrilateral, unstructured mesh to facilitate hexahedral volume mesh generation. These meshes, together with the geometry, will then be used in high-fidelity simulations to unveil unsteady aerodynamics adopted by nature’s flyers, and potentially inspire innovation in UAV design.
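Taubin's lambda|mu scheme can be sketched as alternating Laplacian smoothing steps with a positive factor and a slightly larger negative factor, which attenuates high-frequency geometry while limiting shrinkage. The sketch below uses uniform weights on a toy 2D curve, not the project's actual mesh pipeline:

```python
import numpy as np

def taubin_smooth(vertices, neighbors, lam=0.5, mu=-0.53, iterations=10):
    """Alternate a shrinking step (lam > 0) and an inflating step (mu < -lam);
    together they act as a low-pass filter on vertex positions."""
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        for factor in (lam, mu):
            lap = np.array([v[nbrs].mean(axis=0) - v[i]
                            for i, nbrs in enumerate(neighbors)])
            v += factor * lap
    return v

# Toy example: a noisy circle where each vertex neighbors the adjacent two
n = 64
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
noisy = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=(n, 2))
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
smoothed = taubin_smooth(noisy, ring)
```

The smoothed curve retains the circle (a low-frequency structure) while the added noise, whose energy sits above the passband, is strongly damped.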
Julia Van Der Marel, Chad Sundberg, Elias Gilotte, and Govind Rao
Monitoring of NADH Concentrations in Cell-Free Systems
Tithi Prajapati, Anino Mebahganje, Michael Tolosa, Pegah Rezaei, Venkatesh Srinivasan, Xudong Ge, and Govind Rao
Non-Invasive Wearable Device for Transcutaneous CO2 Based Early Detection of Opioid Induced Respiratory Depression
The opioid crisis is a significant public health concern due to the severe medical complications it causes, particularly in respiratory function. One critical issue is the depletion of respiratory function over time, which can ultimately lead to mortality. This research proposes a noninvasive device for measuring transcutaneous CO2 diffusion, which can indicate CO2 levels and provide timely measurements of respiratory status. Using a novel rate-based analytics method, we developed a compact, handheld device for measuring CO2 diffusion, building on our published clinical study on transcutaneous monitoring of respiratory health in neonates. The wearable device also monitors heart rate, temperature, and humidity. The device was tested on human subjects, alongside the Radiometer TCM4, at rest and during active jogging, and demonstrated a high correlation with the TCM4. Unlike the Radiometer, our device does not require heating the skin to 42-44°C, making it non-invasive, and it is portable rather than bulky. In addition to measuring CO2 concentration, our device provides the rate of change of CO2, which is crucial for detecting drastic changes in respiratory function. Optimization of this device for detecting respiratory depression in opioid overdoses is underway.
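The rate-based idea can be illustrated by estimating the local slope of the CO2 signal over a sliding window. This is a sketch only; the window length, sampling rate, and units are illustrative choices, not the device's published method:

```python
import numpy as np

def co2_rate(timestamps, co2_ppm, window=5):
    """Rate of change of a transcutaneous CO2 signal: least-squares slope
    over a sliding window (window length is an illustrative choice)."""
    t = np.asarray(timestamps, dtype=float)
    y = np.asarray(co2_ppm, dtype=float)
    rates = [np.polyfit(t[i:i + window], y[i:i + window], 1)[0]
             for i in range(len(y) - window + 1)]
    return np.array(rates)

# Simulated reading rising at 2 ppm per second, one sample per second
t = np.arange(0.0, 30.0, 1.0)
signal = 400.0 + 2.0 * t
rates = co2_rate(t, signal)
```

A sustained positive rate, rather than a single absolute reading, is the kind of signal that could flag a drastic change in respiratory status early.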
Mesha Shajahan, Chad Sundberg, Vikash Kumar, Elias Gilotte, Mike Tolosa, and Govind Rao
Optimizing Cell-Free Biomanufacturing to Facilitate Personalized Immunotherapy Production in Clinical Settings
Cell-free expression technologies present a compelling alternative to traditional cell-based methods for efficient and cost-effective biotherapeutic production by eliminating the constraints of working within living, single-celled organisms. Lysing cells liberates cellular machinery from life-sustaining functions, affording greater control over protein synthesis and enabling the production of complex proteins. However, fully maximizing the biomanufacturing potential of cell-free systems requires precise regulation of metabolic pathways and transcription-translation dynamics. Preliminary data from cell-free reactions indicate that protein biosynthesis declines following periods of oxygen deprivation, suggesting a metabolic shift from oxidative phosphorylation to fermentation, which may be difficult to reverse. Findings also indicate that regulated oxygen supplementation enhances biotherapeutic yields, emphasizing the need for optimized bioreactor conditions. A deeper understanding and refinement of cell-free molecular mechanisms is crucial for tailoring bioreactor design to specific applications, such as manufacturing personalized immunotherapies. Personalized immunotherapies involving neo-antigen modification, monoclonal antibody production, and chimeric antigen receptor (CAR) T-cell engineering have shown considerable efficacy in cancer treatment. However, the extensive culturing requirements of cell-based protein manufacturing make large-scale implementation in hospitals unfeasible. This research seeks to leverage the flexibility and streamlined development of cell-free systems to make personalized immunotherapy widely accessible in clinical settings.
Cahree Myrick, Rajasekhar Anguluri, and Ramana Vinjamuri
Robust Principal Component Analysis for Dimensionality Reduction in Control and Coordination of the Human Hand
Robust principal component analysis (RPCA) was applied to investigate the challenge of dimensionality reduction in the control and coordination of the human hand using the concept of kinematic synergies. In this study, we examined conditions under which RPCA could decompose joint angular velocity data into a low‐rank matrix representing the underlying synergies and a second matrix containing sparse outliers. A robust lasso technique was subsequently implemented to estimate the weights and onset times of the extracted synergies, thereby producing a sparse representation of hand movements. The proposed methodology demonstrated promise for isolating coordinated motion patterns while effectively separating excess noise, even from low‐quality, inexpensive data collection devices. This capability suggests potential cost savings in experimental setups and broader accessibility in both biomedical research and practical applications. The implications of this research extend to prosthetic control and robotic manipulation. Our approach offers a framework for future work on efficient motion control strategies in biomedical engineering and robotics, potentially leading to improved designs for assistive devices.
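The decomposition at the heart of RPCA, splitting a data matrix M into a low-rank part L (the synergies) plus a sparse part S (the outliers), can be sketched with an inexact augmented-Lagrangian iteration. This is a standard principal component pursuit recipe with textbook default parameters, not the study's own implementation:

```python
import numpy as np

def soft_threshold(x, tau):
    """Entrywise shrinkage used by both the sparse and singular-value updates."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(M, lam=None, iters=100, rho=1.2, tol=1e-7):
    """Split M into low-rank L (synergies) + sparse S (outliers) via an
    inexact augmented-Lagrangian principal component pursuit."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum())      # common initial step size
    L, S, Y = (np.zeros_like(M) for _ in range(3))
    norm_M = np.linalg.norm(M)
    for _ in range(iters):
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt   # singular-value thresholding
        S = soft_threshold(M - L + Y / mu, lam / mu)   # sparse outlier update
        residual = M - L - S
        Y = Y + mu * residual
        mu *= rho
        if np.linalg.norm(residual) < tol * norm_M:
            break
    return L, S

# Synthetic stand-in for joint angular velocity data: rank-2 signal + sparse spikes
rng = np.random.default_rng(0)
L_true = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))
S_true = np.zeros((50, 40))
mask = rng.random((50, 40)) < 0.05
S_true[mask] = rng.choice([-5.0, 5.0], size=mask.sum())
L_hat, S_hat = rpca(L_true + S_true)
```

Separating gross outliers into S rather than averaging them into the principal components is what makes the approach tolerant of low-quality, inexpensive capture devices.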
Feng Li, M. Nicole Belfiore, and M. Ali Yousuf
Applications of Generative AI for Students with Learning Challenges
AI has been used extensively to make people’s lives easier. We have been developing generative AI tools to help students with learning challenges. These include an app for art appreciation that captures photos of artwork and describes them aloud, a tool to help a blind student appreciate architectural beauty, and a tool for understanding American Sign Language. By integrating these AI-powered tools into educational environments, we are reimagining inclusive education and demonstrating how AI can be a powerful force for equity, enabling all students to access diverse forms of knowledge and creativity. We present some of our tools.
Deborah Kariuki, Hidare Debar, and Ida Ngambeki
Building Cyber-Ready Classrooms through Community-Driven Research: A CyberCELL Project
The Cybersecurity Curriculum Express Learning Library (CyberCELL) was established as part of the Cyber Exploratory Grant awarded in 2023, aimed at enhancing K-12 cybersecurity education throughout Maryland. This initiative intends to create a structured repository of cybersecurity curricula, tools, and training resources to support educators, especially those lacking prior expertise in this field. By collaborating with the Maryland Chapter of the Computer Science Teachers Association (MD CSTA) and the Maryland Center for Computing Education (MCCE), CyberCELL strives to develop a sustainable framework for integrating cybersecurity education across various subjects through an integrated curriculum model. As part of the project, data collection and thematic analysis have been conducted, revealing key insights into the challenges and opportunities within cybersecurity education. The findings emphasize the critical need for professional development, a strong interest in expanding cybersecurity education, and the growing demand for structured curricula and accessible teaching tools. Furthermore, the results highlight the importance of establishing a cybersecurity sandbox environment, which allows students to engage in hands-on learning when traditional computing resources are unavailable. The study also identifies strategies to address teaching barriers and limitations, ensuring that educators can effectively introduce cybersecurity concepts. These findings are informing the ongoing refinement of CyberCELL’s framework and guiding future initiatives aimed at scaling and sustaining cybersecurity education in Maryland, which we hope to share at the COEIT Research Day. As the project progresses, the insights gained from this research will be utilized to secure long-term external funding, enhance resource availability, and foster a statewide network of cybersecurity educators.
By addressing these gaps, CyberCELL seeks to bridge the cybersecurity skills gap, promote equitable access to computing resources, and position Maryland as a national leader in K-12 cybersecurity education.
Centering Student Perspectives on Generative AI Integration in a Design Classroom
Generative AI is rapidly becoming a key tool in education, yet most classroom policies regarding its use are developed without student input, leading to confusion and frustration. By positioning students as “lead users”—early adopters of generative AI in education—we recognize their critical insights and advocate for their inclusion in university AI policies to better support student learning objectives. This study explores student perspectives on AI integration in design pedagogy, emphasizing student-driven AI policies that reflect their experiences and needs. Through a three-part participatory workshop series and follow-up interviews, we engage 10–14 students who took HCC629: Fundamentals of Human-Centered Computing at the University of Maryland, Baltimore County (UMBC) in Fall 2024 with Dr. Yasmine Kotturi. This study aims 1) to develop student-authored AI policies for a design classroom and 2) to evaluate those policies through peer and expert feedback. By centering student voices in AI policy creation and design, this study seeks to provide actionable insights that can inform future AI-related educational policies and technology development. We will explore two key research questions: RQ1) How do students perceive AI use in design courses? and RQ2) How can student-driven AI policies inform AI integration in classroom settings? The workshops will encourage students to share and reflect on their experiences of AI use in the classroom, allowing them to collaboratively create a student-authored AI policy zine – a small, self-published booklet – as a tangible and shareable outcome of their perspectives. By incorporating students’ perspectives on generative AI use in design classes, this study fosters ethical and effective AI integration in design courses, enhancing both student agency and learning outcomes.
Patricia Ordóñez and Jamie Gurganus
Computing for All: Cultivating a Culturally Responsive and Inclusive Computing Ecosystem
This project seeks to cultivate a dynamic learning ecosystem in partnership with Bay Ridge Apartments, specifically engaging women in marginalized communities with UMBC students. By providing undergraduate and graduate computing students with service-learning opportunities to apply their technical skills in meaningful, hands-on experiences, we aim to foster social impact through the Born to Grow, Brave to Grow, & Bold to Grow (B2G) Learning Centers, one of the projects of the UMBC ICE (Informal Computing Education) Lab under Dr. Patti Ordóñez’s guidance. In return, this initiative will allow us to assess computing students’ computing identity and mindset, focusing on their sense of belonging and self-efficacy in the field. This opportunity will also lead to students in both the undergraduate and graduate populations earning a Center for the Integration of Research, Teaching & Learning certification.
Ida Ngambeki, Deborah Kariuki, Miracle Onyebuchim, and Hidare Debar
Developing Interest in Computing: The Role of Access – A CyberCELL Project
Marjory Pineda, Patricia Ordóñez, Foad Hamidi, and Ravi Kuber
Envisioning a Culturally-Responsive and Accessible Makerspace in the Global South
Kevin Lemus, Christian Ruiz, Stephanie Lunn, and Edward Dillon
Faculty Preparation Strategies: Empowering Computing Students to Achieve Technical Interview Success
Technical interviews are a crucial part of the hiring process for computing roles, requiring applicants to solve coding problems on a whiteboard, text editor, or paper while articulating their thought process. Students tend to struggle with this format. Likewise, faculty may lack industry experience or familiarity with current technical interview expectations, posing a challenge in preparing students effectively. We created an eight-week virtual workshop series for faculty to address these concerns, seeking to enhance awareness and bolster their technical interviewing skills through mock interviews, planned institutional initiatives, and insight from tech industry professionals. Throughout our inquiry, we explored what faculty may see as valuable for better preparing students for their careers in the field. In addition, we examined what efforts educators may have undertaken at their institutions, or that they may plan to integrate in the future, specifically for technical interviews. Data was collected from 21 faculty from 18 different institutions, who participated in two separate cohorts of the workshop series. Across both cohorts, we administered pre and post surveys, which we analyzed using descriptive statistics and content analysis. We also conducted a focus group at the end of the second cohort (n = 9), which was analyzed with thematic analysis. Three themes were noted as central to students’ preparation for their future in computing: Problem-Solving Practice, Career Readiness, and Resource Accessibility. Faculty spoke not only about integrating training for the hiring process but also the value of support and other forms of professional development. 
Content analysis of the post-workshop survey described the different approaches faculty planned to take going forward, both within their courses and more informally, including more directed efforts to raise awareness, offering mock technical interviews, providing real-world examples, and additional assignments or projects.
Maria Lopez Delgado and Ravi Kuber
State of Computational Thinking and Creative Technologies in Early Childhood Education
This study explores the interest and knowledge of early childhood educators in integrating computational thinking and creative technologies into their curricula. We are investigating the current challenges faced, the use of technology in the classroom, and the strategies and adaptations educators currently use in their mixed-ability classrooms. The study aims to contrast the experiences of early childhood educators from Puerto Rico and the US mainland. Preliminary findings include a lack of familiarity with the term ‘computational thinking’, despite experience integrating STEM concepts (not including computing) into preschool classrooms. Regarding creative technologies, some participants were familiar with technologies such as 3D printers and vinyl cutters but lacked experience using them for classroom activities. Participants were found to promote creativity in their classrooms, ranging from arts and crafts to creative storytelling and problem-solving. Overall, participants view technology as a tool to enrich students’ learning, but try to limit its use because they are aware of how much time students spend on technology outside the classroom. This study is the first step in a research project, with a view to creating a series of best practices to support computational thinking and creative technologies in the preschool classroom.
Environment and Sustainability
Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang, and Sanjay Purushotham
Cloud Optical Thickness Retrievals Using Angle Invariant Attention Based Deep Learning Models
Cloud Optical Thickness (COT) is a critical cloud property influencing Earth’s climate, weather, and radiation budget. Satellite radiance measurements enable global COT retrieval, but challenges like 3D cloud effects, viewing angles, and atmospheric interference must be addressed to ensure accurate estimation. Traditionally, the Independent Pixel Approximation (IPA) method, which treats individual pixels independently, has been used for COT estimation. However, IPA introduces significant bias due to its simplified assumptions. Recently, deep learning-based models have shown improved performance over IPA but lack robustness, as they are sensitive to variations in radiance intensity, distortions, and cloud shadows. These models also introduce substantial errors in COT estimation under different solar and viewing zenith angles. To address these challenges, we propose a novel angle-invariant, attention-based deep model called Cloud-Attention-Net with Angle Coding (CAAC). Our model leverages attention mechanisms and angle embeddings to account for satellite viewing geometry and 3D radiative transfer effects, enabling more accurate retrieval of COT. Additionally, our multi-angle training strategy ensures angle invariance. Through comprehensive experiments, we demonstrate that CAAC significantly outperforms existing state-of-the-art deep learning models, reducing cloud property retrieval errors by at least a factor of nine.
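One simple way to realize angle embeddings, encoding solar and viewing zenith angles as multi-frequency sin/cos features so the network sees geometry as a smooth, periodic signal, is sketched below. The layout is hypothetical, not the CAAC architecture's actual embedding:

```python
import numpy as np

def angle_embedding(theta_deg, dim=8):
    """Map zenith angles (degrees) to sin/cos features at a ladder of
    frequencies, giving a smooth, periodic code for satellite geometry."""
    theta = np.deg2rad(np.atleast_1d(np.asarray(theta_deg, dtype=float)))
    freqs = 2.0 ** np.arange(dim // 2)            # frequencies 1, 2, 4, ...
    phases = theta[:, None] * freqs[None, :]
    return np.concatenate([np.sin(phases), np.cos(phases)], axis=1)

emb = angle_embedding([0.0, 30.0, 60.0], dim=8)   # one 8-dim code per angle
```

Feeding such codes to an attention layer lets the model condition its retrieval on viewing geometry rather than memorizing one fixed angle, which is the intuition behind multi-angle training for angle invariance.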
Rohan Putatunda, Vandana Janeja, and Sanjay Purushotham
Deep Learning for Ice Calving Front Analysis: Advancing Understanding through AI-Driven Techniques
Ice calving, the process where large ice masses detach from a glacier’s terminus, is a major driver of ice mass loss and contributes significantly to global sea-level rise. These events generate massive icebergs, some as large as Manhattan, which fragment into smaller pieces known as “chicklets.” Driven by ocean currents and winds, chicklets follow complex, unpredictable trajectories that pose risks to maritime navigation. Traditional methods for monitoring ice calving fronts rely on manual satellite image analysis, which is time-intensive, error-prone, and lacks scalability. While deep learning has introduced automation in segmentation, critical challenges remain in predicting future calving front positions and forecasting chicklet trajectories. This research addresses three fundamental challenges in ice calving prediction. First, we develop SEATTNET, a hybrid attention model integrating squeeze-and-excitation (SE) blocks with spatial attention gates to enhance segmentation accuracy for calving fronts, which are challenging to delineate due to their sparse pixel representation. Second, we tackle the prediction of future calving front positions, where the absence of structured spatiotemporal data and the nonlinear nature of latitude-longitude sequences introduce significant challenges. We resolve this by applying linear interpolation and binning, constructing a georeferenced dataset from segmentation masks. Our GlaSpectra model, utilizing spectral convolution layers, FFT, and IFFT, captures both global and local spatial dependencies, enabling precise trajectory forecasting. Finally, we propose to predict iceberg trajectories using ConvLSTM, which models spatial and temporal dependencies. To improve accuracy under uncertainty, we introduce a custom drift loss function grounded in a physics-informed drag policy, ensuring that larger ice masses exhibit slower movement due to resistance forces. 
This novel approach captures the intricate relationship between iceberg size, shape, and velocity. By integrating deep learning with physics-based modeling, this research advances scalable, data-driven methodologies for ice calving prediction, with broader applications in a landslide and coastal erosion modeling.
Bayu Adhi Tama, Mansa Krishna, Mostafa Cham, Omar Faruque, Gong Cheng, Jianwu Wang, Mathieu Morlighem, Vandana Janeja, and Homayra Alam
Deep Learning for Subglacial Topography Reconstruction in Greenland
Understanding Greenland’s subglacial topography is essential for predicting future ice sheet mass loss and its contribution to global sea-level rise. However, sparse observational data, particularly on bed topography beneath the ice sheet, introduce significant uncertainties in model projections. Traditional methods rely on airborne ice-penetrating radar, which measures ice thickness directly under flight paths but leaves gaps of tens of kilometers between lines. To address this challenge, we introduce DeepTopoNet, a deep learning framework that integrates radar-derived ice thickness observations with the widely-used BedMachine Greenland dataset through a novel dynamic loss-balancing mechanism. BedMachine combines mass conservation principles and radar measurements to produce high-resolution bed elevation estimates. Our approach employs a convolutional neural network (CNN), named BedTopoCNN, designed for subgrid-scale predictions. The proposed loss function adaptively balances radar and BedMachine data, ensuring robust performance in areas with limited radar coverage while leveraging the high spatial resolution of BedMachine. Additionally, we incorporate gradient-based and trend surface features to enhance model accuracy. Tested systematically on the Upernavik Isstrøm region, the model achieves exceptional performance metrics (MAE: 12.49 m, RMSE: 19.38 m, R²: 0.99), outperforming baseline methods in reconstructing subglacial terrain. This work demonstrates the potential of deep learning to bridge observational gaps in subglacial topography, offering a scalable and efficient solution for inferring bed elevation. By improving the accuracy of bed topography estimates, our framework supports advancements in ice sheet modeling, enabling more reliable predictions of ice flow dynamics and their impact on sea-level rise.
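Dynamic loss balancing between a radar misfit term and a BedMachine misfit term can be sketched as normalizing each term by a running average of its own magnitude, so that neither dominates training. This is one common scheme, assumed here for illustration; the paper's exact mechanism may differ:

```python
class DynamicLossBalancer:
    """Track an exponential moving average (EMA) of each loss term's magnitude
    and normalize the term by it, so heterogeneous terms contribute comparably."""
    def __init__(self, beta=0.9, eps=1e-8):
        self.beta, self.eps = beta, eps
        self.avg = {}

    def __call__(self, **terms):
        total = 0.0
        for name, value in terms.items():
            prev = self.avg.get(name, value)               # EMA starts at first value
            self.avg[name] = self.beta * prev + (1.0 - self.beta) * value
            total += value / (self.avg[name] + self.eps)   # each term scaled toward ~1
        return total

balancer = DynamicLossBalancer()
# Radar misfit and BedMachine misfit can live on very different scales:
loss = balancer(radar=100.0, bedmachine=0.01)
```

On the first step both normalized terms are close to 1 regardless of raw scale, and thereafter each term's weight drifts with its own training history rather than a hand-tuned constant.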
Kamal Acharya, Mehul Lad, Liang Sun, and Houbing Song
Demand Modeling for Advanced Air Mobility
In recent years, the rapid pace of urbanization has posed profound challenges globally, exacerbating environmental concerns and escalating traffic congestion in metropolitan areas. To mitigate these issues, Advanced Air Mobility (AAM) has emerged as a promising transportation alternative. However, the effective implementation of AAM requires robust demand modeling. This study delves into the demand dynamics of AAM by analyzing employment-based trip data across Tennessee’s census tracts, employing statistical techniques and machine learning models to enhance accuracy in demand forecasting. Drawing on datasets from the Bureau of Transportation Statistics (BTS), the Internal Revenue Service (IRS), the Federal Aviation Administration (FAA), and additional sources, we perform cost, time, and risk assessments to compute the Generalized Cost of Trip (GCT). Our findings indicate that trips are more likely to be viable for AAM if air transportation accounts for over 70% of the GCT and the journey spans more than 250 miles. The study not only refines the understanding of AAM demand but also guides strategic planning and policy formulation for sustainable urban mobility solutions.
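The screening logic reported above, viability when the air segment exceeds 70% of the Generalized Cost of Trip and the trip spans more than 250 miles, can be sketched as follows. The monetization coefficients are placeholders, not the study's calibrated values:

```python
def generalized_cost(cost_usd, time_hr, risk,
                     value_of_time=35.0, risk_penalty=100.0):
    """Monetize a trip: out-of-pocket cost + monetized time + monetized risk.
    The two coefficients are illustrative placeholders."""
    return cost_usd + value_of_time * time_hr + risk_penalty * risk

def aam_viable(air_gct, total_gct, distance_miles):
    """Viability screen from the abstract: air share of GCT > 70% and > 250 miles."""
    return air_gct / total_gct > 0.70 and distance_miles > 250

gct = generalized_cost(cost_usd=100.0, time_hr=2.0, risk=0.5)   # 100 + 70 + 50
```

Folding time and risk into a single monetary scale is what lets heterogeneous modes (ground vs. air) be compared trip by trip.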
Zahid Hassan Tushar, Adeleke Ademakinwa, Zhibo Zhang, and Sanjay Purushotham
Enhancing Aerosol and Cloud Retrievals Based on Hyperspectral Observations with Deep Learning: A Case Study with PACE-OCI
Above-cloud aerosols play a crucial role in Earth’s energy balance, climate modeling, and weather forecasting through their direct, semi-direct, and indirect effects. The direct effect involves changes in the reflection of radiance at the Top of Atmosphere (TOA). The semi-direct effect alters the thermodynamic properties of the lower atmosphere, while the indirect effects influence cloud microphysics and radiative transfer profiles, requiring continuous study. The relative significance of these direct, semi-direct, and indirect effects remains uncertain, and their long-term trends amid regional and global climate change are not yet fully understood. Addressing these uncertainties requires sustained satellite monitoring of above-cloud aerosols. The recently launched Plankton, Aerosol, Cloud, and ocean Ecosystem (PACE) mission carries the Ocean Color Instrument (OCI), which provides hyperspectral observations at very high resolution, offering a wealth of information that can reduce uncertainties in retrievals of above-cloud aerosol and cloud properties. Unfortunately, traditional optimal-inversion algorithms and look-up-table (LUT) based retrievals are too slow and use only a few bands. These algorithms could be adapted to PACE-OCI by selecting the closest bands, but a huge portion of the hyperspectral observations would remain unused, wasting the opportunities PACE-OCI offers. This necessitates advanced algorithms that effectively use the full PACE-OCI hyperspectral observations. The goal of this ongoing work is three-fold: first, develop a deep learning algorithm to replace LUT-based retrievals; second, explore dimensionality reduction techniques and merge them with deep learning toward lightweight methods for real-time retrievals; and lastly, develop deep learning algorithms for joint retrieval of aerosol and cloud properties with minimal error and bias.
In our preliminary experiments, the LUT required 30,720 MB of memory, whereas the deep learning model occupied only 4 MB, just 1/7680th of the LUT’s size, highlighting the memory advantage of the learned approach.
Olivia Patterson and Rebecca Williams
Glacial Guardians: Development of a Data-Driven Educational Video Game Exploring Antarctic Iceberg Lifecycles
Understanding the iceberg life cycle—from calving to drifting, fracturing, and melting—is useful to climate scientists because melting icebergs release freshwater and nutrients into the ocean, and contribute to sea ice and ocean currents, all of which are important contributors to scientific models of sea level rise. Visualization of these processes aids scientists in the development and understanding of more accurate climate change models, and visual narration of compelling stories about specific calving events can help scientists engage the public and highlight the significance of these processes in the broader context of climate change. Prior studies have found that video games can help encourage people to think about and advocate for climate change mitigation, especially those that incorporate real-world data. With this in mind, we have created a data-driven educational video game, called Glacial Guardians, which incorporates real-life Antarctic data and interactive elements. In our game, we focus on an iceberg named A68, which broke off from the Larsen C ice shelf in July 2017. It quickly fractured into two pieces: A68A and the smaller piece A68B, and continued to fracture until its eventual demise in April 2021. An interactive and data-driven experience of the iceberg lifecycle invites the public to explore and learn in an engaging virtual world, while grounding that experience in real-life data. In video games, users can become emotionally attached to the world, so the goal is to create a game that not only informs but also motivates action toward addressing climate challenges, involving the viewers in the tension and outcome of a visual story that will inevitably change life on Earth.
Maloy Kumar Devnath, Sudip Chakraborty, and Vandana P. Janeja
Graph-Based Anomaly Detection for Identifying the Interaction Between Sea Ice Retreat and Ice Sheet Melting in the Antarctic Region
Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang, and Sanjay Purushotham
Joint Retrieval of Cloud Properties Using Attention-Based Deep Learning Models
Accurate cloud property retrieval is vital for understanding cloud behavior and its impact on climate, including applications in weather forecasting, climate modeling, and estimating Earth’s radiation balance. The Independent Pixel Approximation (IPA), a widely used physics-based approach, simplifies radiative transfer calculations by assuming each pixel is independent of its neighbors. While computationally efficient, IPA has significant limitations, such as inaccuracies from 3D radiative effects, errors at cloud edges, and ineffectiveness for overlapping or heterogeneous cloud fields. Recent AI/ML-based deep learning models have improved retrieval accuracy by leveraging spatial relationships across pixels. However, these models are often memory-intensive, retrieve only a single cloud property, or struggle with joint property retrievals. To overcome these challenges, we introduce CloudUNet with Attention Module (CAM), a compact UNet-based model that employs attention mechanisms to reduce errors in thick, overlapping cloud regions and a specialized loss function for joint retrieval of Cloud Optical Thickness (COT) and Cloud Effective Radius (CER). Experiments on a Large Eddy Simulation (LES) dataset show that our CAM model outperforms state-of-the-art deep learning methods, reducing mean absolute errors (MAE) by 34% for COT and 42% for CER, and achieving 76% and 86% lower MAE for COT and CER retrievals compared to the IPA method.
Tartela Tabassum, Pavan Raj Ravi, and Jianwu Wang
Physics-Informed Neural Networks (PINNs) and Neural Networks for Bedrock Topography Prediction: A Study of PDE-Regularized Models and SIA-Based PINNs
Greenland’s ice sheet is melting due to global warming. Thus, identifying the shape of the bedrock beneath the ice is important for predicting ice melt rates and their contribution to global sea level. Traditional machine learning (ML) methods often require high-resolution training data for accurate predictions. Additionally, interpolation-based approaches struggle to capture subglacial features in data-sparse regions and demand substantial computational resources, making them inefficient. In this work, we use Physics-Informed Neural Networks (PINNs) and a conventional neural network (NN) approach to predict subglacial bedrock elevation using radar data from Upernavik, West Greenland, an important area for understanding ice sheet behavior. We compare two approaches: the first employs a neural network enhanced with gradient-based PDE regularization (PDE + MSE loss); the second implements a PINN based on the 1D Shallow Ice Approximation (SIA) with a boundary loss, explicitly integrating the governing physics into the model. Our findings demonstrate that the PDE-regularized neural network significantly outperforms the SIA-based PINN in predictive accuracy: it achieved an MAE of 34.77, an RMSE of 52.26, and an R² of 0.92, with a total loss of 2731.32, whereas the SIA-based PINN resulted in higher error, with an MAE of 82.94, an RMSE of 111.34, and an R² of 0.63. This indicates that NNs with a PDE loss are better suited for predicting ice bed elevation than those using a purely physics-based Stokes flow loss. Although PINNs offer improved physical consistency and accuracy with limited data, they come at the cost of high computational demands. Future research will fine-tune the physics constraints to improve the PINN model while minimizing computational cost.
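The PDE + MSE idea above can be sketched as a data-misfit term plus a derivative penalty. The second-derivative residual below is only an illustrative stand-in for the study's actual physics residual, and all names and values are toy assumptions.

```python
import numpy as np

def pde_regularized_loss(bed_pred, bed_obs, dx=1.0, lam=0.1):
    """Sketch of a PDE + MSE objective: an MSE data term plus a
    finite-difference second-derivative penalty on a 1D bed profile.
    The derivative residual stands in for the study's physics term."""
    mse = np.mean((bed_pred - bed_obs) ** 2)
    d2 = (bed_pred[2:] - 2.0 * bed_pred[1:-1] + bed_pred[:-2]) / dx**2
    return mse + lam * np.mean(d2 ** 2)

x = np.linspace(0.0, 1.0, 11)
obs = 100.0 + 5.0 * x          # toy linear bed elevation profile
pred = obs.copy()              # a perfect (and perfectly smooth) prediction
loss = pde_regularized_loss(pred, obs, dx=0.1)
```

During training, such a combined loss lets gradients push the network toward predictions that both fit the radar observations and satisfy the imposed physics constraint.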
Predicting Arctic Sea Ice using Machine Learning: A Data-Driven Approach
Sai Vikas Amaraneni, Sudip Chakraborty, and Vandana P. Janeja
Predicting Sea Ice Concentration over the Antarctic Region Using Conv-LSTM
Antarctic sea ice is crucial in regulating global climate by influencing ocean salinity, circulation, and sea level rise. It acts as a protective buffer shielding the ice shelves from the thermal advection and mechanical forces that trigger ice shelf disintegration and the resulting sea level rise. However, ongoing sea ice retreat events can cause excessive ocean warming and lead to changes in ocean circulation, which influence local and regional climate along with marine wildlife, thereby affecting sea level rise. The Antarctic ice sheet could raise sea level by roughly 200 ft if it melted completely. Unlike the Arctic, which has shown a continuous decline since 1978, Antarctic Sea Ice Extent (SIE) grew until 2015, reaching a record 20.14 million km² in 2014, before declining sharply to a record low of 1.965 million km² in 2023. This variability has led to a limited understanding of Antarctic sea ice dynamics compared to the Arctic. To address this, we developed a Patch Convolutional Neural Network (Patch-CNN), which segments daily SIE images into 16 patches to capture localized features. Our prior study demonstrated Patch-CNN’s superiority over traditional CNNs, achieving improved predictive performance. However, its accuracy declines with increasing lead time, as it relies solely on image data without incorporating critical climate variables such as Sea Surface Temperature (SST), wind, and albedo. To enhance prediction accuracy, we propose using Convolutional Long Short-Term Memory (ConvLSTM) to integrate spatial and temporal dependencies for SIE forecasting up to 3 months ahead. By leveraging satellite data from CERES, MODIS, and other sources, our study aims to improve Sea Ice Concentration predictions (0: no ice, 1: full coverage), providing deeper insights into long-term climate patterns and the factors driving sea ice retreat, ultimately contributing to better climate forecasting.
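The Patch-CNN preprocessing step described above can be sketched as splitting each daily SIE image into a 4 × 4 grid of 16 equal patches; the array shapes and names below are illustrative, not the study's actual data dimensions.

```python
import numpy as np

def to_patches(img, n=4):
    """Split a square image into an n x n grid of equal patches
    (16 patches for n=4), as in Patch-CNN-style preprocessing."""
    h, w = img.shape
    ph, pw = h // n, w // n
    return [img[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
            for i in range(n) for j in range(n)]

sie = np.arange(64, dtype=float).reshape(8, 8)  # toy stand-in for a daily SIE image
patches = to_patches(sie)
```

Feeding each patch to the network separately lets it learn localized features that a single whole-image convolution pass may smooth over.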
Emam Hossain, Md Osman Gani, Devon Dunmire, Aneesh Subramanian, and Hammad Younas
Time Series Classification of Supraglacial Lakes Evolution over Greenland Ice Sheet
The Greenland Ice Sheet (GrIS) is a significant driver of global sea level rise, largely due to increasing meltwater runoff. Supraglacial lakes, which form on the ice sheet surface during the melt season, influence ice dynamics by storing and transporting meltwater. Understanding their seasonal evolution—whether they refreeze, drain, or become buried as subsurface lakes—is critical for assessing their impact on ice sheet mass balance. In this study, we propose a computationally efficient time series classification approach using Gaussian Mixture Models (GMMs) applied to Reconstructed Phase Spaces (RPSs). This method categorizes supraglacial lakes into three classes: (1) those that refreeze, (2) those that drain, and (3) those that remain buried beneath the surface. We utilize time series data from Sentinel-1 (microwave) and Sentinel-2 (optical) satellites, ensuring robust classification across varying atmospheric conditions. Our model, trained with just a single representative sample per class, achieves an accuracy of 85.46% using Sentinel-1 alone and 89.70% when combining Sentinel-1 and Sentinel-2 data. This significantly outperforms conventional machine learning and deep learning models that require extensive labeled datasets. The results highlight the effectiveness of the RPS-GMM approach in capturing complex supraglacial lake dynamics with minimal training data, providing a scalable and data-efficient solution for monitoring ice sheet hydrology. This study demonstrates the potential for physics-informed time series analysis to improve our understanding of ice sheet processes and their role in climate change.
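The reconstructed phase space underlying the RPS-GMM approach above is a time-delay embedding of the lake's time series; the embedding dimension and delay below are illustrative choices, not the study's tuned parameters.

```python
import numpy as np

def reconstruct_phase_space(x, dim=3, tau=2):
    """Time-delay embedding: each row is
    [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i:i + n] for i in range(0, dim * tau, tau)], axis=1)

series = np.sin(np.linspace(0.0, 4.0 * np.pi, 50))  # toy lake backscatter series
rps = reconstruct_phase_space(series, dim=3, tau=2)
```

A Gaussian Mixture Model can then be fitted to the embedded points of each class, which is how the study classifies lakes from as little as one representative sample per class.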
Omar Faruque, Sahara Ali, Xue Zheng, and Jianwu Wang
TTCD: Transformer Integrated Temporal Causal Discovery from Non-Stationary Time Series Data
The growing availability and importance of time series data across various domains, including environmental science, epidemiology, and economics, has led to an increasing need for time-series causal discovery methods that can identify intricate relationships in non-stationary, nonlinear, and often noisy real-world data. Existing constraint-based causal discovery methods for non-stationary time series data rely on conditional independence tests, whereas score-based methods use statistical processes. In this work, we propose a Transformer Integrated Temporal Causal Discovery (TTCD) framework to discover contemporaneous and lagged causal relations from non-stationary time series data. The proposed framework comprises a transformer-integrated Non-Stationary Feature Learner module, which learns non-stationary features from the input temporal data, and a Causal Structure Learner module, which discovers causal connections between variables using the latent features learned by the Non-Stationary Feature Learner. Through experiments on multiple synthetic, real-world, and benchmark datasets, we demonstrate the empirical proficiency of our proposed approach compared to several state-of-the-art methods. The inferred graphs for the real-world datasets are also in good agreement with domain understanding.
Yiming Liao and Yiheng Li
A New Benchmark Solution for T-cell Receptor Epitope Binding Prediction
Applying deep learning methods to predict T-cell receptor epitope binding is a recent trend in immunotherapy. Many solutions have been developed, but unlike in the field of large language models (LLMs), there are no standard benchmark datasets, evaluation metrics, or procedures to compare the performance of different solutions, particularly on unknown receptors and epitopes. We established a series of new benchmark procedures based on an unpublished T-cell receptor epitope binding dataset and used ROC-AUC and partial AUC as metrics to evaluate the performance of eight different prediction solutions. The results indicate that none of the current solutions significantly outperformed random guessing on unknown T-cell receptor and epitope data, highlighting a significant need for a benchmark in this area.
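ROC-AUC, one of the two metrics used above, can be computed directly from its rank interpretation; the scores below are toy values, not the benchmark data.

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC-AUC via its rank (Mann-Whitney) interpretation: the
    probability a random positive is scored above a random negative."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([1, 1, 0, 0])
scores = np.array([0.9, 0.2, 0.35, 0.1])  # toy binding scores
auc = roc_auc(scores, labels)
```

An AUC near 0.5 corresponds to random guessing, which is the baseline the abstract reports current solutions failing to beat on unknown receptors and epitopes.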
Md Alomgeer Hussein, Lu He, and Tera L. Reynolds
A Patient-centered Approach to Evaluating Large Language Model-generated Responses to Patients’ Questions
Barriers to Continuous Glucose Monitoring Adoption Among Vulnerable Populations in Central Maryland
Despite advancements in digital health, vulnerable populations such as minorities, refugees, and low-income individuals continue to face challenges in managing chronic conditions like diabetes. Continuous glucose monitoring (CGM) offers real-time glucose tracking, yet its adoption remains low due to financial, cultural, and systemic barriers. This study explores the obstacles preventing CGM use among vulnerable communities in central Maryland, where healthcare disparities persist. The research aims to identify key barriers limiting CGM adoption by examining the socio-economic, cultural, and systemic factors affecting CGM utilization, and to propose targeted interventions to improve CGM accessibility and diabetes management. A mixed-methods approach was used, including semi-structured interviews and surveys with patients, healthcare providers, and community stakeholders. Participants were recruited from diabetes centers, physicians’ offices, and community health programs. Data collection focused on experiences with CGM accessibility, affordability, and usability. Preliminary results indicate that cost, lack of insurance coverage, digital literacy gaps, and cultural beliefs significantly impact CGM adoption. Many patients struggle to navigate the healthcare system for CGM prescriptions, while providers cite bureaucratic obstacles in securing approvals. Additionally, the absence of culturally tailored educational resources further hinders uptake. Addressing these barriers is crucial for reducing diabetes-related disparities in central Maryland. This study highlights the need for policy reforms, including Medicaid expansion for CGM coverage, community-based diabetes education, and simplified access to digital health tools. By tackling these challenges, we can enhance diabetes care and promote health equity in vulnerable populations.
Riishav Guptaa and Dong Li
Continuous Blood Pressure Monitoring Using Smartphones and Wearables: A Survey of Recent Advances and Challenges
Continuous, non-invasive blood pressure (BP) monitoring using smartphones and wearable devices is rapidly advancing, driven by developments in sensor technology and signal processing. Various techniques have been introduced, leveraging built-in sensors and techniques including photoplethysmography (PPG), oscillometric methods, phonocardiography (PCG), accelerometers, microphones, and pulse transit time (PTT) measurement. Academic research has extensively investigated multimodal sensor fusion, machine learning, and deep learning models to enhance accuracy and usability. Concurrently, industry innovations have emerged, such as smart rings integrating PPG and electrodermal activity (EDA) sensors, and earbuds using in-ear acoustic sensing. Despite these advancements, significant challenges remain, particularly related to motion artifacts, ambient environmental interference, calibration complexity, and low signal-to-noise ratios. This survey critically reviews current academic and industry developments, identifies key challenges impeding widespread adoption, and discusses promising future directions for continuous BP monitoring solutions.
Anjali Jha and Kai Sun
Data-Driven Clinician Scheduling Under Resource Uncertainty in an Outpatient Clinic: A Predict-Then-Optimize Approach
This work addresses a practical scheduling problem faced by an outpatient pain consultant clinic affiliated with an academic anesthesiology department. Anesthesiologists with a pain medicine subspecialty credential hold dual appointments, with primary duties in the operating room (OR) and intensive care unit (ICU) and secondary responsibilities in the pain clinic. While this dual-role structure aligns with clinicians’ clinical preferences, it aggravates staffing availability uncertainty for the pain clinic, as clinicians may be reallocated to OR/ICU sites due to urgent clinical needs after the pain clinic schedules have been published for patients booking visits. Increasing clinical demand and persistent physician shortages have exacerbated the frequency and extent of these reallocations, further disrupting patient access to care and misaligning clinicians’ designated full-time equivalent (FTE) allocations, ultimately leading to dissatisfaction among both patients and clinicians. To address this challenge, we propose a two-step, predict-then-optimize scheduling framework to automate pain clinic scheduling under staffing uncertainty. In the prediction step, clinician availability rates are predicted for each day in the scheduling horizon using time series models. The optimization step employs a multi-objective mixed-integer programming model that considers a proportional balance of shift types, adherence to individual contractual FTE allocations, and schedule consistency for each clinician. Clinician availability uncertainty is incorporated using robust optimization (RO) with a budget-of-uncertainty approach to control the conservatism typically associated with RO. The proposed framework is benchmarked against the clinic’s current scheduling template, using actual schedules as ground truth.
Results demonstrate that the framework improves scheduling consistency, minimizes duties assigned beyond contractual obligations, and potentially enhances clinician well-being. The proposed framework interfaces predictive and prescriptive analytics by incorporating predictions and associated uncertainties into optimization models. It can be further extended to account for reliability issues in advanced machine learning techniques, e.g., large language models, when applied in real-world decision-making practices.
Pronob Kumar Barman, Tera L. Reynolds, and James Foulds
Enhancing Online Support Group Formation Using Topic Modeling Techniques
Online health forums serve as vital platforms where patients exchange experiences, seek advice, and find peer support. Despite their importance, forming personalized and effective support groups in these forums remains a challenging task, hindered by scalability issues and reliance on static categorizations. To address these limitations, we propose two advanced models: the Group-specific Dirichlet Multinomial Regression (gDMR) model and the Group-specific Structured Topic Model (gSTM), which leverage user-generated content, demographic information, and interaction data to automate the creation of interpretable and contextually relevant support groups. Through extensive experiments on a large-scale online health forum dataset, we demonstrate that both models significantly outperform traditional baselines—including Latent Dirichlet Allocation (LDA), DMR, and STM—on metrics such as held-out log-likelihood, topic coherence, and within-group semantic similarity. The gDMR model effectively captures interaction-based features, while the gSTM model excels in producing semantically meaningful and interpretable topics. By automating support group formation, our models provide scalable and efficient frameworks for online health forums. These advancements reduce the manual effort required to organize peer support groups, enhance user engagement, and foster stronger community networks, ultimately contributing to improved patient outcomes and support system efficacy.
Ommo Clark and Karuna P. Joshi
Evaluating Causal AI Techniques for Health Misinformation Detection
The proliferation of health misinformation on social media, particularly regarding chronic conditions such as diabetes, hypertension, and obesity, poses significant public health risks. This study evaluates the feasibility of leveraging Natural Language Processing (NLP) techniques for real-time misinformation detection and classification, focusing on Reddit discussions. Using logistic regression as a baseline model, supplemented by Latent Dirichlet Allocation (LDA) for topic modeling and K-Means clustering, we identify clusters prone to misinformation. While the model achieved a 73% accuracy rate, its recall for misinformation was limited to 12%, reflecting challenges such as class imbalance and linguistic nuances. The findings underscore the importance of advanced NLP models, such as transformer-based architectures like BERT, and propose the integration of causal reasoning to enhance the interpretability and robustness of AI systems for public health interventions.
Shadman Sakib, Gaurav Shinde, Snehalraj Chugh, Mohammad Saeid Anwar, and Nirmalya Roy
MultiRespNet: An Edge-Optimized Multi-Model Ensemble Approach for Real-Time Contactless Respiratory Rate Estimation
Respiratory rate (RR) is a vital sign of physiological health, and monitoring it enables early identification of conditions ranging from respiratory distress to metabolic problems. However, the limitations of contact-based sensors and the lack of reliability in current contactless methods make continuous and accurate assessment challenging outside of clinical settings. To tackle these problems, we present MultiRespNet, a novel edge-enabled framework for real-time, contactless respiratory rate estimation and breathing pattern classification utilizing video input. Our architecture incorporates optical flow analysis with multi-modal features (temporal, statistical, frequency-domain). It uses an edge-optimized multi-model ensemble of Exponential Smoothing Transformer (ETSformer), Temporal Fusion Transformer (TFT), and Informer, incorporating a stacking-based meta-learner for adaptable prediction across varying scenarios. To validate our method, we present an in-house dataset collected with a drone-mounted RGB camera, comprising synchronized ground truth recordings across various postures and environmental factors (lighting variations, motion artifacts, etc.). Experimental findings demonstrate that MultiRespNet outperforms existing state-of-the-art baselines with a mean absolute error (MAE) of 0.98 breaths per minute (bpm) and real-time inference at 1.22 s on an edge device. This computational effectiveness demonstrates how our proposed method could be used in emergency response platforms, telemedicine, and clinical assessment.
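The stacking idea behind the meta-learner above can be sketched as fitting a linear blend over base-model RR predictions; this toy least-squares blend stands in for the actual ensemble, which stacks transformer forecasters, and the base models here are invented for illustration.

```python
import numpy as np

def stack_predictions(base_preds, targets):
    """Fit linear blend weights over base-model RR predictions by
    least squares, a simple stand-in for a stacking meta-learner."""
    A = np.column_stack(base_preds)            # (n_samples, n_models)
    w, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return A @ w, w

rr_true = np.array([12.0, 15.0, 18.0, 21.0])   # toy ground-truth RR (bpm)
m1 = rr_true + 1.0                             # base model with +1 bpm bias
m2 = rr_true - 1.0                             # base model with -1 bpm bias
blended, w = stack_predictions([m1, m2], rr_true)
```

With two oppositely biased base models, the learned weights of 0.5 each cancel the biases, illustrating why a meta-learner can outperform any single forecaster.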
Paris von Lockette, Swapna Kshirsagar, Tamia Bowers, Ben Bazasuren, Mia Merritt, and Deepak Bhukya
Advanced Manufacturing and Soft Robotics in the ElectroMagnetically Active-Composites and Structures (eMACS) Lab at UMBC
The eMACS Lab at UMBC is engaged in advanced additive manufacturing and the development of soft robotics. Topically, the work sits at the intersection of polymer and composite smart-material behavior, materials processing, classical electromechanics, and design optimization. Recent manufacturing work has used electromagnetic (EM) field processing to control composite microarchitectures, and the resulting magnetic and electric properties, in EM-sensitive composites. This work focused on iron and ferrite powders in rubbery matrices such as silicone elastomer. Our “Future Manufacturing” aspirations are to optimize sustainable material sourcing in conjunction with EM processing, providing multifunctional-property control to drive ecologically optimized material allocation decisions. Here we study lignin, cellulose, and other bio-based fibers in a variety of matrices. We are additionally studying real-time localized EM processing to control fiber alignment in additive manufacturing of hierarchically reinforced carbon and Kevlar fiber composites. Recent work in soft robotics has developed magnetically actuated walking, swimming, and gripping devices as well as push-pull accordion actuators and origami structures. These devices have been fabricated from rubbery matrices and various magnetic powders. This work has also included biomedical devices to assist cardiac function, remedy heart defects, and simplify surgical procedures. The core competencies of the eMACS Lab span experimental property testing (mechanical, dielectric, magnetic); spectroscopy methods (X-ray diffraction, Raman); numerical simulation and analysis (traditional and machine learning optimization, finite element methods); multiphysics analytical modeling of magneto-electromechanical responses at micro and macro scales; and material and device fabrication, including material compounding, mold casting, and additive methods.
Investigating the Work Practices of Assembly Line Workers with Visual Impairments
Assembly lines provide a streamlined method for the mass production of goods. Due to the visual-centric nature of product assembly tasks, workers with visual impairments can face challenges. In this paper, we describe a study investigating the interactions on a garment construction assembly line operated by a mixture of workers with and without visual impairments. We highlight experiences from workers, along with strategies and workarounds that have been developed to support workflow. More specifically, we focus on the importance of low-tech accessibility design in supporting workplace accessibility, and stress the importance of blind-perspective training in assembly line environments.
Elias Gilotte, Chad Sundberg, Vikash Kumar, and Govind Rao
Uncovering The Relationship Between Oxygen Availability And Energy Sources In Cell-free Protein Synthesis
In vivo protein production is essential for the pharmaceutical industry and other sectors, but it is limited by the host cell’s tolerance to the product and inefficient energy use. Cell-free protein synthesis offers a promising alternative by using cellular components without the constraints of a living cell. For commercial-scale protein production, the parameters affecting reaction yields need to be understood so that production can be optimized. In particular, the relationship between oxygen consumption and energy sources is understudied. Using our diffusive reactors, we studied this relationship by varying the available oxygen and energy sources. Using metabolomics and real-time oxygen and protein sensors, we uncovered the relationship between oxygen and various metabolic processes.
Sanzida Akter, Pradyoth Shandilya, Logan Courtright, Giuseppe D’Aguanno, Rajasekhar Anguluri, Omri Gat, and Curtis R. Menyuk
An Efficient Algorithm for Modeling the Slow Evolution of Soliton Molecules Due to Interaction and Noise in Microresonators
Microresonator optical frequency combs (OFCs), widely used in fields such as optical communications, precision metrology, and spectroscopy, stand out for their suitability in applications demanding a compact size and low power consumption. OFCs are formed by stationary waveforms called solitons within the microresonator that create a periodic stream of optical pulses after coupling into an output waveguide. Soliton molecules are bound states formed by the interaction between individual solitons, and as the separation between solitons increases relative to their duration, their interactions can span time scales much longer than the photon lifetime. The step size possible with conventional numerical methods is limited to the order of the photon lifetime. This limitation makes it impossible to simulate slow processes inside microresonators, such as modeling slow soliton interactions and analyzing the impact of noise on solitons on laboratory timescales. We describe a computational scheme called the synergetic method that overcomes this limitation by taking step sizes that are many orders of magnitude larger than is possible with conventional methods, making it possible to simulate slow processes 100 times faster. We apply this approach to study the slow interactions between two solitons and to carry out Monte Carlo simulations to determine when the molecule becomes unstable due to white noise. The synergetic method is promising for analyzing complex soliton structures and assessing their stability on laboratory timescales.
Logan Courtright, Pradyoth Shandilya, Thomas F. Carruthers, and Curtis R. Menyuk
Formation of Multiple Stable Regions for Single Solitons in the Presence of an Avoided Crossing
Microresonator-generated optical frequency combs have matured as a technology over the past two decades [1]. Microresonator frequency combs are created by coupling light from a reference laser pump through an input waveguide or tapered optical fiber into a microresonator, usually a small, low-loss, circular cavity. In recent years, microresonator devices have often depended heavily on higher-order dispersion effects beyond quadratic dispersion, such as cubic and quartic dispersion and avoided crossings, in order to generate solitons. In particular, avoided crossings have emerged as important tools for soliton generation in the context of different physical systems, such as photonic crystal resonators and microresonator dimers [2, 3]. Thus, understanding how avoided crossings affect the regions of stability for solitons has become an important issue. Computational analysis performed on microresonator systems with quadratic dispersion predicts one continuous stability region for single solitons in the pump detuning-power parameter space [4]. In this work, we calculate the stability region in the pump detuning-power space of a single soliton in an anomalous-dispersion microresonator system with an avoided crossing. We model the avoided crossing as a singularity term a/(μ − b), where a is the avoided-crossing strength and b is the avoided-crossing location, added onto the integrated dispersion of a quadratic-dispersion microresonator, D_int = (D_2/2)μ², where μ is the mode number of each microresonator resonance. We show that in the presence of an avoided crossing this continuous stability region can split up into discontinuous islands of stability with qualitatively different soliton solutions in each section.
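Restating the model described above in one formula, the integrated dispersion with the avoided-crossing term is (a sketch assembled from the quantities named in the abstract):

```latex
D_{\mathrm{int}}(\mu) = \frac{D_2}{2}\,\mu^{2} + \frac{a}{\mu - b},
```

where $D_2$ is the quadratic dispersion coefficient, $\mu$ the mode number, $a$ the avoided-crossing strength, and $b$ the avoided-crossing location.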
Abdullah Al Imran and Meilin Yu
Integration of the High-Order Flux Reconstruction Method with Actuator Disk Models for Wind Energy Applications
This study focuses on implementing the Actuator Disk Model (ADM) in a high-order Flux Reconstruction (FR)/Correction Procedure via Reconstruction (CPR) framework to simulate Vertical Axis Wind Turbines (VAWTs). ADM applies body forces over a circular disk representing the turbine's effect on the fluid flow, instead of directly modeling the turbine blades. This makes the simulation faster and more efficient while still capturing the key flow physics, especially the wake physics. A Gaussian filtering approach is used to ensure a smooth force distribution, which also helps blend the turbine forces naturally into the surrounding flow. The forces are applied to mesh elements marked as interface and solid cells, mimicking the effect of a real turbine. As a first step, source terms producing zero total drag and lift are used to represent an inviscid cylinder. Test results for the velocity and pressure fields show trends similar to those of the reference flow fields obtained from body-fitted mesh simulations. These results demonstrate that this method is useful for studying VAWT performance, wake interactions, and the optimization of VAWT arrays.
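The Gaussian filtering of the actuator-disk forcing can be sketched in 1D: a total body force spread over mesh points with a normalized Gaussian kernel. This is a minimal illustration under assumed names and values, not the actual FR/CPR implementation, which distributes forces over disk-marked interface and solid cells.

```python
import numpy as np

def gaussian_body_force(x, center, total_force, sigma):
    """Spread a total body force over 1D mesh points with a normalized
    Gaussian kernel, smoothing the actuator-disk forcing."""
    kernel = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    kernel /= kernel.sum()   # discrete normalization conserves the total force
    return total_force * kernel

x = np.linspace(-1.0, 1.0, 201)  # toy 1D mesh coordinates
f = gaussian_body_force(x, center=0.0, total_force=10.0, sigma=0.1)
```

The discrete normalization guarantees the smoothed forcing still sums to the prescribed total, which is what lets the filtered disk reproduce the intended net drag or lift.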
Physics-Informed Machine Learning for Arctic Sea Ice Thickness Prediction
Predicting Arctic sea ice thickness (SIT) is crucial for understanding climate change, given its complex behavior driven by physical processes. Traditional models often fall short in capturing this complexity, while purely data-driven machine learning can lack physical grounding. This work introduces a physics-informed machine learning (PIML) approach to improve SIT prediction. PIML integrates sea ice physics, such as thermodynamic and dynamic equations, directly into the model’s architecture or training process. This allows the model to learn from both observational data and physical constraints, resulting in more accurate and physically realistic predictions. We demonstrate this approach using hybrid neural network architectures that incorporate physical loss terms or physics-based feature engineering. Our results show that PIML significantly improves SIT prediction accuracy compared to purely data-driven models, particularly in data-sparse regions. This research highlights the potential of PIML to advance our understanding and predictive capabilities of complex geophysical phenomena in the rapidly changing Arctic.
Raonaqul Islam, Ishraq Md. Anjum, Curtis R. Menyuk, and Ergun Simsek
Plasmonic Phototransistors with Improved Heat Dissipation
Phototransistors are important photonic devices with extensive applications in photodetection, solar cells, and image sensing. Due to their improved detectivity, sensitivity, and faster switching speeds, two-dimensional (2D) material-based phototransistors have been of great interest for the last two decades. One challenge with these devices is their low absorption coefficient, which can be addressed by placing an array of plasmonic metal nanoparticles on top of the 2D material layer. However, when in contact with 2D materials, metal nanoparticles act as local heat centers and significantly increase the temperature inside the 2D material. This reduces the overall quantum efficiency or, under strong excitation, can even physically damage the device. To address this, we propose a novel design in which the metal nanoparticles are placed beneath the 2D material and supported by silicon nanopillars. This configuration enables more efficient heat dissipation, leading to more stable and consistent quantum-efficiency enhancement across a wider range of optical powers. Our findings highlight the critical importance of considering thermal management alongside optical field enhancement in designing high-performance 2D-material-based photodetectors for demanding applications.
Nicholas Pankow, Gary Carter, and Alioune Niang
Quality Factor and Coupling Condition Analysis in Microresonators
There is strong fundamental and applied interest in the use of microresonators for the manipulation and generation of optical frequency combs (OFCs). Generating OFCs requires not only microresonators with a high loaded (total) quality factor (Q) but also a suitable coupling condition, which directly affects how efficiently light is transferred into the microresonator. This coupling condition can be under-coupled, where minimal light is transferred; over-coupled, where excessive light recouples back into the waveguide; or critically coupled, the optimal state, where all power is transferred from the waveguide into the microresonator at resonance. Here, we linearly characterized our crystalline magnesium fluoride (MgF2) microresonator by launching a continuous-wave (CW) laser into the microresonator at around 1550 nm. First, we measured the total Q-factor from the power transmission in the stationary regime. We found a loaded Q ≈ 1.1×10^8, close to the value listed on the microresonator’s datasheet (Q ≈ 1.7×10^8). The nature of the coupling condition cannot be deduced from the power transmission, except in the critical-coupling case, where the normalized transmission drops to zero. Therefore, we determined the coupling condition of our MgF2 microresonator by sending the CW laser through a phase modulator driven by a pulse generator and then into the microresonator. The laser frequency is set to the microresonator resonant frequency, and we apply a square pulse with a pulse width of 10 microseconds and a rising edge of around 300 ns. The voltage is adjusted to launch a π-phase shift onto the input light. The transient response of the output light after this phase shift reveals distinguishable behavior for each microresonator coupling regime. Both theoretical and experimental studies show that our microresonator exhibits over-coupled behavior.
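A loaded Q of the magnitude quoted above follows from a resonance-linewidth measurement via Q = f0 / FWHM. The linewidth below is a hypothetical value chosen only to illustrate the arithmetic at a Q near 1.1×10^8; it is not the measured datum.

```python
# Loaded quality factor from a Lorentzian transmission dip: Q = f0 / FWHM.
# The linewidth value is an assumption for illustration, not measured data.
f0 = 193.4e12         # optical carrier frequency near 1550 nm, in Hz
fwhm = 1.76e6         # hypothetical resonance linewidth (FWHM), in Hz
q_loaded = f0 / fwhm  # on the order of 1e8
```

This relation is why a narrower transmission dip at fixed carrier frequency directly implies a higher loaded Q.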
Alioune Niang, Pradyoth Shandilya, Logan Courtright, Gary Carter, and Curtis R. Menyuk
Soliton dynamics in singly and doubly pumped microresonators
In recent years, using microresonators for controlling and generating optical frequency combs (OFCs) has attracted great interest thanks to their applications in metrology and optical frequency synthesis. OFCs are optical spectra that consist of discrete and equally spaced frequencies. The detection of OFCs using photodetectors creates radio frequency signals at frequencies equal to the repetition rate (the frequency spacing in OFCs) and its harmonics, which in turn have applications in optical clocks. By coupling a continuous-wave laser (pump) into a microresonator with a sufficiently high loaded quality factor (Q), OFCs can be generated. Single solitons, soliton molecules, breathing solitons, and soliton crystals are among the major waveforms that have been experimentally and theoretically observed in microresonators. Here, we experimentally and numerically explore soliton dynamics in a silicon nitride microresonator with ≈ 100 GHz FSR and quadratic dispersion. We first use a main pump around 193 THz (1550 nm) to generate single solitons, two-soliton molecules, and soliton crystals. Next, we add a second pump at 187 THz (1598 nm) close to one of the soliton comb teeth. The second pump permits the formation of new frequencies due to nonlinear interactions with the OFC formed by the main pump, leading to the generation of interleaved frequency combs which we refer to as multi-color combs. In each state, the comb tooth spacing, i.e. the repetition rate, of the new colors is identical to the main color.
Cindy Almeida, David T. Booth, John T. Hrynuk, and Meilin Yu
Spectral Proper Orthogonal Decomposition Flow Analysis of Gust Mitigation With the Morphing Airfoil Technology at Low Reynolds Number
In this study, we use the spectral proper orthogonal decomposition (SPOD) method to analyze the transient characteristics of the flow around a NACA 0012 airfoil, with an oscillating trailing edge, subjected to a cross-flow gust. SPOD analyses were performed on the vertical gust exit, the leading-edge flow over the suction surface of the airfoil, and the airfoil wake. Results show that, in the temporal analysis, the wake is the region most affected by the gust, with a frequency shift from the low dimensionless trailing-edge input frequency of 0.1 to frequencies around 0.5.
Adaptive Domain Inference Attack with Concept Hierarchy
With deep neural networks increasingly deployed in sensitive application domains, such as healthcare and security, it is essential to understand what kind of sensitive information can be inferred from these models. Most known model-targeted attacks assume attackers have learned the application domain or training data distribution to ensure successful attacks. Can removing the domain information from model APIs protect models from these attacks? This project studies this critical problem. Unfortunately, even with minimal knowledge, i.e., accessing the model as an unnamed function without leaking the meaning of input and output, the proposed adaptive domain inference attack (ADI) can still successfully estimate relevant subsets of training data. We show that the extracted relevant data can significantly improve, for instance, the performance of model-inversion attacks. Specifically, the ADI method utilizes the concept hierarchy extracted from the public and private datasets that the attacker can access and applies a novel algorithm to adaptively tune the likelihood of leaf concepts in the hierarchy showing up in the unseen training data. For comparison, we also designed a straightforward hypothesis-testing-based attack, LDI. The ADI attack not only extracts partial training data at the concept level but also converges fastest and requires the fewest target-model accesses among all candidate methods.
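To make the adaptive-tuning idea concrete, the sketch below maintains a likelihood for each leaf concept in a hierarchy and reweights it using feedback from probing the unnamed target model. The update rule, feedback scores, and concept names are all hypothetical illustrations, not the paper's actual ADI algorithm.

```python
# Hypothetical sketch: adaptively reweight leaf-concept likelihoods
# using probe feedback (e.g., target-model confidence on samples drawn
# from each concept), then renormalize. Illustrative only.

def adapt_likelihoods(likelihoods, feedback, lr=0.5):
    """Multiplicatively reweight concept likelihoods and renormalize.

    likelihoods: dict concept -> prior probability
    feedback:    dict concept -> score in [0, 1]; 0.5 is neutral
    """
    updated = {c: p * (1.0 + lr * (feedback.get(c, 0.0) - 0.5))
               for c, p in likelihoods.items()}
    total = sum(updated.values())
    return {c: p / total for c, p in updated.items()}

priors = {"dog": 0.25, "cat": 0.25, "car": 0.25, "plane": 0.25}
scores = {"dog": 0.9, "cat": 0.8, "car": 0.1, "plane": 0.2}
posterior = adapt_likelihoods(priors, scores)
# Concepts with high probe feedback gain probability mass;
# concepts with low feedback lose it.
```

Repeating this update over several probing rounds concentrates the distribution on the concepts most likely present in the unseen training data.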
Sara Khanjani, Vandana Janeja, Christine Mallinson, and James Foulds
ALiRAS: Audio Linguistic Representation for Audio Anti-Spoofing
The increasing sophistication of synthetic speech poses significant challenges for audio security and speaker verification systems. To address this, we introduce ALiRAS (Audio Linguistic Representation for Audio Anti-Spoofing), a novel approach designed to enhance spoofed audio detection at scale. ALiRAS generates linguistic-based audio representations, incorporating key speech features that can differentiate between authentic and spoofed audio. Our research explores multiple front-end architectures for ALiRAS. ALiRAS representations aim to capture anomalies in pitch, pauses, overall audio quality, and the presence of breath. In this work, we demonstrate the large-scale performance of ALiRAS on common datasets containing both genuine and spoofed speech samples. Through this study, we establish ALiRAS as a scalable and effective framework for audio anti-spoofing. Future work will focus on domain adaptation techniques and their integration into ALiRAS.
Kelvin Echenim and Karuna Joshi
Automating IoT Data Privacy Compliance by Integrating Knowledge Graphs with Large Language Models
Regulatory compliance is mandatory for Internet of Things (IoT) manufacturers, particularly under stringent frameworks like the General Data Protection Regulation (GDPR), which governs the handling of personal data. We present a novel approach for automating compliance verification for IoT devices by integrating a Large Language Model (LLM) with a domain-specific Knowledge Graph (KG). This framework addresses two primary objectives: first, leveraging the LLM to interpret natural language compliance queries while mitigating issues such as bias and hallucination, and second, using a KG to provide structured, up-to-date regulatory guidance that complements the LLM's interpretative capacity. Our KG, populated with synthetic data, models GDPR obligations, permissions, and prohibitions, supporting precise SPARQL query execution for both deontic (normative) and non-deontic queries. Our evaluation demonstrates high semantic alignment between LLM-generated and gold-standard compliance guidance, confirming the framework's effectiveness in delivering accurate, context-specific recommendations. This work offers IoT manufacturers a scalable, automated solution for meeting data privacy requirements.
Yuechun Gu and Keke Chen
Calibrating Practical Privacy Risks for Differentially Private Machine Learning
Differential privacy quantifies privacy through the privacy budget ε, yet its practical interpretation is complicated by variations across models and datasets. Recent research on differentially private machine learning and membership inference has highlighted that with the same theoretical ε setting, the likelihood-ratio-based membership inference (LiRA) attack success rate (ASR) may vary according to specific datasets and models, which might be a better indicator for evaluating real-world privacy risks. Inspired by this practical privacy measure, we study the positive correlation between the ε setting and ASR. We also find that for a specific dataset and a specific task we can lower the attack success rate by modifying the dataset. As a result, we may enable flexible privacy budget settings in model training. One dataset modification strategy is selectively suppressing privacy-sensitive features without significantly damaging application-specific data utility. We use the SHAP (or LIME) model explainer to evaluate features' privacy sensitivity and utility importance and develop an optimized feature-masking algorithm. We have conducted extensive experiments to show (1) the inherent link between ASR and the dataset's privacy risk in terms of a specific modeling task; (2) by carefully selecting features to mask, we can preserve more data utility with equivalent practical privacy protection and relaxed ε settings.
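To illustrate the masking idea, the sketch below selects features to suppress given per-feature privacy-sensitivity and utility-importance scores, such as those an explainer like SHAP or LIME might provide. The greedy ratio heuristic, feature names, and budget are hypothetical, not the paper's optimized algorithm.

```python
# Illustrative sketch: greedily mask the features with the highest
# privacy-sensitivity-to-utility ratio, so privacy risk drops while
# application-specific utility is largely preserved. Hypothetical data.

def select_mask(sensitivity, utility, budget, eps=1e-9):
    """Return the `budget` feature names to mask, ranked by
    sensitivity / utility (highest first)."""
    ratio = {f: sensitivity[f] / (utility[f] + eps) for f in sensitivity}
    ranked = sorted(ratio, key=ratio.get, reverse=True)
    return set(ranked[:budget])

# Hypothetical per-feature scores (e.g., derived from SHAP values).
sens = {"zip_code": 0.9, "age": 0.6, "income": 0.5, "tenure": 0.1}
util = {"zip_code": 0.2, "age": 0.7, "income": 0.8, "tenure": 0.6}
masked = select_mask(sens, util, budget=2)
```

Features like the hypothetical `zip_code` (high sensitivity, low utility) are masked first; low-sensitivity features like `tenure` are kept.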
Yuechun Gu, Jiajie He, and Keke Chen
Data Privacy in Fine-tuning Machine Learning Models
Data privacy has been a top concern in the AI era. Despite the recent development of differentially private learning methods, controlled data access remains a mainstream method for protecting data privacy in many industrial and research environments. In controlled data access, authorized model builders work in a restricted environment to access sensitive data, which can fully preserve data utility with reduced risk of data leakage. However, unlike differential privacy, there is no quantitative measure that lets individual data contributors assess their privacy risk before participating in a machine learning task. We developed the demo prototype FT-PrivacyScore to show that it is possible to efficiently and quantitatively estimate the privacy risk of participating in a model fine-tuning task. Using this tool to estimate sample-specific privacy risk in model fine-tuning tasks, we found an interesting phenomenon: an iteration of fine-tuning on one subset can increase the privacy risks of samples from other subsets used in previous fine-tuning iterations. In this project, we will analyze this effect in detail and show that it deserves model builders' attention when they fine-tune their models.
Social engineering, while contributing to the majority of cyberattacks, poses a uniquely difficult problem in cybersecurity because of a combination of factors. First, social engineering is low cost and presents multiple increasingly complex and subtle attack vectors. Second, the majority of computer users are not cybersecurity literate, with under 30% judged competent on basic knowledge. Third, it exploits humans as the weakest link in cybersecurity, targeting vulnerabilities such as habit formation and susceptibility to persuasive techniques. This all results in a significant gap in security caused by people's unpreparedness to counteract social engineering. While many companies provide training for their employees, the majority of the population using information technology daily remains uneducated about social engineering threats. There is thus a need for a novel approach to education against social engineering attacks for casual users without high computing competencies that would take advantage of human psychology, just like the attacks themselves do. This project addresses the dual problems of lack of cybersecurity literacy and increasing social engineering attacks by creating training materials designed for non-security professionals. These materials integrate humor theory knowledge, entertainment education, and social psychological markers to create effective pedagogical tools around social engineering. These materials were evaluated with a population of non-security professionals (n=200) and determined to be over 80% effective.
Mariel Maughan, Ravi Kuber, Sy Saulynas, and Marilyn Nguyen
Not allowing older adults to be left behind: the need for cybersecurity advocates
Due to the growth in cybercrimes targeting older adults, cybersecurity advocates are needed to protect and empower older adults, allowing them to navigate the digital world safely. Cybersecurity advocates are a new work role, and none currently specialize in supporting older adults, who have different needs from other generations based on their skill levels and experiences. This study interviewed participants from government agencies, non-profit organizations, private sector organizations, and individual advocates, all of whom are working to protect older adults and/or help them recover from cybercrimes. The study consisted of semi-structured interviews with questions focusing on participants' overall experiences with advocacy, older adults, and cybersecurity. All interviews lasted between 30 and 60 minutes and were conducted through video conferencing software. Many of the participants reported that older adults tend not to report cybercrimes due to shame and fear of losing autonomy. A universal finding was that better and more user-friendly reporting mechanisms are needed. Most of the participants' work was preventative, but some organizations had restorative measures. The most interesting recommendations came from organizations that incorporated artificial intelligence (AI) into their work. Others were using intergenerational methods to spread cybersecurity awareness and training among high schoolers, college students, and older adults. Others described the effects of the digital divide on older adults, and how it was hindering preventative measures. Participants also identified the skills and characteristics needed in a cybersecurity advocate for older adults. Given scarce resources and understaffing, another finding of the study was the importance of networking and sharing information in order to protect as many older adults as possible. This study will produce best practices for delivering preventative cybersecurity measures to older adults, ideally shared among government agencies, nonprofits, the private sector, and individual advocates.
Zhiyuan Chen, Meilin Yu, Md Badrul Hasan, and Suyash Kalwani
Stealthy Long-Term Cyber Attacks Detection on Wind Turbines Using Physics-Informed Neural Networks
Wind energy contributes 7.8% of global electricity production as of 2023, with an expected annual growth rate exceeding 8%. However, the increasing digitalization and interconnectivity of wind energy assets make them attractive targets for cyber-attacks, as evidenced by recent incidents involving ENERCON Wind Turbines, Nordex Group SE, and Deutsche Windtechnik. While existing research focuses primarily on traditional cyber threats—such as ransomware and supply-chain attacks—stealthy long-term attacks that degrade performance over time remain largely unexplored. These attacks can manipulate turbine control parameters, leading to suboptimal rotor angles, increased wear and fatigue, and reduced energy efficiency. Given the economic implications, even a modest decrease in efficiency (e.g., 5%) can extend the return on investment (ROI) period by four years and reduce turbine lifespan by 13%, causing substantial financial losses. This research proposes a Physics-Informed Neural Network (PINN)-based framework to detect such stealthy long-term cyber-attacks on wind turbines. PINNs integrate domain knowledge from wind turbine physics and meteorology into machine learning models, enabling precise anomaly detection. Our approach involves: (1) developing PINN models that predict normal turbine operation using inputs such as wind speed, its oscillation amplitude, and the frequency of wind variation; (2) simulating adversarial attacks (e.g., the Fast Gradient Sign Method, FGSM) to introduce imperceptible perturbations in control parameters that degrade power-generation efficiency; and (3) detecting deviations by comparing real-time turbine output with PINN-generated expected values, issuing alerts for prolonged deviations indicative of cyber-attacks. This research contributes to securing green energy infrastructure, ensuring the economic viability of wind farms, and fostering resilience in the renewable energy sector.
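The detection step can be illustrated with a minimal sketch: compare observed turbine output against a model's expected output and raise an alert only when the shortfall persists, which is what distinguishes a stealthy long-term attack from transient noise. The thresholds and data below are illustrative, and the PINN itself is abstracted away as a precomputed expected series.

```python
# Minimal sketch of deviation-based detection: alert when observed
# output falls short of the model-predicted output by more than rel_tol
# for `window` consecutive samples. Illustrative thresholds and data.

def persistent_deviation(observed, expected, rel_tol=0.03, window=5):
    """True if the relative shortfall exceeds rel_tol for `window`
    consecutive samples, suggesting stealthy long-term degradation."""
    streak = 0
    for obs, exp in zip(observed, expected):
        if exp > 0 and (exp - obs) / exp > rel_tol:
            streak += 1
            if streak >= window:
                return True
        else:
            streak = 0
    return False

expected = [1.00] * 10  # normalized predicted power output
healthy  = [0.99, 1.00, 0.98, 1.01, 0.99, 1.00, 0.99, 1.00, 0.98, 1.00]
attacked = [0.99, 0.95, 0.94, 0.95, 0.94, 0.95, 0.94, 0.95, 0.94, 0.95]
```

The healthy series stays within tolerance and triggers no alert; the attacked series, with a persistent ~5% shortfall, does.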
Ainaz Jamshidi, Janakan Sivaloganathan, Andriy Miranskyy, and Lei Zhang
Automating quantum software maintenance: flakiness detection and root cause analysis
Flaky tests, which pass or fail inconsistently without code changes, are a major challenge in software engineering in general and in quantum software engineering in particular due to their complexity and probabilistic nature. In this study, our goals are twofold. First, building on prior manual analysis of 14 quantum software repositories, we expanded the dataset using embedding transformers and cosine similarity. Second, we propose an automated framework for quantum flaky test detection and root cause analysis, overcoming the limitations of manual methods. We conducted experiments with Large Language Models (LLMs) from the OpenAI GPT and Meta LLaMA families to assess their ability to detect and classify flaky tests from code and issue descriptions. Embedding transformers proved effective: we identified 25 new flaky tests, expanding the dataset by 54%. Top LLMs achieved an F1-score of 0.8871 for flakiness detection but only 0.5839 for root cause identification. Future work will focus on improving root cause detection and developing automatic flaky fixes.
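The dataset-expansion step can be sketched as follows: embed known flaky-test reports and candidate issues, then flag candidates whose cosine similarity to any known report exceeds a threshold. The vectors below are tiny stand-ins; in practice the embeddings would come from an embedding transformer, and the threshold is an assumed value.

```python
# Sketch of similarity-based dataset expansion: flag candidate issues
# whose embedding is close (cosine similarity) to a known flaky-test
# report. Vectors and threshold are illustrative stand-ins.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def flag_candidates(known, candidates, threshold=0.9):
    """Indices of candidates similar to at least one known flaky report."""
    return [i for i, c in enumerate(candidates)
            if any(cosine(c, k) >= threshold for k in known)]

known_flaky = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
candidates  = [[0.85, 0.15, 0.05],  # close to the known flaky reports
               [0.0, 0.1, 0.95]]    # unrelated
hits = flag_candidates(known_flaky, candidates)
```

Flagged candidates would then be manually verified before joining the dataset, matching the study's expansion of the prior manual labels.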
Mahsa Radnejad and Carolyn Seaman
Games Designed to Teach or Practice Technical Debt Management: A Systematic Mapping Study
Software companies increasingly seek effective ways to train employees in managing technical debt (TD), which refers to accumulated software issues that result from shortcuts and other suboptimal decisions during software development and maintenance. Managing technical debt is a key challenge in software development, and games have emerged as an innovative approach to teaching and practicing Technical Debt Management (TDM) practices. This systematic mapping study examines such games. The primary goal is to identify all relevant games that support TDM learning and practice, regardless of the target audience. Additionally, we will investigate evidence of the effectiveness and impact of these games and strategies. To achieve this, we have conducted a search of potentially relevant sources, resulting in 1,892 unique papers published between 2010 and 2024. After a careful selection process, we have included 9 primary studies describing games focused on TDM. This mapping study encompasses both peer-reviewed papers and grey literature (e.g., websites, blogs) to identify all possible relevant games and strategies. The outcomes of this review will provide insights into the strengths and gaps in current game-based learning approaches for TDM, highlighting areas for future research. This study is also expected to offer meaningful insight to support researchers in developing new and effective games for TDM.
Dongchan Kim, Khushdeep Kaur, Ainaz Jamshidi, and Lei Zhang
Identify flakiness in quantum code: a machine learning approach
As quantum computers emerge, the importance of software engineering for quantum systems grows, and so do its unique challenges. One critical issue is test flakiness—unpredictable behavior in test outcomes—which becomes even more pronounced due to the inherent indeterminacy of quantum mechanics. While extensive research has been conducted on flaky tests in classical computing, quantum flakiness remains unexplored. In this work, we propose to detect flakiness in quantum software using machine learning approaches. We start with a keyword-search-based approach on GitHub quantum software repositories. We evaluate various machine learning models, such as Extreme Gradient Boosting (XGB), Decision Tree, Random Forest, K-Nearest Neighbors, and Support Vector Machines. We also conduct experiments on both balanced and imbalanced datasets to simulate real-world scenarios in which non-flaky cases are more common. Our experimental results demonstrate that these approaches can successfully identify flaky tests in quantum software, with XGB achieving the highest F1 score of 0.884, even in imbalanced scenarios. Our results show the effectiveness of classical machine learning models in detecting quantum flakiness. In the future, we aim to expand the dataset with more quantum flaky and non-flaky tests to reflect real-world scenarios. We also plan to explore other techniques, such as large language models, to detect and classify quantum flaky tests and develop a more effective method to predict quantum flakiness.
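The keyword-search step that seeds the dataset can be sketched as a simple first-pass labeler over repository text (commit messages, issue bodies, test code). The keyword list and sample strings below are illustrative assumptions, not the study's actual search terms.

```python
# Illustrative sketch of keyword-based seeding: flag repository text
# that mentions common flakiness indicators. Keywords and samples are
# hypothetical, not the study's actual search configuration.

FLAKY_KEYWORDS = ("flaky", "flakiness", "intermittent",
                  "non-deterministic", "nondeterministic",
                  "passes sometimes")

def looks_flaky(text):
    """Crude first-pass label: does any flakiness keyword appear?"""
    lowered = text.lower()
    return any(kw in lowered for kw in FLAKY_KEYWORDS)

samples = [
    "Fix intermittent failure in test_measurement_counts",
    "Refactor circuit builder and update docs",
    "test_sampling is flaky on the simulator backend",
]
labels = [looks_flaky(s) for s in samples]
```

Matches found this way would then feed the feature extraction and classifiers (XGB, Decision Tree, etc.) described above, with manual review filtering out false positives.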