COEIT Research Day Posters 2024

The College of Engineering and Information Technology (COEIT) Research Day Talks are scheduled 10:30 am to 12:30 pm in the ITE building. The full agenda for the event is available here.

See below for more information about the poster presentations by our faculty and students. Topics and abstracts are presented in thematic areas.

Accessibility | AI/ML | Bioengineering | Education | Environment & Sustainability | HCI/Visualization | Healthcare | IoT | Manufacturing | Mathematical Foundations

Accessibility

In my autoethnographic research I experiment with improving critical reading and recall of weekly assigned papers in my Human-Centered Computing Ph.D. program using peripheral interaction during my everyday activities. Peripheral interaction is engaging with interactive systems in the periphery of our attention that are embedded into our routines and environment. My process first involves reading each assigned paper on a main computer monitor while peripherally viewing reading questions on an adjacent monitor to help focus my reading. As I read, I use my mouse as a visual pacer, highlight passages from each paper, type my own notes, and answer the questions I was viewing peripherally while reading. All mouse movements and keystrokes cause the peripheral monitor to play back slides of the reading questions. After reading each paper and setting up the content to play back, I replay the highlighted text from the papers, my notes about the papers, and the answers to the reading questions throughout the rest of the week peripherally while I am riding an exercise bike, using a treadmill, practicing guitar, working on my computer, and playing with my cats. Each of these activities uses sensors, hardware, and software to allow my movements to interact with the content on a peripheral laptop screen. To test my critical reading and recall of the assigned papers before class, I type answers to the reading questions without using any notes by performing the same routine activities using custom chin interfaces for text input. This research contributes novel practices for peripheral interaction, slow technology, ubiquitous computing, and learning.

Blind and low-vision (BLV) university students frequently encounter social and institutional barriers that hinder their ability to navigate campus environments effectively. Our study employs a participatory design (PD) approach to investigate the process and efficacy of co-designing 3D-printed tactile campus maps to facilitate campus navigation for BLV students. We further supplement our co-design with input from staff of the university’s Student Disability Services (SDS). Key contributions include insights into the specific navigational challenges and preferences of BLV students within a university setting, the complexities and benefits of co-design, and SDS staff perspectives on 3D-printed tactile maps and their efficacy on campus.

Accessible navigation for individuals with mobility challenges in urban environments is crucial for ensuring equal access to public spaces. Various systems have been proposed to facilitate efficient travel for individuals with restricted mobility, but no existing system uses wheelchair data collected directly from users to provide accessible navigation. Our approach of leveraging crowd-sourced data and updating accessibility information in real time presents a promising way to enhance navigation experiences. This research explores the potential of crowd-sourced data collected directly from real wheelchair users in facilitating accessible navigation for individuals with mobility impairments. Through a review of existing literature, case studies, and interviews with numerous wheelchair users regarding their path preferences, we explore the utilization of crowd-sourced data in mapping mobility barriers, identifying accessible routes, and enhancing navigation assistance. We analyze the challenges and opportunities associated with crowd-sourced data, including efficiency, comprehensiveness, and inclusivity. We propose and demonstrate strategies to optimize the collection, processing, and utilization of crowd-sourced data to create more inclusive and accessible navigation solutions.

Our crowd-sourced data collection system, MyPath, collects trip data directly from the user’s smartphone and classifies the surface type along with other accessibility information. Considering user preferences, demographic information, weather conditions, time of day, and the surface type classified from vibration data, we can recommend a more accessible path to navigate. We incorporated feedback from real users into every phase of the design and development process. By developing a system for collecting crowd-sourced information that considers all the factors related to accessibility, we can advance the development of accessible navigation technologies, ultimately providing greater independence and mobility for individuals with disabilities in urban environments.

This research project aims to explore (1) how a youth program empowers blind and low-vision (BLV) youth, (2) how participatory design and community engagement approaches can be used to co-design empowering learning experiences for BLV youth, and (3) how to build a relationship with a community partner to contribute to the Human-Computer Interaction (HCI) accessibility field. To achieve this, the research team observed two workshops and interviewed the youth transitions program coordinator. This immersive experience allowed the team to witness firsthand how the community partner empowers BLV youth by understanding their program design, how they cater to students’ assets and needs, instructor-student and family-student dynamics, and how they foster independence. Through thematic analysis of insights from relevant research, observations, and interview data, the study also provides an understanding of how participatory design might encourage innovation, collaboration, and empowerment among BLV youth and instructors. Our findings demonstrate how this particular youth program creates a safe space where BLV youth can share their experiences and continue shaping their identities. This poster also includes plans for the future of the project.

AI/ML

Biclustering is a problem in machine learning and data mining that seeks to group together rows and columns of a dataset according to certain criteria. In this work, we highlight the natural relation that quantum computing models like boson and Gaussian boson sampling (GBS) have to this problem. We first explore the use of boson sampling to identify biclusters based on matrix permanents. We then propose a heuristic that finds clusters in a dataset using Gaussian boson sampling by (i) converting the dataset into a bipartite graph and then (ii) running GBS to find k-densest sub-graphs. Our simulations for the above proposed heuristics show promising results for future exploration in this area.
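As an illustration of step (i) of the heuristic above, the sketch below converts a data matrix into a bipartite row-column graph and then applies a classical greedy peeling heuristic as a stand-in for the GBS k-densest-subgraph sampler. The function names and the 0.5 edge threshold are illustrative assumptions, not the authors' implementation.

```python
# Classical sketch of the bicluster heuristic's graph step: the dataset
# matrix becomes a weighted bipartite graph (rows vs. columns), and a
# greedy low-degree peeling heuristic stands in for the GBS sampler.

def matrix_to_bipartite(data, threshold=0.5):
    """Connect row i to column j whenever |data[i][j]| exceeds threshold."""
    edges = set()
    for i, row in enumerate(data):
        for j, v in enumerate(row):
            if abs(v) > threshold:
                edges.add((i, j))
    return edges

def greedy_densest(edges, n_rows, n_cols, k):
    """Greedily remove lowest-degree vertices until k vertices remain."""
    rows, cols = set(range(n_rows)), set(range(n_cols))
    live = set(edges)
    while len(rows) + len(cols) > k:
        deg = {}
        for (i, j) in live:
            deg[('r', i)] = deg.get(('r', i), 0) + 1
            deg[('c', j)] = deg.get(('c', j), 0) + 1
        for i in rows:                 # isolated vertices have degree 0
            deg.setdefault(('r', i), 0)
        for j in cols:
            deg.setdefault(('c', j), 0)
        side, v = min(deg, key=deg.get)
        if side == 'r':
            rows.discard(v)
            live = {e for e in live if e[0] != v}
        else:
            cols.discard(v)
            live = {e for e in live if e[1] != v}
    return rows, cols
```

The surviving row and column sets together identify a dense submatrix, i.e., a candidate bicluster.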

Mobile autonomous systems refer to robots, ground vehicles, drones, and other platforms that can move autonomously. Such a system relies on sensors such as cameras and GPS to learn the state of the system itself as well as its surroundings, and often on machine learning (such as computer vision) and AI for autonomous decision making (e.g., to navigate). Deception is a serious threat to securing such systems because both the input to sensors and the machine learning models used in decision making can be misled by injected deceptive information. This project conducts a comprehensive survey of existing deception-based attacks against mobile autonomous systems and identifies gaps in the existing research. Compared to existing surveys, ours considers various degrees of sophistication in attack strategies and in the type of data being altered. Our work also identifies gaps in existing research, such as the relatively little work on multi-stage attacks and on attacks that influence long-term decision making. Our work can be used both to better understand deception-based attacks and to develop better defense strategies.

Learning causal relationships solely from observational data provides insufficient information about the underlying causal mechanism, and the search space of possible causal graphs can grow exponentially, particularly for greedy approaches that use a score-based search over the space of equivalence classes of graphs. Prior causal information, such as the presence or absence of a causal edge, can be leveraged to guide the discovery process towards a more restricted and accurate search space. In this study, we present a knowledge-guided, score-based causal discovery approach built on an efficient search strategy that uses both observational data and structural priors (causal edges) as constraints to learn the causal graph. The initial part of the study applies knowledge constraints that leverage any of the following prior information between any two variables: the presence of a directed edge, the absence of an edge, and the presence of an undirected edge. Next, we plan to propose a novel score function that makes decisions based on both data and prior knowledge, together with an efficient search strategy that sorts edges by their individual scores, further narrowing the exponential search space to a much smaller one. We plan to evaluate our approach extensively across multiple settings on both synthetic and benchmark real-world datasets. Preliminary experiments for the initial phase of the study demonstrate that structural priors of any type and amount are helpful, guiding the search process towards improved performance and early convergence. Our broader goal is to support accurate and efficient discovery of causal graphs in the presence of prior knowledge using the proposed score-based search strategy.
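To make the three kinds of structural priors concrete, the sketch below shows how required-edge, forbidden-edge, and undirected-edge constraints can prune candidate edges during a score-based search. The variable names and the toy constraint sets are hypothetical, not the study's actual code.

```python
# Three kinds of structural priors between variables, as described above.
REQUIRED = {("A", "B")}                # a directed edge A -> B must appear
FORBIDDEN = {("C", "A")}               # no edge C -> A may appear
UNDIRECTED = {frozenset({"B", "C"})}   # some edge between B and C, either direction

def edge_allowed(u, v):
    """May the search add the directed edge u -> v without violating priors?"""
    if (u, v) in FORBIDDEN:
        return False
    if (v, u) in REQUIRED:             # would contradict a required reverse edge
        return False
    return True

def graph_satisfies(graph_edges):
    """Does a finished graph honor all three kinds of priors?"""
    edges = set(graph_edges)
    if not REQUIRED <= edges:
        return False
    if FORBIDDEN & edges:
        return False
    for pair in UNDIRECTED:
        a, b = tuple(pair)
        if (a, b) not in edges and (b, a) not in edges:
            return False
    return True
```

A score-based searcher would call `edge_allowed` before scoring each candidate move, so that constrained edges never enter the search space at all.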

Audio spoofing, both AI-generated (deepfakes) and non-AI-generated (mimicry and replay attacks), is a serious social challenge in the era of online communications. Different types of representation learning have been explored to detect spoofed audio, including but not limited to acoustic features and deep neural pre-trained networks. Augmenting acoustic representations with linguistic (phonetic and phonological) features has shown promise in improving spoofed audio detection. Applying this linguistic method at scale is the main motivation behind automated Audio Linguistic Data Augmentation for anti-Spoofing (ALDAS). ALDAS is trained with linguistic labels designed and extracted by sociolinguistics experts. Our findings indicate that ALDAS enhances the performance of acoustic representations in spoofed audio detection. ALDAS has also been validated by the sociolinguistics experts.

Attention mechanisms, known for their prowess in processing sequential data, are primarily explored within the realm of natural language. However, recent studies have demonstrated that their application extends to astrophysical spectra, another form of sequential data. The structured nature of stellar spectra makes them a compelling dataset for Transformer-based models. This similarity opens new avenues for leveraging these advanced algorithms in astrophysics, promising enhanced analysis and understanding of spectral data through the focused examination of sequences in stellar spectra with transformers.

With recent breakthroughs, quantum computing has become an approaching reality rather than a distant promise. Quantum computing applications offer revolutionary potential, known as quantum advantage, across multiple domains including artificial intelligence (AI), optimization, healthcare, energy, and space. The power of quantum computing relies on novel quantum algorithms, quantum software, and hardware. Unlike classical software, quantum software has unique features arising from quantum mechanics, such as superposition and no-cloning. This opens a new research field: quantum software engineering (QSE). While the software engineering (SE) research community became aware of this need in 2019, we noticed the lack of a comprehensive investigation of state-of-the-art technologies and tools to improve quantum software quality. Testing and debugging are the two most effective approaches to assuring software quality in classical SE. In QSE, testing and debugging quantum programs become challenging due to quantum mechanics. While we can leverage some best practices from the classical world, new techniques and tools are needed to address the concerns in QSE.

In this study, we first conduct a survey of state-of-the-art technologies and tools for testing and debugging quantum software, including but not limited to quantum bug pattern analysis and detection, quantum software testing techniques and their classification, and quantum debugging techniques. We then provide our vision and insights on testing and debugging quantum software in terms of the challenges and opportunities for improving quantum software quality. This survey has the potential to foster a research community committed to developing novel methods and tools for QSE.

Achieving fairness in AI systems is a critical yet challenging task due to conflicting metrics and their underlying societal assumptions, which might be difficult to assess, e.g., the extent to which racist and sexist societal processes cause harm and the extent to which we should apply affirmative corrections. Moreover, these measures often contradict each other and might make the AI system less accurate. This work proposes a unifying human-centered fairness framework to guide stakeholders in navigating these complexities, including their potential incompatibility and the trade-offs they pose between fairness and predictive performance. Our framework acknowledges the spectrum of fairness definitions (individual vs. group fairness, infra-marginal (politically conservative) vs. intersectional (politically progressive) treatment of disparities), allowing stakeholders to prioritize desired outcomes. Stakeholders can define their priorities by assigning weights to various fairness considerations (e.g., individual fairness, infra-marginal fairness, intersectional fairness), trading them off against each other and against predictive performance. Our learning algorithms then ensure the resulting AI system reflects the stakeholder-chosen priorities. We performed preliminary experiments to validate our methods on the UCI Adult census dataset. In ongoing and future work, we will apply our methodology to help stakeholders navigate fairness trade-offs in fake news detection and in the prioritization of healthcare resources such as Medicaid.
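The stakeholder-weighted trade-off described above can be sketched as a scalar objective. The demographic-parity gap used here is a deliberately simple stand-in for the paper's fairness measures (it assumes exactly two groups), and the weight `w_group` represents a hypothetical stakeholder choice rather than anything from the study.

```python
# Minimal sketch of a stakeholder-weighted fairness objective:
# predictive error plus a weighted group-fairness penalty.

def demographic_parity_gap(preds, groups):
    """|P(pred=1 | group a) - P(pred=1 | group b)| for exactly two groups."""
    rates = {}
    for g in set(groups):
        sel = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(sel) / len(sel)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

def weighted_objective(error, preds, groups, w_group=1.0):
    """Lower is better; w_group encodes the stakeholder's fairness priority."""
    return error + w_group * demographic_parity_gap(preds, groups)
```

A learning algorithm would then select the model minimizing this objective, so that a larger `w_group` trades predictive performance for group fairness, as the framework intends.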

Storytelling is an innate part of language-based communication. Today, current events are reported via Open Source Intelligence (OSINT). Scattered and fragmented sources such as these can be better understood when organized as chains of event plot points, or narratives, that can communicate end-to-end stories. Though search engines can retrieve aggregated event information, they lack the ability to sequence relevant events together to form narratives about different topics. I propose an AI system inspired by general and domain-specific narrative theories that uses knowledge graphs to represent, chain, and reason over narratives from disparately sourced event details, to better comprehend convoluted, noisy information about critical events during intelligence analysis.

In this study, we present a universal nonlinear numerical scheme design method for shock capturing enabled by multi-agent reinforcement learning (MARL). Unlike contemporary supervised-learning-based and reinforcement-learning-based approaches, no reference data or special numerical treatments are used in the MARL-based method developed here; instead, a first-principle-like approach using fundamental computational fluid dynamics (CFD) principles, including total variation diminishing (TVD) and k-exact reconstruction, is used to design nonlinear numerical schemes. A third-order finite volume scheme is employed as the workhorse to test the performance of the MARL-based nonlinear numerical scheme design method. Numerical results demonstrate that the new MARL-based method is able to strike a balance between accuracy and numerical dissipation in nonlinear numerical scheme design, and outperforms the third-order MUSCL (Monotonic Upstream-centered Scheme for Conservation Laws) with the van Albada limiter for shock capturing. Furthermore, we demonstrate for the first time that a numerical scheme trained on one-dimensional (1D) Burgers' equation simulations can be directly used for numerical simulations of both 1D and 2D (two-dimensional constructions using the tensor product operation) Euler equations. In general, the working environment of the MARL-based scheme design method can incorporate all types of numerical schemes as simulation machines.
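For reference, the van Albada limiter used in the MUSCL baseline has the closed form phi(r) = (r^2 + r) / (r^2 + 1). The sketch below shows that limiter and a limited MUSCL-style face reconstruction on a uniform grid, as a minimal illustration of the baseline rather than the MARL-designed scheme.

```python
# The van Albada slope limiter and its use in a limited second-order
# MUSCL-style reconstruction of the left state at the i+1/2 cell face.

def van_albada(r):
    """phi(r) = (r^2 + r) / (r^2 + 1); phi(1) = 1, and 0 for r <= 0 (TVD)."""
    if r <= 0.0:
        return 0.0
    return (r * r + r) / (r * r + 1.0)

def muscl_left_state(u_m1, u_0, u_p1):
    """Limited left face state at i+1/2 from cell averages at i-1, i, i+1."""
    num = u_p1 - u_0          # downwind slope
    den = u_0 - u_m1          # upwind slope
    if den == 0.0:
        return u_0            # flat upwind gradient: fall back to first order
    r = num / den             # smoothness ratio
    return u_0 + 0.5 * van_albada(r) * den
```

On smooth monotone data the reconstruction approaches central interpolation (phi near 1), while at a local extremum phi vanishes and the scheme reverts to first order, which is the dissipation-accuracy compromise the MARL-designed schemes aim to improve upon.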

Bioengineering

We present a re-evaluation of filamentous fungal growth dynamics. Specifically, we bifurcate the classical branching rate into two distinct rates: germination rate and true branching rate. Our results reveal that folding germination rate into the classical branching rate masks significant strain- and condition-specific variance in the true branching rate. We support our approach with evidence gathered from experiments conducted on various strains of the model organism Aspergillus nidulans: where the classical branching rate remains constant, we find marked variation in the true branching rate. Our findings prompt a critical reconsideration of historical data, where re-analysis would reveal previously unrecognized strain-specific morphologies. Our proposed method for calculating branching rate will enable a more complete understanding of filamentous fungal growth dynamics and morphology. By identifying previously overlooked subtleties in fungal growth patterns, our research offers renewed value in existing datasets that, under classical methods, had been exhausted. Additionally, our insight can be applied to bioprocess models to more accurately represent mycelial growth at low tip-count conditions.

A barrier to cell-free protein synthesis (CFPS) scale-up is the lack of options for process monitoring of energy substrates. CFPS has the potential to replace live-cell bioreactors for rapid, large-scale manufacturing of biologics. A vital substrate is adenosine triphosphate (ATP) because it is the energy currency in CFPS. Existing ATP quantification methods are resource-intensive, making them impractical for continuous ATP monitoring. To address this challenge, we designed a low-cost sensor, at ≈$15 per chip, for online, automated ATP monitoring. The recognition element is a fluorescent protein that selectively binds ATP, leading to changes in fluorescence intensity upon ATP binding. The protein is immobilized in a microfluidic chip and interfaced with the Center for Advanced Sensor Technology’s proprietary biochemical analyzer, an adaptable platform for collecting and processing signal data. One microfluidic chip can be used for 20 consecutive samples at a maximum rate of one sample every 10 minutes. The ATP sensor’s broad detection range of 150 µM to 10 mM ATP was validated by comparison with a standard luciferase-based ATP assay. Continuous ATP monitoring could enable further CFPS process development efforts to improve reaction longevity and product yields.

The rising shift towards distributed production of biologics requires new sensors that are simple, miniaturized, and cost-effective, and that enable real-time monitoring for process control. Such alternatives to the commonly used analytical techniques involving immobilized metal affinity chromatography (IMAC) are highly coveted. We have developed a microfluidic electrochemical sensor consisting of a nickel-nitrilotriacetic acid (Ni-NTA) complex immobilized on a polymer and carbon-based nanocomposite, which strongly adsorbs the histidine tag (His-tag) of binding proteins. Since these proteins have a high affinity for a corresponding analyte, the binding process is rapid and specific. The amount of bound analyte can be quantitatively measured from the change in charge transfer resistance (RCT). As a proof of concept, glucose binding protein (GBP) is currently being tested with spiking studies over a wide range of 0.05-0.4 mg/mL. The sensor has shown high sensitivity and a low detection limit of 0.02 mg/mL of GBP. It has also exhibited a swift response time, high selectivity, and month-long durability. Currently, efforts are underway to improve ligand stability, increase loading capacity, reduce leaching, and evaluate regeneration of its functionality. This prototype can be readily extended to other His-tagged proteins and looks promising as a platform technology for rapid quantitative analysis.

Process Analytical Technology (PAT) and Quality-by-Design (QbD) approaches focus on continuously monitoring and controlling the parameters associated with producing high-quality finished biopharmaceutical products, and on ensuring that the manufactured products are safe and effective for human consumption. In the pharmaceutical industry, monitoring of density and porosity is among the important Critical Quality Attributes (CQAs). PAT methodologies provide a comprehensive understanding of the manufacturing process, enabling the industry to maintain the consistency and quality of drug products. The production of biopharmaceutical products in bulk quantities creates a high probability of batches that are contaminated or disrupted during packaging. As a result, it is imperative to employ PAT technologies to prevent such occurrences. Visual-based techniques are among the most effective and widely used PAT technologies: they are easy to use, non-destructive, cost-effective, and can provide real-time, in-process measurements. However, they need state-of-the-art updates in terms of digital automation and Internet of Things (IoT) support, which would pave the way for AI/ML integration in the future.

This research aims to utilize portable spectroscopy to provide a reliable and practical solution for biopharmaceutical product analysis. The custom-designed portable spectroscopy device is battery-powered and interfaced with an Android application, allowing in-process measurements to be taken without the need for expensive laboratory equipment. The reflectance data for different biopharmaceutical products showed improved analytical capability compared to the traditional camera-based CIE color space mapping technique. In the future, this device can be adapted for UV- and NIR-range analytics through modular exchange of micro-spectrometers and Android app-based multiplexing and automation.

Continuous monitoring of transdermal CO₂ emissions gives insight into the efficacy of alveolar and capillary gas exchange, assisting in the diagnosis of respiratory issues such as sleep apnea, COVID-19, opioid overdose, and pulmonary diseases. Although transcutaneous CO₂ monitoring sensors are commercially available today, they have significant limitations that make them suboptimal for many medical conditions and for non-hospital settings. These limitations include frequent recalibration due to risky local heating to 42°C, inaccuracies during low-perfusion states, and heavy equipment. Our redox-mediated transcutaneous CO₂ sensing technology provides an alternative assessment method that avoids the constraints of commercially available sensors, with the additional benefit of heatless and cost-effective monitoring. Preliminary results published by our research team illustrate a skin-compliant polymer composite offering a highly selective and sensitive CO₂ response (Ahuja et al., 2023). The sensing discoveries made in this preliminary study underpin the experimental approach and the intended real-time application of our research. Our research focused on optimizing sensor selectivity and performance under variable ambient conditions. The sensing ability of the polymer composite is ascribed to the surface’s amine-group-based CO₂ absorption-desorption capabilities, which result in a redox-mediated resistance change. The proposed sensing device exploits the resistance change caused by composite-CO₂ interactions by correlating the response data to CO₂ concentration measurements. Following critical modifications of the sensing traits, we analyzed the sensor’s resistance change when exposed to varying CO₂ concentrations under different conditions to test its sensitivity and measurement repeatability. Data from these response tests demonstrate that the sensor has a strong and specific reaction to CO₂ emitted from the skin, with a response time of 120 seconds and stable measurements under varying humidity and temperature conditions. Redox-mediated sensor technology provides non-invasive and heatless monitoring for continuous transcutaneous CO₂ measurement. The flexibility and small size of the sensor make it straightforward to fabricate a sensing wristband or to incorporate the sensor into existing smartwatches. Our device will endow daily users with easily accessible insights into their respiratory health while also minimizing the need for tedious hospital diagnostic tests.

Education

In pursuit of teaching and innovation excellence, UMBC decided to join the Center for the Integration of Research, Teaching and Learning (CIRTL) as a member in 2016. This initiative, housed within the Graduate School, forms part of the PROMISE program, which is aimed at the professional development of future faculty. To strengthen the university’s efforts in cultivating proficient teaching faculty and enhancing STEM undergraduate education, UMBC personnel proposed to the national leadership of CIRTL, and obtained approval, to extend an inaugural certification program to the undergraduate teaching assistant (UTA) and Teaching Fellow population within COEIT.

To distinguish this initiative from its graduate-level counterpart, it was designated as the CIRTL Undergraduate Associate Certificate (the first of its kind). To qualify for this certification, Teaching Fellow students (undergraduate teaching assistants) are required to complete two seminar classes: Engineering 396 (ENES 396): Fundamentals of Teaching Fellow Scholarship and Engineering 397 (ENES 397): Advanced Topics of Teaching Fellow Scholarship. These seminars, developed under the Engineering and Computing Education Program held within COEIT, facilitate multidisciplinary enrollment. Additionally, the CIRTL curriculum and other pedagogical methods were adapted and redesigned to suit the needs of undergraduate students.

Since the fall of 2022, 31 students have successfully completed their undergraduate CIRTL certification. Within this framework, assessment focused on the students’ research efficacy and on guiding Teaching Fellows in the development of comprehensive teaching philosophy statements. The results of these evaluations will be presented, demonstrating the success of the program and verifying that the desired outcomes are being achieved.

What is cheating? Are student, professor, and employer views on cheating aligned? Given the advances in technology in the past decade, and the amount of information that is a simple prompt away, we no longer believe there is an alignment.

Our research focuses on exploring what cheating is and why academic cheating may be prevalent among college students in both liberal arts and science programs. We intend to explore how students and professors view cheating from an ethical standpoint by delving into technology advancements like generative AI, language models, and anti-plagiarism tools.

Our approach will combine multiple research methods to unearth the motivations behind cheating and assess how current tech tools are holding up in the fight against it. By enlightening students and faculty on the ethical use of technology to enhance learning, rather than as a shortcut, we aim to cultivate a culture of honesty and innovation in academic settings that leverages technology rather than punishing those who use it as a fulcrum for greater learning and understanding.

Ultimately, our goal is to propose actionable strategies or a tech-based framework that reduces cheating and simultaneously boosts learning and retention. We’re not just looking to penalize dishonesty but to encourage genuine understanding and application of knowledge. By bridging ethical considerations with technological advancements, our research seeks to make a significant contribution to the ongoing conversation on academic integrity, offering insights and practical tools for educators, policymakers, and students alike.

Understanding how to design and implement equity-based approaches to technology-rich learning can lead to increased and diversified participation in computing. Do-it-yourself (DIY) and maker approaches to interactive technology learning have been hailed as potential equalizers of science, technology, engineering, and math (STEM) education for underserved youth, a narrative challenged by scholarship that has shown that if not designed carefully, making can be exclusionary and hegemonic. Over the past three years, we have conducted a participatory design research study, working to develop a localized curriculum for technology-rich learning experiences implemented in recreation centers in two Eastern United States cities. During this study we have observed and begun reporting outcomes in three areas: equity-based strategies for community educators, localized survey tools for use in informal education settings, and the impact of government infrastructure on the structure of learning programs in this setting. We will share our insights into best practices for equity-based education, strategies for implementing surveys for youth in informal settings, and considerations around adjusting tech-rich learning for unique infrastructure restrictions.

Can one-on-one support from undergraduate mentors be the key to unlocking tech education for marginalized communities? As an effort to alleviate educational, financial, and health disparities in marginalized communities, this project explores the transformative potential of B2G Community Computing Learning Centers, a program developed at the University of Maryland, Baltimore County, and piloted in Anne Arundel County. The program provides in-person computer and tech training to single mothers in low-income communities. By utilizing a “train-the-trainer” approach, B2G prepares women to become computational community leaders. The curriculum integrates diverse computing topics (creative technology, cyber hygiene, digital literacy, and entrepreneurship), and the centers are co-designed with the women. Culturally relevant, inclusive, and trauma-informed pedagogies are used to foster deeper connections and understanding. The B2G initiative goes beyond technical skills by engaging undergraduate students as mentors to cultivate a safe and supportive learning environment with participants. Undergraduate mentors are trained to be sensitive to participants’ potential past experiences and to foster a growth mindset by encouraging participants to embrace challenges and learn from mistakes. This space promotes cultural exchange among participants, mentors, and instructors, who share life experiences and advice, building a strong sense of belonging, inclusion, and computing identity. Early findings indicate that this collaboration empowers undergraduate mentors, developing their interpersonal skills, critical thinking, and self-efficacy in the computing field.

The disparity in students’ access to makerspaces is significant, ranging from universities that lack the resources to have one to institutions with more makerspaces than libraries, contradicting the expectation that the Maker Movement would provide access to technology for everyone. There exists a need to provide spaces for low-income families to experience the power of the Maker Movement and to develop key life skills, such as focusing on the process rather than the end product, seeing mistakes as opportunities to learn, and strengthening problem-solving and tinkering abilities. These activities encourage the initial exploration of computing and STEM ideas in an informal, fun, and creative space with a low knowledge barrier to entry. Co-designing these spaces with the community may aid in the social mobility of its members. After launching such a program promoting creative technology with maker tools, participants described the experience as an opportunity to build their confidence, to have hope for the future, and to serve as therapy for coping with their daily challenges.

Our research investigates the experiences of instructors who teach cybersecurity courses to older adults (aged 55 to 79), with the goal of determining best practices for supporting cyber hygiene in this community. We conducted a qualitative study of 11 instructors, examining their motivations for teaching older adults, generational differences, the impact of memory on older adult learners, the effects of shame, the status of cyber hygiene, the importance of human connection, teaching challenges, and specific instructor recommendations for teaching older adults. Semi-structured interviews were conducted with the participants, and transcripts were qualitatively coded to identify themes. This study will provide an analysis of best practices, suggested teaching improvements, and gaps in content coverage or methods, which can be disseminated to current instructors and other interested parties aiming to better serve older adults’ cybersecurity needs.

Current visualizations in healthcare do not make optimal use of the abundance of data now accessible. Prior work by Dr. Ordonez proposed, implemented, and studied a visualization tool based on radar plots that combine several parameters at each time interval, measured from 8 babies over a 24-hour period. In this project, we build upon that visualization system and the proposed multivariate time series data representation (MTSA) by performing exploratory data analysis on MTSAs and on the radar plots created by the visualization tool, using the same dataset.

Our research aims to answer two questions. First, we seek to identify approximate combinations of values for six vital sign parameters, DBP, HR, RR, SatO2, SBP, and FiO2, that are significant or uncommonly similar, indicating the presence or absence of Patent Ductus Arteriosus (PDA) in neonates. Using clustering techniques, we analyze data from each 30-minute interval over a 48-hour period for eight babies to pinpoint the approximate critical parameter-value combinations (collections of radar plots) associated with babies with PDA. Second, we aim to determine the sequences of parameter-value combinations that are indicative of PDA. By analyzing the temporal aspects of parameter-value combinations, we hope to improve diagnostic speed and accuracy in neonatal intensive care units (NICUs), providing helpful insights into neonatal health conditions through noninvasive methods. This study will provide insight into information extraction from MTSAs, which can later be tested on large datasets of multivariate physiological and clinical data.
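As an illustrative sketch of the clustering step described above, the following groups 30-minute intervals, each summarized as a six-value vector of the named vital signs, with a minimal k-means. The synthetic data, the use of k-means specifically, and the choice of k = 3 are assumptions for illustration, not the study’s actual method or data.

```python
import numpy as np

PARAMS = ["DBP", "HR", "RR", "SatO2", "SBP", "FiO2"]

def kmeans(X, k, iters=100, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each interval to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Synthetic stand-in: 96 thirty-minute intervals (48 h) x 6 parameters,
# z-scored so no single vital sign dominates the distance metric.
rng = np.random.default_rng(1)
X = rng.normal(size=(96, len(PARAMS)))
X = (X - X.mean(axis=0)) / X.std(axis=0)
centroids, labels = kmeans(X, k=3)
```

Each resulting cluster corresponds to a recurring parameter-value combination (a family of similar radar plots) whose association with PDA could then be examined.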

Community laboratories are do-it-yourself (DIY) spaces where people with and without formal training in biology, and with a range of interests and occupational backgrounds, come together to work on projects, teach and learn techniques, and discuss current issues through the lens of life science. In the past 20 years, synthetic biology has made genetic engineering accessible, leading to the proliferation of community labs in the United States and around the world. In this project, we investigate the possibilities and shortcomings of community labs as transdisciplinary sites of community-based STEM learning, inquiry, and activism. Our research shows that community labs operationalize an open-source and DIY ethos to support hobbyists, entrepreneurs, people seeking career advancement or change, homeschool and alternative education communities, activists, and artists. Also known as biomakerspaces, community labs enable accessible and meaningful engagement with emerging science and technology, much like traditional makerspaces, with the added layer of organic connection to local communities.

There is a rich tradition of ethnographic research conducted in science spaces such as laboratories that deepens and challenges our understandings of the world as we think we know it. Through participant observation at the Baltimore Underground Science Space (BUGSS), we develop an understanding of publics wherein members can debate and create knowledge and culture, in particular pertaining to DNA, an enigmatic molecule our participants have described as “the base of us all.” Our presentation will focus on the aesthetics of DNA publics in a community lab. Aesthetics explore the nature and meaning of a given issue through personal experience and systems of shared meaning. In our research, we begin with bioart as a community-engaged transdisciplinary method to explore the aesthetics of emerging technology, which helps inform areas of focus for future research.

Back to Top

Environment & Sustainability

Multivariate time series analysis is an important yet understudied area. To study extreme climate phenomena such as snow melt in the polar regions, variation is often captured across several features. We propose a multivariate time series anomaly detection framework with instance-level feature attribution for detecting extreme events and studying climate change trends in the polar regions. Our framework leverages a Variational Autoencoder (VAE) integrated with dynamic thresholding and correlation-based feature clustering, significantly improving the detection of time periods marked by extreme climate conditions. It enhances the VAE’s ability to capture local dependencies and learn temporal relationships within climate data. The performance of our framework is demonstrated by a higher F1 score on benchmark datasets than existing methods. Our framework identifies anomalies with enhanced explainability through detailed feature attribution at both the instance and global levels. The key contributions of this study are fourfold: a novel multivariate time series anomaly detection model, Cluster-LSTM-VAE (CLV); enhanced feature representation in VAEs through clustering; a dynamic thresholding strategy that sharpens temporal precision in the anomaly detection phase; and a methodology for instance-level feature attribution that lends clarity to the detected anomalies. Our framework offers a practical tool for explainable anomaly detection in climate research, validated and co-produced with domain experts.
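The dynamic thresholding component is not specified in detail in the abstract; one common realization, sketched below, flags a time step when the reconstruction error exceeds a rolling mean plus k standard deviations of recent errors. The window size, the value of k, and the synthetic error series are illustrative assumptions, not CLV’s actual settings.

```python
import numpy as np

def dynamic_threshold(errors, window=24, k=3.0):
    """Flag time steps whose reconstruction error exceeds a rolling
    mean + k * std threshold computed over the preceding window."""
    errors = np.asarray(errors, dtype=float)
    flags = np.zeros(len(errors), dtype=bool)
    for t in range(window, len(errors)):
        hist = errors[t - window:t]
        thr = hist.mean() + k * hist.std()
        flags[t] = errors[t] > thr
    return flags

# Synthetic reconstruction errors with one injected extreme event.
rng = np.random.default_rng(0)
err = rng.normal(1.0, 0.1, size=200)
err[150] += 2.0          # extreme melt-like spike
flags = dynamic_threshold(err)
```

Because the threshold adapts to the recent error distribution, the same rule can track slow seasonal drift in reconstruction quality while still isolating abrupt extremes.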

The warming of the Arctic, also known as Arctic amplification, is driven by several atmospheric and oceanic factors. However, the details of its underlying thermodynamic causes are still unknown. Inferring the causal effects of atmospheric processes on sea ice melt using fixed treatment effect strategies leads to unrealistic counterfactual estimations. Such methods are also prone to bias due to time-varying confounding. Further, the complex non-linearity in Earth science data makes it infeasible to perform causal inference using existing marginal structural techniques. To tackle these challenges, we propose TCINet, a Time-series Causal Inference Network, to infer causation under continuous treatment using recurrent neural networks and a novel probabilistic balancing technique. More specifically, we propose a neural-network-based potential outcome model using long short-term memory (LSTM) layers for time-delayed factual and counterfactual predictions with a custom weighted loss. To tackle confounding bias, we experiment with multiple balancing strategies: TCINet with inverse probability of treatment weighting (IPTW), TCINet with stabilized weights using Gaussian Mixture Models (GMMs), and TCINet without any balancing technique. Through experiments on synthetic and observational data, we show how our research can substantially improve the ability to quantify the leading causes of Arctic sea ice melt, paving the way for causal inference in observational Earth science.

In recent years, compute-in-memory (CiM) architectures have emerged as a promising solution for deep neural network (NN) accelerators. Multiply-accumulate (MAC) is considered a de facto unit operation in NNs. By leveraging the inherent parallel processing capabilities of CiM, NNs that require numerous MAC operations can be executed more efficiently. This is further facilitated by storing the weights in static random-access memory (SRAM), reducing the need for extensive data movement and enhancing overall computational speed and efficiency. Traditional CiM architectures execute MAC operations in the analog domain, employing an analog-to-digital converter (ADC) to convert the analog MAC values into digital outputs. However, these ADCs significantly increase area and power consumption and introduce non-linearities. This work proposes the first-ever ADC-less resonant time-domain CiM (rTD-CiM) architecture, implemented on an 8KB SRAM memory array in TSMC 28 nm technology. The proposed rTD-CiM architecture demonstrates a throughput of 2.36 TOPS with an energy efficiency of 28.05 TOPS/W.

The anomalous melting process was first noticed along the outer boundary of the sea ice extent in early September 2022 and gradually engulfed the entire sea ice region by February 2023. Anomalous melting events are strongest along the outer boundary of the sea ice extent. Traditional statistical analyses do not reveal the spatial locations and temporal occurrences of anomalous events representing a loss of sea ice extent. To address this problem, we present a novel method named Convolutional Matrix Anomaly Detection (CMAD). The onset and progression of anomalous melting events over Antarctic sea ice are studied as losses in sea ice extent, which are essentially negative values, for which the traditional convolutional operation of the CNN approach is ineffective. CMAD addresses this gap with an inverse max pooling concept in the convolutional operation of a Convolutional Neural Network (CNN). Satellite images are used to establish the loss over the Antarctic region. Our analysis shows that anomalous melting patterns have affected the Weddell and Ross Sea regions significantly more than any other regions of the Antarctic, consistent with the largest disappearance of sea ice extent over these two regions. These findings support the applicability of inverse-max-pooling-based CMAD for identifying the spatiotemporal evolution of anomalous melting events over the Antarctic region. Moreover, they indicate a need to delve deeper into the role of the anomalous melting process in sea ice retreat, for a better understanding of the retreat process and for future prediction. In contrast to well-established conventional methods such as Matrix Profile and DBSCAN, CMAD exhibits superior capabilities in detecting extreme events in dynamic multidimensional data. The comparative analysis reveals that CMAD outperforms these traditional approaches, showcasing heightened sensitivity and efficacy in capturing significant variations within evolving multidimensional datasets over time. This detection accuracy positions CMAD as a valuable tool for discerning extreme events in dynamic, changing multidimensional data.
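Since sea ice losses are negative values that ordinary max pooling would discard in favor of less informative near-zero values, one plausible reading of “inverse max pooling” is a pooling operation that keeps the most negative value in each window. The sketch below illustrates that idea on a toy grid; it is an assumption about the operation, not the authors’ implementation.

```python
import numpy as np

def inverse_max_pool2d(x, size=2):
    """Keep the most *negative* value in each size x size window --
    the opposite of max pooling, so losses (negative values) survive."""
    h, w = x.shape
    h2, w2 = h // size, w // size
    x = x[:h2 * size, :w2 * size]
    windows = x.reshape(h2, size, w2, size)
    return windows.min(axis=(1, 3))

# A toy sea-ice-extent-change grid: mostly near zero, one strong loss.
grid = np.zeros((4, 4))
grid[1, 1] = -5.0   # anomalous loss of extent
grid[2, 3] = 0.5    # small gain
pooled = inverse_max_pool2d(grid, size=2)
```

In the pooled output the strong loss at (1, 1) survives into its window, whereas standard max pooling would have reported the window’s near-zero maximum instead.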

The growing availability and importance of time series data across various domains, including environmental science, epidemiology, and economics, has led to an increasing need for time-series causal discovery methods that can identify intricate relationships in non-stationary, non-linear, and often noisy real-world data. However, most current time series causal discovery methods assume stationarity and linear relations in the data, making them ill-suited for the task. Further, recent deep-learning-based methods rely on traditional causal structure learning approaches, making them computationally expensive. In this paper, we propose the Time-Series Causal Neural Network (TS-CausalNN), a deep learning technique to discover contemporaneous and lagged causal relations simultaneously. Our proposed architecture comprises (i) convolutional blocks containing parallel custom causal layers, (ii) an acyclicity constraint, and (iii) optimization using the augmented Lagrangian approach. In addition to its simple parallel design, an advantage of the proposed model is that it naturally handles the non-stationarity and non-linearity of the data. Through experiments on multiple synthetic and real-world datasets, we demonstrate the empirical proficiency of our approach compared to several state-of-the-art methods. The inferred graphs for the real-world dataset are in good agreement with domain understanding.
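The abstract does not give the form of the acyclicity constraint; a common differentiable characterization from the causal structure learning literature (the polynomial DAG-GNN variant of the NOTEARS constraint) can be sketched as follows. Its use here is an assumption for illustration, not necessarily the exact constraint TS-CausalNN optimizes.

```python
import numpy as np

def acyclicity(A):
    """h(A) = tr((I + A*A/d)^d) - d, which is 0 exactly when the
    weighted adjacency matrix A encodes a DAG (DAG-GNN variant)."""
    d = A.shape[0]
    M = np.eye(d) + (A * A) / d
    return np.trace(np.linalg.matrix_power(M, d)) - d

# A DAG (upper-triangular adjacency) vs. a 2-cycle.
dag = np.array([[0.0, 1.0, 0.5],
                [0.0, 0.0, 2.0],
                [0.0, 0.0, 0.0]])
cyc = np.array([[0.0, 1.0],
                [1.0, 0.0]])
```

During training, h(A) is driven to zero via the augmented Lagrangian, so the learned adjacency converges toward a directed acyclic graph.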

Formic and acetic acid (FA and AA) are the most abundant organic acids in the atmosphere. Because FA and AA are water-soluble and semi-volatile, they can partition into aqueous droplets and particles in the atmosphere. There is often a significant disparity between measurements and predictions of the abundance and gas-particle partitioning of both acids. Inorganic salts in aqueous aerosols and cloud droplets may alter the partitioning of organics, potentially contributing uncertainty to these predictions. To address this, we conducted experiments using a dual mist-chamber setup to investigate the effects of salt identity, concentration, and pH on the partitioning of FA and AA. The study encompassed various salts, including (NH4)2SO4, NaCl, and mixed salt solutions, under pH conditions ranging from 1 to 6 and ionic strengths ranging from 0.01 to 3 mol/kg. These measurements were quantified using Henry’s law constant, enabling us to predict “salting-in” or “salting-out” effects on the partitioning of formic and acetic acid. The findings offer valuable insights into interactions between organic acids and inorganic salts in the atmosphere, enhancing our understanding of gas-aqueous partitioning processes.

The Greenland Ice Sheet (GrIS) is experiencing significant mass loss, primarily driven by surface meltwater runoff, highlighting the critical need to understand the dynamics of supraglacial lakes. These lakes, formed by summer melt, play a pivotal role in various processes such as firn air depletion, hydrofracture events, and the formation of moulins, all of which significantly influence the ice sheet’s mass balance. To address this, we propose a Causal Attention-based Time-Series Classifier (CA-TSC) for classifying supraglacial lakes into four categories: refreeze, slow drainage, fast drainage, and buried.

The CA-TSC model selectively incorporates causal features identified in previous steps and prioritizes timestamps aligned with causal relationships. The feature selection layer dynamically selects relevant features based on causal importance and a temporal attention mechanism weighs the importance of different time lags. This approach allows for the identification of crucial temporal patterns essential for accurate classification. Furthermore, the proposed model architecture, comprising convolutional layers, LSTM layers, and dense layers, effectively captures complex spatiotemporal patterns. Leveraging year-round data from 1000 supraglacial lakes across all six sub-regions of GrIS, CA-TSC integrates causal attention with time series classification techniques to predict the future of supraglacial lakes at the end of the melt season.

This research represents a significant advancement in understanding and managing supraglacial lakes on the Greenland Ice Sheet. By incorporating causal insights and leveraging both optical (Sentinel-2) and microwave (Sentinel-1) imagery, CA-TSC offers a comprehensive framework for analyzing and predicting these critical hydrological features. The proposed causal model may facilitate informed decision-making in environmental management efforts, contributing to our understanding of the impacts of climate change on polar ice sheets and global sea level rise.

Understanding the intricate dynamics of polar regions, particularly the phenomena driving significant snow flows on the Greenland ice sheet, is crucial for grasping the nuances of sea level rise due to polar ice melting. The complexity of polar systems, characterized by diverse phenomena each represented by unique spatiotemporal data sets, poses a significant challenge. Such data sets are marked by their heterogeneity, temporal variability, spatial interdependence, and multi-dimensional, multi-resolution nature. This paper introduces a methodological framework aimed at synthesizing these disparate data sets to construct spatial neighborhoods that reflect local behaviors. Within these neighborhoods, multivariate data sets undergo anomaly detection using graph deviation networks. Our approach involves: (a) the holistic integration of sub-domain data, including both spatial and temporal elements, specifically employing spatiotemporal data from Greenland; (b) the generation of neighborhoods to preserve the inherent spatial autocorrelation and heterogeneity of the data; and (c) the application of graph deviation networks—a subset of graph neural networks—to pinpoint anomalous features and regions within Greenland. Additionally, we employ a frequent pattern mining algorithm to identify and display the co-occurring anomalous features most responsible for these irregular behaviors. Our research in the Greenland region has led to the identification of anomalous patterns, validated through comparison with ground truth data from polar science experts, thereby enabling localized analyses across Greenland. Another objective of this study is to investigate the potential disparities arising when observational data is replaced with reanalysis data in our research framework. The insights gained from this investigation could significantly enhance our understanding of the inherent uncertainties within reanalysis data, potentially sparking further research across various domains, including data assimilation services.

In the realm of Earth science, effective cloud property retrieval, encompassing cloud masking, cloud phase classification, and cloud optical thickness (COT) prediction, remains pivotal. Traditional methodologies necessitate distinct models for each sensor instrument due to their unique spectral characteristics. Recent strides in Earth science research have embraced machine learning and deep learning techniques to extract features from the spectral observations in satellite datasets. However, prevailing approaches lack architectures that account for hierarchical relationships among retrieval tasks. Moreover, given the spectral diversity among existing sensors, developing models with robust generalization capabilities across different sensor datasets is imperative. Surprisingly, there is a dearth of methodologies addressing the selection of an optimal model for diverse datasets. In response, this paper introduces MT-HCCAR, an end-to-end deep learning model employing multi-task learning to simultaneously tackle cloud masking, cloud phase retrieval (classification tasks), and COT prediction (a regression task). MT-HCCAR integrates a hierarchical classification network (HC) and a classification-assisted attention-based regression network (CAR), enhancing precision and robustness in cloud labeling and COT prediction. Additionally, a comprehensive model selection method rooted in K-fold cross-validation, the one-standard-error rule, and two newly introduced performance scores is proposed to select the optimal model over three simulated satellite datasets: OCI, VIIRS, and ABI. Experiments comparing MT-HCCAR with baseline methods, ablation studies, and the model selection affirm the superiority and generalization capabilities of MT-HCCAR.
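The one-standard-error rule mentioned above is a standard model selection heuristic: among models whose mean cross-validated error falls within one standard error of the best model’s, prefer the simplest. A minimal sketch with hypothetical per-fold errors and complexity scores (not MT-HCCAR’s actual selection procedure, which adds two further performance scores):

```python
import numpy as np

def one_se_rule(cv_errors, complexity):
    """cv_errors: (n_models, n_folds) per-fold CV errors (lower = better).
    Return the index of the least complex model whose mean error is
    within one standard error of the best model's mean error."""
    cv_errors = np.asarray(cv_errors, dtype=float)
    mean = cv_errors.mean(axis=1)
    se = cv_errors.std(axis=1, ddof=1) / np.sqrt(cv_errors.shape[1])
    best = mean.argmin()
    cutoff = mean[best] + se[best]
    candidates = [i for i in range(len(mean)) if mean[i] <= cutoff]
    # among statistically indistinguishable models, prefer the simplest
    return min(candidates, key=lambda i: complexity[i])

# Three hypothetical models x 5 folds; model 2 has the best mean error,
# but model 1 is simpler and within one standard error of it.
errs = np.array([[0.30, 0.32, 0.31, 0.29, 0.33],
                 [0.21, 0.22, 0.20, 0.23, 0.21],
                 [0.18, 0.24, 0.19, 0.23, 0.20]])
chosen = one_se_rule(errs, complexity=[1, 2, 3])
```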

Classifying subsets of high-dimensional data based on their spatial and temporal features is crucial to its analysis. Since no single clustering algorithm ensures optimal results, researchers have increasingly explored the effectiveness of ensemble approaches. Recently, ensemble clustering has attracted much attention due to its increased diversity, better generalization, and overall improved clustering performance compared to conventional approaches. However, while ensemble clustering yields promising results on simple datasets, it has not been fully explored on complex multi-dimensional spatiotemporal data. Second, its performance relies on the choice of base clustering algorithms, and for heterogeneous ensembles, choosing the right combination of base clusterings can be challenging. Third, existing approaches suffer from stability drawbacks in the structure and density of final clusters and do not support spatiotemporal data. Lastly, the problem of noise and outliers generated by misclassification in base clustering algorithms has not been fully addressed. As our contribution to this field, we propose Hybrid Ensemble Spectral Clustering (HESC), a novel approach that efficiently clusters high-dimensional spatiotemporal data and addresses the aforementioned limitations of existing ensemble approaches. HESC mitigates the drawback of cluster instability by efficiently integrating homogeneous and heterogeneous ensemble clustering approaches. Furthermore, it addresses the issue of noise and misclassification by consolidating algorithms from the object co-occurrence-based and median-partition-based consensus categories to derive final partitions of the data. When evaluated on five real-world high-dimensional spatiotemporal datasets, HESC outperforms a set of existing state-of-the-art ensemble clustering models, showing better robustness and improved stability with consistent results. This indicates that HESC can effectively capture implicit spatial and temporal semantic patterns in complex multi-dimensional data with better clustering accuracy and improved performance.
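Object co-occurrence consensus is typically built from a co-association matrix; the sketch below shows that construction for a few hypothetical base clusterings. HESC’s actual consolidation is more involved, so this is only the standard building block, not the proposed method.

```python
import numpy as np

def co_association(labelings):
    """Build the object co-occurrence (co-association) matrix: entry
    (i, j) is the fraction of base clusterings that place objects i
    and j in the same cluster."""
    labelings = np.asarray(labelings)
    m, n = labelings.shape
    C = np.zeros((n, n))
    for labels in labelings:
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / m

# Three hypothetical base clusterings of 5 objects.
base = [[0, 0, 1, 1, 1],
        [0, 0, 0, 1, 1],
        [1, 1, 0, 0, 0]]
C = co_association(base)
```

The resulting matrix acts as a pairwise similarity over objects, which a consensus step (e.g., spectral clustering on C) can then partition into final clusters.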

To mitigate climate change impacts, the strategic implementation of carbon capture technologies at significant CO2 emission points, such as industrial sites and electric power generation facilities, has become crucial. Solvent-based carbon capture solutions are pivotal in reducing atmospheric CO2 levels and enhancing air quality by capturing harmful pollutants. Amine-based solvents, favored for their efficiency in post-combustion CO2 capture, are susceptible to thermal and oxidative degradation, leading to complex emissions profiles that demand comprehensive management strategies. We develop a machine learning model designed to predict future amine emissions in real time, thereby assisting in the formulation of mitigation strategies required for the operation of capture plants. We conducted a preliminary study using data from test campaigns run at the Technology Centre Mongstad (TCM). We employed an LSTM auto-encoder model to predict amine emissions using historical data. The preliminary results were promising: we achieved a mean absolute percentage error ranging from 6.3% to 9.7% for the real-time prediction of amine emissions. As future work, we plan to explore different deep learning techniques and to apply transfer learning to improve the generalizability of machine learning models across different operational settings, especially when different CO2 capture techniques are employed.
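A minimal sketch of the data preparation for such a sequence model is shown below: the emission series is sliced into (history window, next value) pairs, the usual supervision format for predicting the next reading from recent history. The window width and the toy emission values are illustrative assumptions, not TCM data.

```python
import numpy as np

def windowed_pairs(series, width):
    """Split an emission time series into (history window, next value)
    pairs -- the supervision format for a sequence model that predicts
    the next amine-emission reading from recent history."""
    X = np.stack([series[i:i + width]
                  for i in range(len(series) - width)])
    y = series[width:]
    return X, y

# Toy emission readings (arbitrary units), one per time step.
emissions = np.array([10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.0])
X, y = windowed_pairs(emissions, width=3)   # X: (4, 3), y: (4,)
```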

Neural networks are powerful prediction tools, yet their inner workings often remain a mystery. This lack of interpretability makes it difficult to understand how they arrive at their answers, especially when solving domain-specific problems. Our work explores the feasibility of logical reasoning in predicting the impact of sea ice melt by combining neural networks with logic rules. These rules, capable of capturing expert knowledge or relationships within the data, serve as a guiding framework for the network. By embedding external domain-specific rules, the network can not only learn from data patterns but also leverage human expertise. This approach has the potential to make predictions both more accurate and easier to comprehend, offering a powerful synergy of data-driven learning and human reasoning.

We believe that by integrating logic rules into neural networks, we can address the challenges of interpretability and domain-specific understanding. Our research focuses on demonstrating the effectiveness of this approach in predicting the impact of sea ice melt, a critical issue with implications for various domains such as climate science, environmental policy, and resource management. By leveraging both data-driven learning and human expertise encoded in logic rules, we aim to provide insights that are not only accurate but also actionable. Through our investigation, we seek to contribute to the advancement of interpretable and domain-aware predictive modeling, ultimately leading to more informed decision-making and solutions to real-world challenges.

Earth’s ice sheets are the largest contributor to sea level rise. For this reason, understanding the flow and topology of ice sheets is crucial for the development of accurate models and predictions. In order to aid in the generation of such models, ice penetrating radar is used to collect images of the ice sheet through both airborne and ground-based platforms. Glaciologists then take these images and visualize them in 3D fence diagrams on a flat 2D screen. We aim to consider the benefits that an XR visualization of these diagrams may provide to enable better data comprehension, annotation, and collaborative work. In this paper, we discuss our initial development and evaluation of such an XR system.

Estimating Greenland bed topography is important because the bed controls ice flow, subglacial drainage, and the ice sheet’s response to climate change. Much information about the bed is transmitted to the ice surface, but the amount of satellite data is small compared to the vast extent of the ice sheet. Missing data points therefore need to be predicted by combining all available features of the Greenland ice bed surface. In this research, we used two transformer models, a feature-based transformer and the TabNet transformer, to capture the complex spatial patterns and relationships in our dataset. Since we are dealing with a large amount of unlabeled data, we used self-supervised learning so that the models could learn from the data itself and make predictions on the unlabeled portion. Model performance was evaluated with RMSE, MSE, R2, and MAPE. We find that the TabNet transformer produces the most robust predictions of the complex patterns in Greenland bed topography under self-supervised learning. In addition, this model achieves the lowest RMSE score under cross-validation with fewer epochs. The feature-based transformer also achieved a low RMSE, but it could not generate a firm representation of the topography across the larger landscape. These models have adeptly grasped the intricacies of Greenland ice sheet topography with accuracy and efficiency, rendering them valuable tools for depicting spatial patterns.
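The four evaluation scores reported above can be computed as follows; the elevation values are hypothetical stand-ins for illustration, not Greenland data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MSE, R^2, and MAPE -- the four scores used to compare
    the bed-topography models."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    return {"RMSE": rmse, "MSE": mse, "R2": r2, "MAPE": mape}

# Hypothetical bed elevations (m) at four points vs. model predictions.
m = regression_metrics([100.0, 200.0, 300.0, 400.0],
                       [110.0, 190.0, 310.0, 390.0])
```

Note that MAPE is undefined where the true value is zero, which matters for bed elevations near sea level; masking such points is a common workaround.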

Modeling ice flow is a central component of sea level rise projections, yet we have few datasets capable of improving our understanding of large-scale ice dynamics. Extracting the englacial layer configuration of the Greenland ice sheet provides a way to extrapolate the age of the ice from where it has been measured in the field. Such an understanding of the age of the ice can inform studies of past snow accumulation and glacier sliding and provide context for modern glacier change. While these englacial layers have been thoroughly surveyed using ice-penetrating radar/radio-echo sounding, the imagery data (i.e., radargrams) collected are not easily ingested into glacier models, and as a result, labeling englacial layers in images is often carried out manually or semi-automatically. In this paper, we advance prior approaches for englacial layer annotation; in particular, we build on the Automated Radio-Echo Sounding Englacial Layer-tracing Package (ARESELP) to propose MorphoLayerTrace (MLT). Our automatic, unsupervised approach to englacial layer annotation evaluates peak intensity and uses morphological image processing techniques to select reliable seed points from the peak-intensity output, enhancing the labeling of the englacial layers. Our approach is effective for both single frames and multiple frames. We assess the proposed approach using 100 radargrams obtained across North Greenland, interpreting both individual frames and multi-frame sets. Finally, we present novel metrics to assess the quality and consistency of englacial layer annotation techniques.

The Earth’s radiation budget relies on cloud properties such as cloud optical thickness (COT) obtained from cloud radiance observations. Traditional physics-based cloud property retrieval methods face challenges due to 3D radiative transfer effects. Deep learning approaches have emerged to address this, but their performance is limited by simple deep neural network architectures and vanilla objective functions. To overcome these limitations, we propose CloudUNet, a modified UNet-style architecture that captures spatial context and mitigates 3D radiative transfer effects. We introduce a cloud-sensitive objective function with regularized L2 and SSIM losses to learn thick cloud regions that are often underrepresented in the input radiance data. Experiments using realistic atmospheric and cloud large-eddy simulation data demonstrate that CloudUNet obtains a 5-fold improvement over existing state-of-the-art deep learning and physics-based methods.
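A minimal sketch of a combined objective in the spirit described above, pairing a thick-cloud-weighted L2 term with an SSIM term, is shown below. The global single-window SSIM, the weighting scheme, and all constants are illustrative assumptions, not CloudUNet’s actual loss.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM between two images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx**2 + my**2 + c1) * (vx + vy + c2)
    return num / den

def cloud_loss(pred, target, alpha=0.5, thick_weight=3.0, thick_thr=0.8):
    """Weighted L2 plus (1 - SSIM): the L2 term is up-weighted on
    thick-cloud pixels (target above thick_thr), which are otherwise
    underrepresented in the training data."""
    w = np.where(target > thick_thr, thick_weight, 1.0)
    l2 = np.mean(w * (pred - target) ** 2)
    return alpha * l2 + (1 - alpha) * (1 - ssim_global(pred, target))

# Toy normalized COT map: a perfect prediction vs. a biased one.
rng = np.random.default_rng(0)
target = rng.uniform(0, 1, size=(8, 8))
loss_perfect = cloud_loss(target, target)
loss_noisy = cloud_loss(np.clip(target + 0.1, 0, 1), target)
```

The SSIM term rewards structural agreement (edges, texture of cloud fields) that a pixel-wise L2 alone does not capture.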

Back to Top

HCI/Visualization

Like most emerging technologies and revolutionary ways of thinking in our digital age, the field of Human-Computer Interaction (HCI) has spread rapidly to many regions around the globe. In the past decade, its human-centered methodologies have been widely discussed and have proven their value, but they have matured and been heavily developed only in certain geographical “hubs,” leaving other regions of the world further behind. This development tends to be centered in American and European contexts, as opposed to Asian, African, or Middle Eastern ones. We are interested in better understanding the characteristics of this movement and what dynamics play a role in HCI’s growth, use, and practicality in a given region. In this work, we focus on the Middle East and how researchers and practitioners working there have navigated the field. How these frontline experts discuss and spread knowledge of HCI, actively contribute to the growth of the field, and progress in their careers reveals how HCI matures in the context of the region’s sociocultural factors. We interviewed six researchers/professors from the academic sector in four Middle Eastern countries (Egypt, Saudi Arabia, UAE, Kuwait). Our initial findings show that in academic environments, the growth of HCI is driven by individual agents rather than systematic effort. These individual agents are heavily influenced by external trends (i.e., private-sector success) rather than formal HCI expertise and best practice, which makes it challenging to establish programs or courses within their institutions. We discuss how student initiative and contribution, and connection to the larger HCI community, are necessary for the practical use of HCI methods in the region.

As technology continues to play an increasingly prominent role in our daily lives, it is crucial to understand how we can develop digital tools to support self-expression and reflection for well-being. While there have been previous studies in the field of HCI on aspects of well-being, further investigation is still needed into the relationship between self-expression through digital technology and personal growth specifically. Psychological research has explored the concept of personal growth, demonstrating that it represents a form of growth characterized by an inner purpose. In this exploratory survey study, we examined individuals’ perceptions of using technology for self-expression and how these relate to their perceived personal growth on the Personal Growth Initiative Scale-II. We obtained 138 valid participant responses to the broadly distributed survey (38% identified as female). ANOVA analyses examining differences in participants’ perceptions of their creativity, and differences in personal growth scores according to the demographic factors of age, education, and gender, revealed that individuals who perceived themselves as more creative also had higher personal growth scores. In our future work, we will investigate how individuals’ self-expression through different modalities affects both personal growth and emotion regulation. By understanding users’ perspectives and preferences, our goal is to inform the development of digital tools that better support individuals’ personal growth.

In this study, we explored the integration of urban cycling and interactive technology to address transportation inequities and promote cycling activism in Baltimore. We engaged nine bicycling advocates from five organizations in Baltimore in participatory design activities to understand the motivations for promoting cycling in urban areas and how interactive technology can aid in creating a cyclist-friendly city. We found that participants promote cycling out of a desire to challenge historical discriminatory practices and infrastructural inequalities, and as a means of community building and identity expression. In the participatory design sessions, we identified three interconnected directions for future change: ancillary bicycle infrastructure and supporting DIY repair practices, changing perceptions through community engagement, and using technology to support community awareness and inclusion. We argue that cycling can be seen as a tool for resistance and identity shaping in this context, and that participatory design can offer innovative directions for urban designers, policymakers, and system designers to strengthen efforts toward creating a more inclusive and equitable urban environment.

The current state-of-the-art in 2D data visualizations often fails to capture the intricate complexity and depth of information crucial for integrated decision-making. In response, the Systems Exploration and Engagement environment (SEEe) offers a cutting-edge virtual immersive data experience that seamlessly integrates geo-referenced spatial data, abstract data visualization, and qualitative data encompassing text, images, videos, and conceptual diagrams. Based on the results of a preliminary user study, SEEe proves to be a novel and promising tool for exploring, visualizing, and analyzing data comprehensively and innovatively. Sensemaking, a mentally challenging process of making sensible explanations of situations, becomes increasingly crucial in complex information environments. Immersive analytics, an emerging field, investigates how technologies like Augmented and Virtual Reality (AR/VR) support data analysis and decision-making. Our research delves into how immersive analytics VR systems support sensemaking within complex environments integrating qualitative and quantitative data. It also assesses whether these environments facilitate inquiry-based learning and the individual strategies users adopt to perform sensemaking in such complex environments. Contributions include advancing understanding of VR-based immersive analytics systems and enhancing sensemaking within complex environments, offering valuable insights for system design and addressing existing research gaps in understanding sensemaking in immersive analytics environments.

Virtual reality is progressively more widely used to support embodied AI agents, such as robots, which frequently engage in ‘sim-to-real’ learning approaches. At the same time, tools such as large vision-and-language models offer new capabilities that tie into a wide variety of tasks. In order to understand how such agents can learn from simulated environments, we explore a language model’s ability to recover the type of object represented by a photorealistic 3D model as a function of the 3D perspective from which the model is viewed. We used photogrammetry to create 3D models of commonplace objects and rendered 2D images of these models from a fixed set of 420 virtual camera perspectives. A well-studied image and language model (CLIP) was used to generate text (i.e., prompts) corresponding to these images. Using multiple instances of various object classes, we studied which camera perspectives were most likely to return accurate text categorizations for each class of object.
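The per-perspective evaluation can be sketched as a simple tally: given (camera, predicted, true) records — fabricated here, not actual CLIP output — rank the camera perspectives by categorization accuracy.

```python
# Sketch: rank camera perspectives by text-categorization accuracy.
# The records below are hypothetical stand-ins for model predictions.
from collections import defaultdict

def perspective_accuracy(records):
    """records: iterable of (camera_id, predicted_label, true_label).
    Returns (camera_id, accuracy) pairs, best perspective first."""
    hits, total = defaultdict(int), defaultdict(int)
    for cam, pred, true in records:
        total[cam] += 1
        hits[cam] += int(pred == true)
    acc = {cam: hits[cam] / total[cam] for cam in total}
    return sorted(acc.items(), key=lambda kv: -kv[1])

records = [(1, "mug", "mug"), (1, "bowl", "mug"),
           (2, "mug", "mug"), (2, "mug", "mug")]
ranked = perspective_accuracy(records)
```

With 420 perspectives per object, the same tally identifies which viewpoints most reliably recover the object class.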

Back to Top


Sepsis is a serious condition in which the body responds improperly to infection, causing organ failure and potentially death; it is the leading cause of death in hospitals. The condition is triggered by a bacterial infection entering the bloodstream, can develop acutely, and requires immediate attention. To better understand this disease and its acute nature, ICU patient data from Johns Hopkins Hospital is collected, creating a record of 24 hours of historical data. By converting raw patient data such as vital signs into a string of letters using a combination of knowledge representation methods known as Symbolic Aggregate Approximation (SAX) and Bag-of-Patterns (BOP), doctors can better identify signs of illness through highlighted anomalies in vital signs. Knowledge can also be extracted through recommendation systems that enable mutually beneficial collaborations between hospitals with high research activity and smaller community hospitals. When anomalies are detected, alerts signal that the patient may be going into septic shock. Using the converted, simplified patient data, a multi-sensorial interactive representation of the data is created, allowing doctors to see how vital signs are affected by chosen variables, hinting at potential causes of sepsis.
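A minimal sketch of the SAX conversion: z-normalize the signal, average it piecewise, and map each segment mean to a letter via Gaussian breakpoints. The four-letter alphabet, segment count, and sample values below are illustrative; the project's actual parameters are not specified here.

```python
# Sketch: Symbolic Aggregate Approximation (SAX) of a vital-sign series.
# Alphabet size 4 with standard Gaussian breakpoints; values are made up.
import statistics

BREAKPOINTS = [-0.67, 0.0, 0.67]  # equiprobable regions of N(0, 1)
ALPHABET = "abcd"

def sax(series, n_segments):
    """Convert a numeric series to a SAX word of n_segments letters."""
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series) or 1.0  # guard against a flat series
    z = [(x - mu) / sd for x in series]
    seg = len(z) // n_segments
    word = ""
    for i in range(n_segments):
        avg = sum(z[i * seg:(i + 1) * seg]) / seg  # piecewise aggregate
        word += ALPHABET[sum(avg > b for b in BREAKPOINTS)]
    return word

word = sax([0, 0, 0, 0, 10, 10, 10, 10], 2)  # low half then high half
```

Bag-of-Patterns would then count the occurrences of such words over sliding windows, turning 24 hours of vitals into a histogram in which anomalies stand out.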

At the beginning of the 20th century, the U.S. Food and Drug Administration (FDA) was tasked with guaranteeing the efficacy and safety of medications before they are marketed. The Federal Food, Drug, and Cosmetic Act was amended in 1976, extending the agency’s jurisdiction to oversee the safe development of medical devices.

A device is defined as “an instrument, apparatus, implement, machine, contrivance, implant, or an in vitro reagent” under the Federal Food, Drug, and Cosmetic Act if it satisfies at least one of three criteria: 1) it is listed in the U.S. Pharmacopoeia or the official National Formulary; 2) it is intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease; or 3) it is intended to affect the structure or function of the human body. The average time it takes to approve a new medicine is 12 years, whereas it takes only 3 to 7 years on average to bring a new medical device from idea to market.

We propose an innovative approach to streamline these processes through the development of an end-to-end automated ontological framework. The framework’s goal is to automate medical device categorization and regulation, saving time and money on approval processes.

Paroxysmal Sympathetic Hyperactivity (PSH) occurs with high prevalence among critically ill Traumatic Brain Injury (TBI) patients and is associated with worse outcomes. The PSH-Assessment Measure (PSH-AM) consists of a Clinical Features Scale (CFS) and a Diagnosis Likelihood Tool (DLT), intended to quantify, on a daily basis, the severity of sympathetically mediated symptoms and the likelihood that they are due to PSH, respectively. Here, we aim to identify and explore the value of dynamic heterogeneous trends in the evolution of sympathetic hyperactivity following acute TBI. We performed an observational cohort study of 221 acute critically ill TBI patients for whom daily PSH-AM scores were calculated over the first 14 days of hospitalization. A principled group-based trajectory modeling approach using unsupervised K-means clustering was used to identify distinct patterns of CFS evolution within the cohort, as well as the relationships between trajectory group membership, PSH diagnosis, and other outcomes of interest. Baseline clinical and demographic features predictive of trajectory group membership were analyzed using univariate screening and multivariate multinomial logistic regression. We identified four trajectory groups, each with a distinct CFS trend pattern. Trajectory group membership was significantly associated with clinical outcomes including PSH diagnosis, ICU length of stay, and duration of mechanical ventilation. Baseline features independently predictive of trajectory group membership included age and post-resuscitation motor Glasgow Coma Scale (GCS) score. This study adds to the sparse research characterizing the heterogeneous temporal trends of sympathetic nervous system activation during the acute phase following TBI. It may open avenues for early identification of at-risk patients to receive tailored interventions that limit secondary brain injury associated with autonomic dysfunction and thereby improve TBI patient outcomes.
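The trajectory-clustering step can be sketched with plain k-means over fixed-length daily CFS vectors. The toy two-day trajectories and deterministic initialization below are for illustration; the study used 14-day trajectories and a principled group-based trajectory modeling approach.

```python
# Sketch: k-means clustering of fixed-length score trajectories.
# Trajectories and initialization are illustrative, not study data.

def kmeans(trajs, k, iters=50):
    """Assign each trajectory to one of k clusters; returns labels.
    Deterministic: centroids start from the first k trajectories."""
    cents = [list(t) for t in trajs[:k]]

    def nearest(t):
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(t, cents[c])))

    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for t in trajs:
            groups[nearest(t)].append(t)
        for c, g in enumerate(groups):
            if g:  # recompute centroid as the per-day mean of its members
                cents[c] = [sum(col) / len(g) for col in zip(*g)]
    return [nearest(t) for t in trajs]

# two flat low trajectories vs. two high ones (toy data):
labels = kmeans([[0, 0], [0, 1], [10, 10], [10, 11]], 2)
```

Each resulting label corresponds to a trajectory group whose association with outcomes can then be tested.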

More health information is available in cyberspace than ever before, presenting both opportunities and challenges for health information seeking and self-care practices, particularly in underserved populations who face health disparities for various reasons, including limited healthcare access and high costs. Through a survey, we investigated how increased health information accessibility in cyberspace affects self-care practices and trust in underserved populations of African descent.

Our study observed that online health information seeking is pervasive, growing rapidly, and driven by the convenience and accessibility of resources, regardless of access to healthcare providers. However, participants expressed concerns about trustworthiness, accuracy, and potential misdiagnosis or inappropriate treatment choices. It is therefore important to focus on enhancing the trustworthiness and quality of online health information, implementing verification mechanisms to reduce misinformation, and addressing the underlying factors driving self-medication.

The paper details the result of our study and contributes to the existing literature by providing a deeper understanding of the motivations, experiences, and outcomes of online health information use and self-care practices among underserved communities of African descent. Our investigation also highlights the need for further research and interventions to address the challenges and opportunities that arise from the increased accessibility of health information in cyberspace and self-care practices in underserved populations.

Healthcare data comes from many sources, such as patient records, vital signs, and wearable devices. Finding causal structures in this data can help us make better health decisions. However, current methods struggle with data that changes over time and has long-term dependencies, often making mistakes by either finding false causes or missing them entirely. This paper introduces a novel approach to identifying long-term lagged causal relationships in time series data by leveraging the Autoformer architecture, a transformer-based model renowned for its efficiency and accuracy in long-term time series forecasting. Our method uses the Autoformer to detect long-term dependencies, which are then used to construct a long-term causal graph. Instead of creating a complete undirected graph, we form our graph by linking all instantaneous variables and adding only the delayed edges pinpointed by the Autoformer. This strategy significantly reduces the number of conditioning tests needed to ascertain the causal graph. We then employ a constraint-based causal discovery technique to determine the final causal graph. Our proposed approach has been thoroughly tested on both synthetic and real-world clinical datasets, and its performance has been benchmarked against various baseline methods. The results from these experiments affirm the capability of our approach to accurately identify causal relationships and to adapt to autocorrelated and non-stationary time series data.
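The graph-construction step can be sketched as follows: fully connect the instantaneous (lag-0) variables with undirected edges, and add only the lagged edges flagged by the dependency detector. The variable names and the (cause, effect, lag) triples below are hand-supplied placeholders, not Autoformer output.

```python
# Sketch: build the reduced candidate graph for constraint-based
# causal discovery. Lagged dependencies are supplied by hand here;
# in the described method they would come from the Autoformer.

def build_candidate_graph(variables, lagged_deps):
    """variables: list of names; lagged_deps: (cause, effect, lag) triples.
    Returns (undirected lag-0 edges, directed lagged edges)."""
    undirected = set()
    for i, u in enumerate(variables):          # all instantaneous pairs
        for v in variables[i + 1:]:
            undirected.add(frozenset((u, v)))
    directed = {(c, e, lag) for c, e, lag in lagged_deps}
    return undirected, directed

und, dirs = build_candidate_graph(["hr", "bp", "spo2"], [("hr", "bp", 3)])
```

Conditional-independence tests then prune this candidate graph; starting from it, rather than from a complete graph over all lags, is what reduces the number of tests.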

Mental health has drawn increased attention in recent years, motivated by the social impact of the COVID-19 pandemic. Given the need to detect issues at early stages, relying on the conventional doctor-visit-based approach is not scalable and is too costly. The development of wearable mental health monitoring solutions is an effective means of filling the gap, providing individuals with an assessment that can help them adjust their lifestyle. Stress is one of the leading causes of mental health problems. We serve this goal by devising a novel wearable-based approach for detecting stress in real time. In this paper, we experiment with two benchmark datasets in this domain, namely, the WESAD dataset and the SWELL Knowledge Work dataset. For these datasets, the subjects are shown videos and, based on the situation presented, document their reactions by rating feelings such as valence and arousal. Specifically, we explore the use of Electrocardiogram (ECG) and Photoplethysmography (PPG) data. Although Electroencephalogram (EEG) data is usually used to assess stress, its acquisition is much more logistically involved and expensive than that of ECG and PPG data. To determine the relevant features, we study how ECG and PPG can be correlated with EEG data using a third dataset, DEAP. To assess stress, we apply ConvLSTM and ResNet models to the ECG and PPG signals. The paper evaluates the different statistical features and their computational complexity.
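Two standard time-domain features derivable from ECG/PPG beat intervals — RMSSD and SDNN — can be computed directly. Whether these specific features are among those the study evaluates is not stated, so treat this as a generic illustration of the statistical features under comparison.

```python
# Sketch: heart-rate-variability features commonly used for stress
# detection, computed from beat-to-beat (RR) intervals in milliseconds.
# The interval values used in examples are made up.
import math

def hrv_features(rr_ms):
    """Return (RMSSD, SDNN) for a list of RR intervals in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    # RMSSD: root mean square of successive interval differences
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # SDNN: population standard deviation of the intervals
    mean = sum(rr_ms) / len(rr_ms)
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))
    return rmssd, sdnn
```

Lower RMSSD/SDNN generally indicates reduced parasympathetic activity, which is why such features feed stress classifiers; their computational cost is trivial compared to deep models such as ConvLSTM or ResNet.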

For international students, learning about US health insurance and healthcare may feel like a daunting and near impossible task. In an effort to help, we partnered with UMBC international students to develop a website through a user-centered design (UCD) approach that hosts features they believe will best assist international students in understanding, navigating, acquiring, and utilizing US health insurance and healthcare.

We began by collecting survey responses from 49 UMBC international students to acquire a broad understanding of international students’ experiences with US health insurance and healthcare. Next, we interviewed six students to better understand their experiences and, to date, we have more interviews scheduled in the coming weeks. Participants told stories of interacting with US health insurance and healthcare that ranged from finding a dentist to being hospitalized due to severe injury. Themes that arose among all participants included high costs of healthcare, confusing insurance jargon, and confusing insurance infrastructure.

Through the survey and interviews, we made connections with students who were interested in partnering with us to develop a website that can help international students better navigate US health insurance and healthcare. We invited our international student partners to participate in a co-design workshop where they will ideate technological solutions and create low-fidelity prototypes of those solutions. We will continue to work with our international student partners to further refine and develop the prototypes until we arrive at a finished product. We plan on conducting usability testing on our finished product to ensure that as many users benefit from the technology as possible.

We hope that our research and technology will not only help UMBC international students, but also provide support to students outside of UMBC and, potentially, international residents in the US such as immigrants, refugees, and more.

Back to Top


Gathering knowledge about surroundings and generating situation awareness for autonomous systems is of utmost importance for systems developed for smart urban and uncontested environments. For example, a large area surveillance system is typically equipped with multi-modal sensors such as cameras and LIDARs and is required to execute deep learning algorithms for action, face, behavior, and object recognition. However, these systems are subject to power and memory limitations due to their ubiquitous nature. As a result, optimizing how the sensed data is processed, fed to the deep learning algorithms, and how the model inferences are communicated is critical. In this paper, we consider a testbed comprising two Unmanned Ground Vehicles (UGVs) and two NVIDIA Jetson devices and posit a self-adaptive optimization framework that is capable of distributing the workload of multiple tasks (storage, processing, computation, transmission, inference) collaboratively across multiple heterogeneous nodes simultaneously. The self-adaptive optimization framework involves compressing and masking the input image frames, identifying similar frames, and profiling the devices for various tasks to obtain the boundary conditions for the optimization framework. Finally, we propose and optimize a novel parameter, the split-ratio, which indicates the proportion of the data to be offloaded to another device while considering the networking bandwidth, busy factor, memory (CPU, GPU, RAM), and power constraints of the devices in the testbed. Our evaluations, captured while executing multiple tasks (e.g., PoseNet, SegNet, ImageNet, DetectNet, DepthNet) simultaneously, reveal that executing 70% (split-ratio = 70%) of the data on the auxiliary node minimizes the offloading latency by ≈ 33% (18.7 ms/image to 12.5 ms/image) and the total operation time by ≈ 47% (69.32 s to 36.43 s) compared to the baseline configuration (executing on the primary node).
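The role of the split-ratio can be illustrated with a toy latency model: a fraction r of frames is offloaded (paying a transmission cost), the rest is processed locally, and the two pipelines run in parallel, so the slower side dominates. The cost parameters below are hypothetical, not the testbed's profiled values.

```python
# Sketch: toy per-frame latency model for a split-ratio r.
# t_local, t_remote, t_tx (ms/frame) are hypothetical profiling numbers.

def offload_latency(split, t_local, t_remote, t_tx):
    """Latency when a fraction `split` of frames goes to the auxiliary
    node and the remainder stays local; pipelines run in parallel."""
    local = (1.0 - split) * t_local
    remote = split * (t_remote + t_tx)
    return max(local, remote)  # the slower pipeline sets the pace

# sweeping r shows a minimum where the two pipelines balance:
best = min((offload_latency(r / 10, 10.0, 5.0, 1.0), r / 10)
           for r in range(11))
```

In the real framework the optimum additionally depends on bandwidth, busy factor, memory, and power constraints, but the balancing intuition is the same.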

Long queues in retail environments can frustrate customers and negatively impact their experience. Traditional queue monitoring methods, like cameras, raise privacy concerns due to potential personal identification and data collection. This research offers a privacy-preserving IoT solution for measuring queue lengths and estimating waiting time in retail settings. We created a prototype with an RPLidar A1 sensor that collects and transmits real-time data to a server via an MQTT broker using the publish-subscribe method. Strategic positioning of the sensor ensures that the data primarily represents people in the queue; the data is then processed by a spatial clustering algorithm based on DBSCAN to estimate the number of customers. To support efficient storage and access, the data is first stored in a Time Series Database Management System and subsequently transferred to a Relational Database Management System for analysis. Additionally, a Seasonal Autoregressive Integrated Moving Average algorithm predicts future queue lengths by analyzing historical data patterns. We are building an app and website to surface the real-time data. Our goal is to deploy the system at UMBC’s Starbucks, Chick-fil-A, and other locations so that campus members can access real-time and historical queue data. Ultimately, we aim to develop a universal approach for queues of various shapes and sizes.
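A crude stand-in for the DBSCAN step: group 2D lidar returns that fall within `eps` of each other and drop groups smaller than `min_pts` as noise, then count the surviving groups as customers. The `eps`/`min_pts` values and point coordinates are illustrative, not the deployed parameters.

```python
# Sketch: DBSCAN-flavored counting of people from 2D lidar points.
# Parameters and sample points are made up for illustration.
from collections import Counter

def count_clusters(points, eps=0.5, min_pts=3):
    """Single-linkage grouping of (x, y) points within `eps`;
    groups smaller than `min_pts` are treated as noise."""
    parent = list(range(len(points)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps * eps:
                parent[find(i)] = find(j)  # merge nearby points

    sizes = Counter(find(i) for i in range(len(points)))
    return sum(1 for s in sizes.values() if s >= min_pts)
```

Real DBSCAN additionally distinguishes core from border points; this sketch keeps only the density-grouping and noise-filtering intuition.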

The proliferation of the Internet of Things (IoT) has led to an exponential increase in data generation, especially from wearable IoT devices. While this data influx offers unparalleled insights and connectivity, it also brings significant privacy and security challenges. Existing regulatory frameworks such as the United States (US) National Institute of Standards and Technology Interagency or Internal Report (NISTIR) 8228, the US Health Insurance Portability and Accountability Act (HIPAA), and the European Union (EU) General Data Protection Regulation (GDPR) aim to address these challenges but often operate in isolation, making compliance across the vast IoT ecosystem inconsistent. This work introduces the IoT-Reg ontology, a holistic semantic framework that amalgamates these regulations, offering a stratified view based on the IoT data lifecycle stages and a comprehensive yet granular approach to IoT data handling practices. The IoT-Reg ontology aims to transform the IoT domain into one in which regulatory controls are seamlessly integrated as system components, emphasizing risk management, compliance, and the pivotal role of manufacturers’ privacy policies to ensure consistent adherence, enhance user trust, and promote a privacy-centric IoT environment. We include the results of validating this framework against risk mitigation for wearable IoT devices.

Working Memory (WM) involves the temporary retention of information over short periods of time. It is an important aspect of cognitive function that allows humans to perform a variety of tasks that require online processing, such as dialing a phone number, recalling where you placed your keys, or which turn to take to reach a store in the mall. Inherent limitations in individual capacity to hold information mean that people often forget important specifics during such tasks. While prior work has successfully used wearable and assistive technologies to improve longer-term memory functions (e.g., episodic memory), how such technologies can aid in daily activities is under-explored. To this end, we present Memento, a framework that leverages multimodal wearable sensor data to extract significant cognitive state changes during those activities and to intelligently cue users, in situation, to improve recall of those activities. Through two user studies (15 and 25 participants, respectively) on a desktop-based navigation task, we demonstrate that users who received visual cues from Memento showed significantly better recall of the route compared to free recall (by ≈ 20−23%). Memento also incurred significantly less cognitive load and less review time (46% less) on the part of the participant, while drastically reducing computation time (3.98 s vs. 7 s, a ≈ 43% reduction) in comparison to alternatives such as computer-vision-based selection of cues.

We propose research to overcome deficiencies in the power/time sources the industry currently offers for fixed and mobile off-grid electronic devices and platforms. We aim to develop self-contained, SWAP-C-constrained, agile power banks integrated with time servers that can function independently of GNSS/GPS constellations, communication networks, and grid power sources, utilizing low-energy RF carriers for power conversion and a low-frequency (LF) band time standard signal as a reference.

Currently, industries use stored energy, renewable energy generators, and power storage/processing components to enable boot-strapping, turn-on, connection to a communications network, synchronization with a GNSS/GPS-derived time server, and exchange of data/control traffic. Operations continue until a low-energy threshold is reached, followed by shutdown until power levels are restored. Key vulnerabilities remain: failure, loss, inefficiency, or exhaustion of field power sources; network time disruption; hacking, failures, resource constraints, and misconfiguration; cyber-physical security (CPS) attacks; GPS spoofing and jamming; and natural disasters.

Our approach is to adapt commercial parts into a testbed and experiment with a system-on-chip (SoC) in order to evaluate RF/power/time processing architectures for a stand-alone “Power and Time Service” SoC, which will manage all firmware, electronic, RF, and computing hardware that (1) harvests micropower from RF bands, (2) switches and tunes antennas, (3) converts and stores power, and (4) receives and distributes US NIST 60 kHz UTC(NIST) signals. The first stage of this research is to conduct experimental trials with a custom-built 2-GHz adjustable power and time broadcast terminal and customized receiver terminals to simultaneously power and synchronize time on an R/C car and an R/C quadcopter.

Autonomous robots exploring unknown areas face a significant challenge: navigating effectively without prior maps and with limited external feedback. This challenge intensifies in sparse reward environments, where traditional exploration techniques often fail. In this paper, we present TopoNav, a novel topological navigation framework that integrates active mapping, hierarchical reinforcement learning, and intrinsic motivation to enable efficient goal-oriented exploration and navigation in sparse-reward settings. TopoNav dynamically constructs a topological map of the environment, capturing spatial connectivity and semantic attributes. A two-level hierarchical policy architecture, comprising a high-level graph traversal policy and low-level motion control policies, enables effective navigation and obstacle avoidance while maintaining focus on the overall goal. Crucially, TopoNav incorporates intrinsic motivation to guide exploration toward informative regions and frontier nodes in the topological map, addressing the challenges of sparse extrinsic rewards. We evaluate TopoNav in both simulated and real-world off-road environments across three challenging navigation scenarios: goal-reaching, feature-based navigation, and navigation in complex terrains. Our results demonstrate substantial improvements over state-of-the-art methods in terms of exploration coverage, navigation success rate, and sample efficiency.
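The high-level graph-traversal policy operates over the topological map the robot has discovered. A minimal stand-in for that layer is a shortest-path search over an adjacency dictionary; the node names below are placeholders, and the real policy is learned rather than hand-coded.

```python
# Sketch: breadth-first route finding over a discovered topological map,
# a stand-in for TopoNav's high-level graph-traversal policy.
from collections import deque

def shortest_route(topo_map, start, goal):
    """topo_map: dict node -> list of adjacent nodes.
    Returns the fewest-edge path from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo_map.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

topo = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
route = shortest_route(topo, "a", "d")
```

Low-level motion controllers would then drive the robot edge by edge along the returned node sequence.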

Event-based vision sensors, such as dynamic vision sensors (DVS), capture changes in a scene with very high temporal resolution (in the order of microseconds). This allows for the detection and analysis of fast-moving objects and rapid changes in a scene, which is crucial in applications like autonomous vehicles, drones, robotics, and many industrial use cases. Event-based vision sensors consume power only when there are changes in the scene, making them highly energy-efficient. This is particularly important for battery-operated devices and systems where power efficiency is critical. Conventional image sensors require large neural networks for object detection and localization.

In this research, we propose a clustering-based object localization approach for adaptive event vision. The proposed method can potentially reduce computation by 105x compared to EfficientDet and 1019x compared to MobileNet with Feature Pyramid Networks (FPN).

Time synchronization is crucial for the reliability of interconnected Internet of Things (IoT) devices within a distributed system. In our research, we leverage the electric network frequency to achieve synchronization of IoT devices without relying on active network connections. By exploiting the sensing capabilities of IoT devices, we decode the internal clock frequency of these devices. Furthermore, we analyze the accumulated offset of each IoT component to detect abnormal behavior or potential attacks on the system. By eliminating the dependency on network connections, we enhance the reliability of the proposed attack detection system and the overall cyber-physical system (CPS) that monitors the connected electric grid using dedicated IoT devices.

To validate the proposed methodology, we constructed a testbed incorporating electric network frequency. The testbed comprises a sample edge server, which utilizes a Raspberry Pi to modulate the electric grid signal from an AC adapter using off-the-shelf chips. Certain IoT devices functioning as sources (e.g., LED, speaker) modify their brightness or adjust their tone based on the period value calculated from the network frequency sampler. Arduino Uno boards are employed to control the sensors and transmit the collected data via I2C to the Raspberry Pi board. The amassed data can subsequently be subjected to further analysis using an FPGA board.
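Extracting timing from the mains hum starts with estimating the network frequency from sensed samples. A simple rising-zero-crossing estimator sketches the idea; the synthetic 60 Hz sinusoid used in the example stands in for real grid measurements.

```python
# Sketch: estimate the electric network frequency from a sampled signal
# by locating rising zero crossings with sub-sample interpolation.
# The test signal is synthetic, not actual grid data.
import math

def estimate_frequency(samples, sample_rate):
    """Return the estimated frequency in Hz, or None if too few cycles."""
    crossings = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < 0 <= b:  # rising crossing between samples i-1 and i
            crossings.append((i - 1) + (-a) / (b - a))
    if len(crossings) < 2:
        return None
    # average spacing between crossings is one period in samples
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return sample_rate / period
```

Drift between this recovered frequency reference and a device's internal clock gives the accumulated offset used for synchronization and anomaly detection.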

Back to Top


Non-invasive flow measurement techniques like particle image velocimetry (PIV) and particle tracking velocimetry (PTV) are state-of-the-art techniques that resolve complex 3D flow features for a wide range of applications. These methods image illuminated tracer particles (that follow the flow) and cross-correlate particle image patterns to estimate the most probable shift. 3D methods also employ algebraic reconstruction techniques or triangulation methods to estimate particle position in 3D space. This finds applications in high-speed aerodynamics, multi-phase flows, benchtop flow experiments mimicking biological flows in in-vitro setups, micro-channel flows, diffusion measurement, and even in echocardiographic PIV. Uncertainty quantification in such applications is critical to establish a bound on the measurement. This provides a confidence interval to compare any simulated results with the experiments. Moreover, an uncertainty bound on physical quantities of interest (e.g., pressure, and wall shear stress) sets the design tolerance for any flow device. However, this is challenging due to the complexity of the measurement chain and non-linear error propagation. My research has developed novel methods to quantify the uncertainty in the following quantities – 2D and 3D velocity fields, pressure, density, diffusion, and echo-PIV velocity. The uncertainty quantification methods modeled the inherent uncertainty in the cross-correlation based image pattern matching as well as error propagation through image registration, triangulation, and trajectory fitting. The purpose of this poster is to attract collaborative efforts such that these uncertainty methods can be employed in other diverse applications where image cross-correlation based pattern matching and flow measurement play a crucial role.
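The core pattern-matching operation behind PIV — shifting one interrogation window against another and taking the correlation peak as the most probable displacement — reduces, in 1D with integer shifts, to the sketch below. The intensity values are made up, and real PIV works on 2D windows with sub-pixel peak fitting.

```python
# Sketch: 1D integer-shift cross-correlation, the core operation of
# PIV-style image pattern matching. Intensity patterns are illustrative.

def correlation_shift(a, b):
    """Return the integer shift s maximizing sum(a[i] * b[i + s]),
    i.e. the displacement of pattern `a` within `b`."""
    n = len(a)
    best_shift, best_score = 0, float("-inf")
    for s in range(-(n - 1), n):
        score = sum(a[i] * b[i + s] for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

a = [0, 0, 1, 2, 1, 0, 0, 0]      # particle image pattern
b = [0, 0, 0, 0, 1, 2, 1, 0]      # same pattern displaced by +2
```

The uncertainty methods described above characterize, among other things, how noise on `a` and `b` propagates into the location of this correlation peak.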

We aim to develop a comprehensive codebase for control of a Dobot CR3 robotic arm and its associated gripper attachment. Although the provided Dobot software allows manual control of both the arm and the attached gripper, we seek to gain finer control through the use of Python programs loaded onto the robot. We have a significant understanding of the control of the arm itself, but the attached gripper, which is not made by Dobot, does not easily interface with the provided software. We seek to unify these two systems and develop an easy control system for the gripper that we can integrate with our autonomous programs. The most promising method involves using the Modbus RTU protocol to communicate directly with gripper registers, where we can load and read specific values. Following the development of this software, we seek to determine the full capabilities of the gripper in combination with the arm. We perform experiments to test grip strength and the ability to grasp objects at both known and unknown locations. The goal is to gain a greater understanding of both the gripper and the arm to pave the way for future developments.
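At the byte level, talking Modbus RTU to a gripper means building framed requests with the standard CRC-16/MODBUS checksum. The sketch below constructs a "Write Single Register" (function 0x06) request; the slave address and register number are hypothetical, and the gripper's actual register map must come from its documentation.

```python
# Sketch: build a Modbus RTU "Write Single Register" request frame.
# Slave address and register values are hypothetical placeholders.

def crc16_modbus(data: bytes) -> bytes:
    """CRC-16/MODBUS (poly 0xA001 reflected, init 0xFFFF), low byte first."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")

def write_register_frame(slave: int, register: int, value: int) -> bytes:
    """8-byte RTU frame: addr, func 0x06, register, value, CRC."""
    body = bytes([slave, 0x06]) + register.to_bytes(2, "big") \
           + value.to_bytes(2, "big")
    return body + crc16_modbus(body)

frame = write_register_frame(9, 0x03E8, 100)  # hypothetical position write
```

A handy property of this CRC: recomputing it over a frame that already includes its appended checksum yields zero, which is how a receiver validates incoming frames.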

We present a comprehensive numerical study of two-dimensional (2D) material-based phototransistors using the drift-diffusion model. We chose monolayer molybdenum disulfide (MoS2) as the 2D material on top of a silicon dioxide (SiO2) coated silicon (Si) substrate as our phototransistor structure. The 2D material is numerically characterized with effective models available in the literature. Output currents of the phototransistor at various laser wavelengths are determined by dynamically solving the current continuity equations self-consistently with Poisson’s equation using Newton’s method. Numerical integration, followed by application of electromagnetic wave propagation in layered media, enables us to calculate the quantum efficiency of the device with higher accuracy, in agreement with existing experiments. With the underlying methodology, we also compute the phase noise and broadband RF power of the phototransistor. This numerical framework will assist in the characterization and optimization of phototransistors for higher-gain optoelectronic applications.

An image-based tracking continuously scanning laser Doppler vibrometer (CSLDV) system is developed to track a rotating structure. Different edge detection methods are developed to determine positions of the rotating structure in images captured by a camera in the image-based tracking CSLDV system. Edge detection methods can locate edge points of the rotating structure to be scanned with simple or complicated backgrounds. The image-based tracking CSLDV system can track and scan the rotating structure once the position of the rotating structure is determined, which can be applied to vibration measurement and structural health monitoring of rotating wind turbine blades. Measured response of the tracking CSLDV system is processed by an improved demodulation method to estimate damped natural frequencies and undamped mode shapes of the rotating structure.

Attitude filtering is a critical technology with applications in diverse domains such as aerospace engineering, robotics, computer vision, and augmented reality. Although attitude filtering is a particular case of the state estimation problem, attitude filtering is uniquely challenging due to the special geometric structure of the attitude parameterization. This paper presents a novel data-driven attitude filter, called the retrospective cost attitude filter (RCAF), for the SO(3) attitude representation. Like the multiplicative extended Kalman filter, RCAF uses a multiplicative correction signal, but instead of computing correction gains using Jacobians, RCAF computes the corrective signal using retrospective cost optimization and measured data. The RCAF filter is validated numerically in a scenario with noisy attitude measurements and noisy and biased rate-gyro measurements.

This paper presents an adaptive, model-based, nonlinear controller for the bicopter trajectory-tracking problem. The nonlinear controller is constructed by dynamically extending the bicopter model, stabilizing the extended dynamics using input-output linearization, augmenting the controller with a finite-time convergent parameter estimator, and designing a linear tracking controller. Unlike control systems that rely on the time-scale separation principle to decouple the translational and rotational dynamics, the proposed technique designs a controller for the full nonlinear dynamics of the system to obtain the desired transient performance. The proposed controller is validated in simulation for both smooth and nonsmooth trajectory-tracking problems.

This paper presents a model-based, adaptive, nonlinear controller for the bicopter stabilization and trajectory-tracking problem. The nonlinear controller is designed using the backstepping technique. Due to the non-invertibility of the input map, the bicopter system is first dynamically extended. However, the resulting dynamically extended system is in pure feedback form with the uncertainty appearing in the input map. The adaptive backstepping technique is therefore extended and applied to design the controller. The proposed controller is validated in simulation for both smooth and nonsmooth trajectory-tracking problems.

This work investigates the application of a learning-based adaptive controller to mitigate the effect of a vertical gust on the lift generated by a pitchable airfoil in an unsteady flow environment. High-order accurate computational fluid dynamics (CFD) methods are used to model the interaction between a pitching NACA 0012 airfoil and a vertical gust. The learning-based adaptive controller is based on the retrospective cost adaptive control (RCAC) technique. The learning algorithm is tuned in a nominal scenario, and its ability to adapt to off-nominal scenarios is investigated. The effects of measurement noise and controller update frequency on lift regulation performance are also investigated. Finally, the performance of the adaptive controller is compared with that of a fixed-gain controller.

Advancements in thermal management, catalysis, and tissue engineering demand novel materials with a set of seemingly incompatible functional properties, such as high surface area, low density, high thermal/electrical conductivity, and strong mechanical strength and toughness. Inspired by the microstructures of natural materials, materials scientists seek to deliver the desired properties by engineering porous materials with ordered morphology. However, reproducing the complex architectures of nature in a controlled process constitutes a major challenge.

Ice templating is a versatile processing technique capable of generating interconnected, ordered porosity by freezing an aqueous colloidal suspension. Under a controlled temperature gradient, the formation of highly anisotropic ice crystals pushes the particles away from the growing ice front; after the ice is removed by sublimation, a templated morphology replicating that of the ice remains.

Using ice templating, we developed copper matrices exhibiting a long-range lamellar morphology and pore sizes smaller than 100 microns. These features result in high porosity and thermal conductivity. When the matrices are infiltrated with phase change materials, the high thermal conductivity and small pore size offer efficient heat dissipation and energy absorption for thermal management. The ice freezing kinetics and the interactions between the particles and the ice front are studied at various freezing rates. The effects of freezing rate, particle concentration, and particle size on ice morphology, particle densification, and ultimately the porosity and thermal conductivity of the as-produced porous copper are investigated in the experimental study.

Assembly lines have proven to be an efficient large-scale manufacturing method, particularly within the vehicle and electronics industries. A single assembly line is split into numerous smaller tasks that an individual performs repetitively to contribute to the entire product. In this paper, the researchers review current literature that investigates the experiences of assembly line workers, the involvement of disabled assembly line workers, and the use of virtual reality to support disabled assembly line workers. Although the assembly line is a widely used manufacturing method, research investigating these workers' experiences, and more specifically the experiences of disabled assembly line workers, is lacking. The purpose of this research is to bring attention to these experiences and open the discussion for possible improvements or guidelines for working with disabled assembly line workers. Insights from this work aim to benefit both researchers and designers interested in improving efficiency and accessibility for workers in the manufacturing of large-scale products.

In this study, we developed active physics-informed turbine blade pitch control methods to address the inconsistent energy-harvesting efficiency that challenges vertical-axis turbine (VAT) technology. Specifically, individual turbine blades were pitched by actuators following commands from the physics-informed controllers, and the resulting turbine performance improvements and associated flow physics were studied. The aim of the blade pitch control was to maintain constant effective angles of attack (AoAs) experienced by the turbine blades through active blade pitch, and the constant-AoA function was designed to facilitate implementation of the control mechanism in real-world VATs. To gain an in-depth understanding of the capability of the control, the flow physics was studied for different constant-AoA control strategies across a wide range of tip speed ratios (TSRs) and wind speeds, and compared with that of the corresponding baselines without control and that of a sinusoidal-AoA control strategy. The comparison between the turbine performance with constant-AoA control and that without control showed a consistent increase in the time-averaged net power coefficient, a measure of energy harvesting efficiency net of actuator losses, ranging from 27.4% to 704.0% across a wide range of wind speeds. The superior turbine performance with constant-AoA control was largely attributed to blade dynamic stall management during the blade upstream and downstream cycles and the transition between the two cycles.
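The constant-AoA idea can be illustrated with standard idealized VAT kinematics: for a blade at azimuth theta and tip-speed ratio lambda, the geometric effective AoA is atan2(sin(theta), lambda + cos(theta)), assuming uniform inflow and neglecting induced velocities and dynamic stall (effects the study resolves with flow physics). The sketch below, with hypothetical names, computes the pitch command that would hold a target AoA under these simplifying assumptions; it is not the authors' controller.

```python
import numpy as np

def effective_aoa(theta, tsr):
    """Geometric effective angle of attack (rad) of a VAT blade at
    azimuth theta for tip-speed ratio tsr, assuming uniform inflow
    and neglecting induced velocities."""
    return np.arctan2(np.sin(theta), tsr + np.cos(theta))

def pitch_for_constant_aoa(theta, tsr, alpha_target):
    """Blade pitch command that cancels the azimuthal AoA variation,
    holding the effective AoA at alpha_target under the idealized
    kinematics above."""
    return effective_aoa(theta, tsr) - alpha_target
```

Without control, the effective AoA oscillates over each revolution (growing as TSR drops), which drives the dynamic stall that the constant-AoA strategy is designed to manage.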

Laminar separation bubbles (LSBs) over the suction surface of a wing at low Reynolds number can significantly affect the aerodynamic performance of the wing. Since LSBs involve highly nonlinear flow physics and their behavior is very sensitive to flow environments and wing surface conditions, LSBs pose a unique challenge for the predictive capabilities of analytical and simulation tools. In this work, a series of two-dimensional (2D) and three-dimensional (3D) low-order and high-order accurate unstructured-grid-based numerical methods with varying model fidelity levels were used to study LSB physics over a NACA 0012 airfoil. This was done in both a clean freestream and a turbulent freestream at a chord-based Reynolds number of 12,000. Lift production and time-averaged flow fields were compared with available experimental results. Key flow physics and the associated impact on airfoil performance and lift prediction were discussed. A major finding is that in clean freestream flow a 3D high-order numerical scheme is necessary to capture LSB physics, due to the sensitivity of LSB-induced laminar-turbulent transition to flow conditions and boundary geometry at low Reynolds number. In freestream flows with moderate background turbulence (~5%), 2D simulations failed to capture subtle 3D flow physics due to their intrinsic limitations but could reasonably predict time-averaged airfoil performance. Similarities and distinctions between freestream vortex-LSB interaction in 2D and eddy-LSB interaction in 3D were explained. LSB flow physics captured by the 3D high-order method in clean and turbulent freestreams was also discussed and compared. The roles of the Kelvin-Helmholtz instability and Klebanoff modes were shown to be critical for understanding laminar-turbulent transition and LSB formation on airfoils in clean and turbulent freestreams.

Back to Top

Mathematical Foundations

Many physical systems of interest operate on multiple time scales, where different physical effects occur simultaneously but at different speeds. To accurately model these systems numerically, all of the time scales present must be properly resolved. The computational cost of solving differential equations using traditional numerical methods depends strongly on the step size taken during evolution; the computational cost of solving multi-time-scale systems is thus determined by the fastest time scale present. The time scales can be analyzed through the eigenvalue decomposition of the functional derivative of the equation that governs the system. Differential equations are stiff when the ratio of magnitudes between any two eigenvalues is large. Large-magnitude eigenvalues are associated with faster physical processes, which restrict the step size that a numerical method can take. This limitation results in massive inefficiencies when studying comparatively slow system behaviors over a long period of time, where taking larger steps would be preferable.
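The step-size restriction can be seen in a small experiment, included here as an illustrative sketch (not from the poster): forward Euler applied to a linear system with eigenvalues -1 and -1000 is stable only when h <= 2/1000, even when only the slow mode is of interest.

```python
import numpy as np

# Stiff linear test system y' = A y with a slow (-1) and a fast (-1000)
# eigenvalue; the explicit step size is dictated by the fast one.
A = np.diag([-1.0, -1000.0])

def euler_steps(y0, h, n):
    """Forward Euler: stable only if |1 + h*lam| <= 1 for every
    eigenvalue lam of A, i.e. h <= 2/1000 for this system."""
    y = np.array(y0, float)
    for _ in range(n):
        y = y + h * (A @ y)
    return y

y_ok  = euler_steps([1.0, 1.0], h=0.001, n=1000)  # respects the fast scale
y_bad = euler_steps([1.0, 1.0], h=0.01,  n=100)   # h too large: blows up
```

Even though the slow mode decays on a time scale of 1, the solver is forced to take steps a thousand times smaller, which is exactly the inefficiency described above.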

In this work, we introduce a numerical method in which we eliminate the eigenvectors with large, negative eigenvalues, making it possible to take much larger steps and enabling the study of systems over much longer durations than are otherwise computationally feasible. We apply this technique to study the two-soliton interaction in microresonators, mm-sized resonators that are used to create frequency combs. Our simulations are based on the Lugiato-Lefever equation, a partial differential equation that is, more specifically, the nonlinear Schrödinger equation with additional terms accounting for the energy source and dissipation. We obtain a significant speedup in computation time: on average, 100 times faster than is possible using conventional methods.
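The eigenvector elimination can be sketched on a linear toy problem: expand the state in the eigenbasis, zero the coefficients of the directions with large negative eigenvalues, and evolve only the slow part. This is a deliberately simplified linear analogue of the method applied to the Lugiato-Lefever equation, with illustrative names and thresholds.

```python
import numpy as np

# Linear stiff system y' = A y with a slow (-1) and a fast (-500) mode.
A = np.array([[-1.0,    0.5],
              [ 0.0, -500.0]])

evals, V = np.linalg.eig(A)
slow = np.abs(evals.real) < 100.0          # retain only the slow directions

def reduced_solution(y0, t):
    """Drop the eigenvectors with large negative eigenvalues and evolve
    only the slow component (done exactly here for clarity; a time
    stepper on the reduced system could now take large steps)."""
    c = np.linalg.solve(V, y0)             # expand y0 in the eigenbasis
    c[~slow] = 0.0                         # eliminate stiff directions
    return (V @ (c * np.exp(evals * t))).real
```

After the fast transient dies out, the reduced and full solutions agree, but the reduced system no longer restricts the numerical step size; the poster's method achieves the analogous reduction for the nonlinear Lugiato-Lefever dynamics.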

Microresonator-generated optical frequency combs (OFCs) have matured as a technology over the past two decades, with a wide range of realizable applications such as spectroscopy and metrology. Microresonators are typically cylindrical, small (radius < 1 mm), semiconductor devices, which behave as whispering-gallery mode resonators when injected with light. Microresonators with a ring structure and a rectangular cross-section have become widely used and have a variety of applications, including OFC generation. However, in these structures the lower order transverse modes typically have a strong spatial overlap. Since only a single transverse mode is used for comb generation, typically the fundamental TE mode, this overlap results in external and intrinsic losses. As a result, it is desirable to obtain as much spatial mode separation as possible. To increase the spatial separation, we designed a ring resonator with increased width and a notched section towards the outside of the cross-section. To optimize the design, we used an inverse design method to search for an appropriate notch scheme that increases the fundamental TE and TM mode separation. We used particle swarm optimization (PSO) to perform a parameter space search over the widths of 4 notches, and we found a novel design that separates the modes in question by a third of the resonator width. To verify the effectiveness of the design, we performed simulations using coupled mode theory (CMT) to study the coupling of light from a straight waveguide to the new microresonator design. We found that the coupling was essentially unchanged by adding notches, ensuring that the spatial separation does not affect normal device operation, and determined the combination of coupling parameters for optimal power transfer to the microresonator.
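A minimal PSO loop of the kind used for the notch-width search might look like the following: a generic textbook particle swarm over a box-bounded parameter space, with hypothetical parameter names, not the authors' implementation (whose objective would score mode separation from an electromagnetic solver).

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing f over a box
    (a stand-in for the 4-notch-width search in the poster)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))  # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])   # personal bests
    g = pbest[np.argmin(pval)].copy()                     # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + pull toward personal and global bests
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = np.clip(x + v, lo, hi)                        # stay in the box
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()
```

In the design problem, each particle would encode the four notch widths and the objective would reward fundamental TE/TM mode separation; the loop structure is unchanged.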

Back to Top
