Our research explores how EEG, music, and AI can work together in a closed loop to help trauma survivors—especially veterans—move from hyperarousal and shutdown toward stability, connection, and meaning.
Our work is organized around four core domains that inform one another: the signals of the nervous system, the structure of music, the behavior of intelligent systems, and the realities of clinical practice.
We study how trauma shows up in the brain’s real-time patterns—alpha asymmetry, arousal shifts, and regulation attempts—using accessible EEG hardware designed for real-world settings.
We explore how tempo, harmony, rhythm, and density interact with trauma states—and how adaptive music can support grounding, processing, and post-traumatic growth.
We develop models and datasets that are specific to trauma contexts, moving beyond generic emotion classifiers toward systems that actually understand dysregulation and recovery.
We design therapist-in-the-loop tools that can live alongside EMDR, trauma-focused CBT, and other evidence-based therapies, supporting clinicians rather than replacing them.
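The alpha asymmetry signal mentioned above has a conventional definition: the difference in log alpha-band power between homologous right- and left-frontal channels. As an illustration only (not our analysis pipeline), here is a minimal sketch on synthetic signals; the channel arrays, sampling rate, and band edges are placeholder assumptions:

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band, via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    """FAA = ln(right alpha power) - ln(left alpha power).
    Positive values are conventionally read as relatively greater
    left-frontal activation (alpha power is inversely related to activation)."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Synthetic demo: the right channel carries a stronger 10 Hz alpha rhythm.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
left = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.2, t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.2, t.size)
print(frontal_alpha_asymmetry(left, right, fs))  # positive for this demo
```

In practice the left/right inputs would come from frontal electrode pairs on the EEG headset, epoched and artifact-cleaned first; the formula itself is the standard log-difference metric from the affective-neuroscience literature.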
Within each domain, we maintain focused tracks of research that evolve as we gather data and clinical feedback. These tracks inform prototypes, clinical tools, and future trials.
Investigating how approach/withdrawal patterns and hemispheric balance correlate with trauma-related hyperarousal, shutdown, and emergence into regulation.
Mapping which musical structures support stabilization versus emotional activation, and how these can be sequenced for different phases of trauma work.
Adapting emotion models to recognize trauma-specific patterns instead of generic “happy/sad/angry” labels, with an emphasis on uncertainty and humility in predictions.
Studying how these technologies can be integrated into busy clinical practices without adding cognitive overload or ethical risk for clinicians.
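The emphasis on uncertainty and humility in predictions can be made concrete with a simple abstention rule: report a trauma-state label only when the model is confident enough, and otherwise defer to the clinician. This is a hedged sketch, not our model — the labels and threshold are placeholder assumptions:

```python
def predict_with_abstention(probs: dict, threshold: float = 0.7):
    """Hypothetical humility rule: return the top label only when its
    probability clears a confidence threshold; otherwise abstain so the
    clinician's judgment stays primary."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return (label, p) if p >= threshold else ("uncertain", p)

# A 0.55 top probability is not enough to assert a state.
print(predict_with_abstention(
    {"hyperarousal": 0.55, "regulated": 0.30, "shutdown": 0.15}
))  # ('uncertain', 0.55)
```

Real systems would calibrate the probabilities first (e.g. temperature scaling) so that the threshold is meaningful, but the design point is the same: the default output is "uncertain," not a confident guess.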
Our flagship project aims to build a closed-loop system where a client’s neurophysiology drives changes in the music they hear. When arousal spikes, the engine pivots toward stabilizing structures; as regulation increases, it gradually opens space for exploration and meaning-making.
The long-term research question: Can this closed loop—EEG → interpretation → music → nervous system—improve outcomes in trauma therapy beyond talk therapy alone?
Early prototypes are being tested in controlled settings with veterans and trauma survivors, with a focus on safety, tolerability, and real-world usability.
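The feedback step of the closed loop can be sketched as a simple proportional rule: an arousal index above a setpoint nudges the music toward slower, sparser, stabilizing structures, while an index below it gradually opens things up. This is an illustrative toy, not the engine itself — the setpoint, gain, and parameter ranges are assumptions, not tuned values:

```python
from dataclasses import dataclass

@dataclass
class MusicParams:
    tempo_bpm: float
    harmonic_density: float  # 0 = sparse/consonant, 1 = dense/complex

def adapt_music(arousal: float, params: MusicParams,
                setpoint: float = 0.4, gain: float = 0.1) -> MusicParams:
    """One step of a hypothetical closed-loop rule: above the arousal
    setpoint, slow the tempo and thin the harmony; below it, expand."""
    error = arousal - setpoint
    tempo = params.tempo_bpm - gain * 60 * error      # slow down when aroused
    density = params.harmonic_density - gain * error  # simplify when aroused
    return MusicParams(
        tempo_bpm=min(140.0, max(50.0, tempo)),
        harmonic_density=min(1.0, max(0.0, density)),
    )

state = MusicParams(tempo_bpm=100.0, harmonic_density=0.5)
state = adapt_music(arousal=0.9, params=state)  # simulated arousal spike
print(state.tempo_bpm)  # tempo drops below 100 in response
```

A deployed version would need smoothing over noisy EEG-derived indices, rate limits so the music never lurches, and a therapist override — the point here is only the direction of the mapping.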
A toolkit that translates EEG signals into responsive color fields and simple metrics clients can actually understand, supporting grounding and regulation work in-session.
Early-stage work toward a de-identified dataset of trauma-related EEG recordings, designed to give future models a more accurate picture of what dysregulation looks like in the wild.
Designing the first controlled pilot comparing traditional talk therapy with and without our EEG + adaptive music adjuncts for veterans living with PTSD.
Extending our clinical note and briefing system to surface anonymized pattern data that can inform future research while preserving client privacy and dignity.
A structured research protocol integrating trauma assessment, EEG capture, adaptive music, and clinical outcomes—forming the backbone for future trials and publications.
Using original music written from a veteran’s perspective as controlled stimuli, allowing us to test how lyrical content, harmonic language, and arrangement impact regulation.
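The EEG-to-color toolkit described earlier could, in its simplest form, render relative band power as a color field a client can read at a glance. A hypothetical mapping — the band-to-channel assignment is an assumption for illustration, not the toolkit's actual scheme:

```python
def bands_to_rgb(theta: float, alpha: float, beta: float):
    """Hypothetical mapping from relative EEG band power to an RGB color:
    alpha-dominant (calmer) states render bluer, beta-dominant (more
    aroused) states render warmer."""
    total = theta + alpha + beta
    if total == 0:
        return (0, 0, 0)
    r = beta / total   # arousal -> red
    g = theta / total  # drowsy/slow activity -> green
    b = alpha / total  # relaxed wakefulness -> blue
    return tuple(round(255 * x) for x in (r, g, b))

# Alpha-dominant power profile -> blue-leaning color.
print(bands_to_rgb(theta=2.0, alpha=5.0, beta=1.0))
```

Clients don't need to know what alpha power is; they only need "bluer means I'm settling," which is the grounding affordance the toolkit is after.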
Trauma, AI, and biometrics are a volatile combination. We treat ethics as a core research track, not an afterthought. Every prototype is evaluated for consent, interpretability, potential harm, and the risk of misuse before it ever reaches a client.