Resonance Research Institute was founded by a combat veteran, trauma therapist, and AI researcher who believes that technology should deepen human connection—not replace it.
What if the brain could speak in color—and music could answer back?
From that question came a series of prototypes: EEG-powered chromatic visualizations, adaptive music engines, and clinical AI tools designed to help veterans and trauma survivors regulate their nervous systems in new ways.
The Institute is where these threads come together: trauma science, creative practice, and machine intelligence collaborating for healing.
Before there was an institute, there was a small office and a heavy question: Why are we still asking trauma survivors to heal using only language?
As a Licensed Clinical Social Worker and combat veteran, our founder spent years sitting with people who could brief a mission in perfect detail but struggled to find words for the aftermath. Traditional talk therapy helped, but it was clear that something was missing—especially for veterans who found retelling their story re-traumatizing.
Parallel to clinical work, a second thread unfolded: building AI systems for emotion detection, music mastering, and pattern recognition. Eventually, the lines crossed. EEG, music, and AI were no longer separate interests—they became a single research question: Can we design intelligent systems that respond to trauma where it actually lives—in the body and nervous system?
Thomas is a clinician, combat veteran, and AI PhD student whose work sits at the boundary between human suffering and technological possibility. He is committed to building tools that keep the therapeutic relationship at the center—while leveraging machine learning, EEG, and music science to expand what’s possible in the therapy room.
Resonance Research Institute exists so that this work can grow beyond one office, one clinician, and one caseload—into a shared project of reimagining trauma recovery.
We believe that the best technology doesn’t automate care—it amplifies it. Every model we train and every system we deploy is evaluated against a simple question: Does this deepen human connection and support real healing?
Our work is trauma-informed at every layer: from how we design interfaces and visualizations, to how we train datasets, to how we structure research participation and feedback.
Every participant, client, and collaborator is treated as a full agent in the process. We honor lived experience and protect autonomy in how technology is used—or not used—in care.
We avoid black-box tools that obscure what’s happening. Clinicians and participants deserve to understand how signals, models, and outputs are being used in their care.
We move boldly yet responsibly at the edge of trauma science, neuroscience, and AI. Our work is iterative, data-informed, and always open to revision as we learn.
Our first loyalty is to those who carry trauma—especially veterans and first responders. The institute exists to serve them, not the other way around.
We work across three interconnected layers: the nervous system’s signals, the music and visuals that respond, and the clinical relationship that makes sense of it all. None of these layers stand alone.
We use EEG and other biometrics to observe how trauma shows up in real time: hyperarousal, dissociation, shutdown, and fragile states of regulation.
We design music and chromatic visuals that respond to those signals—guiding the nervous system toward safety, grounding, and emotional processing.
Clinicians and clients remain at the center. AI surfaces patterns and possibilities, but the story—and the healing—belong to the humans in the room.