Trauma, biometrics, and AI are powerful and risky. This page outlines how we think about those risks, what we refuse to do, and how we intend to be accountable to the people our work touches.
No dataset, model, or paper is worth treating a human being as raw material. Participants, clients, and collaborators retain full personhood in every phase of our work. Opting out is always allowed.
We avoid black-box vibes. People have the right to understand what’s being collected, what models are doing, and how those outputs are being used in care or research.
We assume nervous systems are already carrying more than they should. Every interface, protocol, and experiment is evaluated for overwhelm, re-triggering, and power dynamics.
Veterans, trauma survivors, and clinicians are not "end users"; they are co-authors. We seek feedback from them on what we should build, pause, or never ship.
Consent isn't a one-time signature; it's a continuous conversation, explained in plain language at every stage of participation.
Saying “no” to research participation never affects access to therapy, services, or support.
When in doubt, we err on the side of protecting participants—even if that slows down the science.
As our work scales, formal structures will accompany the informal ones already in place: clinical ethics consultation, IRB review for research, and ongoing feedback loops with veterans and clinicians.
If you have concerns, suggestions, or want to participate in shaping our ethical framework, you’re invited into the conversation.