Our Commitments
We build technology that people rely on as their primary experience of reality. That reliance comes with obligations we've chosen to make public, permanent, and enforceable.
LifeLight is not a conventional product. When a participant enters a session, they are relying on our systems for their sensory environment, their physical maintenance, and their safety. The level of access our technology requires — and the duration over which participants choose to use it — creates obligations beyond what standard terms of service address.
These commitments are how we hold ourselves accountable. They are not aspirations. They are operational constraints that govern how we build, deploy, and scale LifeLight — today and as the technology continues to advance.
Human oversight at every level
Every active session is monitored by a human phader. Every facility operates a staffed core. We do not run unsupervised sessions. We do not automate safety-critical decisions.
Automation handles rendering, optimization, and routine monitoring. Humans handle judgment. When a session drifts outside normal parameters, a person decides what happens next — not a system. This distinction is load-bearing and non-negotiable.
Transparent safety evaluations
Before every major capability release, we conduct and publish safety evaluations describing what we tested, what we found, and what we changed. We maintain an internal red team whose sole purpose is to find vulnerabilities before they find participants.
When things go wrong — and they have — we publish full post-mortems. Not summaries. Not press releases. Complete technical accounts of what happened, why, and what we did about it. The v3.1 post-mortem is our standard, not our exception.
Independent safety infrastructure
The alpha grid — our independent failover and integrity system — operates on separate power, separate processing, and separate physical infrastructure from the main grid. It cannot be compromised by a failure in the system it protects. This is the architectural equivalent of separation of powers.
The origin code is known to one person. It is not stored digitally. It cannot be accessed remotely. These are not security-through-obscurity measures. They are deliberate constraints on our own ability to modify the deepest layer of the system without physical, in-person authorization.
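In general engineering terms, the independence described above is an isolated watchdog: the failover system keeps its own state, shares no resources with the system it guards, and treats silence as failure. The sketch below is a minimal simulation of that pattern only; the class names, the heartbeat mechanism, and the timeout are illustrative assumptions, not LifeLight internals.

```python
import time

class MainGrid:
    """The monitored system: emits heartbeats while healthy."""
    def __init__(self):
        self.healthy = True

    def heartbeat(self):
        # A failed or compromised grid simply stops producing beats.
        return time.monotonic() if self.healthy else None

class AlphaGrid:
    """Isolated watchdog: keeps its own copy of the last heartbeat,
    so a failure inside MainGrid cannot alter the watchdog's state."""
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_seen = time.monotonic()

    def observe(self, beat):
        if beat is not None:
            self.last_seen = beat

    def must_fail_over(self):
        # Decision depends only on the watchdog's own clock and state.
        return time.monotonic() - self.last_seen > self.timeout

main = MainGrid()
watchdog = AlphaGrid(timeout=0.05)
main.healthy = False                 # main grid fails silently
time.sleep(0.1)
watchdog.observe(main.heartbeat())   # no beat arrives
print(watchdog.must_fail_over())     # → True: failover engages
```

The design choice the sketch illustrates is that the guarded system has no code path that can reassure, delay, or disable its guardian.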
Participant autonomy is inviolable
Every participant can end their session at any time. The right button on the wrist controller is a hardware interrupt — it does not pass through the experience engine, the processing code, or any software layer. It is a direct signal to the cubicle to disengage.
We do not optimize for session duration. We do not use behavioral design to discourage session termination. We do not measure our success by how long participants stay. We measure it by whether the experience they chose to have was the experience they received.
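The hardware-interrupt design described above follows a standard safety-interlock pattern: the stop signal acts on the actuator directly and no software layer sits in its path. The following is a minimal simulation of that pattern under stated assumptions; every class and method name here is hypothetical, not a LifeLight internal.

```python
class Cubicle:
    """Simulated actuator with a hard-wired stop line."""
    def __init__(self):
        self.engaged = True

    def stop_line_interrupt(self):
        # Direct hardware path: no software layer can veto or delay this.
        self.engaged = False

class ExperienceEngine:
    """Software layer: renders frames, but holds no handle on the stop
    line and cannot keep the session alive once it fires."""
    def __init__(self, cubicle):
        self._cubicle = cubicle

    def tick(self):
        return "frame" if self._cubicle.engaged else None

cubicle = Cubicle()
engine = ExperienceEngine(cubicle)
cubicle.stop_line_interrupt()   # participant presses the stop button
print(engine.tick())            # → None: the session ends regardless
```

The point of the pattern is directional: the engine reads the cubicle's state, but nothing in the engine can write it back to "engaged" once the interrupt has fired.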
Safety outranks growth
Our directors and senior engineers have the authority to suspend operations territory-wide. This authority cannot be overridden by commercial interests, investor pressure, or demand forecasts.
We have paused facility openings when phader staffing couldn't keep pace. We have delayed capability releases when safety evaluations surfaced concerns. We have chosen not to ship features that our technology can support but that our safety infrastructure cannot yet govern.
Subconscious rendering — the ability to generate experience content from neural data the participant isn't consciously aware of — is technically possible in v4.0. We disabled it. The v3.1 incident demonstrated what happens when subconscious data is surfaced without safeguards. We will not enable this capability until we can guarantee it's safe. We don't have a timeline for that. That's the point.
We will be honest about what we don't know
LifeLight generates experiences indistinguishable from reality using the participant's own neural data. We understand how this works mechanistically. We do not fully understand what it means for the people who use it — cognitively, psychologically, socially — over the timescales at which our participants are choosing to engage with it.
We are studying this. We are publishing what we find. But we will not pretend to have answers we don't have, and we will not wait for certainty before acting on credible concerns.
Accountability
These commitments are published here, permanently, because we want them to be used against us if we fail to honor them. We invite scrutiny. We expect it. The alternative — building this technology without public constraints — is not something we're willing to do.
If you believe we have fallen short of any commitment listed here, contact our Safety team or reach out to the directors directly.
Read our latest safety work
Our most recent red team report and capability evaluation.