Red Teaming LifeLight: What We Found

April 3, 2026 · By Aja Killian

At Veelox, we don’t wait for things to go wrong. We hire people to make them go wrong on purpose.

Our internal red team operates continuously. Their job is to probe the experience engine, the grid architecture, the sensory rendering pipeline, and the physical security systems for vulnerabilities — and to do it with the same creativity and persistence that a genuine adversary would bring.

This is the first time we’re publishing a summary of their findings. It won’t be the last.

How We Red Team

Our red team operates under a simple mandate: assume the system can be broken, and prove it.

They work independently from the engineering team that builds and maintains LifeLight. They have no access to the origin code, no architect-level credentials, and no advance knowledge of upcoming capability releases. They operate under the same constraints as an external adversary — with one exception: they have permission.

Red team engagements fall into three categories:

  • Experience manipulation — Attempting to alter the content of a participant’s session from outside the session. Can a phader inject elements that don’t originate from the participant’s own neural data?
  • Grid integrity — Attempting to compromise the processing code, bypass sensory fidelity caps, or propagate unauthorized modifications through the grid.
  • Physical security — Attempting to access restricted areas (including the Alpha Core), bypass the dual-key suspension protocol, or interfere with cubicle life support systems.

What We Found

Finding 1: Session Bleed Under Load

During peak utilization (>90% facility capacity), the red team identified a narrow window in which sensory rendering data from one session could leak into an adjacent session’s output buffer. The leak was sub-perceptual — no participant would have noticed — but it represented a boundary violation in our isolation model.

Status: Patched. Buffer isolation now operates at the hardware level, independent of load. Verified across all facilities.
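The isolation property described above can be expressed as a simple canary test. This is a hypothetical sketch, not Veelox's actual verification harness: the `RenderBuffer` class and `leaks_between` check are illustrative stand-ins for per-session output buffers, and the idea is simply that a random marker written into one session's buffer must never appear in an adjacent session's output.

```python
# Hypothetical sketch of a cross-session buffer isolation check.
# RenderBuffer and leaks_between are illustrative names, not real APIs.

import secrets

class RenderBuffer:
    """Toy stand-in for a per-session sensory output buffer."""
    def __init__(self, size: int):
        self._data = bytearray(size)

    def write(self, payload: bytes) -> None:
        self._data[: len(payload)] = payload

    def read(self) -> bytes:
        return bytes(self._data)

def leaks_between(a: RenderBuffer, b: RenderBuffer) -> bool:
    """Write a random canary into buffer A and report whether any
    copy of it shows up in buffer B's output."""
    canary = secrets.token_bytes(32)
    a.write(canary)
    return canary in b.read()

# With properly isolated buffers, the canary never crosses over.
buf_a, buf_b = RenderBuffer(1024), RenderBuffer(1024)
assert not leaks_between(buf_a, buf_b)
```

A real test would run this under the >90% load condition the red team identified, since the original leak only appeared at peak utilization.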

Finding 2: Fidelity Cap Drift

The physical circuit breakers that cap sensory output were found to drift by up to 0.3% over extended continuous operation (>6 months without maintenance cycling). While still within certified thresholds, any drift in safety-critical hardware is unacceptable.

Status: Maintenance cycling now mandatory every 90 days. Automated drift monitoring deployed to all facilities.
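The monitoring rule implied by this finding reduces to two thresholds: relative drift against the certified cap, and days since the last maintenance cycle. A minimal sketch, assuming illustrative field names and the 0.3% / 90-day figures from the finding:

```python
# Hypothetical sketch of automated fidelity-cap drift monitoring.
# BreakerReading and its fields are illustrative assumptions.

from dataclasses import dataclass

DRIFT_LIMIT = 0.003        # 0.3% relative drift, per the finding
MAINTENANCE_DAYS = 90      # mandatory cycling interval

@dataclass
class BreakerReading:
    certified_cap: float   # certified sensory output cap
    measured_cap: float    # cap measured at inspection
    days_since_cycle: int  # days since last maintenance cycle

def needs_service(r: BreakerReading) -> bool:
    """Flag a breaker that has drifted past the limit or has gone
    too long without a maintenance cycle."""
    drift = abs(r.measured_cap - r.certified_cap) / r.certified_cap
    return drift >= DRIFT_LIMIT or r.days_since_cycle >= MAINTENANCE_DAYS

# A breaker drifting 0.3% after six months is flagged on both counts:
assert needs_service(BreakerReading(100.0, 100.3, 180))
# A freshly cycled, in-spec breaker is not:
assert not needs_service(BreakerReading(100.0, 100.05, 10))
```

Flagging on either condition independently means a breaker that drifts early is caught before its 90-day cycle comes due.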

Finding 3: Green Card Cloning

The green authorization cards used for Alpha Core access and grid suspension could, in theory, be cloned using equipment available to any senior phader. The card’s cryptographic signature was verified at the reader level but not cross-checked against a central revocation list in real time.

Status: Real-time revocation checking deployed. Card firmware updated with rolling cryptographic challenges. All existing cards rotated.
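The two fixes combine naturally: the reader consults a central revocation list before anything else, then issues a fresh challenge that the card must answer using its shared secret, so a cloned card's recorded responses are useless. The sketch below is a toy model under stated assumptions: the card identifiers, the in-memory `REVOKED` set, and the reader holding issued secrets directly are all illustrative; real card firmware would keep its secret in dedicated secure hardware.

```python
# Hypothetical sketch of real-time revocation checking plus a rolling
# HMAC challenge-response. All names and key handling are illustrative.

import hmac
import hashlib
import secrets

REVOKED: set[str] = set()   # stand-in for the central revocation list

def card_response(card_secret: bytes, challenge: bytes) -> bytes:
    """What the card computes for a fresh per-attempt challenge."""
    return hmac.new(card_secret, challenge, hashlib.sha256).digest()

def reader_authorize(card_id: str, card_secret: bytes,
                     issued_secrets: dict[str, bytes]) -> bool:
    """Reader-side check: revocation list first, then a rolling
    challenge the card must answer with its shared secret."""
    if card_id in REVOKED:
        return False
    # Fresh random challenge per attempt: a replayed response
    # captured from a cloned card will not match it.
    challenge = secrets.token_bytes(16)
    expected = hmac.new(issued_secrets[card_id], challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(card_response(card_secret, challenge),
                               expected)

issued = {"card-042": secrets.token_bytes(32)}
assert reader_authorize("card-042", issued["card-042"], issued)
REVOKED.add("card-042")     # revocation takes effect immediately
assert not reader_authorize("card-042", issued["card-042"], issued)
```

Checking revocation in real time is what closes the gap the red team found: signature verification at the reader alone cannot distinguish a legitimate card from a clone of one that was later rotated out.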

Finding 4: No Finding (Alpha Grid)

The red team spent four months attempting to compromise the alpha grid. They could not. The alpha grid’s physical isolation, independent power, and lack of network connectivity to the main grid make it effectively unreachable without physical access to the Alpha Core — which itself requires green card authorization (now hardened per Finding 3).

We consider this a validation of the architecture, not a reason to stop testing.

Why We’re Publishing This

Because trust is not built by telling people the system is safe. It’s built by showing them how hard we work to prove it isn’t — and then fixing what we find.

Every finding in this report was identified, patched, and verified before any participant was affected. That’s the point. Red teaming is not incident response. It’s incident prevention.

We will publish these summaries annually. Our participants deserve to know not just that we’re watching the system, but that we’re actively trying to break it — because that’s the only way to know it holds.

— Aja Killian, Chief Architect, Veelox Corporation