Platform overview

The infrastructure for the first universal neural model.

Voxel normalizes raw EEG across any device, quality-gates every signal window, and stores each session as a labeled training record. The infrastructure ships today. The dataset is being built. The model is what it all points toward.

Ingestion

Ingestion Layer

POST raw EEG batches from any device to a single REST endpoint. Voxel handles device normalization, channel alignment, and sample rate differences so your application never has to.

  • Any device, any channel layout — MUSE, OpenBCI, Emotiv, custom hardware
  • Batch idempotency via batch_id — safe to retry on network failure
  • Required: batch_id, t0_unix_ms (ms epoch), and channel data arrays
  • Automatic session lifecycle — open, ingest, close
curl -X POST /v1/sessions/$SESSION_ID/ingest \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "batch_id": "b001",
    "t0_unix_ms": 1730000000000,
    "data": {"Fz": [0.1, 0.11], "Cz": [0.2, 0.21]}
  }'
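Because `batch_id` makes ingestion idempotent, a client can simply resend a batch after a network failure. A minimal retry sketch, where `sendBatch` is an injected transport function (illustrative, not part of the API):

```javascript
// Retry an ingest batch, reusing the same batch_id on every attempt.
// Server-side idempotency makes duplicate deliveries harmless.
// `sendBatch` is an illustrative injected function (e.g. a fetch wrapper).
async function ingestWithRetry(sendBatch, batch, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await sendBatch(batch); // same batch_id each time
    } catch (err) {
      lastError = err; // network failure: safe to retry thanks to batch_id
    }
  }
  throw lastError;
}
```

The key design point is that the retry loop never mutates `batch_id`; deduplication is the server's job.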
Pipeline: raw batch (any device) → normalize (μV, channel alignment) → store + emit (features in ~12ms)
Quality

Signal Quality & Artifact Detection

Every feature window carries a Signal Quality Index (SQI) per channel and per-artifact scores. You always know whether to trust a window before feeding it to your model.

  • quality.overall — median SQI across all channels (0 = noise, 1 = clean)
  • quality.per_channel.<channel> — per-electrode SQI
  • artifacts.blink_like, .motion_like, .emg_like, .bad_contact — each 0–1
  • Scores are floats, not booleans — threshold at whatever suits your use case
const res = await fetch(
  `/v1/sessions/${SESSION_ID}/features/latest`,
  { headers: { Authorization: `Bearer ${API_KEY}` } }
);
const win = await res.json();

// Signal Quality Index per channel (0 = noise, 1 = clean)
console.log(win.quality.overall);          // 0.82
console.log(win.quality.per_channel.Fz);   // 0.85

// Artifact scores (0 = none, 1 = likely artifact)
console.log(win.artifacts.blink_like);     // 0.12
console.log(win.artifacts.emg_like);       // 0.03
Example per-channel SQI: Fz 0.85 · Cz 0.72 · Pz 0.45 · Oz 0.91
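Since SQI and artifact scores are floats, gating is a client-side decision. A minimal gate, assuming the response shape shown above (the 0.7 and 0.5 thresholds are arbitrary examples, not recommendations):

```javascript
// Decide whether a feature window is trustworthy enough to use.
// Thresholds are illustrative; tune them for your own use case.
function isUsableWindow(win, { minSqi = 0.7, maxArtifact = 0.5 } = {}) {
  if (win.quality.overall < minSqi) return false;
  // Reject if any artifact score suggests the window is contaminated.
  return Object.values(win.artifacts).every((score) => score <= maxArtifact);
}
```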
Features

Feature Extraction

Each ingested window is decomposed into five frequency bands, spectral entropy, and cross-band ratios — per channel. All values are returned alongside their quality metadata.

  • Delta (0.5–4 Hz), Theta (4–8 Hz), Alpha (8–13 Hz), Beta (13–30 Hz), Gamma (30+ Hz)
  • Bandpower in μV² — calibrated, not raw FFT magnitude
  • Spectral entropy (0–1) — measures signal complexity per window
  • Alpha/beta and theta/beta ratios — useful for focus/relaxation classification
{
  "features": {
    "Fz": {
      "bandpower": {
        "alpha": 15.7,  "theta": 8.1,
        "beta":  6.3,   "delta": 12.4,
        "gamma": 1.2
      },
      "spectral_entropy": 0.74
    }
  },
  "quality": { "overall": 0.82 },
  "artifacts": { "blink_like": 0.12 }
}
Example band levels for a single 2s window on channel Fz (bandpower in μV², bars shown as relative %): Delta 55% · Theta 36% · Alpha 70% · Beta 28% · Gamma 9%
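The cross-band ratios can also be derived client-side from the bandpower values. A sketch, assuming the `features` shape shown above:

```javascript
// Compute alpha/beta and theta/beta ratios from a channel's bandpower (μV²).
function bandRatios(bandpower) {
  const { alpha, beta, theta } = bandpower;
  return {
    alpha_beta: alpha / beta, // higher tends to accompany relaxation
    theta_beta: theta / beta, // commonly used in attention/focus measures
  };
}

// Using the Fz values from the example window:
const r = bandRatios({ alpha: 15.7, theta: 8.1, beta: 6.3, delta: 12.4, gamma: 1.2 });
// r.alpha_beta ≈ 2.49, r.theta_beta ≈ 1.29
```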

Evaluation

Generalization Benchmarks

Deep tech credibility comes from measurable generalization. Voxel pipelines are designed to be evaluated across the axes that matter for a universal model — not just within a single device or subject.

  • Across-device accuracy — same task, different hardware, no retraining
  • Across-session stability — feature drift detection per subject over time
  • Across-subject transfer — how well window features generalize to new users
  • Uncertainty quantification — confidence scores on every feature window
  • Reproducible pipelines + full audit trails for every processed session
# Benchmark output (per evaluation run)
{
  "eval_type":        "cross_device",
  "source_device":    "MUSE_2",
  "target_device":    "OPENBCI_CYTON",
  "task":             "alpha_suppression",
  "windows_tested":   1240,
  "feature_drift":    0.08,   // low = good transfer
  "sqi_agreement":    0.91,
  "pipeline_version": "1.0.0",
  "audit_hash":       "sha256:4f2a..."
}
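The `feature_drift` field above is a pipeline output. As an illustration only, one plausible drift-style score is a mean relative difference across matched band features (this is a hypothetical definition, not Voxel's actual metric):

```javascript
// Hypothetical drift score: mean relative absolute difference between
// matched bandpower features from two devices. 0 = identical, higher = more drift.
// Illustrative definition only, not Voxel's internal metric.
function featureDrift(source, target) {
  const bands = Object.keys(source);
  const diffs = bands.map(
    (b) => Math.abs(source[b] - target[b]) / Math.max(source[b], target[b])
  );
  return diffs.reduce((sum, d) => sum + d, 0) / bands.length;
}
```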
SQI agreement by axis (alpha_suppression task, v1.0 pipeline): cross-device 91% · cross-session 84% · cross-subject 73%

Realtime

Realtime WebSocket Streaming

Each session gets its own WebSocket URL. Feature windows are pushed as they complete — no polling, no batch waits. The message type is 'features'; quality and artifact data are included on every event.

  • Dedicated ws_url per session — returned on session creation
  • Message type: "features" — data nested under msg.data
  • msg.data includes features, quality, artifacts, window timestamps
  • Typical end-to-end latency: ~12ms from last ingest batch
const ws = new WebSocket(session.ws_url);

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "features") {
    const { quality, features, artifacts } = msg.data;
    // quality.overall — 0-1 SQI
    // features.Fz.bandpower.alpha — μV²
    updateDashboard(quality, features);
  }
};
Example live window: alpha 15.7 μV² · SQI 0.91 · lag 11 ms
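Sessions can be long-lived, so a production client should reconnect when the socket drops. A minimal exponential backoff schedule (the base delay and cap are illustrative choices, not API requirements):

```javascript
// Exponential backoff delay for WebSocket reconnect attempts, capped at 30s.
// Base delay and cap are illustrative; pick values that fit your app.
function reconnectDelayMs(attempt, baseMs = 250, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// attempt 0 → 250ms, attempt 2 → 1000ms, large attempts capped at 30000ms
```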
Neural Dataset

Every session is a training record.

Voxel stores each processed window as a normalized, quality-gated, device-labeled training example. There is no large-scale cross-device EEG dataset today — no ImageNet for the brain. Every API call you make contributes to building it. The infrastructure is the wedge. The neural model trained on this data is the product.

Consent-first data network

All data is de-identified and permissioned at the session level. Researchers and companies contribute and access datasets under a governance framework — not a data broker model. You control what you share.

  • Device label: MUSE_2
  • SQI gate: > 0.70
  • Artifact mask: included
  • Format: normalized
  • Governance: permissioned
  • Identity: de-identified
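The dataset card above reads as a filter: only windows that clear the SQI gate become training records. A client-side sketch of that gate, assuming the window shape from the Features section (the record's field names are illustrative, not the stored schema):

```javascript
// Turn a processed window into a dataset-style record, or null if it
// fails the SQI gate. Field names are illustrative, not the stored schema.
function toTrainingRecord(win, deviceLabel, sqiGate = 0.7) {
  if (win.quality.overall <= sqiGate) return null; // quality-gated out
  return {
    device_label: deviceLabel,    // e.g. "MUSE_2"
    features: win.features,       // normalized band features
    artifact_mask: win.artifacts, // kept alongside, not discarded
    sqi: win.quality.overall,
  };
}
```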
Technical advantage

Why teams choose Voxel

Purpose-built for neural data. Faster, smarter, and more reliable than rolling your own.

Feature               | DIY               | Competitors         | Voxel
Setup time            | 4-6 months        | 2-4 weeks           | < 5 minutes
Device support        | 1 device/pipeline | 2-3 devices         | Any device
Real-time streaming   | Polling only      |                     | Yes
Latency (P95)         | 300-800 ms        | 150-400 ms          | < 50 ms
Quality gating (SQI)  | Manual            |                     | Yes
Artifact detection    | Custom code       | Basic               | Advanced ML
WebSocket streaming   |                   |                     | Yes
Batch idempotency     |                   |                     | Yes
SDK support           | Python only       | Python (TS planned) |
Uptime SLA            | Self-hosted       | 99%                 | 99.9%

What makes Voxel fast?

Streaming-first architecture

Per-session WebSocket endpoints. No polling, no batch delays — feature windows are pushed as they complete, typically within 12ms of the last ingest batch.

Single-pass DSP pipeline

FFT windowing, bandpower decomposition, artifact scoring, and SQI computed in a single pass per ingest batch — quality metadata is never an afterthought.
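As an illustration of the single-pass idea (not Voxel's implementation), bandpower and spectral entropy can share one traversal of a window's power spectrum:

```javascript
// Illustrative single-pass sketch: per-band power and spectral entropy
// computed together from one power spectrum, without re-reading samples.
// Naive DFT for clarity; a real pipeline would use an FFT.
function analyzeWindow(samples, sampleRateHz) {
  const n = samples.length;
  const nBins = Math.floor(n / 2);
  const power = new Array(nBins).fill(0);
  for (let k = 0; k < nBins; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const ang = (-2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(ang);
      im += samples[t] * Math.sin(ang);
    }
    power[k] = (re * re + im * im) / n;
  }
  const bands = {
    delta: [0.5, 4], theta: [4, 8], alpha: [8, 13],
    beta: [13, 30], gamma: [30, sampleRateHz / 2],
  };
  const bandpower = { delta: 0, theta: 0, alpha: 0, beta: 0, gamma: 0 };
  const total = power.reduce((a, p) => a + p, 0) || 1;
  let entropy = 0;
  for (let k = 1; k < nBins; k++) { // one pass: bandpower and entropy together
    const freq = (k * sampleRateHz) / n;
    for (const [name, [lo, hi]] of Object.entries(bands)) {
      if (freq >= lo && freq < hi) bandpower[name] += power[k];
    }
    const p = power[k] / total;
    if (p > 0) entropy -= p * Math.log(p);
  }
  return { bandpower, spectral_entropy: entropy / Math.log(nBins) };
}
```

Feeding a pure 10 Hz sine through this sketch concentrates power in the alpha band and drives the normalized entropy toward zero, matching the intuition that entropy measures spectral complexity.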

Global deployment

Deployed on Fly.io with automatic traffic routing to the nearest region. REST and WebSocket endpoints share the same latency budget.

Build the app. Shape the dataset.

Every session you run in production contributes to the neural dataset that doesn't exist yet. Book a demo to see Voxel in action.

Book a Demo