Raw EEG from a consumer headset: dropped samples (NaN) and motion spikes, before any cleaning.

| timestamp | TP9 | AF7 | AF8 | TP10 |
|---|---|---|---|---|
| 00:00.000 | 820.5 | 830.1 | 2847.3 | -199.4 |
| 00:00.004 | NaN | 831.4 | 814.7 | 825.2 |
| 00:00.008 | 819.8 | 828.9 | 815.2 | NaN |
| 00:00.012 | 2941.2 | 830.5 | -189.3 | 825.9 |
| 00:00.016 | 820.1 | NaN | 814.9 | 824.6 |
The moment
The hardware arrived.
The infrastructure didn't.
Consumer EEG devices are proliferating — MUSE, OpenBCI, Emotiv, and dozens of others are putting neural sensing in the hands of developers and researchers. The hardware is getting cheaper, more accurate, and easier to use.
But the software stack hasn't kept up. Every team rebuilds the same pipelines — device drivers, signal ingestion, quality gating, artifact detection, feature extraction, real-time streaming. Undifferentiated work that takes 4–6 months and distracts from building actual products.
The BCI market is at an inflection point — moving from research labs to consumer products. This is the moment to build the infrastructure layer before thousands more teams waste months rebuilding it.
Any device, one endpoint.
POST raw signal batches from any neural device. Variable sample rates, arbitrary channel layouts, batch-level idempotency.
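As a sketch of that contract in Python (the payload fields follow the curl call below; the base URL and the use of `requests` are assumptions, not an official SDK):

```python
import json

def build_batch(batch_id: str, t0_unix_ms: int, channels: dict) -> str:
    """Assemble one ingest payload. Re-sending the same batch_id on a
    retry is safe: ingestion is idempotent at the batch level."""
    return json.dumps({
        "batch_id": batch_id,
        "t0_unix_ms": t0_unix_ms,
        "data": channels,  # arbitrary channel layout, e.g. {"TP9": [...], "AF7": [...]}
    })

payload = build_batch("batch_001", 1730000000000,
                      {"TP9": [820.5, 821.2], "AF7": [819.8, 828.9]})

# POST it (hypothetical base URL; auth headers as in the curl call):
# requests.post(f"https://api.voxel.example/v1/sessions/{session_id}/ingest",
#               headers={"Authorization": f"Bearer {api_key}",
#                        "X-Project-Id": project_id,
#                        "Content-Type": "application/json"},
#               data=payload)
```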
curl -X POST /v1/sessions/$SESSION_ID/ingest \
  -H "Authorization: Bearer $API_KEY" \
  -H "X-Project-Id: $PROJECT_ID" \
  -H "Content-Type: application/json" \
  -d '{"batch_id":"batch_001",
       "t0_unix_ms":1730000000000,
       "data":{"TP9":[820.5,821.2,...],"AF7":[...]}}'
Know when to trust your data.
Per-channel SQI (0–1), plus blink, motion, and EMG artifact detection. Bad channels are flagged — never silently corrupting your model.
| Channel | SQI |
|---|---|
| TP9 | 0.91 |
| AF7 | 0.85 |
| AF8 | 0.62 |
| TP10 | 0.41 |
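Gating on those scores is a one-liner; a sketch (the 0.7 cutoff is illustrative, not a recommended default):

```python
def usable_channels(sqi: dict, threshold: float = 0.7) -> list:
    """Channels whose signal-quality index clears the threshold."""
    return sorted(ch for ch, q in sqi.items() if q >= threshold)

window_sqi = {"TP9": 0.91, "AF7": 0.85, "AF8": 0.62, "TP10": 0.41}
good = usable_channels(window_sqi)  # AF8 and TP10 are flagged, not silently used
```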
Five bands, per-channel confidence.
Delta, theta, alpha, beta, gamma bandpower plus spectral entropy and cross-band ratios — all with confidence scores.
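Cross-band ratios fall straight out of the bandpower values; a sketch with made-up numbers (the theta/beta ratio is a common derived metric in EEG work, not a Voxel-specific field):

```python
BANDS = ("delta", "theta", "alpha", "beta", "gamma")

def relative_bandpower(bp: dict) -> dict:
    """Normalize absolute bandpower so the five bands sum to 1.0."""
    total = sum(bp[b] for b in BANDS)
    return {b: bp[b] / total for b in BANDS}

bp = {"delta": 20.0, "theta": 10.0, "alpha": 15.0, "beta": 4.0, "gamma": 1.0}
rel = relative_bandpower(bp)           # rel["alpha"] == 0.3
theta_beta = bp["theta"] / bp["beta"]  # 2.5, a common engagement proxy
```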
Sub-100ms, no polling.
Dedicated WebSocket per session. Feature windows publish as computed — typically within 12ms of the last ingest batch.
const ws = new WebSocket(session.ws_url);
ws.onmessage = ({ data }) => {
  const msg = JSON.parse(data);
  if (msg.type === "features") {
    const alpha = msg.data.features.bandpower.alpha;
    const sqi = msg.data.quality.overall; // 0–1
  }
};
Raw signal in. Structured data out.
Every device has its own SDK, format, and quirks. Voxel normalizes all of it — one API call returns the same shape regardless of hardware.
# MUSE via muselsl
import muselsl
stream = muselsl.stream("your_muse_mac")
# Returns: raw LSL stream
# Format: device-specific
# Quality: none
# Latency: you figure it out

# OpenBCI via BrainFlow
board = BoardShim(BoardIds.CYTON_BOARD, params)
board.start_stream()
# Returns: numpy array, 24-bit
# Format: board-specific CSV
# Quality: none — artifacts yours

# Emotiv via Cortex
headset = CortexClient(client_id, secret)
await headset.subscribe(["eeg"])
# Returns: vendor JSON with proprietary
#   channel names and scaling
# Quality: none
{
"session_id": "ses_4f2a9b8c",
"window_id": "w_00342",
"device": "MUSE_2",
"t_ms": 2000,
"bandpower": {
"TP9": { "alpha": 15.7, "sqi": 0.91 },
"AF7": { "alpha": 11.2, "sqi": 0.85 },
"AF8": { "alpha": 13.8, "sqi": 0.79 },
"TP10": { "alpha": 18.3, "sqi": 0.62 }
},
"artifacts": { "blink": false, "emg": false },
"latency_ms": 11
}
Same response schema whether your device is a MUSE, OpenBCI, Emotiv, or custom hardware. SQI, bandpower, and artifact flags included on every window.
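One way to consume a window like this is to let SQI weight each channel's contribution — a sketch, assuming the field names in the response above:

```python
def weighted_alpha(bandpower: dict) -> float:
    """SQI-weighted mean alpha: noisy channels (low sqi) contribute less."""
    num = sum(ch["alpha"] * ch["sqi"] for ch in bandpower.values())
    den = sum(ch["sqi"] for ch in bandpower.values())
    return num / den

window = {
    "TP9":  {"alpha": 15.7, "sqi": 0.91},
    "AF7":  {"alpha": 11.2, "sqi": 0.85},
    "AF8":  {"alpha": 13.8, "sqi": 0.79},
    "TP10": {"alpha": 18.3, "sqi": 0.62},
}
alpha = weighted_alpha(window)  # pulled toward the cleaner TP9/AF7 readings
```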
| Avg latency | Freq bands | Max sample rate | Uptime SLA |
|---|---|---|---|
| < 47 ms | 5 | 256 Hz | 99.9% |
Start building on neural data.
Free tier up to 100 sessions/month. EEG support is live now — EMG, fNIRS, and additional devices are on the roadmap.