The platform for neural data
From raw EEG streams to production-ready features, Voxel handles the entire pipeline so you can focus on your application.
Ingestion Layer
Accepts raw EEG batches from any device via a simple REST endpoint. Supports variable sample rates, arbitrary channel configurations, and batch-level idempotency.
- Any device, any channel layout
- Batch-level idempotency via batch_id
- Up to 4096 samples per batch
- Automatic session lifecycle management
curl -X POST /v1/sessions/$SESSION_ID/ingest \
-H "Authorization: Bearer $API_KEY" \
-d '{"batch_id":"b001","data":{"Fz":[0.1,0.11],"Cz":[0.2,0.21]}}'Quality & Artifact Detection
Quality & Artifact Detection
Every feature window includes a per-channel Signal Quality Index (SQI) and artifact detection scores. Know exactly when your data is trustworthy.
- Per-channel SQI (0–1 scale)
- Blink, motion, EMG, bad contact detection
- Configurable quality thresholds
- Confidence scores on all outputs
// `featureWindow` is the HTTP response for a single feature window.
const { quality, artifacts, confidence } = await featureWindow.json();
console.log(quality.overall);        // 0.82
console.log(quality.per_channel.Fz); // 0.85
console.log(artifacts.blink_like);   // 0.12
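One pattern this enables is client-side quality gating: skip or down-weight windows whose SQI falls below your threshold. A rough sketch follows; the FeatureWindow shape is inferred from the fields shown above, and the 0.7 / 0.5 cutoffs are placeholder choices rather than platform defaults.

// Shape inferred from the fields shown above; other artifact scores are omitted.
interface FeatureWindow {
  quality: { overall: number; per_channel: Record<string, number> };
  artifacts: { blink_like: number };
  confidence: number;
}

// Keep a window only if its overall SQI clears the threshold and it is not
// dominated by a blink-like artifact.
function isTrustworthy(w: FeatureWindow, minSqi = 0.7, maxBlink = 0.5): boolean {
  return w.quality.overall >= minSqi && w.artifacts.blink_like <= maxBlink;
}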
Feature Store
Bandpower decomposition, spectral entropy, and cross-band ratios — computed per window and stored for retrieval.
- Delta, theta, alpha, beta, gamma bandpower
- Alpha/beta and theta/beta ratios
- Spectral entropy per window
- Per-channel feature breakdown
{
  "features": {
    "Fz": {
      "bandpower": {
        "alpha": 15.7, "theta": 8.1,
        "beta": 6.3, "delta": 12.4
      },
      "spectral_entropy": 0.74
    }
  }
}
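The cross-band ratios listed above are simple quotients of per-band power. The sketch below (the type and helper names are ours) shows what they mean, computed from the Fz values in the example payload; in practice you would read the stored ratios directly rather than recompute them.

// Bandpower values for one channel, matching the "bandpower" object above.
type Bandpower = { delta: number; theta: number; alpha: number; beta: number; gamma?: number };

// Alpha/beta and theta/beta ratios for a single channel.
function bandRatios(bp: Bandpower): { alphaBeta: number; thetaBeta: number } {
  return {
    alphaBeta: bp.alpha / bp.beta, // 15.7 / 6.3 ≈ 2.49 for the Fz example
    thetaBeta: bp.theta / bp.beta, //  8.1 / 6.3 ≈ 1.29
  };
}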
Realtime Streaming
Every session gets a WebSocket endpoint. Feature windows are published as they're computed — typically under 100ms from ingestion.
- WebSocket endpoint per session
- Sub-100ms feature delivery
- Automatic reconnection support
- Structured JSON payloads
const ws = new WebSocket(session.ws_url);
ws.onmessage = (event) => {
  const { type, quality, features } = JSON.parse(event.data);
  if (type === "feature_window") {
    updateDashboard(quality, features);
  }
};
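The list above mentions automatic reconnection support; if you are driving the raw WebSocket yourself rather than relying on an SDK, a minimal reconnect loop with backoff could look like the following sketch (the connect helper and delay constants are illustrative assumptions).

// Reconnect with simple exponential backoff if the socket drops.
function connect(wsUrl: string, onWindow: (msg: any) => void, attempt = 0): void {
  const ws = new WebSocket(wsUrl);

  ws.onopen = () => { attempt = 0; }; // reset backoff once connected

  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data);
    if (msg.type === "feature_window") onWindow(msg);
  };

  ws.onclose = () => {
    const delay = Math.min(30_000, 500 * 2 ** attempt); // cap backoff at 30s
    setTimeout(() => connect(wsUrl, onWindow, attempt + 1), delay);
  };
}

// Usage:
// connect(session.ws_url, ({ quality, features }) => updateDashboard(quality, features));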
Why teams choose Voxel
Purpose-built for neural data. Faster, smarter, and more reliable than rolling your own.
| Feature | DIY | Competitors | Voxel |
|---|---|---|---|
| Setup time | 4-6 months | 2-4 weeks | < 5 minutes |
| Device support | 1 device/pipeline | 2-3 devices | Any device |
| Real-time streaming | Polling only | | WebSocket push |
| Latency (P95) | 300-800ms | 150-400ms | < 50ms |
| Quality gating (SQI) | Manual | | Per-channel, built in |
| Artifact detection | Custom code | Basic | Advanced ML |
| WebSocket streaming | | | Per-session endpoint |
| Batch idempotency | | | Built in (batch_id) |
| Multi-language SDKs | Python only | | Python, TypeScript, Go |
| Uptime SLA | Self-hosted | 99% | 99.9% |
What makes Voxel fast?
Streaming-first architecture
Built on WebSockets and persistent connections. No polling, no batch delays—features publish as computed.
Optimized DSP pipeline
SIMD-accelerated FFTs, zero-copy windowing, and inline quality checks. Every microsecond counts.
Edge deployment
Low-latency compute regions worldwide. Your data stays close to your users.