Documentation
Everything you need to integrate with the Voxel platform — real-time EEG streaming and offline batch processing with a single API.
Quickstart
Get from zero to processed EEG feature windows in under 5 minutes.
Python
pip install voxel-sdk
Node / TypeScript
npm install @voxel/sdk
- Sign up at your hosted URL or run locally with make setup.
- Create a project and generate an API key from the dashboard.
- Install the SDK, create a session, and start ingesting data.
Python SDK — streaming
import time
import voxel
client = voxel.Client(
api_key="vxl_...",
project_id="your-project-id",
)
# Create a streaming session
session = client.sessions.create(
external_user_id="user_1",
device_type="MUSE_2",
sample_rate_hz=256,
channels=["TP9", "AF7", "AF8", "TP10"],
)
# Ingest a batch of EEG samples
session.ingest(
batch_id="batch_001",
t0_unix_ms=int(time.time() * 1000),
data={
"TP9": [820.5, 821.2, 819.8, 820.1],
"AF7": [830.1, 831.4, 828.9, 830.5],
"AF8": [815.3, 816.0, 814.7, 815.8],
"TP10": [825.0, 825.7, 824.3, 825.2],
},
)
# Get the latest computed feature window
features = session.features.latest()
print(f"Alpha: {features.bandpower.alpha:.2f} μV²")
print(f"SQI: {features.sqi:.2f}")
session.end()

TypeScript SDK — streaming
import { VoxelClient } from "@voxel/sdk";
const client = new VoxelClient({
apiKey: "vxl_...",
projectId: "your-project-id",
});
// Create a session
const session = await client.createSession({
external_user_id: "user_1",
device_type: "MUSE_2",
sample_rate_hz: 256,
channels: ["TP9", "AF7", "AF8", "TP10"],
});
// Ingest a batch
await client.ingestBatch(session.session_id, {
batch_id: "batch_001",
t0_unix_ms: Date.now(),
data: {
TP9: [820.5, 821.2, 819.8],
AF7: [830.1, 831.4, 828.9],
AF8: [815.3, 816.0, 814.7],
TP10: [825.0, 825.7, 824.3],
},
});
// Get computed features
const windows = await client.getFeatures(session.session_id, 1);
const alpha = windows[0]?.features.bandpower.alpha;
console.log("Alpha:", alpha, "SQI:", windows[0]?.quality.overall);
await client.endSession(session.session_id);

curl — full streaming sequence
# 1. Login
TOKEN=$(curl -s -X POST http://localhost:8000/v1/auth/login \
-H "Content-Type: application/json" \
-d '{"email":"demo@voxel.dev","password":"demo1234"}' | jq -r .access_token)
# 2. Create a project
PROJECT_ID=$(curl -s -X POST http://localhost:8000/v1/projects \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"name":"My Project"}' | jq -r .id)
# 3. Create an API key
API_KEY=$(curl -s -X POST http://localhost:8000/v1/projects/$PROJECT_ID/keys \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"name":"dev-key"}' | jq -r .raw_key)
# 4. Create a streaming session
SESSION_ID=$(curl -s -X POST http://localhost:8000/v1/sessions \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d '{
"external_user_id": "user_1",
"device_type": "MUSE_2",
"sample_rate_hz": 256,
"channels": ["TP9","AF7","AF8","TP10"]
}' | jq -r .session_id)
# 5. Ingest a batch
curl -s -X POST http://localhost:8000/v1/sessions/$SESSION_ID/ingest \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d '{
"batch_id": "batch_001",
"t0_unix_ms": 1730000000000,
"data": {
"TP9": [820.5, 821.2, 819.8, 820.1],
"AF7": [830.1, 831.4, 828.9, 830.5],
"AF8": [815.3, 816.0, 814.7, 815.8],
"TP10": [825.0, 825.7, 824.3, 825.2]
}
}'
# 6. End the session
curl -s -X POST http://localhost:8000/v1/sessions/$SESSION_ID/end \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"

Authentication
Voxel uses two auth schemes depending on what you are doing.
| Scheme | Used for | How to obtain |
|---|---|---|
| JWT Bearer | Projects, keys, user management | POST /v1/auth/login |
| API Key | Sessions, ingestion, features, export | Dashboard → API Keys |
All API-key requests require both Authorization: Bearer <API_KEY> and X-Project-Id: <PROJECT_ID> headers.
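Because every data-plane call needs the same two headers, it can be convenient to build them once. A minimal helper sketch (not part of any SDK — the function name is ours):

```python
def api_key_headers(api_key: str, project_id: str) -> dict:
    """Headers required on every API-key request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "X-Project-Id": project_id,
        "Content-Type": "application/json",
    }
```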
# JWT auth — management operations (projects, keys, sessions list)
curl http://localhost:8000/v1/projects \
-H "Authorization: Bearer $JWT_TOKEN"
# API key auth — data operations (ingest, features, export)
curl http://localhost:8000/v1/sessions \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"

Ingestion Modes
Every session has an ingestion_mode that controls the processing lifecycle. Choose the right mode for your use case when creating the session.
STREAMING: live EEG device. Batches are processed as they arrive. Feature windows are published to the WebSocket channel in real time. The session stays open until you call end.
RECORDED: offline file or historical export. Upload all batches, then call end. The session transitions PROCESSING → COMPLETE; poll /status to know when feature extraction is done.
Streaming session
# Create a STREAMING session (default)
curl -s -X POST http://localhost:8000/v1/sessions \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d '{
"external_user_id": "user_1",
"device_type": "MUSE_2",
"sample_rate_hz": 256,
"channels": ["TP9","AF7","AF8","TP10"],
"ingestion_mode": "STREAMING"
}'

Recorded session — Python SDK
import voxel
client = voxel.Client(api_key="vxl_...", project_id="proj_...")
# Use RECORDED mode for offline EDF / existing data
session = client.sessions.create(
external_user_id="subject_42",
device_type="CYTON",
sample_rate_hz=250,
channels=["Fp1", "Fp2", "C3", "C4", "P3", "P4", "O1", "O2"],
ingestion_mode="RECORDED",
)
for i, batch in enumerate(my_eeg_batches):
session.ingest(batch_id=f"batch_{i:04d}", t0_unix_ms=batch.t_ms, data=batch.data)
session.end()
# Block until feature extraction finishes
status = session.wait_for_processing(
on_progress=lambda s: print(f" {s.processing_state} — {s.window_count} windows")
)
windows = session.features.list()
alpha_series = [w.bandpower.alpha for w in windows]

Recorded session — curl
# Create a RECORDED session
SESSION_ID=$(curl -s -X POST http://localhost:8000/v1/sessions \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d '{
"external_user_id": "user_1",
"device_type": "MUSE_2",
"sample_rate_hz": 256,
"channels": ["TP9","AF7","AF8","TP10"],
"ingestion_mode": "RECORDED"
}' | jq -r .session_id)
# Upload all batches
for BATCH in batch_001 batch_002 batch_003; do
curl -s -X POST http://localhost:8000/v1/sessions/$SESSION_ID/ingest \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d "{\"batch_id\":\"$BATCH\",\"t0_unix_ms\":1730000000000,\"data\":{...}}"
done
# End session — triggers processing_state → COMPLETE
curl -s -X POST http://localhost:8000/v1/sessions/$SESSION_ID/end \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"
# Poll until done
curl -s http://localhost:8000/v1/sessions/$SESSION_ID/status \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"

GET /v1/sessions/:id/status
Returns the current processing state and number of extracted windows. Useful for polling recorded sessions until processing_state is COMPLETE.
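A generic polling loop for this endpoint might look like the sketch below. `get_status` stands in for whatever HTTP call you use (for example a `requests.get` against `/v1/sessions/:id/status`); the function and parameter names are illustrative, not part of the SDK:

```python
import time

def wait_until_complete(get_status, interval_s=3.0, timeout_s=600.0):
    """Poll get_status() until processing_state is COMPLETE.

    get_status must return the /status payload as a dict. Raises
    TimeoutError if the session does not finish within timeout_s.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status.get("processing_state") == "COMPLETE":
            return status
        time.sleep(interval_s)
    raise TimeoutError("session still processing after %.0f s" % timeout_s)
```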
{
"session_id": "fbf09e2e-8688-4631-a4ab-388719005cac",
"ingestion_mode": "RECORDED",
"processing_state": "COMPLETE",
"ended_at": "2025-01-15T10:35:00Z",
"window_count": 88
}

Poll /status every 2–5 seconds. Once processing_state === "COMPLETE", call /features to retrieve all extracted windows or /export to download the full dataset.

Sessions
A session represents a single recording from a device — either a live stream or an uploaded file. Sessions own all raw samples and computed feature windows.
POST /v1/sessions — create
curl -s -X POST http://localhost:8000/v1/sessions \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d '{
"external_user_id": "user_1",
"device_type": "MUSE_2",
"sample_rate_hz": 256,
"channels": ["TP9","AF7","AF8","TP10"],
"ingestion_mode": "STREAMING"
}'

Response:
{
"session_id": "fbf09e2e-8688-4631-a4ab-388719005cac",
"project_id": "a3b2c1d0-...",
"external_user_id": "user_1",
"device_type": "MUSE_2",
"sample_rate_hz": 256,
"channels": ["TP9", "AF7", "AF8", "TP10"],
"started_at": "2025-01-15T10:30:00Z",
"ended_at": null,
"ingestion_mode": "STREAMING",
"processing_state": null,
"ws_url": "ws://localhost:8000/v1/ws?session_id=fbf09...&ws_token=eyJ..."
}

GET /v1/sessions — list
Accepts optional ingestion_mode=STREAMING|RECORDED and limit (max 200) query parameters.
# List all sessions
curl -s "http://localhost:8000/v1/sessions?limit=50" \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"
# Filter by ingestion mode
curl -s "http://localhost:8000/v1/sessions?ingestion_mode=RECORDED" \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"

POST /v1/sessions/:id/end — end
Marks the session as ended. For RECORDED sessions this triggers feature extraction; poll /status until processing_state reaches COMPLETE.
curl -s -X POST http://localhost:8000/v1/sessions/$SESSION_ID/end \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d '{"ended_at_unix_ms": 1730000030000}'

Ingestion
Data is ingested in batches via POST /v1/sessions/:id/ingest. Each batch carries a batch_id idempotency key, a start timestamp, and channel data as parallel arrays of floats (one float per sample per channel).
- All channels declared at session creation must be present in every batch.
- Channel arrays must be the same length.
- Duplicate batch_id returns duplicate_ignored — safe to retry on failure.
- Works identically for both STREAMING and RECORDED sessions.
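These constraints suggest slicing a long recording into equal-length batches with deterministic batch_ids, so a retried upload hits the idempotency path instead of duplicating data. A minimal sketch (the helper name and batch shape are ours, not part of the SDK):

```python
def make_batches(data, sample_rate_hz, t0_unix_ms, batch_size=256):
    """Split parallel channel arrays into equal-length ingest batches."""
    n = len(next(iter(data.values())))
    if any(len(v) != n for v in data.values()):
        raise ValueError("channel arrays must be the same length")
    ms_per_sample = 1000.0 / sample_rate_hz
    batches = []
    for i, start in enumerate(range(0, n, batch_size)):
        chunk = {ch: s[start:start + batch_size] for ch, s in data.items()}
        batches.append({
            "batch_id": f"batch_{i:04d}",  # deterministic idempotency key
            "t0_unix_ms": int(t0_unix_ms + start * ms_per_sample),
            "data": chunk,
        })
    return batches
```

Each dict can then be passed straight to `session.ingest(**batch)`; re-sending any of them after a network failure is safe because the server answers duplicate_ignored for a seen batch_id.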
curl example
curl -s -X POST http://localhost:8000/v1/sessions/$SESSION_ID/ingest \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" \
-H "Content-Type: application/json" \
-d '{
"batch_id": "batch_001",
"t0_unix_ms": 1730000000000,
"data": {
"TP9": [820.5, 821.2, 819.8, 820.1, 820.9],
"AF7": [830.1, 831.4, 828.9, 830.5, 831.0],
"AF8": [815.3, 816.0, 814.7, 815.8, 815.5],
"TP10": [825.0, 825.7, 824.3, 825.2, 825.8]
}
}'

Response:
{
"status": "accepted",
"sample_count": 5
}
// Idempotent re-submission:
{ "status": "duplicate_ignored", "sample_count": 0 }

Python SDK
import time, voxel
client = voxel.Client(api_key="vxl_...", project_id="proj_...")
session = client.sessions.create(
external_user_id="user_1",
device_type="MUSE_2",
sample_rate_hz=256,
channels=["TP9", "AF7", "AF8", "TP10"],
)
session.ingest(
batch_id="batch_001",
t0_unix_ms=int(time.time() * 1000),
data={"TP9": [820.5, 821.2], "AF7": [830.1, 831.4], "AF8": [815.3, 816.0], "TP10": [825.0, 825.7]},
)

TypeScript SDK
import { VoxelClient } from "@voxel/sdk";
const client = new VoxelClient({ apiKey: "vxl_...", projectId: "proj_..." });
const session = await client.createSession({
external_user_id: "user_1",
device_type: "MUSE_2",
sample_rate_hz: 256,
channels: ["TP9", "AF7", "AF8", "TP10"],
});
await client.ingestBatch(session.session_id, {
batch_id: "batch_001",
t0_unix_ms: Date.now(),
data: {
TP9: [820.5, 821.2, 819.8],
AF7: [830.1, 831.4, 828.9],
AF8: [815.3, 816.0, 814.7],
TP10: [825.0, 825.7, 824.3],
},
});

Feature Windows
Feature windows are computed server-side as batches arrive. Each 2-second window contains band power (δ θ α β γ), inter-band ratios, spectral entropy, per-channel artifact scores, a signal quality index (SQI), and a model confidence score.
Paginate with limit (max 1000) and after_ms. Filter to quality-gated windows with only_passed=true.
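One way to drain the endpoint is a cursor loop over after_ms. The sketch below takes a `fetch_page` callable (for example a thin wrapper around an HTTP GET against the features endpoint) so the paging logic is independent of the HTTP client; the names are illustrative, not part of the SDK:

```python
def iter_feature_windows(fetch_page, page_size=500):
    """Yield every feature window, paging with the after_ms cursor.

    fetch_page(limit, after_ms) must return one page of windows, each a
    dict with at least a "window_end_ms" key, ordered by window start.
    """
    after_ms = 0
    while True:
        page = fetch_page(limit=page_size, after_ms=after_ms)
        if not page:
            return
        yield from page
        after_ms = page[-1]["window_end_ms"]  # advance past the last window
```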
GET /v1/sessions/:id/features
curl -s "http://localhost:8000/v1/sessions/$SESSION_ID/features?limit=10&after_ms=0" \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"

Response schema
[
{
"id": "wnd_abc123",
"session_id": "fbf09e2e-...",
"window_start_ms": 1730000000000,
"window_end_ms": 1730000002000,
"features": {
"bandpower": {
"delta": 25.3, "theta": 12.1,
"alpha": 18.7, "beta": 8.2, "gamma": 3.1
},
"ratios": { "alpha_beta": 2.28, "theta_beta": 1.48 },
"spectral_entropy": 0.72
},
"artifacts": {
"blink_like": 0.04,
"motion_like": 0.02,
"emg_like": 0.03,
"bad_contact": 0.01
},
"quality": { "overall": 0.95, "per_channel": { "TP9": 0.96, "AF7": 0.94 } },
"confidence": { "overall": 0.91 },
"gate": { "passed": true },
"created_at": "2025-01-15T10:30:02Z"
}
]

WebSocket Streaming
For streaming sessions, connect over WebSocket to receive feature windows as they are computed — typically within 100 ms of a batch arriving.
1. Obtain a token
WebSocket tokens are short-lived (60 min) and session-scoped. Fetch one via the REST API before opening the connection.
# Obtain a WebSocket token for a session
curl -s -X POST http://localhost:8000/v1/sessions/$SESSION_ID/ws-token \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"
# Response:
{
"ws_url": "ws://localhost:8000/v1/ws?session_id=fbf09...&ws_token=eyJ...",
"ws_token": "eyJ...",
"expires_in": 3600
}

2. Connect and listen
const { ws_url } = await getWsToken(sessionId);
const ws = new WebSocket(ws_url);
ws.onopen = () => console.log('Connected to session stream');
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
switch (msg.type) {
case 'features':
// msg.data is a feature window object
console.log('SQI:', msg.data.quality.overall);
console.log('Alpha:', msg.data.features.bandpower.alpha);
console.log('Blink:', msg.data.artifacts.blink_like);
break;
case 'session_ended':
console.log('Session ended:', msg.data.ended_at);
ws.close();
break;
}
};

Message types
| type | When | Payload |
|---|---|---|
| features | New feature window computed | Feature window object (see schema above) |
| session_ended | Session ended server-side | { "ended_at": "..." } |
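The same dispatch as the TypeScript handler above, written as a plain Python function you could hook into any WebSocket client (the callback names are illustrative):

```python
import json

def handle_message(raw, on_features, on_session_ended):
    """Dispatch one stream message by its type field.

    Returns False once the session has ended, signalling the caller
    to close the socket; True otherwise.
    """
    msg = json.loads(raw)
    if msg["type"] == "features":
        on_features(msg["data"])       # a feature window object
    elif msg["type"] == "session_ended":
        on_session_ended(msg["data"])  # {"ended_at": "..."}
        return False
    return True
```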
The WebSocket channel applies only to STREAMING sessions. For RECORDED sessions, poll /status, then fetch /features when complete.

Export
Download the full contents of a session as structured JSON for offline analysis, fine-tuning pipelines, or long-term storage. Raw sample batches can optionally be included.
GET /v1/sessions/:id/export
| Parameter | Default | Description |
|---|---|---|
| limit | 1000 | Max windows per page (up to 5000) |
| offset | 0 | Page offset for large sessions |
| include_raw | false | Also include raw sample batches |
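Large sessions can be drained with an offset loop driven by the pagination block in the response. The sketch below takes a `fetch_export` callable (e.g. wrapping an HTTP GET on the export endpoint with the parameters above); the names are ours, not part of the SDK:

```python
def export_all_windows(fetch_export, limit=1000):
    """Collect every feature window from a paginated export.

    fetch_export(limit, offset) must return one export page containing
    "feature_windows" and "pagination" keys, as the endpoint does.
    """
    windows, offset = [], 0
    while True:
        page = fetch_export(limit=limit, offset=offset)
        windows.extend(page["feature_windows"])
        if not page["pagination"]["has_more"]:
            return windows
        offset += limit
```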
# Export session as JSON (feature windows + metadata)
curl -s "http://localhost:8000/v1/sessions/$SESSION_ID/export" \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" > session_export.json
# Include raw sample batches
curl -s "http://localhost:8000/v1/sessions/$SESSION_ID/export?include_raw=true" \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID" > session_export_full.json
# Paginate large exports
curl -s "http://localhost:8000/v1/sessions/$SESSION_ID/export?limit=500&offset=0" \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"

Response
{
"session": {
"id": "fbf09e2e-...",
"external_user_id": "user_1",
"device_type": "MUSE_2",
"sample_rate_hz": 256,
"channels": ["TP9","AF7","AF8","TP10"],
"ingestion_mode": "STREAMING",
"processing_state": null,
"started_at": "2025-01-15T10:30:00Z",
"ended_at": "2025-01-15T10:35:00Z"
},
"feature_windows": [...],
"pagination": {
"offset": 0, "limit": 1000,
"total": 88, "has_more": false
}
}

User Deletion
To comply with right-to-erasure requests, call DELETE /v1/users/:external_user_id. This performs a hard delete of all data associated with the given user within the project. An immutable audit record is retained.
What gets deleted
- All sessions belonging to the user.
- All raw EEG sample batches across those sessions.
- All computed feature windows.
curl example
curl -s -X DELETE "http://localhost:8000/v1/users/user_1" \
-H "Authorization: Bearer $API_KEY" \
-H "X-Project-Id: $PROJECT_ID"

Response
{
"deleted_external_user_id": "user_1",
"sessions_deleted": 12,
"samples_deleted": 184320,
"features_deleted": 720,
"audit_id": "aud_abc123"
}

Rate Limits
All API-key endpoints are rate-limited per key using a sliding window.
| Limit | Value |
|---|---|
| Sustained | 60 requests / minute |
| Burst | 120 requests |
Response headers
- X-RateLimit-Limit — requests allowed per window.
- X-RateLimit-Remaining — requests left in the window.
- X-RateLimit-Reset — epoch timestamp when the window resets.
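On a 429, back off for the advertised interval before retrying. A client-side sketch (the wrapper is ours; `send` stands for any HTTP call returning a response with `status_code` and `headers`):

```python
import time

def with_retry(send, max_attempts=5):
    """Call send() and retry on HTTP 429, honoring Retry-After.

    Falls back to exponential backoff when the header is missing.
    """
    for attempt in range(max_attempts):
        resp = send()
        if resp.status_code != 429:
            return resp
        retry_after = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(retry_after)
    raise RuntimeError("rate limited after %d attempts" % max_attempts)
```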
HTTP 429
HTTP/1.1 429 Too Many Requests
Retry-After: 42
{
"detail": "Rate limit exceeded. Retry after 42 seconds."
}

Voxel is developer infrastructure for EEG data. It is not a medical device and is not intended for clinical diagnosis or treatment decisions.