

The outcome: a stream that hits the latency, FPS, and visual quality your use case requires, with the right tradeoffs for your audience’s devices and networks. Streaming performance is multidimensional — encode time on the worker, network RTT to the viewer, decode time on the client. Tuning means picking the right point on the curve, not maximizing one axis.

Targets by use case

There is no universal “good performance.” Pick the profile that matches your project:
| Use case | Glass-to-glass latency | FPS | Resolution | Bitrate |
| --- | --- | --- | --- | --- |
| Cinematic walkthrough | ≤ 200 ms | 30 | 1080p | 6–10 Mbps |
| Product configurator | ≤ 150 ms | 60 | 1080p | 6–10 Mbps |
| Multiplayer game (host + viewers) | ≤ 120 ms | 60 | 1080p | 4–6 Mbps |
| Interactive simulation / sim training | ≤ 100 ms | 60 | 1440p | 12–18 Mbps |
| VR | ≤ 80 ms motion-to-photon | 72/90 | 1920×1080 per eye | 20–30 Mbps |
Glass-to-glass latency = camera move on the worker → photons on the viewer’s display. Below 100 ms feels native; 100–200 ms is fine for non-twitch content; above 300 ms feels like you’re operating a remote machine.

Region selection

The single biggest lever. Streampixel runs in three regions:
  • US-East-1
  • Europe
  • Asia Pacific
A user in Tokyo connecting to US-East-1 will see ~150 ms RTT before any encoding/decoding even starts. The same user connecting to Asia Pacific sees ~10–30 ms. Pick the region closest to where most of your users are. If your audience spans continents, run the same project in multiple regions and route users at your application layer (geolocate IP, send them to the right URL). See Regions for the full list.
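The application-layer routing can be sketched as follows. The region slugs, URLs, and continent-code input are hypothetical placeholders, not Streampixel APIs; plug in your own geolocation source (IP lookup service, CDN header, etc.):

```javascript
// Hypothetical region map — one deployment of the same project per region.
const REGION_URLS = {
  'us-east-1': 'https://app.example.com/us',
  'europe': 'https://app.example.com/eu',
  'asia-pacific': 'https://app.example.com/apac',
};

// Map a continent code (e.g. from an IP geolocation lookup) to the
// nearest region. The mapping is illustrative; tune it to your audience.
function regionForContinent(continentCode) {
  switch (continentCode) {
    case 'NA':
    case 'SA': return 'us-east-1';
    case 'EU':
    case 'AF': return 'europe';
    default:   return 'asia-pacific'; // AS, OC
  }
}

console.log(REGION_URLS[regionForContinent('EU')]);
```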

Codec choice

| Codec | Quality / bitrate | Decode support | When to use |
| --- | --- | --- | --- |
| H264 | Baseline | Universal — every browser, every mobile, every headset | Default for broad audiences, mobile, VR |
| VP8 | Slightly worse than H264 | Wide | Rare; prefer H264 |
| VP9 | ~40% better than H264 at same bitrate | Most modern browsers, software-decoded on many mobiles | Desktop-first audiences with bandwidth constraints |
| AV1 | Best | Chrome desktop only | Niche — Chrome desktop power users |
Set the preferred codec from the dashboard (Codec settings) or per-session in the SDK. For most projects: H264 unless you have a specific reason.
H264 has the lowest decode latency on most hardware. If your latency budget is tight, H264 wins even when VP9 / AV1 would give better quality at the same bitrate.
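The table collapses into a small default policy. A minimal sketch; the helper function and its flags are ours for illustration, not part of the SDK:

```javascript
// Encode the codec table above as a default policy: H264 unless you have
// a specific reason. Function and flag names are illustrative, not SDK API.
function chooseCodec({ mobileOrVR = false, chromeDesktopOnly = false,
                       bandwidthConstrained = false } = {}) {
  if (mobileOrVR) return 'H264';          // universal decode, lowest latency
  if (chromeDesktopOnly) return 'AV1';    // best quality, narrowest support
  if (bandwidthConstrained) return 'VP9'; // ~40% better than H264 per bit
  return 'H264';                          // the safe default
}

console.log(chooseCodec({ bandwidthConstrained: true })); // → 'VP9'
```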

Resolution and bitrate

Pair these as a unit — high resolution at low bitrate looks worse than lower resolution at the same bitrate.
| Use case | Resolution | Bitrate |
| --- | --- | --- |
| Mobile | 720p | 2–4 Mbps |
| Desktop standard | 1080p | 6–10 Mbps |
| Desktop high quality | 1440p | 12–18 Mbps |
| Desktop 4K (rare) | 2160p | 25–40 Mbps |
| VR | 1920×1080 per eye | 20–30 Mbps |
Two rules of thumb:
  1. Bits per pixel per second. A 1080p60 stream at 6 Mbps gives ~50 milli-bits per pixel per frame. Below ~30 mbpp/f you start seeing compression artifacts in motion. Use this when sizing custom resolutions.
  2. Decode budget. A device that can play your stream at 1080p smoothly may stutter at 1440p even if the network has the bandwidth. Decoder throughput is the hidden ceiling.
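Rule 1 is easy to automate when sizing custom resolutions. A minimal sketch; the helper name is ours:

```javascript
// Bits per pixel per frame, in milli-bits: bitrate / (width * height * fps).
// Below ~30 mbpp/f, compression artifacts appear in motion.
function milliBitsPerPixelPerFrame(bitrateBps, width, height, fps) {
  return (bitrateBps / (width * height * fps)) * 1000;
}

// 1080p60 at 6 Mbps — in line with the ~50 mbpp/f figure above.
const mbppf = milliBitsPerPixelPerFrame(6_000_000, 1920, 1080, 60);
console.log(mbppf.toFixed(1)); // → 48.2
```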

Adaptive vs fixed bitrate

Adaptive (default) lets the encoder ramp bitrate up and down based on observed network capacity. Best for variable networks (mobile, home Wi-Fi). Fixed holds a constant bitrate. Best for controlled environments (events on dedicated networks, kiosks on Ethernet) where you want predictable visual quality. Tradeoffs:
| | Adaptive | Fixed |
| --- | --- | --- |
| Quality on bad networks | Drops bitrate, stays smooth | Drops frames, freezes |
| Quality on good networks | Climbs to ceiling | Held constant |
| Predictable bandwidth | No | Yes |
| Best for | Variable / mobile | Kiosks / events |
Most production deployments should use adaptive. Switch to fixed when you have metrics showing the adaptive controller is making bad decisions (e.g., oscillating in a stable network).

Reducing perceived latency

If your stream feels laggy even though throughput is fine, the bottleneck is somewhere other than bandwidth. Try, in order:
1. **Verify region.** Run `ping api.streampixel.io` (loose proxy for region selection). RTT > 100 ms is a red flag — pick a closer region.
2. **Switch to H264.** H264 hardware decode is faster than VP9/AV1 software decode on most consumer hardware. Wins ~10–30 ms on mobile and standalone headsets.
3. **Lower minQP.** Lowering the minimum quantization parameter lets the encoder spend more bits on each frame, reducing the chance of B-frame queueing. Configure from the dashboard’s adaptive settings.
4. **Raise maxBitrate.** If you’ve capped maxBitrate aggressively for bandwidth reasons but the network can handle more, raising the cap reduces compression artifacts and the perception of lag.
5. **Disable forceTurn (only on trusted networks).** forceTurn: true adds a relay hop. On a network you control where direct WebRTC works, removing it shaves a few ms. On any consumer network, leave it on — failed direct connections cost much more than the relay’s overhead.
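Steps 2 and 5 combine into a latency-first init for a trusted network. The option names follow the snippets elsewhere on this page; treat this as a sketch and check them against your SDK version:

```javascript
StreamPixelApplication({
  appId: PROJECT_ID,
  preferredCodec: 'H264', // fastest hardware decode on most devices
  forceTurn: false,       // direct WebRTC; only on networks you control
});
```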

Network: TURN-only mode for restrictive networks

Corporate networks often block UDP and direct WebRTC entirely. Set:
```javascript
StreamPixelApplication({
  appId: PROJECT_ID,
  forceTurn: true,
});
```
This forces all media through Streampixel’s TURN servers, which support TCP fallback. The cost is a few ms of extra latency and slightly more relay load. The benefit is connections that simply work behind hotel Wi-Fi, hospital networks, and enterprise firewalls. If you find yourself supporting many users on locked-down networks, leave forceTurn on by default and only disable it for performance-sensitive internal use cases.

Diagnosing performance: stream stats

Subscribe to the SDK’s stats events to see what’s actually happening:
```javascript
pixelStreaming.addEventListener('statsReceived', (e) => {
  const stats = e.data.aggregatedStats;
  const inbound = stats.inboundVideoStats;
  const codec = stats.codecs.get(inbound.codecId);

  console.log({
    codec: codec?.mimeType,
    fps: inbound.framesPerSecond,
    bitrate: inbound.bitrate,
    resolution: `${inbound.frameWidth}x${inbound.frameHeight}`,
    packetsLost: inbound.packetsLost,
    rtt: stats.candidatePair?.currentRoundTripTime,
    jitter: inbound.jitter,
  });
});
```
What to look for:
| Symptom | Likely cause |
| --- | --- |
| FPS oscillating between 30 and 60 | Decoder throttling (often thermal on mobile) or bitrate adaptation churn |
| Bitrate well below configured max | Network can’t sustain higher; check RTT and packet loss |
| RTT > 100 ms | Wrong region, or user on bad network |
| Packets lost > 1% | Network congestion or Wi-Fi interference |
| Jitter > 30 ms | Unstable connection, likely Wi-Fi |
| Codec not what you expected | Browser negotiated a different codec — check preferredCodec |
Render these in a dev overlay during testing. The stream stats panel in the example app shows one approach.
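For an overlay, the symptom table can be turned into automated flags. A sketch using the table’s thresholds; the function is ours, and it assumes WebRTC-style units (RTT and jitter in seconds, as currentRoundTripTime reports them):

```javascript
// Flag stream problems using the thresholds from the symptom table above.
// Helper name and input shape are illustrative, not SDK API.
function flagStreamProblems({ fps, bitrate, maxBitrate, rttSec, packetLossPct, jitterSec }) {
  const flags = [];
  if (rttSec > 0.1) flags.push('RTT > 100 ms: wrong region or bad network');
  if (packetLossPct > 1) flags.push('Packet loss > 1%: congestion or Wi-Fi interference');
  if (jitterSec > 0.03) flags.push('Jitter > 30 ms: unstable connection, likely Wi-Fi');
  if (maxBitrate && bitrate < maxBitrate * 0.5) {
    flags.push('Bitrate well below configured max: check RTT and packet loss');
  }
  if (fps < 45) flags.push('Low FPS: decoder throttling or adaptation churn');
  return flags;
}
```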

UE-side optimizations

Streampixel can’t make a slow Unreal project fast. If your scene drops below 60 fps on the worker, no amount of network tuning will produce a 60 fps stream. Quick wins that often help:
  • Lock the target framerate. t.MaxFPS 60 keeps the encoder in sync with consistent frame intervals. Variable frame times confuse the encoder.
  • Disable expensive shadows where not needed. Movable lights with dynamic shadows are the most common framerate killer in real-time scenes.
  • Lower texture mip levels. Streamed pixels won’t show your highest-mip detail anyway. r.Streaming.MipBias 1 saves VRAM with negligible visual impact at typical streaming resolutions.
  • Forward renderer for VR / mobile-targeted streams. Cheaper, more deterministic frame times than the deferred renderer.
  • Disable AA passes that don’t survive video encoding. Heavy TAA gets blurred away by H264 anyway; FXAA or no AA can look identical and render faster.
These are general UE perf tips, not Streampixel-specific. Profile with Unreal Insights to find your actual bottleneck before tuning blindly.
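As a starting point, most of the settings above can be pinned in DefaultEngine.ini rather than typed at the console each run. A sketch only: t.MaxFPS and r.Streaming.MipBias come from the tips above, while the forward-shading and anti-aliasing cvars are standard UE settings we have added here; verify each against your engine version before shipping.

```ini
; Sketch only; verify each cvar against your Unreal Engine version and
; profile with Unreal Insights before and after.

[SystemSettings]
; Lock frame pacing so the encoder sees consistent frame intervals.
t.MaxFPS=60
; Drop one mip level; negligible visual impact at streaming resolutions.
r.Streaming.MipBias=1

[/Script/Engine.RendererSettings]
; Forward renderer for VR / mobile-targeted streams (requires restart).
r.ForwardShading=True
; 1 = FXAA; heavy TAA gets blurred away by the video encoder anyway.
r.DefaultFeature.AntiAliasing=1
```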

Debugging high latency

When users report “the stream feels laggy,” walk this checklist:
1. **Confirm region.** Which region is the project in? Where is the user? If they don’t match, fix that first.
2. **Check stats.** Have the user open dev tools and report RTT, FPS, bitrate, packet loss. Stats > intuition.
3. **Check codec.** If they’re on Safari and the stream negotiated VP9, software decode is killing them. Re-init with preferredCodec: 'H264'.
4. **Check network type.** Cellular vs. Wi-Fi vs. Ethernet. Cellular adds 30–80 ms vs. wired, and 5 GHz Wi-Fi is much better than 2.4 GHz for streaming.
5. **Check forceTurn.** If they’re on a corporate network with forceTurn: false, the connection might have failed over to slower paths. Try with forceTurn: true.
6. **Check UE framerate.** If the worker is rendering at 30 fps because the scene is heavy, no amount of network tuning fixes that. Profile UE.

Pre-launch perf checklist

Before opening to a real audience:
  • Region matches audience geography.
  • Codec set explicitly, not left to default negotiation.
  • Resolution + bitrate pair appropriate for target devices.
  • Adaptive bitrate enabled (unless you have a specific reason for fixed).
  • forceTurn: true for broad audiences.
  • UE project hits target FPS on the worker hardware.
  • Stream stats panel available in dev/staging for spot-checks.
  • Tested on the slowest device class your users will bring (e.g., a mid-range Android on LTE).

Next steps

  • Regions: Where your project runs determines your latency floor.
  • Codec settings: Choose H264, VP8, VP9, or AV1 per project.
  • Mobile streaming: Bitrate and codec recipes specific to mobile browsers.
  • Multiplayer streaming: Per-viewer bitrate budgets for SFU sessions.