Modern live streaming workflows (especially using OBS) require fine-grained control over individual video streams — not just a fixed layout.
However, existing video conferencing tools:
❌ Provide only pre-defined layouts (grid, speaker view)
❌ Do not expose raw per-user video streams
❌ Cannot be directly integrated into OBS as independent sources

This creates a major limitation for:
- Live streamers
- Podcast creators
- Online events & interviews
- Multi-person broadcasts
👉 You are forced to either:
- screen-capture a fixed layout (low quality, no per-stream control), or
- build a complex custom pipeline.
There are tools like ping.gg that solve this problem by:
- Providing individual stream URLs per participant
- Allowing OBS ingestion via browser sources
- Enabling dynamic switching and layouts
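To make "individual stream URLs per participant" concrete, such an endpoint can be as simple as a deterministic path derived from the room and participant. The scheme below is purely illustrative (it is not ping.gg's actual format, and the host is a placeholder); the important property is that the URL is stable, so an OBS browser source keeps working across reconnects:

```typescript
// Hypothetical per-participant stream URL scheme (assumed layout, for
// illustration only). The URL depends only on room + participant IDs,
// so it stays stable even if the underlying media session restarts.
function streamUrl(baseUrl: string, roomId: string, participantId: string): string {
  const path = `/rooms/${encodeURIComponent(roomId)}/participants/${encodeURIComponent(participantId)}/stream`;
  return new URL(path, baseUrl).toString();
}

// Example:
// streamUrl("https://example.invalid", "demo", "alice")
//   → "https://example.invalid/rooms/demo/participants/alice/stream"
```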
But:
❌ These systems are closed-source
❌ No transparency in architecture
❌ Limited flexibility for developers
❌ Cannot be self-hosted or deeply customized

Building this system is non-trivial, because it combines multiple hard problems:

**Media transport**
- WebRTC + SFU (mediasoup)
- Low-latency streaming
- Bandwidth optimization

**Stream routing**
- Mapping users → streams → viewers
- Handling joins/leaves in real time
- Switching streams without breaking playback

**Scene management**
- Slots / scenes / layouts
- Host-controlled routing
- State synchronization across clients

**OBS integration**
- Browser source compatibility
- Stable stream endpoints
- Decoupled viewer pipeline

**Separation of concerns**
- Communication layer
- Media routing layer
- Presentation layer

Most open-source WebRTC projects stop at:
✔ video calling
✔ basic SFU

…but do not go into broadcast-grade routing systems.
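As one concrete example, the routing problem above (users → streams → viewers, with real-time joins and leaves) can be sketched in plain TypeScript. All names here are illustrative assumptions, not a real mediasoup API; the point is that a departing user must tear down streams without breaking subscribed viewers, which instead get handed back for re-routing:

```typescript
type UserId = string;
type StreamId = string;
type ViewerId = string;

// Minimal sketch of SFU-side routing state: which user publishes which
// streams, and which viewers (e.g. OBS browser sources) watch each stream.
class StreamRouter {
  private streamsByUser = new Map<UserId, Set<StreamId>>();
  private viewersByStream = new Map<StreamId, Set<ViewerId>>();

  // A participant starts publishing a stream.
  publish(user: UserId, stream: StreamId): void {
    if (!this.streamsByUser.has(user)) this.streamsByUser.set(user, new Set());
    this.streamsByUser.get(user)!.add(stream);
    if (!this.viewersByStream.has(stream)) this.viewersByStream.set(stream, new Set());
  }

  // A viewer subscribes to an existing stream.
  subscribe(viewer: ViewerId, stream: StreamId): void {
    const viewers = this.viewersByStream.get(stream);
    if (!viewers) throw new Error(`unknown stream: ${stream}`);
    viewers.add(viewer);
  }

  // A participant leaves: remove all their streams and return the viewers
  // that were watching, so the caller can re-route them (e.g. to a
  // placeholder) instead of letting playback break.
  leave(user: UserId): ViewerId[] {
    const orphaned: ViewerId[] = [];
    for (const stream of this.streamsByUser.get(user) ?? []) {
      for (const v of this.viewersByStream.get(stream) ?? []) orphaned.push(v);
      this.viewersByStream.delete(stream);
    }
    this.streamsByUser.delete(user);
    return orphaned;
  }
}
```

Returning the orphaned viewers (rather than silently dropping them) is what lets the presentation layer keep its endpoints stable while the media layer churns underneath.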
This project aims to:
✔ Bring this architecture to open source
✔ Enable OBS-first streaming workflows
✔ Provide raw access to individual streams
✔ Decouple media transport from presentation

👉 Essentially: turning a WebRTC SFU into a programmable live streaming backend.

An open-source alternative to ping.gg, built for developers who want full control over their live streaming pipelines.
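To close with a sketch of the host-controlled slot model mentioned above (all names are assumptions for illustration, not the project's actual API): OBS browser sources point at fixed slot numbers rather than at participants, so the host can re-route who fills a slot without anyone touching OBS. Treating the scene as immutable state also makes it easy to diff and broadcast to clients for synchronization:

```typescript
// A scene maps slot numbers to the participant currently shown there
// (or null for an empty slot). Hypothetical model, not a real API.
type Scene = Map<number, string | null>;

// Assign a participant to a slot, returning a NEW scene so the old and
// new states can be diffed and the change broadcast to all clients.
function assignSlot(scene: Scene, slot: number, participantId: string | null): Scene {
  const next = new Map(scene);
  next.set(slot, participantId);
  return next;
}

// What a slot's stable endpoint should currently render.
function participantForSlot(scene: Scene, slot: number): string | null {
  return scene.get(slot) ?? null;
}
```

For example, a host swapping slot 2 from empty to "bob" produces a new scene; every viewer pinned to slot 2 switches streams, while slot 1 viewers are untouched.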