WebRTC works in practice, but is very heavy server-side. We use it for audio (<70ms latency end to end) but it's a lost cause for video with our infra.
-
(This is assuming the folks I talked to at NAB were correct.) Still better than the nonsense some people are trying to pull for captions in DASH: for each segment, generate a TTML (XML) file and encapsulate it in an MP4 segment containing exactly one packet.
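Roughly what that per-segment scheme looks like, as a sketch (the helper name and cue layout here are mine, not any particular packager's API; the "whole XML document as the one sample" part is the point):

```python
# Hypothetical sketch of the per-segment TTML approach described above.
# Each DASH segment carries one complete TTML document as its only sample.

def ttml_segment_doc(cues, segment_start, segment_end):
    """Build the single TTML document that becomes the one sample/packet
    in this segment. `cues` is a list of (begin_s, end_s, text) tuples
    already clipped to [segment_start, segment_end)."""
    body = "\n".join(
        f'      <p begin="{b:.3f}s" end="{e:.3f}s">{text}</p>'
        for (b, e, text) in cues
    )
    return f'''<?xml version="1.0" encoding="UTF-8"?>
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
{body}
    </div>
  </body>
</tt>'''

# The packager then wraps this XML string as the sole sample in the
# segment's 'mdat', with sample duration equal to the segment duration.
doc = ttml_segment_doc([(0.0, 2.0, "Hello"), (2.0, 4.0, "world")], 0.0, 4.0)
```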
-
I'm not sure whether that's better or worse than the WebVTT mapping, which uses packets containing a binary format that lists every cue on screen at a particular time (or a code indicating "none"), and which is impossible to decode/encode to/from text without adding latency.
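For concreteness, a sketch of that WebVTT-in-MP4 sample format, assuming I have the box names from ISO/IEC 14496-30 right ('vttc' cue boxes with 'payl' text, or a lone 'vtte' when nothing is on screen); the Python helpers are mine:

```python
# Sketch of a WebVTT sample per ISO/IEC 14496-30: one sample covers a time
# interval and lists everything on screen during it, or marks it empty.
import struct

def box(box_type: bytes, payload: bytes) -> bytes:
    """ISO BMFF box: 32-bit big-endian size (header included) + type + payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

def vtt_sample(active_cues: list[str]) -> bytes:
    """Serialize one sample for an interval of the timeline."""
    if not active_cues:
        return box(b"vtte", b"")          # "nothing on screen" marker
    return b"".join(
        box(b"vttc", box(b"payl", text.encode("utf-8")))
        for text in active_cues
    )

# Every change in the set of on-screen cues starts a new sample, so a muxer
# can't emit the current interval until it knows where the next cue boundary
# is -- which is where the latency complaint above comes from.
sample = vtt_sample(["Hello", "world"])
empty = vtt_sample([])
```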