FWIW Firefox had a similar bug for bodies >64KB for a long time. It was only fixed recently.
Replying to @wanderview @webkit
Sounds like I should force the fetch polyfill in Firefox too. Or just not use clones? Why is cloning heavy?
Replying to @nekrtemplar @webkit
As long as one side of the clone is referenced but not read, the browser must buffer the un-read portion.
So if you have a 2GB stream that you clone, but only read one side, then the browser must buffer 2GB. In the extreme you can OOM the process.
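The hazard is easy to reproduce with the standard `Response.clone()` API even outside a service worker; a minimal sketch of the semantics (small body standing in for the 2GB case):

```javascript
// Demonstrates why clone() forces buffering: both copies are backed by
// one underlying source, but each body is read independently. Draining
// one side does not free the data -- the engine must retain every byte
// until the other side reads it too (picture a 2GB body, not 1KB).
async function demo() {
  const original = new Response('x'.repeat(1024)); // small stand-in for a 2GB body
  const copy = original.clone();
  const first = await original.text(); // fully drains one side...
  // ...yet `copy` is still readable, so all of its bytes had to be
  // buffered on its behalf while `original` was consumed.
  const second = await copy.text();
  return { first, second };
}
```

If `copy` were merely held (referenced) and never read, that retained buffer would never be released, which is the un-read-side cost described above.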
This was the debate I had with Jonas. At the time Mozilla pushed hard for this subtle breakage. Blarg.
What was the alternative?
Explicit buffer operation or fully transparent multi-reader. The second is today's design w/ better ergonomics. The first is what we want.
Replying to @slightlylate @wanderview
We could fix by adding args to clone for max buffer sizes, or perhaps BYOB -- both w/ pressure events. Ugly. Probably necessary.
Seems the streams API is finally getting to the point where this can be explicitly handled (writable + identity). Might want to see how that shakes out.
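For contrast with clone(), a sketch of the "explicit" shape using writable streams: read the source once and fan each chunk out to two sinks, awaiting both writes before pulling the next chunk, so backpressure bounds memory to a single in-flight chunk (`fanOut` is a hypothetical helper built on standard web streams, not a platform API):

```javascript
// Explicit fan-out instead of clone(): one reader, two writers.
// Awaiting both sinks before reading the next chunk means buffering is
// bounded and opt-in -- the slow sink throttles the fast one, rather
// than the engine silently accumulating the difference in memory.
async function fanOut(readable, writableA, writableB) {
  const reader = readable.getReader();
  const a = writableA.getWriter();
  const b = writableB.getWriter();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    // write() resolves when the sink accepts the chunk, so backpressure
    // from either destination propagates to the source.
    await Promise.all([a.write(value), b.write(value)]);
  }
  await Promise.all([a.close(), b.close()]);
}
```

Usage: pipe a response body to, say, the page and a cache sink, with the buffering policy chosen by the caller instead of hard-coded as "unbounded".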
That was why we eventually dropped opposition; can eventually be repaired. Sadly, will always be a (then preventable) footgun. *Sigh*