What I did was take a library that had been popular for 15+ years (Boost.Range), one that directly addressed two of the biggest gripes against the Standard Library (terrible error messages and poor usability of the algorithms), and modernize and standardize it. 1/2
Replying to @ericniebler @ssylvan
To say that I was merely pushing my own agenda to indulge my ego without any consideration of the needs of the community is to demonstrate the same lack of curiosity that I have been accused of in this thread. 2/2
Replying to @ericniebler @Timewatcher
I don't know much about Boost.Ranges specifically, but serious question: are you unaware that many of us would rather eat glass than include Boost in our projects? It's a highly divisive library at best. Maybe ranges is the rare jewel in there, but it doesn't seem like a...
Replying to @ssylvan @Timewatcher
I'll answer that with another question: have you ever appreciated having standard smart pointers, or do you know anybody who appreciated having them? Did you know they were in Boost first? I'm not glossing over the real problems Boost has, but a lot of good has come of it.
Replying to @ericniebler @ssylvan
I actually don't appreciate shared_ptr at all, because it encourages an engineering culture in which distributed lifetime management is the norm, instead of the last resort that it should be.
Replying to @JoshuaBarczak @ssylvan
There are many bad uses of shared_ptr, I won't disagree. But there are times a non-intrusive reference-counted pointer is needed, and it's unlikely the average user can do better than shared_ptr. That's why it belongs in the standard.
Replying to @ericniebler @JoshuaBarczak
My beef is that it violates the C++ philosophy of only paying for features you use. It pays a thread-synchronisation cost on ref count updates, regardless of whether you need that feature.
Replying to @BrookeHodgman @ericniebler
The average reinvention of it is often a crude intrusive ref-counter that, while a lot less elegant and general-purpose (and possibly containing new bugs), can actually show a 100x performance difference :o So "better" depends heavily on the context faced by that user.
Replying to @BrookeHodgman @JoshuaBarczak
100x? That sounds incredibly high. Have you benchmarked this?
Replying to @ericniebler @BrookeHodgman
100x is fairly expected for this stuff. If you can just increment a register that’s a lot cheaper than needing all other cores to see it. In fact 100x is often a low end estimate.
That’s nearly the worst case: an 18-cycle atomic stalling a 6-wide execution engine, even uncontended. (Hopefully future CPUs will speculate uncontended atomics for free.)
Replying to @TimSweeneyEpic @ssylvan
Here's a blog post about it. For one thread, ~10-25x: https://fgiesen.wordpress.com/2014/08/18/atomics-and-contention/
Replying to @adrianb3000 @TimSweeneyEpic
It gets more complicated when you consider that the non-atomic version may never touch memory at all (the compiler enregisters it, does an inc/dec pair, skips the store; perhaps that work is done entirely using “spare” execution resources too, so it takes up “zero” time), etc.