Sounds like perhaps the design isn't well thought out for multithreading (and is hiding that behind lockless stuff); for example, just cache the new entries in thread-local storage and then do your lock & bulk copy at a well-controlled time.
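A rough sketch of that staging pattern, assuming C++ (the Entry type and the add_entry/flush_pending names are illustrative, not from the thread): each thread appends to its own thread-local buffer with no synchronization, and a single lock-and-bulk-copy publishes the batch at a controlled point such as end of frame.

#include <mutex>
#include <vector>

struct Entry { int key; int value; };

static std::vector<Entry> shared_entries;      // the contended structure
static std::mutex         shared_lock;

thread_local std::vector<Entry> local_pending; // per-thread, no locking needed

void add_entry(const Entry& e) {
    local_pending.push_back(e);                // cheap, touches only this thread's memory
}

void flush_pending() {                         // called at a well-controlled time per thread
    if (local_pending.empty()) return;
    std::lock_guard<std::mutex> g(shared_lock);
    shared_entries.insert(shared_entries.end(),
                          local_pending.begin(), local_pending.end());
    local_pending.clear();
}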
-
This Tweet is unavailable.
-
Not sure what you mean. Writing to the data structure, "lock-free" or otherwise, will invalidate that cache line anyway. Using a mutex field in the cache line instead of a "lock-free" sequence of writes doesn't change that?
-
It's mainly because lock primitives or (gods help you) LL/SC are often inherently slow even when there's no contention. You get the algorithmic AND practical pain. But yeah, "lock free" doesn't save you from bad design.
-
There's a pretty substantial difference between a critical section and a "lock free" algorithm, whatever you want to call it, and that is that the latter won't deschedule your thread for at least 15ms due to OS scheduler granularity. An atomic won't miss a frame or cause a pop; a mutex will.
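One common shape of the "use an atomic, not a mutex" approach on a real-time thread is a lock-free pointer exchange. This is an assumed illustration (Params, publish, and consume are made-up names, not anything either poster described): the real-time consumer only performs an atomic exchange, so it can never block waiting for a lock held by the producer.

#include <atomic>

struct Params { float volume; float pan; };

std::atomic<Params*> pending{nullptr};   // producer publishes here

// Producer thread (e.g. game logic): allocate and publish new params.
void publish(Params* p) {
    Params* old = pending.exchange(p, std::memory_order_acq_rel);
    delete old;                          // reclaim anything the consumer never took
}

// Real-time thread (e.g. audio callback): take the latest params without blocking.
void consume(Params& current) {
    if (Params* p = pending.exchange(nullptr, std::memory_order_acquire)) {
        current = *p;
        delete p;                        // a real audio thread would recycle to a
                                         // free list instead of calling delete here
    }
}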
-
That is a misuse of the term "mutex" and not what Jon was talking about. He is talking about literal mutual exclusion, not a particular operating system's implementation of a mutex. A mutex can be as simple as a looped compare exchange. It does not have to call the OS at all.
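A minimal sketch of the kind of mutex being described: literal mutual exclusion built from a looped compare-exchange, with no OS call anywhere. (Illustrative C++ only; SpinLock is a made-up name, and a production spinlock would add backoff.)

#include <atomic>

struct SpinLock {
    std::atomic<bool> held{false};

    void lock() {
        bool expected = false;
        // Spin until we transition held from false -> true.
        while (!held.compare_exchange_weak(expected, true,
                                           std::memory_order_acquire,
                                           std::memory_order_relaxed)) {
            expected = false;  // on failure, expected was overwritten with the current value
        }
    }

    void unlock() {
        held.store(false, std::memory_order_release);
    }
};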