Which is why the kernel folks' insistence on keeping drivers in the kernel rather than in sandboxed user processes is so backwards... https://twitter.com/CopperheadOS/status/863454511674871808
Yeah, I really don't care what the mechanism is or what the performance aspects are as long as it works, though.
-
-
Most people's workloads are NOT hw-access-bound but CPU-, memory-, and/or GPU-bound.
-
Oh certainly. I want the mechanism to work for network card drivers too.
-
If your loads are network-card-bound, then you need to pay more attention to isolating the driver in a way that doesn't hurt your perf.
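A concrete way to do that isolation today is Linux's VFIO interface: the device is handed to an ordinary user process, with the IOMMU bounding all of its DMA. Below is a minimal sketch following the flow in the kernel's VFIO documentation; the group number and PCI address are placeholders.

```c
/* Sketch: claiming a PCI device from userspace via VFIO, so the
 * driver runs as a normal process behind the IOMMU. Follows the flow
 * in the kernel's Documentation/driver-api/vfio.rst; the group
 * number (26) and device address (0000:06:0d.0) are placeholders. */
#include <fcntl.h>
#include <linux/vfio.h>
#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
    struct vfio_group_status status = { .argsz = sizeof(status) };

    /* The container represents one IOMMU context. */
    int container = open("/dev/vfio/vfio", O_RDWR);
    if (container < 0 || ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
        return 1;

    /* The group is the smallest set of devices the IOMMU can isolate. */
    int group = open("/dev/vfio/26", O_RDWR);
    if (group < 0)
        return 1;
    ioctl(group, VFIO_GROUP_GET_STATUS, &status);
    if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1;  /* some device in the group is still kernel-owned */

    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* From here the process can mmap the device's BARs and set up
     * DMA mappings, all bounded by the IOMMU. */
    int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
    printf("device fd: %d\n", device);
    return 0;
}
```

Userspace network stacks such as DPDK bind NICs through this same interface precisely to get isolation without giving up throughput.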
-
But that only affects datacenter-type users, not mobile, laptop, desktop, or small-server users.
-
So everyone else is suffering a miserable security model for the sake of enterprise/datacenter network performance needs...
-
Oh certainly. We're over-optimized for the wrong users in many cases.
End of conversation
New conversation
-
-
It's the perf issues that keep people from giving it real consideration. I wonder if a shim could allow some drivers to run this way today...
-
Yes. The easiest way to shim it is to give each driver a fake whole-Linux-kernel in its memory space, and have that link up to the device externally.
-
This fake kernel would have no perms and see no other hardware, just provide linkage shims for the driver code.
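As a toy sketch of what those linkage shims might look like: the symbol names below mirror the real kernel API, but the userspace bodies are assumptions for illustration, and a real shim would need hundreds more symbols plus interrupt and DMA plumbing.

```c
/* Toy "fake kernel": userspace definitions for a few kernel symbols
 * so driver code can link against them unmodified. The bodies are
 * illustrative assumptions, not a real mechanism. */
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

typedef unsigned int gfp_t;
#define GFP_KERNEL 0

/* kmalloc/kfree: back kernel allocations with plain malloc. */
void *kmalloc(size_t size, gfp_t flags) { (void)flags; return malloc(size); }
void kfree(const void *p) { free((void *)p); }

/* printk: route kernel logging to stderr. */
int printk(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    int n = vfprintf(stderr, fmt, ap);
    va_end(ap);
    return n;
}
```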
-
Yup yup. How to manage things like ioports and device-mapped memory has a big impact on perf and security.
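Linux's UIO framework is one existing answer for the device-mapped-memory half: each memory region of the device is exposed as an mmap-able offset on a character device, and interrupts are delivered as blocking reads. A minimal sketch, with a hypothetical device node and register layout:

```c
/* Sketch: mapping a device's registers into a user process via UIO
 * and blocking on the fd for interrupts. The device node and the
 * register at offset 0 are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);  /* hypothetical UIO device */
    if (fd < 0) { perror("open"); return 1; }

    /* Map N of a UIO device lives at offset N * page size. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0 * getpagesize());
    if (regs == MAP_FAILED) { perror("mmap"); return 1; }

    uint32_t status = regs[0];           /* placeholder register read */
    printf("status: %#x\n", status);

    /* read() blocks until the next interrupt; the value returned is
     * the cumulative interrupt count. */
    uint32_t irqs;
    if (read(fd, &irqs, sizeof(irqs)) == sizeof(irqs))
        printf("interrupts so far: %u\n", irqs);

    munmap((void *)regs, 4096);
    close(fd);
    return 0;
}
```

How those mappings are granted, and whether the process also gets raw ioport access, is exactly where the perf/security trade-off lives.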
End of conversation