I never use CRT routines in production code because: 1) Less efficient, 2) Possible additional bugs in the CRT layer, 3) Less control over ops than native, and 4) Even when a bug is my fault, it is harder to find with the extra CRT layer of crap on top.
https://twitter.com/Jonathan_Blow/status/1417544504916135936
Basically the assumption was:
- The OS provides the C runtime, which implements the most efficient and lowest-level routines necessary for programs to work.
- The OS provides its own API on top of it, which offers _higher level_ features than the C runtime.
Whoa! OK. Well, that makes sense. That's not what happens, but I can see why people might think that.
New conversation
I just didn't know. It was a black box to me, I knew that it gives me more virtual memory, but I didn't know how it does this. Once I tried to read the source code of malloc, but it is so complicated, I couldn't find the exact place where the magic was happening... :(
Not sure if you're on Windows or Linux, but here is a Linux example: https://github.com/bminor/glibc/blob/ff417d40178b7363b08516091f74c0b6615456ee/malloc/malloc.c#L2502 It has a whole macro dance it does to make syscalls, but this is the actual syscall it will end up doing: https://man7.org/linux/man-pages/man2/mmap.2.html
New conversation
What about file operations on Linux/macOS? Are there OS-provided functions that are lower level than the C runtime?
Yes, of course. I don't do much macOS work so I don't know what its architecture looks like, but on Linux it is the syscall table. You can see the entire thing here: https://filippo.io/linux-syscall-table/