Which makes me really want some kind of VirtualAlloc flag :) I don't think there's any practical reason we can't get close to the 6/0ms case with 4k pages, if they're handled in bulk rather than as a continuous series of faults?
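(As far as I know there is no such flag on VirtualAlloc itself, but VirtualLock gets close: it materializes a committed range in one call instead of one demand fault per 4k page. A sketch only; the working-set quota has to be raised first, and whether it actually beats the fault path is something you'd want to measure:)

    #include <windows.h>

    // Commit a range and try to populate it in one call via VirtualLock
    // rather than demand-faulting page by page. A sketch: VirtualLock is
    // limited by the working-set minimum, so raise that first (the
    // 16MB/32MB slack values here are arbitrary).
    void *CommitPopulated(SIZE_T bytes)
    {
        SetProcessWorkingSetSize(GetCurrentProcess(),
                                 bytes + (16u << 20), bytes + (32u << 20));
        void *p = VirtualAlloc(NULL, bytes, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE);
        if (p && !VirtualLock(p, bytes)) {
            // Lock failed (e.g. quota); the range still works, it just
            // falls back to ordinary per-page demand faulting.
        }
        return p;
    }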
(And just to be clear, the reason I can't "just use 2mb pages" is that you basically can't ship something that uses 2mb pages - it requires the user to do a manual group-policy edit to enable them!)
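(For completeness, the dance that group-policy edit enables looks roughly like this, a sketch: "Lock pages in memory" grants SeLockMemoryPrivilege to the account, but the process must still switch the privilege on before MEM_LARGE_PAGES will work. Needs advapi32.lib:)

    #include <windows.h>

    static BOOL EnableLockMemoryPrivilege(void)
    {
        HANDLE token;
        TOKEN_PRIVILEGES tp;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES,
                              &token))
            return FALSE;
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME,
                                  &tp.Privileges[0].Luid)) {
            CloseHandle(token);
            return FALSE;
        }
        // AdjustTokenPrivileges "succeeds" even when nothing was assigned,
        // so GetLastError() must be checked too.
        BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL)
                  && GetLastError() == ERROR_SUCCESS;
        CloseHandle(token);
        return ok;
    }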
New conversation
I used to think I understood memory and bandwidth. Then we met Xilinx.
Does fault handling of 4k pages take 60ms each time or in total over a 1GB address space?
Total over the address space (this is on Skylake).
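(For reference, a minimal way to reproduce that kind of number - a sketch; figures will vary by CPU and Windows version:)

    #include <windows.h>
    #include <stdio.h>

    // Touch one byte per 4k page of a freshly committed 1GB range and
    // time it; each touch takes a demand-zero fault, which is the cost
    // being discussed above.
    int main(void)
    {
        const SIZE_T size = (SIZE_T)1 << 30;  // 1 GB
        unsigned char *p = (unsigned char *)VirtualAlloc(
            NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!p) return 1;

        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        for (SIZE_T i = 0; i < size; i += 4096)
            p[i] = 1;  // one demand-zero fault per 4k page
        QueryPerformanceCounter(&t1);

        printf("faulting in 1GB of 4k pages: %.1f ms\n",
               1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                      / (double)freq.QuadPart);
        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }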
New conversation
Also, do you have experience with similar timings (page-fault handling) on different kernels/platforms? LARGE_PAGES are probably tuned differently because they were introduced to make NT faster on servers (IIS).
The problem, if I had to guess, is solely due to the demand-paging nature of the 4k pages. 2mb pages on NT are locked, so they are provisioned in bulk in VirtualAlloc and then there is no faulting (AFAICT). If 4k pages were handled similarly, they would probably be as fast?
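(That bulk-provisioned 2mb path looks roughly like this - a sketch, assuming the lock-memory privilege from the earlier snippet is already enabled:)

    #include <windows.h>

    // MEM_LARGE_PAGES needs MEM_RESERVE | MEM_COMMIT together and a size
    // that is a multiple of GetLargePageMinimum(); the pages are locked
    // and populated up front inside VirtualAlloc, so later touches
    // never fault.
    void *AllocLargePages(SIZE_T bytes)
    {
        SIZE_T large = GetLargePageMinimum();        // typically 2mb on x64
        if (large == 0) return NULL;                 // not supported
        bytes = (bytes + large - 1) & ~(large - 1);  // round up
        return VirtualAlloc(NULL, bytes,
                            MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                            PAGE_READWRITE);
    }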
New conversation
I see you are having fun with large pages, try this out:

    MEM_EXTENDED_PARAMETER x;
    x.Type = MemExtendedParameterAttributeFlags;
    x.ULong64 = MEM_EXTENDED_PARAMETER_NONPAGED_HUGE;
    VirtualAlloc2(..., 0x40000000,
                  MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                  PAGE_READWRITE, &x, 1);
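(Filled out, that suggestion looks something like the following - a sketch, assuming a Windows 10 / SDK recent enough to have VirtualAlloc2; the NONPAGED_HUGE attribute requests 1gb huge pages rather than 2mb large ones, and the usual lock-memory privilege still applies:)

    #include <windows.h>

    // The suggestion above, completed: link onecore.lib or resolve
    // VirtualAlloc2 dynamically.
    void *AllocHuge1GB(void)
    {
        MEM_EXTENDED_PARAMETER x = {0};
        x.Type = MemExtendedParameterAttributeFlags;
        x.ULong64 = MEM_EXTENDED_PARAMETER_NONPAGED_HUGE;
        return VirtualAlloc2(GetCurrentProcess(), NULL,
                             0x40000000,  // 1 GB
                             MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                             PAGE_READWRITE, &x, 1);
    }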