So one of the assumptions in nom's design is that branch prediction will be mostly OK. But could it parse stuff while deferring branches until later?
It works by failing as early as possible, but could we parse most stuff without caring, then check afterwards? Would we get a speedup?
Replying to @gcouprie
Basically an empirical question, unless the branchless version is also smaller (fits better in cache)
Replying to @djinnius
Of course that should be experimented on :) But I wonder if there's a pattern that would help in the general case
Current nom code is small (will get smaller with nom 4), but still heavy on branches
Replying to @gcouprie
Thinking forward: branch prediction is close to maxed out, but cache sizes will still increase somewhat
Probably room for a speedup with branchless code even if it increases code size slightly. Worth a shot!
Replying to @djinnius
I might try my hand at the "tag" combinator first (recognizes a specific string) by reusing crypto constant-time compare code (branchless)
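A minimal sketch of what that could look like, borrowing the crypto-style constant-time equality trick: XOR corresponding bytes and OR the differences together, so the comparison loop itself has no data-dependent branches and the single accept/reject branch happens once at the end. The function name and signature here are hypothetical, not nom's actual `tag` implementation:

```rust
// Branchless prefix match in the style of constant-time compares.
// Hypothetical sketch: on success, returns (rest of input, matched prefix).
fn tag_branchless<'a>(input: &'a [u8], tag: &[u8]) -> Option<(&'a [u8], &'a [u8])> {
    if input.len() < tag.len() {
        return None; // the length check is unavoidable
    }
    let mut diff: u8 = 0;
    for (a, b) in input[..tag.len()].iter().zip(tag) {
        diff |= a ^ b; // accumulate mismatched bits without branching
    }
    // Single data-dependent branch, after the whole comparison.
    if diff == 0 {
        Some((&input[tag.len()..], &input[..tag.len()]))
    } else {
        None
    }
}

fn main() {
    assert_eq!(
        tag_branchless(b"GET /index", b"GET "),
        Some((&b"/index"[..], &b"GET "[..]))
    );
    assert_eq!(tag_branchless(b"POST /", b"GET "), None);
    println!("ok");
}
```

Whether this beats an early-exit compare depends on tag length and mispredict rates; for short tags the compiler may vectorize either form, so it needs benchmarking.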
Also, it's possible to measure rates of branch (mis)prediction: http://valgrind.org/docs/manual/cg-manual.html
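For reference, Cachegrind's branch simulator is off by default and is enabled with `--branch-sim=yes`; the binary path and input file below are placeholders:

```shell
# Run the parser under Cachegrind with branch simulation enabled.
valgrind --tool=cachegrind --branch-sim=yes ./target/release/myparser input.txt

# Summarize per-function counts: Bc/Bcm are conditional branches
# executed/mispredicted, Bi/Bim are indirect branches.
cg_annotate cachegrind.out.<pid>
```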