I think I have a plan to move away from macros for nom. It’s a monstrous amount of work, but it could be backward compatible, i.e. I’d rewrite the macro internals so existing code still works
so, here's the idea: nom uses macros calling other macros, to generate a function's body. I like it because the generated code is more or less what I'd write manually, and somewhat easy to debug (clean stack traces, etc)
trait-based solutions (everything implements a Parse trait with a parse() method and a lot of combinators) suffer from the same debugging issues as futures: they generate an object that will call a series of Parse::parse() methods, rendering everything hard to follow
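To illustrate, here is a minimal sketch of that trait-based style. The names (`Parse`, `Tag`, `And`) are hypothetical, not nom's API; the point is that combinators build a nested object whose every layer adds a `Parse::parse` frame to stack traces:

```rust
// Hypothetical trait-based parser combinators, sketched for illustration.
trait Parse {
    type Output;
    fn parse<'a>(&self, input: &'a str) -> Result<(&'a str, Self::Output), ()>;
}

// A parser that matches a literal prefix.
struct Tag(&'static str);

impl Parse for Tag {
    type Output = &'static str;
    fn parse<'a>(&self, input: &'a str) -> Result<(&'a str, &'static str), ()> {
        if input.starts_with(self.0) {
            Ok((&input[self.0.len()..], self.0))
        } else {
            Err(())
        }
    }
}

// A combinator that runs two parsers in sequence.
struct And<A, B>(A, B);

impl<A: Parse, B: Parse> Parse for And<A, B> {
    type Output = (A::Output, B::Output);
    fn parse<'a>(&self, input: &'a str) -> Result<(&'a str, Self::Output), ()> {
        let (rest, a) = self.0.parse(input)?;
        let (rest, b) = self.1.parse(rest)?;
        Ok((rest, (a, b)))
    }
}

fn main() {
    // The combinator tree has type And<Tag, Tag>; deeper grammars nest
    // further, which is what makes this style hard to follow in a debugger.
    let p = And(Tag("GET"), Tag(" "));
    assert_eq!(p.parse("GET /"), Ok(("/", ("GET", " "))));
}
```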
it turns out, I can get the same behaviour as macros with just functions, that may or may not return closures: https://gist.github.com/Geal/84775215be3b4d5978173165373c7dbb#file-appetizer-rs-L174-L297 … I'm not decided yet, but this looks easy enough to employ. And I could rewrite the current macros to use those function combinators
also, if you're interested, here's a small, incomplete parser combinator library I wrote on a whim to test this: https://gist.github.com/Geal/84775215be3b4d5978173165373c7dbb#file-appetizer-rs-L18-L166 … This parser technique is easy to learn, anybody can make their own :)
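The approach from the gist can be sketched in a few lines. This is a minimal illustration of the function-combinator idea (functions that return closures), not nom's actual API; the names `tag` and `pair` are assumptions:

```rust
// A function that returns a parser closure: match a literal prefix,
// returning (remaining input, matched slice) on success.
fn tag<'a>(pattern: &'a str) -> impl Fn(&'a str) -> Result<(&'a str, &'a str), &'a str> {
    move |input| {
        if input.starts_with(pattern) {
            Ok((&input[pattern.len()..], &input[..pattern.len()]))
        } else {
            Err(input)
        }
    }
}

// A combinator: take two parser closures, return a closure that runs
// them in sequence. Plain function calls, so stack traces stay readable.
fn pair<'a, A, B, P1, P2>(
    p1: P1,
    p2: P2,
) -> impl Fn(&'a str) -> Result<(&'a str, (A, B)), &'a str>
where
    P1: Fn(&'a str) -> Result<(&'a str, A), &'a str>,
    P2: Fn(&'a str) -> Result<(&'a str, B), &'a str>,
{
    move |input| {
        let (rest, a) = p1(input)?;
        let (rest, b) = p2(rest)?;
        Ok((rest, (a, b)))
    }
}

fn main() {
    let parser = pair(tag("GET"), tag(" "));
    assert_eq!(
        parser("GET /index.html"),
        Ok(("/index.html", ("GET", " ")))
    );
}
```

Since combinators are ordinary functions, they compose by ordinary function calls, and the generated code is close to what you would write by hand.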
I’ll have to adjust things a bit so that stack traces are not too polluted by the combinators, but it’s easy enough to follow. Fun fact: it’s apparently much better at inlining code than nom’s macros? I’ll have to investigate :) pic.twitter.com/LN0BP4H44W
what the actual fuck. How. How can it be that much faster already. Top results are from https://github.com/rust-bakery/parser_benchmarks/blob/master/http/nom-http/src/main.rs … (not the simd optimized one) Bottom is the version with functions: https://gist.github.com/Geal/f447cdee0e305954b840e4b47683acd6 … just a naive translation, did not optimize anything. what is happening pic.twitter.com/mb9fUu6U5A
I’m currently looking at possible perf regressions in previous rustc versions. I don’t get the same behaviour between stable and nightly. And even stable is a lot slower than it should be
alright, there's definitely a perf regression in the latest Rust stable, when testing on https://github.com/rust-bakery/parser_benchmarks/blob/master/http/nom-http/src/main.rs …
@rustlang any idea what happened on the perf side between 1.31 and 1.32? pic.twitter.com/EWPSLqxniw
looks like the jemalloc change accounts for most of the perf hit
Crates.io saw a much larger impact from the allocator change than we expected, too
CC @sadisticsystems - I know sled (an embeddable database crate) had some unfortunate hits by the switchover as well.
sled got 300% slower on write-heavy workloads, and it is totally unacceptable for library authors to set the global allocator. But 95%+ of users will only suffer by never knowing about jemallocator, so I'm probably going to do it anyway and make it opt-out via a feature. rip x_x
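For context, this is the mechanism in question, sketched minimally with the standard library's `System` allocator (the jemallocator crate and sled's feature-gate plumbing are not shown here): a single `#[global_allocator]` static picks the allocator for the whole program, which is why a library setting it is so intrusive.

```rust
use std::alloc::System;

// A binary (or, controversially, a library) can pick the program-wide
// allocator. Swapping `System` for `jemallocator::Jemalloc` is how a
// crate would opt back into jemalloc after Rust 1.32 made the system
// allocator the default.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // Every heap allocation in the program now goes through GLOBAL.
    let v: Vec<u32> = (0..4).collect();
    assert_eq!(v.iter().sum::<u32>(), 6);
}
```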