Oh, yeah, to be clear, I *vastly* prefer the "never break master" CI model; I'm more pointing out that it has scaling issues that are tricky -- not impossible -- to deal with.
-
In Rust's case I wrote up an autobatching scheme years ago, requiring a straightforward three-way PR classification, but doing it well requires more CI budget than we have right now. We could probably deploy a simpler version of it that gets us some wins.
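A minimal sketch of what such a classification-driven autobatching scheme could look like; every name, class, and batching rule below is a hypothetical illustration, not Rust's actual bors/homu behavior:

```rust
// Hypothetical sketch of classification-driven autobatching for a merge queue.
// None of these names come from Rust's real CI (bors/homu); they only
// illustrate the idea of a three-way PR classification feeding batch sizes.

#[derive(Clone, Copy, PartialEq, Debug)]
enum RiskClass {
    LowRisk,  // doc/tooling-only changes: cheap, almost never break
    Normal,   // ordinary changes: batch several, bisect on failure
    HighRisk, // build-system/codegen changes: test alone
}

struct PullRequest {
    number: u64,
    class: RiskClass,
}

/// Pack the queue into batches: high-risk PRs get a batch of their own,
/// everything else shares full CI runs so one green run lands many PRs.
fn make_batches(queue: &[PullRequest], batch_size: usize) -> Vec<Vec<u64>> {
    let mut batches = Vec::new();
    let mut current = Vec::new();
    for pr in queue {
        if pr.class == RiskClass::HighRisk {
            if !current.is_empty() {
                batches.push(std::mem::take(&mut current));
            }
            batches.push(vec![pr.number]);
        } else {
            current.push(pr.number);
            if current.len() == batch_size {
                batches.push(std::mem::take(&mut current));
            }
        }
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    let queue = vec![
        PullRequest { number: 101, class: RiskClass::LowRisk },
        PullRequest { number: 102, class: RiskClass::Normal },
        PullRequest { number: 103, class: RiskClass::HighRisk },
        PullRequest { number: 104, class: RiskClass::Normal },
    ];
    println!("{:?}", make_batches(&queue, 8));
    // [[101, 102], [103], [104]]
}
```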
-
Replying to @ManishEarth @stephentyrone
I mean, I'm nowhere near the purse strings, but this has literally been an issue off and on since I, uh, left the project. We were having an argument over it that very week. It means prioritizing cycle time in a way that seems to resist all rational planning. I don't get it.
-
Like if someone with the correct authority said "we don't do any more feature work or bug fixing or anything until cycle time is down to 10 minutes", it would get solved. It's not like compilers that bootstrap and self-test that fast (without $infinite_aws_bill) can't be written.
-
I’m fine with investing in automatic rollups (the fact that we have to do them bugs me too), but you lost me at “stop all feature/bug fix work until the compiler is 10x faster”.
-
Replying to @pcwalton @graydon_pub and others
I’m not even convinced it’s possible for Rust to compile that fast without simplifying the language a lot. Even if it were, you’re talking about a complete rewrite of major subsystems. Like either “rewrite the whole typechecker” or “rewrite LLVM”.
-
Replying to @BRIAN_____ @graydon_pub and others
Can you describe how to make rustc 10x as fast? Everyone says that of course it’s possible, 10x is easy to get, it’s just that nobody cares about compiler perf, etc., etc. And then everyone who tries ends up with, like, a perf boost of 10%, if that.
-
Replying to @pcwalton @BRIAN_____ and others
You can work on latency instead of throughput (i.e. incremental compilation) for the bootstrap case. Second-order incremental *would be* a thing, but that would require a pure language to write a compiler in, and Rust ain't that. I am seriously waiting for advances in linear-dependent types.
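For concreteness, here is a minimal, hypothetical sketch of the demand-driven memoization shape that incremental compilation takes (rustc's query system, or crates like salsa); the query name and the "type checking" here are made up, and real dependency tracking and invalidation are omitted:

```rust
use std::collections::HashMap;

// Hypothetical sketch of demand-driven, memoized queries -- the shape
// incremental compilation takes in rustc's query system (or crates like
// salsa). The query and the "type checking" are made up; real dependency
// tracking and invalidation are omitted.

struct QueryCache {
    source: HashMap<&'static str, String>, // input: source text per item
    types: HashMap<&'static str, String>,  // memoized query results
}

impl QueryCache {
    /// Return the (fake) "type" of an item, computing it at most once.
    /// On a rebuild, only items whose inputs changed would be recomputed,
    /// so latency tracks the size of the edit, not the size of the crate.
    fn type_of(&mut self, item: &'static str) -> String {
        if let Some(cached) = self.types.get(item) {
            return cached.clone();
        }
        let src = self.source.get(item).cloned().unwrap_or_default();
        let result = format!("inferred type for `{src}`"); // stand-in for real type checking
        self.types.insert(item, result.clone());
        result
    }
}

fn main() {
    let mut cache = QueryCache {
        source: HashMap::from([("foo", "fn foo() -> u32".to_string())]),
        types: HashMap::new(),
    };
    let first = cache.type_of("foo");  // computed
    let second = cache.type_of("foo"); // served from the cache
    assert_eq!(first, second);
}
```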
-
As for speedups within rustc written in Rust: compile type-matching into a bytecode, and use a type representation that makes an interpreter fast, or even JIT it (like CSS selectors ;)). Make LLVM cheaper with optimizations on MIR (superlinear wins from it being polymorphic).
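A toy sketch of the first idea: compile the matching problem to a tiny bytecode and run a flat interpreter over a preorder-encoded type, rather than walking heap-allocated trees. Everything here (tags, ops, arities) is invented for illustration and bears no relation to rustc's actual type or trait machinery:

```rust
// Toy sketch (not rustc): compile a type pattern into a tiny bytecode and run
// a flat interpreter over a preorder-encoded type, instead of walking a tree
// of heap-allocated nodes. All names here are invented for illustration.

#[derive(Clone, Copy, PartialEq, Debug)]
enum Tag { Ref, Vec, Tuple2, U32, Str }

impl Tag {
    fn arity(self) -> usize {
        match self {
            Tag::Ref | Tag::Vec => 1,
            Tag::Tuple2 => 2,
            Tag::U32 | Tag::Str => 0,
        }
    }
}

#[derive(Clone, Copy, Debug)]
enum Op {
    Expect(Tag), // the next tag in the flat type must equal this
    SkipSubtree, // a wildcard: skip one whole subtree, whatever it is
}

/// Run the bytecode against a type stored as a flat preorder tag sequence.
/// Returns true if the pattern matches the whole encoded type.
fn matches(ops: &[Op], ty: &[Tag]) -> bool {
    let mut i = 0;
    for op in ops {
        match *op {
            Op::Expect(tag) => {
                if ty.get(i) != Some(&tag) {
                    return false;
                }
                i += 1;
            }
            Op::SkipSubtree => {
                // Skip exactly one subtree by tracking how many nodes remain.
                let mut pending = 1usize;
                while pending > 0 {
                    let tag = match ty.get(i) {
                        Some(t) => *t,
                        None => return false,
                    };
                    pending = pending - 1 + tag.arity();
                    i += 1;
                }
            }
        }
    }
    i == ty.len()
}

fn main() {
    // Pattern: &Vec<_>   Type: &Vec<(u32, str)>
    let ops = [Op::Expect(Tag::Ref), Op::Expect(Tag::Vec), Op::SkipSubtree];
    let ty = [Tag::Ref, Tag::Vec, Tag::Tuple2, Tag::U32, Tag::Str];
    assert!(matches(&ops, &ty));
}
```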
-
Yeah, I think your first example illustrates my point: that idea is not just optimization using bog-standard 1970s compiler techniques, that’s capital-R Research :)
-
Replying to @pcwalton @BRIAN_____ and others
I am not even aware of compilers in the 70s that did monomorphization! The ML family can compile fast by using dynamic dispatch. For the people in the back: MLs & Haskell compile more like Java, and less like C++ or Rust, so they can do faster compiles with slower run times.
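The tradeoff being described can be shown in Rust's own terms: generic functions are monomorphized, so the compiler and LLVM process one copy of the code per instantiation (part of why C++ and Rust compiles are expensive), while trait objects compile once and dispatch through a vtable, closer to how Java and boxed/uniform-representation MLs and Haskell keep codegen cheap. A minimal illustration:

```rust
use std::fmt::Display;

// Monomorphized: the compiler generates (and LLVM optimizes) a separate copy
// of this function for every T it is used with -- faster at run time,
// more work at compile time.
fn describe_generic<T: Display>(value: T) -> String {
    format!("value = {value}")
}

// Dynamic dispatch: one compiled copy, calls go through a vtable --
// less codegen work, a little slower at run time. Boxed/uniform
// representations in MLs, Haskell, and Java behave more like this.
fn describe_dyn(value: &dyn Display) -> String {
    format!("value = {value}")
}

fn main() {
    // Two instantiations of describe_generic -> two copies of machine code.
    println!("{}", describe_generic(42u32));
    println!("{}", describe_generic("hello"));

    // Both calls share the single compiled body of describe_dyn.
    println!("{}", describe_dyn(&42u32));
    println!("{}", describe_dyn(&"hello"));
}
```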
-
Depends if you consider Ada as 70s.