I just got this down to 54s by fixing a silly mistake. I hope I've made more silly mistakes. https://twitter.com/edwinbrady/status/1265223132396367873
-
Down to 48.5s through the magic of inlining. This is really quite addictive... I am at least a little bit motivated by @Augustsson telling me how fast Mu Haskell is (or was...).
-
Replying to @edwinbrady @Augustsson
Unless you make it negative latency by writing a compiler that predicts future binaries before you write the corresponding source code, /r/programming will not be satisfied.
-
Question: Does the Idris 2 compiler already parallelize independent module subgraph compilation?
-
Replying to @_m_b_j_ @Augustsson
No, that's less fun than optimising the single thread. But it is at least now possible - the Idris 1 run time wasn't good enough.
-
Replying to @edwinbrady @Augustsson
Yeah, I'd agree with maxing out a single thread first. I'm just asking whether the global vector you describe would be an inhibitor for future parallelization?
-
No, that's per source file. "Global" means relative to the current file; you could create lots and they wouldn't interfere.
-
Replying to @edwinbrady @Augustsson
Thanks. So the inference is mediated via the module interface files; I should have known that. And the DAG guarantees aren't hurt by concurrent compilation.
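The scheme discussed above, compiling independent parts of a module dependency DAG concurrently while dependents wait for their imports' interface files, can be sketched roughly as follows. This is a hypothetical illustration in Python, not the Idris 2 implementation: the module names, the `deps` table, and the trivial `compile_module` stand-in are all invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Hypothetical dependency DAG: deps[m] = modules that m imports,
# which must be compiled (i.e. have interface files) before m starts.
deps = {
    "Prelude": set(),
    "Data.List": {"Prelude"},
    "Data.Vect": {"Prelude"},
    "Main": {"Data.List", "Data.Vect"},
}

def compile_module(name):
    # A real compiler would parse, elaborate, and write an interface
    # file here; dependents consult only that interface, which is what
    # keeps concurrent compilation of independent modules safe.
    return name

def parallel_build(deps):
    order = []               # completion order, for illustration
    remaining = dict(deps)   # module -> not-yet-compiled dependencies
    done = set()
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {}
        while remaining or futures:
            # Launch every module whose imports are all compiled;
            # independent subgraphs proceed in parallel.
            ready = [m for m, ds in remaining.items() if ds <= done]
            for m in ready:
                futures[pool.submit(compile_module, m)] = m
                del remaining[m]
            finished, _ = wait(futures, return_when=FIRST_COMPLETED)
            for fut in finished:
                m = futures.pop(fut)
                done.add(fut.result())
                order.append(m)
    return order

if __name__ == "__main__":
    order = parallel_build(deps)
    # The DAG guarantee: every module finishes after its dependencies.
    index = {m: i for i, m in enumerate(order)}
    assert all(index[d] < index[m] for m, ds in deps.items() for d in ds)
```

Here "Data.List" and "Data.Vect" share no edge, so they may compile simultaneously; "Main" only launches once both interfaces exist, which is the DAG guarantee mentioned above.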