Probably fine in a language that handles value changes over time rigorously rather than as arbitrary side effects, but it seems like a death trap for imperative programming. It could still be useful in that setting too, but you'd want it to stand out and be applied very carefully.
-
I think one thing we've learned over the past few decades is that even single-threaded event-driven/callback-driven/asynchronous programming is almost as hard to reason about as general concurrency problems and should be avoided whenever possible.
(7 more replies)
New conversation
If I were to do this using the functional idioms already available to me in a largely imperative language, I’d define z = () => x() + y() and let the runtime take care of optimization/memoization. The idea of a monitor could be abstracted out into a monad covering x and y calls.
-
Yes, that’s another interesting pattern that emerges on top of a transactional memory system: Memoize a function whose sole effect is reading the heap, and cache the dependencies to ensure re-evaluation only if they change.
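The pattern described in these two replies can be sketched concretely. Below is a minimal, illustrative version in TypeScript: a derived value that memoizes its result, records the versions of the cells it depends on, and re-evaluates only when one of them has changed. The names (`Cell`, `derived`) are invented for this sketch, not from any particular library, and there is no transactional machinery here — just the memoize-plus-dependency-check idea.

```typescript
// A cell tracks a version number that is bumped on every write.
class Cell<T> {
  version = 0;
  constructor(private value: T) {}
  get(): T { return this.value; }
  set(v: T) { this.value = v; this.version++; }
}

// derived() memoizes compute() and caches the dependency versions it
// last saw, so it re-evaluates only if some dependency has changed.
function derived<T>(deps: Cell<any>[], compute: () => T): () => T {
  let cached: T | undefined;
  let seen: number[] | null = null; // dependency versions at last run
  return () => {
    const now = deps.map(d => d.version);
    if (seen === null || now.some((v, i) => v !== seen![i])) {
      cached = compute();           // re-evaluate: a dependency changed
      seen = now;
    }
    return cached!;                 // otherwise: serve the cached value
  };
}

const x = new Cell(1), y = new Cell(2);
const z = derived([x, y], () => x.get() + y.get());
console.log(z()); // 3  (computed)
x.set(10);
console.log(z()); // 12 (recomputed: x changed)
console.log(z()); // 12 (cached: nothing changed)
```

Here the "sole effect is reading the heap" condition matters: if `compute` had other side effects, the cache would silently suppress them.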
End of conversation
New conversation
You can already simply do:

#define z (x+y)

Although, with reactive variables, I'd forecast a lot of bugs and security bugs: integer-overflow mitigations require checking the result of the operation every time it occurs.
-
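The overflow point above can be made concrete: if z is re-evaluated on every read (as with the macro, or a reactive variable), the overflow check must also run on every read, not just once at initialization. A small illustrative sketch in TypeScript, using int32 bounds as the example range; `checkedAdd` is a hypothetical helper, not a standard API.

```typescript
// Hypothetical checked addition over an int32-style range.
const INT32_MAX = 2147483647;
const INT32_MIN = -2147483648;

function checkedAdd(a: number, b: number): number {
  const r = a + b;
  if (r > INT32_MAX || r < INT32_MIN) {
    throw new RangeError(`int32 overflow: ${a} + ${b}`);
  }
  return r;
}

let x = 2147483000;
let y = 500;
// Like `#define z (x+y)`: z is re-evaluated — and re-checked — per read.
const z = () => checkedAdd(x, y);

console.log(z()); // 2147483500 — within range
y = 1000;
try {
  z();            // now overflows; the per-read check catches it
} catch (e) {
  console.log((e as Error).message);
}
```

A single check at the point where z was defined would have missed the later write to y — which is exactly the class of bug the reply is warning about.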
Imagine trying to guess, while reading the code, which variables are dynamic and which aren't, without having to go back and check their initialization. Nightmare.
(3 more replies)
New conversation
Designing data-flow systems/tools, I realised that spreadsheets are very much this (as mentioned above). I love the 'alternate' programming mindset required. It tends towards a functional model too.
-
Have found the data-flow model invaluable for building and iterating on reactive systems like this procgen project: https://twitter.com/ga5p0d3/status/920772026683547654?s=19
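The spreadsheet analogy can be sketched in a few lines: each cell is either a constant or a formula over other cells, and a read resolves the dependency graph on demand. A purely illustrative TypeScript sketch with invented names (`Sheet`, `read`); note it assumes an acyclic graph, with no cycle detection.

```typescript
// A cell is a constant or a formula computed from the sheet.
type Sheet = Record<string, number | ((s: Sheet) => number)>;

// Reading a formula cell recursively reads the cells it depends on.
function read(sheet: Sheet, name: string): number {
  const cell = sheet[name];
  return typeof cell === "function" ? cell(sheet) : cell;
}

const sheet: Sheet = {
  x: 2,
  y: 3,
  z: s => read(s, "x") + read(s, "y"), // z = x + y, like a formula cell
  w: s => read(s, "z") * 10,           // formulas compose
};

console.log(read(sheet, "w")); // 50
sheet.x = 7;                   // edit one cell...
console.log(read(sheet, "w")); // ...and every downstream read sees it: 100
```

The functional flavour the reply mentions falls out naturally: formulas are pure functions of other cells, so "recalculation" is just re-reading.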
(3 more replies)
New conversation
I wonder if propagator networks would help here: http://web.mit.edu/~axch/www/phd-thesis.pdf. It’s been a while since I’ve read that, but the idea of “continually improving results with incomplete intermediate results” seems useful in systems that change over time. Maybe not? Maybe?
-
AFAIK you'd need some way to ensure the computation will converge, or you'll create infinite feedback loops too easily. Don't they rely on monotone functions in the thesis? Maybe the CALM theorem saves us? Last time I was looking at this I binged several
@kmett talks on this :)
(2 more replies)
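The convergence point can be illustrated with a toy propagator sketch in TypeScript. Each cell's content only ever grows (here, a max-lattice over numbers, with merge as the monotone join), so even a feedback loop quiesces at a fixed point instead of oscillating forever. This is an invented minimal API for illustration, not the design from the thesis.

```typescript
// A propagator cell: its value only ever increases (monotone join).
class PCell {
  value = -Infinity;                  // bottom of the lattice
  watchers: (() => void)[] = [];
  merge(v: number) {
    if (v > this.value) {             // join: never decreases
      this.value = v;
      this.watchers.forEach(w => w());
    }
  }
}

// A propagator: whenever an input grows, push new info into the output.
function propagate(inputs: PCell[], out: PCell, f: (xs: number[]) => number) {
  const run = () => out.merge(f(inputs.map(cell => cell.value)));
  inputs.forEach(cell => cell.watchers.push(run));
  run();
}

const a = new PCell(), b = new PCell(), c = new PCell();
propagate([a, b], c, ([x, y]) => Math.min(x, y)); // c ≥ min(a, b)
propagate([c], a, ([z]) => z);                    // feedback: a ≥ c
a.merge(3);
b.merge(5);
console.log(c.value); // 3 — the loop quiesces because merges are monotone
```

Drop the monotonicity (e.g. let `merge` overwrite with any new value) and the `a → c → a` loop above could fire forever — which is exactly the failure mode the reply is worried about.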