I'm most excited about combining engineering with research at higher levels of abstraction, e.g. knowing the HW & SW bottlenecks but coming up with new mathematical models to solve the problems, while also being able to engineer the solution, aware of its limitations.
An especially interesting intersection is FPGA development and new programming languages designed specifically to express the logic of a particular set of problems, rather than general-purpose semantics applied to a whole bunch of completely unrelated problems.
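To make that concrete, here's a minimal sketch of what such a problem-specific language can look like as an embedded DSL. Everything here (the `Signal` class, `inp`, `simulate`) is made up for illustration; the point is that the program builds a circuit graph rather than a sequence of instructions, which is the kind of description an FPGA toolchain could map to hardware.

```python
# Hypothetical sketch: a tiny embedded DSL for combinational logic.
# All names are invented for illustration; real hardware DSLs
# (e.g. in the Haskell or Scala ecosystems) work on the same idea:
# the "program" constructs a circuit graph, not instructions.

class Signal:
    """A node in a combinational circuit graph."""
    def __init__(self, op, *args):
        self.op, self.args = op, args

    # Overload operators so descriptions read like logic formulas.
    def __and__(self, other): return Signal("and", self, other)
    def __or__(self, other):  return Signal("or", self, other)
    def __invert__(self):     return Signal("not", self)

def inp(name):
    """A named circuit input."""
    return Signal("in", name)

def simulate(sig, env):
    """Evaluate a circuit graph against a dict of input values."""
    if sig.op == "in":
        return env[sig.args[0]]
    vals = [simulate(a, env) for a in sig.args]
    if sig.op == "and": return vals[0] and vals[1]
    if sig.op == "or":  return vals[0] or vals[1]
    if sig.op == "not": return not vals[0]

# A half-adder described in the DSL: carry = a AND b, sum = a XOR b.
a, b = inp("a"), inp("b")
carry = a & b
xor = (a | b) & ~(a & b)
```

The same graph that `simulate` walks sequentially could, in principle, be laid out spatially on a chip, which is exactly the shift in perspective the paragraph above is about.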
I'm only dipping my toes into the water here, but it's the direction I'd most like to take math-wise.
For example (and excuse the crude generalization): math and basically epistemological research that deals specifically with how to present information in a more intuitive, simpler way that relates back to the type of information in question.
So there could be ways of picturing and solving sets of problems other than, say, imperative or functional programming: a different perspective entirely, like using applied topology to look at communication in networks.
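One very simple instance of that topological flavor: treat the communication network as a graph and compute its number of connected components, which topologists call the 0th Betti number. It answers "can every node reach every other?" directly from the shape of the network. The example network below is made up; the union-find technique itself is standard.

```python
# Minimal sketch of the "topological" view of a network: count its
# connected components (the 0th Betti number) with union-find.
# The example network is invented for illustration.

def betti_0(nodes, edges):
    """Number of connected components of an undirected graph."""
    parent = {n: n for n in nodes}

    def find(x):
        # Walk to the root, halving paths as we go for efficiency.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)  # merge the two components

    return len({find(n) for n in nodes})

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("b", "c"), ("d", "e")]
# Two clusters: {a, b, c} and {d, e}.
```

Higher Betti numbers (cycles, voids) extend the same idea and can reveal redundancy or bottlenecks in a network that a purely imperative traversal wouldn't surface as directly.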
All of that combined to design new semantics, new programming languages, and also (as a separate area, of course) hardware design that supports them.
Of course this isn’t new - I’m just thinking about which direction I’d like to go professionally over the years, and what I’d like to focus on most through my studies. Combining this with graphics and AGI research (languages & new ways of mapping information) could be super rewarding.
I found this presentation to be an interesting perspective on the problem:
Enjoying this presentation! RISC-V, dependent types, post-C programming models, avoiding Spectre and Meltdown… all to turn on a light bulb!

“Proof Assistants at the Hardware-Software Interface” by Adam Chlipala: youtube.com/watch?v=GXXOyX
They are doing interesting work compiling Coq descriptions to FPGAs, trying to capture the graph-based nature of the problem and take advantage of the natural parallelism inherent in the structure of the chip, rather than going via machine code.
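A toy illustration of why a graph representation exposes that parallelism (this is not Chlipala's actual pipeline, just the underlying scheduling idea): level the dataflow graph so that nodes in the same level have no dependencies on each other. On an FPGA, each level could evaluate in the same clock cycle; a machine-code lowering would serialize the whole thing.

```python
# Toy illustration (not the actual Coq-to-FPGA pipeline): schedule a
# dataflow graph into "levels". Nodes in the same level are mutually
# independent and could run in one clock cycle on a chip, whereas a
# sequential lowering executes them one by one.

def schedule_levels(deps):
    """deps maps each node to the set of nodes it depends on."""
    levels = []
    placed = set()
    while len(placed) < len(deps):
        # A node is ready once all its dependencies have been placed.
        ready = {n for n, ds in deps.items()
                 if n not in placed and ds <= placed}
        if not ready:
            raise ValueError("cycle in dataflow graph")
        levels.append(sorted(ready))
        placed |= ready
    return levels

# sum1 and sum2 don't depend on each other, so they share a level.
deps = {
    "x": set(), "y": set(), "z": set(),
    "sum1": {"x", "y"},
    "sum2": {"y", "z"},
    "out": {"sum1", "sum2"},
}
```

Three levels instead of six sequential steps: the depth of the graph, not its size, bounds the latency, which is the advantage the quoted work is chasing.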