@pegasusepsilon use CUDA:
https://youtu.be/0uNyqHUPR6U is Vl'Hurg in
https://github.com/graveolensa/torrential_rainbows …
I saw your post about writing a new renderer.
Replying to @graveolens
CUDA is Nvidia-specific, so I will most certainly not do that. Not every system (GPU and software stack alike) supports OpenCL, so I won't do that either. I do have a trick to use OpenGL, though, so that's what I'll be doing.
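The OpenGL trick itself isn't spelled out in the thread, but the classic vendor-neutral approach is render-to-texture GPGPU: a fragment shader acts as a per-pixel kernel, and the output framebuffer is your result array. A CPU-side sketch of that execution model (function names are mine, purely illustrative), using an escape-time fractal as the "shader":

```python
def run_kernel(kernel, width, height):
    """Run `kernel` once per pixel, like a fragment shader over a quad.
    Each invocation gets normalized coordinates (gl_FragCoord / resolution)."""
    return [[kernel(x / width, y / height) for x in range(width)]
            for y in range(height)]

def mandelbrot(u, v, max_iter=20):
    """Example 'fragment shader': escape-time count over [-2,1] x [-1.5,1.5]."""
    cr, ci = -2.0 + 3.0 * u, -1.5 + 3.0 * v
    zr = zi = 0.0
    for i in range(max_iter):
        if zr * zr + zi * zi > 4.0:
            return i          # escaped: analogous to writing an iteration count
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
    return max_iter           # assumed in the set

image = run_kernel(mandelbrot, 64, 64)
```

On the GPU, `run_kernel` is a full-screen quad draw into an FBO, and `image` is read back with `glReadPixels`; the shader body stays essentially the same.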
Replying to @pegasusepsilon
(more generally and platform-agnostically: the specfuns I employ hither and thither are really happier when the numerics are blazing. I'd have modular forms/some good chunk of the DLMF on an ASIC if I could)
Replying to @graveolens
Oh yeah, putting a math library of some sort into a shader, or just building a shader pipeline that allowed __float128 would be amazing. But this is the world we live in, so we'll have to do what we can instead.
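Shader pipelines generally top out at 32-bit (sometimes 64-bit) floats, so a `__float128` pipeline isn't on offer; the usual workaround in deep-zoom fractal shaders is pair ("double-double") arithmetic, where a value is carried as an unevaluated sum of two floats. A minimal sketch of the idea (the function names are mine, not from any library):

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with a + b == s + e exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(a, b):
    """Add two double-double values, each a (hi, lo) pair."""
    s, e = two_sum(a[0], b[0])
    e += a[1] + b[1]          # fold in the low-order parts
    return two_sum(s, e)      # renormalize so |lo| is tiny relative to |hi|

# A tiny addend that plain floating point would swallow survives in the pair:
hi, lo = dd_add((1.0, 0.0), (1e-20, 0.0))
```

The same `two_sum`/renormalize pattern translates directly into GLSL with `vec2` pairs, effectively doubling the mantissa width at the cost of a few extra ops per add.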
Replying to @pegasusepsilon @graveolens
By the way, you could always get some custom silicon from China, if you really want to churn large amounts of bits fast, and you have a decent patron. Nobody funds me, or I'm sure I'd be doing some far crazier things.
@ikefeitler and I intend to cast quaternion Julia sets in bismuth, and we are equally unsupported.
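For reference, the quaternion Julia set is the direct 4D analogue of the complex one: iterate q ← q² + c with quaternion multiplication and test for escape. A hypothetical sketch of the membership test (names and the sample constant are mine):

```python
def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def julia_escape(q, c, max_iter=50, bailout=4.0):
    """Iterate q <- q*q + c; return the escape iteration, or max_iter if bounded."""
    for i in range(max_iter):
        if sum(v * v for v in q) > bailout:   # squared norm exceeds bailout
            return i
        q = tuple(s + t for s, t in zip(qmul(q, q), c))
    return max_iter
```

Renderers typically fix one quaternion component and ray-march the resulting 3D slice; the escape count (or a distance estimate) drives shading, which in turn gives the surface one might cast.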
Replying to @graveolens @ikefeitler
Fractal rendering obsession may well be the last of art for art's sake.