The main difference in my implementation is a custom MSAA resolve to handle the internal outlines. Also, it does 2 jump-floodings at the same time in order to support a second independent outline in a single pass (going to use that for hover vs select)
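For context, here is a minimal CPU sketch of the jump flood algorithm (JFA) that these outlines are based on. This is a hypothetical standalone illustration, not Rerun's actual code: each grid cell ends up holding the coordinates of its nearest seed, from which an outline pass can derive a distance. The GPU version runs the same halving-step passes in a fragment shader; packing two independent seed channels into each texel is what makes two outlines (e.g. hover vs. select) possible in a single pass.

```rust
/// Jump flood on a CPU grid: after the passes, each cell holds the
/// coordinates of (approximately) its nearest seed.
fn jump_flood(
    width: usize,
    height: usize,
    seeds: &[(usize, usize)],
) -> Vec<Option<(usize, usize)>> {
    let mut grid: Vec<Option<(usize, usize)>> = vec![None; width * height];
    for &(x, y) in seeds {
        grid[y * width + x] = Some((x, y));
    }
    // Step sizes: N/2, N/4, ..., 1.
    let mut step = (width.max(height) / 2).max(1);
    loop {
        let prev = grid.clone();
        for y in 0..height {
            for x in 0..width {
                // Squared distance from (x, y) to a candidate seed.
                let dist2 = |sx: usize, sy: usize| -> i64 {
                    let dx = sx as i64 - x as i64;
                    let dy = sy as i64 - y as i64;
                    dx * dx + dy * dy
                };
                let mut best = prev[y * width + x];
                // Look at the 8 neighbors `step` cells away (plus self).
                for dy in [-1i64, 0, 1] {
                    for dx in [-1i64, 0, 1] {
                        let nx = x as i64 + dx * step as i64;
                        let ny = y as i64 + dy * step as i64;
                        if nx < 0 || ny < 0 || nx >= width as i64 || ny >= height as i64 {
                            continue;
                        }
                        if let Some((sx, sy)) = prev[ny as usize * width + nx as usize] {
                            if best.map_or(true, |(bx, by)| dist2(sx, sy) < dist2(bx, by)) {
                                best = Some((sx, sy));
                            }
                        }
                    }
                }
                grid[y * width + x] = best;
            }
        }
        if step == 1 {
            break;
        }
        step /= 2;
    }
    grid
}
```

Running "two jump floods at the same time" on the GPU then just means storing two nearest-seed candidates per texel (e.g. one in the rg channels, one in ba) and updating both in the same pass.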
AndreasR (@wumpf) - mastodon.gamedev.place/@wumpf
software stuff! Software Engineer, previously game engine dev @ Unity & Havok

Tweets
Currently we do selection highlighting in the Rerun viewer by changing color & size. Which frankly sucks!
Getting jump-flooding based outlines to the rescue 🥳
Rerun 0.3.0 is liiiive! :)
On towards making regular releases a habit, not an easy feat 😅
Quote Tweet
Rerun 0.3.0, our first post-launch release, is out! It comes packed with performance and stability improvements and a bunch of polish and fixes to make your life easier. Full release notes here: github.com/rerun-io/rerun
OSS launch day went really well. Got loads of traffic on the webpage and plenty of people joining on Discord. Excited to see how it goes from here (:
Today is the day! Have a look at all the shiny (and soon-to-be-shiny!) things we’ve built. It’s open source now <3
Quote Tweet
We’re making Rerun open source today! @Rerundotio is now available with `pip install rerun-sdk` and `cargo add rerun`
#computervision #robotics #opensource #rustlang
I think my sample scene is starting to get out of hand here 😵💫
srsly though, pretty stoked about finally sharing with the world what we built so far! :)
o̵h̵ ̵g̵o̵d̵ ̵n̵o̵ ̵n̵o̵t̵ ̵y̵e̵t̵ ̵n̵o̵t̵ ̵a̵l̵r̵e̵a̵d̵y̵ ̵c̵a̵n̵ ̵w̵e̵ ̵w̵a̵i̵t̵ ̵a̵ ̵b̵i̵t̵ ̵l̵o̵n̵g̵e̵r̵ ̵j̵u̵s̵t̵ ̵l̵e̵t̵ ̵m̵e̵ ̵f̵i̵x̵ ̵t̵h̵i̵s̵ ̵o̵n̵e̵ ̵m̵o̵r̵e̵ ̵t̵h̵i̵n̵g̵ , ehrm I mean this is gonna be pretty cool, trust me 😎
Quote Tweet
Almost there!
On February 15 we’re making @rerundotio open source, with Python and Rust libraries, examples and docs, in whatever state we manage to get it all in until then 
Show this thread
Remember that silly solar system I posted a while ago? twitter.com/wumpf/status/1
It shows what I meant back then about how this can be super cool with cameras 🙃:
Quote Tweet
Been playing around with #COLMAP recently. It was actually much easier to use than I remembered.
Here is a sparse reconstruction visualized with @rerundotio. Setting up this multi-perspective view was just a couple clicks in the UI (written 100% in #rustlang and #egui btw).
did I say all the things? YES! Multiselect!! :o)
selecting & hovering all the things!
spent some time wiring up stuff so everything gives feedback in all directions
One of our early testers is using Rerun to track the training of a little virtual robot running in #bevy (bottom left) 🤩
It's actually similar to some of the work I did at
And sooooon all of this is going open source :)
btw. that depth viz there is close to a million points which the viewer feeds in and renders in realtime. Ofc a specialized visualization would be cooler still (and will come!) but it's nice that stuff like this just works
Quote Tweet
One of my favourite things about how @rerundotio is shaping up is just how quick and simple it is to build powerful mixed 2D/3D visualizations for computer vision.
Also, super snappy thanks to #rustlang =)
love these sample use cases of our viewer - this one gives you a glimpse into what's going on inside of #stablediffusion
Quote Tweet
Been playing some catchup lately getting into #StableDiffusion 2.0. To understand what's going on I've been logging and visualizing intermediate tensors with @rerundotio. Here is my cofounder @ernerfeldt turning into santa via depth guided diffusion from @StabilityAI
Further fusing 2D & 3D things together: spaces behind a user-defined pinhole camera can now be placed into 3D spaces \o/
The rendering itself is relatively easy, but making it fit both into the architecture and the UI is an ongoing challenge 🧑💻
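Roughly what "placing a 2D space behind a pinhole camera" boils down to is unprojecting image-plane coordinates into the camera's 3D frame. A minimal sketch with hypothetical names (not Rerun's actual API):

```rust
/// Pinhole camera model: x_img = fx * X/Z + cx, y_img = fy * Y/Z + cy.
/// A 2D image point maps to a ray in camera space; picking a depth `z`
/// turns the whole 2D space into 3D geometry.
struct Pinhole {
    fx: f32, // focal length in pixels, x
    fy: f32, // focal length in pixels, y
    cx: f32, // principal point, x
    cy: f32, // principal point, y
}

impl Pinhole {
    /// Place an image-plane point at distance `z` in front of the camera.
    fn unproject(&self, x_img: f32, y_img: f32, z: f32) -> [f32; 3] {
        [
            (x_img - self.cx) / self.fx * z,
            (y_img - self.cy) / self.fy * z,
            z,
        ]
    }
}
```

With this, the embedded 2D space is essentially a quad whose corners are the unprojected image corners, then transformed by the camera's pose into the parent 3D space.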
the planets are a toy example, but being able to treat any viewpoint as fixed is super useful for visualizations where e.g. a camera system moves relative to others!
Quote Tweet
#egui and eframe 0.20.0 released!
AccessKit support, prettier text, overlapping widget, and much more!
Try it at egui.rs
This is a BIG release, so let me list some highlights (1/n)
#rustlang #wasm #gamedev
all lines together in a single draw call, zero geometry generation on the CPU 8-)
round line caps are the only place where I do fragment discarding so far, and they ofc come with a coverage mask for antialiasing :)
pretty proud of my line strip renderer by now.
Doesn't look like much, but it can render arbitrary line strips from just the "skeleton" positions and a bunch of flags, generating all geometry in a vertex & fragment shader #wgpu
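The core of this technique ("vertex pulling") is that the shader is invoked with 6 vertices per line segment and derives everything from the vertex index alone, reading the skeleton positions from a storage buffer rather than a vertex buffer. A hypothetical CPU mirror of the index math, not the actual WGSL:

```rust
/// For a triangle-list draw of `6 * num_segments` vertices, map a raw
/// vertex index to (segment index, skeleton position index, side).
/// The shader then offsets the skeleton position along the screen-space
/// line normal by `side * half_line_width` - no vertex buffer needed.
fn line_vertex_params(vertex_idx: u32) -> (u32, u32, f32) {
    let segment = vertex_idx / 6;
    // Two triangles per segment quad, over corners encoded as
    // (endpoint, side): 0=(0,-1), 1=(0,+1), 2=(1,-1), 3=(1,+1).
    const CORNERS: [u32; 6] = [0, 1, 2, 2, 1, 3];
    let corner = CORNERS[(vertex_idx % 6) as usize];
    let position_idx = segment + corner / 2; // index into skeleton positions
    let side = if corner % 2 == 0 { -1.0 } else { 1.0 };
    (segment, position_idx, side)
}
```

Since consecutive segments of a strip share endpoints, a strip of N skeleton positions needs only `6 * (N - 1)` vertex invocations and no CPU-side geometry at all.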
Someone on the team suggested adding a 3D camera to the 2D test scene to get a better look at potential depth issues.
Love it!
We recently added native support for object detections with keypoints to Rerun, so I made a small example using #mediapipe pose tracking. Not perfect, but still quite impressive given the heavy motion blur.
the plugin I use to post on twitter & mastodon simultaneously isn't picking up videos - should have seen that coming
back at rendering, this week it's all about unifying 2D & 3D
First stop, orthographic camera. Lines & points are fancy pretend-3D primitives so they needed a bit of love :)
We've finally added time series logging and plotting to 📈
You log scalars just like you would log text or images, and it shows up just like you would expect
Unsure how much longer I'll stick around here. Find me instead here 🙂
Quote Tweet
Playing around with using @rerundotio to visualize a quick experiment where I use a @huggingface model to periodically detect objects and then track them with a cheaper @opencvlibrary based method in between.
once these beauties land in wgpu, it also works on the Web :)
github.com/gfx-rs/wgpu/pu
Or rather I'm not happy with WebGL!! github.com/gfx-rs/wgpu/pu
Been doing things it doesn't let me do again 😢
once you assume most of your scene to be fully dynamic, there's a bunch of curious stuff that falls into place.
Small example: it's possible to have just *one* instance buffer, since no transformation (!) is assumed to be re-usable. Draw calls just get weird index ranges
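A rough sketch of the idea with hypothetical names (not Rerun's actual code): since everything is rebuilt each frame anyway, all instances for the frame go into one growing buffer, and each draw call only remembers the sub-range it owns.

```rust
use std::ops::Range;

/// One CPU-side staging buffer for *all* instances of a frame.
/// Each draw call records only the index range it owns.
struct FrameInstances<T> {
    data: Vec<T>,
}

impl<T> FrameInstances<T> {
    fn new() -> Self {
        Self { data: Vec::new() }
    }

    /// Append the instances for one draw call and get back its range
    /// into the shared buffer.
    fn push_draw(&mut self, instances: impl IntoIterator<Item = T>) -> Range<u32> {
        let start = self.data.len() as u32;
        self.data.extend(instances);
        start..self.data.len() as u32
    }
}
```

At the end of the frame, `data` is uploaded once as the single instance buffer, and each draw call is issued with its own (weird-looking) index range into it.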
Yes, I implemented lines and points before meshes in our new renderer 😅
spot the difference ;)
(right now it's all about getting parity)
made good progress with the renderer this week :)
Next week we should be able to flip the switch from our prototype renderer based on three-d (otherwise a cool crate!) (left) to our custom wgpu-based renderer (right).
..and then we'll get all the cool stuff! 😉😁