When you say mono RGB cam only, are you saying that a developer can use any RGB camera, or is this something more like ARCore / ARKit pass-through plus clean-up of their point cloud data to build an environment map?
-
-
-
Our core system is modular & can run with any mono RGB camera (or any depth camera if you have one). The SDK we release publicly only supports ARKit (and soon ARCore) devices. More will come; it’s just prioritization right now, as we are a relatively small team
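(For illustration, a minimal Swift sketch of the kind of modular camera abstraction described here; all names, such as `FrameSource` and `ReconstructionCore`, are hypothetical and not part of the real SDK.)

```swift
import CoreVideo

// Hypothetical sketch only: these names are illustrative, not the 6D.ai SDK.
// A camera-source abstraction lets the same reconstruction core run on any
// mono RGB camera, with depth as an optional extra.
protocol FrameSource {
    /// Latest mono RGB image from whatever camera is attached.
    func nextColorFrame() -> CVPixelBuffer?
    /// Optional depth frame; nil for a plain RGB camera.
    func nextDepthFrame() -> CVPixelBuffer?
}

final class ReconstructionCore {
    /// Works unchanged whether frames come from ARKit, ARCore, or a bare webcam.
    func ingest(from source: FrameSource) {
        guard let color = source.nextColorFrame() else { return }
        let depth = source.nextDepthFrame() // may be nil; fall back to mono
        fuse(color: color, depth: depth)
    }
    private func fuse(color: CVPixelBuffer, depth: CVPixelBuffer?) {
        // ... dense mapping would happen here ...
    }
}
```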
-
Oh, and we build our own dense maps based on as-yet-unpublished research from the Oxford Active Vision Lab. We don’t touch the ARKit maps, as we measured ours to perform (much) better. All we take from ARKit or ARCore (or any tracker) is the local pose from the VIO subsystem
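(A minimal Swift sketch of taking only the VIO pose and camera image from ARKit, as described. `DenseMapper` is a hypothetical stand-in for the proprietary mapping module; the ARKit calls shown, `ARSessionDelegate`, `ARFrame.camera.transform`, and `ARFrame.capturedImage`, are real API.)

```swift
import ARKit

// Hypothetical stand-in for the proprietary dense-mapping module;
// ARKit's own (sparse) world map is never read.
final class DenseMapper {
    func integrate(image: CVPixelBuffer, pose: simd_float4x4, intrinsics: simd_float3x3) {
        // ... fuse the frame into the persistent dense map ...
    }
}

final class MappingBridge: NSObject, ARSessionDelegate {
    let mapper = DenseMapper()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pose = frame.camera.transform // 6-DoF camera-to-world pose from VIO
        let image = frame.capturedImage   // mono RGB input for dense mapping
        mapper.integrate(image: image,
                         pose: pose,
                         intrinsics: frame.camera.intrinsics)
    }
}
```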
-
Wouldn't running ARKit/ARCore in parallel with your mapping module limit you to the low-res images the Kit uses?
-
ARKit 2 gives you full resolution, but that only affects the output rendering/camera display, not the mesh or dense map building
-
No, full-res video on iPhone is 4K, which ARKit doesn’t give you.
-
Ok, I stand corrected. Either way, it has no effect on the resolution of our meshing, only the rendering
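(For context, a short Swift sketch of how ARKit 1.5+ exposes capture formats; the chosen format only affects the camera feed, and a mapping pipeline can downscale its own copy of each frame, which is why capture resolution and meshing resolution are independent.)

```swift
import ARKit

// Capture formats in ARKit 1.5+: the selected format sets the camera feed
// (none of the formats are 4K), but meshing can work on its own downscaled
// copy, so capture and meshing resolution stay independent.
let config = ARWorldTrackingConfiguration()
if let best = ARWorldTrackingConfiguration.supportedVideoFormats
    .max(by: { $0.imageResolution.width < $1.imageResolution.width }) {
    config.videoFormat = best
    print("Highest capture resolution:", best.imageResolution) // e.g. 1920x1440
}
```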
End of conversation
New conversation -
-
-
Coming along very nicely, well done all! Love the ball catching in the bowl, that was excellent.
-
Thanks Toby, that was completely unexpected & it impressed me too! So did the bouncing off the headphones. The system is ready for some great realistic-feeling spatial computing apps to be built on it. Can’t wait to see what people come up with
-
Have you seen that recent paper about models that can produce unseen angles of a given shape from limited views? That together with semantic SLAM would fill out your mesh nicely
-
Yes, from Google's DeepMind team (who used to share a floor with my cofounder's lab at Oxford when they were still there ...). It was super impressive, though some way from being runnable on a phone with real image data
End of conversation
New conversation -
-
-
TensorFlow based?
-
We aren’t sharing specifics. Combination of neural nets fused with structure from motion. The depth map & mesh are persistent, not per frame as in many mono-depth-style, CNN-only approaches
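(A toy Swift sketch of that per-frame vs. persistent distinction, using a generic confidence-weighted running average; this is illustrative only, not 6D.ai's actual method.)

```swift
// Toy illustration (generic, not 6D.ai's method) of the distinction above:
// rather than emitting an independent CNN depth map every frame, each
// per-frame estimate is fused into one persistent map via a running
// confidence-weighted average, so depth stabilises and survives across frames.
struct PersistentDepthMap {
    var depth: [Float]   // fused depth per pixel, metres
    var weight: [Float]  // accumulated confidence per pixel
    let width: Int, height: Int

    init(width: Int, height: Int) {
        self.width = width
        self.height = height
        depth = Array(repeating: 0, count: width * height)
        weight = Array(repeating: 0, count: width * height)
    }

    /// Fuse one per-frame estimate (e.g. from a mono-depth net or SfM).
    mutating func fuse(estimate: [Float], confidence: [Float]) {
        for i in 0..<depth.count where confidence[i] > 0 {
            let w = weight[i] + confidence[i]
            depth[i] = (depth[i] * weight[i] + estimate[i] * confidence[i]) / w
            weight[i] = w
        }
    }
}
```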
-
Either way, beautiful work!
-
Thank you. I personally did very little apart from write the tweet :-) The entire 6D team pulled together and built on some amazing research out of my cofounder Victor's lab at Oxford
End of conversation
New conversation -
-
-
Great! The effect is very good! Is there a YouTube HD video? I want to share it with my friends.
-
That was just a quick screen recording from my phone last night. I’ll upload to YouTube later today and share the link
-
Thanks

-
Here’s a YouTube link. Resolution was as captured on the phone: https://youtu.be/EWK2pKInCWU
-
Thank you! Thank you very much! Your ideas are very good! You chose to combine with ARKit instead of re-creating ARKit!
End of conversation