Computer Vision Platform & 3D Data Annotation

Client Compound Eye
Role Product Designer
Tech Computer Vision, Sensor Fusion, Data Annotation
Tools Figma, Blender, Webflow, Photoshop, Illustrator
Company Overview
Compound Eye is building a visual system for machines. Autonomous vehicles and other robots must experience the world in 3D so they can interact with their surroundings. This is also true for people and animals: perceiving a path and identifying obstacles enables all movement.
Today most robots use a combination of cameras and active sensors like lidar and radar, but these robots are easily confused in unstructured environments, like homes, streets, and sidewalks.

Using vision alone, humans perform far better than robots. Dogs, birds, and mice explore the world without lasers or human-level intelligence.

Compound Eye borrows from millions of years of evolution to simulate nature's best calculation and reasoning techniques, enabling robots to understand the world in RGB and 3D using automotive-grade cameras.
Compound Eye combines parallax and semantic cues in a single framework.

Compound Eye has invented techniques for parallax-based depth sensing that use cameras mounted independently on different parts of a machine. We have also invented self-supervised approaches to training neural networks for monocular depth estimation, along with new ways to calibrate cameras online so that robots can operate indefinitely without adjustment, all in real time on embedded hardware.

TL;DR - we point two or more regular cameras at a scene, determine the distance to every point using both parallax and semantic cues, and fuse the results to give accurate depth at every pixel, all in real time.
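To make the TL;DR concrete, here is a minimal sketch, not Compound Eye's actual algorithm: classic triangulation converts stereo disparity to depth (Z = f·B / d for a rectified pair), and a simple confidence-weighted average stands in for fusing the parallax estimate with a monocular (semantic-cue) estimate. All function names and parameters are illustrative assumptions.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair (all hypothetical parameters)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def fuse_depths(z_stereo: float, w_stereo: float, z_mono: float, w_mono: float) -> float:
    """Confidence-weighted average of two per-pixel depth estimates."""
    return (w_stereo * z_stereo + w_mono * z_mono) / (w_stereo + w_mono)

# Example: 700 px focal length, 0.12 m baseline, 20 px disparity
z_parallax = depth_from_disparity(20.0, focal_px=700.0, baseline_m=0.12)  # 4.2 m
z_fused = fuse_depths(z_parallax, w_stereo=0.8, z_mono=4.6, w_mono=0.2)   # 4.28 m
```

In a real system this runs per pixel, and the fusion weights would come from learned confidence maps rather than constants.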

VIDAS Devkit

For OEMs: Dense depth, per-pixel semantic class, and optical flow to power cockpit visualizations, ADAS, and autonomy, operable in both on-road and off-road environments.
Install modular, vehicle-agnostic hardware in less than an hour with a drop-ship assembly kit.
Capture real-time perception data with a live preview through Operator's wireless user interface.
Analyze perception data and test against ground truth using Inspector.
Use the VIDAS™ SDK to write custom software that controls the perception system and consumes real-time perception data over an Ethernet link.
Source - https://www.compoundeye.com/vidas-devkit

3D Data Annotation

"After researching more than 200 commercially available annotation tools, the team found that most were built for sparse 3D datasets. Instead of buying off the shelf, they decided to build a tool to power their state-of-the-art perception platform. But even with this valuable resource, the company’s small team was still constrained by in-house capacity. And they didn’t want to spend time on tedious annotation tasks; they wanted to focus on the company’s mission of building a full 3-D perception solution using cameras. Compound Eye tried to outsource the annotation work to other vendors but, due to poor quality, high costs, and restrictive tooling, decided not to do so." - CloudFactory Case study

Depth / point cloud editing UI

Cuboid creation and annotation UI