Smaller is better. Advances in hardware have made it possible to build a robot small enough to fit inside the patient, freeing up valuable OR space and allowing more flexibility in patient positioning and the range of suitable procedures.
Training and remote operation. Thanks to the proliferation of consumer VR headsets, surgeons can train virtually and operate remotely.
Shrinking down the surgeon. Using VR technology, we are able to offer the surgeon an immersive mixed-reality experience: they view the stereoscopic camera feed from inside the body through controls that feel natural. Instead of removing their hands from the tools to reposition the camera, if the surgeon wants to look left, they just move their head to look left. Instead of using laparoscopic tools to perform the surgery, the surgeon controls the robotic arms with tracked controllers, simply moving their own arms to perform the procedure.
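The head-follows-camera idea above can be sketched roughly as mapping the headset's tracked orientation onto the in-body camera, clamped to the camera's mechanical range. This is an illustrative sketch only; the function name, angle convention, and limits are assumptions, not values from the actual product.

```python
# Hypothetical sketch: map the surgeon's tracked head pose (yaw/pitch in
# degrees, from the VR headset) to target angles for the in-body camera,
# clamped to an assumed mechanical range. Limits are illustrative.

CAMERA_YAW_LIMIT = 60.0    # assumed gimbal range, degrees
CAMERA_PITCH_LIMIT = 40.0

def head_pose_to_camera_angles(head_yaw: float, head_pitch: float) -> tuple[float, float]:
    """Clamp the head pose to the camera's assumed mechanical limits."""
    yaw = max(-CAMERA_YAW_LIMIT, min(CAMERA_YAW_LIMIT, head_yaw))
    pitch = max(-CAMERA_PITCH_LIMIT, min(CAMERA_PITCH_LIMIT, head_pitch))
    return yaw, pitch

# Looking 75 degrees left is clamped to the assumed 60-degree gimbal limit.
print(head_pose_to_camera_angles(-75.0, 10.0))  # (-60.0, 10.0)
```

The clamp matters in practice: the surgeon's head can rotate farther than a small in-body camera ever could, so the mapping has to saturate gracefully rather than fail.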
Enhanced visualization and advanced sensing. Cellphone camera technology has given us remarkably good cameras at a fraction of the size and cost. We offered enhanced visualization through stereo vision, plus a wide field of view not limited to a 2D screen. By integrating computer vision and machine learning from day one, we can extend the surgeon’s abilities and improve patient outcomes. Using external and internal cameras, we can provide a comprehensive map of the patient and operating room, enabling ultra-precise mapping and measurement of the system and patient.
There were existing engineer-created interfaces that provided baseline functionality. I redesigned the existing UI using standard design and interaction patterns and established a design system that enabled engineers and product managers to prototype features without requiring a designer. This allowed me to focus on the critical aspects of product development that only a designer could do.
Validate workflow and features with users. We used the redesigned engineer-created UI to validate the initial feature set and UX direction and to identify areas for ongoing user research.
Create an interface design language. Own and iterate the design system to incorporate the evolving understanding of user needs and usability insights. This was done in collaboration with engineers to ensure feasibility and scalability.
Contribute to user needs and roadmap. Iterate on user needs and UI design to stay ahead of engineering. This let engineers focus on engineering tasks while I validated UX direction and ensured development time was not wasted on features that didn’t address user needs.
Design 3D and 2D interactions and interfaces. With the big picture in mind, along with a product roadmap and an MVP feature set, I was able to focus on interaction design and prototyping.
Placing a camera inside the body through a small incision before the procedure begins allows the surgeon to survey and map the surgical area. Using photogrammetry, we create a map of the patient that informs everything from robotic movement to the metadata displayed to the surgeon. Preliminary use cases included creating, editing, and removing reference markers anchored to the body. Computer vision was used to help identify anatomy and display contextual information to the surgeon. Surgeons regularly put actual tape measures inside the body to take measurements, so we added the ability to measure instantly between dropped reference points or between the two robot arms.
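At its core, the instant-measurement feature described above reduces to a distance between two tracked 3D positions in the patient map. The sketch below is a minimal illustration of that idea, assuming hypothetical names and millimeter coordinates; it is not the product's actual implementation.

```python
import math

# Illustrative sketch: once reference points are anchored in the patient
# map, an in-body measurement is the Euclidean distance between two
# tracked 3D positions -- e.g. two dropped markers, or the two arm tips.

Point = tuple[float, float, float]  # (x, y, z) in millimeters (assumed units)

def measure_mm(a: Point, b: Point) -> float:
    """Straight-line distance between two tracked points, in mm."""
    return math.dist(a, b)

marker = (10.0, 0.0, 5.0)
arm_tip = (13.0, 4.0, 5.0)
print(round(measure_mm(marker, arm_tip), 1))  # 5.0
```

The value of doing this digitally is that the measurement updates continuously as the arms move, with no physical ruler to introduce or retrieve.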
Application modes + contextual UI
By understanding the surgeon’s procedure workflow, we were able to tailor features and UI to the specific phase of the workflow. For example, it was standard practice in laparoscopic procedures to first mark areas of interest with a marker and then pull a flexible ruler into the body to record measurements. By facilitating this digitally, we also removed the need to call out measurements for a nurse or assistant to note down.

Users were initially impressed with how comprehensive the feature set was, but they also felt overwhelmed when learning and using it. This complexity felt heightened to surgeons who were often unfamiliar with VR controls. Breaking tool functionality into modes reduced the perceived complexity, making the system easier to learn and use.

Managing UI elements became an additional burden for users. We used the anatomy map to place UI elements over areas that were not relevant to the current mode. The UI display could also adjust to enhance readability: for example, if a complex, light background was detected, the UI would shift to a darker background color with high opacity.
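The readability adjustment described above can be sketched as sampling the luminance of the camera region behind a UI panel and choosing a panel background that preserves contrast. The thresholds, style values, and function names below are assumptions for illustration, not the shipped behavior; the luminance weights are the standard sRGB coefficients.

```python
# Hedged sketch of the contrast-adjustment idea: estimate the brightness
# of the scene behind a UI panel and pick a panel treatment that keeps
# text readable. Thresholds and style values are illustrative assumptions.

def relative_luminance(r: float, g: float, b: float) -> float:
    """Approximate luminance of an sRGB color (components in 0..1)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def panel_style(avg_rgb: tuple[float, float, float]) -> dict:
    """Choose a panel background based on the average color behind it."""
    if relative_luminance(*avg_rgb) > 0.5:  # bright, busy background
        return {"background": "dark", "opacity": 0.85}
    return {"background": "default", "opacity": 0.6}

print(panel_style((0.9, 0.85, 0.8)))  # bright scene -> dark, high-opacity panel
```

In a real renderer this would run per frame on the downsampled region behind each panel, with hysteresis so the style does not flicker as the camera moves.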