Robotic Surgical System

Client Vicarious Surgical, Boston, MA
Role VR UX Designer
Tech VR / MR, Medical Devices, Industrial Design
Tools Maya, Unity, Photoshop, Illustrator
Project Overview
A platform for the future of surgery. A multi-user and multi-component surgical robotic system controlled primarily with a VR interface.
My Contributions
I created and led the UX strategy in an engineering-driven company. Through user discovery and research, I designed features that reflected user needs. I collaborated with engineering, product management, and marketing to create a development roadmap built around iterative design sprints, identifying opportunities for customer and user input. I introduced a design system that let the team quickly create usable interfaces, removing design as a potential bottleneck. Because of my research and design contributions, we built a product that made surgery accessible to more surgeons and hospitals while improving patient outcomes.

I was initially on the software team reporting to the engineering manager and then to the head of product.
The journey to robotic surgery
Most people don't know much about surgical procedures unless they have undergone one themselves. I'm going to assume that you have a similar level of knowledge as I did when starting this project, which is not much (lucky us). So, let me give you a brief crash course.

Open surgery is what most people imagine when they think of a surgical operation. It involves making a large incision with a scalpel to access the area that needs to be addressed inside the body. The healing process is typically challenging because the body has to recover not only from the procedure itself but also from the act of being opened up. It requires many stitches, leaves a big scar, and carries the risk of infection.

Enter laparoscopic surgery, often referred to as minimally invasive surgery. Thanks to advancements in imaging technology, this type of procedure allows for smaller incisions. Usually, one incision is made for a camera on a stick, and others are made for surgical instruments attached to long sticks. This approach enables surgeons to see inside the body and perform necessary procedures without fully opening it up. However, using laparoscopic tools is like walking on stilts - it requires training and practice, and it's beneficial to already have experience walking.

Finally, robotically assisted surgery was developed to overcome the limitations of minimally invasive surgery and extend the capabilities surgeons have in open surgery. Instead of directly manipulating the instruments, the surgeon uses a computer to control robotic arms and surgical instruments.
Surgical system pain points
Prohibitively expensive. Only major hospitals can afford the cost of one of these robotic systems and have the space to dedicate to it, which acts as a barrier to access.

Not enough training. Operating a robotic surgical system is considerably different from performing a standard laparoscopic procedure. Because these systems are not widely available, most surgeons don't get the opportunity to train on them. For those who do have access and are able to train, the available material is unintuitive, presenting individual features out of context.

Limited applications due to hardware design and user controls. The large robot limits how the patient can be positioned and which areas are accessible. The instruments are limited in their motion because they are essentially laparoscopic instruments attached to robotic arms.

The first surgical robotic systems were introduced nearly 20 years ago. We wanted to build something new using the latest innovations available in hardware design and user control.
Overall strategy
Smaller is better. Advances in hardware have made it possible to build a robot small enough to fit inside the patient, freeing up valuable OR space and allowing more flexibility in patient positioning and suitable procedures.

Training and remote operation. Thanks to the proliferation of consumer VR headsets, surgeons can train virtually and operate remotely.

Shrinking down the surgeon. Using VR technology, we offer the surgeon an immersive VR experience: they view the stereoscopic camera feed from inside the body and use controls that feel natural. Instead of removing their hands from the tools to reposition the camera, if the surgeon wants to look left, they just move their head to look left. Instead of using laparoscopic tools to perform the surgery, the surgeon controls the robotic arms with tracked arm controllers, letting them simply move their arms to perform the procedure.
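A minimal sketch of those two mappings, assuming hypothetical pose values and an illustrative motion-scaling factor (this is illustrative Python, not the actual control code): headset orientation re-aims the internal camera, and tracked controller positions set the instrument-tip targets.

    from dataclasses import dataclass

    @dataclass
    class Pose:
        yaw: float = 0.0    # degrees, positive = left
        pitch: float = 0.0  # degrees, positive = up
        x: float = 0.0      # position in mm
        y: float = 0.0
        z: float = 0.0

    MOTION_SCALE = 0.5  # illustrative: halve hand travel for finer instrument motion

    def camera_target(headset: Pose) -> tuple[float, float]:
        # Head motion alone re-aims the internal camera, so the surgeon
        # never takes their hands off the instruments to look around.
        return (headset.yaw, headset.pitch)

    def arm_target(controller: Pose) -> tuple[float, float, float]:
        # Tracked controller position maps directly to an instrument-tip
        # target, scaled so natural arm motion produces fine instrument motion.
        return (controller.x * MOTION_SCALE,
                controller.y * MOTION_SCALE,
                controller.z * MOTION_SCALE)

    # Looking 15 degrees left pans the camera 15 degrees left:
    print(camera_target(Pose(yaw=15.0)))  # (15.0, 0.0)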

Enhanced visualization and advanced sensing. Cellphone camera technology has brought us insanely good cameras at a fraction of the size and cost. We offered enhanced visualization through stereo vision, along with a wide field of view not limited to a 2D screen. By integrating computer vision and machine learning from day one, we can extend the surgeon's abilities and improve patient outcomes. Using external and internal cameras, we can provide a comprehensive map of the patient and operating room, enabling ultra-precise mapping and measurement of the system and patient.
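For intuition on why stereo vision yields real depth and measurement data, here is the standard pinhole disparity relation sketched in Python; the focal length, baseline, and disparity numbers are illustrative, not the device's actual optics.

    def depth_mm(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
        # Standard stereo relation: depth = focal length * baseline / disparity.
        # A feature that shifts less between the two views is farther away.
        return focal_px * baseline_mm / disparity_px

    # With an 800 px focal length and a 6 mm baseline, a feature with
    # 40 px of disparity sits about 120 mm from the cameras:
    print(depth_mm(focal_px=800.0, baseline_mm=6.0, disparity_px=40.0))  # 120.0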
Design strategy
There were existing engineer-created interfaces that enabled baseline functionality. I redesigned the existing UI using standard design and interaction patterns and established a design system that enabled engineers and product managers to prototype features without needing a designer. This freed me to focus on the critical aspects of product development that only a designer could do.

Validate workflow and features with users. We used the redesigned engineer-created UI to validate the initial feature set and UX direction and to identify areas for ongoing user research.

Create an interface design language. Own and iterate the design system to incorporate the evolving understanding of user needs and usability insights. This was done in collaboration with engineers to ensure feasibility and scalability.

Contribute to user needs and roadmap. Iterate on user needs and UI design to stay ahead of engineering. This let engineers focus on engineering tasks while I validated the UX direction and ensured development time was not wasted on features that didn't address users' needs.

Design 3D and 2D interactions and interfaces. With the big picture in mind (a product roadmap and an MVP feature set), I was able to focus on interaction design and prototyping.
Advanced sensing
Placing a camera inside the body through a small incision before the procedure begins allows the surgeon to survey and map the surgical area. Using photogrammetry, we create a map of the patient that informs everything from robotic movement to the metadata displayed to the surgeon. Preliminary use cases included creating, editing, and removing reference markers anchored to the body. Computer vision helped identify anatomy and display contextual information to the surgeon. Surgeons regularly put actual tape measures inside the body to take measurements, so we added the ability to measure instantly between dropped reference points or between the two robot arms.
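Once reference points are anchored in the patient map's 3D coordinate frame, the digital tape measure is just the distance between two points. A minimal sketch with made-up coordinates (the production photogrammetry pipeline is, of course, more involved):

    import math

    def measure_mm(point_a: tuple[float, float, float],
                   point_b: tuple[float, float, float]) -> float:
        # Straight-line distance between two anchored reference points,
        # replacing a physical tape measure inside the body.
        return math.dist(point_a, point_b)

    # Two markers dropped on the anatomy map (coordinates in mm):
    marker_a = (12.0, 4.5, 30.0)
    marker_b = (27.0, 6.5, 42.0)
    print(f"{measure_mm(marker_a, marker_b):.1f} mm")  # 19.3 mm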
Application modes + contextual UI
By understanding the surgeon's procedure workflow, we were able to tailor features and UI to the specific phase of the workflow. For example, it was standard practice in laparoscopic procedures to first mark areas of interest with a marker, then pull a flexible ruler into the body to record measurements. By facilitating this digitally, we also removed the need to yell out measurements for a nurse or assistant to note down.

Users were initially impressed with how comprehensive the feature set was, but they also felt overwhelmed when learning and using it. This complexity felt heightened to surgeons, who were often unfamiliar with VR controls. By breaking tool functionality down into modes, the perceived complexity was reduced, making the system easier to learn and use.

Managing UI elements became an additional burden on users. We used the anatomy map to place UI elements over areas that were not relevant to the current mode. UI display could also adjust to enhance readability: for example, if a complex, light background was detected, the UI would switch to a darker background color with high opacity.
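That readability adjustment can be sketched as a simple heuristic; the threshold and style values below are illustrative rather than the shipped implementation. Sample the camera pixels behind a UI panel, estimate their perceived luminance, and switch to a dark, high-opacity background when the scene behind the panel is bright.

    def relative_luminance(rgb: tuple[float, float, float]) -> float:
        # Perceived brightness of an RGB color in [0, 1], Rec. 709 weights.
        r, g, b = rgb
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def panel_style(pixels_behind_panel: list[tuple[float, float, float]]) -> dict:
        # Average the luminance of the camera pixels the panel would cover.
        mean_luma = sum(relative_luminance(p) for p in pixels_behind_panel) / len(pixels_behind_panel)
        if mean_luma > 0.6:  # bright tissue behind the panel hurts readability
            return {"background": "dark", "opacity": 0.9}
        return {"background": "default", "opacity": 0.6}

    # A bright background triggers the dark, high-opacity panel:
    print(panel_style([(0.9, 0.8, 0.8), (0.85, 0.75, 0.7)]))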