Seeing what we can’t see today
In the future, surgery will combine autonomous robotics with traditional medical treatment
Activ Surgical is on a mission to revolutionize surgery by empowering surgeons with real-time intelligence and visualization to achieve the best possible outcomes and save lives. Founded in 2017 and based in Boston, the digital surgery pioneer aims to provide surgeons with real-time, objective physiologic information during critical procedures. This will help practitioners avoid medical complications that kill about 400,000 people in the U.S. every year.
Activ Surgical’s CTO Tom Calef is a renowned medical roboticist and engineering leader who has delivered breakthrough innovation in surgical robotics. In this interview, he talks about the future of surgical robotics, Activ’s innovations, and the role of light.
Activ Surgical stands for the approach that the surgical future is collaborative. Please explain!
We aim to raise the awareness of both surgeons and robotic systems. Both are intelligent in terms of knowing what to do in certain situations and when to pass control back and forth – whether that’s a surgeon passing control to a robot, or a system intelligently instructing a surgeon on what the next steps of a procedure may be. This seamless collaboration is our ultimate mission at Activ Surgical.
Right now, surgeons use technology or systems as tools, but do not really interact with these systems. We enable systems to leverage data to provide real-time feedback to surgeons, not only for the immediate task at hand, but in and around a task, such as how best to preserve an organ. Our ActivEdge platform will show blood flow and tissue perfusion so that a surgeon can see if tissue is healthy.
Does this mean that – in the end – intelligence will replace surgeons?
The wisdom, judgement and instincts of a human surgeon cannot be replaced. We want to extend the capability of highly trained professionals through the use of artificial intelligence (AI) and machine learning. There is still an art to surgery. The ability of humans to react and use their deductive reasoning is critical to surgical outcomes and, in my opinion, that will not change.
AI has an important role to play in presenting options during decision-making. We should promote the surgeon to a strategic and supervisory role such that they can perform more procedures more effectively, account for their ergonomics, and have a longer career span. We currently put highly trained humans into very poor ergonomic positions, and we have to make that better.
AI offers profound support on various levels: AI could support humans performing surgery remotely. That is where things get really exciting. What a system is doing actively is making sure that the right commands are being pushed down to the right instruments. An example would be detecting when a polyp is worth extracting during a diagnostic screening for breast or colon cancer. Local decisions will be made utilizing AI, but surgeons will give their input to make sure that the systems and robots learn how to do surgery the right way.
Might we encounter special situations in which fully autonomous robotic machines will perform surgery without human intervention?
Yes, it is a reality, and it is the technology that we were founded on. There are scenarios where there is a growing need for autonomous surgery, including remote or dangerous environments where access to the best care is limited or unavailable. That being said, all of the intelligence that goes into the autonomous robot will be garnered from human experience. It will be taught based on how humans perform procedures, and it will take tens of thousands of cases to do that in a meaningful and safe way. For the patient, this means access to the best possible care no matter where you are in the world.
What innovations is Activ Surgical working on in the surgical vision category?
Advanced imaging is really primed to accelerate technical innovation over the next five to 10 years. A number of clinical studies have shown that utilizing wavelengths outside of the visible spectrum, fluorescence, or dyes gives surgeons another data point that improves confidence, situational awareness, and overall patient outcomes. In the case of laparoscopic cholecystectomy, for instance, the utilization of advanced visualization showed a dramatic decrease in the bile duct injury rate. As we look to future innovations, surgical videos are really dense data that can be used to empower AI and machine learning to pick up micro-trends. For instance, we are working on enabling tissue identification and tissue characterization for cancer margin detection. The big question is: How do we pick up the edges of where tissue transforms from healthy to diseased such that we can guide dissection? Advanced visualization can really help there.
What role does light play to improve visualization, especially as sensor improvements help to detect what cannot be seen with the naked eye?
Light above and below the visible spectrum is really important because you start to see incredible phenomena happening below the tissue surface, particularly as you get into the longer wavelengths such as IR, NIR or SWIR. We visualize blood flow with some of these wavelengths. There is also an interesting development where you look at the response of very discrete wavelengths even in visible light. Taking that spectrum and powering it with deep learning allows us to identify changes in tissue. Standard spectroscopic techniques that have been used in pharmaceutical industries, or hyperspectral imaging that has been used in satellite imagery for decades, have a place in surgery, and we believe that is what will be driving surgical visualization in the next decade. It is the ability to provide a signal. That is what light means in our field: the injection of energy so that we can sense it, backed up by deep learning that turns it into a clinical insight.
On a more personal note: How did you get to know Activ Surgical and what did you think the first time you heard about them?
I was employee number two at Activ Surgical. I joined in 2017 and worked alongside Peter Kim and Michael Ruehlman to create the brand and establish our headquarters in Boston. When I was first approached with the opportunity to join this VC-founded company, I was a massive proponent of advanced visualization, convinced that visualization could really improve patient outcomes in any scenario. I think robotics, intelligence, and sensing will fuel innovation in the med device industry significantly. There is already a massive uptake in standard robotics use. As we really show the clinical benefit of intelligence, we are going to accelerate that trajectory even further.
Your background is in computer engineering and mechanical engineering. In what way is an interdisciplinary background helpful to advance innovation?
I have been building competitive robots, and have been a mechanical and software nerd, since I was nine years old. I have always tried to figure out how things interplay and work together. That is what robotics is: an integration activity between electrical, software, computer, optics, and mechanical engineering. In my mind, you have to be able to cross into other disciplines. There are always so many ways to solve a problem. If you don’t have empathy for other disciplines, you will not necessarily come to an optimized solution. In the end, it takes a team to make all of this happen. My strength is working with the team to come up with a product that stands the test of users over time.