Research - alberto-rota/dVRK GitHub Wiki

NEARLab

Current Research Work

This guide was made and is maintained by Alberto Rota. Contributions, issues, and corrections are welcome at Alberto's email.


3D surgical scene reconstruction

Reconstructing a 3D space from 2D images is a task our brain copes with remarkably well; for a computer, however, it is not easy at all. By analyzing the disparities between the two endoscopic images (from the left and right cameras), we reconstruct the 3D intraoperative space in real time. This is something that standard surgical robots do not provide: the relative position of the instruments and the organs is undefined. With the 3D map, we can measure how close the instruments are to the delicate structures of the surgical environment, and therefore give feedback to the surgeon whenever they get too close to sensitive tissues.
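The core geometric step behind this reconstruction can be sketched with the standard pinhole stereo model, where depth is triangulated from the per-pixel disparity between the two images. The calibration values below (focal length, camera baseline) are illustrative assumptions, not the actual endoscope parameters:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate per-pixel depth (metres) from stereo disparity (pixels).

    Uses the pinhole stereo relation Z = f * B / d. Pixels with zero or
    negative disparity are marked as infinitely far (no match found).
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Illustrative numbers: 1000 px focal length, 5 mm stereo baseline
d = np.array([50.0, 100.0, 0.0])
z = depth_from_disparity(d, focal_px=1000.0, baseline_m=0.005)
# Larger disparity -> closer point; zero disparity -> depth is infinite
```

In a real pipeline the disparity map would come from a stereo-matching algorithm run on the rectified endoscopic pair; the triangulation step itself is what turns that map into the metric 3D scene used for proximity feedback.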

Enhanced adaptive training

Training a surgeon to use a complex surgical system like the daVinci is crucial for effective and safe procedures. We recreated the daVinci robot in a Virtual Reality environment and stream the camera signals from the virtual environment into the oculars of the real console: this way, the operator feels completely immersed in the virtual surgical scenario. In the virtual environment, we created and customized many surgical tasks that mimic real operations and are designed to target specific key surgical skills. From the surgical errors computed on the virtual objects, we then drive the motors on the manipulators so that they apply forces and torques to the operator's hands and wrists, correcting the errors and providing guidance.
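A common way to turn a tracking error into a corrective force is a saturated spring-like law: the force pulls the hand back toward the reference, and is capped for safety. This is a minimal sketch of that idea; the gain and force limit below are illustrative values, not the ones used on the real console:

```python
import numpy as np

def guidance_force(tool_pos, reference_pos, k=50.0, f_max=5.0):
    """Spring-like corrective force (N) pulling the tool toward the
    reference point, saturated at f_max for safety.

    k (N/m) and f_max (N) are illustrative gains, not calibrated values.
    """
    error = np.asarray(reference_pos, dtype=float) - np.asarray(tool_pos, dtype=float)
    force = k * error
    norm = np.linalg.norm(force)
    if norm > f_max:
        force *= f_max / norm  # clamp magnitude, keep direction
    return force

# Example: tool has drifted 1 cm in +x from the reference trajectory point
f = guidance_force([0.01, 0.0, 0.0], [0.0, 0.0, 0.0])
# f points in -x, nudging the operator's hand back toward the reference
```

The saturation step matters in haptics: an unbounded spring force on a large error would jerk the operator's hand, so the magnitude is clamped while the corrective direction is preserved.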

Surgical task autonomization with Reinforcement Learning

Robots like the daVinci were designed to work entirely in teleoperation mode: the robot never moves without an input or command from the surgeon. However, we are moving in the direction of semi-automating some simple surgical tasks, like pick-and-place. We do this with Reinforcement Learning: with this approach, the robot (in a virtual environment, for safety purposes) initially moves randomly until it reaches its target; as training progresses, its movements become less random and more oriented toward the correct execution. This gradual, learning-based approach lets the robot reach the target independently of the target's position.
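The "random at first, goal-directed later" behaviour described above is exactly what epsilon-greedy tabular Q-learning produces. The toy problem below (an agent on a 1-D grid learning to reach a target state) is a hypothetical sketch of the principle, not the actual training setup, which would use a full simulated robot and a richer state space:

```python
import random

def train_reach(target=8, n_states=10, episodes=300, eps=0.2,
                alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning sketch: an agent on a 1-D grid learns to reach
    `target`. Exploration (eps) keeps early moves random; as Q-values
    converge, greedy moves head toward the target. Parameters are
    illustrative, not tuned values.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(50):  # step budget per episode
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = int(q[s][1] >= q[s][0])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == target else -0.01  # reward only at the target
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == target:
                break
    return q

q = train_reach()
# Greedy policy: 1 means "move right"; states left of the target learn it
policy = [int(qa[1] >= qa[0]) for qa in q]
```

The same loop structure carries over to the surgical setting: random exploration in simulation, a reward tied to correct task execution, and a policy that gradually replaces randomness with goal-directed motion regardless of where the target is placed.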
