
Advancements in Cancer Surgery with SENSEI Probe

New tool aids surgeons in detecting cancerous tissue during procedures.



Figure: New tool for cancer surgery, improving detection of cancerous tissues during procedures.

Cancer surgery is a mainstay of cancer treatment, but finding all the cancerous tissue during the procedure is difficult. Even with advanced preoperative imaging such as PET and CT scans, surgeons often still rely on sight and touch during the operation itself, because there are few reliable tools for seeing what is happening inside the body in real time.

To tackle this issue, a probe called 'SENSEI' has been developed. It helps detect cancerous tissue during surgery by picking up signals from a radiotracer injected beforehand. One major problem remains, however: the probe gives no visible indication of where on the tissue surface it is detecting gamma activity, making it difficult for surgeons to pinpoint the exact location.

Initial attempts to solve this problem with segmentation and geometric methods failed. Instead, it turned out that combining high-dimensional image features with the probe's position led to better results. A simple regression network was designed to resolve the issue, and testing showed that it works well. To further validate the solution, two datasets were created and publicly released, allowing researchers and surgeons to improve detection of the probe's sensing area during surgery.

The Challenges of Cancer Surgery

Cancer remains a major health problem across the globe; in the UK, someone is diagnosed with cancer every two minutes. Surgery is often a primary treatment option, but identifying cancerous tissue during a procedure is very challenging. Current imaging tools, while helpful, still leave surgeons without the complete picture, so they may leave some cancer behind or mistakenly remove healthy tissue, harming patients and increasing costs.

To improve the situation, better visualization tools are needed for minimally invasive surgery. This kind of surgery aims to reduce the impact on the patient while achieving the same results as open surgery. However, the lack of accurate tools for viewing tissues in real-time complicates this goal.

The SENSEI Probe

The recent 'SENSEI' probe, developed by a medical company, offers a way to identify cancer accurately during surgery. It locates cancerous tissue from the gamma signals emitted by an injected radiotracer. But the challenge remains: the probe leaves no visual mark on the tissue, which complicates its use. The sensing area, where the probe detects signals, must be accurately identified on the tissue surface.

Geometrically, this sensing area is defined as the intersection point between the probe axis and the tissue. Traditional methods struggle here, because tissue surfaces lack distinctive texture, reliable depth data is hard to obtain, and tracking the probe's position during surgery is itself complicated.
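
To make the geometry concrete, here is a minimal sketch of the idealized problem: intersecting a 3D ray (the probe axis) with a locally planar tissue patch. The function and scenario are illustrative assumptions, not the paper's method; in practice neither the tissue surface nor the probe pose is known this cleanly, which is exactly why the purely geometric route fails.

```python
import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the probe axis (a 3D ray) with a locally planar tissue patch.

    Returns the 3D intersection point, or None if the ray is parallel to the
    plane or points away from it. Idealized geometry only: real tissue is not
    a plane, and the probe pose is not known this precisely.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:   # ray parallel to the tissue plane
        return None
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:               # intersection behind the probe tip
        return None
    return ray_origin + t * ray_dir

# Example: probe tip at the origin, pointing along +z, tissue plane at z = 5
point = ray_plane_intersection(
    np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
    np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, 1.0]))
print(point)  # -> [0. 0. 5.]
```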

Innovative Solutions

To address these challenges, the researchers modified a non-functioning SENSEI probe by adding a laser module, which marks the sensing area in laparoscopic images by shining a spot onto the tissue surface. The complete setup includes a stereo laparoscope system for capturing images, a rotation stage for moving the phantom, a light-control shutter, and the laser module.

This setup recasts sensing-area identification from a geometric challenge into a problem of content inference in 2D images. The task remains complex, since the method must ultimately find the intersection point without the aid of the laser, simulating how the SENSEI probe is actually used during surgery.

Related Research

Laparoscopic images are central to computer-assisted surgery and have been used in tasks such as object detection and image segmentation. Recent advances have been made in depth estimation, but obtaining accurate depth data for laparoscopic images is hard, which complicates model training.

Research has also focused on laparoscopic segmentation, which helps identify instruments and anatomical structures. Various deep learning approaches have shown promise, yet the lack of accurate depth information hinders progress.

Data Collection and Novel Datasets

A new dataset named 'Jerry' was captured using miniaturized cameras in a custom-designed, portable stereo laparoscope system. It contains many images recorded with the modified SENSEI probe, both with and without the laser. A second dataset, 'Coffbee', provides additional ground-truth data.

These datasets offer multiple uses, including detecting the intersection point, estimating depth, and segmenting tools. Detecting the intersection point is particularly vital for accurate cancer visualization and is often overlooked in the field of surgical vision.
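
As a sketch of how such paired data might be consumed, the loader below assumes a hypothetical laser_on/laser_off directory layout with matching filenames; the actual structure of the released 'Jerry' and 'Coffbee' datasets should be checked in the project repository (github.com/br0202/Sensing_area_detection).

```python
from pathlib import Path
import cv2  # OpenCV

def load_image_pairs(root):
    """Load paired laser-on / laser-off frames.

    The directory names used here are an assumption for illustration,
    not the documented layout of the released datasets.
    """
    root = Path(root)
    pairs = []
    for on_path in sorted((root / "laser_on").glob("*.png")):
        off_path = root / "laser_off" / on_path.name
        if off_path.exists():
            pairs.append((cv2.imread(str(on_path)),
                          cv2.imread(str(off_path))))
    return pairs
```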

Intersection Point Detection

Detecting the intersection point is straightforward when the laser is on, since segmentation networks can locate the spot easily. In real use, however, the gamma probe leaves no visible mark on the tissue. Alternative approaches have been tried, but they often bring complications such as sterilization concerns for the tools involved.
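
When the laser is visible, ground truth can be recovered from the image itself. The snippet below is a deliberately naive stand-in for the segmentation networks used in the study: it thresholds the brightest pixels and takes their centroid. The threshold value is a guess and would need tuning for any particular setup.

```python
import cv2
import numpy as np

def laser_spot_centroid(image_bgr, thresh=240):
    """Estimate the laser-spot location in a laser-on frame.

    Thresholds the brightest pixels and returns their centroid as
    (u, v) pixel coordinates, or None if no pixel passes the threshold.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    ys, xs = np.nonzero(gray >= thresh)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```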

This study proposes a simple regression approach to solve this issue, relying on 2D image information alone. This method works well without the laser guidance after being trained and allows for real-time sensing area mapping during surgery.

Methodology for Intersection Detection

The researchers initially tried various deep learning segmentation networks, but on images without the laser these networks could not make accurate predictions: the laser spot is what provides the key cue for locating the intersection point.

A more effective approach was to treat the problem as a regression task. The system consists of two main parts: extracting visual features from the image and learning from the sequence of principal points along the probe's axis. The two types of data were combined to predict where the probe intersects with the tissue surface.
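
A minimal sketch of this two-branch idea is shown below, assuming a ResNet-18 backbone and illustrative layer sizes; the paper's exact architecture and hyperparameters may differ.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SensingAreaRegressor(nn.Module):
    """Two-branch regression sketch: a CNN backbone encodes the image,
    an MLP encodes principal points sampled along the probe axis, and a
    fused head regresses the 2D intersection point. Layer sizes are
    illustrative, not the paper's exact design."""

    def __init__(self, num_axis_points=10):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # 512-d image feature
        self.backbone = backbone
        self.axis_mlp = nn.Sequential(         # encode (u, v) axis points
            nn.Linear(num_axis_points * 2, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU())
        self.head = nn.Sequential(             # fuse and regress (u, v)
            nn.Linear(512 + 128, 256), nn.ReLU(),
            nn.Linear(256, 2))

    def forward(self, image, axis_points):
        img_feat = self.backbone(image)                    # (B, 512)
        axis_feat = self.axis_mlp(axis_points.flatten(1))  # (B, 128)
        return self.head(torch.cat([img_feat, axis_feat], dim=1))

model = SensingAreaRegressor()
pred = model(torch.randn(1, 3, 224, 224), torch.randn(1, 10, 2))
print(pred.shape)  # torch.Size([1, 2])
```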

The network was trained on image pairs, one with the laser and one without. The method uses Principal Component Analysis to estimate the probe axis, which improves the predictions.
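
The sketch below shows one way such axis points could be derived, assuming a binary mask of the probe is available: PCA (computed here via SVD) on the mask's pixel coordinates yields the probe's dominant direction, and evenly spaced points along that direction form the input sequence. This is a simplified reading of the paper's use of PCA, not its exact procedure.

```python
import numpy as np

def probe_axis_points(probe_mask, num_points=10):
    """Sample principal points along the probe axis from a binary mask.

    PCA on the mask's pixel coordinates gives the probe's dominant
    direction; evenly spaced points along it serve as the axis
    sequence fed to the regression network.
    """
    ys, xs = np.nonzero(probe_mask)
    if len(xs) == 0:
        raise ValueError("empty probe mask")
    coords = np.stack([xs, ys], axis=1).astype(float)
    mean = coords.mean(axis=0)
    centered = coords - mean
    # First right-singular vector = first principal component
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    proj = centered @ direction                # scalar position on the axis
    ts = np.linspace(proj.min(), proj.max(), num_points)
    return mean + ts[:, None] * direction      # (num_points, 2) pixels
```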

Evaluation and Results

To evaluate the accuracy of the predicted sensing-area location, metrics such as Euclidean distance were used to compare the predicted points with the actual intersections found using the laser. The results showed that the approach works well. Different network designs were tested, and the combination of a ResNet backbone with a multi-layer perceptron yielded the best results.
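
For reference, the metric itself is simple. A small sketch, assuming predictions and laser-derived ground truth as (N, 2) arrays of pixel coordinates:

```python
import numpy as np

def mean_euclidean_error(pred, gt):
    """Mean Euclidean distance (in pixels) between predicted and
    ground-truth intersection points; pred and gt are (N, 2) arrays."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

print(mean_euclidean_error(np.array([[10.0, 10.0]]),
                           np.array([[13.0, 14.0]])))  # -> 5.0
```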

The findings indicate that stereo images performed better than single images due to the added depth information. Additionally, there were noticeable differences in performance based on the type of network used. Overall, the proposed method achieved good prediction accuracy and efficient real-time processing.

Conclusion

This work presents a new framework for using a laparoscopic gamma detector during minimally invasive cancer surgery. With the added laser module, the researchers were able to guide the training and successfully detect where the probe meets the tissue. The released datasets and the new approach establish a benchmark in surgical vision, promising to improve outcomes in cancer surgery.

Continued efforts in this area are crucial to develop even better tools and methods for identifying cancer during surgery. Improved visualization will lead to better surgical practices, ensuring that all cancerous tissues are treated while protecting healthy tissues, ultimately enhancing patient care.

Original Source

Title: Detecting the Sensing Area of A Laparoscopic Probe in Minimally Invasive Cancer Surgery

Abstract: In surgical oncology, it is challenging for surgeons to identify lymph nodes and completely resect cancer even with pre-operative imaging systems like PET and CT, because of the lack of reliable intraoperative visualization tools. Endoscopic radio-guided cancer detection and resection has recently been evaluated whereby a novel tethered laparoscopic gamma detector is used to localize a preoperatively injected radiotracer. This can both enhance the endoscopic imaging and complement preoperative nuclear imaging data. However, gamma activity visualization is challenging to present to the operator because the probe is non-imaging and it does not visibly indicate the activity origination on the tissue surface. Initial failed attempts used segmentation or geometric methods, but led to the discovery that it could be resolved by leveraging high-dimensional image features and probe position information. To demonstrate the effectiveness of this solution, we designed and implemented a simple regression network that successfully addressed the problem. To further validate the proposed solution, we acquired and publicly released two datasets captured using a custom-designed, portable stereo laparoscope system. Through intensive experimentation, we demonstrated that our method can successfully and effectively detect the sensing area, establishing a new performance benchmark. Code and data are available at https://github.com/br0202/Sensing_area_detection.git

Authors: Baoru Huang, Yicheng Hu, Anh Nguyen, Stamatia Giannarou, Daniel S. Elson

Last Update: 2023-07-07

Language: English

Source URL: https://arxiv.org/abs/2307.03662

Source PDF: https://arxiv.org/pdf/2307.03662

Licence: https://creativecommons.org/licenses/by/4.0/

Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.

Thank you to arXiv for use of its open access interoperability.
