Book Title | Computer-Assisted and Robotic Endoscopy
Subtitle | Third International
Editors | Terry Peters,Guang-Zhong Yang,Jonathan McLeod
Video | http://file.papertrans.cn/235/234468/234468.mp4
Overview | Includes supplementary material
Series | Lecture Notes in Computer Science
Description | This book constitutes the thoroughly refereed post-conference proceedings of the Third International Workshop on Computer Assisted and Robotic Endoscopy, CARE 2016, held in conjunction with MICCAI 2016 in Athens, Greece, in October 2016. The 11 revised full papers were carefully selected from 13 initial submissions. The papers are organized in topical sections such as computer vision, graphics, robotics, medical imaging, external tracking systems, medical device control systems, information processing techniques, and endoscopy planning and simulation.
Publication Date | Conference proceedings 2017
Keywords | augmented reality; automated diagnosis; computer vision; medical imaging; surgical tracking and navigation
Edition | 1
DOI | https://doi.org/10.1007/978-3-319-54057-3
ISBN (softcover) | 978-3-319-54056-6
ISBN (eBook) | 978-3-319-54057-3
Series ISSN | 0302-9743
Series E-ISSN | 1611-3349
Copyright | Springer International Publishing AG 2017
1 |
Front Matter |
|
|
|
2 |
Transfer Learning for Colonic Polyp Classification Using Off-the-Shelf CNN Features |
Eduardo Ribeiro,Andreas Uhl,Georg Wimmer,Michael Häfner |
|
Abstract
Recently, great progress in image recognition has been achieved, driven in particular by the availability of large annotated databases and the application of deep learning to these data. Convolutional neural networks (CNNs) enable the extraction of highly representative features across the network layers, filtering and selecting these features and using them in the last fully connected layers for pattern classification. However, CNN training for automatic medical image classification remains a challenge due to the lack of large, publicly available annotated databases. In this work, we evaluate and analyze the use of CNNs as general feature descriptors, applying transfer learning to generate “off-the-shelf” CNN features for the colonic polyp classification task. The good results obtained with off-the-shelf CNN features on many different databases suggest that features learned by CNNs from natural images can be highly relevant for colonic polyp classification.
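The “off-the-shelf” idea — freeze a network trained on natural images and reuse an intermediate layer's activations as a generic descriptor for a new task — can be sketched without any deep-learning framework. In this illustrative sketch a fixed random projection stands in for the frozen pretrained layers, and a nearest-class-mean rule stands in for the final classifier (the paper uses real CNN features and an SVM; all names and data here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained CNN: a fixed nonlinear projection.
# In practice this would be the penultimate-layer activations of a
# network trained on natural images (the "off-the-shelf" features).
W = rng.standard_normal((64, 16))          # fixed, never re-trained

def extract_features(images):
    """Map raw inputs (n, 64) to 16-D features (ReLU of a frozen layer)."""
    return np.maximum(images @ W, 0.0)

# Toy two-class "polyp" data: classes differ only in the raw-input mean.
X0 = rng.standard_normal((50, 64)) + 0.5
X1 = rng.standard_normal((50, 64)) - 0.5
F0, F1 = extract_features(X0), extract_features(X1)

# Only this tiny classifier is trained on the medical data; the
# feature extractor itself is never touched (that is the transfer).
m0, m1 = F0.mean(axis=0), F1.mean(axis=0)

def classify(images):
    F = extract_features(images)
    d0 = np.linalg.norm(F - m0, axis=1)
    d1 = np.linalg.norm(F - m1, axis=1)
    return (d1 < d0).astype(int)

train_acc = (classify(np.vstack([X0, X1])) ==
             np.array([0] * 50 + [1] * 50)).mean()
```

The point of the sketch is the division of labour: the expensive representation is learned once on abundant natural images, and only a light classifier is fitted to the scarce annotated medical data.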
|
3 |
Probe Tracking and Its Application in Automatic Acquisition Using a Trans-Esophageal Ultrasound Robot |
Shuangyi Wang,Davinder Singh,David Lau,Kiran Reddy,Kaspar Althoefer,Kawal Rhode,Richard J. Housden |
|
Abstract
Robotic trans-esophageal echocardiography (TEE) has many advantages over the traditional manual control approach during cardiac surgical procedures in terms of stability, remote operation, and radiation safety. To further improve the usability of the robotic approach, we propose an intelligent system based on automatic acquisition of ultrasound images. This is addressed using a view-planning platform in which the robot is controlled according to a pre-planned path during the acquisition. Given the real mechanical movement, feedback of the probe position is essential to ensure the success of the automatic acquisition. In this paper, we present a tracking method combining an electromagnetic (EM) tracking system with image-based registration for the purpose of feedback control during the automatic acquisition. Phantom experiments were performed to evaluate the accuracy and reliability of the tracking and the automatic acquisition. The results indicate reliable performance of the tracking method. For the automatic acquisition, the mean positioning error in the near field of the ultrasound image, where most structures of clinical interest are located, is 10.44 mm.
|
4 |
Hybrid Tracking and Matching Algorithm for Mosaicking Multiple Surgical Views |
Chisato Takada,Toshiyuki Suzuki,Ahmed Afifi,Toshiya Nakaguchi |
|
Abstract
In recent years, laparoscopic surgery has become a mainstream procedure owing to its several advantages for patients. However, the narrow surgical field of view is a disadvantage for operators. To address this problem, our group proposed a camera-retractable trocar that can provide multiple surgical viewpoints while maintaining minimal invasiveness. The purpose of this study is to obtain a wide panoramic view by mosaicking the videos from the camera-retractable trocar viewpoints. We track feature points across the different videos to generate a panoramic video independent of inter-camera overlap and to increase mosaicking speed and robustness. We evaluate tracking accuracy under several conditions and mosaicking accuracy as a function of overlap size. In contrast to the conventional mosaicking approach, the proposed approach can produce a panoramic image even with 0% inter-camera overlap. Additionally, the proposed approach is fast enough for clinical use.
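The geometric core of image mosaicking is estimating, from tracked point correspondences, the planar homography that maps one camera's pixels into the panorama frame. A minimal sketch of the standard Direct Linear Transform (DLT) estimator follows; the point coordinates and the "true" homography are made up for the self-check and are not from the paper:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: find the 3x3 homography H with dst ~ H @ src.
    src, dst: (n, 2) arrays of corresponding points, n >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with smallest
    # singular value (the null-space direction).
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply H to (n, 2) points, including homogeneous normalisation."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Self-check: recover a known (made-up) homography from noiseless
# tracked correspondences.
H_true = np.array([[1.1, 0.02, 30.0],
                   [-0.01, 0.95, -12.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [320, 0], [320, 240],
                [0, 240], [160, 120], [50, 200]], float)
dst = warp_points(H_true, src)
H_est = estimate_homography(src, dst)
err = np.abs(warp_points(H_est, src) - dst).max()
```

With noiseless correspondences the recovery is exact up to numerical precision; in a real mosaicking pipeline the same estimator would be wrapped in RANSAC to reject mistracked points.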
|
5 |
Assessment of Electromagnetic Tracking Accuracy for Endoscopic Ultrasound |
Ester Bonmati,Yipeng Hu,Kurinchi Gurusamy,Brian Davidson,Stephen P. Pereira,Matthew J. Clarkson,Dean Barratt |
|
Abstract
Endoscopic ultrasound (EUS) is a minimally-invasive imaging technique that can be technically difficult to perform due to the small field of view and uncertainty in the endoscope position. Electromagnetic (EM) tracking is emerging as an important technology in guiding endoscopic interventions and for training in endotherapy by providing information on endoscope location by fusion with pre-operative images. However, the accuracy of EM tracking could be compromised by the endoscopic ultrasound transducer. In this work, we quantify the precision and accuracy of EM tracking sensors inserted into the working channel of a flexible endoscope, with the ultrasound transducer turned on and off. The EUS device was found to have little (no significant) effect on static tracking accuracy although jitter increased significantly. A significant change in the measured distance between sensors arranged in a fixed geometry was found during a dynamic acquisition. In conclusion, EM tracking accuracy was not found to be significantly affected by the flexible endoscope.
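The two quantities the abstract evaluates — jitter (static precision about the mean position) and the measured distance between two sensors mounted in a rigid fixture — are straightforward to compute from a stream of position samples. A sketch on simulated readings (the positions and noise level below are illustrative, not the paper's figures):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated static EM readings (n samples, xyz in mm) from two sensors
# held 30 mm apart in a rigid fixture; numbers are made up.
true_a = np.array([100.0, 50.0, 20.0])
true_b = np.array([130.0, 50.0, 20.0])
a = true_a + rng.normal(0, 0.2, (500, 3))   # 0.2 mm/axis noise
b = true_b + rng.normal(0, 0.2, (500, 3))

def jitter_rms(samples):
    """RMS deviation about the mean position: the static-precision
    (jitter) metric for a nominally stationary sensor."""
    d = samples - samples.mean(axis=0)
    return np.sqrt((np.linalg.norm(d, axis=1) ** 2).mean())

def pair_distance(sa, sb):
    """Mean measured distance between two rigidly mounted sensors;
    drift in this value flags a tracking-accuracy change."""
    return np.linalg.norm(sa - sb, axis=1).mean()

j = jitter_rms(a)          # around 0.35 mm for 0.2 mm/axis noise
d = pair_distance(a, b)    # should stay near the true 30 mm
```

Comparing `j` with the transducer on versus off, and watching `d` during dynamic sweeps, mirrors the static and dynamic assessments described in the abstract.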
|
6 |
Extended Multi-resolution Local Patterns - A Discriminative Feature Learning Approach for Colonoscopy Image Classification |
Siyamalan Manivannan,Emanuele Trucco |
|
Abstract
We propose a novel local image descriptor, the ., together with a discriminative probabilistic framework for learning its parameters jointly with a multi-class image classifier. Our approach uses training data with image-level labels to learn features that are discriminative for multi-class colonoscopy image classification. Experiments on a three-class (abnormal, normal, uninformative) white-light colonoscopy image dataset of 2800 images show that the proposed features perform better than popular hand-designed features used in the medical as well as the computer vision literature for image classification.
|
7 |
Evaluation of i-Scan Virtual Chromoendoscopy and Traditional Chromoendoscopy for the Automated Diagnosis of Colonic Polyps |
Georg Wimmer,Michael Gadermayr,Roland Kwitt,Michael Häfner,Dorit Merhof,Andreas Uhl |
|
Abstract
Image enhancement technologies such as chromoendoscopy and digital chromoendoscopy have been reported to facilitate the detection and diagnosis of colonic polyps during endoscopic sessions. Here, we investigate the impact of enhanced imaging technologies on the classification accuracy of computer-aided diagnosis systems. Specifically, we determine whether image representations obtained from different imaging modalities differ significantly, and we perform experiments to assess the impact of using different imaging modalities in the training and validation sets. Finally, we examine whether merging images of similar imaging modalities for training the classification model can effectively improve accuracy.
|
8 |
ORBSLAM-Based Endoscope Tracking and 3D Reconstruction |
Nader Mahmoud,Iñigo Cirauqui,Alexandre Hostettler,Christophe Doignon,Luc Soler,Jacques Marescaux,J. M. M. Montiel |
|
Abstract
We aim to track the endoscope location inside the surgical scene and provide a 3D reconstruction, in real time, from the sole input of the image sequence captured by the monocular endoscope. This information offers new possibilities for developing surgical navigation and augmented reality applications. The main benefit of this approach is that it requires no extra tracking elements, which can disturb the surgeon's performance in the clinical routine. Our first contribution is to exploit ORBSLAM, one of the best-performing monocular SLAM algorithms, to estimate both the endoscope location and the 3D structure of the surgical scene. However, the reconstructed 3D map poorly describes textureless soft-organ surfaces such as the liver. Our second contribution extends ORBSLAM to reconstruct a semi-dense map of soft organs. Experimental results on in-vivo pigs show robust endoscope tracking even with organ deformations and partial instrument occlusions. They also show the reconstruction density and accuracy against a ground-truth surface obtained from CT.
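At the heart of any monocular SLAM system such as ORBSLAM is the pinhole projection that relates a 3D map point to its image observation given the camera pose (R, t) and intrinsics K; pose and map are refined by minimising the resulting reprojection error. A minimal sketch of that projection (the intrinsics below are hypothetical, not calibrated endoscope values):

```python
import numpy as np

# Hypothetical endoscope intrinsics: focal length 500 px,
# principal point at the centre of a 640x480 image.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(R, t, X):
    """Pinhole projection of world points X (n, 3) under pose (R, t):
    transform into the camera frame, then divide by depth."""
    Xc = X @ R.T + t          # world -> camera frame
    uv = Xc @ K.T             # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]

# Sanity check: with the identity pose, a point on the optical axis
# must land exactly at the principal point.
R, t = np.eye(3), np.zeros(3)
pt = project(R, t, np.array([[0.0, 0.0, 100.0]]))
```

ORBSLAM evaluates this function for thousands of map points per frame; the semi-dense extension described in the abstract adds reconstructed points in low-texture regions where ORB features alone are too sparse.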
|
9 |
Real-Time Segmentation of Non-rigid Surgical Tools Based on Deep Learning and Tracking |
Luis C. García-Peraza-Herrera,Wenqi Li,Caspar Gruijthuijsen,Alain Devreker,George Attilakos,Jan Deprest |
|
Abstract
Real-time tool segmentation is an essential component of computer-assisted surgical systems. We propose a novel real-time automatic method based on Fully Convolutional Networks (FCN) and optical flow tracking. Our method exploits the ability of deep neural networks to produce accurate segmentations of highly deformable parts along with the high speed of optical flow. Furthermore, the pre-trained FCN can be fine-tuned on a small number of medical images without the need to hand-craft features. We validated our method using existing and new benchmark datasets, covering both . and . real clinical cases where different surgical instruments are employed. Two versions of the method are presented: non-real-time and real-time. The former, using only deep learning, achieves a balanced accuracy of 89.6% on a real clinical dataset, outperforming the (non-real-time) state of the art by 3.8 percentage points. The latter, a combination of deep learning with optical flow tracking, yields an average balanced accuracy of 78.2% across all the validated datasets.
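Balanced accuracy, the metric reported above, is the mean of the per-class recalls. Unlike plain pixel accuracy it cannot be inflated by predicting the majority class, which matters here because tool pixels are usually far rarer than background pixels. A minimal stdlib sketch with made-up labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    classes = set(y_true)
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# 8 background (0) and 2 tool (1) pixels: predicting all-background
# scores 0.8 plain accuracy but only 0.5 balanced accuracy.
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10
```

Here `balanced_accuracy(y_true, y_pred)` is 0.5, exposing the degenerate all-background predictor that plain accuracy would reward.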
|
10 |
Weakly-Supervised Lesion Detection in Video Capsule Endoscopy Based on a Bag-of-Colour Features Model |
Michael Vasilakakis,Dimitrios K. Iakovidis,Evaggelos Spyrou,Anastasios Koulaouzidis |
|
Abstract
Robotic video capsule endoscopy (VCE) is a rapidly evolving medical imaging technology enabling more thorough examination and treatment of the gastrointestinal tract than conventional endoscopy technologies. Despite the technological advances in this field, reviewing the large VCE image sequences remains manual and challenges experts’ diagnostic capabilities. Video reviewing systems for automated lesion detection are still under investigation. Most of these systems are based on supervised machine learning algorithms, which require a training set of images manually annotated by experts to indicate which pixels correspond to lesions. In this paper, we investigate a weakly-supervised approach to lesion detection, which requires image-level instead of pixel-level annotations for training. Such an approach offers a considerable advantage in the efficiency of the annotation process. It is based on state-of-the-art colour features, which, in this study, are extended according to the bag-of-visual-words model. The area under the receiver operating characteristic curve reaches 81%.
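The bag-of-visual-words step mentioned above assigns each local descriptor of an image to its nearest word in a learned codebook and represents the image by the normalised word histogram, which an image-level classifier then consumes. A minimal sketch (the 2-D "colour descriptors" and the 3-word codebook are toy stand-ins for the paper's colour features and learned vocabulary):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words: assign each local descriptor to its nearest
    codebook word and return the normalised word histogram."""
    # pairwise distances, shape (n_descriptors, n_words)
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :],
                       axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy codebook and descriptors (illustrative only).
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
desc = np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 0.9], [0.1, 0.1]])
h = bovw_histogram(desc, codebook)   # two descriptors fall on word 0
```

Because the histogram is a single fixed-length vector per image, training only needs an image-level label (lesion present or not), which is exactly what makes the weakly-supervised annotation so much cheaper than pixel-level masks.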
|
11 |
Convolutional Neural Network Architectures for the Automated Diagnosis of Celiac Disease |
G. Wimmer,S. Hegenbart,A. Vecsei,A. Uhl |
|
Abstract
In this work, convolutional neural networks (CNNs) are applied to the computer-assisted diagnosis of celiac disease based on endoscopic images of the duodenum. To evaluate which network configurations are best suited for the classification of celiac disease, several different CNNs were trained using different numbers of layers and filters and different filter dimensions. The results of the CNNs are compared with those of popular general-purpose image representations such as Improved Fisher Vectors and LBP-based methods, as well as feature representations especially designed for the classification of celiac disease. We show that the deeper CNN architectures outperform these comparison approaches and that combining CNNs with linear support vector machines further improves the classification rates by about 3–7 percentage points, leading to distinctly better results (up to 97%) than those of the comparison methods.
|
12 |
A System for Augmented Reality Guided Laparoscopic Tumour Resection with Quantitative Ex-vivo User |
Toby Collins,Pauline Chauvet,Clément Debize,Daniel Pizarro,Adrien Bartoli,Michel Canis,Nicolas Bourdel |
|
Abstract
Augmented Reality (AR) guidance systems are currently being developed to help laparoscopic surgeons locate hidden structures such as tumours and major vessels. This can be achieved by registering pre-operative 3D data, such as CT or MRI, with the laparoscope’s live video. For soft organs this is very challenging, and quantitative evaluation is both difficult and limited in the literature. It has previously been done by measuring registration accuracy on retrospective (non-live) data; however, a performance evaluation of a real-time system in live use has not been presented, so the clinical benefit has not been measured. We describe an AR guidance system, based on an existing one with several important improvements, that has been evaluated in an ex-vivo pre-clinical study for guiding tumour resections in porcine kidneys. The main improvement is a considerably better way to visually guide the surgeon, by showing them how to access the tumour with an incision tool. We call this .. Performance was measured by the negative margin rate across 59 resected pseudo-tumours: 85.2% with AR guidance versus 41.9% without, a very significant improvement (., two-tailed Fisher's exact test).
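The significance test named above, Fisher's exact test on a 2x2 table, can be computed directly from the hypergeometric distribution using only the standard library. The counts below are hypothetical (chosen to roughly match the reported 85.2% and 41.9% rates; the paper gives rates, not the raw table):

```python
from math import comb

def fisher_exact_two_tailed(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more likely than the observed one."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x):  # hypergeometric probability of cell (1,1) = x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # small tolerance guards against float ties at p == p_obs
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 23/27 negative margins with AR guidance
# versus 13/31 without.
p = fisher_exact_two_tailed(23, 4, 13, 18)
```

A perfectly balanced table (e.g. 5/10 vs 5/10) gives p = 1, while the hypothetical AR-vs-no-AR table above gives a small p, consistent with the "very significant improvement" the abstract reports.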
|
13 |
Back Matter |
|
|
|
|
|