Information technology for surgery: general role, applications and example cases


    Computer technology for surgery

    February 17, 2023


    Reading time 24 minutes


    A surgical operation is a complex task that requires utmost concentration and precise movement. Technology supports the surgeon in this difficult task by providing digital tools to the operating room.

    The role of information technology in modern surgery

    The following artificial intelligence (AI) technologies are used in surgery1:

    1. Machine Learning (ML). An algorithm that makes predictions and recognizes patterns by learning from data labeled by an expert.

    2. Natural Language Processing (NLP). The computer learns to understand human language, such as words in surgical reports.

    3. Artificial Neural Networks. A network includes an input layer, where the data comes in, and an output layer, where the algorithm draws conclusions. Between these lie many layers where calculations take place and intermediate decisions are made.

    4. Computer vision. Technology that identifies objects in an image or video by recognizing their color, position, and texture.
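
    The layered structure described in item 3 can be sketched in a few lines. The weights and the ReLU activation below are arbitrary placeholders for illustration, not any trained clinical model:

```python
# Toy forward pass through the layered structure described above: an input
# layer, one hidden layer of intermediate calculations, and an output layer
# that draws the conclusion. Weights are arbitrary, not a trained model.

def layer(inputs, weights):
    """One fully connected layer with a ReLU activation."""
    return [max(0.0, sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

hidden_w = [[0.5, -0.5], [1.0, 1.0]]   # input layer -> hidden layer
output_w = [[1.0, 1.0]]                # hidden layer -> output layer

x = [2.0, 1.0]                         # data entering the input layer
hidden = layer(x, hidden_w)            # intermediate decisions
output = layer(hidden, output_w)       # final conclusion
print(output)  # [3.5]
```

    Stacking more such layers between input and output is what makes the network "deep".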

    In surgery, AI is used in planning operations. Computer models help to assess risks in advance to reduce the chance of complications. AI supports the doctor by providing additional information. When faced with a difficult situation, the specialist receives advice in real time2.

    IT solutions for surgery

    We present a line of technologies used in surgery, either independently or as part of complex systems.

    Processing of medical images

    AI can process radiographs, CT and MRI scans, and ultrasound images, performing the necessary calculations:

    1. A computer algorithm divides the image into separate areas. Each area is a set of pixels with similar characteristics3.

    2. Data about suspicious areas is fed into the neural network. AI sequentially analyzes each area, comparing new information with existing knowledge, and draws a conclusion on whether it is observing a pathology4.

    3. Registration/upscaling. Two images are used as input, e.g. one obtained before the operation and another during it. The quality of the latter is usually lower. The neural network aligns the images and creates an improved result5.
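
    A toy illustration of step 1, assuming nothing more than a plain intensity threshold (real segmentation models are far more sophisticated):

```python
# Minimal sketch of splitting an image into areas of similar pixels.
# The "image" is a toy 4x4 grid of grayscale intensities (0-255); real
# systems operate on CT/MRI volumes with learned segmentation models.

image = [
    [12, 15, 200, 210],
    [10, 14, 205, 198],
    [ 9, 11, 202, 207],
    [13, 16, 199, 201],
]

def segment(image, threshold=128):
    """Label each pixel 0 (dark region) or 1 (bright, possibly suspicious)."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def region_pixels(labels, region):
    """Collect coordinates of every pixel assigned to a given region."""
    return [(r, c) for r, row in enumerate(labels)
                   for c, lab in enumerate(row) if lab == region]

labels = segment(image)
suspicious = region_pixels(labels, 1)
print(f"{len(suspicious)} pixels flagged as suspicious")  # 8 pixels
```

    The flagged region is what would then be handed to the classifier in step 2.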

    3D modeling

    A surgeon is examining the digital model of the brain

    Scientists from Tokyo Medical University have created a computer system that visualizes lung pathologies when preparing for a segmentectomy.

    A segmentectomy is the surgical removal of an affected area, or segment, of the lung6. The method is used in surgery for lung cancer. The surgeon faces the difficult task of distinguishing between normal and malignant tissue: the entire area containing the tumor must be removed to avoid complications and recurrences, while preserving as much healthy tissue as possible7.

    The algorithm created by Japanese scientists uses medical images to create a three-dimensional model of bronchopulmonary structures. The technology is applied in several stages8:

    1. CT images are loaded into the system.

    2. The algorithm analyzes the input data and creates a digital model of the lung tissue, blood vessels, bronchi, and tumor.

    3. AI analyzes which anatomical structures are associated with the tumor by determining the affected segment.

    4. The algorithm calculates the area that has to be operated and displays it on the screen.
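
    The segment-selection idea in step 3 can be sketched as follows. The segment boxes and tumor coordinates here are invented for illustration; the real system derives patient-specific anatomy from the CT reconstruction:

```python
# Hedged sketch of deciding which lung segment contains the tumor.
# Segment boundaries are illustrative axis-aligned boxes, not real anatomy.

SEGMENTS = {  # name -> ((x0, y0, z0), (x1, y1, z1)) bounding box in mm
    "S1": ((0, 0, 0), (50, 50, 40)),
    "S2": ((50, 0, 0), (100, 50, 40)),
    "S3": ((0, 50, 0), (50, 100, 40)),
}

def contains(box, point):
    (x0, y0, z0), (x1, y1, z1) = box
    x, y, z = point
    return x0 <= x < x1 and y0 <= y < y1 and z0 <= z < z1

def affected_segment(tumor_center):
    """Return the segment whose volume contains the tumor center, if any."""
    for name, box in SEGMENTS.items():
        if contains(box, tumor_center):
            return name
    return None

print(affected_segment((60, 20, 10)))  # S2
```

    The returned segment is what the surgeon would review and correct on screen.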

    The surgeon checks the results of the reconstruction and makes the necessary corrections. The whole procedure takes 5-10 minutes. The information model is used both in preparation for and during surgery8.

    3D printing for surgery

    3D printing is used in plastic surgery. Printed implants can be used to fix defects in the jaw. Biodegradable material is used in combination with stem cells, so that the implant is gradually replaced by bone tissue. The technology has also been used in rhinoplasty and cleft lip treatment9.

    Pedicle screws are devices used in surgery to strengthen the vertebrae. These screws provide additional support to a spine recovering from injury or disease10.

    Their use is difficult if the patient’s vertebrae have an unusual shape. China Medical University has proposed a new technology that allows easier insertion of pedicle screws11:

    • A template is printed on a 3D printer, which is attached to the vertebrae.

    • The surgeon inserts the screws into the holes on the template.

    This way, no soft tissue needs to be removed to insert the instrument accurately – the targets are marked in advance.

    Cranioplasty is the surgical repair of a defect in the skull bone left by a previous operation or injury12. Polyetheretherketone (PEEK) is used as the fill material. This polymer is very similar to bone in terms of its mechanical and physical properties13.

    The use of 3D PEEK skull prostheses gives good aesthetic and functional results. Researchers note that polymer models have advantages over hand-made bone cement implants13.

    Augmented reality

    Surgeon looking at the operated area through a binocular surgical loupe

    Augmented Reality (AR) is a technology that allows the user to see computer information superimposed on the real environment14.

    How AR technology is applied in surgery15:

    1. Special devices, such as video cameras and sensors, monitor what is happening on the operating table.

    2. The data is processed by a computer.

    3. A real-time video feed of the operation is displayed on the monitor.

    4. A 3D model of the organ is superimposed on the video.
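
    Step 4 boils down to compositing two image sources. A minimal per-pixel sketch, with toy grayscale values and a made-up opacity, might look like:

```python
# Sketch of superimposing a 3D model render on the live video via
# per-pixel alpha blending. A real AR pipeline also needs camera
# calibration and registration of the model to the patient.

def blend(video_px, overlay_px, alpha=0.4):
    """Mix an overlay pixel into a video pixel; alpha is overlay opacity."""
    return round(alpha * overlay_px + (1 - alpha) * video_px)

frame   = [100, 100, 100, 100]     # one row of the camera frame
overlay = [255, 255, None, None]   # rendered model; None = no model here

composited = [blend(v, o) if o is not None else v
              for v, o in zip(frame, overlay)]
print(composited)  # [162, 162, 100, 100]
```

    Where the model is absent, the video passes through unchanged, which is why the surgeon still sees the real operating field.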

    A 3D image is obtained in preparation for the operation. The algorithm processes CT images and creates a digital model that is verified by the doctor15.

    This technology has a drawback: the surgeon is forced to look away at the monitor, losing focus. A possible solution could be the use of AR headsets that display information directly on the lenses14:

    1. The surgeon puts on special glasses.

    2. Looking through the lenses, the surgeon sees both the operated area and the superimposed 3D model.

    3. The digital image can be zoomed and moved using touchless gestures or voice commands.

    Examples of devices that support AR: Microsoft HoloLens, Google Glass, Sony SmartEyeglass16.

    In surgery, vein imaging devices such as AccuVein are used. The device shines infrared light on the surface of the skin. Venous vessels appear as dark lines, while the surrounding skin is visible in red under the IR rays. This way the surgeon can evaluate veins in real time17.

    IT system for preoperative planning

    Chinese scientists have developed AI-based technology to help surgeons prepare for total hip arthroplasty (a complete replacement of an affected joint with a prosthesis).

    The project involved the study of over 1.2 million CT scans from 3,000 patients. Surgical specialists marked up some of the images manually in order to train the neural network. Physicians indicated anatomical landmarks, such as the contours of the pelvis and femur18.

    The technology provided the data that the surgeon evaluated18:

    • 3D images of the hip joint area;

    • model of the future prosthesis in different positions;

    • distances between anatomical landmarks;

    • expected result of arthroplasty.

    The information system accelerated the planning of the operation, taking stress off the surgeon. The neural network processed a CT image in about 2 minutes, whereas the manual process could take up to 207 minutes18.

    Specialists from Belgium used a similar technology for total knee arthroplasty. The training dataset included 5409 preoperative plans compiled by experienced surgeons. The plans contained data on the size of the implant and how to properly position it19.

    Next, a digital model of the knee joint prosthesis was created; the surgeon evaluated it while the algorithm refined it in light of clinical data. As a result, AI created an improved preoperative plan, to which the doctor made on average 39.7% fewer corrections than to the standard plan19.

    The Resting State fMRI service by SberMedAI is used when planning surgery for patients with oncology. AI helps to distinguish the location of functionally significant areas of the cerebral cortex from the tumor.

    The algorithm analyzes the patient’s functional magnetic resonance imaging data. In the images, AI highlights the visual, motor and auditory areas of the brain with color. The doctor can get the result in 15 minutes.

    The CT Angiography service helps to plan surgical interventions for stroke:

    1. The doctor performs CT angiography with contrast.

    2. The algorithm analyzes the obtained images and uses them to create a 3D model of cerebral vessels.

    3. AI automatically marks zones of occlusion (places where the lumen of the vessel is closed by a thrombus).
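
    The occlusion-marking idea in step 3 can be caricatured as a threshold on lumen diameter. Segment names, diameters, and the cutoff below are invented for illustration, not parameters of the actual service:

```python
# Illustrative sketch of flagging occlusion zones. Each vessel segment
# carries a measured lumen diameter (mm); values are made up.

vessel_segments = [
    ("M1 left",  2.8),
    ("M1 right", 0.0),   # lumen fully closed by a thrombus
    ("A2 left",  1.9),
]

def occlusion_zones(segments, min_lumen_mm=0.5):
    """Return names of segments whose lumen is below the cutoff."""
    return [name for name, lumen in segments if lumen < min_lumen_mm]

print(occlusion_zones(vessel_segments))  # ['M1 right']
```

    The flagged zones would then go into the preliminary report for the radiologist to verify.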

    The service speeds up the analysis of CT angiography by detecting the localization of blood clots in vessels of the brain. AI creates a preliminary report, which is verified by a medical specialist. The algorithm helps the doctor in making clinical decisions, increasing the speed and accuracy of diagnosis.

    Navigation system for surgery

    Information technology helps doctors perform surgeries

    A surgical navigation system is a technology that helps to reduce risks and increase the accuracy of surgical operations. It integrates different information solutions, including medical imaging, visualization with 3D models, and augmented reality20.

    Operating principle of the surgical navigation system21:

    1. Before the operation, CT images are processed to obtain 3D images of the anatomy of the patient’s organs.

    2. A tracking system is used to monitor the surgical instruments.

    3. The position of the instruments is displayed relative to the patient’s anatomy.

    4. The doctor can follow the necessary information on the monitor.
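
    Steps 2 and 3 can be sketched as a coordinate transform plus a distance readout. The calibration offset and target point below are hypothetical; real systems compute a full rigid registration (rotation and translation) between tracker and image space:

```python
# Sketch of mapping a tracked instrument tip into the patient's image
# coordinates. The transform here is a plain translation for simplicity.

import math

TRACKER_TO_IMAGE = (-10.0, 5.0, 0.0)  # illustrative calibration offset (mm)

def to_image_coords(tip):
    """Shift a tracker-space point into image (patient anatomy) space."""
    return tuple(t + o for t, o in zip(tip, TRACKER_TO_IMAGE))

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

target = (40.0, 25.0, 30.0)              # planned entry point, image space
tip_in_image = to_image_coords((52.0, 21.0, 33.0))
print(f"tool is {distance(tip_in_image, target):.1f} mm from target")
```

    The readout on the last line is the kind of information the monitor in step 4 would display continuously.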

    The navigation system allows both tracking the position of the tools and superimposing a 3D model on the video of the operation in real time. The image can be merged automatically or manually. The surgeon can edit the computer model, changing its position, size and transparency22.

    Navigation in surgery serves several functions23:

    • allows the doctor to find the operable anatomy and gain access to it;

    • simplifies tool management;

    • performs the necessary measurements;

    • provides information support to the surgeon.

    Navigation systems are used in neurosurgery. The surgeon works with neural tissue, so accuracy down to the millimeter is important here. The use of neuronavigation provides more information about the tumor: where it is located, what size and boundaries it has. It is easier for the surgeon to distinguish between malignant tissue and adjacent anatomical structures, such as the internal carotid artery and cranial nerves24.

    Navigation helps in abdominal surgery, such as gastrectomy – partial or complete removal of a stomach affected by a tumor25. In a randomized controlled trial, experts from the Republic of Korea compared two approaches to gastrectomy: standard and navigational. In the latter case, the quality of life and nutrition of the operated patients improved significantly26.

    AI improves endoscopy

    Endoscopy is a procedure that allows internal inspection of the body using a long thin tube with a small camera at the end. The endoscope is inserted through natural orifices, such as the mouth. The method is used to study hollow organs: large intestine, uterus, stomach, bladder27.

    Information technologies are applied in endoscopy to solve diagnostic problems28:

    • assess the quality of the endoscopic image;

    • distinguish the pathological area from the normal one;

    • detect and locate the tumor and enclose it in a frame;

    • determine the contours of the neoplasm and highlight it with color;

    • preliminarily define the type of tumor and the depth of germination in the tissue.
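
    Two of the tasks above (locating a tumor and limiting it with a frame) reduce to computing a bounding box around pixels a classifier has flagged. A minimal sketch with made-up coordinates:

```python
# Sketch of framing a suspicious area on the endoscopic image: pixels
# flagged by a (hypothetical) classifier are reduced to the bounding box
# that would be drawn on the video stream.

flagged = [(3, 4), (3, 5), (4, 4), (5, 6)]   # (row, col) of flagged pixels

def bounding_box(pixels):
    """Smallest rectangle (top, left, bottom, right) enclosing the pixels."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (min(rows), min(cols), max(rows), max(cols))

print(bounding_box(flagged))  # (3, 4, 5, 6)
```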

    Endoscopy is used in surgery for preoperative examination. Korean scientists used a neural network to determine the depth of tumor growth into the tissue at an early stage of stomach cancer. The study comprised the following steps29:

    1. For training and testing of the neural network, scientists selected 11,539 endoscopic images.

    2. The information model was trained to distinguish between mucosal cancer and a tumor affecting the underlying tissues.

    3. The neural network was tested – its sensitivity reached 91.0%.
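
    The sensitivity figure in step 3 is simply the share of truly invasive tumors that the network caught. A sketch with invented counts chosen to reproduce 91.0% (not the study's actual confusion matrix):

```python
# Sensitivity = true positives / (true positives + false negatives),
# i.e. the fraction of real deep-invasion tumors the model flagged.

def sensitivity(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

tp, fn = 91, 9   # illustrative counts: deep tumors found vs. missed
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # sensitivity = 91.0%
```

    High sensitivity matters here because a missed deep tumor would lead to under-treatment.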

    The surgeon uses advanced endoscopy data to understand how much tissue needs to be removed. This helps to avoid tumor recurrence without radical procedures30.

    Specialists from the UK have developed a technology that improves endoscopic navigation during surgery on the organs of the gastrointestinal tract. It is based on the modeling of the esophagus, stomach and duodenum, through which the endoscope moves. The neural network also builds models of the surrounding organs based on the CT images of the patient31.

    Endoscopic surgery can be improved with AR technology. Experts from the Netherlands and Sweden have developed a navigation system based on augmented reality with an algorithm designed for endonasal skull base surgery32:

    • the endoscope is inserted into the nasal cavity of the patient;

    • the doctor receives a live video stream from the endoscope;

    • the algorithm superimposes a 3D model on the video;

    • desired anatomical structures, such as vessels and nerves, are highlighted.

    The technology includes optical sensors that track the position of the endoscope32.

    Robotic systems for surgery

    A doctor studies readings from an anesthesia monitor

    The Cambridge Dictionary defines the word “robot” as “a mechanical device that works automatically or is controlled by a computer”33.

    Robots are used in surgery, although they do not perform operations. Surgeons control them through a computer or a special console to automate routine processes and improve their accuracy. Here’s how robotic systems work and in what areas of surgery they are useful:

    Approaches to teaching robotic surgery: how a robot and a doctor learn to understand each other

    Learning from Demonstration (LfD) replaces manual robot programming with automatic learning. An expert shows the robot how to perform a task by dividing it into a number of simple actions34.

    Surgical robots are trained on purpose-built datasets, such as the JIGSAWS dataset created by specialists at Johns Hopkins University. Modeling of surgical gestures is carried out in several stages35:

    1. A professional surgeon first completes a task by using computer consoles to perform various manipulations.

    2. The task is divided into steps – a series of gestures. For example, to suture, you need to position the tip of the needle, push the needle through the tissue, and pull the needle to the side. Each movement is annotated manually.

    3. Video and kinematic information about the trajectory and speed of the robotic arm is then collected and processed by computational methods. The technology turns human gestures into a set of numbers that a computer can understand.
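
    Step 3 can be illustrated with a tiny synthetic trajectory: hand-annotated samples are grouped by gesture and reduced to numbers such as mean tool speed. The samples below are invented; JIGSAWS itself records far richer kinematics plus video:

```python
# Sketch of turning a recorded tool trajectory into numbers a model can
# learn from. One spatial axis only; timestamps and positions are synthetic.

annotated = [  # (time_s, x_mm, gesture_label), labeled by hand
    (0.0,  0.0, "position_needle"),
    (1.0,  2.0, "position_needle"),
    (2.0,  8.0, "push_needle"),
    (3.0, 14.0, "push_needle"),
]

def mean_speed(samples):
    """Average tool speed (mm/s) over consecutive samples of one gesture."""
    deltas = [(t2 - t1, abs(x2 - x1))
              for (t1, x1, _), (t2, x2, _) in zip(samples, samples[1:])]
    return sum(dx for _, dx in deltas) / sum(dt for dt, _ in deltas)

push = [s for s in annotated if s[2] == "push_needle"]
print(f"push_needle mean speed: {mean_speed(push):.1f} mm/s")  # 6.0 mm/s
```

    Features of this kind, per gesture, are what the recognition models in the benchmark learn from.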

    Human-Robot Interaction (HRI) is a research and development concept designed to improve the way humans and computers work together. HRI uses technology to teach a robot to understand and respond to human movement and gestures36.

    HRI information models support control of the robotic arm during surgery. Algorithms make the robot’s movements more accurate and smooth, and allow it to better recognize the doctor’s commands. The use of HRI is especially relevant in minimally invasive surgery, where the minimum amount of tissue is affected37.

    Studies are being conducted where the surgeon can control the information system by voice. Scientists from China and Germany have published a concept of an endoscopic robot that will respond to voice commands38:

    1. The doctor speaks commands into the microphone, for example, “up” or “right”.

    2. The robot recognizes the command and directs the endoscope in the right direction.

    3. The surgeon controls the instruments with their hands and the view of the surgical field with their voice.
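
    The command-to-motion mapping in step 2 can be sketched as a lookup table. The vocabulary and step size are illustrative, not the published robot's actual interface:

```python
# Sketch of mapping a recognized voice command to an endoscope motion.
# Unrecognized sounds are safely ignored rather than moving the camera.

COMMANDS = {  # spoken word -> (dx, dy) camera motion, arbitrary units
    "up":    (0, 1),
    "down":  (0, -1),
    "left":  (-1, 0),
    "right": (1, 0),
}

def apply_command(position, word):
    """Move the endoscope tip one step; ignore unrecognized words."""
    dx, dy = COMMANDS.get(word.lower(), (0, 0))
    return (position[0] + dx, position[1] + dy)

pos = (0, 0)
for word in ["up", "up", "right", "cough"]:   # "cough" is not a command
    pos = apply_command(pos, word)
print(pos)  # (1, 2)
```

    Ignoring out-of-vocabulary input is one simple safety choice; a clinical system would add confirmation and motion limits on top.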

    The authors are currently developing a prototype robot, paying close attention to its accuracy and safety38.

    Named after the Italian master: the use of the da Vinci robot in surgery

    The da Vinci robotic surgical system was developed by the American company Intuitive Surgical. The technology consists of three components39:

    1. The surgeon’s console. The doctor sits here and controls the instruments with joysticks and foot pedals.

    2. The patient cart. It is installed next to the operating table on which the patient lies, and carries four robotic arms and a camera.

    3. The vision cart. It contains a monitor that displays the operated area.

    The robotic arms provide a full range of motion and allow correct positioning of the surgical tool. The technology filters out the trembling of the surgeon’s hands. The specialist monitors the progress of the operation through two optical channels, which transmit high-resolution images, one for each of the doctor’s eyes40.

    According to Intuitive Surgical’s 2021 Annual Report, da Vinci has performed approximately 1.6 million surgeries in hospitals worldwide. The robotic system is used in various areas of minimally invasive surgery41:

    • surgical treatment of hernia;

    • operations on the colon and rectum;

    • removal of organs: gallbladder, prostate, uterus;

    • operations on the lungs;

    • removal of head and neck tumors.

    Scientists from Korea in 2018 analyzed 10,000 surgical interventions performed using da Vinci, out of which 94.5% were performed for malignant neoplasms42.

    The effect of the da Vinci system on clinical outcomes depends on the type of surgery performed. Researchers note advantages over standard surgical techniques: patients operated on with da Vinci lost less blood and were discharged from the hospital more quickly43.

    The future of digital technology for surgery

    Operating room interior

    Successful use of artificial intelligence in surgery is possible if a number of conditions are met44:

    • Algorithms are comprehensible to healthcare professionals.

    • Technologies can solve relevant clinical problems.

    • Information is collected in a reliable manner and can be effectively processed by a computer.

    • Data security and confidentiality are respected.

    • The effectiveness of the algorithms has been tested and proven.

    The future of robotic surgery is associated with the development of tactile feedback. This technology will allow the robot and the doctor to better understand each other: haptic feedback should help the surgeon feel the pressure exerted on the tissue, and the computer will be able to respond more accurately to different amounts of applied pressure45.


    1. Hashimoto DA, Rosman G, Rus D, Meireles OR. Artificial Intelligence in Surgery: Promises and Perils. Ann Surg. 2018 Jul;268(1):70-76. doi: 10.1097/SLA.0000000000002693. PMID: 29389679; PMCID: PMC5995666.

    2. Gupta A, Singla T, Chennatt JJ, David LE, Ahmed SS, Rajput D. Artificial intelligence: A new tool in surgeon’s hand. J Educ Health Promot. 2022 Mar 23;11:93. doi: 10.4103/jehp.jehp_625_21. PMID: 35573620; PMCID: PMC9093628.

    3. Luo D, Zhang Y, Li J. Research on Several Key Problems of Medical Image Segmentation and Virtual Surgery. Contrast Media Mol Imaging. 2022 Apr 11;2022:3463358. doi: 10.1155/2022/3463358. PMID: 35494211; PMCID: PMC9017556.

    4. Liu F, Zhou Z, Samsonov A, Blankenbaker D, Larison W, Kanarek A, Lian K, Kambhampati S, Kijowski R. Deep Learning Approach for Evaluating Knee MR Images: Achieving High Diagnostic Performance for Cartilage Lesion Detection. Radiology. 2018 Oct;289(1):160-169. doi: 10.1148/radiol.2018172986. Epub 2018 Jul 31. PMID: 30063195; PMCID: PMC6166867.

    5. Hu Y, Modat M, Gibson E, Li W, Ghavami N, Bonmati E, Wang G, Bandula S, Moore CM, Emberton M, Ourselin S, Noble JA, Barratt DC, Vercauteren T. Weakly-supervised convolutional neural networks for multimodal image registration. Med Image Anal. 2018 Oct;49:1-13. doi: 10.1016/ Epub 2018 Jul 4. PMID: 30007253; PMCID: PMC6742510.

    6. Wang, L., Ge, L., You, S. et al. Lobectomy versus segmentectomy in patients with stage T (> 2 cm and ≤ 3 cm) N0M0 non-small cell lung cancer: a propensity score matching study. J Cardiothorac Surg 17, 110 (2022).

    7. Zhao X, Qian L, Luo Q, Huang J. Segmentectomy as a safe and equally effective surgical option under complete video-assisted thoracic surgery for patients of stage I non-small cell lung cancer. J Cardiothorac Surg. 2013 Apr 29;8:116. doi: 10.1186/1749-8090-8-116. PMID: 23628209; PMCID: PMC3661398.

    8. Saji H, Inoue T, Kato Y, Shimada Y, Hagiwara M, Kudo Y, Akata S, Ikeda N. Virtual segmentectomy based on high-quality three-dimensional lung modelling from computed tomography images. Interact Cardiovasc Thorac Surg. 2013 Aug;17(2):227-32. doi: 10.1093/icvts/ivt120. Epub 2013 Apr 26. PMID: 23624984; PMCID: PMC3715161.

    9. Lynn AQ, Pflibsen LR, Smith AA, Rebecca AM, Teven CM. Three-dimensional Printing in Plastic Surgery: Current Applications, Future Directions, and Ethical Implications. Plast Reconstr Surg Glob Open. 2021 Mar 22;9(3):e3465. doi: 10.1097/GOX.0000000000003465. PMID: 33968548; PMCID: PMC8099403.

    10. Spinal fusion – series – Pedicular screws [Digital resource]: MedlinePlus. URL: (dated: 01.12.2022).

    11. Chen PC, Chang CC, Chen HT, Lin CY, Ho TY, Chen YJ, Tsai CH, Tsou HK, Lin CS, Chen YW, Hsu HC. The Accuracy of 3D Printing Assistance in the Spinal Deformity Surgery. Biomed Res Int. 2019 Nov 11;2019:7196528. doi: 10.1155/2019/7196528. PMID: 31828123; PMCID: PMC6885147.

    12. Cranioplasty [Digital resource]: Johns Hopkins. URL: (dated: 01.12.2022).

    13. Hosameldin A, Osman A, Hussein M, Gomaa AF, Abdellatif M. Three dimensional custom-made PEEK cranioplasty. Surg Neurol Int. 2021 Nov 30;12:587. doi: 10.25259/SNI_861_2021. PMID: 34992904; PMCID: PMC8720430.

    14. Dennler C, Bauer DE, Scheibler AG, Spirig J, Götschi T, Fürnstahl P, Farshad M. Augmented reality in the operating room: a clinical feasibility study. BMC Musculoskelet Disord. 2021 May 18;22(1):451. doi: 10.1186/s12891-021-04339-w. PMID: 34006234; PMCID: PMC8132365.

    15. Vávra P, Roman J, Zonča P, Ihnát P, Němec M, Kumar J, Habib N, El-Gendi A. Recent Development of Augmented Reality in Surgery: A Review. J Healthc Eng. 2017;2017:4574172. doi: 10.1155/2017/4574172. Epub 2017 Aug 21. PMID: 29065604; PMCID: PMC5585624.

    16. Khor WS, Baker B, Amin K, Chan A, Patel K, Wong J. Augmented and virtual reality in surgery-the digital surgical environment: applications, limitations and legal pitfalls. Ann Transl Med. 2016 Dec;4(23):454. doi: 10.21037/atm.2016.12.23. PMID: 28090510; PMCID: PMC5220044.

    17. Chu MW, Sarik JR, Wu LC, Serletti JM, Bank J. Non-Invasive Imaging of Preoperative Mapping of Superficial Veins in Free Flap Breast Reconstruction. Arch Plast Surg. 2016 Jan;43(1):119-21. doi: 10.5999/aps.2016.43.1.119. Epub 2016 Jan 15. PMID: 26848464; PMCID: PMC4738119.

    18. Chen X, Liu X, Wang Y, Ma R, Zhu S, Li S, Li S, Dong X, Li H, Wang G, Wu Y, Zhang Y, Qiu G, Qian W. Development and Validation of an Artificial Intelligence Preoperative Planning System for Total Hip Arthroplasty. Front Med (Lausanne). 2022 Mar 22;9:841202. doi: 10.3389/fmed.2022.841202. PMID: 35391886; PMCID: PMC8981237.

    19. Lambrechts A, Wirix-Speetjens R, Maes F, Van Huffel S. Artificial Intelligence Based Patient-Specific Preoperative Planning Algorithm for Total Knee Arthroplasty. Front Robot AI. 2022 Mar 8;9:840282. doi: 10.3389/frobt.2022.840282. Erratum in: Front Robot AI. 2022 Apr 28;9:899349. PMID: 35350703; PMCID: PMC8957999.

    20. Chen X, Xu L, Wang Y, Wang H, Wang F, Zeng X, Wang Q, Egger J. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J Biomed Inform. 2015 Jun;55:124-31. doi: 10.1016/j.jbi.2015.04.003. Epub 2015 Apr 13. PMID: 25882923.

    21. Cleary K, Peters TM. Image-guided interventions: technology review and clinical applications. Annu Rev Biomed Eng. 2010 Aug 15;12:119-42. doi: 10.1146/annurev-bioeng-070909-105249. PMID: 20415592.

    22. Du C, Li J, Zhang B, Feng W, Zhang T, Li D. Intraoperative navigation system with a multi-modality fusion of 3D virtual model and laparoscopic real-time images in laparoscopic pancreatic surgery: a preclinical study. BMC Surg. 2022 Apr 11;22(1):139. doi: 10.1186/s12893-022-01585-0. PMID: 35410155; PMCID: PMC9004060.

    23. Mezger U, Jendrewski C, Bartels M. Navigation in surgery. Langenbecks Arch Surg. 2013 Apr;398(4):501-14. doi: 10.1007/s00423-013-1059-4. Epub 2013 Feb 22. PMID: 23430289; PMCID: PMC3627858.

    24. Khoshnevisan A, Allahabadi NS. Neuronavigation: principles, clinical applications and potential pitfalls. Iran J Psychiatry. 2012 Spring;7(2):97-103. PMID: 22952553; PMCID: PMC3428645.

    25. Gastrectomy [Digital resource]: NHS. URL: (dated: 01.12.2022).

    26. Kim YW, Min JS, Yoon HM, An JY, Eom BW, Hur H, Lee YJ, Cho GS, Park YK, Jung MR, Park JH, Hyung WJ, Jeong SH, Kook MC, Han M, Nam BH, Ryu KW. Laparoscopic Sentinel Node Navigation Surgery for Stomach Preservation in Patients With Early Gastric Cancer: A Randomized Clinical Trial. J Clin Oncol. 2022 Jul 20;40(21):2342-2351. doi: 10.1200/JCO.21.02242. Epub 2022 Mar 24. PMID: 35324317; PMCID: PMC9287280.

    27. Endoscopy [Digital resource]: NHS. URL: (dated: 01.12.2022).

    28. Paderno A, Gennarini F, Sordi A, Montenegro C, Lancini D, Villani FP, Moccia S, Piazza C. Artificial intelligence in clinical endoscopy: Insights in the field of videomics. Front Surg. 2022 Sep 12;9:933297. doi: 10.3389/fsurg.2022.933297. PMID: 36171813; PMCID: PMC9510389.

    29. Yoon HJ, Kim S, Kim JH, Keum JS, Oh SI, Jo J, Chun J, Youn YH, Park H, Kwon IG, Choi SH, Noh SH. A Lesion-Based Convolutional Neural Network Improves Endoscopic Detection and Depth Prediction of Early Gastric Cancer. J Clin Med. 2019 Aug 26;8(9):1310. doi: 10.3390/jcm8091310. PMID: 31454949; PMCID: PMC6781189.

    30. Tanabe S, Ishido K, Higuchi K, Sasaki T, Katada C, Azuma M, Naruke A, Kim M, Koizumi W. Long-term outcomes of endoscopic submucosal dissection for early gastric cancer: a retrospective comparison with conventional endoscopic resection in a single center. Gastric Cancer. 2014 Jan;17(1):130-6. doi: 10.1007/s10120-013-0241-2. Epub 2013 Apr 11. PMID: 23576197.

    31. Gibson E, Giganti F, Hu Y, Bonmati E, Bandula S, Gurusamy K, Davidson B, Pereira SP, Clarkson MJ, Barratt DC. Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks. IEEE Trans Med Imaging. 2018 Aug;37(8):1822-1834. doi: 10.1109/TMI.2018.2806309. Epub 2018 Feb 14. PMID: 29994628; PMCID: PMC6076994.

    32. Lai M, Skyrman S, Shan C, Babic D, Homan R, Edström E, et al. (2020) Fusion of augmented reality imaging with the endoscopic view for endonasal skull base surgery; a novel application for surgical navigation based on intraoperative cone beam computed tomography and optical tracking. PLoS ONE 15(1): e0227312.

    33. Definition of ROBOT in the Cambridge English Dictionary [Digital resource]: Cambridge Dictionary. URL:словарь/английский/robot. (dated: 01.12.2022).

    34. Schaal, S. (1996). Learning from demonstration. NIPS’96: Proceedings of the 9th International Conference on Neural Information Processing Systems, pp. 1040–1046. URL: (dated: 01.12.2022).

    35. Ahmidi N, Tao L, Sefati S, Gao Y, Lea C, Haro BB, Zappella L, Khudanpur S, Vidal R, Hager GD. A Dataset and Benchmarks for Segmentation and Recognition of Gestures in Robotic Surgery. IEEE Trans Biomed Eng. 2017 Sep;64(9):2025-2041. doi: 10.1109/TBME.2016.2647680. Epub 2017 Jan 4. PMID: 28060703; PMCID: PMC5559351.

    36. Sheridan TB. Human-Robot Interaction: Status and Challenges. Hum Factors. 2016 Jun;58(4):525-32. doi: 10.1177/0018720816644364. Epub 2016 Apr 20. PMID: 27098262.

    37. Jiang F, Jia R, Jiang X, Cao F, Lei T, Luo L. Human-Machine Interaction Methods for Minimally Invasive Surgical Robotic Arms. Comput Intell Neurosci. 2022 Sep 10;2022:9434725. doi: 10.1155/2022/9434725. PMID: 36124121; PMCID: PMC9482493.

    38. He, Yucheng & Deng, Zhen & Zhang, Jianwei. (2021). Design and voice‐based control of a nasal endoscopic surgical robot. CAAI Transactions on Intelligence Technology. 6. 1049/cit2.12022.

    39. About da Vinci Systems [Digital resource]: Intuitive Surgical. URL: (dated: 01.12.2022).

    40. Palep JH. Robotic assisted minimally invasive surgery. J Minim Access Surg. 2009 Jan;5(1):1-7. doi: 10.4103/0972-9941.51313. PMID: 19547687; PMCID: PMC2699074.

    41. Intuitive Surgical, Inc [Digital resource]: AnnualReports. URL: (dated: 01.12.2022).

    42. Koh DH, Jang WS, Park JW, Ham WS, Han WK, Rha KH, Choi YD. Efficacy and Safety of Robotic Procedures Performed Using the da Vinci Robotic Surgical System at a Single Institute in Korea: Experience with 10000 Cases. Yonsei Med J. 2018 Oct;59(8):975-981. doi: 10.3349/ymj.2018.59.8.975. PMID: 30187705; PMCID: PMC6127423.

    43. Yu J, Wang Y, Li Y, Li X, Li C, Shen J. The safety and effectiveness of Da Vinci surgical system compared with open surgery and laparoscopic surgery: a rapid assessment. J Evid Based Med. 2014 May;7(2):121-34. doi: 10.1111/jebm.12099. PMID: 25155768.

    44. Murphy DC, Saleh DB. Artificial Intelligence in plastic surgery: What is it? Where are we now? What is on the horizon? Ann R Coll Surg Engl. 2020 Oct;102(8):577-580. doi: 10.1308/rcsann.2020.0158. Epub 2020 Aug 11. PMID: 32777930; PMCID: PMC7538735.

    45. Brodie A, Vasdev N. The future of robotic surgery. Ann R Coll Surg Engl. 2018 Sep;100(Suppl 7):4-13. doi: 10.1308/rcsann.supp2.4. PMID: 30179048; PMCID: PMC6216754.
