Artificial Intelligence in Surgical Training for Kidney Cancer: A Systematic Review of the Literature

1. Introduction

Renal cell carcinoma (RCC) represents a common malignancy, with approximately 431,288 newly diagnosed cases and 179,368 deaths worldwide in 2020 [1]. Furthermore, advancements in imaging techniques allow for the detection of tumors at earlier stages [2,3]. However, a substantial proportion (approximately 10% to 17%) of kidney tumors are still classified as benign on histopathological evaluation [4]. Moreover, certain populations, such as patients with co-existing health conditions (e.g., obesity) and the elderly, face increased risks during interventions. Generally, for non-metastatic disease, surgical resection remains the standard of care. According to the current guidelines, surgery can be performed by an open, laparoscopic, or robot-assisted approach, depending on local conditions, to maximize oncological, functional, and perioperative outcomes [5].

Partial and complex radical nephrectomy are challenging procedures that require thorough training and planning for key steps such as dissection of the renal hilum, tumor enucleation, and renorrhaphy. However, due to the triple burden of patient care, research, and teaching, especially for doctors in academic centers, exposure to hands-on experience in the operating room (OR) and surgical training have become less prominent in current curricula [6]. Thus, training outside the OR in various simulation scenarios has become more important. Surgical training outside the OR can be provided, for example, by dry lab laparoscopy training [7,8], virtual reality training [9], or serious gaming [10]. All these modalities have been shown to be beneficial for trainees.

In addition to sparing resources during training, recent advances in technology might help to replace trainers with training systems that can assess trainees’ performance based on metric parameters [11]. Furthermore, it has been shown that artificial intelligence (AI) may greatly contribute to the improvement and automation of surgical training [12,13].

In 1955, John McCarthy, then at Dartmouth College, coined the term artificial intelligence, referring to the capability of building intelligent machines that can efficiently perform complex intellectual human tasks such as learning, thinking, reasoning, and problem-solving [14]. Generally, AI depends on the quantity, quality, and variability of the data available for training its models and systems, which is considered one of the major challenges to the development and robustness of different AI applications [15,16]. The potential advantages of this technology fascinated the healthcare industry, allowing its permeation into nearly all fields of medicine. This increasing interest in AI applications in the medical field was further aided by advancements in medical technology, such as the shift to electronic medical records, digital radiology, digital pathology, and minimally invasive surgery (robotics, laparoscopy, and endoscopy), which allowed the generation of large amounts of data [15].

AI in the healthcare industry consists of four subfields. Firstly, machine learning (ML) refers to the use of dynamic algorithms to identify and learn from complex patterns in a dataset, thus allowing the machine to make accurate predictions. Secondly, natural language processing (NLP) encompasses the ability of the computer to understand and process written and spoken language [17]. Thirdly, deep learning (DL) involves the use of massive datasets to train individual functioning units arranged in multiple connected layers resembling artificial neurons; these layered networks of units are known as artificial neural networks (ANNs) [18]. Finally, computer vision (CV) is the ability of the computer to identify and analyze different objects in an image or a video [17].
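
As a purely illustrative sketch (not drawn from any of the included studies), the following minimal example shows how such functioning units can be arranged in two connected layers; the input features and weights are hypothetical and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified linear unit: the non-linear activation of each artificial neuron
    return np.maximum(0.0, x)

def sigmoid(x):
    # Squashes the output into a 0-1 range, interpretable as a probability
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical input: 4 numeric features describing one surgical video frame
x = rng.normal(size=4)

# In practice these weights are learned from labeled data (the "training" step);
# here they are random, so the prediction is meaningless and purely illustrative.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # hidden layer of 8 artificial neurons
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # output layer: a single probability

hidden = relu(W1 @ x + b1)         # first connected layer
prob = sigmoid(W2 @ hidden + b2)   # second connected layer
print(f"Predicted probability (untrained network): {prob[0]:.3f}")
```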

In this context, the current systematic review aims to assess how AI might help to overcome the current limitations of surgical education and to establish a dedicated framework for kidney cancer surgery, which is particularly intricate in the tumor enucleation and renorrhaphy steps (two surgical aspects where artificial intelligence could be especially beneficial).

2. Materials and Methods

2.1. Search Strategy

A systematic review of the literature was carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [19]. The PRISMA 2020 checklist, PRISMA Abstract checklist [19], and PRISMA-S checklist [20] are provided as supplementary material. A systematic search of the PubMed and SCOPUS databases was conducted on 7 August 2023 to identify all articles concerned with the use of AI in kidney cancer surgical training. A combination of the following terms was used for the search: “robotic assisted partial nephrectomy”, “RAPN”, “partial nephrectomy”, “radical nephrectomy”, “nephroureterectomy”, “kidney cancer”, “Renal cancer”, “Annotation”, “machine learning”, “Deep learning”, “natural language processing”, “computer vision”, “artificial neural network”, “artificial intelligence”, “CV”, “NLP”, “DL”, “ANN”, “ML”, “AI”, “training”, “performance assessment”, “performance evaluation”, “virtual reality”, “VR”, “augmented reality”, “AR”, “simulation”, and “workflow”. The combination of the keywords used for each database searched is reported in the supplementary material.
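
As an illustration of how such keyword blocks can be combined, the sketch below builds a simplified boolean query; the grouping into three concept blocks and the plain OR/AND syntax are our own simplification, and the exact field tags submitted to PubMed and SCOPUS differ per database.

```python
# Illustrative subsets of the terms listed above, grouped into three concept blocks
procedure_terms = ["robotic assisted partial nephrectomy", "RAPN", "partial nephrectomy",
                   "radical nephrectomy", "nephroureterectomy", "kidney cancer", "renal cancer"]
ai_terms = ["machine learning", "deep learning", "natural language processing",
            "computer vision", "artificial neural network", "artificial intelligence"]
training_terms = ["training", "performance assessment", "performance evaluation",
                  "virtual reality", "augmented reality", "simulation", "workflow"]

def or_block(terms):
    # Join synonyms of one concept with OR, quoted as phrases
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Require at least one term from every concept block
query = " AND ".join(or_block(block) for block in (procedure_terms, ai_terms, training_terms))
print(query)
```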

2.2. Search Criteria

The inclusion criteria consisted of all original articles focusing on the utility of AI for surgical training of kidney cancer without any restrictions on the type of study (retrospective or prospective case series, clinical trials, cohort studies, or randomized controlled trials) or the date of publication. The articles were excluded if they were not published in the English language, had no original data (reviews, letters to the editor, commentaries, and editorials), or the full text was not available.

2.3. Screening and Article Selection

Two independent authors (AE and NR) screened all the search results by title and abstract to identify articles with clinical relevance to the topic of the current review according to the predefined inclusion and exclusion criteria. Duplicates were identified using the Mendeley reference manager (Elsevier Ltd., Amsterdam, The Netherlands) and were manually reviewed before exclusion. Subsequently, a full-text review was performed for all manuscripts remaining after the initial screening to determine the articles to be discussed in the current review. A manual review of the references of the included studies was performed to identify any additional relevant articles. Finally, a third author (SP) reviewed the search process and helped to resolve any discrepancies between the two reviewers.

2.4. Data Extraction

Data from the selected articles were collected independently by the same two authors in a standard Excel sheet. The following key aspects of the included studies were extracted: (1) first author and year of publication; (2) type of the study; (3) number of patients or cases included; (4) AI tools used for building the algorithms of the study; (5) a brief description of the methods used; (6) the main endpoints; and (7) the main findings of the study.
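
A minimal sketch of the extraction fields is given below for illustration; the field names and the example entry are hypothetical and do not reproduce the authors’ actual Excel template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractedStudy:
    first_author_and_year: str            # (1) first author and year of publication
    study_type: str                       # (2) type of the study
    n_patients_or_cases: Optional[int]    # (3) number of patients or cases included
    ai_tools: str                         # (4) AI tools used for building the algorithms
    methods_summary: str                  # (5) brief description of the methods used
    main_endpoints: str                   # (6) the main endpoints
    main_findings: str                    # (7) the main findings of the study

# Hypothetical entry (values invented for illustration only)
row = ExtractedStudy("Example 2023", "retrospective case series", 25,
                     "convolutional neural network (DL)", "video-based instrument segmentation",
                     "segmentation accuracy", "feasible real-time delineation")
print(row.first_author_and_year)
```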

2.5. Level of Evidence

Finally, the included studies were evaluated according to the 2011 Oxford Centre for Evidence-Based Medicine (OCEBM) levels of evidence [21]. The OCEBM levels function as a hierarchical guide for identifying the most reliable evidence and are designed to offer busy clinicians, researchers, or patients a quick means of appraising evidence on their own. While pre-appraised sources such as Clinical Evidence, NHS Clinical Knowledge Summaries, and UpToDate may offer more extensive information, they carry the potential for over-reliance on expert opinion. It is important to note that the OCEBM levels do not provide a definitive assessment of evidence quality: in certain instances, lower-level evidence, such as a compelling observational study, can yield stronger evidence than a higher-level study, such as a systematic review with inconclusive findings. Moreover, the levels do not offer specific recommendations; they act as a framework for evaluating evidence, and ultimate decisions should be guided by clinical judgment and the unique circumstances of each patient. In short, the levels serve as efficient tools for swift clinical decision-making without reliance on pre-appraised sources, providing practical rules of thumb comparable in effectiveness to more intricate approaches. Importantly, they encompass a broad spectrum of clinical questions, enabling the assessment of evidence regarding prevalence, diagnostic accuracy, prognosis, treatment effects, risks, and screening effectiveness [22].

3. Results

3.1. Search Results

Overall, the search identified 468 records, of which 53 were excluded as duplicates. The initial screening of the remaining 415 articles by title and abstract resulted in the exclusion of 385 articles that did not meet the inclusion criteria of the current review. The remaining 30 articles were eligible for full-text review, after which another 16 were excluded for different reasons. Only one article caused disagreement between the reviewers; it assessed the validity of an AI tool that may guide the decision to perform either partial or radical nephrectomy (after discussion between the reviewers, it was excluded because this tool has no effect on the surgical skills of the novice) [23]. Finally, 14 articles met our inclusion criteria and were included in the current systematic review [24,25,26,27,28,29,30,31,32,33,34,35,36,37]. Figure 1 illustrates the PRISMA flow diagram for the search process.
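
The selection counts reported above are internally consistent, as the short arithmetic check below illustrates.

```python
# Consistency check of the PRISMA flow counts reported above
identified = 468
duplicates = 53
screened = identified - duplicates          # records screened by title and abstract
excluded_at_screening = 385
full_text_assessed = screened - excluded_at_screening
excluded_at_full_text = 16
included = full_text_assessed - excluded_at_full_text

assert (screened, full_text_assessed, included) == (415, 30, 14)
print(screened, full_text_assessed, included)   # 415 30 14
```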

Figure 1. PRISMA flow diagram for the search process.

3.2. AI and Surgical Training for Kidney Cancer

There is a scarcity of evidence in the literature regarding the application of AI to enhance surgical skills in the urological discipline. This also applies to the field of surgical training for kidney cancer, where AI has predominantly been employed in the realm of surgical simulations and robot-assisted surgery [24,25,26,27,28,29,30,31,32,33,34,35,36,37]. A summary of the included studies concerning the use of AI in the field of renal cancer training is provided in the accompanying table.

4. Discussion

AI will potentially change the landscape of medicine and reshape the healthcare industry over the coming years. Generally, the applications of AI in healthcare include, but are not limited to, drug development, health monitoring, medical data management, disease diagnostics and decision aids, digital consultation, personalized disease treatment, analysis of health plans, and surgical education [38,39,40]. Expectations for AI applications in medicine are high, and some in the healthcare industry believe that if AI systems are currently capable of efficiently driving cars, they might one day be able to autonomously control surgical robots. However, it should be noted that AI is not a replacement for the human factor; it is a tool to help medical professionals do their job more efficiently and safely [41]. The urological field is no exception: AI has already made significant advancements in enhancing diagnosis, prognosis, outcome prediction, and treatment planning [17,42,43,44,45].

Considering surgical training, numerous studies have emphasized the significance of surgical skills in determining patient outcomes, including mortality, complication rates, operation length, and re-operation and re-admission rates [46,47]. Interestingly, surgical skills may account for up to 25% of the variation in patient outcomes [48]. Therefore, it is crucial to evaluate surgical skills effectively to enhance training, credentialing, and education and ultimately provide the highest quality of care to patients.

Medical training has traditionally followed the Halstedian model, where trainees observe, perform, and then teach procedures [49]. However, new regulations, increased paperwork, and concerns about inexperienced surgeons operating on patients have highlighted the need for a change in surgical training [49,50]. This is particularly important because studies have shown that complications tend to occur during the early stages of a surgeon’s learning curve [51]. Therefore, surgical training should prioritize structured and validated processes, including proficiency-based progression training and objective assessments. Accordingly, training should involve practice in a controlled setting or through simulations, where trainees can only move on to real-life procedures after reaching an established proficiency benchmark, thereby enhancing patient safety [52,53]. Of note, performance evaluation currently requires manual peer appraisal by trained surgical experts, either during surgery or through review of surgical videos. This is a time-consuming and unreliable process due to the lack of a standardized definition of success among different surgeons [54].

Interestingly, AI can be integrated with conventional proficiency-based training approaches to provide an objective assessment of surgical skills [52], which has paved the way for the development of automated machine-based scoring methods, particularly in the field of robotic surgery [55,56,57,58]. A recent randomized controlled trial demonstrated that the Virtual Operative Assistant (an AI-based tutoring system) provided superior performance outcomes and better skill acquisition compared to remote expert tutoring [59].
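
To make the notion of automated machine-based scoring concrete, the sketch below derives simple kinematic metrics from a hypothetical instrument-tip trajectory; the data, sampling rate, and metric choices are assumptions for illustration and do not reproduce the cited systems [55,56,57,58].

```python
import numpy as np

# Hypothetical instrument-tip trajectory (mm) sampled at 30 Hz from a robotic console
rng = np.random.default_rng(1)
positions = np.cumsum(rng.normal(scale=0.5, size=(900, 3)), axis=0)   # ~30 s of motion
dt = 1.0 / 30.0

steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
path_length = steps.sum()                    # total distance travelled (mm)
mean_velocity = path_length / (len(steps) * dt)
idle_ratio = np.mean(steps < 0.05)           # fraction of nearly motionless samples

print(f"path length {path_length:.0f} mm, mean velocity {mean_velocity:.1f} mm/s, "
      f"idle ratio {idle_ratio:.2f}")
```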

Considering the autonomous assessment of surgical skills during kidney cancer surgery, we are still taking our first steps: most of the published studies are concerned with the identification of the surgical workflow of RAPN [24,26], instrument annotation [33,35], and the identification of different tissues (blood vessels, tumors, anatomical spaces) [27,30,31,37], while only one study took a step forward towards actual automatic assessment of surgical skills during RAPN [32].
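
Such segmentation and annotation studies are commonly evaluated with overlap metrics rather than clinical endpoints; the following sketch shows the Dice similarity coefficient, a standard score for comparing a predicted mask with a manually annotated one (the masks here are synthetic, and the metric is a common convention rather than one prescribed by the cited papers).

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    # Dice similarity coefficient between a predicted and a ground-truth binary mask
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Synthetic masks standing in for, e.g., an annotated instrument or vessel region
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool);  pred[24:44, 22:42] = True
print(f"Dice = {dice(pred, truth):.3f}")
```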

In this context, it is evident that kidney imaging plays a pivotal role in the field of AI applied to renal surgical training. Over recent years, there has been a consistent drive for innovative imaging techniques aimed at empowering surgeons to conduct thorough examinations of the kidney, including three-dimensional (3D) reconstruction technology. The use of 3D models results in patient-specific virtual or physical replicas and has been associated with reduced operative time, clamping duration, and estimated blood loss [60,61]. Additionally, these techniques play a crucial role in educating patients and their families, providing a deeper understanding of tumor characteristics and the range of available treatment options [62,63]. The application of 3D volumetry, a technology utilizing CT scans to assess renal volume, proves essential in evaluating split renal function [64]. The introduction of holographic technology represents a ground-breaking approach, providing an immersive and interactive experience based on 3D visualization and fostering a greater appreciation of patient-specific anatomy [65].
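
As a simplified illustration of the volumetric principle behind split renal function assessment, each kidney’s share of the total parenchymal volume can be used to approximate its share of function; the volumes in the sketch below are invented, and real assessments also account for resected and devascularized tissue.

```python
# Hypothetical parenchymal volumes from CT 3D volumetry (mL)
left_volume_ml, right_volume_ml = 145.0, 172.0
total_volume = left_volume_ml + right_volume_ml
split_left = left_volume_ml / total_volume
print(f"Estimated split function: left {split_left:.0%}, right {1 - split_left:.0%}")
```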

Moreover, intraoperative navigation guided by 3D virtual models leads to lower complication rates and improved outcomes [66]. Intraoperative imaging, which encompasses both morphological and fluorescence techniques, is vital for tumor identification and assessment of ischemia through tools such as laparoscopic US probes [67] and intraoperative fluorescent imaging [68]. Embracing pathological intraoperative imaging through technologies such as fluorescence confocal microscopy [69,70] and optical coherence tomography [71] holds immense promise in guiding the treatment of small renal masses and advancing cancer control.

In their comprehensive examination of innovative imaging technologies for robotic kidney cancer surgery, Puliatti et al. meticulously detail the techniques mentioned above, highlighting the potential application of 3D visualization technologies and augmented reality navigation for guiding operations and providing training in renal cancer surgery [72]. Furthermore, the integration of VR and AR with AI is capable of improving robotic renal cancer surgeries and training [73]. In particular, AR may have great potential for reducing surgical complications and improving outcomes after surgery by guiding novice surgeons through the initial learning curve. However, the main limitation of AR-guided surgery is the registration process, in which the 3D reconstructed model is superimposed over the corresponding anatomy in the endoscopic view. Furthermore, real-time object or structure tracking is another concern. The kidney is a more complex target (it is mobile and not fixed by rigid anatomical constraints) compared to other organs, such as the prostate, and thus requires more sophisticated registration and tracking techniques [25]. In this setting, four studies in the literature focused on the use of AI for the automatic registration of 3D models and for real-time tissue tracking during surgery to overcome the limitations of tissue deformability and mobility [25,29,34,36]. Future work should concentrate on annotating soft tissues to study and quantify tool-tissue interactions. Accurate soft tissue segmentation in conjunction with instrument segmentation is crucial for successful augmented reality applications, ensuring the correct registration of 3D models with the intraoperative view [74].
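
To illustrate the registration step in its simplest rigid form, the sketch below aligns a few corresponding landmarks with the classical Kabsch algorithm; the cited studies address the much harder markerless and deformable case with AI, and all point data here are synthetic.

```python
import numpy as np

def kabsch(model_pts: np.ndarray, target_pts: np.ndarray):
    # Least-squares rotation R and translation t mapping model points onto target points
    cm, ct = model_pts.mean(axis=0), target_pts.mean(axis=0)
    H = (model_pts - cm).T @ (target_pts - ct)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cm
    return R, t

rng = np.random.default_rng(2)
model = rng.normal(size=(6, 3))                     # landmarks on the preoperative 3D model
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
target = model @ R_true.T + np.array([5.0, 0.0, -2.0])   # same landmarks seen intraoperatively

R, t = kabsch(model, target)
print(np.allclose(model @ R.T + t, target))         # True: rigid registration recovered
```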

Accordingly, training machines to identify anatomical spaces, instruments, and the different stages of various procedures is crucial, not only for offering real-time assistance and feedback to surgeons during operations but also for giving young surgeons in a training environment an understanding of their proficiency level in a specific surgical step. When it comes to kidney cancer, we still have a considerable distance to go before achieving this type of application.

4.1. AI Limitations

One of the major limitations restricting AI-based publications in the field of kidney cancer surgical training is the lack of standardized metrics for the objective evaluation of trainees’ ability to perform RAPN. To address this, Farinha et al. [75] presented an international expert consensus on the metric-based characterization of left-sided RAPN cases, although these metrics are not applicable to other scenarios. They also established evidence supporting the validity of these metrics, showing reliable scoring and discrimination between experienced and novice RAPN surgeons [76]. Continued progress in RAPN metric development should, in the near future, allow the integration of AI to enable real-time error recognition during surgery and to provide feedback during training, which holds promise for offering valuable insights and assistance in enhancing surgical performance.

Additionally, robotic surgery serves as an ideal testing ground for the advancement of AI-based programs due to its ability to capture detailed records of surgeons’ movements and provide continuous visualization of instruments. According to a Delphi consensus statement in 2022, the integration of AI into robotic surgical training holds significant promise, but it also introduces ethical risks. These risks encompass data privacy, transparency, biases, accountability, and liabilities that require recognition and resolution [77].

Furthermore, despite the extensive published literature on the significant potential of AI, there are no reports on its efficacy in improving patient safety in robot-assisted surgery [14].

Other general limitations to the robustness and acceptance of AI applications in medicine include the heterogeneity of the research methodology of the published studies; the restricted generalizability of most developed AI algorithms, which are trained and validated on similar datasets and may therefore overfit; and the fact that many ML algorithms (particularly ANNs) are very complex and difficult to interpret, resembling a black box, which represents an obstacle to rigorous testing of these algorithms [78]. Finally, large labeled datasets in the medical field are scarce in comparison to other fields [79]. In this setting, verified benchmark datasets are essential; however, producing a high-quality benchmark dataset is a complex and time-consuming process [80]. Most of the available benchmark datasets in the field of kidney cancer mainly relate to the histopathological or radiological evaluation of kidneys (i.e., the 2019 kidney and kidney tumor segmentation challenge [KiTS19], KiTS21, and KMC datasets) [81,82,83]. To our knowledge, there are no benchmark datasets related to the segmentation of instruments and tissues during minimally invasive partial nephrectomy.

Notably, it is not always a matter of dataset size and quality. Varoquaux and colleagues effectively shed light on methodological errors across various aspects of clinical imaging in their review [79]. Despite extensive research in this field, the clinical impact remains constrained. The review pinpoints challenges in dataset selection, evaluation methods, and publication incentives, proposes strategies for improvement, and underscores the necessity for procedural and normative changes to unlock the full potential of machine learning in healthcare. The authors stress the tendency for research to be driven by academic incentives rather than the needs of clinicians and patients. Dataset bias is identified as a significant concern, emphasizing the importance of accurately representative datasets. Robust evaluation methods beyond benchmark performance are called for, along with the adoption of sound statistical practices. The publication process may also fall short of promoting clarity, potentially hindering reproducibility and transparency. Researchers are therefore urged to prioritize scientific problem-solving over publication optimization and to consider broader impacts beyond benchmarks [79].
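
One basic safeguard against the overfitting and data-leakage problem described above is to split data by patient (or center) rather than by individual frame; the sketch below illustrates this with hypothetical frame and patient counts.

```python
import numpy as np

# Hypothetical example: 500 video frames drawn from 20 patients. Splitting by patient
# prevents frames from the same case appearing in both training and test sets.
rng = np.random.default_rng(3)
frame_patient_ids = rng.integers(0, 20, size=500)

patients = rng.permutation(np.unique(frame_patient_ids))
test_patients = set(patients[: len(patients) // 5].tolist())     # hold out ~20% of patients

test_mask = np.isin(frame_patient_ids, list(test_patients))
train_idx, test_idx = np.flatnonzero(~test_mask), np.flatnonzero(test_mask)
print(f"{train_idx.size} training frames, {test_idx.size} test frames, "
      f"{len(test_patients)} held-out patients")
```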

4.2. Future Perspectives

AI-powered technologies will impact all areas of surgical education. Starting with communication between trainers and trainees, AI-powered language translators will allow, for example, an English-speaking trainer and a Chinese-speaking trainee (or vice versa) to interact in their native languages, improving communication and breaking down language and cultural barriers. On e-learning platforms, AI can be used to reliably verify trainees’ identities through biometric facial recognition technologies, thus allowing personalized e-learning courses that are customized to the previous knowledge and skills of each trainee. Furthermore, AI will facilitate the classification, editing, and tagging of videos to create personalized e-learning platform content.

Soon, AI-tutoring systems will be able to objectively assess trainees’ surgical skills in the laboratory and in clinical practice, providing personalized quantitative feedback on their technical proficiency. This feedback will help identify areas for skill improvement and track progress over time. Defining expert proficiency benchmarks will become easier, and comparing them with a trainee’s performance will facilitate proficiency-based training. This monitoring will allow active and constant adaptation of the training program to individual needs. These systems will use CV and ML techniques to monitor trainees’ actions, identify errors and critical errors, and offer corrective advice. They will serve as virtual trainers, enhancing the learning experience and ensuring that best practices are followed. A bilateral conversation, enabled through voice recognition software, will allow personalized virtual trainers to answer questions and keep trainees informed of their progress.

Furthermore, combining AI with VR and AR technologies will increase the potential to create impactful and immersive education and training experiences. This combination will be capable of guiding novice surgeons step-by-step during their initial learning curve, thus improving surgical outcomes.

Finally, AI will aid in continuous learning and knowledge integration since its algorithms will process vast amounts of medical literature, surgical videos, and patient data, extracting relevant insights, identifying trends, and providing trainees with up-to-date information.

5. Conclusions

The applications of AI in the field of surgical training for kidney cancer are still in the initial phase of discovery, with multiple limitations and restrictions. However, AI-based surgical training holds the promise of improving the quality of surgical training without compromising patient safety. Further studies are required to explore the potential of this technology in the surgical education of renal cancer.

References

  1. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  2. Thorstenson, A.; Bergman, M.; Scherman-Plogell, A.-H.; Hosseinnia, S.; Ljungberg, B.; Adolfsson, J.; Lundstam, S. Tumour characteristics and surgical treatment of renal cell carcinoma in Sweden 2005–2010: A population-based study from the National Swedish Kidney Cancer Register. Scand. J. Urol. 2014, 48, 231–238. [Google Scholar] [CrossRef] [PubMed]
  3. Tahbaz, R.; Schmid, M.; Merseburger, A.S. Prevention of kidney cancer incidence and recurrence. Curr. Opin. Urol. 2018, 28, 62–79. [Google Scholar] [CrossRef]
  4. Moch, H.; Cubilla, A.L.; Humphrey, P.A.; Reuter, V.E.; Ulbright, T.M. The 2016 WHO Classification of Tumours of the Urinary System and Male Genital Organs—Part A: Renal, Penile, and Testicular Tumours. Eur. Urol. 2016, 70, 93–105. [Google Scholar] [CrossRef] [PubMed]
  5. Ljungberg, B.; Albiges, L.; Abu-Ghanem, Y.; Bedke, J.; Capitanio, U.; Dabestani, S.; Fernández-Pello, S.; Giles, R.H.; Hofmann, F.; Hora, M.; et al. European Association of Urology Guidelines on Renal Cell Carcinoma: The 2022 Update. Eur. Urol. 2022, 82, 399–410. [Google Scholar] [CrossRef] [PubMed]
  6. El Sherbiny, A.; Eissa, A.; Ghaith, A.; Morini, E.; Marzotta, L.; Sighinolfi, M.C.; Micali, S.; Bianchi, G.; Rocco, B. Training in urological robotic surgery. Future perspectives. Arch. Españoles Urol. 2018, 71, 97–107. [Google Scholar]
  7. Kowalewski, K.F.; Garrow, C.R.; Proctor, T.; Preukschas, A.A.; Friedrich, M.; Müller, P.C.; Kenngott, H.G.; Fischer, L.; Müller-Stich, B.P.; Nickel, F. LapTrain: Multi-modality training curriculum for laparoscopic cholecystectomy—Results of a randomized controlled trial. Surg. Endosc. 2018, 32, 3830–3838. [Google Scholar] [CrossRef]
  8. Kowalewski, K.-F.; Minassian, A.; Hendrie, J.D.; Benner, L.; Preukschas, A.A.; Kenngott, H.G.; Fischer, L.; Müller-Stich, B.P.; Nickel, F. One or two trainees per workplace for laparoscopic surgery training courses: Results from a randomized controlled trial. Surg. Endosc. 2019, 33, 1523–1531. [Google Scholar] [CrossRef]
  9. Nickel, F.; Brzoska, J.A.; Gondan, M.; Rangnick, H.M.; Chu, J.; Kenngott, H.G.; Linke, G.R.; Kadmon, M.; Fischer, L.; Müller-Stich, B.P. Virtual Reality Training Versus Blended Learning of Laparoscopic Cholecystectomy. Medicine 2015, 94, e764. [Google Scholar] [CrossRef]
  10. Kowalewski, K.-F.; Hendrie, J.D.; Schmidt, M.W.; Proctor, T.; Paul, S.; Garrow, C.R.; Kenngott, H.G.; Müller-Stich, B.P.; Nickel, F. Validation of the mobile serious game application Touch SurgeryTM for cognitive training and assessment of laparoscopic cholecystectomy. Surg. Endosc. 2017, 31, 4058–4066. [Google Scholar] [CrossRef]
  11. Kowalewski, K.-F.; Hendrie, J.D.; Schmidt, M.W.; Garrow, C.R.; Bruckner, T.; Proctor, T.; Paul, S.; Adigüzel, D.; Bodenstedt, S.; Erben, A.; et al. Development and validation of a sensor- and expert model-based training system for laparoscopic surgery: The iSurgeon. Surg. Endosc. 2017, 31, 2155–2165. [Google Scholar] [CrossRef]
  12. Garrow, C.R.B.; Kowalewski, K.-F.; Li, L.B.; Wagner, M.; Schmidt, M.W.; Engelhardt, S.; Hashimoto, D.A.; Kenngott, H.G.M.; Bodenstedt, S.; Speidel, S.; et al. Machine Learning for Surgical Phase Recognition. Ann. Surg. 2021, 273, 684–693. [Google Scholar] [CrossRef]
  13. Kowalewski, K.-F.; Garrow, C.R.; Schmidt, M.W.; Benner, L.; Müller-Stich, B.P.; Nickel, F. Sensor-based machine learning for workflow detection and as key to detect expert level in laparoscopic suturing and knot-tying. Surg. Endosc. 2019, 33, 3732–3740. [Google Scholar] [CrossRef]
  14. Moglia, A.; Georgiou, K.; Georgiou, E.; Satava, R.M.; Cuschieri, A. A systematic review on artificial intelligence in robot-assisted surgery. Int. J. Surg. 2021, 95, 106151. [Google Scholar] [CrossRef]
  15. Chen, J.; Remulla, D.; Nguyen, J.H.; Dua, A.; Liu, Y.; Dasgupta, P.; Hung, A.J. Current status of artificial intelligence applications in urology and their potential to influence clinical practice. BJU Int. 2019, 124, 567–577. [Google Scholar] [CrossRef]
  16. Kowalewski, K.-F.; Egen, L.; Fischetti, C.E.; Puliatti, S.; Juan, G.R.; Taratkin, M.; Ines, R.B.; Abate, M.A.S.; Mühlbauer, J.; Wessels, F.; et al. Artificial intelligence for renal cancer: From imaging to histology and beyond. Asian J. Urol. 2022, 9, 243–252. [Google Scholar] [CrossRef]
  17. Shah, M.; Naik, N.; Somani, B.K.; Hameed, B.Z. Artificial intelligence (AI) in urology-Current use and future directions: An iTRUE study. Turk. J. Urol. 2020, 46, S27–S39. [Google Scholar] [CrossRef]
  18. Suarez-Ibarrola, R.; Hein, S.; Reis, G.; Gratzke, C.; Miernik, A. Current and future applications of machine and deep learning in urology: A review of the literature on urolithiasis, renal cell carcinoma, and bladder and prostate cancer. World J. Urol. 2020, 38, 2329–2347. [Google Scholar] [CrossRef] [PubMed]
  19. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Brennan, S.E.; Moher, D.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  20. Rethlefsen, M.L.; Kirtley, S.; Waffenschmidt, S.; Ayala, A.P.; Moher, D.; Page, M.J.; Koffel, J.B.; PRISMA-S Group. PRISMA-S: An extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst. Rev. 2021, 10, 39. [Google Scholar] [CrossRef] [PubMed]
  21. Howick, J.; Chalmers, I.; Glasziou, P.; Greenhalgh, T.; Heneghan, C.; Liberati, A.; Moschetti, I.; Phillips, B.; Thornton, H.; Goddard, O.; et al. The Oxford 2011 Levels of Evidence. CEBM 2011. Available online: https://www.cebm.ox.ac.uk/resources/levels-of-evidence/ocebm-levels-of-evidence (accessed on 16 August 2023).
  22. Howick, J.; Chalmers, I.; Glasziou, P.; Greenhalgh, T.; Heneghan, C.; Liberati, A.; Moschetti, I.; Phillips, B.; Thornton, H. The 2011 Oxford CEBM Evidence Levels of Evidence (Introductory Document) Oxford Cent Evidence-Based Med n.d. Available online: http://www.cebm.net/index.aspx?o=5653 (accessed on 14 September 2023).
  23. Yang, H.; Wu, K.; Liu, H.; Wu, P.; Yuan, Y.; Wang, L.; Liu, Y.; Zeng, H.; Li, J.; Liu, W.; et al. An automated surgical decision-making framework for partial or radical nephrectomy based on 3D-CT multi-level anatomical features in renal cell carcinoma. Eur. Radiol. 2023, 2023, 1–10. [Google Scholar] [CrossRef] [PubMed]
  24. Nakawala, H.; Bianchi, R.; Pescatori, L.E.; De Cobelli, O.; Ferrigno, G.; De Momi, E. “Deep-Onto” network for surgical workflow and context recognition. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 685–696. [Google Scholar] [CrossRef] [PubMed]
  25. Padovan, E.; Marullo, G.; Tanzi, L.; Piazzolla, P.; Moos, S.; Porpiglia, F.; Vezzetti, E. A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2022, 18, e2387. [Google Scholar] [CrossRef] [PubMed]
  26. Nakawala, H.; De Momi, E.; Bianchi, R.; Catellani, M.; De Cobelli, O.; Jannin, P.; Ferrigno, G.; Fiorini, P. Toward a Neural-Symbolic Framework for Automated Workflow Analysis in Surgery. In Proceedings of the XV Mediterranean Conference on Medical and Biological Engineering and Computing–MEDICON 2019, Coimbra, Portugal, 26–28 September 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1551–1558. [Google Scholar] [CrossRef]
  27. Casella, A.; Moccia, S.; Carlini, C.; Frontoni, E.; De Momi, E.; Mattos, L.S. NephCNN: A deep-learning framework for vessel segmentation in nephrectomy laparoscopic videos. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway Township, NJ, USA, 2021; pp. 6144–6149. [Google Scholar] [CrossRef]
  28. Gao, Y.; Tang, Y.; Ren, D.; Cheng, S.; Wang, Y.; Yi, L.; Peng, S. Deep Learning Plus Three-Dimensional Printing in the Management of Giant (>15 cm) Sporadic Renal Angiomyolipoma: An Initial Report. Front. Oncol. 2021, 11, 724986. [Google Scholar] [CrossRef] [PubMed]
  29. Zhang, X.; Wang, J.; Wang, T.; Ji, X.; Shen, Y.; Sun, Z.; Zhang, X. A markerless automatic deformable registration framework for augmented reality navigation of laparoscopy partial nephrectomy. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1285–1294. [Google Scholar] [CrossRef]
  30. Amir-Khalili, A.; Peyrat, J.-M.; Abinahed, J.; Al-Alao, O.; Al-Ansari, A.; Hamarneh, G.; Abugharbieh, R. Auto Localization and Segmentation of Occluded Vessels in Robot-Assisted Partial Nephrectomy. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2014: 17th International Conference, Boston, MA, USA, 14–18 September 2014; pp. 407–414. [Google Scholar] [CrossRef]
  31. Nosrati, M.S.; Amir-Khalili, A.; Peyrat, J.-M.; Abinahed, J.; Al-Alao, O.; Al-Ansari, A.; Abugharbieh, R.; Hamarneh, G. Endoscopic scene labelling and augmentation using intraoperative pulsatile motion and colour appearance cues with preoperative anatomical priors. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1409–1418. [Google Scholar] [CrossRef]
  32. Wang, Y.; Wu, Z.; Dai, J.; Morgan, T.N.; Garbens, A.; Kominsky, H.; Gahan, J.; Larson, E.C. Evaluating robotic-assisted partial nephrectomy surgeons with fully convolutional segmentation and multi-task attention networks. J. Robot. Surg. 2023, 17, 2323–2330. [Google Scholar] [CrossRef]
  33. De Backer, P.; Van Praet, C.; Simoens, J.; Lores, M.P.; Creemers, H.; Mestdagh, K.; Allaeys, C.; Vermijs, S.; Piazza, P.; Mottaran, A.; et al. Improving Augmented Reality Through Deep Learning: Real-time Instrument Delineation in Robotic Renal Surgery. Eur. Urol. 2023, 84, 86–91. [Google Scholar] [CrossRef]
  34. Amparore, D.; Checcucci, E.; Piazzolla, P.; Piramide, F.; De Cillis, S.; Piana, A.; Verri, P.; Manfredi, M.; Fiori, C.; Vezzetti, E.; et al. Indocyanine Green Drives Computer Vision Based 3D Augmented Reality Robot Assisted Partial Nephrectomy: The Beginning of “Automatic” Overlapping Era. Urology 2022, 164, e312–e316. [Google Scholar] [CrossRef]
  35. De Backer, P.; Eckhoff, J.A.; Simoens, J.; Müller, D.T.; Allaeys, C.; Creemers, H.; Hallemeesch, A.; Mestdagh, K.; Van Praet, C.; Debbaut, C.; et al. Multicentric exploration of tool annotation in robotic surgery: Lessons learned when starting a surgical artificial intelligence project. Surg. Endosc. 2022, 36, 8533–8548. [Google Scholar] [CrossRef]
  36. Yip, M.C.; Lowe, D.G.; Salcudean, S.E.; Rohling, R.N.; Nguan, C.Y. Tissue Tracking and Registration for Image-Guided Surgery. IEEE Trans. Med. Imaging 2012, 31, 2169–2182. [Google Scholar] [CrossRef] [PubMed]
  37. Amir-Khalili, A.; Hamarneh, G.; Peyrat, J.-M.; Abinahed, J.; Al-Alao, O.; Al-Ansari, A.; Abugharbieh, R. Automatic segmentation of occluded vasculature via pulsatile motion analysis in endoscopic robot-assisted partial nephrectomy video. Med. Image Anal. 2015, 25, 103–110. [Google Scholar] [CrossRef] [PubMed]
  38. Amisha, M.P.; Pathania, M.; Rathaur, V.K. Overview of artificial intelligence in medicine. J. Fam. Med. Prim. Care 2019, 8, 2328–2331. [Google Scholar] [CrossRef] [PubMed]
  39. Ward, T.M.; Mascagni, P.; Madani, A.; Padoy, N.; Perretta, S.; Hashimoto, D.A. Surgical data science and artificial intelligence for surgical education. J. Surg. Oncol. 2021, 124, 221–230. [Google Scholar] [CrossRef] [PubMed]
  40. Veneziano, D.; Cacciamani, G.; Rivas, J.G.; Marino, N.; Somani, B.K. VR and machine learning: Novel pathways in surgical hands-on training. Curr. Opin. Urol. 2020, 30, 817–822. [Google Scholar] [CrossRef] [PubMed]
  41. Heller, N.; Weight, C. “The Algorithm Will See You Now”: The Role of Artificial (and Real) Intelligence in the Future of Urology. Eur. Urol. Focus 2021, 7, 669–671. [Google Scholar] [CrossRef]
  42. Cacciamani, G.E.; Nassiri, N.; Varghese, B.; Maas, M.; King, K.G.; Hwang, D.; Abreu, A.; Gill, I.; Duddalwar, V. Radiomics and Bladder Cancer: Current Status. Bladder Cancer 2020, 6, 343–362. [Google Scholar] [CrossRef]
  43. Sugano, D.; Sanford, D.; Abreu, A.; Duddalwar, V.; Gill, I.; Cacciamani, G.E. Impact of radiomics on prostate cancer detection: A systematic review of clinical applications. Curr. Opin. Urol. 2020, 30, 754–781. [Google Scholar] [CrossRef]
  44. Aminsharifi, A.; Irani, D.; Tayebi, S.; Jafari Kafash, T.; Shabanian, T.; Parsaei, H. Predicting the Postoperative Outcome of Percutaneous Nephrolithotomy with Machine Learning System: Software Validation and Comparative Analysis with Guy’s Stone Score and the CROES Nomogram. J. Endourol. 2020, 34, 692–699. [Google Scholar] [CrossRef]
  45. Feng, Z.; Rong, P.; Cao, P.; Zhou, Q.; Zhu, W.; Yan, Z.; Liu, Q.; Wang, W. Machine learning-based quantitative texture analysis of CT images of small renal masses: Differentiation of angiomyolipoma without visible fat from renal cell carcinoma. Eur. Radiol. 2018, 28, 1625–1633. [Google Scholar] [CrossRef]
  46. Birkmeyer, J.D.; Finks, J.F.; O’Reilly, A.; Oerline, M.; Carlin, A.M.; Nunn, A.R.; Dimick, J.; Banerjee, M.; Birkmeyer, N.J. Surgical Skill and Complication Rates after Bariatric Surgery. N. Engl. J. Med. 2013, 369, 1434–1442. [Google Scholar] [CrossRef] [PubMed]
  47. Hung, A.J.; Chen, J.; Ghodoussipour, S.; Oh, P.J.; Liu, Z.; Nguyen, J.; Purushotham, S.; Gill, I.S.; Liu, Y. A deep-learning model using automated performance metrics and clinical features to predict urinary continence recovery after robot-assisted radical prostatectomy. BJU Int. 2019, 124, 487–495. [Google Scholar] [CrossRef] [PubMed]
  48. Stulberg, J.J.; Huang, R.; Kreutzer, L.; Ban, K.; Champagne, B.J.; Steele, S.R.; Johnson, J.K.; Holl, J.L.; Greenberg, C.C.; Bilimoria, K.Y. Association Between Surgeon Technical Skills and Patient Outcomes. JAMA Surg. 2020, 155, 960. [Google Scholar] [CrossRef] [PubMed]
  49. Amato, M.; Eissa, A.; Puliatti, S.; Secchi, C.; Ferraguti, F.; Minelli, M.; Meneghini, A.; Landi, I.; Guarino, G.; Sighinolfi, M.C.; et al. Feasibility of a telementoring approach as a practical training for transurethral enucleation of the benign prostatic hyperplasia using bipolar energy: A pilot study. World J. Urol. 2021, 39, 3465–3471. [Google Scholar] [CrossRef] [PubMed]
  50. Maybury, C. The European Working Time Directive: A decade on. Lancet 2014, 384, 1562–1563. [Google Scholar] [CrossRef] [PubMed]
  51. Foell, K.; Finelli, A.; Yasufuku, K.; Bernardini, M.Q.; Waddell, T.K.; Pace, K.T.; Honey, R.J.D.; Lee, J.Y. Robotic surgery basic skills training: Evaluation of a pilot multidisciplinary simulation-based curriculum. Can. Urol. Assoc. J. 2013, 7, 430. [Google Scholar] [CrossRef]
  52. Gallagher, A.G. Metric-based simulation training to proficiency in medical education: What it is and how to do it. Ulster Med. J. 2012, 81, 107–113. [Google Scholar]
  53. Gallagher, A.G.; Ritter, E.M.; Champion, H.; Higgins, G.; Fried, M.P.; Moses, G.; Smith, C.D.; Satava, R.M. Virtual reality simulation for the operating room: Proficiency-based training as a paradigm shift in surgical skills training. Ann. Surg. 2005, 241, 364–372. [Google Scholar] [CrossRef]
  54. Hameed, B.; Dhavileswarapu, A.S.; Raza, S.; Karimi, H.; Khanuja, H.; Shetty, D.; Ibrahim, S.; Shah, M.; Naik, N.; Paul, R.; et al. Artificial Intelligence and Its Impact on Urological Diseases and Management: A Comprehensive Review of the Literature. J. Clin. Med. 2021, 10, 1864. [Google Scholar] [CrossRef]
  55. Andras, I.; Mazzone, E.; van Leeuwen, F.W.B.; De Naeyer, G.; van Oosterom, M.N.; Beato, S.; Buckle, T.; O’sullivan, S.; van Leeuwen, P.J.; Beulens, A.; et al. Artificial intelligence and robotics: A combination that is changing the operating room. World J. Urol. 2020, 38, 2359–2366. [Google Scholar] [CrossRef]
  56. Sarikaya, D.; Corso, J.J.; Guru, K.A. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection. IEEE Trans. Med. Imaging 2017, 36, 1542–1549. [Google Scholar] [CrossRef] [PubMed]
  57. Fard, M.J.; Ameri, S.; Darin Ellis, R.; Chinnam, R.B.; Pandya, A.K.; Klein, M.D. Automated robot-assisted surgical skill evaluation: Predictive analytics approach. Int. J. Med. Robot. Comput. Assist. Surg. 2018, 14, e1850. [Google Scholar] [CrossRef] [PubMed]
  58. Hung, A.J.; Chen, J.; Gill, I.S. Automated Performance Metrics and Machine Learning Algorithms to Measure Surgeon Performance and Anticipate Clinical Outcomes in Robotic Surgery. JAMA Surg. 2018, 153, 770. [Google Scholar] [CrossRef]
  59. Fazlollahi, A.M.; Bakhaidar, M.; Alsayegh, A.; Yilmaz, R.; Winkler-Schwartz, A.; Mirchi, N.; Langleben, I.; Ledwos, N.; Sabbagh, A.J.; Bajunaid, K.; et al. Effect of Artificial Intelligence Tutoring vs Expert Instruction on Learning Simulated Surgical Skills Among Medical Students. JAMA Netw. Open 2022, 5, e2149008. [Google Scholar] [CrossRef] [PubMed]
  60. Kim, J.K.; Ryu, H.; Kim, M.; Kwon, E.; Lee, H.; Park, S.J.; Byun, S. Personalised three-dimensional printed transparent kidney model for robot-assisted partial nephrectomy in patients with complex renal tumours (R.E.N.A.L. nephrometry score ≥ 7): A prospective case-matched study. BJU Int. 2021, 127, 567–574. [Google Scholar] [CrossRef]
  61. Shirk, J.D.; Kwan, L.; Saigal, C. The Use of 3-Dimensional, Virtual Reality Models for Surgical Planning of Robotic Partial Nephrectomy. Urology 2019, 125, 92–97. [Google Scholar] [CrossRef]
  62. Wake, N.; Rosenkrantz, A.B.; Huang, R.; Park, K.U.; Wysock, J.S.; Taneja, S.S.; Huang, W.C.; Sodickson, D.K.; Chandarana, H. Patient-specific 3D printed and augmented reality kidney and prostate cancer models: Impact on patient education. 3D Print Med. 2019, 5, 4. [Google Scholar] [CrossRef]
  63. Rocco, B.; Sighinolfi, M.C.; Menezes, A.D.; Eissa, A.; Inzillo, R.; Sandri, M.; Puliatti, S.; Turri, F.; Ciarlariello, S.; Amato, M.; et al. Three-dimensional virtual reconstruction with DocDo, a novel interactive tool to score renal mass complexity. BJU Int. 2020, 125, 761–762. [Google Scholar] [CrossRef]
  64. Mitsui, Y.; Sadahira, T.; Araki, M.; Maruyama, Y.; Nishimura, S.; Wada, K.; Kobayashi, Y.; Watanabe, M.; Watanabe, T.; Nasu, Y. The 3-D Volumetric Measurement Including Resected Specimen for Predicting Renal Function After Robot-assisted Partial Nephrectomy. Urology 2019, 125, 104–110. [Google Scholar] [CrossRef]
  65. Antonelli, A.; Veccia, A.; Palumbo, C.; Peroni, A.; Mirabella, G.; Cozzoli, A.; Martucci, P.; Ferrari, F.; Simeone, C.; Artibani, W. Holographic Reconstructions for Preoperative Planning before Partial Nephrectomy: A Head-to-Head Comparison with Standard CT Scan. Urol. Int. 2019, 102, 212–217. [Google Scholar] [CrossRef]
  66. Michiels, C.; Khene, Z.-E.; Prudhomme, T.; de Hauteclocque, A.B.; Cornelis, F.H.; Percot, M.; Simeon, H.; Dupitout, L.; Bensadoun, H.; Capon, G.; et al. 3D-Image guided robotic-assisted partial nephrectomy: A multi-institutional propensity score-matched analysis (UroCCR study 51). World J. Urol. 2021, 41, 303–313. [Google Scholar] [CrossRef] [PubMed]
  67. Macek, P.; Cathelineau, X.; Barbe, Y.P.; Sanchez-Salas, R.; Rodriguez, A.R. Robotic-Assisted Partial Nephrectomy: Techniques to Improve Clinical Outcomes. Curr. Urol. Rep. 2021, 22, 51. [Google Scholar] [CrossRef] [PubMed]
  68. Veccia, A.; Antonelli, A.; Hampton, L.J.; Greco, F.; Perdonà, S.; Lima, E.; Hemal, A.K.; Derweesh, I.; Porpiglia, F.; Autorino, R. Near-infrared Fluorescence Imaging with Indocyanine Green in Robot-assisted Partial Nephrectomy: Pooled Analysis of Comparative Studies. Eur. Urol. Focus 2020, 6, 505–512. [Google Scholar] [CrossRef] [PubMed]
  69. Villarreal, J.Z.; Pérez-Anker, J.; Puig, S.; Pellacani, G.; Solé, M.; Malvehy, J.; Quintana, L.F.; García-Herrera, A. Ex vivo confocal microscopy performs real-time assessment of renal biopsy in non-neoplastic diseases. J. Nephrol. 2021, 34, 689–697. [Google Scholar] [CrossRef]
  70. Rocco, B.; Sighinolfi, M.C.; Cimadamore, A.; Bonetti, L.R.; Bertoni, L.; Puliatti, S.; Eissa, A.; Spandri, V.; Azzoni, P.; Dinneen, E.; et al. Digital frozen section of the prostate surface during radical prostatectomy: A novel approach to evaluate surgical margins. BJU Int. 2020, 126, 336–338. [Google Scholar] [CrossRef]
  71. Su, L.-M.; Kuo, J.; Allan, R.W.; Liao, J.C.; Ritari, K.L.; Tomeny, P.E.; Carter, C.M. Fiber-Optic Confocal Laser Endomicroscopy of Small Renal Masses: Toward Real-Time Optical Diagnostic Biopsy. J. Urol. 2016, 195, 486–492. [Google Scholar] [CrossRef]
  72. Puliatti, S.; Eissa, A.; Checcucci, E.; Piazza, P.; Amato, M.; Ferretti, S.; Scarcella, S.; Rivas, J.G.; Taratkin, M.; Marenco, J.; et al. New imaging technologies for robotic kidney cancer surgery. Asian J. Urol. 2022, 9, 253–262. [Google Scholar] [CrossRef]
  73. Amparore, D.; Piramide, F.; De Cillis, S.; Verri, P.; Piana, A.; Pecoraro, A.; Burgio, M.; Manfredi, M.; Carbonara, U.; Marchioni, M.; et al. Robotic partial nephrectomy in 3D virtual reconstructions era: Is the paradigm changed? World. J. Urol. 2022, 40, 659–670. [Google Scholar] [CrossRef]
  74. Zadeh, S.M.; Francois, T.; Calvet, L.; Chauvet, P.; Canis, M.; Bartoli, A.; Bourdel, N. SurgAI: Deep learning for computerized laparoscopic image understanding in gynaecology. Surg. Endosc. 2020, 34, 5377–5383. [Google Scholar] [CrossRef]
  75. Farinha, R.; Breda, A.; Porter, J.; Mottrie, A.; Van Cleynenbreugel, B.; Sloten, J.V.; Mottaran, A.; Gallagher, A.G. International Expert Consensus on Metric-based Characterization of Robot-assisted Partial Nephrectomy. Eur. Urol. Focus 2023, 9, 388–395. [Google Scholar] [CrossRef]
  76. Farinha, R.; Breda, A.; Porter, J.; Mottrie, A.; Van Cleynenbreugel, B.; Sloten, J.V.; Mottaran, A.; Gallagher, A.G. Objective assessment of intraoperative skills for robot-assisted partial nephrectomy (RAPN). J. Robot. Surg. 2023, 17, 1401–1409. [Google Scholar] [CrossRef] [PubMed]
  77. Collins, J.W.; Marcus, H.J.; Ghazi, A.; Sridhar, A.; Hashimoto, D.; Hager, G.; Arezzo, A.; Jannin, P.; Maier-Hein, L.; Marz, K.; et al. Ethical implications of AI in robotic surgical training: A Delphi consensus statement. Eur. Urol. Focus 2022, 8, 613–622. [Google Scholar] [CrossRef] [PubMed]
  78. Brodie, A.; Dai, N.; Teoh, J.Y.-C.; Decaestecker, K.; Dasgupta, P.; Vasdev, N. Artificial intelligence in urological oncology: An update and future applications. Urol. Oncol. Semin. Orig. Investig. 2021, 39, 379–399. [Google Scholar] [CrossRef] [PubMed]
  79. Varoquaux, G.; Cheplygina, V. Machine learning for medical imaging: Methodological failures and recommendations for the future. NPJ Digit. Med. 2022, 5, 48. [Google Scholar] [CrossRef]
  80. Sarkar, A.; Yang, Y.; Vihinen, M. Variation benchmark datasets: Update, criteria, quality and applications. Database 2020, 2020, baz117. [Google Scholar] [CrossRef]
  81. Chanchal, A.K.; Lal, S.; Kumar, R.; Kwak, J.T.; Kini, J. A novel dataset and efficient deep learning framework for automated grading of renal cell carcinoma from kidney histopathology images. Sci. Rep. 2023, 13, 5728. [Google Scholar] [CrossRef]
  82. Heller, N.; Sathianathen, N.; Kalapara, A.; Walczak, E.; Moore, K.; Kaluzniak, H.; Rosenberg, J.; Blake, P.; Rengel, A.; Weight, C.; et al. The KiTS19 Challenge Data: 300 Kidney Tumor Cases with Clinical Context, CT Semantic Segmentations, and Surgical Outcomes. arXiv 2019, arXiv:1904.00445. [Google Scholar]
  83. Heller, N.; Isensee, F.; Trofimova, D.; Tejpaul, R.; Zhao, Z.; Chen, H.; Wang, L.; Golts, A.; Khapun, D.; Weight, C.; et al. The KiTS21 Challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase CT. arXiv 2023, arXiv:2307.01984. [Google Scholar]