BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Deep Learning Enables Prostate MRI Segmentation and Cancer Classif
 ication - Yongkai Liu\, David Geffen School of Medicine\, UCLA
DTSTART:20211117T130000Z
DTEND:20211117T140000Z
UID:TALK165901@talks.cam.ac.uk
CONTACT:J.W.Stevens
DESCRIPTION:Prostate cancer (PCa) is the most common solid noncutaneous ca
 ncer in American men. Multiparametric MRI (mpMRI)\, including T2-weighted i
 maging\, diffusion-weighted imaging (DWI) and T1 dynamic contrast-enhanced i
 maging (DCE)\, has shown promising results for the detection and staging o
 f clinically significant PCa (csPCa). According to the Prostate Imaging Re
 porting and Data System version 2.1 (PI-RADSv2.1)\, an expert guideline fo
 r the performance and interpretation of mpMRI for PCa detection\, T2 and D
 WI images are used for the primary interpretation of lesions in the transi
 tion zone (TZ) and peripheral zone (PZ)\, respectively\, when assigning a P
 I-RADS score to lesions detected on mpMRI. A robust deep learning-based me
 thod for reproducible\, automatic segmentation of prostate zones (ASPZ) ma
 y enable consistent assignment of mpMRI lesion location\, since manual seg
 mentation of prostate zones is a time-consuming process dependent on reade
 r experience and expertise. Moreover\, segmentation outcomes from ASPZ are t
 ypically deterministic\; they convey nothing about the model's confidence. P
 roviding model uncertainties can improve the overall segmentation workflo
 w\, since uncertain cases can then be flagged for refinement by human expe
 rts. In addition\, the evaluation of current state-of-the-art deep learnin
 g methods has been limited by relatively small sample sizes\, ranging fro
 m tens to hundreds of MRI scans. It is expensive to create large\, consecu
 tive samples with manual segmentation of the whole prostate gland (WPG)\, w
 hich limits the ability to test deep learning models in a clinical settin
 g. We therefore evaluated a previously developed attentive deep learning-b
 ased automatic segmentation model on a large\, consecutive cohort of prost
 ate 3T MRI scans (n=3360) to test its feasibility in a clinical setting. F
 inally\, PI-RADS requires a high level of expertise and exhibits a signifi
 cant degree of inter-reader and intra-reader variability\, likely reflecti
 ng inherent ambiguities in the classification scheme. Image texture analys
 is characterizes the spatial arrangement of intensities in an image and ca
 n quantitatively describe tumor heterogeneity\, which can be a primary fea
 ture of csPCa. Automated classification of PCa using texture analysis may o
 vercome the current challenges associated with PI-RADS\, but it commonly s
 uffers from a laborious handcrafted feature design process needed to full
 y capture the underlying image texture. Alternatively\, with the developme
 nt of deep learning in medical imaging\, convolutional neural networks (CN
 Ns) with texture analysis may further improve the accuracy of PCa classif
 ication without handcrafted feature engineering.\n\n\n*Join Zoom Meeting*\nh
 ttps://maths-cam-ac-uk.zoom.us/j/96932950870?pwd=b1o2c2UxckVITlRlazJzY0laR
 mVHZz09\nMeeting ID: 969 3295 0870\nPasscode: DRHjehPj
LOCATION:Virtual (see abstract for Zoom link)
END:VEVENT
END:VCALENDAR
