BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:How do you know when you’re right? – On hallucinations\, the l
 imits of trustworthy AI\, and the power of ‘I don’t know’ - Anders Han
 sen
DTSTART:20240206T160000Z
DTEND:20240206T170000Z
UID:TALK211849@talks.cam.ac.uk
CONTACT:Sae Koyama
DESCRIPTION:In 2023 the Cambridge Dictionary word of the year was ‘hallu
 cinate’\, owing to the prevalence of hallucinations in modern AI\, in pa
 rticular those produced by chatbots. In the interest of creating trustwo
 rthy AI\, one can ask the following questions:\n# Can AI be made so th
 at it does not hallucinate?\n# If not\, can one design algorithms that w
 ill detect when AI hallucinates?\n# If not\, what do we then do?\n\nIn th
 is talk we will show how the answer to the first two questions is ‘no’\, ev
 en for basic problems in the sciences. This leaves only one option for tru
 stworthy AI: the ability to say ‘I don’t know’. We will discuss how the
 re is no theoretical limitation on creating AI that may hallucinate\, bu
 t will say ‘I know’ when it is certain that the output is correct (an
 d this certainty is indeed warranted). Moreover\, when it says ‘I don’
 t know’\, the output could be either correct or a hallucination. We arg
 ue that the ability to say ‘I don’t know’ is a fundamental part of hum
 an intelligence and trust\, and that it follows from the foundations of m
 athematics that this is the best form of trustworthy AI possible. This op
 ens up the question of which problems can be tackled in a meaningful way b
 y an AI that can say ‘I don’t know’. Indeed\, an AI saying ‘I don’t k
 now’ all the time is not particularly useful. We will show how this quest
 ion can be handled by the Solvability Complexity Index (SCI) hierarchy fro
 m the foundations of computational mathematics.
LOCATION:MR5\, CMS\, Wilberforce Road\, Cambridge\, CB3 0WB
END:VEVENT
END:VCALENDAR
