BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Challenges and Opportunities in Computational Imaging and Sensing 
 - Prof. Pier Luigi Dragotti\, Imperial College London
DTSTART:20200514T140000Z
DTEND:20200514T150000Z
UID:TALK140905@talks.cam.ac.uk
CONTACT:Prof. Ramji Venkataramanan
DESCRIPTION:In many areas of science and engineering\, new signal acquisi
 tion methods allow unprecedented access to physical measurements and ar
 e chal
 lenging the way in which we do signal and image processing. Within this br
 oad theme related to the interplay between sensing and processing\, the ma
 in focus of this talk is on new sampling methodologies inspired by the adv
 ent of event-based video cameras and on solving selected inverse imaging p
 roblems\, in particular when multi-modal images are acquired.\n\nIn the fi
 rs
 t part of the talk\, we investigate biologically-inspired time-encoding se
 nsing systems as an alternative method to classical sampling\, and address
  the problem of reconstructing classes of sparse signals from time-based s
 amples. Inspired by a new generation of event-based audio-visual sensing a
 rchitectures\, we consider a sampling mechanism that first filters the in
 put and then obtains the timing information using leaky integrate-a
 nd-fire architectures. We show that\, in this context\, sampling by timing
  is equivalent to non-uniform sampling\, where the reconstruction of the i
 nput depends on the characteristics of the filter and on the density of th
 e non-uniform samples. Leveraging specific properties of the proposed filt
 ers\, we derive sufficient conditions and propose novel algorithms for per
 fect reconstruction from time-based samples of classes of sparse signals. 
 We then highlight further avenues for research in the emerging area of eve
 nt-based sensing and processing.\n\nWe then discuss the single-image supe
 r-resolution problem: obtaining a high-resolution (HR) version of a singl
 e low-resolution (LR) image. We co
 nsider the multi-modal case where a scene is observed using different imag
 ing modalities and when these modalities have different resolutions. In th
 is context\, we use a dictionary learning and sparse representation frame
 work as a tool to model dependencies across modalities in order to dictat
 e the architecture of deep neural networks and to initialize the paramete
 rs of t
 hese networks. Numerical results show that this approach leads to state-of
 -the-art results in multi-modal image super-resolution applications. If ti
 me permits\, I will also present applications in the area of art investiga
 tion.\n
LOCATION:JDB Seminar Room\, CUED
END:VEVENT
END:VCALENDAR
