BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Women@CL Talklet Event - Mohibi Hussain\, Mahwish Arif\, Aida Mira
 laei
DTSTART:20181123T130000Z
DTEND:20181123T140000Z
UID:TALK115156@talks.cam.ac.uk
CONTACT:Ayat Fekry
DESCRIPTION:*Title:* Availability\, Integrity\, and Confidentiality in a C
 ontent Centric Internet Architecture\n\n*Presenter:* Mohibi Hussain\, Syst
 ems Research Group \n\n*Abstract* \n\n“How can we integrate security by 
 design in a Content-Centric future Internet architecture to ensure avail
 ability\, integrity\, and confidentiality?” \nThe current Internet infra
 structure\, initially conceived to provide closed group connectivity to a 
 limited user-base\, is now facing the challenges of catering to over three
  billion users with dynamically changing data\, transport\, and access req
 uirements. Named-Data Networking (NDN)\, which is an example of Informati
 on Centric Networking (ICN) or Content-Centric Networking (CCN)\, is bei
 ng ex
 plored as a possible future Internet architecture. Given the important cha
 llenges of availability\, integrity\, and confidentiality in the ever-chan
 ging information security landscape\, it is imperative to build NDN with i
 ntrinsic resilience based on the security by design narrative. We have exp
 lored DDoS mitigation as a defence mechanism to improve availability\, mi
 tigation of content poisoning to ensure integrity\, and name obfuscatio
 n for privacy
  and confidentiality in NDN. Modelling the non-functional requirements for
  NDN through system dynamics completes the story by catering for the essen
 tial external variables affecting the design of an Internet architecture.\
 n\n\n----------------------\n\n*Title:* Reducing the overhead of Parallel 
 Loop Schedulers for Many Core Processors\n \n*Presenter:* Mahwish Arif\, C
 omputer Architecture Group\n\n*Abstract*\n\nWhile Moore’s law remains ac
 tive\, every new processor generation has an increasing number of cores. M
 ulti- and many-core shared-memory machines offer the opportunity for incr
 eased intra-node parallelism and thus improved application performance.
  However\, the higher core count also makes it challenging to schedule an
 d distribute the workload across these cores in a timely and scalable ma
 nner. My research aims to improve the scalability of parallel loop schedu
 lers by specialising them for fine-grain loops. To this end\, we propose a
  low-overhead NUMA-aware work distribution mechanism for a static schedul
 er. We integrate this static scheduler with the Intel OpenMP and Cilkplus
  parallel task schedulers to build hybrid schedulers. Detailed\, quantita
 tive measurements demonstrate that our technique achieves scalable perfor
 mance on a 48-core machine with significantly lower overhead than both In
 tel OpenMP and Cilkplus. We demonstrate consistent 16-30% performance imp
 rovements on a range of HPC and data analytics codes\, with a peak of 2.8
 x. \n\n----------------------\n\n\n*Title:* Energy Efficient In-Memory Ap
 proach for Binary Convolutional Neural Network\n\n*Presenter:* Aida Mirala
 ei\, Computer Architecture Group\n\n*Abstract*\n\nOne of the most effectiv
 e methods for reducing the inference cost of neural networks is to reduce 
 the precision of computation. Quantizing or binarizing network weights an
 d activations reduces the precision of arithmetic operations in a CNN and
  thereby effectively reduces power consumption. Moreover\, computations i
 n CNNs with binary weights and activations can often be simplified into b
 itwise operations\, which makes them more energy efficient.\nIn this talk
 \, I’ll show how implementing the computations of BCNNs’ inference ph
 ase in main memory yields significant performance gains and energy saving
 s. Then I’ll present the proposed architecture of my design and how I t
 ake advantage of the processing-in-memory implementation of the inferenc
 e phase of BCNNs to accelerate the computations and reduce energy consum
 ption.\n
LOCATION:Computer Laboratory\, William Gates Building\, Room FW11
END:VEVENT
END:VCALENDAR
