BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:SCALE-LES: Strategic development of large eddy simulation suitable
  to the future HPC - Hirofumi Tomita (RIKEN/AICS)
DTSTART:20120928T123000Z
DTEND:20120928T125500Z
UID:TALK40242@talks.cam.ac.uk
CONTACT:Mustapha Amrani
DESCRIPTION:The Large Eddy Simulation (LES) is a vital dynamical frame
 work for investigating the cloud-aerosol-chemistry-radiation interact
 ion from the viewpoint of climate problems. So far\, LES as used in t
 he meteorological field has had several problems. One was the large g
 rid size used\, which compromises the suitability of LES. In addition
 \, the aspect ratio of the horizontal to vertical grid spacing was mu
 ch larger than unity. For atmospheric LES\, the grid size must be red
 uced to several tens of metres and an aspect ratio near unity is desi
 rable. The target domain was also narrow because of limited computer
  resources. Large-scale computing on recent powerful supercomputers m
 ay enable us to conduct LES with a reasonable grid size and a wide do
 main. Ultimately\, global LES is one of the milestones for the near f
 uture. Another problem with LES applied to the meteorological field i
 s that the heat source from water condensation is injected into a gri
 d box. Strictly speaking\, such grid-box heating breaks the assumptio
 n of LES that the grid size lies within the energy-cascade range. Nev
 ertheless\, we have used the dry theory of LES. Besides the above pro
 blems\, which should be resolved in the future\, we are now confronti
 ng computational problems in such large-scale calculations. The numer
 ical method for the fluid-dynamical part of atmospheric models has sh
 ifted from the spectral-transform method to the grid-point method. Th
 e former is no longer acceptable on massively parallel platforms beca
 use of interconnect-communication limitations. The latter\, on the ot
 her hand\, brings a new problem\, the so-called memory-bandwidth prob
 lem. For example\, even on the K Computer\, the B/F ratio is just 0.5
 . The key to high computational performance is reducing loads and sto
 res to and from main memory and using cache memory efficiently. A sim
 ilar problem occurs in the communication between compute nodes. The m
 ultidisciplinary team (Team SCALE) at RIKEN/AICS is now tackling such
  problems.\n\n
LOCATION:Seminar Room 1\, Newton Institute
END:VEVENT
END:VCALENDAR
