Tuesday, October 22, 2019
Energy-efficient and Sustainable Computing across the Hardware/Software Stack
Energy efficiency is now a critically important design constraint for most computing systems. Reliability, likewise, has always been a top concern, from high-performance systems to mobile embedded applications. However, as technological advances produce devices only a few nanometers in length, and as applications become more memory- and compute-intensive, energy efficiency and reliability become harder to manage. In this talk, I will present techniques, past and present, across the HW/SW stack for energy-efficient and reliable computing. I will also discuss how these techniques may be used to achieve more sustainable computing in the future.
R. Iris Bahar received the B.S. and M.S. degrees in computer engineering from the University of Illinois, Urbana-Champaign, and the Ph.D. degree in electrical and computer engineering from the University of Colorado, Boulder. Before entering the Ph.D. program at CU-Boulder, she was with Digital Equipment Corporation, responsible for the hardware implementation of the memory controller unit for its NVAX processor. She has been on the faculty of the School of Engineering at Brown University since 1996, and now holds a dual appointment as Professor of Engineering and Professor of Computer Science. Her research interests include computer architecture; computer-aided design for synthesis, verification, and low-power applications; design, test, and reliability issues for nanoscale systems; and, most recently, the design of robotic systems. Her research has been continuously funded since 1997 through various industrial and government sources, including the NSF, DARPA, the DoD, the Semiconductor Research Corporation (SRC), Intel, IBM, and NASA.
Wednesday, October 23, 2019
Tarek El-Ghazawi (IEEE Fellow and Professor,
Department of Electrical and Computer Engineering, The George Washington University)
Can Photonic Computing be the Answer to Green and Sustainable Computing?
For decades, processing speed has risen consistently. Today's top supercomputer, Summit, can perform 148,600 trillion calculations per second (148.6 PF on LINPACK). Exascale supercomputers, capable of more than one million trillion (one quintillion) calculations per second, are planned for 2021. However, this smooth ride is almost over: with the end of Moore's Law and Dennard scaling, power consumption and leakage have grown, and scientists believe we are reaching serious physical limits. Innovative ideas in device technology and architectures are a must for the next generation of computing. Photonic devices are characterized by their speed and ultra-low power. In this talk, we examine the use of alternative on-chip photonic computing architectures based on these devices to solve many critical science and engineering problems, including partial differential equations (the basis of scientific and engineering simulations) and machine learning. We then assess the potential of this technology and the progress needed to reap its full benefits and make it a mainstream, fast, green computing alternative.
Tarek El-Ghazawi is a Professor in the Department of Electrical and Computer Engineering at The George Washington University, where he leads the university-wide Strategic Academic Program in High-Performance Computing. He is the founding director of The GW Institute for Massively Parallel Applications and Computing Technologies (IMPACT) and was a founding Co-Director of the NSF Industry/University Center for High-Performance Reconfigurable Computing (CHREC). El-Ghazawi's interests include high-performance computing, computer architectures, reconfigurable and embedded computing, and nanophotonic-based computing. He is one of the principal co-authors of the UPC parallel programming language. At present, he is leading and co-leading efforts on post-Moore's Law processors, including analog, nanophotonic, and neuromorphic computing. Professor El-Ghazawi is a Fellow of the IEEE and has been selected as a Research Faculty Fellow of the IBM Center for Advanced Studies, a UK Royal Academy of Engineering Distinguished Visiting Fellow, and a Distinguished Visiting Speaker for the IEEE Computer Society. He was awarded the Alexander von Humboldt Research Award from the Humboldt Foundation in Germany, the Alexander Schwarzkopf Prize for Technical Innovation, the IEEE Outstanding Leadership Award from the IEEE Technical Committee on Scalable Computing, and the GW SEAS Distinguished Researcher Award. El-Ghazawi has served as a senior U.S. Fulbright Scholar.
Thursday, October 24, 2019
Data Center Cooling - Then, Now and the Future
Data center cooling systems have varied widely over the years, yet the goal has always been the same: keeping the facility running smoothly by preventing the internal equipment from overheating. As designs continue to advance, we can see how cooling solutions and strategies have evolved and where they appear to be headed. Air and water cooling, economization, energy recovery, and more have been explored and deployed around the globe over extended periods. What has worked? What has worked best?
Today’s data centers are judged not just on reliability, but also on efficiency and cost effectiveness – and the means of cooling the data center is one of the biggest energy and cost factors. This discussion will review the iterative design steps taken in the past to see how cooling strategies have improved, and then forecast the next phases we may see in the near, and perhaps not-so-near, future.
Mr. Peterson is a mission-critical program manager and a professional engineering consultant specializing in mission-critical facility efficiency. He is a speaker, technical author, and ASHRAE Distinguished Lecturer, providing technical and application support for the many systems and needs involved in the modernization and optimization of data centers. His continued involvement with industry leaders has helped cross boundaries between IT, facilities, sustainability, and reliability.