Events

From Matrix to Tensor: Algorithm and Hardware Co-Design for Energy-Efficient Deep Learning

https://meeting.xidian.edu.cn/uploads/images/201906/1561339389.jpg

Title:

From Matrix to Tensor: Algorithm and Hardware Co-Design for Energy-Efficient Deep Learning

Lecturer:

Bo Yuan

Time:

2019-06-27 10:30:00

Venue:

Meeting Room 1012, North Campus

Lecturer Profile

Dr. Bo Yuan is currently an assistant professor in the Department of Electrical and Computer Engineering at Rutgers University. Before that, he was with the City University of New York from 2015 to 2018. Dr. Yuan received his bachelor's and master's degrees from Nanjing University, China, in 2007 and 2010, respectively. He received his PhD degree from the Department of Electrical and Computer Engineering at the University of Minnesota, Twin Cities, in 2015.

His research interests include algorithm and hardware co-design and implementation for machine learning and signal processing systems, error-resilient low-cost computing techniques for embedded and IoT systems, and machine learning for domain-specific applications. He is a recipient of the Global Research Competition Finalist Award from Broadcom Corporation. Dr. Yuan serves as a technical committee track chair and technical committee member for several IEEE/ACM conferences. He is an associate editor of the Springer Journal of Signal Processing Systems.

Lecture Abstract

In the emerging artificial intelligence era, deep neural networks (DNNs), a.k.a. deep learning, have achieved unprecedented success in various applications. However, DNNs are usually storage-intensive, computation-intensive, and energy-consuming, posing severe challenges to their wide deployment in many application scenarios, especially resource-constrained, low-power IoT applications and embedded systems.

In this talk, I will introduce my recent algorithm/hardware co-design works for energy-efficient DNNs (MICRO'17, MICRO'18, ISCA'19). First, I will show how low displacement rank (LDR) matrices can enable the construction of low-complexity DNN models as well as the corresponding energy-efficient DNN hardware accelerators. In the second part of my talk, I will show the benefit of using permuted diagonal matrices, another type of structured and sparse matrix, for energy-efficient DNN hardware design. Finally, I will introduce the benefits of tensor decomposition for DNN design and the corresponding high-performance DNN accelerator.
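As background for the first technique, the sketch below is purely illustrative (it is not the speaker's code; the variable names and setup are assumptions). It uses a circulant matrix, a classic low-displacement-rank structure, to show why LDR-structured layers cut both storage and computation: an n x n circulant layer stores only its first column (n parameters) and can be applied with FFTs in O(n log n), versus n^2 parameters and O(n^2) operations for a dense weight matrix.

```python
import numpy as np

# Illustrative sketch only: a circulant matrix is a classic low-displacement-rank
# (LDR) structure. An n x n circulant layer stores just n parameters (its first
# column) and can be applied with FFTs in O(n log n), versus n^2 parameters and
# O(n^2) work for a dense layer.

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by the vector x via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)   # the n stored parameters
x = rng.standard_normal(n)   # a layer input

# Reference check: build the dense circulant matrix explicitly and compare.
dense = np.column_stack([np.roll(c, k) for k in range(n)])
assert np.allclose(dense @ x, circulant_matvec(c, x))
print("dense parameters:", n * n, " circulant parameters:", n)
```

The same storage-versus-computation trade-off is what the permuted-diagonal and tensor-decomposition approaches mentioned in the abstract also exploit, each with a different structure imposed on the weights.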
