Improving Hardware for Efficient Deep Learning

PD Concepts | 18 August 2018

Papers

  1. Neurostream: Scalable and Energy Efficient Deep Learning with Smart Memory Cubes
  2. Evaluating the Energy Efficiency of Deep Convolutional Neural Networks on CPUs and GPUs
  3. An Energy-Efficient Deep Learning Processor with Heterogeneous Multi-Core Architecture for Convolutional Neural Networks and Recurrent Neural Networks
  4. Towards Ultra-High Performance and Energy Efficiency of Deep Learning Systems: An Algorithm-Hardware Co-Optimization Framework
  5. NeuralPower: Predict and Deploy Energy-Efficient Convolutional Neural Networks
  6. NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications
  7. Eyeriss v2: A Flexible and High-Performance Accelerator for Emerging Deep Neural Networks
  8. Understanding the Limitations of Existing Energy-Efficient Design Approaches for Deep Neural Networks
  9. Efficient Processing of Deep Neural Networks: A Tutorial and Survey
  10. Hardware for Machine Learning: Challenges and Opportunities
  11. Dynamic Bit-width Reconfiguration for Energy-Efficient Deep Learning Hardware
  12. A Method to Estimate the Energy Consumption of Deep Neural Networks
  13. Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
  14. Using Dataflow to Optimize Energy Efficiency of Deep Neural Network Accelerators
  15. Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
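A recurring theme across these papers (e.g. the energy estimation and energy-aware pruning work, and the Eyeriss dataflow studies) is that data movement, not arithmetic, dominates energy. The sketch below shows a first-order energy model in that spirit; the relative per-access costs are illustrative assumptions, not numbers from any specific paper.

```python
# First-order energy model for one DNN layer, in the spirit of the
# energy-estimation papers above. Costs are normalized so that one
# MAC = 1 unit; the SRAM/DRAM ratios are assumed for illustration,
# reflecting the commonly cited fact that an off-chip access costs
# orders of magnitude more than a multiply-accumulate.
E_MAC = 1.0      # one multiply-accumulate
E_SRAM = 6.0     # one on-chip buffer access (assumed)
E_DRAM = 200.0   # one off-chip DRAM access (assumed)

def layer_energy(n_macs, sram_accesses, dram_accesses):
    """Return a first-order energy estimate in MAC-equivalent units."""
    return n_macs * E_MAC + sram_accesses * E_SRAM + dram_accesses * E_DRAM

# Same arithmetic workload, different memory traffic: a dataflow with
# good on-chip reuse spends far less energy than one that spills to DRAM.
good_reuse = layer_energy(n_macs=1_000_000, sram_accesses=300_000, dram_accesses=10_000)
poor_reuse = layer_energy(n_macs=1_000_000, sram_accesses=300_000, dram_accesses=300_000)
print(good_reuse < poor_reuse)  # prints True
```

This is why accelerators like Eyeriss optimize the dataflow to maximize reuse at the cheap levels of the memory hierarchy, rather than just maximizing raw MAC throughput.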

Slides

  1. Efficient Methods and Hardware for Deep Learning
  2. DNPU: An Energy-Efficient Deep Neural Network Processor with On-Chip Stereo Matching
  3. How to Estimate the Energy Consumption of Deep Neural Networks
  4. Hardware for Machine Learning: Challenges and Opportunities

Websites

  1. Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
  2. Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning
  3. Deep Neural Network Energy Estimation Tool
  4. Deep-Learning-Processor-List
  5. Qualcomm - Hexagon DSP Architecture
  6. Qualcomm - Hexagon DSP Processor
  7. Qualcomm - Hexagon DSP SDK
  8. Qualcomm - Neural Processing SDK for AI

Videos

  1. Lecture 15 | Efficient Methods and Hardware for Deep Learning
  2. Efficient Methods and Hardware for Deep Learning
  3. ICLR 2016 Best Paper Award: Deep Compression by Song Han
  4. Song Han’s PhD Defense. June 1, 2017 @Stanford
  5. Toward Efficient Deep Neural Network Deployment: Deep Compression and EIE, Song Han
  6. SysML 18: Vivienne Sze, Limitations of Energy-Efficient Design Approaches for Deep Neural Networks
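Several of these talks cover Deep Compression, whose first stage is magnitude-based weight pruning (prune, then quantize, then Huffman-code). The following is a minimal pure-Python sketch of that pruning step under simplified assumptions; a real implementation would operate on framework tensors and retrain after pruning.

```python
# Minimal magnitude-based weight pruning, the first stage of the
# Deep Compression pipeline discussed in the Song Han talks above.
# Sketch only: operates on a flat Python list, ignores retraining,
# and may prune slightly more than the target when magnitudes tie.

def prune_by_magnitude(weights, sparsity):
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest weight.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.05, -1.2, 0.003, 0.8, -0.02, 2.5]
print(prune_by_magnitude(w, 0.5))  # prints [0.0, -1.2, 0.0, 0.8, 0.0, 2.5]
```

The resulting zeros make the weight matrix sparse, which accelerators such as EIE (covered in talk 5) exploit by skipping the corresponding multiplications entirely.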

Playlists

  1. Spring 2015 – Computer Architecture Lectures – Carnegie Mellon
  2. Computer Sc - Computer Architecture
  3. Lecture Collection | Convolutional Neural Networks for Visual Recognition (Spring 2017)
  4. Lecture Collection | Natural Language Processing with Deep Learning (Winter 2017)

Companies

  1. IBM - Hardware for AI
  2. NVIDIA - Deep Learning
  3. Qualcomm - AI Research
  4. Intel - AI

People

  1. Song Han

Courses

  1. Computer Architecture - Princeton University
  2. Digital Signal Processing
  3. VLSI CAD Part I: Logic
  4. VLSI CAD Part II: Layout

Blog Posts

  1. Interview with Qualcomm’s Gary Brotman, Part 1: Hexagon DSP and Working with AI
  2. Interview with Qualcomm’s Gary Brotman, Part 2: On-Device Machine Learning and AI Hitting Mainstream
  3. The future of artificial intelligence lies on the edge: A Q&A with Gary Brotman about on-device AI
  4. AI processors go mobile
  5. On-Device Processing and AI Go Hand-in-Hand
  6. We are making on-device AI ubiquitous
  7. Deep Learning on Embedded Devices: The Rise of Intelligence at the Edge
  8. Qualcomm Hexagon 685 DSP is a Boon for Machine Learning
  9. Qualcomm’s QDSP6 v6: Imaging and Vision Enhancements Via Vector Extensions
  10. Qualcomm Details Hexagon 680 DSP in Snapdragon 820: Accelerated Imaging

If you have something useful to add to this article, found a bug in the code, or would like to improve any of the points mentioned, feel free to write it down in the comments. I hope you found something useful here.

Happy learning!