
Efficient Processing of Deep Neural Networks (Synthesis Lectures on Computer Architecture)

Book details

ISBN-13: 9781681738314
ISBN-10: 1681738317
Authors: Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, Joel S. Emer
Publication date: 2020
Publisher: Morgan & Claypool
Format: Paperback, 341 pages

Summary

Efficient Processing of Deep Neural Networks (Synthesis Lectures on Computer Architecture) (ISBN-13: 9781681738314 and ISBN-10: 1681738317), written by Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer, was published by Morgan & Claypool in 2020. With an overall rating of 4.3 stars, it is a notable title among AI & Machine Learning (Computer Science) books. You can easily purchase or rent Efficient Processing of Deep Neural Networks (Synthesis Lectures on Computer Architecture) (Paperback) from BooksRun, along with many other new and used AI & Machine Learning books and textbooks. And if you're looking to sell your copy, our current buyback offer is $2.89.

Description

This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs).

DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics.

While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve metrics--such as energy efficiency, throughput, and latency--without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems.

The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field, as well as a formalization and organization of key concepts from contemporary work that provide insights which may spark new ideas.
