Chips for Deep learning continue to leapfrog in capabilities and efficiency

Deep learning continued to drive the computing industry's agenda in 2016. But come 2017, experts say the artificial intelligence community will intensify its demand for higher-performance, more power-efficient "inference" engines for deep neural networks.

Current deep learning systems leverage large-scale computation to define networks, big data sets for training, and access to large computing systems to accomplish their goals. Unfortunately, executing such networks efficiently is not so easy on embedded systems (i.e., cars, drones and Internet of Things devices), whose processing power, memory size and bandwidth are usually limited. This problem leaves the door wide open for innovation in technologies that can put deep neural network power into end devices. "Deploying artificial intelligence at the edge [of the network] is becoming a massive…
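As a rough illustration of the memory constraint mentioned above (not taken from the article), consider just the cost of storing a network's weights. The parameter count below is the widely cited figure for VGG-16, used here only as a hypothetical example; the 4x saving from 8-bit quantization is one reason inference engines for edge devices favor reduced precision.

```python
def model_memory_mb(num_params: int, bytes_per_weight: int) -> float:
    """Memory needed just to store the weights, in megabytes."""
    return num_params * bytes_per_weight / (1024 ** 2)

# ~138M parameters, a commonly quoted count for VGG-16 (illustrative only)
vgg16_params = 138_000_000

fp32 = model_memory_mb(vgg16_params, 4)  # 32-bit floating-point weights
int8 = model_memory_mb(vgg16_params, 1)  # 8-bit quantized weights

print(f"VGG-16 weights, fp32: {fp32:.0f} MB")  # ~526 MB
print(f"VGG-16 weights, int8: {int8:.0f} MB")  # ~132 MB
```

Half a gigabyte of weights alone is beyond the RAM budget of many IoT-class devices, which is why quantization and similarly compact inference techniques matter at the edge.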


Link to Full Article: Chips for Deep learning continue to leapfrog in capabilities and efficiency
