Better parallel performance for artificial intelligence

Using AlexNet, 64 GPUs in parallel achieve 27x the speed of a single GPU, which is, as far as Fujitsu can determine, the world’s fastest processing. A conventional method to accelerate deep learning is to use multiple computers equipped with GPUs, networked and arranged in parallel. However, as processors are added, performance gains tail off because the time required to share data between computers increases; according to the Labs, this becomes a problem when more than 10 computers are used at the same time. Fujitsu’s sharing technology was applied to the Caffe open-source deep learning framework. To confirm its effectiveness, Fujitsu Labs evaluated the technology on an AlexNet multi-layered neural network, where it achieved 14.7x the speed of one GPU with 16 GPUs and 27x with 64 GPUs. “These are the world’s fastest processing speeds, representing an…
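For readers unfamiliar with the approach, the sketch below illustrates generic synchronous data-parallel training in plain NumPy: each simulated worker computes a gradient on its own data shard, and the gradients are then averaged (the “all-reduce” communication step whose cost grows with the number of machines) before every replica applies the same update. The model, data sizes, worker count, and learning rate here are all hypothetical; this is an illustration of data-parallel gradient sharing in general, not Fujitsu’s actual technology.

```python
# Minimal sketch of synchronous data-parallel SGD (not Fujitsu's
# implementation). Each simulated "GPU" computes a gradient on its own
# shard; gradients are averaged (an all-reduce) before the shared
# weights are updated. All sizes below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_workers = 4                      # stand-in for GPU-equipped machines
X = rng.normal(size=(512, 8))      # toy inputs
w_true = rng.normal(size=8)        # ground-truth weights to recover
y = X @ w_true + 0.1 * rng.normal(size=512)

w = np.zeros(8)                    # model weights, replicated on every worker
shards = np.array_split(np.arange(512), n_workers)

for step in range(200):
    # Each worker computes a local gradient on its shard (in parallel
    # on real hardware; sequential here for clarity).
    grads = []
    for idx in shards:
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))
    # All-reduce: average gradients across workers. On a cluster this
    # is the data-sharing step whose cost grows with worker count,
    # which is why speedup tails off as machines are added.
    g = np.mean(grads, axis=0)
    w -= 0.05 * g                  # identical update on every replica

print("recovered weights close to truth:", np.allclose(w, w_true, atol=0.05))
```

For context on the reported numbers: 14.7x on 16 GPUs corresponds to a parallel efficiency of roughly 92%, while 27x on 64 GPUs is roughly 42%, showing how heavily the data-sharing step still weighs at larger scales.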


Link to Full Article: Better parallel performance for artificial intelligence
