Fujitsu has announced that researchers at Fujitsu Labs have developed software that dramatically speeds up deep learning. According to the company, the new software completes learning tasks 46 percent faster when using 16 GPUs and 71 percent faster when using 64 GPUs. It added that when it tested the software on the AlexNet neural network for image recognition, machine learning jobs that typically take a month on a system with one GPU required only one day on a system with 64 GPUs.
Deep learning, a subset of the machine learning branch of artificial intelligence (AI), processes enormous amounts of data in order to train neural networks. GPUs are good at processing these large volumes of data, but scaling to use multiple GPUs in parallel poses challenges. The new Fujitsu software overcomes many of those issues, making it possible to use many more GPUs to process deep learning workloads.
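The article does not detail Fujitsu's method, but the standard approach to multi-GPU training it alludes to is data parallelism: each GPU computes gradients on its own slice of the batch, and the gradients are then averaged across devices (an all-reduce step). The sketch below is a minimal, hypothetical illustration of that principle using numpy and a toy linear model — it simulates four "workers" on a CPU and is not Fujitsu's actual software.

```python
import numpy as np

def gradient(X, y, w):
    """Mean-squared-error gradient for a toy linear model."""
    n = len(y)
    return 2.0 / n * X.T @ (X @ w - y)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))   # one global batch of 64 examples
y = rng.normal(size=64)
w = rng.normal(size=4)

# Single-device baseline: gradient over the whole batch.
full_grad = gradient(X, y, w)

# Data parallelism: split the batch across 4 simulated workers,
# compute a local gradient on each shard, then average the results
# (the role played by all-reduce in real multi-GPU training).
shards = zip(np.split(X, 4), np.split(y, 4))
local_grads = [gradient(Xs, ys, w) for Xs, ys in shards]
avg_grad = np.mean(local_grads, axis=0)

# With equal-sized shards, the averaged gradient equals the
# full-batch gradient, so the parallel update matches the serial one.
print(np.allclose(full_grad, avg_grad))
```

The communication cost of that averaging step is exactly where naive scaling breaks down: as the GPU count grows, exchanging gradients can take longer than computing them, which is the bottleneck software like Fujitsu's aims to reduce.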
Fujitsu plans to incorporate its new technology into its Human Centric AI Zinrai products sometime during the current fiscal year.