Fujitsu Technology Speeds Deep Learning

Fujitsu has announced that researchers at Fujitsu Labs have discovered a way to speed deep learning dramatically. According to the company, its new software accomplishes learning tasks 46 percent faster when using 16 GPUs and 71 percent faster when using 64 GPUs. It added that when it tested the software on the AlexNet neural network for image recognition, machine learning jobs that typically take a month on a system with one GPU required only one day on a system with 64 GPUs.

Deep learning, a subset of the machine learning branch of artificial intelligence (AI), trains neural networks by processing enormous amounts of data. GPUs excel at processing these large data volumes, but scaling training across multiple GPUs in parallel poses challenges, chiefly the communication overhead of keeping the GPUs synchronized. Fujitsu says its new software overcomes many of those issues, making it practical to apply far more GPUs to deep learning workloads.
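Fujitsu has not published the details of its technique, but the synchronization problem it targets can be illustrated with a minimal sketch of synchronous data-parallel training: each simulated worker computes a gradient on its own mini-batch, the gradients are averaged (the role an all-reduce plays on real multi-GPU systems), and every worker applies the same update. The toy quadratic loss and all numeric values below are illustrative assumptions, not Fujitsu's method.

```python
import numpy as np

def allreduce_average(grads):
    """Average per-worker gradients, as an all-reduce does in
    synchronous data-parallel training."""
    return sum(grads) / len(grads)

rng = np.random.default_rng(0)
num_workers = 4            # stand-ins for 4 GPUs
w = np.zeros(3)            # model parameters, replicated on every worker

for step in range(20):
    # Each worker computes a local gradient on its own mini-batch.
    # Toy loss: ||w - 1||^2, so the gradient is 2*(w - 1) plus batch noise.
    grads = [2.0 * (w - 1.0) + 0.01 * rng.normal(size=3)
             for _ in range(num_workers)]
    g = allreduce_average(grads)   # synchronization point each step
    w -= 0.1 * g                   # identical update on every replica
```

The averaging step is where multi-GPU scaling typically stalls: it forces all workers to exchange gradients every iteration, so communication cost grows with GPU count while per-GPU compute shrinks, which is the kind of bottleneck software like Fujitsu's aims to reduce.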

Fujitsu plans to incorporate its new technology into its Human Centric AI Zinrai products sometime during the current fiscal year.
