
NVIDIA and Azure announce new AI collaborations

AI Collaborations

Microsoft and NVIDIA are deepening their long-standing collaboration to accelerate AI development and performance. The companies are introducing new offerings through Azure AI Foundry, including optimized microservices for popular foundation models and the integration of advanced reasoning models. Epic, a leading electronic health record company, plans to use these optimized microservices to improve AI applications and deliver better healthcare and patient outcomes.

In collaboration with UW Health and UC San Diego Health, they are researching methods to evaluate clinical summaries produced by these advanced models. The companies are also optimizing inference performance for open-source language models and ensuring their availability on Azure AI Foundry. This includes performance optimization for Meta Llama models, which has already delivered substantial improvements in throughput and latency for companies such as Synopsys.

Azure AI Foundry’s model catalog is expanding with the addition of Mistral Small 3.1, featuring multimodal capabilities and an extended context length.

NVIDIA and Azure AI advancements

The general availability of Azure Container Apps serverless GPUs with support for optimized microservices allows companies to run AI workloads on-demand with automatic scaling and reduced operational overhead.

Microsoft is accelerating the integration of high-performance GPUs and advanced networking into the Azure AI Infrastructure portfolio to support the evolution of reasoning models and agentic AI systems. Ian Buck, NVIDIA's Vice President of Hyperscale and HPC, emphasized that this integration represents a significant leap forward in unlocking the potential of reasoning AI. The combination of high-performance GPUs, low-latency networking, and Azure's scalable architectures is designed to handle the massive data throughput and intensive processing these workloads demand.

Microsoft is also set to launch new VMs based on the latest GPUs in 2025, promising exceptional performance and efficiency for the next wave of agentic and generative AI workloads. Organizations like Meter train large foundation models on Azure AI Infrastructure to automate networking end-to-end. Azure's performance and scale support Meter's AI training and inference, aiding the development of models with billions of parameters that improve network design, configuration, and management.


Microsoft and NVIDIA's collaboration continues to grow and shape the future of AI, leveraging cutting-edge technology to push the boundaries of what's possible in artificial intelligence.

Image Credits: Photo by Christian Wiediger on Unsplash

Noah Nguyen is a multi-talented developer who brings a unique perspective to his craft. Initially a creative writing professor, he turned to development work for the ability to work remotely. He now lives in Seattle, spending his time hiking and drinking craft beer with his fiancée.

About Our Editorial Process

At DevX, we’re dedicated to tech entrepreneurship. Our team closely follows industry shifts, new products, AI breakthroughs, technology trends, and funding announcements. Articles undergo thorough editing to ensure accuracy and clarity, reflecting DevX’s style and supporting entrepreneurs in the tech sphere.

See our full editorial policy.