Date: December 16, 2025
The chip giant strengthens its AI infrastructure play with a key acquisition and new open models aimed at powering the next generation of enterprise AI agents.
Nvidia made a two-pronged push into open-source AI on Monday, announcing the acquisition of workload management developer SchedMD while simultaneously unveiling its Nemotron 3 family of open AI models.
The semiconductor powerhouse acquired SchedMD, the company behind Slurm, the dominant open-source workload management system used in high-performance computing and AI. Financial terms were not disclosed. Nvidia confirmed it will continue operating Slurm as open-source, vendor-neutral software available to the broader computing community.
Slurm is used in more than half of the top 10 systems, and more than half of the top 100, on the TOP500 list of supercomputers. The software, originally launched in 2002, handles the complex work of queuing, scheduling, and allocating computational resources across massive computing clusters, a role that has become critical as AI workloads grow increasingly demanding.
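In practice, Slurm users describe a job's resource needs in a batch script and hand it to the scheduler, which decides when and where it runs. The sketch below shows the typical pattern; the partition name, node and GPU counts, and training command are illustrative placeholders, not details from Nvidia or SchedMD.

```python
import subprocess
import tempfile

# Illustrative Slurm batch script: it requests nodes, GPUs, and a time limit,
# then launches the workload with srun. The partition and resource values are
# placeholders for whatever a given cluster actually provides.
job_script = """#!/bin/bash
#SBATCH --job-name=train-llm
#SBATCH --partition=gpu
#SBATCH --nodes=2
#SBATCH --gres=gpu:8
#SBATCH --time=04:00:00

srun python train.py --config config.yaml
"""

with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
    f.write(job_script)
    script_path = f.name

# sbatch only queues the job; Slurm decides when and where it runs based on
# available resources and the cluster's scheduling policy.
subprocess.run(["sbatch", script_path], check=True)
```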
"We're thrilled to join forces with NVIDIA, as this acquisition is the ultimate validation of Slurm's critical role in the world's most demanding HPC and AI environments," said Danny Auble, CEO of SchedMD, in NVIDIA's blog announcement. "NVIDIA's deep expertise and investment in accelerated computing will enhance the development of Slurm—which will continue to be open source—to meet the demands of the next generation of AI and supercomputing."
The two companies have collaborated for over a decade. SchedMD was founded in 2010 by Slurm developers Morris Jette and Danny Auble.
Alongside the acquisition, Nvidia released Nemotron 3, which it describes as the most efficient family of open models for building agentic AI applications. The release comes as enterprises increasingly shift from simple chatbots to complex multi-agent AI systems.
The Nemotron 3 family comprises three tiers: Nemotron 3 Nano, a 30-billion-parameter model for targeted tasks; Nemotron 3 Super, with approximately 100 billion parameters for multi-agent applications; and Nemotron 3 Ultra, with around 500 billion parameters for complex reasoning tasks.
"Open innovation is the foundation of AI progress," said Jensen Huang, founder and CEO of Nvidia, in the company's press release. "With Nemotron, we're transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale."
The Nano model, available immediately, uses a hybrid mixture-of-experts architecture that Nvidia claims delivers up to 4x higher token throughput compared with its predecessor, with reasoning-token generation reduced by up to 60%. The Super and Ultra models are expected in the first half of 2026.
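Nvidia has not detailed the architecture beyond calling it a hybrid mixture-of-experts design, but the general idea behind such efficiency claims can be sketched: a gating network routes each token to a small subset of expert networks, so only a fraction of the model's parameters is active per token. The following is a generic, illustrative top-k MoE layer, not Nemotron's actual implementation; the layer sizes and expert count are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Generic top-k mixture-of-experts layer (illustrative, not Nemotron's)."""
    def __init__(self, d_model=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)  # router scores experts per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                  # x: (tokens, d_model)
        scores = self.gate(x)                              # (tokens, n_experts)
        weights, idx = torch.topk(scores, self.k, dim=-1)  # keep only top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token, which is where the
        # compute efficiency of MoE models comes from.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(TopKMoELayer()(tokens).shape)  # torch.Size([16, 512])
```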
Major enterprises, including Accenture, Cadence, CrowdStrike, Cursor, Deloitte, Oracle Cloud Infrastructure, Palantir, Perplexity, ServiceNow, Siemens, Synopsys, and Zoom, are integrating Nemotron models into their AI workflows across manufacturing, cybersecurity, software development, and communications.
"NVIDIA and ServiceNow have been shaping the future of AI for years, and the best is yet to come," said Bill McDermott, chairman and CEO of ServiceNow, in the announcement. "ServiceNow's intelligent workflow automation combined with NVIDIA Nemotron 3 will continue to define the standard with unmatched efficiency, speed and accuracy."
Perplexity CEO Aravind Srinivas highlighted the model's role in routing strategies. "With our agent router, we can direct workloads to the best fine-tuned open models, like Nemotron 3 Ultra, or leverage leading proprietary models when tasks benefit from their unique capabilities," he said.
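Perplexity has not published how its agent router decides, but the pattern Srinivas describes can be sketched as a simple dispatch function. The model names, task labels, and thresholds below are hypothetical stand-ins for illustration, not Perplexity's actual routing rules.

```python
# Hypothetical routing logic, for illustration only. The model identifiers
# stand in for whatever endpoints a production router would actually call.
def route(task: str, needs_long_reasoning: bool, latency_budget_ms: int) -> str:
    if latency_budget_ms < 500:
        return "nemotron-3-nano"        # small open model for quick, targeted tasks
    if needs_long_reasoning:
        return "nemotron-3-ultra"       # large open model for complex reasoning
    if task in {"vendor-tool-use", "multimodal-analysis"}:
        return "proprietary-frontier"   # fall back to a proprietary model
    return "nemotron-3-super"           # default tier for multi-agent workloads

print(route("web-research", needs_long_reasoning=True, latency_budget_ms=3000))
```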
The moves come alongside Nvidia's bet that physical AI will be the next frontier for its GPUs: last week, the company announced Alpamayo-R1, an open reasoning vision language model focused on autonomous driving research, and expanded resources for its Cosmos world models.
Nvidia also released three trillion tokens of new training datasets and reinforcement learning libraries to support customized AI agent development.
Nemotron 3 Nano is available through Hugging Face and inference providers including Baseten, DeepInfra, Fireworks, and Together AI, with enterprise deployment available via NVIDIA NIM microservices.
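For developers, pulling the Nano model from Hugging Face with the transformers library would look roughly like the sketch below. The repository id is an assumed placeholder for whatever NVIDIA actually publishes, so check the official model card before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id -- NVIDIA's actual model card on Hugging Face may differ.
model_id = "nvidia/Nemotron-3-Nano"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Draft a step-by-step plan for triaging open support tickets."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```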
By Arpit Dubey
Arpit is a dreamer, wanderer, and tech nerd who loves to jot down tech musings and updates. With a knack for crafting compelling narratives, Arpit has a sharp specialization in everything: from Predictive Analytics to Game Development, along with artificial intelligence (AI), Cloud Computing, IoT, and let’s not forget SaaS, healthcare, and more. Arpit crafts content that’s as strategic as it is compelling. With a Logician's mind, he is always chasing sunrises and tech advancements while secretly preparing for the robot uprising.