Nvidia Hands Developers the Keys to Scalable AI with Nemotron 3 and SchedMD Acquisition

Date: December 16, 2025

The chip giant strengthens its AI infrastructure play with a key acquisition and new open models aimed at powering the next generation of enterprise AI agents.

Nvidia made a two-pronged push into open-source AI on Monday, announcing the acquisition of workload management developer SchedMD while simultaneously unveiling its Nemotron 3 family of open AI models.

The semiconductor powerhouse acquired SchedMD, the company behind Slurm, the dominant open-source workload management system used in high-performance computing and AI. Financial terms were not disclosed. Nvidia confirmed it will continue operating Slurm as open-source, vendor-neutral software available to the broader computing community.

Slurm runs on more than half of the systems in both the top 10 and the top 100 of the TOP500 list of supercomputers. Originally released in 2002, the software handles the complex task of queuing, scheduling, and allocating computational resources across massive computing clusters, infrastructure that has become critical as AI workloads grow increasingly demanding.
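For readers unfamiliar with how Slurm fits into an AI training pipeline, the sketch below shows roughly what a multi-GPU job submission looks like. It is a minimal illustration using standard sbatch directives; the partition name, script paths, and resource counts are placeholders chosen for this example, not values from Nvidia or SchedMD.

```python
# Minimal sketch: submitting a multi-GPU training job to a Slurm cluster.
# Partition name, paths, and resource counts are illustrative placeholders.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#SBATCH --job-name=llm-train          # name shown in the queue
#SBATCH --nodes=4                     # number of compute nodes
#SBATCH --ntasks-per-node=8           # one task per GPU
#SBATCH --gpus-per-node=8             # GPUs requested on each node
#SBATCH --time=04:00:00               # wall-clock limit (HH:MM:SS)
#SBATCH --partition=gpu               # placeholder partition name

srun python train.py --config config.yaml
"""

def submit_job() -> str:
    """Write the job script to a temp file and hand it to sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
        f.write(JOB_SCRIPT)
        path = f.name
    # sbatch only queues the job; Slurm decides when and where it runs.
    result = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g. "Submitted batch job 12345"

if __name__ == "__main__":
    print(submit_job())
```

The scheduler's job is everything that happens after that command returns: matching the request against available nodes, enforcing queue priorities, and launching the tasks once resources free up.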

"We're thrilled to join forces with NVIDIA, as this acquisition is the ultimate validation of Slurm's critical role in the world's most demanding HPC and AI environments," said Danny Auble, CEO of SchedMD, in NVIDIA's blog announcement. "NVIDIA's deep expertise and investment in accelerated computing will enhance the development of Slurm—which will continue to be open source—to meet the demands of the next generation of AI and supercomputing."

The two companies have collaborated for more than a decade. SchedMD was founded in 2010 by Slurm developers Morris Jette and Danny Auble.

New Open Models Target Enterprise AI Agents

Alongside the acquisition, Nvidia released Nemotron 3, which it describes as the most efficient family of open models for building agentic AI applications. The release comes as enterprises increasingly shift from simple chatbots to complex multi-agent AI systems.

The Nemotron 3 family comprises three tiers: Nemotron 3 Nano, a 30-billion-parameter model for targeted tasks; Nemotron 3 Super, with approximately 100 billion parameters for multi-agent applications; and Nemotron 3 Ultra, boasting around 500 billion parameters for complex reasoning tasks.

"Open innovation is the foundation of AI progress," said Jensen Huang, founder and CEO of Nvidia, in the company's press release. "With Nemotron, we're transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale."

The Nano model, available immediately, uses a hybrid mixture-of-experts architecture that Nvidia claims delivers up to 4x higher token throughput compared with its predecessor, with reasoning-token generation reduced by up to 60%. The Super and Ultra models are expected in the first half of 2026.

Industry Adoption Already Underway

Major enterprises, including Accenture, Cadence, CrowdStrike, Cursor, Deloitte, Oracle Cloud Infrastructure, Palantir, Perplexity, ServiceNow, Siemens, Synopsys, and Zoom, are integrating Nemotron models into their AI workflows across manufacturing, cybersecurity, software development, and communications.

"NVIDIA and ServiceNow have been shaping the future of AI for years, and the best is yet to come," said Bill McDermott, chairman and CEO of ServiceNow, in the announcement. "ServiceNow's intelligent workflow automation combined with NVIDIA Nemotron 3 will continue to define the standard with unmatched efficiency, speed and accuracy."

Perplexity CEO Aravind Srinivas highlighted the model's role in routing strategies. "With our agent router, we can direct workloads to the best fine-tuned open models, like Nemotron 3 Ultra, or leverage leading proprietary models when tasks benefit from their unique capabilities," he said.
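Srinivas's description maps onto a common pattern: a lightweight router classifies each request and dispatches it to either an open model or a proprietary one. The sketch below is a generic illustration of that idea, not Perplexity's actual system; the routing rule, model labels, and handler functions are stand-ins.

```python
# Generic sketch of an agent router: pick a model backend per request.
# Routing rule and model names are illustrative stand-ins, not Perplexity's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str                       # label for the backend
    handler: Callable[[str], str]   # function that actually calls the model

def call_open_model(prompt: str) -> str:
    return f"[open reasoning model] {prompt[:40]}..."      # placeholder for a self-hosted call

def call_proprietary_model(prompt: str) -> str:
    return f"[proprietary model] {prompt[:40]}..."         # placeholder for a vendor API call

ROUTES = {
    "reasoning": Route("open-reasoning", call_open_model),
    "default": Route("proprietary-general", call_proprietary_model),
}

def route(prompt: str) -> str:
    """Toy classifier: long analytical prompts go to the open reasoning model."""
    kind = "reasoning" if len(prompt.split()) > 30 else "default"
    return ROUTES[kind].handler(prompt)

if __name__ == "__main__":
    print(route("Compare three GPU cluster scheduling strategies and justify a choice, "
                "considering preemption, fairness, utilization, and failure recovery in detail."))
```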

Strategic Positioning

The moves reflect Nvidia's bet that physical AI will be the next frontier for its GPUs. Last week, the company announced Alpamayo-R1, an open reasoning vision language model focused on autonomous driving research, and expanded resources for its Cosmos world models.

Nvidia also released three trillion tokens of new training datasets and reinforcement learning libraries to support customized AI agent development.

Nemotron 3 Nano is available through Hugging Face and inference providers including Baseten, DeepInfra, Fireworks, and Together AI, with enterprise deployment available via NVIDIA NIM microservices.
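For developers who want to try the model locally, a typical Hugging Face workflow for an openly published checkpoint looks like the sketch below, assuming the transformers and accelerate libraries are installed. The repository ID shown is a placeholder, not a confirmed path; check Nvidia's Hugging Face organization for the actual Nemotron 3 Nano model card and license.

```python
# Minimal sketch: running a Hugging Face causal LM locally with transformers.
# "nvidia/nemotron-3-nano" is a placeholder repo ID, not a confirmed model path.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nvidia/nemotron-3-nano"  # placeholder; verify the real repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Chat-style prompt; most instruct models ship a chat template with the tokenizer.
messages = [{"role": "user", "content": "Summarize what a workload scheduler does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```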

By Arpit Dubey
