Oracle and NVIDIA just announced a partnership that could transform how businesses use artificial intelligence. The tech giants unveiled massive new infrastructure and software tools designed to bring AI capabilities directly into company databases and cloud systems. At the heart of this collaboration is the OCI Zettascale10 computing cluster, the largest AI supercomputer in the cloud.
The mind-boggling Zettascale10 supercomputer
Oracle’s new OCI Zettascale10 cluster will connect up to 800,000 NVIDIA GPUs across multiple data centers. That’s a staggering amount of computing power: enough graphics cards to fill several football stadiums.
The system promises 16 zettaFLOPS of peak AI performance. One zettaFLOPS is a sextillion (10²¹) calculations per second. Multiply that by 16 and you get computational power that’s almost impossible to comprehend. This represents a 10x improvement over the previous Zettascale generation.
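As a rough sanity check on those figures (assuming, purely for illustration, that peak performance is spread evenly across the full GPU count, which the announcement does not state):

```python
# Back-of-envelope: what 16 zettaFLOPS across 800,000 GPUs implies.
# Assumption (not from the announcement): peak spread evenly per GPU.

peak_flops = 16e21          # 16 zettaFLOPS = 16 * 10^21 operations/second
gpu_count = 800_000

per_gpu = peak_flops / gpu_count
print(f"{per_gpu:.0e} FLOPS per GPU")   # 2e+16, i.e. 20 petaFLOPS per GPU
```

Twenty petaFLOPS per accelerator is in the ballpark of low-precision peak figures for current-generation data center GPUs, which suggests the headline number refers to reduced-precision AI throughput rather than traditional double-precision compute.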
“Oracle and NVIDIA are bringing together OCI’s distributed cloud and our full-stack AI infrastructure to deliver AI at extraordinary scale”, said Ian Buck, Vice President of Hyperscale at NVIDIA. “Featuring NVIDIA full-stack AI infrastructure, OCI Zettascale10 provides the compute fabric needed to advance state-of-the-art AI research and help organizations everywhere move from experimentation to industrialized AI”.
Multi-gigawatt power consumption
The Zettascale10 clusters will consume multiple gigawatts of electricity. That’s enough power to run entire cities. Oracle is building specialized data center campuses specifically to house these massive systems.
These facilities pack enormous computational density within a two-kilometer radius. This tight physical clustering is essential for minimizing latency between GPUs during large-scale AI training. Every millisecond of delay matters when you’re processing the massive datasets required for advanced AI models.
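A quick back-of-envelope calculation shows why the two-kilometer figure matters. Assuming signals travel through optical fiber at roughly two-thirds the speed of light:

```python
# Rough physics of why GPU clusters are packed tightly:
# propagation delay alone grows linearly with distance.

SPEED_OF_LIGHT = 3.0e8   # m/s, in vacuum
FIBER_FACTOR = 2 / 3     # light in optical fiber travels at roughly 2/3 c

def one_way_delay_us(distance_m: float) -> float:
    """One-way propagation delay in microseconds over fiber."""
    return distance_m / (SPEED_OF_LIGHT * FIBER_FACTOR) * 1e6

print(one_way_delay_us(2_000))    # ~10 microseconds across a 2 km campus
print(one_way_delay_us(100_000))  # ~500 microseconds between distant sites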
The first major deployment uses this architecture for the Stargate project in Abilene, Texas. Oracle is building this supercomputer in partnership with OpenAI, which has committed $300 billion toward AI infrastructure investments.
Revolutionary networking architecture
Oracle developed Acceleron RoCE networking specifically to handle the Zettascale10’s unprecedented scale. Traditional data center networks create bottlenecks when thousands of GPUs need to communicate simultaneously. Acceleron RoCE eliminates these bottlenecks through clever engineering.
The system uses NVIDIA’s Spectrum-X Ethernet platform, the first Ethernet networking platform purpose-built for AI workloads. This technology allows GPUs to connect to multiple network switches simultaneously. Each connection uses a separate isolated network plane.
When one network plane experiences problems, traffic automatically shifts to other planes. This prevents the costly stalls and restarts that plague traditional supercomputing clusters. For companies training AI models that take weeks or months, avoiding even a single restart can save hundreds of thousands of dollars.
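The failover behavior described above can be sketched as a toy routine (illustrative only; this is not Oracle’s actual Acceleron implementation):

```python
# Toy illustration of multi-plane failover: each GPU link is homed to
# several independent network planes, and traffic shifts to a healthy
# plane instead of stalling the whole training job.

def pick_plane(planes: dict[str, bool], preferred: str) -> str:
    """Return the preferred plane if healthy, else any healthy plane."""
    if planes.get(preferred):
        return preferred
    for name, healthy in planes.items():
        if healthy:
            return name
    raise RuntimeError("no healthy plane; job must checkpoint and restart")

planes = {"plane-0": True, "plane-1": True, "plane-2": True}
assert pick_plane(planes, "plane-0") == "plane-0"

planes["plane-0"] = False                            # plane-0 fails mid-run
assert pick_plane(planes, "plane-0") == "plane-1"    # traffic shifts, no restart
```

The expensive restart only happens in the worst case, when every plane is down at once; with several isolated planes, that probability is far lower than with a single shared fabric.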
Oracle AI Database 26ai brings intelligence to your data
Oracle’s new Database 26ai flips conventional AI wisdom on its head. Instead of moving your data to where AI models live, Oracle brings the AI directly into your database. This “AI for Data” approach solves massive security and efficiency problems.
Traditional AI deployments require copying sensitive business data to external AI services. This creates security vulnerabilities, compliance nightmares, and adds latency. Oracle’s approach keeps your private data locked inside your secure database while still providing advanced AI capabilities.
“By architecting AI and data together, Oracle AI Database makes ‘AI for Data’ simple to learn and simple to use”, explained Juan Loaiza, Executive VP of Oracle Database Technologies. “We enable our customers to easily deliver trusted AI insights, innovations, and productivity for all their data, everywhere, including both operational systems and analytic data lakes”.
Agentic AI workflows inside your database
One of Database 26ai’s killer features is the ability to run agentic AI workflows directly within the database. AI agents are autonomous programs that can complete complex multi-step tasks without constant human supervision.
These agents can combine your company’s private sensitive data with public information to answer sophisticated questions. The crucial difference is that your private data never leaves your secure database environment. The AI does all its work inside the database’s security perimeter.
For example, an AI agent could analyze your sales data, compare it against public market trends, identify opportunities, and generate strategic recommendations. All without exposing proprietary sales figures to external AI services.
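A minimal sketch of that pattern, with all data and function names invented for illustration (this is not Oracle’s agent API): the private rows never leave the local process, and only a derived, non-sensitive conclusion is combined with public data.

```python
# Hypothetical sketch of the in-database agent pattern described above.
# The raw sales rows stay inside the security perimeter; only an
# aggregate finding is joined with public market information.

private_sales = [  # never leaves the database environment
    {"region": "EMEA", "revenue": 1.2e6},
    {"region": "APAC", "revenue": 0.8e6},
]
public_trend = {"EMEA": "+5% market growth", "APAC": "+12% market growth"}

def recommend(sales, trends):
    """Combine an in-database aggregate with public trend data."""
    best = max(sales, key=lambda r: r["revenue"])
    return (f"Strongest region: {best['region']} "
            f"({trends[best['region']]}); prioritize expansion there.")

print(recommend(private_sales, public_trend))
```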
Quantum-resistant security protects the future
Oracle implemented NIST-approved quantum-resistant encryption algorithms in Database 26ai. This protects against a threat that doesn’t even fully exist yet – quantum computers capable of breaking today’s encryption.
Some cybercriminals are already using “harvest now, decrypt later” attacks. They steal encrypted data today with plans to decrypt it once quantum computers become powerful enough. Oracle’s quantum-safe encryption ensures this strategy won’t work even decades into the future.
The database encrypts both data at rest (stored on disks) and data in flight (traveling across networks). This comprehensive encryption approach protects information at every stage.
NVIDIA NeMo integration streamlines AI deployment
Oracle Database 26ai now integrates directly with NVIDIA NeMo Retriever microservices. This might sound technical, but it makes implementing advanced AI features dramatically easier for developers.
NeMo Retriever handles the complicated plumbing required for retrieval-augmented generation (RAG). RAG lets an AI language model look up relevant facts in your company documents before answering a question, making its responses far more accurate and useful than those of a model relying solely on its training data.
The integration means developers can implement sophisticated AI features with minimal code. NVIDIA provides pre-built microservices for every stage of the RAG pipeline – data extraction, embedding generation, result reranking, and response generation.
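Those pipeline stages can be shown in miniature. Here a crude word-overlap score stands in for NeMo Retriever’s learned embeddings and reranker, and the “generation” step is a template rather than a language model:

```python
# Toy RAG pipeline mirroring the stages named above (extract, embed,
# rerank, generate), with deliberately simple stand-ins for each stage.

docs = [
    "Database 26ai supports agentic AI workflows inside the database.",
    "Zettascale10 connects up to 800,000 GPUs across data centers.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: count shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def answer(query: str) -> str:
    best = max(docs, key=lambda d: score(query, d))   # retrieve + rerank
    return f"Based on: '{best}'"                      # stand-in for generation

print(answer("how many GPUs does Zettascale10 connect?"))
```

A production pipeline replaces the word-overlap score with dense vector embeddings and a learned reranking model, but the control flow is the same: find the most relevant passages first, then generate an answer grounded in them.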
GPU-accelerated vector search coming soon
Oracle’s Private AI Services Container will soon leverage NVIDIA GPUs for vector embedding generation. Vector embeddings are mathematical representations of text, images, or other data that AI systems use for similarity searches and pattern matching.
Creating these embeddings is computationally intensive, especially as data volumes grow. GPUs excel at the parallel calculations required for embedding generation. Oracle plans to use NVIDIA’s cuVS library to offload this work to powerful graphics processors.
This GPU acceleration should dramatically reduce the time needed to prepare data for AI applications. Tasks that currently take hours could complete in minutes. For businesses processing millions of documents, this represents massive time and cost savings.
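In miniature, the similarity search those embeddings enable looks like this; hand-made three-dimensional vectors stand in for the high-dimensional embeddings a GPU library like cuVS would generate and search at scale:

```python
# What a vector similarity search does, reduced to toy scale:
# rank items by cosine similarity to a query embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

embeddings = {
    "invoice":  (0.9, 0.1, 0.0),
    "receipt":  (0.8, 0.2, 0.1),
    "vacation": (0.0, 0.1, 0.9),
}
query = (0.85, 0.15, 0.05)   # imagined embedding of "billing document"

best = max(embeddings, key=lambda k: cosine(query, embeddings[k]))
print(best)   # the nearest neighbor by cosine similarity
```

Generating embeddings for millions of documents means running a neural model over every one of them, which is exactly the kind of embarrassingly parallel arithmetic GPUs are built for.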
Oracle AI Data Platform gets GPU boost
The new Oracle AI Data Platform now includes built-in NVIDIA GPU options for high-performance workloads. It also features the NVIDIA RAPIDS Accelerator for Apache Spark, a powerful tool for data scientists and engineers.
RAPIDS lets you speed up data processing and machine learning workflows using GPUs. The best part is that it often requires zero code changes to existing Apache Spark applications. You simply enable GPU acceleration and watch your pipelines run significantly faster.
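Based on the RAPIDS Accelerator’s documented configuration approach, enabling it on an existing PySpark job looks roughly like this (a sketch: it requires a GPU-equipped cluster with the RAPIDS jar on the classpath, and exact configuration keys can vary by RAPIDS and Spark version):

```python
# Sketch: enabling the RAPIDS Accelerator on an existing PySpark job.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("existing-etl-job")
    # The plugin intercepts Spark's physical plan and runs supported
    # operators on the GPU; the DataFrame code itself is unchanged.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .config("spark.executor.resource.gpu.amount", "1")
    .getOrCreate()
)

# The rest of the pipeline is ordinary, unmodified Spark code.
df = spark.read.parquet("s3://bucket/events/")   # hypothetical path
df.groupBy("user_id").count().show()
```

The key point is that acceleration is applied at the query-plan level, which is why existing jobs can benefit without rewrites; operators the plugin doesn’t support simply fall back to the CPU.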
The platform combines enterprise data with AI models, developer tools, and privacy controls. Everything organizations need to build AI solutions lives in one integrated ecosystem.
NVIDIA AI Enterprise natively on OCI Console
NVIDIA AI Enterprise is now available directly through the Oracle Cloud Infrastructure Console. This might seem like a small detail, but it removes significant friction from the AI adoption process.
Previously, organizations needed to go through separate procurement processes to access NVIDIA’s AI tools. Now developers can provision GPU instances and enable NVIDIA’s full software suite with just a few clicks. This dramatically reduces the time from decision to deployment.
The integration includes over 160 AI tools for training and inference. Everything comes with enterprise support, flexible pricing, priority security updates, and expert guidance. The seamless experience works across OCI’s distributed cloud including public regions, sovereign clouds, and dedicated regions.
Model Context Protocol enables AI agent access
Database 26ai supports the Model Context Protocol (MCP), a standardized way for AI agents to access databases. This allows language models to interact with your database for iterative, multi-step reasoning tasks.
The protocol includes robust data privacy controls at row, column, and cell levels. You can precisely control what data AI agents can access. Dynamic data masking and granular access controls ensure AI systems only see information they’re authorized to view.
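A toy version of column-level masking makes the idea concrete (illustrative only; Oracle’s actual policies are defined and enforced inside the database, not in application code like this):

```python
# Toy column-level masking for an AI agent role: each column is either
# allowed through, masked, or denied (omitted) according to a policy.

POLICY = {"ai_agent": {"name": "mask", "ssn": "deny", "region": "allow"}}

def apply_policy(role: str, row: dict) -> dict:
    """Return only the fields the role may see, masking where required."""
    out = {}
    for col, value in row.items():
        rule = POLICY[role].get(col, "deny")   # default-deny unknown columns
        if rule == "allow":
            out[col] = value
        elif rule == "mask":
            out[col] = "***"
        # "deny": the column is omitted entirely
    return out

row = {"name": "Alice", "ssn": "123-45-6789", "region": "EMEA"}
print(apply_policy("ai_agent", row))   # {'name': '***', 'region': 'EMEA'}
```

Because the policy is applied before data ever reaches the agent, even a misbehaving model can only reason over the masked view.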
This capability is essential for enterprise AI deployment. Companies need AI’s power but can’t risk exposing sensitive customer data, financial information, or trade secrets.
Real-world customer success stories
Several major organizations are already using Oracle and NVIDIA’s integrated AI stack. Nomura Research Institute in Japan uses Oracle Alloy to accelerate cloud migration for clients while ensuring data sovereignty. Their AI applications comply with strict Japanese regulations about data residency.
e& UAE, the Emirati telecom formerly known as Etisalat, leverages NVIDIA Hopper GPUs and NVIDIA AI Enterprise capabilities. They’re innovating in generative AI while meeting the UAE’s digital sovereignty requirements. Their AI services remain fully under Emirati control despite using cutting-edge global technology.
Zoom optimizes its AI Companion assistant on OCI to comply with Saudi Arabian regulations. The integration allows Zoom to offer advanced AI features while keeping Saudi user data within Kingdom borders.
Competitive positioning against cloud giants
This partnership strengthens Oracle’s position against Amazon Web Services, Microsoft Azure, and Google Cloud. Those cloud providers also offer NVIDIA GPUs, but Oracle’s deep database integration provides unique advantages.
Oracle emphasizes its distributed cloud strategy with flexible deployment options. Customers can run identical infrastructure in public clouds, sovereign clouds, hybrid environments, and on-premises data centers. This flexibility matters for regulated industries and governments with data localization requirements.
“I believe the AI market has been defined by critical partnerships such as the one between Oracle and NVIDIA”, said Mahesh Thiagarajan, Executive VP of Oracle Cloud Infrastructure. “These partnerships provide force multipliers that help ensure customer success in this rapidly evolving space”.
AMD GPU option provides diversification
Oracle also announced plans for a 50,000 GPU cluster using AMD’s upcoming Instinct MI450 accelerators. This diversification strategy reduces dependence on any single chip supplier and provides customers with choice.
AMD’s MI450 series represents the company’s next-generation flagship AI processors. While details remain scarce, AMD positions these chips as competitive alternatives to NVIDIA’s offerings. The Helios rack design AMD revealed supports efficient deployment at scale.
Multi-vendor GPU support mirrors strategies by other cloud providers and major AI companies. OpenAI plans to deploy custom chips alongside off-the-shelf silicon to avoid overdependence on NVIDIA.
Timeline and availability
Oracle is currently accepting orders for OCI Zettascale10, with general availability expected in the second half of 2026. The long lead time reflects the massive infrastructure buildout required: data centers that draw multiple gigawatts take years of planning and construction.
Oracle AI Database 26ai is available now as a long-term support release. It replaces Oracle Database 23ai and customers can upgrade by applying the current release update. No complete database migration or application re-certification is required.
NVIDIA AI Enterprise integration through the OCI Console is also live now. Customers can immediately begin provisioning GPU instances with the full NVIDIA AI software stack.
Looking ahead at enterprise AI adoption
These announcements signal that enterprise AI is moving from experimental pilot projects to production-scale deployments. The infrastructure, tools, and integration patterns are maturing rapidly. Companies can now implement sophisticated AI systems with reasonable confidence in security, scalability, and total cost of ownership.
The “AI for Data” philosophy addresses fundamental concerns about privacy and compliance. By bringing AI to data rather than data to AI, Oracle removes major blockers to enterprise adoption. Regulated industries like healthcare, finance, and government can finally leverage AI’s power without unacceptable risks.
“Great AI needs great data”, noted Holger Mueller, VP and Principal Analyst at Constellation Research. “With Oracle AI Database 26ai, customers get both. It’s the single place where their business data lives—current, consistent, and secure. And it’s the best place to use AI on that data without moving it”.