Intel will take its CPU dies, slap on an NVLink die, and sell the result to Nvidia, which will then drop it into its racks.
That enables better rack/data-center-level agentic access to legacy SQL and application systems that are stuck on x86. But NVIDIA also seems to be working with CSPs and application providers to accelerate traditional data processing, based on examples they gave at GTC 2026:
• Google Cloud + Snap
• Snap runs GPU‑accelerated Apache Spark A/B‑testing pipelines on Google Cloud (GKE + L4 GPUs) using NVIDIA RAPIDS/cuDF.
• Processes 10+ PB of data per day, analyzing 6,000+ metrics for 940M+ users.
• Achieves about 4× faster runtimes and roughly 76% daily cost savings versus prior CPU‑only Spark clusters.
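Moving an existing Spark job onto GPUs with the RAPIDS Accelerator is mostly a configuration change rather than a rewrite, which is what makes the "no code changes" claim plausible. A minimal sketch of the launch config (the plugin class is the real one; the jar path, resource amounts, and job script are placeholders, not Snap's actual settings):

```shell
# Sketch: enabling the NVIDIA RAPIDS Accelerator on an existing Spark job.
# Jar version/path and resource numbers below are illustrative placeholders.
spark-submit \
  --jars rapids-4-spark_2.12-<version>.jar \
  --conf spark.plugins=com.nvidia.spark.SQLPlugin \
  --conf spark.rapids.sql.enabled=true \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=0.25 \
  my_ab_testing_job.py
```

The plugin rewrites supported SQL/DataFrame operators to GPU implementations at plan time and falls back to the CPU for anything it cannot accelerate, so the job's own code stays untouched.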
• IBM + NVIDIA (Nestlé data mart / watsonx.data)
• IBM integrates NVIDIA cuDF and GPU acceleration into watsonx.data (Presto/Velox SQL) for large enterprise data marts.
• Used on Nestlé’s global “Order‑to‑Cash” data mart (multi‑TB, 44 tables, 186 countries) to accelerate SQL analytics on GPUs instead of CPUs.
• Delivers materially faster query performance and lower TCO by offloading analytics to NVIDIA GPUs while IBM Storage Scale feeds 10 PB+ of data.
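The reason this kind of offload is low-friction is that cuDF deliberately mirrors the pandas API. A minimal sketch of the pattern, using pandas so it runs anywhere (on a GPU system you would swap the import for `import cudf as pd`); the order-to-cash column names are invented for illustration:

```python
import pandas as pd  # on a GPU system: import cudf as pd (same API surface)

# Hypothetical order-to-cash fact table: the kind of country-level SQL
# aggregation an engine like watsonx.data can push down to cuDF on GPUs.
orders = pd.DataFrame({
    "country": ["CH", "US", "US", "BR", "CH"],
    "amount":  [120.0, 75.5, 30.0, 210.0, 99.5],
    "status":  ["paid", "paid", "open", "paid", "open"],
})

# Filter + group + aggregate: SELECT country, SUM(amount) ... WHERE status='paid'
paid = orders[orders["status"] == "paid"]
revenue_by_country = (
    paid.groupby("country")["amount"].sum().sort_values(ascending=False)
)
print(revenue_by_country.to_dict())  # {'BR': 210.0, 'CH': 120.0, 'US': 75.5}
```

Because the API is shared, the same dataframe logic runs on CPU (pandas) or GPU (cuDF) depending only on the import.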
• Oracle + NVIDIA (OCI + Oracle Database AI)
• Oracle Cloud Infrastructure offers GPU‑accelerated Spark via the NVIDIA RAPIDS Accelerator, so existing Spark ETL/analytics jobs can run on GPUs without code changes.
• Oracle Database 23ai/26ai uses NVIDIA cuVS and related GPU libraries to accelerate vector search and index generation directly inside the database.
• Joint positioning: faster data preparation and AI workloads with lower cost by shifting heavy data processing from CPU‑only Oracle and Spark environments to NVIDIA GPU‑accelerated infrastructure on OCI.
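At its core, what cuVS accelerates inside the database is nearest-neighbor search over embedding vectors. A CPU sketch of the exact brute-force version in NumPy, using toy data (cuVS provides GPU-built approximate indexes for the same operation at scale, which is what makes it worth doing on GPUs):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "embedding column": stand-in for vectors stored alongside table rows.
vectors = rng.standard_normal((1000, 64)).astype(np.float32)
# Normalize rows so a dot product equals cosine similarity.
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

def top_k(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Exact nearest neighbors by cosine similarity (what an ANN index approximates)."""
    q = query / np.linalg.norm(query)
    scores = vectors @ q                 # one matvec: similarity to every row
    return np.argsort(scores)[::-1][:k]  # indices of the k most similar rows

hits = top_k(vectors[42])
# A vector's nearest neighbor is itself, so row 42 ranks first.
```

The brute-force scan is one dense matrix-vector product, which already maps well to a GPU; index generation (the other thing the bullet mentions) is the step that builds an approximate structure so the database can skip scanning every row.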