NVIDIA is the latest to promise AI data centers in space.
It's the last day of Nvidia GTC. This is your guide to everything the chip maker has done this week. CEO Jensen Huang delivered a nearly three-hour keynote on Monday, setting the stage for the company's vision for 2026.
During a busy day for Nvidia at the opening of GTC 2026, the tech giant revealed new and expanded partnerships with automobile manufacturers and Uber (UBER) for its autonomous driving systems. The Nvidia DRIVE Hyperion platform has now been adopted by BYD,
Despite growing rivalry, Nvidia’s flagship AI systems will use Intel CPUs to meet enterprise deployment requirements and maintain x86 continuity across data‑center workflows.
BYD's God's Eye driver-assistance lineup spans three tiers:
- God's Eye C (DiPilot 100): camera-only, highway-focused (basic version).
- God's Eye B (DiPilot 300): adds LiDAR.
- God's Eye A (DiPilot 600): triple LiDAR for premium models.
CNBC got an exclusive first look at Vera Rubin, Nvidia's next AI system, which is due to ship in the second half of the year.
Nvidia projected a revenue opportunity of at least $1 trillion through 2027, doubling its previous estimate of $500 billion through 2026. The company became the first to reach a $5 trillion market capitalization.
That said, the direction is clear: claws are coming to the enterprise. Nvidia just made its bet on being the platform they run on, and on the guardrails that keep them in bounds.
Nvidia's next-generation DLSS 5 technology has triggered a wave of criticism after its debut showcased a shift towards generative AI-driven graphics, with gamers and developers arguing that the results appear 'uncanny' and risk undermining artistic control in video games.
Instacart is building what it calls a "grocery world model" — an AI system that connects physical store data to its online platform.
NVIDIA Dynamo 1.0, the latest release of the company's inference-serving software, provides a production-grade, open-source foundation for inference at scale. Dynamo and NVIDIA TensorRT-LLM optimizations integrate natively with open-source frameworks such as LangChain, llm-d, LMCache, SGLang, and vLLM to boost inference performance.
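Dynamo deployments typically expose an OpenAI-compatible HTTP frontend, so clients can talk to a served model with a standard chat-completions request. The sketch below shows what that looks like in plain Python; the endpoint URL and model name are assumptions for illustration, not values from this digest.

```python
import json
from urllib import request

# Hypothetical endpoint and model name: adjust to your own deployment.
DYNAMO_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> bytes:
    """Build an OpenAI-style chat-completions payload as JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload).encode("utf-8")

def send(url: str, body: bytes) -> dict:
    """POST the payload to a running inference frontend (needs a live server)."""
    req = request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    body = build_chat_request("my-served-model", "Say hello.")
    # send(DYNAMO_URL, body)  # uncomment to run against a live deployment
```

Because the frontend follows the OpenAI request schema, the same client code works unchanged whether the backend is Dynamo, vLLM, or SGLang serving the model.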