Intel plans to tap into its ‘enterprise, cloud and partner channels’ for a new ‘multiyear strategic collaboration’ it has ...
If Nvidia integrates Groq's technology, it solves the "waiting for the robot to think" problem and preserves the magic of AI. Just as they moved from rendering pixels (gaming) to rendering ...
Being more efficient and more powerful than chemical rockets, plasma engines are seen as the future of human spaceflight. While NASA-backed plasma rocket projects aim for travel times to Mars of just ...
Formula 1’s car design revolution for 2026 is the biggest in a generation. Not only are the chassis designs ...
Microsoft has announced the launch of its latest chip, the Maia 200, which the company describes as a silicon workhorse designed for scaling AI inference. The Maia 200, which follows the company’s Maia 100 ...
Car companies have been re-launching powerful V8 engines that were previously discontinued. Many high-powered trucks were removed from the US market as car companies switched to more fuel-efficient ...
NASA is getting ready to launch its massive, fully expendable rocket for the first crewed flight to the Moon since Apollo. The agency’s new era of spaceflight comes with a few parts from its past, ...
The creators of the open source project vLLM have announced that they have transitioned the popular tool into a VC-backed startup, Inferact, raising $150 million in seed funding at an $800 million ...
Shakti P. Singh, Principal Engineer at Intuit and former OCI model inference lead, specializes in scalable AI systems and LLM inference. Generative models are rapidly making inroads into enterprise ...
With that, the AI industry is entering a “new and potentially much larger phase: AI inference,” explains an article on the Morgan Stanley blog. They characterize this phase by widespread AI model ...
If GenAI is going to go mainstream and not just be a bubble that helps prop up the global economy for a couple of years, AI inference is going to have to come down in price – and do so faster than it ...
The AI hardware landscape continues to evolve at breakneck speed, and memory technology is rapidly becoming a defining differentiator for the next generation of GPUs and AI inference accelerators.