Nvidia’s Blackwell chips double AI training speed, MLCommons benchmark shows

Nvidia’s newest Blackwell chips have significantly advanced the training of large artificial intelligence systems, dramatically reducing the number of chips needed to train massive language models, new data released Wednesday shows.

The results, published by MLCommons, a nonprofit that issues benchmark performance results for AI systems, detail improvements across chips from Nvidia and Advanced Micro Devices (AMD), among others, Reuters reported. The benchmarks focus on AI training, the phase in which systems learn from vast datasets, which remains a key competitive frontier despite the market’s growing focus on AI inference, or responding to user queries.

One major finding was that Nvidia and its partners were the only ones to submit data for training a large-scale model like Llama 3.1 405B, an open-source AI system from Meta Platforms with 405 billion parameters. The model is complex enough to stress-test modern chips and expose their true training capabilities.

According to the data, Nvidia’s new Blackwell chips are more than twice as fast per chip as the previous-generation Hopper chips. In the fastest result, a cluster of 2,496 Blackwell chips completed the training task in just 27 minutes; by contrast, more than three times that number of Hopper chips were needed to match or better that performance.

At a press conference, Chetan Kapoor, chief product officer at CoreWeave, which collaborated with Nvidia on the benchmark tests, highlighted a broader industry trend. “Using a methodology like that, they’re able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes,” Kapoor said, referring to a shift from massive monolithic systems to modular training infrastructures made up of smaller chip groups.

The benchmark results underline Nvidia’s dominance in the AI training space, even as rivals like China’s DeepSeek claim competitive performance with fewer chips. As the race to power ever-larger AI systems intensifies, chip efficiency in training tasks remains a critical metric.
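As a rough illustration of what these figures imply, the per-chip speedup can be sanity-checked with a back-of-envelope calculation: if roughly three times as many Hopper chips are needed to finish a comparable run in about the same wall-clock time, Blackwell’s per-chip throughput works out to roughly three times higher, consistent with the “more than twice as fast per chip” claim. The short Python sketch below walks through that arithmetic; the exact Hopper chip count and the assumption of equal run time are illustrative readings of the article, not official MLCommons figures.

# Back-of-envelope check of the per-chip speedup implied by the reported figures.
# Assumption: a Hopper cluster roughly 3x the size takes about the same wall-clock time.

blackwell_chips = 2496        # chips in the fastest Blackwell submission (reported)
blackwell_minutes = 27        # reported training time for the Llama 3.1 405B task

hopper_chips = 3 * blackwell_chips   # "more than three times that number" -- lower bound
hopper_minutes = 27                  # assumed comparable wall-clock time (illustrative)

# Per-chip throughput is proportional to 1 / (chip count * time to finish the job).
blackwell_per_chip = 1 / (blackwell_chips * blackwell_minutes)
hopper_per_chip = 1 / (hopper_chips * hopper_minutes)

speedup = blackwell_per_chip / hopper_per_chip
print(f"Implied per-chip speedup: ~{speedup:.1f}x")   # ~3.0x under these assumptions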




