Friday, June 6, 2025


Nvidia’s Blackwell chips double AI training speed, MLCommons benchmark shows


Nvidia’s newest Blackwell chips have significantly advanced the training of large artificial intelligence systems, dramatically reducing the number of chips needed to train massive language models, new data released Wednesday shows.

The results, published by MLCommons, a nonprofit that issues benchmark performance results for AI systems, detail improvements across chips from Nvidia and Advanced Micro Devices (AMD), among others, Reuters reported. The benchmarks focus on AI training — the phase where systems learn from vast datasets — which remains a key competitive frontier despite the market’s growing focus on AI inference, or responding to user queries.

One major finding was that Nvidia and its partners were the only ones to submit data for training a large-scale model like Llama 3.1 405B, an open-source AI system from Meta Platforms with trillions of parameters. This model is complex enough to stress-test modern chips and expose their true training capabilities.

According to the data, Nvidia’s new Blackwell chips are more than twice as fast per chip as the previous-generation Hopper chips. In the fastest result, a cluster of 2,496 Blackwell chips completed the training task in just 27 minutes. By contrast, more than three times that number of Hopper chips was needed to match or better that performance.

At a press conference, Chetan Kapoor, chief product officer at CoreWeave — which collaborated with Nvidia on the benchmark tests — highlighted a broader industry trend.

“Using a methodology like that, they’re able to continue to accelerate or reduce the time to train some of these crazy, multi-trillion parameter model sizes,” Kapoor said, referring to a shift from massive monolithic systems to modular training infrastructures made up of smaller chip groups.

The benchmark results underline Nvidia’s dominance in the AI training space, even as rivals like China’s DeepSeek claim competitive performance with fewer chips. As the race to power ever-larger AI systems intensifies, chip efficiency in training tasks remains a critical metric.
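The chip-count comparison above can be expressed as simple chip-minutes arithmetic. The sketch below is illustrative only: the 2,496-chip cluster size and 27-minute time for Blackwell are from the reported results, but the Hopper cluster size and finish time are not given exactly in the article, so they are assumed here to be exactly three times the chips at an equal 27-minute time.

```python
# Illustrative sketch: compare per-chip training efficiency via chip-minutes.
# Assumptions (not from MLCommons data): the Hopper run uses exactly 3x the
# chips and finishes in the same 27 minutes.

def per_chip_speedup(chips_a: int, minutes_a: float,
                     chips_b: int, minutes_b: float) -> float:
    """How much faster each chip in cluster A is than each chip in
    cluster B, measured as the ratio of total chip-minutes spent on
    the same training task."""
    return (chips_b * minutes_b) / (chips_a * minutes_a)

blackwell_chips, blackwell_minutes = 2496, 27
hopper_chips, hopper_minutes = 3 * 2496, 27  # assumed equal finish time

speedup = per_chip_speedup(blackwell_chips, blackwell_minutes,
                           hopper_chips, hopper_minutes)
print(f"Per-chip speedup: {speedup:.1f}x")  # 3.0x under these assumptions
```

Under these assumed numbers the per-chip advantage works out to 3x, broadly consistent with the article's "more than twice as fast per chip" claim; the exact figure depends on the Hopper cluster's true size and time, which the article only bounds.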




