Mixture of experts: The method behind DeepSeek's frugal success

China’s DeepSeek has pulled off an AI miracle—building a top-tier artificial intelligence model while spending far less than its American rivals. At a time when AI giants are burning billions on GPUs and power-hungry data centers, this start-up has figured out a way to do more with less.
The secret? A mix of smart engineering, a clever neural network design, and some good old-fashioned mathematical efficiency.
Big AI, Small Budget
Most AI firms stack their data centers with thousands of GPUs. Meta’s latest AI model was reportedly trained on 16,000 specialized chips, each costing around $40,000. DeepSeek? Just 2,000. Their total compute bill? A mere $6 million, roughly a tenth of what Meta is rumored to have spent.
The ‘Mixture of Experts’ Trick
The key to DeepSeek’s frugal success? A method called “mixture of experts.” Traditional AI models try to learn everything in one giant neural network. That’s like stuffing all knowledge into a single brain—inefficient and power-hungry.
DeepSeek instead split the system into specialized mini-networks: one for poetry, one for coding, another for biology, and so on. Each “expert” focused on its own domain, while a “generalist” network acted as a bridge, deciding which experts to call on for each input and coordinating their answers.
Think of it like a newsroom: specialist reporters cover specific beats, while an editor connects the dots.
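To make the routing idea concrete, here is a minimal sketch of a mixture-of-experts layer in plain NumPy. This is not DeepSeek’s actual code; the sizes, the simple softmax router, and the top-k selection are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, N_EXPERTS, TOP_K = 16, 32, 4, 2  # toy sizes, purely illustrative

# Each "expert" is a tiny two-layer network; a router scores them per token.
experts = [
    (rng.standard_normal((D, H)) * 0.1, rng.standard_normal((H, D)) * 0.1)
    for _ in range(N_EXPERTS)
]
router = rng.standard_normal((D, N_EXPERTS)) * 0.1

def moe_forward(x):
    """Send a token vector x through only its TOP_K highest-scoring experts."""
    logits = x @ router                       # one relevance score per expert
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                      # softmax over experts
    top = np.argsort(probs)[-TOP_K:]          # pick the k most relevant experts
    gate = probs[top] / probs[top].sum()      # renormalize their mixing weights
    out = np.zeros_like(x)
    for g, i in zip(gate, top):               # only the chosen experts compute
        w1, w2 = experts[i]
        out += g * (np.maximum(x @ w1, 0.0) @ w2)  # small ReLU network, gated
    return out

token = rng.standard_normal(D)
print(moe_forward(token).shape)  # (16,): full-size output, half the experts ran
```

The saving is structural: each token pays for TOP_K experts rather than all N_EXPERTS, so the model’s total capacity can grow without the per-token compute growing with it.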
The Decimal Game
If that wasn’t enough, DeepSeek also squeezed efficiency out of pure mathematics. AI models rely on mind-boggling amounts of number crunching, typically using 16-bit precision. DeepSeek? They slashed it to 8 bits—halving memory use and speeding up calculations.
Losing precision sounds risky, right? Not really. Just as rounding π to 3.14 works for most practical uses, trimming decimals didn’t hurt the AI’s performance. And for the calculations where rounding errors could pile up, DeepSeek accumulated the results at 32-bit precision, getting the best of both worlds.
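The arithmetic trick is easy to demonstrate. NumPy has no 8-bit float type, so the sketch below uses symmetric 8-bit integer quantization as a stand-in for the 8-bit formats a production system would use; all names and sizes are illustrative:

```python
import numpy as np

def quantize_8bit(x):
    """Store values as int8 plus one float scale (symmetric quantization)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in "weights"
x = rng.standard_normal(256).astype(np.float32)         # one activation vector

qw, sw = quantize_8bit(w)
qx, sx = quantize_8bit(x)

# Multiply the tiny 8-bit values, but accumulate the running sums in 32 bits
# so rounding errors don't pile up, then rescale back to ordinary floats.
acc = qx.astype(np.int32) @ qw.astype(np.int32)
y8 = acc.astype(np.float32) * (sw * sx)

y32 = x @ w  # full-precision reference
print(np.max(np.abs(y8 - y32)) / np.max(np.abs(y32)))  # small relative error
```

The 8-bit values take half the memory of 16-bit ones, while the 32-bit accumulator keeps the many tiny rounding errors from compounding, which is the “best of both worlds” the article describes.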
Why Didn’t Others Do It?
AI giants like OpenAI and Google’s DeepMind have the brains and the budget, so why didn’t they crack this code first? Simple: risk.
Building AI models is expensive, and experimenting with new techniques can burn millions with no guarantee of success. DeepSeek took that gamble—and it paid off.
Now that they’ve published their findings, the industry is taking note. AI development just got a whole lot cheaper. The question is: who will be next to follow suit?