Mark Zuckerberg is all in on AI, so much so that he’s invested billions into Meta’s own chip development process, so that his company can build better AI data processing systems without having to rely on external providers.
As reported by Reuters:
“Facebook owner Meta is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce its reliance on external suppliers like Nvidia, two sources told Reuters. The world’s largest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.”
Which is a significant development, considering that Meta currently has around 350,000 Nvidia H100 chips powering its AI projects, each of which costs around $25k off the shelf.
That’s not what Meta would have paid, as it’s ordering them in massive volumes. Even so, the company recently announced that it’s boosting its AI infrastructure spend to the tune of around $65 billion in 2025, which will include various expansions of its data centers to accommodate new AI chip stacks.
And the company may have also indicated the scope of how its own chips will increase its capacity, in a recent overview of how its AI development is evolving.
“By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100s as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.”
So, presumably, while Meta will have 350k H100 units, it’s actually hoping to replicate the compute power of almost double that.
Could that additional capacity be coming from its own chips?
The development of its own AI hardware could also lead to external opportunities for the company, with H100s in huge demand, and limited supply, amid the broader AI gold rush.
More recently, Nvidia has been able to reduce the wait times for H100 delivery, suggesting that the market is cooling off a little. But even without that external opportunity, the fact that Meta may be able to build out its own AI capacity with internally developed chips could be a huge advantage for Zuck and Co. moving forward.
Because processing capacity has become a key differentiator, and may end up being the element that defines the ultimate winner in the AI race.
For comparison, while Meta has 350k H100s, OpenAI reportedly has around 200k, while xAI’s “Colossus” supercenter is currently running on 200k H100 chips as well.
Other tech giants, meanwhile, are developing their own alternatives, with Google working on its “Tensor Processing Unit” (TPU), while Microsoft, Amazon and OpenAI are all working on their own AI chip projects.
The next battleground, then, could be the tariff wars, with the U.S. government implementing big taxes on various imports in order to penalize foreign suppliers and (theoretically) benefit local business.
If Meta’s doing more of its manufacturing in the U.S., that could be another point of advantage, which may give it another boost over the competition.
But then again, as newer models like DeepSeek have shown, it may not end up being the processing power that wins, but the ways in which it’s used that actually define the market.
That’s also speculative, as DeepSeek has benefited more from other AI projects than it initially appeared. But still, there could be more to it, and if compute power does end up being the critical factor, it’s hard to see Meta losing out, depending on how well its chip project fares.