Chip race: Microsoft, Meta, Google, and Nvidia battle it out for AI chip supremacy

The rise of generative AI has been powered by Nvidia and its advanced GPUs. As demand far outstrips supply, the H100 has become highly sought after and extremely expensive, making Nvidia a trillion-dollar company for the first time.

It’s also prompting customers, like Microsoft, Meta, OpenAI, Amazon, and Google to start working on their own AI processors. Meanwhile, Nvidia and other chip makers like AMD and Intel are now locked in an arms race to release newer, more efficient, and more powerful AI chips.

As demand for generative AI services continues to grow, it’s evident that chips will be the next big battleground for AI supremacy.

  • Nvidia overtakes Microsoft as the world’s most valuable company

Vector collage of the Nvidia logo.
    Cath Virginia / The Verge

    Less than two weeks after Nvidia jumped Apple in terms of its overall valuation, the GPU maker has now passed Microsoft to stand as the world’s most valuable company based on the chips it makes that are key to powering a boom in generative AI technology.

At the close of trading on Tuesday, its share price stood at $135.58, up $4.60 from the previous day and pushing its market cap to $3.335 trillion. That’s more than Microsoft ($3.32 trillion), Apple ($3.29 trillion), and Google ($2.17 trillion). Nvidia’s stock split 10-for-1 after markets closed on June 7th, lowering the per-share price, but the spike in the company’s value has been jarring. Its share price has gone up 160 percent in 2024, and the company only passed the $2 trillion mark in February.

    Read Article >
  • Nvidia is the world’s most valuable company at the moment.

Riding a valuation pumped up by generative AI and the chips that power many of those tools, Nvidia’s market cap has passed not only Apple’s but now Microsoft’s, too, at more than $3.3 trillion, as reported by Bloomberg.

The markets are still open, but the rise has been fast — Nvidia shares are up 160 percent in 2024, and the company’s market cap passed $2 trillion in February.


    Graph showing the market cap of Apple, Microsoft, and Nvidia since 2019.
    Image: Bloomberg
  • Nvidia is now more valuable than Apple at $3.01 trillion

Vector collage of the Nvidia logo.
    Image: Cath Virginia / The Verge

    Nvidia has become the second most valuable company in the world. On Wednesday afternoon, the chipmaking giant’s market capitalization hit $3.01 trillion, putting it just ahead of Apple at $3 trillion.

    As Nvidia dominates the AI race with its flagship H100 chip, the company’s market cap has only continued to rise. Nvidia became a $1 trillion company in May 2023, then skyrocketed past $2 trillion in February of this year, making it more valuable than both Amazon and Alphabet.

    Read Article >
  • Even the Raspberry Pi is getting in on AI

    Photo illustration of a computer with a brain on the screen.
    Illustration by Cath Virginia / The Verge | Photos by Getty Images

    As the AI craze continues, even the microcomputer company Raspberry Pi plans to sell an AI chip. It’s integrated with Raspberry Pi’s camera software and can run AI-based applications like chatbots natively on the tiny computer. 

    Raspberry Pi partnered with chipmaker Hailo for its AI Kit, which is an add-on for its Raspberry Pi 5 microcomputer that will run Hailo’s Hailo-8L M.2 accelerator. The kits will be available “soon from the worldwide network of Raspberry Pi-approved resellers” for $70.

    Read Article >
  • Emma Roth

    May 30

    Intel, Google, Microsoft, Meta, and more want to standardize the tech used in AI data centers.

The Ultra Accelerator Link (UALink) Promoter Group will work to create an open standard to help AI accelerators “communicate more effectively” within data centers and boost performance. Other members include AMD, HP, Broadcom, and Cisco — but not Nvidia, which has AI chip-linking tech of its own.


  • Nvidia will now make new AI chips every year

    Illustration by Alex Castro / The Verge

    Nvidia just made $14 billion worth of profit in a single quarter thanks to AI chips, and it’s hitting the gas from here on out: Nvidia will now design new chips every year instead of once every two years, according to Nvidia CEO Jensen Huang.

    “I can announce that after Blackwell, there’s another chip. We’re on a one-year rhythm,” Huang just said on the company’s Q1 2025 earnings call.

    Read Article >
  • Nvidia just made $14 billion of profit in a single quarter thanks to AI chips.

Sales jumped 262 percent in Q1 2025 to hit a record $26 billion in revenue, of which nearly three-quarters ($19.4 billion) came from data center compute — especially its Hopper GPUs for training LLMs and generative AI apps, says Nvidia. Gaming accounted for only $2.6 billion in revenue this quarter.

Nvidia’s expecting record revenue again next quarter — $28 billion. Shovels in a gold rush, people.


    Image: Nvidia
  • Wes Davis

    May 14

    Google announced Trillium, its sixth generation of Tensor processors.

    CEO Sundar Pichai just announced new Trillium chips, coming later this year, that are 4.7 times faster than their predecessors, as Google competes with everyone else building new AI chips. Pichai also highlighted Axion, Google’s first ARM-based CPU, which the company announced last month.

    Google will also be “one of the first” cloud companies to offer Nvidia’s Blackwell GPU starting in 2025.

    Correction: Axion was announced last month, not last year. Also, corrected the spelling of Axion.


    Sundar Pichai on stage at I/O.
    Image: Google
  • Apple plans to use M2 Ultra chips in the cloud for AI

    An illustration of the Apple logo.
    Illustration: The Verge

    Apple plans to start its foray into generative AI by offloading complex queries to M2 Ultra chips running in data centers before moving to its more advanced M4 chips.

    Bloomberg reports that Apple plans to put its M2 Ultra on cloud servers to run more complex AI queries, while simple tasks are processed on devices. The Wall Street Journal previously reported that Apple wanted to make custom chips to bring to data centers to ensure security and privacy in a project the publication says is called Project ACDC, or Apple Chips in Data Center. But the company now believes its existing processors already have sufficient security and privacy components.

    Read Article >
  • Apple’s ‘Project ACDC’ is creating AI chips for data centers.

Apple — like Google, Meta, Microsoft, OpenAI, and everyone else this side of Nvidia — is reportedly working on custom server hardware to power AI models as it prepares to introduce a slew of new features.

    Over the past decade, Apple has emerged as a leading player designing chips for iPhones, iPads, Apple Watch and Mac computers. The server project, which is internally code-named Project ACDC—for Apple Chips in Data Center—will bring this talent to bear for the company’s servers, according to people familiar with the matter.

Apple watcher Mark Gurman followed up, saying a similar-sounding project was canceled and that it doesn’t make sense anyway: it would be too expensive, offer little differentiation, and Apple prefers on-device AI.

    Update: Added Gurman’s rebuttal.


  • US plans $285 million in funding for ‘digital twin’ chips research

    Illustrations of a grid of processors seen at an angle with the middle one flipped over to show the pins and the rest shrouded in a green aura
    Illustration by Alex Castro / The Verge

The Biden administration is taking applications for $285 million in federal funding — allotted from the $280 billion CHIPS and Science Act — seeking companies to “establish and operate a CHIPS Manufacturing USA institute focused on digital twins for the semiconductor industry.” The plan calls for the CHIPS Manufacturing USA institute to establish a “regionally diverse” network to share resources with companies developing and manufacturing both physical semiconductors and digital twins.

Digital twins are virtual representations of physical chips that mimic the real version, making it easier to test how new processors might react to a boost in power or a different data configuration before they’re put into production. According to the press release, digital twin-based research can also leverage tech like AI to speed up chip development and manufacturing in the US.

    Read Article >
  • With $1B in sales, AMD’s MI300 AI chip is its fastest selling product ever.

    AMD also says an AI PC refresh cycle will help PCs return to growth in 2024, and that 150 software vendors will be developing for AMD AI PCs by year’s end. The company’s top priority is ramping AI data center GPUs, though, which are “tight on supply.” New AI chips are coming “later this year into 2025,” too.


AMD’s Q1 2024 earnings summary.
    Image: AMD
  • OpenAI will give you a 50 percent discount for off-peak GPT use.

OpenAI’s Batch API now lets users upload a file of bulk queries for the AI model, like categorizing data or tagging images, with the understanding that they won’t need immediate attention. Promising results within 24 hours lets OpenAI schedule the jobs for when there’s unused compute power, keeping those pricey GPUs humming around the clock.
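For the curious, the workflow can be sketched in a few lines of Python. This is a minimal illustration assuming the Batch API’s documented JSONL input format (one request object per line with `custom_id`, `method`, `url`, and `body` fields); the prompts, model name, and file name here are made up for the example:

```python
import json

def build_batch_lines(prompts, model="gpt-4o-mini"):
    """Build one JSONL line per bulk query, in the Batch API input format."""
    lines = []
    for i, prompt in enumerate(prompts):
        request = {
            "custom_id": f"task-{i}",  # used to match results back to requests
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        lines.append(json.dumps(request))
    return lines

# Write the bulk queries to a file for upload.
lines = build_batch_lines([
    "Categorize this support ticket: my order never arrived",
    "Tag this image description: a red bicycle leaning on a fence",
])
with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(lines))

# The file is then uploaded and a batch created with the 24-hour window,
# roughly like this (requires the openai package and an API key):
#   client = openai.OpenAI()
#   batch_file = client.files.create(file=open("batch_input.jsonl", "rb"),
#                                    purpose="batch")
#   client.batches.create(input_file_id=batch_file.id,
#                         endpoint="/v1/chat/completions",
#                         completion_window="24h")
```

The discounted pricing applies because the `completion_window="24h"` promise gives OpenAI the scheduling slack the post describes.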


  • Meta’s new AI chips run faster than before

    Image of the Meta logo and wordmark on a blue background bordered by black scribbles made out of the Meta logo.
    Illustration by Nick Barclay / The Verge

    Meta promises the next generation of its custom AI chips will be more powerful and able to train its ranking models much faster. 

    The Meta Training and Inference Accelerator (MTIA) is designed to work best with Meta’s ranking and recommendation models. The chips can help make training more efficient and inference — aka the actual reasoning task — easier. 

    Read Article >
  • Intel launches new AI accelerator to take on Nvidia’s H100.

    Intel first introduced its Gaudi 3 AI accelerator last year, but now the company has revealed more details on performance. When compared to the H100 GPU, Intel says its Gaudi 3 accelerator can deliver “50% faster time-to-train on average across the Llama2 models” with better efficiency.

    The company also says the Gaudi 3 AI Accelerator will be a “fraction of the cost” of Nvidia’s pricey H100. It will become available to companies like Dell, HPE, and Lenovo in the second quarter of this year.


    Image: Intel
  • The US is reportedly working on a list of restricted Chinese chipmaking factories.

    Reuters reports the list could strengthen the Commerce Department’s existing restrictions on US tech shipments to Chinese chip factories. The US government has voiced national security concerns about letting China access US technology to grow its own capabilities.

US companies have complained it’s difficult to know which Chinese factories produce advanced chips and are subject to the restrictions, Reuters says.


  • Inside TSMC’s very secretive chip training facility.

CNN gained entry into the training facility where TSMC teaches engineers to design and operate the machines that build semiconductors. The hope is to train engineers who will “seed” the factories TSMC is building in the US, Japan, and Germany.

    TSMC told CNN that it needs “to hire thousands more” employees to staff facilities around the world as demand for advanced chips grows.


  • A $40 billion AI investment fund?

That’s what this NYT report says a16z and Saudi Arabia’s Public Investment Fund are considering, which may explain the rumors that Elon Musk (who denied them), Sam Altman, and others are looking to the Middle East to fund their AI dreams.

    It also recalls last fall’s news that the US government added an “additional licensing requirement” for Nvidia and AMD AI shipments to unspecified Middle Eastern countries.


  • Nvidia reveals Blackwell B200 GPU, the ‘world’s most powerful chip’ for AI

The Blackwell B200 GPU.
    Image: Nvidia

    Nvidia’s must-have H100 AI chip made it a multitrillion-dollar company, one that may be worth more than Alphabet and Amazon, and competitors have been fighting to catch up. But perhaps Nvidia is about to extend its lead — with the new Blackwell B200 GPU and GB200 “superchip.”

Nvidia says the new B200 GPU offers up to 20 petaflops of FP4 horsepower from its 208 billion transistors. Also, it says, a GB200 that combines two of those GPUs with a single Grace CPU can offer 30 times the performance for LLM inference workloads while also potentially being substantially more efficient. It “reduces cost and energy consumption by up to 25x” over an H100, says Nvidia, though there’s a question mark around cost — Nvidia’s CEO has suggested each GPU might cost between $30,000 and $40,000.

    Read Article >
  • Google engineer indicted over allegedly stealing AI trade secrets for China

    The FBI symbol atop a red, black and white background made of seven pointed stars.
    Illustration by Alex Castro / The Verge

A federal grand jury indicted a Google engineer, Linwei Ding, aka Leon Ding, on March 5th for allegedly stealing trade secrets around Google’s AI chip software and hardware; he was arrested Wednesday morning in Newark, California. Deputy Attorney General Lisa Monaco said in a statement that Ding “stole from Google over 500 confidential files containing AI trade secrets while covertly working for China-based companies seeking an edge in the AI technology race.”

Much of the stolen data allegedly revolves around Google’s tensor processing unit (TPU) chips. Google’s TPU chips power many of its AI workloads and, in conjunction with Nvidia GPUs, can train and run AI models like Gemini. The company has also offered access to the chips through partner platforms like Hugging Face.

    Read Article >
  • The GDDR7 graphics memory standard is here.

    JEDEC Solid State Technology Association has released details about its new standard, which it says is better positioned to handle the demands of gaming, networking, and AI.

JESD239 GDDR7 is the first JEDEC standard DRAM to use the Pulse Amplitude Modulation (PAM) interface for high frequency operations. Its PAM3 interface improves the signal-to-noise ratio (SNR) for high frequency operation while enhancing energy efficiency. JEDEC also says GDDR7 doubles the bandwidth of GDDR6, up to 192 GB/s per device, and doubles the number of channels.

    We don’t expect to see GDDR7 out in the world until Nvidia and AMD release next-gen GPUs, which could come before the end of 2024 but is a long way from being confirmed.


  • Intel plans to be inside 100 million AI PCs by next year.

Intel vice president David Feng said during Mobile World Congress that, as part of Intel’s push to put AI into everything it builds, the company will produce 40 million CPUs for AI PCs this year and 60 million in 2025, reports Nikkei Asia.

The “AI PC” concept includes Microsoft’s new Copilot button plus Intel Core Ultra processors with built-in GPUs and neural processing units for AI models, which are now available as part of its vPro platform for business laptops.


  • Leading edge chipmakers requested $70 billion in CHIPS Act grants.

    With over 600 statements of interest received, Commerce Secretary Gina Raimondo acknowledged today that the amount requested is more than twice the $28 billion the government has budgeted to invest.

We have decided to prioritize projects that will be operational by 2030. There are worthy proposals with plans to come online after 2030 that we say no to in order to maximize our impact in this decade... We anticipate that our investments in leading-edge logic chip manufacturing will put us on track to produce roughly 20% of the world’s leading-edge logic chips by 2030, up from the zero percent we produce today.

    The CHIPS Act originally had $52 billion in subsidies to boost US semiconductor manufacturing, but it’s not nearly enough to catch up by itself — industry leader Taiwan Semiconductor Manufacturing Company (TSMC) earmarked $44 billion in 2022 just to expand its existing capacity.


  • Emma Roth

    Feb 23

    Nvidia’s role in the AI wave has made it a $2 trillion company

    Nvidia’s logo.
    Illustration by Alex Castro / The Verge

    Nvidia has officially hit a $2 trillion market capitalization, making it the first chipmaker to cross the threshold. It’s also the third US company to reach over a $2 trillion valuation, right behind Apple ($2.83 trillion) and Microsoft ($3.06 trillion).

The California-based company has seen rapid growth over the past year due to its leadership in the AI chip market. Nvidia’s market cap reached $1 trillion less than a year ago, and it left both Amazon and Alphabet in the rearview mirror as it became a $1.83 trillion company earlier this month.

    Read Article >
  • Wes Davis

    Feb 22

    Microsoft and Intel strike a custom chip deal that could be worth billions

    An Intel logo surrounded by processors
    Illustration by Alex Castro / The Verge

    Intel will be producing custom chips, designed by Microsoft for Microsoft, as part of a deal that Intel says is worth more than $15 billion. Intel announced the partnership during its Intel Foundry event today. Although neither company specified what the chips would be used for, Bloomberg noted today that Microsoft has been planning in-house designs for both processors and AI accelerators.

    “We are in the midst of a very exciting platform shift that will fundamentally transform productivity for every individual organization and the entire industry,” said Microsoft CEO Satya Nadella in the official press release.

    Read Article >