Podcast Episode
Samsung Mass-Produces Nvidia's Groq 3 Inference Chip, Unveils HBM4E at GTC 2026
March 17, 2026
Duration: 4:23
Samsung Electronics is now mass-producing Nvidia's Groq 3 Language Processing Unit using its advanced four nanometre foundry process, as announced at GTC 2026. The chip uses on-chip SRAM instead of traditional memory, promising dramatically faster AI inference speeds. Samsung also unveiled its next-generation HBM4E memory chip for the first time.
Samsung Takes Centre Stage at Nvidia GTC 2026
Samsung Electronics has emerged as one of the biggest winners at Nvidia's GTC 2026 developer conference, with CEO Jensen Huang confirming that the South Korean tech giant is now mass-producing the Groq 3 Language Processing Unit, Nvidia's dedicated AI inference chip.

The announcement, made before more than eighteen thousand attendees at the SAP Center in San Jose, California, marks a significant milestone for Samsung's foundry division. The Groq 3 chip is manufactured using Samsung's four nanometre process, with production recently ramped from roughly nine thousand wafers to approximately fifteen thousand wafers, an increase of roughly seventy percent, as the technology transitions from sampling to large-scale commercial output.
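For readers who want to check the arithmetic, here is a minimal sketch using the article's rounded wafer counts; the exact figures were not disclosed, so the result is indicative only:

```python
# Back-of-the-envelope check on the reported wafer ramp. Both counts
# are the article's rounded approximations, so the result is indicative.
start_wafers = 9_000   # roughly, during the sampling phase
end_wafers = 15_000    # approximately, at commercial output

increase = (end_wafers - start_wafers) / start_wafers
print(f"Ramp: {increase:.0%}")  # 67%, consistent with "roughly seventy percent"
```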
A New Approach to AI Inference
The Groq 3 LPU represents a fundamentally different approach to AI processing. Rather than relying on traditional high-bandwidth memory, each chip integrates five hundred megabytes of on-chip SRAM, delivering approximately one hundred and fifty terabytes per second of bandwidth, far exceeding the twenty-two terabytes per second offered by conventional memory solutions. Nvidia plans to deploy the chips in LPX inference racks featuring one hundred and twenty-eight LPUs, available in the second half of 2026.

The technology stems from Nvidia's roughly twenty billion dollar deal with AI chip startup Groq, struck in December last year, in which Nvidia absorbed core technology and recruited key personnel including founder Jonathan Ross.
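A quick back-of-the-envelope comparison of the bandwidth figures quoted above, as a sketch: the per-chip ratio follows directly from the article's numbers, whilst the rack-level total is a naive multiplication across LPUs, an illustration rather than a figure Nvidia has stated.

```python
# Rough comparison of the per-chip bandwidth figures quoted in the
# article. The rack-level total naively multiplies across LPUs and
# is an illustration, not a stated spec.
sram_bw_tb_s = 150   # on-chip SRAM bandwidth per Groq 3 LPU, as quoted
hbm_bw_tb_s = 22     # conventional memory bandwidth, as quoted
lpus_per_rack = 128  # LPX inference rack configuration, as quoted

print(f"Per-chip advantage: {sram_bw_tb_s / hbm_bw_tb_s:.1f}x")            # ~6.8x
print(f"Naive aggregate per rack: {sram_bw_tb_s * lpus_per_rack:,} TB/s")  # 19,200 TB/s
```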
Samsung's Comprehensive AI Portfolio
Beyond foundry manufacturing, Samsung used GTC to showcase its HBM4E memory chip for the first time, a seventh-generation high-bandwidth memory solution delivering speeds of up to sixteen gigabits per second per pin and four terabytes per second of bandwidth. The company's sixth-generation HBM4 is already in mass production for Nvidia's Vera Rubin AI platform.

Amazon Web Services announced it would deploy Groq 3 LPUs alongside more than one million Nvidia GPUs, whilst Huang projected at least one trillion dollars in cumulative AI chip revenue from 2025 through 2027.
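The two headline HBM4E figures are related: per-pin speed multiplied by interface width gives total bandwidth. The sketch below infers the implied pin count, assuming binary units (four terabytes per second read as 4,096 gigabytes per second); the pin count itself was not stated in the announcement.

```python
# Per-pin speed times interface width equals total bandwidth.
# The pin count is inferred, not a number Samsung announced, and
# assumes binary units (4 TB/s = 4,096 GB/s).
pin_speed_gbit_s = 16      # gigabits per second per pin, as quoted
total_bw_gbyte_s = 4_096   # 4 TB/s expressed in GB/s (assumption)

pins = total_bw_gbyte_s * 8 / pin_speed_gbit_s
print(f"Implied interface width: {pins:.0f} pins")  # 2048
```

The implied 2,048-pin result matches the doubled interface width defined for the HBM4 generation, which suggests the two quoted numbers are mutually consistent.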
Published March 17, 2026 at 8:18am