SK Group has a keen eye for transformative technologies, often spearheading advancements overlooked by industry peers. This forward-thinking ethos – deeply ingrained since SK’s inception – has positioned the South Korean company at the forefront of global innovation.
There are few better examples of this foresight than artificial intelligence (AI), where SK’s research powers the next generation of advanced computing.
Indeed, AI models now catapult from technical novelties to the fastest-growing applications in history. How did this technology become so effective, so fast, and where does SK fit in?
The breathtaking speed and processing power of today’s models trace back to a little-known breakthrough in chip design: High-Bandwidth Memory (HBM) chips. This once-overlooked piece of hardware is now a cornerstone of modern AI, thanks to a decade-old bet by SK hynix, the chip-making arm of SK Group.
From Underdog to Leader: SK’s Vision for HBM
SK hynix introduced HBM to the market in 2013. The technology’s
x-factor is its ability to manage massive data flows at incredible
speeds.
Unlike conventional chips, HBM stacks layers of memory vertically,
akin to a 'data skyscraper'. This unique architecture enables faster
data exchange between memory and the processors driving ChatGPT and
other leading AI models.
Initially, HBM’s complexity and niche applications left the industry
skeptical. Still, SK hynix forged ahead, driven by a bold vision for the
future of computing. Today, SK hynix is the leading provider of memory
chips for next-gen processors. Fueled by AI, the company estimates
demand for HBM chips could grow at an annual rate of 82% through 2027.
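As a back-of-the-envelope illustration of what an 82% annual growth rate compounds to, here is a short sketch. The four-year horizon and the baseline of 100 units are illustrative assumptions, not SK hynix figures:

```python
# Compound an 82% annual growth rate over four years.
# The starting volume of 100 units is an arbitrary illustrative baseline.
growth_rate = 0.82
base_volume = 100.0

volume = base_volume
for year in range(1, 5):
    volume *= 1 + growth_rate
    print(f"Year {year}: {volume:.0f} units")

# Four years of 82% growth multiplies demand by (1.82)**4, roughly 11x.
print(f"Overall multiple: {(1 + growth_rate) ** 4:.1f}x")
```

In other words, sustained 82% growth would multiply HBM demand by about elevenfold over a four-year span.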
The company’s latest HBM solution, HBM3E, which it will start mass producing later this year, will be a crucial component of the AI tools attracting global attention and investment.
How HBM Revolutionizes Memory Technology
To appreciate HBM’s impact, including its latest iteration, HBM3E, consider the following analogy:
Think of traditional memory chips as a series of small, single-story
libraries scattered across a town. Each library holds a unique
collection of books, which represent the chips’ data. When you need a
specific book, you must travel to the appropriate library, and moving
between these libraries takes time and resources.
Now, imagine HBM as a large, multi-story central library where all
books are housed under one roof. To access the right book, you simply
move between the library’s floors using elevators (akin to
through-silicon vias, or TSVs, in HBM). This journey is far more time-
and cost-efficient.
This architectural evolution captures the essence of HBM’s
innovation. By stacking memory chips vertically and using vertical
electrical connections, HBM significantly improves the speed, energy
efficiency, and scalability of data transfer.
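To put rough numbers on the wide-bus idea, the sketch below compares the peak bandwidth of one conventional DDR5-6400 memory channel with one HBM3E stack. The bus widths and per-pin data rates are publicly cited ballpark specs, not figures from this article:

```python
# Illustrative peak-bandwidth comparison: one conventional DDR5-6400
# channel vs. one HBM3E stack. HBM's wide 1024-bit interface is what
# the "elevators" (TSVs) in the library analogy make possible.

def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * pin_rate_gbit_s / 8

ddr5 = peak_bandwidth_gb_s(bus_width_bits=64, pin_rate_gbit_s=6.4)
hbm3e = peak_bandwidth_gb_s(bus_width_bits=1024, pin_rate_gbit_s=9.6)

print(f"DDR5-6400 channel: {ddr5:.1f} GB/s")   # 51.2 GB/s
print(f"HBM3E stack:       {hbm3e:.1f} GB/s")  # 1228.8 GB/s
print(f"Ratio:             {hbm3e / ddr5:.0f}x")
```

The roughly 24x gap comes mostly from the bus width: HBM trades a narrow, fast external channel for a very wide interface routed vertically through the stack.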
SK hynix’s HBM3E can process the equivalent of 230
full-HD movies per second. This breakthrough capacity
enables the rapid, complex operations of modern computing systems,
including ChatGPT.
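The movie figure implies a concrete bandwidth. Assuming roughly 5 GB per full-HD movie file (an assumed typical size, not a number from the article), the arithmetic works out as:

```python
# Convert "230 full-HD movies per second" into an implied bandwidth,
# assuming an illustrative 5 GB file size per movie.
movie_size_gb = 5
movies_per_second = 230

implied_bandwidth_gb_s = movie_size_gb * movies_per_second
print(f"Implied throughput: {implied_bandwidth_gb_s} GB/s")  # 1150 GB/s
```

That comes out to about 1.15 TB/s, in line with the per-stack peak bandwidth publicly quoted for HBM3E.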
The Road Ahead
SK hynix’s journey with HBM extends far beyond invention. The company
has transitioned from being an industry pioneer to a market leader. As
AI evolves, industry experts say
SK hynix is best equipped to meet the sector’s demands. Over a decade
after its strategic bet on HBM, SK hynix continues to shape the
trajectory of advanced computing – a powerful testament to SK Group’s
long-term focus on identifying and unlocking breakthrough innovations.