AMD launches AI chip to compete with Nvidia
The AMD MI300X is a data center processor for AI training, positioned to compete with the latest chips from Nvidia.
The MI300X belongs to the Instinct MI300 series, which specializes in generative AI models, the technology behind ChatGPT and other chatbots. The product, launched on June 13, is said to have specifications comparable to Nvidia's most powerful chip, the H100.
Specifically, the MI300X uses the CDNA architecture, designed for large language models and other advanced AI workloads. "The focus of the chip is its graphics processing capability. GPUs are enabling the creation of powerful AI," said Lisa Su, CEO of AMD.
AMD CEO Lisa Su introduces the MI300X at the June 13 event in San Francisco. Photo: Servethehome
According to Su, the MI300X offers 192 GB of memory for AI workloads, the highest currently on the market. The larger memory allows it to process more data at the same time. Its rival, the Nvidia H100, currently supports 120 GB of memory.
Generative AI models consume large amounts of memory. At the event, AMD demonstrated the MI300X running a 40-billion-parameter model called Falcon. That model is still smaller than OpenAI's GPT-3, which has 175 billion parameters.
"Data sizes are getting bigger, and you really need more GPUs to run the latest large language models," Su said. She added that AMD's additional memory lets developers deploy large systems while using fewer GPUs.
AMD has also built a technology called the Infinity Architecture, which combines eight MI300X units into a single AI processing system. Nvidia and Google have developed similar systems that combine eight or more GPUs in a single box.
A base AI processing system containing eight AMD MI300X chips. Photo: Servethehome
To date, one reason AI developers have favored Nvidia chips is CUDA, a mature software package that gives them access to the chip's core hardware features. AMD says it offers a comparable product called ROCm.
AMD plans to sell the MI300X later this year but has not announced pricing. According to CNBC, the product will cost less than $40,000, the price Nvidia charges for its most powerful H100 chip. "A lower price would help AMD compete and could also make generative AI training systems cheaper to build in the future," the outlet commented. "If AMD's AI chips are adopted by developers and server makers, it could represent a large untapped market."
According to Su, data center AI chips are still in the early stages of development. "AI is the company's largest and most strategic long-term growth opportunity," she said. "We predict the data center AI chip market will grow from $30 billion this year to more than $150 billion in 2027, a compound annual growth rate of over 50%."
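As a rough sanity check of that forecast (an illustration only; it assumes "this year" means 2023, so the projection spans four years of growth):

```python
# Implied compound annual growth rate for Su's forecast:
# $30 billion (assumed 2023) growing to $150 billion by 2027.
start, end, years = 30e9, 150e9, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 50% per year
```

Growing fivefold over four years works out to just under 50% per year, consistent with the figure Su cited.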