Qualcomm and Meta bring large language models to the phone
Qualcomm will bring Meta's Llama 2 large language model to smartphones and computers using its chips starting in 2024.
Currently, large language models (LLMs) run mainly on powerful servers equipped with Nvidia GPUs, because of their heavy demands on computing power and data. Meanwhile, the leading chip makers for phones and PCs have been left out of the new trend.
Qualcomm wants to change that. The company aims for large language models that run directly on smartphones rather than in the cloud on large data centers. If successful, this would significantly reduce the cost of operating AI and usher in an era of mobile AI assistants.
Large language models can run on smartphones with Qualcomm chips. Photo: Gizchina
Qualcomm said it will support the open-source Llama 2 on devices using its chips. Llama 2 can perform many of the same tasks as ChatGPT, but it can be broken down into smaller components that are able to run on a smartphone.
Qualcomm's chips integrate a dedicated AI processor suited to the computations that large language models need, though its power cannot compare to a data center full of modern GPUs.
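One common way models are shrunk to fit on-device hardware, and an assumption here about how "smaller components" might be achieved rather than anything Qualcomm has confirmed, is weight quantization: storing 32-bit float weights as 8-bit integers, cutting memory roughly 4x. A minimal, illustrative sketch:

```python
# Illustrative sketch of symmetric int8 weight quantization, a standard
# technique for running neural networks on phones. This is NOT Qualcomm's
# or Meta's actual pipeline, just the general idea.

def quantize_int8(weights):
    """Map float weights into the int8 range [-127, 127] with one scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0          # one float step per integer step
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.04]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each recovered weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

The trade-off is a small loss of precision (each weight moves by at most one quantization step) in exchange for a model that fits in a phone's memory.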
Meta's Llama 2 is hugely popular because it is open source, letting businesses tailor it to their needs without asking permission or paying licensing fees. By contrast, OpenAI's GPT-4 and Google's Bard are closed source and kept secret.