Meta has unveiled the first two models in its Llama 4 suite: Llama 4 Scout and Llama 4 Maverick. Maverick, described as "the workhorse" of the pair, specializes in image and text comprehension for "general assistant and chat use cases," as stated in a blog post by the company. The smaller Scout model handles tasks such as "multi-document summarization, analyzing extensive user activity for personalized tasks, and processing vast codebases." Meta has also introduced Llama 4 Behemoth, an upcoming model it touts as "among the world's smartest LLMs." CEO Mark Zuckerberg said that news about a fourth model, Llama 4 Reasoning, will be shared "in the next month."
Both Maverick and Scout are available to download now from the Llama website and Hugging Face, and they have been integrated into Meta AI across WhatsApp, Messenger, and Instagram DMs.
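For readers who want to try the downloadable checkpoints, here is a minimal sketch using the Hugging Face transformers text-generation pipeline. The repository ID below is an assumption for illustration; gated Meta models also require accepting the license and authenticating with a Hugging Face token before downloading.

# Minimal sketch: loading a Llama 4 checkpoint from Hugging Face with the
# transformers library. The repository ID is a hypothetical placeholder;
# check the meta-llama organization page for the exact checkpoint name.
from transformers import pipeline

model_id = "meta-llama/Llama-4-Scout-Instruct"  # assumed name, verify before use

generator = pipeline("text-generation", model=model_id)

prompt = "Summarize the differences between Llama 4 Scout and Maverick."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])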
According to Meta, Scout has 17 billion active parameters and 16 experts. Zuckerberg said it is "extremely fast, inherently multimodal, and features an industry-leading, almost infinite 10 million token context length, designed to operate on a single GPU." Maverick, by contrast, has 17 billion active parameters and 128 experts. Meta claims it outperforms GPT-4o and Gemini 2.0 in coding, reasoning, multilingual capabilities, long-context understanding, and image analysis, and that it is competitive with DeepSeek v3.1 on reasoning and coding benchmarks.
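The "active parameters with N experts" phrasing refers to a mixture-of-experts design, in which only a subset of expert weights runs for each token, so the parameters active per token are far fewer than the model's total. The sketch below illustrates that arithmetic with purely made-up numbers; it does not use Meta's published figures.

# Illustrative mixture-of-experts arithmetic (hypothetical numbers, not
# Meta's published figures). Only the top-k routed experts run per token,
# so "active" parameters are much smaller than total parameters.
def moe_parameter_counts(shared_params, expert_params, num_experts, top_k):
    """Return (total, active) parameter counts for a simple MoE model."""
    total = shared_params + expert_params * num_experts
    active = shared_params + expert_params * top_k
    return total, active

# Hypothetical split: 4B shared weights (attention, embeddings),
# 2B per expert, 16 experts, 2 experts routed per token.
total, active = moe_parameter_counts(
    shared_params=4e9, expert_params=2e9, num_experts=16, top_k=2
)
print(f"total ~ {total / 1e9:.0f}B, active ~ {active / 1e9:.0f}B")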
Zuckerberg has already called the forthcoming Behemoth model, which is still in training, "the highest performing base model globally"; it has 288 billion active parameters, according to Meta. It is not yet available, but more details on Behemoth and the Reasoning model are expected soon, with Meta's AI developer conference, LlamaCon, just a few weeks away.
This article was originally published on Engadget.