Llama 3

Meta has unveiled Llama 3, its next-generation open-source language model, claiming the title of the most capable open-source model yet. Trained on a dataset roughly seven times larger than the one used for its predecessor, Llama 2, the new model brings architectural and pretraining improvements that deliver stronger performance in reasoning, code generation, and instruction following.

Meta is positioning Llama 3 as the most capable openly available large language model, outperforming comparable models from Google and Anthropic at similar scales. The Llama series has been instrumental in the AI space, powering various applications and serving as the basis for derivative models built by others, such as Vicuna and Alpaca.

Llama 3 is expected to be available in 8B and 70B parameter versions, with larger, multimodal models anticipated later in the year. Meta plans to release a light version within the next month, with the full model coming in May 2024[1][2][6]. The model will be integrated into Meta AI, the company's new chatbot assistant, and will be supported on hardware from providers including AMD, AWS, Dell, Intel, Nvidia, and Qualcomm.

Meta's commitment to openly released models differentiates it from competitors like OpenAI and Google, which restrict access to their foundation models behind paid APIs. Meta has made trained Llama models and weights freely available under its community license[2]. However, critics argue that the models aren't fully open source, as the training and instruction-tuning data remains a closely guarded secret.

Meta's ambitions for Llama 3 include creating the most intelligent AI assistant that people can freely use worldwide, with a focus on reducing false refusals and increasing personalization[5]. Meta is also working on a 400 billion parameter version of Llama 3, which could be the largest open-source model ever released.

To try Llama 3, you can follow these steps:

1. Access the Ollama platform: Llama 3 is available through Ollama, which hosts both the pre-trained base models and the instruction-tuned variants optimized for dialogue/chat use cases.

2. Choose the model: Select either the 8B or 70B parameter version of Llama 3, depending on your requirements and available hardware.

3. Run the model: Open a terminal and use the ollama run llama3 command with your desired prompt; for example, you can ask, "Why is the sky blue?" (see the example commands after this list).
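
For concreteness, here is a minimal terminal sketch of those steps. It assumes Ollama is already installed and that the llama3 and llama3:70b tags in the Ollama library correspond to the 8B and 70B instruction-tuned releases:

```
# Download the default llama3 tag (the 8B instruction-tuned model).
ollama pull llama3

# Ask a one-off question directly from the command line.
ollama run llama3 "Why is the sky blue?"

# The larger variant works the same way, given sufficient memory;
# the llama3:70b tag is assumed here based on Ollama's library naming.
ollama run llama3:70b "Why is the sky blue?"
```

Running ollama run llama3 without a prompt starts an interactive chat session instead.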

For a more detailed guide, you can refer to the Ollama documentation for Llama 3: https://ollama.com/library/llama3:instruct.
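
Ollama also exposes a local HTTP API (on port 11434 by default), which makes it easy to call Llama 3 from scripts or other tools. A minimal sketch using the generate endpoint, assuming the Ollama server is running locally and the llama3 model has already been pulled:

```
# Send a single non-streaming prompt to the local Ollama API.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The generated text is returned in the response field of the JSON reply.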

Keep in mind that Llama 3 is intended for commercial and research use in English, with the instruction-tuned models optimized for dialogue/chat use cases. Ensure that your use of the model complies with the Llama 3 Community License and other applicable policies.

>> update <<

Meta's Llama 3 is available now on You.com!

You.com's integration of Llama 3 so soon after its launch is a testament to the platform's agility and its commitment to adopting cutting-edge models.
