Written by Chai Biscuit
What is Llama 2?
Llama 2 is the next generation of Meta's open-source large language model.
- Llama 2 is a transformer-based language model developed by researchers at Meta AI.
- The model is trained on a large corpus of text data and is designed to generate coherent and contextually relevant text.
- Llama 2 uses a multi-layer, decoder-only transformer architecture to generate text token by token.
- The model can be applied to a variety of tasks, including language translation, text summarization, and text generation.
- Llama 2 achieves strong results on several benchmark datasets compared to other open models.
- The model's architecture, weights, and training procedures are made publicly available to encourage further research and development in natural language processing.
- Llama 2 has many potential applications, including chatbots and language translation.
How to download Llama 2?
1. From the Meta git repository using download.sh
2. From Hugging Face
1. From the Meta git repository using download.sh
Below are the steps to download Llama 2 from the Meta website.
- Go to the Meta website: https://ai.meta.com/llama/
- Click on Download and fill in the details in the form.
- Accept the terms and conditions and continue.
- Once you submit, you will receive an email from Meta with instructions to download the model from the git repository. You can download Llama 2 locally using the download.sh script from that repository.
- Run download.sh; it will ask for the authentication URL from Meta (the URL expires in 24 hours). After providing it, you will be prompted for the model size, i.e. 7B, 13B, or 70B, and the corresponding model will be downloaded.
(Screenshot: downloaded model files in Google Colab.)
2. From Hugging Face
Once you get the acceptance email from Meta, log in to Hugging Face.
Link: https://huggingface.co/meta-llama
Select any model and submit a request to be granted access on Hugging Face.
Note: This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the Meta website and accept its license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days.
You will receive an 'access granted' email from Hugging Face.
3. Create an Access Token from 'Settings' in your Hugging Face account.
4. Llama 2 with LangChain and Hugging Face in Google Colab.
1. Change the runtime type to GPU in Google Colab (a quick check is sketched below).
For demonstration purposes, I have used the meta-llama/Llama-2-7b-chat-hf model in the code.
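Since Colab runtimes usually ship with PyTorch preinstalled, a minimal sketch like the following (not from the original notebook) can confirm the GPU runtime is active:

import torch

# Should print True once the Colab runtime type is set to GPU
print(torch.cuda.is_available())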
Step 1
Install the packages below as part of requirements.txt.
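A rough sketch of such an install cell in Colab; the exact packages and versions are assumptions, since only the requirements.txt step is named above:

# Colab cell - install the core libraries used in the later steps
!pip install -q transformers accelerate torch huggingface_hub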
Step 2
Log in to the Hugging Face CLI using the previously generated access token.
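The login can be done via the huggingface-cli command or, equivalently, from a Colab cell with the huggingface_hub Python API (the token value below is a placeholder):

from huggingface_hub import login

# Paste the access token generated under Settings -> Access Tokens
login(token="hf_xxxxxxxxxxxxxxxxxxxx")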
Step 3
Install LangChain.
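A minimal Colab install cell for this step (version unpinned, as an assumption):

!pip install -q langchain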
Step 4
Import all packages
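A sketch of the imports the later steps rely on, assuming the classic LangChain import paths in use at the time:

import torch
import transformers
from transformers import AutoTokenizer

from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain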
Step 5
Create the pipeline using transformers.pipeline.
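A minimal sketch of a text-generation pipeline for meta-llama/Llama-2-7b-chat-hf; the generation settings (dtype, max_new_tokens) are my assumptions, not the post's exact values:

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# float16 + device_map="auto" keeps the 7B model on the Colab GPU
pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
    max_new_tokens=512,
)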
Step 6
Create the LLM using HuggingFacePipeline from LangChain.
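Wrapping the transformers pipeline so LangChain can treat it as an LLM; the temperature value is an assumption:

# Wrap the transformers pipeline as a LangChain LLM
llm = HuggingFacePipeline(pipeline=pipe, model_kwargs={"temperature": 0.7})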
Step 7
Create the prompt template and run it using LLMChain from LangChain.
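A sketch of a prompt template and chain; the template wording below uses the Llama 2 chat format but is my own, not the post's:

template = """[INST] <<SYS>>
You are a helpful assistant. Answer concisely.
<</SYS>>
{question} [/INST]"""

prompt = PromptTemplate(template=template, input_variables=["question"])
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(question="What is Llama 2?"))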
Below are some use cases.
1. Summarize a paragraph
2. Ask for information / answer a question
3. Named entity recognition (NER)
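As an illustration, the same chain can be reused for all three use cases by changing the prompt; the example texts below are placeholders:

# 1. Summarize a paragraph
paragraph = "Llama 2 is a family of open-source large language models released by Meta AI..."
print(chain.run(question=f"Summarize the following paragraph in two sentences:\n{paragraph}"))

# 2. Ask for information / answer a question
print(chain.run(question="Who developed Llama 2 and what sizes is it available in?"))

# 3. Named entity recognition
sentence = "Meta AI released Llama 2 in July 2023 in partnership with Microsoft."
print(chain.run(question=f"List the named entities (organizations, products, dates) in: {sentence}"))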