# Intro to LangChain
LangChain is a powerful framework designed to simplify the development of applications that leverage large language models (LLMs). It provides a suite of tools and abstractions for building, managing, and deploying LLM-powered applications for tasks such as natural language understanding, generation, and interaction.
## Key Features of LangChain
- Chains: LangChain lets developers compose calls to LLMs and other components into chains, in which the output of one component becomes the input to the next, enabling complex workflows and interactions. For example, a chain can first summarize a document and then translate the summary into another language (see the sketch after this list).
- Model agnostic: LangChain supports multiple LLM providers, so developers can switch between different models without changing their application code.
- Memory and state management: LangChain provides built-in support for managing conversation history and state, making it easier to build conversational agents such as chatbots that remember previous interactions.
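To make the chaining idea concrete, here is a minimal sketch using LangChain's pipe syntax. It assumes `langchain-ollama` is installed and a local Ollama server has the `llama3` model pulled; the model name and prompts are illustrative:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3")

summarize_prompt = ChatPromptTemplate.from_template("Summarize the following text:\n\n{text}")
translate_prompt = ChatPromptTemplate.from_template("Translate the following into French:\n\n{summary}")

# The `|` operator pipes components together: the summary produced by the first
# model call is repackaged as the `summary` input of the translation prompt.
chain = (
    summarize_prompt
    | llm
    | StrOutputParser()
    | (lambda summary: {"summary": summary})
    | translate_prompt
    | llm
    | StrOutputParser()
)

print(chain.invoke({"text": "LangChain is a framework for building LLM applications."}))
```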
## Getting Started with LangChain
To get started with LangChain, you can follow these steps:
- Create a virtual environment:

```bash
python -m venv langchain-env
source langchain-env/bin/activate  # On Windows use `langchain-env\Scripts\activate`
```

- Install LangChain:

```bash
pip install langchain
```

- Install an LLM provider library. For OpenAI:

```bash
pip install langchain-openai
```

For Ollama:

```bash
pip install -U langchain-ollama
```

For other providers, please see the official documentation.
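As a quick sanity check that the install succeeded, you can print the installed version:

```python
import langchain

# Prints the installed LangChain version if the package imported correctly.
print(langchain.__version__)
```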
## Models in LangChain
LangChain supports various types of models, including:
- Language Models (LLMs): These are the core models that generate text based on input prompts. Examples include OpenAI’s GPT-3, GPT-4, and other similar models.
- Embedding Models: These models convert text into vector representations, which can be used for tasks like semantic search and clustering. Examples include OpenAI’s embedding models and others.
- Chat Models: These are specialized models designed for conversational interactions, such as the GPT models that power OpenAI’s ChatGPT.
Difference between LLM and Chat Models:
| Feature | Language Models (LLMs) | Chat Models |
|---|---|---|
| Purpose | General text generation | Conversational interactions |
| Input Format | Plain text prompts | Structured messages (user, system, assistant) |
| Context Handling | Limited context awareness | Maintains conversation context |
| Use Cases | Text completion, summarization, translation | Chatbots, virtual assistants |
### LLM Model Integration

Here’s an example of how to use an Ollama LLM with LangChain:

```python
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3")
response = llm.invoke("What is LangChain?")
print(response)
```
### ChatModel Integration

Here’s an example of how to use an Ollama chat model with LangChain:

```python
from langchain_ollama import ChatOllama

chat = ChatOllama(model="llama3")
response = chat.invoke("What is LangChain?")
print(response.content)  # chat models return a message object; the text is in .content
```
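Chat models also accept a list of role-tagged messages, which is the "structured messages" input format from the comparison table above. A minimal sketch (the system prompt is illustrative):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_ollama import ChatOllama

chat = ChatOllama(model="llama3")

messages = [
    SystemMessage(content="You are a concise technical assistant."),
    HumanMessage(content="What is LangChain?"),
]

response = chat.invoke(messages)
print(response.content)
```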
### Concept of Temperature

Temperature controls the randomness of the output generated by language models.
| Temperature | Effect |
|---|---|
| Low (e.g., 0.2) | More focused and deterministic output |
| Medium (e.g., 0.7) | Balanced creativity and coherence |
| High (e.g., 1.0) | More diverse and creative output |
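As a quick sketch of the effect (assuming the same local `llama3` model as above), you can sample the same prompt at different temperatures:

```python
from langchain_ollama import ChatOllama

prompt = "Invent a name for a coffee shop."

for temp in (0.2, 1.0):
    llm = ChatOllama(model="llama3", temperature=temp)
    # Low temperature tends to repeat the same safe answer across runs;
    # high temperature produces more varied, creative names.
    print(f"temperature={temp}: {llm.invoke(prompt).content}")
```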
### Concept of Max Tokens
Max tokens refer to the maximum number of tokens (words or subwords) that the model is allowed to generate in response to a prompt. Setting a limit on max tokens helps control the length of the output.
You can set the temperature and the maximum number of tokens when initializing the model:

```python
from langchain_ollama import ChatOllama

# In langchain-ollama the output-length cap is called `num_predict`;
# the OpenAI integrations expose the same idea as `max_tokens`.
llm = ChatOllama(model="llama3", temperature=0.7, num_predict=150)
```
### Embedding Models Integration

You can use an embedding model in LangChain as follows:

```python
from langchain_ollama import OllamaEmbeddings

embedding_model = OllamaEmbeddings(model="embeddinggemma")
vector = embedding_model.embed_query("Hello world")
print(vector)
```
You can also embed documents:
```python
from langchain_ollama import OllamaEmbeddings

embedding_model = OllamaEmbeddings(model="embeddinggemma")
documents = ["This is the first document.", "This is the second document."]
vectors = embedding_model.embed_documents(documents)
print(vectors)
```
Embedding models are useful for tasks like semantic search, clustering, and recommendation systems.
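As a sketch of how this enables semantic search, you can rank documents by cosine similarity to a query embedding. The `cosine_similarity` helper below is hand-rolled for illustration, not a LangChain API, and the model name is the same illustrative one as above:

```python
import math
from langchain_ollama import OllamaEmbeddings

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

embedding_model = OllamaEmbeddings(model="embeddinggemma")

documents = ["LangChain simplifies building LLM applications.",
             "Paris is the capital of France."]
doc_vectors = embedding_model.embed_documents(documents)
query_vector = embedding_model.embed_query("Which framework helps build LLM apps?")

# Rank documents by similarity to the query; the most relevant document comes first.
ranked = sorted(zip(documents, doc_vectors),
                key=lambda pair: cosine_similarity(query_vector, pair[1]),
                reverse=True)
print(ranked[0][0])
```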