Chains

Chains in LangChain are a way to combine multiple components, such as language models, prompt templates, and other processing steps, into a single workflow. Chains allow you to create complex interactions and data flows by linking together different parts of your application.

You can inspect the structure of a chain like this:

chain.get_graph()
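
If you want a quick visual of that structure, the graph object can also be rendered directly; a minimal sketch, assuming a recent langchain-core version and a chain object like the ones built below:

chain.get_graph().print_ascii()          # ASCII rendering in the terminal
print(chain.get_graph().draw_mermaid())  # Mermaid syntax, like the diagram in the Parallel Chain section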

Example of a simple chain:

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model="llama3")

template = PromptTemplate(template="""
You are a helpful assistant that answers questions about the world.
Question: {question}
Make sure to answer in detail as much as possible.
""",
input_variables=["question"]
)

parser = StrOutputParser()


# Compose the prompt, model, and parser into a single chain
chain = template | llm | parser
response = chain.invoke({"question": "What is LangChain?"})
print(response)
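
Because every chain is a Runnable, the same chain can also stream its output as it is generated; a small sketch reusing the chain above (assuming the model supports streaming, which ChatOllama does):

# Stream the answer chunk by chunk instead of waiting for the full response
for chunk in chain.stream({"question": "What is LangChain?"}):
    print(chunk, end="", flush=True)
print()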

Sequential Chain

A Sequential Chain links multiple components in a linear sequence, where the output of one component becomes the input to the next.

Question: Create a chain in which one model first explains a topic in detail and then another model summarizes that explanation.

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
llm = ChatOllama(model="llama3")

template1 = PromptTemplate(template="""
You are a helpful assistant that answers questions about the world.
Question: {question}
Make sure to answer in detail as much as possible.
""",
input_variables=["question"]
)

template2 = PromptTemplate(template="""
Summarize this within 5 sentences.
{text}
""",
input_variables=["text"]
)
parser = StrOutputParser()
# Explain in detail first, then summarize the detailed answer with a second LLM call
chain = template1 | llm | parser | template2 | llm | parser
res = chain.invoke({"question": "Explain the capital of Nepal."})
print(res)
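
The parsed string from the first half works here because template2 has a single input variable, which recent langchain-core versions fill automatically from a non-dict input. If you prefer to make that mapping explicit, a small sketch using a RunnableLambda (reusing the objects above):

from langchain_core.runnables import RunnableLambda

explicit_chain = (
    template1
    | llm
    | parser
    | RunnableLambda(lambda text: {"text": text})  # wrap the string for template2
    | template2
    | llm
    | parser
)
print(explicit_chain.invoke({"question": "Explain the capital of Nepal."}))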

Parallel Chain

Parallel Chain allows you to run multiple components simultaneously, where each component processes the same input independently, and their outputs can be combined or used separately.

See the diagram below:

%%{init: {"themeVariables": {"flowchart": {"curve": "linear"}}}}%%
graph TD;
    ParallelInput([ParallelInput]):::first
    ParallelOutput(ParallelOutput)
    PromptTemplate_1(PromptTemplate)
    ChatOllama_1(ChatOllama)
    StrOutputParser_1(StrOutputParser)
    PromptTemplate_2(PromptTemplate)
    ChatOllama_2(ChatOllama)
    StrOutputParser_2(StrOutputParser)
    PromptTemplate_3(PromptTemplate)
    ChatOllama_3(ChatOllama)
    StrOutputParser_3(StrOutputParser)
    StrOutputParserOutput([StrOutputParserOutput]):::last

    PromptTemplate_1 --> ChatOllama_1
    ChatOllama_1 --> StrOutputParser_1
    ParallelInput --> PromptTemplate_1
    StrOutputParser_1 --> ParallelOutput

    PromptTemplate_2 --> ChatOllama_2
    ChatOllama_2 --> StrOutputParser_2
    ParallelInput --> PromptTemplate_2
    StrOutputParser_2 --> ParallelOutput

    ParallelOutput --> PromptTemplate_3
    PromptTemplate_3 --> ChatOllama_3
    ChatOllama_3 --> StrOutputParser_3
    StrOutputParser_3 --> StrOutputParserOutput

    classDef default fill:#f2f0ff,line-height:1.2;
    classDef first fill-opacity:0;
    classDef last fill:#bfb6fc;

Example of a parallel chain:

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableParallel

llm = ChatOllama(model="llama3")



prompt1 = PromptTemplate(
    template="Create a detailed note on the topic {topic}",
    input_variables=["topic"]
)

prompt2 = PromptTemplate(
    template="Create an MCQ on the topic {topic}",
    input_variables=["topic"]
)

prompt3 = PromptTemplate(
    template="Merge the note {note} and the MCQ {mcq} into one final document",
    input_variables=["note", "mcq"]
)

parser = StrOutputParser()


# Run both branches on the same input at the same time;
# the output is a dict with keys 'note' and 'mcq'
parallel_chain = RunnableParallel({
    'note': prompt1 | llm | parser,
    'mcq': prompt2 | llm | parser
})

merge_chain = prompt3 | llm | parser

combined_chain = parallel_chain | merge_chain

response = combined_chain.invoke({
    'topic':"Nepal is a beautiful heaven to visit"
})

print(response)

What this does is:

  1. The input topic is sent to two separate prompt templates in parallel: one for creating a detailed note and another for creating a multiple-choice question (MCQ).
  2. Each prompt template’s output is processed by the language model (llm) and then parsed using the StrOutputParser.
  3. The outputs from both parallel chains (note and mcq) are then merged using a third prompt template, which combines the note and MCQ into a final output.
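
If you want to inspect what the parallel stage produces before it is merged, you can invoke it on its own; a small sketch reusing the objects above:

# Run only the parallel stage; the result is a dict keyed by the names
# given to RunnableParallel
intermediate = parallel_chain.invoke({"topic": "Nepal is a beautiful heaven to visit"})
print(intermediate.keys())          # dict_keys(['note', 'mcq'])
print(intermediate["note"][:200])   # first part of the generated note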

Conditional Chain

Conditional Chain allows you to create dynamic workflows where the path of execution depends on certain conditions or criteria based on the input or intermediate results.

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableBranch,RunnableLambda
from langchain_core.output_parsers import PydanticOutputParser,StrOutputParser
from pydantic import BaseModel,Field
from typing import Literal

class PositiveNegative(BaseModel):
    feedback: Literal["positive", "negative", "none"] = Field(
        default="none",
        description="Whether the feedback is positive or negative"
    )

llm = ChatOllama(model="llama3")

parser = PydanticOutputParser(pydantic_object=PositiveNegative)
strparser = StrOutputParser()

prompt1= PromptTemplate(template="""Classify the following feedback as either "positive" or "negative". 
Respond ONLY with JSON in the format:
{format_instruction}

Feedback:
{feedback}
""",
                        input_variables=["feedback"],
                        partial_variables={
                            'format_instruction' : parser.get_format_instructions()
                        }
                        )

prompt2 = PromptTemplate(
    template="Write a response for this positive feedback: {feedback}. Write only the response so I can send it directly to the customer.",
    input_variables=["feedback"]
)
prompt3 = PromptTemplate(
    template="Write a response for this negative feedback: {feedback}. Write only the response so I can send it directly to the customer.",
    input_variables=["feedback"]
)
classifier_chain = prompt1 | llm | parser




# Note: each branch receives the classifier's Pydantic output, so {feedback}
# in prompt2/prompt3 is filled with that object's string form rather than the
# original feedback text.
branchchain = RunnableBranch(
    (lambda x: x.feedback == "positive", prompt2 | llm | strparser),  # positive branch
    (lambda x: x.feedback == "negative", prompt3 | llm | strparser),  # negative branch
    RunnableLambda(lambda x: "Couldn't determine the sentiment")      # default case
)

final_chain = classifier_chain | branchchain

response = final_chain.invoke({
    'feedback':"This is bad phone"
})

print(response)

What this does is:

  1. The input feedback is first classified as either “positive” or “negative” using a prompt template and a Pydantic output parser.
  2. Based on the classification result, the conditional chain (RunnableBranch) directs the flow to either the positive feedback response template or the negative feedback response template.
  3. Each response template generates a reply using the language model, and the final response is printed out. If the feedback doesn’t match either category, a default message is returned.
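
To check the intermediate classification on its own, or to process several pieces of feedback at once, the same Runnable interface applies; a minimal sketch reusing the chains above:

# The classifier alone returns the parsed Pydantic object
label = classifier_chain.invoke({"feedback": "This is bad phone"})
print(label.feedback)  # typically "negative", though model output may vary

# Runnables also support batch execution over a list of inputs
replies = final_chain.batch([
    {"feedback": "I love the battery life"},
    {"feedback": "The screen cracked after a week"},
])
for reply in replies:
    print(reply)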