Runnables

Overview

A Runnable in LangChain is an abstraction that represents any component or operation that can be executed as part of a chain or workflow. Runnables can be simple functions, prompt templates, language models, or even complete chains. They provide a unified interface for executing different kinds of operations within LangChain.

Key Characteristics

  • Common Interface: Every runnable exposes the same standard methods (invoke, batch, stream), so one runnable can be connected to another as long as the output of the first matches the input of the second, enabling chain creation.
  • Composable: Connecting runnables produces a new runnable, allowing multiple runnables to be chained together into complex workflows.

Standard Methods

  • invoke(): Execute the runnable with given input and return the output.
  • batch(): Execute the runnable with a batch of inputs and return a list of outputs.
  • stream(): Execute the runnable in streaming fashion, yielding outputs as they are generated.
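
A minimal sketch of these three methods, using RunnableLambda (covered below) as the simplest possible runnable:

from langchain_core.runnables import RunnableLambda

double = RunnableLambda(lambda x: x * 2)

print(double.invoke(2))         # 4
print(double.batch([1, 2, 3]))  # [2, 4, 6]
for chunk in double.stream(2):  # a single chunk here, since a plain function can't stream incrementally
    print(chunk)                # 4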

Types of Runnables

Task-Specific Runnables

Core LangChain components that implement the Runnable interface, so they can be used directly in chains/pipelines:

  • ChatOllama
  • PromptTemplate
  • OutputParser
  • Retrievers

Runnable Primitives

Used to connect task-specific runnables together to create complex workflows:

  • RunnableSequence: Chain multiple runnables in sequence (| operator)
  • RunnableParallel: Run multiple runnables in parallel
  • RunnableBranch: Create conditional branches based on input or intermediate results
  • RunnableLambda: Wrap simple functions as runnables
  • RunnablePassthrough: Pass input directly to output without processing
  • RunnableMap: An alias for RunnableParallel; maps each key in a dict to a runnable that receives the same input

1. RunnableSequence

Chains multiple runnables in sequence, where the output of one runnable becomes the input to the next. It allows you to create linear workflows by connecting different components together.

Example

from langchain_ollama.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence

llm = ChatOllama(model="llama3")
template = PromptTemplate(
    template="What is the capital city of {country}? Answer in one word.",
    input_variables=['country']
)
parser = StrOutputParser()

# prompt -> model -> parser, each step's output feeding the next
chain = RunnableSequence(template, llm, parser)
print(chain.invoke({'country': 'Nepal'}))

Note: Three runnables (PromptTemplate, ChatOllama, StrOutputParser) are chained together using RunnableSequence, which is also a runnable.

Using the Pipe Operator

chain = template | llm | parser
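
The composed chain is invoked in exactly the same way:

print(chain.invoke({'country': 'Nepal'}))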

Note: The pipe syntax is part of LCEL (see the final section); it builds a RunnableSequence under the hood and largely replaces the legacy Chain classes (e.g., LLMChain). Each component in the pipe is a runnable, and the composed sequence is itself a runnable.


2. RunnableParallel

Runs multiple runnables simultaneously, where each runnable processes the same input independently. Returns outputs as a dictionary.

Example

from langchain_ollama.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence, RunnableParallel

llm = ChatOllama(model="llama3")
template1 = PromptTemplate(
    template="Generate a tweet about {topic}. Keep it to one sentence",
    input_variables=['topic']
)
template2 = PromptTemplate(
    template="Generate a Facebook post about {topic}. Keep it to one sentence",
    input_variables=['topic']
)
parser = StrOutputParser()

# both chains receive the same {'topic': ...} input and run side by side
parallel_chain = RunnableParallel({
    'tweet': RunnableSequence(template1, llm, parser),
    'facebook_post': RunnableSequence(template2, llm, parser)
})

result = parallel_chain.invoke({'topic': 'AI in Healthcare'})
print(result)

Note: The two chains, keyed 'tweet' and 'facebook_post', run in parallel on the same input using RunnableParallel; the result is a dict such as {'tweet': '...', 'facebook_post': '...'}.

Merging Outputs

from langchain_ollama.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence, RunnableParallel

llm = ChatOllama(model="llama3")
template1 = PromptTemplate(
    template="Generate a tweet about {topic}. Keep it to one sentence",
    input_variables=['topic']
)
template2 = PromptTemplate(
    template="Generate a Facebook post about {topic}. Keep it to one sentence",
    input_variables=['topic']
)
merge_template = PromptTemplate(
    template="Merge the following social media posts into a single post:\nTweet: {tweet}\nFacebook Post: {facebook_post}\nFinal Post:",
    input_variables=['tweet', 'facebook_post']
)
parser = StrOutputParser()

parallel_chain = RunnableParallel({
    'tweet': RunnableSequence(template1, llm, parser),
    'facebook_post': RunnableSequence(template2, llm, parser)
})

merge_runnable = RunnableSequence(parallel_chain, merge_template, llm, parser)
final_result = merge_runnable.invoke({'topic': 'AI in Healthcare'})
print(final_result)
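
Note: The keys produced by RunnableParallel ('tweet', 'facebook_post') deliberately match merge_template's input variables, which is what lets the parallel block feed the merge prompt directly.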

3. RunnablePassthrough

Passes its input through to its output unchanged. Useful for carrying the original input forward alongside other runnables (as in the complex example below) or as a default case in branching scenarios.

Simple Example

from langchain_core.runnables import RunnablePassthrough

passthrough = RunnablePassthrough()
res = passthrough.invoke("Hello, World!")
print(res)  # Output: Hello, World!

Complex Example

from langchain_ollama.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence, RunnableParallel, RunnablePassthrough

llm = ChatOllama(model="llama3")
template1 = PromptTemplate(
    template="Generate a title for {topic}. Answer in one phrase",
    input_variables=['topic']
)
template2 = PromptTemplate(
    template="Explain {title}. Answer in one sentence",
    input_variables=['title']
)
parser = StrOutputParser()

# first chain: topic -> title
series1 = RunnableSequence(template1, llm, parser)
# second stage: forward the title unchanged AND explain it, in parallel
series2 = RunnableParallel({
    'title': RunnablePassthrough(),
    'explanation': RunnableSequence(template2, llm, parser)
})

final_chain = RunnableSequence(series1, series2)
result = final_chain.invoke({'topic': 'AI in Healthcare'})
print(result)
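
The result is a dict whose 'title' key holds series1's output unchanged, e.g. {'title': '<short title>', 'explanation': '<one-sentence explanation>'}.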

4. RunnableLambda

Wraps a custom Python function as a runnable so it can be used in chains/pipelines. Acts as middleware for performing custom operations on data as it flows between runnables.

Example

from langchain_ollama.chat_models import ChatOllama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence, RunnableParallel, RunnablePassthrough, RunnableLambda

llm = ChatOllama(model="llama3")
prompt = PromptTemplate.from_template("What is a good name for a company that makes {product}?")
parser = StrOutputParser()

chain = RunnableSequence(prompt, llm, parser)

final_output = RunnableParallel({
    "output": RunnablePassthrough(),            # the model's response, unchanged
    "length": RunnableLambda(lambda x: len(x))  # custom function wrapped as a runnable
})

serial_output = RunnableSequence(chain, final_output)
result = serial_output.invoke({"product": "colorful socks"})
print(result)

Note: The same pattern can forward the user's input alongside the model's response to the next runnable in the chain, as sketched below.
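
A minimal sketch of that pattern, reusing prompt, llm, and parser from the example above:

from langchain_core.runnables import RunnableParallel, RunnablePassthrough

both = RunnableParallel({
    "user_input": RunnablePassthrough(),  # the original input dict, unchanged
    "response": prompt | llm | parser,    # the model's answer for that input
})

print(both.invoke({"product": "colorful socks"}))
# {'user_input': {'product': 'colorful socks'}, 'response': '...'}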


5. RunnableBranch

Creates conditional branches in a workflow, allowing you to direct execution flow based on conditions or criteria. Enables dynamic decision-making within a chain.

Example

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableBranch, RunnableLambda, RunnableSequence
from langchain_core.output_parsers import PydanticOutputParser, StrOutputParser
from pydantic import BaseModel, Field
from typing import Literal


class PositiveNegative(BaseModel):
    feedback: Literal["positive", "negative", "none"] = Field(
        description="Sentiment of the feedback: positive, negative, or none if unclear",
        default="none"
    )


llm = ChatOllama(model="llama3")
parser = PydanticOutputParser(pydantic_object=PositiveNegative)
strparser = StrOutputParser()

prompt1 = PromptTemplate(
    template="""Classify the following feedback as either "positive" or "negative".
Respond ONLY with JSON in the format:
{format_instruction}
Feedback:
{feedback}
""",
    input_variables=["feedback"],
    partial_variables={"format_instruction": parser.get_format_instructions()},
)

prompt2 = PromptTemplate(
    template="Write a response to this positive feedback: {feedback}. Write only the response, so it can be sent directly to the customer",
    input_variables=["feedback"],
)

prompt3 = PromptTemplate(
    template="Write a response to this negative feedback: {feedback}. Write only the response, so it can be sent directly to the customer",
    input_variables=["feedback"],
)

classifier_chain = RunnableSequence(prompt1, llm, parser)

# Carry the original feedback text forward alongside the classification,
# so the response prompts see the actual feedback rather than just the label
branch_input = RunnableParallel({
    "sentiment": classifier_chain,
    "feedback": lambda x: x["feedback"],  # callables are coerced to RunnableLambda
})

branchchain = RunnableBranch(
    (lambda x: x["sentiment"].feedback == "positive", prompt2 | llm | strparser),  # positive branch
    (lambda x: x["sentiment"].feedback == "negative", prompt3 | llm | strparser),  # negative branch
    RunnableLambda(lambda x: "Couldn't find a sentiment"),  # default case
)

final_chain = branch_input | branchchain
response = final_chain.invoke({"feedback": "This is a bad phone"})
print(response)
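
Note: Invoking with positive feedback (e.g. {"feedback": "I love this phone"}) routes through prompt2 instead, and anything the classifier labels "none" falls through to the default RunnableLambda.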

LCEL (LangChain Expression Language)

LCEL allows you to define a RunnableSequence using declarative syntax. It provides a more readable and concise way to define chains of runnables.

Comparison

Traditional approach:

output = RunnableSequence(prompt, llm, parser)

LCEL approach:

output = prompt | llm | parser

Note: The | operator always produces a RunnableSequence; the other primitives keep their constructors, though a plain dict used inside a pipe is automatically coerced to a RunnableParallel, as sketched below.
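
For example, reusing the components from the RunnableParallel section, the merge example can be written entirely in LCEL:

chain = {
    'tweet': template1 | llm | parser,
    'facebook_post': template2 | llm | parser,
} | merge_template | llm | parser

print(chain.invoke({'topic': 'AI in Healthcare'}))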