Prompt In Langchain

We can directly pass user input to a language model as a prompt, as shown below:

from langchain_ollama import ChatOllama
user_input = input("Enter your question: ")
chat = ChatOllama(model="llama3")
response = chat.invoke(user_input)
print(response.content)

This approach works, but it gives full control to the user over the prompt. The user can enter any type of instruction, including:

  • Prompt injection attempts
  • Offensive or unsafe content
  • Requests that override intended behavior
  • Since there are no predefined rules, the chat model follows whatever the user provides.

Prompt Templates

To have more control over the prompts sent to the language model, we can use Prompt Templates. Prompt Templates allow us to define a structured format for the prompts, ensuring consistency and safety in the interactions.

A Prompt Template allows use to:

  • Define fixed rules and assumptions
  • Insert dynamic user input into predefined structures
  • Ensure consistent and safe outputs
  • Control the response format and tone
  • Fine-tune the model’s behavior for specific use cases

Here’s an example of using a Prompt Template in LangChain:

from langchain_core.prompts import PromptTemplate
from langchain_ollama import ChatOllama

# Initialize model
chat = ChatOllama(model="llama3")

# Prompt template
template = PromptTemplate(
template="""
    Assume you are an expert software engineer.

    Rules:
    - Answer clearly and concisely.
    - Do NOT include any extra or irrelevant information.
    - If the user uses vulgar or offensive language, do NOT respond.
    - If the user asks "who made you" or "who invented you", reply only with "Nirajan".
    - Do NOT include any meta explanations.

    Question:
    {question}
""",
    input_variables=["question"]
)

# Get user input
user_question = input("Enter your question: ")

# Create prompt
prompt = template.invoke({"question": user_question})

# Get response
response = chat.invoke(prompt)

# Output response
print(response.content)

Message

Let us Create a simple chatbot based on the above concept: from langchain_ollama import ChatOllama

from langchain_ollama import ChatOllama
chat = ChatOllama(model="llama3")
while True:
    user_question = input("You: ")
    response = chat.invoke(user_question)
    print("Assistant: ", response.content)

Here the output look like :

You: Which is greater 2 or 10
Assistant:  The answer is... 10!
You: now multiply bigger number by 20
Assistant:  Please provide the bigger number, and I'll be happy to multiply it by 20!

Whats wrong here? As chatmodel is stateless, it does not remember previous interactions. So, if we ask a follow-up question, it will not have the context of the previous question.wsame thing happening here in the above example thats why it is asking to provide the bigger number again.

Solution? We can keep history of previous interaction and send it along with the new question to provide context. This way, the model can understand the flow of the conversation and respond appropriately as shown below:

from langchain_ollama import ChatOllama

chat = ChatOllama(model="llama3")
chat_history =[]
while True:
    user_question = input("You: ")
    chat_history.append(user_question)
    response = chat.invoke(chat_history)
    chat_history.append(response.content)
    print("Assistant: ", response.content)
You: Which is greater 2 or 15
Assistant:  The answer is... 15!
You: multiply greater number by 10
Assistant:  Let's multiply the greater number (15) by 10:
15 × 10 = 150
So, the result of multiplying the greater number by 10 is 150.

When we print chat_history, we can see that it contains the entire conversation history:

['Which is greater 2 or 15', 'The correct answer is: 15 is greater than 2.', 'multiply greater number by 10', 'Since 15 is greater than 2, if we multiply the greater number (15) by 10, we get:\n\n15 × 10 = 150']

Whats Problem here In the above example, we are just appeding user question and model response to chat_history. There is no differentiation between user messages and assistant messages. This can lead to confusion for the model as it may not be able to distinguish between who said what in the conversation.This can solved by either appending message as

chat_history.append({"role": "user", "content": user_question})
chat_history.append({"role": "assistant", "content": response.content})

This is ok but inorder to make it more structured and easier to manage, we can use Langchain’s built-in Message classes as

  • HumanMessage : Represents messages from the user.
  • AIMessage : Represents messages from the AI assistant.
  • SystemMessage : Represents system-level messages that provide context or instructions to the model.

Here’s an example of using Message classes to manage chat history:

from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage, AIMessage,SystemMessage
chat = ChatOllama(model="llama3")
chat_history =[]
while True:
    chat
    user_question = input("You: ")
    chat_history.append(HumanMessage(content=user_question))
    response = chat.invoke(chat_history)
    chat_history.append(AIMessage(content=response.content))
    print("Assistant: ", response.content)

This way, the chat history is more structured, and the model can easily differentiate between user and assistant messages, leading to better context management and more accurate responses.

ChatPromptTemplate

LangChain also provides ChatPromptTemplate specifically designed for chat-based interactions. It allows you to create structured prompts for chat models, incorporating system messages, user messages, and assistant messages. Here’s an example of using ChatPromptTemplate:

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
llms = ChatOllama(model="llama3")
chat_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("human", "{user_input}"),

])
chat_prompt = chat_template.invoke({"user_input": "Hello, how are you?"})

resp=llms.invoke(chat_prompt)
print(resp.content)

Placeholders

A placeholder is a special spot in the prompt template where you can inject a list of messages (like previous dialogue history) or dynamic content at runtime, enabling multi-turn or context-aware conversations.

Eg: To make aware of previous conversation history, we can use MessagesPlaceholder as shown below:

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
# Initialize the chat model with the desired model name
chat_model = ChatOllama(model="llama3")
# Define the chat prompt template with system message, placeholder for old messages, and a human query
chat_template = ChatPromptTemplate([
    ('system', 'You are a software engineer'),
    MessagesPlaceholder(variable_name='old_data'),
    ('human', 'What is {query}')
])
# Dummy old conversation data - in production, load from database or other persistent storage
old_data = [
    HumanMessage(content="Which is greater, 2 or 5?"),
    AIMessage(content="5")
]
# Create the prompt by invoking the template with old_data and a new query
prompt = chat_template.invoke({
    "old_data": old_data,
    "query": "Multiply greater number by 10"
})
# Send the prompt to the chat model and get the response
response = chat_model.invoke(prompt)
# Print the model's response
print(response.content)

ChatBot with ChatPromptTemplate

from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage
from langchain_core.messages import HumanMessage, SystemMessage
chat_model = ChatOllama(model="llama3")
chat_template = ChatPromptTemplate([
    ('system', 'You are a software engineer'),
    MessagesPlaceholder(variable_name='chat_history'),
    ('human', '{user_input}')
])
chat_history = []
while True:
    user_input = input("You:")
    llm_prompt = chat_template.invoke({
        "chat_history": chat_history,
        "user_input": user_input
    })
    response = chat_model.invoke(llm_prompt)
    chat_history.append(HumanMessage(content=user_input))
    chat_history.append(AIMessage(content=response.content))
    print("Assistant:", response.content)

langchain-chatprompttemplate

Note: Make sure you use PromptTemplate when you have single input and previous context is not required. Use ChatPromptTemplate when you have multiple messages and previous context is required for better understanding.