RAG in LangChain: Introduction and Loaders

Learn how RAG combines retrieval with generative models to create accurate, up-to-date, and scalable LLM workflows using LangChain’s document loaders, retrievers, and vector stores.

January 23, 2026 · 4 min · 726 words · Nirajan Khatiwada

Vector Stores in LangChain: Embeddings, Semantic Search & Similarity Retrieval

Learn how vector stores work in LangChain, from embeddings and similarity search to using Chroma for storing, querying, filtering, and managing high-dimensional vector data in RAG and recommendation systems.

January 23, 2026 · 6 min · 1082 words · Nirajan Khatiwada

Retrievers in LangChain: Data Sources, MMR & Contextual Compression

Understand how retrievers work in LangChain, explore different retriever types, and learn how to use Maximal Marginal Relevance (MMR) and Contextual Compression to improve relevance, diversity, and efficiency in RAG pipelines.

January 23, 2026 · 6 min · 1093 words · Nirajan Khatiwada

Retrieval Augmented Generation (RAG) in LangChain: Architecture & Practical Example

Learn how Retrieval Augmented Generation (RAG) works in LangChain with clear architecture diagrams and a complete YouTube summarizer chatbot implementation using Chroma, Ollama, and runnable chains.

January 26, 2026 · 5 min · 910 words · Nirajan Khatiwada