Build AI Apps with LangChain and Python

By Sanjeev Sharma (@webcoderspeed1)
Introduction
LangChain is one of the most popular frameworks for building LLM-powered applications in Python. It provides abstractions for chains, agents, memory, and retrieval: the building blocks of most serious AI applications.
From chatbots to document Q&A to autonomous agents, this guide shows you how to build real AI apps with LangChain.
- Installation and Setup
- Your First LLM Call
- Prompt Templates
- Chains — Connect Multiple Steps
- RAG — Retrieval Augmented Generation
- Memory — Conversational Chatbots
- LangChain Agents — Autonomous AI
- Structured Output with Pydantic
- Conclusion
Installation and Setup
```bash
pip install langchain langchain-openai langchain-community
pip install openai python-dotenv
```

Store your API key in a `.env` file in the project root:

```
OPENAI_API_KEY=your-key-here
```
Your First LLM Call
```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

load_dotenv()  # loads OPENAI_API_KEY from .env

llm = ChatOpenAI(model="gpt-4o", temperature=0)

messages = [
    SystemMessage(content="You are a helpful Python expert."),
    HumanMessage(content="Explain list comprehensions in one paragraph."),
]

response = llm.invoke(messages)
print(response.content)
```
Prompt Templates
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert in {language} programming."),
    ("human", "Explain {concept} with a code example."),
])

chain = prompt | llm

result = chain.invoke({
    "language": "Python",
    "concept": "decorators",
})
print(result.content)
```
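Under the hood, a chat prompt template is little more than string interpolation applied to a list of (role, text) pairs. Here is a plain-Python sketch of the idea; the `render_messages` helper is hypothetical, for illustration only:

```python
def render_messages(templates, **variables):
    # fill each {placeholder} in every (role, text) pair
    return [(role, text.format(**variables)) for role, text in templates]

messages = render_messages(
    [
        ("system", "You are an expert in {language} programming."),
        ("human", "Explain {concept} with a code example."),
    ],
    language="Python",
    concept="decorators",
)
print(messages[1])  # ('human', 'Explain decorators with a code example.')
```

The real `ChatPromptTemplate` adds input validation, partial variables, and proper message objects on top of this idea.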
Chains — Connect Multiple Steps
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()

# Step 1: Generate a topic
topic_prompt = ChatPromptTemplate.from_template(
    "Give me one interesting fact about {subject} in one sentence."
)

# Step 2: Expand on it
expand_prompt = ChatPromptTemplate.from_template(
    "Expand on this fact with more details:\n{fact}"
)

# Chain them together using the | operator
chain = (
    topic_prompt
    | llm
    | parser
    | (lambda fact: {"fact": fact})  # map the string into the next prompt's input
    | expand_prompt
    | llm
    | parser
)

result = chain.invoke({"subject": "black holes"})
print(result)
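The `|` operator works because every LangChain component implements a shared Runnable interface. A toy sketch of the mechanism in plain Python; this `Pipe` class is illustrative, not LangChain's actual implementation:

```python
class Pipe:
    """Toy stand-in for LangChain's Runnable: invoke() plus | chaining."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # the output of this step becomes the input of the next
        return Pipe(lambda value: other.invoke(self.invoke(value)))

double = Pipe(lambda x: x * 2)
increment = Pipe(lambda x: x + 1)
chain = double | increment
print(chain.invoke(5))  # 11
```

Prompts, models, parsers, and even plain lambdas all slot into the same pipeline because they are coerced to this common interface.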
RAG — Retrieval Augmented Generation
Build a Q&A system over your own documents:
```bash
pip install faiss-cpu langchain-community tiktoken
```
```python
from langchain_community.document_loaders import TextLoader  # use PyPDFLoader for PDFs
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Step 1: Load documents
loader = TextLoader("my_document.txt")
documents = loader.load()

# Step 2: Split into chunks
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
)
chunks = splitter.split_documents(documents)

# Step 3: Create embeddings and store them in a vector DB
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 3})

# Step 4: Build the RAG chain
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_template("""
Answer the question based on the context below.
If you don't know, say "I don't know."

Context: {context}

Question: {question}
""")

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": lambda x: x}
    | prompt
    | llm
    | StrOutputParser()
)

# Ask questions!
answer = rag_chain.invoke("What is the main topic of the document?")
print(answer)
```
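The splitting step is worth understanding: chunks overlap so that context straddling a boundary is not lost. Here is a simplified fixed-window chunker; the real `RecursiveCharacterTextSplitter` additionally prefers paragraph and sentence boundaries:

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200):
    # slide a window of chunk_size, advancing by chunk_size - chunk_overlap
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("a" * 2500)
print(len(chunks))     # 4
print(len(chunks[0]))  # 1000
```

Each chunk repeats the last 200 characters of the previous one, so a sentence cut in half at one boundary still appears whole in the next chunk.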
Memory — Conversational Chatbots
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Be concise."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])

chain = prompt | llm

# Store conversation history
history = []

def chat(user_input: str) -> str:
    response = chain.invoke({
        "history": history,
        "input": user_input,
    })
    # Update history
    history.append(HumanMessage(content=user_input))
    history.append(AIMessage(content=response.content))
    return response.content

# Have a conversation
print(chat("My name is Alice."))
print(chat("What's my name?"))  # Remembers!
print(chat("Tell me a joke."))
```
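One caveat with the in-memory list above: it grows without bound, and a long conversation will eventually overflow the model's context window. A minimal trimming helper (hypothetical; production apps often summarize old turns instead of dropping them):

```python
def trim_history(history, max_messages: int = 20):
    # keep only the most recent messages; the oldest are dropped first
    return history[-max_messages:]

long_history = list(range(50))  # stand-in for a list of message objects
trimmed = trim_history(long_history)
print(len(trimmed))  # 20
print(trimmed[0])    # 30
```

Call it on `history` before each `chain.invoke` to cap the prompt size.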
LangChain Agents — Autonomous AI
Agents can decide which tools to use to complete a task:
The search tool and the prompt hub need two extra packages:

```bash
pip install langchainhub duckduckgo-search
```

```python
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.tools import tool
from langchain import hub

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Built-in tools
search = DuckDuckGoSearchRun()

# Custom tool
@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression. Input should be a valid Python math expression."""
    try:
        # NOTE: eval() on model-generated input is unsafe outside a demo;
        # use a restricted expression evaluator in production
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

tools = [search, calculate]

# Use a pre-built ReAct prompt from the LangChain Hub
prompt = hub.pull("hwchase17/react")

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = agent_executor.invoke({
    "input": "What is the square root of the current year?"
})
print(result["output"])
```
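What `AgentExecutor` does with the ReAct prompt is essentially a loop: the model emits a Thought and an Action, the executor runs the matching tool, appends the Observation, and repeats until a Final Answer appears. A toy version of that loop, with a deterministic stand-in for the LLM (the parsing here is deliberately simplistic; LangChain's real output parser is far more robust):

```python
import re

def react_loop(model, tools, question, max_steps=5):
    # `model` is any callable mapping the transcript so far to agent text
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        output = model(transcript)
        transcript += output + "\n"
        if "Final Answer:" in output:
            return output.split("Final Answer:")[-1].strip()
        action = re.search(r"Action: (\w+)\nAction Input: (.+)", output)
        if action:
            # run the named tool and feed its result back to the model
            observation = tools[action.group(1)](action.group(2))
            transcript += f"Observation: {observation}\n"
    return None

def fake_model(transcript):
    # scripted responses standing in for the LLM
    if "Observation:" in transcript:
        return "Thought: I have the result.\nFinal Answer: 4"
    return "Thought: I need math.\nAction: calculate\nAction Input: 2+2"

tools = {"calculate": lambda expr: str(eval(expr))}
print(react_loop(fake_model, tools, "What is 2+2?"))  # 4
```

The `max_steps` cap mirrors `AgentExecutor`'s `max_iterations` safeguard against agents that never converge.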
Structured Output with Pydantic
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from typing import List

class ProductReview(BaseModel):
    sentiment: str = Field(description="positive, negative, or neutral")
    score: int = Field(description="Score from 1-10", ge=1, le=10)
    key_points: List[str] = Field(description="Main points from the review")
    summary: str = Field(description="One sentence summary")

llm = ChatOpenAI(model="gpt-4o")
structured_llm = llm.with_structured_output(ProductReview)

review = """
Amazing laptop! Battery lasts 12 hours, keyboard is great.
Screen is a bit dim outdoors. Overall very happy with purchase.
"""

result = structured_llm.invoke(f"Analyze this product review:\n{review}")

print(result.sentiment)   # "positive"
print(result.score)       # 8
print(result.key_points)  # ['Battery lasts 12 hours', ...]
```
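`with_structured_output` works by asking the model for JSON that matches the schema and then validating it with Pydantic. You can exercise the validation half in isolation; the `raw` dict below is a hypothetical model response, not real output:

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

class ProductReview(BaseModel):
    sentiment: str = Field(description="positive, negative, or neutral")
    score: int = Field(description="Score from 1-10", ge=1, le=10)
    key_points: List[str] = Field(description="Main points from the review")
    summary: str = Field(description="One sentence summary")

raw = {
    "sentiment": "positive",
    "score": 8,
    "key_points": ["12-hour battery", "great keyboard"],
    "summary": "A well-liked laptop despite a dim outdoor screen.",
}
review = ProductReview.model_validate(raw)
print(review.score)  # 8

# out-of-range values are rejected rather than silently accepted
try:
    ProductReview.model_validate({**raw, "score": 15})
except ValidationError:
    print("score out of range")
```

This is why the `ge`/`le` constraints on `score` matter: malformed model output fails loudly at the boundary instead of corrupting downstream logic.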
Conclusion
LangChain gives you the building blocks to go from a simple LLM call to a fully autonomous AI agent in Python. Start with chains and prompts, then add RAG for document-aware apps, memory for conversations, and agents for autonomous tasks. The AI application landscape is growing fast, and LangChain remains one of the go-to frameworks for Python developers building it.