A modular, state-aware conversational AI platform built with LangGraph, LangChain and Streamlit
The State-Aware Agentic AI Chatbot is a production-ready, extensible AI agent framework that combines the graph-based orchestration of LangGraph with a Streamlit frontend. The architecture is built around a typed state machine — every message exchange flows through a compiled StateGraph, giving the agent full awareness of conversation history at every step.
The project is designed as a foundation for building increasingly complex agentic workflows. The current implementation ships a Research Assistant use case, but the graph-based architecture makes it straightforward to plug in tool-calling nodes, RAG pipelines, multi-agent subgraphs, or any custom workflow.
- State-aware by design — conversation history is tracked via LangGraph's `add_messages` reducer, not session variables
- Config-driven UI — all UI options (LLM providers, models, use cases) are controlled from a single `.ini` file, with no code changes needed to add options
- Separation of concerns — LLM wiring, graph logic, state schema, and UI rendering are each isolated in their own module
- Groq-powered inference — uses Groq's ultra-fast inference API with Llama 3.x models for near-instant responses
| Layer | Technology | Purpose |
|---|---|---|
| Orchestration | LangGraph | Stateful graph execution engine for agentic workflows |
| LLM Framework | LangChain | Abstractions for LLMs, messages, and chains |
| LLM Provider | Groq + `langchain_groq` | Fast inference for Llama 3.1 / 3.3 models |
| Frontend | Streamlit | Chat UI and sidebar controls |
| State Schema | `TypedDict` + `Annotated` | Typed, append-only message state |
| Config | `configparser` (`.ini`) | Declarative UI configuration |
| Language | Python 3.10+ | Core runtime |
```
State-aware-bot/
│
├── app.py                             # Application entry point
├── requirements.txt                   # Python dependencies
├── LICENSE                            # Apache 2.0 license
│
└── src/
    └── langgraph_agentic_ai/
        │
        ├── main.py                    # App orchestrator — wires UI, LLM, graph, and output
        │
        ├── state/
        │   └── state.py               # LangGraph state schema (typed message list)
        │
        ├── llms/
        │   └── groqllm.py             # Groq LLM factory — reads API key and model from UI input
        │
        ├── nodes/
        │   └── basic_chatbot_node.py  # Graph node — invokes the LLM with current state
        │
        ├── graph/
        │   └── graph_builder.py       # Builds and compiles the StateGraph for each use case
        │
        └── ui/
            ├── UI_config.ini          # Declarative config: page title, LLM/model/usecase options
            ├── UI_config.py           # Config reader — parses .ini and exposes typed getters
            └── streamlit/
                ├── loadUI.py          # Renders sidebar controls and returns user selections
                └── display_out.py     # Streams graph output into the Streamlit chat interface
```
The top-level entry point. Imports and calls load_langgraph_agenticai_app() from main.py. Run this file to start the application.
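A minimal sketch of what this looks like; the exact import path is assumed from the directory layout above:

```python
# app.py — entry point (illustrative sketch)
from src.langgraph_agentic_ai.main import load_langgraph_agenticai_app

if __name__ == "__main__":
    # Delegates everything to the orchestrator in main.py
    load_langgraph_agenticai_app()
```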
The central orchestrator. Initializes the Streamlit UI, captures user selections and chat input, instantiates the LLM, builds the graph for the selected use case, and delegates output rendering. All the wiring between components happens here.
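A condensed, illustrative sketch of that flow. The class names come from the modules described below; method names such as `load_streamlit_ui()` and `get_llm_model()` and the dictionary keys are assumptions:

```python
# main.py — illustrative orchestration flow (simplified)
import streamlit as st

from src.langgraph_agentic_ai.ui.streamlit.loadUI import LoadStreamlitUI
from src.langgraph_agentic_ai.llms.groqllm import GroqLLM
from src.langgraph_agentic_ai.graph.graph_builder import GraphBuilder
from src.langgraph_agentic_ai.ui.streamlit.display_out import DisplayOutStreamlit


def load_langgraph_agenticai_app():
    # 1. Render sidebar controls and collect user selections
    user_controls = LoadStreamlitUI().load_streamlit_ui()

    # 2. Wait for a chat message
    user_message = st.chat_input("Enter your message:")
    if not user_message:
        return

    # 3. Build the LLM from the selected provider, model, and API key
    llm = GroqLLM(user_controls_input=user_controls).get_llm_model()

    # 4. Compile the graph for the selected use case
    usecase = user_controls.get("selected_usecase")
    graph = GraphBuilder(llm).setup_graph(usecase)

    # 5. Stream the graph output into the chat UI
    DisplayOutStreamlit(usecase, graph, user_message).display_result_on_ui()
```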
Defines State, the typed dictionary that flows through every node in the LangGraph. The messages field uses the add_messages reducer so incoming messages are appended rather than overwritten — this is what makes the bot "state-aware" across turns.
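A minimal sketch of that schema, following the conventional LangGraph pattern:

```python
# state/state.py — typed state shared by every node in the graph
from typing import Annotated
from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages


class State(TypedDict):
    # add_messages appends new messages to the existing list instead of
    # replacing it, preserving conversation history across graph steps
    messages: Annotated[list[BaseMessage], add_messages]
```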
The GroqLLM factory class. Reads the selected model name and API key from the UI controls dict, validates the key, and returns a configured ChatGroq instance ready to be passed into a graph node.
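An illustrative sketch of the factory; the dictionary keys (`GROQ_API_KEY`, `selected_groq_model`) and the method name `get_llm_model()` are assumptions:

```python
# llms/groqllm.py — Groq LLM factory (sketch)
from langchain_groq import ChatGroq


class GroqLLM:
    def __init__(self, user_controls_input: dict):
        self.user_controls_input = user_controls_input

    def get_llm_model(self) -> ChatGroq:
        api_key = self.user_controls_input.get("GROQ_API_KEY", "")
        model = self.user_controls_input.get("selected_groq_model", "llama-3.1-8b-instant")

        if not api_key:
            raise ValueError("Please provide a Groq API key before starting the chat.")

        # Return a configured chat model ready to be passed into a graph node
        return ChatGroq(model=model, api_key=api_key)
```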
The BasicChatbotNode class. Implements a single graph node that invokes the LLM with the current message state and returns the response. This is the building block for more complex multi-node graphs.
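A minimal sketch of such a node (the method name `process` is an assumption):

```python
# nodes/basic_chatbot_node.py — single LLM-invoking graph node (sketch)
from src.langgraph_agentic_ai.state.state import State


class BasicChatbotNode:
    def __init__(self, model):
        self.llm = model

    def process(self, state: State) -> dict:
        # Invoke the LLM with the full message history; the returned dict is
        # merged into the graph state via the add_messages reducer
        return {"messages": [self.llm.invoke(state["messages"])]}
```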
The GraphBuilder class compiles a StateGraph for a given use case. It wires nodes to START and END edges and returns a compiled graph ready for .stream() or .invoke(). Adding a new use case means adding a new build method here.
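A hedged sketch of the builder, assuming a `basic_chatbot_build_graph()` method backs the Research Assistant use case:

```python
# graph/graph_builder.py — compiles a StateGraph per use case (sketch)
from langgraph.graph import StateGraph, START, END

from src.langgraph_agentic_ai.state.state import State
from src.langgraph_agentic_ai.nodes.basic_chatbot_node import BasicChatbotNode


class GraphBuilder:
    def __init__(self, model):
        self.llm = model
        self.graph_builder = StateGraph(State)

    def basic_chatbot_build_graph(self):
        # One node: user message in -> LLM response out
        chatbot_node = BasicChatbotNode(self.llm)
        self.graph_builder.add_node("chatbot", chatbot_node.process)
        self.graph_builder.add_edge(START, "chatbot")
        self.graph_builder.add_edge("chatbot", END)

    def setup_graph(self, usecase: str):
        # Each supported use case maps to one build method
        if usecase == "Research Assistant":
            self.basic_chatbot_build_graph()
        else:
            raise ValueError(f"Unsupported use case: {usecase}")
        return self.graph_builder.compile()
```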
The single source of truth for all UI configuration. Controls the page title, available LLM providers, selectable Groq models, and supported use cases — all without touching Python code.
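An illustrative shape for that file; apart from `LLM_OPTIONS` and `USECASE_OPTIONS`, which are referenced later in this README, the section and key names are assumptions:

```ini
[DEFAULT]
PAGE_TITLE = State-Aware Agentic AI Chatbot
LLM_OPTIONS = Groq
USECASE_OPTIONS = Research Assistant
GROQ_MODEL_OPTIONS = llama-3.1-8b-instant, llama-3.3-70b-versatile
```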
The Config class wraps configparser and exposes typed getters (get_llm_options(), get_groq_model_options(), etc.). Validates that required keys exist on load and raises descriptive errors if they are missing.
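A sketch of the reader, assuming the `.ini` layout sketched above:

```python
# ui/UI_config.py — typed access to UI_config.ini (sketch)
from configparser import ConfigParser


class Config:
    # Default path is illustrative; it follows the repository layout above
    def __init__(self, config_file: str = "./src/langgraph_agentic_ai/ui/UI_config.ini"):
        self.config = ConfigParser()
        if not self.config.read(config_file):
            raise FileNotFoundError(f"UI config not found: {config_file}")

    def get_page_title(self) -> str:
        return self.config["DEFAULT"]["PAGE_TITLE"]

    def get_llm_options(self) -> list[str]:
        return self.config["DEFAULT"]["LLM_OPTIONS"].split(", ")

    def get_usecase_options(self) -> list[str]:
        return self.config["DEFAULT"]["USECASE_OPTIONS"].split(", ")

    def get_groq_model_options(self) -> list[str]:
        return self.config["DEFAULT"]["GROQ_MODEL_OPTIONS"].split(", ")
```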
The LoadStreamlitUI class sets the Streamlit page config and renders the sidebar: LLM selector, model selector, API key input, and use case selector. Returns a dictionary of user selections consumed by main.py.
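An illustrative sketch of the sidebar rendering; widget labels and the returned dictionary keys are assumptions:

```python
# ui/streamlit/loadUI.py — sidebar controls (sketch)
import streamlit as st

from src.langgraph_agentic_ai.ui.UI_config import Config


class LoadStreamlitUI:
    def __init__(self):
        self.config = Config()
        self.user_controls = {}

    def load_streamlit_ui(self) -> dict:
        st.set_page_config(page_title=self.config.get_page_title(), layout="wide")
        st.header(self.config.get_page_title())

        with st.sidebar:
            self.user_controls["selected_llm"] = st.selectbox(
                "Select LLM", self.config.get_llm_options()
            )
            if self.user_controls["selected_llm"] == "Groq":
                self.user_controls["selected_groq_model"] = st.selectbox(
                    "Select model", self.config.get_groq_model_options()
                )
                self.user_controls["GROQ_API_KEY"] = st.text_input(
                    "Groq API key", type="password"
                )
            self.user_controls["selected_usecase"] = st.selectbox(
                "Select use case", self.config.get_usecase_options()
            )

        return self.user_controls
```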
The DisplayOutStreamlit class handles streaming graph output to the Streamlit chat interface. It renders the user's message and streams the assistant's response as the graph emits events.
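A sketch of the renderer, assuming the compiled graph is consumed with `.stream()` and that each event maps node names to state updates:

```python
# ui/streamlit/display_out.py — streams graph output into the chat UI (sketch)
import streamlit as st


class DisplayOutStreamlit:
    def __init__(self, usecase: str, graph, user_message: str):
        self.usecase = usecase
        self.graph = graph
        self.user_message = user_message

    def display_result_on_ui(self):
        if self.usecase == "Research Assistant":
            st.chat_message("user").write(self.user_message)

            # Each event is a {node_name: {"messages": [...]}} mapping emitted
            # as the graph executes
            for event in self.graph.stream({"messages": [("user", self.user_message)]}):
                for node_output in event.values():
                    last_message = node_output["messages"][-1]
                    st.chat_message("assistant").write(last_message.content)
```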
- Python 3.10 or higher
- A Groq API key (free tier available)
```bash
# Clone the repository
git clone <your-repo-url>
cd State-aware-bot

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate   # macOS/Linux
.venv\Scripts\activate      # Windows

# Install dependencies
pip install -r requirements.txt
```

```bash
streamlit run app.py
```

Then open http://localhost:8501 in your browser.
- Open the sidebar in the Streamlit UI
- Select Groq as the LLM provider
- Choose a model (`llama-3.1-8b-instant` for speed, `llama-3.3-70b-versatile` for quality)
- Paste your Groq API key
- Select the Research Assistant use case
- Start chatting
- Add the use case name to `USECASE_OPTIONS` in `UI_config.ini`
- Create a new build method in `GraphBuilder` (e.g., `rag_chatbot_build_graph()`)
- Add a branch in `GraphBuilder.setup_graph()` for the new use case (see the sketch after this list)
- Add a rendering branch in `DisplayOutStreamlit.display_result_on_ui()`
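As a hypothetical example, supporting a "RAG Chatbot" use case might mean extending `setup_graph()` inside `GraphBuilder` like this; the use case name and `rag_chatbot_build_graph()` are placeholders, not part of the current codebase:

```python
# graph/graph_builder.py — hypothetical branch for a new use case
def setup_graph(self, usecase: str):
    if usecase == "Research Assistant":
        self.basic_chatbot_build_graph()
    elif usecase == "RAG Chatbot":        # new option from UI_config.ini
        self.rag_chatbot_build_graph()    # new build method
    else:
        raise ValueError(f"Unsupported use case: {usecase}")
    return self.graph_builder.compile()
```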
- Add the provider name to `LLM_OPTIONS` in `UI_config.ini`
- Create a new factory class in `src/langgraph_agentic_ai/llms/`, mirroring `groqllm.py` (see the sketch after this list)
- Add a selection branch in `main.py` to instantiate the new factory
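For illustration, a hypothetical OpenAI factory mirroring `groqllm.py` could look like the following; the class, key names, and the `langchain_openai` dependency are assumptions and not part of the current project:

```python
# llms/openaillm.py — hypothetical second provider factory (not in the repo)
from langchain_openai import ChatOpenAI


class OpenAILLM:
    def __init__(self, user_controls_input: dict):
        self.user_controls_input = user_controls_input

    def get_llm_model(self) -> ChatOpenAI:
        api_key = self.user_controls_input.get("OPENAI_API_KEY", "")
        model = self.user_controls_input.get("selected_openai_model", "gpt-4o-mini")
        if not api_key:
            raise ValueError("Please provide an OpenAI API key.")
        # Same return shape as GroqLLM.get_llm_model(), so main.py can treat
        # providers interchangeably
        return ChatOpenAI(model=model, api_key=api_key)
```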
Distributed under the Apache License 2.0.