Pluang_AI_Knowledge_Copilot
provenance:github:Ayushlion8/Pluang_AI_Knowledge_Copilot
RAG-based internal knowledge copilot for fintech support, providing grounded answers with source citations and safe refusals.
README
# 🟢 Pluang Knowledge Copilot

AI Knowledge Base & Customer Support Copilot (RAG-based)

## Overview

Pluang Knowledge Copilot is an AI-powered assistant that answers user queries **strictly based on internal Pluang documents** using a **Retrieval Augmented Generation (RAG)** architecture.

The goal of this project is to demonstrate how AI systems can be:

- grounded in trusted data
- resistant to hallucinations
- production-aware (quota, cost, reliability)
- easy to reason about and extend

This project was built as part of the **Pluang Tech Intern Assignment**.

---

## Key Features

- 📚 **Document-grounded answers only**
  The assistant answers strictly from indexed internal documents and refuses to guess when information is missing.
- 🔍 **Retrieval Augmented Generation (RAG)**
  Combines semantic search (FAISS + embeddings) with LLM-based reasoning.
- 🧾 **Explicit source citations**
  Every grounded answer includes clear source references.
- 🛑 **Hallucination avoidance**
  If the answer is not present in the knowledge base, the assistant clearly states so.
- ⚙️ **Quota-aware LLM usage**
  Automatically falls back across multiple Gemini models when quota limits are hit.
- 🧩 **Modular, clean architecture**
  Clear separation between configuration, retrieval, prompting, and LLM logic.

---

## Architecture Overview

**High-level flow:**

1. Internal documents are loaded and embedded using a local embedding model.
2. Embeddings are stored in a FAISS vector database.
3. User queries are converted into semantic search queries.
4. Relevant document chunks are retrieved.
5. Gemini LLM generates an answer **only from retrieved context**.
6. Sources are shown only when an answer is grounded.
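The retrieval-and-grounding steps above can be sketched in miniature. This is a dependency-free illustration, not the project's actual code: toy hand-written vectors stand in for sentence-transformer embeddings, a sorted cosine-similarity scan stands in for FAISS, and the names `DOCS`, `retrieve`, and `build_prompt` are hypothetical.

```python
import math

# Toy in-memory knowledge base: (text, embedding) pairs. The real project
# embeds documents with a local model and stores the vectors in FAISS.
DOCS = [
    ("Pluang Gold savings can be started from a small minimum amount.", [0.9, 0.1, 0.0]),
    ("Crypto withdrawals may have a cooling-off period.", [0.1, 0.9, 0.2]),
    ("Physical gold redemption incurs minting and delivery fees.", [0.2, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_vec, k=2):
    """Return the top-k chunks ranked by similarity to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, chunks):
    """Constrain the LLM to retrieved context; instruct it to refuse otherwise."""
    context = "\n".join(f"- {c}" for c in chunks)
    return (
        "Answer ONLY from the context below. If the answer is not present, "
        "say you don't know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# A query about gold savings, embedded near the first document's vector.
chunks = retrieve([1.0, 0.0, 0.1])
print(build_prompt("What is the minimum amount for Pluang Gold savings?", chunks))
```

The key design point the prompt illustrates is step 5 of the flow: generation is restricted to retrieved context, which is what enables the safe refusals tested later in this README.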
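The quota-aware fallback feature can be sketched as a simple try-next-model loop. This is a hedged illustration of the pattern, not the contents of `core/llm.py`: `QuotaExceeded`, `generate_with_fallback`, the model names, and the simulated backend are all hypothetical.

```python
class QuotaExceeded(Exception):
    """Raised when a model's quota is exhausted (illustrative stand-in)."""

# Ordered preference list; the real project configures Gemini model ids
# in core/config.py.
MODELS = ["model-a", "model-b", "model-c"]

def generate_with_fallback(prompt, call_model, models=MODELS):
    """Try each model in order, falling through to the next on quota errors."""
    last_err = None
    for name in models:
        try:
            return name, call_model(name, prompt)
        except QuotaExceeded as err:
            last_err = err  # quota hit: move on to the next model
    raise RuntimeError("all models exhausted") from last_err

# Simulated backend: the first model is over quota, the second succeeds.
def fake_call(name, prompt):
    if name == "model-a":
        raise QuotaExceeded(name)
    return f"answer from {name}"

used, answer = generate_with_fallback("hello", fake_call)
print(used, answer)  # model-b answer from model-b
```

Passing the model-calling function in as an argument keeps the fallback logic testable without real API keys, which mirrors the README's separation between configuration and LLM logic.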
---

## Tech Stack

**Frontend**
- Streamlit (chat-style UI)

**Backend / AI**
- Python
- LangChain (RAG orchestration)
- FAISS (vector store)
- Gemini Flash models (generation)
- HuggingFace sentence-transformers (local embeddings)

---

## Repository Structure

```
├── app.py               # Streamlit entry point
├── core/
│   ├── config.py        # API keys & model list
│   ├── llm.py           # Gemini model fallback logic
│   ├── vectorstore.py   # FAISS + embeddings
│   └── prompt.py        # Prompt template
├── data/
│   └── mock_data.json   # Internal knowledge documents
├── decision_document.md
├── requirements.txt
└── README.md
```

---

## Example Queries

**Grounded queries**
- What is the minimum amount for Pluang Gold savings?
- Is there a cooling-off period for crypto withdrawals?
- What are the fees for physical gold redemption?

**Unanswerable queries (hallucination test)**
- Who is the CEO of Pluang in 2026?
- Is Pluang regulated by SEBI?
- What is Pluang's stock price today?

---

## Screenshots

### Grounded Answer with Source Citation

<sub>Shows a grounded response with explicit source citation.</sub>

### Hallucination Avoidance (Out-of-scope Query)

<sub>Demonstrates safe refusal when information is not present.</sub>

---

## How to Run Locally

```bash
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\Activate
pip install -r requirements.txt
streamlit run app.py
```
PUBLIC HISTORY
First discovered: Mar 21, 2026
IDENTITY
inferred
Identity inferred from code signals. No PROVENANCE.yml found.
METADATA
platform: github
first seen: Feb 5, 2026
last updated: Mar 15, 2026
last crawled: 26 days ago
version: —