# Agentic Workflow System

A platform for building, running, and interacting with agentic workflows powered by Large Language Models (LLMs). This project provides a modular backend for orchestrating agent reasoning, planning, and tool use, along with a modern React frontend for interactive user experiences.

<p align="center">
  <img src="./img/demo.png" alt="Select personas UI" width="600" style="margin: 0 16px;" />
</p>

---

## Features

- **Agentic Reasoning**: Agents can plan, decompose, and execute complex tasks using a set of modular tools.
- **LLM Integration**: Supports local, HuggingFace, and OpenAI LLMs for planning, summarization, coding, and more.
- **Pluggable Tools**: Summarization, citation generation, open-access journal search, and code generation.
- **Memory System**: Short-term and long-term memory for contextual, stateful agent behavior.
- **Modern Frontend**: React + Vite UI for submitting queries, viewing plans, and interacting with agent results.
- **API-First**: FastAPI backend with REST endpoints for agent queries and results.

---

## Architecture

```
┌────────────┐      HTTP API      ┌──────────────┐
│  Frontend  │ ◄────────────────► │   Backend    │
│  (React)   │                    │  (FastAPI)   │
└────────────┘ ◄────────────────► │   Agentic    │
                                  │   Workflow   │
                                  └──────────────┘
```

- **frontend/**: React app (Vite, TypeScript) for user interaction
- **backend/**: FastAPI app with agent core, planning, memory, and tool modules

---

## Quickstart

### 1. Backend Setup

```bash
cd backend
pip install -r requirements.txt

# Configure environment variables in a .env file (see below)
uvicorn src.main:app --reload
# App runs at http://localhost:8000
```

#### Environment Variables (.env)

```
# Example for a local LLM
LLM_CLIENT_TYPE=local
LLM_API_URL=http://localhost:1234/api/v1/chat
LLM_MODEL=qwen2.5-coder-32b-instruct

# Or for OpenAI
# LLM_CLIENT_TYPE=openai
# OPENAI_API_KEY=sk-...
# OPENAI_MODEL=gpt-3.5-turbo
```

### 2. Frontend Setup

```bash
cd frontend
npm install
npm run dev
# App runs at http://localhost:3000
```

---

## Usage

1. Start the backend and frontend servers.
2. Open the frontend in your browser.
3. Enter a query and select an agent role (e.g., Researcher, Coder).
4. The agent will plan, execute, and display results, including reasoning steps and memory.

---

## Backend Overview

- **core/agent.py**: Main agent class (planning, memory, tool use)
- **core/planner.py**: Planner interface and LLM-based planner
- **core/tools.py**: Modular tools (summarization, citation, search, coding, formatting)
- **core/memory.py**: Short-term and long-term memory system
- **core/clients/**: LLM client interfaces (local, HuggingFace, OpenAI)
- **src/routes/**: FastAPI routes for agent query and result endpoints

## Frontend Overview

- **src/pages/App.tsx**: Main app page
- **src/components/AgentInteraction.tsx**: Query form and result display
- **src/api/agent.ts**: API client for backend

---
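The `LLM_CLIENT_TYPE` variable in the Quickstart above selects which LLM backend the agent talks to. The sketch below illustrates that selection pattern only; `LocalClient`, `OpenAIClient`, and `make_llm_client` are hypothetical names, not the actual interfaces in `core/clients/`.

```python
# Illustrative sketch of env-driven LLM client selection.
# Class and function names are hypothetical, not the project's API.
from dataclasses import dataclass


@dataclass
class LocalClient:
    api_url: str
    model: str


@dataclass
class OpenAIClient:
    api_key: str
    model: str


def make_llm_client(env: dict) -> object:
    """Pick a client implementation based on LLM_CLIENT_TYPE."""
    client_type = env.get("LLM_CLIENT_TYPE", "local")
    if client_type == "local":
        return LocalClient(env["LLM_API_URL"], env["LLM_MODEL"])
    if client_type == "openai":
        return OpenAIClient(env["OPENAI_API_KEY"], env["OPENAI_MODEL"])
    raise ValueError(f"Unknown LLM_CLIENT_TYPE: {client_type}")


client = make_llm_client({
    "LLM_CLIENT_TYPE": "local",
    "LLM_API_URL": "http://localhost:1234/api/v1/chat",
    "LLM_MODEL": "qwen2.5-coder-32b-instruct",
})
print(type(client).__name__)  # LocalClient
```

Keeping the switch behind one factory function means planning, summarization, and coding code paths never need to know which provider is configured.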
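The Backend Overview above describes an agent that plans, runs modular tools, and records results in memory. A minimal sketch of that plan → execute → remember cycle follows; all names are hypothetical, and where `core/agent.py` uses an LLM-based planner, this sketch hard-codes a two-step plan for illustration.

```python
# Hypothetical sketch of the agent loop: plan a query into tool calls,
# execute each tool, and store outputs in short-term memory.
from typing import Callable


class ShortTermMemory:
    def __init__(self) -> None:
        self.entries: list[str] = []

    def remember(self, item: str) -> None:
        self.entries.append(item)


class Agent:
    def __init__(self, tools: dict[str, Callable[[str], str]]) -> None:
        self.tools = tools
        self.memory = ShortTermMemory()

    def plan(self, query: str) -> list[tuple[str, str]]:
        # A real planner would ask the LLM to decompose the query;
        # here the plan is fixed: search, then summarize.
        return [("search", query), ("summarize", query)]

    def run(self, query: str) -> list[str]:
        results = []
        for tool_name, arg in self.plan(query):
            output = self.tools[tool_name](arg)
            self.memory.remember(output)  # keep context for later steps
            results.append(output)
        return results


agent = Agent({
    "search": lambda q: f"found 3 papers on {q}",
    "summarize": lambda q: f"summary of {q}",
})
print(agent.run("agentic workflows"))
```

Because every tool shares the same `str -> str` shape here, new tools (citation generation, code generation, formatting) can be registered without touching the loop; the reasoning steps shown in the frontend correspond to the per-step outputs accumulated in memory.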