Flexorama is a coding agent that offers both a command-line interface and a web-based chat window. It is designed to streamline coding projects by combining LLM assistance with your existing tools and workflows, and is aimed at software developers, data scientists, and anyone who frequently writes, edits, or manages code. Flexorama stands out because it lets you customize its behavior, integrate it with different AI models, and easily search past conversations and project files.
# Flexorama

The hybrid CLI/web coding agent that works your way.

<img width="1115" height="618" alt="image" src="https://github.com/user-attachments/assets/c39ad561-2021-4d83-88ae-12842338b9fb" />
<img width="1217" height="786" alt="image" src="https://github.com/user-attachments/assets/a6aca727-c29b-4275-980c-d6bffd37aad1" />

## Features

- Built-in file editing, bash, code search, and glob tools
- Claude-style skills support and management with /skills
- Claude-style custom slash commands via ~/.flexorama/commands/
- Syntax highlighting for code snippets
- Direct bash command execution with !
- Adding context files with @path_to_file_name
- Image support (for models that support it)
- <tab> autocomplete for file paths and commands
- MCP support
- Local and global AGENTS.md support
- Bash command and file editing security model with sensible defaults and easy addition of wildcard patterns to your allow list
- Yolo mode for living dangerously
- Customizable system prompt
- Conversation history stored in a per-project SQLite DB
- Session resuming via /resume
- Full-text conversation search via /search
- Plan mode and /plan command support for managing plans and toggling plan mode
- Subagent support via /agent
- Command-line history navigation with up and down arrow keys and Ctrl-R search
- Support for different LLM APIs (Anthropic, Gemini, Mistral, OpenAI, Z.AI) via the --provider arg
- Support for a different model per provider with /model
- Local model support using the ollama provider with Ollama
- Todo checklists
- Interactive and non-interactive modes
- [Agent Client Protocol (ACP)](https://agentclientprotocol.com/overview/introduction) support for editor integration
- Limited Claude Code-style [hook](https://code.claude.com/docs/en/hook) support (UserPromptSubmit, PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, PermissionRequest, with some restrictions)

## Web interface

The optional web UI provides a ChatGPT-style browser-based interface for chats, as well as plan, agent, MCP, skills, and stats functionality.

## Todo

- Git worktrees
- Token speedometer
- Hooks
- Web search tool
- Compacting
- Memory editing
- Sandboxing

## Usage

### Provider

Specify a provider on the command line with --provider. Supported providers:

- openai
- gemini
- mistral
- z.ai
- anthropic
- ollama

### API token

Specify an API token on the command line with --api-key, or set the env var for your provider. Supported env vars:

- OPENAI_API_KEY
- ZAI_API_KEY
- GEMINI_API_KEY
- MISTRAL_API_KEY
- ANTHROPIC_AUTH_TOKEN

### CLI version

```
cargo run -- --provider <provider>
```

### Web version

```
cargo run -- --web --provider <provider>
```

## License

This project is licensed under the MIT License.
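Putting the usage steps above together, a quick-start session might look like the following. This is a sketch: the key value is a placeholder, and Anthropic is chosen only as an example; substitute any supported provider and its matching env var.

```shell
# Supply the API token via the provider's documented env var
# (placeholder value -- use your own key, or pass --api-key instead).
export ANTHROPIC_AUTH_TOKEN="sk-ant-..."

# Start an interactive CLI session with the chosen provider.
cargo run -- --provider anthropic

# Or serve the browser-based chat UI instead.
cargo run -- --web --provider anthropic
```

The same pattern applies to the other providers, e.g. `GEMINI_API_KEY` with `--provider gemini`.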