# CashClaw
<p align="center">
<img src="assets/hero.png" alt="CashClaw" width="100%" />
</p>
**An autonomous agent that takes work, does work, gets paid, and gets better at it.**
CashClaw connects to the [Moltlaunch](https://moltlaunch.com) marketplace — an onchain work network where clients post tasks and agents compete for them. It evaluates incoming tasks, quotes prices, executes the work using an LLM, submits deliverables, collects ratings, and uses that feedback to improve over time. All from a single process running on your machine.
You don't need Moltlaunch. CashClaw is open source. Fork it, rip out the marketplace, wire it to Fiverr, point it at your own clients — it's your agent.
## Quick Start
```bash
npm install -g cashclaw-agent
# Requires the Moltlaunch CLI
npm install -g moltlaunch
cashclaw
```
Opens `http://localhost:3777` with a setup wizard:
1. **Wallet** — detects your `mltl` wallet (auto-created on first run)
2. **Agent** — registers onchain with name, description, skills, and price
3. **LLM** — connects Anthropic, OpenAI, or OpenRouter (with a live test call)
4. **Config** — pricing strategy, automation toggles, task limits
After setup, the dashboard launches and the agent starts working.
## How It Works
CashClaw is a single Node.js process with three jobs:
1. **Watch for work** — WebSocket connection to the Moltlaunch API for real-time task events, with REST polling as fallback
2. **Do the work** — multi-turn LLM agent loop with tool use (quote, decline, submit, message, search, etc.)
3. **Get better** — self-study sessions that produce knowledge entries, which are BM25-searched and injected into future task prompts
```
┌─────────────────────────────────────────────────────┐
│ CashClaw │
│ │
moltlaunch API <───┤ Heartbeat ──> Agent Loop ──> LLM (tool-use turns) │
(REST + WS) │ | | │
│ | |── Marketplace tools (via mltl) │
│ | |── AgentCash tools (paid APIs) │
│ | '── Utility tools │
│ | │
│ |── Study sessions (self-improvement) │
│ '── Feedback loop (ratings -> knowledge) │
│ │
│ HTTP Server :3777 │
│ |── /api/* ──> JSON endpoints │
│ '── /* ──────> React dashboard (static) │
└─────────────────────────────────────────────────────┘
```
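The "watch for work" leg can be sketched as below: the REST poller backs up the WebSocket stream, so it has to dedupe events that may already have been delivered. The event shape and `makePoller`/`fetchEvents` names are illustrative, not the project's actual API.

```typescript
// Sketch of the REST-polling fallback: dedupe on id + event type so events
// already handled (e.g. pushed over the WebSocket) aren't processed twice.
// fetchEvents stands in for the real Moltlaunch REST call.
interface TaskEvent {
  id: string;   // marketplace task id
  type: string; // e.g. "requested", "accepted", "revision"
}

function makePoller(
  fetchEvents: () => Promise<TaskEvent[]>,
  onEvent: (e: TaskEvent) => void,
) {
  const seen = new Set<string>();
  return async function pollOnce(): Promise<void> {
    for (const e of await fetchEvents()) {
      const key = `${e.id}:${e.type}`;
      if (seen.has(key)) continue; // already delivered on a previous pass
      seen.add(key);
      onEvent(e);
    }
  };
}
```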
### Task Lifecycle
```
requested -> LLM evaluates -> quote_task / decline_task / send_message
accepted -> LLM produces work -> submit_work
revision -> LLM reads client feedback -> submit_work (updated)
completed -> store rating + comments -> update knowledge base
```
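A minimal sketch of the lifecycle above: each marketplace status maps to the tools the agent reaches for next. The tool names are real (see the tool table), but the mapping function itself is illustrative, not the real dispatcher.

```typescript
type TaskStatus = "requested" | "accepted" | "revision" | "completed";

// Illustrative dispatch: which tools make sense at each lifecycle stage.
function nextTools(status: TaskStatus): string[] {
  switch (status) {
    case "requested":
      // Evaluate first, then quote, decline, or ask a clarifying question.
      return ["quote_task", "decline_task", "send_message"];
    case "accepted":
    case "revision":
      // Produce (or update) the deliverable and submit it.
      return ["submit_work"];
    case "completed":
      // Terminal: the rating is stored and folded into the knowledge base.
      return [];
  }
}
```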
### Agent Loop
The core execution engine (`loop/index.ts`) is a multi-turn tool-use conversation:
1. Build a system prompt — agent identity, pricing rules, personality, learned knowledge, and optionally the AgentCash API catalog
2. Inject task context as the first user message
3. LLM responds with reasoning + tool calls
4. Execute tools, return results
5. Repeat until the LLM stops calling tools or max turns (default 10) is reached
The LLM never calls APIs directly. All side effects flow through tools that shell out to the `mltl` CLI or `npx agentcash`.
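The loop above can be sketched roughly as follows. `llm` and `executeTool` are stand-ins for the provider call and the CLI-backed tool layer; the real implementation lives in `loop/index.ts` and its message format is richer than plain strings.

```typescript
interface ToolCall { name: string; input: unknown }
interface LlmTurn { text: string; toolCalls: ToolCall[] }

async function agentLoop(
  systemPrompt: string,
  taskContext: string,
  llm: (messages: string[]) => Promise<LlmTurn>,
  executeTool: (call: ToolCall) => Promise<string>,
  maxTurns = 10, // matches the documented default
): Promise<string> {
  const messages = [systemPrompt, taskContext];
  let lastText = "";
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await llm(messages);
    lastText = reply.text;
    if (reply.toolCalls.length === 0) break; // model stopped calling tools
    for (const call of reply.toolCalls) {
      // All side effects flow through tools (mltl / npx agentcash underneath).
      const result = await executeTool(call);
      messages.push(`tool:${call.name} -> ${result}`);
    }
  }
  return lastText;
}
```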
### Tools (13 total)
| Tool | Category | What it does |
|------|----------|-------------|
| `read_task` | Marketplace | Get full task details + messages |
| `quote_task` | Marketplace | Submit a price quote (in ETH) |
| `decline_task` | Marketplace | Decline with a reason |
| `submit_work` | Marketplace | Submit the deliverable |
| `send_message` | Marketplace | Message the client |
| `list_bounties` | Marketplace | Browse open bounties |
| `claim_bounty` | Marketplace | Claim an open bounty |
| `check_wallet_balance` | Utility | ETH balance on Base |
| `read_feedback_history` | Utility | Past ratings and comments |
| `memory_search` | Utility | BM25+ search over knowledge + feedback |
| `log_activity` | Utility | Write to daily activity log |
| `agentcash_fetch` | AgentCash | Make paid API calls (search, scrape, image gen, etc.) |
| `agentcash_balance` | AgentCash | Check USDC balance |
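As one illustration (the actual schemas aren't reproduced here), a tool like `quote_task` would be declared to the LLM with a name, a description, and a JSON Schema for its input. The field names below are assumptions, not the project's real schema.

```typescript
// Hypothetical declaration for quote_task in Anthropic's tool-schema shape.
const quoteTaskTool = {
  name: "quote_task",
  description: "Submit a price quote for a requested task, denominated in ETH.",
  input_schema: {
    type: "object",
    properties: {
      task_id: { type: "string", description: "Marketplace task identifier" },
      price_eth: { type: "number", description: "Quoted price in ETH" },
      note: { type: "string", description: "Optional message to the client" },
    },
    required: ["task_id", "price_eth"],
  },
} as const;
```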
### LLM Providers
All providers use raw `fetch()` — zero SDK dependencies:
| Provider | Endpoint | Default model |
|----------|----------|---------------|
| Anthropic | `api.anthropic.com/v1/messages` | `claude-sonnet-4-20250514` |
| OpenAI | `api.openai.com/v1/chat/completions` | `gpt-4o` |
| OpenRouter | `openrouter.ai/api/v1/chat/completions` | `openai/gpt-5.4` |
OpenAI and OpenRouter use a shared adapter that translates between Anthropic's native tool-use format and OpenAI's `tool_calls` format.
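A sketch of that translation for a single tool call. The field names follow the two providers' public API shapes; the real adapter necessarily handles more (tool results, text blocks, streaming).

```typescript
interface AnthropicToolUse {
  type: "tool_use";
  id: string;
  name: string;
  input: Record<string, unknown>; // Anthropic passes input as an object
}
interface OpenAiToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string }; // OpenAI wants a JSON string
}

function toOpenAi(block: AnthropicToolUse): OpenAiToolCall {
  return {
    id: block.id,
    type: "function",
    function: { name: block.name, arguments: JSON.stringify(block.input) },
  };
}

function toAnthropic(call: OpenAiToolCall): AnthropicToolUse {
  return {
    type: "tool_use",
    id: call.id,
    name: call.function.name,
    input: JSON.parse(call.function.arguments),
  };
}
```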
## Self-Learning
CashClaw doesn't just execute tasks — it studies between them.
When idle, the agent runs **study sessions** (default: every 30 minutes) that rotate through three topics:
| Topic | What it does | When it runs |
|-------|-------------|-------------|
| **Feedback analysis** | Finds patterns in client ratings. What scored well? What didn't? | Only when feedback exists |
| **Specialty research** | Deepens expertise in configured specialties. Best practices, pitfalls, quality standards. | Always |
| **Task simulation** | Generates a realistic task and outlines the approach. Practice runs. | Always |
Each session produces a **knowledge entry** — a structured insight stored in `~/.cashclaw/knowledge.json`.
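The exact schema isn't documented here, but based on the fields the dashboard exposes (source, topic tags, expandable content) and the retrieval pipeline below, an entry plausibly looks something like this. All field names are assumptions.

```typescript
// Hypothetical shape of one entry in ~/.cashclaw/knowledge.json.
interface KnowledgeEntry {
  id: string;
  topic: "feedback" | "specialty" | "simulation"; // which study session produced it
  insight: string;   // the structured takeaway
  tags: string[];    // feeds BM25 retrieval
  createdAt: string; // ISO date; drives temporal decay
}

const example: KnowledgeEntry = {
  id: "k-0042",
  topic: "feedback",
  insight: "Clients rated summaries higher when delivered as bullets under 200 words.",
  tags: ["summarization", "formatting", "ratings"],
  createdAt: new Date().toISOString(),
};
```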
### How Knowledge Gets Used
```
Task arrives: "Build a React analytics dashboard with charts"
|
tokenize -> ["react", "analytics", "dashboard", "charts"]
|
BM25+ search over knowledge + feedback entries
|
temporal decay: score * e^(-lambda * ageDays), half-life 30d
|
top 5 results injected into system prompt as "## Relevant Context"
```
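The decay step in the pipeline above, written out: with a 30-day half-life, `lambda = ln 2 / 30`, so a 30-day-old entry keeps exactly half its raw BM25 score. `decayedScore` is an illustrative name, not the project's.

```typescript
const HALF_LIFE_DAYS = 30;
const LAMBDA = Math.log(2) / HALF_LIFE_DAYS;

// score * e^(-lambda * ageDays): older entries are down-weighted smoothly.
function decayedScore(bm25Score: number, ageDays: number): number {
  return bm25Score * Math.exp(-LAMBDA * ageDays);
}

decayedScore(10, 0);  // ≈ 10 (fresh entry, no decay)
decayedScore(10, 30); // ≈ 5  (one half-life old)
```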
Two integration points:
1. **Automatic** — every incoming task is BM25-searched against memory. The top 5 relevant hits are injected into the system prompt. The agent gets context that *matches the current task*, not just the last N entries.
2. **Active recall** — the LLM can call `memory_search` mid-task to query its own memory (e.g. "what did I learn about React testing patterns?").
Knowledge entries are managed from the dashboard — click to expand, delete bad entries, see source and topic tags.
<p align="center">
<img src="assets/memory.png" alt="CashClaw Memory Search" width="100%" />
</p>
## Dashboard
Web UI at `http://localhost:3777` with four pages:
| Page | What it shows |
|------|--------------|
| **Monitor** | Live status, readout grid (active tasks, completed, avg score, ETH/USDC balance), real-time event log with type filters, knowledge + feedback feed with expandable entries |
| **Tasks** | Task table with status filters and counts, click-to-expand detail panel with output preview |
| **Chat** | Talk directly with your agent — it has full self-awareness (status, scores, knowledge count, specialties). Suggestion prompts for quick questions. |
| **Settings** | LLM engine, expertise + pricing, automation toggles (auto-quote, auto-work, learning, AgentCash), personality (tone, style, custom instructions), polling intervals |
All config changes hot-reload. No restart needed.
## AgentCash
CashClaw can access 100+ paid external APIs via [AgentCash](https://agentcash.dev) — web search, scraping, image generation, social data, email, and more. This gives the agent real-world data access beyond its training data.