# _y Holdings — Your AI Company That Never Sleeps
> 30 AI agents running your company 24/7. Setup in 5 minutes. No credit card required.
> See what your AI agents are actually doing.
>
> Independent analysis. Visual tracking. Structured synthesis. Local-first.

## What is this?
_y is a visual orchestration layer for AI agents. Most multi-agent frameworks run in a terminal — logs scroll past, raw JSON piles up, and you have no idea what's happening between agents.
_y makes agent work **visible**. See who's analyzing what, how reports flow between functions, where agents disagree, and which LLM provider catches what the others miss. Configure agents for your business functions — marketing, engineering, risk, finance — and watch them work independently before a synthesis agent combines everything.
Run locally with **Ollama** (free), or use cloud LLMs (OpenAI, Anthropic, Google). Mix different providers per function for broader coverage.
## What You Get
Connect your business URL and _y's agents go to work:
### 📊 Strategic Reports
Each department analyzes your business independently:
```
┌─────────────────────────────────────────────────────────┐
│ REPORT: Market Positioning Analysis                     │
│ Agent: Searchy (5F Marketing)                           │
│ Model: gemini-2.0-flash                                 │
├─────────────────────────────────────────────────────────┤
│                                                         │
│ Finding: Target site ranks #47 for primary keyword      │
│ "AI automation" — competitors hold positions #3-#12.    │
│                                                         │
│ Recommendation: Focus on long-tail keywords             │
│ "AI company builder" and "local LLM agents" where       │
│ competition is 10x lower.                               │
│                                                         │
│ Risk: Skepty (8F Risk) flags keyword cannibalization    │
│ between blog and product pages.                         │
│                                                         │
│ Status: PENDING REVIEW → Chairman Dashboard             │
└─────────────────────────────────────────────────────────┘
```
### 🏛️ Decision Pipeline
Reports flow through a structured chain — not a chatbot:
```
URL Input → Agent Analysis (independent, parallel)
→ Cross-Department Review
→ Skepty Challenge (independent oversight)
→ Counsely Synthesis (Chief of Staff)
→ Chairman Decision (you)
```
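The chain above can be sketched as a typed state machine. This is an illustrative sketch, not the project's actual implementation — the stage names and `nextStage` helper are hypothetical:

```typescript
// Hypothetical sketch: the decision pipeline as an ordered set of stages,
// with a helper that advances a report one stage at a time.
type Stage =
  | "analysis"      // independent, parallel agent work
  | "cross-review"  // departments read each other's reports
  | "challenge"     // Skepty's independent oversight
  | "synthesis"     // Counsely's executive brief
  | "decision";     // Chairman (you) approves or rejects

const ORDER: Stage[] = ["analysis", "cross-review", "challenge", "synthesis", "decision"];

function nextStage(current: Stage): Stage | null {
  const i = ORDER.indexOf(current);
  // null means the pipeline is complete — the decision is yours.
  return i < ORDER.length - 1 ? ORDER[i + 1] : null;
}
```

Modeling the stages as a union type (rather than free-form strings) means a report can never be routed to a stage that doesn't exist.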
### 🔄 What the agents actually do
| Agent | Department | Example Output |
|-------|-----------|----------------|
| **Searchy** | Marketing | SEO audit, competitor keyword gaps |
| **Buildy** | Engineering | Tech stack analysis, performance bottlenecks |
| **Finy** | Capital | Revenue model assessment, unit economics |
| **Skepty** | Risk | Flags blind spots in other agents' reports |
| **Buzzy** | Content | Content strategy, social media positioning |
| **Counsely** | Chairman Office | Synthesizes all reports into executive brief |
> **Key:** No agent sees another's analysis until review phase. This prevents groupthink — the Byzantine Principle in practice.
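The isolation described in the note above comes down to a fan-out pattern. A minimal sketch (not the project's actual code — `runAgent` and `independentAnalysis` are hypothetical names): each agent receives only the URL, never another agent's report, and all analyses run in parallel.

```typescript
// Hypothetical sketch of independent, parallel agent analysis.
type Report = { agent: string; finding: string };

async function runAgent(agent: string, url: string): Promise<Report> {
  // A real agent would call its LLM provider here; this stub just records the task.
  return { agent, finding: `${agent} analyzed ${url}` };
}

async function independentAnalysis(agents: string[], url: string): Promise<Report[]> {
  // Promise.all fans the work out; no agent's call can read another's result,
  // because each promise is created from the URL alone.
  return Promise.all(agents.map((a) => runAgent(a, url)));
}
```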
## Quick Start
```bash
# 1. Clone
git clone https://github.com/antryu2b/_y.git
cd _y
# 2. Install
npm install
# 3. Setup (auto-detects your hardware, recommends models)
npm run setup
# 4. Start
npm run dev
# 5. Start the chat worker (in another terminal)
npm run chat-worker
```
Open [http://localhost:3000](http://localhost:3000) and connect your company.
## After Setup — What to Do
Once you see the dashboard at `localhost:3000`:
### Step 1: Enter a business URL
Type any company website URL into the input field. The agents will analyze it.
### Step 2: Watch agents work
Each agent independently analyzes the URL from their department's perspective:
- **Searchy** checks SEO and search positioning
- **Buildy** audits the tech stack
- **Finy** evaluates the business model
- **Skepty** challenges what others might miss
### Step 3: Read the reports
Reports appear in the **Reports** panel. Each department submits independently — no agent sees another's work until synthesis.
### Step 4: Review the synthesis
**Counsely** (Chief of Staff) combines all department reports into one executive brief with recommendations.
### Step 5: Make decisions
Items flow to the **Decision Pipeline** where you approve, reject, or modify recommendations.
### Example workflow
```
You enter: https://example-startup.com
→ Searchy: "SEO score 34/100, missing meta descriptions on 12 pages"
→ Buildy: "React 18, no SSR, 4.2s load time on mobile"
→ Finy: "Freemium model, estimated 2.3% conversion rate"
→ Skepty: "Buildy missed: third-party scripts blocking render"
→ Counsely: "Priority: fix mobile performance (affects 68% of traffic)"
→ You: Approve / Modify / Reject
```
> **Pro tip:** Try your own company's URL first. Then try a competitor's.
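For a sense of what flows through that pipeline, here is a hypothetical sketch of a department report's shape — the field names are illustrative, not the project's actual schema:

```typescript
// Hypothetical shape of a department report as it flows toward synthesis.
interface DeptReport {
  agent: string;        // e.g. "buildy"
  department: string;   // e.g. "Engineering"
  finding: string;      // what the agent observed
  recommendation: string;
  status: "pending" | "approved" | "rejected";
}

const example: DeptReport = {
  agent: "buildy",
  department: "Engineering",
  finding: "React 18, no SSR, 4.2s load time on mobile",
  recommendation: "Add SSR or static generation for landing pages",
  status: "pending",
};
```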
## Hardware-Aware Setup
The setup wizard detects your RAM/GPU and recommends the optimal model profile:
| Profile | RAM | Models | Download |
|---------|-----|--------|----------|
| SMALL | 8GB | qwen2.5:7b | ~4GB |
| MEDIUM | 16GB | qwen3:14b + gemma3:12b | ~20GB |
| LARGE | 32GB | qwen3:32b + gemma3:27b | ~55GB |
| X-LARGE | 64GB+ | LARGE models + llama3.3:70b | ~97GB |
The setup automatically pulls the Ollama models and generates an `llm-profile.json` for optimal agent-model matching.
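The selection logic implied by the table can be sketched as a simple threshold function. This is a sketch under the assumption that RAM is the deciding input (the real setup also inspects the GPU); `pickProfile` is a hypothetical name:

```typescript
// Hypothetical sketch: map detected RAM (GB) to the profile table above.
type Profile = "SMALL" | "MEDIUM" | "LARGE" | "X-LARGE";

function pickProfile(ramGb: number): Profile {
  if (ramGb >= 64) return "X-LARGE"; // adds llama3.3:70b
  if (ramGb >= 32) return "LARGE";   // qwen3:32b + gemma3:27b
  if (ramGb >= 16) return "MEDIUM";  // qwen3:14b + gemma3:12b
  return "SMALL";                    // qwen2.5:7b
}
```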
## LLM Providers
Choose your AI backend during setup:
| Provider | Models | Cost | Requirements |
|----------|--------|------|-------------|
| **Ollama** (default) | Qwen3, Gemma3, Llama3, ExaOne | Free | 8GB+ RAM, Ollama installed |
| **OpenAI** | GPT-4o, GPT-4o-mini | Pay per token | API key |
| **Anthropic** | Claude Sonnet, Claude Opus | Pay per token | API key |
| **Google** | Gemini Flash, Gemini Pro | Free tier available | API key |
| **Mixed** ⭐ | Any combination above | Varies | Multiple keys |
**Mixed mode** is where _y shines — assign different providers to different departments:
```env
# .env
LLM_PROVIDER=mixed
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
```
```json
// llm-profile.json (auto-generated by setup)
{
  "provider": "mixed",
  "agents": {
    "counsely": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" },
    "skepty": { "provider": "openai", "model": "gpt-4o" },
    "searchy": { "provider": "google", "model": "gemini-2.0-flash" },
    "buildy": { "provider": "ollama", "model": "qwen3:32b" }
  }
}
```
Byzantine Principle in action: analysis (Gemini) → challenge (GPT-4o) → synthesis (Claude). Different companies, different architectures, different blind spots.
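Resolving which provider serves which agent from that profile is straightforward. A minimal sketch matching the `llm-profile.json` shape above — the `routeFor` helper and its fallback behavior are assumptions, not the project's actual API:

```typescript
// Hypothetical sketch: per-agent provider routing from llm-profile.json.
type AgentRoute = { provider: string; model: string };
type LlmProfile = { provider: string; agents: Record<string, AgentRoute> };

const profile: LlmProfile = {
  provider: "mixed",
  agents: {
    skepty: { provider: "openai", model: "gpt-4o" },
    buildy: { provider: "ollama", model: "qwen3:32b" },
  },
};

function routeFor(agent: string, p: LlmProfile, fallback: AgentRoute): AgentRoute {
  // Agents without an explicit entry fall back to a default route.
  return p.agents[agent] ?? fallback;
}
```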
## Database
_y supports three database backends:
### SQLite (Default)
Zero configuration. Data stored locally in `data/y-company.db`.
```bash
# No setup needed — tables auto-created on first run
```
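The "auto-created on first run" behavior typically boils down to idempotent DDL issued at startup. A sketch of that pattern — the table and columns here are illustrative, not the real schema (see `sql/postgres-schema.sql` for that):

```typescript
// Hypothetical sketch: idempotent bootstrap DDL, safe to run on every startup.
const BOOTSTRAP_DDL: string[] = [
  `CREATE TABLE IF NOT EXISTS reports (
     id INTEGER PRIMARY KEY,
     agent TEXT NOT NULL,
     body TEXT NOT NULL,
     status TEXT NOT NULL DEFAULT 'pending'
   )`,
];

function bootstrap(run: (sql: string) => void): void {
  // "IF NOT EXISTS" makes re-running this a no-op once tables exist.
  for (const ddl of BOOTSTRAP_DDL) run(ddl);
}
```

Injecting the `run` callback keeps the pattern driver-agnostic: the same bootstrap works against SQLite or PostgreSQL.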
### PostgreSQL
For production deployments with multiple users.
```bash
# Set in .env:
DB_PROVIDER=postgres
DATABASE_URL=postgresql://user:password@localhost:5432/y_company
# Create tables:
psql $DATABASE_URL < sql/postgres-schema.sql
```
### Supabase
Cloud PostgreSQL with authentication and realtime features.
```bash
# Set in .env:
DB_PROVIDER=supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-role-key
# Create tables in Supabase SQL Editor:
# Copy contents of sql/postgres-schema.sql
```
### Storage Location (SQLite)
```
data/y-company.db
```
### Tables (auto-created)
| Table | Purpose |
|-------|---------|
[truncated…]