# 🤖 Engineering Team - AI-Powered Software Development Crew

An autonomous AI engineering team built with [CrewAI](https://www.crewai.com/) that takes high-level requirements and automatically generates a complete, production-ready Python application including backend code, Gradio UI, and comprehensive unit tests.

## 🌟 What Does This Do?

Give this AI crew your software requirements in plain English, and it will:

1. **Design** a detailed software architecture
2. **Write** production-quality Python code
3. **Build** a Gradio web UI to demonstrate the functionality
4. **Test** the code with comprehensive unit tests

All outputs are saved to the `output/` directory, ready to run!

## 📋 Table of Contents

- [Architecture Overview](#architecture-overview)
- [CrewAI Concepts](#crewai-concepts)
- [Code Flow](#code-flow)
- [Installation](#installation)
- [Usage](#usage)
- [Example](#example)
- [Project Structure](#project-structure)
- [Configuration](#configuration)
- [Troubleshooting](#troubleshooting)
  - [Setting Up Podman on Windows](#setting-up-podman-on-windows-docker-desktop-alternative)
- [License](#license)

---

## 🏗️ Architecture Overview

This project implements a **Sequential Process Crew** with four specialized AI agents that work together to deliver a complete software solution.

```
Input (YAML) → Engineering Lead → Backend Engineer → Frontend Engineer → Test Engineer → Output
                     ↓                  ↓                    ↓                  ↓
                  Design.md          Code.py              app.py          test_code.py
```

### The Team

| Role | Responsibility | LLM |
|------|---------------|-----|
| **Engineering Lead** | Creates detailed software design from requirements | GPT-4o |
| **Backend Engineer** | Implements Python code following the design | Claude Sonnet 4 |
| **Frontend Engineer** | Builds Gradio UI to demonstrate the backend | Claude Sonnet 4 |
| **Test Engineer** | Writes comprehensive unit tests | Claude Sonnet 4 |

---

## 🧠 CrewAI Concepts

### What is CrewAI?

CrewAI is a framework for orchestrating role-playing, autonomous AI agents. Agents work together as a crew to accomplish complex tasks through collaboration.

### Key Concepts

#### 🎭 **Agents**
Autonomous AI entities with:
- **Role**: Their job title/function (e.g., "Backend Engineer")
- **Goal**: What they're trying to achieve
- **Backstory**: Context that shapes their behavior
- **Tools**: Capabilities they can use (e.g., CodeInterpreterTool)
- **LLM**: The language model powering them (GPT-4, Claude, etc.)

In this project, we have 4 agents defined in [`config/agents.yaml`](src/engineering_team/config/agents.yaml).
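For orientation, an agent entry in `agents.yaml` typically combines these fields. The sketch below follows CrewAI's YAML convention; the role text and wording are illustrative, not copied from this repo's config:

```yaml
backend_engineer:
  role: >
    Backend Engineer
  goal: >
    Implement clean, working Python code that follows the design exactly
  backstory: >
    A senior Python engineer who writes idiomatic, well-documented,
    production-quality code and follows specifications closely.
```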

#### 📋 **Tasks**
Specific assignments given to agents:
- **Description**: What needs to be done
- **Expected Output**: The deliverable format
- **Agent**: Who does the work
- **Context**: Dependencies on previous tasks
- **Output File**: Where to save results

Our tasks are defined in [`config/tasks.yaml`](src/engineering_team/config/tasks.yaml).
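A task entry in `tasks.yaml` brings these fields together. The sketch below is illustrative (field names follow CrewAI's convention; the description text is made up, while the `context` and `output_file` values mirror the flow described later in this README):

```yaml
code_task:
  description: >
    Implement the Python module described in the design document.
  expected_output: >
    A single Python module implementing the design.
  agent: backend_engineer
  context:
    - design_task
  output_file: output/{module_name}
```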

#### 🚢 **Crew**
A collection of agents working together:
- **Agents**: The team members
- **Tasks**: The work to be done
- **Process**: How tasks are executed (Sequential or Hierarchical)

#### 🔄 **Process Types**

1. **Sequential Process** (Used in this project)
   - Tasks execute one after another in a defined order
   - Each task can access outputs from previous tasks via `context`
   - Predictable, linear workflow
   - Example: Design → Code → UI → Tests

2. **Hierarchical Process** (Alternative)
   - A manager agent delegates and coordinates tasks
   - More dynamic task allocation
   - Agents can work in parallel
   - Best for complex projects with interdependencies

### Why Sequential for This Project?

Software development follows a natural sequence:
1. **Design** before coding
2. **Code** before building UI
3. **UI** depends on the backend
4. **Tests** need the final code

Each step builds on the previous, making sequential processing ideal.
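The sequence above can be pictured as a chain of plain Python functions, where each step consumes the previous step's output — a toy illustration of sequential context passing, not CrewAI code:

```python
# Toy illustration (plain Python, not CrewAI): each step receives the
# previous step's output, mirroring the crew's sequential context passing.
def design(requirements: str) -> str:
    return f"design({requirements})"

def implement(design_doc: str) -> str:
    return f"code({design_doc})"

def build_ui(code: str) -> str:
    return f"ui({code})"

def write_tests(code: str) -> str:
    return f"tests({code})"

def run_pipeline(requirements: str) -> dict:
    design_doc = design(requirements)   # Task 1: Design
    code = implement(design_doc)        # Task 2: Code (context: design)
    ui = build_ui(code)                 # Task 3: UI (context: code)
    tests = write_tests(code)           # Task 4: Tests (context: code)
    return {"design": design_doc, "code": code, "ui": ui, "tests": tests}
```

Note that the UI and tests both depend only on the code, which is why CrewAI's `context` lists for `frontend_task` and `test_task` each point at `code_task`.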

---

## 🔄 Code Flow

### 1. **Entry Point** ([`main.py`](src/engineering_team/main.py))

```python
def run():
    inputs = load_inputs("input.yaml")  # Load requirements
    result = EngineeringTeam().crew().kickoff(inputs=inputs)  # Start the crew
```

- Loads input requirements from `input.yaml`
- Creates output directory
- Initializes and kicks off the crew

### 2. **Crew Definition** ([`crew.py`](src/engineering_team/crew.py))

The `@CrewBase` decorator creates a structured crew:

```python
@CrewBase
class EngineeringTeam:
    agents_config = "config/agents.yaml"  # Load agent configs
    tasks_config = "config/tasks.yaml"    # Load task configs
    
    @agent
    def engineering_lead(self) -> Agent:
        # Creates the lead agent with LLM and config
        return Agent(config=self.agents_config["engineering_lead"], verbose=True)

    @task
    def design_task(self) -> Task:
        # Creates design task assigned to engineering_lead
        return Task(config=self.tasks_config["design_task"])
    
    @crew
    def crew(self) -> Crew:
        # Assembles agents and tasks into a sequential crew
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,  # Sequential execution
            verbose=True
        )
```

### 3. **Execution Flow**

#### Task 1: Design (`design_task`)
- **Agent**: Engineering Lead
- **Input**: Requirements from `input.yaml`
- **Output**: `output/{module_name}_design.md`
- **Action**: Creates detailed architecture and API design

#### Task 2: Implementation (`code_task`)
- **Agent**: Backend Engineer
- **Input**: Requirements + Design from Task 1 (`context: [design_task]`)
- **Output**: `output/{module_name}`
- **Tools**: CodeInterpreterTool with Docker/Podman
- **Action**: Writes production Python code

#### Task 3: Frontend (`frontend_task`)
- **Agent**: Frontend Engineer
- **Input**: Requirements + Code from Task 2 (`context: [code_task]`)
- **Output**: `output/app.py`
- **Action**: Creates Gradio UI for the backend

#### Task 4: Testing (`test_task`)
- **Agent**: Test Engineer
- **Input**: Requirements + Code from Task 2 (`context: [code_task]`)
- **Output**: `output/test_{module_name}`
- **Tools**: CodeInterpreterTool for running tests
- **Action**: Writes comprehensive unit tests
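Putting the four tasks together, a successful run should leave the `output/` directory looking roughly like this (file names follow the patterns listed above; `{module_name}` comes from `input.yaml`):

```
output/
├── {module_name}_design.md   # Task 1: design document
├── {module_name}             # Task 2: backend module
├── app.py                    # Task 3: Gradio UI
└── test_{module_name}        # Task 4: unit tests
```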

### 4. **Context Passing**

Tasks declare dependencies using `context`:

```yaml
code_task:
  context:
    - design_task  # Has access to design output
  
frontend_task:
  context:
    - code_task  # Has access to code output
```

This enables agents to build on previous work.

### 5. **Tool Usage**

#### CodeInterpreterTool
- Allows agents to execute Python code safely
- Uses Docker/Podman containers for isolation
- Configured with Podman pipe: `npipe:////./pipe/podman-machine-default`
- Max execution time: 500 seconds
- Max retries: 3

Only the Backend and Test engineers have code execution enabled for safety.
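The timeout-plus-retry behaviour described above can be pictured with a plain-Python stand-in. This is not CrewAI's implementation and offers no container isolation — it only illustrates the "500-second timeout, up to 3 attempts" execution loop using the standard library:

```python
import subprocess
import sys

def run_snippet(code: str, timeout: int = 500, retries: int = 3) -> str:
    """Run a code snippet in a subprocess, retrying on failure.

    A toy stand-in for a sandboxed execution loop: real isolation would
    come from a Docker/Podman container, not a local subprocess.
    """
    last_err: Exception | None = None
    for _ in range(retries):
        try:
            result = subprocess.run(
                [sys.executable, "-c", code],
                capture_output=True, text=True,
                timeout=timeout, check=True,
            )
            return result.stdout
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired) as err:
            last_err = err
    raise RuntimeError(f"Snippet failed after {retries} attempts") from last_err
```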

---

## 📦 Installation

### Prerequisites

- **Python**: 3.10, 3.11, or 3.12
- **Docker/Podman**: For safe code execution (optional but recommended)
  - Install [Docker Desktop](https://www.docker.com/products/docker-desktop/) (easiest), or
  - Install [Podman Desktop](https://podman-desktop.io/) (lightweight, free alternative)
  - **Windows Podman users**: See [Podman setup guide](#setting-up-podman-on-windows-docker-desktop-alternative) below
- **API Keys**: OpenAI and Anthropic

### Steps

1. **Clone the repository**

```bash
git clone <your-repo-url>
cd engineering_team
```

2. **Install dependencies**

Using `uv` (recommended):
```bash
uv pip install -e .
```

Or using `pip`:
```bash
pip install -e .
```

3. **Set up environment variables**

Copy the example file:
```bash
cp .env.example .env
```

Edit `.env` and add your API keys:
```
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
```

4. **Configure Docker/Podman** (Optional but Recommended)

For code execution safety, install either:
- **Docker Desktop** (easiest)
- **Podman Desktop** (lightweight alternative)

**Using Podman on Windows?** See the [complete Podman setup guide](#setting-up-podman-on-windows-docker-desktop-alternative) below.

[truncated…]
