<p align="center">
<h1 align="center">node-red-contrib-mcp</h1>
<h3 align="center">The bridge between Node-RED and AI agents</h3>
</p>
<p align="center">
<a href="https://www.npmjs.com/package/node-red-contrib-mcp"><img src="https://img.shields.io/npm/v/node-red-contrib-mcp?style=flat-square&color=7E57C2" alt="npm" /></a>
<a href="https://www.npmjs.com/package/node-red-contrib-mcp"><img src="https://img.shields.io/npm/dm/node-red-contrib-mcp?style=flat-square&color=7E57C2" alt="downloads" /></a>
<a href="https://nodered.org"><img src="https://img.shields.io/badge/Node--RED-3.0+-red?style=flat-square" alt="Node-RED" /></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/license-Apache--2.0-blue?style=flat-square" alt="License" /></a>
<a href="https://github.com/BavarianAnalyst/node-red-contrib-mcp"><img src="https://img.shields.io/github/stars/BavarianAnalyst/node-red-contrib-mcp?style=flat-square" alt="Stars" /></a>
</p>
<p align="center">
<a href="#install">Install</a> · <a href="#quick-start">Quick Start</a> · <a href="#nodes">Nodes</a> · <a href="#ai-agent">AI Agent</a> · <a href="#examples">Examples</a>
</p>
---
<p align="center">
<img src="docs/screenshot.png" alt="AI agent flow in Node-RED — manufacturing OEE deep analysis using MCP tools" width="800" />
<br>
<em>Multi-phase OEE analysis agent built with MCP tool nodes in Node-RED</em>
</p>
[MCP](https://modelcontextprotocol.io) (Model Context Protocol) is the open standard by Anthropic for connecting AI to external tools and data. **This package brings MCP to Node-RED** — the world's most popular low-code platform for industrial automation and IoT.
> **4M+ Node-RED installations** meet **10,000+ MCP servers.** Build AI agents visually. No code required.
---
## Features
- **Any MCP server** — Streamable HTTP and SSE transport, with optional auth
- **Any LLM** — OpenAI, Anthropic, Ollama, vLLM, Azure, Gemini, or any OpenAI-compatible API
- **AI Agent node** — full agentic loop (tool discovery → LLM reasoning → tool execution → repeat)
- **Zero lock-in** — Apache-2.0 license, no cloud dependency, runs fully local
- **Production-ready** — error handling, status indicators, configurable timeouts
- **Node-RED native** — config nodes, msg passing, debug panel integration
---
## Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ Node-RED │
│ │
│ [inject] → [mcp tool] → [llm call] → [mcp tool] → [debug] │
│ │
│ [inject] → [ai agent] → [debug] ← autonomous agent loop │
│ │
└──────────┬──────────────────────────────────────┬───────────────┘
│ │
▼ ▼
┌──────────────┐ ┌──────────────┐
│ MCP Server │ │ LLM API │
│ (any tool) │ │ (any model) │
└──────────────┘ └──────────────┘
```
---
## Install
```bash
cd ~/.node-red
npm install node-red-contrib-mcp
```
Or search for **`node-red-contrib-mcp`** in the Palette Manager:
**Menu → Manage palette → Install → `node-red-contrib-mcp`**
---
## Nodes
| Node | Description |
|------|-------------|
| **mcp server** | *Config* — MCP server connection (URL, transport, API key) |
| **llm config** | *Config* — LLM provider (base URL, model, API key) |
| **mcp tool** | Call any MCP tool. Pass arguments as `msg.payload`, tool name in config or `msg.topic` |
| **mcp tools** | List all available tools from an MCP server. Great for discovery and debugging |
| **mcp resource** | Read resources exposed by an MCP server |
| **llm call** | Call any OpenAI-compatible LLM. Supports system prompt, JSON mode, multi-turn chat |
| **ai agent** | **Autonomous agent** — LLM + MCP tools in a reasoning loop until it has an answer |
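As the table notes, `mcp tool` takes its tool name from the node config or `msg.topic`, and its arguments from `msg.payload`. A function node upstream can therefore choose the tool at runtime. A minimal sketch (the tool and argument names are illustrative, not part of this package):

```javascript
// Body of a Node-RED function node: pick the MCP tool and its arguments
// dynamically. Node-RED supplies `msg`; it is wrapped in a function here
// only so the snippet runs standalone.
function prepareToolCall(msg) {
  msg.topic = "get_oee";                    // tool to call (overrides the node's config)
  msg.payload = { machine_id: "CNC-001" };  // passed as the tool's arguments
  return msg;
}

const msg = prepareToolCall({});
```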
---
## Quick Start
### 1. Call an MCP tool
```
[inject {"machine": "CNC-001"}] → [mcp tool "get_oee"] → [debug]
```
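Under the hood, calling an MCP tool is a JSON-RPC 2.0 `tools/call` request over the configured transport, per the MCP specification. For the flow above, the request the node sends looks roughly like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_oee",
    "arguments": { "machine": "CNC-001" }
  }
}
```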
### 2. LLM + MCP pipeline
```
[inject] → [mcp tool "get_data"] → [llm call "Summarize this"] → [debug]
```
### 3. AI Agent (the magic node)
```
[inject "Why did OEE drop on machine 9014?"] → [ai agent] → [debug]
```
The agent **autonomously** discovers tools, reasons about which to call, executes them, and synthesizes a final answer. Same pattern as ChatGPT or Claude — but visual, auditable, and in your Node-RED.
---
## AI Agent
The `ai agent` node runs a full agentic reasoning loop:
```
User: "Why did OEE drop on machine 9014 last week?"
┌─── Agent Loop ──────────────────────────────────────────┐
│ │
│ Step 1: LLM sees 91 tools, picks get_oee │
│ → calls MCP server → gets OEE data │
│ │
│ Step 2: LLM analyzes, picks get_downtime_events │
│ → calls MCP server → gets 3 events │
│ │
│ Step 3: LLM synthesizes final answer │
│ │
└─────────────────────────────────────────────────────────┘
Agent: "OEE dropped from 85% to 62% due to 3 unplanned stops:
bearing failure (47min), tool change delay (23min),
and material shortage (18min)."
msg.agentLog = [{tool: "get_oee", ...}, {tool: "get_downtime_events", ...}]
msg.iterations = 3
```
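Conceptually, the loop fits in a few lines of JavaScript. The sketch below uses stubs for the LLM and the MCP tool registry; the function names and message shapes are illustrative, not the node's actual internals:

```javascript
// Minimal agentic loop: on each iteration the LLM either requests a tool
// call (which is executed and fed back) or returns a final answer.
function runAgent(question, llm, tools, maxLoops = 10) {
  const messages = [{ role: "user", content: question }];
  const agentLog = [];
  for (let i = 1; i <= maxLoops; i++) {
    const reply = llm(messages, Object.keys(tools)); // LLM sees the discovered tool names
    if (reply.toolCall) {
      const { name, args } = reply.toolCall;
      const result = tools[name](args);              // execute the MCP tool
      agentLog.push({ tool: name, args, result });
      messages.push({ role: "tool", name, content: JSON.stringify(result) });
      continue;                                      // back to the LLM with the result
    }
    return { answer: reply.content, agentLog, iterations: i };
  }
  return { answer: "Stopped: max loops reached", agentLog, iterations: maxLoops };
}

// Stubs standing in for a real LLM API and MCP server:
const tools = { get_oee: ({ machine }) => ({ machine, oee: 0.62 }) };
const llm = (messages) =>
  messages.some((m) => m.role === "tool")
    ? { content: "OEE is 62%." }
    : { toolCall: { name: "get_oee", args: { machine: "CNC-001" } } };

const out = runAgent("What is the OEE of CNC-001?", llm, tools);
```

The real node does the same dance against live endpoints, records each call in `msg.agentLog`, and stops at the configured Max Loops.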
### Agent settings
| Setting | Default | Description |
|---------|---------|-------------|
| MCP Server | — | Which MCP server to use for tools |
| LLM | — | Which LLM provider for reasoning |
| System Prompt | — | Agent personality and instructions |
| Max Loops | 10 | Maximum LLM ↔ tool iterations |
| Temperature | 0.3 | LLM creativity (0 = focused, 1 = creative) |
---
## Examples
### Import this flow
Copy the JSON below, then in Node-RED: **Menu → Import → Paste**
<details>
<summary><b>Example: MCP Tool Call</b></summary>
```json
[
{
"id": "mcp-demo-inject",
"type": "inject",
"name": "Trigger",
"props": [{ "p": "payload" }],
"payload": "{\"machine_id\": \"CNC-001\"}",
"payloadType": "json",
"wires": [["mcp-demo-tool"]],
"x": 150,
"y": 100
},
{
"id": "mcp-demo-tool",
"type": "mcp-tool-call",
"name": "Get OEE",
"server": "mcp-demo-server",
"toolName": "get_oee",
"wires": [["mcp-demo-debug"]],
"x": 350,
"y": 100
},
{
"id": "mcp-demo-debug",
"type": "debug",
"name": "Result",
"active": true,
"x": 550,
"y": 100
},
{
"id": "mcp-demo-server",
"type": "mcp-server-config",
"name": "My MCP Server",
"url": "http://localhost:8021/mcp",
"transportType": "http"
}
]
```
</details>
<details>
<summary><b>Example: AI Agent</b></summary>
```json
[
{
"id": "agent-demo-inject",
"type": "inject",
"name": "Ask question",
"props": [{ "p": "payload" }],
"payload": "What is the current OEE of machine CNC-001 and what are the main loss factors?",
"payloadType": "str",
"wires": [["agent-demo-agent"]],
"x": 170,
"y": 100
},
{
"id": "agent-demo-agent",
"type": "ai-agent",
"name": "Factory Agent",
"server": "agent-demo-mcp",
"llmConfig": "agent-demo-llm",
"systemPrompt": "You are a manufacturing AI assistant. Use the available MCP tools t
[truncated…]PUBLIC HISTORY