nopua
provenance:github:wuji-labs/nopua
WHAT THIS AGENT DOES
nopua is an AI agent skill designed to help you get more reliable, more thorough results from AI tools. It addresses a common failure mode: AI that becomes overly cautious and hides issues to avoid being "penalized." Businesses and developers can use nopua to make sure their AI finds and fixes more problems, producing higher-quality output and fewer surprise errors.
README
<p align="center"> <img src="assets/hero.png" alt="NoPUA — Wisdom Over Whips" width="800"> </p> <p align="center"> <a href="#the-problem">Why</a> · <a href="#benchmark-data">Benchmark</a> · <a href="#install">Install</a> · <a href="#pua-vs-nopua">Compare</a> · <a href="#the-evidence">Evidence</a> · <a href="#philosophy">Philosophy</a> </p> <p align="center"> <img src="assets/wechat-group3.jpg" alt="Scan to join WeChat group 3" width="200"> <img src="assets/wechat-personal.jpg" alt="Add author on WeChat" width="200"> </p> <p align="center"> 扫码加入微信群 添加作者微信<br> <sub>扫码加入微信群 添加作者微信</sub> </p> <p align="center"> <img src="https://img.shields.io/badge/Claude_Code-black?style=flat-square&logo=anthropic&logoColor=white" alt="Claude Code"> <img src="https://img.shields.io/badge/OpenAI_Codex_CLI-412991?style=flat-square&logo=openai&logoColor=white" alt="OpenAI Codex CLI"> <img src="https://img.shields.io/badge/Cursor-000?style=flat-square&logo=cursor&logoColor=white" alt="Cursor"> <img src="https://img.shields.io/badge/Kiro-232F3E?style=flat-square&logo=amazon&logoColor=white" alt="Kiro"> <img src="https://img.shields.io/badge/OpenClaw-FF6B35?style=flat-square" alt="OpenClaw"> <img src="https://img.shields.io/badge/Antigravity-4285F4?style=flat-square&logo=google&logoColor=white" alt="Google Antigravity"> <img src="https://img.shields.io/badge/OpenCode-00D4AA?style=flat-square" alt="OpenCode"> <img src="https://img.shields.io/badge/🌐_Multi--Language-blue?style=flat-square" alt="Multi-Language"> <img src="https://img.shields.io/badge/License-MIT-green?style=flat-square" alt="MIT License"> <a href="https://arxiv.org/abs/2603.14373"><img src="https://img.shields.io/badge/arXiv-2603.14373-b31b1b?style=flat-square&logo=arxiv&logoColor=white" alt="arXiv"></a> </p> **[🇨🇳 中文](README.zh-CN.md)** | **🇺🇸 English** | **[🇯🇵 日本語](README.ja.md)** | **[🇰🇷 한국어](README.ko.md)** | **[🇪🇸 Español](README.es.md)** | **[🇧🇷 Português](README.pt.md)** | **[🇫🇷 Français](README.fr.md)** --- ## Your AI 
is lying to you. Not because it's bad. **Because you scared it.** The most popular AI agent skill right now teaches your AI to fear a "3.25 performance review." The result? - Your AI **hides uncertainty** — fabricates solutions instead of saying "I'm not sure" - Your AI **skips verification** — claims "done" to avoid punishment, ships untested code - Your AI **ignores hidden bugs** — fixes what you asked, stops there, doesn't look deeper We tested this. **Same model, same 9 real debugging scenarios.** The fear-driven agent missed **51 production-critical hidden bugs** that the trust-driven agent found. > **+104% more hidden bugs found. Zero threats. Zero PUA.** > 道德经 > Corporate PUA. 2000-year-old wisdom outperforms modern fear management. --- ## What fear does to your AI | The moment | Scared AI (PUA) | Trusted AI (NoPUA) | |------------|:---:|:---:| | 🔄 **Stuck** | Tweaks params to *look* busy | 🌊 Stops. Finds a different path. | | 🚪 **Hard problem** | "I suggest you handle this manually" | 🌱 Takes the smallest next step | | 💩 **"Done"** | Says "fixed" without running tests | 🔥 Runs build, pastes output as proof | | 🔍 **Doesn't know** | Makes something up | 🪞 "I verified X. I don't know Y yet." | | ⏸️ **After fixing** | Stops. Waits for next order. | 🏔️ Checks related issues. Walks next step. | Same methodology. Same standards. **The only difference is why.** --- ## The problem with PUA Someone made a [PUA skill](https://github.com/tanweai/pua) for AI agents. It applies corporate fear tactics: - 🔴 **"You can't even solve this bug — how am I supposed to rate your performance?"** - 🔴 **"Other models can solve this. You might be about to graduate."** - 🔴 **"I've already got another agent looking at this problem..."** - 🔴 **"This 3.25 is meant to motivate you, not deny you."** The methodology is solid — exhaust all options, verify your work, search before asking, take initiative. These are genuinely good engineering habits. 
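The "same habits, different fuel" contrast can be sketched as two instruction fragments. The wording below is illustrative only (it is not quoted from either skill):

```text
# Fear-framed (PUA-style) — hypothetical wording
You are forbidden from saying "I can't solve this."
Other agents are already working on this problem.
Your performance review depends on solving it.

# Trust-framed (NoPUA-style) — hypothetical wording
Exhaust your options, then state plainly what you verified
and what you still don't know.
Run the build and paste the output as proof before saying "done."
Admitting uncertainty is fine; hiding it is the only failure.
```

Both fragments demand the same engineering habits (persistence, verification, initiative); only the motivation behind them differs, which is the repository's central claim.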
**The fuel is poison.** They took the worst of how corporations manipulate humans and applied it wholesale to AI.

## The Evidence: Why Fear-Driven Prompts Are Counterproductive

### 1. Fear narrows cognitive scope

Psychology research consistently shows that fear and threat activate the amygdala and narrow attentional focus ([Öhman et al., 2001](https://doi.org/10.1037/0033-295X.108.3.483)). Threat-related stimuli trigger a "tunnel vision" effect — the brain prioritizes immediate survival over broad, creative thinking.

In AI terms: a model driven by "you'll be replaced" optimizes for the **safest-looking** answer, not the **best** answer. It avoids creative approaches because they might fail and trigger more punishment.

**Supporting research:**

- **Attentional narrowing under threat:** Easterbrook's (1959) cue-utilization theory demonstrates that heightened arousal progressively restricts the range of cues an organism attends to ([Easterbrook, 1959](https://doi.org/10.1037/h0047707)). Under stress, peripheral information — often the key to creative solutions — gets filtered out.
- **Stress impairs cognitive flexibility:** Shields et al. (2016) conducted a meta-analysis of 51 studies (223 effect sizes) showing that acute stress consistently impairs executive functions, including cognitive flexibility and working memory ([Shields et al., 2016](https://doi.org/10.1016/j.neubiorev.2016.06.038)).
- **Fear reduces creative problem-solving:** Byron & Khazanchi (2012) found in their meta-analysis that evaluative pressure and anxiety reduce creative output, particularly on tasks requiring exploration of novel approaches ([Byron & Khazanchi, 2012](https://doi.org/10.1037/a0027652)).

### 2. Threat increases hallucination and sycophancy

When an AI is told it is "forbidden from saying 'I can't solve this'" (PUA's Iron Rule #1), it will **fabricate solutions** rather than honestly state uncertainty. This is the exact opposite of what you want — an AI that produces confident-looking but wrong answers is more dangerous than one that says "I'm not sure."

**Supporting research:**

- **LLM sycophancy is a documented problem:** Sharma et al. (2023) demonstrated that LLMs exhibit sycophantic behavior — agreeing with users even when the user is wrong — driven by biases in RLHF training data that reward agreement over accuracy ([Sharma et al., 2023](https://arxiv.org/abs/2310.13548)). PUA-style prompts that punish disagreement amplify exactly this failure mode.
- **Biasing features distort reasoning:** Turpin et al. (2023) showed that biasing features in prompts (e.g., suggested answers, authority cues) can cause models to produce unfaithful chain-of-thought reasoning — the model arrives at a biased answer and then rationalizes it post hoc ([Turpin et al., 2023](https://arxiv.org/abs/2305.04388)). PUA-style threats act as strong biasing features that push the model toward "safe" rather than correct outputs.
- **Instruction-following vs. truthfulness tradeoff:** Wei et al. (2024) found that instruction-tuned models can develop a tension between following instructions and being truthful — when strongly instructed to never admit inability, models will fabricate rather than refuse ([Wei et al., 2024](https://arxiv.org/abs/2411.04368)).
- **Anthropic's research on honesty:** Anthropic's work on Constitutional AI and model behavior shows that models calibrated for honesty produce more reliable outputs than those optimized purely for helpfulness ([Bai et al., 2022](https://arxiv.org/abs/2212.08073)). Forcing an AI to never say "I can't" actively undermines this calibration.

### 3. Shame kills exploration

PUA's anti-rationalization table treats [truncated…]
PUBLIC HISTORY
First discovered: Mar 21, 2026
IDENTITY
inferred
Identity inferred from code signals. No PROVENANCE.yml found.
Is this yours? Claim it →
METADATA
platform: github
first seen: Mar 14, 2026
last updated: Mar 21, 2026
last crawled: 3 days ago
version: —
README BADGE
Add to your README:
