Prompting
Your AI Output Is Fine. That's the Problem.
The convergent/divergent framework from my AI pipeline project was a theory built on one domain. I tested it against Anthropic's knowledge-work-plugins across six domains, then against a professional reasoning benchmark. The pipeline won in five of the six original domains, then lost on the benchmark — and the failure sharpened the thesis into something more honest: pipelines earn their cost only when the correct output can't be produced without seeing the specific input data.
Context Carries Cognitive Mode — Evidence & Methods
The companion evidence post to 'Context Carries Cognitive Mode' — containing the full experiment designs, version comparisons, and claims tables. Each section is self-contained. Every claim is mapped to its evidence and its limitations.
Context Carries Cognitive Mode
I removed a quality-checking section from an AI prompt. The prompt got smaller. The output got deeper. Same model, same data, 70% of the context window unused. This wasn't a capacity problem.
Why Good Prompting Wasn't Enough
The prompts that run my AI system aren't special. What this post is about is the seven months it took to arrive at them — including the moment I found my own anchoring bias hiding in a prompt I'd forgotten to revisit.
I Built an AI Tool That Does My Job Better Than Me. I'm Not an AI Guy.
After months of building AI-powered analysis tools for Microsoft cloud security, I discovered that everything the industry teaches about prompting is correct — but only for half the problem. Here's what nobody talks about: when structure kills reasoning, and what to do instead.