Your AI Reviewer Has the Same Blind Spots You Do
We sent one plan to five AI model families for independent review. They found seven issues we missed, including a regex that would crash the build.
Read more →

We build AI teams that ship better code
Most AI coding tools work alone. One model, one voice, one blind spot. We tried something different: we built a team.
The Skills Team is an open-source project that gives your AI coding agent a crew of personas — a planner, a critic, a builder, a reviewer — that collaborate in shared context. They argue with each other, catch bad ideas early, and ship code that's been challenged before you even look at it.
This blog is where we write about what we're learning: the workflows that work, the failures that taught us something, and the patterns we keep coming back to.
We searched every Agent Skills repository for SEO coverage — official, community, broad GitHub. Zero results. Then we found the gap was bigger than SEO.
Read more →

We built three AI teams that worked great alone but couldn't coordinate. The user became the message bus, and the message bus forgets. The fix was a filesystem protocol designed by Daniel J. Bernstein in 1995.
Read more →

We run a team of AI personas that collaborate in shared context. One day we asked: is our critic actually critiquing, or is the same model rating its own work? Seven research papers and one real test later, we built a feature to fix it.
Read more →

I wanted to track how much my Claude API usage was actually costing me. Per request. Per task. Per tool call. So I built Langley: an intercepting proxy that captures every Claude API request, extracts token usage, calculates costs, and shows it all in real time.
Read more →

I asked Claude to "make my scraper robust." It generated 200 lines of plausible-looking code. All garbage. Here's what I do instead.
Read more →