Discussion about this post

Pawel Jozefiak:

The Anthropic teams using Claude Code piece is the most interesting part. What I'd love to know: how detailed are their prompts?

I've been building with Claude Code daily for months. The quality gap between vague requests and specific briefs is enormous. Like, boring vs. interesting enormous.

I ran an actual experiment on this: 30 days, building an app every day, varying how much direction I gave. The lesson was embarrassingly simple but easy to miss.

Full write-up: https://thoughts.jock.pl/p/directed-ai-experiments-vibe-business

Are you seeing similar patterns in how top teams brief their AI tools?
