# Content Scoring — Week of 2026-05-06

## Pieces Scored
| Piece | Score (/10) | Top Strengths | Weakest Dimension |
|---|---|---|---|
| LinkedIn — The AI Infrastructure Gap | 8.2 | Specificity / Practitioner credibility | Emotional resonance |
| Long-form — The AI Infrastructure Gap | 8.1 | Practitioner credibility / Platform fit | Emotional resonance |
| X Thread — The AI Infrastructure Gap | 7.8 | Stop-scrolling / Value density / Voice | Shareability |
| Long-form — 4-Question Data Readiness Test | 8.4 | Specificity / Platform fit | Practitioner credibility |
| X Thread — 4-Question Data Readiness Test | 8.3 | Value density | Voice authenticity |
## Average
8.2/10, up 0.2 from the 2026-04-29 baseline average of 8.0/10.
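For auditability, the weekly average and week-over-week delta can be recomputed directly from the per-piece scores in the table above. This is a minimal sketch; the dictionary keys simply restate the table rows and the baseline value comes from the 2026-04-29 figure cited here.

```python
# Recompute the weekly average from the "Pieces Scored" table.
scores = {
    "LinkedIn — The AI Infrastructure Gap": 8.2,
    "Long-form — The AI Infrastructure Gap": 8.1,
    "X Thread — The AI Infrastructure Gap": 7.8,
    "Long-form — 4-Question Data Readiness Test": 8.4,
    "X Thread — 4-Question Data Readiness Test": 8.3,
}
baseline = 8.0  # 2026-04-29 weekly average

average = round(sum(scores.values()) / len(scores), 1)
delta = round(average - baseline, 1)
print(f"average={average}/10, delta={delta:+.1f} vs baseline")
# → average=8.2/10, delta=+0.2 vs baseline
```

Rounding to one decimal matches the precision used in the table, so the reported 8.2/10 is the rounded mean of 8.16.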
## Learnings
- Diagnostic frameworks outperform generic thought leadership. The 4-question readiness test scores highest because it lets the reader self-assess immediately.
- Operational specificity remains the content moat. Concrete items like “project-level P&L by 9 AM Monday,” CRM-vs-ERP customer mismatch, 7 agents, 50+ MCP servers, and quality gates keep credibility high.
- The next bottleneck is spread, not quality. The pieces inform well, but several need sharper emotional stakes, contrarian tension, or taggable lines to improve shareability.