Where do the 9 manually curated finalists land in the AI's blind scoring? This tool visualizes the gap between human curation and algorithmic evaluation across all 76 Pass 2 survivors.
All 76 Pass 2 survivors ranked by AI score. Manual finalists are highlighted — notice how they cluster in the lower half.
For each of the 4 finalist families, the manual pick vs the AI's top choice from the same family. Per-criterion bars show where the scores diverge.
Average per-criterion scores: manual finalists vs AI top 8. The widest gaps reveal what the AI prioritizes differently.
What explains the divergence between human and AI ranking?
The AI scoring pipeline emphasizes structural simplicity and scalability — marks that survive at 16px favicon, fill a squircle icon frame, and maintain legibility without fine detail. This is a technical evaluation of mark fitness.
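A minimal sketch of what a criterion-weighted composite score like this could look like. The criterion names and weights below are illustrative assumptions, not the pipeline's actual configuration; the point is only that structural criteria carry the most weight.

```ts
// Hypothetical composite score: criteria and weights are assumptions
// for illustration, not the real pipeline configuration.
type CriterionScores = Record<string, number>; // each criterion scored 0-10

const WEIGHTS: Record<string, number> = {
  scalability: 0.3, // survives at 16px favicon, fills a squircle icon frame
  simplicity: 0.25, // legible without fine detail
  distinctiveness: 0.2,
  versatility: 0.15,
  relevance: 0.1,
};

function compositeScore(scores: CriterionScores): number {
  // Weighted sum of per-criterion scores; missing criteria count as 0.
  return Object.entries(WEIGHTS).reduce(
    (total, [criterion, weight]) => total + weight * (scores[criterion] ?? 0),
    0,
  );
}
```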
Human curation valued brand narrative and emotional resonance — the story a mark tells about Kurnik as an incubator, the tension between precision and emergence, the feeling of "infrastructure-grade but warm." This is a conceptual evaluation of brand fit.
The biggest gaps are in scalability (AI avg 8.0 vs manual avg 5.5) and distinctiveness (AI avg 7.6 vs manual avg 5.7). The manual picks tend to be more conceptually ambitious marks that carry richer narratives but sacrifice small-size legibility for visual complexity.
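For reference, a sketch of the per-criterion gap computation behind those averages: average each criterion within a group, then subtract the manual-finalist average from the AI-top-8 average. The `Mark` shape and group contents are assumptions for illustration.

```ts
// Assumed shape of a scored mark; criterion names mirror the charts above.
interface Mark {
  id: string;
  scores: Record<string, number>;
}

// Mean score per criterion across a group of marks.
function averageByCriterion(group: Mark[]): Record<string, number> {
  const sums: Record<string, number> = {};
  for (const mark of group) {
    for (const [criterion, score] of Object.entries(mark.scores)) {
      sums[criterion] = (sums[criterion] ?? 0) + score;
    }
  }
  return Object.fromEntries(
    Object.entries(sums).map(([criterion, sum]) => [criterion, sum / group.length]),
  );
}

// Gap per criterion: AI top-8 average minus manual-finalist average,
// e.g. scalability 8.0 - 5.5 = 2.5.
function criterionGaps(aiTop8: Mark[], manualFinalists: Mark[]): Record<string, number> {
  const ai = averageByCriterion(aiTop8);
  const manual = averageByCriterion(manualFinalists);
  return Object.fromEntries(
    Object.keys(ai).map((criterion) => [criterion, ai[criterion] - (manual[criterion] ?? 0)]),
  );
}
```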
Neither lens is wrong. The best mark for Kurnik likely lives at the intersection: structurally sound enough to work at every size, but narratively rich enough to feel like more than geometry. H1519-R6-03, the highest-ranked finalist at #23, comes closest to this balance.