The honest question here is whether pattern-matching against past YC companies is predictive or just comforting. YC's own partners have said repeatedly that they fund ideas they haven't seen before. A tool that scores your application by its similarity to previous admits has a built-in contradiction: the more your idea looks like something YC already funded, the less likely they are to fund another one in the same space. The highest-scoring ideas on this tool might actually be the worst applications for the current batch.
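To make that contradiction concrete, here's a minimal sketch of what such a scorer presumably computes - nearest-neighbour cosine similarity over embedded application text. Everything in it is assumed for illustration (the random vectors stand in for real embeddings; I have no idea how the actual tool works):

```python
import numpy as np

rng = np.random.default_rng(0)
admit_vecs = [rng.standard_normal(8) for _ in range(5)]  # stand-ins for embedded past-admit blurbs
app_vec = admit_vecs[0] + 0.1 * rng.standard_normal(8)   # an application that closely mirrors one admit

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_score(app, admits):
    # What the tool presumably rewards: closeness to the nearest past admit.
    return max(cosine(app, v) for v in admits)

def crowding_penalty(app, admits):
    # The same number read the other way: "YC already funded this space".
    return 1.0 - similarity_score(app, admits)

print(f"similarity: {similarity_score(app_vec, admit_vecs):.2f}")  # high -> tool calls it strong
print(f"crowding:   {crowding_penalty(app_vec, admit_vecs):.2f}")  # low  -> but the space is taken
```

The same statistic supports both readings. A scorer has to pick one, and "similar to past admits = good" is exactly the reading the paragraph above argues is backwards.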
The rubric side has a related problem. PG's essays and Startup School content are already the most widely-read startup advice on the internet; every serious YC applicant has already optimised for those criteria. Extracting that advice into a scoring rubric doesn't surface hidden signal - it just quantifies table stakes. The things that actually differentiate successful applications (founder-market fit, timing, contrarian insight) are exactly the things that resist pattern extraction from historical data.
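One way to see why quantified table stakes can't rank anyone: a criterion every serious applicant maxes out has near-zero variance across the pool, so it contributes nothing to ordering candidates. A toy illustration, with invented criteria names and numbers:

```python
import statistics

# Invented rubric scores for three serious applicants, all of whom have read
# the same essays and optimised for the same publicly known criteria.
applicants = [
    {"clear_problem": 1.0, "launched_something": 1.0, "weekly_growth": 0.9},
    {"clear_problem": 1.0, "launched_something": 0.9, "weekly_growth": 1.0},
    {"clear_problem": 0.9, "launched_something": 1.0, "weekly_growth": 1.0},
]

for criterion in applicants[0]:
    scores = [a[criterion] for a in applicants]
    # Near-zero spread means the criterion can't separate applicants,
    # no matter how faithfully it was extracted from the public advice.
    print(f"{criterion}: stdev = {statistics.pstdev(scores):.3f}")
```

The differentiators named above (founder-market fit, timing, contrarian insight) are exactly the columns this table doesn't have.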
There's also a market structure question worth thinking about. Your target users are YC applicants: a seasonal cohort that peaks twice a year around batch deadlines. That's a narrow demand window with a hard ceiling on willingness to pay (pre-funding founders watching every dollar). The tool that would actually command pricing power is one aimed at VCs doing deal screening, not applicants doing self-assessment. Same underlying tech, completely different buyer with a completely different budget.