
Normalizing the Chaos: Building the Core Schema

Most SaaS directories are just affiliate link farms. We’re building something different.

ComparEdge started with a massive, fragmented dataset of 331 vendors. As a Data Analyst, my first challenge wasn't the UI—it was the data normalization layer. How do you compare a seat-based CRM with a credit-based LLM provider without losing the nuance of the unit economics?
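One way to frame that normalization problem: collapse every pricing model into a common unit (USD per month) once expected usage is estimated. Here is a minimal sketch of that idea; the class names, weights, and example figures are hypothetical, not ComparEdge's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class PricingModel(Enum):
    SEAT = "seat"      # per-user-per-month (typical CRM)
    CREDIT = "credit"  # prepaid or metered units (typical LLM provider)

@dataclass
class Vendor:
    name: str
    model: PricingModel
    unit_price: float    # USD per seat-month, or USD per credit
    units_needed: float  # seats for a team, or credits consumed per month

    def monthly_cost(self) -> float:
        # Both models collapse to USD/month once usage is estimated,
        # which is what makes cross-model comparison possible.
        return self.unit_price * self.units_needed

crm = Vendor("ExampleCRM", PricingModel.SEAT, unit_price=49.0, units_needed=10)
llm = Vendor("ExampleLLM", PricingModel.CREDIT, unit_price=0.002, units_needed=500_000)

print(crm.monthly_cost())  # 490.0
print(llm.monthly_cost())  # 1000.0
```

The nuance lost in this flattening (seat costs are stable, credit costs scale with usage) is exactly why the usage estimate has to be an explicit input rather than baked into the schema.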

We’ve spent the last few months brute-forcing this. Here’s where we stand:

- The Database: A normalized schema for 331+ tools (scaling to 1k).

- The Scoring Engine: Our first version of the Vendor Lock-In matrix is active, weighing API depth against data portability.

- The LLM Layer: Real-time token pricing parity across 6 major providers is live.
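The lock-in matrix described above can be imagined as a weighted trade-off between API depth and data portability. This is a toy sketch under assumed weights; the real matrix and its factor list are not public:

```python
# Hypothetical weights: deep APIs reduce migration risk somewhat,
# data portability reduces it more. Inputs are normalized to [0, 1].
WEIGHTS = {"api_depth": 0.4, "data_portability": 0.6}

def lock_in_score(api_depth: float, data_portability: float) -> float:
    """Higher score = higher vendor lock-in risk."""
    risk = (WEIGHTS["api_depth"] * (1 - api_depth)
            + WEIGHTS["data_portability"] * (1 - data_portability))
    return round(risk, 3)

# A vendor with a rich API but poor export options still scores risky:
print(lock_in_score(api_depth=0.9, data_portability=0.2))  # 0.52
```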

This is an early draft of a larger mission. We aren't just tracking prices; we are quantifying the infrastructure risks that most companies ignore until it's too late.

The site is in beta and still a work in progress. If you're a builder who cares about margins over marketing, I need your feedback on the Stack Builder. Does the burn rate projection match your real-world OpEx?
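For context on what the burn rate projection is checking: once every tool in a stack is normalized to USD/month, projected burn and runway fall out of simple sums. A minimal sketch, with made-up figures:

```python
def burn_rate(stack: dict[str, float]) -> float:
    """Total monthly SaaS spend for a stack of tool name -> USD/month."""
    return sum(stack.values())

def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
    """Months until cash runs out at the current burn rate."""
    return cash_on_hand / monthly_burn if monthly_burn else float("inf")

stack = {"crm": 490.0, "llm_api": 1000.0, "analytics": 99.0}
print(burn_rate(stack))  # 1589.0
print(runway_months(50_000, burn_rate(stack)))
```

Real OpEx diverges from this mostly in the usage-based line items, which is why the comparison against actual bills is the useful feedback.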

posted to ComparEdge