We built OSSRank because GitHub stars are a terrible way to evaluate open source. Stars measure clicks, not health. A project with 50,000 stars and one burned-out maintainer is more fragile than one with 5,000 stars and a funded team shipping weekly.
The framework was right. The product was wrong. Here's what I took away from shutting it down.
The Thesis Held Up
Everything we believed about evaluating OSS still rings true:
- Velocity beats popularity — commits per week, time-to-merge, release cadence
- Contributor diversity beats raw star count — bus factor matters
- Issue throughput beats issue volume — closed-vs-opened ratio is the real signal
- Commercial backing beats pure community enthusiasm — sustainability needs a funding source
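To make the framing concrete, here is a minimal sketch of how signals like these could be folded into a single score. The weights, thresholds, and field names are illustrative assumptions for this post, not OSSRank's actual model:

```python
from dataclasses import dataclass

@dataclass
class ProjectSignals:
    commits_per_week: float       # velocity
    median_days_to_merge: float   # responsiveness to contributors
    active_contributors: int      # bus-factor proxy
    issues_closed: int
    issues_opened: int
    has_commercial_backing: bool

def health_score(s: ProjectSignals) -> float:
    """Combine the signals into a 0-100 score. Weights are illustrative."""
    velocity = min(s.commits_per_week / 20, 1.0)            # saturate at ~20/wk
    merge_speed = 1.0 / (1.0 + s.median_days_to_merge / 7)  # faster merge -> higher
    diversity = min(s.active_contributors / 10, 1.0)
    throughput = min(s.issues_closed / max(s.issues_opened, 1), 1.0)
    backing = 1.0 if s.has_commercial_backing else 0.0
    weights = [0.25, 0.15, 0.25, 0.20, 0.15]
    parts = [velocity, merge_speed, diversity, throughput, backing]
    return round(100 * sum(w * p for w, p in zip(weights, parts)), 1)
```

Note that stars appear nowhere in the inputs: popularity is deliberately excluded, which is the whole point of the framework.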
When we showed teams the data, they agreed. They used it. Several made adoption decisions because of it.
And then they didn't pay for ongoing access.
The Product Was Wrong
Ranking projects is a content business, not a software business. Buyers want the answer once — for this adoption decision, this acquisition target, this competitive analysis. They don't need a dashboard. They need a number and a paragraph.
We were trying to sell a subscription to something that should have been a report.
I should have caught this earlier. The signal was in the sales calls: people would ask "can you just send me your take on these five projects?" That's a research engagement, not a SaaS contract. We kept trying to push them back into the product because the product was what we were building.
What I'd Do Differently
If I started OSSRank again — and I won't — I'd lead with the research engagement. Sell the answer, not the access. Build whatever software is needed to make the analyst faster, but never pretend the software is the product.
The mistake was confusing "we built a useful framework" with "we built a useful business." Those are very different things.
The Framework Outlives the Product
The OSSRank product is gone. The way of looking at OSS projects isn't. When I evaluate a project today — for adoption at Tembo, for understanding a competitor, for deciding what to build on, or for thinking through the dual flywheels of open source commercialization — I still run the same checklist:
- Last commit date
- Open issues trend
- Active contributor count
- Funding or commercial backing
- What I'd use if it died tomorrow
That's the part worth keeping. The rest is just a lesson about which problems are worth turning into companies and which ones are worth turning into a single good blog post.
— Ry
Key takeaways
- A useful framework doesn't automatically become a useful product — someone has to be willing to pay for the answer.
- Ranking projects is a content business, not a software business. We were building the wrong shape of company.
- The signals that actually predict OSS health — velocity, contributor diversity, commercial backing — are still the right ones to look at.
FAQ
What was OSSRank?
A product for scoring open source project health using velocity, contributor diversity, issue throughput, and commercial backing — not just GitHub stars.
Why did you shut it down?
The framework was useful but the product wasn't. Teams agreed with the thesis, used the data when we showed it to them, and then didn't pay for ongoing access. That's a content gap, not a software gap.