3 min read · By Ry Walker

What I Learned Shutting Down OSSRank


We built OSSRank because GitHub stars are a terrible way to evaluate open source. Stars measure clicks, not health. A project with 50,000 stars and one burned-out maintainer is more fragile than one with 5,000 stars and a funded team shipping weekly.

The framework was right. The product was wrong. Here's what I took away from shutting it down.

The Thesis Held Up

Everything we believed about evaluating OSS still rings true:

  • Velocity beats popularity — commits per week, time-to-merge, release cadence
  • Contributor diversity beats raw star count — bus factor matters
  • Issue throughput beats issue volume — closed-vs-opened ratio is the real signal
  • Commercial backing beats pure community enthusiasm — sustainability needs a funding source
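To make the framework concrete, here's a toy scoring function combining those four signals. The field names, weights, and saturation points are all illustrative assumptions, not OSSRank's actual model:

```python
from dataclasses import dataclass

@dataclass
class ProjectSignals:
    """Health signals for one OSS project. Field names are illustrative."""
    commits_per_week: float        # velocity
    median_days_to_merge: float    # responsiveness
    active_contributors: int       # bus-factor proxy
    issues_closed: int
    issues_opened: int
    commercially_backed: bool      # funded team or company behind it

def health_score(p: ProjectSignals) -> float:
    """Combine the signals into a rough 0-100 score (weights are made up)."""
    velocity = min(p.commits_per_week / 20.0, 1.0)           # saturates at 20/wk
    merge_speed = 1.0 / (1.0 + p.median_days_to_merge / 7.0) # a week halves it
    bus_factor = min(p.active_contributors / 10.0, 1.0)      # saturates at 10
    throughput = min(p.issues_closed / max(p.issues_opened, 1), 1.0)
    backing = 1.0 if p.commercially_backed else 0.3
    weights = (0.25, 0.15, 0.25, 0.20, 0.15)
    parts = (velocity, merge_speed, bus_factor, throughput, backing)
    return round(100 * sum(w * s for w, s in zip(weights, parts)), 1)

# The 5,000-star project with a funded team outscores the hyped one
# with a single burned-out maintainer.
funded = ProjectSignals(30, 2, 8, 90, 100, True)
hyped = ProjectSignals(3, 21, 1, 20, 100, False)
print(health_score(funded), health_score(hyped))  # → 89.7 18.5
```

Note that stars appear nowhere in the inputs; that omission is the whole point of the framework.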

When we showed teams the data, they agreed. They used it. Several made adoption decisions because of it.

And then they didn't pay for ongoing access.

The Product Was Wrong

Ranking projects is a content business, not a software business. Buyers want the answer once — for this adoption decision, this acquisition target, this competitive analysis. They don't need a dashboard. They need a number and a paragraph.

We were trying to sell a subscription to something that should have been a report.

I should have caught this earlier. The signal was in the sales calls: people would ask "can you just send me your take on these five projects?" That's a research engagement, not a SaaS contract. We kept trying to push them back into the product because the product was what we were building.

What I'd Do Differently

If I started OSSRank again — and I won't — I'd lead with the research engagement. Sell the answer, not the access. Build whatever software is needed to make the analyst faster, but never pretend the software is the product.

The mistake was confusing "we built a useful framework" with "we built a useful business." Those are very different things.

The Framework Outlives the Product

The OSSRank product is gone. The way of looking at OSS projects isn't. When I evaluate a project today — for adoption at Tembo, for understanding a competitor, for deciding what to build on, or for thinking through the dual flywheels of open source commercialization — I still run the same checklist:

  1. Last commit date
  2. Open issues trend
  3. Active contributor count
  4. Funding or commercial backing
  5. What I'd use if it died tomorrow
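The first few checklist items can be pulled straight from the public GitHub REST API. A minimal sketch, using only the `/repos/{owner}/{repo}` endpoint (contributor counts and issue trends need further calls to `/contributors` and the issues API, and item 5 is judgment, not data):

```python
import json
import urllib.request
from datetime import datetime, timezone

def checklist(repo_json: dict) -> dict:
    """Extract checklist items from a GitHub /repos/{owner}/{repo} payload."""
    pushed = datetime.fromisoformat(repo_json["pushed_at"].replace("Z", "+00:00"))
    return {
        "days_since_last_push": (datetime.now(timezone.utc) - pushed).days,
        "open_issues": repo_json["open_issues_count"],  # note: includes PRs
        "org_backed": repo_json["owner"]["type"] == "Organization",
    }

def fetch_repo(owner: str, name: str) -> dict:
    """Fetch repo metadata from the public GitHub API (unauthenticated, rate-limited)."""
    url = f"https://api.github.com/repos/{owner}/{name}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Usage: checklist(fetch_repo("some-org", "some-repo"))
```

`org_backed` is only a weak proxy for commercial backing; an org account doesn't mean a funded team, so that item still needs a human look.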

That's the part worth keeping. The rest is just a lesson about which problems are worth turning into companies and which ones are worth turning into a single good blog post.

— Ry

Key takeaways

  • A useful framework doesn't automatically become a useful product — someone has to be willing to pay for the answer.
  • Ranking projects is a content business, not a software business. We were building the wrong shape of company.
  • The signals that actually predict OSS health — velocity, contributor diversity, commercial backing — are still the right ones to look at.

FAQ

What was OSSRank?

A product for scoring open source project health using velocity, contributor diversity, issue throughput, and commercial backing — not just GitHub stars.

Why did you shut it down?

The framework was useful but the product wasn't. Teams agreed with the thesis, used the data when we showed it to them, and then didn't pay for ongoing access. That's a content gap, not a software gap.