Should software be rated?

One of the most common questions I get asked about this blog is why I don’t put ratings on software. It was no default decision – there are software blogs both with and without rating systems. Still, readers seem to expect any kind of review to come with shining stars on top.

However, I think software is too tricky for the magic of tiny stars to work.



Photo by henribergius

What ratings accomplish

In the general case, a rating helps you:

  • skim – common sense dictates that things with low ratings should be discarded and things with high ratings given the most attention;
  • scale – between the absolutes of good and bad, ratings help to compare different products and place them on a relative scale.

How software is rated

Right from the start there is a lot of confusion about how exactly software should be rated. There is no clear scheme that can quantify the quality of any given software product.

So the inventing and enforcing of opinions begins, and software gets judged by many things:

  • ware-ness. That holy fury usually unleashed on adware. Nothing with filthy ads can be worth a high rating, right? Except that many brilliant apps are funded by some form of advertising and completely outshine their pure competition.
  • number of functions. Too little is bad. Too much is bloat. Anything in between is generic.
  • perceived value. A utility for an obscure task is treated as much less valuable than something for daily use. Until you meet that obscure task.
  • subjective opinion. Like it – rate it well. Hate it – bash it away. Redefines unfair.
  • horde wisdom. Let users vote and sum up a hundred clueless opinions into a single one that has the illusion of authority.

Why it fails miserably

The complete lack of universal parameters that can be found in all software sets rating up for failure:

  • it doesn’t help you skim, because it is impossible to boil good software down to a single number (and bad software doesn’t even make it into reviews);
  • it doesn’t help you scale, because ratings are consistent (at best) only within the scope of a single site or author, and excluding low-end software from the loop destroys what little sense the scale could make.


In the initial planning of this blog I had considered ratings (together with many other things that didn’t make it).

My main argument against them was: why should I bother putting different numbers on software if I am only going to cover good software?

I hope you don’t come here just to count stars, download, and use. I hope you come because you find my choice of software interesting, fitting your taste and solving your problems.

And in what way could some stars possibly enrich that experience?

Comments

  • Angelo R. #

    I'm with you 100% on that Rarst! We should do away with all ratings, they become useless. What happens when you find one piece of software that you think is awesome? You'll rate that a 9/10 (or 4/5 stars). And then when the next piece of software comes along you'll compare it to your top piece. And if it does better? Well you can't give it a 10 can you? It's not perfect, so you'll give it a 9.5 and from then on your entire rating system is just shot. Take a look at video game rating systems.
  • Rarst #

    @Angelo Exactly, too many high ratings ruin the system. Games are a tad different - at least they have a common goal in entertainment value. As well as established genres. Together that makes rating at least somewhat viable. But software has nothing in common most of the time.
  • Kane #

    I rely more on the description and comments from others than I do the scoring system. However, I still like a scoring system. Tracking the number of downloads is useful too. If a piece of software constantly gets a high rating then I'm almost certain to try it out. If it's a consistently low rating then probably not, unless it will scratch whatever itch I'm having. :)
  • Rarst #

    @Kane I am torn on comments. Sometimes they are relevant, sometimes they are not. I may look at them but I am not confident enough to use them as the main metric. Number of downloads only works for popular software. Small stuff may do the task perfectly but isn't hyped and so isn't downloaded as much. Part of the problem is that there is rarely such a thing as a "consistently low rating" for software. There are (of course) apps that plain suck, but how many reviews are there that start with "I decided to review a utility that outright sucks and I rate it 1/10"? :)
  • kalmly #

    If you think plain folks know what they are talking about, or how to rate a computer program, check out GOTD site. Scary. I do like to read comments that assure me a program will not destroy my hard drive and will uninstall nicely if I decide it isn't my cup of tea. Doesn't it come down to personal preference? I have a treasure of a little application that few people have ever heard of - mostly because there is/was a load of hype for similar programs but not so for this one. I would rate it a 10 and the others a 5 (maybe a 7) because I like small. However, others like big. I like fingers on keyboard, though others like mousing. I like do it once, though others don't seem to mind digging and clicking. SO - my rating wouldn't mean much to those "others". I do like input from reviewers, like yourself, especially those that compare the program to similar ones. If you don't want to rate it, I don't really care. It's the words I pay attention to, though others prefer stars.
  • Rarst #

    @kalmly It just occurred to me, when I was thinking over comments as a metric, that a negative experience is more likely to be reported in them. Getting your hard drive destroyed is a stronger motivator than not getting your hard drive destroyed. :) Difference in user preferences also contributes to the issue a lot (didn't make it into the post). It especially hurts when the basis for a rating is subjective. For the same app: Blogger A would rate it 8 for being portable. Blogger B would rate it 6 for being boring. Blogger C would rate it 4 for a small banner in the corner. Which one (if any) of these ratings is relevant to a user would be completely up to that user's preferences. So far comments are for good descriptions. :) I wonder if someone pro-stars will share an opinion.
  • Tom Clarke #

    You've made some good points here regarding what we too find to be one of the trickiest parts of running a software website. At Softonic, we have a strict rating system, shared across our different locales, which deducts and awards points for various parts of each program, as well as a 2-point 'bonus' based on the editor's general feeling about the software in question. It's not a perfect system but it does allow us to justify a low score when a developer questions our rating. The biggest problem is that many of our competitors seem only too happy to dish out 10/10 ratings purely so they can get their site's logo included on the developer's homepage. This cynical approach to editorial review has made a lot of users mistrustful of starred ratings on software websites, and I understand why. In the end though, at Softonic we feel that a well-written review accompanied by a carefully considered rating out of 5 stars offers all of our visitors the best of both worlds. We also hope that our users recognise that we rarely give a program 10/10, and so when a program receives that score, it truly is something special.
  • Rarst #

    @Tom Clarke Yep, I guess a strict and defined system negates at least part of the issues with rating. Thanks for dropping by and for the extensive comment! It reminds me I should browse Softonic. I don't spend much time on software portals but lately Appnews.net forces me to. :)
  • Benoit Tremblay #

    I say if you go with ratings, give a rating out of 100 points. This way, even if you review only good software, 100 points gives you a lot more room than 5 stars. You know, there's a lot between 4 stars and 5 stars. The benefit I can see of using a rating would be to make a nice monthly recap or a yearly recap of the best software you've reviewed through the year. I personally don't care if you put a rating or not, but putting a rating certainly won't hurt your readership. I think it can only improve your readership because of the people looking for a rating...for a number of different reasons. By establishing strict guidelines, I think you can reach the best of both worlds. ;) Ben.
  • Rarst #

    @Ben Why not 1000 then? :) It would be a recap of software that scored high, not necessarily the best. "putting a rating certainly won’t hurt your readership" Have you forgotten how bad I am at things I am supposed to do? :) This blog is not run by visitor expectations, it is run by me. Haven't seen a comment from you in ages, still workburied?
  • Benoit Tremblay #

    No problem Rarst, it's important to run everything the way you want it to be run! ;) Yeah, I've been quiet lately, busy with a lot of stuff offline. Still related to the Web, but more on the consulting side. I should be more active this summer ;) Ben.
  • Rob Dunn #

    I agree. This was something I think Samer had struggled with over at FreewareGenius. Basically, my approach is "I like it enough to post about it, but your mileage may vary". You get out of the software what you put into it. I don't mind the readers rating my articles, per se - but I'm hoping they are doing so based upon the quality of content I am posting in addition to the quality of software, but who knows... Good post R, Rob
  • Rarst #

    @Rob Rating articles is a different metric. It is feedback, and something easily displayed and combined to highlight top content. And yes - there is the problem of readers confusing rating the post and rating what the post is about. I am considering implementing Google Friend Connect stuff and one of their latest additions is a "recommend" button. I think it's a better approach to rating posts than a 1-X scale.