Editorial: On Review Scores

That means time for contemplation and plenty of dead car batteries.
Baby it’s cold outside.

Happy It’s Still Winter Day, readers! Get snug and comfy in this cold weather* and curl up with a good website as I discuss the weighty subject of review scores. Weight might be the operative term, because these little numbers (or stars, or whatever) arguably carry more importance than the sum of the review text. The opinion of the reviewer must needs be boiled down to a single-character summation of every concept, nuance, and exception detailed in the review process. Because reading a score takes a moment, while reading and digesting the review text takes time and effort, most people effectively place the score (often positioned at the end of the review) above the body of the review, letting it color their perspective. Everything is reviewed on a cursory level; reviews are themselves reviewed by readers, often judged on how well the score matches their expectations and the text of the review. To say contention arises because of this, both among consumers and developers, is an understatement. The subject has jumped to the fore again with popular games site Joystiq choosing to follow in our illustrious footsteps and leave their future reviews unscored.

When I was still only a reader of Lusipurr.com, one of the aspects I appreciated most was the scoreless reviews. Scores had always struck me as reductive, to use a recently popular term, attempts to boil down a mostly subjective process into an objective declaration. Aside from whatever difficulty the reviewer might have in finding a good fit for their review in score form, the real issue lies in the collateral damage these scores produce. It becomes most worrisome when the scoring process infects the review process before a score is even applied. Likewise, when a review is defended by working backward from the originally posted score, something has been lost in the application of that score. Consumer outrage focused purely on the number attached to a review is a comically frequent occurrence, familiar to anyone who pays attention to game reviews on mainstream sites. Often it is not that the review and the score are deemed “wrong” or disagreeable, but that the review and score do not match. Claims of reviews “reading like an eight” but getting a seven, or some similar scenario, pettifog the hard work of both the reviewer and the game’s creators. Instead of being informed about the product, consumers are only informed about a number the reviewer tied to that product. The review process, intended to arm consumers with information, is instead used to arm arguments about the reviewer or their process.

But it does look nice.
I for one do not prefer my cocoa with cinnamon.

Other industries, famously the movie industry, also score their reviews, and all of these scores are filtered through aggregators, the best known of which is Metacritic. On its own this is simply a tool collating data from around the internet (and what remains of print media). But it is not always so innocent: the site must apply a weighted average to normalize the different scales used by reviewing outlets (scales spanning from five to one hundred points), and it assigns scores to reviews that give no clear score at all. In the process, even should an outlet abstain from scoring, its work is ultimately still interpreted in a scored manner. Never mind the loss of control this represents for the reviewer; it reveals an industry-wide insistence on these scores. With stories about Metacritic averages being wielded as a cudgel to withhold benefits and bonuses from developers, it is not difficult to see why that insistence is not entirely healthy.

Anything that can be done to get more people to read more of the review text, instead of skipping to the score and then maybe skimming with that number in mind, can only help games development. As more people become familiar with the actual critical opinions of products, they will buy and develop games in a smarter manner. So too will the reviewing process become smarter and more honest, as any need to wedge a game’s critique into a certain score (or even the other way around) will be eliminated. Joystiq staff have described the scoring process as arduous and, thankfully, as one they would not consider until the very end, but nevertheless as difficult and seemingly arbitrary. They also raised concerns about the unintended impact of scores on the development process, concerns I wholly agree with. A review should not have a greater impact than, or even one similar to, the consumer purchasing practices involving that game. A review can influence those purchasing practices, but in that case it is still the money from sales being accounted for, not the numbers from scores. And at a time when more readers than ever are sensitive to industry collusion, I feel any steps to avoid it should be seriously entertained and examined.

If only I had a place like this…

The pressure to be a part of the scoring process all but assures that most outlets will be apprehensive about ditching it, considering that little number is so vital not only to game sales but to page views. So when a larger site decides to take a stand against such a practice, even to its own detriment, I take a degree of notice. It is the current position of Lusipurr.com that our reviews will go unscored, in any sense however loose, and it is this editor’s intent to see that remain the case for as long as I remain in place.

Now it is your turn, scoreless readers. Averagely weigh in on the matter. Give me a thumbs up or a thumbs in the butt down on this stance as it applies here and elsewhere. Are you perhaps more neutral regarding review scores? Do you firmly favor them? Comment, or I will give you an F!

*Does not apply to SiliconNooB


  1. I think that a score is nothing but how somebody feels about a game (or movie or whatever). That’s why I only reviewed my favourite games as reader reviews on IGN when I was younger. I wanted my feelings about the game to seem more concrete. I loved looking at the 9.8 I gave FFIX instead of the 9.2 IGN gave it (that’s based on memory, but I’m probably right, sadly). As you detail in this article, the usefulness of a score fades away the more one thinks about it. It serves no function outside distraction and chest puffing.

  2. I don’t like single score reviews, but if we’re breaking up the scoring into discrete categories, then I pay a bit more attention. Even the few times I’ve seen this, they still want to average out those numbers into one at-a-glance grade, which ruins the point, to me. I’ve attempted to come up with a proposed compromise, but haven’t gotten any further than rating games by association with various candies and mints.

  3. The problem with scores is that by their very nature they obscure the premises by which they are reached. One has to read the review to actually understand the score, at which point the score is no longer necessary: for, one has just read a review which gives a far more comprehensive set of information than a reductive value can ever provide.

    And that’s why we don’t have them here.

  4. Also: that… cabin(?) in the last picture is shocking. I should die of shame if I had a place decorated in such a way.

  5. I liked the point about review scores just arming people to argue about the review instead of educating the consumer. I certainly hope Joystiq can keep those scores off. I imagine their reviews will get significantly less traffic now. This was a great article, Mel. I give it an 8.

  6. Reducing an entire work to a single numerical value has more to do with moving product than investigating its potential qualities.
