TSM Episode 54: RPGcast

For the first time ever, an RPGamer podcast is presented in less than four hours, and on a site which 1) has no ads and 2) was updated after 1998!


Produced 2012.07.01

Lusipurr has ruined RPGamer’s podcast one too many times, and the staff of RPGamer finally take matters into their own hands, turning the tables on Lusi and his minions. Sabin hosts, with guest panelists Paws and Firemyst, in this CatFancy-Crossover-Cast!

16 comments on “TSM Episode 54: RPGcast”

  1. That was a little strange hearing Chris start up the show.

    I think I have 4 different copies of the original Final Fantasy. I might have a problem.

    What is it about Namco’s name that makes it so easy to Spoonerize? I always want to call them Bamco Nandai.

    The interpretation of the review system has become broken. So many sites now grade on an academic curve, so a 7/10 is seen as mediocre when it should still be read as an above-average game. This is the same problem that sealed the fate of Alpha Protocol. Unfortunately, I don't see this being fixed anytime soon.

  2. @Dan (and before listening to the podcast): I was recently thinking on reviews and scores, and I came up with an idea. Instead of huge 10 or even only 5 point scales, what if there were only 3 points? 1 for below average games, 2 for average games, 3 for above average. This gives the reader something to see at a glance if they wish, but not SO much info that they may as well skip the reviewer’s actual REVIEW which clarifies and justifies the score (not to mention exemplifies their hard work reviewing the game). It would be intentionally vague. I also like the idea of simply NO review score, but I think this might be a happy medium. Perhaps it’s too simplistic? Or maybe it risks lumping outright unplayable games in the same category as ok but sub-par efforts?

  3. @DCS: It’s a SURPRISE! Actually, when I relistened to it, I thought the same thing. Très bizarre!

    The best way to fix the review system is to abolish numerical/letter/etc. systems and just write a review. Let the reader decide, in their mind, based on the content of the review, what sort of grade it deserves, if they so choose. For years, book reviews did this and it worked great. Then, at some point, the laziness of stars and thumbs and so on hit the scene and now we have the miserable situation we live in today.

    Scoring systems are laziness on the part of the reviewer. Full stop.

    @Mel: A lot of sites are leaning towards this, albeit not necessarily for gaming reviews. Things are now delineated into Like/Dislike, because what we have seen on sites like Metacritic is that the vast majority of users are inclined to give something either a 10 or a 1. I still think it is laziness.

  4. I don’t know if I’d call scoring laziness on the part of the reviewer so much as fear on the part of the person or company publishing or hosting the review. Fear that their review will get passed over because it’s not quick enough to digest. Fear, and conformity: if they don’t measure a game numerically, their review will not be ranked on the aggregation sites, which advertise their name for free.

    Review scores and their constant re-tooling and re-thinking are perhaps people attempting to build a better mousetrap in a world where mice don’t exist. In general, classification of things like movies and games is a limiting factor. More often, people turn things down based on their classifications than take them up. OC Remix, which you link to at the bottom of the site, doesn’t offer a way to search its remixes by genre, because the guy who runs the site (DJ Pretzel) thinks this would only lead to people listening to fewer remixes, not more.

  5. @Mel: A 3-point scale could be used effectively, but I admit I’m really a shades-of-grey kind of person, so I’d often find myself wanting to waffle between two scores. My site uses a 10-point scale and I still find myself wanting 0.5s. There are times I really hate assigning scores, but I know that some people just won’t read a whole review, and that’s my compromise. I won’t put in the pro/con lists like some sites do, because that’s a guarantee that most of your readership will skip over the wall of text.

    @Lusi: I don’t think that the numbers themselves are necessarily the problem, and I’m not sure that flatly scrapping the system is the best answer. Much like nuclear energy, it’s all in how you use it, and right now review scores aren’t being used productively: they’ve been weaponized. Unfortunately, human beings have an innate ability to find the worst way to use a good idea. (This is why we can’t have nice things!) If things were standardized from site to site and reviewer to reviewer, Metacritic would be a great resource, because it would show whether a game is genuinely divisive or whether everyone agrees that it is terrible, mediocre, or great. Unfortunately, that’s just a pipe dream.

  6. I’m with that sentiment, DCS. I like the idea of review scores as a complement to a review, but while some variance is fine, it’s a little overboard now. It’s to the point where a score is pretty meaningless unless you’re very familiar with the specific reviewer.

  7. @DCS: Human nature is the reason why some things cannot be successfully used in a social environment. This is why we need to scrap game reviews. “Destructoid gave it an 8!” is absolutely meaningless, as is “the average review score is an 8”. The numbers are highly subjective and carry with them an aggregate tone of authority which they do not, in fact, possess.

    There is no such thing as a 9 game or an 8 game or a 5 game. There are games. They have good points and bad points. The idea that there is a way to standardise ‘scores’ is ludicrous, but far worse and more subtly insidious is the idea that a gaming experience can (or, indeed, should) be reduced to a single numerical value.

    I stand by what I said: review scores are terrible and they should be scrapped entirely. They are based on an obviously false premise–one that is harmful to gaming.

  8. You don’t link to any of the funny images mentioned in the podcast, Lusi. I give this episode’s post a 3.11 out of 2.

  9. @Mel: I can fix THAT.

    The Circle Pad Pro XL!

    Sony’s Absurd Racing Harness!

    Microsoft’s Double-Ended Pleasure Device

  10. LOL

    I’ve seen the Sony and MS units before, but they bear reposting. That 3DS XLpro mockup, however, is pretty damn funny. Put that d-pad right next to that other d-pad!

  11. @Lusi: I think having a tool that allows rough comparison between games is very important to the industry. I also think that a single review score by itself really doesn’t hold a ton of merit, but the ability to look at two games and gauge their relative quality is a nice option. That being said, people only report averages; they never report the standard deviation (SD) of the scores, and I think that information would further enlighten people, if they were willing to look at it.

    As a thought exercise, say Game A had an average score of 7.2/10 and an SD of 0.6 (indicating a decently big spread of review scores, assuming a large sample size) while Game B had a mean of 8.1/10 but an SD of 0.1 (meaning everyone roughly agrees on the score). The easiest thing to see is that virtually everyone agrees on how good Game B is, but Game A seems to only strike a chord with a smaller segment of reviewers.

    Looking a little deeper into the stats, you would find that the games are not significantly different by research standards. A common rule of thumb is to use 2 times the SD as the threshold, so:
    7.2 + (2 * 0.6) = 8.4
    8.1 - (2 * 0.1) = 7.9
    And because these two intervals overlap (8.4 is above 7.9), by that definition the scores aren’t meaningfully different.

    If the numbers were, say, 3.8/0.4 and 7.1/0.6, you’d get:
    3.8 + (2 * 0.4) = 4.6
    7.1 - (2 * 0.6) = 5.9
    And because the intervals don’t overlap, you could safely say that one game scored better than the other.

    Of course, sites like Metacritic would never go through enough effort for something like this, and only a small segment of the population would probably even give a crap. But to me THAT is really where review scores are helpful. Not as a floating data point that some fanboy can trumpet in a forum somewhere.
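The two-SD heuristic described above can be sketched in a few lines of Python. This is purely an illustration of the rule of thumb from the comment, not anything Metacritic or other aggregation sites actually compute; the function name and interface are made up for the example:

```python
# Rough two-SD overlap heuristic: treat each game's score as the
# interval [mean - k*SD, mean + k*SD] and call the games
# "significantly different" only when the intervals do not overlap.
def scores_differ(mean_a, sd_a, mean_b, sd_b, k=2):
    lo_a, hi_a = mean_a - k * sd_a, mean_a + k * sd_a
    lo_b, hi_b = mean_b - k * sd_b, mean_b + k * sd_b
    return hi_a < lo_b or hi_b < lo_a

# Game A: 7.2 +/- 2*0.6 -> [6.0, 8.4]; Game B: 8.1 +/- 2*0.1 -> [7.9, 8.3]
print(scores_differ(7.2, 0.6, 8.1, 0.1))  # False: the intervals overlap

# 3.8 +/- 2*0.4 -> [3.0, 4.6]; 7.1 +/- 2*0.6 -> [5.9, 8.3]
print(scores_differ(3.8, 0.4, 7.1, 0.6))  # True: one game clearly scored higher
```

Note that this checks overlap in both directions, which matches the one-sided arithmetic in the comment for these particular numbers.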

  12. @DCS: I think this is a pretty intelligent use of review scores, but I take issue with the idea that (even when accounting for standard deviations) one game’s average score being higher than another’s means the first game is “better”. It’s still up to the subjective likes and dislikes of the player.

    How I tend to use reviews is by asking: “What does this reviewer NOT like about the game, and can I personally overlook enough of those things, or let enough of them not bother me, to justify playing the game?” And of course I read multiple reviews when I do this. Usually, all the good points a game has to offer are things I already know, or else I wouldn’t have been interested enough to read the review.

  13. @Mel: Perhaps “better” isn’t the most appropriate word for me to use there, but it does provide some measure for analyzing general opinion. And just because something gets a low score doesn’t mean you can’t derive enjoyment from it. Mega Shark vs. Giant Octopus is a wretched movie by any metric, but there are people who absolutely love it because of how ridiculous it is. I completely agree with you that scores alone should never be the deciding factor in whether a person watches, reads, or plays anything; that’s why we have words with our reviews.

  14. Yeah, I think scores are wildly over-valued and wildly misused, but I like their function as an idea. The score is the reviewer’s projection, just like his words; it’s just a vague idea to go off of. I like scores for games that I care about a little, but only enough to skim the review and read the summary/score. Then, if the score is really good/bad or the summary notes things that I particularly like or dislike, I’ll read the review in full.

    However, with thousands of comments on reviews being outraged at a number, and companies giving bonuses based on Metacritic scores, it is certainly out of hand.

  15. Ethos, you touched upon a real evil in this industry with the Metacritic-related bonuses. It puts everyone, from the player to the reviewer to the developer, in a terribly unprofessional position at times.

  16. Ethan summed up my feeling on the matter beautifully – I find review scores to be dead useful, but the whole system is quite wonky at the minute.

    The way I tend to read up on a game is to go to Metacritic and select three of the top-scoring reviews, three mid-scoring reviews, and three low-scoring reviews – I couldn’t easily sample the full spectrum of opinion without the existence of arbitrary review scores.

Comments are closed.