
Monday, November 14, 2011

The evolution of the RPG... and me.

A year or two ago, I (rather infamously) drew my line in the sand: I do not like party-based games, I declared, and never had.

Following this assertion (brought on at that moment by disliking Dragon Age: Origins), I've played both Mass Effect games, am currently in the midst of Chrono Cross, and just devoured the entirety of Dragon Age 2 in a few days.  And yet in many ways I stand by my original statement -- so what's changed?

I'll be honest: by the middle of Act II, lady rogue Hawke pretty much always took Fenris, Varric, and Merrill.

I'll admit that in part, I've changed.  Though I've been loving games and digital worlds since I was a kid, my consumption of various game types has really ramped up in the last three years and I've been exposed to, and learned patience for, some kinds of game design that I hadn't gained wide experience with before.  Game appreciation, like film appreciation, is tied to a sense of time and place, and an understanding of the history of the art.  My sense of history is still developing.

Crucially, though, the games themselves have also been evolving.  The difference in feel between Dragon Age: Origins, which hearkens back to an older era of games, and Dragon Age 2, which feels very modern, really crystallizes that evolution for me.  Thanks in large part (though not solely) to BioWare's recent design choices, I've been able to narrow down a bit what it is I actually hate about party-based gaming.

In a word?  Micromanagement.

For some people, this is fun. I will never truly understand those people.

For me, the joy of playing has never been in the numbers, the tactics, or the methodical min/max situation.  I am fundamentally a lazy gamer: I don't want to control a hundred things at once.  I'm willing to be responsible for one character and for her tactics, skills, attributes, gear, inventory, and personality.  I tend to gravitate toward one character type and I tend to play that type the same way across games.* I like passive skills and quick kills, and I prefer not having to overthink every single character placement or tactical choice.

If I'm playing a game where character development is the focus -- in broad strokes, the RPG genre -- then what I want is to take control of my avatar and to understand and master her personality and talents.  I don't want to be responsible for controlling others.  It's a selfish impulse ("don't be dead weight I have to drag around") but also a self-protective one ("I just can't manage both of us correctly at once; you'll get short shrift").

My aversion to having to worry what others are up to has led to some downright comical contortions. During my EverQuest II years, I was three solid months into the game and level 28 (back when it was much less solo-friendly)  before I ever joined a group.  The folks I grouped with were all in the same guild and I joined up with them a few days later.  That's how I eventually discovered the pleasure of watching a plan laid and executed with a minimum of communication.  Everyone knew their roles: tanks took the hits, healers healed, chanters controlled crowds, and DPS damaged things.  Sure, for special bosses or raid zones (or one memorable five-Fury group) we discussed strategy at greater length, but each character always knew her role because each was controlled by an autonomous being somewhere, an individual man or woman at a keyboard just like me.

Some of those raid strategies worked better than others. Running a new x2 zone on Test, June '05.

When handed Divinity II and Dragon Age: Origins in the same week, I gravitated to the former because I could simply strike out into the world as I pleased, without worrying about what others wanted, needed, or thought of me.  I've been bored, in the past, with having to make the rounds among companions and crew to check in on each and every one of them and their personal needs.

I've been thinking about the "why" a great deal over the past week.  I think it's because for a long time, in many of the games I played, companion characters' personal needs felt mechanical, pointless, or kind of unhinged.  That's a personal assertion, and not necessarily a quality-of-games one; it has to do with my own particular wiring.  As much as I hate to admit it, because I'm a book-lover through and through and an imaginative one at that, I think what's actually hooking me into this new RPG era is the voice-over work.

When I play a game like Chrono Trigger or Chrono Cross, everyone sort of sounds the same.  Yes, I imagine characters speaking differently, with different cadences, accents, and mannerisms, but in the end every voice is still, on some level, mine.  I can't give other characters inflection that I can't imagine, and, active as my imagination is, in a text-only world my interpretations might run counter to the scene's intent.

In fact, I'm running into this fairly often in Dragon Age: Origins, which I'm now giving another try.**  With an unvoiced Grey Warden, it's up to me to guess whether a comment she can make is sarcastic or genuine, and whether that comment is made jokingly or earnestly.  As a result, other characters' responses are not necessarily what I expect or what I'm aiming for.  I've run into some disapproval situations that I didn't see coming, because I didn't realize the Warden was going to be perceived as confrontational rather than as politely direct.  (Also because Morrigan disapproves of roughly everything.)

And when Morrigan disapproves, she lights you on fire. It's just her way.

Having companions find their voices has upended the way I view these NPCs in my games.  It's an emotional connection to the narrative and its world that isn't a new concept, but it makes me personally care a great deal more.  Even in a silent-protagonist, fundamentally single-player game like Fallout: New Vegas, companion voices make me feel differently and realign my priorities.  I want to earn Boone's respect, not his easily-granted disgust.  Hearing Arcade move from self-effacing sarcasm to honesty over time makes me feel trustworthy.  Disappointing Veronica makes me feel like I've kicked a puppy.  And actually getting to hear Christine talk and explain, after she had been rather violently robbed of her voice, is deeply satisfying.

The recent BioWare titles (the Dragon Age and Mass Effect franchises) have done a rather extraordinary job of surrounding me with characters I care about.  Between advancements in game tech and a strong investment in decent writing, I'm able to immerse myself in the illusion that my [Hawke / Shepard / Warden] is surrounded by other people, as real as my intervention has made the PC, with their own voices, stories, and personalities.  And they can control themselves.

Should I be so inclined, I could tell Garrus which baddies to shoot and when, but I never have to.  (I choose not to play on difficulty settings where that level of tactics would be required.)  I can take control of Isabela or Aveline, or issue direct commands to them, but I don't have to.  Without very much intervention (adding health potions to their tactics), Fenris knows how to watch my back and stupid Anders knows how to heal the party as needed.  Varric doesn't need me to issue a complex set of numbers and commands in order to seriously own that crossbow.

The ability and choice for the player character to have intimate and meaningful one-on-one conversations with non-player-characters has reframed the way I relate to a game.  If I need to make a complex or consequential decision in Chrono Cross, I look at a guide, or I talk it over with a friend (i.e. the spouse) who has played the game before and can give me non-spoiler guidance.  But when I need to make a complex or consequential decision in a game like Dragon Age 2, I have Hawke talk to her friends.  They become her guides and, by extension, mine.  Does Aveline disapprove of a choice?  She must have a reason and it's worth asking her before I act.

I'm used to NPC companions either feeling burdensome or feeling invisible -- for all that I liked, say, Lucca and Frog in Chrono Trigger, taking their turns in combat just meant me moving through one list of all options, and switching party members roughly meant switching combat tactic options and not much else.  That both game design and I have reached a stage where player companions feel almost like MMO buddies has been revelatory.  For the first time, when given the choice I care more about my companions' quests, evolution, and goodwill than I do about exploring every corner of the world (though I still do) or about the main story (which always comes around again in due time).

I haven't always particularly enjoyed characters' quests (bite me, Anders) or supported their loyalty missions (you too, Zaeed).  But as this year in gaming starts to wind down, I'm realizing that now, the companion quests are the ones I want to appear more often.  I enjoy making it a point to wander around the Normandy, or around Kirkwall, or around the campfire.  Fenris, Anders, Aveline, Varric, Isabela, Merrill -- their stories, their trust and forgiveness (or betrayal), were what mattered to me in Dragon Age 2.  And as I look toward 2012 and Mass Effect 3, I know that Shepard can stare down the Reaper threat, but what I really want is to be sure that Garrus, Liara, Wrex, and Tali will trust her and stand by her side while she does.

Until then, back to Chrono Cross, where Kid is Australian and Poshul is desperately annoying -- but everyone is as silent as Serge. 


*For the record, that type is rogue / thief / assassin, heavy on the stealth and dual-wield or, in a futuristic setting like ME, on sniper tactics.  Sneak-and-stab or sneak-and-shoot: if they see me coming I'm doing it wrong.

**Because seriously, I want to see if I can find out why [DA:O character who appears at the end of DA2 with Cassandra] shows up then and there, 6-7 years after the events of DA:O.  Context: I needs it.



~~~~~~~~~~

And for more discussion on party-based gaming, that happened to come up while I was in the middle of this personal meditation, see Ta-Nehisi Coates and the Horde on The Future of the Computer Role-Playing Game.

Wednesday, September 21, 2011

Win, Lose, or Fail

A bunch of gaming writers have recently cycled back around to one of the most foundational questions of our art.  No matter what perspective each of us prefers, no matter which lens each of us uses, down at the bottom there's a single question even more important than the perennial argument of "Are games art?"

Our core issue is this: what are video games?

Michael Abbott over at The Brainy Gamer launched this most recent salvo with Games Aren't Clocks:

I say it's time to let go of our preoccupation with gameplay as the primary criterion upon which to evaluate a game's merits. It's time to stop fetishizing mechanics as the defining aspect of game design. Designers must be free to arrange their priorities as they wish - and, increasingly, they are. Critics, too, must be nimble and open-minded enough to consider gameplay as one among many other useful criteria on which to judge a game's quality and aspirations.

This caused a nearly instant rejoinder from journalist Dennis Scimeca at his personal blog, Punching Snakes, in which he asserted that actually, Games ARE Clocks:

Video games can afford to suffer some modicum of technical errors and still be playable – we routinely look past the regularly-scheduled bugs in Bethesda titles all the time without letting them ruin our fun – but if their mechanics are so broken so as to preclude play? Without play, there is no game, at which point nothing else matters.

I think the salient aspect of Abbott’s post starts midway through, when he expresses his frustration with the term “video game.” Rather than trying to redefine what the term means, in order to fit everything inside the same, comfortable box, however, I think we need new language entirely.

A few paragraphs later, he continues:

I might argue that The Sims has never been a video game, for the same lack of victory conditions. It is a simulation, a digital sandbox, and winning or losing has nothing to do with it. When competition ceases to be part of the equation, I think an object’s definition as a game should immediately be called into question. We don’t do this because even if we determined that “video game” no longer works as a descriptor, we have no fallback positions or options available.

It's an interesting debate, to me, because I think that in their own ways, both gentlemen are quite right.  Games are more than the sum of their mechanics, to many of us, and the word "game" is also loaded with connotations that may not apply to our modern interactive narratives.

Where I've gotten caught up, though, is in this idea of "winning" and "losing."  I don't think they've been the right terms to discuss game completion for a very long time.  BioShock isn't chess,  Plants vs Zombies isn't basketball, and Tetris isn't poker.  How do you decide if you're "winning" the character arc of Mass Effect, Fallout: New Vegas, or Fable III?

At its most basic, a game is something playable.  Whether it's got a story or not, no matter the genre, system, or type, a game is something that requires player input.  You, the consumer, are in some way integral to this experience.  Whether you push one button or speak a word into a microphone, whether you wave your arms at a motion sensor or deliberately hold still when you could act -- a game requires you to contribute.  That's the sum total of the agreement on our current definition of "gaming," and really that's quite a low bar.  Small wonder, then, that we keep looping through these arguments.

We don't just have a win / lose dichotomy anymore.  We do have completion and backlog; we have sandbox and short story.  But every title I can think of -- every title I've ever played and a thousand more I haven't -- has either a failure state or a success metric, and some have both.  Our metrics aren't necessarily competitive, and they might be imposed by the player rather than intrinsically by the game.  There are little successes and big ones, game-ending failures and completely surmountable ones, but every pixellated problem I've ever pounced on has at least one or the other.


(If at first you don't succeed, you fail.)

Writing about L.A. Noire and death in gaming back to back started me down the path of contemplating the failure state in general.  I hadn't really given it any thought before, but recently I've started to understand just how important it is.  Coupling the failure state with the success state (and no, they are not necessarily binary opposites) creates pretty much our entire dynamic of gaming.

Depending on the sort of player you are, this is either a total failure, or a smashing success.

While I was starting to muse aloud on this idea on Twitter, Mattie and Line challenged me with The Sims.  That challenge leads to a critical point: player-determined goals are still crucial goals.  Your Sims can fail at their own little lives: going hungry, getting fired, burning the house down, or getting dumped by SimSpouse.  But it is common to play the game aiming for maximum drama in SimLives -- so, the argument runs, those aren't failure states at all.  They're successes.  That's all well and good, but the players who want SimHouse to burn down still have failure conditions available: the scenario in which the house, in fact, does not burn down.  The standard failure and success metrics, as envisioned by the designers, might be reversed but there are still measurable goals present, waiting to be accomplished. 

To a certain extent, most success goals can be said to be player-determined.  What's true success in Peggle: beating the story mode, or going back for an Ace and a 100% on every level?  What's good enough in Tetris: getting to level 10?  Beating your own old high score?  Beating someone else's?  What's a successful play-through of Mass Effect: paragon, renegade, or somewhere in between?

Even in Minecraft, the most popular sandbox to come along in gaming since dice were first rolled for stat sheets, there are successes and failures.  Both wear many faces, of course.  But success can look like this:

Image source: http://www.kevblog.co.uk/how-to-build-a-hollow-sphere-in-minecraft/

And failure can look (comically) like this:



Creation and destruction are player goals, rather than creator goals, but the game itself is still a set of tools that enables the player to achieve those goals (building a nice house, which is the sum of many smaller goals) or fail in them (committing accidental arson while installing the fireplace).

A huge amount of our gaming, though, is deliberately narrative.  Most of the games that I play certainly are.  This year alone has seen me in Fable III, Portal 2, Enslaved: Odyssey to the West, Fallout: New Vegas, Bastion, L.A. Noire, both Mass Effect titles, and another dozen or two that I can't immediately call to mind.  These are all cinematic stories, designed with beginnings, middles, and ends; the mechanics of their telling are a vehicle to carry us from plot point to plot point, mainly via weaponry.

Stories don't have failure conditions, but they do have endings.  Story-based games often have clear fail states, though, and that's the game over screen.  Your character has died, or the setback you face is so adverse there can be no overcoming it.  Game over, mission failed, you suck at shooting bad guys so your planet is destroyed.  Go back to a save point and try again.

Of course, sometimes they're just kidding about "game over."

But a game like Mass Effect doesn't need to rely as heavily on the fail states (though the game over screen most certainly exists), because it's relying on player input to define the character.  We care about keeping Shepard alive in the face of certain doom, but we tend to care more about whether she aims for diplomatic solutions, or shoots a guy in the face.  A failure state in Mass Effect 2 doesn't look like the game over screen given to the player if a mission goes bad; it looks like being unable to keep one of your crew members loyal, or like being unable to keep one in line.  We're playing to achieve the successes, in whichever form we feel they take, rather than to avoid the failures.

Most narrative games don't take the "define this character for yourself" trajectory that BioWare titles are famous for, of course, but they still rely on that delicate combination of success and failure.  If you're playing Phoenix Wright, the game is completely on rails.  But it has fail states: you can press the wrong statement or present the wrong evidence.  You need to have a decent understanding of what's going on in order to make correct accusations and put the evidence together properly.  And you can get it wrong to the point of seeing a "game over" screen.  (Unless you're me, and save compulsively, and reload if you're doing badly.)  Success in meeting goals -- finding evidence, correctly questioning a witness, or surviving a cross-examination -- will advance the story to the next set of goals. 

Purple's the evil one.

My most beloved games of old simply do not have a fail state.  The classic LucasArts SCUMM-engine adventure games -- Monkey Island 1 and 2, Day of the Tentacle, Loom, and more -- were revolutionary in that the player literally could not get permanently stuck or die.  (As compared to the Sierra adventure games of the era, which were death-happy, or to older games like Zork, where you could waste hours playing on past the point where you'd already screwed yourself over.)  Rather than ending with failure, the games rely on continued success.  These stories have natural bottlenecks built in: the narrative will not continue until you figure out what Bernard should do with that hamster or how Guybrush can use the rubber chicken with a pulley in the middle.  There are items that need to be found, contraptions that need to be built, and discussions that need to be had in order for the player to progress.

In a sense, these games -- of which you could easily argue L.A. Noire is the most recent descendant -- are very proactive.  Reliance on cut-scenes is very low, and non-playable sequences mainly just show the consequences of whatever action the player just took.  The absence of a game over screen may remove a certain kind of tension from the story, but it also removes a major source of potential frustration for the player.

With all of this said, it's true that not every game has a visible set of goals, or any available success or failure metrics.  There are titles out there that deliberately subvert the very idea of success and failure states; this is where I would say the avant-garde of gaming truly lies.  From one point of view, The Stanley Parable has six failure states.  From another point of view, it has six success states.  What it actually has are six conclusions and ways to reach them, the ultimate meanings of which are left to the player.  None are particularly desirable (at least, of the ones I saw); nor is any one better or worse than the others.  An existential crisis in every box!

The Path is another art game that subverts the idea of success and failure states.  There are six player characters; each girl has a starting point and is told to go to an ending point via the given path.  The game, such as it is, happens in the experiences along the way; the journey is the destination and the destination is incidental.  Grandmother's house is more of a concept than a crucial place to be.

One of six sisters finding maturity, sexuality, and experiential horror between home and Grandmother's.

The avant-garde exists deliberately to undermine the tropes and tools of our media.  That's what it's for, and I have long thought gaming would truly come into its own as an art form when a thriving independent and avant-garde scene could generate new ideas that would, in time, filter into mainstream development.  Film history and the histories of other arts have evolved along this path, and evolving technology and the ubiquity of distribution venues (i.e. the internet) have now made the production and release of art games common.

Aside from deliberately subversive, arguable non-game experiences like The Stanley Parable (see Line Hollis for links to and reviews of more obscure art games than you can imagine), I don't think I've ever played any interactive digital experience in the "game" category that didn't have either some kind of failure or some kind of success built in.  Even the visual poem Flower partakes: you can't really fail (I was dreadful at using the motion controls, but as I recall you just keep trying, except perhaps for the stormy level), but as with a classic adventure game, you do need to actively succeed to continue.

If a game had absolutely no success metrics or failure states in any form, whether intentional or unintentional, direct or subverted, dictated or player-driven, would it still be a game?  Maybe, in the same way Andy Warhol's Empire is still a film.

So, after all of this, we come back around to Dennis and to Michael.  As much as I think Dennis is wrong to assert that these digital experiences we all enjoy aren't "games," he's also right.  That is: we have to use the existing vocabulary for the time being, even if only to transition away from it as our discussion evolves.  We've only got so many words right now, and we -- players, critics, and designers -- need to be on the same plane to communicate.

But is "video game" really the right term for the transcendent, new immersive-media experience Michael seems to covet?  As long as those experiences have discrete goals, and as long as player input determines the failure or success of those goals, I think we can use the words we have.  We have a while yet to revisit our lexicon; I hope we've decided what to call the experience before we get to the point where the Holodeck actually shoots back.

Wednesday, August 17, 2011

On Gaming Death

Alyssa Rosenberg writes about a lot of new media and pop culture phenomena, but not very much about games.  In talking about Portal, she begins to address why:
I’ve been traveling a lot lately, so I’m playing through Portal much more slowly than I’d hoped, but as the levels have gotten harder and I’ve started negotiating around poison moats, I’ve figured out one of the things that kept me from playing games regularly for a long time: I find dying in-game incredibly stressful.
So far, nothing too terrible happens to Chell. If I screw up, I hit some brownish water, and I get a loading screen, and we start over again. But I’m invested enough in Chell, despite the fact that I am her and only rarely see her around corners and through portals, that I don’t have much sense of how she ended up at Aperture or why she — or me — has been left alive and alone. Thomas Bissell in his profile of video games voice actress Jennifer Hale ... talks about how effective she makes Commander Shepard’s death seem in Mass Effect, and the incentive that is to keep playing—you don’t want to leave her there, or leave on that note. My anxiety is about getting there in the first place: I get frozen up by the possibility of harm coming to my character.



I don't remember which one is Clyde, but I always blame him.
I got thinking about character death in gaming, after reading this.  Really, it's an odd concept, because it's a mixed bag of concepts.  We lump a whole bunch of different occurrences and design choices into this one idea.  Thirty years ago, avatar "death" or destruction made perfect sense as a design concept.  You need a mechanism in your game to tell the player that she has lost, or failed at her task.

Beyond just communicating failure to the player, avatar "death" used to be a mechanism that meant the kid behind you at the arcade would get a chance, eventually, to put his quarter in the machine and take his turn after you, or at least a mechanism by which you, yourself, would keep pumping in your change for one more chance at the challenge.  Simple, easy: a time and talent constraint built in.  The more you suck, the more quarters the machine will consume!

In the 1980s, though, the arcade famously went to our living rooms.  Throughout the NES era and into the SNES age, we had the convention of game "lives" built in.  By then it was just how we played, and 1-UP was part of the lexicon.  Trying and failing to cross a chasm or fight an enemy meant you "died," and you needed another guy.  (An aside: even if the player is controlling a female character or a completely non-human, non-gendered avatar of some kind, it's still "another guy."  Curious.)


I was always really glad for the collected 1-UPs by World 3.

As we all know, though, games have come a long way since the 1980s.  There was a time when they were all sets of arcade skills that sometimes had stories attached; now, we have a whole collection of stories that also have skill sections built in.

Unlike Alyssa, I don't get stressed when my first-person player character "dies" in Portal.  Her death is impermanent; the player's respawn is nearly instantaneous and the game puts the avatar pretty much right back at the site of the player's failure.  I no more stress out about launching myself into a turret (oops) than I do about laying a jigsaw puzzle piece in the wrong corner, or about missing a move in Tetris.  Portal is ultimately about solving puzzles and although there's a great narrative framework going on, I don't feel personally affected by Chell's ceasing to be; I only feel frustration at my lack of talent or timing.


To infinity, and beyond!  Wait, what?

That said, there is indeed the frustration of bruised pride to contend with.  I don't feel any more attached to the persona of Chell or to her story in Portal 2, but I take more offense at dying because the game is easier.  It relies more on thinking puzzles through (which I can do, and well) rather than on reflex timing (which I can't do well at all).  When I fail at something that I could physically have done right, if only I had thought it through more clearly, I take personal offense at the failure.  That's part and parcel of the ego of the gamer, right there.  This puzzle -- this story -- must have a right answer, and so I must find it.  Otherwise, I, personally, have failed.

We very rarely have a limited number of second chances anymore.  Our games do still exist along the pass-fail, do-die dichotomies, but our stories, as a general rule, no longer continually penalize failure.  Rather than face a character death, we are instead taking a Mulligan on the last five minutes.  We get a rewind (sometimes literally, as in World of Goo) back to before that jump or that shot or that ambush in the corridor.

Sometimes, though... sometimes, it is still personal.  I got really ticked off on the very few occasions when my Commander Shepard died and I had to reload.  In her case, I did feel deeply invested in that character.  She was important, her story was important, and death didn't feel like taking a do-over on a game mission: it felt like a deeper kind of failure, the kind with some sort of betrayal or judgement attached.  The feeling only got worse in Mass Effect 2, where I had the ability to get other characters permanently killed off.  During the climactic mission of the game, I routinely let my fear of harming others send me into a kind of paralysis, during which I had to pause the game and pace around the room instead.

My indecision had nothing to do with this. By which I mean, had everything to do with this.

Of course, that's entirely by design.  When BioWare can make me pace around the room and ask my cat for opinions on who should lead a team (his response was to nibble on my arm), they've won.  Anyone designing a huge-budget, open-world, cinematic-style AAA game is invested in the player's investment.  That so many of us seem to have fangirled over this franchise is not an accident.

Not all games, though, are so large scale.  This summer's XBox indie darling Bastion has finally made the leap to PC, via Steam.  After watching game criticism circles consistently light up about this title for a couple of months, my curiosity got the better of me, and this week I did something I rarely do and jumped on a day-one purchase.  I had some time that evening to give it a whirl.

I kind of suck at Bastion.

It's not a game at which one can suck, exactly, and yet I manage to do so.  Still, I can tell that many of my woes are simply clumsiness: the mouse-and-keyboard combination isn't necessarily ideal for titles designed with an XBox 360 controller in mind, and I might need to remap a couple of keys for easier use.  Over time, I will adapt to this system and after a few days, having mapped my muscle memory to this particular set of mechanics and demands, I will cease sucking.

However, being terrible at Bastion for the time being has proven useful for insights into character death.  The gimmick of the game is narration: you hear what you're doing, what you've done, and what you're about to do, and you hear it with inflection and judgement.  Thus: the first time Kid plummets off the side of the path, to his doom, the narrator is patient and understanding.  The third or fourth time, the narrator's patience begins to wear thin.

On the plus side, there are plenty of jars and such to smash with my hammer while I fail to smash enemies.

The voice of the narrator is meant to be kindly and guiding, at least in these early segments of the game.  (I don't know if it will change or not; I've intentionally been avoiding spoilers and reviews.)  When he intones, "And so, Kid fell to his death," you get that brief moment of, "...awwwww."  But immediately -- before you can even feel sad that your inept steering threw this little artistically-drawn smashy guy into the abyss -- you hear, "Just kidding!" and respawn right where you were, right in the middle of what you're doing.

It's an interesting approach to character death.  No reloading of old saves (it's on a console-style autosave system) and not really even any thinking of how you could do it next time.  In a strange way, it's like a single-player zerging tactic: die, respawn in place, continue.

I don't know what to make of this kind of death mechanic in my game.  It's not an MMO, so I don't need to rely on anyone else's help to get up, nor do I owe anyone else an apology for my failure.  It's not the deeply branching story of a cinematic character to whom I become attached, so I don't lament his passing.  It's not a failed solution to a puzzle, and so I don't have to think about how to get it right the next time.

As far as I can tell, the narrator is the crux of it.  After all: he's going to keep telling the story no matter what.  That's what a storyteller does.  Framed that way, Bastion might genuinely be the most third-person game I've ever played.  The player doesn't really get a chance to put herself inside the head and body of the avatar she's controlling, the way we are habituated to doing.  There's an odd level of detachment that somehow makes character death entirely meaningless -- while also giving it sort of the aspect of a milliseconds-long mid-season cliffhanger.

I'm not sure what I think.  I'm barely even an hour into the game and that counts the section I had to play twice due to an unscheduled PC shutdown.  (In related news, my next case will have a cat-blocking door or panel over the power switch.)  My first hour, though, has made me feel that I care about Bastion's world very much and its player character not at all, which is an interesting and unusual combination.  But I want to know what happened, and I'm going to need that narrator to tell me, so play on I shall.

Wednesday, July 6, 2011

The JRPG and Me

I've described before how I grew up listening to movie scores.  When I was in high school, I went on a tear of renting the movies whose themes I had loved, figuring that I should start getting some context.  And so it came to pass that I learned that although The Abyss did not live up to its musical promise in my eyes, The Terminator certainly did.  (I was and remain undecided on Willow.)

The last decade has brought me more into the gaming sphere.  Although I've been an avid computer gamer since 1986, the only console I ever had was a used NES I bought in 1993.  Sony and I were like two ships in the night for a very, very long time.

But in 2008, my now-husband got a PS3 (the Metal Gear Solid 4 bundle) for his birthday.  And although my entire games collection was stolen in 2006, he -- also an avid collector -- managed to avoid a similar misfortune.  And so, for the last three years, I've been catching up on a number of classic games and series I missed in the 1990s.  Thanks to a PSX emulator, M's PS1 discs, and a classic theme that's always drawn me in, I... am playing Chrono Cross.

I'm this guy now.
It's amazing how slow and clumsy I feel, attempting this feat.  Everyone else played games like this when they were twelve, and part of me wishes I had, too.  Like learning a second or third language, I seem to have reached the age where these skills are harder won than they should be -- even with my guide at hand.

The first JRPGs I ever successfully played to completion and enjoyed were The World Ends With You and Chrono Trigger, both on the Nintendo DS.  The former is DS native; the latter, obviously, a remastered re-release.  The fact of the matter is, I rarely enjoy Japanese or Japanese-style RPGs.  I don't tend to like controlling a party, and the slow-paced, half-mystical, often incoherent lyricism of the writing tends to grate on my fast-paced American sensibilities.

But then there's this.



And in fact, the first place I ever really heard the music was at Video Games Live, in 2009.



Frankly, that theme is too good to keep passing up, especially as my husband's original PS1 discs are running very smoothly in my gaming rig.

So that's where I am: wandering around outside of a village, using a skill/spell system that everyone else on Earth seems to understand intuitively but I have to struggle with, accompanied by a pink dog with a lisp.

I might hate that dog a little.  But right now it's my only friend in a hostile world, and that's how bonds are formed...

I wasn't expecting to like Chrono Trigger either, when I first started it.  I had to be wheedled into giving it a try.  And I didn't like the world or the characters or the navigation, and then suddenly I did, and I was traveling through TIME YOU GUYS OMG TIME TRAVEL LOOK AN APOCALYPSE and the next thing I knew the number of hours I'd put into playing it had passed my ability to track.

So I'm giving Chrono Cross its due.  There are huge swaths of gaming history I missed along the way to where I am now, and I need to make good on my quest to understand how the history of our art led to its present day, and leads to its future.

"Attack" I can handle.

Sometimes I remember that watching silent film didn't come naturally to a child of the 1980s, either, but that I adored The General when I finally saw it.  So this, too, is a language I can learn... one turn at a time.

Wednesday, June 29, 2011

The Gamer's Gaze, part 2

We're going to take a brief step back from diving deeper into the idea of the male gaze and spend a moment talking about straight-up looking: how do we see what we see in games?  We began by working through what the gaze means when we're discussing the cinematic camera.

But in gaming, the camera goes beyond the cinematic.  Games contain a level of spectator participation and interactivity above and beyond that of film.  Beyond the director's choice of camera placement, the player also has a say in where the camera goes.  Whether through direct control or through controlling our avatars, in most modern gaming we participate in the camera's positioning.  However, even when we define the literal point of view taken on the scene -- the actual angle -- we don't necessarily control the gaze.  It's a delicate balance, but the player does contribute to the gaze in games through active choice, in a way not present in most other media.  The game's authors, by creating the content and the camera, and the game's players, by choosing character perspective both literal and figurative, work in tandem to create the final gaze that the spectator takes.

Our ability to take on a changing and well-defined (or ambiguous) gaze has changed over time, as the technology of game design has improved.  The literal points of view available to us have shifted through the years.  Most players, for example, can admire the landscape and imagery of an isometric game, but few of us will actively control the camera in such a layout for any reason other than to better understand tactics or strategy.  The gaze is akin to the way a player looks at a chessboard, rather than the way an audience member looks at a play or film.

Diablo II was the granddaddy of a generation of clone RPGs.

In film, the camera represents and guides our view.  It leads; we follow.  Indeed, we have no other choice.  The director has pre-determined our viewpoint and the boundaries of frame and of on-screen space for us.  Film works because of limitations on the spectator gaze.  In order to create illusory spaces with real people and objects, a director needs to play with the camera and with space in all kinds of ways.

Forced perspective can make hobbits of us all.

Older generations of games were subject to nearly identical limitations.  For example, Myst -- now so very dated, but at the time so groundbreaking -- billed itself as "photorealistic" and meant just that: you moved through a world that was, essentially, a long series of complex and interactive photos.  The landscape was lovely but in terms of how we saw it, Myst was as much a flat pixel-hunt as Maniac Mansion or any shareware adventure game before it.

But in the 1990s, we gained a rather dramatic change with the arrival of explorable 3D game space.  Although it would take some time before our game worlds could be "true" 3D (and even in 2011, we're still fairly limited on that front), the difference was stark.  Given the chance to wander through a three dimensional space, the player suddenly gained some measure of autonomous control -- and notoriously, that measure was aim.  

The Doom games were beyond influential on the perennially bestselling FPS genre.

The most basic mechanic of the first-person shooter hasn't changed all that much in twenty years.  The game's camera and the player's point of view are meant to be one, intertwined.  Through fine mouse control (or later, and now more popularly, the analog stick), the player is expected fully to immerse himself (notably, not herself) in the role of the avatar.  The player explores the world, takes aim, and fires while fully inhabiting the persona of the character.  As a result, the player character may or may not be a well-defined individual.  Indeed, the player character can be an empty vessel, reduced to nothing but the gun -- but the world through which the avatar, and therefore the camera, moves is of utmost importance.

And those worlds have gotten pretty impressive, as Modern Warfare 2 shows us.

What's most interesting about the spectator's gaze in the FPS, though, is that game designers didn't invent it.  Alfred Hitchcock did, over 60 years ago.

Spellbound, 1945.

It's with Hitchcock (and the eternally fantastic Ingrid Bergman) that we vividly see that the first-person perspective is no accident and is not merely utilitarian.  The transgressive or voyeuristic potential of the gaze suddenly becomes apparent when we, the passive spectator of a film, are unavoidably thrust into the position of aiming a revolver at the film's star.  (The full clip is here, but it's from the end of the film so I'd recommend watching the whole movie instead.)  Whether or not we want to, we are following the aggressor's gaze -- and so we, in a sense, become the aggressor.

In a game that uses the first person perspective, we the player are put into a certain point of view on the narrative world; we are asked to inhabit the space in which the game takes place.  We take on the character's perspective in looking, in all the things that means.  The camera is in someone's head, and we literally see through those "eyes."  But unlike in Spellbound, or any other film, we choose where to look.  There's a level of player control available.

In a game like Bioshock, that control is the absolute key to the telling of the story.  The core narrative is framed around choice, while the story visibly runs on rails.  Progression through Rapture is intentionally linear, and yet the dialogue in the game speaks to freedom.  And of course at the key moment in the narrative, player control is completely removed, up to and including the ability to look around during cut scenes.  In fact, Bioshock is using Hitchcock's trick: the player is no more able to stop targeting Andrew Ryan than the viewer is able to stop targeting Ingrid Bergman.

The third person perspective, on the other hand, is both simpler and more complex.  We the player do not literally inhabit a character's point of view.  Rather than the role of protagonist, we are cast in the role of director, and we move our actors through their stage. We can see the player character, and how she or he is framed in the world.  In a sense, it's the difference between puppet and puppeteer, although that analogy makes the distinction sound sinister, which it's not.  Rather, it's a matter of artistic choice and mechanical necessity.

In a third-person game, the player does have the anchor of being tied to a player character, but also has the freedom to move the camera independent of the PC's perspective.  The trade-off for a broader perspective, though, is more limited range.  The game's designers control what positions are available to the player, and while in some settings the player can put the camera anywhere that doesn't require pathing through a collision plane, in other cases the view is as tightly scripted as Hollywood.  EverQuest II is a good example of the former, in that the player can put the camera anywhere around the character except underground, can zoom in or out as much as she likes, or can choose a first person perspective.

The latter option, however, seems to be the current trend in most console game design.  If the player is guiding Kratos, Ezio, or Drake through a story, then the camera will be a fixed tool that provides directional guidance and a set perspective.  Camera motion shows the lay of the land, potential climbing or escape routes, and likely avenues for weapon retrieval or enemy breakthrough.

In terms of gaze, this fixed third-person camera operates effectively as a cinematic camera.  Because the player does not contribute to its placement, the player is effectively freed from the implications of its gaze.  We may be watching all manner of unfortunate scenes, but our role is considered passive, at least in the sense that it is unintentional.  The damsel may be in distress or in disarray, but if we happen to watch her a certain way, it's because the game put that view there for us.

Although I'll give them credit: the Assassin's Creed games don't present this view all that often.

So when we deeply examine the presence of the male gaze in gaming, this is what we mean: when does the player have a choice over where to look?  How does the player look?  What is the physical presentation of women (and of men) when the player has control of the camera?  What is the physical presentation of women (and of men) when the player doesn't have control of the camera?  How is character agency reframed when the player controls the perspective?

The very literal gaze of the camera is what we've just explored in this digression: what angle does it view from?  How far afield can we see, or how close up?  But our real concern is this: what viewpoint and bias do those cameras reveal through their placement and methods?  How is the player's perspective from inside, say, Duke Nukem's (ick) head different from a perspective six feet behind Lara Croft, and what do those literal perspectives tell us about the values and archetypes assigned to the worlds they inhabit?

In short: how does literal on-screen framing tell us more about the figurative framework of the society that made the game?

We'll loop back around to that question, and unite parts 1 and 2 of this little series, in the third and final (I promise) installment.

[Edit: Part 3 is here.]

Sunday, May 22, 2011

Beyond the Girl Gamer 2.1: The System of the Worlds

Beyond the Girl Gamer: Introduction | 1.1 | 1.2 | 1.3 | 1.4
-----------------------------------------------------------------------

So far, we've talked a lot about characters: our protagonists, antagonists, and supporting casts.  Character design drives our gaming, to a huge extent, but it's just one part of the overall element of game writing, which is what we're going to examine in chapter 2 of this series.  And we're going to delve into some actual critical theory in order to do that.

Our transition, though, still begins to some degree with character, and with the concept of the "coin flip" character in gaming.  The concept is this: you need to determine something binary (a male or a female character), so you flip a coin to see if it comes up heads or tails and run with it.

I think of Chell, in Portal and Portal 2, as a coin flip character.  The game is completely, 100% unaffected by the PC's gender.  In this case, the coin came up female.  In Half-Life (2), there doesn't need to be a particular reason that Gordon Freeman is male.  Valve probably didn't flip the coin, but when you do -- sometimes it still comes up heads.  There doesn't need to be a particular reason that the Shepard of Mass Effect (2) is male; in the future, space marines come in all types.  And so BioWare has given us this most basic choice: to flip that coin ourselves.

This doesn't usually happen.  Not only does the coin not land on non-male, it also doesn't land on non-white or non-straight.  The straight white male is still an absolute default, and in the context of most games (and movies, and books, and...) any deviation has a distinct narrative presence.  There's a reason that THIS character has to be black, or female, but there's never a reason that a player character has to be a straight white dude.  He just is.  It's the unquestioned default.  (This is why the default Shepard is so boring to me.  He's generic, and there are thousands like him.)

Contrary to what some alarmists believe of all feminist thinkers, I agree that there's no good reason to make a specific man's story about a woman.  Sometimes you're telling the story of a man's life and that is totally cool.  If you are writing a historically accurate game about a knight in 12th century France well then by god, I expect him to dude up the joint in the manliest possible way, and I expect most of the powerful figures in his story also to be men, especially among the warrior and clergy classes. 

But when are our games ever historically accurate*?

Games take place in worlds of our own creation.  Law and Order can purport to represent New York City as it is.  We cannot claim to be representing Ferelden as it is, because there never was such a place outside of a writer's imagination.

But in fact, even when claiming to represent a place, like modern Manhattan, as it actually exists, all fictional media fail to some degree or other.  The story being told is always one that was written by a human, and one that is being filmed and edited by a human.  In any TV show, movie, or game, the world as we see it is entirely constructed.  Someone came up with it, and made it, and everything in it is intentional.  Even the "reality" that bumps in (as in traffic on the street in Law and Order) is a deliberate choice -- someone chose not to use a soundstage, not to close that street, and not to use a different, traffic-free take.

This basic idea -- that we are not ever watching reality, but are looking at a construct -- is at the core of all film studies and so it is in one of my old introductory film textbooks that I looked for the best description:
 "What film reviews almost always evade is one of the few realities of film itself, that it is an artificial construct, something made in a particular way for specific purposes, and that plot or story of a film is a function of this construction, not its first principle." 
Robert Kolker, Film, Form, and Culture, 2nd ed (2002). (p. xvii)
Rephrased, the most important concept to understand in early Film Studies is this: the characters are never the creators of the story's events.  Han and Leia don't flirt with each other due to mutual attraction; they flirt with each other because a script-writer called for it and a director put it on camera.  The story that you see unfolding is an element of the film you are watching.  The same is true of gaming.

Further, the sum total of everything put into the image you're looking at, in film, is called mise en scène (because the French had the first crack at written film theory).  It's basically the idea that lighting, set design, and every other visual in a scene help tell your story.  The textbook example (literally, it's in every introductory film theory and film history book out there) is the 1920 German film The Cabinet of Dr. Caligari.  One look at a famous shot and it becomes obvious why:

91 years and thousands of films later, it's still creepy.

For us, and for our purposes going forward, the really important, unbelievably crucial point is this: Game worlds are 100% digital and therefore, 100% constructed.  Nothing is simply "found" and nothing is incidental or accidental.  Every pixel is deliberate and intentional -- even though those pixels can also be utterly thoughtless.  "Created" is not the same as "carefully created."

Let's take ourselves back to mise en scène for a moment.  Can anyone argue that this environment, shown below, is not absolutely as carefully crafted, and as essential to the story, as in any film?  It has, in fact, been argued that the real main character of Bioshock is the underwater city of Rapture, and there's something to be said for that.

It's like a murderous and awkward Renaissance painting in here.
But the fun part is, it's not just the modern, cinematic games that use this concept so crucially.  I think the first game where I became really aware of the environment beyond my character as essential was Super Mario Bros. 3.


Yes, this game.

In SMB3, the sun itself pulls right out of the background art and becomes an enemy.  All of the brick types work differently (two are shown here).  Enemies, fatal to the player character, come popping out of the environment regularly (the plant in the image above being just one example).  And in levels composed of large, scenic blocks (World 1, Level 1, for starters), the player can actually drop behind the white ones.  Literally, the player can take herself behind the scenes of the video game's environment -- but only at certain times.

So when we're looking at a game, and analyzing it in any way, the crucial thing is for us to remember that everything is created.  We need to remember to step outside of the narrative and to repeatedly ask how and why the designers of the game chose to frame it or to make it progress in the way they did.  If we're asking, "Why does Naomi Hunter wear her shirt unbuttoned so far down in the lab?" it's the wrong question.  We should be asking, "Why is this world designed in such a way that our scientist is an attractive female who keeps her shirt unbuttoned so low while working?"  If Nathan Drake bumps his head going into a tunnel, the question is not, "Why is he so clumsy?" but instead, "Why did the game's creators decide this tunnel was two inches shorter than their protagonist?" or, "What are we meant to learn about this character through seeing this collision?"

Sometimes, when we're asking these questions from outside of the narrative, the answers will be mundane.  "Budget restrictions" or "tight deadline" are probably the most common answers, across all games and studios.  If we're asking why the Courier in Fallout: New Vegas is silent, that's probably the answer we'll get (not enough time and money in the world to make recording every possible line a worthwhile design choice).

But sometimes, we'll find, on asking, that no one thought carefully about a design choice one way or the other, and instead just made an assumption based on his or her own cultural defaults.  Those are the most interesting answers.  From these moments, we learn more about the culture producing the game -- we learn more about ourselves, and about what will need to change in the future if we want different games.  From the same text I cited earlier:
"The idea of culture as text means, first, that culture is not nature; it is made by people in history for conscious or even unconscious reasons, the product of all they think and do.  Even the unconscious or semiconscious acts of our daily lives can, when observed and analyzed, be understood as sets of coherent acts and be seen to interact with each other.  These acts, beliefs, and practices, along with the artifacts they produce ... have meaning.  They can be read and understood."
Robert Kolker, Film, Form, and Culture, 2nd ed (2002). (p. 116)

Here in the real world where we live, everyone is allowed to be incidental.  People come and go, because they're people.  The main character of my life is a straight white woman (and I am she).  When I am at work, if I am taking the elevator from the ground floor to the 10th, and the doors open on 6, the odds are about 50/50 whether a man or a woman will board.  Similarly, in my workplace in particular, the odds are about 50/50 that the person boarding would be white or a racial minority, and about 1 in 6 that the person boarding would identify as non-straight.  I would expect and understand any of these, because I move in a world full of people.  If I am taking the Metro home, and the doors open at Union Station, I would expect an even bigger range of diversity in boarding passengers.

If Solid Snake were in an elevator going up ten stories, and the doors opened on 6, there would have to be a story-driven reason for a woman to board.  (In fact, there would need to be a narrative reason for the doors to open at all.)  Snake moves in a world of ideas, concepts, and tropes, not in a world full of people.  We say "truth is stranger than fiction," because we expect fiction to make sense.  But what kind of sense?  Does fiction deserve as much random diversity as reality has?

And so, in the next chapter: what spaces do our characters live in and why do our characters live in these spaces, when they could be anywhere?

*Your Critic will not have a chance to play L.A. Noire until later this summer, so if this one is the exception that proves the rhetorical question, well, try not to leave spoilers.