If you read my blog regularly, then you may have noticed that I did not do posts about either the Ghost of Tsushima or The Last of Us Part II State of Play presentations. This should have seemed strange to my regular readers because up until this point I have done a post about every State of Play episode since the beginning. So I wanted to talk about why I chose not to do posts for those two presentations, and why I will continue not to do posts for State of Play presentations done in the same style moving forward.
I really like the original State of Play format. It’s very similar to the Nintendo Direct format, but in some ways I like it even better. I won’t go into too much detail, because I’ve already written at length about this previously. But basically what I like(d) about the State of Play format was the highly informative, time-efficient look at multiple upcoming games. Even more, I valued the fact that they often gave time to games that were not highly anticipated AAA titles that had already been hyped up for more than a year. These recent single game presentations have betrayed that original format/style.
The last two State of Plays were your run-of-the-mill AAA E3 presentations done via video. You could have taken either one of those presentations and played it live at E3 and it would have been no different. And I think it’s fairly obvious that they happened because E3 will no longer be happening this year. Let me state clearly that I have no problem with such presentations and that I was already planning on buying both of those games. I’m especially excited for Ghost of Tsushima and preordered the Special Edition before this presentation even went live. What I do have a problem with is that these presentations are being given the State of Play label.
By labeling these extended single game presentations as State of Play episodes, SONY has essentially betrayed the original format and altered it to mean pretty much any game related content they choose to put out digitally. That’s a bad thing, in my opinion. It’s disorganized and completely derails the user base’s ability to set expectations for future State of Play episodes. The next time we get a State of Play announcement, we will have no way of accurately predicting what it will be. Will it be a single game presentation, multiple snapshots of upcoming indie games, a new game announcement, or something completely different? Note that I’m not saying that any of those types of content is more or less valuable than the others. What I’m saying is that users have varied interests and should be able to decide whether or not they want to watch a presentation beforehand based on expectations of what it will be. But SONY has removed our ability to accurately set those expectations, thereby trying to manipulate everyone into sitting through presentations they may or may not have an interest in.
The weirdest thing is that the, now rightfully delayed, June 4th PS5 presentation wasn’t labeled as State of Play. This was billed as a presentation of upcoming games by multiple studios of various sizes. Other than the longer running time, this was far more in line with the original State of Play format than the single game presentations, and yet they labeled it The Future of Gaming. So the question I have is why create an entirely new name for a presentation that falls more in line with the original State of Play format while not creating a different name for presentations that don’t fall in line with the original format?
Truthfully, they didn’t even need to give those single game presentations a label to begin with. They could have just billed them as gameplay presentations of their respective games. That is a common form of content released by publishers and developers. The decision to label them both as State of Play presentations was an intentional one, and I find that disappointing. Because I want more content in the original State of Play format. I don’t want the only type of presentations from PlayStation to be long form presentations of AAA titles I already know I’m going to buy. That type of content is pretty much useless to any informed gamer. It just builds hype. I know plenty of people who didn’t even watch The Last of Us Part II presentation because they had already decided whether or not to buy it based on the previous presentations, the hype buildup, and, for some, the leaks. Meaning the presentation did very little to push people in either direction. Whereas a presentation of upcoming titles that weren’t already super hyped and highly anticipated would have been much more valuable and informative to a larger number of players.
I know I probably sound like Grandpa Simpson yelling at clouds, but these sorts of choices are important. They can mean the difference between an otherwise unknown game getting some much needed, and often deserved, time in the spotlight and an indie studio going bankrupt. They also affect users. I don’t necessarily care to watch an extended gameplay presentation of a game I’ve already decided on. But I absolutely want to watch a presentation of multiple game announcements or snapshots for titles I’m not aware of or familiar with. And like most people, my time is both limited and valuable to me. But the next time SONY says a State of Play is incoming, I won’t necessarily know what to expect. So in a way they’ve taken away my agency as a viewer, because I’ll potentially be going in blind and could very well end up highly disappointed with the content. Not because the content is necessarily bad, but because it’s content I have no interest in watching.
Again, I like the original State of Play format. I’m sad to see it already being betrayed after only four episodes. I hope SONY hasn’t decided to kill it off altogether this early on and opted for exclusively traditional single AAA gameplay presentations. For me, that would be a real tragedy. It’s only because of the State of Play presentations that I took a serious interest in games like Untitled Goose Game, Predator: Hunting Grounds, and Wattam. And it doesn’t matter if any or all of those games were ultimately good or bad. Predator: Hunting Grounds is bad, by the way. What matters is that the State of Play episodes got me looking at games that I otherwise was never going to consider buying or probably even trying. That’s what the original format of State of Play was accomplishing: alerting gamers to games they may not have had on their radar. And that’s what it needs to continue to do. As such, if PlayStation continues to put out State of Plays as AAA game presentations for games that have already been hyped up and had lots of previous content released, then I will continue to not cover them on this blog. Because talking to you about previews we didn’t need serves even less purpose than the presentations themselves. At that point it makes more sense to just wait and review the full games after I’ve played them.
I’m not a huge fan of open world games. I don’t hate them. Nor do I have any problem with playing them. I’ve played countless open world games over the course of my life. This year alone I’ve already played three or four of them. Currently I’m playing Assassin’s Creed: Origins. I definitely play and enjoy them, when made well. But I don’t prefer them. I’m fine with a linear game. I think the best design choice is the soft open world game though. Not a full open world but rather a limited-size map that allows for exploration in controlled environments. Games like Shadow of the Tomb Raider, Nioh 2, and God of War (2018) are all different versions of this concept. They all contain open exploration in controlled map settings but are not housed within large open worlds. As a 30-year-old with a large backlog and many responsibilities, this type of world/exploration design is definitely my preference. It allows the feeling of exploration and discovery without the daunting task of spending countless hours combing wide empty spaces for a single collectible.
When I was a kid, open worlds existed but they were rare. The Legend of Zelda: Ocarina of Time is probably the most prominent example of an open world from my childhood. They also weren’t the standard or the preference for most players. By the time I finished high school, open world games had become common. Franchises like The Elder Scrolls, Assassin’s Creed, and GTA had become favorites in the gaming community and ushered in a new era of game development standards. By the time I graduated college, open worlds had become more than just a genre. They were an expectation. Everyone seemed to want open worlds and developers went out of their way to provide some version of them. Even when the games they were making didn’t really need it. Competing with franchises like Red Dead Redemption, Borderlands, and Far Cry made open worlds a must for many players. But the way open world games were judged was pretty stupid and misinformed.
For whatever reason, open worlds weren’t judged like other video games for a long time. Most games were/are compared based on their gameplay, graphics, and story. But open world games were being judged on things like map size, percentage of interactive elements, and graphics specific to the environment. A badly written game with a large open world was being judged favorably next to a linear game with a tight, well written story. Suddenly size and appearance seemed to be the only things that mattered in game design. And developers were happy to play along. It seemed like every game was getting bigger and bigger while the content quality was getting lower and less focused on storytelling. This is the era where fetch quests and collectibles resurged and then blew past N64 proportions. It was a time of repetition and lots of walking. Even today, most games still don’t get fast travel right, but back then times were real bad. But it seemed like everyone was eating it up. People were complaining about games being too linear and not having enough content while defending things like “collect 40 feathers” scattered around the map. I hated it. I played through all of it, but really did not enjoy it. It’s one of the main reasons I never really got into 100% completion runs of games. I can’t be bothered to do collectithons when the map is huge and fast travel is garbage. And Ubisoft, among other studios, loves collectithons. Especially back then.
While the open world genre has now been tempered with smaller map franchises like Dark Souls, South Park (Ubisoft), and Darksiders, the demand for bigger maps has continued to proliferate at the same time. Games like GTA V, Arkham City, and Dragon Age: Inquisition are all examples of franchises that just kept growing their map sizes more for the sake of comparison than anything else. Yes, those three games do have lively maps, but they also have lots of open space that’s good for nothing more than wasting your time as you move from point A to point B. And maps have continued to grow even more out of proportion. Ghost Recon: Breakpoint has a map that is just unnecessarily large. I’d say the same thing about Metal Gear Solid V and many other open world titles. And sadly this is the fault of consumers rather than developers.
It’s important to note that there is nothing inherently wrong with open world games or large maps. But there’s also nothing inherently right with them either. It’s the way the world is constructed and what you can do in it that matters. The setting plays a role as well. Let’s compare Ubisoft games as an example. Ghost Recon: Breakpoint, Watch Dogs 2, and Assassin’s Creed IV: Black Flag are all open world Ubisoft titles. Of the three games, Breakpoint’s map is the only one I’d say was too large. Yet I think the Black Flag map might actually be the largest of the three games. What it comes down to is setting and scope. Black Flag is set in the entire Caribbean region of the Atlantic Ocean, Watch Dogs 2 is set in San Francisco, and Breakpoint is set on a fictional island. Black Flag is set in a time/setting where your only means of transportation across large areas is a ship. Watch Dogs 2 is set in a time/setting where you can only use road vehicles like cars and motorcycles. And Breakpoint is set in a time/setting where you have access to smaller boats, helicopters, road vehicles of various types, and motorcycles. When it comes to scope, Black Flag should feel the biggest. It has you traveling between countries by sailing ship with no motorized power. Watch Dogs 2 should feel the smallest. It has you driving around a single city. Breakpoint should feel larger than Watch Dogs 2, but not so large that it rivals or even surpasses Black Flag. Yet it’s the most grueling and time consuming of the three games to travel across.
Breakpoint’s unnecessarily large map is coupled with the fact that the map is fairly empty. You spend an exorbitant amount of time just traveling across it. To its credit, a large number of fast travel points are present in the game. But even then you still have to travel a ways to get to most objectives and collectibles after using fast travel. The helicopter, which is easy to get, doesn’t make the trip that much faster. The map could easily be condensed by about 25%, or even more, and suddenly the amount of empty space with no encounters or gameplay value would stop being such an issue. But in the pursuit of bigger maps for the purpose of marketing and bragging rights, they made the map as big as they did. It did not make the game better in any way. It only made it appear to be bigger than it actually is when comparing actual content. These inflated map sizes have become the standard in open world game design. And games are worse because of it. But it seems that change for the better is finally on the horizon.
I have not played Assassin’s Creed: Odyssey yet, but I intend to. I’ve heard it is very good, and considering how much I’m enjoying Assassin’s Creed: Origins, I’m not surprised. But the biggest legitimate complaint I’ve heard against the game over and over again is that the map is simply too big. Or more accurately, it’s too empty for how big it is. Seeing larger numbers of consumers complain that maps are starting to get too big makes me happy. It means that demand is changing. It means that people are finally waking up to the fact that more time spent in a game doesn’t make the game better if that time isn’t spent on meaningful content. It means that the never ending push for bigger maps is on its way out. It means that collectithons will finally start to shrink or disappear altogether from open world games. Games are shaped by demand, and the demand for mindlessly larger maps is finally in decline.
Assassin’s Creed: Valhalla is the next game in the franchise. Ubisoft has already stated that the map size will be managed better, though they have waffled on what that means, with conflicting announcements about whether it refers to smaller size, less empty space, or something else. But the point is that Ubisoft has openly acknowledged that the large map in Odyssey was not enjoyed by a large number of players and that this issue would be addressed in Valhalla. This is how it starts. This is how the industry ultimately gets past the idea that bigger maps automatically mean better games: people making their voices heard in large numbers, and developers actually listening and making small but notable changes. Valhalla will still probably have a map that’s too big with a lot of empty space. But some considerations will have been made. Then people will complain again and more considerations will be made in the next game. And other studios will see these responses from the public, and they too will start to reduce the size of their maps. Or fill them with more meaningful content and less empty space. The market is finally primed to return to practical maps where space exists for the sake of content and not spectacle, and I for one am looking forward to it.
I’m not really big on director’s cuts of movies. Or more accurately, I’m not really big on the idea of multiple cuts of movies existing and being distributed. I want the theatrical cut of a movie to be the director’s cut. The entire concept of director’s cuts irritates me because it assumes that not only do I want to watch a movie a second time, but that I want to pay additional money to do so with the promise of a bit more footage. In my opinion, they should just let directors direct. But producers actually control a production a lot more than many people realize. The truth is that 9 times out of 10 you’re actually watching the producer’s cut of a movie.
The most interesting thing about this odd dynamic between producers and directors is that people have been conditioned to favor directors while not really caring about producers. Think about how movies are billed. Producers are always listed in the opening credits, on posters, and in ads, but they’re never the focus. People are sold on the director and actors. Most don’t care in the slightest about who the producer is unless they’re currently being accused of sexual assault or heading a larger franchise like Kevin Feige at Marvel. But for a majority of films, regular people don’t really care about producer credits. And yet producers hold all the power. We’ve even seen productions where a director was fired simply because producers felt they weren’t being shown enough respect. But still the movie marketing machine pushes people towards favoring directors rather than producers.
The director’s cut concept is a money making scheme that only works because of this relationship between producers and directors. But up until now it didn’t matter that much in most cases. Think about what director’s cuts actually accomplish. They give people the opportunity to rewatch a movie with a few additional scenes and rarely affect anything important about the story. Even in the case of a franchise, they usually don’t affect much. Look at The Lord of the Rings trilogy by Peter Jackson as an example. The extended director’s cuts add in some scenes for the sake of lore, but don’t really alter the story in significant ways. Of course, part of this probably comes from the fact that Peter Jackson was both director and producer on those films. This is often the case with bigger productions. But it’s not always the case. The Avengers (2012) was written and directed by Joss Whedon, but the only listed producer credit is Kevin Feige. While we of course can’t speak to the dynamic between them during production, on paper Kevin Feige was in charge of that production. Whedon was simply the instrument being used to create his vision. And yet it was Whedon who wrote the script. And notice that there is no director’s cut of the film available. There is an extended cut with a few extra scenes, but in no way has the marketing ever implied that the theatrical version and the home release version were significantly different films.
The real question is what happens when the dynamic changes from producer vs director to producer vs director vs director? The public has always been fed the idea that directors and producers often disagree and that this is the reason director’s cuts exist. But never before have we seen a scenario where a movie was released and then the same movie was remade by a second director with the same footage. This is completely different from the idea of a theatrical release vs a director’s cut. This would be two completely different movies with completely different visions. And technically producers could still disagree with both versions of the movie leading to two sets of theatrical versions and director’s cuts.
Releasing two completely different versions of the same film by two different directors is problematic. Doing it as part of an established franchise with an interconnected set of films is an absolute shit show. Think about how much people already fight over things like canon in nerd franchises. Now apply that to a comic book universe where two different directors make the same film in the timeline. It has the potential to be continuity chaos. With all that being said, let’s discuss the Snyder Cut.
The Snyder Cut refers to an alternate version of the film Justice League (2017). Many people are either not aware of the situation or are working with incomplete and/or inaccurate information. So, for the purposes of accuracy, I will summarize my interpretation of the entire controversy here, based on the reports I’ve read:
After the success of the Marvel Cinematic Universe Phase One (six films released between 2008 and 2012), DC/Warner Brothers decided that they could create a similar level of success with the Justice League comic pantheon. This would go on to be called the DC Extended Universe (DCEU). They decided that rather than copy and paste the Marvel tone and style, they would create a darker toned cinematic universe. They also wanted to get their first big crossover film out before the MCU released its culminating crossover film (Avengers: Infinity War). Zack Snyder was chosen to head the project. Or at least he was chosen to be the lead director, if you want to be entirely accurate. With darker toned comic films under his belt such as 300 and Watchmen, this seemed like a fine choice for a darker toned cinematic comic book movie universe. Snyder directed both Man of Steel and Batman v Superman: Dawn of Justice, the first two films in the DCEU. For the most part, people weren’t happy with either film. Neither film has above a 60% on Rotten Tomatoes. And Batman v Superman, the latter of the two, has exactly half the score of Man of Steel. In both cases, the audience scores are significantly higher, but neither gets above 75%, and Man of Steel still ranks higher than Batman v Superman. Meaning both the critical and public response to Zack Snyder’s DCEU films was less than positive and was dropping from film to film. At the same time, both films were financially successful, bringing in more than double their production budgets in box office receipts. As such, Warner Brothers chose to have Snyder direct Justice League, which would serve as the equivalent to The Avengers (2012).
Zack Snyder started Justice League, and it was reported that he produced an unfinished but technically complete draft of the film. Meaning that shooting had been completed but editing and reshoots had not been finalized. His daughter died during post-production of the film. That’s an important detail. She died during post-production. This confirms that a complete draft of the film had been produced; it just hadn’t been finalized. Due to the tragedy of losing a child, Snyder resigned from the project and was replaced by Joss Whedon, the writer and director of The Avengers. It’s important to note that a majority of people believed and accepted this story at the time of reporting. No one was unhappy with Snyder for leaving the project to mourn his daughter. And pretty much no one took issue with the idea of bringing in Joss Whedon for post-production, as a director proven to be capable of creating both critically and financially successful ensemble comic book films. Ultimately Justice League sucked, but it was also financially successful, more than doubling its production budget in box office receipts. Technically speaking, it sucked less than Batman v Superman but more than Man of Steel, both critically and with the public, based on Rotten Tomatoes critical and audience scores. But people decided to ignore this fact and argued that Snyder would have made a better movie. This is where things get tricky.
It’s only because Justice League was disappointing that people turned on Joss Whedon. If the movie had been as good as The Avengers, the conversation would have ended there. In the same way, it’s only because the movie sucked that people supported Snyder. People were not happy with Snyder’s first two DCEU films. But they were so unhappy with Justice League that they wanted to believe that Snyder’s film would have been better. This was coupled with the fact that it had been announced that an actual full cut of the film had already been produced by Snyder. Again, he left the project during post-production. But really the most important detail in this entire story is the fact that after Justice League released, and sucked, it was reported that the producers actually didn’t like Snyder’s cut of the movie. Suddenly conspiracy theories saying that the producers had actually wanted to fire Snyder from the project but were able to use his daughter’s death as an amicable way out were going viral. Essentially, people invented a story that made it seem like Snyder originally made a completely different film than what was released and that the evil woke producers, with the help of Joss Whedon, killed Snyder’s vision and released what, again, was a critically and publicly better received film than Snyder’s last DCEU film. It was these conspiracy theories that led to the #ReleaseTheSnyderCut movement.
For the past three years, people have campaigned unceasingly to see Zack Snyder’s version of Justice League. For most of that time, Warner Brothers stated that they would absolutely not release that version of the film. This makes sense for two major reasons. First, it is extremely rude to Joss Whedon, who not only produced a better received film than Snyder’s last DCEU project but also a financially successful one. Joss Whedon did what he was hired to do. Directors are hired by producers to make financially successful films. That’s all they’re expected to do. Winning awards is nice. Making fans happy is nice. But those aren’t a director’s job. A director’s job is to make producers money by delivering financially successful movies. Joss Whedon succeeded in this endeavor with his cut of Justice League. The other equally important reason that Warner Brothers didn’t want to release an alternate version of the film, ignoring the fact that it’s pretty much never been done before, is that it would be a continuity nightmare.
Three other films within the same universe as Justice League have already been released, with more on the way, one of which is already completed and another already in post-production. All of those films potentially stop making sense depending on the events that take place in an alternate version of a crossover event film. What if someone dies? What if someone lives? What if a dynamic changes? What if a special item is lost or found? The idea of releasing an alternate version of a key film in the timeline four years after the fact is world-building suicide. After three years of campaigning, HBO was allowed to purchase the distribution rights to the Snyder cut of Justice League from Warner Brothers. HBO realized the cut was in fact garbage but knew that the public would pay to see it. So they paid Snyder to recut the film and are investing additional funds into reshoots and additional CGI. The “new” Snyder cut will be available for HBO Max subscribers in 2021. That brings us to today.
I do believe in the power of the consumers. I believe that through diligence and organization we the people can accomplish great things. I often think back to Star Wars: Battlefront II and how the public demanded change and got it. Another example is the XBOX One and the always online announcement. So even though I absolutely don’t agree with them in this case, I respect the commitment shown by the #ReleaseTheSnyderCut people. Ultimately I’m disappointed by how this appears to be ending though.
Let me be clear in saying that while I respect their right to campaign, I am personally 100% against this release the Snyder cut movement. I think it sets a terrible precedent for entertainment media. Especially for creators. If people can just decide that because they don’t like something it has to change, then those who create have no say in the works they produce. We’ve already seen movements to recast actors in certain roles, fire directors, and reshoot entire seasons of shows. We’ve seen fan reedits of films and deepfakes removing or replacing actors in movies. This is not a good thing. People spend their lives trying to make things, and the ability to just change those things or declare they don’t exist shouldn’t be considered a viable option. I’m fine with people not supporting something they don’t like. I encourage people to withhold their money and choose not to participate in games or movies they take issue with in order to vote with their wallet. But I do not support the idea of people being able to negate things that have already been produced. Movements should shape the future, not the past. If you were unhappy with the last three Star Wars films, then you should let Disney know by not paying to see the next one. This in turn will hopefully lead to the change you want to see in how the films are made. But no matter how unhappy you were with the films, you should not be able to dictate that Daisy Ridley, Kelly Marie Tran, or John Boyega were never in Star Wars. Because that’s not true, nor is it fair to those actors who spent their lives trying to make it as actors. They deserve recognition for the work they have done, even if you weren’t happy with it. As much as I hated The Last Jedi, I’d never argue that Rian Johnson’s name should be taken off the project.
I’m also a big stickler about canon. I like connected universes and intertwined plots. I like that some small detail revealed in one movie comes back and ultimately shapes the plot of another character’s movie much later. That sort of universe construction can’t work in a scenario where films can be changed or redone at the whim of the people. The alternative is badly produced one-off films that sort of connect to each other based on recurring actors and names. Look at the X-Men cinematic universe as the best example of this: 10 movies that are sort of related, filled with plot holes, with almost no coherence or general direction. Plus two Deadpool films, if you want to get technical. And most of those movies are carried more by Hugh Jackman’s acting and special effects than by the quality of the writing or general interest in the other characters. Which is a tragedy considering how good the X-Men characters and stories from the comics and cartoon are. But that’s exactly what happens when you create a franchise of movies with no defined direction and change installments based on the whims of the people after the fact. And that’s still not as problematic for canon as rereleasing films a second time would be. How will canon be defined in the DCEU moving forward? Will the events of the original release still count, or will changes in the Snyder cut be considered valid canon? Will people now be forced to watch multiple versions of the same movie and debate what counts moving forward? These are questions that no one seems to be asking. So no, I am not in support of the Snyder cut.
While I am absolutely not in support of the Snyder cut being released, I do support the idea of the people’s demands being met in response to their avid dedication. That’s why I am very unhappy with how this whole situation has ultimately turned out. The people who campaigned think they won, but in reality they’ve been conned. I’ve already seen people declaring victory since the official HBO announcement of the Snyder cut release next year, but the truth is that they’re not actually getting what they demanded. We were told that a version of Justice League exists that was already completed by Zack Snyder. It was rumored that the producers didn’t like this cut. People didn’t like the Whedon cut, so they demanded the Snyder cut. But that demand was based on the understanding that such a cut of the film already exists. Yet that’s not what is being delivered. If such a cut really does exist, there’s no need to wait for 2021. They could release it today. Instead they’re investing millions of dollars to bring back the actors for reshoots, adding CGI, and letting Snyder take another crack at cutting the film.
Let’s be very clear about what’s happening. Three years’ worth of criticism, debate, blog posts, and film reviews have piled up stating what’s wrong with Justice League, a movie originally shot by Zack Snyder but credited to Whedon based on last minute edits and a few rumored reshoots. We’re not gonna get to see the Snyder cut. We’re gonna get to see the Snyder mulligan cut. He knows what made the people angry. He now knows what will make the people happy and what wouldn’t have worked in his cut. He’s being given the budget to reshoot and reedit vast sections of the movie. That movie better be damn great. And there’s no reason it shouldn’t be. But that’s not how movies are made. Any movie could be great the second time around. Imagine if J.J. Abrams could just redo Star Wars Episodes VII – IX. Imagine how much better they would be and how much more people would like them. That’s not real filmmaking. Making a movie is about risk. It’s about reading the fanbase and trying to impress them while also trying to surprise them, without making them angry. If you already know exactly what makes them angry and what makes them happy, you can’t really mess up the movie. But that’s not an honest filmmaking scenario. I would want to see the real Snyder cut. I assume it would be shit, but I’d watch it anyway. I don’t want to see Snyder get the easiest golden parachute filmmaking scenario ever conceived so that people end up praising him even though his first two DCEU movies were at best OK and at worst hot garbage. But that’s exactly what’s going to happen. The Snyder cut movement is getting conned into subscribing to HBO Max to watch a movie that isn’t the one they fought for 3 years to see. And they’re thankful for it. That’s really depressing. That level of blatant and out in the open manipulation pisses me off something fierce. The hashtag was #ReleaseTheSnyderCut, not #ReleaseTheSnyderReCut. These people didn’t win. At best they tied.
The thing that makes me really mad is that it appears this whole movement isn’t over. I’m not surprised, because I’ve already stated my fears for filmmaking moving forward, but it hasn’t even been a month and I’m already seeing new hashtags like #ReleaseTheAyerCut in reference to Suicide Squad. As if there’s some other version of that film that isn’t a dumpster fire. This is the problem with allowing post-release alternative versions of films. People will no longer accept any movie as is if they don’t like it. They will just assume they’re being lied to and that producers are snubbing directors from presenting their vision. I hope this whole line of reasoning ends here, but if it doesn’t, the future of cinema, and arguably all plot-based entertainment media, is in for hard times.
In 2013, I was in a really weird place in my life. Maybe the lowest I’ve been since I graduated college. I was living in a shitty town in a shitty state making pizza in a bar with a dual degree from an Ivy League university. No this isn’t the story of another failed liberal arts degree student. This is a story about love. My girlfriend, now wife, was attending graduate school in a small town I’d never heard of and I moved there with her to support her financially. What I wasn’t aware of when I agreed to move there was that there were no real businesses in that town except bars. I didn’t own a car at the time because we had moved there from abroad. And even if I had owned a car, we lived in a college dorm, provided by her graduate program, that charged a fortune for parking so owning a car in that scenario wasn’t really an option anyway. So I got the only local job I could find, which ended up being making pizza in a bar. I worked long hours, weekends, and was paid very little. But I did it because you gotta do what you gotta do.
At the time I owned a SONY Vaio laptop that was three or four years old. I had used it during college and couldn’t afford to replace it so I continued using it as my only computer option. It was good enough for basic things but it couldn’t run most games other than older emulators and indie titles. Some of my followers may remember my failed attempts to stream via that laptop back in those days. I spent most of my time gaming on my PS4 and Wii U and usually streamed via my PS4 directly to Twitch. I also recorded a lot of footage and uploaded it after the fact. My laptop could handle this. It just took a really long time to process the videos.
During this time, a friend recommended that I try a game called The Witcher. It was a PC game made in 2007 by some Polish developer I had never heard of. I didn’t know a thing about the game. Today that seems ridiculous to say, but this was before The Witcher 3 was really being talked about. In fact, it was like right before. If you followed the company and the franchise, then you probably already knew about it and were looking forward to playing it. But if you weren’t already into the franchise then, like me, you probably knew nothing about it. And I’m someone who’s usually pretty knowledgeable about upcoming games even when I’m not looking to play them myself. I wasn’t really interested in playing The Witcher but both it and The Witcher 2 were on sale on GOG for like $4 together so I bought them more to appease my friend than out of any actual interest.
As with most games I buy, I didn’t end up playing The Witcher as soon as I bought it. A few weeks or maybe even months went by. Then suddenly The Witcher 3 began its mainstream marketing run. This was actually one of the last games I remember seeing commercials for on cable, because this was the last time in my life that I regularly watched cable TV. The game looked amazing. We know now that it was/is, but at the time the ads were the thing that really sold me. But I’m the type of person that needs to play all the games in a franchise in order. So my desire to play The Witcher 3 finally pushed me to start The Witcher.
Thankfully my old laptop could run The Witcher. This shouldn’t be surprising because the game came out about three years before my laptop. I would call The Witcher the best bad game I’ve ever played. It can only be described as some of the best writing I’ve ever seen in a game coupled with some of the worst gameplay I’ve ever forced myself to slog through to the end. It’s not even accurate to call it a great game so much as a great experience. I absolutely hated actually playing it but I couldn’t get enough of the story, characters, and world. So when I finished it, I immediately knew that I was gonna play The Witcher 3 and literally loaded up The Witcher 2 as soon as the credits finished rolling. This is where my troubles really began.
The Witcher was released in 2007 and my laptop from 2010 could run it with little issue. Even though it wasn’t a gaming laptop, the leaps forward in technology over that three year gap made an office laptop viable for playing an old game. The Witcher 2, on the other hand, was released in 2011. While it wasn’t released that far after my laptop, it was a modern game with hefty graphics for the time. Sadly my SONY Vaio just couldn’t hack it. Even at the lowest settings, I was not able to run The Witcher 2 smoothly. I was so depressed that I couldn’t play that game. At this point I no longer owned an XBOX 360 and for some stupid reason that was the only console the game was available on. I could have gone out and bought a used one, but I refused to go back to a console that had already broken down and been replaced on four separate occasions before I finally gave the system up for good. That meant that my only option was getting a new PC.
It was at this moment that I finally decided to build my own PC. I had known multiple people in college who had built their own gaming desktops but the prospect of doing that always scared me. It seemed too difficult, too expensive, and too risky. But I decided that was as good a time as any because I really wanted to play The Witcher 2. The Witcher 3 was a non-issue because I could get that on PS4 if I wanted to. But I had to play The Witcher 2 first. I never do anything small. If I’m gonna do something, I’m gonna take it seriously from start to finish. I wasn’t just gonna build an OK PC that could barely run The Witcher 2. I was gonna build a hefty system that could easily tackle running The Witcher 3. It ultimately took me three years of studying, saving, and planning before I finally built my gaming desktop. By that time I had left that shitty state (and country at this point), moved back abroad, and had landed a job in the PC hardware industry. My passion for playing The Witcher 2 in many ways led me to where I am now.
I got the PC built but rather than play The Witcher 2 right off the bat I, like many gamers, got distracted by other titles. So the game I had built my PC to play got pushed aside for a long time. I’ve played countless games on my PC since then. If you watch my streams then you know some of the much more advanced games I’ve played on PC such as Watch Dogs 1 & 2, Star Wars Jedi: Fallen Order, DOOM, Ghost Recon: Breakpoint, and the list goes on. I’m very happy with my PC and I’m proud of myself for the accomplishment it was to pay for and build it. But I didn’t actually end up starting The Witcher 2 till three years after it was built.
Last month I finally started The Witcher 2, and last week I finally completed it. It took almost seven years of dedication to a single goal to reach this point. There were definitely distractions and roadblocks along the way, but I got here. It might not seem like the biggest accomplishment in the world, but to me it’s important. That’s why I felt it was necessary to document this moment here.
I committed to building a PC and playing The Witcher 2 in 2013. I finished The Witcher 2 on May 11th, 2020. And now I can finally play The Witcher 3. But I’ll probably put it off for like another three years because reasons.
Having now played Animal Crossing: New Horizons for 170 hours, I can say two things. The first is that the game is a depth-defying evolution of the concept since the original game released on the Gamecube almost 20 years ago. It’s accessible, simple, and addictive while not taking advantage of any of the predatory microtransactions Nintendo could absolutely get away with. It’s complicated enough to hold the attention of adults, both casual and serious gamers, while also being simple enough to be played and enjoyed by children. While it is not the best game ever made, it may be the most Nintendo game made in the last two generations or more of Nintendo consoles. The second thing I can say is that the game is riddled with quality of life problems. Not glitches or coding errors, but intentional problems that ultimately hurt the gameplay experience.
I have been absolutely floored by some of the island designs I’ve seen posted online. People have accomplished things that I couldn’t even imagine. The amount of things you can actually accomplish/build in Animal Crossing: New Horizons is insane. Yet my island still looks like garbage. One could argue that my island looks like garbage because I simply lack creativity, but I don’t agree with that statement. Now I’m not saying that I’m as visually creative as everyone else. I’m a writer by trade so visual design isn’t really my strong suit. But I do have plenty of ideas and a vision for my own epic island design. And I’m happy to acknowledge that the design I have in my head, or rather in the reference notes I made after taking the time to draw and plot out everything I wanted to do on paper, probably isn’t as impressive as many of the things that I’ve seen go viral online. But at the very least my island wouldn’t look like garbage if my vision could be realized. The problem is that at every turn the game goes out of its way to arbitrarily limit my ability to create my own vision. And again, none of these limitations are due to glitches. They are intentional design flaws that could easily be fixed, but simply won’t be because Nintendo gonna Nintendo.
Landscaping and island design aren’t the only places where the game has monumentally inconvenient limitations that are easily fixed but simply won’t be because reasons. There are a host of quality of life issues that simply don’t need to be present in the game. So for this week’s post I wanted to go over my top 15 complaints about Animal Crossing: New Horizons. This is not an exhaustive list, and I’m sure some people will disagree with some of the things mentioned. But I believe every one of these issues could easily be patched out and would make the gameplay experience better for a majority of players. Better being defined as giving players the ability to maximize their own personal enjoyment and/or creative freedom. The list is in no particular order.
1. No Natural Island Features Should be Permanent
When you first start the game, you are asked to pick an island layout. If, like me, you started the game on day one with little to no prior knowledge or plans in the works, then you chose a layout that seemed the most convenient at the time without knowing exactly what you were committing to. My island’s natural layout has been a nightmare for pretty much my entire time playing the game. It of course started with house placement. I knew exactly where I wanted my house to go when I first looked at the map layouts. That has never changed. What I didn’t know going into the game was that I wouldn’t be able to reach the location I wanted for my house until much later into the game. I thought I would be able to get the vaulting pole and ladder from the start and place my house exactly where I wanted it. Instead I was forced to put it on the complete opposite side of the island from where I wanted it, because not only did I want my house on a mountain, but I also wanted an island with a single continuous river that went from end to end, which locked me to only about 40% of my island’s total land for the opening portion of the game. As you can imagine, this was very annoying. But I was OK with it because I knew eventually I would be able to move my house and even reshape my river, if I wanted to.
Eventually I was finally able to reshape the land and the water, while also having the tools to go wherever I wanted. By the time I unlocked K.K. Slider (about 110 hours in), I finally had an established vision for what I wanted my island to look like. I set out to complete this task only to then realize my plan wasn’t possible because my river inlets from the ocean weren’t located in the right places. This cannot be altered, which I wasn’t aware of when I devised my grand plan. You’re simply stuck with the river to ocean connections you have. Now yes I could technically build my own rivers from scratch and just not connect them to the ocean at all. But that’s not really what I wanted. Furthermore, one of my inlets is located too high on my map which blocks me from having the perfect cliffs I wanted.
Along with the river mouths, you also have to contend with beaches and even worse beach stone. These black rocks eat up the sides and corners of your map for literally no reason and prevent you from having perfectly square edges to your cliffs. Some may also refer to them as OCD stone. Why Nintendo decided to make all these physical features permanent is beyond me. What I do know is that not only have they dashed my island landscaping dreams multiple times, but they also cost me so many hours of hard work because I had to alter several map units of land to account for them. This entire issue is stupid and shouldn’t be a thing. Just let me redesign my island however I want once I’ve reached the landscaping portion of the game.
2. The Game Needs Mass/Rapid Landscaping Options
Being able to reshape land and water is extremely convenient. It’s one of my favorite aspects of the game. Even with its many limits, the fact that you can reform cliffs and rivers to create the landscape you want (mostly) allows for every island to be truly original. That being said, the process of landscaping is one of the most tedious and troublesome endeavors I’ve ever experienced in a Nintendo game. You have to manually shape each block unit of land in the game one at a time. It is appalling that there isn’t a Mario Maker style landscape editing mode where you can just build entire sections of cliff and water in a few seconds. I have spent literal weeks trying to build, and rebuild, the cliff structure I wanted. And this doesn’t include all the time I’ve had to spend moving around trees and flowers to do it. I wouldn’t even mind having to pay a bells fee to do it. I just don’t want to have to spend hours building a cliff after I’ve already taken the time to clear all the land. The cliff should be the easy part. And joy-con drift has never angered me so much as it does while trying to landscape in this game. The game already has a unit based map. Allowing the player to draw cliff or water on it quickly, rather than landscaping unit by unit, would be an easy thing to implement.
3. Build and Destroy Landscaping Functions Should be Separate Buttons
In order to keep the controls simple, Animal Crossing: New Horizons throws all landscaping functions onto two buttons. You select what kind of landscaping you want to do by pressing the plus button and then the A button to make a selection. Then you use the A button to interact with the unit of land directly in front of you, assuming your joy-con doesn’t drift. If the landscaping selection you currently have active isn’t on the unit in front of you, the A button adds it. If the active landscaping selection is on the unit in front of you, the A button removes it. While simple in theory, this causes a lot of problems in practice. Again, many of them are the result of joy-con drift. Often you end up removing land when you intended to add it. Or adding water when you intended to remove it. And vice versa. This could easily be remedied by dedicating the A button to adding landscaping options and a different button to removing them. Of course this would only be the case while the landscaping app is active. Having this function would save users so much time by preventing unintentional landscaping mistakes throughout the entire process of terraforming their islands.
4. Why Can’t I Build Giant Walls?
I have absolutely no idea why you can’t build two story cliffs, but it’s one of the most irritating limitations the game has. For some reason you can’t build a cliff on top of a cliff. You have to leave a space of at least one unit between the first level cliff and the second level cliff. So instead of building high cliffs you end up with big two step stairs. If I had to hazard a guess, I’d say the reason for this probably has to do with the incline limitations, which I will get to. You can’t make two story inclines, so building two story cliffs would prevent you from being able to access the tops of them. But I don’t see why that’s a problem, because if I’m building a two story cliff then clearly I don’t want it to be climbed to begin with. Also, couldn’t the ladder just extend if it was really an issue that needed solving? Not only is this issue visually troublesome, but it also wastes a lot of real estate. You only have so much land. Having to waste a one unit border in all directions around every upper cliff level is quite a loss of total available land.
5. Inclines Have So Much Wasted Potential
The only way to reach a higher level without a ladder is an incline. This is fine. Even the process of adding inclines for a fee is fine. What isn’t fine is all the things inclines should be able to do but can’t. First, inclines are locked to one cliff unit up and two ground units wide. Inclines are extremely useful, but they could do so much more. You can’t build them adjacent to each other, either vertically or horizontally; they need a gap of at least one space. So if you wanted to make a two story cliff climbable with inclines, you’d have an annoying one unit step between the two inclines. And because you can’t place them side by side, you can’t build hills or epic continuous grand entrances either.
You also can’t repave or plant flowers on inclines. Meaning if, like me, you wanted to use floor paths to build long “roads” that went up cliffs, you would not be able to fully coordinate their colors because inclines can’t be customized past picking from a limited selection of incline designs. The inability to plant flowers on them also means you can’t have continuous flower paths for your “roads” that go up cliffs either. While some of this may be a lot of trouble to remedy, much of it shouldn’t have been part of the game to begin with. There’s no reason you shouldn’t be able to install adjacent inclines and bridges.
6. Construction Delays Progress so Much
Construction projects are a sensible idea. Making building moves and incline/bridge changes a bit more special, so you aren’t constantly changing everything on your island, makes you appreciate your decisions more, and having to pay for them does this adequately. But making me wait for the day to flip, and only allowing me to move one building and one incline/bridge a day, is such a waste of time. If I’m trying to reshape my entire island after acquiring the landscaping license, I shouldn’t have to wait a day to move each house on my island. I shouldn’t have to wait a day to demolish or move each bridge/incline I built in the early game while just trying to access more of the land and amass resources. It’s no wonder some players, myself not included, use time travel. So much progress is stopped by limiting construction projects to one of each type a day. Just charge me express work fees and let me do everything in the same day. At the very least, let me reshape the island en masse at least once after unlocking the landscaping license. Because obviously most players wouldn’t have put things where they are if they had had full access from the start to all the tools and landscaping abilities you eventually get.
Also, it’s completely ridiculous that you have to pay twice to change the surrounding landscape of a building or incline. I had my house in the perfect spot the first time I moved it. But I did not yet have landscaping abilities. Once I unlocked them, I wanted my house in the same general spot, but moved over three units and on top of a cliff. Doing this required moving my house to a completely different location by paying a fee of 30K bells and waiting a day for construction, then reshaping the land where I wanted my house, paying another fee of 30K bells, and waiting another day for construction. This sort of process was required for four of my islanders’ homes as well, ultimately costing me 460,000 bells and 10 days of waiting. The process should not have been that long, that expensive, or that troublesome.
7. Housing Development Shouldn’t Be Limited
You can expand your house’s interior by paying off loans. This is fine. The prices may seem a little high, but once you start playing the stalk market “correctly” money becomes almost a non-issue once you get past your initial landscaping costs. But there’s a limit to how big your house can be. In reality, this makes sense. But this is a video game. Why can’t I just keep expanding my house indefinitely? Or at least past the point of realistic practicality. You can only have a maximum of six total rooms in your house. You can’t control or expand the size of them, and their dimensions are kind of inconvenient as well. Why can’t I just pay more bells to expand these rooms or add additional ones? If I want a living room, a kitchen, a bathroom, a workshop, a game room, a bedroom, and a guest room, why can’t I? I have plenty of bells. So just let me keep expanding. And let me keep expanding storage as well. You can get up to 1600 storage spots by the time you fully upgrade your house. But why place a limit at all? Just let me keep paying a flat rate of bells to expand my storage indefinitely. Why does it matter?
8. Why Can’t I Stack Items Based on Space?
Certain items, such as tables, chairs, and rugs, can have other items placed on them. But other items can’t, even if there’s enough space to do so. For instance, the wooden speakers. This stereo system is quite big and it looks cool. But it takes up a lot of room. The top of it has enough surface area to act as a table or stand. So why can’t I put things on top of it, such as potted plants or trophies? That’s something a person would actually do in real life. Yet because the game doesn’t designate it as a piece of stacking furniture, you aren’t allowed to do this. You can’t even put things on beds. So much space is wasted in an already limited space environment.
9. Just Let Me Store Turnips For Goodness Sake
Once a week you find yourself having to store turnips in the most inconvenient of places. First I was storing them all over my house. Every open space of my floor would be covered in turnips. I wasn’t even decorating my basement because I needed the storage space. Then, like many other players, I took to building an outdoor storage area for them. This is more convenient in many ways, but it’s also a complete waste of real estate. Having to essentially sacrifice a large piece of land to store your turnips every week is an unnecessary inconvenience that adds no enjoyment to the game. Either let me put turnips in my house storage or raise the per-stack volume considerably. I buy 16K turnips a week. That’s 160 item slots to store. That’s a ton of wasted real estate. And sure, you don’t have to buy turnips every week, and certainly not in those large quantities. But in the weeks that you do, you need that space available, so it makes more sense just to leave it open rather than build on it at all.
10. Why Can’t I Turn the Camera When Outside my House?
The camera in Animal Crossing: New Horizons can be quite troublesome. You often can’t see things behind buildings and trees. But there are often important things there such as dig spots and bugs. You can turn the camera in the house just fine. In fact, it’s very convenient. But you can’t do this when outside your house and I can’t think of a single justifiable reason for this.
11. Add a Fossil Record to the Museum
In the game, you have a phone that catalogues every fish and bug you’ve caught and whether or not you have donated them to the museum. Why the same is not true for the fossils is beyond me. I don’t even need a fossil record on my phone. Just put it in the damn museum, like literally any real museum would have. Finishing the fossil collection, which I finally managed to do by trading in the fossil market on Discord, is such a hassle because you literally don’t know how many or which fossils you’re missing without looking it up online. Even when you do try to look it up, it’s still fairly unclear what you’re actually missing because you have to manually walk the museum and try to figure it out. Just add a damn fossil list to the museum so it’s like an actual museum.
12. Let Me Mass Buy Clothing in the Fitting Room
The fitting room in the Able Sisters clothing store is really nice. It’s exactly what you want when trying to decide which clothes to buy. But damn if it isn’t the most inconvenient thing in the world when trying to buy multiple colors of the same piece of clothing. If an item comes in multiple colors and I want more than one, why can’t I just buy all the colors I want at once? Making me have to pay, then exit the fitting room, then reenter the fitting room, find the item again, and pay again is completely unnecessary. Just let me buy as many items as I want at once.
Also, let me know which items I already own. The crafting table tells you what items you already have in your pockets and in storage. Why doesn’t the fitting room do the same?
13. Why Do the Stores Close?
I work a full time job. I am not alone. I have to commute to my job. I am not alone. I can’t use my Switch at work. I am not alone. Nook’s Cranny doesn’t open until 8 AM and closes at 10 PM. This means that anyone who has to leave for work before 8 AM can’t sell things they are carrying from the night before and can’t check the morning prices of turnips, much less take advantage of them. Anyone who works late can’t purchase or sell anything at Nook’s Cranny either. There are days where I have to leave for work before 8 AM and don’t get home till almost 8 PM. Then I have other responsibilities like cooking dinner and walking my dog. That makes the operating hours of Nook’s Cranny very difficult for me. And I don’t even have children. But one must ask why does the store close at all? This isn’t real life. They don’t need to sleep. Tom Nook and Isabelle never close the Residential Services office. So what’s the deal with Tom’s nephews? The store should definitely reset every day like the calendar does with new announcements. And if for the sake of balance you wanted to argue that the turnip purchasing time should still be locked to specific hours of day, I could understand an argument for that. But the store closing is unnecessary. Or at the very least remove the fees for using the box to sell items. I don’t mind waiting till the next morning to get my funds in the mail. That’s a realistic mechanic I guess.
Don’t even get me started on the Able Sisters shop. Why does it close an hour earlier than Nook’s Cranny? What is the justification for that? Realistically you only need to visit it once a day, assuming you aren’t strapped for bells, in order to do all your business there. But it’s still the same issue of availability. If someone isn’t able to get to their Switch between the hours of 8 AM and 9 PM, then when do they get to purchase and design new clothing items? The game may be geared towards kids, but adults play it. If the shops have to be open for limited hours, at least let the player set those hours for their island. Maybe the employees sleep during the day and work through the night.
14. Why Doesn’t my Nook Phone Have a Debit Function?
I have a bank account and a smart phone. That is literally all a person needs to make purchases without carrying cash. So why can’t I purchase things from shops without the cash in hand? Just let me pull the funds needed directly from my bank account. Not for Daisy Mae and NPC purchases, because that wouldn’t be realistic or practical. Though Zelle is a thing. But if I want to buy a chessboard from Nook’s Cranny with 3 million bells in the bank but not 95,000 bells in my pocket, just let me purchase it with funds directly from my bank account so I don’t have to run to Resident Services, access the bank account, withdraw the funds, and then run all the way back to Nook’s Cranny.
While we’re at it, let players access their bank accounts from other islands. Not their storage, because that would be unrealistic. But as with my digital purchases argument, the technology is already there. The entire purpose of bank accounts is that you can access your funds anywhere that has an ATM. Every island has an ATM, so let players pull bells from the Resident Services on any island.
15. Add a Dynamic DIY Vendor
Much of the game is built around the idea of interacting with NPCs and the environment to get new recipes. Characters like Celeste are key to making the most out of your crafting experience. But in my opinion there are serious issues with the volume of DIY recipes acquired, as well as the ability to get the ones you want. I find it very irritating when I’m trying to complete a seasonal set like the bamboo collection and I get drops of repeat bamboo recipes before I’ve even finished the collection. That forces players to turn to the market and convince other players to trade them the recipes they want/are missing, because the game itself doesn’t seem to be providing them.
While I won’t outright say you should just be able to buy every recipe in the game whenever you want, I do believe there should be a constant stream of DIY vending that takes bells or even Nook Miles. Technically the game kind of has this at Resident Services, but the list of available recipes is fixed. That shouldn’t be the case. As with the Nook Shopping service, the DIY choices should be changing daily. Like with turnip prices, it should be completely random with some days giving you repeats or junk recipes while other days can include super rare ones. Every day players should have the ability to acquire at least one new recipe no matter how much time they put in. This also makes every day seem eventful in some way even when nothing particularly special is otherwise going on.
So there are my 15 biggest complaints about Animal Crossing: New Horizons. I want to clarify that I love the game. I have played it literally every day since it released. I have never time traveled and I take my progress in the game very seriously. I play the stalk market like a pro and have great aspirations for my island and its residents. But the game is severely lacking in a number of quality of life features that would make the experience of playing the game way more convenient and fulfilling. The game is by no means bad, but it could be considerably better.
Recently I finished DOOM (2016) for the first time. I was not planning on playing it when it was originally announced, because it’s way out of my normal wheelhouse. Not only do I not like shooters for the most part, but I specifically hate FPS games. I also tend to dislike Hellish/Satanist aesthetics in games, with some noteworthy exceptions such as the immaculate Dante’s Inferno (2010). So while highly praised by most, I was gonna pass. After many people recommended the game to me, I still wasn’t going to play it until they finally released a demo on PS4 sometime in late 2018. I don’t know why the demo was released on PS4 so long after launch, but this was the first time I actually got to try the game hands on. While I’m not a fan of the genre, I could immediately tell this was a well-made game. That’s the mark of a truly good game: someone who isn’t a fan can play it and quickly tell it’s good, to the point where they want to play it even though they usually wouldn’t. After finishing the demo, I agreed to eventually purchase and play the game. Several months later, I was able to purchase it on Steam for $5. Then several months after that I finally decided to play it. I’ve actually been live streaming a let’s play of it, which you can check out here if interested.
My reason for not usually wanting to play shooters is that I’m not a fan of guns. But I play many third person shooters anyway when they seem compelling. I’ve streamed many Ubisoft shooters, such as Watch Dogs 1 & 2, Ghost Recon: Breakpoint, and The Division 2. My reason for not usually wanting to play FPS games is that I really dislike the first person view. I often find it disorienting and don’t like not being able to see my character. Especially when they look like a badass. That’s the main reason I’m appalled by games like Deus Ex. Why would I want to be a badass looking cyborg if I can’t see him in action? As with shooters in general, there are exceptions where I will play a game in first person, but these are far fewer and farther between. This is especially true for FPS titles. I’d much rather play a first person RPG like Skyrim than a first person shooter. The last “full length” AAA FPS game I recall playing was Destiny (2014). So playing something like DOOM is extremely out of character for me. But as I said, exceptions do occasionally occur. DOOM is the most recent one.
I tend to dislike hellish themed games for similar reasons to why I hate zombie games. The subject matter calls back to internalized fears that stem all the way back to my childhood. It’s for this reason that I don’t play many Hellish themed games. Or at least not the Western/Bible inspired hell aesthetic anyway. I have no issue with demon filled games from Japan. I can’t wait to finally play Nioh 2. I also don’t have any issue with games like God of War, where you visit Hades. These alternate interpretations of the underworld do not instill any sort of fear within me and thus I have no problem playing them. Whereas the last game I played based on the Christian idea of Hell is probably Dante’s Inferno, which I played at release. And the last zombie game I played was probably The Last of Us Remastered in 2017. I guess you could also say The Last of Us – Left Behind DLC, which I finally played this year, if you want to be completely accurate. They’re just not games I play. To date I’ve never played or watched a single Resident Evil game or movie.
What I think is interesting about DOOM is that not only am I playing it, but I’m enjoying it. I’m enjoying the gameplay. It’s quite good. I don’t really think I need to delve much further into this because I’ve already stated that the main reason I decided to play it was that the gameplay in the demo was so good. But what I think is more interesting is the fact that I don’t feel uncomfortable playing it, aesthetic-wise. As I said, Hellish games make me uncomfortable. The only times I generally allow myself to play them are when the graphics aren’t trying to be too realistic or when the gameplay is just too good to pass up. The latter was the case with both Dante’s Inferno and now DOOM. I went into the game expecting to be uncomfortable. One of the reasons I decided to stream it was so I wouldn’t be playing it alone and could distract myself from my discomfort with the aesthetic. But I haven’t really had any issues playing it. And I think the reason why is Doomguy.
In general, there are two main types of protagonists in any sort of narrative driven game focused on violence as the main form of gameplay. There are of course occasional exceptions, but for the most part you’re always either the underdog or the badass. The underdog is not qualified to be in the situation he/she has been thrown into. They get placed in a conflict they didn’t really want to be in and then sort of luck and hard work their way through to the end. Nathan Drake, modern Lara Croft, and Joel all exemplify this underdog persona. It doesn’t matter how many adventures they’ve been on or what they’ve already accomplished. They always seem to be up against overwhelming odds with little chance of succeeding/surviving. But with brains, a can-do attitude, and luck they somehow make it to the end alive. The badass is unsurprisingly the polar opposite. This character is always the go-to person for the task. They’re overqualified for whatever the problem is, and no one doubts they can actually complete the task given to them except the villain, for obvious reasons. This is how protagonists like Kratos, Master Chief, and Doomguy are characterized in their games. They’re revered and downright feared by almost everyone they come into contact with. Their reputations precede them, and rightly so. Take just about any super successful story driven AAA franchise and the main protagonist usually falls into one of these two archetypes.
You never see a game where you’re just some average cop, soldier, or agent who’s qualified but not the ideal choice. It’s either a highly decorated veteran or a rookie who literally just started. This is done intentionally in order to set the tone of the game. The developers either want you to feel unqualified so victory seems that much bigger at the end, or they want you to feel overqualified so they can give you lots of awesome weapons and moves without having to justify them narratively. But there’s also an experiential aspect to these characterizations. The underdog instills a sense of fear in the player. As you’re being told you shouldn’t be there and have no chance, you feel inadequate as the person controlling that avatar. Conversely, as you’re being told you’re a badass and this mission shouldn’t be a problem, you feel confident that you can get it done as the person controlling the avatar. And very few avatars are built up to be as badass as Doomguy.
I think the reason I haven’t been uncomfortable playing DOOM is that the persona of the Doom Slayer, or Doomguy as I will continue to call him, is just so epic. He’s a man so badass and powerful that the demons in Hell literally set up monuments in memory of his legend. He’s said to have been so angry at the demons of Hell that the Seraphim (angels) granted him immortality so he could fight eternally against the demons and take revenge for whatever wrong they did him. How can you be scared when the enemy is literally afraid of you as part of the canon? And it’s not just in the canon but in the gameplay. On more than one occasion I’ve seen demons of multiple types try to run away from me during combat. Doomguy is the epitome of the epic declaration “I’m not locked in here with you. You’re locked in here with me.” And that level of confidence and badassery is transferred to the player.
I’ve beaten Demon’s Souls, all three Dark Souls games, and Bloodborne. Yet every time I go to start another From Software soulslike title I’m intimidated. It’s not because I don’t think I can beat it, because I have more than enough proof to know that I can. I’m intimidated because the game presents itself as being more than the player can handle. The motto is literally “Prepare to Die”. The game chops your comfort and confidence down from the start. DOOM does the exact opposite. It actively builds the player up from the very beginning to feel like you can achieve anything and already have. And as such you can stroll into Hell by choice, rip and tear through demons twice your size, and then stroll back out at your leisure. Because you’re Doomguy.
The positive psychology in the presentation of a game is not something I’d thought much about before playing DOOM. The negative psychology I’ve thought about many times. It’s fairly obvious how it works and how effective it is, because the player almost always goes into a game with a natural inferiority to begin with. I’m not a super soldier, monster hunter, or Dovahkiin. I’m just a guy that plays a lot of video games. So it’s fairly easy for the developer to make me feel unqualified for the challenge to begin with. This is how most zombie games are presented. You’re always a normal guy with at best a bit of cop training plunged into an undead nightmare. Even the zombie games where you’re not an amateur still pretty much make you an average guy with a bit of experience at most. You’re never an otherworldly epic badass seasoned by a mountain of corpses beneath your feet. That’s because zombie games are always framed as survival games. You’re always trying to survive an apocalypse. From a narrative standpoint that makes sense, because zombies always bring about a dystopian reality in narratives. But that doesn’t mean it necessarily has to be that way.
I don’t like zombie games, but I would probably enjoy one that presented itself like DOOM. Rather than a random gym teacher or beat cop, make the protagonist a complete over the top badass. Not a wannabe badass with a motorcycle and a sob story but an actual balls to the wall, no strings attached badass. I don’t want to fear Mr. X. I want Mr. X to shit his pants when he sees me. Give me a game that frames the protagonist like the Doom Slayer at the beginning of DOOM Eternal. Make the humans more afraid to piss me off than they are of the zombies. Give me epic armor and crazy badass weapons. Don’t set me in a world with zombies. Set the zombies in a world with me. I’d probably enjoy that zombie game.
Last week, the Entertainment Software Association (ESA) formally announced the cancellation of E3 2020. Or more specifically, they officially announced the cancellation of the digital E3 2020 event that they had previously promised would take place after the regular on site E3 event was cancelled because of the coronavirus pandemic. To be fully accurate, what originally happened was that SONY, among other entities usually expected to attend E3, decided not to attend this year because of the coronavirus. Eventually enough companies, and media personalities if we’re being completely honest, followed SONY’s lead that the ESA decided it was in their best interest to cancel the physical event altogether. Almost certainly due to projected financial losses. But rather than formally cancelling, they decided to try to save face by promising a digital event in place of the normal physical one. Now they have cancelled that as well. In my professional opinion, I have to say that this is the final straw for E3.
Even without the coronavirus, making E3 a digital event makes a ton of sense. It’s more cost effective, more accessible to more people, and gives companies with lower budgets a better chance at being able to participate. Honestly, there’s little reason for E3 to continue to exist in its current form, and this has been the case for years. Note that I am not saying there is no place for a Los Angeles based physical video game event in the current video game industry. What I am saying is that E3 should no longer be managed and treated as if it’s as important as it currently is. E3 today should really be more like Gamescom or Tokyo Game Show, where it’s just another event where companies can and sometimes do announce things, but ultimately it’s about interacting with fans and local business interests for convenience’s sake. It should no longer be the be-all and end-all of game industry events. But that’s not even really what I want to discuss today. I want to talk about the fact that E3 is now, for all intents and purposes, dead.
The promise of a digital E3 event was kind of a tall order to begin with, if we’re being honest. Nintendo’s E3 Direct every year doesn’t actually have anything to do with E3. They simply create a presentation independently and choose to release it during E3 at a scheduled time based on the publicly available presentation schedule. And because Nintendo is such an important player, the ESA chooses to stream the Nintendo Direct on site, because they know that if they didn’t, people would take the time to go watch it, reducing traffic at E3 during the presentation. Nintendo does still participate at E3 by putting up a booth, but in terms of announcements and a presentation, that’s all handled outside of E3 and is in no way affected by the ESA. The reason E3 continued to work even after Nintendo started doing this was that no one else did it. Nintendo was essentially forced to work around E3’s schedule in order to stay relevant in the gaming news cycle. But if no one is presenting live, then suddenly there is no E3 news cycle. There’s just a bunch of digital presentations by different companies. Why would any company allow the ESA to manage and police the release of their gaming announcements and digital presentations? Nintendo doesn’t, and no one else would either. And they especially wouldn’t pay a fee to release their presentation to the internet. So at that point the only thing the ESA could offer them was a scheduled announcement time surrounded by other digital presentations. But that’s not really a selling point.
If anything, you want to release your digital presentation before all the other companies or after all the other companies. Because you want to garner the most continuous attention and hype for your presentation. So really companies wouldn’t want to release their digital presentations that close to each other at all. They’d be better off picking their own random days throughout the year and being the focus of the news cycle when they do release. And if they’re smart, like Nintendo often is, they’ll make their presentation interactive. As in release a presentation that announces a downloadable demo going live that day. Or beta sign ups, etc. If it’s a digital presentation, it can be as long or as short as a company wants and include all sorts of promotional gimmicks without having to be approved by the ESA or any other external entity from said companies. And that’s true for both AAA and indie developers/publishers alike. So the prospect of a bunch of companies, especially the bigger ones like EA, Ubisoft, and Microsoft actively choosing to share the spotlight of their digital presentations with other companies’ digital presentations is pretty ridiculous. Think about the hype Nintendo Directs get throughout the year. Why would any company choose to share that limelight with their competitors, ultimately weakening the impact of their digital presentations?
The only reason events like E3 even exist is that putting on your own event is very expensive and hard to promote. It’s more cost effective, even though still very expensive, to just attend someone else’s event. So you sacrifice the spotlight by sharing it with other companies that are all in the same boat, trying to save money and garner as much attention as possible. But when it comes to releasing online, everything is backwards. You want nothing to do with anyone else’s content. Imagine if by some miracle you were the only channel on Twitch streaming for three straight hours, with zero other channels live during that time. You would garner so much attention just because nothing else was going on. It’s the same concept for these digital presentations. So the idea of a digital E3 was built solely on the hope that companies would adhere to tradition rather than make sensible business decisions. And of course in 2020 we know tradition doesn’t mean shit. So no, these companies were not about to turn over their digital presentations to the ESA and give them control of managing and releasing them. That was never going to happen.
Here’s why I say E3 is now dead. We’re about to have our first year with no official E3 since 1995. For the past 24 years “we” were all led to believe that it was a must. That the only way game companies could properly announce their games to the public was through this one offline event. We were told it was important for the companies, media, and public to interact with each other and share their love of gaming. And many people believe(d) this. Now suddenly we’re not only not having E3, we’re not going to have any large scale coordinated gaming events at all. They’re all getting cancelled or postponed and replaced with digital presentations. Mark Cerny’s GDC PS5 presentation was a great example of this. It proved that PlayStation could effectively present their new hardware ideas and intentions to developers digitally without losing any effect or hype, and they saved money doing it. Not only that, but they were able to garner more media attention and get the public more involved in the discussion. I wrote my first GDC related blog post this year because of that presentation, which I would not have even watched had it been a normal GDC year. SONY isn’t going to forget that. God willing this pandemic ends soon and events can go back to happening again. But don’t think for a second the companies involved are gonna just go back to the way things were. They will see the hype, the efficiency, the reduced costs, and whatever other benefits and decide they can just keep doing it this way. That’s what’s gonna happen to E3.
For the next year, you’re gonna have every company create and distribute digital game presentations. They will all be different and specific to their companies. Some companies will copy the Nintendo Direct model and try to keep things current and relevant for the short term. Some companies will do a presentation for the next year’s worth of announcements. Some companies will create individual presentations for each game coming in their portfolio and release them periodically. But no company is going to coordinate with any other companies to release their presentations concurrently or close to each other. And what we’re all going to have to finally accept is that not only is that OK, but it’s better. It’s better for everyone involved.
Every E3, I don’t watch the presentations. I find a website like IGN or GameSpot, look at their roundup article, and then watch the clips from the presentations of the games I’m interested in. Why? Because there are too many presentations to deal with in too short a time span. And a lot of the junk presented is stuff I don’t give two shits about. And when you’ve got Microsoft, SONY, Nintendo, Ubisoft, EA, Devolver Digital, and others, even if you just look at two games from each one, that’s still way too many games to reasonably keep track of and give a proper amount of time and attention. But if instead each of those presentations was released at a completely different point in the year with nothing going on around it, I’d probably watch every presentation in its entirety. Especially right now. The number one problem with the quarantine for most people is boredom. They have nothing to do at home. Would you rather have everything thrown at you in the span of three days to binge and then go back to being bored, or have things spread throughout the quarantine so that you continuously have something to help combat your boredom in the long term? A singular event is really good for the company running the event, because they can turn a large profit. But for literally everyone else involved, including the audience, it’s at best a troublesome burden disguised as convenience, given travel restrictions/costs and time. But when no one can travel and everyone has too much time on their hands, a singular physical event isn’t useful at all. A singular digital event is only slightly more useful.
After this year of disconnected digital game presentations, everyone will be forced to acknowledge that it was fine. Gaming didn’t stop. Profits didn’t go down . . . due to the lack of E3 and other such events. Hype wasn’t reduced. Nothing negative will have happened to any of these larger companies because of the absence of E3. And because of that, when the ESA tries to get companies to invest a large sum of money to be featured at E3 2021, many if not all of them are going to say no. They’re gonna go the way of Nintendo and say it’s just not worth the money, labor, time, and inconvenience. At that point, the event simply won’t have enough exhibitors to warrant most people buying tickets. And at that point, E3 is dead as a doornail.
Change tends to come by force rather than by choice sadly. This pandemic has forced companies to change the way they announce new games. Yet these changes should have taken effect long before a pandemic because technology had already provided the means to do so more effectively, efficiently, and affordably. These changes were a long time coming. Companies and consumers only fought them out of some odd dedication to tradition. Now that tradition is being forced out, things will never be the same.
This statement from the ESA, as reported by PC Gamer, is more telling than people will probably give it credit for right now.
“Given the disruption brought on by the COVID-19 pandemic, we will not be presenting an online E3 2020 event in June. Instead, we will be working with exhibitors to promote and showcase individual company announcements, including on http://www.E3expo.com, in the coming months,” the rep said. “We look forward to bringing our industry and community together in 2021 to present a reimagined E3 that will highlight new offerings and thrill our audiences.”
The shift from an online E3 event to “working with exhibitors to promote and showcase individual company announcements” is a fancy way of saying that the ESA will shift into being a promotional company similar to traditional online media. In other words, they will become leeches that garner value by promoting content created by other companies online. Now of course this statement acts as if it only applies to 2020. The ESA has already stated plans to return to normal for E3 2021. But this assumes that all the companies decide to go back to the old model. I’ve already explained why that won’t happen. In 2021, E3 will be cancelled again, but ideally it won’t be because of coronavirus. It will be due to lack of participation. And once again the ESA will be “working with exhibitors to promote and showcase individual company announcements”. Over time, the ESA will either shift completely into the media space, operating as a digital promotions platform much like any other mainstream media/games marketing company, or it will cease to exist. At best, E3 may end up becoming a smaller event similar to PAX, with a focus on smaller companies and projects desperate for any attention at all. While I have been predicting the end of E3 for some time now, I had originally given it a few more years, as can be seen in previous blog posts. But with the virus accelerating things, I think it’s done. E3 is de facto dead in the water from here on out.
If you’ve been following me for a while then you know I don’t really like shooters and I tend to hate PVP games. Especially those with no story based campaign. To this day I can proudly say that I have never played a single match of Fortnite. While I enjoy the art style and quirkiness, I absolutely loathe the Overwatch model. These games simply aren’t for me. So when I was invited to try the closed alpha for Rogue Company I went in assuming that I would dislike it. I was pleasantly surprised to find that this wasn’t the case.
I don’t want to write a full review of this alpha. Not only was/is it under an NDA, but it was also very limited in what was available, so writing a full review at this point would most likely do the game more harm than good in a way that won’t necessarily benefit consumers. What I will say is that the game blends a number of different styles together to make a very satisfying gameplay experience. It has the single life mechanic of a battle royale coupled with the condensed maps and team mechanics of Overwatch. This is done in a first to five rounds won model. The two modes available in the alpha were 3v3 fights to the death and 4v4 objective matches. I enjoyed both. The pacing is very fast with single life elimination. The gameplay, though flawed mechanically in certain ways, is very well balanced and accessible to amateur players. The best way to describe it is that the weapons are balanced so the amateur shooting first isn’t automatically going to get creamed by the more experienced player, like you see in so many other shooters. The in-game money system that allows you to upgrade between rounds worked fairly well and added a layer of depth that I think harkens back to CS:GO but in a more refined form. I have to say that it’s the first team based, round by round shooter with no story that I’ve ever actually enjoyed playing.
I spent most of my time playing the objective mode in the alpha. This was much simpler than Overwatch’s objective mode. It’s just a bomb in the center that you have to reach before the other team and hack by holding a single button for about four seconds. Once the bomb is hacked, you have to defend it for 60 seconds. The other team can re-hack the bomb and claim it for themselves, and the same rules apply afterwards: hold it for 60 seconds to win the round. The “problem” with this mode is that, combined with the single elimination mechanics, it devolves into “kill the other team’s four players first” equaling a win. You can win the round by completing the objective, which takes the time to reach the bomb, plus the time to hack it, plus the 60 seconds defending it. This is how the mode was actually meant to be played. But you can also just kill the opposing team’s four members in a fraction of that time, if your team is better, and net the same result, i.e. a victory for that round. As you can imagine, once people caught wind of this they stopped caring about the objective entirely.
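The math behind the incentive is worth spelling out. Here’s a rough back-of-the-envelope sketch using the timings above; the hack and defend durations come from the mode’s rules, but the travel and per-kill figures are my own guesses, purely for illustration.

```python
# Two ways to win a round in the 4v4 objective mode.
# HACK_TIME and DEFEND_TIME come from the mode's rules; the travel
# and per-kill figures are guessed, illustrative values.
HACK_TIME = 4          # seconds holding the button to hack the bomb
DEFEND_TIME = 60       # seconds the bomb must then be held
TRAVEL_TIME = 20       # guess: time to reach the bomb from spawn
SECONDS_PER_KILL = 10  # guess: average time to find and down one enemy
TEAM_SIZE = 4

# Minimum time to win by playing the objective as intended:
objective_win = TRAVEL_TIME + HACK_TIME + DEFEND_TIME  # 84 seconds

# Time to win by simply eliminating the whole enemy team:
elimination_win = SECONDS_PER_KILL * TEAM_SIZE         # 40 seconds

# The better team can end the round in roughly half the time by
# ignoring the bomb entirely -- which is exactly what players do.
assert elimination_win < objective_win
```

Under almost any plausible numbers the elimination path wins the comparison, because the objective path carries a fixed 64-second floor (hack plus defend) that killing does not.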
I’m one of those people that actually care about the objective. That’s why I play(ed) the objective mode as opposed to the team kill mode. When I first started playing, I was misled into believing I was playing with people but was actually in the bot mode. I had so much fun. Not because the bots were easier but because they were playing for the objective. Rather than just going for kills, the bots had been programmed to play as if completing the objective was the only way to win. This made for a much more interesting and varied gameplay experience because while killing the opposition mattered and happened, it wasn’t the main focus of each round. Both sides played for the objective as their main concern. This shaped the way they approached the map and the firefights. Once I started playing with actual people, I quickly started to enjoy the game less. This was because human players didn’t care about the objective.
Playing Rogue Company’s objective mode, like so many other objective shooters I’ve tried, with humans always ends up being the same garbage experience. This is because everyone except me always seems to think they’re playing slayer mode and just ignores the objective. This makes sense when you look at the framework for how these types of games work, though. Notice that people who play shooters rarely discuss wins. Have you ever noticed that before? No one ever describes their win percentage when talking about how good they are at shooters. They talk about their K/D ratio. In a way this makes a lot of sense. K/D ratio is more effective at describing an individual player’s skill in the game, while a win accounts for a number of external factors that aren’t all related to the individual player’s performance. You can be the best in the world, but if you’re playing a team based game against the 2nd, 3rd, and 4th best in the world and the other members of your team are crap, then you probably still won’t win the match. Also, even when you do get a good team, the map layout usually gives one team the advantage. In Rogue Company, the objective location changes each round, but it’s always one of a preset list of spots on the map. These spawn points absolutely deal an advantage to one team over the other, and I could see no formula for how they were decided. Some matches my team got the advantage multiple times over, and other matches the other team got the advantage more often than we did. It was random, and yet clearly gave an advantage to a particular team. Factors like this play a huge role in why K/D seems to matter more than win percentage to most committed shooter players.
Another huge factor in why players tend to ignore the objective is the rewards system in these games. Rogue Company, like most shooters of this sort, has player levels. You amass experience based on your accomplishments and that experience levels up your account. Leveling up presumably has some benefit, but as this was an alpha, I don’t know what the particular benefits will be in this particular game. I assume it will be similar to most other shooters by being a mix of cosmetic options, avatars, and titles. There is of course always the prestige of having a higher level as well. Experience is given based on accomplishments but kills always net more than completing the objective in these games. Completing the objective may be the stated purpose of the game but the experience points given to the individual player for completing the objective never compares to racking up kills. So if you’re a player that cares about leveling up your account, it is the objectively correct decision to focus on getting more kills rather than completing the objective. Again acknowledging the fact that killing off the other team will get you a win even if you ignore the objective completely.
Ignoring the objective becomes the standard of play because it’s always profitable. This is so common that playing for the objective becomes taboo. This was definitely the case in Rogue Company. As I said, I play for the objective. It’s what I like to do. It’s the reason I play that mode in these types of games in the rare instances that I do play them. I was criticized multiple times during the alpha for trying to prioritize the objective. People would take the time to jump on their mics or text chat to tell me to stop going for the objective and just focus on killing. That angered me, but I understood their reasoning. The truth is that by being the only person on the map playing for the objective, I tended to die first fairly often. But let’s unpack that a bit. Seven of eight players on a map ignoring the objective, and the one playing for it getting criticized, should not be seen as acceptable from a game design standpoint. Why even make an objective mode if 87% of players are just going to ignore it? Because there are simply too few players like me who will risk victory for love of the game. What should have happened was not my three teammates criticizing me for pursuing the objective, but them covering my ass so that I could reach the objective before the other team did. That’s the intended way to play. But it’s not the way people commonly play.
It’s very telling when you look at the scorecards from the matches I played. It was extremely common to see something like me having the lowest score on my team but the most objective completions, while the person with the highest score on my team would have zero objective completions but the most kills. It’s no wonder most players ignore the objective, and I can’t blame them for that. But this, in my opinion, should be considered bad game design. Yes, the gameplay loop is fun. Yes, the combat is balanced. Yes, the round-to-round character development system is well made. But if more than two thirds of your players are flat out ignoring the gameplay methodology you’ve built into the mode, then it’s a badly designed mode. And that’s not a knock against Rogue Company specifically. That’s a criticism of all these shooters, because they all tend to have this same issue. So my question is: how do “we” fix this?
There has to be a way for a developer to create an objective mode in a shooter that has a fulfilling gameplay loop, meaningful objectives, and encourages people to actively prioritize completing the objective(s) over mindlessly killing the other team regardless of the objective being completed. I don’t know what that looks like. I don’t know if it’s already been done, because I don’t play every shooter. But I do know that this is something that I’ve never witnessed before.
I’ve got some ideas. Maybe completing the objective should net the individual player way more points, like 10x that of a single kill. Or maybe the game shouldn’t let anyone get killed permanently until the objective is completed. Or maybe if the round ends without the objective being completed, everyone gets zero points, or at least severely reduced points. I don’t know the answer, but I do believe there’s a way to make a meaningful objective mode in a team based shooter where people on both teams actively care about the objective more than getting kills. But then we have to ask the question: does it matter?
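To make the first and third ideas concrete, here’s a minimal sketch of what such a reward formula could look like. All the point values and the function itself are invented for illustration; no real game (Rogue Company included) uses these numbers.

```python
# Hypothetical XP formula: an objective completion is worth 10x a single
# kill, and a round where the objective never gets completed pays nothing.
# These values are made up for illustration, not taken from any real game.

KILL_XP = 100
OBJECTIVE_XP = KILL_XP * 10  # 10x a single kill

def round_xp(kills: int, objectives: int, objective_completed: bool) -> int:
    """Score one round for one player; pay zero if the objective failed."""
    if not objective_completed:
        return 0  # the "zero points on a failed objective" idea
    return kills * KILL_XP + objectives * OBJECTIVE_XP

# A pure fragger vs an objective player under this scheme:
fragger = round_xp(kills=12, objectives=0, objective_completed=True)          # 1200
objective_player = round_xp(kills=3, objectives=2, objective_completed=True)  # 2300
```

Under a weighting like this, the player who took two objectives outscores the player with four times as many kills, which is the behavioral flip the paragraph above is asking for.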
If 87% of players will happily ignore the objective in a game, maybe the answer is to stop building objective modes in these games. Clearly people don’t care about them. But is it that they simply aren’t made to be meaningful enough, or that most players genuinely don’t want them but play that mode for some other reason? Maybe they prefer the maps, for example. In Rogue Company the objective mode maps were much more interesting than the one straight slayer map that was available. There is a risk that making an objective mode where players have to actually play for the objective could backfire on the developer. People might decide they don’t like having to take the objective seriously and ultimately not play the mode. This is a real risk to be considered. But I believe that there’s a way to do it successfully. I believe that players will change their conduct when motivated to do so in an effective and meaningful way.
I don’t know if what I’m looking for in a team based shooter already exists. It may have been here for years and I just don’t know about it because of how rarely I play shooters. Maybe that’s exactly what Rainbow Six Siege is and I just don’t know about it. In any case, I want a team based shooter with Rogue Company’s fast paced gameplay loop with an objective mode that actively motivates players to take the objective seriously. Until then I’ll probably keep ignoring team based shooters with no story mode.
A few weeks ago, Mark Cerny delivered the originally planned GDC presentation for the PS5 digitally. In it he spoke at length about hardware developments for the upcoming PS5 home console. To start, this was the GDC presentation. GDC stands for Game Developers Conference. That’s where the presentation was originally intended to be given, but because of the coronavirus it was changed to a digital presentation. So to be clear, this was originally intended to be a presentation given exclusively to developers about what the new PS5 architecture will look like and how it will help them better create the games of the next generation. Which means it was not, and was never intended to be, a presentation of new games. PlayStation was very transparent about the fact that this was the GDC presentation. Meaning if you were one of those mouth breathers who responded to the presentation with “this is boring, where are the games?” you’re either an idiot, misinformed, or some combination of the two. The presentation was exactly what it was always intended to be, what it should have been for the venue it was originally created for, and extremely interesting and informative if you actually care about the technology you’re going to invest hundreds to thousands of dollars in over the course of the next 4 – 7 or more years.
Cerny spoke at length about a number of things, but the two topics I found most interesting and informative were the SSD and the audio experience enhancement technology. I’ll start with the audio technology because it’s a bit more straightforward and less debatable. He showed that the PS5 is working towards fully immersive 3D sound from normal TV speakers or headphones. Now I’ve never been a huge sound guy, but I do appreciate the fact that PlayStation is trying to build up the immersion factor by giving audio technology its just deserts. What I was extremely interested in was the idea of using HRTF (Head Related Transfer Function) to mix sound in a more immersive way. HRTF can be summarized as the way ears receive sound. The idea is that if you can model the way the ear perceives sound, then you can trick the listener into thinking they’re hearing sounds from different angles and locations regardless of where the sound is actually coming from. This is how PlayStation is attempting to make TV speakers sitting in front of the player simulate sound coming from behind, above, or anywhere else not immediately in front of them. It’s also how and why headphones are better able to simulate sounds from more directions than just left and right.
What was fascinating about the HRTF was how it could be applied to make games seem more real without changing the sound in games all that much. Really it’s more about how sound is delivered than the sounds themselves. The problem is that HRTF is person specific, meaning that your HRTF profile and mine can be very different. So most games use a general HRTF based on average testing. This isn’t optimal for any player but works reasonably well for most. Cerny showed the general HRTF in the presentation next to his own, and they were noticeably different. He said that when his personal HRTF was applied to games for testing, they sounded far more immersive to him. The catch, he explained, is that there’s currently no practical way to capture every player’s specific HRTF profile and apply it to game audio, but he does see a future where that is the case.
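The “how sound is delivered” part can be sketched in code. HRTF-based spatialization is commonly implemented by convolving a mono source with a left-ear and a right-ear head-related impulse response (HRIR) measured for the desired direction; the tiny impulse responses below are invented placeholders, not real measurements, and real engines use measured HRIR sets and much faster convolution.

```python
# Toy sketch of HRTF-style spatialization: convolve one mono signal with a
# per-ear impulse response (HRIR). The HRIRs here are invented for
# illustration; real ones come from per-person or averaged measurements.

def convolve(signal, impulse):
    """Plain discrete convolution: out[n] = sum_k signal[k] * impulse[n - k]."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def spatialize(mono, hrir_left, hrir_right):
    """Return (left, right) channels that simulate the HRIRs' direction."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# A click positioned to the listener's right: the right-ear response is
# stronger and earlier, the left-ear response delayed and attenuated
# (interaural time and level differences). Values are made up.
click = [1.0, 0.0, 0.0, 0.0]
hrir_right = [0.9, 0.3]
hrir_left = [0.0, 0.0, 0.4, 0.15]  # two-sample delay = sound arriving later
left, right = spatialize(click, hrir_left, hrir_right)
```

Personalizing the audio, in this framing, just means swapping in your measured HRIRs instead of the generic ones, which is why the game content itself doesn’t have to change much.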
I think the idea of being able to have games tailored to my own sound profile is amazing. It would completely change the way we as individuals experience games. They would be way more immersive, audio (not music) would be taken way more seriously in the discussion and judgement of video games, and everyone would have a more personal direct connection to the games they play. I do believe one day it will be easy to measure your own HRTF. There will surely be an app that you use with a VR headset or something like that. Since your HRTF generates an image, it would be easy to send to developers and they could tailor the game’s audio to your specific hearing profile. But that’s still a lot of work. If every person that purchased FIFA wanted their own sound profile applied that would be millions of profiles to implement for the developers. So there certainly needs to be an AI component added where the game can automatically apply your HRTF directly without human intervention. I imagine a world where you measure your HRTF directly on your console, have it tied to your username/profile, and AI applies it to all the games you play automatically. The only question is how does this work for couch co-op? Unless everyone has headphones, you’d still need to have the general HRTF in place or the audio experience might be severely reduced for other players if the profile owner has a really abnormal HRTF. But that’s the smaller hurdle in my opinion. In any case, I’m very interested in seeing how personalized audio experiences develop in gaming.
The SSD aspect of the presentation was very interesting but left me with a number of concerns about SONY’s approach to storage. To be clear, the use of an SSD is a great development for consoles. The things Cerny described made me really happy. The idea of no more wait times for fast travel, no more annoyingly long hallways and ladders just so games can render in the background, lightning fast respawn times, and many other examples given made the future of gaming sound great. And you could tell that Cerny was actually thinking about these problems the way gamers do. His examples spoke directly to the problems we often face. My new favorite gaming quote is “What we euphemistically refer to as fast travel.” Currently I’m playing Dark Souls 3 on PS4. If you’ve ever played a Dark Souls game then you know the fast travel function comes with really long loading times. Cerny implied that with the new SSD architecture this would no longer be the case. Amen!
While I thoroughly support the move to SSDs, SONY’s cost-cutting and proprietary measures are no bueno for me. The out of the box PS5 SSD will only have 825GB of storage. Now Cerny explained that with compression, this translates to considerably more, but at the end of the day it’s not nearly as many games as I want to store. Currently I have an internal 2TB HDD and an external 4TB HDD for my PS4. I’m using a total of just under 4TB of that 6TB. Now I’m willing to admit that I have a lot of free shovelware in my hard drives. But I also have a large collection of meaningful games I actually paid for. Technically speaking I have just under 450 applications listed in the purchased section of my PSN profile. Again, not all of this is meaningful paid-for software. But I’d say I’m storing at least 250 meaningful games between my two drives. Cerny stated that this is not the way PlayStation envisions the PS5 being used. The 825GB number was stated to have been chosen based on the average weekend use of players. In other words, he’s saying that if you look at the data of player usage, the average player only uses up to 825GB worth of software on their console in a single weekend. While that is probably true, or even inflated for wiggle room, it’s not the way most gamers handle storage management.
I’m happy to admit that at most I’m playing three console/PC AAA games at one time. Currently I’m playing Dark Souls 3 (PS4), Nier: Automata (PS4), and Animal Crossing: New Horizons (NS). I’m also casually playing a few other games irregularly, such as Smash Bros: Ultimate (NS), Pokémon Sword (NS), and Strange Brigade (PS4). In a given weekend it’s possible but unlikely that I’ll actually play all six of these games. Even if we said each game was 100GB, which I don’t think any of them actually are, that would still be only 600GB if they were all PS5 games. And that’s without applying compression. Cerny is right in saying that 825GB is enough storage space for the average user’s single weekend. But the implication is that PlayStation believes either that I want to manage my installed games for the weekend every week in advance, or that I have internet fast enough to erase and install new games on the fly. Both assumptions are grossly misinformed. Personally, I’m not doing the former, and while my internet is pretty solid, it’s not good enough to download a new 100GB game in a manageable amount of time relative to when I decide I actually want to play it. Now for most players this will be a rarity. It’s accurate to say that if you’re in the middle of a game, such as an RPG or adventure title, then you’re probably going to be playing it for a while, so you won’t need to uninstall/install games often. But that doesn’t change the fact that, in practical terms, if I did want to change games I’d most likely have to delete one of the ones I have installed. Personally I don’t like being asked to do that.
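The weekend-versus-library arithmetic above can be checked with a quick back-of-the-envelope sketch. The per-game sizes and the compression ratio below are invented for illustration; Cerny gave only the 825GB raw capacity, not per-game figures.

```python
# Back-of-the-envelope check of the storage argument. Install sizes and the
# compression ratio are assumptions for illustration, not official figures.

DRIVE_GB = 825  # stock PS5 SSD capacity, per the presentation

def fits(installed_sizes_gb, compression_ratio=1.0):
    """True if the listed installs fit on the stock drive after compression."""
    used = sum(installed_sizes_gb) / compression_ratio
    return used <= DRIVE_GB

weekend_rotation = [100] * 6   # six 100GB games, as in the text
print(fits(weekend_rotation))  # True: 600GB fits even without compression

full_library = [50] * 250      # ~250 "meaningful games" at an assumed 50GB each
print(fits(full_library, 1.5)) # False: even compressed, thousands of GB won't fit
```

Which is the whole point of the paragraph above: 825GB comfortably covers a weekend’s rotation, but it is nowhere near a whole library, so the design forces ongoing install management.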
The other reason storage is important is because it retains “ownership” of the games you buy. We’ve all seen games disappear from PSN and other online stores. This can happen at any time. Servers are repurposed, games are discontinued for political reasons, and businesses are sold or sued. There are countless examples of games being made unavailable to download, both temporarily and permanently, over the years. Not to mention that one day the PS5 servers will inevitably shut down. When that day comes, you don’t want to be limited to just 825GB worth of games to store forever. I’d be giving up more than half of my digital PS4 collection in that scenario. I buy a lot of games and I want to be able to play them at any time I choose, even long after the PS5 servers go down.
Of course SONY is aware that 825GB is low, so they have provided players the ability to upgrade storage. The problem is they did it in the most expensive way possible. The PS5 uses an M.2 SSD. From a hardware standpoint, that’s awesome. From a consumer standpoint, it’s an absolute nightmare. M.2 drives are really fast, but they come in varying physical sizes and are extremely expensive. A normal 2.5 inch SSD looks like a bargain compared to a large M.2 drive. Large M.2 drives aren’t common: 2TB is widely available, but 3TB+ drives are extremely rare. And again, since the dimensions vary, even if you do manage to find one, it may not fit in the PS5. The pricing is atrocious. A 2TB M.2 SSD is gonna be a minimum of $200, and they can go over $500, still at only 2TB. Add the fact that prices will mostly inflate with “PS5 compatibility” being used as a selling point, and you’re paying more for the storage than the console in some cases. For reference, you can get a 2TB 2.5 inch SSD for under $200 easy. And technically they go all the way up to 7.6TB. Good luck paying for that size though.
This storage limitation and cost issue is a huge problem for many, myself included. As soon as it was announced, people got angry. And rightly so, in my opinion. 1TB default drives are the minimum standard for consoles, Nintendo notwithstanding, in 2020. What I actually would like to see is a multiple SSD board on the PS5 that works like a PC motherboard. Imagine if you had three or four M.2 SSD slots and you could populate them as time goes on, thus increasing your storage without having to completely gut and reset your system every time you upgrade storage space. These would work more like interchangeable memory cards, with only the default one needing to be set up before initial startup, so you’d never have to gut your whole console and start over. While I will definitely buy a PS5, this storage issue means I won’t be buying one until way after initial launch. I’ll have to wait both for the price of the console to come down and for the price of a large compatible M.2 SSD to drop.
The future looks bright for actual gameplay. Mark Cerny’s presentation gave me high hopes for how games will play and sound on the PS5. But the way they want me to manage software is not acceptable. I will continue to store my entire digital library concurrently and if that means investing in large drives and having to wait longer to buy the console then so be it. I’m backlogged anyway so upgrading from PS4 later is a non-issue for me.
How do you feel about the PS5 based on current information?
When I was a kid there were no backlogs. Games were released very sparingly with maybe one to two new releases every two to three months at best. And many of those releases were skippable. Holiday season was the time when the good games dropped. I still remember waiting for Christmas to get The Legend of Zelda: Ocarina of Time (November 1998). This was a time when replaying games was not just common but the norm. There simply weren’t enough games to play. Not to mention that games were more expensive.
When I say more expensive what I really mean is that the prices didn’t drop. It’s fairly common to see a new release drop from $60 to $30 or less within a few months today. With the exception of Nintendo, it’s pretty much the norm. I bought Star Wars Jedi: Fallen Order a month after it released for $16 in a holiday sale. Worth every penny and then some, by the way. We didn’t have those price drops when I was a kid. Games simply were the price they were. We also didn’t have weekly free games on Epic Game Store, monthly free games on PlayStation Plus, and so on. You bought or borrowed every game you played, the prices weren’t discounted much if at all, and there weren’t a ton of games to play. This meant that you were usually caught up on games you actually wanted to play, assuming you had the money or friends to borrow games from. It’s not like that anymore.
I often feel like we as gamers have become spoiled when it comes to the volume of games available to us these days. There are just so many great games to play, releasing so often now. In 2020 we get Yakuza: Like a Dragon, Dragon Ball Z: Kakarot, Animal Crossing: New Horizons, Doom Eternal, Cyberpunk 2077, Nioh 2, Ghost of Tsushima, The Last of Us Part 2, Marvel’s Avengers, Watch Dogs: Legion, Gods & Monsters, and Romance of the Three Kingdoms XIV. That’s just 12 of the many AAA titles releasing this year. We can also be sure that most of them will either have additional content added or will take hundreds of hours to play at a minimum. Then there are the many remakes/remasters coming as well, such as Final Fantasy VII, The Wonderful 101, Resident Evil 3, and so on. We have well over one knockout game a month. There’s enough content to get every normal gamer through the year with games to spare.
Not only are there plenty of new games to play this year, but there are also all the games we still haven’t gotten to play. When I was a kid, we were waiting for the next release. “I have nothing to play” was a literal statement of fact, not an ironic statement of preference. I haven’t had nothing to play in a good 10 years. My backlog is so preposterous that I know I will never actually complete it. I have unplayed copies of Final Fantasy XV, World of Final Fantasy, The Surge, The Legend of Zelda: Link’s Awakening Remake, Middle-earth: Shadow of Mordor & War, The Witcher 2 & 3, Shadow of the Tomb Raider, and countless other AAA titles sitting in my backlog with the intention of playing them. And that doesn’t even account for the fact that my Black Friday/Christmas sale purchases from 2019 just arrived at the end of February, adding all of that to the pile. Which is to say that I have well over one AAA game a month at my disposal for the next several years without ever buying another game. And remember that this doesn’t include the countless indies I have and would like to play, doesn’t include the various freebies from Epic Game Store and PlayStation Plus when I don’t already own them (looking at you, Shadow of the Colossus and Sonic Forces), and doesn’t account for all the retro ports I never beat as a kid and would like to play available on Nintendo Switch Online.
My backlog is quite large, but it’s not abnormal. I don’t know any gamers that aren’t backlogged. Pretty much every person I know that considers themselves a gamer has accepted the fact that they will always be backlogged. Backlogs are like the US national debt: we’ve simply accepted them as part of life, and though we may complain about them constantly, we have no intention of putting in any serious effort or life changes to reduce them by a noticeable amount. Being a gamer these days means being backlogged as a natural state of being. I believe that this is the first time in gaming history that the industry is in a position to take advantage of that fact.
Let me preface the rest of this post by acknowledging that what I’m about to write is never going to happen. I’m aware of that fact. I have no delusions about the fact that this idea, though good and productive for all parties involved, goes against the status quo and would never actually happen. But I think it’s important to put it out there anyway.
As I said, games used to be in limited supply and expensive. The prices couldn’t drop in order for larger publishers to remain profitable, because they had a limited number of products and no additional revenue streams. Now we’re in the completely opposite situation. Not only are there so many games to play, but those games continue to be relevant, with some receiving additional content for multiple years. Rainbow Six Siege is four years old and just broke its concurrent player record on Steam. Games last way longer now and have tons of additional revenue streams with things like DLC, expansions, microtransactions, and e-sports revenue. What this actually means is that a company can put out a game and continue to support it for a long time, remaining profitable without releasing a new game for quite some time. What if that became normal practice?
People often complain about games getting delayed, such as Cyberpunk 2077, but there’s not really a valid reason to complain about those delays. As I’ve said, everyone is backlogged. You don’t need to play Cyberpunk 2077 today. You’ve almost assuredly got something else to play that you haven’t already beaten. You can also lean into online multiplayer in many games if you somehow have absolutely no backlog to speak of (which would be ridiculous). There are also plenty of free and older discounted games to take advantage of during the wait for a new release. Delays also mean a game gets more of its bugs worked out before launch. It means fewer patches required on day one. It means less crunch time for the developers. There are lots of good things that come out of delays with very few bad things, unless a project ultimately gets cancelled because of a delay, such as with Scalebound (never forget). So generally I don’t have a problem with delays. Why don’t companies leverage all the time they need to get a game right from launch like they used to? I’m tired of large day one patches, broken games that need to be fully updated, and hearing about developers getting worked to death to make an arbitrary deadline. It’s simply not necessary when people can fill the time. Especially when you’re a company like Ubisoft, where people can fill the time with other games already published by Ubisoft. Personally I’m nearing the end of Watch Dogs 2 and still need to play Assassin’s Creed: Origins & Odyssey. So the fact that Watch Dogs: Legion got delayed doesn’t faze me in the slightest. If anything it helps me.
When you look at this year’s AAA lineup you can see a large amount of corporate representation. Just about every large publisher is putting out something noteworthy this year. And most of these will be long form games with DLC, games as service content, and/or plenty of base content. So why do any of them need to release another big game in 2021? What if instead every publisher agreed to make 2021 a development year? No studio would publish any games; instead, all developers would work without a 2021 deadline so they could maximize the quality of the games that would have released that year. Why isn’t that a thing? All the larger players agreeing to periodically take a year off from releasing and just ride their current revenue streams, allowing studios more time and giving gamers a year to focus on their backlogs. This is the first time in history that taking a year off from publishing new titles won’t break the larger players. With so many additional revenue streams available now, they don’t need to release new games as often as they do and can still remain profitable.
Imagine what you could get done if you had an entire year where you were guaranteed that you wouldn’t miss out on any new games. How would that make you feel? What would you do with that time? You’d finally put some real work into your backlog. You’d revisit games you wish you had time to revisit. It could be a super productive time for many gamers. And people could save money for the next year of games.
Now obviously this wouldn’t be for everyone. Indie studios that can barely keep their doors open couldn’t take a year off like this, and shouldn’t. Streamers that focus on new games at release would be starved for content, which isn’t really a concern of mine, but it is something that should be acknowledged. Also let’s not forget that the PS5 and XBOX Series X are scheduled to release at the very end of 2020, so not releasing any games in 2021 specifically is a tall order. But that’s a specific situation that doesn’t happen every year. So focus on the concept rather than the specific dates.
Again, this won’t happen. Too many powerful people would complain too much about a year off with no additional revenue streams being added to the mix. And too many whiny gamers that don’t want to work on their backlog would take to the internet with change.org petitions and angry Reddit posts. Changing the status quo is hard. Changing the status quo when it will reduce profits, even if only temporarily, is nearly impossible. But I think it’s important to note that for the first time in history, such a thing could be comfortably implemented without a required lapse in total gameplay hours for consumers and without AAA game companies having to suffer real losses from not releasing for a year. The only scenario where a company really loses is if a game were already gold and ready to launch, because that would mean idle time for that studio where no work is actually getting done. But something as radical as an entire year devoted to backlog play would be coordinated in advance, so studios shouldn’t end up in that situation in most cases.
I think the idea of granting a year of gaming furlough to both consumers and developers would be a good thing. I think there would be many benefits, including the fact that gamers would be that much more appreciative of new games after having to wait an additional year to play them. People wouldn’t be nearly as critical after spending a year playing only older games.
What would you do with a backlog year? Would you be able to keep yourself occupied or would you have absolutely nothing to play? Would you appreciate the time to catch up with older stuff and save money or is it a non-issue for you?
Last week E3 2020 was officially cancelled due to concerns about COVID-19 aka coronavirus. If you have been reading my blog regularly for the past several years, then you know that I have been very negative about E3. I have called for a massive change to the structure of the event, I have maligned the ESA for their focus on using the event as a way to uplift media personalities, both professional and private, and I’ve said countless times that the event needs to be more focused on the public and providing them access. It is not inaccurate to say that I would happily have supported the show closing down for good. I even wrote a blog post in February pretty much saying that I believed E3 was on its way out within the next five years. But this is not how I wanted it.
While I did want to see E3 change or die, I wanted it to be a choice made in good faith. I did not want it to be at metaphorical gunpoint. I did not want the world to literally be collapsing under the weight of a pandemic that has led to the indefinite cancellation of the rest of the NBA season, several other events being cancelled or delayed indefinitely, and the delay of both movies and TV shows. Sure, I’ve been very negative about E3 for years, but not so much that I wanted things to get to a point where people started dying in order to get it cancelled. It’s fairly depressing to have predicted and even called for E3 to be cancelled and to then have seen it cancelled in this way. It’s kind of like being able to see the future but not its causes, and ultimately using that partial information to make things worse. Like being in an episode of That’s So Raven but much more depressing. Well, maybe not much more depressing . . .
It is sad to see not just E3 but many gaming events cancelled because of coronavirus. Taipei Game Show, which I attend every year in person, was also cancelled, or “delayed” to be completely accurate. I am encouraged that some larger brands have already stated that they will still be able to present their E3 announcements on time via digital means. This is what I’ve wanted to be implemented for years. Again, I didn’t want it to be a forced decision in order to literally save lives, but yes the E3 announcement cycle needs to be replaced with digital presentations and should have been years ago. Every so often Nintendo has the right idea before everyone else.
I hope the industry as a whole takes this opportunity to completely revamp the way gaming announcements are distributed. We and they should not be limited to big news at a couple of key events in limited locations in specific languages at pre-determined times every year. Developers and publishers have the ability to create and distribute digital presentations at any time to everyone in the world concurrently, in whatever language and style they want. The freedom of directly controlling and distributing information without having it filtered by media personalities and specific event dates should be taken advantage of by all developers, and for whatever reason really hasn’t been to a wide degree. There’s no reason a developer that’s ready to announce a project in February should have to wait until June, when a bunch of other projects will also be announced by their competitors. It would be much more beneficial to that studio, and in my opinion more effective, to announce the information they want when they’re ready, directly to consumers via social media.
I love the Nintendo Direct and PlayStation State of Play presentations. I wish PlayStation had a bit more consistency about when they release them, but I believe the models work, and more importantly, work well. Every publisher can and should do something similar. And they shouldn’t try to release them all at the same time. Imagine a world where every month you get to watch a new presentation with announcements about different projects you may or may not have known about, so you have enough time to properly analyze and consider each one, giving it a fair amount of consideration before rendering a verdict. Imagine being able to watch a company’s presentation without having to consider stupid questions like “Who won E3?” because each company presents its games as an independent entity at its own time, trying to deliver products they’re passionate about rather than compete for media hype. Imagine a world where presenters can talk about the games they’re presenting without constantly being interrupted by entitled YouTubers trying to get free special editions of unreleased games and garner hits to their channels. This is the world of game announcements I want to live in.
I want to live in a world where the popular media outlets are the ones that create the best content by the strength of their writing and presentation, not their access to information. I want to live in a world where all media, big or small, famous or just starting out, get information at the same time and can create their content the way they want to without having to worry about getting beaten out in the news cycle by someone who was given early access and handed an automatic win. We are in an age where consumers and developers no longer have to be held hostage by media entities and event organizers for exorbitant fees, favoritism, and inconvenient optics, both physical and digital. It’s now possible for any developer to present the content they want to present directly to the gamers with no middleman. I hope more of them take advantage of this opportunity moving forward.
Make no mistake, I am not happy about the spread of this virus. I am not happy that the world is being turned upside down because of it. I am not happy to learn that most of the world’s governments are run by ill-prepared career politicians who never really had the best interests of the public in mind or the ability or desire to protect them. I am not happy that gaming events, among other things, are being canceled. All of these things make me sad. But let us not ignore that these cancellations are an opportunity to completely overhaul the way games are announced, hyped up, and even released. Hopefully it won’t be wasted.
Stay safe, stay inside as much as possible, and use this time to work on your backlogs.
I’ve never been a fan of the games-as-a-service model. Honestly, it’s crippled my experience with a lot of games, Ubisoft games in particular, since that’s become their staple model. The fact is that, like many if not most gamers, I’m severely backlogged. I have games I bought years ago that have never been opened. I’m not alone in this; it’s a common “problem” for gamers, especially those of us who buy in bulk during sales. Because of this, I rarely have the time or patience to go back to a game I’ve already “beaten”. I put “beaten” in quotes because it’s hard to even declare a game beaten under the games-as-a-service model. That’s why “finished the main campaign” has become the more appropriate way to describe the experience of playing these games in recent years.
I have played some great Ubisoft games and still missed out on much of their later-released content, even though I almost always buy the gold edition. The Division is the best example of this for me. I think The Division offered one of the best online cooperative experiences I’ve ever had. I had an active clan that played daily. We did everything: beat every side mission, got every collectible, and dominated the Dark Zone. But eventually we all got bored and moved on to other games, as gamers do. Then, months after we had all moved on, they started introducing new content. But we weren’t all in the same place at that point. Some of us came back right away; others never came back at all. I tried to jump back in very late and it just didn’t work out. I heard the newer content was really good, but I never really got to enjoy it because I was busy enjoying other games. This was my experience with The Division 2 as well, save for the fact that I never formally linked up with a clan in that one. These sorts of experiences have soured a number of great games for me because I always feel like I’m missing out on content I paid for (Gold Editions). But I simply don’t have the time, or the patience, to wait around in an idle game for new content to arrive. This is largely why I’ve steered away from games-as-a-service titles as of late.
All that being said, I started playing Ghost Recon: Breakpoint day one. I thoroughly enjoyed the campaign and side missions. I had a terrible experience with the raid, though it did at least release while I was still playing the campaign, so I commend Ubisoft for the timing. But once I was done with the campaign, I was pretty much done with the game. I wrapped up my time with Breakpoint at the very end of December. Since then I’ve completed six other games; January was a rather productive month for me. At the very end of January, almost exactly one month after I finished and moved on from Breakpoint, Ubisoft held the Terminator event. The trailer was, and is, very good. The marketing email I received was also very compelling, as Ubisoft emails for games I’m already playing or have played tend to be. So I decided to jump back in. Breakpoint was still fairly fresh in my mind, and I happen to be a big Terminator fan. But I have to say that the main reason I jumped back in was that I had literally just finished a game and hadn’t yet started my next one, coupled with the fact that this was a limited-time event. Those two factors just happened to line up perfectly. If either hadn’t been true, I can’t honestly say I would have given this event a shot. But I’m very glad I did.
The Terminator event was really good, one of the best limited-time events I’ve ever played in a shooter. I usually hate limited-time events, but this one handled things correctly, and that’s what made it fun. The first thing I want to absolutely praise about the event is that it was short. I don’t mean short as in the amount of time it lasted; I mean short as in the amount of time it took to fully complete. There were 21 available rewards in this event, plus two plot-based guns. We were given nine days to finish the event (beat both main missions and enough side missions to collect all 21 rewards), but it only took three days to actually accomplish this. And when I say three days, I don’t mean 72 hours. I mean three days of completing two daily missions a day plus the two main missions, which came to only about six to eight hours of actual play. I consider that a good thing. This event wasn’t asking me for a new commitment; it was just asking me to visit an old friend for a little while. That made it enjoyable. I got to remember what I liked about the game without having to dive back in whole hog. The rewards were good. Mostly cosmetic, but stuff I actually enjoyed using. I bought the in-game store Terminator skins and used those Terminator shades. Are they useful? Not at all. Are they fun for old-school movie nerds? Hell yeah!
It was fun playing story missions that only took a few hours but tied directly into the Terminator narrative. It was interesting fighting Terminators and having to use a special gun to destroy them. It was cool having a boss fight where you pretty much fight Arnold Schwarzenegger by another name and haircut. It was a nice weekend experience. That’s the kind of content a backlogged gamer is comfortable returning to an already-beaten game for. No long-winded commitment that forces me to learn an entirely new gameplay system. No months-long timed daily mission grind. Just a nice story-driven weekend where I get to shoot killer robots instead of run-of-the-mill soldiers.
The story worked really well because it was based on an already well-established IP. They didn’t need to explain much about what was going on because everybody already knows how Terminator works, so they could quickly throw you into the action and let you start fighting killer robots immediately. It also fit really well with the fact that Breakpoint is already about fighting killer drones. The event also worked well because of the large map. While most will agree that the Breakpoint map is way too big, that size actually makes implementing events like this much easier. They can drop random stuff into the map without it being too noticeable to those who aren’t interested in playing the events. They could drop Decepticons into the map and there’s still a good chance you’d never see one.
My only real complaint about this event was the microtransaction content. There were a few skins I really wanted that required spending real money, even though I own the gold edition of the game. I didn’t buy them, but I have to say this was the first instance where I actually took issue with microtransactions in Breakpoint. Up to this point I always felt the complaints were overblown because the microtransactions didn’t meaningfully affect the gameplay experience. And while they didn’t affect actual gameplay in this instance either, a Terminator event where cosmetic Terminator gear is locked behind an additional paywall is pretty much the equivalent of affecting gameplay, in my opinion. But that comes down more to the limited selection of Terminator cosmetics available without microtransactions. If there were more skins than just the Terminators available at no additional cost, I wouldn’t care so much that I couldn’t get things like a Kyle Reese skin.
While I absolutely loved this event and would happily play more like it, I have to say that the game as a whole is still riddled with glitches. Even after 12 GB of patches and updates before starting the event, I experienced a ton of problems. My entire fight with the event’s final boss was odd because the boss room didn’t even render for me; I was walking around able to see only the enemies, completely blind to the room’s layout. It’s a wonder I got through the mission at all. Joining up with other players is still a lot of trouble. The fact that there were no event-specific matchmaking options was quite annoying, but I did end up doing some co-op play for the event missions a couple of times anyway.
All in all, I consider the Terminator event to have been rather successful. It’s certainly the type of content I’d like to see more of, and the way it was managed was very convenient and accessible. I still haven’t gone back to attempt the raid again, but if they keep doing events like this, I can definitely see myself returning to Breakpoint every so often for more short-term events.