A question about release quality
There are two main reasons for bugs and glitches at release: time pressure and a near-infinite number of systems to test. While PC gaming is for the most part great, it also suffers from the curse of letting us upgrade on our own. I say curse, because every time you swap out a component on your PC, you are changing the entire system. Things that worked well before might not work at all after the upgrade. Now picture two million people, each with their own system, trying to get the same game working. Some have state-of-the-art systems. Some have old systems. Some swear by nVidia. Some swear by ATI. Some are happy with their CPU. Others have the same CPU, but overclock it. You can see the problems here. It’s nearly impossible to make a game run flawlessly on every single system imaginable. Add in Mac and Linux support, and it gets hilarious how many systems the game has to run on.
Of course, even so, it’s fully possible to wipe out most of the bugs and glitches, given time. But that leads us to the first issue: time and money. Most companies simply don’t have the time and money to clear out all the bugs in a game before launch. There are a lot of reasons for this, but for now it’s sufficient to say that the developers are only a small part of the whole machinery. When they are done with the game, there is still a ton left to do before it ships. Everyone who isn’t directly working on the game still needs to get paid, so if the game is delayed, the company won’t make any money, and is thus essentially losing money by being forced to pay those pesky workers who can’t do anything while the game is delayed. So the faster you ship the game, the quicker you get money.
This is also a reason why PC-ports of AAA-games are usually a bad idea. The latest example is Batman: Arkham Knight. The console version is great, but the PC version? Don’t even bother. But why focus on the PC-version, rather than polishing the console version?
Put all that together, and I have learned to avoid PC-ports of AAA-games like the plague, and I’m fine with small bugs and glitches in PC-exclusives like Guild Wars 2. As long as the bugs don’t mess up my game or prevent me from finishing it, I don’t mind them. Even an occasional CTD is fine. Annoying, yes, but it doesn’t take long to restart the game, and I often start near where I crashed anyway. No biggie.
There’s a lot that will happen in a live environment with hundreds of thousands of players that is impossible to duplicate in any testing environment, even if you use a simulation of that many players. That’s where the disconnect comes from between the community and the developers/programmers when people think, like you did, ‘did they not test for that?’. Also, real players will often try to do things that no tester has even thought of, even testers who are players themselves; that’s how things get by the devs. And believe me, they always ask testers to try to break any new content or designs they may be working on, and testers do try. It’s just not possible to test for every single contingency that might arise, even ones that would appear to be ‘no-brainers’.
Bugs are never acceptable for a company of any size. But they are an inevitable part of software development, and leeway should be given with that in mind.
What Anet utterly fails at is how they handle issues, and the ignorance they show toward their own fundamental game design. The HP buff to World Bosses is only the most recent in a long line of poorly thought-out ideas, along with a lack of large-scale testing for the good ones, trying to fix design problems (or even bugs) with raw number adjustments rather than actual design fixes, and shifting priorities before they even finish implementing a system into the game. They also have a bad habit of disposable content (see all of Season 1) that could be utilized in other areas of the game, or even as seasonal events.
Given the current climate, I expect HoT’s launch to be a complete mess on a technical level. Hopefully the betas will be more productive this time around, since they’ve always used very limited testing windows. I mean, we have scripting issues that have existed since the pre-launch beta, and events (and related achievements) that are perpetually broken. In fact, an ironic side effect of the megaserver design is that the increased map spawning frequency actually improved our chances of broken events being temporarily functional. But many to this day (I’m looking at you, Orr) still break in the exact same spot.
Anet’s level of bugginess is extraordinary. This is exacerbated by:
- Not even being close to ready (testing-wise) despite the slogan “when it’s ready.”
- Severe lack of communication. Is it a bug? Who knows. When will it be fixed? Who knows. Will it be fixed? Who knows. E.g. the Lupi autoattack issue: three months and no acknowledgement that it’s even a bug, despite several promises of “checking into it, will get back to you.”
- So many serious bugs that are never fixed, exacerbated by #2.
- Extremely slow bug fixing, exacerbated by #2.
- So many obvious bugs, exacerbated by #2.
A public or semi-public beta test server would theoretically help a lot, but it seems they don’t even have time to test at all and everything needs to be released right now.
I share the same answer as Lord Kuru and starlinvf, 100%. From much research and my own investigation: many serious problems were there before, and no serious action was taken to resolve them; that is why they appear again.
" The more you hide your problems, the more they show. The more you deny your problems, the more they will grow "
" You can not fix a problem that you refuse to acknowledge, You can not acknowledge a problem that you refuse to fix "
Ankur