An Engineering Update on the Dragonflight Launch

With Dragonflight’s recent launch behind us, we want to take some time to talk with you in more detail about what occurred these past few days from an engineering viewpoint. We hope this will provide a bit more insight into what it takes to make a global launch like this happen, what can go right, what hiccups can occur along the way, and how we manage them.

Internally, we call events like last Monday “content launch,” because launching an expansion is a process, not a single day. Far from being a static game running the same way it did eighteen years ago (or even two years ago), World of Warcraft is constantly changing and growing, and our deployment processes change with it.

Expansions now consist of several smaller launches: the code first goes live running the old content, then pre-launch events and new systems turn on, and finally, on content launch day, new areas, quests, and dungeons. Each stage changes different things so we can find and fix problems. But in any large, complex system, the unexpected can still occur.

One change with this expansion was that the content launch was triggered using a timed event: multiple changes to the game can be scheduled to all happen at a particular time. Making these changes manually carries the risk of human error, or of an internal or external tool outage. Using a timed event helps to mitigate these risks.
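To make that concrete, here’s a simplified sketch of what a timed event can look like. This is illustrative only (the names and structure are stand-ins, not our actual systems): several changes are bundled behind a single trigger time, and the server applies them all together once the clock passes that moment.

```cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <vector>

// Hypothetical sketch: bundle several changes behind one trigger time so that
// no one has to apply them by hand at the moment of launch.
struct TimedEvent {
    std::chrono::system_clock::time_point fireAt;   // e.g. 3:00 p.m. PST on launch day
    std::vector<std::function<void()>> changes;     // everything that should flip together
    bool fired = false;

    void tick(std::chrono::system_clock::time_point now) {
        if (fired || now < fireAt)
            return;
        for (auto& change : changes)
            change();                                // apply all changes at once
        fired = true;
    }
};

int main() {
    TimedEvent contentLaunch;
    contentLaunch.fireAt = std::chrono::system_clock::now() + std::chrono::seconds(1);
    contentLaunch.changes.push_back([] { std::cout << "Unlock Dragon Isles zones\n"; });
    contentLaunch.changes.push_back([] { std::cout << "Enable launch quests\n"; });
    contentLaunch.changes.push_back([] { std::cout << "Start the boats\n"; });

    // In a real server this check would live inside the main update loop.
    while (!contentLaunch.fired)
        contentLaunch.tick(std::chrono::system_clock::now());
}
```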

Another change in Dragonflight: greatly enhanced support for encrypting game data records. Encrypted records allow us to ship our client with the data the game needs to show cutscenes, share voice lines, or unlock quests, while keeping that data from being mined before players get to experience it in-game. We know the community loves WoW, and when you’re hungry for any morsel, it’s hard not to spoil yourself before the main course. Encrypted records allow us to take critical story beats and hide them from players until the right time to reveal them.
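Here’s a toy illustration of the idea (the names are made up and the XOR “cipher” is just a placeholder; a real system would use proper cryptography): the record ships with the client in unreadable form and only becomes usable once the key is delivered at the right moment.

```cpp
#include <cstdint>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Illustrative only: a record ships with the client encrypted and can only be
// read once the server delivers the key (for example, when a timed event fires).
struct GameRecord {
    uint32_t id;
    std::vector<uint8_t> encryptedPayload;

    std::optional<std::string> decrypt(const std::vector<uint8_t>& key) const {
        if (key.empty())
            return std::nullopt;                      // no key yet: record stays hidden
        std::string plain;
        for (size_t i = 0; i < encryptedPayload.size(); ++i)
            plain.push_back(static_cast<char>(encryptedPayload[i] ^ key[i % key.size()]));
        return plain;
    }
};

int main() {
    std::vector<uint8_t> key = {0x5A, 0x13, 0x77};
    std::string secret = "Cutscene: the Aspects return";

    GameRecord record{42, {}};
    for (size_t i = 0; i < secret.size(); ++i)
        record.encryptedPayload.push_back(static_cast<uint8_t>(secret[i] ^ key[i % key.size()]));

    std::cout << "Before the key arrives: hidden\n";
    if (auto text = record.decrypt(key))              // key delivered at the reveal moment
        std::cout << "After the key arrives: " << *text << '\n';
}
```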

We now know that the lag and instability we saw last week were caused by the way these two systems interacted. Together, they forced the simulation server (which moves your characters around the world and performs their spells and abilities) to recalculate which records should be hidden more than one hundred times a second, per simulation. Because a great deal of CPU power was spent on these calculations, the simulations bogged down, and requests from other services to those simulation servers backed up. Players experience this as lag and error messages like “World Server Down.”

As we discovered, records that stayed encrypted until a timed event unlocked them exposed a small logic error in the code: a misplaced line signaled to the server that it needed to recalculate which records to hide, even though nothing had changed.
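We won’t reproduce the exact code here, but the pattern is a familiar one. A hypothetical sketch shows how a single misplaced line can turn an occasional recalculation into one on every pass through the loop:

```cpp
#include <iostream>
#include <unordered_set>

// Hypothetical illustration of the class of bug described above.
// The hidden-record set only needs to be rebuilt when a record is locked or
// unlocked, but a misplaced statement can force the rebuild on every tick.
struct RecordVisibility {
    std::unordered_set<int> hiddenRecords;
    bool dirty = true;          // true when the set must be recalculated
    long recalculations = 0;

    void onTimedEventCheck(bool anyRecordChanged) {
        // BUG (sketch): marking dirty outside the "did anything change" check
        // means every call schedules a full recalculation.
        //   dirty = true;
        if (anyRecordChanged)
            dirty = true;       // FIX: only recalculate when something changed
    }

    void tick() {
        if (!dirty)
            return;
        // Imagine an expensive pass over every encrypted record here.
        ++recalculations;
        dirty = false;
    }
};

int main() {
    RecordVisibility visibility;
    // Simulate a server ticking many times per second
    // while no records actually change state.
    for (int tickNumber = 0; tickNumber < 1000; ++tickNumber) {
        visibility.onTimedEventCheck(/*anyRecordChanged=*/false);
        visibility.tick();
    }
    std::cout << "Recalculations performed: " << visibility.recalculations << '\n';
    // With the fix: 1 (the initial build). With the misplaced line: 1000.
}
```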

Here’s some insight into how that investigation unfolded. First, the clock strikes 3:00 p.m. PST. We know from testing that the Horde boat arrives first, and the Alliance boat arrives next. Many of us are logged in to the game, our characters sitting on the docks in both locations in one window, while we watch logs or graphs or dashboards in other windows. We’re also on a conference call with colleagues from our support teams from all over Blizzard.

Before launch, we’ve created contingency plans for situations we’re worried about as a result of our testing. For example, for this launch, our designers created portals that players could use to get to the Dragon Isles in case the boats failed to work.

At 3:02 p.m. the Horde boat arrives on schedule. Hooray! Players pile on, including some Blizzard employees. Other employees wait (they want to be test cases in case we must turn on portals). The players on the boats sail off, and while some do arrive on the Dragon Isles, many more are disconnected or get stuck.

Immediately we start searching logs and dashboards. There are some players on the Dragon Isles map, but not many. Colleagues having issues report their character names and realms as specific examples. Others start reporting spikes in CPU load and in traffic to the NFS (Network File System) storage that our servers use. Still others are watching in-game, reporting what they see.

Now that we’ve seen the Horde boats, we start watching for the Alliance boats to arrive. Most of them don’t, and most of the Horde boats do not return.

A picture emerges: the boats are stuck, and Dragon Isles servers are taking much longer to spin up than expected. Here’s where we really dig in and start to problem solve.

Boats have been a problem in the past, so we turn on portals while we continue investigating. Our NFS is clearly overloaded. There’s a large network queue on the service responsible for coordinating the simulation servers, making it think simulations aren’t starting, so it launches more and starts to overwhelm our hardware. Soon we discover that adding the portals has made the overload worse, because players can click the portals as many times as they want, so we turn the portals off.
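That loop is a classic distributed-systems failure mode: when “started” acknowledgments are stuck behind a congested queue, a coordinator that only watches for the missing ack concludes the launch failed and starts another instance, which adds even more load. A simplified, hypothetical model of the pattern (the timeouts and numbers are invented for illustration):

```cpp
#include <iostream>

// Simplified illustration of the over-provisioning loop: when "started"
// acknowledgments sit in a congested queue past the timeout, the coordinator
// assumes the launch failed and starts another simulation, adding more load.
int main() {
    const int timeoutSeconds = 30;       // how long the coordinator waits for an ack
    const int ackDelaySeconds = 90;      // how long acks actually take under congestion
    int runningSimulations = 1;          // the instance we actually asked for

    for (int elapsed = timeoutSeconds; elapsed <= 300; elapsed += timeoutSeconds) {
        bool ackReceived = elapsed >= ackDelaySeconds;   // acks eventually drain
        if (!ackReceived) {
            ++runningSimulations;        // no ack yet, so the coordinator launches a duplicate
            std::cout << "t=" << elapsed << "s: no ack seen, launching simulation #"
                      << runningSimulations << '\n';
        }
    }
    std::cout << "Simulations running for one needed instance: "
              << runningSimulations << '\n';
}
```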

As the problems persist, we work on tackling the increased load to get as many players in to play as possible, but the service is not behaving as it did in pre-launch tests. We continue to troubleshoot, ruling out things we know aren’t the cause based on those tests.

Despite the late hour, many continue to work while others head home to rest so they can return early the following day with a fresh start and relieve those working overnight.

By Tuesday morning, we have a better understanding of things. We know we’re sending clients more messages about quests than usual, although later discoveries will reveal this isn’t causing problems. A new file storage API we’re using is hitting our file storage harder than usual. Some new code added so quest givers can beckon players seems slower than it should be. The service is taking a very long time to send clients all the data changes made in hotfixes. And reports are coming in that players who have made it to the Dragon Isles have started experiencing extreme lag.

Mid-Tuesday morning a coincidence happens: digging deep into the new beckon code, we find hooks for the new encryption system. We start looking at the question from the other side: could a slow encryption system explain these and the other issues we’re seeing? As it turns out, yes it can. The encryption system being slow explains the hotfix problem, the file storage problem, and the lag players are experiencing. With the source identified, the author of the relevant part of the system was able to pinpoint the error and make the needed correction.

Pushing a fix to code used across so many services isn’t like flipping a switch: new binaries must be pushed out and turned on. We must slowly move players from the old simulations to new ones for the correction to be picked up. In fact, at one point we try to move players too quickly and cause another part of the service to suffer. Some of the affected binaries cannot be corrected without a service restart, which we delay until the fewest players are online so as not to disrupt those already in the game. By Wednesday, the fix is fully out and service stability is dramatically improved.

While it took some effort to identify the issue and get it fixed, our team was incredibly diligent in investigating it and getting it corrected as quickly as possible. Good software engineering isn’t about never making mistakes; it’s about minimizing the chances of making them, finding them quickly when they happen, having the tools to get fixes out right away…

…and having an amazing team to come together to make it all happen.


—The World of Warcraft Engineering Team


342 Likes

Interesting to hear about stuff on the technical side of things, i.e. what gets monitored, what becomes a red flag, etc. Might just be the software developer in me of course, but I wouldn’t mind seeing more posts like this in the future.

109 Likes

This was awesome to read. Totally love this Behind the Scenes perspective :dracthyr_yay_animated:

64 Likes

I love these sorts of posts, giving looks into the internal goings on. Would be great to see more of these on a regular basis.

:dracthyr_heart:

51 Likes

I love hearing about what goes on behind the scenes! Especially during a launch.

24 Likes

Neat. I mean, it’s nice you guys fixed the day 1 issue, but you’re not out of the woods. Azure Span is still a disaster, so taking a lap like this feels odd.

In my eyes you fixed the Tier 0 error but you still have major Tier 1 product issues.

12 Likes

I want to preface this by saying I’m enjoying the expansion so far. I am also a senior software engineer by trade, having worked for about 15 years in the industry, including on systems that require high availability and on large distributed systems that are queried thousands of times per second. It’s not the same as video game programming, but I have a vague idea of what’s going on.

You need to be better than this. The quality of the BFA, SL, and DF launches has not been up to the standard Blizzard set 10 years ago. You’re a multi-billion dollar mega game studio. Once upon a time, Blizzard released games when they were ready, and the Shadowlands and Dragonflight launches in particular have felt rushed.

Having said that, I’m excited for the first time since Legion with the story and direction of the game and I’m looking forward to what’s to come. Thank you for sharing this.

12 Likes

This was an actual “test it on production” issue, lol. It’s kind of funny. I feel less bad that the small-ish company I work for does this kind of thing when a billion dollar company is testing their code on prod, lol.

8 Likes

Knowing the encryption system was one of the bigger problems, will you try to improve on it for future expansions, or look for a different way to achieve what you wanted?

1 Like

And none of this tells us why streamers were still able to play while us regular people were left to rot.

4 Likes

textbook entitlement

I love the explanation of some of the things that go on behind the scenes. Thank the engineering team for all their hard work and the communication! Thank you so much Kaivax for sharing.

10 Likes

Well, when I pay for a service I expect it to work. That’s not entitlement, that’s an expectation of a service they agreed to provide in the ToS when you pay.

5 Likes

Did you not play at all for the past week? Time for a refund.

1 Like

You mean why Asmongold kept crashing for 3 hours while I was able to quest?

2 Likes

Encryption is almost for sure the most practical way to achieve their goals, so I suspect the focus will be on improving implementation more than replacing it.

8 Likes

Cool. Most of Area 52 could not even reach the continent, even after they added the portal.

And that’s precisely why I’m not on one of the largest servers out there: they always seem to be the most riddled with problems. And with cross-realm and cross-faction play now available, there’s no reason to be on a highly populated server.

2 Likes

Great article! Doing DevOps for a living, I can relate to a lot of the information here all too well.

4 Likes