Is classic getting dedicated physical servers?

Ok, let me try to spell this out for you, because your reading comprehension here is very poor.

The game USED to run on a slow tick rate that processed EVERYTHING.

NOW the game runs on a FAST tick rate, but every action that needs handling gets a priority, so not all interactions are handled every tick. That way the servers can run much faster tick rates without having to spend every tick checking everything.

In Vanilla, the slow tick rate that handled spells caused poor interactions… so the faster tick rates solved those problems. Because they didn't want those interactions to work that way, they made spells a high priority (checked every tick).

Now, to simulate the old behavior, they are dropping the priority of spells so they only get checked every few ticks instead of every tick. The servers still run at the faster tick rate and still handle the same number of spells; they just let the casts stack up for a few ticks before actually processing them.

The Classic servers will still run on the faster tick rate, and most of the priorities are staying the same; however, they are adjusting a few of them to simulate slower tick rates. The game is not going to run at a slower tick rate.
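A minimal sketch of the scheme described above, with invented names and an assumed interval: the loop itself stays fast and high-priority work runs every tick, while queued spell casts are only flushed every Nth tick to simulate the old, slower processing rate.

```python
# Hypothetical sketch: a fixed fast tick loop where spell casts are a
# low-priority system, flushed only every SPELL_INTERVAL ticks.
# The interval value and all names are assumptions for illustration.

SPELL_INTERVAL = 4  # process queued casts every 4th tick (made-up value)

class Server:
    def __init__(self):
        self.tick = 0
        self.spell_queue = []   # casts stack up between flushes
        self.processed = []

    def queue_spell(self, cast):
        self.spell_queue.append(cast)

    def update_movement(self):
        pass  # placeholder for high-priority per-tick work

    def run_tick(self):
        self.tick += 1
        self.update_movement()            # every tick
        if self.tick % SPELL_INTERVAL == 0:
            # low priority: flush everything that stacked up
            self.processed.extend(self.spell_queue)
            self.spell_queue.clear()

server = Server()
server.queue_spell("Frostbolt")
for _ in range(3):
    server.run_tick()                     # ticks 1-3: still queued
assert server.processed == []
server.run_tick()                         # tick 4: queue is flushed
assert server.processed == ["Frostbolt"]
```

Note the loop never slows down; only the spell queue's flush frequency changes, which is the distinction being argued over in this thread.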

Except that's not what they said they were doing…

They are using clouds because clouds are cheaper and easier and reduce risk, and therefore maximize profit and minimize losses.

Please don’t play dumb and ask if Activision Blizzard would really do something like maximize profit.

If the server is essentially sleeping for a tick to simulate Vanilla, that’s not causing load.

That's not what it's doing…

It's still processing stuff, just not spell casts.

Yeah, he's clearly not experienced here. He's having trouble understanding how bandwidth is affected by processing frequency. It's not something that's easy to teach in a forum post, especially without being condescending. I'm trying to think of a way to paint a picture, but everything I come up with ends with me seeming like I'm talking down to the guy. And he refuses to Google this or read any links, so it's like… okay.

This is Blizzard directly telling the players that all spell casts are being moved to a low-priority loop to be PROCESSED AT THE FREQUENCY of Vanilla. Processed is the key word there. So is being aware of WoW's code well enough to know that 90% of everything in the game is a "spell" with a set SpellID; anyone who has run a Mangos server to fool around as a GM can tell you that. By putting 90% of all load processing (spells) on a lower tick rate, the server's remaining load is absolutely minuscule, with only minor things like player location being updated at current rates. This frees enormous amounts of resources.


Sounds good, I think we’re done here.


Lol, exactly. Here's some more information, with a Blue post explaining tick rate in WoW: how it works now versus how it worked in the past:

I was wondering when this’d get brought up. I’ve hinted about this in the past in a couple interviews… Yes, there was a very significant underlying change here, that may have implications for theorycrafting (though minor).

I don’t want to get too deep into the under-the-hood workings of WoW servers, but here’s a super short version. Any action that one unit takes on another different unit used to be processed in batches every 400ms. Some very attentive people may have noticed that healing yourself would give you the health instantly (minus client/server latency), whereas healing another unit would incur a delay of between 0ms and 400ms (again, on top of client/server latency). Same with damaging, applying auras, interrupting, knocking back, etc.

That delay can feel bad just due to the somewhat laggy responsiveness feeling, but also because the state of things can change during that time. For example: Holly the Holy Priest is healing Punky the Brewmaster. Punky spikes low, and Holly hits Guardian Spirit in a panic. The server verifies that Holly is able to cast it, and that Punky is alive (great!). The cast goes off, Guardian Spirit goes on cooldown, and a request is placed for the Guardian Spirit aura (that prevents dying) to be placed on Punky. That request will be filled next time the 400ms timer loops, which happens to be 320ms from now. 250ms later, the boss lands another hit on Punky. Punky dies. Sadface. Another 70ms goes by, and the Guardian Spirit aura request pops up, and goes "Hey guys, I’m here!.. Aww… damn, I missed the party. Sadface."

We no longer batch them up like that. We just do it as fast as we can, which usually amounts to between 1ms and 10ms later. It took a considerable amount of work to get it working that way, but we’re very pleased with the results so far; the game feels noticeably more responsive.

I can’t guarantee that you’ll never ever again run into cases where Guardian Spirit went on cooldown and the tank still died… but it’ll be literally 40x rarer than before, and the whole game will feel more responsive too.

— Celestalon, 2014/06/18 at 2:30 AM (Patch 5.4.7)

The Punky example the Blizzard poster gives here should clear this up for you. The server does not actually expend resources processing UNTIL the tick is reached. This means fewer ticks == less time the server spends processing overall. This is why fewer ticks are less stressful on a server, and why a server with a higher tick rate is naturally more expensive if you want it to perform well under pressure.
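The timing in the Guardian Spirit story can be sketched as a tiny model of the old batched behavior: a request is not processed when it's made, but at the next 400ms batch flush. All times are the ones from Celestalon's example; the function name is an invention for illustration.

```python
# Toy model of the old 400ms batch behavior described in the Blue post.
# Requests queue up and only resolve at the next batch boundary.

BATCH_MS = 400

def flush_time(request_ms, next_flush_ms, batch_ms=BATCH_MS):
    """When a request made at request_ms actually gets processed:
    the first batch boundary at or after it."""
    t = next_flush_ms
    while t < request_ms:
        t += batch_ms
    return t

# Guardian Spirit aura requested at t=0; the next flush is 320ms away.
aura_processed = flush_time(0, 320)     # resolves at t=320
boss_hit = 250                          # the hit that kills Punky lands first

assert aura_processed == 320
assert boss_hit < aura_processed        # Punky dies before the aura applies

# A request made just after a flush waits nearly a full extra batch.
assert flush_time(330, 320) == 720
```

This is why the post says the new immediate model (1–10ms) makes the Guardian Spirit failure roughly 40x rarer: the window between "cast resolved" and "aura applied" shrinks from up-to-400ms to a few milliseconds.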


Except what you seem to fail to understand is that they are NOT slowing the ticks down. They are keeping them fast; they just drop the priority of spell casts.
You still have the same number of spells being processed over the same number of ticks.

It still runs through all the things it needs (or doesn't need) to check, and it processes the same amount of data.

Let’s say we have two identical servers running identical software.

On server “A” we set the tick / update rate to 10 billion per second.

On server “B” we also set the tick / update rate to 10 billion per second, but we modify the software to only send updates in batches of 60 per second.

Assuming the same amount of client activity on the server, which server will be under more load?


This is where you are getting lost. Dropping the priority is, by definition, lowering the rate at which spells are PROCESSED! Listen to Blizzard:

Do you see them, right there, TELLING YOU what dropping spell casts to a low-priority loop does? Because that's literally them TELLING you the effect of the change.

No… You don’t. That’s the point. You are saying the opposite of what Blizzard told you:

Processed AT THE FREQUENCY of Vanilla WoW… not processed immediately and then again at the 400ms batch… processed at the frequency of Vanilla. Which was 400ms.

No… It does not. First of all, that would be computationally redundant. Second, if it still processed the things it needed to check immediately, then two mages couldn't sheep each other, because the check would be done and grant one mage's spell priority over the other. The batching doesn't PROCESS anything except on the 400ms loops. That's what all the Blizzard posts I'm linking are telling you in plain English. Let me link it again…

Any action that one unit takes on another different unit used to be processed in batches every 400ms

This is such a specific and direct statement that it should be impossible for you to misinterpret it. This is a Blizzard employee literally telling you that the data is NOT processed as it's received, but in 400ms batches. Specifically. There is no way to interpret this any other way… It's impossible.

Edit: No offense, but it's also long past time for you to realize that you are unable to provide citations to back any of your assumptions, and yet the opposing side has receipts for days… even from Blizzard employees directly. Either you are wrong, or the whole world is collectively conspiring to put incorrect information all over the internet to make you LOOK wrong. Which is more likely, dude?


Depends on how “B” is coded…

Also not quite what is happening…
The way you would make "B" less of a strain is by cheating, which they can't do in the model they made, so "B" would technically take more load.

Why do I need to provide citations when you keep providing ones that prove what I am saying is correct…

Just because you can't read properly does not mean others can't.

U wot m8…

Tickrate isn’t batching.


Not ever. Never ever.

Tickrate refers to the rate at which data is sent and received from the server.

Batching refers to which “slot” a particular point of data is executed in.

Say the polling rate for data is 1/30 and the batch rate is 1/10, so three polls fall into each batch slot. If you and another character send data to the server and it is received at poll point 0, 1, or 2, you will both be in batch slot 1. If the data is received at poll point 6, 7, or 8, you will both be in batch slot 3.

The rate at which a server executes commands DOES NOT have to be directly tied to the server’s polling rate.

Source: (Non-Blizz) programmer here. This is literally netcode 101.

You can send and receive data as fast as any connection and hardware allow and limit the rate at which commands sent to the server are executed from “as fast as we get them” to “as slow as we want”.
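Under a 0-indexed reading of the poll points in the example above (30 polls per second, 10 batches per second, so three polls per batch slot), the poll-point-to-slot mapping is a one-liner. The function name is made up for illustration.

```python
# Sketch of the poll-point -> batch-slot mapping from the example above.
# Poll points are 0-indexed; batch slots are 1-indexed, as in the post.

POLLS_PER_SECOND = 30
BATCHES_PER_SECOND = 10
POLLS_PER_BATCH = POLLS_PER_SECOND // BATCHES_PER_SECOND  # 3

def batch_slot(poll_point):
    """Map a 0-indexed poll point to its 1-indexed batch slot."""
    return poll_point // POLLS_PER_BATCH + 1

assert [batch_slot(p) for p in (0, 1, 2)] == [1, 1, 1]
assert [batch_slot(p) for p in (6, 7, 8)] == [3, 3, 3]
```

The key takeaway matches the poster's point: the polling rate (how often data arrives) and the batch rate (how often commands execute) are independent knobs.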

With all of this in mind, we now take into consideration which points of data are processed on which side of the internet connection. Is the damage of an attack, after any character modifiers, calculated by the client or the server? Is that damage passed to a check on the client side or the server side? Etc., etc.

If all calculations are performed server-side, this is where load comes into play, and a lower batch rate can reduce performance, since you can potentially spike lots of calculations at every batch boundary.

It's like opening multiple programs with a single click on your PC vs. opening one every 5 seconds. Opening all at once yields a greater load spike on the CPU, whereas allowing one to load (or partially load) before the next reduces load spikes.

And on and on we go down this rabbit hole

Knowing all of this - draw your own conclusions on performance vs potential player saturation in a single area at once.


I remember back in Vanilla waiting in Ironforge for an impending horde attack. Two raid groups is all it took to take down the server. The servers these days can easily handle that.

There are no large companies running (what you call) “physical” servers any longer. They are more commonly referred to as “bare-metal” or “single-box” by those who have to deal with them.

Single-box servers are single points of failure, meaning if something goes wrong, the service is interrupted. That is unacceptable for five-nines-uptime services (WoW is not a five-nines service, more like two nines due to weekly server maintenance, but the goal is the same: uninterrupted service).

What service-oriented companies employ now are cloud services, either their own or leveraged from google/apple/amazon/etc. The difference is that a single-box/bare-metal server can never expand beyond the processing power of the hardware in its own box, whereas a cloud-based system may bring additional servers into the processing loop to handle increased load from unexpected traffic (such as a big raid descending on Stormwind).


As for the processing-loop argument, there have been a few eloquent descriptions (particularly from Apocryphal above) that do a good job of describing the difference in load between batching and real-time (or near-real-time) message handling.

I've noticed much trolling and blind-eyeing in this thread. I'm not sure why I'm feeding it more, but I felt the need to weigh in, given that I'm a software developer and have some knowledge of the field.


There is no need for dedicated servers. Blizzard has been using virtual servers for some time, and with how poorly BfA is performing, there is plenty of bandwidth available on existing physical servers.

I mean, WoW once managed 12 million+ players… it's likely well below 3 million now… and even if it were, say, 6 million, that's a LOT of bandwidth free for Classic.

Now if my prediction is correct and Classic overperforms… :slight_smile:

I have a question regarding this. I will admit that I know next to nothing about server tech or the cloud. My concern is that this new server tech requires sharding to keep it running smoothly. I’m not trying to turn this into a sharding discussion or debate, it’s just something I have been curious about for some time.

We do know, as the OP said, that the new tech can't seem to handle the load the old servers did in Vanilla, and that too many people will either cause crashes or cause the area to break into shards to keep it stable.

My concern comes down to two main things that would ruin Classic for me personally. The first is the fear that this new tech depends on sharding to keep the servers stable by avoiding too many people in a given area. If that's the case, then sharding wouldn't be restricted to the beginning and starter zones, but would instead turn on automatically whenever too many people are in an area: world PvP, the opening of the Gates of AQ, etc.

That brings me to my other concern: what happens to the large, open-world PvP that took place in Vanilla? I remember being in massive raids on Orgrimmar, the UC, Tarren Mill, etc. Will the servers crash or be sharded? Either scenario ruins Classic to a big extent, in my opinion, if those raids I once loved can no longer happen.

Am I correct in how this new tech works? It’s highly possible I am wrong, which is why I’m asking the question to understand this properly.

Well, all I can say is that, based on the many, many threads on sharding, Blizzard has to be very well aware of this concern from the community. They've implied very limited use of sharding, so hopefully its impact doesn't extend past the first few weeks of launch, and afterwards they remove it from the game entirely.

I can't really tell you exactly how Blizzard's cloud computing services work, or why exactly there may be some issues with small, densely populated areas, because it's Blizzard's internal system. My guess is that for whatever reason they limit the resources that can be allocated to a given zone. Theoretically they should just be able to increase that limit, but I can't say for sure.

Cloud computing basically means you have many servers working together to distribute tasks evenly. So instead of having one dedicated server for every realm, you can actually have fewer servers overall, because you don't need the full capacity of a server for each realm 24/7. This saves Blizzard a ton of money, but it also reduces the maximum potential output with all servers running at full capacity.
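The pooling point above can be shown with made-up numbers: realms peak at different times, so a shared pool sized to the worst combined hour needs less total capacity than giving every realm hardware sized for its own individual peak.

```python
# Toy illustration of capacity pooling. All realm names and load numbers
# are invented; real cloud scheduling is far more complicated.

# Hourly load samples for three realms (their peaks fall in different hours).
realm_loads = {
    "RealmA": [10, 80, 20],
    "RealmB": [70, 20, 10],
    "RealmC": [15, 25, 75],
}

# Dedicated model: each realm gets hardware sized for its own peak.
dedicated_capacity = sum(max(load) for load in realm_loads.values())

# Pooled model: shared capacity sized for the worst combined hour.
combined = [sum(hour) for hour in zip(*realm_loads.values())]
pooled_capacity = max(combined)

assert dedicated_capacity == 225   # 80 + 70 + 75
assert pooled_capacity == 125      # max(95, 125, 105)
assert pooled_capacity < dedicated_capacity
```

The flip side, as noted above, is that the pooled setup has less headroom if every realm spikes at once, which is the Classic-launch scenario people in this thread are worried about.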
