What's the limit of players within a single instance?

I’m pretty sure that in vanilla’s engine, Kalimdor and the Eastern Kingdoms were each one giant instance… Right?

As someone who’s tried to build an RTS game from scratch to be super efficient, I was only ever able to get up to 1,500 little individual units running around like ants (the pathfinding collision checks ate a boatload of CPU power per cycle).

Unlike WoW, my RTS engine had to factor in pathfinding so that no two units could ever occupy the same XY cell in the grid. That consumed a ton of CPU power; without it I could’ve easily doubled the total unit count to around 3,000, each with health bars, targets and aggro reactions.
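
Roughly, the per-tick check looks like this. It’s a generic C++ sketch of the idea with made-up names, not the actual engine code:

```cpp
#include <cstdint>
#include <vector>

// Toy occupancy grid: one cell per XY position, storing the id of the unit
// standing there (-1 = empty). Illustrative only.
struct OccupancyGrid {
    int width, height;
    std::vector<int32_t> cells;

    OccupancyGrid(int w, int h) : width(w), height(h), cells(w * h, -1) {}

    int32_t& at(int x, int y) { return cells[y * width + x]; }

    // Every moving unit pays a check like this (plus path re-planning when it
    // fails) on every simulation tick, which is where the CPU time goes.
    bool tryMove(int32_t unitId, int ox, int oy, int nx, int ny) {
        if (nx < 0 || ny < 0 || nx >= width || ny >= height) return false;
        if (at(nx, ny) != -1) return false;   // destination cell already occupied
        at(ox, oy) = -1;                      // vacate the old cell
        at(nx, ny) = unitId;                  // claim the new cell
        return true;
    }
};
```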

I’m sure WoW must be a lot more efficient than my engine, since I wrote mine in Swift, whereas I’m pretty sure Blizzard developed some parts in either C or straight-up assembly language.

Does anyone know the maximum number of players the current WoW engine can handle in a single small instance, assuming there’s no sharding/layering or other shenanigans?

They’ve never given a number. The original servers were capable of around 4.5k according to old devs, but the capabilities are far higher now. BfA is a bad example because the n-squared spell-interaction complexity is far higher than with Vanilla’s effects.
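
Quick illustration of why that n-squared term hurts: if every pair of nearby players can potentially interact (aura stacking, AoE overlap, etc.), the per-tick work grows with the square of the crowd. Toy C++ numbers with a made-up tick rate, nothing to do with Blizzard’s real costs:

```cpp
#include <cstdio>

// Toy model: each pair of nearby players costs one "interaction check"
// per server tick. Purely illustrative.
int main() {
    const int tickRate = 10;                       // assumed ticks per second
    const int counts[] = {500, 1500, 3000, 4500};
    for (int players : counts) {
        long long pairs = 1LL * players * (players - 1) / 2;
        std::printf("%5d players -> %10lld pair checks per tick (%lld per second)\n",
                    players, pairs, pairs * tickRate);
    }
}
```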

Additionally, each zone is instanced independently, with information flowing across borders for visibility, so you have to distinguish between the population cap for total realm capacity and the zone caps for individual zones.
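
For anyone picturing how that “information flow across borders” might work, here’s a toy C++ sketch of the general pattern (per-zone simulation plus mirroring of entities near the border). This is purely a guess at the shape of it, not Blizzard’s actual architecture, and every name in it is made up:

```cpp
#include <cmath>
#include <string>
#include <vector>

struct Entity { int id; float x, y; };

struct ZoneServer {
    std::string name;
    float borderX;                            // x coordinate of the shared border
    std::vector<Entity> entities;             // entities this zone actually simulates
    std::vector<Entity> ghostsFromNeighbour;  // read-only copies, for visibility only

    // Entities within `range` of the border get mirrored to the neighbouring
    // zone so players standing near the edge can still see across it.
    std::vector<Entity> entitiesNearBorder(float range) const {
        std::vector<Entity> out;
        for (const Entity& e : entities)
            if (std::abs(e.x - borderX) <= range) out.push_back(e);
        return out;
    }
};

// One replication step: each zone hands its border entities to the other.
void replicateBorders(ZoneServer& a, ZoneServer& b, float range) {
    a.ghostsFromNeighbour = b.entitiesNearBorder(range);
    b.ghostsFromNeighbour = a.entitiesNearBorder(range);
}
```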

But given some of their statements, I’d imagine that Silithus will be capable of running 3K+ players and a whole lot of monsters at once.

No. They definitely did not do that. They’d still be working on it if that were the case.

C and assembly are typically only used for things like electronics/appliances such as thermostats and refrigerators, or for machinery. They’re rarely if ever used for any sort of PC software development. One exception that comes to mind is operating systems and BIOS software. But video games? Nah.

3K sounds about accurate. Damn, it’s a shame that CPU clock speeds have BARELY nudged upwards from the ~4GHz that high-end Pentium 4s were pushing back in vanilla.

Sharding/layering is just an ineffective trick to take advantage of multi-core CPUs.

The same scenario repeatedly crashed servers back in the day.

The 3k limit is not, and never was, a technical limitation. It’s chosen to allow a large enough population without overloading gameplay.

To paraphrase Brian Dawson: “We want to have gameplay issues before technical issues.” They don’t want the servers to crash from load before the point where there are so many people that the game becomes unplayable, and they think that even after layering is removed, a 3k server is at that point.

I would be shocked if they didn’t…

You need to use C to have full control over heap memory allocation for max efficiency.

You don’t need C for that. C++ can do that just fine. And what you’re talking about is rarely necessary with today’s hardware or even hardware from 2004.
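
For reference, here’s a minimal C++ sketch of what “controlling heap allocation” usually looks like in game code: a bump arena that hands out scratch memory without touching the general-purpose heap. Hypothetical example, not anything from WoW’s codebase:

```cpp
#include <cstddef>
#include <new>
#include <vector>

// A simple bump arena: grab one big block up front, hand out chunks with
// pointer arithmetic, and "free" everything at once by resetting.
// Typical pattern for per-frame scratch memory.
class Arena {
    std::vector<std::byte> buffer_;
    std::size_t offset_ = 0;
public:
    explicit Arena(std::size_t bytes) : buffer_(bytes) {}

    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) throw std::bad_alloc{};
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    void reset() { offset_ = 0; }   // reclaim everything at end of frame
};

struct Projectile { float x, y, z, speed; };

int main() {
    Arena frameArena(1 << 20);   // 1 MiB scratch block, made-up size
    auto* p = new (frameArena.allocate(sizeof(Projectile), alignof(Projectile)))
                  Projectile{0.f, 0.f, 0.f, 30.f};
    (void)p;
    frameArena.reset();
}
```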

Idk, maybe I’m wrong, but I’ve never heard of low-level languages being used for game development since the 16-bit era. I’m sure they were still used in the N64/PS1 generation, but after that I don’t think low-level languages were much of a thing in game dev.

The beauty of game development is that developers are still forced to write lean software, or else they suffer the bad side effects of Wirth’s law.

Actually, they wrote the whole thing in C++, which gives you C-style direct memory access in a far easier language to write.

I don’t know if we can really find out that number, because what mostly happens now is that there’s too much data for our computers to process, and we lag like heck or disconnect long before the server/zone ever crashes.