When the same old problems from 20 years ago are still around, it's a fair assumption that key areas of the code haven't changed.
D2's code doesn't scale well. Like D1's code before it, the Battle.net implementation was "brute-forced" onto the OG game. Most of the engineers under the Blizzard umbrella in that era were doing implementations on the fly just to make it work.
That was what they could do 20 years ago, with the number of concurrent connections it had to handle. It was innovative, but also messy.
Similarly, about a decade ago most streaming services like YouTube had to rework their code to handle enough simultaneous requests within an acceptable response time.
If you ever open the OG game and check single-player latency, you'll see 0-30ms doing everything locally. When the game needs to calculate and evaluate certain kinds of events, it doesn't matter whether you have a Celeron or an i9: the time stays almost the same, within that range.
Put that same code on a server handling many things at once and you create bottlenecks. They chose to keep the OG code, both in the netcode and in the game itself. That's why so many headaches appeared when they tried to refine how the game handles checksums and game state.
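To make that concrete, here's a minimal C++ sketch, with entirely made-up numbers (the 25 ms tick budget and the per-game workload are assumptions for illustration, not D2R's actual architecture). The same update that fits comfortably in the budget for one local game blows past it once a single-threaded server loop has to step hundreds of game instances per tick:

```cpp
// Illustrative only: why per-game logic that is cheap locally
// becomes a bottleneck when one server loop runs many games.
#include <chrono>
#include <cstdio>
#include <initializer_list>
#include <vector>

struct Game {
    unsigned long state = 1;
    void update() {
        // Stand-in for pathing, combat, drop rolls, checksum work, etc.
        // Loop-carried dependency so the compiler can't fold the work away.
        for (int i = 0; i < 200000; ++i)
            state = state * 1103515245UL + 12345UL;
    }
};

int main() {
    using clock = std::chrono::steady_clock;
    const double tick_budget_ms = 25.0; // hypothetical server tick budget

    for (int instances : {1, 100, 1000}) {
        std::vector<Game> games(instances);
        unsigned long checksum = 0;
        auto start = clock::now();
        for (auto& g : games) { // one thread, sequential: the old single-player model
            g.update();
            checksum += g.state; // keep the result observable
        }
        std::chrono::duration<double, std::milli> ms = clock::now() - start;
        std::printf("%4d games: %9.3f ms (%s %.0f ms budget), checksum=%lu\n",
                    instances, ms.count(),
                    ms.count() > tick_budget_ms ? "over" : "within",
                    tick_budget_ms, checksum);
    }
    return 0;
}
```

One game finishes in a fraction of a millisecond on almost anything. A thousand of them, run one after another on a single thread, can't meet the deadline no matter how fast the CPU is.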
Their servers aren't the issue. They use Google's servers for nearly all their games, and only a few of them have problems this regularly. They've increased capacity, or at least that's what they said, about two times already, and the issue kept happening. It got even worse when folks started MFing more regularly.
You can have the fastest hardware in the world and still lose if your software is badly implemented and unoptimized.
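As a toy illustration of that point (generic code, nothing to do with D2R's actual internals): the gap between a naive O(n^2) algorithm and an O(n) one solving the same problem dwarfs any realistic hardware difference. A CPU upgrade might buy you 2-5x; the better algorithm buys orders of magnitude.

```cpp
// Generic illustration: algorithmic waste beats hardware speed.
#include <chrono>
#include <cstdio>
#include <unordered_set>
#include <vector>

// Naive: test every pair. O(n^2).
static bool pair_sums_naive(const std::vector<int>& v, int target) {
    for (size_t i = 0; i < v.size(); ++i)
        for (size_t j = i + 1; j < v.size(); ++j)
            if (v[i] + v[j] == target) return true;
    return false;
}

// Optimized: remember values already seen in a hash set. O(n).
static bool pair_sums_fast(const std::vector<int>& v, int target) {
    std::unordered_set<int> seen;
    for (int x : v) {
        if (seen.count(target - x)) return true;
        seen.insert(x);
    }
    return false;
}

int main() {
    std::vector<int> v(30000);
    for (size_t i = 0; i < v.size(); ++i) v[i] = (int)i; // worst case: no pair sums to -1
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    bool a = pair_sums_naive(v, -1);
    auto t1 = clock::now();
    bool b = pair_sums_fast(v, -1);
    auto t2 = clock::now();

    std::printf("O(n^2): %6lld ms (found=%d)\n", (long long)
        std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count(), (int)a);
    std::printf("O(n):   %6lld ms (found=%d)\n", (long long)
        std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count(), (int)b);
    return 0;
}
```

Even if you ran the naive version on a machine five times faster, it would still lose by a couple of orders of magnitude. Throwing hardware at unoptimized software doesn't close the gap.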
An FPGA with the specs of a 10-year-old CPU can beat any modern CPU at its purpose-built task, with room to spare. Architecture matters, but the bigger differentiator is optimization: either the hardware is optimized for the software, or the software is optimized for the hardware. An FPGA does the former.
Most applications do the latter; the issue here is that the application itself wasn't optimized, because they didn't improve it enough, or simply chose to stick with the old code and sort things out along the road. Solving issues later can be more profitable, since you pay for development as you go, whereas rebuilding from the ground up would have pushed the release back another 1-3 years.
The industry runs on cash, and whatever you're trying to build is costly. So you either reuse what exists and profit now, or you build from the ground up. Most software isn't new or built from scratch, because that is time-consuming and expensive.
D2 was implemented in a rush; they even released the game before it was ready. Act 4 makes that obvious, and even David Brevik has stated it several times. The expansion was a way to "fill" the gap. Even 20 years later, the game itself has a ton of bugs.
D2R, reusing the same half-baked code, was bound to be the same. Don't be naive about it: sure, they changed some things, but most of the core issues were left unchanged, because they couldn't diagnose and sort them out fast enough to stay viable from an economic perspective. Software testing is one of the most overlooked jobs in the industry, and even when bugs are found, most get weighed between "release and fix later", "leave as is", or "we're aware, but the cost is too high right now".
The artistic side of the game is amazing, but the code behind it is really messy. That's partly legacy from D2, but also some of the implementations they did on D2R.
When they start shipping solutions instead of palliatives, things will improve. Otherwise the same mess from 20 years ago will keep coming back to haunt them.
As for ladder, expect something like a January-March window. It will depend on how well PTR testing goes.