PTR Patch Notes update

You go right on ahead believing that, if you like.

Yes, there are tools and compilers out there that use human-readable code for programming (I myself use RAD Studio 10), but even those have support for machine language.

Why? Because hand-written machine language sometimes produces better-optimized results than even the best compilers.

Keep believing that fiction if you like…I see an increasing number of students enrolling in assembly classes where I teach.

Even my intro students could answer the question you quoted.

Since technically computers understand only machine language…all running code becomes machine language, whether interpreted or compiled. I expect you meant to say assembly code, which is definitely true.

There are a whole lot of tools for programming. I remember using Notepad and vi. Vi was way superior, especially the advanced version, which offered syntax highlighting.

Later on I used Visual Studio and Eclipse…then went into teaching as I burned out of the industry.

Even within programming languages and compilers, yes, there are engines used for game development. Buy the engine, slap on a new skin, and you have a new game to sell. Using these, one must know the engine and how to script the config files to make changes. But one must know the code to change the engine itself.

Some of our requested fixes may be as simple as a change in a script…others may require a change in actual code…or both.

Definitely. But knowing which changes can be made safely without breaking anything is another whole ball of string.

Or figuring out why seemingly innocuous changes cause huge problems. Like the addition of the Shadow Clones or the 4th Kanai’s Cube slot…

It’s a known fact that when you fix a bug you create 10 more.

I don’t think the shadow clones were the cause of the lag. The 4th cube slot in and of itself was also not the direct cause. It’s the additive nature of what was being chosen for that slot and the additional calculations to perform. The question then becomes: is the lag from a specific item, or from the 4th cube slot itself? In any modern game the number of factors to look at can be quite large.

I know as of yesterday they seem to have a good handle on it, as I have not noticed any lag while playing…I even started a hardcore game.

And with certain changes (not with everything, but with certain things) you actually can anticipate whether or not they will be fine.

Adding a 4th cube slot and the Shadow Clones are not small or innocuous changes, but rather at least medium level changes.

Well, I certainly hope the addition of the 4th Kanai’s Cube slot is not the direct cause of the problems, as this is the one feature of the new patch that I most want to have carry over to the main game when Season 22 is completed.

It’s a known fact that when you fix a bug you create 10 more.

That’s not true in general. It only holds if you write spaghetti code or work on terrible legacy code. Today’s software engineering has many practices (SOLID, for example) to move away from that paradigm.

Oh, please! Not another one! :crazy_face:

Yes, today’s software engineering is far better than it was 20 years ago, or even 10. But show me a single application development project, of any kind or caliber, that does not have bugs of some kind.

Regardless of how solid our coding practices may be, there will always be bugs in a program. There is not a single programmer alive today, who has ever lived, or who is yet to be born who can create 100% bug-free code or who will never introduce new bugs while eliminating existing ones.

Get your head out of the clouds and stop dreaming.

That’s what I do, and I don’t introduce new bugs while resolving one. I may introduce new ones when I add a new feature, so I try to write the tests first so I can assert that I don’t; the feature is finished and added when my tests are green. That’s a fact, and I’m definitely not alone. You clearly have very different experience and assumptions on that, which is OK.

Bugs are about scope too: if your tests are green, you have covered the usage scoped by your tests.
Someone may find a way, a specific scenario that uses the program/type/function in a way that was not covered; then you update the tests and resolve the bug.
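As a minimal C++ sketch of that test-first, scoped workflow (the feature and every name here are purely illustrative, not from any real codebase):

```cpp
#include <cassert>
#include <string>

// Hypothetical feature: pick a display name, falling back when the input is empty.
std::string display_name(const std::string& name) {
    return name.empty() ? "Nephalem" : name;
}

int main() {
    // These tests are written first; they define the scope the feature must cover.
    assert(display_name("Tyrael") == "Tyrael");
    assert(display_name("") == "Nephalem");
    // A user passing "   " (only spaces) is outside this scope. If that ever
    // matters, add a test for it first, then change the function: the bug is
    // resolved once the enlarged test suite is green again.
    return 0;
}
```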

At the other extreme, one can find an exploit; in that case, yes, the vulnerability could likely be called a bug.

So you’re maybe right in the sense that there is no absolute 100% bug-free, but you’re wrong in the sense that programs can be 100% bug-free within a considered scope (usage + security).

If your program is formally proven to be 100% bug-free, where would you see a bug? If you still think there is one, please provide a formal proof that a program always has a bug (not in the libraries, nor in the OS it relies on, and not through side-channel attacks).

The 100% is scoped too (with a context). Think about it.

Part of the testing problem is that automated testing often does not reflect how users actually use an application. Users find the bugs that testing misses. There will always be issues, because users will always use something in a way that was not foreseen.

Perhaps inconsistencies with the operating system? Or other installed programs? Libraries? Hardware incompatibilities? User Error? Unforeseen circumstances?

It is impossible to create software that always performs as designed across all platforms or configurations.

How many different versions/builds of Windows 10 are out in the wild? On how many different hardware setups? How many are running on inadequate hardware? How many operate under extreme conditions where the user has Tweaked the system in ways not intended?

It is under these conditions that even the most extremely well coded piece of software can fail in ways unanticipated by the developer. In other words, the software could not properly adapt to differences in multitudes of configurations.

This is why companies such as Blizzard have the PTR, and Microsoft has its Insider Program: to help find and eliminate bugs.

Some bugs are the fault of the developers. Some manifest because of conditions outside of their control.

No software can ever be considered to be bug free. Be it a bug in the truest sense of the word, or an exploit as you put it, bugs will always exist. It is the nature of computers.

TL;DR: No one will ever convince me that anything related to or a part of modern technology is without problems.

Say my program does +1 to an integer. I can foresee that when I reach 0xFFFF, +1 gives 0x0000, so I need to provide safeties. Once that’s provided, I can let my algebra go wherever it wants; I don’t need to explore all the possibilities of how my algebra will be used.

The definition of a bug also depends on expectations, and we put all expectations both in the tests and in the user experience (will 0xFFFF + 1 give 0xFFFF, which is in that case defined as infinite? Will it throw an exception? Log a warning? Give 0x00010000?).
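Here is a small C++ sketch of those choices (the function names are mine; which behavior is “correct” is entirely a matter of which expectation the tests encode):

```cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>

// Wraps around: 0xFFFF + 1 becomes 0x0000 (what raw 16-bit arithmetic does).
uint16_t add_one_wrapping(uint16_t v)   { return static_cast<uint16_t>(v + 1); }

// Saturates: 0xFFFF + 1 stays 0xFFFF (the maximum is treated as "infinite").
uint16_t add_one_saturating(uint16_t v) { return v == 0xFFFF ? v : static_cast<uint16_t>(v + 1); }

// Fails loudly: 0xFFFF + 1 throws, forcing the caller to handle the overflow.
uint16_t add_one_checked(uint16_t v) {
    if (v == 0xFFFF) throw std::overflow_error("16-bit counter overflow");
    return static_cast<uint16_t>(v + 1);
}

// Widens: 0xFFFF + 1 becomes 0x00010000 by moving to a larger type.
uint32_t add_one_widening(uint16_t v)   { return static_cast<uint32_t>(v) + 1; }

int main() {
    std::cout << std::hex
              << add_one_wrapping(0xFFFF)   << '\n'   // 0
              << add_one_saturating(0xFFFF) << '\n'   // ffff
              << add_one_widening(0xFFFF)   << '\n';  // 10000
    // add_one_checked(0xFFFF) would throw. None of these behaviors is a bug
    // per se; only a mismatch between behavior and expectation is.
}
```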

In a word, a bug is always, by definition, in the design of a program; the end user simply has different expectations.

I often get a popup saying “failed to initialize D3D / DirectX”; should I consider that a bug as a user, or a useful message as a developer? This “bug” is actually a feature from another point of view.

Prometheus: yes, there will always be a bug in that sense (but I’m talking about the other), but not always (unless you give me a proof, and I know there isn’t one) if you scope out all those external effects and focus on the program, which is the responsibility of the developer. And even if the developer chooses the OS, OS problems (including drivers, network, etc.) are outside his/her responsibility, and those problems are not DIII bugs.

Say you can’t log in to your DIII account: it’s a bug if the program messed up somewhere; it’s not a bug if the server is overloaded, but it’s still undesired by the end user.

BTW (for the sake of fun) there are also other possibilities: Alan Turing introduced universal Turing machines (the first formal CPU) and a formal definition of a program (a partial recursive function), and in particular the criterion that “the program finishes” (“a proof exists”, “the result is computable”). We can actually have: 1) there is always a bug, 2) not(there is always a bug), 3) we can’t conclude. Maybe one can prove that we can always conclude (any program always has a proof). And yet, 4) it can be independent (we can arbitrarily choose whether there is a bug or not, and not by changing the definition of a bug).

A few difficult cases are, for instance, race conditions and deadlocks: you reach something equivalent to while(true) {sleep(1);}, or A.IsWaitingForB && B.IsWaitingForA. They are hard to reproduce, and sometimes even to detect, but we know which classes of designs (in the sense of “kinds of designs”) can lead to such problems. These fall outside case 3 (i.e., it is provable), yet they sit in the range between case 1 and “there is never any bug”; and they are still the responsibility of the developers. The end user will see the game freeze, or something in the game freeze, but a freeze is not necessarily a manifestation of a race condition or a deadlock.
The mobs that don’t die and the absence of resurrection might be similar problems; I suspect an incomplete sync of mob states between the server and the client: the mob is dead for the server but not for the client, which shows a mob that is up but does nothing (its actions are driven by the server).

Those are indeed problems that are more or less impossible to prove absent through tests, but we can get a hint, or even a proof, from the design/static analysis. We can design programs without deadlocks and race conditions.
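A minimal C++ sketch of that A-waits-for-B / B-waits-for-A situation, and of one design that rules it out (the mutex names and functions are purely illustrative, not from any game code):

```cpp
#include <mutex>
#include <thread>

std::mutex mutex_a, mutex_b;

// Deadlock-prone: one thread locks A then B, the other locks B then A.
// If each grabs its first mutex before the other releases, both wait forever.
void risky_transfer_ab() { std::lock_guard<std::mutex> la(mutex_a); std::lock_guard<std::mutex> lb(mutex_b); }
void risky_transfer_ba() { std::lock_guard<std::mutex> lb(mutex_b); std::lock_guard<std::mutex> la(mutex_a); }

// Deadlock-free by design: std::scoped_lock acquires both mutexes with a
// deadlock-avoidance algorithm, so the acquisition order no longer matters.
void safe_transfer() { std::scoped_lock both(mutex_a, mutex_b); }

int main() {
    // Running the safe version concurrently cannot deadlock.
    std::thread t1(safe_transfer);
    std::thread t2(safe_transfer);
    t1.join();
    t2.join();
}
```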

This is actually a never-ending fight. If you really consider your program to be 100% secure, then it’s usually a local program on a device not connected to the web :smiley: Or you are naive.

Introducing bugs is easier than you think.
Writing tests is difficult in some cases, as certain environments (to take just one example, Diablo 3 with tons of users on one server instance) are VERY difficult to simulate, let alone to run tests in.
You have a multithreaded environment, multiple realtime client-to-server synchronizations, and builds that generate thousands of damage instances per second.

What I initially mentioned is a change to damage multipliers as such, since that change shouldn’t affect any actual system component. It’s indifferent to the system whether it multiplies something by 10 or by 100, as long as none of the numbers used or produced overflow (not an issue for damage results, as we already see billions).
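To make that overflow bound concrete, a generic C++ sketch (I have no knowledge of the integer types Diablo 3 actually uses internally; the assumption is simply that values displayed in the billions already require a 64-bit type):

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    // A signed 32-bit value tops out around 2.1 billion...
    std::cout << "int32 max: " << std::numeric_limits<int32_t>::max() << '\n';  // 2147483647
    // ...so damage already displayed in the billions implies a wider type,
    // which leaves enormous headroom for a x10 or x100 multiplier change.
    std::cout << "int64 max: " << std::numeric_limits<int64_t>::max() << '\n';  // ~9.2e18

    int64_t damage = 3'000'000'000;  // 3 billion, already past the int32 range
    std::cout << "x100 still fits: " << damage * 100 << '\n';  // 300 billion
}
```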

This thread starts to look like programmer forum flame-thread. Move it somewhere please.
