Say my program does +1 to an integer. I can foresee that when I reach 0xFFFF, +1 gives 0x0000, so I need to provide safeties. Once that safety is provided, I can let my algebra go wherever it wants; I don't need to explore all the possibilities of how my algebra will be used.
The definition of a bug also depends on expectations, and we put those expectations both in tests and in the user experience (will 0xFFFF + 1 give 0xFFFF, which in that case is defined as infinity? Will it throw an exception? Log a warning? Give 0x00010000?).
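Each of those expectations is a different, equally valid design. A minimal sketch in Python (masking to 16 bits to mimic the fixed-width integer above; the function names are mine, purely illustrative):

```python
MAX_U16 = 0xFFFF

def add_one_saturating(x):
    """Increment that saturates: 0xFFFF is treated as 'infinity'."""
    return x if x == MAX_U16 else x + 1

def add_one_wrapping(x):
    """Increment that wraps around, like raw 16-bit hardware would."""
    return (x + 1) & MAX_U16

def add_one_checked(x):
    """Increment that raises instead of silently wrapping."""
    if x == MAX_U16:
        raise OverflowError("0xFFFF + 1 exceeds the 16-bit range")
    return x + 1
```

Whichever one ships, the same observable behavior is a feature if it matches the documented expectation and a bug if it doesn't.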
In a word, a bug is always, by definition, in the design of a program; the end user simply has different expectations.
I often get a popup "failed to initialize D3D / DirectX"; should I consider that a bug as a user, or a useful message as a developer? This "bug" is actually a feature from another point of view.
Prometheus: yes, there will always be a bug in that sense (but I'm talking about the other one); not always (unless you give me a proof, but I know there isn't one) if you scope out all those external effects and focus on the program itself, which is the developer's responsibility. And even if the developer chooses the OS, OS problems (including drivers, network, etc.) are outside his/her responsibility, and those problems are not DIII bugs.
Say you can't log in to your DIII account: it's a bug if the program messed up somewhere; it's not a bug if the server is overloaded, but it's still undesired by the end user.
BTW (for the sake of fun) there are also other possibilities: Alan Turing introduced universal Turing machines (the first formal CPU) and a formal definition of a program (a partial recursive function), and in particular the criterion of "the program finishes" ("a proof exists", "the result is computable"). We can actually have: 1) there is always a bug; 2) not(there is always a bug); 3) we can't conclude. Maybe one can prove that we can always conclude (any program always has a proof). And yet, 4) it's independent (we can arbitrarily choose whether there is a bug or not, and not by changing the definition of a bug).
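To keep the fun going: the classic diagonal argument behind "we can't conclude" can be sketched in a few lines of Python (the name `make_contrarian` is mine, purely illustrative). Given any claimed "does it halt?" checker, you can build a function the checker must misjudge:

```python
def make_contrarian(halts):
    """Given a claimed halting checker, build a function it must misjudge."""
    def contrarian():
        if halts(contrarian):
            while True:   # the checker said "halts", so loop forever
                pass
        # the checker said "loops forever", so halt immediately
    return contrarian

# A checker claiming every function loops forever is refuted instantly:
c = make_contrarian(lambda f: False)
c()  # returns at once, contradicting the checker
# A checker claiming every function halts is refuted too, but by a
# function that never returns, so we don't call that one here.
```

No matter how clever the checker, its own contrarian defeats it; that is why a universal "is there a bug?" oracle can't exist.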
A few difficult cases are, for instance, race conditions and deadlocks: you reach something equivalent to while(true) { sleep(1); } or A.IsWaitingForB && B.IsWaitingForA. They are hard to reproduce, and eventually even to detect, but we know which classes of designs (in the sense of "kinds of designs") can lead to such problems. These fall outside case 3) (i.e., it is provable), but sit in the range between case 1 and "there is never any bug"; and they are still the responsibility of the developers. The end user will see the game freeze, or something in the game freeze, but the freeze is not necessarily a manifestation of a race condition or a deadlock.
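The A.IsWaitingForB && B.IsWaitingForA situation is just a cycle in a wait-for graph, and that is mechanically checkable once you have the graph. A minimal Python sketch (the dict-based graph representation is my own, illustrative):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {thread: thread_it_waits_for}."""
    for start in wait_for:
        seen = set()
        cur = start
        while cur in wait_for:
            if cur in seen:
                return True      # we came back to a thread already on the chain
            seen.add(cur)
            cur = wait_for[cur]  # follow the "is waiting for" edge
    return False

# A waits for B and B waits for A: a two-node cycle, i.e. a deadlock.
has_deadlock({"A": "B", "B": "A"})   # True
has_deadlock({"A": "B"})             # False: B isn't waiting on anyone
```

The hard part in practice is observing the wait-for graph at the right moment, not the cycle check itself.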
The mobs that don't die and the absence of resurrection might be similar problems; I suspect an incomplete sync of mob state between the server and the client: the mob is dead for the server but not for the client, which shows a mob that is up but does nothing (its actions are driven by the server).
Those are indeed problems that are more or less impossible to prove by tests, but we can get a hint, or even a proof, from the design / static analysis. We can design programs without deadlocks and race conditions.
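One such deadlock-free-by-design class: impose a single global lock order, so the circular wait that a deadlock requires can never form. A small Python sketch (names are illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    # Every thread takes lock_a before lock_b. Since no thread can
    # hold lock_b while waiting for lock_a, the A-waits-B / B-waits-A
    # cycle is impossible by construction.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Both workers always complete; no schedule can freeze them.
```

This is a design-level guarantee, exactly the kind of thing a test suite can't establish but a (static) argument can.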