Linux vs Windows FPS difference

So, I have a problem. I installed WoW on my Linux machine, which has a 5900X and a 2060 Super in it.

I have a pretty solid 170 FPS running it in Lutris.

On my Windows machine I have a 3900X with a 6950 XT, and I average around 90 FPS.

I don't get it, does the 5900X make that much of a difference? DX12 and DXVK are both taking advantage of the extra cores, but the game isn't taxing either processor, and the video card in the Windows machine is 3x as good as the one in the Linux box.

Honestly, if it wasn't for the wineserver crashes and the general weird performance of the entire machine when gaming, I would just move the 6950 into the Linux machine and run Linux all the time.

Windows runs way more processes than Linux does, so it affects everything from the CPU to the RAM and GPU. After Windows 7, the newer versions of Windows are horrible: they have so much extra garbage no one uses, and it just overloads the computer. Just think of how much extra garbage was added after Windows 7 and you will see it.

Linux is simpler; it does not overload the computer as much.

Yet Linux is not a supported OS to connect to the game. That would only be Windows and Apple's OS.

There is a lurking variable: what else do you have running? If you go into Task Manager on Windows, you'll probably see about 30 things running even when the only window open is Task Manager. With Linux, you can run it more bare-bones. So, given the same specs otherwise, I would say that Linux would be slightly faster.
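
If you want to actually count what's running on each box instead of eyeballing Task Manager, here's a minimal sketch in Python (assuming the third-party psutil package is installed); it works the same on Windows and Linux:

```python
# Minimal sketch: count background processes and list the biggest memory users,
# so the two machines can be compared at idle. Assumes the third-party
# psutil package (pip install psutil); works on both Windows and Linux.
import psutil

procs = list(psutil.process_iter(["name", "memory_info"]))
print(f"Total processes: {len(procs)}")

top = sorted(procs,
             key=lambda p: p.info["memory_info"].rss if p.info["memory_info"] else 0,
             reverse=True)[:10]
for p in top:
    rss_mb = p.info["memory_info"].rss / 1e6 if p.info["memory_info"] else 0.0
    print(f"{p.info['name']:<30} {rss_mb:8.1f} MB")
```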

It depends on which version of *nix you have, and how up to date the drivers are.

WINE / CrossOver of course runs DX12 through a Vulkan translation layer, so it's not 1:1 with DirectX 12 as far as the low-level APIs / programming interface go. What you have installed is very similar to CodeWeavers' CrossOver, but an unpaid / free version (performance / compatibility also depends on the quality of the profiles). It also depends on how up to date the drivers are, whether WINE is compiled and updated for that distro on a regular basis, and which operating system distribution this is.

You can read over this list for a brief overview of what it would take to compile your own drivers and update the distro yourself:


You have to sign the kernel module as well. There's a big checklist of things to get done, without any instructions available, but it will work if you spend time on it.

AMD Open Source Driver for Vulkan -> https://gpuopen.com/amd-open-source-driver-for-vulkan/

AMD Open Source Driver for Vulkan (Releases) -> https://github.com/GPUOpen-Drivers/AMDVLK/releases

Vulkan Runtime / SDK -> https://vulkan.lunarg.com/sdk/home

Nvidia Unix Driver Archive -> https://www.nvidia.com/en-us/drivers/unix/

Nvidia Vulkan Driver Support -> https://developer.nvidia.com/vulkan-driver

Vulkan Developer Tools -> https://developer.nvidia.com/vulkan#tools

Khronos Group Reference Guides -> https://www.khronos.org/developers/reference-cards/

Khronos Group Developer Resource Hub -> https://www.khronos.org/developers

Intel Graphics for Linux - Programmer's Reference Manuals -> https://01.org/linuxgraphics/documentation

AMD Developer Guides, Manuals & ISA Documents -> https://developer.amd.com/resources/developer-guides-manuals/

DirectX Landing Page -> https://devblogs.microsoft.com/directx/landing-page/

Clear Linux is one of the fastest distros when it comes to gaming, or anything else, but I wouldn't suggest switching to one thing for that reason alone. As an example, I prefer Oracle Linux with the UEK (Red Hat kernel removed), which is slower but far, far more secure. It's probably bad for gaming and has limited driver support, but there you go. I'm sure I could compile something to achieve a similar result, but I'm not that focused on it at the moment. If you look at the latest release of AMD's Open Source Vulkan Driver, it was 7 days ago. I really doubt there are that many variations of Unix / Linux that even have a Vulkan driver from a few months ago, let alone the GPUOpen framework with the kernel module and accompanying hardware abstraction layer. I bet that's on a big to-do list somewhere.

NOTE: I haven't actually done this. I've got half a dozen compiler toolchains installed and could accomplish it, but I would much rather focus my efforts on porting things to Oracle Solaris 10.x / 11.x, etc., due to the superiority of Unified Archives / Golden Images (IPS repository), ZFS (OpenZFS doesn't compare even remotely), and various other closed-source commercial-grade features.

So, after actually playing a while, it seems it was Stormwind. When I posted this I was standing around in Stormwind; then when I queued into a random dungeon it was Culling of Stratholme, which uses Stormwind assets. As soon as I switched to Shadowlands zones I was getting 200+ there.

It's still strange that it was 170 on Linux while in Stormwind.


Too many variables from player density. If you really want to benchmark them, go find an unpopulated area, like inside a graphically intense dungeon, and then compare by having the character stand in exactly the same spot and lining the camera up at the exact same angle (use some reference object to line it up). Make sure vsync is off in the game and in the Nvidia control panel. THEN you can do an actual comparison.

Trying to do it in SW just isn't going to work. There might have been 100 people at the AH during one test, and five minutes later there could have been 5 people at the AH. Not reliable enough to compare.
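
If you capture per-frame times during each run (MangoHud on Linux or PresentMon / CapFrameX on Windows can log them), a short script makes the comparison concrete. This is only a sketch: the file names are placeholders and the log format (one frame time in milliseconds per line) is an assumption.

```python
# Sketch of an apples-to-apples comparison from frame-time logs.
# Assumes one frame time in milliseconds per line; file names are placeholders.
import statistics

def summarize(path: str) -> None:
    with open(path) as f:
        frametimes_ms = [float(line.strip()) for line in f if line.strip()]
    fps = sorted(1000.0 / ft for ft in frametimes_ms if ft > 0)
    avg = statistics.fmean(fps)
    one_percent_low = fps[max(0, int(len(fps) * 0.01) - 1)]  # crude 1% low
    print(f"{path}: avg {avg:.1f} FPS, 1% low {one_percent_low:.1f} FPS "
          f"over {len(fps)} frames")

summarize("linux_run.csv")    # hypothetical capture from the 2060 Super box
summarize("windows_run.csv")  # hypothetical capture from the 6950 XT box
```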


I also have low FPS for some reason on my RTX 2060 laptop. It could be the 96-degree heat in my state, or maybe they made some changes to WoW and messed it up.

Vulkan presents a very specific problem compared to OpenGL or other frameworks. While it is more powerful and you get more low-level access, a lot of the responsibility lies with the application developer, and very little is managed by the driver / framework. WoW in general, even without being run on top of a Vulkan wrapper that intercepts / hooks the DirectX 11/12 API calls, struggles to utilize the majority of resources available to it. Even on my computer it barely uses 25-50% of my GPU memory, and prefers to store everything in slower DRAM instead.

[Responsibility. OpenGL does a lot in the background, from very simple things such as error checking, compiling high-level shaders, to avoiding deletion of resources that are still used by the GPU (which operates asynchronously) or managing internal resource allocation and hardware cache flushing. Another example is the handling of out of memory situation, where the OpenGL driver implicitly splits up workloads or moves allocations between dedicated/system memory. This code now moves from the driver into the application domain. Developers may also need to replace middleware that was OpenGL only with their own code. While Vulkan doesn’t have the "state-bleeding" issue that OpenGL has for middle-ware, resource management may be a new topic for developers.

Portability. Vulkan being so low-level means that getting the best out of different hardware architectures will very likely require dedicated code-paths.]

Transitioning from OpenGL to Vulkan -> https://developer.nvidia.com/transitioning-opengl-vulkan
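
To make the resource-lifetime point from that excerpt concrete, here is a toy sketch in plain Python (no real graphics API) of the kind of deferred-deletion queue a Vulkan application has to manage itself; the frames-in-flight count and the callback are assumptions for illustration only:

```python
# Toy illustration (no real graphics API): under Vulkan the application, not the
# driver, must make sure a buffer isn't destroyed while a previous frame's
# commands still reference it. A common pattern is a deletion queue keyed on
# frame index, drained only once that frame is known to have finished.
FRAMES_IN_FLIGHT = 2  # assumption for the sketch

class DeferredDeleter:
    def __init__(self):
        self.pending = []  # list of (frame_submitted, destroy_callback)

    def schedule(self, frame, destroy):
        self.pending.append((frame, destroy))

    def collect(self, completed_frame):
        # Destroy only resources whose submitting frame has fully completed.
        still_pending = []
        for frame, destroy in self.pending:
            if frame <= completed_frame:
                destroy()
            else:
                still_pending.append((frame, destroy))
        self.pending = still_pending

deleter = DeferredDeleter()
deleter.schedule(frame=10, destroy=lambda: print("vkDestroyBuffer(old vertex buffer)"))
# Frame 10 may still be executing on the GPU; nothing is freed yet.
deleter.collect(completed_frame=10 - FRAMES_IN_FLIGHT)
# ...later, once the fence for frame 10 has signaled, it is safe:
deleter.collect(completed_frame=10)
```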

Player models / entities are more complex than map primitives and the flat billboard textures used to represent trees / fire / bushes / foliage, etc. Most player models / entities are not very complex in comparison to newer FPS engines, which focus almost entirely on meshes / splines / B-splines, with polygon counts exceeding 50 million per model. Game engines like that have a ton of vertical integration with hardware APIs and workloads with a very specific target, whereas WoW is more focused on generic software / hardware rendering tasks that utilize a combination of integrated / discrete graphics card resources (it almost resembles software 3D rendering). They don't want it to lag on low-end / mid-range PCs, since the majority of players on WoW are casuals.

They updated the game engine in Cataclysm, but before that it didn't even have things like bump mapping (back in vanilla WoW). Even old FPS games from the late 90s had bump mapping. I wouldn't expect too much from WoW. Player models in the old engine probably never exceeded 200-1,200 polygons if you were lucky.

If you want to render a lot of things off-screen, using z-buffer depth calculations to determine which objects to rasterize / draw to the screen based on field of view / the camera layout, whether first-person or third-person (projection), there is a lot to take into account, given they have multiple expansions and the map primitives have an entirely different level of complexity depending on the expansion alone. WoW struggles with particle shaders, shadows / geometry shaders, and lighting in general. If you disable almost all of that, the lag disappears, hilariously enough. It doesn't scale very well, and it also relies almost entirely on DRAM to store game assets (EMIF / external memory interface, directly adjacent to the CPU, etc., DDR memory banks).

That's one reason why it's slow, but with it being an MMO / MMORPG, they have a specific target market, which is mostly casual players with very generic hardware (a combination of integrated / discrete graphics, etc.). If they tried to optimize for a million different hardware APIs, things you don't see in blockbuster FPS games because those deliberately set the system requirements at mid-range or high-end gaming PCs only, then of course they would suffer in some way (they might optimize for a particular segment of their userbase that is in fact only a small proportion of players, and on top of that they might waste money doing so, versus putting the money into other things like developing game content).

It's hard to KNOW what their priorities are, so you can't entirely predict why something occurs, or for what reason, based on a design decision. You can take a guess, which is only a stab in the dark and often involves giving boiler-plate responses. I have nothing to do with anything like this, other than knowing very obvious things. If I had a lot of time I could dig deeper and uncover the truth, but even a simple educated guess is probably more than good enough (I don't worry about whether I'm wrong in that sense). My personal view: if I were one of those people with THAT MUCH TIME on my hands, I would rather focus my time and effort on something that has a massive effect on MOST THINGS in general (such as updating Kitware's VTK to migrate from OpenGL to Vulkan).
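
A minimal sketch of the field-of-view culling idea mentioned above (plain Python, 2-D, purely illustrative; this is the general technique, not how WoW's engine actually implements it):

```python
# Very rough 2-D illustration of view-frustum (field-of-view) culling: objects
# outside the camera's view cone are skipped before any expensive rendering work.
import math

def in_view(cam_pos, cam_dir_deg, fov_deg, obj_pos, max_dist=100.0):
    dx, dy = obj_pos[0] - cam_pos[0], obj_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_dist or dist == 0.0:
        return dist == 0.0  # at the camera itself counts as visible
    angle_to_obj = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the two headings, in degrees.
    diff = (angle_to_obj - cam_dir_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

objects = {"tree": (10, 1), "npc": (0, 15), "mailbox": (-8, -2)}
visible = {name for name, pos in objects.items()
           if in_view(cam_pos=(0, 0), cam_dir_deg=0.0, fov_deg=90.0, obj_pos=pos)}
print(visible)  # only objects inside the 90-degree cone survive the cull
```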

When WoW came out, if you tried to employ meshes / splines, even for simple models, the polygon count shot up to 3K-6K per model, which is probably the equivalent of a 20-player match in WSG in Vanilla (under level 32), where most people had bad gear equipped. Even that probably lagged, because half of it, I'm sure, was software-rendered (almost like the software-rendering mode in the Quake II engine). WoW, I'm sure, is quite a bit better now, but I actually have no idea why it lags, given I haven't fully explored this engine, other than immediately realizing it hardly takes advantage of most hardware on even mid-range GPUs that are a couple of years old, let alone anything high-end.

A lot of people do weird stuff like enable vsync with super high refresh rates, or crank the frame rate, not knowing they are just rendering a ton of stuff off-screen that will never be displayed (wasting GPU cycles, etc.). There are tons of ways to get around performance limitations, but half of it is knowing the inefficiencies of the engine, and half of it is just your motherboard / CPU / how much DRAM you have (not entirely your GPU). Sure, I could try to do something to get hundreds of frames per second. But when there is no noticeable difference beyond 20-24, maybe even 30 FPS, I could save 3/4 of the power and run my game at only an 8W-30W load (vs. a 150-300W draw, not counting the power wasted due to any minor inefficiency; it would still look barely any different). Everyone has different ideas, everyone has their own opinion, and no one is really wrong when it comes to their interpretation of this. It's just how well you can express one idea or another. Here are some more things to read that include a lot of basic terminology:

Valve Hammer / Worldcraft -> https://developer.valvesoftware.com/wiki/Category:Level_Design

Source SDK ( Modeling ) -> https://developer.valvesoftware.com/wiki/Category:Modeling

ZBrushCoreMini (Sculptris) -> https://zbrushcore.com/mini/download.php

Blender -> https://www.blender.org/download/

Softimage Mod Tool -> https://developer.valvesoftware.com/wiki/Softimage_Mod_Tool


Modeling Basics -> http://docs.pixologic.com/user-guide/3d-modeling/modeling-basics/


Z-buffer -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Z-buffer

Overscan -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Overscan

Render -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Render


Projection -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Projection

Procedural Texture -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Procedural-Texture

Bounding Box -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Bounding-Box

Depth of Field -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Depth-of-Field

Field of View -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Field-of-View


Mip-Mapping -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Mip-mapping

Particle System -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Particle-System

Pixel -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Pixel

Alpha Channel -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Alpha-Channel

Mask -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Mask


Triangle -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Triangle

Vertices -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Vertices

Micropolygons -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Micropolygons

Mesh -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Mesh

Vertex -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Vertex

Edge -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Edge

Face -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Face


Spline Types -> https://docs.blender.org/manual/en/latest/modeling/curves/structure.html#curve-spline-types

NURBS / B-Splines (Blender) -> https://docs.blender.org/manual/en/latest/glossary/index.html#term-Non-uniform-Rational-Basis-Spline

NURBS / B-Splines (3ds Max) -> https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2023/ENU/3DSMax-Reference/files/GUID-82A6AEA2-568C-4F19-9BC3-D5C362122EDA-htm.html

The same is true of DX12. Also, they are running it on Linux with a wrapper, which adds a whooooole other layer of potential choke points in translation and isn't an actual 1:1 conversion of DX to VK. Sometimes they work great, sometimes they don't. It really depends on the game and how well the wrapper is optimized.

They usually only keep core or stale assets in DRAM that might be used again soon. I'm not going to dive into garbage collection and texture pooling, but you'll need to read up on them to fully grasp it all. It's perfectly normal for your GPU to be at 25-50% VRAM and still have a lot of assets in DRAM. The GPU doesn't tend to keep bloat in VRAM and typically only keeps exactly what it needs for that frame or two (some amortized effects might keep some persistent data in VRAM). WoW has a very efficient and aggressive LOD system for textures and models. The reason your usage looks low is that there simply isn't enough on the screen to need it. Things like render resolution can bump that VRAM usage up, though, since it's a deferred rendering engine (each render pass takes up X*Y pixels times different bit depths for various passes, and there are quite a few passes for each frame). Also, the game is highly dynamic. You might suddenly have 200 people hearthstone in at the same time; what is the client going to do then? It doesn't have any clue what random stuff like that can happen. What if those 200 people suddenly decided to spawn 200 random spell effects and swap every piece of their armor all of a sudden? Hopefully you're starting to see what I mean.
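
To get a sense of scale on the render-resolution point, here's a back-of-the-envelope sketch; the render-target count and formats are assumptions for illustration, not WoW's actual G-buffer layout:

```python
# Why render resolution drives VRAM use in a deferred renderer: each full-screen
# target costs width * height * bytes_per_pixel, and there are several targets.
def gbuffer_mb(width, height, targets):
    return sum(width * height * bpp for bpp in targets) / (1024 ** 2)

# Hypothetical set of targets: albedo (4 B), normals (8 B), depth (4 B),
# HDR color (8 B), plus a couple of 4-byte auxiliary buffers.
targets_bytes = [4, 8, 4, 8, 4, 4]

for w, h in [(1920, 1080), (2560, 1440), (3840, 2160)]:
    print(f"{w}x{h}: ~{gbuffer_mb(w, h, targets_bytes):.0f} MB of render targets")
```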

Anyways, long story short, WoW has a pretty efficient rendering engine for being an MMO and surprises me sometimes with how well it handles some of the extremes thrown at it. It's a game designed to run on the widest range of platforms possible: $300 toasters to $3,000 Ferraris. The average PC today runs at 1080p, has between 4 and 8 GB of VRAM, and has 16 GB of DRAM (Steam hardware survey). The most common GPUs are the 1060, 1650, and 1050 Ti. Blizzard likely targets between that and all the way back to whatever the survey said 4-5 years ago.

You just copy pasted my entire post, and then tried to reword it lmao. Interesting forum troll anyways.

You mean the post that was half-cooked-copy-pasted-paraphrased stuff off the internet? And you want to call me a forum troll? No, that isn’t the case here… You brought some stuff up, without actually knowing what you’re talking about.

You brought up:

  • Wrappers, but left out the key part about interpreters not being 1:1 perfect translations, in terms of performance

  • Things preferring to store everything in “slower DRAM instead,” without understanding how rendering engines actually work, so I elaborated on it. You act like it's some massive issue that stuff is stored in DRAM, when even some mid-range DDR4 RAM has peak speeds in the 20-gigabytes-per-second range, and it's not like you'd be moving gigabytes' worth of stuff to the VRAM in one single frame (rough numbers are sketched just after this list)…

  • Vulkan vs OpenGL, without realizing you’re basically comparing DX12 vs DX11.0, in terms of how low level they are.

  • Bump mapping, which WoW and next to no other game engine really ever use. It’s called normal mapping, which is far different from bump mapping.

  • Rendering off screen things, z-buffer depth calculations, FOV and other frustum culling, which 99.999% of deferred rendering engines all do. Oh and deferred rendering engines have a pretty small border padding for objects just outside of the frustum. It might be like 15% larger than the frustum.

  • That WoW struggles with particles, shadows, shaders and lighting, which is false. Translucency passes are expensive in all engines, due to sorting, and they are almost always done after the deferred pass, in a separate forward pass, but before any post-processing or tone mapping. You run into something known as overdraw. This is why most game engines try to use dithered masking instead of translucency if they need gradients. Shadows, depending on what type you're using (assuming WoW uses CSMs), can be pretty cheap and performant, but it depends on a lot of factors. Layering on SSAO and raytracing will jack the cost up. Lighting: WoW has never been known for having a ton of dynamic lighting, and almost all of it is prebaked because dynamic lights are expensive. Compound that with hundreds of meshes and actors on the screen and it's simply not worth the drain on performance.

  • “Not knowing they are rendering a ton of stuff offscreen,” which is wrong: they aren't rendering it; the assets sit in RAM until they are needed, likely based on proximity to the player. I won't dive into instanced rendering costs either, but that's another piece of the offscreen puzzle you're trying to describe.

  • 20-24/30 FPS, without realizing that frame buffering on PC doesn't reliably work that way and that 30 FPS on PC, even with G-SYNC, looks horrible vs. running 30 FPS on a console, which handles frame buffer output differently.
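
To put rough numbers on the DRAM bullet above (the figures are typical ballpark values, not measurements from either machine in this thread):

```python
# Rough scale check: dual-channel DDR4-3200 system RAM and a PCIe 3.0 x16 link,
# versus what one frame at 60 FPS can realistically move. Figures are assumed
# typical values, just for orders of magnitude.
ddr4_3200_gb_s = 2 * 25.6      # two channels at ~25.6 GB/s each
pcie3_x16_gb_s = 15.75         # approximate one-way PCIe 3.0 x16 bandwidth
fps = 60

print(f"System RAM bandwidth: ~{ddr4_3200_gb_s:.0f} GB/s")
print(f"PCIe transfer budget per frame at {fps} FPS: "
      f"~{pcie3_x16_gb_s / fps * 1024:.0f} MB")   # roughly 270 MB per frame
```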

I could go on and on about this all, but basically, you wrote a huge wall of text, without actually knowing much about what you’re talking about; trying to seem “impressive” on the internet. Then you have the nerve to call someone a troll, who expands on topics talked about and who actually works in the industry? Sounds like a typical Linux user…

There are a few services that can be disabled… you need to look up each service to find out what it does and whether it's safe to disable.

I can tell you services are not what is causing it. My CPU utilization is around 14% when WoW is running, and considering this is only happening in Stormwind, I suspect it has to do with whatever assets are used there, since it doesn't happen in Oribos or Org, which have just as many people.

I am going to pull the 5900X and put it into the Windows machine this weekend to see what happens. I'm not looking forward to it because the Linux machine is in a water-cooled Lian Li 14h20 case and I have big fat sausage fingers.

The following list has quality reference material / information to help anyone independently verify the basic differences between DDR4 and GDDR5/GDDR6. Sorry for my limited replies, but I don't want to write anything outside the scope of this particular discussion:

DICTIONARY OF TERMS FOR SOLID-STATE TECHNOLOGY, 7th Edition -> https://www.jedec.org/standards-documents/docs/jesd-88c

Micron - Technical Note - GDDR6: Design Guide - PG#2

While standard DRAM speeds have continued to increase, development focus has been primarily on density often at the expense of bandwidth. GDDR has taken a different path, focusing on high bandwidth. With DDR4 operating from 1.6 to 3.2 Gb/s, LPDDR4 up to 4.2 Gb/s, and GDDR5N at 6 Gb/s, the increase in clock and data speeds has made it important to follow good design practices. Now, with GDDR6 speeds reaching 14 Gb/s and beyond, it is critical to have designs that are well planned, simulated and implemented.

Table 1: Micron GDDR and DDR4 DRAM Comparison

Product                   DDR4       GDDR5     GDDR6
Clock Rate (tCK) Max      1.25ns     20ns      20ns
Clock Rate (tCK) Min      0.625ns    1.00ns    0.571ns
Data Rate (Gb/s) Min      1.6        2         2
Data Rate (Gb/s) Max      3.2        8         14
Density                   4–16Gb     4–8Gb     8–16Gb
Prefetch (Burst Length)   8n         8n        16n
Number of Banks           8, 16      16        16

https://www.micron.com/products/ultra-bandwidth-solutions/gddr6/part-catalog/mt61k256m32je-14

Micron - Technical Note - DDR4 Point-to-Point Design Guide - PG #2

Table 1: Micron's DRAM Products

Product: DDR4
Clock Rate (tCK) Max: 1.25ns
Clock Rate (tCK) Min: 0.625ns
Data Rate Min: 1600 Mb/s
Data Rate Max: 3200 Mb/s
Density: 4–16Gb
Prefetch (Burst Length): 8n
Number of Banks: 8, 16

https://www.micron.com/products/dram/ddr4-sdram/part-catalog/mt40a512m8sa-062
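
The per-pin data rates in those tables only turn into comparable bandwidth figures once you multiply by bus width. A quick sanity check, assuming a typical 64-bit DIMM channel and a typical 256-bit GPU memory bus:

```python
# Sanity check on the tables above: per-pin data rate (Gb/s) times bus width
# (bits) divided by 8 gives GB/s. Bus widths are assumptions for a typical
# DDR4 DIMM channel (64-bit) and a typical GDDR6 card (256-bit).
def bandwidth_gb_s(data_rate_gbps_per_pin, bus_width_bits):
    return data_rate_gbps_per_pin * bus_width_bits / 8

print(f"DDR4-3200, 64-bit channel:  ~{bandwidth_gb_s(3.2, 64):.1f} GB/s")   # ~25.6
print(f"GDDR5 8 Gb/s, 256-bit bus:  ~{bandwidth_gb_s(8.0, 256):.0f} GB/s")  # ~256
print(f"GDDR6 14 Gb/s, 256-bit bus: ~{bandwidth_gb_s(14.0, 256):.0f} GB/s") # ~448
```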

You could just re-read everything I wrote, or read more carefully; it would make more sense to you. You will notice I was describing a game engine from 1997-1999, i.e., an FPS game, in comparison to Vanilla WoW. The developers have actually touched on this topic too. Most of my writing on this was designed to be child-friendly, as far as how simplistic and easy to understand it was. I also used some examples to show how each term is used in a sentence. All I've included are basic definitions, nothing more:

Bump maps

These are textures that store an intensity, the relative height of pixels from the viewpoint of the camera. The pixels seem to be moved by the required distance in the direction of the face normals. (The “bump” consists only of a displacement, which takes place along the existing, and unchanged, normal vector of the face). You may either use grayscale pictures or the intensity values of an RGB texture (including images).

https://docs.blender.org/manual/en/2.79/render/blender_render/textures/properties/influence/bump_normal.html

Thinking back, you’ve done this in other threads I’ve been in as well and it just results in the thread getting locked because you won’t leave it alone. I’m not going to keep clogging up this thread with your trolling at this point. You simply don’t know what you’re talking about and posting walls of text won’t change that either.

Getting back on topic, there are a lot of Windows settings that can mess with performance. The Xbox Game Bar is famous for locking frame rates to 60, if the “record what happened” option is enabled. People are notorious for not having their Nvidia drivers up to date or for having messed around with NV control panel settings, registry, etc etc. That or they forget to set the refresh rate to match their monitor, so they get g/vsynced to something lower than they expect.

WoW uses up to four or six threads, I think. A 3900X has 12 cores / 24 threads. 24 * 0.14 = roughly 3.36 threads saturated, which lines up with what I said about WoW using 4-6 threads. At most, you'll only ever see your CPU usage between 25% and maybe up to 50% while playing WoW, due to each thread not technically having the same power as an actual core, but you get my point: you'll never see WoW fully saturate your CPU unless you've got multiple clients open.
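
A quick way to check this yourself, with the same arithmetic plus a per-core readout (a sketch assuming the third-party psutil package), since an overall utilization number can hide a single pegged render thread:

```python
# Same arithmetic as above, plus a per-core readout: a low overall CPU
# percentage can still hide one fully saturated render thread. Assumes the
# third-party psutil package (pip install psutil).
import psutil, time

logical_cpus = psutil.cpu_count(logical=True)        # 24 on a 3900X
overall = 0.14
print(f"{overall:.0%} of {logical_cpus} threads ≈ "
      f"{overall * logical_cpus:.1f} threads' worth of work")

psutil.cpu_percent(percpu=True)                       # prime the counters
time.sleep(1.0)
per_core = psutil.cpu_percent(percpu=True)            # sample over ~1 second
print("Busiest logical CPUs:", sorted(per_core, reverse=True)[:6])
```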

What is your GPU utilization at when the game is running? (Task Manager > Performance… it should be called GPU 0.)

Edit: if you change the setting from auto-detect to the graphics card that you use, it will force WoW to use that card.

Make sure the WoW client's advanced settings aren't set to limit the background FPS; otherwise, the usage will drop when you alt-tab (it defaults to something like 8 FPS in the background, I think).
