Ultimate StarCraft 2 Optimization Guide (CPU)

I also found this tonight. In theory it offers potential CPU clock-cycle savings. Not sure if it applies to DX9, however.

I see.

The deletion has zero probability of working as a workaround, it would appear, since by that explanation the check is native to the game's code itself. If only we could force the correct path through software illusion. At the end of the day it's all x86.

Would genuinely be great if the next patch fixed this. It is a problem on Ryzen, which otherwise, thanks to its machine-learning-assisted branch predictor, offers a promising IPC uptick in wonky loads. Perhaps even 9-thread PS3 emulation come the PS5's launch, since GPU compute has its own special bus even on the OG PS4. x86 is perfectly crunchable using GPU backends. Poor Jaguar has needed it since debut.

You want a novel?

I've tried the 2-hour video walkthrough on YouTube for OS-level manipulation chasing numbers. It's either too much information or too little, and by "too little" that's your typical antisocialite's rebranding of reality. If you want a guide walking you through everything (I assume you aren't a total invalid), you probably have no business messing with advanced methods literally no one has demonstrated anywhere pertaining to this title. I generally assume the average person, even a child, knows how to play with the in-game settings and figure out what works for their system. I mean, what are you expecting, me to finger-dance out "lowering resolution can improve your frame delivery"? Unfortunately the options available when it comes to optimization for this title are exceedingly limited, hence "ultimate": it brings to the table something relevant to the biggest bottleneck in this program that everyone else ignores. You're welcome.

Thanks for the input. I'll have to come back this weekend and update the guide. Looks like one can spoof the software and it indeed works. I'll credit that addition accordingly. Apparently SSE 1, 2, and 3 are omitted. It's essentially the GameWorks of the CPU world. Kind of sad, honestly, the monopolization game. Especially in recent times, since 32 MB of L4 was a card they played during the blip of Broadwell. Nothing is stopping them from reusing this strategy except lithographic sloth. I don't think Blizzard will patch it, because they get royalties from Intel, as evidenced by their competition advertising. In my experience compilation on Ryzen is massively slower if I, like a madman, click through the menus, but the actual in-game performance isn't as blatantly affected since it's all compiled by that point. It's still annoying and comes across like a bug, when in truth it's clearly Intel playing dirty.

I presently cannot do god's work and further improve this guide with a workaround until a later date. Unfortunately I'm paid to be away from my machine, and I also do this thing called sleep.

If you have an AMD CPU, you would in the short term perhaps benefit from googling "intel compiler starcraft 2". Reddit has a promising tidbit.

I am just saying that the title of the topic is misleading/false. I am not saying that it is your duty to provide a step-by-step guide or anything, but then title your topic correctly (or don't be surprised when someone expects something different). "TLDR" would be appropriate, I guess. As for the content: too many words, too little content. As for your reply to my post: too many insults, too little of anything useful.

I often marvel at the question of why human variation in the StarCraft universe continues to exist beyond Blizzard's entertainment value. How would any species prevent its evolution into another across worlds without what we today have as cloning, CRISPR, and genomic screening to keep the non-prosocial from existing in a galaxy swimming with installations of presumably antimatter warheads and fractional-light-speed missiles capable of liquidating planetary bodies and shifting the rearranged mass into new orbits? A humanity with weaponry like micro black holes colliding within a designated target, being thousands if not millions of years in advancement, could not as an intelligent life form maintain efficient civilization scaling light-years apart if an interspecies arms race via Darwinian checksums were still the only game in town. It would be inefficient at best. Probably one very easily conquered by another civilization that did away with random mutation, or with phenotypes exhibiting tendencies toward self-destruction, in the wake of technology that makes nuclear-triggered hydrogen volleys look like pop rocks.

If it's incomprehensible and you are incapable of contribution, you should probably just move along.

Ultimate means the best. I don't see you or anyone else making threads explaining why this game runs suboptimally from a software perspective, let alone finding and sharing how to get around it. Just be honest: you get a dopamine rush from being disagreeable. I think we as a species will have the eventual honor of purging the likes of characters who enjoy being obtuse talking primates. It will be required whether we like it or not, as a practical necessity. Right now we are still primitives. We still point arsenals of fleeting suns at ourselves, where a more sophisticated species couldn't afford to.

@ 08:58-11:47

I can't believe it. It's not scientific, and I don't have a control recorded before and after, but it appears simply deleting mcupdate_AuthenticAMD.dll, using software that adds a right-click option to delete almost anything, does the trick on Ryzen. It at least appears, to my tired existence, faster. I will have to do a full workup sometime this weekend, ideally a proof video compilation, since YouTube is allowed along with, as one user pointed out, URLs with advanced spacing techniques so as not to be flagged, though I refrained for other reasons. I hope it's working legitimately and I don't have to make that write-up about virtual machine CPU ID spoofing just to get that reported 10% boost in CPU performance, because SSE instructions matter.

It's hard to believe the solution was that simple, and I don't know the downstream consequences of removing such a file in other situations.

Edit: Actually, playing with it more, it might not have changed. There's a bit of variance, I guess.

I'll eventually edit the first post to make it pointedly about the CPU bottleneck and scrub the bonus material, as it has its own thread anyway. My goal is to get this working, though the overhead of a VM in itself might have me include an OS optimization segment, which is frankly more work than the average person would be willing to follow through with. That would take more time, since the background task and service dependencies have grown more unified with each build revision, and I'm not super familiar with the nuances of the latest release. Another problem is, once I release an optimized OS segment, are people REALLY going to stay stuck on that build, let alone do their own optimization as new builds emerge in the wild with potential new background overhead?

I'm almost doing it more for myself at that point, and for the rare obsessive personality type that actually cares enough to get the reward of what this hand-holding has to offer. Preferably the commenter Knowbody already knows how to do this and would love to share specifics.

Hook us up, broski. The Ryzen in-menu performance is absurd.

Edit: This might take longer. I'm going to have to buy a different motherboard. Turns out these modules aren't actually supported; in fact, despite the mundane 3600 CL18 nature of this serial, I can't find a single X570 board that supports it properly (QVL). It really comes across as a way of selling more RAM to the dumb masses. It's a statistical dice throw whether your RAM will survive its binned, intended functionality into other generations, despite the traces having the same speed limit. There is really no good reason why, after DDR1, DDR2, and DDR3, suddenly with DDR4 the QVL is the holy grail of compatibility, with the curveball that Ryzen's version of XMP is a hit-or-miss affair, which is actually a thing. The good news is it's an excuse to get 64 GB of 3600 CL16. The bad news is this project is super definitely on hold for much longer.

I'm sure if I do all of this I can lose a lot faster.

No. The game’s executable contains the GenuineIntel check.

The only ways to make it use the Intel path are:

  • Modify the game executable to remove the GenuineIntel check, or replace it with AuthenticAMD. But I think this would almost certainly trigger some anti-cheat, so I don't recommend trying this.

  • Tell Blizzard about this and hope that they remove the GenuineIntel check in their official game executable.

  • Spoof the CPU’s vendor ID to say GenuineIntel
    You can’t do this natively with Ryzen CPUs, but you can do it inside a virtual machine.
    If you install a Windows VM, use GPU passthrough, and configure the VM to spoof the CPU’s vendor ID, it does work.
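
For anyone curious what that check actually boils down to, here is a rough C sketch of the vendor-gated dispatch pattern (the kind the Intel compiler's runtime is known to emit). The function names are made up for illustration; this is not code lifted from the SC2 binary:

#include <intrin.h>
#include <stdio.h>
#include <string.h>

/* Stand-in code paths, purely illustrative. */
static void blend_sse(void)   { puts("fast SSE path"); }
static void blend_plain(void) { puts("baseline scalar path"); }

static void (*blend)(void);

static void pick_path(void)
{
    int regs[4];
    char vendor[13];

    __cpuid(regs, 0);                 /* leaf 0: vendor string     */
    memcpy(vendor + 0, &regs[1], 4);  /* EBX -> "Genu" or "Auth"   */
    memcpy(vendor + 4, &regs[3], 4);  /* EDX -> "ineI" or "enti"   */
    memcpy(vendor + 8, &regs[2], 4);  /* ECX -> "ntel" or "cAMD"   */
    vendor[12] = '\0';

    /* The problem pattern: gate on the vendor STRING instead of the
       actual feature bits, so "AuthenticAMD" lands on the slow path
       even though the CPU supports SSE/SSE2/SSE3 perfectly well.   */
    if (strcmp(vendor, "GenuineIntel") == 0)
        blend = blend_sse;
    else
        blend = blend_plain;
}

int main(void)
{
    pick_path();
    blend();
    return 0;
}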

Compatibility is not my concern. I get what you mean.
https://i.imgur.com/1N15Huf.png
I think it would be faster and more efficient to fire the entire engine team and pay Unreal or another game engine company royalties. Autonomy?
I want Aiur like this, please
https://i.imgur.com/mnl2wbG.jpg

We need a way to port assets to modern APIs without the headache. Hardware abstraction on the GPU side has that disgusting DX11 on life support. Just reintroduce hardware abstraction, since devs can't be trusted to go low level with the GPU outside fixed hardware, and get our CPU thread balance automated. Pretty much every engine that is free to use is severely gimped at 2 cores maximum.

Even the side rig I've been relying on has had issues. It's about to get mothballed and museumified as an antique, but build 2004 (the latest), fully updated with painstaking efforts to ensure no corruption or registry-level driver weirdness from remnants, just does not play well with my GTX 980M. Getting into the Nvidia control panel takes literally a minute, and manipulating variables is literally going to cost you 5 minutes. I've put them back on all 8 threads, so it's definitely just a bug. All the new drivers I've tried are the same story. Looks like I'll be back on this thread in a month or so to polish it off. I intend for it to be one edit, and done. Hopefully it doesn't get removed. Shame I have to use a virtual machine and lie to the game to get the full potential of that x86 license.

Fixing computers is the most miserable work. Eh… I don't know.

Let another company fix it, and you teach Blizzard how to use it. Don't touch it. Aren't you in pain?

So I've succeeded in installing both Linux and Windows in their own VMs.

You seem to know more about it than I do. Do we ultimately have to buy a license for specific software to get said functionality? That's kind of what it's looking like. I'm open to doing that, but maybe you could clear up what sort of options exist.

Windows pro’s built in method might be lacking, or im just ignorant. Progress. The defaults even in firmware definitely aren’t productive to such adventures.

If you’re using VMWare, put this in the .vmx file:

cpuid.0.ebx="0111:0101:0110:1110:0110:0101:0100:0111"
cpuid.0.edx="0100:1001:0110:0101:0110:1110:0110:1001"
cpuid.0.ecx="0110:1100:0110:0101:0111:0100:0110:1110"
cpuid.1.eax="0000:0000:0000:0001:0000:0110:0111:0001"

This will make the vendor ID say "GenuineIntel" (leaf 0's EBX, EDX, and ECX hold the ASCII chunks "Genu", "ineI", and "ntel"), and the leaf 1 value sets the family/model to an Intel one.
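
If you want to confirm the spoof took effect inside the guest, any CPUID tool will do; here is a minimal C sketch (MSVC intrinsics) that just prints the vendor string and the SSE feature bits this whole thread is about:

#include <intrin.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int regs[4];
    char vendor[13];

    __cpuid(regs, 0);                     /* leaf 0: vendor string */
    memcpy(vendor + 0, &regs[1], 4);
    memcpy(vendor + 4, &regs[3], 4);
    memcpy(vendor + 8, &regs[2], 4);
    vendor[12] = '\0';

    __cpuid(regs, 1);                     /* leaf 1: feature flags */
    printf("Vendor: %s\n", vendor);
    printf("SSE   : %s\n", (regs[3] & (1 << 25)) ? "yes" : "no");
    printf("SSE2  : %s\n", (regs[3] & (1 << 26)) ? "yes" : "no");
    printf("SSE3  : %s\n", (regs[2] & 1)         ? "yes" : "no");
    return 0;
}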

I have a theory this is more common than we might think. In the largest city section of The Witcher 3, with raytracing enabled through ReShade, I get a noticeable amount of CPU overhead. As I understand it, if this game doesn't use the extensions, more cycles are spent doing the same work with inferior, slower methods. I think this is the case in Crysis 3 as well.
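
To make the "more cycles" point concrete, here is a toy C comparison of the two kinds of loop a vendor-gated dispatcher picks between: a plain scalar version, and the same thing written with SSE intrinsics chewing through four floats per instruction. This is a contrived example of mine, not code from any of these games:

#include <xmmintrin.h>   /* SSE intrinsics */
#include <stdio.h>

/* Baseline path: one float per iteration. */
static void scale_scalar(float *data, float factor, int n)
{
    for (int i = 0; i < n; ++i)
        data[i] *= factor;
}

/* SSE path: four floats per iteration. Assumes n is a multiple of 4
   to keep the sketch short; real code would handle the remainder.   */
static void scale_sse(float *data, float factor, int n)
{
    __m128 f = _mm_set1_ps(factor);
    for (int i = 0; i < n; i += 4) {
        __m128 v = _mm_loadu_ps(data + i);
        _mm_storeu_ps(data + i, _mm_mul_ps(v, f));
    }
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    scale_scalar(a, 2.0f, 8);
    scale_sse(b, 2.0f, 8);
    printf("scalar last: %.1f, sse last: %.1f\n", a[7], b[7]);
    return 0;
}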

Edit: The Witcher was at some point patched to use up to 16 threads! No more stutter, even with the ReShade overhead.

My plan for harmonious frame delivery (G-Sync, NVMe) was to find out if the overhead tax I pay running ReShade can be canceled out by using Intel's path, assuming it is in fact being gimped. I managed to remove some of the stutter time by moving everything in the OS I have access to in Process Lasso, except explorer.exe and the game clients, onto my logical cores. The game itself is exclusively on 8 full-fledged cores (Edit: was). This yielded a noticeable improvement, but I might have to go Ryzen 4000 series to smooth things out if there is no Intel gimpery afoot hereabouts, and aim for a ridiculous 128 MB of L3 on their higher-end offering.
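
For reference, what Process Lasso does when you set those affinities boils down to a single Win32 call. Here is a bare-bones C sketch if you would rather do it yourself; the PID is taken from the command line, and the every-other-core mask assumes SMT siblings are numbered in adjacent pairs (0/1, 2/3, ...), which may not match every system:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Pin the given process to logical CPUs 0, 2, 4, ..., 14, i.e. one
       thread per physical core on a 16-thread part (assumption: SMT
       siblings are enumerated as adjacent pairs).                      */
    DWORD pid = (argc > 1) ? (DWORD)atoi(argv[1]) : 0;
    DWORD_PTR mask = 0;
    for (int cpu = 0; cpu < 16; cpu += 2)
        mask |= ((DWORD_PTR)1 << cpu);

    HANDLE h = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                           FALSE, pid);
    if (!h) {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    if (!SetProcessAffinityMask(h, mask))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    else
        printf("Affinity mask set to 0x%llx\n", (unsigned long long)mask);

    CloseHandle(h);
    return 0;
}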

I also didn’t mention, I got raytracing working in at least one API the game supports. It works in the real time 3D rendered movies and the UI between campaign missions. Looks awesome. Unfortunately its happy to destroy your performance making calculations (GPU bottleneck) with my current set up, but you cant actually see any difference in actual game. I assume the reasoning is when nvidia had ambient occlusion injection working, it might have been adjustable to render on cloaked units. The result was an unfair situation arising from them not adopting the technology, as easy as it is to adopt, natively into their code, so it was removed at some point quietly.

I am assuming from its behavior that the game uses 2 APIs, but I can't actually confirm it. I'm uncertain if it's just features being turned off in the same one, or if the rumors about DX10 were true, but only in special scenarios. I highly doubt the latter, however. It wouldn't surprise me, though.

You come across almost like a former employee, or someone closer to the inside in some abstract way. You'd never admit it, but that's perfectly understandable. The point is, the video I linked is 4 days old. This dude is bringing this stuff more mainstream. He's even shown up on Gamers Nexus' channel, so he has some traction. Intel might be dragged through this one kicking and screaming if necessary.

I don't suspect they are in any real danger, considering 10 nm desktop lithography is probably around the corner, and what's stopping them from pulling the eDRAM game? They did it in their 5th generation. It just, like he said, had terrible optics.

So I ran StarCraft 2 with Visual Studio alongside, and attached the SCIIX64.EXE process to each of the 3 patches that come from the link he showed in his video. If I'm not mistaken, it worked.

In fact each time I did this, the game crashed, which is what you would expect if the code was being changed in some way.

I will have to do more testing, however. Hopefully it saved me $250, but I might get the software eventually anyway because it's better than Windows Pro's version of virtual machinery.

It's definitely behaving differently. That must have been the fix! Now, instead of massively lagging if I flip through the menu tabs one after the other without much pause, it just stutters.

It will still have a single pause, but instead it will go into the loading screen, where the bar fills up again. Originally it wouldn't do this as often, which would be a compiler-based change. It does appear more fluid, and my framerate might even be higher in game.

So the fix: at 11 minutes into the video he shows the website, and you can go there and download the patches. Get Visual Studio 2017, run the game, and apply all 3 patches by attaching them to the game from the attach menu. You will have to restart the game each time, as it will crash, but AMD systems will run the game faster.

My in-menu framerate went from 60-62 to 62-63, funnily enough. It was and is super stable at those numbers. That, to me, is proof enough. Thank you, random YouTube tech guy.