Advanced GPU Settings For Windows 10 & 11

Gandolph1

Well-Known Member
USA team member
I just thought I'd share this for any of the Windows users on here; it made a huge difference for me:

HWpecker wrote:
Hello Anandbhat,

I tried the scheduling setting for a little while on Win10 Pro; it wasn't good for me.

WinMenu > Settings > System > Display > Graphics settings > Hardware-accelerated GPU scheduling ON/OFF

It was turned off for me by default, so I turned it on and crunched for a short while...

The average time per WU was 23% slower on a 3060 Ti LHR 8GB, so I turned it off rather quickly after the first few WUs crunched under the new setting came back.
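For anyone who prefers scripting the toggle instead of clicking through Settings, the same switch is exposed through the widely documented `HwSchMode` registry value (run from an elevated prompt; the value may be absent on systems without supported hardware, and a reboot is needed for the change to apply):

```shell
:: Check the current state: 2 = hardware-accelerated GPU scheduling on, 1 = off
reg query "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v HwSchMode

:: Turn it off (same effect as the Settings toggle); reboot afterward
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v HwSchMode /t REG_DWORD /d 1 /f
```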


Wow! I'm glad I decided to read this thread! Turning off the GPU scheduling setting as described here instantly dropped my crunch times on my 3080 Ti! (I'm running Windows 11 Pro, BTW.) I would guess they fell by at least 20 percent, if not more. Great find!



Thanks!!

** Edit **
I can confirm that this works on my 2080 Ti machine as well. Wish I had known about this sooner...
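The percentages quoted in this thread are easy to sanity-check from your own WU logs. A tiny helper (the function name and the sample timings are mine, purely illustrative):

```python
def pct_change(before_avg, after_avg):
    """Percent change in average work-unit time (negative = faster)."""
    return (after_avg - before_avg) / before_avg * 100.0

# e.g. an average WU time falling from 1500 s to 1155 s after toggling the setting:
print(round(pct_change(1500, 1155), 1))  # -23.0
```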
 

Nick Name

Administrator
USA team member
This is good information. I'm curious how much "serious" GPU computing is done in the Windoze world versus Linux. Linux has ruled compute performance for years, at least for CUDA applications.
 

Jason Jung

Moderator
USA team member
I too noticed crunching performance decrease when I tried it out shortly after it became an option for Windows 10. I tested it with an RTX 2070. I have no clue whether it actually helps with latency in games either.
 

Nick Name

Administrator
USA team member
From what I can recall based on comments I've read over the years, XP was on par with Linux for GPU run times. Vista and succeeding OS releases changed to WDDM (Windows Display Driver Model), which was created to cut down on GPU driver crashes. This came at a cost: greater latency and slower processing times for compute tasks. It was probably a good move on Microsoft's part, since gaming had, and probably still has, a much larger user base. There was talk about 10 having a GPU compute function that would fix the WDDM slowdown, and maybe this is it, but the way it's described doesn't make any sense to me. It offloads the GPU task processing order to the GPU itself, increasing the GPU's workload. That seems like something the CPU should be doing. It's no wonder it slows things down.

With Windows 10 May 2020 update, we are introducing a new GPU scheduler as a user opt-in, but off by default option. With the right hardware and drivers, Windows can now offload most of GPU scheduling to a dedicated GPU-based scheduling processor.

Windows continues to control prioritization and decide which applications have priority among contexts. We offload high frequency tasks to the GPU scheduling processor, handling quanta management and context switching of various GPU engines.
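To make the "quanta management and context switching" wording above concrete, here's a toy round-robin sketch (purely illustrative; this is my simplification, not how the actual WDDM scheduler is implemented). Each context runs for a fixed quantum, and unfinished contexts go to the back of the queue, so a latency-sensitive context keeps getting turns at the expense of interrupting a long compute task:

```python
from collections import deque

def run_quanta(contexts, quantum, steps):
    """Toy round-robin scheduler: each context gets a fixed quantum of
    work per turn until its work is done; returns the execution order."""
    queue = deque(contexts.items())          # (name, remaining_work) pairs
    timeline = []
    while queue and len(timeline) < steps:
        name, remaining = queue.popleft()
        timeline.append(name)                # this context runs for one quantum
        remaining -= quantum
        if remaining > 0:                    # unfinished work goes to the back
            queue.append((name, remaining))
    return timeline

# A latency-sensitive "game" context interleaved with a long compute task:
print(run_quanta({"game": 4, "boinc_wu": 12}, quantum=2, steps=6))
# → ['game', 'boinc_wu', 'game', 'boinc_wu', 'boinc_wu', 'boinc_wu']
```

The point of the toy: every context switch back to the frame-producing context is time the compute task sits idle, which is consistent with the slower WU times reported in this thread.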

I don't have 10 on any of my systems, but if I did I'd try tinkering with the priority just to see if it made a difference. Process Lasso has been recommended for this, and you can also do it from the command line, although I forget how. I'm also curious whether it's any better in 11, although I doubt it.
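For reference, two common command-line routes on Windows (the paths and the `boinc` process name are just examples of mine; I'm not claiming this is the specific method alluded to above):

```shell
:: Launch a program with a given priority class
start "" /belownormal "C:\Program Files\BOINC\boinc.exe"

:: Or change a running process from PowerShell via the .NET Process API
powershell -Command "(Get-Process -Name 'boinc').PriorityClass = 'BelowNormal'"
```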
 

Jason Jung

Moderator
USA team member
...was created to cut down on GPU driver crashes.
The graphics drivers still crash all the time. What happened is that much of the graphics driver stack was moved from kernel mode to user mode. Components running in user mode can crash and be restarted without causing a fatal system error.

It offloads the GPU task processing order to the GPU, increasing GPU workload. That seems like something the CPU should be doing.
My understanding is that newer GPUs have hardware dedicated to scheduling, so it shouldn't negatively affect the performance of the GPU. I'm more inclined to think hardware-accelerated GPU scheduling is doing a better job of getting frames out closer to real time, at the expense of delaying work on our BOINC tasks more frequently. I believe that's what it's supposed to do. I have yet to find test data specifically looking at latency; everyone seems to be focused on how it affects frame rates in games and nothing else.
 

Nick Name

Administrator
USA team member
Reading about this a little more, I realized I wrongly equated GPU-based scheduling with Win10 GPU compute mode. They have nothing to do with each other, if there even is such a thing as Windoze compute mode. Testing from the time this feature first became available showed no difference in games with it on, based on frame rates. Any difference was within the margin of error. It was pointed out that this shouldn't be surprising, since the Microsoft developer post about it said users shouldn't notice any difference with it on. That raises the question of why develop a feature if there's no obvious benefit. I saw a couple of comments from users with mid to lower end CPUs who said they saw decreased CPU load, which makes some sense. It still seems like a lot of trouble to rework this if that's the only benefit. More interesting were some comments about newer tech coming in the future. We'll have to see what that looks like. Based on this I'm doubting there will be any benefit for compute purposes, at least for us. Most telling is that we've already seen the negative effects from having it enabled.
 