MilkyWay@home delta PPD with SSD vs SSHD...wrong conclusion

Vester

Well-Known Member
USA team member
Edit addition: After further observation, the large improvement was due to AMD Radeon Settings, not the SSD. The improvement came on my Radeon HD 7990 video cards from raising the voltage by +10% and reducing the VRAM speed from 1500 MHz to 150 MHz (for heat reduction). The increased voltage is needed to keep the GPUs running at 950 MHz without intermittently throttling back to 501 MHz. I am sorry for the bum dope, but I do admit my mistakes.

I recently decided to mirror my SSD to an SSHD for use with my four Radeon HD 7990 video cards (eight GPUs). Everything else remained the same as before with the SSD (except that I defragged the SSHD). I did not anticipate a performance change, but I saw decreased PPD, as can be seen in the Daily Project Scores histogram at Free DC, photo: https://ibb.co/VB6WHm0 . The difference is about 500,000 PPD (3,300,000 vs. 2,800,000). I changed back to the SSD on 13 October. The computer is a dedicated cruncher and is used for no other purpose. I waited a few days because I did not want to prematurely claim that I had observed a difference of nearly 15%.

The rig has eight GPUs (four dual-GPU cards) on four PCIe x1 risers, an Intel i5 7600 (4 cores without hyperthreading) at 4.0 GHz, and 16 GB of DDR4-2666 RAM on an ASUS B250 Mining Expert motherboard. I run three work units per GPU simultaneously, or 24 WUs at a time, on the 4-core processor.

I don't know what it can do for a computer with a single video card, but it seems worth trying a $50 SSD if yours has a rotating hard drive.
 

Nick Name

Administrator
USA team member
This is pretty shocking; some difference wouldn't have surprised me, but this is extreme. If everything else was equal, then I suspect the frequent writes from completing tasks are the cause. That's a lot of concurrent tasks, and MW jobs on those GPUs finish pretty quickly.
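One way to check that theory would be to watch the write rate on the drive holding the BOINC data directory while the tasks are running. A rough sketch with Python's psutil; the device name and the 60-second window are placeholders, not details of Vester's rig:

    # Sample the write counters on the disk that holds the BOINC data
    # directory and report how much was written over one minute.
    # Requires psutil; "sda" is a placeholder device name (Linux style --
    # on Windows it would look like "PhysicalDrive0").
    import time
    import psutil

    DISK = "sda"        # adjust to the drive holding the BOINC data directory
    WINDOW_S = 60       # length of the sample window in seconds

    before = psutil.disk_io_counters(perdisk=True)[DISK]
    time.sleep(WINDOW_S)
    after = psutil.disk_io_counters(perdisk=True)[DISK]

    writes = after.write_count - before.write_count
    written_mb = (after.write_bytes - before.write_bytes) / 1e6
    print(f"{writes} writes, {written_mb:.1f} MB written in {WINDOW_S} s")

Comparing that number on the SSD against the SSHD would show whether the write load is large enough to matter.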
 

Vester

Well-Known Member
USA team member
Thanks, Jason Jung. I changed the preference from 60 seconds to 240 seconds. The work units take about 150 seconds or less when I run three per GPU (under 50 seconds per WU of throughput).
I have 16 GB of RAM and I see no change in the amount in use (5.7 GB of 15.9 GB) after changing the time to 240 seconds. The RAM usage increases if I run more WUs per GPU.
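For anyone who wants to make the same change without going through the GUI, here is a minimal sketch. It assumes the preference in question is BOINC's "write to disk at most every N seconds" setting, that it is applied through the standard global_prefs_override.xml mechanism, and that the data directory is in the Debian/Ubuntu default location (it differs on Windows and other installs):

    # Minimal sketch: write a global_prefs_override.xml that raises the
    # "write to disk at most every N seconds" preference to 240 seconds.
    # The data-directory path is an assumption (Debian/Ubuntu default);
    # on Windows the data directory is usually C:\ProgramData\BOINC.
    from pathlib import Path

    DATA_DIR = Path("/var/lib/boinc-client")   # assumed BOINC data directory
    OVERRIDE = DATA_DIR / "global_prefs_override.xml"

    OVERRIDE.write_text(
        "<global_preferences>\n"
        "    <disk_interval>240</disk_interval>\n"
        "</global_preferences>\n"
    )

Writing the file this way replaces any overrides already in it, and the running client only picks up the change after it re-reads the file (boinccmd --read_global_prefs_override) or restarts.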
 

Nick Name

Administrator
USA team member
It's been a while since I've run this; I'm not sure this app even has a checkpoint. It wouldn't be worth bothering with on high-performance double-precision cards like Vester has.
 