It’s dead and will not move no more

After having pushed several of my graphics cards to their limits and beyond, I have to announce a death in the family. My Geforce GTX 570 died after some burn-in tests with FurMark. I was able to run 3DMark 11 a couple of times, but the card finally succumbed to the heat and decided to simply turn off. Actually, one voltage regulator decided to burn out and the card wasn’t functional anymore. This happened for several reasons. First of all, I don’t think the core voltage was set too high, because 1.152V is actually a value a GTX 570 should be able to handle.

I personally think that the overall heat, even in the water loop, was simply too high. A water temperature of around 50°C is something that brings every cooling loop to its knees. Anyhow, the whole system was boiling and cooking, and my graphics card said “No, I’m leaving.”

Like I said before, I know that the GTX 570 is capable of handling the previously mentioned core voltage. I guess the components used were manufactured on a Monday and simply deserved to die. After giving my condolences and mumbling some last words to her, I decided it was time to switch to something really, really fucking bad and ordered a Geforce GTX 580. At first, I ordered the wrong one: a Good Edition by Gainward, whose main problem is that its PCB differs from the reference design. That’s a giant problem for me, because I use a waterblock that is designed for the common Fermi 570/580 layout. That card is going back to the sender, and I finally ordered a card by EVGA instead.

I decided to take the Superclocked model, which is, I believe, the card with the better individual components on it. I had to pay a higher price, but that’s okay. EVGA offers some great warranty terms and is, in my opinion, the new high-end manufacturer. I was a fan of XFX for a long time, but they don’t build Nvidia cards anymore, which is sad, because they put out some absolutely great products. I also liked Gainward a lot, but the price of their cards compared to the service you get afterwards is nothing in comparison to EVGA. You could say EVGA is the real shit for every Nvidia fanboy.

At this very moment, my gaming rig is about 70 per cent finished and will hopefully be up and running within the next two days. I also hope that it will actually work, because I had some serious space issues with the retention module for the waterblock. Screws and things like that touch the backplate. This IS serious, man. All fingers crossed for my work on electrical insulation.

Ah, push it! Push it real good…!

Like I said in my previous blog entry, I’m going to upgrade my gaming rig. As a show of gratitude for my old system, I just had to squeeze the last bit of power out of it. I’m currently using two graphics cards in this system that are also going to be in the new one; the upgrade is mainly a replacement of motherboard, RAM and CPU. Graphics card #1 is a Geforce GTX 570 by Point of View. It’s a charged model, which means it’s already overclocked out of the factory. Stock clocks are 810MHz for the core and 1980MHz for the memory.

Graphics card #2 is a Geforce 9800 GT Green Edition by XFX, which is a pretty common card without any factory overclock. Anyhow, the fan design on this card looks pretty nice. Stock clocks are 550MHz for the core and 700MHz for the memory.

I was able to push the Geforce GTX 570 up to an 880MHz core clock, which is pretty good if you consider that the reference core clock on a completely non-overclocked card is actually 725MHz. The Geforce 9800 GT received no more than a fifty MHz overclock to the core and no overclock to the memory. So far so good, but I wasn’t very happy with these results, and I also wasn’t very satisfied with the fact that all these overclocks were applied in software only, using MSI Afterburner.
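Just to put those numbers into perspective, here is the quick back-of-envelope math. The little helper function is mine; the clocks are the ones mentioned above:

```python
def oc_gain(reference_mhz: float, actual_mhz: float) -> float:
    """Percentage gain of a clock over the reference clock."""
    return (actual_mhz - reference_mhz) / reference_mhz * 100

# GTX 570 core: 725MHz reference, 810MHz factory OC, 880MHz after my tweaking
print(f"factory OC : +{oc_gain(725, 810):.1f}%")  # ~ +11.7%
print(f"my OC      : +{oc_gain(725, 880):.1f}%")  # ~ +21.4%
```

So the software overclock already buys roughly another ten per cent on top of the factory tuning.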

After some research I finally found a version of NiBiTor that was actually able to handle the BIOS files I had read out with GPU-Z. The Geforce 9800 GT was pretty easy to modify. I was able to set the core, shader and memory clocks on the “main page” of the program, and I slightly raised the core voltage to 1.05V to get a little “insurance” on the core.

My red LED fan

The Geforce GTX 570 was a little bit more difficult, because I couldn’t just adjust the clocks on the main page. NiBiTor offers a sub-menu especially for Fermi GPUs, and the bunch of numbers I saw there at first was rather confusing. After a couple of minutes of asking Google, I finally found a good website explaining how to adjust the clock speeds correctly, and I was ready to let the editing begin. I also had to adjust the minimum and maximum core voltages, for two simple reasons. #1 – I wanted the card to run a bit cooler in idle mode, so I undervolted it a tiny little bit, from 0.92V to something around 0.85V, without any stability issues. #2 – I wanted to increase the maximum headroom by raising the voltage in 3D performance mode. The default was around 1.062V and I decided to bring it up to 1.151V; the maximum allowed voltage was set to a value beyond 1.2V, which I’m probably never going to use. Damage risk, you know?
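If you wonder what those voltage changes roughly mean for heat, here is a very crude sketch. It assumes dynamic power scales with the square of the voltage at a fixed clock, which is only a rule of thumb and ignores leakage and clock changes entirely:

```python
def dynamic_power_change(v_before: float, v_after: float) -> float:
    """Approximate relative change in dynamic power, assuming P ~ V^2
    at a fixed clock. A rough rule of thumb, not a measurement."""
    return (v_after ** 2 / v_before ** 2 - 1) * 100

print(f"idle undervolt 0.92V -> 0.85V : {dynamic_power_change(0.92, 0.85):+.1f}%")    # ~ -14.6%
print(f"3D voltage    1.062V -> 1.151V: {dynamic_power_change(1.062, 1.151):+.1f}%")  # ~ +17.5%
```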

I knew what my cards were able to do in the past, so I didn’t have to experiment that much to find stable clock values. The final results on my cards are the following. The Geforce 9800 GT now runs with a core clock of 725MHz and a memory clock of 900MHz. I didn’t want to push the memory too far because it simply wouldn’t make any sense. You always have to consider that this card is used for PhysX only.

The Geforce GTX 570 got a real blast. The core now runs stable at 950MHz and the memory was pushed slightly to 2150MHz. That’s an increase of 31% over the stock core clock given by Nvidia. 3DMark 11 scores increased by 400-500 points: the average score on factory default clocks was around 5,300-5,400, and after the overclock it was around 5,900 points. Unfortunately, I wasn’t able to kick it beyond the 6,000-point mark. I’m pretty sure I can push the final score on the new system to something between 7,000 and 7,500 points. I’ll keep you informed.
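And for anyone who wants to check the 31% figure, plus how little of it actually shows up in the benchmark, here is the arithmetic. The 5,350 below is simply the middle of my stock score range:

```python
core_gain  = (950 - 725) / 725 * 100      # ~ +31% over the 725MHz reference core
score_gain = (5900 - 5350) / 5350 * 100   # ~ +10% in 3DMark 11 points
print(f"core clock: +{core_gain:.0f}%  |  3DMark 11: +{score_gain:.0f}%")
```

Which shows quite nicely that benchmark scores don’t scale one to one with the core clock.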