Leadtek, EVGA Geforce 8800 GTX tested, even in SLI

First INQpression: The fastest graphic thing on earth

By Fuad Abazovic: Wednesday 08 November 2006, 11:09

WE TALKED a lot about the G80. It has been in the news for a while and today it is finally ready to launch. It had a last-minute tweak, as Nvidia says one of its resistors was too weedy. But the firm and its partners fixed that, and the cards you are about to buy should be trouble-free.

The G80 is a totally new chip: 90 nanometre, 681 million transistors, manufactured at TSMC. It is the largest graphics chip built to date, shielded in a big metal heat spreader. The fastest version is clocked at 575MHz while the GTS version works at 500MHz.

Nvidia did a good job and it is the first DirectX 10 chip on the market. It is also the first to support a 'unified' marchitecture. Nvidia claims that its chip is completely unified: the same pipeline calculates vertex, pixel and geometry Shader information. Nvidia claims 128 stream processors for the faster version of the G80, the Geforce 8800 GTX, while the second in line, the Geforce 8800 GTS, has 96.

The Geforce 8800 GTX has a core working at 575MHz. Its GDDR3 memory works in a rather odd 384-bit mode across twelve chips, for a total of 768MB. While we all expected 1024MB would be the next step, Nvidia went for 768MB due to its different memory controller. Nvidia clocked the memory at 1800MHz for a total bandwidth of a respectable 86.4 GB/s, with a fill rate of 36.8 billion a second.

The Geforce 8800 GTX is 28 centimetres long and is the longest card we've had in our hands. Only the Geforce 7900 GX2 was longer, but we never managed to get one of those.

Nvidia's unified parallel Shader design has 128 individual stream processors running at 1.35GHz. Each processor can be dynamically allocated to vertex, pixel, geometry or physics operations.
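That 86.4 GB/s figure falls straight out of the bus width and effective memory clock; here is our own back-of-the-envelope check of Nvidia's arithmetic:

```python
# Sanity-check the quoted G80 memory bandwidth (our own arithmetic,
# not Nvidia's documentation).
bus_width_bits = 384          # twelve 32-bit GDDR3 chips
effective_clock_mhz = 1800    # 900MHz GDDR3, double-pumped to 1800MHz

bytes_per_transfer = bus_width_bits / 8                       # 48 bytes per clock
bandwidth_gb_s = bytes_per_transfer * effective_clock_mhz / 1000

print(f"{bandwidth_gb_s:.1f} GB/s")  # 86.4 GB/s, matching the quoted figure
```

The odd 768MB frame buffer follows from the same maths: a 384-bit controller naturally pairs with twelve chips, hence 12 x 64MB rather than the round 1024MB everybody expected.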
We don't have any idea where this 1.35GHz number comes from, but the card renders fast and that's all you should care about for now. And of course it supports DirectX 10, a Vista exclusive.

Nvidia finally made a chip that works with FSAA and HDR at the same time. It supports the new 128-bit HDR, which handles 32 bits per component and produces better quality. The Geforce 8800 GTX has 24 ROPs (Raster Operation Units) or, should we say, can render 24 pixels per clock, while the Geforce 8800 GTS can render only 20.

You'll need a 450W or higher PSU that can deliver 30A on the 12V rail, so a quality 450W unit will do. Our 8800 GTX SLI setup worked with a 700W OCZ GameXstream PSU, so we can recommend that one. Two six-pin power connectors mean the card gets 2x75W from the cables plus an additional 75W from the PCIe bus. This brings the total power budget to an earth-heating 225W.

The Geforce 8800 GTX has a dual-slot cooler - massive and heavy, but it does the job. The card always worked at around 55 Celsius in 2D mode and a bit higher in 3D.

We will tell you more about the Geforce 8800 GTS in a separate part that will follow. We had two cards on test. You guys do realise the ramifications of this, right? DIFFERENT-BRAND SLI?

The first to arrive in our hands was an EVGA Geforce 8800 GTX with a brand-new ACS3 cooler. EVGA dared to change Nvidia's holy cooler and made some modifications to it. It added a huge metal heat spreader on the back of the card and covered Nvidia's heatpipe cooler with a massive EVGA-branded piece of squared metal. It is as long as the card itself, but you won't have any trouble connecting the power cords - the card has two of them - or the two SLI connectors.

Nvidia decided to put two SLI connectors on the top of the card, and the G80 works with the existing cables that you get with your motherboard. The only trouble is that Nvidia doesn't bundle an additional SLI cable, so if you have a single motherboard you will end up with a single cable.
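The 225W figure is simply the sum of what the PCIe spec lets each source deliver; a quick sketch of the budget:

```python
# Where the 225W power budget comes from: PCI Express delivery limits.
pcie_slot_w = 75        # a PCIe x16 slot can supply up to 75W
six_pin_w = 75          # each 6-pin PEG connector is rated for 75W
connectors = 2          # the 8800 GTX has two 6-pin connectors

total_budget_w = pcie_slot_w + connectors * six_pin_w
print(total_budget_w)   # 225
```

Note this is the maximum the card may draw through those paths, not a measured consumption figure; actual draw under load will sit somewhere below it.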
As we had two boards in the lab, we just borrowed an SLI cable from the other one. The card takes two slots and you need a longer SLI bridge to connect two of them. We tried it and it works in SLI too.

The ACS3 cooler is actually more efficient than Nvidia's reference cooler: the EVGA card works at 55C in 2D mode, while the reference cooler only gets the card down to 60 Celsius. This is what separates the EVGA card from the rest of the cards based on the reference cooler design.

The retail box is super small. We could not believe that EVGA managed to pack the card into such a small package and still include all the CDs and cables you need.

Second to arrive was a Leadtek Geforce 8800 GTX, and the moment we got it we just had to test it in SLI, of course. As we don't have any support from Nvidia we had to figure out how it works, but we did it.

The Leadtek Geforce 8800 GTX comes in a nice wide retail box. It includes two games, Spellforce 2 and Trackmania Nations TMN, along with a driver CD and a bonus software pack including Adobe Reader and Power DVD 6 ORB. There is a single DVI-to-VGA dongle and an S-video-to-HDTV cable with S-video out. It has two Molex-to-six-pin power connectors and that is all. This is all you need. The card supports HDCP, and the driver CD also includes Vista drivers, which we haven't tried yet. Leadtek adds an extra touch and at least brands the Nvidia drivers as Leadtek Winfast ones. They work flawlessly. The temperature in 2D mode is around 55 Celsius for the EVGA card and 60 Celsius for the Leadtek. The Leadtek card heated up to 75 Celsius in an open, out-of-the-case environment, while the EVGA ACS3 works at around 70 Celsius after a heavy 3D load.

The second, smaller chip on the PCB is TMDS display logic, as Nvidia either could not fit it into the G80 or had problems with integrated TMDS.
We believe the chip would have been too big with it all on board, so Nvidia went for a cheaper two-chip approach.

The driver in both cases has a few new features, including support for 16X anisotropic filtering and 16X FSAA in a few modes. The big news is that Nvidia finally supports 8X and 16X FSAA. Nvidia's Luminex is a marketing name for incredible image quality that includes support for 16X FSAA, 128-bit HDR and 2560x1600 resolution at a high frame rate.

The new FSAA modes include 8x, 8xQ, 16x and 16xQ. The 8xQ and 16xQ modes are brand new, but we don't know what the Q stands for; we will play with it at a later date. We didn't get the cards early enough to test everything we wanted.

Benchmarketing

We used:

Foxconn C51XE M2aa Nforce 590 SLI motherboard
Sapphire AM2RD580 with SB600 board for Crossfire
Athlon FX-62 2800MHz 90 nanometre Windsor core
2x1024MB DDR2 Corsair CM2X1024-6400C3 memory
Seagate Barracuda 7200.9 500GB SATA NCQ hard drive
Thermaltake Mini Typhoon cooler for Athlon 64/X2/FX and Intel CPUs
OCZ 700W GameXstream power supply

For the ATI cards we used the 6.10 drivers, the most current available; the G80-based 8800 GTX cards used the driver supplied on the CD, version 9.6.8.9, and the Gainward 7950 GX2 used the 91.47 drivers.

The power supply was enough for SLI as well. We plugged both cards into the Foxconn C51XE motherboard and we have to mention that the retention mechanism on this board sucks. It works flawlessly, but it is very hard to unplug a card as long as the 8800 GTX.

We bridged the cards with two SLI connectors from two different boards, installed the drivers for both cards, restarted and, voila, it works. SLI is enabled and we ran a few benchmarks.
The scores could be even higher with a Core 2 Duo or quad-core CPU, but at press time we decided to use our reference platform based on the FX-62.

Composite figures (3DMark03 / 3DMark05 / 3DMark06):

ATI X1950XTX 650/2000MHz: 19741 / 12348 / 6283
ATI X1950XT Crossfire 650/2000MHz: 32951 / 15725 / 9909
Gainward BLISS 7950GX2 PCX 500/1200MHz: 29758 / 13633 / 8083
EVGA eGeforce 8800GTX ACS3 575/1800MHz: 30485 / 15210 / 9814
EVGA-Leadtek 8800GTX SLI 2x 575/1800MHz: 49390 / 16374 / 10974

Game benchmarks removed (layout problems in the cut and paste); see here for full details.

We started with 3DMark03 and already saw the light. A single EVGA G80 card, the EVGA eGeforce 8800GTX ACS3 575/1800MHz, scores 30485. That is just slightly faster than the 7950 GX2, 2500 slower than Crossfire and more than 10000 faster than the single fastest X1950XTX card.

It gets even better, as the EVGA-Leadtek 8800GTX SLI 2x 575/1800MHz combo scores 49390, almost 50K. You'll need a faster or overclocked CPU for 50K, simple as beans. SLI is sixty-two per cent faster than a single card, and a single G80 is fifty-four per cent faster than an X1950XTX. It scores 194 frames in Game 3, just a bit slower than two ATI cards patched together. It is three times as fast as the Crossfire setup in the Pixel Shader 2.0 test: 1125.5 FPS versus Crossfire's 373.3, or 301 per cent. Amazing, isn't it? The Vertex Shader test is twice as fast on the G80 as on ATI's fastest card.

In 3DMark05 a single G80 beats ATI by almost 3000 marks. SLI beats Crossfire by only 600 marks, but it is still faster, and 16374 is a great score for this test. The Complex Vertex Shader test is almost twice as fast as on ATI's card.

Now to 3DMark06: you score 10974 with two cards or 9814 with a single card. This test is super CPU-bound.
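For the curious, the percentages we quote are plain ratios of the composite scores above; here is how we derive them:

```python
# How the speedup percentages in the text are derived: plain score ratios.
def pct_faster(score, baseline):
    """Percentage by which `score` beats `baseline`."""
    return round((score / baseline - 1) * 100)

print(pct_faster(49390, 30485))      # SLI vs single 8800 GTX in 3DMark03 -> 62
print(pct_faster(30485, 19741))      # single 8800 GTX vs X1950XTX -> 54
print(int(1125.5 / 373.3 * 100))     # PS2.0 test: 301 per cent of the Crossfire score
```

Note the two conventions: "62 per cent faster" means the ratio minus one, while "301 per cent" is the raw ratio, i.e. roughly three times the Crossfire figure.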
We will do some testing with a Core 2 Duo and overclocking, as we believe we should reach 13500 or more with a better or overclocked CPU.

The EVGA eGeforce 8800GTX ACS3 575/1800MHz is eighty-four per cent faster than ATI's X1950XTX in the Shader Model 2.0 test and seventy-one per cent faster in the Shader Model 3.0 / HDR test. It really smashes the competition.

Doom 3 scores around 135 FPS at the first three resolutions and drops to 125 at 20x15; SLI scores 135 even then, so this is clearly CPU-limited. The EVGA eGeforce 8800GTX ACS3 575/1800MHz is almost 80 per cent faster at 2048x1536. Doom 3 with effects on scores 130 FPS at the first two resolutions and then starts to drop, but it stays faster than the 7950 GX2 all the time. SLI doesn't drop at all at the first three resolutions and only drops slightly at 20x15.

FEAR scores are sky-high, with the weakest score of 95 FPS at 20x15: faster than Crossfire at the last two resolutions, and faster than the GX2 and X1950XTX at all times. It is up to 66 per cent faster than the X1950XTX and 68 per cent faster than the Gainward 7950 GX2. SLI is again 68 per cent faster than Crossfire, a massive difference.

The EVGA eGeforce 8800GTX ACS3 575/1800MHz scores 53 FPS even at the highest resolution with all the effects on plus 4X FSAA and 16X aniso, and much more at lower ones. Crossfire beats a single card by three frames at 16x12 and eight at 20x15, where a single card loses by some forty per cent. SLI is twice as fast as the 7950GX2 and 57 per cent faster than Crossfire.

Quake 4 runs up to forty-seven frames faster on the G80, and SLI improves the score further, but not by much. The G80 is always faster than the GX2 and Crossfire.
Quake 4 with FSAA and aniso runs some forty per cent faster than on ATI's fastest card, and G80 SLI is 30 per cent faster than Crossfire.

Far Cry with effects on is a performance match between the G80 and the X1950XTX, while SLI can outperform both at 20x15.

Serious Sam 2 is faster on ATI at the first two resolutions by three frames, while the EVGA eGeforce 8800GTX ACS3 575/1800MHz wins by eight to fifteen frames at higher resolutions. SLI is 23 per cent faster than a single G80 and 43 per cent faster than the X1950XTX.

Serious Sam 2 with FSAA and aniso on is always faster on the EVGA eGeforce 8800GTX ACS3 575/1800MHz, but not by that much, some nine to ten per cent, while SLI is 54 per cent faster than a single card and sixty-eight per cent faster than ATI's card.

We decided to reintroduce Oblivion, measured with Fraps, and let us conclude with it. We tested the Nvidia cards in 8X FSAA mode in our custom-developed test, a super-intensive routine with all the settings on. At 8X FSAA + 8X aniso and HDR, SLI only makes a big stand over a single card at 16x12 and especially at 20x15, where it is 21 per cent and 67 per cent faster respectively.

ATI could not do more than 6X FSAA, and even then it runs slightly slower than a single G80. At 20x15 it is unplayable at a few FPS, but at 4X FSAA it scores 27.26 FPS. This is still the game that can bring the G80 to its knees, and SLI can make it barely playable with all effects on.

Before we finished up testing, we decided to do a quick Quake 4 16X and 16xQ FSAA test. The Q mode is always about 11 per cent slower than the 16X mode, so maybe the Q stands for Quality. We are not sure, as we didn't get any documentation; we will figure it out later. As for the 16X scores, it is two or more times faster and drops to 29 FPS at 20x15, but if you ask us that should be enough. If you want 60 FPS, Quake 4 on the G80 scores that much up to 12x10, the resolution of most 19-inch TFT gaming displays out there.

In Short

Basically, at this time I can say the G80 rocks.
I love it, it's great and I don't have anything bad to say about it. Pick up your wallet if you want to spend €/$600 on a graphics card and enjoy it. You can whine that it's long and that it can get warm, but that's all nonsense, as this is the fastest card on planet Earth and you have to live with the heat and size. It is super stable, which is what matters most, and we didn't have any stability issues.

You can set the quality to 16X anisotropic filtering and 16X full-scene anti-aliasing and have 128-bit HDR at the same time. It looks great, and even FSAA plus HDR works on this card, something that didn't work on any Geforce 7 series card.

Nvidia can run 16X FSAA. It affects your performance, but it is still playable, at least at lower resolutions. We will look at the picture quality in the next few days, as we are far from done analysing the G80.

Overall, both the EVGA Geforce 8800GTX ACS3 575/1800MHz and the Leadtek Winfast PX8800 GTX TDH are great cards and you can get either of them. The only advantage is that the EVGA's temperature can get a bit lower, but just a bit. And the good thing is that they both work in SLI together, so maybe you can buy one EVGA and one Leadtek and be like us. If you want the fastest card on the planet, then you have to buy a G80. If you want the fastest gaming setup on the planet, then you have to buy two Geforce 8800 GTX cards and put them together in SLI. Not much more to say. Nvidia did it, and this is the fastest card for a while, at least until the beginning of next year. µ

Reviewed and tested by Sanjin Rados and Fuad Abazovic
Nvidia's dual-display SLI implementation is bonkers

Buy three graphics cards!!

By Theo Valich: Wednesday 08 November 2006, 10:33

WE HAVE BEEN asking Graphzilla about SLI and multi-monitor support ever since it came out with the SLI technology. The continuous mantra previously growled at us was that it would be solved in future driver revisions. Then the 7800GTX series came out and the mantra changed to: watch out for future hardware revisions. And now, with the 8800 series coming out, it is changing to: buy a third graphics card.

Nvidia is launching its third generation of SLI-capable hardware, and users can still forget about multi-monitor support when SLI is enabled. We know the problem lies not in the driver but in the way two displays are set up, GPU-wise.

The presentation on the Nforce 680i chipset, which is being launched today, contains an explanation of why Graphzilla is now offering three long PCIe slots for placing three graphics cards on the motherboard. The possibilities offered by the third PCIe slot are six monitors (which is nice), SLI Physics (marked as "Future Support") and SLI + dual display support, also marked "Future Support".

Now, wait a bit: dual display support needs three graphics cards? We won't go into the power consumption debate, but this seems rather excessive and expensive.

As you can see in the picture above, two of the graphics cards are today's babies, the 8800GTXs, while the one in the middle is a 7900GT, probably playing the role of a future DX10 mainstream graphics card. In the future, dual display support just may be added, as will SLI Physics.

Of course, since the third PCIe connector will be a feature of the 680i only, you can forget about lower-end chipsets such as the also-introduced 650i. Bear in mind that the third graphics card will be pretty much idle, since this is not about enabling gaming on the second monitor but rather about driving the Windows desktop.
You know, heavy fillrate-hitting stuff like Windows Explorer, Skype, ICQ, MSN and, oh my, the incredibly complex TeamSpeak screen. At least, according to the powerpointery. Nvidia should either stop talking about dual display support or pull a George Broussard and utter the words: when it's done. µ
First watercooler for Geforce 8800 arrives

And it's a beauty

By Theo Valich: Wednesday 08 November 2006, 17:40

SOME CRAZY GERMANS from AwardFabrik.de have shown the world's first waterblock for the yet-to-be-launched 8800 series. The prototype comes from a company named SilenX, and it looks menacing. Preliminary testing has revealed that the temperature of the GPU drops sharply by 30 degrees Celsius, from 81-85C down to the 50C range. Overclocking scores also show significant improvement.

Before the waterblock was installed, the GPU was running in the high 90s at a 625MHz clock. With the watercooler you can enjoy a 25degC lower temperature and a stable GPU clock of 660MHz, which is almost 100MHz faster than the default clock.

Of course, getting the GPU to run at 660MHz yields a Shader clock of over 1.5GHz (a 90nm part with almost 700 million transistors has parts that run at 1.5GHz), and doing that without AMD or Intel transistor technologies is a tribute to the engineers at TSMC and Nvidia, who created a small semico miracle.

The reason the cooler works so well is its backside. While the front looks pretty much like many of the waterblocks out there, turning it over reveals the German precision that went into designing this monster. All of the elements on the PCB are cooled: the G80 GPU, the Samsung GDDR3 memory, the NVIO chip and the power regulation circuitry.

This turns the G80 PCB into a warm-to-the-touch instead of a scorching-hot board, especially on the backside. The values we mentioned earlier are for the GPU alone, and while the Windows thermal monitoring did not show a sharp decline in the temperature of the power regulation, the cooling greatly reduces the stress on the PCB and will probably lead to a longer life for the board.

The cooler will be available soon and can be ordered using our complementary L'INQ. We would welcome a test unit, so that we can either confirm or refute the scores in independent testing. µ
Sparkle gives the G80 a cold reception

First INQpression: Sparkle's Calibre P880+, the first Peltier junction cooled GF 8800GTX card

By Nebojsa Novakovic: Wednesday 08 November 2006, 18:13

BIRTHDAYS ARE FUNNY affairs, sometimes full of surprises. My birthday this year had no surprises; it just happened to be the official birth date of Nvidia's G80, also known as the GeForce 8800 (GTX being the preferred variety).

By now, all is known about its Lumenex engine with unified shaders and DirectX 10 Shader Model 4 support, its GigaThread array of 128 streaming processors, and (finally) Nvidia's own "Quantum Effects" physics acceleration, not to mention the somewhat odd 384-bit memory bus with 768MB of GDDR3 RAM, providing some 86 GB/s of bandwidth, seven times that of an Athlon AM2 socket or an Intel Core 2 overclocked to FSB1600. Overall, it is indisputably the world's fastest GPU right now, at least until DAAMIT's R600 comes out in, hopefully, a few months.

Sparkle has joined the fray of Nvidia partners announcing a new G80 series of GF8800GTX and GTS cards; however, in this case, the company is coming up at the very start with a (slightly) modified offering as well. There are two GeForce 8800GTX models coming out from them: one is a standard reference model, the other the high-end Calibre model with a different cooling design (see photo). Sparkle claims that the Calibre P880+ 8800GTX cooler has better cooling performance, yet lower noise, than the reference design, and, once Nvidia allows G80 family overclocking, it might be of more help there, too.

Looks-wise, I'd say that the reference design is more polished, but again, the huge heat sink on the Calibre P880+, with its dual fans, gives an image of sheer power, doesn't it? It's what sits under those two fans that gives these cards the extra oomph (once unlocked) and lower noise with cooler operation: an active Peltier junction cooling engine, with its own 4-pin power supply (on top of the two 6-pin feeds for the card itself!).
Co-developed by Sparkle and TEC, the system is second only to the best water cooling systems when it comes to heat removal efficiency and low-noise operation; after all, you don't want the card's fan noise to overpower even the machine-gun fire in your favourite 3D shootout.

Inside the huge cooling system are a thermoelectric cooler, quad heatpipes and (quite slim) dual 8cm fans. The Peltier effect uses a transducer to produce a cold source that lowers the temperature of the GF8800GTX GPU. A transistor sensor placed near the GPU detects its temperature, while software monitors the overall temperature of the video card. When a certain temperature is reached near the GPU, the transducer turns on the cold source; when the GPU is idle, the transducer turns off automatically. The dual 8cm fans push the hot air out of the card via the quad heatpipes, helping to eliminate the heat remaining in the video card system. The fans have an exhaust cover to minimise noise.

The first sample I got did not have any heat sinks on the 12 memory chips or the I/O chip on the graphics card. I volunteered, obtained 13 pieces of Zalman's good-quality GDDR memory heat sinks and, to avoid removing the whole Peltier assembly, spent over an hour with a pincer positioning and fixing these heat sinks on the memory chips - under the Peltier cooler! See the photo for the end result.

Now, I ran both cards on two different CPU configurations: one the Intel "Kentsfield" Core 2 QX6700 running at 3.20GHz with FSB1066, whose four cores proved useful in feeding the GPU for 3DMark06, for now at least (till some actual quad-core-optimised games come out), and the other the old Intel "Presler" Pentium XE 965, running at 4.27GHz with its two cores, again on FSB1066. These are the highest frequencies at which I could run these CPUs and reliably complete the CPU portion of 3DMark06 repeatedly without a hitch, even when using a Zalman 9700 high-end cooler.
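The switching behaviour Sparkle describes is a classic threshold controller with hysteresis. A minimal sketch of that logic, with trip temperatures invented by us since Sparkle doesn't publish them:

```python
# Sketch of the Calibre P880+'s described Peltier switching: engage the
# cold source above a trip temperature, disengage once the GPU idles back
# down. The threshold values are our own guesses, not Sparkle's figures.
TRIP_ON_C = 60    # assumed: engage the Peltier above this GPU temperature
TRIP_OFF_C = 45   # assumed: disengage once the GPU has cooled this far

def peltier_state(temp_c, currently_on):
    """Hysteresis: separate on/off thresholds stop the transducer
    chattering on and off around a single set point."""
    if temp_c >= TRIP_ON_C:
        return True
    if temp_c <= TRIP_OFF_C:
        return False
    return currently_on   # between the thresholds: hold the current state

state = False
for temp_c in (40, 55, 62, 50, 44):   # a load spike, then cooling down
    state = peltier_state(temp_c, state)
print(state)  # False: 62C switched it on, 44C switched it off again
```

The gap between the two thresholds is the point of the design: with a single set point, sensor noise near that temperature would toggle the transducer rapidly, which is exactly what the idle cut-off avoids.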
The CPUs used the same platform - an Intel D975XBX2 with 1GB of Corsair XMS-5400UL memory, enabling 3-2-2-7 low latency at up to around 670MHz.

The performance was about the same - after all, Nvidia doesn't allow overclocking right now (feel free to share your experiences in overcoming this overclocking barrier here) and the GPU and RAM parts are the same. However, there was a definite difference in the noise. Sparkle claims to have achieved an over 20 deg C heat benefit (a 60C working temperature instead of 80C) and 12 decibels less working noise compared to the reference card. I haven't measured precise sound or heat levels yet, but the card was substantially quieter and, even after several rounds of 3DMark06 with and without AA, anisotropic filtering and so on, the huge Peltier block stayed coolish to the touch.

I also looked at the power consumption during the 3DMark test runs, and there was only a few watts' difference between the reference card and the P880+. A few watts more for far less heat and noise, not to mention some 60 grams less card weight? I guess the deal is good, and it will be very good once overclocking is unlocked. Most importantly, neither card took more watts than an X1900XTX running at 700/1600 speed settings, for way higher performance. In all cases, the system consumption peaked at about 230 watts as measured.

In any case, these guys offer both card versions: the reference design under the Sparkle brand and the Peltier version under the Calibre brand. While the Sparkle version gets the Call Of Duty 2 game CD, the Calibre has the more 'unique' Painkiller game bundled along, as a full retail set.

Here are the initial 3DMark06 results on the Kentsfield and Presler. As you can see, this GPU really shows up the difference between the CPUs, even in plain 3D graphics.
If you want to spend upwards of US$700 on a card like this, you had better allocate a good sum for the right CPU (a quad-core QX6700 or QuadFather isn't a bad choice) and, of course, the right cooling and the right mainboard. Since these GPUs exert strong pressure on the CPU, the PCIe subsystem and memory bandwidth, I'd never run a dual GF8800GTX SLI configuration on a dual PCIe x8 setup like, say, the Intel 975X chipset. It simply has to be the new Nforce 680i (or, if you can make it work, one of those rare ATI RD600 chipset based boards).

The Sparkle Calibre P880+ is worth checking out for any high-end "enthusiast" keen on the G80. I like the idea of a Peltier cooler on such a high-end card, although I have yet to see how it performs in long-term operation. Also, trying it in SLI mode with a heavily overclocked, water-cooled QX6700 on the Nforce 680i, with a much faster FSB and memory system to feed the twin G80s, would be tempting. But well, that is for next week. µ
One from guru3dhttp://www.guru3d.com/article/Videocards/391/