G80, Geforce 8800 GTX is very CPU dependent
Scores as 7950 GX2 on a non-overclocked CPU
By Fuad Abazovic: Tuesday 31 October 2006, 08:51

OUR PALS with a nice toy named G80, the Geforce 8800 GTX, informed us that unless you overclock your CPU, you won't get the right performance delta.

If you test 3Dmark06 with a Core 2 Quad or Duo overclocked by 1000MHz, you get 11,300 marks. But test the same card in the same benchmark on the same CPU at its stock 2.66GHz and the situation changes completely. You score about 8,000+ with a single card, almost identical to our score with the 7950 GX2, the two-GPU card. Shocking, isn't it, as you'd expect more? To be fair to Nvidia's latest and greatest card, the Geforce 7800 GTX scores some 5,200+ in the same test, so the Geforce 8800 GTX is significantly faster than that single-GPU card, but not than the dual-chip one. We are sure that this is just the case for 3Dmark and that games will benefit more.

Don't say that we haven't warned you - you will need a faster CPU to push this card to its limits. A chap called Victor, mentioned in the overclocking story, actually bought a G80 card. He is not under NDA, which makes it even worse for Nvidia. µ
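A quick sanity check on those scores. Assuming "1000MHz overclocked" means the 2.66GHz chip pushed to roughly 3.66GHz (our reading, not stated in the article), the 3Dmark06 score scales almost one-for-one with CPU clock - the signature of a CPU-bound test:

```python
# Back-of-the-envelope check of the CPU-bound claim above.
# Assumption (ours): "overclocked by 1000MHz" = 2.66GHz -> ~3.66GHz.
stock_clock_ghz = 2.66
oc_clock_ghz = 3.66
stock_score = 8000      # 3Dmark06 on the stock CPU ("about 8000+")
oc_score = 11300        # 3Dmark06 on the overclocked CPU

clock_ratio = oc_clock_ghz / stock_clock_ghz
score_ratio = oc_score / stock_score

print(f"CPU clock ratio: {clock_ratio:.2f}x")   # 1.38x
print(f"3Dmark06 ratio:  {score_ratio:.2f}x")   # 1.41x
# The score moves almost in lockstep with CPU clock, which is exactly
# what you expect when the GPU is waiting on the CPU rather than vice versa.
```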
dey go release a driver an fix dat
its nvidia, dont worry
Nvidia's G80 innards exposed
GigaThread, Lumenex, True HDR, Quantum Effects - we disclose all!
By Theo Valich: Thursday 02 November 2006, 09:44

THIS ARTICLE REVEALS all of the important information regarding the GeForce 8800 series, which is set to be released to the world on November 8th, 2006 in San Jose. We have learned that during the traditional Editor's Day in San Francisco Nvidia kept to its rules, so "no porn surfing" and "no leaks to the Inquirer" banners were shown. But we have no hard feelings about that. It is up to the companies to either respect the millions of our readers, including employees of Nvidia, or... not.

As you already know, Adrianne Curry - Playboy bunny, America's Next Top Model star and actress from My Fair Brady - is the demo chick for the G80. After we posted that story, we received a growl from Graphzilla, but we are here to serve you, our dear readers. However, that was just a story about a person who posed for the G80. Now it's time to reveal the hardware. Everything you want to know, and don't want to wait until November 8th for, lies in this article. Get your popcorn ready; this will be a messy ride.

For starters, the 8800 launch is a hard one, so expect partners to have boards in store for the big day's press conference at 11AM on the 8th. The board delivery will go in several waves, with the first two separated by days. The boards were designed by ASUSTeK, a departure from the usual suspects at Micro-Star International. This is also the first ever black graphics card from Nvidia. Bear in mind that every 8800GTX and 8800GTS is manufactured by ASUS. AIBs (add-in board vendors) can only change the cooling, while no overclocking is allowed on first-gen products. Expect a very limited allocation of these boards, with the UK alone getting a mere 200 boards.

The numbers

G80 is a 681 million transistor chip manufactured by TSMC. Since Graphzilla opted for the traditional approach, it eats up around 140 Watts of power.
The rest gets eaten by Nvidia's I/O chip, video memory and the losses in power conversion on the PCB itself.

-> GG, so the GPU ALONE consumes 140W... look for a PSU upgrade if you want this. Forget SLI, we looking at 1KW PSUs for dat.

If you remember the previous marchitecture, the G70 GPU embedded in the 7800GTX 256MB, you will probably remember that the Pixel and Vertex Shader units worked at different clock speeds. G80 takes it one step further, with a massive increase in the clocks of the Shader units.

GigaThread is the name of the G80 marchitecture, which supports thousands of executing threads - similar to ATI's RingBus - keeping all of the Shader units well fed. G80 comes with 128 scalar Shader units, which Nvidia calls Stream Processors.

The reason Nvidia went with the SP description is a DirectX 10 function called Stream Output: those Shader units will now work on Pixel, Vertex, Geometry and Physics instructions, but not all at the same time. The function, in short, enables data from vertex or geometry shaders to be sent to memory and forwarded back to the top of the GPU pipeline in order to be processed again. This enables developers to put more shiny lighting calculations, physics calculations, or just more complex geometry processing in the engine. Read: more stuff for fewer transistors. In order to enable that, Nvidia pulled a CPU approach and stuffed L1 and L2 caches across the chip. You might also like to know that both Geometry and Vertex Shader programs support Vertex Texturing.

And when it comes to texturing itself, G80 features 64 Texture Filtering Units, which can feed the rest of the GPU with 64 pixels in a single clock. For comparison, the GF7800GTX could manage only 24. Depending on the method of texture sampling and filtering used, G80 ranges from 18.4 to 36.8 billion texels in a single second. Pixel-wise, the G80 churns out 36.8 billion finished pixels in a single second.

When it comes to RingBus vs.
GigaThread, DAAMIT's X1900 can branch at a granularity of 48 pixels, and the X1800 can do 16. The GeForce 8800GTX can do 32-pixel threads in some cases, but mostly the chip will be able to do 16, so you can expect Nvidia to lose out on the GPGPU front (for instance, in Folding@Home stuff). However, Nvidia claims 100% efficiency, and we know for sure that ATI is mostly running in the high 60s to high 70s in percentage terms.

How many pixels can G80 push?

One of the things traditionally used to describe the pixel pipeline is the number of pixels a chip can render in a single clock. With programmable units, the traditional pipeline died out, but many hacks out there are still using this inaccurate description. To cut a long story short, on the pixel-rendering side, G80 can render the same number of pixels as the G70 (7800) and G71 (7900) chips.

The G80 chip in its full configuration comes with six Raster Operation Partitions (ROPs), and each can render four pixels. So the 8800GTX can churn out 24, and the 8800GTS can push 20 pixels per clock. However, these are complete pixels. If you use only Z-processing, you can expect a massive 192 pixels if one sample per pixel is used. If 4x FSAA is being used, this number drops to 48 pixels per clock.

For game developers, the important information is that eight MRTs (Multiple Render Targets) can be utilised, the ROPs support Frame Buffer blending of FP16 and FP32 render targets, and every type of Frame Buffer surface can be used with FSAA and HDR. If you are not a game developer, that sentence means Nvidia now supports FP32 blending, which it did not in the past, and the FSAA/HDR combination will be supported by default. In fact, 16xAA and 128-bit HDR are supported at the same time.

Lumenex Engine - New FSAA and HDR explained

ROPs are also in charge of AntiAliasing, which has remained very similar to the GeForce 7 series, albeit with quality adjustments.
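Before we get to the AA modes, the throughput claims above are easy to cross-check with a little arithmetic. A minimal sketch (the 575MHz core clock is the 8800GTX figure quoted later in the article; reading the 18.4 billion figure as halved per-clock throughput in expensive filtering modes is our assumption, not Nvidia's wording):

```python
# Sanity-checking the quoted G80 throughput figures.
core_clock_hz = 575e6   # 8800GTX core clock from the spec section

# 64 texture filtering units, one texel each per clock at best;
# assume the slow case halves per-clock throughput (our reading).
tfus = 64
texels_fast = tfus * core_clock_hz           # 36.8 billion/s
texels_slow = (tfus // 2) * core_clock_hz    # 18.4 billion/s

# Six ROP partitions, four full pixels each (GTX); the GTS drops one partition.
pixels_per_clock_gtx = 6 * 4                 # 24
pixels_per_clock_gts = 5 * 4                 # 20

# Z-only rendering: 192 samples per clock; 4x FSAA needs four Z samples
# per pixel, so per-pixel throughput drops by a factor of four.
z_only_per_clock = 192
z_with_4xaa = z_only_per_clock // 4          # 48

print(texels_fast / 1e9, texels_slow / 1e9)  # 36.8 18.4
print(pixels_per_clock_gtx, z_with_4xaa)     # 24 48
```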
The G80 chip supports multi-sampling (MSAA), supersampling (SSAA) and transparency adaptive anti-aliasing (TAA). The four new single-GPU modes are 8x, 8xQ, 16x and 16xQ. Of course, you can't expect to have enough horsepower to run the latest games with 16xQ enabled on a single 8800GTX, right?

Wrong. In certain games you can buy today, you can enjoy full 16xQ with the performance of regular 4xAA. The reason is exactly the difference between those 192 and 48 pixels in a single clock. But in games which aren't able to utilise the 16x and 16xQ optimisations, you're far better off with lower AntiAliasing settings.

Nvidia now calls this mode "Application Enhanced", joining the two old scoundrels "Application Override" and "Application Controlled". Only "App Enhanced" is new, and the idea is probably that the application talks with Nvidia's driver in order to decide which piece of a scene gets the AA treatment and which does not. Can you say... partial AA? Now, where did we hear that one before... ah, yes. EAA on the Rendition Vérité in the late 90s of the past century, and the Matrox Parhelia in the early 21st.

On the HDR (High Dynamic Range) side, Nvidia has designed the feature around the OpenEXR spec, offering 128-bit precision (32-bit FP per component: Red, Green, Blue, Alpha) instead of today's 64-bit version. Nvidia is calling its new feature True HDR, although you can bet your arse this won't be the last feature that vendors call "true". Can't wait for "True AA", "True AF" and so on...

Anisotropic filtering has been raised in quality to match ATI's X1K marchitecture, so now Nvidia offers angle-independent Aniso Filtering as well, thus killing the shimmering effect which was so annoying in numerous battles in Alterac Valley (World of WarCraft), Spywarefied (pardon, BattleField), Enemy Territory and many more. Compared to the smoothness of the GeForce 8 series, the GeForce 7 looks like it was in the stone age.
Expect interesting screenshots of D3D AF-Tester v1.1 in many of the GF8 reviews on the 8th. Oh yeah, you can use AA in conjunction with both high-quality AF and 128-bit HDR. The external I/O chip now offers a 10-bit DAC and supports over a billion colours, unlike the 16.7 million of previous GeForce marchitectures.

Quantum Effects

Since PhysX failed to take off in a spectacular manner, DAAMIT's Menage-a-Trois and Nvidia's SLI Physics used Havok to create simpler physics computation on their respective GPUs. Quantum Effects should take things to a more professional (usable) level, with hardware calculation of effects such as smoke, fire and explosions added to the mix of rigid body physics, particle effects, fluid, cloth and many more things that should make their way into the games of tomorrow.

GeForce 8800GTX

Developed under the codename P355, the 8800GTX is Nvidia's flagship implementation. It features a fully fledged G80 chip clocked at 575MHz. Inside the GPU there are 128 scalar Shader units clocked at 1.35GHz, and raw Shader power is around 520GFLOPS. So, if anyone starts to talk about teraflops on a single GPU, we can tell you that we're around a year away from that number becoming true. Before G90 and R700, such claims come from marketing alone.

-> MY GOD

768MB of Samsung memory is clocked at 900MHz DDR, or 1800 MegaTransfers (1.8GHz), yielding a commanding 86.4GB/s of memory bandwidth.

-> Double OMG

The PCB is a massive 10.3 inches, or 27 centimetres, and on top of the PCB there are a couple of new things. First of all, there are two power connectors, and secondly, the GTX features two new SLI MIO connectors. Their usage is "TBA" (To Be Announced), but we can tell you that this is not the only 8800 you will be seeing on the market. Connectors are two dual-link DVIs and one 7-pin HDTV out.
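Those headline figures fall out of simple arithmetic. A quick sketch, assuming each scalar SP retires a MAD plus a MUL (three flops) per clock - that is our reading of how "around 520GFLOPS" is derived, not an Nvidia statement; the 8800GTS inputs come from its spec further down:

```python
# Deriving the quoted shader-power and bandwidth numbers.
def shader_gflops(units, clock_ghz, flops_per_clock=3):
    # flops_per_clock=3 assumes MAD (2 flops) + MUL (1 flop) per SP per clock
    return units * clock_ghz * flops_per_clock

def bandwidth_gbs(bus_bits, megatransfers):
    # bytes per transfer (bus width / 8) times transfers per second
    return (bus_bits / 8) * megatransfers * 1e6 / 1e9

print(round(shader_gflops(128, 1.35), 1))   # 8800GTX: 518.4, the "around 520"
print(round(shader_gflops(96, 1.20), 1))    # 8800GTS: 345.6
print(bandwidth_gbs(384, 1800))             # 8800GTX: 86.4 GB/s
print(bandwidth_gbs(320, 1600))             # 8800GTS: 64.0 GB/s
```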
HDMI 1.3 support is here from day one, but we don't think you'll be seeing too much of the 8800GTX with an HDMI connection. Cooling is not water-based, but manufacturer-friendly aluminium with a copper heat pipe. The fan is expected to be silent as a grave, and several AIBs are planning a more powerful version for the 2nd-gen 8800GTX, expected to be overclocked to 600MHz for the GPU and 1GHz DDR for the memory.

The board's recommended price has changed a couple of times and now stands at 599 dollars/euros, or 399 pounds. However, due to an expected massive shortage, expect these prices to hit stratospheric levels.

GeForce 8800GTS

Codenamed P356, the 8800GTS is the smaller brother of the GTX. The G80 chip is the same as on the GTX, but the amount of wiring has been cut, so you get a 320-bit memory controller instead of 384-bit, 96 Shader units instead of 128 and 20 pixels per clock instead of 24.

The board itself is long and comes with a simpler layout than the GTX one. Dual-link DVI and 7-pin HDTV out come by default. "Only" one 6-pin PEG connector is used, and power-supply requirements are lighter on the wallet. The clocks have been set at 500MHz for the GPU and 1.2GHz for the Shader units, while the 640MB of memory has been clocked down to 800MHz DDR, or 1600 MegaTransfers (1.6GHz), yielding a bandwidth of 64GB/s. Both pixel and texel fill-rate fell by a significant margin, to 24 billion pixels and 16 to 32 billion texels.

Recommended price is 399 dollars/euros, but who are we kidding? Expect at least a 100 dollars/euros higher price.

Performance is CPU Bound

Yes, you've read that correctly. Both the GTS and GTX max out the CPUs of today, and even Kentsfield and the upcoming 4x4 will not have enough CPU to max out the graphics card - the G80 chip just eats up all the processing power that a CPU can provide.

-> I don't think a driver update will fix this, crixx. I think I will be holding off on my upgrade for 4x4; I can't see pairing this GPU or the R600 with my current dual-core offering.
I think these cards will really shine with 4x4 and higher-clocked Kentsfields, as mentioned below.

Having said this, expect fireworks with AMD's 4x4 platform once the true quad-core FX becomes available.

In the end

Nvidia has a really strong line-up for the upcoming Yuletide shopping madness. However, within the ranks of Graphzilla's troopers there is an obvious intent to bury all of the more advanced features that the competition will offer in a couple of months' time. A 512-bit memory interface, more pixels per clock, a second-gen RingBus marchitecture... all this is hidden in the dungeons of DAAMIT's R&D labs in Markham, Santa Clara and Marlboro.

Also, we have to say that the market is now set for a repeat of the 2005 R520/580 vs. G70/71 duel, since Nvidia will probably offer a spring refresh of the high-end model at the same time as DAAMIT launches the long-delayed R600 chip. µ
Geforce 8800 GTX is faster than Crossfire X1950XTX
Super fast in games
By Fuad Abazovic: Monday 06 November 2006, 09:11

BEFORE the reviews start popping up on Wednesday, we got the chance to bring you some game numbers for the G80.

In the OpenGL-based Quake 4, the Geforce 8800 GTX is faster than two ATI X1950 XTX cards in Crossfire. The Geforce 8800 GTX scores a few frames faster in every single resolution, with and without 4X FSAA and 8X Aniso.

In FEAR, Crossfire keeps its nose in front by seven to eight frames without the effects on, but at 1600x1200 or higher resolutions the 8800 GTX card is faster. In FEAR with 4X FSAA and 16X Aniso on, Crossfire wins in three of four resolutions, but it is never more than seven to eight frames faster.

We also know that you can play Battlefield 2142 at 1600x1200 with all effects on and have 100 FPS all the time. Nvidia really did a great job this time; it will be a worthy upgrade.

A single Nvidia G80, Geforce 8800 GTX card can beat a pair of ATI's fastest X1950 XTX cards! I think DAAMIT might have a problem until it releases its much-delayed R600 card next year. µ
To add a bit of humour, I'm posting this pic I came across while on Google. It's "confidential" info on teh GPU as of 3 months ago. NSFW!! You are warned!

http://img399.imageshack.us/img399/3839/8800gtx0ep.jpg
doh worry, its cheap a mere £5400 daz all... :p
Nvidia Geforce 8800 GTX tested in SLI
Premature emission 75 per cent faster in FEAR
By Fuad Abazovic: Tuesday 07 November 2006, 15:56

FEAR is one of our favourite benchmarks, and we thought it'd give a pair of G80, Geforce 8800 GTX cards in SLI a workout. We got a 75 per cent performance increase over a single Geforce 8800 GTX card. Yes, we are talking about the G80, Geforce 8800 GTX, times two.

Are you ready for some big numbers? 3Dmark03 scores 49,400. For comparison, a 7950 GX2 dual-GPU graphics card scores 29,800 in the same test, while a single X1950XTX scores 19,500. In 3Dmark06, the 8800 GTX SLI setup reaches almost 11,000, while the 7950GX2 achieves around 10,000.

In FEAR, the dual G80 cards can score up to 48 frames faster than a single card - a massive performance increase - and some other games will benefit too. Stay tuned for more numbers; we just wanted to give you a hint. Oh yes, a single Geforce 8800 GTX scores 9,800 in 3Dmark06, on the fastest AMD CPU to date. µ
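For what it's worth, if the "75 per cent" and "up to 48 frames" figures describe the same FEAR run (our assumption; the article doesn't say so explicitly), the absolute frame rates follow directly:

```python
# Implied FEAR frame rates from the two figures quoted above.
scaling = 0.75      # SLI uplift over a single 8800 GTX
delta_fps = 48      # frames gained by adding the second card

single_fps = delta_fps / scaling        # 64.0
sli_fps = single_fps * (1 + scaling)    # 112.0

print(single_fps, sli_fps)  # 64.0 112.0
```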