Apple Mac Pro fastest Windows XP PC ever
Fanbois contemplate their belly buttons
By Nick Farrell: Thursday 05 October 2006, 08:21

PC PRO magazine has decided that the Apple Mac Pro is the fastest Windows XP PC in the UK. The magazine compared a quad-core Apple Mac Pro Macinteltosh with other PCs in the same price range, and it broke all records.

The first thing the reviewers did was ignore the pre-installed copy of Mac OS X and run Windows XP Professional on the machine using Apple's Boot Camp beta. Once the machine was free of its Unix-based software, the beast soared. It managed record-breaking speeds in PC Pro's multiple-applications test, running Microsoft Office, Photoshop and a music decoder at high speed simultaneously.

The reviewers said that the results spoke volumes for the ability of Intel's new 3GHz Xeon 5160 processors more than anything else. Not something the Apple faithful want to hear much. If you have just under £5,000 to spend on a good Windows XP machine, it looks like you should buy an Apple and ditch the operating system.

More here. µ
Also, I don't know about the rest of you, but Apple's mobo, case, layout, noise levels, design and marketing are better than the rest.
Thinking about that carefully: the Mac Pro takes 16GB of memory and has 2 × dual-core 2.0, 2.66 or 3.0GHz 64-bit processors, and I don't know of any commercial motherboards that can exceed 8GB of memory. So I would actually say that Macs own PCs at this moment.
The beauty of Intel's FB-DIMM architecture: Part Three, Conclusion
By Charlie Demerjian: Wednesday 07 April 2004, 07:12

See also There's magic in the Intel FB-DIMM buffer, and part one, Intel FB-DIMMs to offer real memory breakthroughs.

PART OF THE BEAUTY of the FB-DIMM architecture is that the part that the memory controller talks to stays constant. If you go from DDR2 to DDR3, all you need to change is half of the buffer; the rest of the architecture does not change. The presentation I saw had FB-DIMMs working at speeds from DDR2-533 to DDR3-1600.

This all means that when DDR2 is considered passé, you can theoretically plug in a DDR3 module, and as long as the buffer supports the old signaling, it should work. The memory controller should not know or care. In practice, I doubt it will be this simple or straightforward, but it could happen. Either way, it will make transitions to completely new memory architectures vastly easier and probably quicker. It effectively decouples the logic of the memory controller from the memory architecture without adding much delay. This is a very good thing. Let me repeat: a very good thing.

If all this isn't enough for you, remember we mentioned reliability earlier? FB-DIMM is primarily a server architecture, and people in the server world actually do care about such things. You may think the occasional crash is tolerable; heck, you may be acclimated to it if you use Windows. But server people tend to get, shall we say, pissy when servers crash. Users get annoyed as well, and bosses get more annoyed when they see how much the downtime is costing them. All in all, servers should not crash, and anything that promises to make them crash less is generally regarded as a very good thing.

FB-DIMMs had vastly increased reliability built in from the start. The goal was to have one silent data error per 100 years or less, and they claim to have achieved that. If you are wondering what a silent error is, it is one that is not caught and goes on to do bad things. With ECC, single-bit errors can be noticed and corrected. Multi-bit errors can still cause headaches, but those are much less common. There are current schemes to work around this, and if you care about them, read this article, parts one and two.

Any non-silent error can have one of two outcomes: it will either get flagged and corrected by the system, or flagged and not corrected. The latter usually leads to the system taking corrective measures, from halting a process to notifying the operator that something is wrong.

Silent errors are the ones that do not get flagged and pass through undetected as 'good' data. For a game, that is not much of a problem: you get a misplaced polygon on a frame or two. For a credit card transaction, you have a much worse problem; an extra zero added to your Visa bill is not a good thing. Having the probe in a telemedicine session move left instead of right is worse. A wrong answer to the question "Can I get confirmation to fire the missiles, sir?" is a much worse scenario. The moral is that you don't want silent errors.

How does the FB-DIMM achieve the less-than-once-in-100-years metric? A robust CRC scheme protecting the commands and the data is the start of the protection. This is vastly more reliable than any of the schemes in common use today, and while it is a good start, things do not end there.
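To make that concrete, here is a minimal sketch in C of what link-level CRC protection buys you. The polynomial (CRC-16-CCITT) and the frame layout below are invented for illustration; this is not the actual FB-DIMM CRC scheme, just the general idea of flagging a corrupted frame instead of letting it pass silently.

    /*
     * Minimal sketch of CRC-protecting a memory channel frame. The sender
     * appends a checksum over the command and data; the receiver recomputes
     * it, so a corrupted frame gets flagged instead of slipping through.
     * Polynomial and frame layout are illustrative, not FB-DIMM's real ones.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t crc16(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0xFFFF;                     /* CRC-16-CCITT          */
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    int main(void)
    {
        /* Invented frame: one command byte, four address bytes, data.     */
        uint8_t frame[13] = { 0x05, 0xDE, 0xAD, 0xBE, 0xEF,
                              'p', 'a', 'y', 'l', 'o', 'a', 'd', 0 };

        uint16_t sent = crc16(frame, sizeof frame); /* sender seals frame  */
        frame[7] ^= 0x01;                           /* one bit flips in flight */

        if (crc16(frame, sizeof frame) != sent)
            puts("CRC mismatch: frame flagged, retry requested");
        else
            puts("frame accepted");  /* this path would be a silent error  */
        return 0;
    }

The point of the toy: flip a single bit in flight and the receiver's recomputed CRC no longer matches, so the error is loud rather than silent.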
The next major advance builds on the concept of chip-kill. Most modern server boards have the ability to shut down a known bad memory chip on the fly and somewhat correct for it. The AMD Opteron controller has this built in, and any modern AMD server board will let you turn it on.

FB adds to this with what it calls "Bit Lane Fail Over Correction", or the ability to take a data path that is known bad out of service on the fly. I prefer the term wire-kill, it sounds so much cooler, but you have to allow the designers to pick their own marketspeak. Either way, it brings the next level of protection to the memory subsystem. A chip, DIMM or channel going down will no longer mean a crash, not even a decrease in bandwidth, but probably just a loud, annoying beep from an internal speaker.

How do they do this? When a bit lane fails for whatever reason, it is mapped out; that much is obvious. The controller then adjusts the CRC scheme to use less bandwidth, basically prioritizing the data itself. This leaves you with a little less protection until the fault can be fixed, but does not slow things down. Mapping out parts of bit lanes was also considered, but the added complexity, and therefore cost, was not deemed worth it. If read pair #5 between DIMMs 2 and 3 goes bad, you lose all of pair #5 to all DIMMs. If the server administrator is not completely asleep at the wheel, this will be fixed sooner rather than later, so the tradeoff is probably a good one.

The buffer itself adds a level of intelligence to the DIMM not found in previous architectures. It contains error registers that can theoretically allow decisions to be made on more than the current bits flying through the system at that point in time. This lets designers do a lot of tricks on the subsystem as a whole that were previously impossible.

Since most server companies use different levels of error protection to differentiate between product lines, mandating a single scheme would give the marketers fits. Worse yet, it would not allow them to differentiate their products from the obviously inferior competition. So FB-DIMMs do not implement any error correction scheme of their own beyond making sure the data gets from point A to point B intact.

If a server company wants to use the building blocks FB brings to the table in new and innovative ways, more power to them. If you have a new ECC algorithm that is better than your rival's, by all means put it in. If it costs ten times what the older scheme does, put it in the expensive boxes. FB won't stand in your way here.

One trick I thought up was to let the DIMM itself keep track of an error history; there is an onboard EEPROM that can store data. You could know the error history of a DIMM as soon as you plug it in, and have the hardware keep track of that history in an OS-independent fashion that survives a reboot. A good example of this is modern tape cartridges, which have a built-in EEPROM to track everything from reads and writes to contents and serial numbers. I think we will see some very creative uses for this in the future.
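Since no spec defines such a record, here is a purely hypothetical sketch of what a per-DIMM error history kept in the module's EEPROM might look like. Every field name, size and event code below is invented; it is only meant to show the shape of the idea.

    /*
     * Purely hypothetical layout for an error history stored in a DIMM's
     * onboard EEPROM, as mused about above. All field names, sizes and
     * event codes are invented; no FB-DIMM spec defines this record.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define HISTORY_SLOTS 32        /* invented: ring of the last 32 events */

    struct dimm_error_event {
        uint32_t uptime_hours;      /* when the event happened              */
        uint8_t  kind;              /* invented codes: 0 = corrected ECC,
                                       1 = CRC retry, 2 = lane mapped out   */
        uint8_t  lane_or_chip;      /* which lane or chip was involved      */
        uint16_t count;             /* occurrences folded into this record  */
    };

    /* Lives in EEPROM, so it survives reboots, OS reinstalls and even a
     * move to a different machine, which is the whole point.               */
    struct dimm_error_history {
        uint32_t magic;             /* identifies the record format         */
        uint32_t total_corrected;   /* lifetime totals for quick triage     */
        uint32_t total_lane_failovers;
        uint8_t  next_slot;         /* ring buffer write index              */
        struct dimm_error_event log[HISTORY_SLOTS];
    };

    int main(void)
    {
        /* At boot, firmware could read the record and warn about a stick
         * that has been quietly correcting errors for months.              */
        struct dimm_error_history h = { .magic = 0x46424431u, /* "FBD1"     */
                                        .total_corrected = 1042 };
        if (h.total_corrected > 1000)
            printf("DIMM has %u lifetime corrected errors: replace soon\n",
                   (unsigned)h.total_corrected);
        return 0;
    }

The design point is that the history travels with the stick, not with the OS, so a flaky module can't launder its past by being moved to another box.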
This all brings us back to the old problem of cost. As we said earlier, FB uses plain old DDR/2/3 chips that are common as dirt. The buffer should not add that much to the total, and the cheaper boards will probably more than make up for the cost by reducing the layer count. That is the physical part.

The more problematic part is volume and licensing. The volume part is not all that bad; the main cost, the RAM chips themselves, are commodity items. At IDF, Intel said there were multiple buffer vendors and FB-DIMM suppliers, and most of the big names in the field are currently on board. While FB-DIMMs may be expensive at first, every new tech always is, and prices should drop dramatically in short order.

Standards committees will hopefully take care of the licensing part, always a barrier to the adoption of any new technology. There are currently seven JEDEC working groups trying to set an industry standard around the FB-DIMM. A quick look here shows that some of the committees are active, and include participants not limited to Intel. This also falls into the "very good thing" category: it will lead to vastly quicker and more widespread adoption, along with greater interoperability.

So, what it all comes down to is: did Intel achieve its goals with the FB-DIMM design? On the capacity and reliability fronts, this is a clear yes. Improvements of an order of magnitude or more are all over the place, and you have to look hard for a downside. These two things most likely cement the future of FB-DIMMs in the server world.

Cost is a less clear win. It will probably end up being a bit more expensive than a similar DIMM; the cost of the buffer itself pretty much assures this. Whether the simplification of the board design outweighs the DIMM cost is still up in the air. If I had to guess, I would say that in a couple of years the answer will be yes. For the first few months of FB-DIMMs' existence, I think they will not be cheaper than the prevailing solution. The upside is that if you need it, any premium will be a small price to pay, quite literally.

Last is performance. That one is the trickiest, and it is still an open question. RAM performance measurement is like herding cats: there are so many different things to measure, and so many ways to measure them, that just thinking about it makes your head hurt. There is no single correct answer here, nor is it a single question.

You can measure bandwidth and latency for starters, and they are not necessarily related. You can have bandwidth-sensitive apps and latency-sensitive apps, or both. To make matters more complex, the numbers may change with added DIMMs; RDRAM was famous for adding latency the more RIMMs you added.

The performance numbers all come down to the app you are running. Servers don't run Quake III all that often, and that is a notoriously latency-sensitive app. They tend to run things that are more bandwidth-sensitive than games, but again, this is not an exclusive statement.

So the clear-cut answer here is: it depends. Bandwidth-dependent apps are a clear win; latency-sensitive programs are a possible win, possible loss. If the "tricks" of the architecture work for your app, then it is a win. If they do little for you, then it could be a loss. Until the first chips using FB-DIMMs come out early next year, it could go either way. Even then, it could very well go both ways. Don't you just love technology?
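To see why bandwidth and latency are separate questions, here is a minimal C sketch that stresses each in turn; the working-set size and stride are arbitrary choices for illustration. The streaming loop is limited mostly by bandwidth, while the pointer chase is limited by latency, because each load must complete before the next address is known.

    /*
     * Toy demonstration that "memory speed" is two different numbers.
     * Loop 1 streams through a buffer (bandwidth-bound); loop 2 chases a
     * chain of dependent pointers (latency-bound). Sizes are arbitrary.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64 * 1024 * 1024 / sizeof(size_t))  /* 64 MB working set */

    int main(void)
    {
        size_t *buf = malloc(N * sizeof *buf);
        if (!buf) return 1;

        /* Build a full pointer-chase cycle: 4099 is odd, so it is
         * co-prime with the power-of-two N, and the big stride defeats
         * the caches' spatial locality.                                 */
        const size_t stride = 4099;
        for (size_t i = 0; i < N; i++)
            buf[i] = (i + stride) % N;

        clock_t t0 = clock();
        size_t sum = 0;
        for (size_t i = 0; i < N; i++)      /* streaming: bandwidth-bound */
            sum += buf[i];
        clock_t t1 = clock();

        size_t p = 0;
        for (size_t i = 0; i < N; i++)      /* dependent loads: latency-bound */
            p = buf[p];
        clock_t t2 = clock();

        printf("streaming sum: %.2fs (checksum %zu)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC, sum);
        printf("pointer chase: %.2fs (ended at %zu)\n",
               (double)(t2 - t1) / CLOCKS_PER_SEC, p);
        free(buf);
        return 0;
    }

On real hardware the two timings can move independently of each other, which is exactly why a single "memory speed" figure, and a single FB-DIMM verdict, tells you so little.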