Advanced Micro Devices may be looking to buy graphics company ATI Technologies, a move that would benefit the overall graphics industry, according to RBC Capital Markets.

"The synergies of this seem consistent with the recent announcements by AMD to significantly increase capacity over the next few years," wrote analyst Apjit Walia in a note to investors Wednesday. "We believe ATI is a rare buy in the semiconductor space right now given the near-term tie-up dynamics."

Walia based his prediction on recent checks in the PC food chain. RBC has an "outperform" rating and a $23 price target on ATI, and no rating on AMD. The firm expects ATI to report fiscal 2007 earnings per share of $1.06.

It has long been discussed that the graphics companies are likely to be bought by one of the microprocessor companies, according to Walia. However, for AMD rival Intel, a partnership with a graphics company may not be the best idea.
I am convinced AMD will buy ATI
Computex 2006: It makes sense
By Charlie Demerjian in Taipei: Tuesday 06 June 2006, 02:43

I AM NOW UTTERLY convinced that AMD will buy or merge with ATI, and I am nearly as convinced that it will happen as soon as Computex, which opens today. If someone came to me and said 'Guess what AMD just announced at their conference', I would not be shocked. On the surface it looks insane, but stretch your time horizon out to the timelines of chip design, and AMD would be sunk if it didn't buy ATI.

Several very smart analysts I have talked to seem to think it is madness for AMD to hook up with its Canadian brethren, mainly because it would antagonize AMD's closest partner, NVidia. They are right, and it would, but where does NV run, to the loving arms of Intel? Not a chance.

The whole reasoning behind this merger is nothing you would ever think of: GPU functionality on the CPU, the 'next next big thing'. x86 is about to take the biggest left turn since the 286 -> 386 transition, and GPU-like functionality will be the key. I would be shocked if Intel is not doing this now; it actually does have the vision, though implementation seems to be the stumbling block of late. To keep up, AMD has to throw reams of non-existent engineers at it, hire a coherent GPU team with a track record of delivery, or buy that coherent team. Guess which one is possible?

To utterly make up numbers: if a GPU will accelerate 10% of your problems by 500x, and cost 2x the power of a non-GPU-like CPU, that can be considered a pretty compelling case. With GPUs getting minor revs every 6-9 months, and complete architectural overhauls every 12-18, these guys can dance around traditional CPU design teams and have the phrase 'time to market' tattooed on each and every individual neuron.
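That made-up case is just Amdahl's law; a minimal sketch with the article's hypothetical numbers (these are illustrations, not measurements):

```python
# Amdahl's-law sketch of the made-up case above: a GPU that
# accelerates 10% of the workload by 500x. All inputs are the
# article's hypotheticals, not benchmarks.

def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when only part of the work is accelerated."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

print(f"at 10% coverage: {amdahl_speedup(0.10, 500.0):.3f}x")  # ~1.111x

# Each GPU rev widens the accelerated slice; the same math at 50%:
print(f"at 50% coverage: {amdahl_speedup(0.50, 500.0):.2f}x")  # ~2.00x
```

The point of the sketch: the 500x factor barely matters next to the coverage fraction, which is why each rev bringing the case "to a larger percentage of the market" is the real story.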
Each rev brings that compelling case to a larger percentage of the market, and software advances bring it closer from the other side. Additionally, with each new GPU, the shader pipeline acquires more and more CPU functionality. What is the difference between a VS/PS pipeline and an x86 CPU, other than the ISA? How about this fall, when ATI puts out its second-gen converged graphics part? And how about Christmas '07, when the third gen comes out? This is what Intel and ATI see.

Intel people, unofficially, behind locked and bolted doors, have made no bones at all to me about who the real enemy is for them: not AMD but NVidia. If they are not working on the convergence of these two technologies, I may have to call for Intel heads again. AMD is taking the shorter but more expensive route. Because Intel is using its own internal tech, I don't expect an ATI/AMD deal to drive a NVidia/Intel tie-up.

Either way, the generation of chips around 2010 will be radically different from today's cores. The current chips are showing diminishing returns for the time and effort, and a radical change is long overdue. If AMD does not buy ATI, it will need a miracle to beat Intel in five years. ATI will shortcut this enough to let AMD possibly beat Intel to market too.

Crazy as it seems on the surface, I honestly don't see how AMD can fail to pick up ATI and survive. This is long-term planning, mind you, not short term. AMD has a very good sense of this: look at how the K8 chips nailed the market, and how other things arrive in the nick of time. Yeah, they are good, and I have the sense that Intel is good as well, this time.

The only thing that could possibly sink this is an all-out revolt by NVidia, VIA, SiS and Broadcom. These guys are famous for working together [cough], so that is not much of a downside. AMD learned from the SiS near-miss last year, and is clamping down hard on leaks this time.
Today may be one of the biggest watershed days in x86 history, or it may be just another hot, humid day in old Taipei. One thing is for sure: Computex is never dull. Two things are for sure: I believe this will happen. µ
Today AMD unveiled what it calls the evolution of enterprise-level computing: Torrenza. The new platform, says AMD, will utilize next-generation multi-core 64-bit processors that have the capability to work alongside specialized co-processors.
yall remember cyrix? anyhow, they had a chip called the MediaGX: a CPU that did GPU and audio work in addition to its regular duties (the GPU-on-CPU idea). long story short, after numerous company sales, the MediaGX, renamed Geode, ended up at AMD a couple years ago.

i.e. AMD already has the technology, patents and know-how to pull it off on a larger scale if they wanted to, but i think the market they're aiming it at (embedded) is enough. this is old hat, and as someone quite rightly said earlier, the performance hit may be too much for today's desktop users. perhaps their purchase of Bitboys is to strengthen that offering, wintry? cuz i read they're gettin duss out by VIA and Intel in that arena, as is...
Celeron M 'Shelton' performance analysed
Mendocino heaven or Covington hell?
By Chip Mulligan: Monday 05 June 2006, 13:59

CHIPZILLA HAS been rather secretive about the Shelton processor since its introduction in early 2005. This Celeron M, based on the 130nm Banias core but with no L2 cache, still receives no mention on its maker's developer pages. Oh, the ignominy!

Originally reported as a low-cost challenger to Sempr0ns and VIA C3s in Asian markets, Shelton then shifted to target embedded designs. The first boards that escaped into the wild used the 845GV chipset and featured a chip clocked at 1GHz in its pre-release configuration. In release form, it was sold as a bundle of a 600MHz, zero-L2-cache, 400MHz-FSB, BGA-mounted processor, an 852GM north bridge and an ICH4-M south bridge. This was priced competitively against the 1GHz VIA Eden and Geode NX.

The question on our inquisitive minds was how much the lack of cache would hurt this chip. Certainly a 600MHz Dothan would, on paper, have given a good shoeing to a C3, and maybe given the K7 Geode a run for its money, but how about with that cache missing?

Readers with long memories will remember the last time Intel produced an x86 CPU without Level 2 cache: the infamous Covington Celeron, launched in 1998. An experiment in cost reduction, Covingtons were thoroughly outperformed by, well, just about everything. This sullied the word 'Celeron' for years, forcing Intel into the early launch of the ambitious Mendocino core, with its L2 cache integrated onto the processor die: par for the course now, but a big leap in 1998.

So what made Intel think that things would be any different now? After all, the Banias and Shelton are very much members of the same P6 family as Covington. Well, one thing would be the massive increase in bandwidth available to the processor from memory, and reduced latencies.
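That bandwidth gap can be roughed out from peak transfer rates; a quick sketch assuming a 64-bit memory data path on both platforms and a 300MHz Covington as the baseline (the ratios, not the absolute numbers, are the point):

```python
# Rough peak-bandwidth comparison, assuming a 64-bit (8-byte) memory
# data path for both platforms. Rates in MT/s, results in MB/s.

BYTES_PER_TRANSFER = 8

covington_pc66 = 66 * BYTES_PER_TRANSFER    # PC66 SDRAM: ~528 MB/s
shelton_ddr266 = 266 * BYTES_PER_TRANSFER   # DDR266: ~2128 MB/s

bandwidth_ratio = shelton_ddr266 / covington_pc66
clock_ratio = 600 / 300  # 600MHz Shelton vs. a 300MHz Covington (assumed baseline)

print(f"bandwidth ratio: {bandwidth_ratio:.1f}x")  # ~4.0x
print(f"clock ratio:     {clock_ratio:.1f}x")      # 2.0x
```

So bandwidth per clock roughly doubles versus the old cacheless Celeron, which is the crux of Intel's bet here.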
The new bus, even with DDR266 memory (all that the 852 chipset will allow), offers 4x the bandwidth of old Covington, whereas the clock speed has only doubled. Coupled with a much stronger hardware prefetch unit, the theoretical disadvantage should, on paper at least, have diminished. Enough pondering: let's get to the numbers!

First, the competition. We tested the Shelton, or as it is otherwise known, the Celeron M ULV 600MHz (zero cache), against an AMD Geode NX 1GHz sat in a SiS741CX chipset with DDR333; against the VIA Eden "Nehemiah" 1GHz with the CN400 and DDR400; and threw in a Dothan down-clocked to 600MHz on an 855GME board for good measure. All chips (apart from the Pentium M) feature similar TDPs, are fanless and cost about the same. Sadly, time constraints stopped us from performing all tests on all the boards, but here's what we came up with.

3DMark 2001SE, internal graphics:
VIA Eden: 765
Geode NX: 1056
Shelton: 1123
Under-clocked Pentium M: 1831

First strike to Shelton: an impressive show, beating out the AMD and VIA options. The Intel graphics system proves to have the edge over the SiS. Adding 2MB of cache and moving to the 855GME adds 63% to the score of the Shelton. The VIA is in last place.

To isolate the effect of the internal graphics, we put in an ATI Radeon 9000 card, and the results shifted around quite a bit:

3DMark 2001SE, Mobility Radeon 9000 64MB:
VIA Eden: 3737
Shelton: 4344
Geode NX: 5364
Under-clocked Pentium M: 5772

Even with the internal graphics removed from the equation, the Shelton retains a 16% lead over the VIA Eden clocked 400MHz higher, but now the extra horsepower of the K7-based Geode pulls it well clear with a 24% lead. The added cache of the Dothan still gives it a 33% boost over its baby brother.
Sciencemark 2.0 (32-bit):

               MolDyn   Primordia  Crypto   Stream   Memory   BLAS
Shelton        109.78   162.78     233.14   303.83   489.17   162.5
Geode NX       321.55   324.46     436.06   359.26   359.26   456.9
AMD advantage  +193%    +99%       +87%     +18%     -27%     +181%

From this we can see that the Intel solution is efficient with its memory usage: even with DDR266, it shows a good advantage against the AMD with DDR333. Despite this, the AMD is miles ahead on the real tests. We didn't test the VIA board; however, the C3 has traditionally not scored well in Sciencemark.

Office Applications

One thing that interested us was how the original Celeron's biggest weak point was in traditional office-type applications. How would Shelton perform here against the C3, with its higher clock speed and 64KB of L2 cache? We dug out some older benchmarks for a modern-day re-run:

                         Eden 1GHz  Shelton 600  VIA advantage
Business Winstone 2002   13.4       12.1         +11%
Business Winstone 2004   7.7        6.9          +12%
OfficeBench 2001         37.7       56.2         +49%

Well, finally a result for VIA! It appears that the Shelton's old weakness is still there, and the C3 runs "standard" software better than the Intel solution. Again, we didn't have a chance to run these tests on the AMD, but based on prior benchmarks we would imagine it to have a strong lead over both solutions.

Video Playback

On video, theoretically, VIA's CN400 with its integrated MPEG2/4 acceleration should have an advantage over the 852. This was borne out by our rough tests: playing back a DVD, the C3 averaged only around 15% CPU usage, whereas the Shelton was at 35%. A similar situation existed with an MPEG4 file: 40% for the C3 and 60% for the Shelton. However, it's clear that neither of these chips will be ideal for your HDTV playback!

In Short

So there we have it: Shelton, a very mixed bag indeed. In some areas it really does resemble the old Covington Celeron: terrible performance in office and scientific applications.
However, the strong chipset and higher memory bandwidth have lifted it up to be a real contender in graphics performance. Overall, our feeling is that AMD has a good little chip in the NX, offering a very well-balanced mix of performance and power, and proving there's life in the K7 yet. Likewise, the C3 and the identically performing "Luke" CoreFusion processor still have their strengths over the Shelton at 600MHz.

We'd have loved to have had a go at overclocking this: given the missing L2 cache, the low clock speed, and the famously low power of the Pentium M, we suspect it would have had some mammoth headroom, but our board's BIOS was sadly locked down at the 400MHz bus.

On a final note, we've heard that Intel has just made an 800MHz version available which, while likely not enough to beat out the Geode NX, should match the C3/CoreFusion well in its weaker areas. µ
AMD to offer Transmeta's Efficeon
Is that what Samsung wants for its UMPC?
By Chip Mulligan: Tuesday 06 June 2006, 13:08

HOW THINGS CHANGE in a year! In 2005, Transmeta announced that it was moving away from being a "chip" company, would focus on flogging its IP, and would allow a Chinese company, Culturecom, to take over selling the Crusoe and Efficeon processors. Transmeta had struggled with the complexity of its design: essentially a simple, low-power VLIW engine running a crafty emulation layer, known as CMS, that allowed it to pretend, transparently, to be an x86 processor. With long delays, the realisation that the design was highly cache-dependent, and Chipzilla's catching on to the "low power is good" concept, the tiny chipmaker was always going to struggle.

Well, now they're back: as with so many things rumoured, the partnership with AMD has eventually come true, with AMD announcing that it is taking over the marketing and sales of what is now imaginatively known as the AMD Efficeon, still manufactured by Fujitsu.

While the announcement from Transmeta explicitly linked the new Efficeon design to AMD's 50x15 effort to sell lots of PCs in the developing world, we couldn't help but wonder if there was a link between this and the announcement that Samsung would be releasing UMPCs based on AMD chips. Let's think about it: a tiny portable device with even the lowest-powered Turion 64 and ATi chipset inside would take considerably more juice than the 900MHz Celeron M ULV (5.5W) driving it at present, and would be a physically large solution, which is not ideal! So if it's an AMD x86 chip driving this, we're really left with three options:

1) It's a Geode LX chip. Advantages: very cheap, small, with reasonable runtime power, but poor idle power and not exactly class-leading performance (think VIA C3 at around 700MHz).

2) It's the next generation of Geode architecture, the Dragonfly. This was originally set for a release in H1 2006, so it might still be on the cards.
Dragonfly's specs were fairly well-suited: a 90nm 1GHz core, a large L2 cache and on-chip PCI Express, coupled with low average power consumption.

3) It's an Efficeon, probably the TM8820. Available in a tiny package, just 2x2cm, with DDR, AGP and HyperTransport interfaces, running 1GHz at around 3W (including the north bridge) with excellent performance. And the key advantage: LongRun2, a combination of power-saving techniques from dynamic clock and voltage reduction to aggressive clock gating, with the pièce de résistance being an incredibly clever method of reducing the processor's leakage in the lower power states by biasing the substrate, a technique that has been licensed to Sony, NEC, Toshiba and Fujitsu. All of this results in a very low idle power state, which in turn should lead to battery life better than anything else in x86. ULi offered a tiny HT-connected south bridge, the M1563S, which completed the Efficeon as a very compact platform, one rumoured to be making its way into smartphones before it got the chop.

Our tests showed that a 90nm Efficeon at 1.6GHz (still barely warm to the touch) was churning out 3DMark scores similar to an Athlon XP 1800+: not much by modern standards, but still enough for a very usable Windows XP PC.

Could Efficeon be at the heart of the new Samsung UMPCs, then? We think it's a possibility, and it could help solve the battery life issues of the first models, which might well help the struggling platform. With AMD's support, this chip might get a second life yet. µ