Great review, that's what we all need to get Nvidia and ATI to stop bitching around and stealing our money with slow hardware that can't even outperform last generation's hardware. If you ask me, the 8800 Ultra should be the mid-range $150 class here, and the top end should be some graphics card with 320 stream processors, 1GB of GDDR4 clocked at 2.4GHz, and a 1000MHz core clock. Same from AMD: they need the HD 2900XT to be the mid-range $150 class, and the top of the line should be some graphics card with 640 stream processors, 1GB of GDDR4 at 2.4GHz, and a 1000MHz core clock!
More reviews of this kind please, so we can show ATI and Nvidia we won't buy their hardware if it's not good!
I really enjoyed this review. I have been agonizing over selecting an affordable graphics card that will give me the kind of value I enjoyed for years from my trusty and cheap GF5900xt (which runs Prey, Oblivion, and EQ2 at decent quality and frame rates) and I am just not seeing it.
I'm avoiding ATI until they bring their power use under control and generally get their act together. I'm avoiding nVidia because they're gouging the hell out of the market. And the previous generation nVidia hardware is still quite costly, because nVidia knows very well that they've not provided much of an upgrade with the 8xxx family unless you are willing to pay the high prices for the 8800 series (what possessed them to use a 128-bit bus on everything below the 8800? Did they WANT their hardware to be crippled?).
As a gamer who doesn't want to be a victim of the "latest and greatest" trends, I want affordable performance and quality and I don't really see that many viable options. I believe we have this half-baked DX10 and Vista introduction to thank for it - system requirements keep rocketing upwards unreasonably but the hardware economics do not seem to be keeping pace.
This article is ridiculous. Why would Nvidia and other DX10 developers want gamers to buy a G80 card for high DX10 performance? DX10 is all about optimization; the performance depends on how well it is implemented, not on blindly using APIs. Vista's driver model is different and DX10 is different. The present state of Nvidia's drivers is horrible; we can't even think about DX10 performance at this stage.
The DX10 version of Lost Planet runs horribly even though it is not graphically different from the DX9 version. So this isn't DX10's or the GPUs' fault, it's all about the code and the drivers. Also, the CEO of Crytek has confirmed that an Nvidia 8800 (possibly an 8800GTS) and an E6600 CPU can max out Crysis in DX10 mode.
Long ago when DX9 came out, I remember reading an article about how it sucked badly. So I'm definitely not gonna buy this one.
No, it's not about sucky code or sucky drivers. It's about shaders. Look at how much faster cards with more shader power are in d3d9. Now in d3d10 longer, richer, prettier shaders are used that take more power to process.
It's not about optimization this time, as the IHVs have already figured out how to write optimized drivers; it's about raw FLOPS for shader performance.
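For a rough sense of the raw shader FLOPS being talked about here, a back-of-the-envelope sketch; the ALU counts and shader clocks are the commonly quoted specs, and counting two flops per ALU per clock (a multiply-add) is a simplification, so treat the output as illustrative rather than authoritative:

```python
# Very rough theoretical shader throughput: units * shader clock * flops per clock.
# Only a multiply-add (2 flops) per ALU per clock is counted; real peak figures
# differ (NVIDIA's extra MUL, AMD's 5-wide VLIW packing), so this just shows why
# "more shader power" matters for d3d10-style workloads.
def gflops(alus, shader_clock_ghz, flops_per_clock=2):
    return alus * shader_clock_ghz * flops_per_clock

cards = {
    "GeForce 8800 GTX (128 SPs @ ~1.35 GHz)":  gflops(128, 1.35),
    "GeForce 8600 GTS (32 SPs @ ~1.45 GHz)":   gflops(32, 1.45),
    "Radeon HD 2900 XT (320 SPs @ ~0.74 GHz)": gflops(320, 0.742),
}
for name, g in cards.items():
    print(f"{name}: ~{g:.0f} GFLOPS (MAD only)")
```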
DX9 performance did (and does) "suck badly" on early DX9 hardware.
DX10 is a good thing, and pushing the limits of hardware is a good thing.
Yes, drivers and game code can be rocky right now, but the 162 series drivers from NVIDIA are quite stable and NV is confident in their performance. Lost Planet shows that NV's drivers are at least getting close to parity with DX9.
This isn't an article about DX10 not being good, it's an article about early DX10 hardware not being capable of delivering all that DX10 has to offer.
Which is as true now as it was about early DX9 hardware.
Wait, performance on the Radeon 9700 Pro sucked? I seem to remember games several years later that were DirectX 9 still being playable...
Yeah, the 9700 Pro sucks ... when actually running real-world DX9 code.
Try running BF2 at any playable setting (100% view distance, high shadows and lighting). This is really where games started using DX9 (to my knowledge, BF2 was actually the first game to require DX9 support to run).
But many other games still include the ability to run 1.x shaders rather than 2.0 ... Oblivion, for example, can turn the detail way down to the point where there aren't any DX9-heavy features running. But if you try to enable them on a 9700 Pro, it will not run well at all. I actually haven't tested Oblivion at the lowest quality, so I don't know if it can be playable on a 9700 Pro, but even if it is, it wouldn't be the same game (visually).
Fine... I just want to know why a DX10 game called Crysis was running at 2048x1536 with 60+ FPS on a GeForce 8800 GTX. crysis-online.com/?id=172
CoH also got 59.9fps on the 8800 Ultra. Today's d3d10 games can run at full frame rates on today's hardware. Just make sure you're using the latest drivers and you have the most expensive card money can buy ;)
I'm with Derek here. I liked your article. We need much more powerful hardware and time for DX10, Drivers, Developers & Vista to get it together. A couple of years from now, all should be good with DX10. Not any sooner methinks.
This article, I thought, was extremely poor for several reasons:
(1) DX10, in terms of developer support, is presently just about exactly where DX9 was when Microsoft first released it. At the time, people were swearing up and down that DX8.1 was "great" and wondering what all of the fuss about DX9 really meant. Then we saw the protracted "shader model wars," in which nVidia kept defending pre-SM2.0 modes while ATi's 9700 Pro pushed nVidia all the way back to the drawing board, as its SM2.0 support, specific to DX9, delivered both image quality and performance that it took nVidia a couple of years to catch up to.
AnandTech did indeed mention almost in passing that DX10 was still early yet, and that much would undoubtedly improve dramatically in the coming months, but I think that unfortunately AT created the impression that DX10 and DX9 were exactly *alike* except for the fact that DX10 framerates were about half as fast on average as DX9 framerates. A cardinal sin of omission, no doubt about it, because...
(2) DX10 is primarily if not exclusively about improvements in Image Quality. It is *not* about maintaining DX9-levels of IQ while outperforming DX9. It is about creating DX10 levels of Image Quality--period. AnandTech does not seem to understand this at all.
(3) The first lesson in Image Quality analysis that even newbies can readily understand is this: if the performance is not where you want it, but the IQ is where you want it, then you do the following to improve performance *without* sacrificing Image Quality (this is a lesson that AnandTech truly seems to have completely forgotten):
Instead of talking about how sorry the performance was in DX10 titles (those very few early attempts that AT looked at), AT should have explored what it could have done to increase performance while maintaining DX10 levels of Image Quality. That is, AT should have *lowered* test resolutions and raised the level of FSAA employed to get the best balance of image quality and performance. AnandTech did not even try to do this--which in my view is inexcusable and fairly unforgivable. It is a very bad mistake. I'm sorry--but many, many people, including me, do not use 1280x1024 *exclusively* while playing 3d games. My DX9 resolution of choice is 1152x864, for instance.
The point to be made about DX10 is *not* frame rates locked in at 1280x1024. Sorry AT--you really screwed the pooch on this one. The whole point of DX10 is *better image quality*, which everyone who ever graduated from the 3d school of hard knocks is *supposed* to know!
So, just what does a bunch of *bar charts* detailing absolutely nothing except frame rates tell us about DX10? Not much, if anything at all. Gee, it does tell us that, with the reduced image quality DX9 provides compared with DX10, DX9 runs faster in terms of frames per second on DX10 hardware! Gosh, who might ever have guessed... <sarcasm>
IMO, the fact that DX10 software even early on is running slower than DX9 on DX10-compliant hardware tells *me* nothing except that DX10 is demanding a lot more work out of the hardware than DX9, which means that we can expect the image quality of DX10 to be much better than DX9's. These early games that AT tested are merely the tip of the iceberg of what is to come. AnandTech really blew this one.
"DX10 is *not* frame rates locked in at 1280x1024"
Too bad you didn't say this earlier in your rant; I could have saved myself a few minutes and stopped reading sooner.
Thanks for the laugh though... imagine... someone who thinks 1280x1024 is a frame rate telling AnandTech they screwed the pooch when publishing this article. LOL
This article was awesome. Too bad reality doesn't match your expectations or you'd like it too.
1) You have no idea what the "shader war" was about. There isn't even a single parallel to be drawn if you look back *correctly*.
2) d3d10 is not about image quality. No hardware on the market, or rumoured to be coming out next generation, can handle running full-length SM2.0b shaders! And forget about 3.0! Or even the 2.0a that the GeForceFX supported, for that matter. d3d10 is about one thing: making d3d on the PC more like a console. That means lower CPU (not GPU!) overhead (more performance in CPU-limited situations) and making lives easier for developers.
3) AT has CoH at 1024x768, 1280x1024, 1600x1200, and 1920x1200 (common native LCD resolutions). These days everybody has bigger monitors and playing below 1024x768 is pretty rare. You can extrapolate your 1152x864 resolution from that.
What this shows us is that if you increase the image quality (see the screen shots), today's cards don't have enough power to run at full frame rate, with the exception of the 8800 Ultra. AT did a great job showing the world that with simple bar graphs and everything. It's too bad you were too angry to realize that is all it was trying to show.
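On the extrapolation point above, here is a rough sketch of how you might estimate 1152x864 numbers from the tested resolutions; it assumes frame rate scales roughly linearly with pixel count, which only approximately holds when you are GPU-limited, and the FPS figures below are made-up placeholders:

```python
# Rough frame-rate interpolation by pixel count between two measured resolutions.
def pixels(w, h):
    return w * h

def estimate_fps(target, low_res, low_fps, high_res, high_fps):
    """Linearly interpolate FPS between two measured resolutions by pixel count."""
    p_lo, p_hi, p_t = pixels(*low_res), pixels(*high_res), pixels(*target)
    t = (p_t - p_lo) / (p_hi - p_lo)
    return low_fps + t * (high_fps - low_fps)

# Example: hypothetical 45 fps at 1024x768 and 30 fps at 1280x1024
print(round(estimate_fps((1152, 864), (1024, 768), 45.0, (1280, 1024), 30.0), 1))
```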
Umm, they tested Company of Heroes and Call of Juarez at 1024x768, and Lost Planet at 800x600, and some of the cheaper cards still could not maintain playable frame rates. How much lower do you want them to go on resolution?
I'm not sure how raising FSAA is going to improve performance?
Nor how lowering the screen res is supposed to "maintain DX10 levels of visuals" either?
... if you presently have a DX9 system with acceptable performance. Until the NEXT generation of DX10 hardware is released.
Anybody who goes out and buys a current DX10 card (or even worse, dual cards) just because "the DX10 games are coming" (and "I want bragging rights") has a lot more money than sense. Buying an 8600 or 2600 for acceptable HD decoding in your HTPC (or your mid-range PC with a weak CPU) is the only purchase among the current DX10 offerings that makes total sense. All of the upcoming DX10-capable game titles for 2007 will have excellent DX9/SM3 graphics. The lack of DX10 hardware will smother "bragging rights" but will have zero effect on playability.
nVidia has been developing the successor family to the G80-series GPU for almost a year now, and the first graphics cards from this new generation are expected by the end of 2007. I would not be at all surprised if the first card out of the chute in the new family immediately fills the cost-space between the 8600GTS and 8800GTS, but with DX9 and DX10 performance far superior to the 8800GTS. No doubt the GPU will also be 65nm, since the manufacturing/yield cost of the huge 90nm G80 die is the immovable stumbling block to dropping the price of the 8800GTS.
That's fine if you game at resolutions below 1680x1050, but some of us game at higher resolutions, and most high-end DX9 cards were struggling mightily to play the latest DX9 games (Oblivion, Supreme Commander, STALKER, etc...).
These new-generation cards are not only about DX10, they are also about improved performance (dramatically improved performance, actually) over the last generation, and some of us actually need that power.
What you said is all well and good for you if you are still gaming at 800x600 or whatever, but I like my resolution a little higher, thank you very much.
So Nvidia and AMD actually complained about their "mainstream" parts being below par?
It sounds to me like they are seriously out of touch. Media center PCs will get their 8400 and 2400 cards for h264 acceleration. Gamers with lots of money will buy those $400+ cards as usual. But your average gamer in the $200 market is stuck with junk that is unplayable for dx10 games and performs like previous gen hardware that just barely makes the grade for current games already on the market.
I'm glad that some of the things said in the article are starting to be said. DX10 reminds me of PhysX: a great concept that can't really succeed because of hurdles that keep the technology from actually taking off. Unfortunately, DX10 also isn't going to be what we all hoped it would be in real performance, and video drivers are not the only reason.
There is something really counterintuitive about quality native DX10 rendering support.
There is very little incentive for a developer to produce a good DX10 renderer when DX9 already works on Vista and when much of their real current effort goes toward console support for a bigger customer base. With only so many hours in a day, console support is much more lucrative, since it opens a title to new buyers, and that title will still run just fine with only DX9 on a Vista platform, never needing or really benefiting from DX10.
The costs of making DX10 games outweigh the benefits, and the benefits aren't that palpable in the first place. Furthermore, actual DX10 performance will rarely be all that positive (it's too early to prove, but the signs are negative), even though apples-to-apples DX9-to-DX10 comparisons cannot be made.
As for the low-end parts, it's just marketing; there's no "real" DX10 value at $100-$150 for gaming. If people want a good video card value for games, $200 is where good quality/price starts, and it's mostly in the previous generation, not some value card for $150. Whether we agree with that price point being 'worth it' is another matter altogether and is purely personal preference.

I don't know why Nvidia or ATI even introduced low-end DX10-compatible hardware when customers will only get angry at developers or video manufacturers for the blunder of underpowered hardware on high-end game titles. This low-end DX10 hardware mystified me: it was either going to slow down DX10 progress or have to be ignored. It seems obvious that all high-end games will have to pretty much ignore the low-end parts to achieve acceptable framerates with DX10 in new eye-candy titles. They should have left DX10 out of the low end entirely, but they had to include it because of competition between AMD and Nvidia; neither wanted to leave the other with the marketing advantage of saying, "look, we have DX10."

DX10, with its lofty goals of rendering more in a scene and producing even greater-quality eye candy, is at odds with low price. Higher-quality rendering will always be at odds with low price; the two are essentially mutually exclusive. Low price is never going to give you good performance and quality, and people should really start being realistic about what they are paying for at a $100-$150 price point; it's a consumer expectations problem. Low-end hardware will work fine for games like The Sims, and those games will target low-end hardware, but not for high-end games at higher resolutions and decent frame rates.
In the end, with future goggles on, I think the picture is becoming quite clear that DX10 will become one of the DX APIs that gets mostly skipped (if a decent DX successor becomes available in the next few years). The only time it will really make sense to go beyond native DX9 support is when Vista saturates >85% of the gaming market. In the several years that will take, DX11 or something higher should be available and superior to DX10, so in hindsight DX10 will really end up being just a marketing ploy to upgrade to Vista, little more.
I'm glad the mask is starting to come off and more people are able to see the picture more clearly: making any purchase around DX10, whether a GPU or an OS, is silly and bound to cause frustration.
I upgraded recently, but ended up spending under $300 for a Core 2 Duo system.
This is why: people were saying get a DX10 card, it's future-proof.
I decided to keep my old AGP card because I felt real DX10 games and real DX10 hardware were not here yet.
I'm happy I made what I feel was the right choice and didn't spend money on a new PSU, SATA drives, etc. just so I could have a $$$ DX10 card only to play Call of Juarez at 14 fps.
I have to agree. I was considering trying to get an 8600 for my SFF PC, but after looking at this, I'm probably going to hold off until the next generation. HTPC and SFF PCs just can't handle the heat an 8800 series generates, and I want at least playable DX10 performance.
quote: Both NVIDIA and AMD were very upset over how little we thought of their DX10 class mainstream hardware. They both argued that graphics cards are no longer just about 3D, and additional video decode hardware and DX10 support add a lot of value above the previous generation. We certainly don't see it this way. Yes, we can't expect last years high-end performance to trickle down to the low-end segment, but we should at least demand that this generation's $150 part will always outperform last generation's.
Seriously, F them. It's pathetic that they're trying to pawn off half-assed hardware as "mid-range enthusiast" parts when it can't even perform as well as the mid-range from the previous generation. Jerks.
Another new article showing how DX10 Vista performance propaganda is garbage.
Gotta love people who try to act superior because they bought Vista for gaming, when all it does is suck up more system resources and use DX10, a combination that will easily inhibit performance on equivalent hardware.
"They both argued that graphics cards are no longer just about 3D, and additional video decode hardware and DX10 support add a lot of value above the previous generation."
Yeah, I don't buy into this either. I've pretty much given up on 'video decode,' be it Avivo or PureVideo. You end up stuck using a specific product, rather than ATI or Nvidia opening the features for any developer to access. Right now, it's only useful with the latest WinDVD, PowerDVD, or Nero, but you have to hope your driver version is the right one, and it doesn't (and probably never will) work for x264 or XviD content.
PureVideo is horribly branded by Nvidia: is it a set of card features that everyone has access to, or do you have to buy it from them? And has ATI actually released their Avivo video converter to the public? Could I use it to compress some of my recorded TV shows from MPEG to XviD?
Maybe this is like the MPEG2 decoder situation in '98/'99, in which case we should just wait for CPU speeds to increase to the point where we don't need video decode acceleration.
I agree. How much quicker would things be if all the video transistors were spent on more shader processors? I want my video card for video games. I want my dvd player for movies.
I have a 7600GT OC'd to 635/800. I get 30+ fps with better-than-default settings, with 2xAA, at 1024x768. Why do these cards not seem to do much better (given they are at 1280x1024)?
It would have been nice if you guys could have included numbers with the latest publicly available drivers (beta or not) from ATI and NV, just to get an idea of what type of performance we can expect in the future.
Actually, the beta drivers will give you a better idea of what to expect than the current WHQL drivers, which is why we used them.
First of all, the whole name of the article is a poor choice of words. DX10 is pretty, but it's not fast. At least I think that was your point.
Next, the choice of processors is too limited. I don't know where you guys get your sales figures from, but Intel's Extreme processors aren't their best sellers. Since part of the point of DX10 is to do work on the video card that was done on the CPU before, you might want another data point with a relatively inexpensive processor to see how DX9 relates to DX10.
Saying there will not be ANY DX10-only games for the next two years is strange. You will have an installed base to overcome, but for gamers it's not so important, because they upgrade often, and some games don't design for old hardware anyway. Aren't there some games now that weren't made to play well on hardware from two years ago? I would guess someone will decide, well before then, that designing for obsolete software isn't worth the effort and cost, and will ignore the installed base to create a better product that takes less time and costs less. Going slightly further, if you had an exceptionally good DX10 game, there would be enough people to make it profitable even if you were the only one who did it. You always have the dorks who like using words like "eye-candy" (which probably means they have insufficient testosterone in their bloodstream) who will go out and buy whatever is prettiest. No DX9 path means no wasted developers, which means faster development for DX10 and less cost. So, before two years are up, it's entirely plausible that someone making a cutting-edge game will decide it's not worth the effort, or cost, of supporting an obsolete software base.
OK, those charts are horrible. Put a little effort into them, instead of letting everyone know they are an afterthought that you hate doing. The whole purpose of charts is to disseminate information quickly and intuitively; your charts totally fail at this. In the top chart, you have the ambiguous "% change from DX9 performance." Most people, just viewing the chart, would assume something to the right is an improvement. But no! Everything is a loss. The next chart uses the same words, but this time green denotes a gain in performance. Right, because blue is typically what people associate with negatives and green with positives. Ever hear of "in the red" or "in the black"? I have. Most people have. Red for negatives and black for positives would have been a little more understandable if you simply refuse to put in a guide.
Next you have "perf drop with xxx," which is what the first chart should have read, but you didn't want to go back and fix it. Even then, red would have been a better color for a negative; it's more intuitive, and that's what charts are about.
Last, maybe Nvidia and AMD are right that features are important. I was saying this earlier: you're comparing apples and oranges when you say performance went down. Did DX10 performance go down? No. So, broadly, performance did not. DX10 performance didn't even exist before, nor did some of the other features. I'm not saying everyone should buy these cards; people should measure what they need, and certainly some of the older cards, with their obsolete feature set, are fine for many, many people. At least today.

But, by the same token, someone running Vista is probably going to want DX10 hardware, and they may not have massive amounts of money to spend on it either. So the low-end cards make sense too. Someone wanting the best possible visual experience WILL buy a DX10 card, and an expensive one. They have reasons for existing; they aren't broadly failures. I think what irritates you is that they aren't broadly successful either, like previous generations were and like you are used to. It's understandable, but I think you're taking this so far that you aren't seeing the value in them either. Obviously, since everyone has disappointed you in DX10 performance (Intel, Nvidia and AMD), shouldn't that show it's not an easy implementation, and that maybe they aren't all screwing up? If you give an exam and everyone gets a 10, maybe you need to look at the nature of the grading or the test.
I think the point regarding the low-end DX10 hardware was the following:
"Performance is so low that you will, in reality, not be able to use DX10."
So everyone who cannot afford high-end DX10 would be better off with a DX9 card that is cheaper and performs the same (or maybe even better).
Not true. Super-demanding video games will not be the only software that runs on DX10, and you will not be forced to run everything at very high settings. For gaming, yes, but that's not the whole picture.
While you got voted down because your comment was set in an attacking tone, I felt that the comment was very well written and that you had some very valid points. As for the charts, I thought that those were performance increases until I read your comment. I doubt most casual readers will take the time to understand the counterintuitive charts. The charts are what most people look at, so they are really the most important part of the article to get right. Maybe the author will learn from this article and do a better job on the charts in the future.
You know, when I see people like him writing things without thinking like that, it really irritates me because it's so uninformed, and so I get angry. I wish I didn't react like that, but let he who is without sin cast the first stone. I guess it's better than being passionless. I really don't mind negative votes; I'd be more worried if I got positive ones.
The charts really got under my skin, because he made no effort. It's just half-assed garbage. When you consider how many people read them, it's indefensible. If I did work like that, I'd be ashamed.
Instead of taking a step back and saying, well, all the DX10 hardware hasn't been what we expected, maybe there is a reason for it, they quickly damn every company that makes it. It's got to be comparative, because obviously these authors really don't know anything about designing GPUs (nor do I, for that matter, so I'm not saying it to be vicious). But when Intel, AMD and Nvidia all have disappointing DX9 performance with their DX10 cards, and DX10 performance broadly isn't great (although it seems better unless you add features), then maybe you should take a step back and say, "Hmmmm, maybe we need to adjust our expectations."
It's kind of funny, because they do this with microprocessors already, since there was a fundamental change that made everyone reevaluate them. The Core 2 would be a complete piece of crap if you judged it by 1980s and 1990s standards. It was, by those standards, an extremely small improvement over the P7 core, and an even smaller one over Yonah. But the way processors are graded is different now, because our expectations were lowered somewhat by the P6 (it was a great processor, but again, it wasn't as big a jump as the P5 was over the P4, or the P4 over the P3), and greatly by the K7 and P7. So maybe GPUs are hitting that point in maturity where the incredible improvements in performance are a thing of the past, and the pace will slow down. Now, someone will correctly say, well, DX9 performance has decreased in some cases. That's fine, but we also have precedent in the processor world. Let's go back to the P6. It ran the majority of existing code (real mode) WORSE than the P5, because it was designed for 386 protected mode and didn't care much about real mode. Or how about the 386? It wasn't any better on 16-bit code in terms of performance (it did add virtual 8086 mode, though that wasn't for performance), but it added two new modes, the most important of which (although not at the time) was 386 protected mode.
Articles like this irritate me because they are so simplistic and have so little thought put into them. They lack perspective.
Here's a bit of info on the cards. Back in the d3d8 era, Matrox introduced the first 256-bit memory bus for their cards with the Parhelia. All things being equal, that provides twice the bandwidth of a 128-bit bus, but is more expensive to manufacture.
nVidia and ATI still had 128-bit buses at the time, but for their high-end d3d9 cards (Radeon 9700 and GeForceFX 5900) they switched to 256-bit buses because 128 bits just couldn't keep up.
We're several generations on now, and both IHVs have used 128-bit buses for their mid-range d3d10 cards, even though 128 bits was already becoming a bottleneck back in the d3d8 era!
This article was right on the money. nVidia focused on making their 8600 pin-compatible with their old mid-range 6600 card, which is now three generations old! Intel didn't even keep the P4 compatible with itself! Yay, nVidia allowed Asus, etc. to save on R&D costs. Too bad it meant customers get a handicapped chip. This article called them on it.
256-bit and 512 megabytes should be the standard for mid-range d3d10. We'll need to wait a generation to get there.
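To put the bus-width argument in numbers, here is a quick bandwidth sketch; the memory clocks are assumed reference specs for the GDDR3 variants of these cards, so treat the results as ballpark figures:

```python
# Peak memory bandwidth = (bus width in bytes) * effective data rate.
# GDDR3 is double data rate, so effective rate = 2 * memory clock.
def bandwidth_gb_s(bus_bits, mem_clock_mhz):
    return (bus_bits / 8) * (2 * mem_clock_mhz) / 1000  # GB/s

cards = {
    "GeForce 8600 GTS (128-bit @ ~1000 MHz)":  bandwidth_gb_s(128, 1000),
    "Radeon HD 2600 XT (128-bit @ ~800 MHz)":  bandwidth_gb_s(128, 800),
    "GeForce 8800 GTS (320-bit @ ~800 MHz)":   bandwidth_gb_s(320, 800),
}
for name, bw in cards.items():
    print(f"{name}: ~{bw:.0f} GB/s")
```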
First, the data presented in the graphs does make sense if you take a second to think about what it means.
It shows basically that under DX10 NVIDIA generally performs better relative to AMD than under DX9.
It also shows that under DX10 AMD handles AA better relative to NVIDIA than under DX9.
I will work on altering the graphs to show +/- 100% if people think that would read better. Honestly, I don't think it will be much easier to read, except that people tend to think higher is better.
But ... as for the rest of your post, I completely disagree.
When we step back and stop pushing AMD and NVIDIA to live up to specific expectations, we've stopped doing our job.
Justifying poor design choices by looking at the past is no way to advance the industry. A poor design choice is a poor design choice, no matter how you slice it. And it's the customer, not the industry or the company, who is qualified to decide whether or not something was a poor choice.
The fundamental problem with engineering is that you are building a device to fit within specific constraints. It is a very difficult job that consists of a great deal of cost/benefit analysis and hard choices. But the bottom line with any engineering project is that it must satisfy the customer's needs or it will not sell and it doesn't matter how much careful planning went into it.
It's when consumers (and the hardware review sites that represent them) stop demanding the fundamental characteristics that absolutely must be present in the devices we purchase that we subject ourselves to subpar hardware.
Having studied computer engineering, with a focus in microprocessor architecture and 3d graphics, I certainly do know a bit about designing GPUs. And, honestly, there are reasons that DX10 hardware hasn't been what people wanted. This is a first generation of hardware that supports a new API using a very new hardware model based on general purpose unified shaders. It was a lot to do in one generation, and no one is damning anyone else for it.
But that doesn't mean we have to pretend that we're happy about it. And our expectations have always been more subdued than that of the general public BTW. We've said for a while not to expect heavy DX10 dependent games for years. It's the same situation we saw with DX9.
And honestly, the problems we are seeing are similar to what we saw with the original GeForce FX -- only not as extreme. Especially because both NVIDIA and AMD do well in the thing they must do well at: DirectX 9 rendering.
Honestly, this supports what we've been saying all along: the most important factor in a 3d graphics purchase today is DirectX 9 performance.
They're right though, the charts need work. They are not intuitive, and there are better ways to present 'percent change' data that would make sense at first glance, without the reader having to decipher an unintuitive method that works against the readability of the article.
I had the exact same reaction to the charts. For the Lost Planet chart with the two colors, either pick better (more standard) colors, or make the performance drop bars grow to the left (or down), and the performance increase bars grow to the right (or up).
The best thing to do with those charts is change them to show relative performance in DX10 compared to DX9, with 100% meaning no change (same performance in DX10 as DX9). Improvements with DX10 give scores above 100%, reduced performance gives a result below 100%.
Doing that would make the graphs much easier to understand than the current mess.
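For what it's worth, here is a minimal sketch of that kind of chart, with DX10 frame rate plotted as a percentage of DX9 and a line marking the 100% baseline; the percentages are placeholders, not the article's data:

```python
# Horizontal bar chart of DX10 performance relative to DX9 (100% = no change).
import matplotlib.pyplot as plt

cards = ["8800 Ultra", "8800 GTX", "8800 GTS", "2900 XT", "8600 GTS", "2600 XT"]
dx10_pct_of_dx9 = [80, 78, 65, 55, 48, 45]  # hypothetical values, not real results

fig, ax = plt.subplots()
colors = ["tab:green" if v >= 100 else "tab:red" for v in dx10_pct_of_dx9]
ax.barh(cards, dx10_pct_of_dx9, color=colors)
ax.axvline(100, color="black", linewidth=1)  # 100% = same speed as DX9
ax.set_xlabel("DX10 performance as % of DX9 (higher is better)")
ax.set_title("Relative DX10 performance (placeholder data)")
fig.tight_layout()
fig.savefig("dx10_vs_dx9.png")
```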
This might be a silly question as I can't recall the current status of MultiGPU performance with Vista drivers. Will it be possible to test these games with SLI/CrossFire configurations soon?
The results show exactly why I am waiting to buy a DX10 video card. All these people who rushed out to buy a GeForce 8800GTX or AMD 2900XT... hah... especially all those 2900XT fanboys who said the R600 would destroy the 8800GTX in DX10 benchmarks because it has 320 stream processors and a 512-bit memory interface. Well, guess what: the benchmarks are in, and they show the R600 is still the power-hungry POS video card it always was.
I'm not sure how it is a 'hah' to the people who purchased these cards, as they still blow everything else out of the water in DX9; it's not even close.
So for those of us running at higher resolutions (1920x1200 or higher), an 8800/2900 or two made perfect sense (and still does). I doubt very many people were expecting great DX10 performance right away anyway, particularly as the games available barely make use of it.
I agree with your idea, I've always skipped over one generation of hardware to another.
Especially while users are still "testing" Vista gaming for Microsoft, Nvidia and AMD, I see no need to part with my money until performance is at least on par with DX9.
Good article...Nice to see some real numbers on DX10 vs DX9
There are applications where the 2900 XT does outperform its competition, as is shown by Call of Juarez.
It really depends on how developers go forward. We'll have to wait and see what happens. Judging the future of AMD/NVIDIA competition under DX10 isn't really feasible with only 3 apps to go by.
One thing's for sure though: we'd love to see better performance out of the mainstream parts from both camps. And having some parts to fill in the gap between the lower- and higher-end hardware would be nice too.
I think the point here is that many claimed "R6xx is designed for DX10, don't judge it based on DX9 performance, blah blah blah." Those claims gave the impression that the relative DX10 performance of the R6xx series would be much better than its DX9 performance.
Your tests show that on average, R6xx takes a HIGHER performance hit from moving to DX10. Thus, under DX10 R6xx is even SLOWER than it was under DX9.
"there are applications where the 2900 xt does outperform its competition" - where? 2900XT has 22FPS, 8800GTX 24FPS nad 8800ULTRA 26FPS. Despite "crippled for NVIDIA" / "paid by ATI" I see still green camp to outperform ATI.
And in Lost Planet NVIDIA has 2x (!) better performance.
It is not even worth considering ATI for purchase.
Read the comment again: the point is that the 2900XT does not compete against the 8800GTX; it competes against the 8800GTS, which it did outperform in that test.
It certainly isn't the fastest card available, but I could also make a statement like "the GeForce 7300 Go outperforms its competition" without saying it's the fastest thing available. I'm just saying it beats the cards in a similar price range.
Um, what about price? Last time I checked the 8800GTX still costs about $150 more than the 2900XT. I will not even bring up the Ultra which is still way overpriced.
So for $150 less you get a card that competes with the GTX some of the time and is more than capable of playing most games maxed out at high resolution. That is why the 2900XT is worth considering for purchase.
The problem is, the 2900XT is often NOT playable. Its performance is sometimes close to NVIDIA's, sometimes 2x lower, and this applies to both DX9 and DX10. Of the 60+ games I own, I'd guess ATI would "suck" on 30 of them.
Look at Lost Planet with AA: 22FPS versus 40-50FPS is a huge difference in playability. On an 8800GTX/Ultra you can play even at 1920x1200 (30FPS); with the 2900XT even 1280x1024 gives you problems.
Well I have two of them and they work more than fine on every single game I have tried so far, including demos, betas, and a couple of older games (using Catalyst 7.6).
Lost Planet is a POS port anyway, but when I ran the test benchmark in DX9 with Crossfired 2900XTs I had frames well above 40 with everything maxed at 1920x1200 so I am somewhat confused by the numbers here. I will have to wait until I am home to see my exact numbers, but they were much higher than what was presented here. Maybe there is something wonky with the beta drivers?
I'll post back tonight once I have verified my numbers.
We didn't use the demo benchmark, and the release version (afaik) does not include an updated version of the benchmark either.
For the Lost Planet test, we had to use FRAPS running through the snow level. This will absolutely give lower performance, as the built-in benchmark also runs through a couple of indoor scenes with higher framerates.
I mentioned FRAPS on the test page, but I'll add to the Lost Planet section the method we used for testing.
Do Intel CPUs currently outperform AMD's or not? After all, a $200 AMD CPU is about as fast as a $200 Intel CPU...
It's natural that slower parts have a good price/performance ratio compared to the competition, since otherwise nobody would buy them. However, this has nothing to do with which one is fastest...
Not sure what you are getting at, I was responding to this ridiculous statement:
quote: It is not even worth considering ATI for purchase.
Which is completely untrue, because price can be a big consideration.
With respect to CPUs, if you spend an extra $50-$100 for the better Intel processor, you are getting substantially better performance (I know this from experience), while if you spend $150 more for a GTX, you are getting only marginally better performance.
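To put that trade-off in rough numbers, here is a quick performance-per-dollar sketch using the Call of Juarez DX10 figures quoted earlier in the thread (22 vs 24 fps); the street prices are assumptions for illustration, not numbers from the article:

```python
# Quick perf-per-dollar comparison. FPS values come from the Call of Juarez
# DX10 numbers cited above; the prices are assumed for illustration only.
cards = {
    "Radeon HD 2900 XT": {"fps": 22, "price": 400},  # assumed ~$400
    "GeForce 8800 GTX":  {"fps": 24, "price": 550},  # assumed ~$550
}
for name, d in cards.items():
    print(f"{name}: {d['fps'] / d['price'] * 100:.1f} fps per $100")

fps_gain = (24 - 22) / 22 * 100        # ~9% more performance
price_gain = (550 - 400) / 400 * 100   # ~38% more money
print(f"~{fps_gain:.0f}% more fps for ~{price_gain:.0f}% more money (in this one test)")
```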
slickr - Monday, July 9, 2007 - link
Great review, thats what we all need to get Nvidia and ATI stop bitchin around and stealing our money with slow hardware that can't even outperform last generations hardware. If you ask me the 8800Ultra should be the middle 150$ class here and top end should be some graphic card with 320 stream processors 1GB GDDR4 clocked at 2.4GHZ and 1000MHz core clock, same from amd they need the X2900XT to be middle 150$ class and top of the line should be some graphic card with 640stream processors 1GB GDDR4 2.4GHz and 1000MHz core clock!More of this kind of reviews please so we can put to ATI and Nvidia we won't buy their hardware if its not good!!!!!!!!
ielmox - Tuesday, July 24, 2007 - link
I really enjoyed this review. I have been agonizing over selecting an affordable graphics card that will give me the kind of value I enjoyed for years from my trusty and cheap GF5900xt (which runs Prey, Oblivion, and EQ2 at decent quality and frame rates) and I am just not seeing it.I'm avoiding ATI until they bring their power use under control and generally get their act together. I'm avoiding nVidia because they're gouging the hell out of the market. And the previous generation nVidia hardware is still quite costly because nVidia know very well that they've not provided much of an upgrade with the 8xxx family, unless you are willing to pay the high prices for the 8800 series (what possessed them to use a 128bit bus on everything below the 8800?? Did they WANT their hardware to be crippled?).
As a gamer who doesn't want to be a victim of the "latest and greatest" trends, I want affordable performance and quality and I don't really see that many viable options. I believe we have this half-baked DX10 and Vista introduction to thank for it - system requirements keep rocketing upwards unreasonably but the hardware economics do not seem to be keeping pace.
AnnonymousCoward - Saturday, July 7, 2007 - link
Thanks Derek for the great review. I appreciate the "%DX10 performance of DX9" charts, too.Aberforth - Thursday, July 5, 2007 - link
This article is ridiculous. Why would Nvidia and other dx10 developers want gamers to buy G80 card for high dx10 performance? DX10 is all about optimization, the performance factor depends on how well it is implemented and not by blindly using API's. Vista's driver model is different and dx10 is different. The present state of Nvidia drivers are horrible, we can't even think of dx10 performance at this stage.the dx10 version of lost planet runs horribly eventhough it is not graphically different from dx9 version. So this isn't dx10 or GPU's fault, it's all about the code and the drivers. Also the CEO of Crytek has confirmed that Nvidia 8800 (possibly 8800GTS) and E6600 CPU can max Crysis in Dx10 mode.
Long back when dx9 came out I remember reading an article about how it sucked badly. So I'm definetly not gonna buy this one.
titan7 - Thursday, July 12, 2007 - link
No, it's not about sucky code or sucky drivers. It's about shaders. Look at how much faster cards with more shader power are in d3d9. Now in d3d10 longer, richer, prettier shaders are used that take more power to process.It's not about optimization this time as the IHVs have already figured out how to write optimized drivers, it's about raw FLOPS for shader performance.
DerekWilson - Thursday, July 5, 2007 - link
DX9 performance did (and does) "suck badly" on early DX9 hardware.DX10 is a good thing, and pushing the limits of hardware is a good thing.
Yes drivers and game code can be rocky right now, but the 162 from NVIDIA are quite stable and NV is confident in their performance. Lost planet shows that NV's drivers are at least getting close to parity with DX9.
This isn't an article about DX10 not being good, it's an article about early DX10 hardware not being capable of delivering all that DX10 has to offer.
Which is as true now as it was about early DX9 hardware.
piroroadkill - Friday, July 6, 2007 - link
Wait, performance on the Radeon 9700 Pro sucked? I seem to remember games several years later that were DirectX 9 still being playable...DerekWilson - Saturday, July 7, 2007 - link
yeah, 9700 pro sucks ... when actually running real world DX9 code.Try running BF2 at any playable setting (100% view distance, high shadows and lighting). This is really where games started using DX9 (to my knowledge, BF2 was actually the first game to require DX9 support to run).
But many other games still include the ability to run 1.x shaders rather 2.0 ... Like Oblivion can turn the detail way down to the point where there aren't any DX9 heavy features running. But if you try to enable them on a 9700 Pro it will not run well at all. I actually haven't tested Oblivion at the lowest quality so I don't know if it can be playable on a 9700 Pro, but if it is, it wouldn't even be the same game (visually).
DerekWilson - Saturday, July 7, 2007 - link
BTW, BF2 was released less than 3 years after the 9700 Pro ... (aug 02 to june 05) ...Aberforth - Thursday, July 5, 2007 - link
Fine...Just want to know why a DX10 game called Crysis was running at 2048x1536 res with 60+ FPS equipped with Geforce 8800 GTX.
crysis-online.com/?id=172
titan7 - Wednesday, July 11, 2007 - link
CoH also got 59.9fps on the GTX Ultra. Today's d3d10 games can run at full frame rates on today's hardware. Just ensure you're using the latest drivers and you have the most expensive card money can buy ;)BigDDesign - Thursday, July 5, 2007 - link
I'm with Derek here. I liked your article. We need much more powerful hardware and time for DX10, Drivers, Developers & Vista to get it together. A couple of years from now, all should be good with DX10. Not any sooner methinks.WaltC - Thursday, July 5, 2007 - link
This article, I thought, was extremely poor for several reasons:(1) DX10 in terms of developer support presently is just about exactly where DX9 was when Microsoft first released it. At the time, people were swearing up & down that DX8.1 was "great" and wondering what all of the fuss about DX9 really meant. Then we saw the protracted "shader model wars" in which nVidia kept defending pre-SM2.0 modes while ATi's 9700 Pro pushed nVidia all the way back to the drawing boards, as its SM2.0 support, specific to DX9, created both image quality and performance that it took nVidia a couple of years to catch.
AnandTech did indeed mention almost in passing that DX10 was still early yet, and that much would undoubtedly improve dramatically in the coming months, but I think that unfortunately AT created the impression that DX10 and DX9 were exactly *alike* except for the fact that DX10 framerates were about half as fast on average as DX9 framerates. A cardinal sin of omission, no doubt about it, because...
(2) DX10 is primarily if not exclusively about improvements in Image Quality. It is *not* about maintaining DX9-levels of IQ while outperforming DX9. It is about creating DX10 levels of Image Quality--period. AnandTech does not seem to understand this at all.
(3)First lesson in Image Quality analysis that even newbies can readily understand is this: if the performance is not where you want it, but the IQ is where you want it, then you do the following to improve performance *without* sacrificing Image Quality (This is a lesson that AnandTech truly seems to have completely forgotten):
Instead of talking about how sorry the performance was in DX10 titles (those very few early attempts that AT looked at), AT should have seen what AT could have done to increase performance while maintaining DX10-levels of Image Quality. That is, AT should have *lowered* test resolutions and raised the level of FSAA employed to get the best balance of Image quality and performance. AnandTech did not even try to do this--which in my view is inexcusable and fairly unforgivable. It is a very bad mistake. I'm sorry--but many, many people, including me, do not use 1280x1024 *exclusively* while playing 3d games. My DX9 resolution of choice is 1152x864, for instance.
The point to be made about DX10 is *not* frame rates locked in at 1280x1024. Sorry AT--you really screwed the pooch on this one. The whole point of DX10 is *better image quality* which everyone who ever graduated from the 3d-school-of-hard-knocks is *supposed* to know!
So, just what does a bunch of *bar charts* detailing absolutely nothing except frame rates tell us about DX10? Not much, if anything at all. Gee, it does tell us that with the reduced image quality that DX9 is capable of providing contrasted with DX10, that DX9 runs faster in terms of frames per second on DX10 hardware! Gosh, who might ever have guessed....<sarcasm>
IMO, the fact that DX10 software even early on is running slower than DX9 on DX10-compliant hardware tells *me* nothing except that DX10 is demanding a lot more work out of the hardware than DX9, which means that we can expect the Image Quality of DX10 to be much better than DX9. These early games that AT tested with are merely the tip of the iceburg of what is to come. AnandTech really blew this one.
Jeff7181 - Tuesday, September 11, 2007 - link
"DX10 is *not* frame rates locked in at 1280x1024"Too bad you didn't say this earlier in your rant, I could have saved myself a few minutes and stopped reading sooner.
Thanks for the laugh though... imagine... someone who thinks 1280x1024 is a frame rate telling AnandTech they screwed the pooch when publishing this article. LOL
titan7 - Thursday, July 12, 2007 - link
This article was awesome. Too bad reality doesn't match your expectations or you'd like it too.1) You have no idea what the "shader war" was about. There isn't even a single parallel to be drawn if you *correctly* look back.
2) d3d10 is nothing about image quality. No hardware on the market or rumoured to be coming out next generation can handle running a full length SM2.0b shaders! And forget about even 3.0! Or even 2.0a that the GeForceFX supported for that matter. d3d10 is about one thing-making d3d on the PC more like a console. That means lower CPU (not GPU!) overhead (more performance in CPU limited situations) and making lives easier for developers.
3) AT has CoH at 1024x768, 1280x1024, 1600x1200, and 1920x1200 (common native LCD resolutions). These days everybody has bigger monitors and playing below 1024x768 is pretty rare. You can extrapolate your 1152x864 resolution from that.
What this shows us is if you increase the image quality (see the screen shots) today's cards don't have enough power to run at full frame rate, with the exception of the 8800 GTX Ultra. AT did a great job showing the world that with simple bar graphs and everything. It's too bad you were too angry to realize that is all it was trying to show.
strikeback03 - Friday, July 6, 2007 - link
umm, they tested Company of Heroes and Call of Juarez at 1024x768, and Lost Planet at 800x600, and some of the cheaper cards could still not maintain playable frame rates. How low on the resolution do you want them to go?jay401 - Friday, July 6, 2007 - link
I'm not sure how raising FSAA is going to improve performance?Nor how removing DX10 visuals but lowering screen res will "maintain DX10 level of visuals" either?
anandtech02148 - Thursday, July 5, 2007 - link
Excellent sincere analysis of the current hardwares situation. Which lead me to some after thoughts,- Maybe a 600buxs PS3 isn't so bad after all.
-What am i going to do with this 8800gtx and the lack of pc games, quite dry season compare to consoles.
-For those of you who hold out longer than I have, a 8800gts or 2900xt is a decent investment if you have a 1920x1200 monitor to go with it.
kilkennycat - Thursday, July 5, 2007 - link
... if you presently have a DX9 system with acceptable performance. Until the NEXT generation of DX10 hardware is released.Anybody who goes out and buys a current Dx10 card ( or even worse, dual cards ) just because "the Dx10 games are coming" ( and "I want bragging-rights") has a lot more money than sense. Buying a 8600 or 2600 for acceptable HD-decoding in your HTPC (or your mid-range PC with a weak CPU) is the only purchasing action with the current Dx10 offerings that makes total sense. All of the upcoming Dx10-capable game-titles for 2007 will have excellent Dx9/SM3 graphics. The lack of Dx10 hardware will smother "bragging-rights" but will have zero effect on playability.
nVidia has been developing the successor family to the G80-series GPU for almost a year now and the first graphics cards from this new generation are expected by the end of 2007. I would not be at all surprised if the first card out of the chute in the new family will immediately fill the cost-space between 8600GTX and 8800GTS, but with DX9 and Dx10 performance far superior to the 8800GTS. No doubt the GPU will also be 65nm, since the manufacturing/yield cost of the huge 80nm G80 die is the immovable stumbling-block to dropping the price of the 8800GTS.
KeithTalent - Thursday, July 5, 2007 - link
That's fine if you game at resolutions below 1680x1050, but some of us game at higher resolutions, and most high-end DX9 cards were struggling mightily to play the latest DX9 games (Oblivion, Supreme Commander, STALKER, etc...).These new generation cards are not only about DX10, they are also about improved performance (exponentially improved performance actually) over that last generation, and some of us actually need that power.
What you said is all well and good for you if you are still gaming at 800x600 or whatever, but I like my resolution a little higher thank you very much.
KT
misaki - Thursday, July 5, 2007 - link
So Nvidia and AMD actually complained about their "mainstream" parts being below par?It sounds to me like they are seriously out of touch. Media center PCs will get their 8400 and 2400 cards for h264 acceleration. Gamers with lots of money will buy those $400+ cards as usual. But your average gamer in the $200 market is stuck with junk that is unplayable for dx10 games and performs like previous gen hardware that just barely makes the grade for current games already on the market.
What is so hard to understand?
MadBoris - Thursday, July 5, 2007 - link
I'm glad that some of the things that were said in the article, are starting to be said. DX10 reminds me of Physx, great concept but can't possibly succeed due to certain hurdles in the technology being able to actually take off. Unfortunately DX10 also isn't going to be what we all hoped it would be in real performance, video drivers are not the only reason.There is something real counterintuitive with quality native DX10 rendering support.
There is very little incentive for a developer to produce a good DX10 renderer when developers have DX9 support on Vista along with many of their real current goals of console support for more customer base. With only so many hours a day, console support is much more lucrative with new people having access to buying a title, that will also actually run just fine with only DX9 on a Vista platform, never needing or really benefitting from DX10.
The costs of making DX10 games outweigh the benefits, and the benefits aren't currently that palpable in the first place. Furthermore, actual DX10 performance will rarely ever be all that positive, something that is too early to prove, but rather negative even though apples to apples DX9 to DX10 comparisons cannot be made.
As to the low end parts, it's just marketing, theirs no "real" value DX10 for $100 -$150 for gaming. If people want a good video value purchase for games, $200 is where the good quality/price starts and it's mostly in a previous generation, not some value card for $150. Whether we agree with that price point being 'worth it' is another matter altogether and is purely personal preference. I don't know why Nvidia or ATI even introduced low end DX10 compatible hardware when customers will only get angry at developers or video mfr's for the blunder of underpowered hardware for high end game titles. This low end DX10 hardware mystified me. it was either going to slow down DX10 progress or have to be ignored. It seems obvious that all high end games will have to pretty much ignore the low end parts for achieving acceptable framerates with DX10 for new eye candy titles. They should have left DX10 out entirely in low end, but they had to include it because of competition between AMD/Nvidia, neither wanted to leave the other with a marketing advantage of saying, look we have DX10. DX10 with it's lofty goals of being able to render more in a scene and produce even greater quality eyecandy is at odds with low price. Higher quality rendering will always be at odds with low price, they are mutually exclusive. Low price never is going to give you good performance and quality, people should really start being realistic as to what they are paying for at a $100 - $150 price point, it's a consumer expectations problem. Low end hardware will work fine for games like Sims and those games will target low end hardware, but not high end games for higher resolutions and decent frame rates.
In the end, with future goggles on I think the picture is becoming quite clear that DX10 will become one of the DX API's that becomes mainly skipped (if a decent DX successor becomes available in next few years). The only time it will really make sense to go above DX9 native support is when Vista saturates >%85 gaming market share. In the several years that that will take, DX11 or higher should be available and will be superior to DX10, so DX10 in hindsight will really end up being just a marketing ploy to upgrade to Vista, little more.
Glad the mask is starting to come off and more people are being able to see the picture clearer that making any purchases around DX10 with a GPU or OS is silly and bound to cause frustration.
strafejumper - Thursday, July 5, 2007 - link
i upgraded recently - but ended up only spending under $300 for a core2duo systemthis is why - people were saying get a DX10 card - future proof
i decided to keep my old agp card because i felt real dx10 games and real dx10 hardware were not here yet.
i'm happy i made i feel the right choice and didn't spend money on new psu, sata drives etc. so i could have a $$$ dx10 card only to play call of juarez at 14 fps.
stromgald - Thursday, July 5, 2007 - link
I have to agree. I was considering trying to get an 8600 for my SFF PC, but after looking at this, I'm probably going to hold off until the next generation. HTPC and SFF PCs just can't handle the heat an 8800 series generates, and I want at least playable DX10 performance.jay401 - Thursday, July 5, 2007 - link
Seriously, F them. It's pathetic they're trying to pawn off half-assed hardware as "mid-range enthusiast" parts when they can't even perform as good as the mid-range from the previous generation. Jerks.
jay401 - Thursday, July 5, 2007 - link
Another new article showing how DX10 Vista performance propaganda is garbage.Gotta love people who try to act superior b/c they bought Vista for gaming when all it does is suck up more system resources and uses DX10, the combination of which will easily inhibit performance on equivalent hardware.
BigLan - Thursday, July 5, 2007 - link
"They both argued that graphics cards are no longer just about 3D, and additional video decode hardware and DX10 support add a lot of value above the previous generation."Yeah, I don't buy into this either. I've pretty much given up on 'video decode,' be it avivo or purevideo. You end up stuck with using a specific product, rather than ati or nvidia opening the features for any developer to access. Right now, it's only useful with the latest windvd, powerdvd or nero but you have to hope your driver version is the right one, and doesn't (and probably never will) work for x264 or xvid content.
Purevideo is horribly branded by nvidia - is it a card feature that everyone has access to, or do you have to buy it from them? And has ati actually released their avivo video converter to the public? Could I use it to compress some of my recorded tv shows from mpeg to xvid?
Maybe this is like the mpeg2 decoder situation in 98/99, in which case we should just wait for cpu speeds to increase to the point where we don't need video decode acceleration.
titan7 - Thursday, July 12, 2007 - link
I agree. How much quicker would things be if all the video transistors were spent on more shader processors? I want my video card for video games. I want my dvd player for movies.
vailr - Thursday, July 5, 2007 - link
Comparing mid-range DX10 cards (lowest prices found via froogle.com; shipping not included):
Radeon 2600XT 256Mb ~$145
http://www.ewiz.com/detail.php?p=AT-2600XT&c=f...
nVidia 8600GT 256Mb ~$100 (after $15 MIR)
http://www.newegg.com/Product/Product.asp?Item=N82...
How is the 2600XT worth the added $45 v. the 8600GT?
Comdrpopnfresh - Thursday, July 5, 2007 - link
I have a 7600gt oc'd to 635/800. I get 30+ fps with better-than-default settings, with 2x AA at 1024x768. Why do these cards not seem to do much better (given they are at 1280x1024)?
SniperWulf - Thursday, July 5, 2007 - link
It would have been nice if you guys could have included numbers with the latest publicly available drivers (beta or not) from ATI and NV, just to get an idea of what type of performance we can expect in the future.
DerekWilson - Thursday, July 5, 2007 - link
Actually, the beta drivers will give you a better idea of what to expect than the current WHQL drivers, which is why we used them.
TA152H - Thursday, July 5, 2007 - link
First of all, the title of the article is a poor choice of words. DX10 is pretty, but it's not fast - at least I think that was your point.
Next, the choice of processors is too limited. I don't know where you guys get your sales figures from, but Intel's extreme processors aren't their best sellers. Since part of the point of DX10 is to do work on the video card that was done on the CPU before, you might want another data point with a relatively inexpensive processor to see how DX9 relates to DX10.
Saying there will not be ANY DX10-only games for the next two years is strange. You will have an installed base to overcome, but for gamers it's not so important because they upgrade often, and some games don't design for old hardware anyway. Aren't there some games now that weren't made to play well on two-year-old hardware? I would guess someone will decide well before then that designing for obsolete software isn't worth the effort and cost, and will ignore the installed base to create a better product that takes less time and costs less. Going only slightly further, if you had an exceptionally good DX10 game, there would be enough people to make it profitable even if you were the only one who did it. You always have the dorks that like using words like "eye-candy" (which probably means they have insufficient testosterone in their bloodstream) that will go out and buy whatever is prettiest. No DX9 means no wasted developers, which means faster development for DX10 and less cost. So, before two years are up, it's entirely plausible that someone making a cutting-edge game will decide it's not worth the effort, or cost, of supporting an obsolete software base.
OK, those charts are horrible. Put a little effort into them, instead of letting everyone know they are an afterthought that you hate doing. The whole purpose of charts is to disseminate information quickly and intuitively; your charts totally fail at this. In the top chart you have the ambiguous "% change from DX9 performance". Most people, just viewing the chart, would assume something to the right is a gain. But no! Everything is a loss. The next chart uses the same words, but this time green denotes a gain in performance. As if blue is what people typically associate with negatives, and green with positives. Ever hear of "in the red", or "in the black"? I have. Most people have. Red for negatives and black for positives would have been a little more understandable if you simply refuse to put in a legend.
Next you have "perf drop with xxx", which is what the first chart should have read, but you didn't want to go back and fix it. Even then, red would have been a better color for a negative; it's more intuitive, and that's what charts are about.
Last, maybe Nvidia and AMD are right that features are important. I was saying this earlier, and you're comparing apples and oranges when you say performance went down. Did DX10 performance go down? No. So, broadly, performance did not. DX10 performance didn't even exist before, nor did some of the other features. I'm not saying everyone should buy these cards; people should measure what they need, and certainly some of the older cards with their obsolete feature set are fine for many, many people - at least today. But, by the same token, someone running Vista is probably going to want DX10 hardware, and they may not have massive amounts of money to spend on it. So the low-end cards make sense too. Someone wanting the best possible visual experience WILL buy a DX10 card, and an expensive one. They have reasons for existing; they aren't broadly failures, but I think what irritates you is that they aren't broadly successful either, like the previous generations you are used to. It's understandable, but I think you're taking this so far that you aren't seeing the value in them either. Obviously, since everyone has disappointed you on DX10 performance (Intel, Nvidia and AMD), shouldn't that show it's not an easy implementation, and that maybe they aren't all screwing up? If you give an exam and everyone gets a 10, maybe you need to look at the nature of the grading or the test.
NT78stonewobble - Thursday, July 5, 2007 - link
I think the point regarding the low-end DX10 hardware was the following: "Performance is so low that you will, in reality, not be able to use DX10."
So everyone who cannot afford high-end DX10 would be better off with a DX9 card that is cheaper and performs the same (or maybe even better).
TA152H - Thursday, July 5, 2007 - link
Not true; the only software that runs DX10 will not be super-demanding video games, and you will not be forced to run it at very high settings. For gaming, yes, but that's not the whole picture.
DerekWilson - Friday, July 6, 2007 - link
in these cases, dx10 wouldn't necessarily be a better fit than dx9 -- or (more probably) opengl ...
Martimus - Thursday, July 5, 2007 - link
While you got voted down because your comment was written in an attacking tone, I felt that it was very well written and that you had some very valid points. As for the charts, I thought those were performance increases until I read your comment. I doubt most casual readers will take the time to understand the counterintuitive charts. The charts are what most people look at, so they are really the most important part of the article to get right. Maybe the author will learn from this article and do a better job on the charts in the future.
TA152H - Thursday, July 5, 2007 - link
You know, when I see people writing things without thinking like that, it really irritates me, because it's so uninformed, so I get angry. I wish I didn't react like that, but let he who is without sin cast the first stone. I guess it's better than being passionless. I really don't mind negative votes; I'd be more worried if I got positive ones.
The charts really got under my skin, because he made no effort. It's just half-assed garbage. When you consider how many people read them, it's unsupportable. If I did work like that, I'd be ashamed.
Instead of taking a step back and saying, well, all the DX10 hardware hasn't been what we expected, maybe there is a reason for it, they quickly damn every company that makes it. It's got to be comparative, because obviously these authors really don't know anything about designing GPUs (nor do I, for that matter, so I'm not saying it to be vicious). But when Intel, AMD and Nvidia all have disappointing DX9 performance with their DX10 cards, and DX10 performance broadly isn't great (although it seems better unless you add features), then maybe you should take a step back and say, "Hmmmm, maybe we need to adjust our expectations."
It's kind of funny, because they do this with microprocessors already, because there was a fundamental change that made everyone reevaluate them. The Core 2 would be a complete piece of crap if you judged it by the standards of the 1980s and 1990s. It was, by those standards, an extremely small improvement over the P7 core, and even less over the Yonah. But the way processors are graded is different now, because our expectations were lowered somewhat by the P6 (it was a great processor, but again, it wasn't as big a jump as the P5 was over the P4, or the P4 over the P3), and greatly by the K7 and P7. So maybe GPUs are hitting that point in maturity where the incredible improvements in performance are a thing of the past, and the pace will slow down.
Now, someone will correctly say, well, DX9 performance has decreased in some cases. That's fine, but we also have a precedent in the processor world. Go back to the P6: it ran the majority of existing code (real mode) WORSE than the P5, because it was designed for 386 protected mode and didn't care much about real mode. Or how about the 386? It wasn't any better on 16-bit code in terms of performance (it did add virtual 8086 mode, though that wasn't about performance), but it added two new modes, the most important of which (although not at the time) was 386 protected mode.
Articles like this irritate me because they are so simplistic and have so little thought put in them. They lack perspective.
titan7 - Thursday, July 12, 2007 - link
Here's a bit of info on the cards. Back in the d3d8 era, Matrox introduced the first 256-bit memory bus with the Parhelia. All else equal, that provides twice the bandwidth of a 128-bit bus, but it is more expensive to manufacture. nvidia and ATI still had 128-bit buses at the time, but for their high-end d3d9 cards (Radeon 9700 and GeForce FX 5900) they switched to 256-bit buses because 128 just couldn't keep up.
We're four generations ahead now, and both IHVs have used 128-bit buses for their mid-range d3d10 cards, even though 128 bits was already becoming a bottleneck back in the d3d8 era!
This article was right on the money. nvidia focused on making their 8600 pin-compatible with their old mid-range 6600 card, which is now three generations old! Intel didn't even keep the P4 compatible with itself! Yay, nvidia allowed Asus, etc. to save on R&D costs. Too bad it meant customers got a handicapped chip. This article called them on it.
256-bit and 512 megabytes should be the standard for mid-range d3d10. We'll need to wait a generation to get there.
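To put rough numbers on the bus-width point (the clock below is purely hypothetical, not a figure from the review), peak memory bandwidth is just bus width in bytes times the effective data rate, so a 256-bit bus doubles it at the same memory clock:

    # Rough peak memory bandwidth in GB/s: bytes per transfer * transfers per second
    def peak_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
        return (bus_width_bits / 8) * (effective_clock_mhz * 1e6) / 1e9

    # Hypothetical 1000 MHz effective memory clock on each bus width
    print(peak_bandwidth_gbs(128, 1000))  # 16.0 GB/s on a 128-bit bus
    print(peak_bandwidth_gbs(256, 1000))  # 32.0 GB/s on a 256-bit bus -- double, all else equal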
DerekWilson - Thursday, July 5, 2007 - link
First, the data presented in the graphs does make sense if you take a second to think about what it means. It shows, basically, that under DX10 NVIDIA generally performs better relative to AMD than under DX9.
It also shows that under DX10 AMD handles AA better relative to NVIDIA than under DX9.
I will work on altering the graphs to show +/- 100% if people think that would present better. Honestly, I don't think it will be much easier to read with the exception that people tend to think higher is better.
But ... as for the rest of your post, I completely disagree.
When we step back and stop pushing AMD and NVIDIA to live up to specific expectations, we've stopped doing our job.
Justifying poor design choices by looking at the past is no way to advance the industry. A poor design choice is a poor design choice, no matter how you slice it. And it's the customer, not the industry or the company, who is qualified to decide whether or not something was a poor choice.
The fundamental problem with engineering is that you are building a device to fit within specific constraints. It is a very difficult job that consists of a great deal of cost/benefit analysis and hard choices. But the bottom line with any engineering project is that it must satisfy the customer's needs or it will not sell and it doesn't matter how much careful planning went into it.
It's when consumers (and the hardware review sites who represent them) stop demanding fundamental characteristics that absolutely must be present in the devices we purchase that we subject ourselves to subpar hardware.
Having studied computer engineering, with a focus in microprocessor architecture and 3d graphics, I certainly do know a bit about designing GPUs. And, honestly, there are reasons that DX10 hardware hasn't been what people wanted. This is a first generation of hardware that supports a new API using a very new hardware model based on general purpose unified shaders. It was a lot to do in one generation, and no one is damning anyone else for it.
But that doesn't mean we have to pretend that we're happy about it. And our expectations have always been more subdued than those of the general public, BTW. We've said for a while not to expect heavily DX10-dependent games for years. It's the same situation we saw with DX9.
And honestly, the problems we are seeing are similar to what we saw with the original GeForce FX -- only not as extreme. Especially because both NVIDIA and AMD do well in the thing they must do well at: DirectX 9 rendering.
Honestly, this supports what we've been saying all along: the most important factor in a 3d graphics purchase today is DirectX 9 performance.
jay401 - Friday, July 6, 2007 - link
They're right though, the charts need work. They are not intuitive, and there are multiple better ways to present 'percent change' data that would make sense at first glance, without the reader having to decipher an unintuitive method that detracts from the readability of the article.
DerekWilson - Friday, July 6, 2007 - link
The dx9 vs dx10 scaling graphs have been altered to present the data in a different way. Please let me know if this is still not adequate.
Andyvan - Thursday, July 5, 2007 - link
I had the exact same reaction to the charts. For the Lost Planet chart with the two colors, either pick better (more standard) colors, or make the performance drop bars grow to the left (or down), and the performance increase bars grow to the right (or up).-- Andyvan
PrinceGaz - Thursday, July 5, 2007 - link
The best thing to do with those charts is change them to show relative performance in DX10 compared to DX9, with 100% meaning no change (same performance in DX10 as DX9). Improvements with DX10 give scores above 100%, reduced performance gives a result below 100%. Doing that would make the graphs much easier to understand than the current mess.
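As a minimal sketch of that kind of normalization (the frame rates below are made up, not taken from the article):

    # DX10 performance expressed relative to DX9, with 100% meaning no change
    def relative_performance(dx9_fps, dx10_fps):
        return dx10_fps / dx9_fps * 100

    # Hypothetical card dropping from 60 fps in DX9 to 45 fps in DX10
    print(relative_performance(60.0, 45.0))  # 75.0 -> below 100%, so DX10 is slower here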
sterlinglittle - Thursday, July 5, 2007 - link
This might be a silly question, as I can't recall the current status of MultiGPU performance with Vista drivers. Will it be possible to test these games with SLI/CrossFire configurations soon?
gigahertz20 - Thursday, July 5, 2007 - link
The results show exactly why I am waiting to buy a DX10 video card. All these people who rushed out to buy a GeForce 8800GTX or AMD 2900XT... hah... especially all those 2900XT fanboys who said the R600 would destroy the 8800GTX in DX10 benchmarks because it has 320 stream processors and a 512-bit memory interface... well guess what, the benchmarks are in, and they show the R600 is still the power hungry POS video card it always was.
KeithTalent - Thursday, July 5, 2007 - link
I'm not sure how it is a 'hah' to the people that purchased these cards, as they still blow everything else out of the water in DX9; I mean, it is not even close. So for those of us running at higher resolutions (1920x1200 or higher), an 8800/2900 or two made perfect sense (and still does). I doubt very many people were expecting great DX10 performance right away anyway, particularly as the games available barely make use of it.
KT
Sceptor - Thursday, July 5, 2007 - link
I agree with your idea; I've always skipped a generation of hardware between upgrades. Especially while users are still "testing" Vista gaming for Microsoft, Nvidia and AMD, I see no need to part with my money until performance is at least on par with DX9.
Good article...Nice to see some real numbers on DX10 vs DX9
DerekWilson - Thursday, July 5, 2007 - link
there are applications where the 2900 xt does outperform its competition, as is shown by call of juarez. it really depends on how developers go forward. we'll have to wait and see what happens. judging the future of AMD/NVIDIA competition under dx10 isn't really feasible with only 3 apps to go by.
one thing's for sure though, we'd love to see better performance out of the mainstream parts from both camps. And having some parts to fill in the gap between the lower and higher end hardware would be nice too.
defter - Thursday, July 5, 2007 - link
I think the point here is that many claimed that "R6xx is designed for DX10, don't judge it based on DX9 performance, blah blah blah". Those claims gave the impression that the relative DX10 performance of the R6xx series would be much better than its DX9 performance. Your tests show that, on average, R6xx takes a HIGHER performance hit from moving to DX10. Thus, relative to the competition, R6xx is even SLOWER under DX10 than it was under DX9.
DerekWilson - Thursday, July 5, 2007 - link
this is true -- our current information shows that AMD fares worse relative to NVIDIA under DX10 than under DX9.
rADo2 - Thursday, July 5, 2007 - link
"there are applications where the 2900 xt does outperform its competition" - where? 2900XT has 22FPS, 8800GTX 24FPS nad 8800ULTRA 26FPS. Despite "crippled for NVIDIA" / "paid by ATI" I see still green camp to outperform ATI.And in Lost Planet NVIDIA has 2x (!) better performance.
It is not even worth considering ATI for purchase.
smitty3268 - Thursday, July 5, 2007 - link
Read the comment again: the point is that the 2900XT does not compete against the 8800GTX; it competes against the 8800GTS, which it did outperform in that test. It certainly isn't the fastest card available, but I could also make a statement like "The GeForce 7300 Go outperforms its competition" without saying it's the fastest thing available. I'm just saying it beats the cards in a similar price range.
KeithTalent - Thursday, July 5, 2007 - link
Um, what about price? Last time I checked, the 8800GTX still costs about $150 more than the 2900XT. I will not even bring up the Ultra, which is still way overpriced. So for $150 less you get a card that competes with the GTX some of the time and is more than capable of playing most games maxed out at high resolution. That is why the 2900XT is worth considering for purchase.
KT
rADo2 - Thursday, July 5, 2007 - link
Problem is, the 2900XT is many times NOT playable. Its performance is sometimes close to NVIDIA's, sometimes 2x lower, and this applies to both DX9 and DX10. Of the 60+ games I own, I can assume ATI would "suck" on 30 of them. Look at Lost Planet with AA: 22FPS versus 40-50FPS is a huge difference in playability. On an 8800GTX/ULTRA you can play even at 1920x1200 (30FPS); with the 2900XT even 1280x1024 gives you problems.
KeithTalent - Thursday, July 5, 2007 - link
Well, I have two of them and they work more than fine on every single game I have tried so far, including demos, betas, and a couple of older games (using Catalyst 7.6). Lost Planet is a POS port anyway, but when I ran the test benchmark in DX9 with CrossFired 2900XTs I had frame rates well above 40 with everything maxed at 1920x1200, so I am somewhat confused by the numbers here. I will have to wait until I am home to check my exact numbers, but they were much higher than what was presented here. Maybe there is something wonky with the beta drivers?
I'll post back tonight once I have verified my numbers.
KT
DerekWilson - Thursday, July 5, 2007 - link
We didn't use the demo benchmark, and the release version (afaik) does not include an updated version of the benchmark either. For the Lost Planet test, we had to use FRAPS running through the snow. This will absolutely give lower performance, as the built-in benchmark also runs through a couple of indoor scenes with higher framerates.
I mentioned FRAPS on the test page, but I'll add to the Lost Planet section the method we used for testing.
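For anyone curious how a FRAPS-style run turns into a single number, here is a minimal sketch; it assumes a frametimes log with one cumulative millisecond timestamp per frame (roughly what FRAPS writes out), and the file name is made up:

    # Average FPS over a captured run: frames rendered divided by elapsed seconds
    def average_fps(timestamps_ms):
        elapsed_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
        return (len(timestamps_ms) - 1) / elapsed_s

    times = []
    with open("lostplanet_frametimes.csv") as log:  # hypothetical log file
        for line in log:
            fields = line.strip().split(",")
            try:
                times.append(float(fields[-1]))  # cumulative time in ms for this frame
            except ValueError:
                continue  # skip the header line
    print(round(average_fps(times), 1))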
defter - Thursday, July 5, 2007 - link
Do Intel CPUs currently outperform AMD or not? After all, a $200 AMD CPU is about as fast as a $200 Intel CPU.... It's natural that slower parts have a good price/performance ratio compared to the competition, since otherwise nobody would buy them. However, this has nothing to do with which one is fastest...
KeithTalent - Thursday, July 5, 2007 - link
Not sure what you are getting at; I was responding to this ridiculous statement: "It is not even worth considering ATI for purchase." Which is completely untrue, because price can be a big consideration.
With respect to CPUs, if you spend an extra $50-$100 for the better Intel processor, you are getting substantially better performance (I know this from experience), while if you spend $150 more for a GTX, you are getting only marginally better performance.
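A quick back-of-the-envelope way to look at that trade-off (all prices and frame rates below are made-up examples, not benchmark results):

    # Marginal cost of each extra frame per second when stepping up to a pricier card
    def dollars_per_extra_fps(price_a, fps_a, price_b, fps_b):
        return (price_b - price_a) / (fps_b - fps_a)

    # Hypothetical: a $400 card averaging 50 fps vs a $550 card averaging 55 fps
    print(dollars_per_extra_fps(400, 50, 550, 55))  # 30.0 dollars per extra fps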
KT