Well, my thought is that if AMD were to release this chip on SOI/HKMG at 32/28nm in an MCM config at 4 GHz, it would leave NVIDIA wondering what it did to deserve it. And I wonder... why the heck not? These cards would be business-grade material, and with decent silicon they could ask enormous prices. Plus they could probably stick the whole thing in dual config on one single card (possibly within the 300 W PCIe 2.0 limit)... that would be a 40-50 TFLOPS card, and with the new GDDR5 (5 GHz+) it should qualify as a damn monster. I would also expect ~$2-3k per such beast, but I think it's worth it. 50 TFLOPS/card, 6 cards/server... 3 PFLOPS/cabinet... hrm... well... it wouldn't be fair to compare it to general-purpose supercomputers... but you could definitely ray-trace Avatar directly into 4x-HD 3D in real time and probably make it look even better in the process...
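Taking the post's hypothetical numbers at face value (50 TFLOPS per card, 6 cards per server, and an assumed 10 servers per cabinet, which the post doesn't state), the cabinet figure does check out:

```python
# Back-of-the-envelope check of the speculative numbers above (not real products).
tflops_per_card = 50          # hypothetical figure from the post
cards_per_server = 6
servers_per_cabinet = 10      # assumed; needed to reach the quoted 3 PFLOPS

tflops_per_server = tflops_per_card * cards_per_server
pflops_per_cabinet = tflops_per_server * servers_per_cabinet / 1000
print(tflops_per_server, "TFLOPS/server,", pflops_per_cabinet, "PFLOPS/cabinet")
# 300 TFLOPS/server, 3.0 PFLOPS/cabinet
```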
Clearly nobody buying this card is going to put Crysis on "Gamer Quality"; they'll put it on the max it can go. Why is AT still the only tech site in the whole world using "Gamer Quality" with a card that has enough power to run a small town?
I'm a professional L4D player, and I know the Source engine delivers very high frame rates on today's cards. The test becomes silly because there is no difference at all between 60 and 300+ fps. So it all comes down to minimum fps.
I suggest that you record a demo in map 5 of Dead Air, with the 4 Survivors defending with their backs to the boundary of the first crashed plane's position. The main player for the recording gets vomited on by a Boomer, another player throws a pipe bomb near him, and another also throws a Molotov near him. Full force of zombies (only 30), 2 Hunters, 1 Smoker, and 1 Tank attacking. (When a player becomes the Tank, the boss he was controlling becomes a bot and keeps attacking the Survivors.)
This is the heaviest practical scene in L4D, and it just makes sense for the benchmark. You don't really need 8 players to arrange the scene; I think using cheats is much easier.
I know it will take time to re-benchmark all of those cards for the new scene, but I don't think it will be too much. Even if you can't do this, please reply to me.
As AnandTech's 'Triple Buffering: Why We Love It' article explains, there is a very slight advantage to running at more than 60 fps even though the display is only refreshing at ~60 Hz. If the GPU finishes rendering a frame immediately after a display refresh, that frame will be 16 ms stale by the time the display shows it, since the next one won't be ready in time. If someone started coming around the corner while that frame was stale, it would be 32 ms (the stale frame, then a fresh frame) before the first indicator showed up. This is simplified, since with v-sync off you'll just get torn frames, but the idea is still there.
To me, it's not a big deal, but for a person with a quick reaction time of 180 ms, 32 ms of waiting for frames to catch up could be significant, I guess. If you increase the fps past 60, you're more likely to have a fresh frame rendered right before each display refresh.
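To put rough numbers on that reasoning, here is a simplified worst-case model (it ignores tearing and assumes the unluckiest alignment between render completion and refresh):

```python
# Worst case: an event just misses the frame being rendered, so you wait one
# full frame time for a fresh frame, then up to one refresh interval for the
# display to actually show it.
def worst_case_delay_ms(fps, refresh_hz=60):
    frame_time = 1000.0 / fps
    refresh_interval = 1000.0 / refresh_hz
    return frame_time + refresh_interval

for fps in (60, 120, 300):
    print(f"{fps:3d} fps -> ~{worst_case_delay_ms(fps):.0f} ms worst case")
# ~33 ms at 60 fps, ~25 ms at 120 fps, ~20 ms at 300 fps on a 60 Hz display
```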
OK, so seriously, did you really take a $600 video card and benchmark Crysis Warhead without turning it all the way up? The chart says "Gamer Quality + Enthusiast Shaders". I'm wondering if that's really how you benchmarked it, or if the chart is just mislabeled. If it isn't, the claim "Crysis hasn't quite fallen yet, but it's very close" seems a little odd, given that you still don't have all the settings turned all the way up.
Incidentally, I'm running a GeForce 9800 GTX (not the plus) and a Core 2 Duo E8500, and I play Warhead with all settings on Enthusiast, no AA, at 1600x900. At those settings, it's playable for me. People constantly complain about performance on that title, but really, if you just turn down the resolution, it scales pretty well and still looks better than anything else on the market, IMHO.
Carnildo writes:
> ... I was the administrator for a CAVE system. ...
Ditto! :D
> ... ported a number of 3D shooters to the platform. You haven't lived until you've seen a life-sized opponent come around the corner and start blasting away at you.
Indeed, Quake2 is amazing in a CAVE, especially with both the player and the gun separately motion-tracked - crouch behind a wall and be able to stick your arm up to fire over the wall - awesome! But more than anything, as you say, it's the 3D effect which makes the experience.
As for surround-vision in general... Eyefinity? Ha! THIS is what you want:
http://www.sgidepot.co.uk/misc/lockheed_cave.jpg
270-degree wraparound, 6-channel CAVE (Lockheed flight sim). I have an SGI VHS demo of it somewhere, must dig it out sometime.
Oh, YouTube has some movies of people playing Quake2 in CAVE systems. The only movie I have of me in the CAVE I ran is a piece taken of me using COVISE visualisation software:
http://www.sgidepot.co.uk/misc/iancovise.avi
Naturally, filming a CAVE in this way merely shows a double image.
Re people commenting on GPU power now exceeding the demands of a single display: what I've long wanted to see in games is proper modelling of volumetric effects such as water, snow, ice, fire, mud, rain, etc. Couldn't all this excess GPU power be channeled into ways of better representing such things? It would be so cool to have genuinely new effects in games such as naturally flowing lava, or an avalanche, a flood, a tidal wave, a storm, a landslide, etc. By this I mean having how the substance behaves governed by the environment in a natural way (physics), not hard-coded. So far, anything like this is just faked - the objects involved are not physically modelled and don't interact in any real way. Rain is a good example: it never accumulates or flows. Snow should have weight; flowing water should be able to make things move, knock you over, and so on.
One other thing occurs to me: perhaps we're approaching a point where a single CPU is just not enough to handle what is now possible at the top end of gaming. To move games beyond just ever-higher resolutions, maybe one CPU with more and more cores isn't going to work that well. Could there ever be a market for high-end PC gaming with 2-socket motherboards? I do not mean Xeon boards as used for servers, though. Just thoughts...
Ian.
Am I the only one waiting for TI to come out with a 3x3 grid of 1080p DLPs? You'd think if they can wedge ~2.2 million mini-mirrors on a chip, they should be able to scale that up to a native 5760x3240. Then they could buddy up with Dell and sell it as an Alienware premium package of display + computer capable of using it.
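For scale, that is roughly a 9x jump in mirror count over a single 1080p chip:

```python
# One micromirror per pixel on a DLP chip.
mirrors_1080p = 1920 * 1080     # ~2.07 million, close to the ~2.2M quoted above
mirrors_3x3 = 5760 * 3240       # ~18.7 million for a native 3x3 grid
print(mirrors_1080p, mirrors_3x3, mirrors_3x3 / mirrors_1080p)
# 2073600 18662400 9.0
```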
"This means that it’s not just a bit quieter to sound meters, but it really comes across that way to human ears too"
Have you considered using the dBA filter rather than just raw dB? dBA is weighted toward the tones the human ear is most sensitive to, which is why noise-oriented sites like SPCR use dBA instead.
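For reference, here is a small sketch of the standard A-weighting curve (the constants are the commonly published IEC 61672 values, quoted from memory, so treat them as approximate):

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting, in dB, for a frequency f in Hz."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0  # offset so A(1 kHz) is ~0 dB

print(round(a_weighting_db(1000), 1))  # ~0.0 dB: 1 kHz is the reference
print(round(a_weighting_db(100), 1))   # ~-19 dB: low-frequency rumble is heavily discounted
```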
This looks like a sweet card. Certainly ATI is taking control of the market.
One question though...
What happened to the Hydra by Lucid Logix? I haven't heard anything about it in a while. Theoretically, the Hydra should take 2 ATI cards and make them perform better than Crossfire can.
Wow - so we now have a card that plays the latest Crysis at 2560x1600 with 4xAA and the details up at a smooth, playable FPS. IMHO we've entered a new GPU generation, or at least new compared to the capabilities I'm used to.
Is there some way to set up a dual-monitor setup (2 x 1920x1080) to run in horizontal span mode (3840 x 1080), like you do with your three monitors, without an Eyefinity card? I'm currently running with a 4870 and haven't been able to find a way to do this.
I haven't tried it, but you should be able to do this easily using Catalyst. There's an option there to flip your screens for horizontal/vertical views and to duplicate/extend your screens. That should do it for you.
I haven't used the flip feature, but I use extend all the time because I'm hooked up to my TV.
I've spent hours trying to find a way to do it with Catalyst, and I can't find one. Right now it's on extend, which just leaves the secondary monitor as the desktop, with no taskbar on the bottom, and leaves me unable to play games at 3840x1080.
If anybody has an explanation or idea, I'd appreciate it.
To put it simply, no, it's not supported. The only way you'll be able to use the second screen is with games that are explicitly coded to support dual screens (Supreme Commander?).
Your only other option is to use a Matrox multi-monitor device (forgot the name) or an Eyefinity card, of course. No NVIDIA card will allow this either; it's not a driver issue.
Either way, two screens wouldn't be a nice experience anyway with the big bezel right in the middle. I can't imagine any type of game where that would work (not FPS, RTS, RPG, racing games, ...).
That's what I was afraid of. Thanks. And yeah, I wasn't planning on playing with 2 monitors, but I was considering getting a third. Oh well, probably better that I can't blow the cash :)
On the same topic, does anyone know why my dual-screen setup resets after PC restarts/shutdowns? In addition, I HAVE to select duplicate first, apply it, and then switch to extend. Selecting extend first doesn't enable it. I'm using the current driver, but this happened with previous versions as well.
I've Googled and read many forums but haven't encountered many users having this particular issue. It has been consistent across XP, Vista and Win7 as far as I can remember.
4870 with a Dell 30" and a Samsung 73" 1080p TV. Temperatures are around 56C for the video card. If you have any tips I'd appreciate them. Thanks in advance.
I think we've all seen enough data on this poorly programmed game to remove it from the tests. There's really no point, as even this card, the 5970, can choke on it. Seriously, utter crap programming.
Three 30" monitors? Dude, I had to work hard just to afford a single 24" monitor. And because I'm salary I don't get overtime (although they do make me work it). If I get a second job flipping burgers at the local fast food joint I might be able to afford two more 24" displays. I bet Eyefinity would still look awesome on that.
And I was only kidding about you getting paid too much. The article was great. Now I am eagerly awaiting Nvidia's response.
But PLEASE MARK THE REVIEWED CARD so it STANDS OUT in the GRAPHS next time. Going top-down through the list and reading each caption to finally find the card for EACH GRAPH is really annoying.
Anand's graphs are really annoying at times. I wish they were more consistent. Xbitlabs' are easy and consistent, which makes it a breeze for people like me who just want to look at the specifics.
I concur, the graphs can be a little confusing without some sort of color coding...
Here is a suggestion, Ryan: use light green and orange for NVIDIA and AMD single cards, dark green and red for SLI and CrossFire setups, and lastly blue for the card being reviewed (you can differentiate with light and dark blue for the same card in CrossFire or when overclocked). I think the graphs would look much better this way, and it's a very easy feature to implement anyway...
Great and interesting article, but I'm confused about this Eyefinity situation again.
You state that your monitor didn't support mini-DP, just 'regular' DP, and go on to talk about buying an adapter. Yet, it appears (according to the wiki page at any rate) that mini-DP is electrically identical to regular DP, so only a mini-DP to regular DP cable would be needed. Indeed, other reviews of the 5970 show such an adapter cable included with the card...
What's the score, and why the comment that you need an *active* adapter to go from mini-dp to regular dp?
The claim that the mini-DP output itself needs an active adapter isn't quite right, though. You need active adapters because RV870 only supports 2 clock sources for display outputs. However, DP (or mini-DP) doesn't need any such clock source, because it uses a fixed link clock independent of display timings. Hence, if you want to connect more than 2 monitors using DVI/HDMI/VGA you need an active DP-to-something adapter, but for DP outputs this isn't needed. And mini-DP is electrically the same as DP anyway.
That kind of explains it, but I'm still confused about the whole thing. If your third monitor supported mini-DP, then you wouldn't need an active adapter, right? Why is this, when mini-DP and regular DP are the 'same' apart from the actual plug size? I thought the whole timing issue was only relevant when wanting a third DVI (/HDMI) output from the card.
All the single-game tests are great and all, but for once I would love to see AT run a series of video card tests where multiple instances of games like EVE Online are running. While single-instance tests are great for the FPS crowd, all us crazy high-end MMO players need some love too.
Jacerie, the problem with benching MMOs, and why you don't see more of them, is all the other factors that come into play. You now have to deal with server latency, and you have no control over how many players are on the server at any given time when running benchmarks. There are just too many variables, which would make the benchmarks unrepeatable and invalid for comparison purposes!
I think what he is more interested in is how well the card can render multiple instances of the game running at once. This could easily be done with a private server, or even a demo written with the game engine. It would not be real-world data, but it would give an idea of performance scaling when multiple instances of a game are running. Being an occasional "dual-boxer" myself, I wouldn't mind seeing the data.
That's exactly what I was trying to get at. It's not uncommon for me to be running at least two instances of EVE with an entire assortment of other apps in the background. My current 3870X2 does the job just fine, but with Windows 7 out and DX11 around the corner, I'd like to know how much money I'm going to need to stash away to keep the same level of usability I have now with the newer cards.
The cards only look this fast because 95% of games are DX9 Xbox ports. Crysis has still been the most demanding game out there for quite a while (it should be added that it has a very lazy engine). In Age of Conan, the difference between DX9 and DX10 is more than half the fps (with plenty of those effects on screen, even down to 1/3). The advanced shader effects they show off in demos are actually much more demanding on the GPU than DX9 shaders; they just don't mention it. It will be the same with DX11: a full DX11 game with all those fancy shaders will be at Crysis levels of performance.
Unfortunately, since playing MW2, my question is: are there enough games that are sufficiently superior on the PC to justify the initial expense and power usage of this card? Maybe that's where Eyefinity for AMD and PhysX for NVIDIA come in: they at least differentiate the PC experience from the console.
I hate to say it, but to me there just do not seem to be enough games optimized for the PC to justify the price and power usage of this card, unless one has money to burn.
Having not bought MW2, I can say conversely that the lack of differentiation between console and PC features hurts game sales. According to news reports, in the UK the PC version of MW2 accounts for less than 3% of all sales. This is neither representative of the PC share of the gaming market (which should be ~25% of all "next-gen" sales based on quarterly revenue reports from publishers), nor of the size of the installed base of modern graphics cards capable of running MW2 at a decent frame rate (which should be close to the size of the entire console market based on JPR figures). Admittedly the UK has a proportionately larger console share than the US or Germany, but I can't imagine PC sales of MW2 are much better globally.
I am sure executives will be eager to blame piracy for the lack of PC sales, but their target market knows better...
[quote]Unfortunately, since playing MW2, my question is: are there enough games that are sufficiently superior on the PC to justify the initial expense and power usage of this card? Maybe that's where Eyefinity for AMD and PhysX for NVIDIA come in: they at least differentiate the PC experience from the console.
I hate to say it, but to me there just do not seem to be enough games optimized for the PC to justify the price and power usage of this card, unless one has money to burn.[/quote]
Yes, this is exactly my thought. They can tout DX11, fancy-schmancy Eyefinity, PhysX, everything except a free lunch, and it doesn't change the fact that the lineup for PC gaming is bland at best. It sucks; I love gaming on PC, but it's pretty much a dead end at this time. No thanks to every 12-year-old who curses at you on Xbox Live.
My main reason to want this card would be to drive my 30" LCDs. I have two Dells already and will get another one early next year. I don't actually play games much, but I like having the desktop space for my work:
-VM's at higher resolution
-more open windows without switching too much
-watch movie(s) while working
-bigger font size but maintaining the aspect ratio of programs :)
Currently I have my main setup on one 30" plus my 73" TV. The TV is only 1080p, so space is a bit limited. Plus, working on the TV sucks big time :/
I am glad ATI is able to keep competing as that helps keep prices at a "decent" level.
Still, for all of you so amazed by Eyefinity, do yourselves a favor and try 3D Vision with a big-screen DLP; then you will laugh at what you thought was cool and "3D" before.
You can have 100 monitors, but it is still just a flat world... time to join REAL 3D gaming, guys!
Back in college, I was the administrator for a CAVE system. It's a cube ten feet on a side, with displays on all surfaces. Combine that with head tracking, hand tracking, shutter glasses, and surround sound, and you've got a fully immersive 3D environment.
It's designed for 3D visualization of large datasets, but people have ported a number of 3D shooters to the platform. You haven't lived until you've seen a life-sized opponent come around the corner and start blasting away at you.
But Ryan, I feel you might need to edit a couple of your comparison comments between the 295 and this new card. Based on the comments in several previous articles, quite a few readers do not look at (or understand) the charts and instead rely on the commentary below them. Here are some examples:
"Meanwhile the GTX 295 sees the first of many falls here. It falls behind the 5970 by 30%-40%. The 5870 gave it a run for its money, so this is no surprise."
This one for Stalker is clear and concise. I'd recommend you repeat this format for the rest of the games.
"As for the GTX 295, the lead is only 20%. This is one of the better scenarios for the GTX 295."
This comment was for Battleforge and IMO is confusing. To someone not reading the chart it could be viewed as saying the 295 has a 20% advantage. Again I'd stick with your Stalker comment.
"HAWX hasn’t yet reached a CPU ceiling, but it still gets incredibly high numbers. Overclocking the card gets 14% more, and the GTX 295 performance advantage is 26%."
Again, this could be seen as the 295 being 26% faster.
"Meanwhile overclocking the 5970 is good for another 9%, and the GTX 295 gap is 37%."
This one is less confusing as it doesn't mention an advantage, but it should just say 37% slower.
Finally I think you made a typo in the conclusion where you said this:
"Overclock your 5970 to 5870 speeds if you can bear the extra power/heat/noise, but don’t expect 5970CF results."
I think you meant 5870CF results...
Overall, though, the article is really interesting as we've finally hit a performance bottleneck that is not so easily overcome (due to power draw and ATX specifications). I'm very pleased, however, that you mention first in the comments that this truly is a card meant for multi-monitor setups only, and even then, may be bottlenecked by design. The 5870 single card setup is almost overkill for a single display, and even then most people are not gaming on >24" monitors.
I've said it for the past 2 generations of cards, but we've pretty much maxed out the need for faster cards (for GAMING purposes). Unless we start getting some super-high-res goggles that are reasonably priced, there just isn't much further to go due to display limitations. I mean, honestly, are those slightly fuzzy shadows worth the crazy performance hit in an FPS? I honestly have a VERY difficult time seeing a difference in the first set of pictures of the soldier's helmet. The pictures are taken slightly off angle from each other, and even then I don't see what the arrow is pointing at. And if I can't see a significant difference in a STILL shot, how the heck am I going to see a difference in-game!?
Thanks for the edits, I've made some corrections for Ryan that will hopefully make the statements more clear.
I agree that the need for a faster GPU on the desktop is definitely minimized today. However I do believe in the "if you build it, they will come" philosophy. At some point, the amount of power you can get in a single GPU will be great enough that someone has to take advantage of it. Although we may need more of a paradigm shift to really bring about that sort of change. I wonder if Larrabee's programming model is all we'll need or if there's more necessary...
One of the main things I'd like to see GPU drivers implement is an artificial framerate cap option. These >100fps results in several of the tests at insane resolutions are not only pointless, but add unnecessary heat and stress to the system. Drop back down to the normal resolutions that >90% of people have and it becomes even more wasteful to render 150fps.
I always enable V-sync in my games for my LCD (75Hz), but I don't know if this actually throttles the GPU so it doesn't render more than 75fps. My hunch is that in the background it's rendering as fast as it can but only showing frames at the display's refresh rate.
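A cap like that is conceptually just a per-frame sleep; here is a minimal sketch of the idea (application-side, not how any particular driver implements it):

```python
import time

def render_frame():
    pass  # stand-in for the game's actual rendering work

def run_capped(target_fps=60, frames=300):
    """Render a frame, then sleep away whatever is left of the frame budget."""
    frame_budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)  # GPU/CPU idle instead of racing ahead

run_capped()
```

As for v-sync: with plain double buffering the buffer swap blocks until the next refresh, so the GPU really is held to the refresh rate; with v-sync off (or some triple-buffering schemes) it keeps rendering flat out.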
I tried out full-screen FurMark with vsync on and off (at 640x480) and the difference was 7 degrees Celsius. I have a custom cooler on the 4850 and a 20cm side fan on the case, so that's quite a lot.
Thanks for the reply, Zool, I was hoping that was the case. So it seems like if I ensure vsync is on, I'm at least limiting the GPU to the refresh rate of the LCD. Awesome!
I am going to post this again here, since it may not have been noticed when I first posted it as a reply on a previous page. Hope I won't hurt anyone's feelings :)
I thought everyone knew about FurMark and ATI by now. It used to be like this on the 4870 series too.
It went like this: at first there were a few reports of 4870 (X2) cards dying when running FurMark. Further investigation showed that it was indeed FurMark causing the VRMs to heat up to insane levels and eventually killing them. Word reached ATI, and from that point on ATI has intentionally throttled their cards when they detect FurMark, to prevent the damage.
In fact, the heat load FurMark puts on the VRMs is unrealistic; no game is able to heat up the VRMs to the level FurMark does. OCCT used the same method (or maybe even integrated FurMark) to test for stability (in their own opinion, of course).
So beware of FurMark and OCCT if you have an HD4K or 5K card.
The term "hardware virus" is rightfully applicable to FurMark when it comes to the HD4K (and perhaps 5K).
I want to add that VRM overheating is quite tricky, since normally people only check GPU temps.
When you run FurMark you'll notice that GPU temps stay in an acceptable range while, at the same time, your VRMs are cooking without you knowing about it.
So remember to always check your VRM temps when running graphics stability tests like FurMark or OCCT's graphics test, especially when you're overclocking the card.
The temp reading displayed by Everest is an averaged figure; the junction temp of the slaves is the critical issue. Even though the average temp may appear to be within bounds, there is the possibility that one of the slaves is running abnormally. Volterra keep their specifications under NDA. What I do know is that in the usual configuration, if one slave shuts down, the remaining slaves take the load. The result is not usually pretty. I think ATI may have implemented throttling to prevent the kind of burnouts users experienced running OCCT GPU tests on the last generation.
Personally, I think the 3-phase Volterra solution used on the 5970s is right at its limit for current draw (circa 135 amps per GPU). I'd wait for non-reference designs with enhanced power delivery if you plan on overclocking this card long term or plan to stress it heavily when OC'd.
Anyone who has a clue is buying a proper case like the Storm Sniper Black Edition and fitting this card with heaps of space to spare.
I also recommend a case with positive, or at least neutral, air flow.
The Storm Sniper has a dust filter on its 20cm side fan to push more air in and aid the GPU fan's airflow, which exits through the vent holes at the back of the card by the DVI ports.
I plan on getting one of these in another 7-8 months. The way I see it, despite being bleeding edge, ATI has a sure winner in this card but will produce only limited quantities, so I'm kind of 'worried' about the availability and price down the line.
Wait till the end of December. Apparently the yields of Cypress are going to improve a lot by then, so the prices will either remain the same or drop a bit.
However, since nVidia will have nothing real for a long time, I don't foresee a drop in prices on ATI parts. Given the estimates, NV will have Fermi out in April 2010, but not in significant quantities for a while after that. I'm gonna grab a 5970 around Xmas. :)
Dude, I hate you already :p I just bought a 5750. I don't have the components that wouldn't bottleneck a 5970, so I'm going to have to wait a while. Plus there's the whole three-new-monitors thing for Eyefinity.
Since AMD is binning their chips to get the 5970 within spec, I suppose it wouldn't make sense to make a 5950 SKU since a 5850 is simply a re-harvested 5870 (which failed the initial binning process), and 2x5850 would be out of the ATX spec anyway.
Anyway, a great card for those who can afford it, and have the proper case and PSU to handle it.
With 512 SPs, 6.67% more than a GTX 295, I don't see Fermi having any chance of beating the 5970. nVidia will need a dual-Fermi card to dethrone the 5970, and that's not happening until Q3 or Q4 2010.
nVidia has targeted the wrong, niche market rather than gamers. Sooner or later monitors without bezels will come out, and then Eyefinity will make even more sense. It's really funny that it's RV770s, aka HD 4870s, in one of the 5 fastest supercomputers, and not Tesla.
They took a long, deep sleep after the 8800 GTX, and now they're paying for it.
Unfortunately, PC gaming is almost dead. Look at Call of Duty's release. Look at Dragon Age, which is also available on consoles. Sure, the PC version might look a bit better, but when you spend as much on a video card as someone else does on an entire system that can download movies and demos, act as a media box, and play Blu-rays... you get the point.
Unfortunately, PC gaming has been declared "nearly dead" for decades. It hasn't died, and as much as console fanboys will rage on hearing this, it isn't going to either.
PC gaming is a niche industry; it always has been and always will be. Yes, console games do tend to be more profitable, which means that most games will be developed for consoles first and then ported to the PC. That doesn't mean there will never be games developed for the PC first (or even exclusively), or that there's no profit to be had in PC games.
Yes, it can be cheaper to get a console than a mid-level gaming PC, just like it can be cheaper to just buy some econobox off the lot than to buy or build your own hot rod. Sure, one market is much larger and more profitable than the other, but there's still money to be made off of PC games and gearheads alike, and so long as that's true neither will be going away.
PC gaming is no longer an isolated economy, though. That changes things. With most games being written with consoles in mind, there isn't the broad-based software push for hardware advance that there was at the dawn of 3d acceleration.
I could give you dozens of great reasons to have upgraded from a NV4 to a NV15 back in the day, but the upgrade from a G80 to 5970 today? ~$800 when you factor in the PSU, and for what? Where's the must-have game that needs it? TNT to Geforce 2 was two years -- it's now been 3 since the release of the 8800, and there's been no equivalent to a Half Life, Quake 2, Deus Ex, Homeworld, Warcraft III, or WoW.
Unfortunately, this is precisely the problem. When looking at AAA (large budget) games, six years ago PC game sales were predominantly PC exclusives, with some well known console ports (e.g. Halo, Morrowind). Twelve years ago PC game sales were almost entirely exclusives. Today the console ports are approaching the majority of high profile PC titles.
Being multiplatform isn't necessarily a detriment for a console game. After all, having a larger budget allows more money to be spent on art and polishing the code to get the best performance on console hardware. In most cases, however, the PC version of a multiplatform title is almost always an afterthought. Little to no effort is spent redesigning the user interface and rebalancing game play because of the different controls. Shaders are almost never rewritten to take advantage of effects that could only be accomplished with the power of the graphic cards in modern PCs when porting. At most we seem to get better textures at somewhat higher resolutions.
The biggest problem with multiplatform development, however, is that multiplatform games are almost always aimed at the lowest common denominator in terms of both technology and content. All this does is produce the same game over and over again -- the clichéd rail shooter in a narrow environment with a macho/reluctant superhuman protagonist thrown against hordes of respawning mooks.
Based on the quarterly reports of sales revenue from the major publishers (EA, Activision and Ubisoft), PC games sales are comparable to PS3 game sales. The PS3, however, has several more exclusives because Sony owns several games studios and forces them to release exclusives. AMD and nVIDIA do not, much to PC gaming's detriment.
Hehe, 5970 CF to power three screens, now that sounds like a killer setup.
Mind you, that's burning 600+ watts for the graphics alone. What's the CPU supposed to live on? The BIOS battery?
M.
It really seems weird... :( I've seen some reviews that achieved much better overclocks than the standard 5870 clocks, and their tests seem to be fine without any "throttling" problems.
It's possible, but the 850TX is a very well-regarded unit. If it can't run a 5970 overclocked, then I surmise that a lot of buyers are going to run into the same problem. I don't have another comparable power supply on hand, so this isn't something I can test with my card.
Anand has a 1kW unit, and of course you know how his turned out.
To be frank, we likely would never have noticed the throttling issue if it weren't for the Distributed.net client. It's only after realizing that it was underperforming by about 10-20% that I decided to watch the Overdrive pane and saw it bouncing around. Other reviewers could be hitting throttling too and just not realize it.
Seems iffy then, since most reviews put it at 900 MHz core and 5 GHz+ on the RAM with only a modest overvolt to 1.16 V. I would think ATI wouldn't bother putting in 3 high-quality VRMs and Japanese capacitors if they didn't test it thoroughly at the specs they wanted it to OC to.
ATI went all out building these 5970s; the components are top notch and the chips are the best of the bunch. I'm surprised they did this, as they are essentially selling you 2x 5870 performance (IF your PSU is good) at $599 when 2x 5870 in CF would cost $800. They have no competitor at the top, so why not price this card higher, and why even bother putting in quality parts that almost guarantee 5870 clocks?
I believe it's ATI's last nail in the nV coffin, and they hammered it in really hard.
Too much discussion about adapters for the mini-DisplayPort. The 27" iMac has such an input port and a resolution of 2560x1440, and it seems a sin not to test them together. (Not that I'm blaming AnandTech or anything, since I'm sure it's not that easy to get an iMac for testing.)
Look at all the fingerprint smudges on the nice card! I've started to notice the hand models that corporations use to hold their products. The hands holding the iPods on the Apple site? Flawless, perfect nails and cuticles. Same with the fingers grasping the Magny-Cours chip.
I was impressed that some of the CrossFire benchmarks actually showed improvement. If Eyefinity works with the 5970, does it also work with the card in CrossFire?
Bear in mind that it also took him 1.3v to get there; the AMD tool doesn't go that high. With my card, I strongly suspect the issue is the VRMs, so more voltage wouldn't help.
And I'm still trying to get an answer to the Eyefinity + 5970CF question. The boys and girls at AMD went home for the night before we realized we didn't have an answer to that.
I thought everyone knew about FurMark and ATI by now. It used to be like this on the 4870 series too.
It went like this: at first there were a few reports of 4870 (X2) cards dying when running FurMark. Further investigation showed that it was indeed FurMark causing the VRMs to heat up to insane levels and eventually killing them. Word reached ATI, and from that point on ATI has intentionally throttled their cards when they detect FurMark, to prevent the damage.
In fact, the heat load FurMark puts on the VRMs is unrealistic; no game is able to heat up the VRMs to the level FurMark does. OCCT used the same method (or maybe even integrated FurMark) to test for stability (in their own opinion, of course).
So beware of FurMark and OCCT if you have an HD4K or 5K card.
The term "hardware virus" is rightfully applicable to FurMark when it comes to the HD4K (and perhaps 5K).
If so, then one could suspect the same issue shows up in games because the VRMs of this particular card heat up and throttle it. Perhaps there's not enough contact between the VRMs and the heatsink, or a complete lack of TIM on the VRMs by accident. I would have reseated the HSF if I owned that card.
I suspect it is VRM/heat related. The 'biggest' slaves Volterra currently supplies are rated at 45 amps each, AFAIK. Assuming ATI used the 45 amp slaves (which they must have), you've got around 135 amps on tap. Do the math for OCP or any related throttling effects kicking in (rough numbers below). Essentially, 1.10 V on the GPU puts you at roughly 150 W per GPU before things either shut down or need to be throttled (it depends on how it's implemented as it nears the peak). Any way you look at it, ATI have used a high-end VRM solution, but 4 slaves per GPU would have given a bit more leeway on some cards. I wonder what the variance in leakage is from card to card as well. Seeing as there's not much current overhead in the VRM (or at least there does not appear to be), a small change in leakage would be enough to stop some cards from doing much overclocking on the stock cooler.
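Rough numbers for that "do the math" step, under the post's assumption of three 45 A slaves per GPU:

```python
slaves_per_gpu = 3
amps_per_slave = 45.0     # assumed rating, per the post above
vgpu = 1.10               # volts

max_current = slaves_per_gpu * amps_per_slave   # 135 A on tap
max_power = vgpu * max_current                  # P = V * I
print(max_current, "A ->", max_power, "W per GPU before OCP/throttling")
# 135.0 A -> 148.5 W per GPU, i.e. the ~150 W figure quoted above
```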
It could be your PSU; some "single rail" PSUs aren't in fact using a single rail but several rails, each with a max amp limit. It's deceptive.
Guru3D uses a 1200W PSU and manages a 900 MHz core, which is typically what a 5870 overclocks to on air. Essentially the chips are higher-quality Cypress; maybe you should retry with a different PSU, and then conclusions can be drawn.
Newegg lists 5 different models; they come and go quite fast.
I managed to get one of them into my shopping cart.
All it would need now is paying for (which I don't want to do...).
So yeah, they are not exactly easy to get, but far from impossible.
So, not a paper launch.
Be real, it's day two after the launch, and you CAN get them. That's not bad at all.
M.
Er, have you noticed the "Not in Stock" or "Pre-order" when you've gone to order one? You might get a 5850, but try finding a 5870 without having to pay a jacked-up premium over MSRP. Best of luck.
Since the 5870 seems to be in such great supply, I would like for someone to post a link where I can actually buy one of these. I have been trying to buy one for a month and haven't been able to find one.
These are just some of the sellers in my country who sell those so-called mythical ATI cards online (not counting the gazillion others sold at retail). You may want to argue that they won't ship to you in the United States, but then again, the likes of Newegg don't ship here either.
If you are desperate enough, I can help you obtain one of those cards. Want to take the offer?
by saying "another paper launch" you were implying that the previous launches were paper. So you were talking about the 5870. As they are and have been available, they were not paper launches. So even if the 5970 is a paper launch (it isn't) you can't very well call it another one
It would be closer. 4800x2560 would end up at a 1.875 AR, compared to 1.78 for 16:9 and 1.6 for 16:10. I think that 16:9 content stretched to fill 4800x2560 should look fine (about the same as 16:10 stretched to fill a 16:9 monitor).
Of course, the more difficult question is how to put three 30" LCDs into portrait mode. You would need a different base stand -- none of the 30" LCDs I've seen allow you to rotate the display into portrait mode, probably because the LCDs are two feet wide.
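The aspect-ratio figures above fall straight out of three 2560x1600 panels turned on their sides:

```python
# Three 30" 2560x1600 panels rotated to portrait, placed side by side.
width, height = 3 * 1600, 2560
print(width, "x", height, "->", round(width / height, 3))  # 4800 x 2560 -> 1.875
print(round(16 / 9, 3), round(16 / 10, 3))                 # 1.778 and 1.6 for comparison
```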
Why not be inventive and make a stand to hold 3 x 30" LCDs? I don't mean you specifically, of course, but whoever would want to have one. It really is not that difficult... just a little planning, and the ability to work with steel (heavy) or quality aluminum. If someone didn't have the skills to make brackets etc., they could even draw something up, give it to a local fabricator, and be on their merry way...
Personally, I like the first option, mainly because I enjoy working with materials like that (metals, wood, plastics, etc.). Not to mention the fact that it can cost far less doing it yourself.
I understand it's entirely possible. My point is merely that it's yet another expense. I don't think 3x30" with EyeFinity is going to be anything but a very, *VERY* niche product. LOL.
5970 = $600
3 x 30" = $3000 (minimum)
3 x Stands = $120 to $600
So besides having the money, you need the space (and possibly time). I'd say $4000+ just for the GPU and LCDs is where the costs start, and naturally you would want a killer system (i7-920 with overclocking, or i7-975). But hey, you want the best of the best, there you have it. Until the next big thing comes along.
Speaking of which, what about 30" LCDs with 120Hz and 3D Vision? LOL.... (No, I'm not saying that's out or coming soon, but it could be.)
I doubt 30" @ 120 will be here soon. 1920x1200 @ 120fps is the theoretical limit of a dual link DVI. 2560x1600 @ 120 would require a quad link DVI, or a twin HDMI, or a twin DP connection. And it would still have to be a TN panel as of 2009, because IPS just isnt fast enough for 120 yet
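Rough pixel-clock math behind that limit (a ~15% blanking overhead is assumed here; real CVT/CVT-RB timings differ a bit):

```python
# Dual-link DVI tops out at roughly 2 x 165 Mpixel/s = 330 Mpixel/s.
DUAL_LINK_DVI_PIXEL_RATE = 330e6

def needed_pixel_rate(w, h, hz, blanking=1.15):
    return w * h * hz * blanking

for w, h in ((1920, 1200), (2560, 1600)):
    rate = needed_pixel_rate(w, h, 120)
    verdict = "fits" if rate <= DUAL_LINK_DVI_PIXEL_RATE else "exceeds dual-link DVI"
    print(f"{w}x{h} @ 120 Hz: {rate / 1e6:.0f} Mpixel/s ({verdict})")
# 1920x1200 @ 120 Hz ~ 318 Mpixel/s (just fits)
# 2560x1600 @ 120 Hz ~ 565 Mpixel/s (exceeds dual-link DVI)
```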
tcube - Monday, March 1, 2010 - link
Well my thought is that if amd would release this card on SOI/HK-MG in 32/28nm MCM config at 4 GHz it would leave nvidia wondering what it did to deserve it. And I wonder ... why the heck not? These cards would be business-grade-able and with a decent silicon they could ask for enormous prices. Plus they could probably stick the entire thing in dual config on one single card (possibly within the 300 W pcie v2 limit)... that would be a 40-50 Tflops card and with the new GDDR5(5ghz +) it should qualify as a damn monster. I would also expect a ~2-3k$ per such beast but I think its worth it. 50Tf/card, 6cards/server... 3Pflop/cabinet... hrm... well...it wouldn't be fair to compair it to general purpose supercomputers... buuut you could deffinatelly ray trace render avatar directly into HD4x 3d in realtime and probably make it look even better in the process...srikar115 - Sunday, December 6, 2009 - link
i agree with this reveiw ,here a complete summary i found is also intrestinghttp://pcgamersera.com/2009/12/ati-radeon-5970-rev...">http://pcgamersera.com/2009/12/ati-radeon-5970-rev...
srikar115 - Tuesday, April 20, 2010 - link
http://pcgamersera.com/ati-radeon-5970-review-suma...xpclient - Friday, November 27, 2009 - link
What no test to check video performance/DXVA? DirectX 11/WDDM 1.1 introduced DXVA HD (Accelerated HD/Blu-Ray playback).cmdrdredd - Sunday, November 22, 2009 - link
Clearly nobody buying this card is going to put Crysis on "Gamer Quality" They'll put it on the max it can go. Why is AT still the only tech site in the whole world who is using "Gamer quality" with a card that has enough power to run a small town?AnnonymousCoward - Sunday, November 22, 2009 - link
Why does the 5970 get <= 5850 CF performance, when it has 3200 Stream Processors vs 2880?araczynski - Saturday, November 21, 2009 - link
i look forward to buying this, in a few years.JonnyDough - Friday, November 20, 2009 - link
why they didn't just call it the 5880.Paladin1211 - Friday, November 20, 2009 - link
Ryan,I'm a professional L4D player, and I know the Source engine gives out very high frame rate on today cards. The test become silly because there is no different at all from 60 to 300+ fps. So, it all comes down to min fps.
I suggest that you record a demo in map 5 Dead Air, with 4 Survivors defend with their back onto the limit line of the position of the first crashed plane. The main player for the record will be vomitted on by boomer, another throws pipe bomb near him, another throws molotov near him also. Full force of zombies (only 30), 2 hunters, 1 smoker, 1 tank attacking. (When a player become a tank, the boss he's controlling become a bot, and still attacking the survivors).
This is the heaviest practical scene in L4D, and it just makes sense for the benchmark. You dont really need 8 players to arrange the scene, I think using cheats is much easier.
I know it will take time to re-benchmark all of those cards for the new scene, but I think it wont be too much. Even if you cant do this, please reply me.
Thank you :)
SunSamurai - Friday, November 20, 2009 - link
You're not professional FPS player if you think there is no difference between 60 and 300fps.Paladin1211 - Saturday, November 21, 2009 - link
To be precise, anything above the monitor refresh rate is not going to be recognizable. Mine maxed out at 60Hz 1920x1200. Correct me if I'm wrong.Thanks :)
noquarter - Saturday, November 21, 2009 - link
If you read AnandTech's 'Triple Buffering: Why We Love It' article, there is a very slight advantage at more than 60fps even though the display is only running at ~60Hz. If the GPU finishes rendering a frame immediately after the display refresh then that frame will be 16ms stale by the time the display shows it as it won't have the next one ready in time. If someone started coming around the corner while that frame is stale it'd be 32ms (stale frame then fresh frame) before the first indicator showed up. This is simplified as with v-sync off you'll just get torn frames but the idea is still there.To me, it's not a big deal, but if you're looking at a person with quick reaction speed of 180ms, 32ms of waiting for frames to catch up could be significant I guess. If you increase the fps past 60 you're more likely to have a fresh frame rendered right before each display refresh.
T2k - Friday, November 20, 2009 - link
Seriously: is he no more...? :DXeroG1 - Thursday, November 19, 2009 - link
OK, so seriously, did you really take a $600 video card and benchmark Crysis Warhead without turning it all the way up? The chart says "Gamer Quality + Enthusiast Shaders". I'm wondering if that's really how you guys benchmarked it, or if the chart is just off. But if not, the claim "Crysis hasn’t quite fallen yet, but it’s very close" seems a little odd, given that you still don't have all the settings turned all the way up.Incidentally, I'm running a GeForce 9800 GTX (not plus) and a Core2Duo E8550, and I play Warhead at all settings enthusiast, no AA, at 1600x900. At those settings, it's playable for me. People constantly complain about performance on that title, but really if you just turn down the resolution, it scales pretty well and still looks better than anything else on the market IMHO.
XeroG1 - Thursday, November 19, 2009 - link
Er, oops - that was supposed to say "E8500", not "E8550", since there is no 8550.mapesdhs - Thursday, November 19, 2009 - link
Carnildo writes:
> ... I was the administrator for a CAVE system. ...
Ditto! :D
> ... ported a number of 3D shooters to the platform. You haven't
> lived until you've seen a life-sized opponent come around the
> corner and start blasting away at you.
Indeed, Quake2 is amazing in a CAVE, especially with both the player
and the gun separately motion tracking - crouch behind a wall and be
able to stick your arm up to fire over the wall - awesome! But more
than anything as you say, it's the 3D effect which makes the experience.
As for surround-vision in general... Eyefinity? Ha! THIS is what
you want:
http://www.sgidepot.co.uk/misc/lockheed_cave.jpg">http://www.sgidepot.co.uk/misc/lockheed_cave.jpg
270 degree wraparound, 6-channel CAVE (Lockheed flight sim).
I have an SGI VHS demo of it somewhere, must dig it out sometime.
Oh, YouTube has some movies of people playing Quake2 in CAVE
systems. The only movie I have of me in the CAVE I ran was
a piece taken of my using COVISE visualisation software:
http://www.sgidepot.co.uk/misc/iancovise.avi">http://www.sgidepot.co.uk/misc/iancovise.avi
Naturally, filming a CAVE in this way merely shows a double-image.
Re people commenting on GPU power now exceeding the demands for
a single display...
What I've long wanted to see in games is proper modelling of
volumetric effects such as water, snow, ice, fire, mud, rain, etc.
Couldn't all this excess GPU power be channeled into ways of better
representing such things? It would be so cool to be able to have
genuinely new effects in games such as naturally flowing lava, or
an avalanche, or a flood, tidal wave, storm, landslide, etc. By this
I mean it being done so that how the substance behaves is governed
by the environment in a natural way (physics), not hard coded. So far,
anything like this is just simulated - objects involved are not
physically modelled and don't interact in any real way. Rain is
a good example - it never accumulates, flows, etc. Snow has weight,
flowing water can make things move, knock you over, etc.
One other thing occurs to me: perhaps we're approaching a point
where a single CPU is just not enough to handle what is now possible
at the top-end of gaming. To move them beyond just having ever higher
resolutions, maybe one CPU with more & more cores isn't going to
work that well. Could there ever be a market for high-end PC
gaming with 2-socket mbds? I do not mean XEON mbds as used for
servers though. Just thoughts...
Ian.
gorgid - Thursday, November 19, 2009 - link
WITH THEIR CARDS ASUS PROVIDES THE SOFTWARE WHERE YOU CAN ADJUST CORE AND MEMORY VOLTAGES. YOU CAN ADJUST CORE VOLTAGE UP TO 1.4VREAD THAT:
http://www.xtremesystems.org/forums/showthread.php...">http://www.xtremesystems.org/forums/sho...cd1d6d10...
I ORDERED ONE FROM HERE:
http://www.provantage.com/asus-eah5970g2dis2gd5a~7...">http://www.provantage.com/asus-eah5970g2dis2gd5a~7...
K1rkl4nd - Wednesday, November 18, 2009 - link
Am I the only one waiting for TI to come out with a 3x3 grid of 1080p DLPs? You'd think if they can wedge ~2.2 million mini-mirrors on a chip, they should be able to scale that up to a native 5760x3240. Then they could buddy up with Dell and sell it as an Alienware premium package of display + computer capable of using it.skrewler2 - Wednesday, November 18, 2009 - link
When can we see benchmarks of 2x 5970 in CF?Mr Perfect - Wednesday, November 18, 2009 - link
"This means that it’s not just a bit quieter to sound meters, but it really comes across that way to human ears too"Have you considered using the dBA filter rather then just raw dB? dBA is weighted to measure the tones that the human ear is most sensitive to, so noise-oriented sites like SPCR use dBA instead.
prophet001 - Wednesday, November 18, 2009 - link
This looks like a sweet card. Certainly ATI is taking control of the market.One question though...
What happened to the Hydra by Lucid Logix? I haven't heard anything about it in a while. Theoretically, the Hydra should take 2 ATI cards and make them perform better than Crossfire can.
Any news?
tamalero - Wednesday, November 18, 2009 - link
there as been reviews of HYDRA already, what planet are you on?GeorgeH - Wednesday, November 18, 2009 - link
1) No need to be a douche.2) No Hydra 200 products have shipped, all current "reviews" have been done using Lucid's development hardware.
3) It appears that Hydra will not work well with dual GPU cards; it will see only one of the GPUs.
4) Early results show that Hydra offers roughly equal performance overall to Crossfire/SLI.
5) Link to one of the better articles I've read:
http://www.pcper.com/article.php?aid=815">http://www.pcper.com/article.php?aid=815
driver01z - Wednesday, November 18, 2009 - link
Wow - so we have a card now that plays the latest Crysis at 2560*1600, 4XAA with details at a smooth playable FPS. IMHO I believe we've entered a new GPU generation. Or new compared to the capabilities I'm used to.rcpratt - Wednesday, November 18, 2009 - link
Is there some way to set up a dual-monitor setup (2 x 1920x1080) to run in horizontal span mode (3840 x 1080), like you do with your three monitors, without an Eyefinity card? I'm currently running with a 4870 and haven't been able to find a way to do this.The0ne - Wednesday, November 18, 2009 - link
I haven't tried but you should be able to do this easily, using catalyst. There's an option there to flip your screens for horizontal/vertical views and duplicate/extend your screens. That should do it for you.I haven't use the flip feature but I use the extend all the time because I'm hooked up to my tv.
rcpratt - Wednesday, November 18, 2009 - link
I've spent hours trying to find a way to do it with Catalyst, and I can't find one. Right now it's on extend, which just leaves the secondary monitor as the desktop, with no taskbar on the bottom, and leaves me unable to play games at 3840x1080.If anybody has an explanation or idea, I'd appreciate it.
Spoelie - Thursday, November 19, 2009 - link
To put it simple, no it's not supported. The only way you'll be able to use the second screen is with games that are explicitly coded to support dual screens (Supreme Commander?).Your only other option is to use a Matrox multimon device (forgot the name) or an EyeFinity card of course. No NVIDIA card will allow this either, it's not a driver issue.
Either way, 2 screens wouldn't be a nice experience anyway, with the big bezel right in the middle, I can't imagine any type of game where that would work (no FPS, RTS, RPG, racing game, ...)
rcpratt - Thursday, November 19, 2009 - link
That's what I was afraid of. Thanks. And yeah, I wasn't planning on playing with 2 monitors, but I was considering getting a third. Oh well, probably better that I can't blow the cash :)The0ne - Wednesday, November 18, 2009 - link
On the same topic, does anyone know why my dual screen setup resets after PC restarts/shutdowns? In addition, I HAVE to select duplicate first, set it and then switch to extend. Selecting extend first doesn't enable it. Using current driver but this was there with previous versions as well.I've Googled and read many forums but haven't encountered many users having this particular issue. This is consistent in XP, Vista and Win7 as far as I can remember.
4870 with Dell 30" and Samsumg 73" 1080P TV. Temperatures around 56C for video card. If you have any tips I appreciate them. Thanks in advance.
hechacker1 - Wednesday, November 18, 2009 - link
I once had that problem too. Even if my display went to sleep it would reset the monitor configuration.I think disabling this helped:
http://www.tomstricks.com/how-to-disable-or-enable...">http://www.tomstricks.com/how-to-disabl...multimon...
I think eventually catalyst was updated to fix the display loosing connection during sleep (in my case).
The0ne - Wednesday, November 18, 2009 - link
I think we all seen enough data on this poorly programmed game to removed it from the test. There's really not point as even this card, 5970, can choke on it. Seriously, utter crap of programming.lifeblood - Wednesday, November 18, 2009 - link
Three 30" monitors? Dude, I had to work hard just to afford a single 24" monitor. And because I'm salary I don't get overtime (although they do make me work it). If I get a second job flipping burgers at the local fast food joint I might be able to afford two more 24" displays. I bet Eyefinity would still look awesome on that.And I was only kidding about you getting paid too much. The article was great. Now I am eagerly awaiting Nvidia's response.
haplo602 - Wednesday, November 18, 2009 - link
nice review.but PLEASE MARK THE REVIEWED CARD to STAND OUT in the GRAPHS next time. going top down through the list and reading each caption to finaly find the card for EACH GRAPH is realy annoying.
The0ne - Wednesday, November 18, 2009 - link
Anand graphs are really annoying at times. I wish they were more consistent. Xbitlabs are easy and consistent which makes it a breeze for people like me who just wants to look at the specifics.Dante80 - Wednesday, November 18, 2009 - link
I concur, the graphs can be a little confusing without some sort of color coding...Here is a suggestion Ryan. Use light green and orange for Nvidia and AMD single cards, dark green and red for SLI and Xfire setups and lastly, blue for the card reviewed (you can differentiate with light and dark blue readings for the same card in Xfire or OCed readings). I think the graphs would look much better this way, and its a very easy to implement feature anyway...
cheers...^^
SJD - Wednesday, November 18, 2009 - link
Great and interesting article, but I'm confused about this Eyefinity situation again.You state that your monitor didn't support mini-DP, just 'regular' DP, and go on to talk about buying an adapter. Yet, it appears (according to the wiki page at any rate) that mini-DP is electrically identical to regular DP, so only a mini-DP to regular DP cable would be needed. Indeed, other reviews of the 5970 show such an adapter cable included with the card...
What's the score, and why the comment that you need an *active* adapter to go from mini-dp to regular dp?
Simon
strikeback03 - Wednesday, November 18, 2009 - link
The active adapter went to DVI, I was wondering the same about a simple mini-DP to DP cableAnand Lal Shimpi - Wednesday, November 18, 2009 - link
I've clarified :)Once you move to three displays AMD runs out of timing sources, the miniDP port must use an active adapter if you're using three displays.
Take care,
Anand
mczak - Wednesday, November 18, 2009 - link
This is however incorrect. You need active adapters because rv870 only supports 2 clock sources for display outputs. However, DP (or mini-DP) don't need any such clock source because they use a fixed clock independent from display timings. Hence, if you want to connect more than 2 monitors using DVI/HDMI/VGA you need active DP-to-something adapter. But for DP outputs this isn't needed. And mini-DP is the same as DP anyway electrically.SJD - Wednesday, November 18, 2009 - link
Thanks Anand,That kind of explains it, but I'm still confused about the whole thing. If your third monitor supported mini-DP then you wouldn't need an active adapter, right? Why is this when mini-DP and regular DP are the 'same' appart from the actual plug size. I thought the whole timing issue was only relevant when wanting a third 'DVI' (/HDMI) output from the card.
Simon
CrystalBay - Wednesday, November 18, 2009 - link
WTH is really up at TWSC ?Jacerie - Wednesday, November 18, 2009 - link
All the single game tests are great and all, but once I would love to see AT run a series of video card tests where multiple instances of games like EVE Online are running. While single instance tests are great for the FPS crowd, all us crazy high-end MMO players need some love too.Makaveli - Wednesday, November 18, 2009 - link
Jacerie the problem with benching MMO's and why you don't see more of them is all the other factors that come into play. You have to now deal with server latency, you also have no control of how many players are usually in the server at any given time when running benchmarks. There is just to many variables that would not make the benchmarks repeatable and valid for comparison purposes!mesiah - Thursday, November 19, 2009 - link
I think more what he is interested in is how well the card can render multiple instances of the game running at once. This could easily be done with a private server or even a demo written with the game engine. It would not be real world data, but it would give an idea of performance scaling when multiple instances of a game are running. Myself being an occasional "Dual boxer" I wouldn't mind seeing the data myself.Jacerie - Thursday, November 19, 2009 - link
That's exactly what I was trying to get at. It's not uncommon for me to be running at least two instances of EVE with an entire assortment of other apps in the background. My current 3870X2 does the job just fine, but with Windows 7 out and DX11 around the corner, I'd like to know how much money I'm going to need to stash away to keep the same level of usability I have now with the newer cards.
Zool - Wednesday, November 18, 2009 - link
Games run so fast only because 95% of them are DX9 Xbox ports. Crysis has still been the most demanding game out there for quite some time (it needs to be added that it has a very lazy engine). In Age of Conan the fps drop from DX9 to DX10 is more than half (with plenty of those effects on screen, even down to a third). The advanced shader effects they show in demos are actually much more demanding on the GPU than the DX9 shaders; they just don't mention it. It will be the same with DX11: a full DX11 game with all those fancy shaders will be at the level of Crysis.
crazzyeddie - Wednesday, November 18, 2009 - link
... after their first 40nm test chips came back as being less impressive than **there** 55nm and 65nm test chips were.
silverblue - Wednesday, November 18, 2009 - link
Hehe, I saw that one too.
frozentundra123456 - Wednesday, November 18, 2009 - link
Unfortunately, since playing MW2, my question is: are there enough games that are sufficiently superior on the PC to justify the initial expense and power usage of this card? Maybe that's where Eyefinity for AMD and PhysX for nVidia come in: they at least differentiate the PC experience from the console.
I hate to say it, but to me there just do not seem to be enough games optimized for the PC to justify the price and power usage of this card, unless one has money to burn.
GourdFreeMan - Friday, November 20, 2009 - link
Having not bought MW2, I can say conversely that the lack of differentiation between console and PC features hurts game sales. According to news reports, in the UK PC sales of MW2 account for less than 3% of all sales. This is neither representative of the PC share of the gaming market (which should be ~25% of all "next-gen" sales based on quarterly reports of revenue from publishers), nor of the size of the install base of modern graphics cards capable of running MW2 at a decent frame rate (which should be close to the size of the entire console market based on JPR figures). Admittedly the UK has a proportionately larger console share than the US or Germany, but I can't imagine MW2 sales of the PC version are much better globally.
I am sure executives will be eager to blame piracy for the lack of PC sales, but their target market knows better...
cmdrdredd - Wednesday, November 18, 2009 - link
[quote]Unfortunately, since playing MW2, my question is: are there enough games that are sufficiently superior on the PC to justify the initial expense and power usage of this card? Maybe that's where Eyefinity for AMD and PhysX for nVidia come in: they at least differentiate the PC experience from the console. I hate to say it, but to me there just do not seem to be enough games optimized for the PC to justify the price and power usage of this card, unless one has money to burn.[/quote]
Yes, these are exactly my thoughts. They can tout DX11, fancy-schmancy Eyefinity, PhysX, everything except a free lunch, and it doesn't change the fact that the lineup for PC gaming is bland at best. It sucks; I love gaming on the PC but it's pretty much a dead end at this time. No thanks to every 12-year-old who curses at you on Xbox Live.
The0ne - Wednesday, November 18, 2009 - link
My main reason to want this card would be to drive my 30" LCDs. I have two Dells already and will get another one early next year. I don't actually play games much, but I like having the desktop space for my work:
- VMs at higher resolution
- more open windows without switching too much
- watching movie(s) while working
- bigger font sizes while maintaining the aspect ratio of programs :)
Currently I have my main system on one 30" and on my 73" TV. The TV is only 1080p, so space is a bit limited. Plus, working on the TV sucks big time :/
shaolin95 - Wednesday, November 18, 2009 - link
I am glad ATI is able to keep competing, as that helps keep prices at a "decent" level. Still, for all of you so amazed by Eyefinity, do yourselves a favor and try 3D Vision with a big-screen DLP; then you will laugh at what you thought was cool and "3D" before.
You can have 100 monitors but it is still just a flat world... time to join REAL 3D gaming, guys!
Carnildo - Wednesday, November 18, 2009 - link
Back in college, I was the administrator for a CAVE system. It's a cube ten feet on a side, with displays on all surfaces. Combine that with head tracking, hand tracking, shutter glasses, and surround sound, and you've got a fully immersive 3D environment.
It's designed for 3D visualization of large datasets, but people have ported a number of 3D shooters to the platform. You haven't lived until you've seen a life-sized opponent come around the corner and start blasting away at you.
7Enigma - Wednesday, November 18, 2009 - link
But Ryan, I feel you might need to edit a couple of your comparison comments between the 295 and this new card. Based on the comments in several previous articles, quite a few readers do not look at (or understand) the charts and instead rely on the commentary below them. Here are some examples:
"Meanwhile the GTX 295 sees the first of many falls here. It falls behind the 5970 by 30%-40%. The 5870 gave it a run for its money, so this is no surprise."
This one for Stalker is clear and concise. I'd recommend you repeat this format for the rest of the games.
"As for the GTX 295, the lead is only 20%. This is one of the better scenarios for the GTX 295."
This comment was for Battleforge and IMO is confusing. To someone not reading the chart it could be viewed as saying the 295 has a 20% advantage. Again I'd stick with your Stalker comment.
"HAWX hasn’t yet reached a CPU ceiling, but it still gets incredibly high numbers. Overclocking the card gets 14% more, and the GTX 295 performance advantage is 26%."
Again, this could be seen as the 295 being 26% faster.
"Meanwhile overclocking the 5970 is good for another 9%, and the GTX 295 gap is 37%."
This one is less confusing as it doesn't mention an advantage, but it should just say 37% slower (see the quick percentage sketch after this comment).
Finally I think you made a typo in the conclusion where you said this:
"Overclock your 5970 to 5870 speeds if you can bear the extra power/heat/noise, but don’t expect 5970CF results."
I think you meant 5870CF results...
Overall, though, the article is really interesting as we've finally hit a performance bottleneck that is not so easily overcome (due to power draw and ATX specifications). I'm very pleased, however, that you mention first in the comments that this truly is a card meant for multi-monitor setups only, and even then, may be bottlenecked by design. The 5870 single card setup is almost overkill for a single display, and even then most people are not gaming on >24" monitors.
I've said it for the past 2 generations of cards, but we've pretty much maxed out the need for faster cards (for GAMING purposes). Unless we start getting some super-hi-res goggles that are reasonably priced, there just isn't much further to go due to display limitations. I mean, honestly, are those slightly fuzzy shadows worth the crazy performance hit in an FPS? I honestly am having a VERY difficult time seeing a difference in the first set of pictures of the soldier's helmet. The pictures are taken slightly off angle from each other, and even then I don't see what the arrow is pointing at. And if I can't see a significant difference in a STILL shot, how the heck am I to see a difference in-game!?
OK enough rant, thanks for the review. :)
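To make the percentage direction concrete, here is a quick sketch in Python with made-up frame rates (illustrative only, not the review's numbers); it shows why a phrase like "the gap is 37%" reads differently depending on which card is the baseline:

def pct_faster(a, b):
    # How much faster a is than b, as a percentage of b.
    return (a / b - 1) * 100

def pct_slower(a, b):
    # How much slower a is than b, as a percentage of b.
    return (1 - a / b) * 100

# Hypothetical frame rates, purely for illustration:
hd5970, gtx295 = 100.0, 73.0
print(f"5970 is {pct_faster(hd5970, gtx295):.0f}% faster than the GTX 295")   # ~37% faster
print(f"GTX 295 is {pct_slower(gtx295, hd5970):.0f}% slower than the 5970")   # ~27% slower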
Anand Lal Shimpi - Wednesday, November 18, 2009 - link
Thanks for the edits; I've made some corrections for Ryan that will hopefully make the statements clearer.
I agree that the need for a faster GPU on the desktop is definitely minimized today. However, I do believe in the "if you build it, they will come" philosophy. At some point, the amount of power you can get in a single GPU will be great enough that someone has to take advantage of it. Although we may need more of a paradigm shift to really bring about that sort of change. I wonder if Larrabee's programming model is all we'll need or if there's more necessary...
Take care,
Anand
7Enigma - Wednesday, November 18, 2009 - link
Thank you for the edits and the reply, Anand.
One of the main things I'd like to see GPU drivers implement is an artificial framerate cap option. These >100fps results in several of the tests at insane resolutions are not only pointless, but add unnecessary heat and stress to the system. Drop back down to the normal resolutions that >90% of people have and it becomes even more wasteful to render 150fps.
I always enable V-sync in my games for my LCD (75Hz), but I don't know if this actually throttles the GPU so it doesn't render more than 75fps. My hunch is that in the background it's rendering at its max but only showing frames on screen at the refresh-rate limit.
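For what it's worth, the artificial framerate cap described above boils down to a loop like the minimal Python sketch below (render_frame is a hypothetical stand-in for the real draw call; actual drivers implement this differently): after each frame the remainder of the frame interval is slept away, so the GPU idles instead of rendering frames the display can never show.

import time

def render_frame():
    # Hypothetical stand-in for the real draw call; in practice this takes
    # a variable amount of time depending on the scene.
    pass

def run_with_cap(target_fps=75, frames=300):
    # Sleep out the rest of each frame interval so the GPU sits idle rather
    # than rendering frames the display can never show.
    frame_time = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < frame_time:
            time.sleep(frame_time - elapsed)

run_with_cap()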
Zool - Wednesday, November 18, 2009 - link
I tried out full-screen Furmark with vsync on and off (at 640x480) and the difference was 7 degrees Celsius. I have a custom cooler on the 4850 and a 20cm side fan on the case, so that's quite a lot.
7Enigma - Thursday, November 19, 2009 - link
Thanks for the reply Zool, I was hoping that was the case. So it seems like if I ensure vsync is on, I'm at least limiting the GPU to rendering no more than the refresh rate of the LCD. Awesome!
Zool - Wednesday, November 18, 2009 - link
So yes, the answer is that the GPU is doing less work with vsync than without it. (Damn, still no edit button.)
Yojimbo - Wednesday, November 18, 2009 - link
The plural for a casting/shaping instrument "die" is "dies", not "dice."
Lennie - Wednesday, November 18, 2009 - link
I am going to post this again here. I thought it might not get noticed since I posted it first as a reply on a previous page. Hope I won't hurt anyone's feelings :)
I thought everyone knew about Furmark and ATi by now. It used to be like this on the 4870 series too.
It went like this: at first there were a few reports of 4870 (X2) cards dying when running Furmark. Further investigation showed that it was indeed Furmark causing the VRMs to heat up to insane levels and eventually killing them. Word reached ATi, and from that point on ATi has intentionally throttled their cards when they detect Furmark, to prevent the damage.
In fact, the heat load Furmark puts on the VRMs is unrealistic, and no game is able to heat up the VRMs to the level Furmark does. OCCT used the same method (or maybe even integrated Furmark) to test for stability (in their own opinion, of course).
So beware about Furmark and OCCT if you have HD4K or 5K.
The term "Hardware Virus" is rightfully applicable to Furmark when it comes to HD4K (and 5K perhaps)
Lennie - Wednesday, November 18, 2009 - link
I want to add that VRM overheating is quite tricky, since normally people only check GPU temps.
When you run Furmark you would notice that GPU temps are in an acceptable range while, at the same time, your VRMs are cooking without you knowing about it.
So remember to always check your VRM temps when running graphics stability tests like Furmark or OCCT's graphics test, especially when you're overclocking the card.
I use Riva or Everest to check on VRM temps.
Rajinder Gill - Thursday, November 19, 2009 - link
The temp reading displayed by Everest is an averaged figure. The junction temp of the slaves is the critical issue. Even though the average temp may appear to be within bounds, there is the possibility that one of the slaves may be running abnormally. Volterra keep their specifications under NDA. What I do know is that in the general configuration, if one slave shuts down, the remaining slaves take the load. The result is not usually pretty. I think ATI may have implemented throttling to prevent the kind of burnouts users experienced running OCCT GPU tests on the last gen.
Personally, I think the 3-phase Volterra solution used on the 5970 is right at the limit for current draw (circa 135 amps per GPU). I'd wait for non-reference solutions with enhanced power delivery if you plan on overclocking this card long term or plan to stress it heavily when OC'd.
Later
Raja
Rajinder Gill - Thursday, November 19, 2009 - link
I should add that I'm assuming ATI used the biggest Volterra slaves, rated at 45 amps each, and not the 35/40 amp varieties.
thebeastie - Wednesday, November 18, 2009 - link
Anyone who has a clue is buying a proper case like the Storm Sniper Black Edition and fitting this with heaps of space to spare. Also, I recommend a case with positive, or at least neutral, airflow.
The Storm Sniper has a dust filter on its 20cm side fan to push more air in and aid the GPU fan's airflow, which will go out the back of the card's vent holes at the DVI ports.
at80eighty - Wednesday, November 18, 2009 - link
I plan on getting one of these in another 7-8 months. The way I see it, despite being bleeding edge, ATI has a deadlock winner in this card and will produce only limited quantities, so I'm kind of 'worried' about the availability and price down the line.
Silverforce11 - Wednesday, November 18, 2009 - link
Wait till the end of December. Apparently the yields of Cypress are going to improve a lot by then, so prices will either remain the same or drop a bit.
However, since nV has nothing real for a long time, I don't foresee a drop in prices on ATI parts. Given the estimates, NV will have Fermi out in April 2010, but not in significant quantities for a while after that. I'm gonna grab a 5970 around Xmas. :)
at80eighty - Wednesday, November 18, 2009 - link
Dude, I hate you already :p I just bought a 5750. I don't have the necessary components that would not bottleneck the 5970, so I'm going to have to wait a while. Plus the whole three-new-monitors thing for Eyefinity.
palladium - Wednesday, November 18, 2009 - link
Since AMD is binning their chips to get the 5970 within spec, I suppose it wouldn't make sense to make a 5950 SKU, since a 5850 is simply a re-harvested 5870 (which failed the initial binning process), and 2x5850 would be out of the ATX spec anyway.
Anyway, a great card for those who can afford it, and have the proper case and PSU to handle it.
Paladin1211 - Wednesday, November 18, 2009 - link
With 512 SPs, 6.67% more than a GTX 295, I don't see Fermi having any chance of beating the 5970. nVidia will need a dual Fermi to dethrone the 5970, and that's not happening until Q3 or Q4 2010.
nVidia has targeted the wrong, niche market rather than gamers. Sooner or later monitors without bezels will come out, and Eyefinity will make much more sense. It's really funny that RV770s, aka HD 4870s, are in one of the five fastest supercomputers, and not Tesla.
They took a long, deep sleep after the 8800 GTX and now they're paying for it.
cmdrdredd - Wednesday, November 18, 2009 - link
Unfortunately, PC gaming is almost dead. Look at Call of Duty's release. Look at Dragon Age, which is also available on consoles. Sure, the PC version might look a bit better, but when you spend as much on a video card as someone does on an entire system that can download movies and demos, act as a media box, and play Blu-rays... you get the point.
Lurker0 - Wednesday, November 18, 2009 - link
Unfortunately, PC gaming has been declared "nearly dead" for decades. It hasn't died, and as much as console fanboys will rage on hearing this, it isn't going to either.
PC gaming is a niche industry; it always has been and always will be. Yes, console games do tend to be more profitable, which means that most games will be developed for consoles first and then ported to the PC. That doesn't mean there will never be games developed for the PC first (or even exclusively), or that there's no profit to be had in PC games.
Yes, it can be cheaper to get a console than a mid-level gaming PC, just like it can be cheaper to just buy some econobox off the lot than to buy or build your own hot rod. Sure, one market is much larger and more profitable than the other, but there's still money to be made off of PC games and gearheads alike, and so long as that's true neither will be going away.
DominionSeraph - Thursday, November 19, 2009 - link
PC gaming is no longer an isolated economy, though. That changes things. With most games being written with consoles in mind, there isn't the broad-based software push for hardware advancement that there was at the dawn of 3D acceleration.
I could give you dozens of great reasons to have upgraded from an NV4 to an NV15 back in the day, but the upgrade from a G80 to a 5970 today? ~$800 when you factor in the PSU, and for what? Where's the must-have game that needs it? TNT to GeForce 2 was two years -- it's now been three since the release of the 8800, and there's been no equivalent to a Half-Life, Quake 2, Deus Ex, Homeworld, Warcraft III, or WoW.
GourdFreeMan - Thursday, November 19, 2009 - link
Unfortunately, this is precisely the problem. Looking at AAA (large-budget) games, six years ago PC game sales were predominantly PC exclusives, with some well-known console ports (e.g. Halo, Morrowind). Twelve years ago PC game sales were almost entirely exclusives. Today console ports are approaching the majority of high-profile PC titles.
Being multiplatform isn't necessarily a detriment for a console game. After all, having a larger budget allows more money to be spent on art and on polishing the code to get the best performance out of console hardware. In most cases, however, the PC version of a multiplatform title is almost always an afterthought. Little to no effort is spent redesigning the user interface and rebalancing gameplay for the different controls. When porting, shaders are almost never rewritten to take advantage of effects that could only be accomplished with the power of the graphics cards in modern PCs. At most we seem to get better textures at somewhat higher resolutions.
The biggest problem with multiplatform development, however, is that multiplatform games are almost always aimed at the lowest common denominator in terms of both technology and content. All this does is produce the same game over and over again -- the clichéd rail shooter in a narrow environment with a macho/reluctant superhuman protagonist thrown against hordes of respawning mooks.
Based on the quarterly reports of sales revenue from the major publishers (EA, Activision and Ubisoft), PC games sales are comparable to PS3 game sales. The PS3, however, has several more exclusives because Sony owns several games studios and forces them to release exclusives. AMD and nVIDIA do not, much to PC gaming's detriment.
mschira - Wednesday, November 18, 2009 - link
Hehe, 5970CF to power three screens, now that sounds like a killer setup.
Besides, that's burning 600+ watts for the graphics alone. What's the CPU supposed to live on? The BIOS battery?
M.
monomer - Wednesday, November 18, 2009 - link
Wouldn't it be possible to run six screens using a 5970 CF setup, or are there other limitations I'm unaware of?
Fleeb - Wednesday, November 18, 2009 - link
600W for the whole setup. :S
maximusursus - Wednesday, November 18, 2009 - link
It really seems weird... :( I've seen some reviews that had way better overclocking than standard 5870 clocks, and their tests seem to be OK without any "throttling" problems.
For example:
Techspot: 900/1250
HotHardware: 860/1220
Tom's Hardware: 900/1300
HardOCP: 900/1350 (!)
Guru3D: 900/1200
HardwareZone, however, had a similar problem to you guys; could it really be the PSU?
Ryan Smith - Wednesday, November 18, 2009 - link
It's possible, but the 850TX is a very well regarded unit. If it can't run a 5970 overclocked, then I surmise that a lot of buyers are going to run into the same problem. I don't have another comparable power supply on hand, so this isn't something I can test with my card.
Anand has a 1kW unit, and of course you know how his turned out.
To be frank, we likely would have never noticed the throttling issue if it wasn't for the Distributed.net client. It's only after realizing that it was underperforming by about 10-20% that I decided to watch the Overdrive pane and saw it bouncing around. These guys could be throttling too, and just not realize it.
Silverforce11 - Wednesday, November 18, 2009 - link
Seems iffy then, since most reviews put it at 900 core and 5GHz+ on the RAM, with only a modest overvolt to 1.16. I would think ATI wouldn't bother putting in three high-quality VRMs and Japanese capacitors if they didn't test it thoroughly at the specs they wanted it to OC at.
My old PSU is the bigger brother of this one, the 750W version:
http://anandtech.com/casecoolingpsus/showdoc.aspx?...
It had issues with the 4870X2. I got a better "single rail" PSU and it ran fine and OC'd well.
Silverforce11 - Wednesday, November 18, 2009 - link
ATI went all out building these 5970s; the components are top-notch and the chips are the best of the bunch. I'm surprised they did this, as they are essentially selling you 2x 5870 performance (IF your PSU is good) at $599 when 2x 5870 CF would cost $800. They have no competitor at the top, so why do they not price this card higher, or why even bother putting in quality parts to almost guarantee 5870 clocks?
I believe it's ATI's last nail in the nV coffin, and they hammered it really hard.
ET - Wednesday, November 18, 2009 - link
Too much discussion about adapters for the mini-DisplayPort. The 27" iMac has such an input port and a resolution of 2560x1440, and it seems a sin not to test them together. (Not that I'm blaming AnandTech or anything, since I'm sure it's not that easy to get the iMac for testing.)
Taft12 - Wednesday, November 18, 2009 - link
Why would they bother using a computer with an attached monitor when they could instead use the larger, higher-res and CHEAPER Dell 3008WFP?
Raqia - Wednesday, November 18, 2009 - link
Look at all the fingerprint smudges on the nice card! I've started to notice the hand models that corporations use to hold their products. The hands holding the iPods on the Apple site? Flawless, perfect nails and cuticles. Same with the fingers grasping the Magny-Cours chip.
NullSubroutine - Wednesday, November 18, 2009 - link
Hilbert @ Guru3D got the overclocking working at a 900MHz core speed (though it reached 90C).
http://www.guru3d.com/article/radeon-hd-5970-revie...
I was impressed with some of the CrossFire benchmarks actually showing improvement. If Eyefinity works with the 5970, does it work with the card in CrossFire?
Ryan Smith - Wednesday, November 18, 2009 - link
Bear in mind that it also took him 1.3V to get there; the AMD tool doesn't go that high. With my card, I strongly suspect the issue is the VRMs, so more voltage wouldn't help.
And I'm still trying to get an answer to the Eyefinity + 5970CF question. The boys and girls at AMD went home for the night before we realized we didn't have an answer to that.
Lennie - Wednesday, November 18, 2009 - link
I thought everyone knew about Furmark and ATi by now. It used to be like this on the 4870 series too.
It went like this: at first there were a few reports of 4870 (X2) cards dying when running Furmark. Further investigation showed that it was indeed Furmark causing the VRMs to heat up to insane levels and eventually killing them. Word reached ATi, and from that point on ATi has intentionally throttled their cards when they detect Furmark, to prevent the damage.
In fact, the heat load Furmark puts on the VRMs is unrealistic, and no game is able to heat up the VRMs to the level Furmark does. OCCT used the same method (or maybe even integrated Furmark) to test for stability (in their own opinion, of course).
So beware about Furmark and OCCT if you have HD4K or 5K.
The term "Hardware Virus" is rightfully applicable to Furmark when it comes to HD4K (and 5K perhaps)
strikeback03 - Wednesday, November 18, 2009 - link
The article stated that they encountered throttling in real games, not Furmark.
Lennie - Wednesday, November 18, 2009 - link
If so, then one could suspect it's the same issue with games, due to the VRMs of this particular card heating up and throttling it. Perhaps there's not enough contact between the VRMs and the HSF, or a complete lack of TIM on the VRMs by accident. I would have reseated the HSF if I owned that card.
Rajinder Gill - Wednesday, November 18, 2009 - link
I suspect it is VRM/heat related. The 'biggest' slaves Volterra currently supply are rated at 45 amps each, as far as I know. Assuming ATI used the 45 amp slaves (which they must have), you've got around 135 amps on tap. Do the math for OCP or any related throttling effects kicking in: essentially, 1.10V GPU voltage puts you at around 150W per GPU before things either shut down or need to be throttled (it depends on how it's implemented as it nears peak). Any way you look at it, ATI have used a high-end VRM solution, but 4 slaves per GPU would have given a bit more leeway on some cards. I wonder what the variance in leakage from card to card is as well. Seeing as there's not much current overhead in the VRM (or at least there does not appear to be), a small change in leakage would be enough to stop some cards from doing much in terms of overclocking on the stock cooler.
Later
Raja
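Working Raja's figures through explicitly (a rough sketch in Python; the 45 A per-slave rating and the exact OCP behavior are assumptions, not published Volterra specs):

phases_per_gpu = 3        # Volterra slaves per GPU, as stated above
amps_per_phase = 45.0     # assumed 45 A parts; 35/40 A parts would lower the ceiling
vgpu = 1.10               # quoted GPU voltage

max_current = phases_per_gpu * amps_per_phase   # 135 A per GPU
max_power = max_current * vgpu                  # ~149 W per GPU
print(f"Current ceiling per GPU: {max_current:.0f} A")
print(f"Power at {vgpu:.2f} V before OCP/throttling kicks in: ~{max_power:.0f} W")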
Silverforce11 - Wednesday, November 18, 2009 - link
It could be your PSU; some "single rail" PSUs aren't in fact using a single rail, but several rails with a max limit on amps. It's deceptive.
Guru3D uses a 1200W PSU and manages 900 core, which is typically what a 5870 OCs to on air. Essentially these chips are higher-quality Cypress. Maybe you should retry it with a different PSU, and then conclusions can be drawn.
Bolas - Wednesday, November 18, 2009 - link
Yep, there is certainly a market for 5970CF. Can't wait!
tajmahal - Wednesday, November 18, 2009 - link
Big deal, another paper launch where only a tiny handful of people will be able to get one.
LedHed - Tuesday, November 24, 2009 - link
My question is why they OC the 5970 but not the 295... We all know the 295 is memory bottlenecked at resolutions at/over 2560x1600.
But consider that the GTX 295 is down below $450, while no one can find these cards in stock at their god-awful price of $600 ($100 more than the 295 at launch).
mschira - Wednesday, November 18, 2009 - link
Newegg lists 5 different models; they come and go quite fast. I managed to get one of them into my shopping cart.
All I would need to do now is pay (which I don't want to...).
So yeah, they are not exactly easy to get, but far from impossible.
So not a paper launch.
Be real, it's day two after the launch, and you CAN get them. That's not bad at all.
M.
MrPickins - Wednesday, November 18, 2009 - link
At the moment, Newegg shows two different 5970s in stock: a HIS and a PowerColor.
tajmahal - Wednesday, November 18, 2009 - link
Listed, but not available. I guess Newegg sold both of the ones they had available. And the 5850 and 5870? ...Not available either.
Silverforce11 - Wednesday, November 18, 2009 - link
Plenty of 5870s around at retailers and e-tailers; what do you mean by "another paper launch"?
kilkennycat - Thursday, November 19, 2009 - link
Er, have you noticed the "Not in Stock" or "Pre-order" when you have gone to order one? You might get a 5850, but try finding a 5870 without having to pay a jacked-up premium over MSRP. Best of luck.
mrdaddyman - Wednesday, November 18, 2009 - link
Since the 5870 seems to be in such great supply, I would like for someone to post a link where I can actually buy one of these. I have been trying to buy one for a month and haven't been able to find one.
rennya - Thursday, November 19, 2009 - link
Does it have to be online?
Here I have many options for 58xx and 57xx models in retail stores, which is more applicable for me because Newegg doesn't ship to my place.
Well, if you insist on finding online links, there are plenty of them at http://flaturl.com/eb0 or http://flaturl.com/YmU or http://flaturl.com/pAU or http://flaturl.com/q15 or http://flaturl.com/5av and many more.
These are just some of the sellers in my place who sell those so-called mythical ATI cards online (that doesn't include the gazillion others sold in retail). You may want to argue that they won't ship to you in the United States, but then again the likes of Newegg don't ship here either.
If you are desperate enough, I can help you obtain one of those cards. Want to take the offer?
Alexstarfire - Wednesday, November 18, 2009 - link
And this is the 5970 that we are talking about. Not the same thing.
MamiyaOtaru - Thursday, November 19, 2009 - link
By saying "another paper launch" you were implying that the previous launches were paper. So you were talking about the 5870. As they are and have been available, they were not paper launches. So even if the 5970 is a paper launch (it isn't), you can't very well call it another one.
tajmahal - Wednesday, November 18, 2009 - link
No link yet for the 5850 or the 5870? That's a surprise.
lloyd dd - Wednesday, November 18, 2009 - link
Would using 3 monitors in portrait orientation sort out the aspect ratio in Eyefinity?
JarredWalton - Wednesday, November 18, 2009 - link
It would be closer. 4800x2560 would end up at a 1.875 AR, compared to 1.78 for 16:9 and 1.6 for 16:10. I think that 16:9 content stretched to fill 4800x2560 should look fine (about the same as 16:10 stretched to fill a 16:9 monitor).
Of course, the more difficult question is how to put three 30" LCDs into portrait mode. You would need a different base stand -- none of the 30" LCDs I've seen allow you to rotate the display into portrait mode, probably because the LCDs are two feet wide.
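A quick sketch of the arithmetic behind those numbers (assuming three 2560x1600 panels rotated to portrait, bezels ignored):

panels = 3
portrait_w, portrait_h = 1600, 2560   # a 2560x1600 panel rotated 90 degrees

total_w = panels * portrait_w         # 4800
total_h = portrait_h                  # 2560
aspect = total_w / total_h            # 1.875

print(f"{total_w}x{total_h}, aspect ratio {aspect:.3f}")
print(f"For comparison: 16:9 = {16/9:.2f}, 16:10 = {16/10:.2f}")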
yyrkoon - Wednesday, November 18, 2009 - link
Hey Jarred,
Why not be inventive and make a stand to hold 3 x 30" LCDs? I do not mean you specifically, of course, but whoever would want to have one. It really is not that difficult... just a little planning, and the ability to work with steel (heavy) or quality aluminum. If someone did not have the skills to make the brackets etc., they could even draw something up, give it to a local fabricator, and be on their merry way.
Personally, I like the first option, mainly because I enjoy working with such materials (metals, wood, plastics, etc.). Not to mention the fact that it can cost far less doing it yourself.
JarredWalton - Thursday, November 19, 2009 - link
I understand it's entirely possible. My point is merely that it's yet another expense. I don't think 3x30" with Eyefinity is going to be anything but a very, *VERY* niche product. LOL.
5970 = $600
3 x 30" = $3000 (minimum)
3 x Stands = $120 to $600
So besides having the money, you need the space (and possibly time). I'd say $4000+ just for the GPU and LCDs is where the costs start, and naturally you would want a killer system (i7-920 with overclocking, or i7-975). But hey, you want the best of the best, there you have it. Until the next big thing comes along.
Speaking of which, what about 30" LCDs with 120Hz and 3D Vision? LOL.... (No, I'm not saying that's out or coming soon, but it could be.)
Ben90 - Thursday, November 19, 2009 - link
I doubt 30" @ 120 will be here soon. 1920x1200 @ 120fps is the theoretical limit of a dual link DVI. 2560x1600 @ 120 would require a quad link DVI, or a twin HDMI, or a twin DP connection. And it would still have to be a TN panel as of 2009, because IPS just isnt fast enough for 120 yettdktank59 - Wednesday, November 18, 2009 - link
tdktank59 - Wednesday, November 18, 2009 - link
Get 3 independent stands. Could you imagine 90" of screen! That would just be sick!
anyways need to stop drooling lol...
Price is just ridiculous...
Figure 1.2k for each monitor
1200 for the cards
that's $4800 lol
bob4432 - Wednesday, November 18, 2009 - link
Build your own stand - look at www.8020.net for everything you would need for such a project :)
Visual - Wednesday, November 18, 2009 - link
for 90" screen you would need 3x3=9 screens.3 30" screens in portrait mode give you about a 54 inch diagonal