26 Comments

  • Operandi - Friday, October 16, 2020 - link

    Fix the graphs. Machine tag??? Change the names in the graphs to names that clearly represent the hardware. I don't want to look at a name on a graph and cross-reference a different table of information to figure out what I'm looking at.
  • Hifihedgehog - Friday, October 16, 2020 - link

    I agree. Please show the CPU and GPU name in the legend. The other platform information, such as product name, motherboard and manufacturer, can be added in a flyover window.
  • smbell - Friday, October 16, 2020 - link

    It would have been interesting to see a comparison of actual workstation graphics cards such as a low/mid/high end Quadro card, since that is the main purpose of the benchmark.
  • Duncan Macdonald - Friday, October 16, 2020 - link

    Why only Intel for the higher-powered systems - no Threadripper or high-end Ryzen?

    This also needs to be rerun next month with a Ryzen 5900X or 5950X when they become available.
  • ganeshts - Friday, October 16, 2020 - link

    As specified in the article, the systems are those that were processed recently in our testbeds. We will be using the benchmark in future reviews, and the scores here will serve as comparison points.
  • nerd1 - Friday, October 16, 2020 - link

    What the hell is the point of this review? "Latest in workstation GPU benchmark" and you're bringing a 6600K with a 2070?
  • phoenix_rizzen - Friday, October 16, 2020 - link

    It's a review of the new workstation GPU benchmark programs themselves.

    Not a review of GPU hardware.
  • ganeshts - Friday, October 16, 2020 - link

    This is NOT a *workstation GPU review*, but a report on *test-driving* a benchmark meant for those GPUs. Nowhere does SPEC mention that the benchmarks are specific to workstation GPUs. They can be processed even on machines with integrated GPUs. That is what this test-driving article shows.

    If we get some workstations in, we will be processing these benchmarks on those machines - at that time, the results presented in this article can serve as reference points.
  • ballsystemlord - Friday, October 16, 2020 - link

    @Ganesh Why no AMD 5000 series GPUs? That seems an obvious candidate for testing, whether it be the GPU itself or the benchmark.
  • ballsystemlord - Friday, October 16, 2020 - link

    Granted you've got no 3000 series Nvidia cards in there, but then AMD hasn't yet released their new GPUs for this year.
  • zamroni - Friday, October 16, 2020 - link

    Is AnandTech drunk? Integrated and gaming GPUs for a workstation benchmark article?
  • CiccioB - Sunday, October 18, 2020 - link

    What a limited view.
    How do you know if GPUs advertised for professional work are really worth their price if you do not have a comparison with the basic consumer ones?
  • Icehawk - Friday, October 16, 2020 - link

    Would have liked to see an explanation of TDR
  • CiccioB - Sunday, October 18, 2020 - link

    OK, this is the baseline comparison; now go with the big guns to see whether paying 4 times the price of the equivalent gaming GPU for a professional board is justified.
  • BedfordTim - Monday, October 19, 2020 - link

    That would require testing the support available as well as the hardware/software.
  • abufrejoval - Monday, October 19, 2020 - link

    Just like previous incarnations of this benchmark, Viewperf has become completely useless, because the software underneath uses technology that must be two decades old.

    More than 90% of all workloads are single-threaded; accordingly, on my 18-core/36-thread system, CPU load hardly ever reaches 5%. I've seen some rare cases going to 8%, and there is one tiny blip at 22%.

    On the (RTX 2080 Ti) GPU side, loads are a little higher, but you can see how the more complex/modern rendering mechanisms, which actually start to offload at least tiny bits of the display-list processing to the GPU (or at least eliminate some of the hidden lines), run fastest, while the least complex wireframe models, which are just an endless back and forth between the CPU and GPU about drawing a tiny little line, run slowest. Imagine driving a Ferrari between your home and the shopping mall, purchasing every single noodle for a family dinner, and you get the idea.

    Watching the benchmark is a window back into what passive frame buffers were like and how graphics cards that could at least draw full lines or even some shaded triangles changed the game. Even an iGPU should outperform all of the “benchmark” if only it were written with a modern API and the geometry data managed by the GPU.

    While my RTX 2080 Ti/E5-2696 v3 combo seems to provide a significant uplift over your 4K results, using this ‘benchmark’ as an actual benchmark is about the worst idea you might have. Don’t waste your time installing it; at least those un-zip jobs seem to use up to 50% of all CPU, but they still take a lot of time, even on NVMe storage. (A toy sketch of the per-line vs. batched submission patterns follows below.)
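
    To make that contrast concrete, here is a toy, self-contained C++ model of the two submission patterns described above: one submit call per wireframe line versus a single batched submit. Everything in it (FakeCommandQueue, the vertex counts) is made up for illustration; it is not viewperf, OpenGL, or driver code, just a sketch of how fixed per-call overhead comes to dominate when geometry is fed to the GPU one primitive at a time.

        // Toy model: per-line submission vs. one batched submission.
        // Not real graphics code -- FakeCommandQueue only stands in for a
        // driver command queue so the per-call overhead becomes visible.
        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        struct Vertex { float x, y, z; };

        struct FakeCommandQueue {
            std::vector<Vertex> stream;
            // Every submit() models validating state and writing one packet.
            void submit(const Vertex* v, std::size_t count) {
                stream.insert(stream.end(), v, v + count);
            }
        };

        int main() {
            const std::size_t lines = 1000000;          // wireframe edges
            std::vector<Vertex> geometry(2 * lines, Vertex{1.f, 2.f, 3.f});
            using clk = std::chrono::steady_clock;

            // Pattern A: one submit per line (the "noodle run").
            FakeCommandQueue perLine;
            auto t0 = clk::now();
            for (std::size_t i = 0; i < lines; ++i)
                perLine.submit(&geometry[2 * i], 2);
            auto t1 = clk::now();

            // Pattern B: the same geometry kept in one buffer, submitted once.
            FakeCommandQueue batched;
            auto t2 = clk::now();
            batched.submit(geometry.data(), geometry.size());
            auto t3 = clk::now();

            auto ms = [](auto a, auto b) {
                return std::chrono::duration<double, std::milli>(b - a).count();
            };
            std::printf("per-line submits:      %.2f ms\n", ms(t0, t1));
            std::printf("single batched submit: %.2f ms\n", ms(t2, t3));
        }

    Both patterns move the same vertex data, so any gap between the two timings is pure per-call overhead - the same kind of overhead a real driver pays, many times over, on every tiny immediate-mode draw.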
  • ganeshts - Monday, October 19, 2020 - link

    Have you compared it against the actual behavior of the applications like 3ds Max, Creo, etc.?

    The purpose of the benchmark is to replay traces that are generated from the programs used by professionals in the field. If you believe that those programs could be written better with a modern API and data management by the GPU, then you should take it up with Autodesk, Siemens NX, and the like, instead of blaming SPEC. On the other hand, if the behavior of those professional programs is different from that of what viewperf does, then it would be worth giving feedback to SPECgpc about that.
  • abufrejoval - Tuesday, October 20, 2020 - link

    Please Ganesh, use your powers of observation and some logic.

    If you look at the spinning models, you can observe that they are redrawn and spin fastest when they are using the highest-quality surfaces, with no wireframe showing, reflections from lights, even mirroring from other objects. Then they typically regress to what looks like Gouraud shading and sometimes even wireframe, and we can see that these are slower. That can't really be right, because a wireframe drawn from display-list data on the GPU would require *less* processing and would be spinning so fast you'd never see more than a blur.

    Looking at HW-Info on a secondary screen during execution, you can tell what's going on: the CPU is sending all the drawing commands a single CPU core can stuff into the GPU pipeline, but it's evidently not a full display list followed by a series of 'spin' (geometric translation) commands; it's a series of wireframe lines, one after another in one very busy loop. It shows in the power consumption of the GPU, too, which is very low while the GPU core is 80% active (reading the shared-memory command queue across the PCIe bus).

    On the shaded models, GPU power consumption goes up by a third or more, because there the GPU is actually using some of its acceleration resources to do the surfaces, bump maps, textures, reflections and a lot more geometry work, since the interaction between the CPU and GPU is now at the level of full display lists and geometry translation commands.

    The professional 3D software market serves its customers, and those aren't very interested in things happening faster than an engineer's mind. They are also much more interested in things being exact, a true digital twin, at least for the design (digital crash testing or finite-element optimizations would be another story).

    It's the validation of drivers, making sure there are no bugs resulting in a wrong display, that makes this market expensive. The times when graphics power was still a decisive issue, when it took hundreds of thousands or, earlier, millions of dollars to obtain the sort of graphics power a mobile phone today delivers without getting warm, are over.

    Please discuss with your colleagues, if you don’t believe me.

    If you compare what you see in these benchmarks and what you see in current Flight Simulators and car racing simulators like Project Cars 2, that should start you thinking.
  • ganeshts - Tuesday, October 20, 2020 - link

    I never said that I didn't believe you. Instead, I said that viewperf faithfully replays the sort of GPU load and CPU-GPU communication that an engineer actually triggers when working in that software. As I mentioned earlier, if you think those are not efficient (which may well be the case), you should take it up with Autodesk and the like, not SPECgpc.

    Have you also considered that the engineer working on these 3D models might actually have a workflow that makes the frequent CPU-GPU communication you observed necessary? It is not always just the final rendering flow typical of video/animation work; live user-triggered modifications and re-rendering might be what results in the trace being replayed.
  • abufrejoval - Tuesday, October 20, 2020 - link

    To moderate my criticism somewhat: If you know what you are looking for, ViewPerf may be useful to verify some alternatives.

    My main issue with it is that, judging by the comments, most people here are clearly misled into thinking that these results can be extrapolated to gaming. And that's simply not the case, because game engines have evolved to use much higher-level (but potentially less exact) graphics primitives to give you the eye candy and speed you just love to pay for. It's more monsters that games ask for, not that the second wart on the inside of the left pinky be exactly 0.35 mm in height or an exact color match. Engineers get fired when the first couple thousand cars off an assembly line need to be crushed because an engine mount is off by 1 mm.

    If, as you say, the benchmark actually replays traces from the applications and doesn't contain the core parts of the applications themselves, it may be even less useful as a benchmark, because it doesn't measure the application side.

    But if it has high fidelity with regard to the traces, what is obvious is that the applications all just use a single CPU core to feed the graphics interface. That's quite simply how things were done two decades ago, and it's only with Mantle/DX12 that GPU APIs seriously supported multi-threaded command submission (a schematic of that recording pattern follows below): clearly there is none of that in these traces, even if some actually use DX12. Either the applications are inherently still using a single thread to communicate with the GPU, or the capturing mechanism serializes the capture.

    I tend to believe the applications haven't really been refactored to multi-thread the GPU interface... because engineers don't play games, and for pure design a 2 GHz CPU and a passive graphics card at 2K has been good enough for the guys raised on paper and pencils.

    Case in point: the fact that the Siemens Unigraphics/NX benchmark runs ~4x faster on Vega, or even almost 2x as fast on a lowly Ryzen 7 4800U, than on an RTX 2070 clearly doesn't allow you to extrapolate performance *anywhere* except to this very special case of using this Siemens product.

    And here, most likely, Siemens and AMD simply agreed at one point to have the Siemens product support a higher-level API for rendering than what is used for all the others (Nvidia among them).

    I'd bet four cases of the best Belgian beer that, with a bit of API tuning, Nvidia 3xxx GPUs could match or even exceed the Vega 56 scores, simply by having the Ferrari shop for bags of noodles instead of individual ones.

    Siemens NX is 40 years old. I'm not sure a single line of code from the original 1978 version still survives, but I am also pretty sure it hasn't all been refactored into Rust or CUDA or OpenMP either.

    Just how well it supports dozens of general-purpose CPU cores, thousands of GPU cores, scale-out, Cerebras wafer-level accelerators or quantum computing... as I already said, I am betting some of the best beer in the world on single-threaded, 64-bit AVX everything, with some DX12 sugar coating at best: a Ryzen 7 4750G should do just fine, and a Ryzen 3 4350G won't do much worse, either.
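
    For reference, the multi-threaded recording pattern mentioned above - the part that is missing from these traces - looks schematically like the C++ sketch below. The "command list" here is just a vector of made-up DrawCmd structs, not a real D3D12 or Vulkan object; the point is only the shape of the pattern: several CPU threads record draw commands independently, and a single thread then submits the pre-recorded lists in order.

        // Schematic of multi-threaded command recording (DX12/Vulkan style).
        // DrawCmd, CommandList, and recordSlice are placeholders for this
        // sketch, not part of any real graphics API.
        #include <cstddef>
        #include <cstdio>
        #include <functional>
        #include <thread>
        #include <vector>

        struct DrawCmd { unsigned firstVertex, vertexCount; };
        using CommandList = std::vector<DrawCmd>;

        // Each worker records commands for its slice of the scene into its
        // own list; no shared state, so no locking while recording.
        void recordSlice(CommandList& list, unsigned begin, unsigned end) {
            for (unsigned v = begin; v < end; v += 2)   // one line per 2 vertices
                list.push_back(DrawCmd{v, 2});
        }

        int main() {
            const unsigned totalVertices = 8000000;
            const unsigned workers = 4;
            const unsigned slice = totalVertices / workers;

            std::vector<CommandList> lists(workers);
            std::vector<std::thread> pool;
            for (unsigned w = 0; w < workers; ++w)
                pool.emplace_back(recordSlice, std::ref(lists[w]),
                                  w * slice, (w + 1) * slice);
            for (auto& t : pool) t.join();

            // Single submission point, analogous to ExecuteCommandLists or
            // vkQueueSubmit: the queue consumes the pre-recorded lists in order.
            std::size_t submitted = 0;
            for (const auto& list : lists) submitted += list.size();
            std::printf("recorded %zu draw commands on %u threads\n",
                        submitted, workers);
        }

    A trace replay that showed this pattern would light up several cores during submission; the single-core loads reported above suggest the captured applications (or the capture mechanism itself) never do this.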
  • CiccioB - Wednesday, October 21, 2020 - link

    You probably have not understood the meaning of this bench as a whole: it is not a synthetic benchmark like the ones created to get the most out of your GPU.
    It is a benchmark created to mimic the behavior of the currently available professional applications, to see how well the GPU works with them.

    If I use a 3D modeler that relies on heavy CPU-GPU interaction to draw the wireframe view, I really do not care if Tomb Raider or Flight Simulator uses another technique to get the polygons rendered faster.
    I am using this 3D modeler, and I want to know on which HW it runs better, or whether spending 4 times as much for a professional board gives me some more productivity advantage on top of the on-site support.

    What you are saying is just that because Control runs better than Battlefield V, the latter is meaningless as a benchmark. Or that because you can fly with a faster framerate in Just Cause, Flight Simulator is not a match.

    This is a bench to test real application behavior. If things can be done better, that's a matter for the programmer coding the app, not for the benchmark.

    As you seem to have so much knowledge about how to do things, you can apply to some of those software houses and teach them the best way to render wireframe lines with real-time modification in action, and earn a lot of money for that.
  • rafalsz - Tuesday, October 20, 2020 - link

    How come there are no workstation GPUs in a workstation GPU benchmark test?
  • colonelclaw - Wednesday, October 28, 2020 - link

    Good question. And furthermore, why does a brand-new benchmark use a 5-year-old version of 3ds Max (currently the world's most widely used pro 3D app)?
  • alpha754293 - Thursday, November 12, 2020 - link

    It would be interesting to see how the results compare against several generations of workstation level GPUs vs. the desktop/consumer variants (e.g. going back, for example, as far as Quadro K6000).
  • ExoTech - Tuesday, April 20, 2021 - link

    Because it was asked for in the comments a lot. My 5900X + RTX 2070 scored:

    3dsmax-07 composite score: 80.11
    catia-06 composite score: 53.52
    creo-03 composite score: 89.04
    energy-03 composite score: 22.68
    maya-06 composite score: 331.21
    medical-03 composite score: 27.68
    snx-04 composite score: 17.91
    solidworks-05 composite score: 213.61
  • dkimlaw - Monday, May 2, 2022 - link

    When there is an accident, you see your life flash by in a thousandth of a second. Car accidents cannot be foreseen, BUT THEY CAN BE PREVENTED. People who have survived terrible accidents say that when you are involved in one, there is a feeling that time has stopped.

    https://abogadosdeaccidentesahora.com/locaciones/a...
