It seems all the tech community is talking about today is BAPCo and its benchmarking suite, SYSmark. A new version, SYSmark 2012, was released just recently, and yesterday we found out that AMD, NVIDIA and VIA have all dropped their support of the "Business Applications Performance Corporation". Obviously those companies have a beef with the benchmark as it stands, yet one company still stands behind the test: Intel.
Everyone you know is posting about it. My Twitter feed "asplode" with comments like these:
AMD quits BAPCo, says SYSmark is nutso. Nvidia and VIA, they say, also. http://bit.ly/kHvKux
AMD: Voting For Openness: In order to get a better understanding of AMD’s press release earlier concerning BAPCO… http://bit.ly/kNtKkj
Ooh, BapCo drama.
Why Legit Reviews won’t use the latest BAPCo benchmark: http://t.co/G0VHgCo @LegitReviews #BAPCo
Even PC Perspective posted on this drama yesterday afternoon saying: "The disputes centered mostly over the release of SYSmark 2012. For years various members have been complaining about various aspects of the product which they allege Intel strikes down and ignores while designing each version. One major complaint is the lack of reporting on the computer’s GPU performance which is quickly becoming beyond relevant to an actual system’s overall performance. With NVIDIA, AMD, and VIA gone from the consortium, Intel is pretty much left alone in the company: now officially."
Obviously this was the topic swirling through my head while cutting the grass this morning; so thanks for that, everyone. My question is this: does it really matter, and how is this any different from the way it has been for YEARS? The cynical side of me says that AMD, NVIDIA and VIA all dropped out because each company’s particular products aren’t stacking up as well as Intel’s when it comes to the total resulting score. Intel makes the world’s fastest CPUs (I don’t think anyone with a brain will dispute that), and as such it is going to have the edge on benchmarks that test the CPU.
We recently reviewed the AMD Llano-based Sabine platform, and in CPU-centric tests like SiSoft Sandra, TrueCrypt and 7-Zip the AMD APU is noticeably slower. But AMD isn’t sending out press releases and posting blogs about how those benchmarks fail to show the true performance of a system as the end user will see it. And Intel isn’t publicly questioning why we used games like Far Cry 2 and Just Cause 2 to show the AMD APU dominating there. Why? Because these tests are part of a suite of benchmarks we use to show the overall performance of a system. They are tools that competent reviewers wield to explain to readers why certain hardware behaves a certain way in certain circumstances.
A single benchmark result from our recent AMD Sabine launch review
On the other hand, the side of me that truly wants openness and fairness in our industry fully supports AMD, NVIDIA and VIA for dropping out of BAPCo, based on complaints about Intel filtering benchmark workloads and skewing the weighting of results. AMD’s Chief Marketing Officer Nigel Dessau said in a blog post today that "We weren’t able to effect positive change within BAPCo, and the resulting benchmark continues to distort workload performance and offers even less transparency to end users. Once again, BAPCo chose to ignore the opportunity to promote openness and transparency." Dessau details three key points in his argument:
- While SM2012 is marketed as rating performance using 18 applications and 390 measurements, the reality is that only 7 applications and less than 10 percent of the total measurements dominate the overall score. So a small class of operations across the entire benchmark influences the overall score.
- In fact, a relatively large proportion of the SM2012 score is based on system performance rated during optical character recognition (OCR) and file compression activities − things an average user will rarely if ever do.
- And SM2012 doesn’t represent the evolution of computer processing and how that evolution is influencing average users’ experience. SM2012 focuses only on the serial processing performance of the CPU, and virtually ignores the parallel processing performance of the GPU. In particular, SM2012 scores do not take into account GPU-accelerated applications that are widely used in today’s business environments.
Other than wondering what a "relatively large proportion" actually IS, I think these are fair arguments. I have done OCR on my PC exactly… zero times in the last 12 months, though I do compress things all the time. That last point about SM2012 not "representing the evolution of computer processing" is a bit of a wash, though, as I mentioned above: it is just ONE tool in a series of tools reviewers can utilize.
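To make Dessau’s weighting complaint concrete, consider how a composite score behaves when a couple of subtests carry most of the weight. The sketch below is purely illustrative: the weighted geometric mean, the subtest names and the weights are all my own invented stand-ins, not SYSmark 2012’s actual workloads or scoring formula.

from math import prod

# Toy composite score: weighted geometric mean of normalized subtest
# results (1.00 = baseline). Subtests and weights are invented for
# illustration; they are NOT SYSmark 2012's actual internals.
def composite_score(results, weights):
    """Weighted geometric mean of per-subtest scores."""
    total = sum(weights.values())
    return prod(results[name] ** (weights[name] / total) for name in results)

# System A: slightly faster in CPU-bound tests, weak in the GPU-bound one.
# System B: baseline CPU-bound results, much faster GPU-bound result.
system_a = {"ocr": 1.20, "compression": 1.15, "web": 1.00, "3d": 0.70}
system_b = {"ocr": 1.00, "compression": 1.00, "web": 1.00, "3d": 1.60}

# When OCR and compression carry 80% of the weight, A "wins" overall;
# equal weights flip the result.
skewed   = {"ocr": 40, "compression": 40, "web": 15, "3d": 5}
balanced = {"ocr": 25, "compression": 25, "web": 25, "3d": 25}

for label, w in (("skewed", skewed), ("balanced", balanced)):
    print(f"{label:8s} A={composite_score(system_a, w):.3f} "
          f"B={composite_score(system_b, w):.3f}")

Run as-is, the skewed weighting crowns system A the overall winner even though system B is more than twice as fast in the lone GPU-bound test; equal weights reverse the verdict. That is the heart of the transparency complaint: a published list of 18 applications tells you very little if the weights behind the single final number are not disclosed.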
If Intel is in any way pushing the benchmark in a direction that tilts it toward Intel’s favor alone, then yes, I agree the benchmark is a piece of crap and shouldn’t be used. For a "system" benchmark to not weigh the performance of a graphics card at all seems ludicrous. If 60% of the score is based on optical character recognition and that isn’t expressly divulged, there is another knock against it. To quote Charlie’s SemiAccurate article: "In the end, BAPCO and SYSmark 2012 is now an official shame, not just one in name only. Anyone using it seriously should be immediately suspect for both motives and technical awareness."
In my view the real problem here isn’t the benchmark alone; it’s that too many people in the industry are really freaking lazy, including OEMs, governments and educational outlets that buy parts from Intel/AMD/NVIDIA/VIA/etc., and "reviewers" that run a single test on a notebook and call it a day. The idea of actually spending a considerable amount of time on research, testing and comparisons is just too much for some of these people to wrap their minds around, so installing a test like SYSmark 2012, hitting "GO" and getting a single number at the end is all they can grasp. And BAPCo’s tests have long been among the most widely accepted of those "touch and go" benchmarks thanks to industry-wide support… which BAPCo no longer has.
So the problem isn’t just that this particular benchmark is one-sided; it’s that this particular benchmark is one-sided AND tends to be an industry standard. Rather than bad-mouthing this benchmark suite, which we are all free to do, I would instead recommend a campaign of education on which benchmarks are good, which aspects of the user experience each one tests and how you can best understand their results. That is something we at PC Perspective do every day, as do most of our contemporaries. If someone wants to use SYSmark 2012 in their benchmark suite (which we are not doing, FYI), then as long as they include other tests and validate the results with some educational words for the reader, I have no problem with that.
For the slightly uneducated, however, who have come to depend solely on BAPCo’s suite of tests as the most fair and open benchmark around, it is time to finally realize that that facade has faded.