Performance Testing
Our performance testing consisted of pitting the ATI Radeon 4770 against the eVGA 9800GTX+ to gauge which card made the best use of its respective GPGPU technology in terms of performance and image quality. To do this, we transcoded each test video clip using PowerDirector 7, MediaShow Espresso, and Avivo HD and recorded the overall transcoding time, average CPU usage, and average GPU usage (for the 4770 only, because GPU-Z doesn't record NVIDIA GPU usage). We also took screenshots at a certain point in each clip to judge the image quality of the output video files.
PowerDirector 7 Benchmarks


The Radeon 4770 edged out the 9800GTX+ by a few seconds in most of our transcoding times, but the two were pretty much dead even when we compared CPU usage. As with our previous dealings with PowerDirector 7, this application will push the CPU to its limits: usage pegged at 100 percent on every clip until the last one was transcoded. That last clip was our only H.264 output file, so we'll have to do some research to see what caused this anomaly in CPU usage. We also didn't notice much difference in how PowerDirector 7 utilized ATI Stream and CUDA beyond GPU acceleration during video effects transcoding.
MediaShow Expresso Benchmarks


MediaShow Espresso showcased some very fast transcoding times using both our NVIDIA and ATI graphics solutions. Enabling either ATI Stream or CUDA provided a decent boost in overall transcoding times, but one of the first real differences I've seen between these two GPGPU technologies was their CPU usage. ATI Stream averaged between 47 and 66 percent, while CUDA had the CPU hovering between 84 and 88 percent. Those differences were noticeable in the transcoding scores, as ATI fell behind a bit in a few of the Espresso benchmarks.
ATI Avivo HD Benchmarks
For the Avivo HD benchmark portion of this review, we ran into a few snags that made us adjust our review format a bit. We used a couple of different video clips that were compatible with Avivo HD. And because Avivo HD is integrated into ATI Catalyst, we couldn't use it with our 9800GTX+ graphics card, so we tested it against a CPU-based transcoder called HandBrake instead. We think pairing these two free applications will give our readers a good overview of what's available in the open source community as well as the free offering directly from ATI.


Avivo HD was the hands-down winner in all of our transcoding benchmarks. Its CPU usage results were also very impressive, as its highest usage point barely went over 50 percent. This means you could transcode high-definition video and still run other tasks like Photoshop, web browsing, and e-mail, or even possibly watch another movie at the same time. While Avivo HD isn't packed with tons of features or options for customizing output video, our test results suggest that it is an excellent transcoder compared to this popular CPU-based alternative.
Let’s move on to the second half of our performance testing that deals with the image quality of our outputted videos.



Please change the tile.
Your article is not about a comparison of Stream and CUDA performance, it is the difference between two software implementations utilising Stream and CUDA.
These technologies allow you to parallelise your algorithms, to imply that one technology performs ,as you essentially say, ‘better quality maths’ than the other is ignorant.
Please do not misdirect readers like this.
Regards.
Joe Bloggs
Please change you word.
Your comment is not about a reply to the article, it is a quantification of how butthurt you are.
These new breakthrows allow us to see how badly you are spell ,as you essentially try to use ‘larger words’ but not good at English.
Please do not obfuscate readers’ thoughtings like this.
Regards.
Bloe Joggs
damn dude, look at your own english, it’s absolutely dreadful!
Ya dude, your an idiot, your article is misleading. For sure!
Peace
Hater Bater Fuck Face
SO MUCH HATE !
You are comparing two cards, one is nearly a year older than the other one, its elementary that the new one is going to win. This review is biased
Why are you not comparing the same frame in the outputs? How can you do a comparison of different frames and make a decision on differences in quality?
My personal gaming research team has found nVIDIA’s CUDA technology to be superior, but they compared current GPUs, not GPUs with a manufacturing time gap.
This is a very interesting article to contribute to my PC Hardware class, as I’m currently in a Network Admin program in Vermont. Please keep up the good work guys I love your site, and you have been very helpful over the last several semesters.
For Bitcoin miners, AMD GPUs are faster than Nvidia GPUs!
Why?
Firstly, AMD designs GPUs with many simple ALUs/shaders (VLIW design) that run at a relatively low frequency clock (typically 1120-3200 ALUs at 625-900 MHz), whereas Nvidia’s microarchitecture consists of fewer more complex ALUs and tries to compensate with a higher shader clock (typically 448-1024 ALUs at 1150-1544 MHz). Because of this VLIW vs. non-VLIW difference, Nvidia uses up more square millimeters of die space per ALU, hence can pack fewer of them per chip, and they hit the frequency wall sooner than AMD which prevents them from increasing the clock high enough to match or surpass AMD’s performance. This translates to a raw ALU performance advantage for AMD:
An old AMD Radeon HD 6990: 3072 ALUs x 830 MHz = 2550 billion 32-bit instructions per second
A new Nvidia GTX 590: 1024 ALUs x 1214 MHz = 1243 billion 32-bit instructions per second
This approximate 2x-3x performance difference exists across the entire range of AMD and Nvidia GPUs. It is very visible in all ALU-bound GPGPU workloads such as Bitcoin, password bruteforcers, etc.
Secondly, another difference favoring Bitcoin mining on AMD GPUs instead of Nvidia’s is that the mining algorithm is based on SHA-256, which makes heavy use of the 32-bit integer right rotate operation. This operation can be implemented as a single hardware instruction on AMD GPUs (BIT_ALIGN_INT), but requires three separate hardware instructions to be emulated on Nvidia GPUs (2 shifts + 1 add). This alone gives AMD another 1.7x performance advantage (~1900 instructions instead of ~3250 to execute the SHA-256 compression function).
Combined together, these 2 factors make AMD GPUs overall 3x-5x faster when mining Bitcoins!
Fucking plagerism. Copy/paste from some other source, no citation or credit. Your education should be shredded and flushed down the toilet. Here is where you copied it from for people who want to read from someone with actual knowledge and not just ctrl+c —> ctrl+v.
https://en.bitcoin.it/wiki/Why_a_GPU_mines_faster_than_a_CPU
You plagerized me. I complained about someone else who copied something and posted a link. All you did was change the link. You are a loser and the worst scum on the internet.
Why are we bitching about plagiarism? If i wanted to make sure his info was correct i would’ve looked it up myself. I could care less if it was “plagiarized” as long as the information was correct.