Early testing for higher-end GPUs
We tested a handful of AMD and NVIDIA graphics cards in the brand-new Rise of the Tomb Raider, released this week!
UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game and a new AMD driver; I've also included some SLI and CrossFire results.
I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.
Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and applies them to the visual quality and character design brought in with the reboot of the series a couple of years back. The result is a PC game that looks stunning at any resolution, and even more so in 4K, while pushing your hardware to its limits. For single GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.
In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and I'll offer up my view on whether each hardware vendor is living up to expectations.
Image Quality Settings Discussion
First, let's talk a bit about visuals, image quality settings and the dreaded topic of NVIDIA GameWorks. Unlike the 2013 Tomb Raider title, Rise of the Tomb Raider is part of the NVIDIA "The Way It's Meant To Be Played" program and implements GameWorks to some capacity.
As far as I can tell from published blog posts by NVIDIA, the only feature that RoTR implements from the GameWorks library is HBAO+. Here is how NVIDIA describes the feature:
NVIDIA HBAO+ adds realistic Ambient Occlusion shadowing around objects and surfaces, with higher visual fidelity compared to previous real-time AO techniques. HBAO+ adds to the shadows, which adds definition to items in a scene, dramatically enhancing the image quality. HBAO+ is a super-efficient method of modeling occlusion shadows, and the performance hit is negligible when compared to other Ambient Occlusion implementations.
The in-game setting allows for options of Off, On and HBAO+ on all hardware. To be quite frank, any kind of ambient occlusion is hard to detect in a game while in motion, though the differences in still images are more noticeable. RoTR is perhaps the BEST implementation of AO that I have seen in a shipping game and, thanks to the large, open, variably lit environments it takes place in, it seems to be a poster child for the lighting technology.
That being said, in our testing for this story I set Ambient Occlusion to "On" rather than HBAO+. Why? Mainly to help dispel the idea that the performance of AMD GPUs is being hindered by the NVIDIA GameWorks software platform. I'm sure this won't silence all of the conspiracy theorists, but hopefully it will help.
Other than that, we went with the Very High quality preset, which turns out to be very strenuous on graphics hardware. If you don't have a GTX 980 or R9 390 GPU (or better), chances are good you'll have to step down some from that even at 2560×1440 or 1920×1080 to get playable and consistent frame times. Our graphs on the following pages will demonstrate that point.
Testing Setup
For this short sample of performance we are comparing six different graphics cards at matching price points from AMD and NVIDIA.
- $650
- NVIDIA GeForce GTX 980 Ti 6GB
- AMD Radeon R9 Fury X 4GB
- $500
- NVIDIA GeForce GTX 980 4GB
- AMD Radeon R9 Nano 4GB
- $350
- NVIDIA GeForce GTX 970 4GB
- AMD Radeon R9 390 8GB
I tested in an early part of the Syria campaign at both 2560×1440 and 3840×2160 resolutions, both of which were hard on even the most expensive cards in the comparison. Will the 6GB vs 4GB frame buffer gap help the GTX 980 Ti in any particular areas? How will the R9 390 with 8GB of memory compare to the GTX 970, whose 4GB configuration has long been under attack?
This also marks the first use of our updated GPU testbed hardware, seen in the photo above.
PC Perspective GPU Testbed

| Component | Hardware |
|---|---|
| Processor | Intel Core i7-5960X Haswell-E |
| Motherboard | ASUS Rampage V Extreme X99 |
| Memory | G.Skill Ripjaws 16GB DDR4-3200 |
| Storage | OCZ Agility 4 256GB (OS), ADATA SP610 500GB (games) |
| Power Supply | Corsair AX1500i 1500 watt |
| OS | Windows 10 x64 |
$39.42 at Green Man Gaming with code 27RISE-OFTOMB-RAIDER 🙂
GMG is a great alternative to the much-celebrated Steam site for purchasing games, as is GOG.com (from the original "Good Old Games", though these days the games aren't necessarily "old"). Steam rarely offers discounts on pre-release and new titles. There are other game etailers taking advantage of digital distribution to sell games at a discount as well, so it pays to shop around!
USD$38.44 on Kinguin.
I’m confused about the frame time spikes on the Fury X and R9 Nano
you should look at VRAM usage
Yeah, I'll try that this weekend as well. Note though that the GTX 980 has 4GB of memory and the GTX 970 has 4GB (more or less), and they don't exhibit the same behavior.
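One low-effort way to act on this suggestion, at least for the GeForce cards, is to log VRAM usage once per second while the benchmark runs and line it up against the frame time plot afterwards. This is just a sketch using NVIDIA's NVML bindings (the nvidia-ml-py / pynvml package); it won't see the Radeon cards, and the one-second interval and output format are arbitrary choices.

```python
# Log GPU memory usage once per second via NVML (NVIDIA cards only).
# Requires the nvidia-ml-py package (imported as pynvml).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"{time.time():.0f}, {mem.used / 1024**2:.0f} MiB used "
              f"of {mem.total / 1024**2:.0f} MiB")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```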
My bet is compression. That's the main difference between the 390 architecture and Fiji. Try a Tonga card as well.
The compression is a very low level effect, unless it has to do some prep work when it brings new data into local memory. This frame time spike seems to hit about every 6 seconds or so. If there was an issue with the color or other compression, I would expect a more even effect. Five or six seconds is a long time from the graphics card's perspective. Given the time scales involved, I would expect some memory management issue or perhaps some issue when bringing data in from system memory. It is unclear why this would happen with Fiji-based devices but not Hawaii-based devices. They have the same memory sizes and the same driver.

There is a possibility it could be some effect due to HBM characteristics (different latency characteristics and such), but that also seems unlikely given the time span. I would expect latency characteristics to cause effects within the processing of a single frame, not every couple hundred frames. They may not be able to figure it out without getting help from AMD. Tonga performance characteristics will be interesting since it is GCN 1.2 also. That might tell us whether it is related to the GCN revision, or if it is related to HBM, but it will not give us the exact cause. It could still be some kind of driver issue specific to things done on HBM cards, but not the HBM itself.
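If anyone wants to check the "about every 6 seconds" observation against raw data, here is a minimal sketch that scans a per-frame frame time log (one value per line, in milliseconds, e.g. exported from FRAPS or PresentMon; the filename is a placeholder) and reports how regularly the spikes recur:

```python
# Find frame-time spikes and measure the interval between them.
import csv
import statistics

with open("frametimes.csv", newline="") as f:
    frame_ms = [float(row[0]) for row in csv.reader(f) if row]

median = statistics.median(frame_ms)
threshold = 2.5 * median          # call anything 2.5x the median a spike

elapsed = 0.0
spike_times = []
for ms in frame_ms:
    elapsed += ms / 1000.0        # running game time in seconds
    if ms > threshold:
        spike_times.append(elapsed)

gaps = [b - a for a, b in zip(spike_times, spike_times[1:])]
if gaps:
    print(f"{len(spike_times)} spikes, mean gap {statistics.mean(gaps):.1f} s")
else:
    print("no recurring spikes found")
```

A tight cluster of gap values near 6 seconds would point to something periodic (memory management, streaming), while random gaps would point elsewhere.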
Considering latency is where HBM should shine, I doubt it's the culprit. Testing Tonga would make or break either possibility. Tonga has the same architecture as Fiji (colour compression) but uses GDDR5.
I said that it seems unlikely. It does have different characteristics, though.
Latency is a big problem right now for HBM.
I don't know if it is a big problem. It still has buffers to store the currently opened row, like any DRAM device. Accesses within the same row will have low latency. There is higher latency when you need to close the current row and open another. This is the same with any DRAM device, but due to the width of the bus, a row can be read very quickly with HBM, even though it may be larger. I would think that this would fall into the same category as the other effects. With so much data to be read per frame, any latency issues should show up in every frame, reducing the overall frame rate rather than introducing a stutter every couple hundred frames.
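To put some very rough numbers on that argument (these are assumed, ballpark DRAM timings, not measurements of HBM specifically), even a worst-case row miss is many orders of magnitude too short to explain a spike that recurs every few seconds:

```python
# Illustrative only: assumed DRAM access latencies vs. the ~6 s stutter period.
ROW_HIT_NS = 15          # assumed: access within an already-open row
ROW_MISS_NS = 45         # assumed: precharge + activate + read

frame_budget_ns = 16.7e6     # one frame at 60 FPS
stutter_period_ns = 6e9      # the observed ~6 second spike interval

print(f"row miss as a fraction of one frame     : {ROW_MISS_NS / frame_budget_ns:.1e}")
print(f"row miss as a fraction of stutter period: {ROW_MISS_NS / stutter_period_ns:.1e}")
```

Latency effects that small would smear out across every frame rather than show up as a once-every-few-seconds hitch, which is the point being made above.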
And I don't see why you would expect compression to cause issues in a more steady manner. I view it like vsync. If it can only process at 5/6 the speed of memory, it would hiccup every 6 seconds, especially if on the same clock multiplier.
The compression that was added with GCN 1.2 is lossless delta color compression. This isn't a very high compression ratio, so the hardware to do the work is not that complicated. I doubt that the compression and decompression hardware on chip would be slower than memory. Also, how much data do you think the GPU has to read for every frame? You are generating a frame sixty times a second (hopefully) and the GPU has to read a massive amount of data for every frame. If the decompression system couldn't keep up, it would increase the time for all frames. It wouldn't just hiccup once every few hundred frames.
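A rough back-of-envelope illustrates the point (the per-frame traffic figure below is an illustrative assumption, not a measurement): if the compression path could not keep up with memory, the penalty would land on every frame, because every frame already moves an enormous amount of data.

```python
# Back-of-envelope: per-frame memory traffic vs. memory bandwidth.
bytes_per_pixel = 4                    # 32-bit color target
pixels_4k = 3840 * 2160
framebuffer_mb = bytes_per_pixel * pixels_4k / 1e6
print(f"Single 4K color target: ~{framebuffer_mb:.0f} MB")      # ~33 MB

traffic_per_frame_gb = 5               # assumed: textures + render targets per frame
fps = 60
print(f"Sustained traffic: ~{traffic_per_frame_gb * fps} GB/s "
      f"(Fury X peak bandwidth is 512 GB/s)")
```

In other words, a throughput bottleneck scales with every frame's workload; it does not sit idle for six seconds and then bite once.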
I think you're on to something.
I can't find where now, but I remember seeing somewhere yesterday that it required a 3GB AMD card but only a 1GB nVidia card.
(I thought it was part of the official support post on the Steam forum, but unless it's been edited, it wasn't.) I don't know how much validity there was to this, but it would be interesting to find out for sure.
AMD just released the Tomb Raider optimized driver: http://support.amd.com/en-us/kb-articles/Pages/AMD-Radeon-Software-Crimson-Edition-16.1.1-Hotfix-Release-Notes.aspx?sf20134422=1
time to redo those benchmarks
So how is the frametime measured? I guess it's in vsync-off mode, or?
The thing is that I always play with vsync on, as I can't stand the tearing effect, and I always try to set things up so that I never drop below 60. But if these spikes exist even in that mode, it means I have to lower the quality/effects so much that in practice I only use about 50% of what the card really can do to stay above 60 FPS all the time.
It's hugely annoying to have a frame skip every 5 seconds.
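For what it's worth, here is a quick illustration of why a single slow frame is so visible with vsync on (this assumes a simple double-buffered 60 Hz display and ignores driver frame queues): a late frame slips to the next refresh boundary, so a 40 ms frame occupies the screen for three refresh intervals.

```python
# With vsync, a frame is displayed for a whole number of refresh intervals.
import math

REFRESH_MS = 1000 / 60              # 16.67 ms per refresh at 60 Hz

def displayed_ms(render_ms: float) -> float:
    """Screen time of a frame, rounded up to whole refresh intervals."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

for render in (14.0, 18.0, 40.0):
    print(f"{render:5.1f} ms render -> shown for {displayed_ms(render):.1f} ms")
```

So even occasional 30-40 ms spikes turn into visible skips at 60 Hz, which is exactly why people end up lowering settings well below what the average frame rate alone would suggest.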
Any difference running the game on Windows 7 vs. Windows 10?
Also, any idea when they will patch the game for DX12?
Heyyo,
Tbh I don't think Square Enix, Crystal Dynamics, or Nixxes (the devs who ported it to PC) ever promised a DirectX 12 patch for the PC version of the game… so unless that changes soon? I doubt it.
I've read that the Xbox One uses some DirectX 12-ish features (but doesn't actually use DX12); the XBOne has had DirectX 12-like features for quite some time.
AFAIK only the new Hitman game has promised a DX12 update.
Thanks for the info.
I was hoping for DX 12, but it looks like I’ll stick with Windows 7 on my gaming PC then.
I seem to remember some story about it being DX12 as well. I am running it on Win10 and it's had pretty much zero problems, except the game has crashed once during gameplay and once when loading my save.
Consoles have pretty much used a closer-to-hardware API for a while. Even Windows at one point back in the '90s used one as well, but having a software layer in between makes the machine much more stable, as 9x machines would just BSOD.
Since TR is very CPU efficient and looks great already, I don't see what the point of doing DX12 would be. That would cost money to do, and create issues for some people since DX12 is pretty new and obviously won't be flawless.
TR is basically the poster child for why NOT to bother with DX12.
Don't forget, when a game is designed WITHOUT DX12 in mind, porting to DX12 would only optimize a few things (like CPU usage). There's a lot DX12 can do aside from that, but you need to design the game mostly from scratch to do DX12 right.
But they did actually bother with DX12. In the game folder, a dll file relating to ‘D3D12’ was found by PCPer’s German neighbour: http://www.pcgameshardware.de/Rise-of-the-Tomb-Raider-Spiel-54451/Specials/Grafikkarten-Benchmarks-1184288/
This doesn’t guarantee a DX12 patch of course, but if they never considered that, these files wouldn’t be present.
Have you actually played the game? I hit frequency CPU bottlenecks on a 4.2GHz 4670K. Yes, it's across all cores, so the code has some nice multithreading.
But it's nothing like the old game that could basically run on a Core 2 Duo. It's very CPU heavy this time 'round.
frequent*
This. I still run a 2600K, and I see frequent frame drops and stuttering at 4.2, but far fewer at 4.4. Running the game at modified high presets on a 780 at 1080p (pixel doubling on a 4K screen, FWIW).
Found this:
“Originally Posted by Szaby59 View Post
There was a stream where they mentioned currently the DX11 code runs better, until they can’t achieve better user experience with DX12 they will not patch it to the game…”
http://forums.guru3d.com/showthread.php?t=405497&page=4
So DX12 may be possible later.
Definitely a game I’ll get in Steam’s summer sale. Judging by the results for 1440 and the cards used, my guesstimation says I’ll get decent performance running at high settings at 1080 using my 7970.
You guys still don’t have a 390X to test with, nor did you ever review such a card.
You are correct sir
If you want results for the 390X, just look at a 290X 8GB card. They are pretty much identical except the 390X has a small clock bump.
This is another game that highlights how good the R9 390 is
The thing is, though, the tests look a little different when that GTX 980 is running 1530/8000.
Also, if you have "only" 8GB of system RAM and a GTX 980, very-high textures cause slowdowns, as the game will use up both your VRAM and system RAM to the max and start swapping to the SSD.
If you are going to run with really high resolution textures, then you will need more system RAM in addition to more graphics memory. If you are running an 8GB graphics card, then you should probably be running 16GB of system memory. Almost everything in graphics memory goes through system memory on the way there. It seems ridiculous to try to run an 8GB card on 4GB of system memory or something.
Looks like the R9 Fury and Nano keep up pretty well with Nvidia, but this article shows that the drawback to choosing the AMD option is the driver support.
Isn’t it always?
Also, @Ryan. You would serve the public well by retesting this (maybe not to the same extent) in the level where you are in Jacob’s town. I feel like the 390, 980 and 970 will have a very different relationship…
Performance and at-release driver support look about as expected. The R9 390 results definitely stood out; price/performance looks really nice on it if someone needed a GPU right now.
Not sure if I'll buy it now or wait to play it after the next round of GPUs are released.
I have a GTX 980 using the Very High preset @ 1080. I've only noticed a handful of times it dropped below 60 FPS. GPU load is usually 70-90+%, temp only hitting 71-72C. (EVGA GTX 980 Superclocked ACX 2.0)
Not sure why you guys wanted to do such testing with no AMD drivers for this game. Keep it up.
The problem I think with "no AMD drivers" is: when will the AMD drivers be out? That is the question. I am sure they will revisit it when they finally come out.
It is ridiculous to have to optimize the drivers for every game. Hopefully DX12 will put an end to that eventually.
Nvidia pays; an Nvidia title means early access for Nvidia for driver support.
GURU3D confirmed the AMD issues are related to the CPU core count. If more than 1 CPU core is active, issues come up.
This looks like a game bug; maybe AMD can hack around it in the drivers to make it work properly.
Um, wait a sec. If it's a game bug, then why doesn't it show up on the Nvidia side? If it had to do with core count, then Nvidia would show the same problem. The claim doesn't really make sense.
It doesn't even show up on other AMD cards. You can have weird race conditions pop up with multiple CPUs, but it seems unlikely that such a bug would only affect Fury cards and no others.
Then it's not a bug in the game, it's a bug in the AMD driver that isn't reacting well to something, so it's really up to AMD to fix.
You sound so certain.
Nixxes put a statement out on the 29th on Steam saying it is investigating low performance issues.
I'm going to side with the people who made the port rather than some poster with no cred.
Ryan,
Would it not be better to test "game reviews" on minimum and recommended setups? Testing these games on high-end systems really doesn't give much insight into what to expect for the majority. I mean, using a system with a base of 16GB is rather silly given people will likely be in the range of 6GB to 8GB as suggested.
These review setups look like they cost 3 to 4 times more than an Xbox. So do the added visuals justify that expense?
OCZ 256 SSD = $193
ADATA 512 SSD = $184
16GB Ram = $130
Motherboard = $479
i7 5960X = $1,095
AX 1500i PSU = $392
Windows 10 = $119
Total = $2592
Still missing: case, cooler, fans, keyboard & mouse, plus the cost of the GPUs that were tested.
X-Box One GoW:UE bundle = $301
RotTR = $39
Total = $340
First of all, that's a stupid comparison. It's a test bed that is meant to eliminate all bottlenecks when comparing GPUs. You probably only need about $1300, if not less, even with a 980 or 390X, to run RotTR at high settings at 1080p all the time (unlike the Xbone, which drops resolution for cut scenes).
And you get to enjoy the game without the massive slowdowns during combat, tearing and capped 30 FPS of the Xbone.
The problem with what you propose: every game has different minimum and recommended setups, and the time and money involved to get that hardware to test with isn't worth it. The reason they use a super high-end CPU and board in a GPU test bench is to eliminate the CPU and memory as bottlenecks as much as possible, so the GPU being tested is the weakest link, not the CPU.
The problem with your proposal is that not everyone is running said hardware, and test benches like these are being used by less than 1% of people. Just look at the Steam hardware survey.
Only 15% of users have 16GB of RAM. The majority of users are on 4GB (21%) or 8GB (31%). Almost as many users still have 3GB as have 12GB or above.
0.3% of people use 8-core CPUs. The majority are on 2 cores (48%) or 4 cores (44%).
8% have a GPU with 4GB or higher. The majority are running 2GB.
No, this doesn't even reflect the real-world case. What I was asking for was for it to reflect the minimum and recommended configurations the companies making these games put forth, to see if they play as smoothly as advertised, and whether the added investment of going up a GPU tier is worth the visual quality, if the recommended hardware even allows it.
Ooops… 15% of users have 12GB or above, not 16GB of RAM.
Yep, either a bad compression system or it isn't fully optimised yet. AMD should have used fast Fourier transform back in 2014 to start with, but starting from scratch is a bitch.
If it is the compression system, then wouldn’t this show up on other games tested on Fury cards?
You should add easy-to-read framerate averages (e.g. Fury X at 4K: 60.1 FPS) for people like me who can't properly read anything but basic bar graphs.
Doing an average like that doesn't really show much, because a game could have a massive FPS spike at one point that boosts the average FPS while the rest of the time it's lower.
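A tiny sketch makes the point concrete: two synthetic frame time traces can have similar (or even better) average FPS while one of them has ugly hitches that only a percentile or frame time plot reveals. The numbers below are made up for illustration.

```python
# Average FPS vs. 99th-percentile frame time on two synthetic traces.
import statistics

def summarize(label, frame_ms):
    avg_fps = 1000.0 / statistics.mean(frame_ms)
    p99 = sorted(frame_ms)[int(0.99 * (len(frame_ms) - 1))]
    print(f"{label}: avg {avg_fps:.1f} FPS, 99th-percentile frame time {p99:.1f} ms")

smooth = [16.7] * 600                      # a steady 60 FPS
spiky  = [12.0] * 580 + [100.0] * 20       # mostly faster, but twenty 100 ms hitches

summarize("smooth", smooth)   # ~60 FPS, 16.7 ms
summarize("spiky ", spiky)    # ~67 FPS average, yet 100 ms p99
```

That is essentially why the frame time plots in the article are more useful than a single average bar.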
Ok, so you disabled HBAO+ because you fear criticism from AMD fanboys, but you didn't disable the "purehair" effect, an AMD-effect that uses huge mem and have a very high impact in performance, which in its day was at the center of many controversies as an effect that damaged performance on all Nvidia cards and needed more than a year, yet never ran equally on both GPU makers. Even now, on both sides of GPUs, the technigue has very worse impact than HBAO+ in fps.
HBAO+ has been implemented in many games now, and in all of them it has shown a very neutral nature regarding performance across GPU makers. This was tested by many other sites, but you decided to ignore that.
You remembered to say that the game is now a TWIMTBP title, but you "forgot" to mention that this game now implements AMD effects inherited from the previous one, Tomb Raider 2013, a game that you also forgot to say was a Gaming Evolved title.
Biased is your first nature, $$$ir.
Don't use bad excuses for a biased review about configs; your argumentation is not sustainable.
If nothing else, I'm glad your comment is here so I have as many "you are biased to AMD" comments as I do "you are biased to NVIDIA" comments. 🙂
TressFX (which you’re saying was predecessor to PureHair) had one “controversy” – when it was used in “Tomb Raider 2013”. The “controversy” was that, when Nvidia users turned on TressFX, they experienced a large performance hit.
(We know it was just the one “controversy” because, whenever there’s an article about something GameWorks wrecking performance on AMD cards, some Nvidia fan will eventually lay forth the argument that, “If TressFX is so much better, why doesn’t it get used in anything except Tomb Raider?”)
When this (almost always) happens in an Nvidia-sponsored game on AMD hardware, Nvidia fans (like, I’m presuming, yourself) frequently respond with, “Either turn it off, or stop being cheap and go buy an Nvidia card so you can run it.”
May I first suggest that you either turn PureHair off, or go buy an AMD card so you can use it.
Second, I’d like to point out to you that the “controversy” ended after less than a week, when Nvidia went and got the source code for TressFX (which was, and is, open-source, which meant they could actually look at the source code) and optimized a new set of drivers to take better advantage of it. (Also note that AMD does not have this option with libraries like HairWorks, closed “black box” libraries that nobody but Nvidia gets to see and, therefore, optimize for.)
The end result? TressFX suddenly ran BETTER on Nvidia cards than AMD cards. I guess you just “forgot” to mention that point.
Now is when I ask you to cite the source that leads you to believe that PureHair “uses huge mem (sic) and have (sic) a very high impact in (sic) performance”, and that “the technigue (sic) has very worse (sic) impact than HBAO+ in fps”.
Finally, PureHair (an “AMD-effect”) is open-source as well. Nvidia can look at the source code and optimize for it as they see fit, and as they sponsored the game (and paid a lot of money to the developers to do so, which is why it’s labelled as a TWIMTBP game) and as they had Day 1 drivers ready to go, the question I would pose is, why didn’t Nvidia optimize for PureHair? They can. They have everything they need and they’ve had it for probably months now. But NOT optimizing for it allows them to turn around and complain about AMD tech not working well on Nvidia cards, and thereby attempt to hide the fact that they’ve been using their own (proprietary) libraries to sabotage AMD performance for years by saying, “Nuh uh! See! Theirs does it to us, too!”
But don’t worry. In a week or so, there will be yet another driver update from Nvidia, PureHair will work great on their cards, and the veritable army of Nvidia fanboys will go back to calling people not like them “peasants”.
Don’t use bad excuses for a biased complaint about a review. It makes your argument unsustainable.
Only one correction, sir, in your last paragraph: it will work great ONLY on the 900 series.
Nvidia needed only THREE DAYS to publish a driver that fixed GeForce performance, you moron, not a year.
“an AMD-effect that uses huge mem”
It doesn't.
“a very high impact in performance”
LOL, it doesn't, UNLIKE craphair from Nvidia. If you don't believe me, then go to geforce.com and check Nvidia's guide lol
So who is the fanboy?
From my experience with the game so far with my i7 6700K, 980 Ti SLI system, the Very High texture setting had by far the most detrimental effect on my frame rate (especially outdoors) even though my 980 Tis have 6GB of framebuffer. My VRAM was pretty much pegged at 6GB with the textures at Very High which resulted in framerates of 40-50 with plenty of dips.
With the textures set to High instead I get ~4GB of VRAM use and pretty much constant 60 FPS except in a few rare instances (this is with all the other settings at Very High). I’m also using a custom SLI profile that allows me to pretty much max out usage on both my cards. Bottom line, 4K with Very High textures and other high settings is basically a no no unless you have some Titan Xs. Another reason why I wish the 980 Ti had 8GB of VRAM instead of 6GB (yes I realize that this is not very feasible given the memory interface being used by GM200).
My 980 using Very High settings at 1080 pegs my card at pretty much 4GB used.
According to GeForce.com's guide, for Very High settings at 1080p they recommend a card with 6GB of VRAM. It also stated that prolonged sessions at 4K with Very High settings can see VRAM use above 10GB, which means this is one of the few games that actually benefits from a Titan X if you're gonna play in 4K.
The question is when the spikes occur and if they make a difference. If it's during the animated segments where she's reacting to a broken path etc., I don't think so. I see some stutter during the transitions there.
No, you can see the gameplay run-through in the video embedded in the story. You can see it happens during normal gameplay as well.
You failed to mention this is an Nvidia title, which means they probably got to optimize and work with the developers a lot more prior to launch.
The fact AMD is so close in perf without it being their title is a good sign.
It’d be interesting to see the perf side by side when a new driver hits.
Yes, I have a Fury Tri-X and it indeed does stutter in places during cutscenes and gameplay. I am currently on the Soviet level and it ain't pretty. I hope it is just poor drivers and that they will eventually get them fixed.
I have heard that the original reboot (Tomb Raider 2013) was also like this until both camps (Nvidia/AMD) had a few drivers to fix the issues… is this true? I only picked up TR2013 about a year after its release, when it ran very well indeed.
I also thought that this review was fair and a good look into the performance and issues on some cards. It seems those frame spikes on AMD Fiji Cards need to be sorted so that myself and Ms Croft can have a smooth relationship.
Thank You Ryan Shrout 🙂
Great test, but horrible choice of colors for the charts. They are very hard to differentiate.
Nvidia cards are always going to have an advantage since this game is DX11. No surprise which GPU sponsor pays the bills at PCPer. When I want biased reviews, I come to PCPer!
[whine] This review does not conform to my pre-existing bias, therefore I'm going to project my personal bias onto PCPer and blame them for AMD's driver shortcomings. [/whine]
You're the biased one, man. Get a mirror.
Hahaha, rly?
So wait, am I biased or is it just better because of DX11? You make no sense.
You're not biased, Ryan. To the point, YES, but not biased.
As an AMD fan and user, I have found PCPer to be probably the least biased out of all of the tech sites that I read.
You wanna see some tech press bias? Go read some articles on Fudzilla.