VideoCardz.com, continuing their CPU coverage of the upcoming Ryzen launch, has posted images from XFASTEST depicting the R7 1700X processor and some very promising benchmark screenshots.
(Ryzen 7 1700X on the right) Image credit XFASTEST via VideoCardz
The Ryzen 7 1700X is reportedly an 8-core/16-thread processor with a base clock of 3.40 GHz, and while overall performance from the leaked benchmarks looks very impressive, it is the single-threaded score from the pictured Cinebench R15 run that really makes this CPU look like serious IPC competition for Intel.
Image credit XFASTEST via VideoCardz
An overall score of 1537 is outstanding, placing the CPU almost even with the i7-6900K's 1547, based on results from AnandTech:
Image credit AnandTech
And the single-threaded performance score of the reported Ryzen 7 1700X is 154, which places it above the i7-6900K's score of 153. (It is worth noting that Cinebench R15 shows a clock speed of 3.40 GHz for this CPU, which is the base, while CPU-Z is displaying 3.50 GHz – likely indicating a boost clock, which can reportedly surpass 3.80 GHz with this CPU.)
Other results from the reported leak include 3DMark Fire Strike, where the Ryzen 7 1700X posted a physics score of 17,916 while clocking in at ~3.90 GHz:
Image credit XFASTEST via VideoCardz
We will know soon enough where this and other Ryzen processors stand relative to Intel's current offerings, and whether Intel will respond to the (rumored) price/performance double whammy of Ryzen. An i7-6900K retails for $1099 and currently sells for $1049 on Newegg.com, and the rumored pricing (taken from Wccftech), if correct, gives AMD a big win here. Competition is very, very good!
Chart credit Wccftech.com
Oh boy, if these results are remotely true, Intel has become obsolete overnight. It is shocking that you can get an equivalent AMD CPU for only $380 versus Intel's $1,000 8-core/16-thread part.
The Ryzen 5 1600X looks to be the real threat to Intel in the gaming bracket.
More like it shows you how much Intel has been screwing the consumer.
Competition is VERY good for all!
Do you honestly believe that? Intel has been making 10+ core CPUs for servers for 5+ years. Do you think it’d be difficult for them to release an 8-10 core CPU for the enthusiast market within a few months? Hyperbole isn’t becoming.
They don’t have anything with more than 4 cores on socket 1151 and won’t until Coffee Lake arrives next year. So any 6-10 core chips before then will be on X299 / socket 2066, and won’t be cheap.
Even the quad core Kaby Lake X is socket 2066.
But they have had 6+ core CPUs on the X79 and X99 sockets, which are still consumer grade.
Again, more unconfirmed leaks for the 10000th time.
This is getting annoying, I’m not even hyped anymore.
AMD are the kings of overhyping. I'll believe the numbers when we FINALLY get real reviews.
While I appreciate that these results should be questioned, the fact that so many early benchmarks are showing up from all over the place hints that these numbers may be correct. We have another week until we can see verified benchmarks.
Remember, AMD is not the source of these numbers, real or not. If we only saw AMD events showing cherry picked benchmark numbers, I’d also have a lot of doubts.
Do you think AMD is releasing only the good engineering samples? Or is it Photoshop trickery? This, along with all the rest of the recent leaks, seems to point toward the same overall numbers.
I find this comment confusing, as it seems to be attributing the leaks to AMD (making them essentially official) while at the same time lamenting the fact that they are leaks rather than official information.
Here's a fact: most leaks/rumours in the tech space are ruthlessly accurate, provided you curate the information properly so that what you are left with are the ones from generally dependable sources that independently corroborate each other.
What doesn't make sense is to assume a leak is false or inaccurate when the sources meet that criteria, especially when the leaks come for the '100000th' time.
After all, what's more likely? That they happen to corroborate each other because they are true? Or that 100000 individual sources are collaborating to disseminate information that is eerily self-affirming?
Has anyone ever considered that if these leaks are fake, Intel could be behind them? AMD has not been saying anything at all since their Horizon event – they have actually been eerily silent in all this.
Ryzen could be hurt if the leaks are exaggerated: everyone who bought the hype will feel let down, and let down by Ryzen and AMD specifically. Who would possibly want to hurt AMD? See, there's motivation to do this, and we all know Intel will slither low if they feel they can gain the upper hand; they've done it countless times before. It's a consideration that probably shouldn't be dismissed out of hand.
At the same time, I am eagerly awaiting official news, as a new build is on my to-do list for this spring. Bang for buck is always my god, and it looks like Ryzen will do well in this area, but we'll see in a few weeks, hopefully.
I doubt they are fake, for a number of reasons. AMD actually did talk about Ryzen back on Jan 5th/6th. The performance numbers are also very much what many expected. The basic trend with leaks is that early numbers come from engineering samples, which always perform below the release version, and with newer engineering samples you see performance going up. Performance will still be somewhat low, because the motherboards being used run pre-release BIOS code with debugging enabled.
So, the leaks we are getting "feel" like they are from early production samples going out to various places. The press kits have also gone out, and people may have received them already, which is why we are getting more leaked performance numbers.
AMD will not make another public statement until the official launch, but people who violated the terms of the NDA are putting out numbers, and only if these numbers were grossly inaccurate would AMD say anything.
Well, with all the media sitting through 3 hours of official info today, and then going mad with the results, I don't see how any of this can remotely be annoying. This stuff couldn't be any more exciting, especially after Linus clearly showed the 1800X doing over 1600 points in Cinebench.
https://youtu.be/3rUndzpdo1I
Maybe the original source should have been used and not WCCFTech, but the leak is not from WCCFTech in the first place.
Now, on to Ryzen: its performance looks to be great if the indications are correct. The big question alongside Ryzen's performance metrics is whether there will be enough stock in the retail channels to keep retailers in check on price gouging. Ditto for the AM4 motherboards, which should be lower cost than Intel's competing products. Hopefully the new AMD wafer agreement with GF will allow AMD to second-source some Ryzen production should it be needed, as AMD looks to have a winner not only in the consumer market but in the server market as well.
AMD's Zen/Naples performance in the HPC market will depend more on Zen/Naples being paired with Vega to make up for the FP performance deficiency of the first-generation Zen/Naples SKUs relative to Intel's Xeons. So Zen/Naples plus Vega (for its FP power) will still allow AMD to compete with Intel in the HPC market. The server/workstation markets will do fine, with Zen's price/performance metric being the major impetus for AMD's grand reentry into the server/workstation CPU market.
It's time for any of the press attending AMD's Capsaicin and Cream event to push AMD into revealing more information on Zen's on-die interconnect fabric for inter-CCX communication, so there can be some information on cache coherency across the CCX unit boundary for a more complete Zen microarchitectural reveal.
In addition to discussing Ryzen and Vega, AMD should at least do a limited reveal of any Ryzen/Vega APUs coming in a few more months, with emphasis on the laptop market.
Look a little closer, Sebastien has 3 sources:
VideoCardz:
https://videocardz.com/66182/amd-radeon-7-1700x-pictured-and-tested
XFASTEST:
https://news.xfastest.com/amd/31741/amd-ryzen-7-1700x-benchmark/
Wccftech:
http://wccftech.com/amd-ryzen-1600x-cinebench-r15-performance-confirmed/
I realize that now, and the question is which one is the original source for all the images, as WCCFTech (very often) and even VideoCardz (sometimes) are not the original source on many of their articles. I am very hesitant to trust WCCFTech, but VideoCardz is usually an OK source for information.
Any images need to be sourced/linked from their original sources unless that is prevented by a paywall/firewall/password.
So yes, that one WCCFTech image is what got my attention, but looking at the references, they appear to be properly attributed to their original sources. Sorry about that.
This article write-up is very accurate.
AnandTech has many Intel fanboys, including the moderators, and they are in severe depression.
Leaks of Ryzen benchmarks are creating big problems for websites that cater to Intel fanboys.
The majority of people will rejoice at this victory.
Fair competition is always good.
lol, honestly I don't think so. There are also many "AMD fanboys" over there, as you can see in their comments. One of the funniest things is that AnandTech was once sponsored by AMD; they even had a specific section dedicated to AMD. And then you have AMD fanboys spinning it, saying it was AnandTech that went willingly to AMD because AMD was about to release something that would spank both Nvidia and Intel.
Why all the focus on multithreaded performance and super expensive Intels? That's not where the money lies, according to sales. The money appears to be in gaming CPUs, where single-threaded performance is king.
They say that the single-threaded performance of a 1700X, which costs $389, is at 154. Well, the single-threaded performance of an i5-6600K is 169 according to that chart, and it's $155 cheaper. How many games even use more than 4 threads effectively?
The most interesting Ryzen for gamers is probably the 1400X, priced at $199 you get a lot of bang for the buck.
We'll know more once real benchmarks are out, but I think I'm going to recommend the 1400X to a lot of people who don't want to pay i7-7700K kind of money for their gaming rig.
“The money appears to be in gaming CPUs where single threaded performance is king.”
That's not really true anymore. Single-threaded games are few and far between; just about every game supports multithreading. I'm playing BF1 and it's using 10 of my 12 threads.
The Frostbite engine is known for relatively good multithreading. But how many games use that engine? You can count them on your fingers.
Meanwhile, many other games like GTA V, Far Cry 4, Crysis 3, and other titles regularly used in benchmarks hardly seem to benefit from more than 4 threads. There's maybe a 1 FPS difference between something like an i7-6900K costing over $1,000 and something like an i5-6600K costing $250.
Games like BF1 really are the exception to the rule. And unless we’re seeing more well optimized games appear it’s more than just a stretch to say that single threaded performance is no longer important.
Of course when you’re into video editing, gaming+streaming the new AMDs blow Intels out of the water. They really did a fine job here.
A CPU is something most people keep in their rigs for a while.
With a view to the future in mind, what is going to be the safer bet? Going with double the core/thread count for the same price as the part you'd get from Intel, with ballpark-equal single-threaded performance?
Or forgoing the potential of these extra threads/cores for a minor boost in single-threaded applications today, that probably run more than acceptably on Intel and AMD platforms alike?
The safer bet is to look at the facts, which aren’t in favour of even a significant portion of games using more than 4 threads effectively in the foreseeable future.
Designing an engine to distribute the workload among multiple cores is no easy feat, and if you've reached that point you may as well utilize the parallel processing that GPUs offer. In fact, getting performant parallelism appears to be not cost-effective and/or too difficult for many of the smaller developers.
Even larger developers that produce games like CoD: Infinite Warfare seem to have problems with this. PCGames Hardware Germany conducted a more extensive test with an i7-6900K, which shows that the game puts most of its workload onto only two to three threads:
https://translate.google.com/translate?hl=en&sl=de&u=http://www.pcgameshardware.de/Call-of-Duty-Infinite-Warfare-Spiel-56591/Specials/Technik-Test-Benchmarks-1212463/&prev=search
The point isn't that applications are single-threaded; it's that they don't use more than 4 threads effectively, which makes the performance per core/thread way more significant than some synthetic multithreading score.
Of course you can go for more threads, like in the 1400X, if you pay roughly the same money as for an equivalent i5. Those 8 threads will allow you to allocate your resources better than 4 threads: for example, you use 6 or 7 virtual cores for your game, while the other one or two handle applications that run in the background and don't need much processing power. You can't do that with the same comfort on a 4-core/4-thread CPU.
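That kind of core allocation can be sketched in code. A minimal, hedged example: on Linux, the standard-library call `os.sched_setaffinity` can pin a process to a subset of logical CPUs; the helper name and the CPU ids below are purely illustrative, and the call is unavailable on some platforms (e.g. macOS).

```python
import os

def pin_to_cpus(cpus):
    """Restrict this process (pid 0 = self) to the given logical CPU ids.

    Linux-only: returns the resulting affinity set, or None where the
    call is unavailable (e.g. macOS/Windows via this API).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, set(cpus))
    return os.sched_getaffinity(0)

# Illustrative: on an 8-thread CPU, give the game process CPUs 0-5 and
# leave 6-7 free for the OS and background apps.
# pin_to_cpus(range(6))
```

Affinity is inherited by child processes, so pinning a launcher before it spawns the game confines the whole process tree.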
The point is that DX11 (which most games still use) is very dependent on the first core. DX12 is slightly better at evening out the load, but still not very scalable.
The other thing is: name one Intel consumer CPU that has more than 4 cores. You can't find one. So why would anyone program for more than what has been out there for 10 years now? That said, Far Cry 4, Crysis (all of them), and other games scale very well with added cores; I know my 8-core AMD machine is very busy in those games.
I suspect the same is true of the Frostbite engine; I don't play BF.
I'm not sure if more cores will really increase the incentive for developers to put more work into this; after all, many of these games don't even utilize 4 threads properly. Maybe times will change with competitively priced 8C/16T CPUs on the market. We'll have to see.
Frostbite games run fairly well on AMDs even with DX11. Here's an example for BF1 (I don't play it either, because I don't like it, but its technology is certainly impressive):
http://techbuyersguru.com/battlefield-1-benchmarked-6600k-6700k-6900k-and-r9-270x-through-titan-x?page=1
While they do not test AMDs here, it's fair to assume that they perform similarly to an equivalent Intel; after all, they're all x86 CPUs. In this case an 8C/16T AMD would most likely outperform a 4C/8T Intel running at much higher clocks.
Unfortunately, Electronic Arts does not seem to want to sell licences for its engine to other software developers, so these games are really the exception on the market.
For example here are some other benchmarks from the same site. Assuming that they used the same methodology, the results should be representative for the differences between the CPUs.
http://techbuyersguru.com/intels-core-i5-6600k-vs-i7-6700k-vs-i7-6900k-games?page=1
You'll see that the 8C/16T Intel pretty much only outperforms the 4C/8T part in Battlefield 4, which is another Frostbite game.
The consoles are going to drive multithreaded performance, since they have 8 relatively low-performance cores. It is unclear what Scorpio will be, though; it could be a single Zen-lite cluster, maybe with less cache or something. With 4 cores and 8 threads in a single cluster, it should be able to replace 8 low-power cores easily. I think the console market will determine future hardware and software directions more than PCs will. VR is more suited to the living room than to where most people have their gaming PC, so I expect console VR to be more widely adopted than PC VR. If all consoles support 8 threads, then games are going to move toward using 8 threads. It will take a while for the game engines to catch up, though; we haven't had DX12 for long yet.
I'll take the 8 cores/16 threads and give the game 4 or more full cores, depending on the game engine's ability. Any remaining cores can be used for running the OS/bloat and keeping the system going.
SMT processor threads are great for extracting better utilization of a CPU core's available execution resources: if the first thread stalls, the other thread can be context-switched in by the hardware, so the core's execution pipeline slots/cycles keep doing useful work instead of going to waste.
CPU cores without SMT will always waste available execution pipeline cycles, even with out-of-order execution and other features enabled. A single-thread-only, non-SMT core cannot switch processor threads quickly enough in most cases to prevent wasteful pipeline bubbles (NOPs) from being introduced into the core's execution cycles. Software/OS-managed thread switching is not fast or responsive enough for on-core thread switching to prevent wasted pipeline slots/cycles, and even top-level cache memory is often not quick enough to deliver code with the same low latency that SMT hardware achieves to keep the core's execution resources fully utilized.
All the performance gains from SMT come mostly from core execution resources that would otherwise have been wasted had the SMT hardware not been included to prevent the loss of available execution pipeline cycles/slots.
I'll always take an SMT-enabled core over one that is not for rendering workloads, because SMT mostly makes use of resources that would normally go underutilized on non-SMT cores.
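The stall-filling argument above can be illustrated with a toy model; this is purely illustrative (a made-up single-issue "pipeline", not a real simulator): count how many issue slots do useful work when one or two hardware threads alternate between ready and stalled cycles.

```python
# Toy model of why SMT helps: a core has one issue slot per cycle, and a
# thread that stalls (e.g. on a cache miss) leaves that slot empty unless
# a second hardware thread is ready to fill it.
def slot_utilization(threads, cycles):
    """threads: per-thread ready/stalled patterns (1 = ready this cycle).
    Returns the fraction of issue slots that do useful work."""
    filled = sum(
        1 for c in range(cycles)
        if any(t[c % len(t)] for t in threads)
    )
    return filled / cycles

# A lone thread that stalls every other cycle wastes half the slots:
print(slot_utilization([[1, 0]], 8))          # 0.5
# A second thread with offset stalls keeps the pipeline full:
print(slot_utilization([[1, 0], [0, 1]], 8))  # 1.0
```

Real gains are far smaller than this idealized 2x, since both threads share caches and execution units, but the mechanism is the same: filling slots the first thread would have wasted.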
Usually you can add a few hundred MHz to the base/boost of the CPUs with fewer cores and try to speculate on the performance.
The 4/4 should be pretty interesting vs. the i5 4690K/6500k/7600K, but if the 4/8 parts are cheaper than an i5, why not get the hyperthreading? You never know if the next big gaming advance might be more cores.
Multi-core/thread pricing for Intel is not set to reflect cost; Intel chose to make cores and Hyper-Threading expensive simply for market segmentation, and now AMD is using that to their advantage. It's common sense.
AMD cannot match Intel's IPC on a node that Intel is at its third revision of, while it's AMD's first run at the 14nm node; slightly beating Haswell is as good as it gets for now, and maybe the Ryzen+ revision will get them another 10-15% IPC boost. So for now, if their SMT really is more efficient than Intel's HT, that's where AMD has to focus Ryzen's value: bringing multiple cores/threads to the mainstream.
Now I just hope AMD doesn't find a way to screw up this product launch again, and that they didn't push the clocks too close to their limit; that would reflect badly on overclocking, power efficiency, and heat.
I also hope AMD has a lot of stock for their product, but I do have a bad feeling about this part: coolers and mobos seem to be quite late to the party, knowing how close the release is.
Also, more games now make use of multiple cores, a trend that seems to have started in late 2016.
Intel's Hyper-Threading is just a brand name for Intel's version of SMT, and Intel did not invent SMT! Intel has some very expensive chip fab plants that cost billions in upkeep, plus other corporate bloat to maintain, whereas AMD is fabless. GlobalFoundries can spread its foundry upkeep and expenses across an entire market of 14nm customers, while Intel has to spend billions and must make billions to keep from bleeding cash on its physical-plant investments. AMD does not even own its headquarters (it's leased), and by using the third-party chip fabrication market AMD can forgo expensive fab upkeep and offer lower-margin SKUs that Intel simply cannot offer with its high-overhead fabs and corporate bloat.
GlobalFoundries even licenses its 14nm process from Samsung, so that process costs less: its R&D and physical-plant expenses are spread across Samsung's customer base as well, making the 14nm process much more affordable to both Samsung's and GlobalFoundries' customers, who all share in the development costs via a larger economy of scale that can quickly amortize any R&D and plant expenses. Even Intel is having to look for fab customers and other markets to keep its fab expenses from soaking up revenues.
Intel will be forced to cut its historically high margins in the face of AMD's Zen/Ryzen price structure; Intel has no choice. What if AMD decided to throw a 12-core Ryzen SKU into the competition at the top end of the consumer market? There will be new Zen/Ryzen steppings that offer IPC improvements at 14nm, and Intel will need to stay farther ahead because AMD's price/performance metric will cost Intel market share. And the Ryzen/Vega APUs are not even here yet, so on the integrated graphics front Intel is very much in trouble against Vega on the APU.
And what is the higher single-threaded Cinebench performance of the i5-6600K worth? Not much. No one will run that type of application single-threaded. Gaming is a different scenario, where the graphics card is often the limiting factor.
And there is more than just gaming. You will realize that if you work with stuff like x264/x265: you will be thankful for every additional core. 😉
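Why encoders reward extra cores can be sketched with a toy example; the chunking scheme and function names below are made up for illustration, standing in for how a video encoder splits frames into independent units that fan out across worker processes, one per logical CPU.

```python
from multiprocessing import Pool
import os

# Toy stand-in for a core-hungry encoding workload: each "chunk" of
# frames is processed independently, so throughput scales with cores.
def encode_chunk(frames):
    # Placeholder per-chunk work (a real encoder would compress here).
    return sum(f * f for f in frames)

def encode(all_frames, chunk_size=4):
    chunks = [all_frames[i:i + chunk_size]
              for i in range(0, len(all_frames), chunk_size)]
    with Pool(os.cpu_count()) as pool:   # one worker per logical CPU
        return pool.map(encode_chunk, chunks)

# encode(list(range(16)))  # chunks are processed in parallel
```

Because the chunks share no state, an 8C/16T part can keep sixteen such workers busy where a 4C/4T part keeps four; that is the scaling the commenter is pointing at.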
That graphics card nonsense is what you hear a lot.
In fact, it mostly depends on what resolution you want to play at. I also believed that nonsense until I bought a GTX 1070: the biggest waste of money last year, since I only play at 1920×1080 @ 60 Hz.
Most of the time the GPU is below 50% load and the fans don't even spin up unless I crank up settings like supersampling or use a ton of third-party post-processing through things like ReShade. Those things can improve the visual quality a lot, but are not as essential as things like draw distance or the maximum number of rendered objects, which are notoriously CPU-heavy.
Which brings me to the actual bottleneck: the i5-4690K in the machine. It throttles the performance of that GTX 1070 in most games.
The problem appears to be that many people think that if you set the game to Ultra graphics, it will only load the GPU. But things like shadows, object render distance, the number of rendered objects, or high-detail models are mostly CPU-heavy tasks and can cause severe bottlenecking in badly optimised games, which is like 95% of all games.
Because single-threaded performance is never going to improve much until we completely change architectures. A few percent a year is the most you can hope for.
On the other hand, AMD will ship a 32-core server version of Zen this year.
STOP MAKING SENSE! You’ll just confuse them.
Finally.. a worthy / value replacement for my 2600K..
Amen to that!
Yeah, verily.
These upcoming AMD parts will be the first serious temptation for this 2600K user to consider upgrading my core gaming rig since 2011 (other than GPUs and SSDs over the years).
I'll still mull it over, as, to be honest, not a single game that I play is causing my 2600K (at 4.3 GHz OC) any noticeable issues.
When I decide, I'll be glad to be buying my first AMD CPU since before the Intel Core 2 Duos made more sense. I'm not a "fanboy" of either corporation; I'm driven by cost/performance.
That said, Intel has rather ticked me off with what appear to be greatly inflated CPU prices of late. So I'm more AMD-biased now.
I'll be getting the top Zen (until "gen 2" comes further down the line, of course), but no way in hell am I going to simply trade off and completely retire my GODLIKE i7 2600K after that. It's currently in my main station, and has been pretty much all the way since 2012. I'll be building a new main with the top current Zen on board, but I definitely won't stop using my GODLIKE i7 2600K-based build after that; I'll just move it to another room/house. The more the merrier, and considering that I already have six different builds sitting in my house (three based on AMD, and three on Intel), this would essentially just add the seventh.
There are two words to describe how AMD managed to overtake Intel overnight: Jim Keller.
The man made the K7 that left Intel running for their tails. He created the SoC for the iPhone 4 and 4S under Steve Jobs, then was fired because Jim Keller's "hardware philosophy" was not compatible with Steve Jobs's, as if Steve Jobs knew anything about CPU or SoC building/architecture…
After Jim Keller worked his magic on Ryzen, he left for Tesla's technology group.
Btw, the team that was led by Jim Keller stayed at AMD, but I still think Keller will need to come back to AMD in 4-5 years, before Ryzen becomes obsolete.
It should not be too hard for Zen to be improved upon without Keller needing to be brought in, for things like widening the FP units from 128 bits to 256 bits or other tweaks. AMD's Pro and consumer Zen/Vega and Ryzen/Vega APUs may just make better use of the FP units on the Vega integrated graphics than any earlier-generation APU designs, so the Zen/Ryzen and Vega APUs may not need FP units beyond 128 bits in size.
Jim Keller is currently working for Tesla, so who knows where he will be once his project there is finished. It also depends on what tweaks Zen can receive over its scheduled 4-5 year lifetime, and on how far AMD has taken the APU integration of CPU cores with GPU cores on its upcoming systems on an interposer for the HPC/exascale markets and consumer markets alike.
>Jim Keller
SAY IT
Guys, don't you see what's wrong with these results in Fire Strike???
VideoCardz's results say it should be about 17,900 for a stock 8/16 Ryzen @ 3.4 GHz, and about 20,250 for 8/16 @ 4 GHz.
https://cdn.videocardz.com/1/2017/02/AMD-Ryzen-CPU-3DMark-Physics.png
Yet in the Fire Strike test in this article we see 3.9 GHz, but it barely beats (within the margin of error) the VideoCardz score @ 3.4 GHz.
So… something fishy is going on…
But was that 3.4 GHz locked, or a 1700X with a 3.4 GHz base and 3.8 GHz boost? If the latter, then 3.9 GHz would be only slightly faster.
Well, that is the variable that is missing from the VideoCardz picture (and article). If it says @3.4 GHz, and separately @4 GHz, it should be locked at those values, right? Otherwise it should have been written as 3.4 GHz + boost.
I highly doubt it was XFR. Just a stock turbo.
It's going to be great for those wanting to avoid both Intel and Nvidia on their next gaming PCs. And hopefully there will be more laptop usage of any Ryzen/Vega APUs, to allow many to avoid Intel and Nvidia for laptops as well! I'm looking to avoid Intel, Nvidia, and M$ for a totally monopoly-free laptop experience once some of the Linux OS laptop OEMs start using Ryzen/Vega APUs in their products.
I will also be looking at any workstation-class Zen (CPU-only) SKUs paired with Radeon Pro WX discrete GPUs in mobile workstation offerings. Hopefully AMD will offer some 16-core Zen professional workstation variants that can be used in a portable workstation form factor, the same way Xeon SKUs are used in portable workstations currently. Zen/Vega (Radeon Pro WX) portable workstations with some Linux OS options would be great.
I will avoid Intel and Nvidia in my next PC. It's really too bad AMD is the only other choice, but at least it's a good alternative to massively overpaying and getting fleeced.
Having so little choice is really, really sad nowadays.
Looking like a 1700 & Vega setup for me later this year.
In my opinion, Ryzen 1700X/1700/1600x for $389/$319/$259 are going to be awesome CPUs.
It is going to be very interesting to see how these CPUs perform in various applications.
Who is going to pay $1,049 for a 6900K when people can buy a Ryzen for $389? Hard-core Intel fanboys.
If this is true, Ryzen could be a killer CPU for Flight Simulator X and Prepar3D fans!! Microsoft's ESP engine relies a lot on single-threaded performance.
Once PCPer has these CPUs in hand, I’m really interested in how it’ll compare to an i7-4790K.
I have a 4770K myself, so I'm hoping AMD delivers. If so, I will be building a new machine around summer.
I've got a 4790K + 980 Ti. I probably won't upgrade for another year or so; there's just not much need for what I'm doing these days.
Meanwhile, that biased site CPUBoss still puts Intel chips ahead of AMD's. I think they're paid by Intel…