Earlier this month Corsair released new DDR4 memory kits under its Vengeance LPX brand. The kits are available in 32 GB, 64 GB, and 128 GB capacities and come bundled with a 40 mm "Vengeance Airflow" RAM cooler.
At the top end, the 128 GB kit comes with eight 16 GB modules clocked at 3,000 MHz with timings of 16-18-18-36, running at 1.35 volts at stock speeds. Stepping down to the lower capacities gets you faster DIMMs. Corsair has the 64 GB (4 x 16 GB) kit clocked at 3,333 MHz, running at the same voltage and timings, and it is available with either black or red heat spreaders. Lastly, the 4 x 8 GB (32 GB) Vengeance LPX kit runs at the same 1.35 volts but is clocked at 3,600 MHz (16-19-19-39 rated timings). It also comes in black and red SKUs.
The memory kits are available now and are currently priced a bit below their MSRPs at Newegg. The 32 GB kit is $340 and the 64 GB kit is $526. Finally, the 3,000 MHz 128 GB kit will set you back $982. These prices seem more competitive than the last time I looked at DDR4, and there certainly does seem to be some room for overclocking (especially on that 128 GB kit) so long as the motherboard can handle it!
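For a rough sense of what those ratings work out to, here is a quick back-of-the-envelope sketch (treating the quoted "MHz" figures as transfer rates, as is conventional for DDR marketing). It computes only the CAS slice of access latency and the theoretical per-channel peak, not the end-to-end latency a benchmark would report.

```python
# Back-of-the-envelope numbers for the three kits above (CL = 16 for all).
# CAS slice of latency (ns) = CL * 2000 / data rate (MT/s); peak bandwidth
# per channel = data rate * 8 bytes for the 64-bit bus.
kits = {
    "128 GB (8 x 16 GB) @ 3,000": (3000, 16),
    "64 GB (4 x 16 GB) @ 3,333":  (3333, 16),
    "32 GB (4 x 8 GB) @ 3,600":   (3600, 16),
}
for name, (rate_mts, cl) in kits.items():
    cas_ns = cl * 2000 / rate_mts
    bw_gbs = rate_mts * 8 / 1000
    print(f"{name} MT/s: CAS ~{cas_ns:.2f} ns, ~{bw_gbs:.0f} GB/s per channel")
```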
Am I missing something? G.Skill kills these in price/performance… What is this, like fashion RAM or some shit?
It’s a press release. An announcement.
It’s not a review and it’s not a recommendation.
Corsair can set the MSRP at whatever they want to. If you don’t want to pay that much for DDR4, go buy G.Skill.
I put Corsair Dominator Platinums in my computer. THAT is “fashion RAM”. I could have put in G.Skill or ADATA or Kingston memory at half the cost (and as a matter of fact, they REPLACED a set of G.Skill that I was already using), but I wanted Corsair Dominator Platinums because I wanted the LED bar and I hadn’t yet discovered Klevv Genuine or Avexir Core. And when asked, I never, ever, EVER recommend Corsair Dominator Platinums OR Klevv Genuine OR Avexir Core, unless budget is less of a concern than aesthetics.
So to sum it up:
“Am I missing something?” Nope, you seem to have the gist of it.
“G.Skill kills these in price/performance…” Yep. So?
“What is this, like fashion RAM or some shit?” Yes it is.
There used to be value in Corsair memory. Back when higher speeds meant something. DDR2 was a long time ago.
I'll admit to having bought the DDR2 Corsair Ballistix with the activity indicator LEDs back in the day hehe
Edit: D'oh, that was Crucial that made that stuff, not Corsair.
Is it worth it to buy high performance memory these days? Last I checked, it didn’t seem to make much of a performance difference.
It doesn’t, really. There may be a measurable difference between DDR4-2400 and DDR4-3000, but it’s small enough that it only shows up in memory benchmarks and is invisible to human perception.
If gaming is your thing, I doubt you’ll see much, if any, difference in your 3DMark Fire Strike score or your GTA V fps average.
However, if you’re a serious overclocker, or a LN2 enthusiast, or even just building a “benchmark queen” where a few single points on a benchmark score mean the difference between 1st place and 20th place, then it’s absolutely worth it.
(special note: the term “benchmark queen” is not intended as a disparaging remark towards anyone or anything; rather, it is a co-opting of the term “trailer queen” referring to a car, ridiculously expensively modified, that never sees street or track use and, as such, was never designed to be used to such effect, and instead goes from car show to car show on a trailer, towed by a truck.)
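For context, this is the sort of synthetic memory test being referred to: a minimal sketch in Python/NumPy that streams a buffer much larger than the CPU caches and reports effective copy bandwidth. It is illustrative only, not a stand-in for AIDA64 or SiSoftware Sandra, and absolute numbers depend heavily on the NumPy build and the CPU's prefetchers.

```python
import time
import numpy as np

N = 256 * 1024 * 1024 // 8          # 256 MB of float64, far larger than any L3
src = np.ones(N)
dst = np.empty_like(src)

best = float("inf")
for _ in range(5):                  # take the best of a few runs
    t0 = time.perf_counter()
    np.copyto(dst, src)             # streaming read of src plus write of dst
    best = min(best, time.perf_counter() - t0)

moved_gb = 2 * src.nbytes / 1e9     # bytes read plus bytes written
print(f"~{moved_gb / best:.1f} GB/s effective copy bandwidth")
```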
Agreed. The one other area where super fast memory does seem to make a noticeable difference these days is if you are using an APU, as its integrated GPU benefits from the faster memory. You're not getting eight DIMMs into an FM2+ board though :(.
You’re also a weirdo for using really expensive memory with the most budget of budget GPUs.
No debating that heh. Getting cheaper memory and a better GPU would be better, no doubt 🙂
The jury is actually still out on this one. I asked W1zzard at TechPowerUp to look into any possible benefits of faster RAM in an environment where you have an avalanche of processing coming from multiple sources (for example: multiple VMs doing work, while compiling some code and creating an archive on the host system).
Historically, all memory benchmarks have dealt with one application doing one type of work with nothing else going on on the system at the time. But what about the far more common multitasking scenarios? I expect there to be big benefits there, and hopefully W1zzard will find some.
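As a rough stand-in for that kind of multi-source load, here is a minimal sketch (not W1zzard's methodology, just an illustration): several worker processes each sweep their own large buffer at the same time, and aggregate throughput is compared as the worker count rises. Expect a few GB of RAM use at the top worker count.

```python
import time
import numpy as np
from multiprocessing import Pool

BUF_MB = 512                        # per-worker buffer; 8 workers needs ~4 GB free
PASSES = 4

def churn(_):
    arr = np.ones(BUF_MB * 1024 * 1024 // 8)    # float64 buffer, allocated before timing
    t0 = time.perf_counter()
    for _ in range(PASSES):
        arr.sum()                               # full sweep over the buffer
    dt = time.perf_counter() - t0
    return BUF_MB * PASSES / dt                 # MB swept per second by this worker

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        with Pool(workers) as pool:
            rates = pool.map(churn, range(workers))
        print(f"{workers} worker(s): {sum(rates):.0f} MB/s aggregate")
```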
If you are running a lot of stuff like that, the most important thing will be caches. There is a reason for the large caches on Xeons. Most consumer applications are highly cacheable, so running a lot of them just eats up a lot of cache. The memory probably still isn’t going to matter much since there isn’t actually much difference in latency for the high-performance memory. The latency of DRAM cells has remained relatively constant even though the bandwidth of the devices has increased significantly. There are increased bandwidth demands and some interactions between bandwidth and latency for filling cache lines, but if you are thrashing the cache that much, performance is probably going to be bad anyway. For streaming applications on the CPU, you will probably be more compute bound than bandwidth bound. You can more easily be bandwidth bound on GPUs due to the massive amount of compute available.
Server and HPC applications can be much less cacheable, so having larger caches can help, but actual unpredictable accesses over a large memory space will still result in latency-bound applications. Silicon interposer technology offers some possible solutions to this. The wide, fast interconnects may allow for placing a very large L4 cache on a separate die. This resolves the issue of needing a very large die, with low yields, to make a large cache system. Combine that with some HBM to keep the caches supplied with bandwidth, and we could see significantly higher performance. This actually doesn’t change much for consumer-level applications since they are already very cacheable with existing on-die caches. Even processors with relatively small L3 caches still do very well. It may be more interesting to test some applications with varying cache sizes to see what the current cut-off point is rather than testing varying bandwidth. For a lot of applications, the performance will degrade severely once the active set exceeds the cache size. Running multiple cores with multiple threads complicates any cache testing though, since the L3 cache is shared.
*since the L3 cache is shared.
Posting here from an iPhone 5 is not sinful. I get a tiny space along the top in landscape mode, and portrait mode only gets me half of the text area.
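For anyone who wants to see that cache-size cut-off for themselves, here is a minimal sketch of the varying-working-set idea from the comment above: random gathers over arrays of increasing size, timed per access. Treat the trend rather than the absolute numbers, since NumPy overhead and the index array itself add noise, and the location of the cliff depends on your CPU's cache sizes.

```python
import time
import numpy as np

ACCESSES = 2_000_000
for size_kb in (256, 1024, 4096, 16384, 65536):
    n = size_kb * 1024 // 8                    # array length in float64s
    data = np.ones(n)
    idx = np.random.randint(0, n, size=ACCESSES)
    t0 = time.perf_counter()
    data[idx].sum()                            # random gathers over the array
    dt = time.perf_counter() - t0
    print(f"{size_kb:>6} KB working set: {dt / ACCESSES * 1e9:.1f} ns per access")
```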
I appreciate the thoughtful reply, but with all due respect I believe something is missing from your analysis, and that is the miracle of compounding speed gains.
If I’m asking the RAM for something once in a while (as a single application might do), then the bandwidth limitation comes fully into play every time that request is processed (which isn’t that often, because we’re dealing with a single application that’s probably already well cached). But if I’m making requests from ten directions at once, wouldn’t the more robust RAM be able to fulfill those requests faster, freeing it up quicker for the next request, which it would fulfill faster, and so on, compounding efficiency gains?
In other words, wouldn’t faster RAM scale far better than slower RAM as you ramp up the queue depth? Sort of like how SSDs work?
Latency is improving significantly as well. I’m currently running G.Skill 3200 @ 14-14-14-34, which has an AIDA latency of 50.5 ns, while many of my DDR3 kits were in the 60-70 ns range. Wouldn’t this also scale, with the benefits of that big improvement in latency compounding as that avalanche of requests came in, each one getting fulfilled quicker, leaving the RAM ready faster for the next one?
Again, I appreciate your reply, and I could certainly be way off base here, but it just seems to make sense. I wish we had some hard data to go by; hopefully W1zzard will pull through.
Mr. Verry, any thoughts?
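To put the numbers from that comment side by side, here is a quick sketch comparing the CAS-only slice of latency with the end-to-end figure a tool like AIDA64 reports. The DDR3-1600 CL9 / ~65 ns row is an assumed representative older kit picked to match the "60-70 ns" range mentioned above, not a specific product; whether the ~15 ns end-to-end gap "compounds" under load depends on how much of it the CPU's caches and out-of-order execution already hide.

```python
# CAS-only latency vs. the end-to-end latency a benchmark reports. The rest
# of the end-to-end figure is row activation, memory controller and fabric
# overhead, which faster DIMM clocks do not shrink much.
kits = {
    "DDR3-1600 CL9 (assumed typical older kit)": (1600, 9, 65.0),
    "DDR4-3200 CL14 (kit quoted above)":         (3200, 14, 50.5),
}
for name, (rate_mts, cl, measured_ns) in kits.items():
    cas_ns = cl * 2000 / rate_mts              # CAS slice only, in ns
    print(f"{name}: CAS ~{cas_ns:.2f} ns, end-to-end ~{measured_ns} ns")
```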
You are mixing many different things here. Most consumers do not run a bunch of VMs and such. Also, you are not choosing between DDR2 and DDR4 when you build a new system. You are choosing between some standard version of DDR4 and an overclocked version at a much higher price. With how much processor speed has outpaced memory, going from slower to faster of the same memory type is like going from a 7,200 rpm hard drive to a 10,000 rpm hard drive; it is faster, but by a negligible amount for most applications.
We are up to three levels of cache on-die, and I wouldn’t be surprised if we get an L4 soon in the form of HBM or HMC. This pushes the system memory out to almost disk status: access it as little as possible. It also makes it such that memory speed differences make negligible performance differences since you are running from cache most of the time. For the enterprise/server market, they do have the socket 2011 systems with quad-channel memory, but this is to support the much higher core counts on Xeon processors. You can only push one core so far. The memory systems are designed to take this into account.
The type of test you are talking about sounds similar to pushing a system into thrashing, where it is constantly swapping pages from disk into memory and back, not accomplishing any real work. I have had a system start thrashing with both a hard drive and an SSD, which represent an orders-of-magnitude difference in performance. Even with an SSD, it was still unusable; it at least allowed me to shut applications down more quickly to free up memory space, but that was about it.
Overloading a system like you are talking about isn’t necessarily a useful metric since you don’t want to do this in practice. It is incredibly inefficient. With thrashing due to paging out to disk, the point where you reach it is determined by the memory size vs. the size of the working set. A CPU cache system is almost the same, just working with smaller chunks. It will start thrashing (constantly swapping out cache lines, and mostly sitting idle otherwise) based on the size of the cache and the size of the active set. You can probably measure a difference with different memory speeds, but it may not be relevant since you would never want to push a system to that point in practice. It would be overloaded and not accomplishing much usable work.
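One practical way to separate "heavily loaded but still doing useful work" from actual thrashing, which is the distinction being drawn here: watch whether the swap-in/swap-out counters keep climbing while you work. A minimal sketch using psutil (assumed installed); the sin/sout counters are cumulative since boot and are only meaningful on systems that actually have a page file or swap enabled.

```python
import time
import psutil

prev = psutil.swap_memory()
for _ in range(12):                       # sample for about a minute
    time.sleep(5)
    cur = psutil.swap_memory()
    sin_mb = (cur.sin - prev.sin) / 1e6   # bytes swapped in over the interval
    sout_mb = (cur.sout - prev.sout) / 1e6
    print(f"swap in {sin_mb:.1f} MB, out {sout_mb:.1f} MB over 5 s "
          f"({cur.percent:.0f}% of swap in use)")
    prev = cur
```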
Thanks for the reply, but I use my system like that quite often. Here are the specs:
Maximus VIII Gene
i7-6700K @ 4.5 GHz
64 GB @ 3,200 MHz, 14-14-14-34
400 GB Intel 750 x4
Page File Disabled
GTX 970 FTW+
This system is so freaking powerful it can handle those scenarios with ease, with plenty of useful work getting done. I’m not exaggerating here: it runs smooth as silk. The cooling system is all Noctua air, nothing ever gets hotter than 65 C, the CPU never thermally throttles, and the entire rig is whisper quiet no matter what I throw at it. And you can’t tell me that faster RAM doesn’t make a significant difference in this scenario, due to the compounded gains I explained above.
But even if I wasn’t running a bunch of VMs, compiling code, and creating a RAR archive, I still believe faster RAM would make a difference just in everyday multitasking, because most people do more than just one thing on their computer at a time. This is what I hope the tests will show, but you might be right that it makes virtually no difference because of the bigger and better caching, which is a great point.
There’s also ‘hangar queen’, an aircraft of ostensibly (or actually) extremely high performance that spends almost all of its operational life in a hangar undergoing maintenance.
That is a pretty small market, but a profitable one. It will allow them to sell their best-binned parts for a premium price, but it isn’t particularly interesting even to most enthusiasts. It still acts as a bit of marketing to have super high performance parts being talked about, even if almost no one needs them.
Windows 10 will make this memory spy on you!!!!!!!!!!!!!!!!
I’m sorry but that is absolutely ridiculous. Where do you people hear such nonsense? Windows 10 is NOT making your RAM spy on you… it’s making your CPU do it.