Intel has announced that the SSD Form Factor Working Group has finally come up with a name to replace the long-winded SFF-8639 label currently applied to 2.5" devices that connect via PCIe.
As Hardwarezone spotted in the above photo, the SFF-8639 connector will now be called U.2 (pronounced 'U dot 2'). This appropriately corresponds with the M.2 connector currently used in portable and small form factor devices today, just with a new letter before the dot.
An M.2 NVMe PCIe device placed on top of a U.2 NVMe PCIe device.
Just as the M.2 connector can carry SATA and PCIe signaling, the U.2 connector is an extension of the SATA / SAS standard connectors:
Not only are there an additional 7 pins between the repurposed SATA data and power pins, there are also an additional 40 pins on the back side. These can carry up to PCIe 3.0 x4 to the connected device. Here is what those pins look like on the connector itself:
Further details about the SFF-8639 / U.2 connector can be seen in the slide below, taken from the P3700 press briefing:
With throughputs of up to 4 GB/sec and the ability to employ the new low latency NVMe protocol, the U.2 and M.2 standards are expected to quickly obviate the need for SATA Express. An additional look at the U.2 standard (then called SFF-8639), as well as a means of adapting from M.2 to U.2, can be found in our Intel SSD 750 Review.
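For the curious, that 4 GB/sec figure checks out on the back of an envelope. Here is a minimal sketch in Python, assuming only the publicly known PCIe 3.0 parameters (8 GT/s per lane, 128b/130b encoding):

    # Rough check of the "4 GB/sec" figure for a PCIe 3.0 x4 link
    GT_PER_S = 8.0            # PCIe 3.0 signals at 8 GT/s per lane
    ENCODING = 128 / 130      # 128b/130b line encoding overhead
    LANES = 4                 # U.2 carries up to PCIe 3.0 x4

    per_lane_GBps = GT_PER_S * ENCODING / 8    # 8 bits per byte of payload
    print(f"{LANES * per_lane_GBps:.2f} GB/s") # ~3.94 GB/s, rounded to "4 GB/sec"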
With or without the U.2, I’m getting a NVMe SSD soon. (I’m sorry.)
Sounds like you’ve found what you’re looking for.
Strange, it doesn't look like an edge device.
Yes, where the SSDs have no name.
bono-voyage to the old name, I guess.
Thanks for the HEADS UP, Allyn.
Keep up the good work!
MRFS
If NVMe boot and boards with M.2 slots and U.2 connectors don’t come when Zen launches, it’ll feel as if AMD doesn’t want to compete at all.
Good point! Yes, we need motherboards with integrated U.2 ports and support for normal RAID modes + TRIM for RAID-0. If the motherboard manufacturers won’t build them, then maybe Add-In Cards (“AIC”) will do the job until motherboards integrate enough U.2 ports. 2.5″ NVMe SSDs will become an industry-wide standard, so it’s reasonable to expect prices to come down eventually. I will enjoy the combination of speed and reliability that should result from a RAID-5 array of 4 x 2.5″ Intel 750 SSDs, or even a RAID-0 array of 2 such SSDs. We have one OS running on a RAID-0 array of 4 x Samsung 840 Pro SSDs, and the responsiveness of that workstation is notable: ATTO reports READs at 1,846 MBps:
http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q.Deluxe.Direct.IO.2.bmp
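As a rough cross-check of that ATTO number: ideal RAID-0 read scaling is just drives times per-drive speed. A quick sketch, where the ~500 MB/s per-drive figure is an assumption, not a measurement:

    # Ideal RAID-0 read scaling vs. the quoted ATTO figure (rough sketch)
    PER_DRIVE_MBps = 500      # assumed sequential read for one SATA SSD
    DRIVES = 4

    ideal = DRIVES * PER_DRIVE_MBps
    observed = 1846           # ATTO READ figure quoted above
    print(f"ideal ~{ideal} MB/s, observed {observed} MB/s ({observed / ideal:.0%})")
    # -> ideal ~2000 MB/s, observed 1846 MB/s (92%): respectable scaling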
It’s going to take a lot of work for either AMD or Intel to support this. A current socket 1150 processor only has 16 lanes of PCI-E coming off of it. That’s four drive connectors, and no video card.
We’re either going to have to move to 8x PCI-E for our video cards (and get two drives’ worth of bandwidth), we’re going to need to see a lot more lanes of PCI-E coming off the CPU, or we’re going to start seeing PCI-E switches on the MBs again.
Only the first of those is going to make users happy, while the middle one is a reasonable compromise, and the last will be the slowest and most expensive.
Even a high end 40 lane processor can only handle two 16x PCI-E slots (video cards) and two of these 4x PCI-E drive connectors. That leaves nothing for Ethernet, USB 3.1, or Thunderbolt.
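In code form, that lane budget works out as follows; a quick sketch where the slot and drive counts are just the ones from the comment above:

    # Lane budget for a hypothetical 40-lane CPU, using the counts above
    CPU_LANES = 40
    allocations = {
        "GPU slot 1": 16,
        "GPU slot 2": 16,
        "U.2 drive 1": 4,
        "U.2 drive 2": 4,
    }

    used = sum(allocations.values())
    print(f"{used} of {CPU_LANES} lanes used, {CPU_LANES - used} spare")
    # -> 40 of 40 lanes used, 0 spare: nothing left for Ethernet, USB, etc.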
(2 @ 16) + (2 @ 4) = 32 + 8 = 40.
Thanks for that!
What is Supermicro doing with their NVMe backplanes?
I seem to recall reading one article about their NVMe solutions for high-end servers.
I don’t understand many of the acronyms listed here:
http://www.supermicro.com/products/nfo/NVMe.cfm
Right below “4x NVMe and 8x 2.5″ Hot-swap SAS3 HDD bays”
… what are “1x InfiniBand port (FDR, 56Gbps), w/ QSFP connector (“FR” SKUs)” ?
According to Supermicro, their server model SS2028TP-DNCR uses LSI 3008 RAID controllers. Here’s what I found about that controller at storagereview.com:
http://www.storagereview.com/supermicro_lsi_sas3008_hba_review
Host Bus: x8 lane, PCI Express 3.0 compliant
PCI Data Burst Transfer Rate: Half Duplex x8, PCIe 3.0, 8000MB/s
8000MB/s is consistent with the PCIe 3.0 spec:
8 Gb/s / 8.125 bits per byte = ~1 GB/s per lane.
(128b/130b encoding makes each byte cost 8 x 130/128 = 8.125 bits on the wire.)
And one of the photos at that review shows three of these LSI model 3008 controllers.
Does that mean those 3 controllers are using 24 PCIe lanes?
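If I have the figures right, a quick check using only the numbers quoted above (a sketch, not a statement about the actual board wiring):

    # Sanity check on the LSI 3008 figures quoted above
    HBAS = 3                    # three controllers visible in the review photos
    LANES_PER_HBA = 8           # "Host Bus: x8 lane, PCI Express 3.0 compliant"
    print(HBAS * LANES_PER_HBA) # -> 24 lanes total

    per_lane_MBps = 8000 / 8.125               # 8 Gb/s per lane / 8.125 bits per byte
    print(f"{LANES_PER_HBA * per_lane_MBps:.0f} MB/s per HBA")
    # -> ~7877 MB/s usable; the quoted 8000 MB/s is the raw burst rate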
The motherboard manual mentions the Intel PCH C612 chipset.
I’ll see what I can find about that C612 chipset.
Intel® C612 Chipset (Intel® DH82029 PCH)
http://ark.intel.com/products/81759/Intel-DH82029-PCH
PCI Express Revision Gen 2 <--- CAN THIS BE CORRECT ??
> PCI-E switches on the MBs again
Could those be mounted on PCIe NVMe add-on RAID controllers, instead of directly on the motherboard?
They could, but that would only allow downstream devices to share upstream bandwidth. So, if you have 4 lanes of v3 PCI-E bandwidth and two drives with similar connections, you’re not going to see double the read speed from the drives, unless the drives only used half of that bandwidth to start with. If you are copying from one drive to the other, then you’ll be in a better situation, as each PCI-E lane has separate transmit and receive pairs.
A card that fits in a 4x PCI-E v3 slot and provides 2-4 of these 4x PCI-E v3 U.2 connectors would be useful, but it would only be a stopgap until more lanes become available from the CPU.
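A toy model of that sharing effect; the drive and uplink numbers here are illustrative assumptions, not measurements:

    # Toy model: drives behind a PCIe switch sharing one upstream link
    UPSTREAM_GBps = 4.0   # PCIe 3.0 x4 uplink, roughly 4 GB/s per direction
    DRIVE_GBps = 2.5      # hypothetical per-drive sequential read speed

    for drives in (1, 2, 4):
        per_drive = min(DRIVE_GBps, UPSTREAM_GBps / drives)
        print(f"{drives} drive(s) reading at once: ~{per_drive:.2f} GB/s each")
    # 1 drive:  2.50 GB/s (drive-limited)
    # 2 drives: 2.00 GB/s each (uplink-limited)
    # 4 drives: 1.00 GB/s each (uplink-limited)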
Excellent analysis! Thanks. It’s possible that PCIe hardware designers are banking on PCIe 4.0’s 16 GT/s clock, which will double upstream bandwidth “across the board”, relieving some of the pressure to increase the total number of PCIe lanes:
https://www.pcisig.com/news_room/faqs/FAQ_PCI_Express_4.0/#EQ3
e.g.
“Q: What are the results of the feasibility testing for the PCIe 4.0 specification?
A: After technical analysis, the PCI-SIG has determined that 16 GT/s on copper, which will double the bandwidth over the PCIe 3.0 specification, is technically feasible at approximately PCIe 3.0 power levels.”
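Put into numbers, a sketch assuming PCIe 4.0 keeps the same 128b/130b encoding as 3.0 (only the transfer rate doubles):

    # Per-lane and x4-link bandwidth, PCIe 3.0 vs. 4.0 (128b/130b in both)
    ENCODING = 128 / 130
    for gen, gts in (("3.0", 8.0), ("4.0", 16.0)):
        per_lane = gts * ENCODING / 8    # GB/s per lane, one direction
        print(f"PCIe {gen}: {per_lane:.2f} GB/s/lane, x4 = {4 * per_lane:.2f} GB/s")
    # PCIe 3.0: 0.98 GB/s/lane, x4 = 3.94 GB/s
    # PCIe 4.0: 1.97 GB/s/lane, x4 = 7.88 GB/s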
Found this today:
http://www.servethehome.com/supermicro-aoc-slg3-2e4r-supermicro-aoc-slg3-2e4-differences/
Here are Newegg’s pages for those 2 cards:
http://www.newegg.com/Product/Product.aspx?Item=9SIA5EM2KK5178&Tpk=9SIA5EM2KK5178
http://www.newegg.com/Product/Product.aspx?Item=9SIA5EM2M58671&Tpk=9SIA5EM2M58671
Any more pins and my IDE cables can be used again.
The return of PATA 🙂
“Not only are there an additional 7 pins between the repurposed SATA data and power pins”
Allyn, 6 pins
From what I can tell from the SFF-8639 spec sheet (if anything has changed for U.2, it’s not in any public document), this ONLY covers the drive end of things. What connector ends up at the other end of the cable on the motherboard is undefined. Could be HD Mini SAS as in the current M.2 interposer solution, could be something else entirely.
> What connector ends up at the other end of the cable on the motherboard is undefined. Could be HD Mini SAS as in the current M.2 interposer solution, could be something else entirely.
Possibly a connector like the one on this LSI SAS 3008 HBA:
http://supremelaw.org/systems/lsi/LSI.SAS3008.HBA.JPG
Source:
http://www.storagereview.com/supermicro_lsi_sas3008_hba_review
repeating from the other thread:
https://forums.servethehome.com/index.php?threads/nvme-2-5-sff-drives-working-in-a-normal-desktop.5864/
MRFS