PCI-SIG has announced PCIe 4.0 is on the horizon, with up to 16 GT/s per lane, or just a hair under 32 GB/s of transfer rate on an x16 slot. The new standard will also allow devices to use more power: the draw from the slot remains at 75W, but external power should be able to exceed 225W without exceeding the spec. This could mean GPUs can continue to emphasize performance over power efficiency, which could lead to some interesting products. There is no specific date for any products to arrive; this announcement is technically for revision 0.9, so it is not quite ready for prime time. This may disappoint those who read about the new Optane drives, which still use a PCIe 3.0 x4 interface rather than PCIe 4.0.
"PCI-SIG, the organization responsible for the widely adopted PCI Express (PCIe) industry-standard input/output (I/O) technology, today announced the release of the PCI Express 4.0, Revision 0.9 Specification, supporting 16GT/s data rates, flexible lane width configurations and speeds for high-performance, low-power applications"
Here is some more Tech News from around the web:
- NSA bloke used backdoored MS Office key-gen, exposed secret exploits – Kaspersky @ The Register
- AMD's Ryzen 7 2700U and Ryzen 5 2500U APUs revealed @ The Tech Report
- Running Linux on a Chromebook @ Techspot
- Android 8.1 dev preview arrives with Neural Networks API, Android Go enhancements @ The Inquirer
- Google Pixelbook review: Prepared today for the possible reality of tomorrow @ Ars Technica
- noblechairs ICON Series Desk Chair @ [H]ard|OCP
I wonder if Zen2 will have PCIe 4.0 speeds. They can already run their interprocessor links faster than standard PCIe links, but I would assume part of that is due to the short distances; routing distances on an Epyc package are probably a cm or two at most. They do have the off-package links for a dual-socket system, but I don’t know if those run at quite the same speed.
AMD is a founding member of OpenCAPI along with IBM (the creator of the CAPI IP) and others, and AMD is also in the CCIX (Cache Coherent Interconnect for Accelerators) group along with others, including IBM.
AMD’s Epyc processors will probably support more than just Infinity Fabric and PCI-SIG’s standards, as AMD’s membership in the other standards groups shows. Coherent Accelerator Processor Interface (CAPI), now called OpenCAPI, support will see AMD’s professional GPU products interfaced with Power9s if any Power9 OpenPower licensees want an AMD GPU accelerator option instead of Nvidia/NVLink, because IBM’s Power9s support OpenCAPI as well (IBM will not be tied to an Nvidia-only standard or to Nvidia’s GPUs only).
And PCI-SIG has been forced to move up its 5.0 specification by competition from other standards:
“The seven year gap between PCIe 4.0 and PCI 3.0 appears to have emboldened other data shuttling schemes. CCIX, which stands for the Cache Coherent Interconnect for Accelerators, counts companies like AMD, ARM, Broadcom, IBM, Micron, Qualcomm, Red Hat, Texas Instruments, and Xilinx as members.
The server-focused consortium in August announced that it had managed to transfer data at a rate of 25 Gbps, three times faster than PCIe 3.0, the current standard, and faster than PCIe 4.0.”(1)
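Comparing raw per-lane signalling rates makes that “three times faster” claim easy to check (a rough sketch; the 25 Gbps figure is the one from the quote above):

```python
# Raw per-lane signalling rates in GT/s; the 25 figure is the CCIX rate quoted above.
rates = {"PCIe 3.0": 8, "PCIe 4.0": 16, "CCIX (quoted)": 25}

baseline = rates["PCIe 3.0"]
for name, rate in rates.items():
    print(f"{name}: {rate} GT/s per lane ({rate / baseline:.1f}x PCIe 3.0)")
```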
And look, Big Blue/OpenPower Power9 licensees are going to be first with PCIe 4.0:
“IBM’s POWER9 processor, expected to ship this year, is said to be the first processor that will incorporate PCIe 4.0. Intel last month showed off its forthcoming 10nm Falcon Mesa FPGA, which includes a PCIe 4.0 interface.” (1)
(1)
“Fore! PCI Express 4.0 finally lands on Earth”
https://www.theregister.co.uk/2017/10/26/fore_pci_express_40_finally_lands/
Most of these things are just protocols layered on top of the PCIe physical and/or transaction layers. OpenCAPI isn’t supposed to be based on PCIe, but it sounds like it still uses the PCIe physical layer, just with its own transaction layer (not technically PCIe then, but close). There is no reason to reinvent the wheel.
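A toy way to picture that layering (purely illustrative Python, not any real stack’s API): keep the physical layer, swap the transaction layer on top.

```python
# Toy model of protocol layering: one shared physical layer, pluggable
# transaction layers on top. All names here are made up for illustration.

class SharedPhy:
    """Stands in for the PCIe electricals: lanes, clocking, line encoding."""
    def send_bits(self, frame: bytes) -> bytes:
        return frame  # pretend the bits go out on the wire

class PcieTransactionLayer:
    def send(self, phy: SharedPhy, payload: bytes) -> bytes:
        return phy.send_bits(b"PCIE-TLP|" + payload)

class OtherTransactionLayer:
    """A different (OpenCAPI-style) transaction layer reusing the same wires."""
    def send(self, phy: SharedPhy, payload: bytes) -> bytes:
        return phy.send_bits(b"OTHER-TL|" + payload)

phy = SharedPhy()
for layer in (PcieTransactionLayer(), OtherTransactionLayer()):
    print(layer.send(phy, b"read 0x1000"))
```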
A lot of these protocols make use of that PCIe physical layer (wires and switching, etc.) because it’s so ubiquitous, but they are just using that available infrastructure with their own signaling because they can offer more bandwidth than PCIe 3.0, and even PCIe 4.0. So if you look at the reason why PCIe 5.0 will be much quicker to market, it’s the foot dragging by PCI-SIG in getting its PCIe 4.0 standard out the door that allowed these other standards to become necessary.
That physical PCIe layer is just wires and programmable switches, etc., that can be programmed to support other protocols offering better bandwidth over that physical layer. There can also be shared physical-layer hardware that maybe OpenCAPI will provide, in a similar manner to what AMD’s Infinity Fabric provides for switching out PCIe and using the IF protocol for cross-socket coherency and data transfer.
The physical wires on any communication fabric are just that, physical wires; additional controllers can be switched in and PCIe can be gated off, or the wires can be shared if possible, with whatever traffic carried across the wires depending on how the hardware (motherboard/controllers/PHY chips/other) is set up for such usage. The Power9, Xeon, and Epyc platform controllers and their respective motherboards can be made to support plenty of extra signaling features depending on the needs.
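As a rough sketch of that “gate PCIe off, switch another controller in” idea (a toy configuration table, nothing vendor-specific):

```python
# Toy lane-assignment table: the same physical lanes get handed to whichever
# protocol controller the platform is configured for. Purely illustrative.
CONTROLLERS = {
    "pcie": "PCIe controller",
    "if": "Infinity Fabric controller",
    "other": "some other coherent-protocol controller",
}

def assign_lanes(link_config):
    for link, proto in link_config.items():
        print(f"{link}: lanes routed to {CONTROLLERS[proto]}")

# e.g. a dual-socket board might hand the cross-socket lanes to IF
# while the expansion slots stay on PCIe.
assign_lanes({"x16 slot 0": "pcie", "x16 slot 1": "pcie", "socket link": "if"})
```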
That Gigabyte Epyc/SP3 single-socket motherboard even supports direct point-to-point communication so Vega/MI25s can talk to each other directly, and who knows what is used for that point-to-point communication; it may be Infinity Fabric based or PCIe based, but it’s listed explicitly on the Gigabyte board’s product page, so that product needs some direct review attention.
Phoronix has been doing a lot of Epyc testing, but on a Tyan dual-socket motherboard, and I wish that Phoronix would get the Gigabyte Epyc/SP3 single-socket board to test it for workstation workloads. AMD would not let Phoronix do any direct comparisons between Threadripper and Epyc SKUs, but users can still go look up any individual results on their own.
If readers really want loads of comparison reviews between Threadripper and Epyc SKUs, then they are going to have to start funding independent reviews that do not rely on free review samples. Some of the YouTube reviewers do get support for purchasing the products directly from the retail channels and really testing with no restrictions or dependency on free samples.
That is not correct at all. Please recheck your facts.
(For everyone) Just read the docs; they are open for everyone to look at. And as for the “AMD IF”, it is completely something else; it is NOT a protocol-based interface.
IF (Infinity Fabric) is a bus topology that links controllers that speak their respective protocols, so you could say that IF is, as Charlie (1) at S/A describes it:
“If you have been following the AMD disclosures lately you probably remember Infinity Fabric. SemiAccurate first mentioned this when we talked about the new Instinct GPU compute cards but they play a part in Zen and Vega too.
On the surface it sounds like AMD has a new fabric to replace Hypertransport but that isn’t quite accurate. Infinity Fabric is not a single thing, it is a collection of busses, protocols, controllers, and all the rest of the bits. Infinity Fabric (IF) is based on Coherent Hypertransport “plus enhancements”, at a briefing one engineer referred to it as Hypertransport+ more than once. Think of CHT+ as the protocol that IF talks as a start.” (1)
So that IF protocol is actually “Coherent Hypertransport+” according to Charlie over at S/A, and IF has a control fabric and a coherent data/cache fabric and probably a whole lot more, but the wires/traces are not the protocol; protocols are what the controllers speak, encoded and transmitted over the wires, be it IF (“Coherent Hypertransport+”) or PCIe or whatever protocols.
So IF has 2 fabrics that communicate via packets, one control and the other data, and probably a lot of extra functionality as well, if only AMD would publish a detailed white paper. The article describes the 2 fabrics:
“Going down to the metal, or at least metal traces, there isn’t one fabric in IF but two. As you can see the control fabric is distinct from the data fabric which goes a long way towards enabling the scalable, secure, and authenticated goals. Control packets don’t play well with congested data links, and security tends to work better out-of-band too. QoS also play better if you can control it external to the data flows. So far IF seems to be aimed right…” (1)
I’d take the time and read the entire article, but AMD’s production of white papers needs to go up or AMD will suffer continued product confusion, unlike Nvidia and Intel, who produce plenty of white papers with more detailed technical explanations. AMD suffers by letting its marketing folks speak for its engineers and PhDs instead of providing the proper amount of published white papers.
(1)
“AMD Infinity Fabric underpins everything they will make
The future is not quite here yet but it will be
Jan 19, 2017 by Charlie Demerjian”
https://semiaccurate.com/2017/01/19/amd-infinity-fabric-underpins-everything-will-make/
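To make the two-fabric description above a bit more concrete, here is a toy sketch of control traffic riding separately from data traffic (the packet fields are invented for illustration, not AMD’s actual format):

```python
# Toy model of a split control/data fabric: two independent queues so control
# messages never sit behind congested bulk data. Fields are invented for illustration.
from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    kind: str      # "control" or "data"
    payload: bytes

class SplitFabric:
    def __init__(self):
        self.control_q = deque()
        self.data_q = deque()

    def send(self, pkt: Packet) -> None:
        # Route by packet kind; control traffic never queues behind data traffic.
        (self.control_q if pkt.kind == "control" else self.data_q).append(pkt)

fabric = SplitFabric()
fabric.send(Packet("data", b"x" * 4096))        # bulk transfer
fabric.send(Packet("control", b"power-state"))  # doesn't wait behind the 4 KB blob
print(len(fabric.control_q), len(fabric.data_q))  # 1 1
```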
P.S. You cannot state the entire statement is wrong without a line-by-line, statement-by-statement explanation of where the post that you replied to is wrong. Provide links to white papers, but most everything communicated over wires/traces is done by controllers that speak protocols and encode/decode packets that are sent down the wires/traces.
There are also a lot of tunneling protocols used in computing, where protocols are wrapped as data by other protocols and then sent as packets over a tunneling protocol, along with a lot of other different protocols that are also wrapped into the tunneling protocol and sent via that tunneling protocol’s controller command/control and data transfer service.
Thunderbolt 3 can be such a case: PCIe packets are encapsulated in TB3 packets (2) and sent over a TB3 cable, where on the other end the device’s TB3 controller extracts the PCIe packets from the TB3 packets and sends them on their merry way in the external TB3/PCIe box with the external GPU plugged in.
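A stripped-down sketch of that wrap-and-unwrap flow (the framing here is invented for illustration, not the real Thunderbolt packet format):

```python
# Toy tunnel: wrap a "PCIe" packet in an outer frame, ship it, then unwrap it
# at the far end. The header layout here is made up for illustration.
import struct

TUNNEL_MAGIC = b"TB3!"

def encapsulate(pcie_packet: bytes) -> bytes:
    # Outer frame: magic + length + inner packet
    return TUNNEL_MAGIC + struct.pack(">I", len(pcie_packet)) + pcie_packet

def decapsulate(frame: bytes) -> bytes:
    assert frame[:4] == TUNNEL_MAGIC, "not a tunnel frame"
    (length,) = struct.unpack(">I", frame[4:8])
    return frame[8:8 + length]

inner = b"PCIe TLP: memory read request"
assert decapsulate(encapsulate(inner)) == inner
```

ExpEther, mentioned below, does conceptually the same thing, just with Ethernet frames as the outer wrapper.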
The cloud services providers have hardware/controllers that will send PCIe over Ethernet(1):
“ExpEther is a System Hardware Virtualization Technology that expands standard PCI Express beyond 1 km having thousands of roots and endpoint devices together on a single network connected through the standard Ethernet. Abundant commodities of PCI Express-based software and hardware can be utilized without any modification. It also provides software-defined re-configurability to make a disaggregated computing system with device-level.” (1)
(1)
“ExpEther”
https://en.wikipedia.org/wiki/ExpEther
NOTE: See reference (2) below, page 3 of the PDF, titled “How Thunderbolt 3 Works”
“Fundamentally, Thunderbolt is a tunneling architecture designed to take a few underlying protocols, and combine them onto a single interface, so that the total speed and performance of the link can be shared between the underlying usages of these protocols – whether they are data, display, or something else. At the physical interface level, Intel’s Thunderbolt 3 silicon builds in a few important features:
• A physical interface (PHY) layer that can dynamically switch it’s operating mode to drive either:
– USB 2.0, 3.0, and 3.1
– DisplayPort 1.1 and 1.2a
– Thunderbolt at 20 and 40 Gbps
• In the Thunderbolt mode, Thunderbolt 3 port has the ability to support at least one or two (4 lane) DisplayPort interface(s), and up to 4 lanes of PCI Express Gen 3” (2)
(2)
“TECHNOLOGY BRIEF
Thunderbolt™ 3”
https://thunderbolttechnology.net/sites/default/files/Thunderbolt3_TechBrief_FINAL.pdf
And some more interesting reading (1) on CCIX and how it uses some of the PCIe IP while throwing out some of the unneeded parts of the IP:
“The basic idea behind CCIX, pronounced see-six, is to make a single coherent interconnect for CPUs, SoCs, racks, and accelerators, IE one bus to rule them all. Why is it cool? For starters it is a triply nested acronym, but it is the tech that really is interesting. Essentially you take PCIe, throw out the heavy parts, basically the transaction layer and above, and keep the good bits. Starting with the PCIe Data Link Layer and Physical Layer, you put a CHI based CCIX Transaction Layer on top, plus more. It looks like this.” (1)
So some layers are added for needed functionality:
“Layers but not a cake
The idea is twofold starting with re-using the PCIe physical layer where applicable. This allows you to use external bits like PCIe switches like the PLX/Avago devices. Since you are stripping out the PCIe transaction layer and replacing it with a CCIX version, you can redefine a lot. This redefinition is basically a stripping out the ‘heavy’ parts of the protocol and replaces it with, more or less, a subset of the ops plus a few more.
There are two sub-flavors of CCIX messaging, one meant to work over PCIe and one for CCIX only links, determined in the header. Some message types are completely eliminated, others like packing CCIX messages into a single PCIe packet added. The most interesting type of these is for request and snoop chaining. Since the accelerators are going to be memory mapped and likely have large areas walked through sequentially, the chaining allows you to read or write sequential areas without having to resend the address every packet. This drops bandwidth usage quite a bit. These kinds of optimizations and added lightness are all over CCIX.” (1)
Again, the entire article is a good read, but the point is that things can be added and new protocols used on PCIe’s existing infrastructure, with other pieces added as well.
“ARM talks about CCIX details”
https://semiaccurate.com/2017/10/26/arm-talks-ccix-details/
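The chaining win is easy to see with a rough overhead count (toy header and address sizes, just to show why not resending the address with every packet saves bandwidth):

```python
# Rough illustration of request chaining: sequential accesses can skip
# re-sending the address in every packet. Header/address sizes are toy numbers.

PAYLOAD = 64        # bytes per request
ADDRESS = 8         # bytes of address carried per unchained request
HEADER = 4          # other per-packet header bytes

def bytes_on_wire(n_requests, chained):
    addr_bytes = ADDRESS if chained else ADDRESS * n_requests  # address sent once if chained
    return n_requests * (PAYLOAD + HEADER) + addr_bytes

n = 1000
plain = bytes_on_wire(n, chained=False)
chain = bytes_on_wire(n, chained=True)
print(f"unchained: {plain} B, chained: {chain} B, saved: {100 * (1 - chain / plain):.1f}%")
```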
And here is the Wikipedia entry for HyperTransport, and it states under that entry’s Infinity Fabric subheading:
“Infinity Fabric
Infinity Fabric is a superset of HyperTransport announced by AMD in 2016 as an interconnect for its GPUs and CPUs. It is also usable as interchip Interconnect for communication between CPUs and GPUs.[6][7] The company said the Infinity Fabric would scale from 30 GBytes/s to 512 GBytes/s, and be used in the Zen-based CPUs and Vega GPUs which were subsequently released in 2017.” (1)
(1)
“HyperTransport”
https://en.wikipedia.org/wiki/HyperTransport
The posts here leave the dross on most sites for dead. Kudos.