The Register has put up a bit more information about AMD's new embedded versions of Ryzen and Epyc. The Epyc 3000 will appear in networking, storage and edge computing devices, offering 64 PCIe lanes, eight 10 GbitE, 16 SATA, and up to 4 memory channels per CPU. The Ryzen V1000 APU will be more for POS and entertainment, with 16 PCIe lanes, dual 10 GbitE, four USB 3.1, and up to four independent 4K displays. Alternatively, it can support a 5K display, with support for the H.265 and VP9 codecs. Get a look at all the models here.
"The semiconductor firm is aiming Epyc 3000 at networking, storage and edge computing devices and the Ryzen V1000 at medical imaging, industrial systems, digital gaming and thin clients. Both are embedded systems."
Here is some more Tech News from around the web:
- Hacker coaxes Windows 10 ARM to run on a Lumia 950 prototype @ The Inquirer
- Hackers Are Selling Legitimate Code-signing Certificates To Evade Malware Detection @ Slashdot
- Lenovo stuffs Alexa into its Yoga 730 and 530 convertible laptops @ The Inquirer
- Intel's announced PCs packing 5G, and that's just plain wrong @ The Register
- HP's first ARM-based Windows PC costs as much as an iPhone X @ The Inquirer
- New tool safely checks your passwords against a half-billion pwned passwords @ Ars Technica
- Galaxy S9 vs iPhone X specs comparison @ The Inquirer
- The State of 5G: When It's Coming, How Fast It Will Be & The Sci-Fi Future It Will Enable @ Techspot
- Skype is turning into a white elephant (also green, orange, purple, puce etc) @ The Inquirer
- Samsung breaks ground on new EUV line in Hwaseong @ DigiTimes
- Windows 10 WSL vs. Linux Performance For Early 2018 @ Phoronix
While we wait for a general-purpose NVMe RAID controller to show up, there are some emerging trends that promise to deliver enormous storage throughput, e.g. PCIe 4.0 will support a transfer rate of 16 GT/s per lane. And, if the SATA and SAS standards were also to adopt the 128b/130b encoding already in the PCIe 3.0 specification, the industry could standardize on “syncing” chipsets with storage subsystems. For example, look at what Samsung just announced for the SAS ecosystem:
http://hexus.net/tech/news/storage/115430-samsung-starts-mass-production-3072tb-sas-ssds/
Am I correct to presume that AMD’s engineers are already aware of these developing trends?
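For a sense of why that 128b/130b point matters, here is a minimal back-of-the-envelope sketch, assuming nothing beyond the published raw per-lane rates and the 8b/10b versus 128b/130b line codes, of how usable per-lane throughput compares:

```python
# Back-of-the-envelope only: published raw per-lane rates combined with the
# line code each interface uses (8b/10b for SATA 3.x, 128b/130b for PCIe 3.0/4.0).
links = {
    # name: (raw line rate in Gb/s, payload bits, coded bits)
    "SATA 3 (8b/10b)":         (6.0, 8, 10),
    "PCIe 3.0 x1 (128b/130b)": (8.0, 128, 130),
    "PCIe 4.0 x1 (128b/130b)": (16.0, 128, 130),
}

for name, (raw_gbps, payload, coded) in links.items():
    usable_gbps = raw_gbps * payload / coded
    # Divide by 8 to express the usable payload rate in GB/s.
    print(f"{name}: {usable_gbps:.2f} Gb/s usable (~{usable_gbps / 8:.3f} GB/s)")
```

The roughly 20% coding overhead of 8b/10b links versus the ~1.5% overhead of 128b/130b links is the gap the “syncing” argument above is getting at.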
“The Epyc 3000 will appear in networking, storage and edge computing devices, offering 64 PCIe lanes, eight 10 GbitE, 16 SATA, and up to 4 memory channels per CPU.”
Is that “64 PCIe and 8 10 GbitE” or “64 PCIe or 8 10 GbitE”? I would expect that the 10 GbitE links use the same external pins on the die as some of the PCIe or inter-processor links, just configurable as either PCIe or 10 GbitE. They do this with the SATA connections: a single die has 32 links for IO, 8 of which can be configured as SATA, leaving 24 general-purpose links in the desktop Ryzen processors.
Since a 10 GbitE connection would take an x4 (?) link, they could technically do it with just the IO links; it would take one x16 off each die, but that would leave only 32 links available, not 64. If you use one x16 off each die for the 10 GbitE and 8 links off each die for the 16 SATA, then you only have 8 lanes on each die still available for other IO. I am assuming that the 16- and 12-core models are two-die parts, although it doesn’t look like the article shows that. At least 3 of the inter-processor links are meant to be routed only the very short distances on the Epyc package; perhaps the link meant to go from socket to socket can also be configured as four 10 GbitE links. That seems to make the most sense. Do we know everything that is integrated on the Zeppelin die yet, or are there going to be more surprises?
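As a rough tally of that lane budget, here is a quick sketch using only the assumptions above (two Zeppelin dies, 32 configurable lanes each, an x16 group per die for the 10 GbitE, and 8 lanes per die for the SATA); none of these splits are confirmed by AMD:

```python
# Hypothetical lane budget for a two-die Epyc 3000 part, using only the
# assumptions from the comment above (not confirmed by AMD).
DIES = 2
LANES_PER_DIE = 32              # configurable high-speed IO links per die

lanes_for_10gbe_per_die = 16    # assumed: one x16 group per die for four 10 GbitE ports
lanes_for_sata_per_die = 8      # assumed: 8 SATA ports per die, 16 total

total_lanes = DIES * LANES_PER_DIE
used_lanes = DIES * (lanes_for_10gbe_per_die + lanes_for_sata_per_die)
left_for_pcie = total_lanes - used_lanes

print(f"Total configurable lanes: {total_lanes}")         # 64
print(f"Used for 10 GbitE + SATA: {used_lanes}")          # 48
print(f"Left for general-purpose PCIe: {left_for_pcie}")  # 16, i.e. 8 per die
```

That is exactly the 8-leftover-lanes-per-die squeeze described above; if the 10 GbitE instead hangs off a repurposed inter-processor link, the whole x16 per die comes back.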
The latest deep dive(1) has the most new Zeppelin die information, but these embedded SKUs must have data sheets up on AMD's webpage by now as well. Slide 13 should answer your question as far as what is available per Zeppelin die, and Slide 5 also helps. All 29 slides are very informative.
(1)
“ISSCC 2018: “Zeppelin”: an SoC for Multi-chip Architectures ”
https://www.slideshare.net/AMD/isscc-2018-zeppelin-an-soc-for-multichip-architectures
I have seen that. It doesn’t appear to show anything about on-die 10 GbitE. It does show how the SATA and PCIe links are shared. For embedded use, it may be fine for the 10 GbitE to use one x16 of each die's 32 IO links. With 8 SATA per die, that only leaves two x8 links (one per die) for general-purpose PCIe.
There is other IP on the chips besides the PCIe-based links that can be used to drive 10 GbitE.
There are 5 generations of the SERDES electrical interface, at 3.125, 6, 10, 28 and 56 Gb/s, and there are 32 high-speed SERDES lanes on each Zeppelin die. One SERDES lane can drive a whole lot of data. There is also plenty of IF (Infinity Fabric) bandwidth and interface provided per Zeppelin die. See page 4 of the slide presentation; that's way more connectivity than just the PCIe 3.0 connectivity, and it can be used for I/O, Ethernet or otherwise.
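To put those per-lane figures side by side, here is a quick comparison using only the rates quoted above; which SERDES generation each Zeppelin lane actually runs at is not spelled out here, so the aggregate numbers are purely illustrative:

```python
# Per-lane raw rates only, as quoted above; no encoding overhead applied.
serdes_lane_rates_gbps = [3.125, 6, 10, 28, 56]   # SERDES electrical interface generations
pcie_raw_per_lane_gbps = {"PCIe 1.x": 2.5, "PCIe 2.x": 5.0, "PCIe 3.x": 8.0}

fastest = max(serdes_lane_rates_gbps)
for gen, rate in pcie_raw_per_lane_gbps.items():
    print(f"{gen}: {rate} Gb/s raw per lane; "
          f"a {fastest} Gb/s SERDES lane carries {fastest / rate:.1f}x that")

# Illustrative aggregate of 32 lanes per die at each SERDES generation,
# whatever rate the lanes are actually clocked at in a given product.
for lane_rate in serdes_lane_rates_gbps:
    print(f"32 lanes at {lane_rate} Gb/s each: {32 * lane_rate:.0f} Gb/s raw aggregate")
```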
P.S. That figure of 5 generations of the SERDES electrical interface, at 3.125, 6, 10, 28 and 56 Gb/s, is per lane. So at the 56 Gb/s per-lane rate that's a whole lot more bandwidth than a single PCIe 3.0 (x1) lane requires. Hell, even 10 Gb/s SERDES is faster than a PCIe 3.0 lane.
“The specified maximum transfer rate of Generation 1 (Gen 1) PCI Express systems is 2.5 Gb/s; Generation 2 (Gen 2) PCI Express systems, 5.0 Gb/s; and Generation 3 (Gen 3) PCI Express systems, 8.0 Gb/s. These rates specify the raw bit transfer rate per lane in a single direction and not the rate at which data is transferred through the system. Effective data transfer rate or performance is lower due to overhead and other system design trade-offs.” (1)
(1)
“Understanding Performance of PCI Express Systems”
https://www.xilinx.com/support/documentation/white_papers/wp350.pdf
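Turning those raw per-lane rates into usable numbers, assuming only the well-known line codes (8b/10b for Gen 1 and Gen 2, 128b/130b for Gen 3) and ignoring the protocol overhead the whitepaper mentions, gives roughly:

```python
# Raw per-lane rates from the whitepaper quoted above, combined with the
# line code each generation uses; packet/protocol overhead is ignored, so
# real-world throughput is lower still.
generations = {
    # name: (raw Gb/s per lane, payload bits, coded bits)
    "Gen 1": (2.5, 8, 10),
    "Gen 2": (5.0, 8, 10),
    "Gen 3": (8.0, 128, 130),
}

for name, (raw, payload, coded) in generations.items():
    per_lane = raw * payload / coded
    print(f"{name}: {per_lane:.2f} Gb/s usable per lane, "
          f"x4 ~{per_lane * 4:.1f} Gb/s, x16 ~{per_lane * 16:.1f} Gb/s")
```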
That is great, but it still doesn't include any information on how/where the 10 GbitE connects. I assume it is shared with the IO links. If they want to use a standard PHY then they can't make use of non-standard connections. The processor interconnect links may be electrically different, since 3 of them are only for on-package routing with a max length of a centimeter or two. One of them is for off-package routing to another socket, so it is a possibility.
Well, that's hard to figure out from AMD, but there is a whole lot of SERDES on Zeppelin, and the chip floor plan for the 3000 series has to be using a custom (2-die) layout with custom motherboards. And Epyc 3000 is BGA (product brief, page 3), so maybe that's what needs to be sussed out. It's an embedded processor, so that implies custom solutions for networking, storage, and other usage.
The footnotes for the webpage(2) state:
“* AMD EPYC™ Embedded 3451 supports up to 64 PCI Express high speed I/O lanes, 8 10 GbE, 16 SATA, and 4 memory channels versus Xeon D 1587 supports 32 PCIe lanes, 4 10GbE, 6 SATA, 2 memory channels – EMB-153”
The Product Brief states:
“• Integrated eight 10Gb ethernet ports provide seamless support for IPv4 and IPv6 security protocols, with integrated crypto acceleration supporting the IPsec protocol.” (1) [see page 3 of the PDF]
(1)
“Product Brief: AMD EPYC™ Embedded 3000 Family”
https://www.amd.com/Documents/3000-Family-Product-Brief.pdf
(2)
“AMD EPYC™ Embedded 3000 Series”
https://www.amd.com/en/products/embedded-epyc-3000-series
I do NOT like all the sign-in requirements and Google Analytics baked into AMD's updated technical web pages. That's marketing for you: attempts to use registration to build marketing lead sheets on folks who just want to research the AMD technology. Whitepapers need to be available in the open, and it's AMD's fault for not providing more non-marketing-influenced technical information on its products.
From the fuse-wikichip info:
“Configurable I/O
With the introduction of the Xeon D-2100, Intel introduced 20 additional configurable high-speed I/O (HSIO) lanes that can be configured as either PCIe, SATA, or USB (or any valid combinations of those).
Since each Zeppelin die has a highly configurable set of I/Os, AMD has exposed some of this configurability to the system designers in order to allow for higher design flexibility very similar to what Intel has done with their Xeon D models.
The single-die models can have up to 32 PCIe lanes that are MUX’ed with the SATA and GbE ports and can be configured as a mixed combination of those. Those models can be configured as either 32 PCIe lanes or a combination of PCIe lanes and up to 8 SATA ports and up to 4 x 10GbE ports depending on the application of the device. Likewise, for the models which incorporate two dies, this is increased to 64 PCIe lanes that can be configured as up to 16 SATA ports and up to 10 x 10GbE ports.” (1)
But looking at AMD's own marketing material it's still confusing, as Zeppelin has all that SERDES that can accommodate so much more bandwidth; maybe that's for some whitepaper to clear up. And it appears, according to this wikichip information, that the PCIe lanes can be given over to other usage on these Zeppelin (first generation) derived embedded SKUs. AMD sure has given Zeppelin an excess of SERDES for future usage whenever AMD tapes out any Zeppelin-V2 dies.
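Reading the wikichip limits literally, one way to picture the per-die muxing is a simple budget check. The sketch below is only a toy, and the idea that each SATA or 10GbE port displaces exactly one muxed lane is an assumption made here for illustration, not something AMD or wikichip states:

```python
# Toy allocator for a single Zeppelin die's muxed high-speed I/O, using the
# per-die limits quoted from the wikichip article (32 lanes, up to 8 SATA,
# up to 4x 10GbE). The one-lane-per-port cost model is an assumption.
PER_DIE_LANES = 32
MAX_SATA_PORTS = 8
MAX_10GBE_PORTS = 4

def fits_one_die(pcie_lanes: int, sata_ports: int, tengbe_ports: int) -> bool:
    """Return True if the requested per-die mix fits the quoted limits."""
    if sata_ports > MAX_SATA_PORTS or tengbe_ports > MAX_10GBE_PORTS:
        return False
    # Assumed cost model: every SATA or 10GbE port displaces one PCIe lane.
    return pcie_lanes + sata_ports + tengbe_ports <= PER_DIE_LANES

print(fits_one_die(pcie_lanes=32, sata_ports=0, tengbe_ports=0))  # True: all PCIe
print(fits_one_die(pcie_lanes=20, sata_ports=8, tengbe_ports=4))  # True: mixed config
print(fits_one_die(pcie_lanes=24, sata_ports=8, tengbe_ports=4))  # False: over budget
```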
There are specific links to each Epyc and Ryzen embedded model number in the wikichip article, so it's a great source of information, presented in a more logical manner than Wikipedia's, and wikichip updates its info more often than Wikipedia does.
But the fuse-wikichip listing(1) also states:
“The single-die configuration uses a single-chip module SP4r4 package whereas the dual-die configuration uses a multi-chip module SP4 package. Both packages are ball grid arrays (BGAs) and are pin-compatible with each other. We don’t have a picture of the package but they are considerably smaller than the SP3 package that is used for the normal EPYC chips.” (1)
(1)
“AMD launches EPYC Embedded 3000 and Ryzen Embedded V1000 SoCs”
https://fuse.wikichip.org/news/945/amd-launches-epyc-embedded-3000-and-ryzen-embedded-v1000-socs/
So the V1000 parts are just embedded 22/2500G… Interesting; I'd like to see if there's any difference.
The super low-end one would be a neat little Steam box.