Sony's lead system architect Mark Cerny has shared some high-level details of the next PlayStation (only referred to as "the next-gen console" in the interview) with Wired.com, confirming that it will indeed make use of the upcoming 7nm Zen2 CPU architecture from AMD, as well as Radeon Navi GPU cores in its custom chip.
Quoting from the Wired article:
"The CPU is based on the third generation of AMD’s Ryzen line and contains eight cores of the company’s new 7nm Zen 2 microarchitecture. The GPU, a custom variant of Radeon’s Navi family, will support ray tracing, a technique that models the travel of light to simulate complex interactions in 3D environments."
As if to alleviate any doubt as to the AMD architecture involved, company CEO Lisa Su took to Twitter to promote AMD's partnership with Sony, and the Wired article:
Super excited to expand our partnership with @Sony on their next-generation @PlayStation console powered by a custom chip with @AMDRyzen Zen2 and @Radeon Navi architecture! https://t.co/EvdIrMNLiV
— Lisa Su (@LisaSu) April 16, 2019
And this upcoming PlayStation won't just offer a faster SoC with the latest generation of AMD CPU and GPU architectures, as SSD storage will be standard – and not just any SSD, apparently (quoting the Wired article again):
"At the moment, Sony won’t cop to exact details about the SSD—who makes it, whether it utilizes the new PCIe 4.0 standard—but Cerny claims that it has a raw bandwidth higher than any SSD available for PCs. That’s not all. “The raw read speed is important,“ Cerny says, “but so are the details of the I/O [input-output] mechanisms and the software stack that we put on top of them. I got a PlayStation 4 Pro and then I put in a SSD that cost as much as the PlayStation 4 Pro—it might be one-third faster." As opposed to 19 times faster for the next-gen console, judging from the fast-travel demo."
Check out the full article at Wired.com for more of the interview with Cerny on the next Sony console.
From the Wired article:
“While ray tracing is a staple of Hollywood visual effects and is beginning to worm its way into high-end processors and Nvidia’s recently announced RTX line, no game console has been able to manage it. Yet.”
Well, Hollywood has been doing ray tracing for decades on CPUs in non-real-time rendering, where each frame could take many minutes or even hours to render. So the key phrase here is real-time ray tracing, and even Nvidia’s RTX/Turing SKUs use what is called hybrid “real-time” ray tracing, where the limited ray tracing output has to be denoised and is mixed in with the output of the regular raster pass/es.
So will Sony/AMD have some hardware/ASIC-based solution included on the semi-custom AMD PS5 APU, or will Sony/Microsoft, with AMD, go the FPGA route and implement the ray tracing using FPGAs instead? The Wired author also calls the PS5 a custom variant, but really all of AMD’s console variants, past and present, are custom to some degree or another – hence the Semi-Custom naming of AMD’s Semi-Custom unit/division that’s been around for years now.
For all we know, AMD may have already developed some ray tracing IP that’s going through the hardware vetting/certification process. Usually there can be some FPGA implementation done up in advance of the ASIC IP’s arrival so software developers can get a head start on the software development process, which has to begin years before the actual hardware arrives, even in engineering sample form.
There can also be FPGA-implemented ray tracing done without the need to even create any ASIC-based IP, and maybe some Sony/Microsoft next-generation gaming SKUs could have both ray tracing and AI-based denoising/upscaling implemented via FPGAs that can be reprogrammed when better algorithms become available.
“Real-time” ray tracing where all parts of all the render passes are done purely using ray tracing is still not achieved currently by Nvidia or Imagination Technologies (PowerVR Wizard ray tracing). That PowerVR ray tracing IP is up for third-party licensing as well, so who knows what Sony/Microsoft may be using.
One thing that is certain for AMD is that they already need to be developing dedicated tensor core IP for their professional line of compute/AI-oriented GPU SKUs, and both AMD and Intel will be forced to develop their own ray tracing IP in the longer run now that Microsoft has added DXR ray tracing alongside DX12 in MS’s graphics API – ditto for Khronos and Vulkan.
Even for non-real-time ray tracing rendering workloads, where frame time limits do not matter, Nvidia’s RTX SKUs will speed the process up, so both AMD and Intel will have professional-graphics-oriented GPU market offerings to consider that must compete with Nvidia’s RTX IP. Techgage has done some RTX Quadro 4000 benchmarking, and both AMD and Intel will have to get hardware-based ray tracing IP of their own over the next few years or they will not be able to compete with Nvidia for that higher-margin professional GPU graphics market business.
Also, I’d expect that Sony/Microsoft need to be asked about any tensor core IP as well as ray tracing IP, now that we understand that AI-based denoising and upscaling will be necessary on consoles even more so than on discrete-GPU-based gaming PCs. Upscaling is far more necessary for console gaming than PC gaming, so AI-based upscaling will be all the more helpful on consoles.
Nvidia’s “real-time” ray tracing, in its current form, could not succeed using only ray tracing cores, as the denoising of the limited ray tracing output is done on the tensor cores by a trained AI denoising algorithm. AI will continue to become more necessary for gaming in all manner of ways going forward.
I suspect that Google may be using some of its tensor core IP along with AMD’s Radeon Pro V340 GPU variants and Intel Xeon CPUs in order to get things done more quickly for Google’s Stadia cloud gaming service and keep latency issues to a minimum. So AI in gaming and graphics is a big thing now, helping with many gaming/graphics workloads (AA and filtering/post-processing) in addition to ray tracing output denoising.
thanks much as always for your insights
i still believe you are a high level engineer at amd
in any case, i have been relying on your brilliant insights for i guess two to three years, since you started posting on zen
charlie at semiaccurate is my other go to for the inside scoop
both of you are awesome
I realize this is a “safe” choice for Sony, going with iterative hardware: backwards compatibility is assured, there are much lower R&D costs, there are fewer potential show-stopping delays, etc. But, to me, it also hints that the forthcoming Navi and Zen2 products should be pretty good if Sony is staying with an AMD solution.
Yes, for those reasons. Both Sony/MS are staying with an AMD solution; otherwise it would take more years for them to move to anyone else’s solution. The x86 ISA on the Jaguar cores is pretty much a subset of the x86 ISA on Zen/Zen+/Zen2, so most x86 code will just run on Zen/Zen+/Zen2 without modification, but it will be less optimized until it’s refactored/recompiled to target Zen2’s optimizations.
Sony/MS have their respective source code, so any recompiling will mostly be done via some x86 compiler with compiler switches that target Zen2 optimizations. So very little actual code refactoring will be required for legacy PS4/Xbox games. Sony/MS are using their own console-hardware-optimized code tailored to the Zen2 micro-arch, and the same sorts of things will be done for the Navi-based graphics, with the Navi ISA being a superset of the previous GCN versions’ ISA implementations.
Both Sony and MS have millions more invested in their respective console OS/API and gaming engine ecosystems than any of the hardware will cost, so that alone is reason enough to stay with AMD rather than go with other options. AMD’s graphics likewise has a software API/games software ecosystem that cost millions more than any hardware to develop and maintain, so there is plenty of reason to stay with AMD above and beyond even Zen2’s/Navi’s performance improvements, which will bring console gaming closer to low-to-mid-range PC gaming than it ever was previously.
It’s less of the “Safe” and more related to the actual costs/time frame required for switching to any other’s console solution for both Sony and Microsoft. And a good part of that reasoning is related to the ease of porting over/running games from the previous generation of Sony/MS consoles.
It’s not only a GPU’s micro-arch that can make or break a GPU in the gaming marketplace; the GPU’s actual tapeout matters more. That includes the numbers of shader cores, TMUs, and ROPs, where more ROPs allow for things like higher pixel fill rates that directly result in higher FPS rates.
Nvidia’s Pascal micro-arch did not win the flagship gaming GPU race against GCN on architecture alone; it was really more related to Nvidia’s GP102 base die tapeout, which had 96 total ROPs engineered in. That gave Nvidia the option to bin that die down to 88 of the 96 available ROPs and still beat any 64-ROP AMD GPU, or other Nvidia GPU, with its much lower pixel fill rate.
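The ROP argument is easy to put numbers on, since peak pixel fill rate is just ROP count times clock (assuming one pixel per ROP per clock). The clocks below are approximate reference boost clocks, not guaranteed figures:

```python
# Back-of-envelope peak pixel fill rate: ROPs * clock (GHz) = GPixels/s,
# assuming one pixel per ROP per clock (typical for these parts).
def fill_rate_gpixels(rops, clock_ghz):
    return rops * clock_ghz

# GTX 1080 Ti: GP102 binned down to 88 of 96 ROPs, ~1.58 GHz boost
print(f"1080 Ti: {fill_rate_gpixels(88, 1.58):.0f} GPixels/s")  # ~139
# RX Vega 64: 64 ROPs, ~1.55 GHz boost
print(f"Vega 64: {fill_rate_gpixels(64, 1.55):.0f} GPixels/s")  # ~99
```

Even with Vega clocked similarly, the 64-ROP ceiling leaves it roughly 40% behind on raw pixel throughput, which is the commenter’s point about tapeout choices.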
So for AMD, any Zen2/Navi APU on TSMC’s 7nm, or 7nm+/TSMC’s new denser 6nm node, means more shaders/TMUs/ROPs, with an emphasis on getting the most ROPs possible for higher pixel fill rates. Then there are the questions about ray tracing hardware and tensor core hardware as well.
Tensor-core-based denoising is the only reason that Nvidia’s ray tracing cores can be made use of in gaming: even at 10 GigaRays/sec, that figure has to be divided by 1000 (1000 ms in one second) and multiplied by the frame time (16.67 ms for 60 FPS and 33.33 ms for 30 FPS frame times) to get a rough estimate of the ray tracing output available per frame. So even Nvidia has too few MRays per frame to provide sufficient rays for all the different render passes that can consume ray calculation resources – and even 3D audio effects can consume ray calculation resources.
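That per-frame budget arithmetic can be sketched directly (the 10 GigaRays/s figure is Nvidia’s marketing number for Turing, taken here as an assumption; real throughput varies heavily by scene):

```python
# Rough per-frame ray budget from an advertised rays-per-second figure.
GIGARAYS_PER_SECOND = 10.0  # assumed Turing marketing figure

def rays_per_frame(fps):
    """Rays available in one frame at a given frame rate."""
    frame_time_ms = 1000.0 / fps                       # 16.67 ms at 60 FPS
    rays_per_ms = GIGARAYS_PER_SECOND * 1e9 / 1000.0   # divide by 1000 ms/s
    return rays_per_ms * frame_time_ms

print(f"{rays_per_frame(60) / 1e6:.0f} MRays per 60 FPS frame")  # ~167
print(f"{rays_per_frame(30) / 1e6:.0f} MRays per 30 FPS frame")  # ~333
```

At 4K (~8.3 million pixels), ~167 MRays per 60 FPS frame works out to only about 20 rays per pixel shared across every ray-consuming pass, which is why the sparse output must be denoised.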
So this means any console that makes use of rays will likewise have to make use of AI denoising of the limited ray output, running on tensor cores. There will need to be AI-based upscaling on consoles as well, so tensor cores matter even more than any ray tracing cores. There’s also the option to simply use shader-core-calculated rays instead of dedicated ray tracing hardware, but that means even fewer rays available per frame, so tensor core AI-based denoising will be needed all the more on any GPU hardware that lacks dedicated ray tracing cores and instead uses shader cores to calculate the ray paths/interactions.
ditto what i posted above to your earlier post, assuming you are the same poster
wish you would use a common thread in your post name to know for sure
thanks again
So. Are they overselling, or is this an optane ssd, or what is going on here?
Possibly Crucial’s version of Optane (Intel would charge too much!) as a cache. Though “any PC” is probably only a comparison to 4 lanes of PCIe 3.0 – I can’t believe he’s comparing to enterprise class.
So, a far more likely scenario is Sony are just using PCIe 4.0 (maybe 5.0 – the PS5 is likely to come out Q4 of 2020) and a QLC SSD.
Note, Cerny only says they have “raw bandwidth higher than any PC”, he isn’t saying their SSD is the fastest. Sony’ll almost certainly use a QLC SSD to get it around the same price as a 2.5″ HDD (with late 2020 as the expected date and QLC likely to tumble in price when Toshiba and Hynix fully enter the market along with Intel & Samsung).
Even if QLC does cost more than portable spinning rust in 18 months’ time, there are other cost savings for Sony in using an SSD: smaller form factor, less packaging, less weight, less power, a cheaper PSU, cheaper shipping, etc.
It might just be a new PCIe 4.0-capable SSD controller. SSD controllers can essentially be made to saturate whatever interface they are on by adding more channels to the controller. It doesn’t need to be anything exotic. Optane seems like it would be way too expensive for a console unless it is meant to be a small cache with an online backup.
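The “add channels until you saturate the interface” point is just a min() of NAND-side and host-side bandwidth. The per-channel and PCIe figures below are ballpark assumptions, not measured numbers:

```python
# Sketch: aggregate SSD bandwidth = NAND channels * per-channel rate,
# capped by the host interface. All GB/s figures are rough assumptions.
PCIE3_X4_GBPS = 3.9  # ~usable PCIe 3.0 x4 bandwidth
PCIE4_X4_GBPS = 7.8  # PCIe 4.0 doubles the per-lane rate

def controller_bandwidth(channels, gbps_per_channel, host_limit_gbps):
    raw = channels * gbps_per_channel
    return min(raw, host_limit_gbps)  # saturates at the host interface

print(controller_bandwidth(8, 0.6, PCIE3_X4_GBPS))  # 3.9 -> PCIe 3.0 bound
print(controller_bandwidth(8, 0.6, PCIE4_X4_GBPS))  # 4.8 -> NAND bound
```

So an unremarkable 8-channel controller already saturates PCIe 3.0 x4; moving the same design to PCIe 4.0 (or adding channels) is enough to beat “any SSD available for PCs” in 2019 without anything exotic like Optane.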
I am waiting for Zen 2 to build a new system, so I am kind of interested in knowing whether they plan on using a standard Zen 2 chiplet or a single die with integrated CPUs. I could see a custom GPU with a small amount of I/O; a single PCIe 4.0 x8 is probably sufficient for a console, plus a single IF link for the CPU chiplet.
Whatever SSD solution Sony comes out with, it will probably be a proprietary interface so that they can sell you an upgraded drive in the future using the Apple Tax method.