Of the eight Jaguar cores that Sony added to the PlayStation 4 APU, two were locked down for the console's operating system and other tasks. This left developers with six to push their workloads through. The Xbox One had the same arrangement until Microsoft released an update last year that unlocked one of the reserved cores, bringing the total to seven.
NeoGAF users report that, allegedly, PlayStation 4 games can now utilize seven of the eight cores after a recent SDK update from Sony. They cite a recent changelist for FMOD, a popular audio middleware library for PC, mobile, and console platforms, which references targeting “the newly unlocked 7th core.”
Since this is not an official Sony announcement, at least not publicly, we don't know some key details. For instance, is the core completely free, or will the OS still push tasks onto it during gameplay? Will any features be disabled if the seventh core is targeted? How frequently will the seventh core be blocked, if ever? And what will happen, if anything, if a game blocks it? The Xbox One is said to use about 20% of its unlocked seventh core for Microsoft-related tasks, and claiming the remaining 80% is said to disable voice recognition and Kinect features.
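For the curious, “targeting” a specific core generally means setting a thread's affinity mask. The actual console SDK calls are under NDA, so the following is a minimal, hypothetical sketch using Linux's pthread affinity API purely to show the shape of the idea:

```cpp
// Hypothetical sketch of "targeting the 7th core": pin a worker thread to
// core index 6 via an affinity mask. Real console SDK calls are NDA'd; this
// uses Linux's pthread_setaffinity_np() only to illustrate the concept.
// Build with: g++ -pthread affinity.cpp
#include <pthread.h>
#include <sched.h>
#include <thread>
#include <cstdio>

void audio_mix_job() {
    // An audio mixer (e.g., FMOD's) would run its per-frame work here.
    std::puts("mixing on the unlocked core");
}

int main() {
    std::thread worker(audio_mix_job);

    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(6, &mask);  // core index 6 == the "seventh" core
    // Best effort: this can fail (or be overridden) if the OS still reserves
    // the core -- one of the open questions above.
    pthread_setaffinity_np(worker.native_handle(), sizeof(mask), &mask);

    worker.join();
    return 0;
}
```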
The Xbox One and PlayStation 4 are interesting devices to think about. They go low frequency, but wide, in performance, similar to many mobile devices. They also utilize a well-known instruction set, x86, which obviously has a huge catalog of existing libraries and features. I don't plan on ever buying another console, but they move with the industry and have a fairly big effect on it (albeit much less than previous generations).
Well, the multi-core support seems kind of similar to DirectX 12 and Vulkan, so I feel like it’s pretty interesting. I guess their console experience influenced AMD in making Mantle, which influenced DX12 and was the foundation for Vulkan, so I’m hoping this will mean that AMD will be better prepared for the future when it comes to competing in hardware.
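To illustrate the parallel this comment is drawing: the headline multi-core feature of DX12 and Vulkan is that command lists can be recorded on many threads and submitted from one. Below is a rough sketch of that pattern, with a stand-in CommandList type rather than a real graphics API:

```cpp
// Rough sketch of the DX12/Vulkan threading model alluded to above: each
// core records its own command list in parallel, then one thread submits
// them in order. CommandList here is a stand-in, not a real graphics API.
#include <functional>
#include <string>
#include <thread>
#include <vector>
#include <cstdio>

using CommandList = std::vector<std::string>;

void record(CommandList& list, int chunk) {
    // A real engine would record draw calls for one slice of the scene here.
    list.push_back("draw chunk " + std::to_string(chunk));
}

int main() {
    const int kWorkers = 7;  // e.g., seven unlocked cores
    std::vector<CommandList> lists(kWorkers);
    std::vector<std::thread> workers;

    for (int i = 0; i < kWorkers; ++i)
        workers.emplace_back(record, std::ref(lists[i]), i);
    for (auto& t : workers) t.join();

    // Submission stays serialized, but recording scaled across all cores.
    for (const auto& list : lists)
        for (const auto& cmd : list)
            std::printf("submit: %s\n", cmd.c_str());
    return 0;
}
```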
When it comes to consoles, AMD rules the roost, no doubt. In the PC space, however, AMD doesn’t have a software problem (isolated driver issues notwithstanding); they have a hardware problem. They are not competitive on performance, and they are not competitive on performance per watt. Likewise, AMD’s pricing model needs work: they can’t command a premium, but they try to anyway. No amount of API tinkering will solve that. Real-world DX12 benchmarks (so far) are not proving AMD’s value, but rather continuing to prove the value of Intel+NVIDIA setups.
They need Arctic Islands to be awesome and they need it now.
hyperbole and enthusiasm aside, i can’t find much wrong with your assertion. except that methinks you mean “performance per watt” only. the folks at amd have the necessary solutions to satisfy the computing needs of 90% of the market. despite using more energy, these various solutions are priced accordingly, which, in turn, makes them competitive.
ati…er amd…er the radeon dept., will be fine. if anything, it’ll get sold off in the near future. the folks over at the cpu dept., on the other hand… they’re the ones who are concerned atm.
zen is much more important and integral to amd’s future than arctic islands. fingies crossed
Just keep it up. Where are the DX12/Vulkan benchmarks? And who is to say that anyone’s CPU IPC performance is going to matter that much for gaming once GPUs begin to accelerate more of the CPU-style gaming calculations for gaming workloads?
AMD is already ahead of Nvidia with fully in-hardware GPU thread scheduling/management and hardware asynchronous compute, while Nvidia’s software-based thread management leaves GPU execution resources idle. And in Nvidia’s case it’s not for lack of queued-up threads waiting to be dispatched; it’s because Nvidia handles GPU thread scheduling/dispatch in software, resulting in under-utilized GPU execution hardware. Only fully in-hardware thread scheduling/dispatch is going to be responsive enough to make proper use of a GPU’s execution resources and not leave them underutilized and idle. Nvidia’s GPU hardware cannot even interleave compute-thread and graphics-thread workloads on its GPUs to maximize execution resource utilization.
Once VR hardware and the new graphics APIs like DX12 and Vulkan become more prevalent, Nvidia is going to be at a disadvantage if it does not get fully hardware-based GPU thread dispatch/management into its GPU SKUs. More gaming compute will be moved off of the CPU and onto the GPU’s cores, in addition to the graphics workloads, to reduce latency to the smallest amounts possible for VR gaming and 4K+ workloads, and Nvidia will not be able to ignore having fully in-hardware asynchronous compute resources on its GPUs.
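For concreteness, “asynchronous compute” in DX12 terms means submitting work to an independent compute queue alongside the graphics queue; whether the two actually overlap on the GPU is exactly the scheduling question being argued here. A minimal sketch of the setup (Windows-only, link against d3d12.lib; error handling omitted):

```cpp
// Minimal sketch of DX12 async-compute setup: one graphics (DIRECT) queue and
// one compute-only queue on the same device. Whether the GPU overlaps work
// from the two queues depends on its scheduler -- the hardware-vs-software
// point being argued above. Link with d3d12.lib; error handling omitted.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics (+ compute/copy)

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only

    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Command lists submitted to computeQueue may execute concurrently with
    // rendering on gfxQueue -- if the GPU's scheduler supports it.
    return 0;
}
```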
Uhm… The main reason Nvidia stopped making chips for consoles is that they didn’t make any money even while sales were very successful, because Sony and Microsoft force hardware manufacturers to sell at close to, or below, manufacturing cost. It has nothing to do with who is better, but with who is the cheapest.
It gets to a point where you cause environmental problems and go into business with labor-camp-style working conditions, like you see with Apple’s hardware manufacturer in China.
That is why Nvidia walked away and AMD stepped in for those same slaver deals.
In a couple of years you will see AMD making the same choice, because at the end of the day they want to come out of it financially better, so they can do new things.
Inb4 the bubble bursts.
Nvidia could not offer a complete single-die package unless Sony and Microsoft were okay with using Nvidia’s ARM CPU cores. While it may not be that profitable, most games will be targeting an AMD GPU since they power all of the consoles. This is worth something.
Nvidia’s Denver cores are nowhere to be found in their new SKUs; Nvidia went back to using the ARM Holdings reference cores in the X1 tablets. The only other option for Nvidia is to get a POWER8 license from OpenPOWER, and that may allow Nvidia to compete with Intel’s x86 SKUs. POWER8 is a RISC-ISA-based core that supports full Linux-based OSes, and each POWER8 core has 8 instruction decoders feeding 14 execution pipelines (FP, INT, etc.); the core also supports 8 processor threads, so a 4-core POWER8 would have 32 total processor threads.
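The thread math there is simply cores × SMT ways. A trivial sketch of that arithmetic, with the comment’s figures as assumed inputs rather than measured values:

```cpp
// The thread arithmetic from the comment: logical CPUs = cores x SMT ways.
// The 4-core/SMT8 figures are the comment's example, not measured values.
#include <cstdio>
#include <thread>

int main() {
    const unsigned cores = 4, smt_ways = 8;  // hypothetical 4-core POWER8 at SMT8
    std::printf("%u cores x SMT%u = %u hardware threads\n",
                cores, smt_ways, cores * smt_ways);
    // For comparison, the logical CPU count the current machine exposes:
    std::printf("this machine exposes %u logical CPUs\n",
                std::thread::hardware_concurrency());
    return 0;
}
```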
There is no reason why Nvidia could not develop a 4-core POWER8-based desktop SoC, but that would take at least 3 or 4 years to develop and certify. Nvidia cannot get an x86 32-bit license from Intel, nor an x86 64-bit ISA license from AMD, and AMD is the one that holds the IP for the x86 64-bit ISA extensions. The console makers will certainly not want to switch away from the x86 software ecosystem, considering the amount of investment they have in their x86-based software stacks. It should also be noted that a POWER8 license may be a bit more expensive than an ARMv8-A ISA license, so Nvidia may have to try to revive Denver and make its custom ARMv8-A Denver cores wider superscalar, like the POWER8 microarchitecture.
It’s not so much the ISA that makes a CPU core powerful; it’s the underlying hardware engineered to run that ISA. Just look at the POWER8 microarchitecture as an example of a beefed-up RISC design that outperforms even a Xeon in the server room. There is no reason a CPU/SoC company could not, with the proper investment and time, create a custom ARMv8-A microarchitecture every bit as powerful as POWER8, and AMD’s K12 custom ARMv8-A microarchitecture will probably be up there with more desktop-SKU levels of performance, even above Apple’s A9X cores. The POWER8 cores are of a very wide superscalar design, able to decode 8 instructions per clock and support 8 processor threads per core, with those 14 execution/other pipelines providing the resources to keep those 8 threads per core in operation.
It should be noted that AMD’s K12 may have some surprises in store for the custom-ARMv8-A market, where licensees license only the ARMv8-A ISA from ARM Holdings and then engineer their own microarchitectures to run it. If K12 was developed in tandem with AMD’s Zen x86 32/64-bit microarchitecture, and if K12 shares the same basic CPU core design tenets as Zen (except engineered to run the ARMv8-A ISA), then K12 should have SMT capabilities and the same wider superscalar base design as Zen. One need only look at IBM’s RISC-based POWER8 processors to realize that a custom ARMv8-A microarchitecture has every bit as much potential to be as powerful as the POWER8’s design.
Nvidia is going to have to give its GPU cores fully in-hardware thread dispatch/scheduling if it wants to compete with AMD’s GPU cores in the console market. That asynchronous compute ability is going to be a must-have for any DX12/Vulkan-based console gaming. Nvidia will also have to increase its GPUs’ overall FP32/other compute resources and gain the in-hardware ability to accelerate non-graphics gaming compute alongside the graphics workloads, including the ability to interleave graphics and compute threads on its GPU cores without limits.
woot instead of 18 fps on JC3 they can get 19 now!
it’s almost 2016, get with the times… it’s all about ‘frame delivery’ now, more than pure fps. what you wanted to say was, ‘woot instead of 55ms frame times they can get 52 now!’ XD! i luv u sony… 720p never ran so smooth!
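For what it’s worth, the conversion behind the joke is frame time (ms) = 1000 / fps, and the numbers roughly check out:

```cpp
// The conversion behind the joke: frame time (ms) = 1000 / fps.
#include <cstdio>

int main() {
    const double fps_values[] = {18.0, 19.0};
    for (double fps : fps_values)
        std::printf("%.0f fps -> %.1f ms per frame\n", fps, 1000.0 / fps);
    return 0;  // 18 fps ~ 55.6 ms; 19 fps ~ 52.6 ms
}
```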