The Chinese tech site, Evolife, acquired a few benchmarks for the Tegra K1. We do not know exactly where they got the system from, but we know that it has 4 GB of RAM and 12 GB of storage. Of course, this is the version with four ARM Cortex-A15 cores (not the upcoming, 64-bit version based on Project Denver). On 3DMark Ice Storm Unlimited, the full system scored 25,737 points.
Image Credit: Evolife.cn
You might remember that our tests with an Intel Core i5-3317U (Ivy Bridge), back in September, achieved a score of 25,630 on 3DMark Ice Storm. Of course, that was using the built-in Intel HD 4000 graphics, not a discrete solution, but it is still a meaningful comparison for gaming. This makes sense, though: Intel HD 4000 (GT2) graphics has a theoretical performance of 332.8 GFLOPS, while the Tegra K1 is rated at 364.8 GFLOPS. Earlier, we said that the Tegra K1's theoretical performance is roughly on par with the GeForce 9600 GT, although it supports newer APIs.
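For reference, those peak figures fall straight out of shader count, clock, and two FLOPs (one fused multiply-add) per lane per cycle. Below is a quick sketch of that arithmetic, assuming the commonly cited specifications (192 CUDA cores at ~950 MHz for the K1; 16 EUs, each with two 4-wide FMA ALUs, at a 1.3 GHz turbo for the HD 4000) rather than anything from Evolife's report.

```python
# Back-of-the-envelope peak FP32 arithmetic behind the GFLOPS figures above.
# Unit counts and clocks are commonly cited specs, not numbers from Evolife's report.

def peak_gflops(units, flops_per_unit_per_clock, clock_ghz):
    """Theoretical peak = execution units x FLOPs per unit per cycle x clock (GHz)."""
    return units * flops_per_unit_per_clock * clock_ghz

# Tegra K1: 192 CUDA cores, each doing one FMA (2 FLOPs) per clock, at ~950 MHz.
print(peak_gflops(192, 2, 0.950))   # ~364.8 GFLOPS

# Intel HD 4000 (GT2): 16 EUs, two 4-wide FMA ALUs each (16 FLOPs/clock), 1.3 GHz turbo.
print(peak_gflops(16, 16, 1.300))   # ~332.8 GFLOPS
```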
Of course, Intel has released better solutions with Haswell. Benchmarks show that Iris Pro is able to play Battlefield 4 on High settings, at 720p, at about 30 FPS; the HD 4000 only gets about 12 FPS with the same configuration (and ~30 FPS on Low). This is not to compare Intel to NVIDIA's mobile part, but rather to compare the Tegra K1 to modern, mainstream laptops and desktops. It is getting fairly close, especially with the first wave of K1 tablets launching at MSRPs in the mid-$200 USD range in China.
As a final note…
There was a time when Tim Sweeney, CEO of Epic Games, said that the difference between high-end and low-end PCs "is something like 100x", making it next to impossible to scale a single game between the two performance tiers. He noted that, ten years earlier, that factor was more like "10x".
Now, the original GeForce Titan is about 12x faster than the Tegra K1, and they support the same feature set. In other words, it is easier to develop a game that spans the PC and a high-end tablet today than it was to develop a PC game that spanned high-end and low-end machines back in 2008. PC gaming is, once again, getting healthier.
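That "about 12x" figure lines up with the raw FLOPS, too. As a rough check, assuming the original GTX Titan's commonly cited ~4.5 TFLOPS peak (2688 CUDA cores at an ~837 MHz base clock), with neither number coming from this article:

```python
# Rough check of the "about 12x" gap using peak FP32 throughput.
# Titan specs (2688 CUDA cores, ~837 MHz) are commonly cited values, not from the article.
titan_gflops    = 2688 * 2 * 0.837   # ~4500 GFLOPS, original GeForce GTX Titan
tegra_k1_gflops = 364.8              # figure quoted earlier
print(titan_gflops / tegra_k1_gflops)   # ~12.3x
```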
Considering what was said in the article, this is pretty exciting. Although the numbers look promising, I'd still like a real-world benchmark (when you get one, you should totally Frame Rate it). This also potentially gives us a direct comparison between ARM and x86 in terms of gaming. How far does it need to go before it's "good enough", like AMD CPUs?
The problem with ARM is its focus on power usage rather than performance, so it will never come close to what an x86 CPU can do.
Yes, but they could be more than good enough for heavy gaming in the future. Think, in 4-5 years, of a motherboard with NVIDIA hardware on it, a future ARM processor made by NVIDIA (of course), and one or more slots for high-end NVIDIA graphics cards. Think SteamOS, or Android, as the main operating system (Windows RT?).
If you believe that you need a high-end, ultra-fast CPU for gaming, just look at the PS4 and Xbox One. Just think how much better a $70 FM2 processor does when running Mantle instead of DirectX, and how much closer it comes to a $400 high-end Intel CPU in gaming just by changing the API.
We are talking about power usage, as these are tablet/mobile SKUs being compared, and x86 is more power hungry. Those custom ARMv8 ISA-based CPUs each have their own custom implementation of the underlying hardware that implements the ARMv8 ISA; Apple's A7 and Nvidia's Denver are wide superscalar designs that can execute more IPC (instructions per clock), and the Apple A7 has cache and execution resources that put it closer to the Intel Core i series than the ARM reference designs, which execute less IPC. Expect to see discrete GPUs get their own companion on-die CPU cores, placed close to the GPU cores and connected by a wide on-die system bus/fabric to GDDR5 memory via a unified, GPU-style memory controller. These companion ARM (or other) CPU cores will occupy the lowest-latency location right next to the GPU cores, and if the system is complemented with a large on-die RAM, this arrangement will not be matched by any motherboard CPU sitting far from the discrete GPU, with a narrow bus and slower memory.
Nvidia is developing such a module for the POWER8 series in conjunction with IBM, and AMD will be developing ARM-based APUs for the server and PC/laptop/SOC markets that are HSA-aware and have a unified memory address space shared between the CPU, GPU, and other SOC components. Do not count out Nvidia going with a POWER8-based future SOC design, as the POWER8 ISA/IP is also up for license; Nvidia, Samsung, and GlobalFoundries are IBM technology partners, and Samsung's 14nm process, which Samsung is licensing to GlobalFoundries, was developed with help from IBM. POWER8 is up for ARM-style licensing, so expect Samsung and others, including AMD, not to look the other way, and to come up with implementations of POWER8, especially for server SKUs but also for the consumer market; those POWER8 chips are beasts that outperform Xeon.
Future discrete GPUs are going to become complete SOCs unto themselves, optimized for gaming workloads and hosting gaming OSs, much like the console systems, but on a card that plugs into the host motherboard's PCIe slot(s), making future gaming rigs more like computing clusters than the single CPU/motherboard systems we have now. A GPU with its own custom ARMv8 CPU core(s) will not need any resources from the motherboard other than some power (with an additional plug from the power supply if needed) and a communication channel to the storage devices. With gaming engines already multithread-aware, a lot of ARM cores on-die next to the GPU may be all that is needed to take advantage of the low-latency, high-bandwidth links that on-die CPUs have, compared to a CPU with a much slower link and several transfer protocols to traverse to get to the GPU.
Agreed… I didn't want to go that far into the weeds, so to speak, but you nailed it. I think that is why they are making NVLink, etc. to replace HyperTransport/InfiniBand/PCIe. A quick Google search shows NVLink is being used to tie together CPUs, GPUs, etc. We are almost there.
http://devblogs.nvidia.com/parallelforall/nvlink-pascal-stacked-memory-feeding-appetite-big-data/
“NVLink addresses this problem by providing a more energy-efficient, high-bandwidth path between the GPU and the CPU at data rates 5 to 12 times that of the current PCIe Gen3.”
WINTEL/DirectX is in trouble 😉 Well, their margins will be soon… LOL. CUDA plus stacked DRAM, etc. is finally about to pay dividends for the rest of us.
http://nvidianews.nvidia.com/News/NVIDIA-Launches-World-s-First-High-Speed-GPU-Interconnect-Helping-Pave-the-Way-to-Exascale-Computin-ad6.aspx
Correct again with your POWER8 comments. It appears IBM is already on board.
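For a rough sense of scale on the quoted "5 to 12 times that of the current PCIe Gen3": assuming a PCIe 3.0 x16 link's roughly 16 GB/s per direction (my figure, not from the post), that works out to something on the order of 80-200 GB/s, which is in the ballpark NVIDIA has described for NVLink.

```python
# Rough translation of "5 to 12 times PCIe Gen3" into bandwidth numbers,
# assuming PCIe 3.0 x16 at roughly 16 GB/s per direction (my assumption, not from the post).
pcie3_x16 = 16.0                              # GB/s, one direction
print(5 * pcie3_x16, "to", 12 * pcie3_x16)    # ~80 to ~192 GB/s
```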
You're incorrect. They are coming for your desktops soon (servers too), and at some point (after they take the low end) they'll put out a box that looks just like your PC tower, with a 500 W PSU and an NV discrete card. The SOC in it will be running at 4 GHz at 50-100 W.
I'm not quite sure why people think ARM has to be in a phone forever. There is nothing stopping them from using a slightly longer pipeline to push the speed to 4 GHz+, just like Intel/AMD. They are already about to hit 2.7 GHz (and one of the K1 devices showed a range up to 3 GHz), so they may not even have to do anything but up the voltage a bit to hit the 3.5-4 GHz range. I think they are just waiting for the ecosystem to flesh out a bit more (more apps, 4 GB+ devices to show up for 64-bit, etc.) and then they will push the FULL DESKTOP experience. First we'll get games and tablets hooked to a monitor/keyboard/mouse, but then comes the full box hooked up the same way for PC-like performance, using the same cards PCs use from AMD/NV now.
The difference will be the price. No $100 to MS for the OS, and no $350 for Intel's i7-4770K (or whatever). The SOC people can sell that amped-up ARM chip for $150 and laugh, even if they double the size of the chip (which is currently 80-120 mm^2 at $15-25). All the same parts as a PC would be used (HDD, DDR3/4, SSD, video card, etc.), except the CPU swapped for an SOC and Windows swapped out for a tri-boot of Android, Linux, and SteamOS (whatever combo, all free). You'd be giving users a cheap, powerful box with TONS of software across all three OSes (pick any OSes, but I can see multi-OS being a selling point). If AMD gets in on this, you could have a Windows license on there too for those that want it, if they figure out a way to have both an SOC and an APU in one, or just two sockets. I think AMD could make some cash here if they integrated an ARM CPU into their chips so a single chip swings both ways. LOL, is that a bisexual chip?
Either way, K1 is just the beginning. It will get better each revision now that the GPU comes from the desktop line. The first four Tegras were just to get us here. Welcome to the new world, Qcom; it isn't about your modem any more 😉
BTW, never say never 😉 In this case, you'll be proven wrong in as little as 1-3 years. Google "ARM desktop" and you'll get all the info you need on the fact that ARM and all its friends are coming for WINTEL.
How is this going to perform against an Apple A8 and its PowerVR graphics? Nvidia is putting all of its eggs in the Android basket and appears to be ignoring the K1's potential as an SOC for a low-cost, full-Linux-based graphics tablet! Maybe Nvidia wants to become a cloud services provider and force people into a closed ecosystem built around Android and Nvidia's streaming and other services. Nvidia is not a big enough player in the services market and would do well not to try to lock users into a closed model. Better that Nvidia should build a tablet that runs a full OS (Linux-based, not M$) and make a tablet SKU that can run Blender, GIMP, Inkscape, etc., creating some lower-cost competition for the high-priced graphics tablet market. I do hope that Apple would create a Pro tablet with better graphics that can run OS X and open-source graphics software. The A8, whatever PowerVR GPU it gets (the Wizard GPU with hardware ray tracing), and the iPad Air's level of screen resolution, with OS X, would be better for a pro graphics tablet. Nvidia just wants too much control, and that will lose them business, just like the other Tegras of the past.
I thought I'd run 3DMark Ice Storm Unlimited in BlueStacks.
Phenom II X6 @ 3.5 GHz, HD 6850 at default clocks.
Score 36850
Graphics Score 83818
Physics Score 12434
Graphics Test 1 375 FPS
Graphics Test 2 354.4 FPS
Physics Test 39.5 FPS
I might try a GT 620 tomorrow to see what the difference is between the two cards in BlueStacks (whether it is as big as when running the real thing on a PC).
Here are the scores for the GT 620:
Score 20903
Graphics Score 26090
Physics Score 12326
Graphics Test 1 135.5 FPS
Graphics Test 2 97.6 FPS
Physics Test 39.1 FPS
BlueStacks .exe version 0.8.9.3088
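As an aside on how those overall numbers relate to the subscores: 3DMark rolls the graphics and physics scores up with a weighted harmonic mean. The small sketch below reproduces both overall scores to within a few points, using weights of roughly 0.78/0.22 that I back-fitted from these two runs (they are not Futuremark's published constants).

```python
# Weighted harmonic mean relating the two subscores to the overall Ice Storm score.
# The 0.778 / 0.222 weights are back-fitted from the two runs above, not official values.
def overall(graphics, physics, w_gfx=0.778, w_phys=0.222):
    return (w_gfx + w_phys) / (w_gfx / graphics + w_phys / physics)

print(round(overall(83818, 12434)))   # ~36851 vs. the reported 36850 (HD 6850 run)
print(round(overall(26090, 12326)))   # ~20907 vs. the reported 20903 (GT 620 run)
```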
http://www.anandtech.com/show/8035/qualcomm-snapdragon-805-performance-preview
I researched a bit and found it's faster than the 805.
Oddly enough, Evolife.cn benchmarked the Snapdragon 801 at 19,962 while AnandTech benched the (supposedly faster) 805 at a lower 19,698. Evolife's "Tegra 4" rating of 16,494 is close to Anand's "SHIELD" rating of 16,238, so the two sites' benchmarks are probably comparable. Perhaps Qualcomm's 805 driver was a bit unoptimized when Anand did his tests?
Image is from this news post's source article.
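For what it's worth, the gaps between the two sites' runs are small either way; here is a quick percentage check on the four scores quoted above.

```python
# Percentage gaps between the two sites' runs, using the scores quoted in the comment above.
pairs = {
    "Snapdragon 801 (Evolife) vs. 805 (AnandTech)": (19962, 19698),
    "Tegra 4 (Evolife) vs. SHIELD (AnandTech)":     (16494, 16238),
}
for label, (a, b) in pairs.items():
    print(f"{label}: {100 * (a - b) / b:.1f}% apart")   # ~1.3% and ~1.6%
```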
They do say that in the AnandTech review (at the beginning of the GPU performance page):
“Although it’s our first GPU test, 3DMark doesn’t do much to show Adreno 420 in a good light. Qualcomm tells us that its 4xx drivers aren’t as optimized as they could be and thus 3DMark isn’t the best showcase of its talents.”