At the WinHEC developer conference in China today, Qualcomm and Microsoft announced a partnership to enable a full Windows 10 computing environment on systems based on the next generation of Snapdragon processors in the second half of 2017. The importance of this announcement can’t be overstated – it marks another attempt by Microsoft to enter the non-x86 market with mobile devices (think tablets and notebooks rather than smartphones).
If you remember Windows RT, the first attempt at Windows on ARM, its failure was a result of a split software base: some applications worked across both Windows RT and Windows 8 while most did not. It likely helped in the demise of that initiative that Windows 8 was very poorly received overall and that the overzealous box-style interface just wasn’t a hit with users. Major players like NVIDIA, Qualcomm, Samsung and many different OEMs were caught up in the mess, making it very unlikely that Microsoft would undertake this again without a surefire win.
Though details are light today, the success of this initiative depends on software compatibility. Microsoft and Qualcomm claim that Windows 10 on mobile devices will bring “the scale of the mobile ecosystem with an unparalleled pace of innovation to address consumers’ growing need to be always on and always connected.” Modems and high performance SoCs for mobile systems are Qualcomm’s realm, and form factors using those components as the base could be a solid source of innovation. The press release states as much, saying this partnership will “enable hardware makers to develop new and improved consumer products including handsets, tablets, PCs, head mounted displays, and more.”
Software is the silver bullet though.
New Windows 10 devices powered by Snapdragon support all aspects of Microsoft’s latest operating system, including Microsoft Office, the Microsoft Edge browser, Windows 10 gaming titles like Crysis 2 and World of Tanks, Windows Hello, and touchscreen features like Windows Pen. They also offer support for Universal Windows Platform (UWP) apps and Win32 apps through emulation, providing users with a wide selection of full-featured applications.
Based on what I have learned, the native software experience will come with UWP applications. UWP is Microsoft’s attempt to merge the software base for different platforms, and though it has been slow, adoption by developers and users has been increasing. If it’s true that everything being sold in the Microsoft app store today will be compatible with the ARM architecture processors in the Snapdragon SoC, then I think this leaves the door open for a wider adoption by an otherwise discerning audience.
Are you ready to hit that start button on your Snapdragon computer?
The emulation of ALL other Win32 (and x64) applications is critical as well. Being able to run the code you are used to running on an x86-based notebook gives users the flexibility to migrate and the ability to depend on a Qualcomm-based Windows 10 machine for work and for play. Emulation comes with a performance hit – but how much of one has yet to be seen or discussed. Rumors have been circulating recently that ARM compatibility was coming to Windows 10 with the Redstone 3 update, and that “late 2017” timing matches up perfectly with today’s announcement.
While notebooks and convertibles are likely on the table for this platform, it’s the new form factors that should excite you. Microsoft’s Terry Myerson says that bringing Windows 10 to life with “a range of thin, light, power-efficient and always-connected devices, powered by the Qualcomm Snapdragon platform, is the next step in delivering the innovations our customers love.” Cristiano Amon, president at Qualcomm Technologies, thinks they can provide “advanced mobile computing features, including Gigabit LTE connectivity, advanced multimedia support, machine learning and superior hardware security features, all while supporting thin, fan-less designs and long battery life.”
This partnership will lead to more than just new consumer products though, reaching into the enterprise markets with the Qualcomm Snapdragon platform addressing markets ranging from “mobility to cloud computing.”
Full press release after the break!
Qualcomm Ushers in New Era with Entrance into Windows Mobile Computing Devices
— New Microsoft Windows 10 Devices to be Supported by Next Generation Qualcomm Snapdragon Processors —
SHENZHEN, CHINA — December 8, 2016 — At Microsoft’s WinHEC hardware developer conference today, Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated (NASDAQ: QCOM), and Microsoft announced that they are collaborating to enable the full Windows 10 experience on mobile computing devices powered by next generation Qualcomm® Snapdragon™ processors, enabling a new class of anytime, anywhere connected devices. As traditional PC computing becomes more mobile, Qualcomm Technologies brings the scale of the mobile ecosystem with an unparalleled pace of innovation to address consumers’ growing need to be always on and always connected. With full Windows 10 compatibility, Snapdragon-based technology will enable hardware makers to develop new and improved consumer products including handsets, tablets, PCs, head mounted displays, and more.
New Windows 10 devices powered by Snapdragon supports all aspects of Microsoft’s latest operating system including Microsoft Office, Microsoft Edge browser, Windows 10 gaming titles like Crysis 2 and World of Tanks, Windows Hello, and touchscreen features like Windows Pen. It also offers support for Universal Windows Platform (UWP) apps and Win32 apps through emulation, providing users with a wide selection of full featured applications.
“We are excited to bring Windows 10 to the ARM ecosystem with our partner, Qualcomm Technologies,” said Terry Myerson, executive vice president of the Windows and Devices Group at Microsoft. “We continue to look for ways to empower our customers to create wherever they are. Bringing Windows 10 to life with a range of thin, light, power-efficient and always-connected devices, powered by the Qualcomm Snapdragon platform, is the next step in delivering the innovations our customers love – touch, pen, Windows Hello, and more – anytime, anywhere.”
“Qualcomm Snapdragon processors offer one of the world’s most advanced mobile computing features, including Gigabit LTE connectivity, advanced multimedia support, machine learning and superior hardware security features, all while supporting thin, fan-less designs and long battery life,” said Cristiano Amon, executive vice president, Qualcomm Technologies, Inc., and president, QCT. “With full compatibility with the Windows 10 ecosystem, the Qualcomm Snapdragon platform is expected to support mobility to cloud computing and redefine how people will use their compute devices.”
The first devices running the full Windows 10 experience based on Snapdragon processors are expected to be commercially available in the second half of 2017.
About Qualcomm
Qualcomm’s technologies powered the smartphone revolution and connected billions of people. We pioneered 3G and 4G – and now, we are leading the way to 5G and a new era of intelligent, connected devices. Our products are revolutionizing industries including automotive, computing, IoT and healthcare, and are allowing millions of devices to connect with each other in ways never before imagined. Qualcomm Incorporated includes our licensing business, QTL, and the vast majority of our patent portfolio. Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, all of our engineering, research and development functions, and all of our products and services businesses, including our semiconductor business, QCT, and our mobile, automotive, computing, IoT and healthcare businesses.
If it only supports UWP (TIFKAM renamed again) apps then it’s not going to work! First it was named Metro, then renamed Modern, then UWP. Madness!
And “Win32 apps through emulation” should tell any Win32 application user that there is no native support for Win32!
Yeah, there can't ever be native support for Win32 applications on ARM. At the very least, they would need to be either recompiled against an equivalent API (assuming no chunks of assembly, etc.), or they must be emulated.
There can be native ARM support, but that would require more than a kernel rewrite, as they would have to port the entire Windows API (Win32 and other) layers that sit atop the kernel over to ARM. And that includes the task of re-optimizing the code for the ARMv8-A ISA hardware in Qualcomm’s custom cores, not to mention developing the compilers that can optimize Windows high-level source code into well-tuned native ARMv8-A code.
It’s easy to cross-compile unoptimized code, but optimizing for the implementation differences of any custom design engineered to run the ARMv8-A ISA is a difficult matter that takes time. Optimizing for differences in the underlying x86 hardware was relatively easy, as there were essentially only two custom x86 implementations to deal with, Intel’s and AMD’s, with minor variations among the various CPU models from each. Unoptimized code compiled for a CPU’s ISA will run on any CPU that supports that ISA, but not very well for OS/API code that needs to be fully optimized, lest the OS and essential APIs run very poorly.
There are various differences between, say, AMD’s x86 micro-architecture below the assembly op-code level (the microcode level) and Intel’s, and these must be optimized for by making use of the x86 CPU makers’ respective optimization manuals. Ditto for the various custom ARM cores and the code optimization that needs to be done for OSes and essential APIs. Optimizing for those underlying differences, so the assembly code does not cause CPU cache thrashing or excessive execution-pipeline stalls, is a science unto itself for system software engineers.
I can see why Google went with a VM-style software abstraction layer for the Android OS and let the various phone/tablet makers deal with any of that optimization via their ARM CPU hardware suppliers’ optimization manuals, along with their respective native implementations of the Android Runtime. Android app developers only target the Android Runtime, though native applications do run faster than apps running on a hardware abstraction layer like Android’s.
The ARM hardware/software and Linux kernel/API ecosystem has had decades to deal with OS/API optimization for the various ARM ISAs and the custom cores that run them. But M$ has mostly been dealing with x86-based systems for some time now. So M$ will need years, or will have to rely on x86-to-ARM abstraction layers for Qualcomm’s systems, and that’s just RT all over again!
Emulating a CISC ISA with a RISC ISA is going to add a whole other level of difficulty to the process, and the inefficiencies are going to be multiplied!
With all due respect, optimization happens mostly at the compiler level, with a few exceptions for the kernel, which must use some magic boilerplate assembly to start from the equivalent of the BIOS. (This will probably use UEFI.)
And Microsoft already has Win32 “ported” to ARM; it was working in Windows RT. Win32 is an API for Windows and has nothing to do with Intel or AMD x86/64 instructions. (It was also the API used for the 64-bit Alpha and the Itanium, among others.)
The news here is the emulation of x86/64 for Arm, which will be slower than on real x86/64 hardware. Unless Microsoft plans to statically recompile the x86/64 to Arm using clang’s ability to turn x86/64 code into LLVM byte-code, then optimize that for Arm, which could be interesting.
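The static-recompilation idea can be sketched in miniature: lift guest instructions into an intermediate form, optimize there, then emit target ops once, ahead of time. Everything below (the mnemonics, the toy IR) is invented for illustration; it is a sketch of the general shape, not of any real toolchain:

```python
# Toy static-recompilation pipeline: lift source instructions into an
# IR, run an optimization pass over the IR, then emit target ops.
# All mnemonics here are made up for illustration only.

def lift(program):
    """Decode textual source instructions into (op, args) IR tuples."""
    return [(line.split()[0], line.split()[1:]) for line in program]

def optimize(ir):
    """Drop moves of a register to itself, standing in for the real
    optimization passes that working at an IR level makes possible."""
    return [(op, args) for op, args in ir
            if not (op == "MOV" and args[0] == args[1])]

def emit(ir):
    """Expand each IR op into one or more load/store-style target ops."""
    out = []
    for op, args in ir:
        if op == "ADDM":  # CISC-style read-modify-write on memory
            mem, reg = args
            out += [f"LDR tmp, {mem}",
                    f"ADD tmp, tmp, {reg}",
                    f"STR tmp, {mem}"]
        else:             # simple ops translate one-for-one
            out.append(f"{op} {' '.join(args)}")
    return out

source = ["MOV r0 r0", "ADDM [x] r1"]  # redundant move + memory add
target = emit(optimize(lift(source)))
```

The translation cost here is paid once, ahead of time, which is why a static approach could avoid much of the per-instruction overhead of a live emulator.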
Yes, optimization happens mostly at the compiler level! But the compiler developers have to get the optimization manuals from the specific CPU maker, and optimizing compilers take time to develop. So if the CPU core is a custom core engineered to run the ARMv8-A ISA, then that custom core’s maker will have to produce the optimization manual for the compiler developers to use.
There is still some vital OS context-switching code that is tweaked and optimized by hand, and the hardest part of compiler development is not the parsing, it is the optimization: making sure the optimized code does not produce CPU cache thrashing and the other ill effects that occur with unoptimized code.
Do you know how many custom micro-architectures there are out there engineered to run the ARMv8-A ISA? Each one requires its maker to produce an optimization manual so compilers can order the op-codes in a way that will not induce cache thrashing or other ill effects on that company’s specific brand of custom micro-architecture. The top-tier ARM Holdings architectural licensees license only the ARMv8-A ISA from ARM Holdings and then set about engineering their own custom underlying micro-architectures to run it. So code optimization has to be done differently for each custom ARMv8-A CPU core!
That CISC-to-RISC emulation layer is going to have to be tweaked for all the custom ARM micro-architectures, for the ARM Holdings reference micro-architectures, and for the variants of any custom or reference CPU cores. And still it’s going to run like blackstrap molasses at the North Pole!
To be clear, Microsoft has only announced support for one version of ARM, specifically the latest Qualcomm chip.
BTW the kernel is what does the context switching.
Yes! That would be implied by the part of the post you replied to that states “vital OS context switching code,” and that kernel code gets called upon very, very often!
There are also other code calls that the OS does not directly dispatch, but that have jump/branch addresses registered as delegated event handlers to assist the OS/CPU with some time-sensitive tasks! That code needs some hand optimization also, ditto for whatever controllers need access to the bus-mastering subsystem to do things on the bus(es) when the CPU and other systems are not using them.
Oh wow, have things become complicated since the Z80s running in those Trash-80s, starting before the 1980s, supplanted in the 1980s by the IBM PC, which morphed into the PC market we know today!
And don’t forget the PET computer that ran on Bender’s Brain!
RISC and CISC aren’t really meaningful terms anymore. I doubt modern ARM fits that well with RISC principles; it has quite a large number of instructions and not all of them are that simple, and a lot of SIMD and other instruction extensions have been added as standard. Modern processors are not clearly RISC or CISC. The AMD64 instruction set could have been cleaned up a lot more, but many of those instructions go mostly unused since modern compilers will not issue them, and it doesn’t hurt much to have them around since they will just be implemented in microcode. Complex instructions are actually a good thing, as long as they are not encoded in too complex a manner: a single instruction that does a lot can be looked at as a form of instruction compression. Modern processors are severely limited by memory access, and a single instruction that does complex things takes fewer memory accesses and reduces space used in instruction and shared caches. The added encryption instructions probably take the place of a large number of RISC-like instructions. A complex instruction just generates a stream of micro-ops for the OOO back-end to execute, which makes it look very similar to a RISC-like design.
I wouldn’t be surprised if AMD’s Zen (AMD64) and K12 (ARM) share a lot of the same components, just different instruction decoders. Would one of them be RISC and the other CISC? Processor design has moved so far from the original implementations with these instruction sets, that it almost looks like emulation, even with native code. Anyway, instructions are cheap; memory accesses are expensive. It may be that emulation can be done at very high speed these days. If you can throw some extra cache at the problem, it might run very fast. For something like a gpu limited game, it might be fast enough. Although, with how small cpu cores are getting, I have wondered if it would be possible to include decoders for multiple instruction sets, or just include independent cores for both. The problem is, you can’t just license an AMD64 core and place it on your chip the way you can with ARM.
Sure, a CISC-ISA-based CPU has hardware that breaks assembly language instructions down into micro-ops for OOO execution and such, but RISC processors have “micro-ops” for OOO too; it’s just that on RISC-ISA-based CPUs most assembly instructions break down into a single micro-op, with very few instructions requiring more than one.
But emulating a CISC instruction on a RISC CPU requires implementing it as more than a single RISC assembly language instruction, and that translation requires more software to achieve. Software-based translation layers are not as efficient or quick as hardware-based ones. So that x86-based Win32 API code, with its legacy bloat, is not going to run very efficiently on a software translation layer over a RISC ISA.
AMD’s K12 custom ARMv8-A CPU may well have SMT capabilities and the same style of cache, caching hardware algorithms, instruction buffering, etc. as its x86 Zen development partner! But K12’s RISC-based core is going to require fewer transistors to implement in hardware than any CISC-based core. And any emulation of a CISC ISA on a RISC-based CPU core is going to have even more inefficiencies than if it were a CISC-based CPU emulating a different CISC ISA with a different instruction set. Software emulation of an ISA, even a virtual ISA, is very inefficient to begin with compared to native hardware execution.
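To picture the point about expansion: a single memory-operand instruction on a CISC-style machine becomes a load/modify/store sequence on a load/store architecture. These mnemonics are schematic stand-ins, not real x86 or ARMv8-A encodings:

```python
# Hypothetical expansions of CISC-style instructions into
# load/store-architecture sequences; purely illustrative.
expansion = {
    "INC [addr]":     ["LDR t0, [addr]", "ADD t0, t0, #1", "STR t0, [addr]"],
    "ADD [addr], r1": ["LDR t0, [addr]", "ADD t0, t0, r1", "STR t0, [addr]"],
    "PUSH r2":        ["SUB sp, sp, #8", "STR r2, [sp]"],
    "MOV r1, r2":     ["MOV r1, r2"],  # register-to-register maps 1:1
}

# Average guest-to-host instruction ratio for this little sample:
ratio = sum(len(seq) for seq in expansion.values()) / len(expansion)
```

A software translator pays this expansion on top of its own dispatch overhead, which is the multiplication of inefficiencies being argued above.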
This post doesn’t change my opinion that RISC and CISC are obsolete terms. This is a link to the ARMv8 reference manual that a quick search turned up. It is over 5000 pages long.
https://people-mozilla.org/~sstangl/arm/AArch64-Reference-Manual.pdf
Does this look like “Reduced Instruction Set Computing”? Modern instruction sets look a lot more like CISC than what RISC once was. They have huge numbers of, often very specialized, instructions. The only things kept from RISC ideas are that instructions are generally fixed width, usually one processor word. You don’t want variable-length instructions since they are difficult to decode and more difficult to pipeline. They end up getting broken down into a series of micro-ops for pipelining. Also, complex addressing modes are unnecessary and a performance issue. With AMD64, a lot of the old instructions are still present, but they would generally not be issued by a modern compiler. Many of the instructions that a modern compiler will use, even in AMD64, probably break down to one, or at least a small number, of micro-ops. Even if lower-performance instructions are present in the ISA, they are going to be infrequently used, and therefore not a performance issue.
For making a high-performance emulator, they would make a compact translation core just for the most common instructions issued by modern compilers; more complex and infrequently used instructions would be handled differently. Such an emulator could probably take up a relatively small amount of cache, and a few MB of cache is actually quite large when you look at such instruction translation. Instructions are cheap; you can execute huge numbers of extra instructions with little performance impact as long as those extra instructions don’t hit memory. Most applications do not come anywhere close to fully utilizing a modern CPU core. The max IPC might be 4 or 6 before we even look at what pipelining provides, yet many applications don’t even achieve an IPC of 1 due to memory access latency. I ran into this when testing bit vectors. For a bit vector, you can use a whole byte for each element on a byte-addressable machine, or you can do special stuff to pack the bits densely. Since machines are not bit-addressable, a lot of extra instructions need to be executed. On modern machines with fast bit-shift instructions, the performance difference is non-existent even though the bit-addressing code is executing a lot more instructions; it may be offset by reduced memory accesses, though. The architecture of modern machines could make software emulation a very reasonable thing to do. With modern tech, it wouldn’t be difficult to add AMD64 decode hardware, but that would probably require an x86 license. I am not sure how that works when the line between software and hardware is blurred.
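The bit-vector trade-off described above can be sketched directly: one byte per element versus packing eight elements per byte, where the packed form executes extra shift and mask instructions per access but touches an eighth of the memory:

```python
N = 1000  # number of boolean elements

# One byte per element: trivial addressing, 8x the memory.
byte_vec = bytearray(N)

# Packed: extra shift/mask instructions per access, 1/8 the memory.
packed = bytearray((N + 7) // 8)

def set_bit(i):
    packed[i >> 3] |= 1 << (i & 7)

def get_bit(i):
    return (packed[i >> 3] >> (i & 7)) & 1 == 1

for i in (3, 64, 999):   # set the same elements both ways
    byte_vec[i] = 1
    set_bit(i)

# Both representations agree on every element:
same = all((byte_vec[i] == 1) == get_bit(i) for i in range(N))
```

On a modern core the extra shifts are nearly free, while the smaller footprint can pay off in the cache, which is exactly the point being made.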
You are conflating Win32, which is an API that has been available on many CPU platforms, such as ARM, Alpha and Itanium, with x86/64 Intel CPUs.
Not sure who is confusing what (not original poster); Win32 does refer to an API, not an ISA. The Win32 API is important because if they use emulation, that doesn’t mean that the entire application would need to execute under emulation. They could easily compile all of the shared libraries as native code, and just use emulation for the core application. That method could be very fast if the application uses a lot of MS shared libraries for the heavy lifting. That is similar to how a lot of the higher level languages work, like perl. When you write a perl program, you are mostly stringing together a lot of calls to native code. The functions that you call are implemented in C or C++. Also, a lot of modules are actually implemented in C or C++ also. Microsoft already has all of their API ported to ARM, so making x86 applications call native libraries on ARM should be relatively easy.
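That hybrid approach can be sketched with a toy interpreter (all names here are invented): guest “instructions” run under emulation, but any call that matches a known API name is thunked straight to a native implementation:

```python
# Native implementations of "ported" library routines; the guest
# program never needs these emulated. Names are made up.
native_api = {
    "toupper": lambda s: s.upper(),
    "strlen":  lambda s: len(s),
}

def run_emulated(program, value):
    """Interpret guest CALL instructions, thunking known APIs to
    native code instead of emulating their bodies."""
    for op, name in program:
        if op != "CALL":
            raise ValueError(f"unknown op {op!r}")
        if name in native_api:
            value = native_api[name](value)  # native speed, no emulation
        else:
            raise NotImplementedError(f"would need to emulate {name!r}")
    return value

result = run_emulated([("CALL", "toupper"), ("CALL", "strlen")], "hello")
```

The more work an application pushes into library calls, the less of it actually executes under emulation, which is why this approach can be fast.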
Even that would require emulation, if only to handle endianness: ARM is bi-endian while x86 is little-endian only, so any optimizations the compiler would do by default using big-endian tricks would need to be turned off for the x86 emulation.
There will be many issues like this to solve.
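For what it’s worth, byte order is easy to make concrete. Windows on ARM runs the CPU little-endian, the same as x86, so in practice an emulator mostly has to avoid introducing swaps; Python’s struct module shows the difference between the two layouts and the one-swap fix-up:

```python
import struct

value = 0x01020304

little = struct.pack("<I", value)  # x86-style memory layout
big    = struct.pack(">I", value)  # big-endian layout

# Same integer, byte order reversed in memory:
mirrored = (big == little[::-1])

# Repairing a value read with the wrong byte order is one swap:
swapped = struct.unpack(">I", little)[0]
```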
I doubt there are very many endian issues. Endianness can be fixed quite easily.
Ported to unoptimized ARMv8-A ISA code is not the same as ported and optimized for a specific make and model of custom core, with its own idiosyncratic ways of execution in a custom micro-architecture engineered to run the ARMv8-A (or other) ISA! Those idiosyncrasies have to be accounted for via the CPU core maker’s supplied optimization manuals.
Code optimized for Intel’s brand of x86 micro-architecture is not going to run in an optimized state on AMD’s brand of x86 micro-architecture! Ditto for any of the many custom micro-architectures engineered to run the ARMv8-A ISA, plus ARM Holdings’ own reference cores implementing that same ISA.
They all have different underlying micro-architectures. Code optimization is done with intimate knowledge of a CPU core’s underlying hardware implementation of an ISA, even across different companies’ CPUs that run the same ISA, and the optimization manuals have to come from that exact CPU’s maker. Optimizing compilers take time to develop, and some OS kernel code is still hand-tweaked and hand-optimized to this day; that is a lot of work for each and every custom CPU micro-architecture that runs any ISA!
Creating an emulated x86 processor in ARM ISA code to directly execute x86-based API code is really going to be slow! So yes, native libraries would be faster. But those native libraries are going to have to be optimized and re-factored for each brand of custom ARM core, and for ARM Holdings’ reference cores as well. So those libraries are either going to be ported over unoptimized for most of the cores they run on, or M$ is going to have its hands full for years getting its OS optimized for every custom ARM core on the market, ARM Holdings reference designs included.
I’m sure that Qualcomm and M$ spent many millions of programmer hours getting that Win32 API work done and somewhat optimized for a single maker’s brand of custom ARM core, but there will still be some software translation layers in play even on the Qualcomm core, and M$/Qualcomm will be forever tweaking things. M$ still has to repeat this same process many times over for all the other custom ARM cores, optimizing and re-factoring code libraries for each. And before those x86 code libraries could be re-factored and cross-compiled, the optimizing compilers from Qualcomm needed to be in place, and still there are translation layers translating from the x86 ISA to the ARMv8-A ISA.
“Crysis 2”????
Is this a joke? I’m guessing a Snapdragon 820 could manage about 1.5 FPS in Crysis 2.
I’m otherwise super excited! Hopefully my 950XL, through the insider program, will be used as a testbed for this next year.
Yeah… we'll see what they mean by that. For all we know, it could be a non-mobile SoC.
As history shows; any M$ mobile device = abandonware…
Haha indeed, after they “coming soon” it in press releases for a year, release it quarter-baked, promise fixes coming soon, coming soon, coming soon, and then slink away after a year.
“Microsoft gives Qualcomm’s Snapdragon WARTs
It is called Windows 10 now but it will fail for the same reasons”
http://semiaccurate.com/2016/12/07/microsoft-gives-qualcomms-snapdragon-warts/
Sorry, but that article seems rather stupid.
“Fail for the same reasons”?
How can that happen when MS/ARM has fixed the two major issues Windows RT had?
– It can now run Win32-apps
– The performance has been improved both for software and hardware. (The performance on Surface 1 was terrible, but on the Surface 2 with the Tegra 4 it was actually quite good)
“It can now run Win32-apps,” maybe, but in an unoptimized state, and going from CISC to RISC at an emulation/hardware-abstraction level is going to really bog things down and require loads more power! We are not talking about an Android-VM-style hardware abstraction layer running on a lightweight Linux kernel! We are talking about all the Windows abstraction-layer cruft that has to be included to make the Win32 application ecosystem work, and all the bloat that comes along with any M$ legacy code ecosystem!
That S/A article is very apropos even if it involves some of Charlie Demerjian’s famous acerbic wit! Oh the poor little monopolists in Redmond, such thin skin!
This is what Intel gets for sitting on their laurels and stubbornly refusing to deliver performance increases for 5 years now.
ARM has caught up to them and will surpass them in the next 1-3 years.
The first death knell was the release of the iPad Pro whose A9X chip has similar performance to dual-core Core U chips.
The second death knell is this.
In 3 years at the most, high end ARM chips will have surpassed x86 quad-core chips (probably even desktop ones) in performance.
Odds are iPad Pro 2 will have performance similar to quad-core Intel chips.
And if Win32-on-ARM emulation is completely seamless to the users, then Intel is royally screwed.
Oh and Google is also in trouble, though not as much as Intel. They have been dicking around with Chrome OS on clamshell form factors, refusing to bring official support for Android on laptop and desktop form factors. This is what Google gets as well for stubbornly trying to data-mine laptop users to the absolute maximum as if data-mining them on Android would be any less thorough.
Now that the Intel half of the Wintel monopoly is a dead man walking, it’s time to deliver the killing blow to the Windows half. And SteamOS has been… ~gathering steam~ non-stop since 2013.
You really think Intel has “stubbornly” refused to deliver performance increases? The lack of large performance gains is due to process tech hitting a wall. Since about 2007, process shrinks stopped delivering the massive speed increases and power reductions that were the norm for a long time. This is also why my old Core 2 Duo laptop from about 2007 was still perfectly usable right up until the GPU died recently. CPUs are also still heat-limited: even a so-called fat core like Skylake is only about 10 to 12 sq. mm, and that includes the L2 cache; that is only 3 to 4 mm on a side. While the total power consumption isn’t that high anymore, the power (and thermal) density is very high.
What happens when a technology hits a basic physical limitation? The front runner hits the wall first, and then everyone else catches up. A large part of Intel’s lead has always been their process tech. CPUs are just not going to be that important going forward anyway since most applications are either not performance sensitive in the first place or are limited by other components, like the gpu. For most people, it will be more important what other hardware is integrated. For mobile, it is important to have hardware acceleration for whatever video codec you are playing. Even if it can be done in software, it will kill the battery to do so. CPUs will just be a commodity item, where it doesn’t matter much which one you get, except for software compatibility and what is integrated with the CPU. The integrated gpu and video decode hardware is more important than the cpu core now.
No, Intel’s mostly-MBA management and BOD nefariously engineered the Ultrabook market to push that Apple-style thin-and-light, form-over-functionality obsession onto the entire non-Apple laptop market. With Ultrabooks(TM), Intel got to push onto the PC market those god-awful dual-core U-series i7 SKUs that cost as much as a quad-core i7 SKU on a core-for-core basis, ditto for the U-series i5s. Intel solved the Moore’s-law-running-out threat to their high margins by engineering the Ultrabook SoC SKUs: more dual-core U-series i7s per wafer, with cores that could be priced much higher, Apple-style, for those mad Apple-style profits on SoC SKUs with little real computational power compared to Intel’s previous Core i iterations!
Those Ultrabook laptop SKUs even cost more than laptops that came with a previous generation’s quad-core i7 and a discrete mobile GPU! Way to overcharge (Apple-style) for less power, Intel and your OEM partners!
The non-upgradable ultrabook crap exists because people will buy them, and they are much more profitable for the laptop maker than an upgradable system. If they put everything on one board, all of the components can be surface-mounted by machine, with very little human labor. A customizable laptop takes a human a while to assemble, placing RAM, storage, and whatever else is customizable. With the new MacBook Pros, almost everything is mounted on the board; I doubt human hands even touch it. Intel didn’t create this market. Consumers did, by buying such non-configurable, non-upgradable devices. If you want a configurable laptop, you just about need to go with a mobile workstation like a Dell model, although you get stuck with Quadro graphics that you probably don’t need or want. The other choice seems to be the boutique builders. I have been looking at a company called Mythlogic; they configure Clevo base models. The only issue is that I use a Dell UltraSharp desktop display, and I fear a gaming display will not look as good. I can get a Dell UltraSharp on their mobile workstation, but then I am stuck with the Nvidia Quadro card that probably doesn’t perform as well as current gaming cards and costs a lot more. If anyone knows of any other options, I wouldn’t mind hearing about them. I don’t know if you can get an UltraSharp screen on any other Dell models.
Yep, this is Snapdragon’s moment to enter the PC desktop market. When weak Atom CPUs run Win10 mostly fine and ARM CPUs are beating Atoms, there’s just no reason not to enter the desktop market too. The competition will hopefully make things cheaper for us consumers, and with Windows as the common OS, we won’t need to care about software compatibility issues with our existing software across different hardware.
Intel must be kicking themselves for being slow to port Android to x86. They need the Android ecosystem going for their hardware to keep market share if low-end Windows devices move toward Snapdragon.
PC Ditto 2.0 (x86 emulator for Motorola 68000)? Emulation is good for stuff which is obsolete. It will not make emulated stuff obsolete.
At best it will slow down development of professional native ARM applications. At worst, due to an initial subpar experience, people will be tricked into thinking that ARM is no good.
lol I was like, But can it run Crysis? before I even got to that part….