According to the information that DigiTimes was able to garner, only Asustek Computer and Gigabyte Technology will see their motherboard shipments over the first half of this year either remain the same as last year or perhaps experience small growth. As neither is expected to break 20 million units this is not great news, but it is certainly better than the news ASRock, MSI, ECS and Biostar are expecting. The lack of competition in the CPU/APU market is spilling over to motherboard manufacturers, as customers are not immediately upgrading to the new platforms but are instead choosing premedical upgrades. The next few quarters will be interesting as we see what strategies motherboard manufacturers adopt to retain sales. New boards based on the Intel 200-series chipset are unlikely to be a factor until the last quarter of this year, at the earliest.
"The sources expect Asustek and Gigabyte's motherboard shipments in 2016 to stay at the same level in 2015 and neither of them is able to break 20 million units. Since overall demand continues shrinking, the top-2 players are likely to continue taking market share from lower-tier players."
Here is some more Tech News from around the web:
- Phase-change chalcogenides for neuronal computing @ Nanotechweb
- Great, IBM has had a PCM breakthrough. Who exactly is going to manufacture? @ The Register
- Got a Fitbit? Thought you were achieving your goals? Better read this @ The Register
- Updategate: Microsoft uses technique to lure you to Windows 10 from the 'X' button @ The Inquirer
- Researchers Set World Record Wireless Data Transmission Rate of 6 GB/Sec Over 37 KM @ Slashdot
- Toshiba adds 8TB model to high-end X300 hard drive range @ The Inquirer
- Amazon Stops Giving Refunds When an Item's Price Drops After You Purchase It @ Slashdot
Suspect auto-defect has
Suspect auto-defect has helpfully struck again with an incorrect “correction” of a word. Curious what the original word was though.
Well, if someone doesn’t
Well, if someone doesn’t upgrade the motherboard or CPU, then he possibly could have meant “peripheral”, but I don’t see how he could have misspelled it that badly for the auto-correct to say premdical, lol.
Yes it’s aggressive
Yes, it’s aggressive auto-correct in the word processor’s spell-correcting algorithms! A one-button solution needs to be at the top menu/toolbar level to turn it all off and instead just highlight the misspelled words and let the user decide from a list of possible choices! LibreOffice’s auto-spelling algorithms are not up to release level yet, so turning them completely off needs a dedicated ON/OFF button visible at the top level of the interface at all times!
The AI pattern-matching algorithms can be accelerated on the GPU; government genome and other AI matching workloads have been shifted over to GPU acceleration with great success! So maybe there is some public-domain (i.e., government-funded) software that the word processor makers/maintainers could fold into their respective projects’ code bases.
On top of that, LibreOffice should stop the feature creep and focus on the spell checking! It’s “Multi-universe,” and not “Mulch-universe” or “Mufti-universe” as possible spelling choices, LibreOffice folks. Devote more time to your spelling dictionaries, in all languages! Where is that “Multi-” prefix that has been missing for the last too many LibreOffice updates?
I would hope that spell
I would hope that spell checking would not be so processor intensive that you need to start looking at GPU acceleration.
For context sensitive spell
For context-sensitive spell checking, GPU acceleration would help even on a bog-standard mobile integrated GPU. My laptop has Intel’s SoC-integrated GPU (dog food) alongside a discrete AMD mobile GPU, and when I’m responding to a post I would welcome some contextual spelling acceleration on the GPU to suggest the proper word for me, rather than having to bring up the thesaurus/dictionary! Pattern matching/contextual spelling that can look at the words around a misspelled word does help in suggesting the proper choice.
My dyslexia is so bad that I need all the help I can get. I constantly confuse my word processing software with my terrible spelling, so more contextual acceleration on the GPU would be a big help. If someone who spells correctly 99.99% of the time does not need the extra GPU assistance, then make the GPU contextual-spelling acceleration switchable on/off, but some folks need the extra help.
If I’m using an office suite, then I want it to accelerate spreadsheet calculations on the GPU via OpenCL/other APIs, as well as contextual spelling! If I’m typing and not gaming, all that GPU power is just sitting there, unavailable for OpenCL/HSA/other GPGPU-style acceleration, when all it takes is some software that can make use of those APIs; that is a waste of processing potential, especially for folks with certain disabilities!
Even LibreOffice can accelerate spreadsheet calculations on the GPU, so contextual spell checking accelerated on the GPU is not out of the realm of possibility, and that contextual spell checking, as well as grammar-checking AI, takes a lot of processing power.
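The kind of contextual suggestion described here can be sketched without any GPU at all. This is a toy illustration, not LibreOffice’s (or any shipping checker’s) actual algorithm: candidates one edit away from the typo are filtered against a dictionary, then ranked by a bigram count of how often each candidate appears before the following word. The dictionary and counts below are invented for the example.

```python
# Toy contextual spell-suggestion sketch (illustrative assumptions only).
from collections import Counter

LETTERS = "abcdefghijklmnopqrstuvwxyz"
DICTIONARY = {"multi", "mufti", "mulch", "universe", "peripheral"}
# Invented bigram counts: (word, following word) -> frequency.
BIGRAMS = Counter({("multi", "universe"): 50, ("mufti", "universe"): 1})

def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + transposes + replaces + inserts)

def suggest(word, next_word=None):
    """Rank in-dictionary candidates, preferring ones that fit the context."""
    candidates = [w for w in edits1(word) if w in DICTIONARY]
    if next_word is None:
        return sorted(candidates)
    return sorted(candidates, key=lambda w: -BIGRAMS[(w, next_word)])
```

With context, `suggest("muti", "universe")` puts “multi” ahead of “mufti” because the bigram says “multi universe” is far more common; without context the two candidates are indistinguishable, which is exactly the failure mode the commenters are complaining about.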
I can understand accelerating
I can understand accelerating spreadsheets on the GPU, since they can be very large in some cases and usually involve parallel data manipulation, which is well suited to GPU acceleration. For small spreadsheets, though, you will do more processing just to display the spreadsheet window graphics than to actually compute values in the spreadsheet.
If you are just typing in a document, your CPU is mostly sitting idle. Even with spell checking, human typing is so slow by computer standards that it will not take much processing even with aggressive spell checking. I would only see 1 to 3 percent CPU utilization on an i7-920 under Linux unless there is a bunch of background stuff running, and most of that ~3% would be background OS tasks rather than my typing. With how much performance is available from a modern CPU (billions of scalar instructions per second, and much more throughput if SIMD extensions are considered), I doubt GPU acceleration for spell checking is necessary at all.
My iPhone does much more aggressive spell checking and automatic correction than any desktop application I have used. I doubt that these spell checking applications are performance limited. More aggressive spell checkers are probably just not frequently implemented on desktop applications since it is presumed that you have an actual physical keyboard where typos are less common. On my tiny iPhone 5, I type things incorrectly probably more than 95% of the time due to large fingers on a tiny soft keyboard, so everyone needs aggressive spell checkers on such devices.
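A rough back-of-envelope supports the point that spell checking is cheap at typing speed. The numbers below (dictionary size, word length, typing rate) are assumptions for illustration, not measurements:

```python
# Classic dynamic-programming edit distance, the workhorse of candidate
# ranking in many spell checkers.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Assumed workload: naively compare each typed word against a 100k-word
# dictionary, ~7x7 DP cells per comparison, at ~2 typed words per second.
cells_per_word = 100_000 * 7 * 7
cells_per_second = cells_per_word * 2   # ~9.8 million DP cells/second
```

Even this deliberately naive full-dictionary scan is on the order of ten million table cells per second, which is trivial next to the billions of simple operations per second a single modern core sustains; real checkers also prune with tries or hashes, so the actual cost is far lower still.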
That’s because your iPhone is
That’s because your iPhone is doing the spell checking in the cloud, probably GPU accelerated. I don’t want any stinking cloud; I want the GPU (my GPUs, on my laptop) doing that contextual spell checking, faster and with better suggestions! And I do not want any automatic changing: just highlight the misspelled word and give me a good selection of possible choices, but let me choose! Spell checking involves parallel word matching out the wazoo, so do it on the GPU!
Aggressive spell checkers can really mess things up, especially in word processing applications with missing or improperly assembled spelling dictionaries! That “Mufti” prefix that should be “Multi” showing up in some online journalist’s work is a dead giveaway that they are using LibreOffice and that aggressive spell checking is to blame!
And for sure your CPU utilization is low because real AI contextual spell-checking algorithms are not being used in your word processing software! A lot of word processing software uses the cloud to do much of its spell checking; I don’t want any cloud when my laptop has a GPU on its SoC and a discrete mobile GPU as well. Just give me the best spelling dictionary with the best contextual spell-checking algorithms so I can do my writing totally disconnected from any cloud!
AMD’s AM4 is coming so maybe
AMD’s AM4 is coming, so maybe some new sales for Bristol Ridge, and those AM4 desktop motherboard variants will do fine with Zen as well, so maybe not so much of the Intel style of planned motherboard obsolescence for increasing sales! There are other ways to add value to a specific motherboard maker’s like of products, and maybe that is where some of the problem lies with the motherboard makers themselves.
Edit: like of products
Edit: like of products
to line of products
i need a zen board in mITX
i need a zen board in mITX, hopefully it won’t be AM3 style where you just get fawked if you want that.
I don’t think this has
I don’t think this has anything to do with lack of competition. I was still using an old core 2 duo laptop up until recently, and it worked fine for everything I needed it for, especially since I was running an SSD. With more competition, we may have had slightly lower prices, but I doubt the performance situation would be much different. Because of the relatively small performance gains since about 2006 (due to fundamental technology issues), CPUs just aren’t that important anymore. For most consumers, really old CPUs still perform sufficiently. PC gamers aren’t even driving that many more CPU sales. If you have an i5-2xxx or above, then you probably don’t really need an upgrade. There are obviously some people who think they need the latest and greatest, but in most games at real world settings the GPU is the bottleneck.
With regard to CPU power for
With regard to CPU power for gaming.
Obviously you don’t play CPU-bound games, which would run just fine with a GF8800 for VGA. Take any serious strategy game: these don’t need a GTX 1080 to run, but more CPU power is always welcome. I can testify to that; for example, Supreme Ruler got a massive boost when comparing S775, S1366 and S2011(-3). It is one of the very few games that can load 4 cores to capacity. Also GG War in the East/West, where the AI makes any decision much, much faster. I could continue, but I think it’s pretty obvious.
I would agree that some
I would agree that some strategy games do take a lot of CPU power. I just don’t know that many strategy game players. They do not seem to be anywhere near as common as players of graphics-limited games. I know a lot of people playing WoW and other such MMORPGs or MOBAs on old hardware, though, and they do not need upgrades. If you are a person who plays CPU-heavy strategy games, then you obviously should take such needs into account when buying or building a system. It is easy to see from the sales numbers, though, that most people are not upgrading their systems like they did in the past. It would be interesting to see the numbers for strategy game players, but probably no one has the data unless Steam can link hardware to the games played on it.
Strategy games may not be CPU bound going forward, since there is a lot of work going on to accelerate AI applications on GPUs. That is the case for a lot of applications. Single-thread CPU performance became very hard to push further, so we have gone multi-threaded. We are also in the process of switching a lot of applications over to GPU acceleration. A GPU can obviously decode or encode video a lot better than a CPU. We are also seeing a lot more integration of specialized hardware, which further decreases the importance of the CPU core itself. CPU cores are tiny now. A Skylake core is probably only 10 to 12 mm² on 14 nm. ARM-based cores are mostly significantly smaller than that. This makes a lot of room for specialized processors. We are probably going to have laptop processors very similar to console processors, with many relatively low-powered CPU cores. Considering how many people game on laptops and consoles, game makers will have to code their applications to take advantage of these systems. With the new PS4 Neo coming out, that should bring the consoles up to respectable performance relative to PCs again, but they will still probably have an 8-threaded, low-power CPU of some kind. My old i7-920 with 8 threads may fare quite well with a game designed for such a system. A newer 4-thread device may perform very well also.
I wouldn’t be surprised to see AI-specific hardware on such chips in the future, since the space is available and specialized hardware can usually accomplish the same task using much less power than general hardware. The days of the CPU being the dominant component in a PC are past. Future systems will be like current consoles, which are essentially GPUs (a processor connected to a graphics-style memory system) with some tiny CPUs integrated on-die. I have been saying this for a while, since there was no way to make integrated graphics competitive with dedicated graphics without solving the memory bandwidth problems caused by old-style system memory. This leads to a GPU-centered design due to the much more demanding memory system. Also, the die size of a mid-range GPU surpassed that of a mid-range CPU a long time ago.
“since there was no way to
“since there was no way to make integrated graphics competitive with dedicated graphics without solving the memory bandwidth problems”
Yes, but now AMD’s APUs on an interposer are coming. They are coming to the HPC/workstation market first, but you can be sure there will be some high-end gaming APU-on-an-interposer SKUs, with the Zen cores wired to a very fat GPU die via thousands of interposer traces. Any PCI-based link will look like it came from the dark ages compared to the massive effective bandwidth those APUs will have at their disposal, for both processor-to-HBM and processor-to-processor (CPU die to GPU die) communication!
If the HBM stacks can get 4096 traces for the GPU (or CPU) connection to HBM, then any interposer-based APU will be able to support some very wide direct CPU-to-GPU traces in addition to the HBM’s traces! Just imagine the CPU cores’ die directly wired to the GPU die with a 4096-wide (or wider) parallel connection fabric; PCI will never be able to match the effective CPU-to-GPU bandwidth available to an APU on an interposer, given a silicon interposer’s ability to have tens of thousands of traces etched into its silicon substrate.
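For a sense of scale, here is some back-of-envelope arithmetic with round, illustrative figures (treat both sets of numbers as assumptions): first-generation HBM uses a 1024-bit interface per stack at roughly 1 Gb/s per pin, while PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding.

```python
# Rough bandwidth comparison: 4-stack HBM interface vs. a PCIe 3.0 x16 link.
# All figures are round illustrative numbers, not vendor specs for any product.

# HBM1: 1024-bit interface per stack, ~1 Gb/s per pin.
hbm_stacks = 4
hbm_bits = 1024 * hbm_stacks                 # 4096 traces total
hbm_gbs = hbm_bits * 1.0 / 8                 # -> 512.0 GB/s aggregate

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding overhead.
pcie_lanes = 16
pcie_gbs = pcie_lanes * 8 * (128 / 130) / 8  # -> ~15.75 GB/s
```

So even with generous rounding, the interposer-plus-HBM path is on the order of 30x the bandwidth of a PCIe 3.0 x16 link, which is the commenter’s point; a direct CPU-die-to-GPU-die fabric of similar width would scale the same way.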
They should try negative
They should try negative prices like negative interest rates from the ECB… or helicopter money. :o)
Nah, they just need to take
Nah, they just need to take out a motherboard tax straight out of your bank account once they’ve banned cash.
I don’t have a bank account,
I don’t have a bank account, I keep my cash under the mattress. :o)
i’m a “lost my silver hoard
i’m a “lost my silver hoard at the bottom of the lake on my last boat trip” man myself.
for me was a surprise to see
For me it was a surprise to see ASRock not growing this year. They’ve been doing pretty good stuff lately.
True that! Specially
True that! Especially considering their pricing is very competitive.