There are many interesting ways to pull heat away from a processor. You can submerge your device in mineral oil or even a phase-change fluid (such as "Novec"). You can pump cool fluid up to the component you are trying to cool and then carry the heat away through a radiator. If you are sticking with air, you can make use of vapor chambers and the convection currents that form as devices heat up. The goal is to abuse one or more interesting material properties to store energy and move it somewhere else.
Image Credit: HT4U.net
Or you can just have an obscene, gigantic mass of metal with more fins than the NHL. According to FanlessTech, these are three heatsinks that are not yet available (and may never be). Two of them have three towers, connected to the base by heat pipes, and the last one has four.
Image Credit: ExtraHardware.cz
Personally, I would be a bit uncomfortable buying a PC like that unless I needed absolutely silent operation or top-end air cooling performance. The amount it hangs over the RAM or nuzzles against add-in boards seems sketchy to me, especially if you need to swap a DIMM or two at some point, but I always use stock coolers at reference voltage and frequency, so what do I know?
Image Credit: PConline.com.cn
Yes, that would be a regular ATX motherboard.
When will these prototypes become available? Who knows if they even will. Still, if you have a need for cooling solutions that are a little over-the-top, you might be able to get your hands on these some day. There's nothing wrong with adding more mass and surface area, rather than doing something fancy. It works, and it probably works really well.
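For a rough, back-of-the-envelope sense of why brute force works (a simplification that ignores fin efficiency and the limits of the heat pipes): convective transfer scales roughly as Q ≈ h × A × ΔT, where h is the convection coefficient, A is the total fin surface area, and ΔT is how far above ambient the sink runs. Hold that temperature delta constant and doubling the fin area roughly doubles the heat shed, which is exactly what these towers are banking on.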
At this point you should just go water.
Don’t be so narrow-minded. This stuff is for a silence-optimized build, not top performance. Aside from that, even doing an AIO still adds, at the very least, one to two fans (not silent), a pump (more noise), and at least four potential points of catastrophic failure at each barb; something you wouldn’t want to worry about if you’re going for a silent build.
Yes, passive is basically impossible to fail and really is dead silent, but it won’t get you the performance of a water cooling setup.
If you go for water, it’s really just a matter of how you mount your pump and what speed it’s running at. A low-speed Laing D5 on a foam pad is just dead silent too.
But if you’re only running one CPU and one GPU without overclocking, I’d rather go for passive cooling, which is less expensive, won’t fail, and is easier to set up.
And this is coming from someone with a water-cooled, maxed-out 900D (3 x R9 290), just because it looks damn cool and air wasn’t an option for LTC mining (but I tried it first).
It also takes up a quarter of your board and makes it impossible to access the memory. It’s like smartphone screens: they get bigger and bigger, and it looks stupid no matter how efficient or silent it is.
Let me rephrase that: almost half your board.
http://img.pconline.com.cn/images/upload/upc/tx/itbbs/1304/06/c10/19587892_1365256134817.jpg
Lol that’s the small one 😛
http://itbbs.pconline.com.cn/viewpic.do?fid=250&imgUrl=http://img.pconline.com.cn/images/upload/upc/tx/itbbs/1304/06/c10/19588084_1365256351747_1024x1024.jpg
And the way you talk about water cooling sounds like you have no experience with water cooling.
You justify this because you’re scared to go water cooling, thinking it will kill your system. Lol, when done right that’s hardly the case. Also, FYI, pumps can be silent.
I wish people would specify AIO or custom water; there’s a huge difference in performance, reliability, and flexibility between the two.
I don’t consider AIOs real water cooling; they’re no better than a high-end CPU air cooler.
I actually really love that last one.
Put it in a HAF or something.
The massive case fans would be just ripping fresh air through it.
This seems stupid because there are still unnecessary heat sources inside the typical ATX case, and serious problems with the basic design of PCs as thermal systems.
The worst of these errors is very easy to correct: placing the PSU inside the case. Why does anyone still do this? It is a huge source of heat and electrical interference; literally no one in audio engineering will ever use an internal sound card again, because it is that important to keep electrical interference and noise far away from the analog audio. Likewise, spinning metal drives do not belong inside a case; they’re a huge heat source. If you put both inside the case, then you will need monster heatsinks like these.
Imagine a totally different physical architecture. First, power the mainboard with 48 VDC, 24 VDC, 12 VDC, and 6 VDC step-downs and have every component on the board (down to the RAM) negotiate its power level. There is already a standard for this kind of negotiation: Power over Ethernet (PoE). Just run that same power negotiation protocol over PCI-E, HyperTransport/FSB, and so on.
Basically, make the motherboard a PoE router, as is done for rackmount systems ("LAN on motherboard"; you can buy these boards for rackmounts now). Now you have no AC power inside your case at all! No PSU heat except from the step-down to lower DC voltages. Overall power draw may go up a bit because of losses in the DC cable, but so what? Laptops don’t suffer for this.
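Purely to make the negotiation idea concrete, here is a minimal sketch assuming a PoE-style request/grant handshake; this is not a real spec or API, and the component names, rails, and wattages below are invented for illustration:

```python
# Hypothetical sketch only: a toy, PoE-inspired request/grant power arbiter.
# Component names, rails, and wattages are invented for illustration.

from dataclasses import dataclass


@dataclass
class PowerRequest:
    component: str     # e.g. "cpu", "dimm0", "gpu" (illustrative labels)
    rail_volts: float  # which DC rail the part wants to draw from
    watts: float       # requested power budget


class BoardPowerController:
    """Grants requests until the external DC supply's budget is exhausted."""

    def __init__(self, budget_watts: float):
        self.budget = budget_watts
        self.grants: dict[str, float] = {}

    def negotiate(self, req: PowerRequest) -> bool:
        remaining = self.budget - sum(self.grants.values())
        if req.watts <= remaining:
            self.grants[req.component] = req.watts
            return True   # component may power up at its requested level
        return False      # component must retry at a lower class or stay off


if __name__ == "__main__":
    ctrl = BoardPowerController(budget_watts=180.0)  # assumed external brick
    for req in (PowerRequest("cpu", 12.0, 95.0),
                PowerRequest("dimm0", 1.2, 4.0),
                PowerRequest("gpu", 12.0, 150.0)):
        verdict = "granted" if ctrl.negotiate(req) else "denied"
        print(f"{req.component}: {verdict}")
```

Real PoE (IEEE 802.3af/at) handles this with power classes, optionally refined over LLDP; the point is just that a budget-and-grant model maps naturally onto board components too.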
Second, get the spinning drives entirely out. With a LAN-on-motherboard architecture you have lots of network bandwidth, say up to 40 GB/s (the HyperTransport bus, maxed out, can handle that fine plus everything else in the box, like peripherals). No spinning drive comes anywhere near saturating even USB 3.0 anyway, so use Power over Ethernet, USB 3.0’s new powered cable, Thunderbolt, unpowered eSATA, or SAN fiber cabling to talk to your external drives. The only reason you have them inside the box right now is the power supply, and that’s solved once the power supply is external: just use a PoE RAID enclosure like rackmounts do.
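Rough ballpark numbers, not benchmarks: a typical 7,200 RPM drive sustains somewhere around 150 MB/s sequentially, while USB 3.0’s 5 Gb/s signaling works out to roughly 400 to 500 MB/s usable, and Thunderbolt or 10/40 Gb Ethernet sit well above that, so in an external enclosure the drive, not the cable, stays the bottleneck.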
The third stupid and unnecessary heat source inside the box is massive graphics cards. Get a life! Most games are network-bound now, so you should be investing in better Internet and tweaking an open-source router OS, not increasing your fps while your ping lags. Spend your money on actual CPU performance, RAM performance, and PCI-E SSDs, which actually are fast enough to speed things up. M.2 connectors are a nice hedge between PCI-E and SATA SSDs, though I think direct PCI-E is still more stable and sensible.
Seriously, an A10-series APU on FM2+ plus one GPU card drives something like three 4K displays at a higher fps than you need to show every single network update, even at 30 ping. You probably don’t even own three 4K displays; 3x 1080p runs on some APUs alone right now, or at least with any low-end card.
OK, that third one is obviously trolling; clearly there is no better use for electronics than faster graphics, and you are so helpless with respect to ping time that you do not dare complain or demand South Korea-speed Internet in the USA or Canada... too much socialist dark fibre required!
So enjoy spending all your money and time on heatsinks to get more fps to make up for your crap ping time, which it can’t.
Interesting response, Craig Hubley.
Your point makes sense to a certain extent. Maybe someone from an engineering department could discuss this further, perhaps someone from ASUS, as they enjoy creating new designs and concepts.
As for the GPU point: keep in mind that PCs are not just for gaming, and not all gamers are connected to the Internet.
And as far as your trolling goes, North America is still far behind in public access to technology. If you’re working for a research organization (e.g. NASA, National Defence, etc.) you’d be amazed, and saddened, by how much the public is left hanging by a thread. It’s important for people to continue to fight for public technological advancements (e.g. Net Neutrality, etc.) and to allow open-minded people to share their technology (public domain).
Down the road, it’s all about “easy and unchanged” profit, but imagine the great content if that rope were removed.
Another good reason for NAS, by the way, is a better dedicated OS with a real access-optimized file system like ZFS. If you have the network controllers, you will get better performance out of that than you will from directly attached spinning metal running an inferior file system like NTFS. Again, this is why rackmount systems do it this way…
Hope they send one to Morry 😀
Why is everyone failing to notice the notches in the heatsinks for fan clips… these suckers are designed for multiple fans…
Many big heatsinks have fan mounts for extreme cooling, but they are often designed with fanless operation for non-enthusiast CPUs in mind.