A grain of salt is needed for this one. Users on Reddit claim to have found a pair of renders, one of the headset and one of the controllers, for the HTC Vive VR system. They also have a screenshot of the page, although the first words you see are “This Is Real,” which are the most sketchy, ironic, and unfortunate words to be greeted with in a product leak.
The current HTC Vive prototype looks like a rough version of this. There are some significant differences, though. My major concern is at the front of the headset. You can clearly see a front-facing camera as well as two nubs below it, one to the bottom-left, and one to the bottom-right. If those two nubs are also cameras, then that makes a bit more sense.
If those two nubs are not cameras, then Valve would have downgraded from the two-camera system of the original prototype to a single camera. Valve has already claimed that the Vive will have front-facing cameras, plural, to track objects (like pets) for safety reasons. I can see them adding an extra camera, but I doubt they would drop down to just one. Two cameras allow more accurate depth tracking at low distances, which is exactly when you risk… interacting… with the user. A downgrade like that sounds unlikely.
If it's three cameras? That makes sense.
Kyle Orland of Ars Technica is using the original prototype during GDC 2015.
Image Credit: Ars Technica
The controllers are also interesting, but mostly from an aesthetic standpoint. The hexagonal plates, which apparently functioned as sensors, seem to have been changed into circular rings (if the hole goes all the way through). They retain their thumb trackpads, triggers, and a couple of buttons. It's unclear whether the two controllers are identical, or whether there's a difference between intended-left and intended-right models. Being a lefty, I hope not.
At roughly the same time, Cher Wang, the CEO of HTC, announced that the HTC Vive will be unveiled at CES (in January). It won't be available until around April, but we should know basically all there is to know about the system at next month's trade show. Given this timing, and that multiple users have been posting the leak seemingly independently, it sounds valid. The camera configuration, on the other hand, takes a bit away from that.
Multiple cameras are only useful for depth tracking if you’re using stereo disparity to calculate depth cues. If you’re using another technology, like Structured Light (e.g. Kinect 1), Time of Flight (e.g. Kinect 2, Intel Realsense), or Optical Flow, then a single camera is all that is necessary. If it is only a passthrough optical camera, then a single camera is also all that is necessary (getting two cameras located correctly for stereo vision passthrough requires a complicated mirror setup, so basic mono passthrough is generally preferable to failed orthostereo).
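For reference, the stereo-disparity approach mentioned above boils down to simple triangulation: depth is focal length times baseline divided by disparity. A minimal sketch (the focal length and baseline here are made-up illustrative values, not anything from the leak):

```python
# Depth from stereo disparity: Z = f * B / d
# f = focal length in pixels, B = baseline between the two cameras in metres,
# d = disparity (horizontal pixel offset of the same feature in both images).

def depth_from_disparity(disparity_px, focal_px=600.0, baseline_m=0.065):
    """Triangulate the depth of one matched feature pair."""
    if disparity_px <= 0:
        return float("inf")  # feature at/beyond infinity, or a bad match
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(10.0))  # ~3.9 m with a 65 mm baseline
print(depth_from_disparity(1.0))   # ~39 m: accuracy collapses at range,
# which is why stereo disparity suits the short, near-user distances above.
```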
Still doesn't explain why the original prototype had multiple cameras, or why it was said that it would be released with cameras, plural.
Probably because it was a pre-devkit design. If, for example, they had originally been using Leap Motion’s ‘Dragon’ stereo camera for hand tracking, but decided against including that particular device (e.g. due to tracking reliability not being significantly improved over the current Leap Motion devices, or the IR pulsing frequency interfering with the Lighthouse trackers) then there’s no reason to slap an extra camera in that does nothing just to appease some early marketing material.
Personally, I'm betting it's a pure passthrough camera with no depth-sensing component. Thus far, NOBODY has demonstrated a purely optical hand-tracking method with acceptable performance, let alone one that can operate from an HMD rather than a fixed viewpoint.
It's not supposed to detect your hands. That is done with fixed lightfield sensor(s), which I believe they said will ship with two for potential occlusion reasons. No, the camera(s) on the headset are to make sure you don't accidentally kick your pet.
“That is done with fixed lightfield sensor(s)”
LightHOUSE. Lightfields are a completely different ballgame in computational imagery. Incidentally, while the Lighthouse system is outside-in in a programmatic sense, it’s inside-out in an optical sense: the Lighthouse basestations are spinning laser emitters, not sensors. The sensors are distributed in a constellation on a tracked object (controller, HMD, etc).
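To make that inside-out-optics point concrete: each photodiode on the tracked object just timestamps when the sweeping laser hits it, and the angle to the basestation falls out of the rotor timing. A minimal sketch, assuming the commonly cited 60 Hz sweep rate (the numbers are purely illustrative):

```python
import math

ROTOR_HZ = 60.0  # commonly cited Lighthouse sweep rate; illustrative here

def sweep_angle_deg(t_sync_s, t_hit_s):
    """Angle of one photodiode relative to the basestation, from timing alone.

    t_sync_s: time of the basestation's omnidirectional sync flash
    t_hit_s:  time the rotating laser line crossed this photodiode
    """
    fraction_of_rotation = (t_hit_s - t_sync_s) * ROTOR_HZ
    return 360.0 * fraction_of_rotation

# A hit 4.63 ms after the sync flash is ~100 degrees into the sweep:
print(sweep_angle_deg(0.0, 0.00463))  # ~100.0

# Horizontal and vertical sweeps give two angles per photodiode; with many
# photodiodes at known positions on the device, a pose solver recovers the
# full 6-DoF position and orientation of the headset or controller.
```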
The controllers do not detect your hands. The controllers are tracked, but your hand state can only loosely be inferred from the button and touchpad states. Even Oculus Touch, with its capacitive finger-lift sensors, can only detect the binary state of three fingers.
Now, whether it's worth tracking your hands is another debate entirely, even if you could pull a reliable and accurate optical hand-tracking system out of nowhere (or attach an IMU glove to a Vive controller as Noitom did). Without haptic feedback, free-air hand tracking will remain a ghost-in-an-environment interface, unable to do more than mime interacting with virtual objects. Tracked controllers are a bit better in that you get some degree of tactile feedback from buttons/sticks/offset-weight motors/etc. It's not full proprioceptive haptics, but it's better than nothing in terms of immediate recognition of input.
Hence why I think the camera, if it even ends up in the consumer product, will be passthrough-only. The binocular camera in the initial reveal render was oriented entirely incorrectly for stereo passthrough, so depth sensing through stereo disparity was clearly its original intended purpose.
Well, I wouldn't be surprised if this leak were real, especially since two weeks ago Cher Wang teased that her team and Valve had made “a very, very big technological breakthrough,” followed by “We shouldn't make our users swap their systems later just so we could meet the December shipping date.”
So yeah, the release was pushed from Q1 2016 to Q2 at least.
And this wouldn't be just a cosmetic leak, since the product seems to have been fully reworked around something apparently big.
Getting the feeling this won’t be very useful for Elite Dangerous compared to other VR solutions.
While I am interested in VR, it adds a lot of limitations on game design due to motion sickness. There are similar issues with 3D movies. I don't really like movies to be in 3D, since it just increases the price while usually degrading the experience for me: I spend the first 20 to 30 minutes of the movie with my eyes constantly trying to adjust focus when it is unnecessary, which makes the movie significantly less immersive. I recently watched “The Martian” in 3D since there were no 2D shows available. I had the same adjustment period at the beginning, but I also had quite a few cases where the 3D effect broke down, which breaks immersion. This happens when you look at an area of the scene that is out of focus: your eyes try to bring it into focus, can't, and you get eye strain. There were also several quick camera pans, which trigger your eyes to refocus in 3D, so any quick camera movement completely blurred out. I usually feel slightly nauseous after watching a movie in 3D.
The VR headsets will get around some of these issues, but not all. Basically, if you don't want to cause motion sickness for a lot of the audience of a 3D movie, you have to keep the camera as stationary as possible. This doesn't work well for gaming, and is a big limitation on movies as well. I know some people who got motion sickness from watching Avatar in 3D due to the shots with camera movements. Some of the games that seem to do okay are flight-sim-style games, where you have a stationary cockpit around the user. I get carsick really easily, but I have never gotten airsick. I have also seen good reports on a few shooting-gallery-type games, as these allow the user to remain stationary. Some people might be able to handle running around in a 3D world while sitting still in their chairs, but a lot of people will not. This has the potential to cause a serious failure of expectations, since the hype has been building up for so long. I get motion sickness really easily, so I expect my ability to use a VR system will be limited. I have actually gotten slightly nauseous just from playing a 3D-rendered game on a regular monitor in a darkened room. I had a lot of ear infections, though, so I may be more dependent on sight for balance.
The first-generation devices are not going to be that spectacular. With the Rift, you at least can run it on a powerful PC. The PlayStation VR, by contrast, doesn't have the power to drive a VR display except at very low graphics quality. You are essentially trying to run two relatively high-resolution displays, and the current console hardware is barely capable of running 1080p at good graphics quality and a high refresh rate. If the refresh rate isn't high enough, it will make people sick, so they can't degrade it down to 30 fps to achieve the expected graphics quality; it has to maintain a high refresh rate continuously. The cell-phone-based headsets are even less powerful than a console.
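Some back-of-the-envelope numbers make the load difference concrete. Using the publicly reported panel figures (and ignoring reprojection, render-target oversizing, and overdraw, so treat this as rough arithmetic only):

```python
# Raw pixel fill rate the GPU must sustain, in pixels per second.
def pixel_rate(width, height, hz):
    return width * height * hz

console_1080p30 = pixel_rate(1920, 1080, 30)   # typical console target
vive_rift_90    = pixel_rate(2160, 1200, 90)   # both eyes combined at 90 Hz
psvr_native_120 = pixel_rate(1920, 1080, 120)  # PSVR panel at native 120 Hz

print(vive_rift_90 / console_1080p30)    # 3.75x a 30 fps console game
print(psvr_native_120 / console_1080p30) # 4.0x
```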
Given the hype, I am afraid that too many people will buy these first-generation headsets with unreasonable expectations. I haven't seen one myself, but a lot of people seem to think they are spectacular, which they may be for a limited set of content and a limited number of people. Some people might buy one expecting to play a first-person shooter and end up with it making them sick. I expect the second- or even third-generation parts will be the versions to get for the average, non-bleeding-edge consumer. By then, people will have more reasonable expectations as well.
For the next-generation devices, they need to start making displays specifically for VR applications. Cell phone screens are not made for high refresh rates. There are some companies working on micro-displays (0.75 to 1 inch or so), but these have their own issues. If they can be made to work without being ridiculously expensive, then they could allow headsets shaped more like a pair of welding goggles rather than the bulky current models. We also need the compute power to drive the displays. The next-generation consoles should be able to run a VR display easily, since they will be made to power 4K screens.
Just noticed an article on kitguru claiming that the PlayStation VR will come with a separate processing box. If that is true, then I would think the VR would be running totally on the external box. The PS4 doesn't have a high enough bandwidth external connection to allow the devices to share the workload; I assume they would need to connect via USB. This seems ridiculous to me if it is true. They would essentially be selling you a new computer to run the VR headset. The actual PS4 would be for peripheral connections only. This also means that the VR headset will be expensive, since you would be buying a computer to run it. It would be a stripped-down computer, but still a lot more than just the headset.
” If that is true, then I would think the VR would be running totally on the external box. The PS4 doesn’t have a high enough bandwidth external connection to allow the devices to share the work load.”
Nope, back when it was still called Project Morpheus it was revealed that the processing box does three things:
– Dewarps and extracts the ‘second display’ stream for showing a separate view on an external monitor (both views are rendered on the PS4)
– Performs temporal extrapolation (AKA reprojection AKA timewarp) to produce additional frames for the 60Hz-to-120Hz display mode (native 90Hz and 120Hz display are also supported) – see the sketch after this list
– Performs at least some of the audio processing. The suspicion is that the PS4 has hardware acceleration for rendering discrete multichannel audio, so that is how audio is rendered, and the box spatialises it and runs it through an HRTF to get positional audio for stereo headphones.
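To illustrate the reprojection step flagged in the list above: for pure head rotation, warping an already-rendered frame to a newer pose reduces to a homography built from the rotation delta between render time and display time. A minimal sketch with NumPy (the intrinsics and poses are made up; a real implementation runs on the GPU and also uses predicted poses):

```python
import numpy as np

def reprojection_homography(K, R_render, R_display):
    """Homography warping a frame rendered at pose R_render to pose R_display.

    K is the 3x3 camera intrinsics matrix. For rotation-only deltas the warp
    is exact: H = K @ R_delta @ K^-1, with R_delta taking render-time view
    directions to display-time view directions.
    """
    R_delta = R_display @ R_render.T
    return K @ R_delta @ np.linalg.inv(K)

# Example: the head has turned 2 degrees around the vertical axis since render.
theta = np.radians(2.0)
R_render = np.eye(3)
R_display = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                      [ 0.0,           1.0, 0.0          ],
                      [-np.sin(theta), 0.0, np.cos(theta)]])
K = np.array([[600.0,   0.0, 540.0],
              [  0.0, 600.0, 600.0],
              [  0.0,   0.0,   1.0]])
H = reprojection_homography(K, R_render, R_display)

# Each output pixel samples the rendered frame at H^-1 @ [x, y, 1]; the
# extrapolated in-between frames for the 60Hz-to-120Hz mode come from
# exactly this kind of warp, applied with the latest head-tracking data.
```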
The PS4 literally cannot offload image rendering through the HDMI port: the bandwidth is not there, and the internal interconnects needed to do so do not exist on the PCB (you would need to feed an internal bus out to the HDMI port).