When Michael Abrash moved from Intel to Valve, according to his post on the latter company’s blog, he suggested helping optimize Portal 2. The response from Jay Stelly was interesting: “Yeah, you could do that, but we’ll get it shipped anyway.” That’s… not what you’d expect from a company getting ready to ship a huge AAA title.
He took that feedback as a license to think outside the box, which led to Valve’s “wearable computing” initiative, the work that eventually formed the basis of SteamVR. One key part of that blog post was the minor parenthetical, “think Terminator vision”.
Apparently, Microsoft’s HoloLens team has. In a cute little Unity demo, they overlay text and post-processing shaders atop the camera feed. It’s not just baked 2D text, though; the feed is also pushed through object and text recognition, and they suggest that users take the source (available on GitHub) and extend it with translation or text-to-speech.
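The recognize-then-overlay loop described above can be sketched in a few lines. To be clear, this is a hypothetical illustration in Python, not the demo’s actual Unity/C# code, and every name in it (`OverlayItem`, `process_frame`, the stub recognizer) is made up for the example; the point is just the shape of the pipeline, with a pluggable post-processing step where translation or text-to-speech would slot in.

```python
# Hypothetical sketch of a recognize-then-overlay pipeline; the real
# HoloLens demo is a Unity/C# app, and none of these names come from it.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OverlayItem:
    label: str   # recognized text or object name
    x: int       # screen position where the overlay would be drawn
    y: int

# A "recognizer" maps a camera frame to overlay items; a stub stands in
# for the demo's object- and text-recognition services here.
Recognizer = Callable[[bytes], List[OverlayItem]]

def stub_text_recognizer(frame: bytes) -> List[OverlayItem]:
    # Pretend every frame contains the word "EXIT" near the top left.
    return [OverlayItem(label="EXIT", x=10, y=20)]

def process_frame(frame: bytes,
                  recognizers: List[Recognizer],
                  post: Callable[[str], str] = lambda s: s) -> List[OverlayItem]:
    """Run all recognizers on a frame, then apply a pluggable
    post-processing step (e.g. translation) to each label."""
    items = [item for r in recognizers for item in r(frame)]
    return [OverlayItem(label=post(i.label), x=i.x, y=i.y) for i in items]

# Extension point: swap in translation (or text-to-speech) as post-processing.
spanish = {"EXIT": "SALIDA"}
overlays = process_frame(b"\x00", [stub_text_recognizer],
                         post=lambda s: spanish.get(s, s))
print(overlays[0].label)  # -> SALIDA
```

Swapping `post` for a real translation API call, or a call into a speech synthesizer, is the kind of extension the demo’s authors are inviting.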
The demo is primarily written in C#, which makes sense, because Unity.