At their annual MAX show, Adobe hosts a keynote called “Sneak Peeks”. Some of these keynotes contain segments that are jaw-dropping. For instance, there was an experimental plug-in at Adobe MAX 2011 that analyzed how a camera moved while its shutter was open, and used that data to intelligently reduce the resulting motion blur in the image. Two years later, that technology made its way into Photoshop. If you're wondering, the shadowy host on the right was Rainn Wilson from the US version of The Office, which should give some context for the humor.
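If you're curious how that sort of thing works under the hood: knowing the camera's path during the exposure gives you the blur kernel, and removing the blur then becomes a deconvolution problem. Here's a rough sketch of that idea using scikit-image's Richardson-Lucy deconvolution and a made-up horizontal motion kernel; it only illustrates the concept, not Adobe's actual algorithm.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import richardson_lucy

# Stand-in for the blurry photo; a real plug-in would start from the user's
# image and the camera's recorded motion path.
image = img_as_float(data.camera())

# Pretend the sensor data said the camera slid horizontally during the exposure:
# that path becomes a simple horizontal motion-blur kernel (point-spread function).
psf = np.zeros((9, 9))
psf[4, :] = 1.0
psf /= psf.sum()

# Simulate the blur, then try to undo it with 30 Richardson-Lucy iterations.
blurred = convolve2d(image, psf, mode="same", boundary="symm")
restored = richardson_lucy(blurred, psf, 30)
```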

While I couldn't find a stream of this segment as it happened, Adobe published three videos after the fact. The keynote was co-hosted by Jordan Peele and, while I couldn't see her listed anywhere, I believe the other co-host was Elissa Dunn Scott from Adobe. ((Update, November 8th @ 12pm EST: Turns out I was wrong, and it was Kim Chambers from Adobe. Thanks Anonymous commenter!))

The first (and the one most widely reported on) is VoCo, which is essentially an impressive form of text-to-speech. Given an audio waveform of a person talking, you can make edits by modifying the transcript. In fact, you can even write content that wasn't in the original recording, and the plug-in will synthesize it based on what it knows of that person's voice. They claim that about 20 minutes of continuous speech is required to train the plug-in, so it's mostly aimed at fixing flubbed lines in audiobooks and podcasts, where that much material from the same speaker already exists.
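To be clear about what's new here: cutting, reordering, or deleting words that were already spoken is the comparatively easy part once you have word-level timestamps (from a forced aligner, say), since it amounts to slicing the waveform. Synthesizing brand-new words in the speaker's voice is what VoCo adds on top. A toy sketch of the easy half, with made-up timestamps and pydub (this is not anything Adobe has shown):

```python
from pydub import AudioSegment

audio = AudioSegment.from_file("interview.wav")

# Hypothetical word-level alignment for the original recording: (word, start_ms, end_ms).
words = [
    ("I",      0,    150),
    ("really", 150,  560),
    ("liked",  560,  900),
    ("the",    900, 1020),
    ("demo",  1020, 1500),
]
by_word = {word: audio[start:end] for word, start, end in words}

# "Edit the transcript" by listing the words you want, in the order you want them.
new_transcript = ["I", "liked", "the", "demo"]   # drop "really"
edited = sum((by_word[word] for word in new_transcript), AudioSegment.empty())
edited.export("edited.wav", format="wav")
```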

In terms of legal concerns, Adobe is working on watermarking and other technologies to prevent spoofing. Still, the demo proves that the algorithm is possible (and on today's hardware), so someone else who wasn't already working on it probably is now, and they might not implement the same protections. This is not Adobe's problem, of course. A company can't (and shouldn't be able to) prevent society from inventing something (although I'm sure the MPAA would love that). They can only research it themselves, and be as ethical with it as they can, or stand aside while someone else does it. Ultimately, it's on society to handle these situations responsibly in the first place.

Moving on to the second demo: Stylit. This one is impressive in its own way, although not quite as profound. Basically, from a 2D drawing of a sphere, an artist can generate a material that can be applied to a 3D render. Whatever medium they use, from pencil crayons to clay, the drawing defines the color and pattern of the shading ramp on the sphere, the shadow it casts, the background, and the floor. It's a cute alternative to mathematically-generated cel shading materials, and it even works in animation.

I guess you could call this a… 3D studio to the MAX… … Mayabe?

The Stylit demo is available for free at their website. It is based on CUDA, and requires a fairly modern graphics card (they specifically call out the GTX 970) and a decent webcam (they mention the Logitech C920) or an Android smartphone.
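For the curious, the closest simple relative of this idea is "lit sphere" (a.k.a. MatCap) shading, where each pixel of the render is colored by looking its surface normal up in the painted sphere image. The real Stylit work does a guided, patch-based style transfer that preserves the strokes of the drawing, so the sketch below, which assumes you already have a per-pixel normal map of the render as well as hypothetical input files, only captures the basic lookup, not their algorithm.

```python
import numpy as np
from PIL import Image

# The artist's 2D drawing of a sphere, and a per-pixel camera-space normal map
# of the 3D render (both hypothetical inputs for this sketch).
sphere = np.asarray(Image.open("painted_sphere.png").convert("RGB"))
normals = np.load("render_normals.npy")           # shape (H, W, 3), values in [-1, 1]

h, w = sphere.shape[:2]
# Map each normal's x/y components from [-1, 1] onto pixel coordinates of the
# painted sphere (y is flipped because image rows grow downward), then look
# the colors up.  This is plain lit-sphere shading, not Stylit's synthesis.
u = ((normals[..., 0] * 0.5 + 0.5) * (w - 1)).round().astype(int)
v = ((-normals[..., 1] * 0.5 + 0.5) * (h - 1)).round().astype(int)
shaded = sphere[v, u]

Image.fromarray(shaded.astype(np.uint8)).save("stylized.png")
```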

Lastly, CloverVR is an Adobe Premiere Pro interface in VR. This will seem familiar if you were following Unreal Engine 4's VR editor development. Rather than placing objects in a 3D scene, though, it helps the editor visualize what's going on in their shot. The on-stage use case was aligning views between shots, so that a viewer staring at a specific object cuts to the next shot already facing its object of interest, without needing to correct with their head and neck, which would be unnecessarily jarring.
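That alignment boils down to a yaw rotation of the incoming 360° shot so its object of interest ends up in the direction the viewer is already facing, and for an equirectangular frame a yaw rotation is just a horizontal column shift. A minimal sketch with made-up angles (not Adobe's implementation):

```python
import numpy as np

def align_cut(incoming_frame, viewer_yaw_deg, target_yaw_deg):
    """Rotate an equirectangular frame so target_yaw_deg lands at viewer_yaw_deg."""
    height, width = incoming_frame.shape[:2]
    yaw_offset = viewer_yaw_deg - target_yaw_deg      # how far to spin the shot
    shift = int(round(yaw_offset / 360.0 * width))    # degrees -> pixel columns
    return np.roll(incoming_frame, shift, axis=1)     # sign depends on your yaw convention

# Made-up numbers: the viewer ends the first shot looking 30 degrees left of
# centre, but the next shot's subject sits at 120 degrees, so spin the shot.
frame = np.zeros((1024, 2048, 3), dtype=np.uint8)     # stand-in equirectangular frame
aligned = align_cut(frame, viewer_yaw_deg=-30.0, target_yaw_deg=120.0)
```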

Annnd that's all they have on their YouTube at the moment.