I recently got a chance to experience virtual reality (VR) with an HTC VIVE head-mounted display (HMD). I was so enraptured by the experience that I immediately began learning everything I could about the current state of VR tech, production & design workflows, best practices, and even academic papers & lectures. It may sound like hyperbole, but it’s really going to change everything.
There have been other times in my tech life that I felt like I did inside VR;
using HyperCard on a Macintosh SE/30,
clicking through blue hyperlinks on the first web pages in a Mosaic browser,
using the then-new layers feature in Photoshop 3.0, and
using the first multitouch iPhone in 2007.
All of these experiences gave me a profound “OMG, this is going to be HUGE” feeling that pushed me deeper into my technology career. Virtual reality seems like it might be the next step. It’s hard to articulate exactly why I feel so sure about VR playing a big part in the next epoch of computing. The concept of “Presence” is a big part of it, though.
But it’s still early. Just as no one could have imagined today’s Amazon emerging from those first primitive blue-hyperlinked, grey-background web pages, it’s difficult to articulate how mainstream VR will unfold. We are a one-to-two-order-of-magnitude increase in performance-per-watt away from VR going from niche to billions of users. But as anyone who understands Moore’s Law knows, a 2016 VR-capable GPU running on <1W of power is probably much less than 10 years away. We are at a tipping point now for establishing the VR foundation that the future will be built upon.
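As a rough back-of-the-envelope check on that timeline (my own arithmetic, not anyone’s roadmap, assuming performance-per-watt doubles every 18–24 months):

```python
import math

def years_to_improve(factor, months_per_doubling):
    """Years until performance-per-watt improves by `factor`,
    assuming one doubling every `months_per_doubling` months."""
    doublings = math.log2(factor)        # how many doublings are needed
    return doublings * months_per_doubling / 12

# Two orders of magnitude (100x) at a 24-month doubling cadence:
print(round(years_to_improve(100, 24), 1))  # ~13.3 years

# One order of magnitude (10x) at an 18-month cadence:
print(round(years_to_improve(10, 18), 1))   # ~5.0 years
```

So depending on where in that one-to-two-magnitude range the real requirement falls, and how fast GPUs actually improve, “much less than 10 years” is plausible.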
I don’t yet have my own HMD, or even a VR-capable smartphone, but I am teaching myself Blender. The software may not be the optimal option – I’m looking to try the Maya student licence – but I’ve already grown accustomed to navigating and directing my creative energy within a 3D workspace, and that, I think, is a good place to start.
I think the future is going to be weirder than we can possibly imagine.
I’ve been thinking about Neal Stephenson’s The Diamond Age as it pertains to self-driving cars. The protagonist there has a “car” – a nanotech-constructed “robot horse” – that looks like the real thing. Horses are the original self-driving transportation.
Consider the descendants of Big Dog or Spot, and imagine a near-future world where “cars” look more like small living rooms with giant horse-like legs, or maybe carriages drawn by a team of robots, running & galloping from place to place.
The dexterity such a robot would offer may enable a future not only without parking, but one without the need for roads. These robot horses would also offer significant safety advances, as legs would let them nimbly leap out of each other’s way, just as horses in a herd do.
The iOS 9 reveals at WWDC this week have gotten me really excited about the iPad again. As much as I like my iPad, I’ve struggled to adapt my habits to doing work in its single-app-at-a-time model. But the new multitasking and keyboard updates make the iPad a real contender for replacing my laptop.
Slide Over, Split View, & Picture in Picture
Split View will allow me to “work with a reference” the way I do with side-by-side windows on OS X. Whether it’s Wikipedia and a spreadsheet, or Google image search results and a drawing app, I’m really excited about this. Slide Over is going to be great for quickly jumping into a Messages thread, and ever since YouTube added Picture in Picture browsing to their app I’ve wished I could do the same everywhere.
Easy text selection & shortcut buttons.
Editorial is my favourite text app because it has the most amazing tool – an extra row above the keyboard – that offers quick cursor scrubbing without obscuring the line of text with a finger. I’m so happy this idea has made it into the stock keyboard’s text selection. Combined with Split View, I’m expecting a big boost to my productivity on an iPad.
One more thing…
The one thing I’m still wishing for is Cintiq-like low latency & pressure sensitivity, as well as a pin-point stylus. I do a lot of sketching on iPads, and I have 4 different third-party styli. I enjoy it. But the latency is still a little frustrating (especially on an iPad Mini), and if you want a pin-point stylus you tend to give up pressure sensitivity. If Apple builds it in – maybe through some variation on Force Touch – I expect it to have much lower latency, and hopefully a pin-point stylus.