Yesterday I was working on an ITWorld post about Netflix Streaming on the Wii. In describing the navigation of your queue I said “You can click on the arrows on each end of the nav bar…”
That sentence didn’t register until this morning when I was proofreading. Click on the arrows. Clicking, on a video game system. That’s new. Now granted, there have been torturous control schemes in console games before, where you’d move a cursor with an analog stick and then press a controller button to ‘click’ an on-screen button, but the experience has always been pretty awful. And plenty of Wii games (my favorite Wii games, in fact) use a point-and-click interface. But doing a non-gaming task on the Wii really made me aware that I was doing something different.
On a computer, of course, we click constantly; the entire modern computer interface is built around moving a cursor and clicking mouse buttons. But the Wii is the first console to successfully bring that metaphor to a game system. Presumably PlayStation Move will do this as well. But not Natal (see below).
And then there’s the iPad, which takes its UI from smartphones. On the iPad there’s no concept of a cursor. You can still ‘click’ things, but the feeling is different from doing so with a mouse, where you can see the cursor. You can’t roll over interface items to get pop-up helper text or anything along those lines [no, I haven’t used an iPad; I’m extrapolating from using smartphones]. On the Android platform, at least, there’s even a change in outcome depending on how long you click. There’s clicking, and then there’s a “long press” that produces different results from the same icon/link/on-screen item.
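For the curious, here’s a minimal sketch of what that looks like in Android code. This isn’t from any particular app; the layout id, the icon view, and the toast messages are just placeholders. The point is that the same on-screen item gets one handler for a short tap and a different one for a long press.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Toast;

// Minimal sketch: one on-screen item, two outcomes depending on press duration.
// R.layout.main and R.id.icon are placeholders for whatever layout you actually have.
public class ClickVsLongPress extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        View icon = findViewById(R.id.icon);

        // A short tap fires the ordinary click handler.
        icon.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Toast.makeText(v.getContext(), "tapped", Toast.LENGTH_SHORT).show();
            }
        });

        // Holding past the long-press timeout fires this instead.
        icon.setOnLongClickListener(new View.OnLongClickListener() {
            @Override
            public boolean onLongClick(View v) {
                Toast.makeText(v.getContext(), "long pressed", Toast.LENGTH_SHORT).show();
                return true; // consume the event so the tap handler doesn't also run
            }
        });
    }
}
```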
And then there’s multitouch, of course. The pinch-to-zoom gesture still feels awkward to me, but it seems like just the start of what’s possible. I have one app on my Android phone that does interesting things with tap patterns. For example, if you tap-press (i.e., a double tap where you ‘hold’ the second tap), you can then slide your finger left and right to zoom in and out. Really, the possibilities are endless, though some standards will need to evolve for this to be efficient (awkward though pinch-to-zoom is, it’s become a de facto standard that everyone understands).
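If you’re wondering how an app might recognize that tap-press-and-slide gesture, here’s a rough sketch of one way to do it with Android’s GestureDetector. This is my guess at an implementation, not how that particular app actually works; zoomBy() is a stand-in for whatever the view does when it zooms, and the scaling factor is arbitrary.

```java
import android.content.Context;
import android.view.GestureDetector;
import android.view.MotionEvent;
import android.view.View;

// Rough sketch: double-tap, hold the second tap, then slide left/right to zoom.
// zoomBy() is a placeholder for however the view actually applies the zoom.
public class TapPressZoom extends GestureDetector.SimpleOnGestureListener {
    private float lastX;

    @Override
    public boolean onDoubleTap(MotionEvent e) {
        lastX = e.getX(); // the second tap just went down; remember where the hold started
        return true;
    }

    @Override
    public boolean onDoubleTapEvent(MotionEvent e) {
        // Fired for the events that follow the second tap while the finger stays down.
        if (e.getAction() == MotionEvent.ACTION_MOVE) {
            float dx = e.getX() - lastX;
            lastX = e.getX();
            zoomBy(1.0f + dx / 500f); // slide right to zoom in, left to zoom out (arbitrary scale)
        }
        return true;
    }

    private void zoomBy(float factor) {
        // Placeholder: apply the zoom factor to whatever is on screen.
    }

    // Wiring: route the view's touch events through a GestureDetector.
    public static void attachTo(Context context, View view) {
        final GestureDetector detector = new GestureDetector(context, new TapPressZoom());
        view.setOnTouchListener(new View.OnTouchListener() {
            @Override
            public boolean onTouch(View v, MotionEvent event) {
                return detector.onTouchEvent(event);
            }
        });
    }
}
```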
I’m guessing the Natal experience will be closer to the iPad than to a PC. With the whole “body as controller” approach, I can’t imagine MS putting a cursor on-screen, though maybe they will. (In some cases they may have to.) I think the strength of Natal will be more in ‘gesture controls’ than in on-screen buttons to be pressed.
I don’t have a real point to this post; I’m just pondering. As game consoles become more generalized devices, they’re borrowing from other devices and/or evolving their UIs. At the same time, the iPad and the Android tablets soon to follow are establishing a completely different paradigm.
So what does the future look like? Will the “mouse & pointer” combo become some quaint idea of yesteryear? Probably not. Touch interfaces are wonderful for devices that you hold in your lap or that lie flat on a low table, but as soon as you have a vertical surface, it’s been shown that fatigue sets in pretty quickly. Any time you have to manipulate a device above heart level, it becomes an issue over time. For a fast transaction like using an ATM you’d never notice this, but in an hour-long touch-gaming session where you have to hold your arms up to manipulate the game, you’d definitely feel it. Lowering the screen, of course, leads to neck strain and back problems.
It’s an exciting time, and I feel like computing/gaming/human-machine interfacing is poised on the cusp of a major upheaval, one that will lead to improvements in the way we manipulate these devices we’re so enamored with.