Yesterday I was working on an ITWorld post about Netflix Streaming on the Wii. In describing the navigation of your queue I said “You can click on the arrows on each end of the nav bar…”
That sentence didn’t register until this morning when I was proofreading. Click on the arrows. Clicking, on a video game system. That’s new. Now granted, there’ve been torturous control systems in console games before where you’d move a cursor via analog stick and then press a controller button to ‘click’ on an on-screen button, but the experience has always been pretty awful. And plenty of Wii games (my favorite Wii games, in fact) use a point and click interface. But doing a non-gaming task on the Wii really made me aware that I was doing something different.
On a computer, of course, we click constantly; the entire modern computer interface is built around moving a cursor and clicking mouse buttons. But the Wii is the first console that's successfully brought that metaphor over to game systems. Presumably PlayStation Move will do this as well. But not Natal (see below).
And then there's the iPad, which takes its UI from smartphones. On the iPad there's no concept of a cursor. You can still 'click' things, but the feeling is different from clicking with a mouse, where you can see the cursor. You can't roll over interface items to get helper pop-up text or anything along those lines [no, I haven't used an iPad, I'm extrapolating from using smartphones]. On the Android platform, at least, the outcome even changes depending on how long you press: there's an ordinary tap, and then there's the "long press," which generates different results from the same icon/link/on-screen item.
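For the developers reading along, here's a rough sketch (in Kotlin, with made-up names like wireUpItem and placeholder toast messages, not any particular app's code) of how that plays out on Android: the very same on-screen item gets one handler for a tap and a different one for a long press.

```kotlin
import android.view.View
import android.widget.Toast

// Illustrative only: "item" is any tappable View (an icon, a link, a list row).
fun wireUpItem(item: View) {
    item.setOnClickListener { v ->
        // Ordinary tap: the default action, e.g. open the item
        Toast.makeText(v.context, "Opened", Toast.LENGTH_SHORT).show()
    }
    item.setOnLongClickListener { v ->
        // Long press: a different outcome from the very same item
        Toast.makeText(v.context, "More options", Toast.LENGTH_SHORT).show()
        true // handled, so the ordinary click won't also fire
    }
}
```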
And then there's multitouch, of course. The pinch-to-zoom gesture still feels awkward to me, but it seems like just the start of what's possible. I have one app on Android that does interesting things via tap patterns. For example, if you tap-press (i.e., a double tap where you 'hold' the second tap) you can then slide your finger left and right to zoom in and out. Really the possibilities are endless, though we'll need some standards to evolve in order to be efficient (awkward though pinch-to-zoom is, it's become a de facto standard that everyone understands).
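And for the curious, here's a similarly rough Kotlin sketch of how pinch-to-zoom can be wired up on Android with the platform's ScaleGestureDetector. The ZoomableImageView class and the zoom limits are purely illustrative; the point is just that the system hands you a scale factor from the two-finger gesture and you decide what it means.

```kotlin
import android.content.Context
import android.util.AttributeSet
import android.view.MotionEvent
import android.view.ScaleGestureDetector
import android.widget.ImageView

// Hypothetical ZoomableImageView: pinching scales the displayed image.
class ZoomableImageView(context: Context, attrs: AttributeSet? = null) :
    ImageView(context, attrs) {

    private var zoom = 1f

    private val scaleDetector = ScaleGestureDetector(context,
        object : ScaleGestureDetector.SimpleOnScaleGestureListener() {
            override fun onScale(detector: ScaleGestureDetector): Boolean {
                // scaleFactor > 1 when the fingers spread apart, < 1 when they pinch
                zoom = (zoom * detector.scaleFactor).coerceIn(1f, 5f)
                scaleX = zoom
                scaleY = zoom
                return true
            }
        })

    override fun onTouchEvent(event: MotionEvent): Boolean {
        scaleDetector.onTouchEvent(event) // feed multitouch events to the detector
        return true
    }
}
```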
I'm guessing the Natal experience will be closer to the iPad than to a PC. With the whole "body as controller" idea, I can't imagine MS putting a cursor on-screen, though maybe they will. (In some cases they may have to.) I think the strength of Natal will lie more in 'gesture controls' than in on-screen buttons to be pressed.
I don’t have a real point to this post, I’m just pondering… as game consoles become more generalized devices, they’re borrowing from other devices and/or evolving their UI. At the same time the iPad (and the Android tablets that are soon to come) are establishing a completely different paradigm.
So what does the future look like? Will the "mouse & pointer" combo become some quaint idea of yesteryear? Probably not. Touch interfaces are wonderful for devices that you hold in your lap or that lie flat on a low table, but as soon as the screen is a vertical surface, it's been shown that fatigue sets in pretty quickly. Any time you have to manipulate a device above heart level, it becomes an issue over time. For a fast transaction like using an ATM you'd never notice this, but in an hour-long touch-gaming session where you have to hold your arms up to manipulate the game, you'd definitely feel it. Lowering the screen, of course, leads to neck strain and back problems.
It's an exciting time, and I feel like computing/gaming/human-machine interfacing is poised on the cusp of a major upheaval, one that will lead to improvements in the way we manipulate these devices we're so enamored with.
I like the idea behind the Wii remote, but I find it too imprecise for anything other than broad strokes, and I'm not sure Move or Natal will be any more accurate. Maybe it's just the years of conditioning on a keyboard and mouse, but I find controlling the Wii a bit sloppy. I'm hoping that if this is the way UIs are moving, they can give us the kind of tight controls we're used to on the PC.
I think that's kind of a design issue. If someone builds a Wii application that requires precise pointer control, then they've failed at designing for the platform. In the case of the Netflix integration, all the UI elements are big, easily hit controls, so it works great.
I'm sure you've played plenty of games where you have to click on a tiny moving object and found doing so problematic.
It’s a matter of making the target match the fidelity of the pointer.
Honestly I’ve had the same problem with some touch controls. The end of your finger is a pretty large surface and I’ve encountered apps that have a couple of small targets positioned too close together. I aim for one and hit the other.
I do not like touch screens. I am a stickler for keeping my screens clean, and I would constantly be spraying lens cleaner on the screen and using my microcloths every time I touched it. I have a phobia about smears, fingerprints, and dust on my screens. At my last job I would walk around cleaning everyone's monitor screen, and seeing people who had a layer of dust on their screen gave me the shakes (not really). The worst were people who would write on their screens.