
Virtual Worlds Require Better Tools

When we interact with the world around us, how do we do it? Do we press soft foot soles to the floor, feeling the cold flow up our legs? Bend over the fridge, wiping the hair out of our eyes as we do so, smelling the mix of meats and cheeses within?

Or do we interact with a keyboard, and a mouse?

The disparity is startling.

Just how do we interact with a virtual world? There has to be something better than keyboard and mouse. The logical answer is with voice, and our actual body movements. The only problem is that cuts out the vast majority of those who stand to benefit from VR the most: Those individuals whose physical forms are either undesirable, or simply do not work.

Futurists quickly point to the obvious answer of a direct brain interface, linking our thoughts straight to the simulation. That would be ideal, but reluctantly we have to admit it is decades off, at least at anything like full-body reliability.

There is an emerging critical issue of how we interface with vast datasets, as raw data is increasingly infused with additional layers of optional information, adding or changing complexity on a whim. Complexity so vast can only be fully understood by rendering it into a 3D format: a landscape through which we can move, experience, and understand.

This use of VR to understand complex data, and to extrude viable life-worlds out of it, both deliberately and by unanticipated emergence, has been gathering pace since the 1990s, and is moving forwards with unprecedented swiftness. VR gameworlds have used gamepads and joysticks for decades now, trying to add extra means of input; interfaces both ergonomic and intuitive. They are not enough, not to cope with the increasing data overload.

We are rapidly closing in on the Metaverse concept: a complete, inseparable melding of computer-created environments, both stand-alone and shared, intertwining symbiotically with real-time data from non-immersive Internet-based services. This new form of existence is in turn wrapped around, and flows through, streaming data from the world outside, until all jumble together in one flowing whole. Identity is becoming a fluid concept, and purely physical spaces are in the very beginnings of ceasing to matter. Ceasing, that is, to all but core governmental authorities, those who have only just embraced plasma monitor screens; stuck in perpetual flatland.

Even they are picking up the pace, playing the catch-up game. Even they are not immune.

Both government and big business ignore the ever more crystalline march of these long-term trends towards immersion and 3D visualisation at their peril. Virtual Reality and Augmented Reality systems have been growing for forty years, since the late 1960s, when the Sword of Damocles and the Sensorama first appeared. The field has surfaced many times since then, with some innovations sticking: ways of helping business manage assets far more efficiently; ways of making profit margins double, then triple, as data is more easily understood.

They peak and trough like any industry. There have been half a dozen periods when the groundbreaking technologies of VR and AR reared their heads, only to be beaten back down, out of the public eye, by limitations in technology; the mid-90s 'bubble burst' was simply the most recent and greatest to date. The technology never stops, it never sleeps, and we are entering another time of emergence: another period when VR and AR buzzwords are all around, and the technology is mature enough to support them.

As these virtual environments proliferate, those of us who use them, build them, and work and play within them will increasingly smack into hardware limitations. 3D accelerators and CPUs, both of which advance under Moore's law, are relentlessly pounding away at some of the basic ones, such as the sheer, raw processing power needed to run these worlds.

However, that is not enough. Immersive spaces demand immersive movement and immersive interaction. Thus, if we are to sustain the current level of integration and growth of virtual environments, better tools than the mouse and keyboard are going to have to become commonplace.

We will not get from where we are, to brain-machine interfaces in a single leap, but the changes are beginning, even now.

Basic Tools

The Wanda was the first of the wands, or 3D pointers, which are essentially mice working in three dimensions, with six degrees of freedom. These are becoming much more common, helped in part by the popularity of the Wii, another device using the selfsame system, which has captured the public heart.

A standard computer mouse has three degrees of freedom at most. It can move a viewpoint or a mouse pointer left and right, or up and down, and it can sometimes zoom in and out, if the 3D virtual environment supports it, by clicking and holding the buttons. This is not ideal, as it means all the mouse's functions are taken up with moving, with nothing left over for selecting.

Devices such as the Wanda, the Wii, the 3D Spaceball, or a hundred other names which make up the family of 3D pointers, have six degrees of freedom, not three. They can move along or around every possible axis.

  • Left-right
  • Up-down
  • In-out
  • Tilt - rotation about the side-to-side horizontal axis (pitch). With a user, it usually involves a head nod or other purely vertical rotation.
  • Pan - rotation about the vertical axis (yaw). Panning your head involves turning it to the side.
  • Roll - rotation about the front-back axis, like a corkscrewing plane.

Together, these form every possible movement that can be made from a single point in 3D space. The devices capture them because they are themselves held in 3D space, cupped within your hand, and pick up every slight movement in any direction, and every twist or turn.
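The three translations and three rotations above can be sketched as a single pose that a 3D pointer continually updates. This is a minimal illustration, not any particular device's API; the class and parameter names are hypothetical.

```python
import math

def _wrap(angle):
    """Keep an angle within (-pi, pi] for readability."""
    return math.atan2(math.sin(angle), math.cos(angle))

class Pose6DOF:
    """Six degrees of freedom: left-right, up-down, in-out,
    plus pan (yaw), tilt (pitch), and roll."""

    def __init__(self):
        self.x = self.y = self.z = 0.0          # translations
        self.pan = self.tilt = self.roll = 0.0  # rotations, radians

    def apply_delta(self, dx=0.0, dy=0.0, dz=0.0,
                    dpan=0.0, dtilt=0.0, droll=0.0):
        """Accumulate one small movement reported by the pointer."""
        self.x += dx
        self.y += dy
        self.z += dz
        self.pan = _wrap(self.pan + dpan)
        self.tilt = _wrap(self.tilt + dtilt)
        self.roll = _wrap(self.roll + droll)

p = Pose6DOF()
p.apply_delta(dx=0.1, dpan=math.pi / 2)  # slide right, quarter-turn pan
```

A real tracker would stream such deltas dozens of times a second; the point is simply that six numbers capture every movement a hand-held pointer can make.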

What they cannot do is replicate an entire body in VR. They are, in effect, every possible movement of a single joint, just one. Wherever you point, you look; that is, unless you have another device guiding where you look whilst you move your pointer.

The human brain perceives depth only because it has two eyes for visual input. Each eye sees a slightly different angle of the same scene (as evidenced when you hold your finger in front of your nose, then look at it with both eyes, and then with each eye closed in turn: the image shifts).

These two separate views are combined in the brain to form a single, 3D image, with parts of the data from each eye used to work out relative distances.
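The distance calculation the brain performs can be sketched with the standard stereo triangulation formula: the nearer an object, the larger the shift (disparity) between the two views. The function name and example numbers here are illustrative, not taken from any specific system.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate distance from the shift of a feature between two views.

    focal_px:     focal length of each view, in pixels
    baseline_m:   separation between the two viewpoints, in metres
    disparity_px: horizontal shift of the same feature between views
    """
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / disparity_px

# A nearby object shifts far more between the eyes than a distant one:
near = depth_from_disparity(800, 0.065, 52)   # large shift -> 1.0 m away
far = depth_from_disparity(800, 0.065, 5.2)   # small shift -> 10.0 m away
```

This is exactly why the finger-in-front-of-the-nose test works: at such short range the disparity is enormous, so the apparent jump between eyes is obvious.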

To replicate this effect in VR, you require a device that can do the same thing: give each eye a separate view. This gave rise to the original Sword of Damocles interface, and to other unwieldy boxes on people's heads - HMDs. These days HMDs are far smaller, and typically weigh only eight to twenty ounces. However, they are not very social things. Whilst they are ideal for individual work within VR, where the outside world is just an annoyance, they are no help where you are working with colleagues as a group, in a single physical location, within the same virtual space. Again, if you are alone but have to be able to see the physical world, to interact for any reason, an HMD is not your friend.

For small-scale use, enter shutter glasses.

Shutter glasses work with shutters that switch on and off many times a second, alternating between the eyes. This happens too quickly for your brain to process, but forces a situation whereby whenever one eye is looking at the display, the other is looking at a transparent panel, letting the outside world through. The shutter glasses and the display system (monitor, holoprojector, retinal laser, and so on) work in tandem, either through wires or, more commonly now, wirelessly. The glasses sync their shuttering speed precisely to the refresh rate of the display system, so that the display can show images appropriate to the left and right eyes at the appropriate times.
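The alternating schedule is simple to sketch: on each display refresh, one eye's shutter opens while the other stays opaque, so each eye effectively sees half the display's refresh rate. The function below is a hypothetical illustration of that timing, not a driver for any real pair of glasses.

```python
def shutter_schedule(refresh_hz, n_frames):
    """Return (eye, start_time_s) for each of the first n_frames
    display refreshes, alternating left and right shutters."""
    period = 1.0 / refresh_hz
    return [("left" if i % 2 == 0 else "right", round(i * period, 6))
            for i in range(n_frames)]

# At a 120 Hz refresh rate, each eye receives a fresh image 60 times
# a second, which is why fast displays matter for comfortable stereo.
for eye, t in shutter_schedule(120, 4):
    print(f"{t:.6f}s -> {eye} eye open")
```

Halving the effective rate is the key design trade-off: a 60 Hz display driven this way gives each eye only 30 images a second, which many people perceive as flicker.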

The first shutter glasses were mechanical in nature, and actually lowered blanking plates in front of each eye, making them heavy, unwieldy, and too hot, from friction and power requirements, to wear right next to the eyes for very long. These days most are LCD based, with the liquid crystal panels polarising, and thus blanking out, at alternate times, making them lightweight, efficient, and ever cheaper.

Combined with a head tracker (basically a second 3D pointer), they enable you to feel immersed in a virtual world that feels as real as the physical one, and to work within it via pointer, whilst your gaze follows your head.
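Decoupling gaze from pointing amounts to deriving the view direction from the head tracker's pan and tilt alone, leaving the hand-held pointer free for selection. A minimal sketch, assuming pan is rotation about the vertical axis and tilt about the horizontal, with +z as "straight ahead" (conventions here are illustrative):

```python
import math

def view_direction(pan, tilt):
    """Unit vector along the user's gaze, from head-tracker angles
    in radians: pan (yaw) and tilt (pitch)."""
    return (math.cos(tilt) * math.sin(pan),   # x: left-right
            math.sin(tilt),                   # y: up-down
            math.cos(tilt) * math.cos(pan))   # z: in-out

# Head level and facing forward looks straight down the +z axis,
# while the 3D pointer supplies a separate ray for selection.
gaze = view_direction(0.0, 0.0)
```

With this split, nodding or turning the head re-aims the camera without disturbing whatever the pointer is manipulating.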
