
Natural Forests versus VR forests

Whilst VR systems have come a long, long way in replicating the feel of a forest, there are still a great many elements that make up the feel of a natural environment which we have not yet really begun to add to a VR recreation, for many different reasons.

The following list is by no means exhaustive, but it showcases clearly just how far we have yet to go:

Dappled, dynamic sunlight filtering in through the branches

Sunlight streams through the air at varying intensities, shaped by cloud cover or pollutants in the atmosphere. It ripples, changing intensity from one moment to the next, at the best of times. On a cloudy day, every time the sun slips behind a cloud, the light visibly dims and shadows fade. That is without the presence of a forest.

With a forest, the light hits each individual leaf and branch. These stop some of the sunbeams, so the remaining beams that lance through carry the outline of leaves and branches upon them, casting shadows upon the ground.

Those same leaves, and the branches they are attached to, are usually moving in a slight breeze, changing the patterns of light that reach the ground in real time.

This assumes that all the leaves are at the top of the canopy when, to add to the confusion, they are not. Lower branches will catch the light as well, reflecting the green of their leaves or the vibrant brown of branches back out. Small creatures such as squirrels scurrying along branches will cast flashes of light from their fur as they go, also adding a shadow upon the ground.

To achieve this type of effect requires ray casting: a method of backtracking a ray of light from the viewer's eye to its source. Whether there is light or not, the potential path of every ray is traced, until or unless it hits an object that might obstruct it.

This wastes a great deal of processing power, but it is the only method we as yet have to produce anything similar. Of course, the sheer volume of processing power required continues to elude us. Modern graphics cards have inbuilt chips for hardware acceleration of ray tracing, but even they can only handle it over a limited area, with a limited number of ray bounces. At this stage, it will be many, many years before true dappled sunlight in real time is possible.
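The core idea can be sketched in a few lines. Everything below is illustrative: the sphere-shaped "leaves", the translucency factor, and both function names are invented, and a real renderer traces per pixel against far more complex geometry.

```python
# Minimal ray-casting sketch: backtrack a ray from a ground point toward the
# sun, and dim the light for every "leaf" (modelled as a sphere) it passes.
import math

def ray_hits_sphere(origin, direction, centre, radius):
    """True if the ray origin + t*direction (t > 0) intersects the sphere."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is normalised, so a == 1
    return disc >= 0 and (-b + math.sqrt(disc)) > 0

def light_at(point, sun, leaves):
    """Trace from a point toward the sun; each occluding leaf dims the ray."""
    d = [s - p for s, p in zip(sun, point)]
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    intensity = 1.0
    for centre, radius in leaves:
        if ray_hits_sphere(point, d, centre, radius):
            intensity *= 0.3        # translucent leaf: attenuate, don't block
    return intensity
```

Even this toy version tests every leaf against every ray; the real cost comes from doing that for millions of rays, with bounces, every frame.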

Screenshot demo of Nvidia 8800, showing 2007's state-of-the-art in raytracing, by the patterns of light

Tree trunk patterns standing like fingerprints

Every tree is unique. No two are ever the same. Even members of the same species do not look fully alike. They share, at best, the same mathematical characteristics in leaf shape, bark colour, branch division, and overall structure. However, when you look closer they are not the same at all.

In modelled forests, typically only a very limited number of models is used, perhaps five or six for each species of tree. These are then twisted and turned to face the viewer from different angles, giving the illusion of being different trees of the same species, when in fact they are not.

This is because modelling a tree is still, typically, a long, arduous task, so making every tree unique is beyond feasibility - not to mention the storage requirements in disk space for many millions of trees.

If we are ever to solve this problem, we need a new way of making them, perhaps growing them organically from algorithms, so that the results share the same characteristics of their species, but with each one unique, and perhaps capable of further growth.
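One plausible route is a simple L-system, sketched below: every tree of a species shares the same rewriting rules, while a per-tree random seed varies the branching, so no two results are identical yet all look related. The axiom, rules, and probabilities here are invented for illustration.

```python
# "Growing" trees from an algorithm rather than modelling them: an L-system
# where a per-tree seed decides, segment by segment, whether to fork or extend.
import random

def grow(seed, generations=4):
    rng = random.Random(seed)       # same seed -> same tree, reproducibly
    tree = "T"                      # axiom: a single trunk segment
    for _ in range(generations):
        out = []
        for symbol in tree:
            if symbol == "T":
                # each segment either forks into two branches or extends
                out.append("T[+T][-T]" if rng.random() < 0.6 else "TT")
            else:
                out.append(symbol)  # brackets and turns are kept as-is
        tree = "".join(out)
    return tree                     # a string a turtle-renderer could draw
```

Because the tree is a seed plus a rule set rather than a mesh, storing "many millions of trees" costs a few bytes each, and regrowing one on demand is cheap.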

Unique patterns of bark flow on every tree

Each tree has a rough layer of bark covering it. On different species this bark is different colours and textures, but it is always different between trees. Like a fingerprint, each individual tree, even if undamaged, has a different pattern of swirls, grooves, and fragmented blocks of bark.

At the moment in simulated trees the trunk is typically texture-mapped, with a bark-like feel. This is not usually high resolution, a fact which shows when you get close to it.

It will be a long time, if ever, before we get to the stage where individually modelling bark chunks on trees is considered, as each additional quad of a model has to be rendered, taking time from the frame rate of the overall scene. Having a million extra quads per tree to render in real-time, in a forest scene with 2,000 trees, would tax the best systems currently available.

What we could do is use parallax mapping. Parallax mapping is an algorithm that applies a heightmap to the texture, using the colour values of its pixels to work out how far each point should appear to stand out or sink in, creating the illusion of peaks and troughs without adding geometry.
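As a rough illustration of the idea (the function name, heightmap interface, and scale factor are all invented; real parallax mapping runs per fragment on the GPU): the texture lookup is simply shifted along the view direction in proportion to the sampled height, so bark ridges appear to lean and occlude as the viewpoint moves.

```python
# Parallax-mapping sketch: shift texture coordinates along the view
# direction by the heightmap value, faking depth with zero extra quads.

def parallax_offset(u, v, heightmap, view_x, view_y, scale=0.05):
    """Shift texture coords (u, v) by the sampled height along the view vector."""
    h = heightmap(u, v)             # height in [0, 1] at this texel
    return (u + view_x * h * scale, # higher texels slide further
            v + view_y * h * scale)
```

A flat heightmap yields no shift at all; a raised ridge slides toward the viewer, which is exactly the cheap depth cue the technique trades on.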

Unique tree trunk curves on every tree

As with the bark patterns, trees don't often grow straight. Instead, each trunk grows in an approximation of the direction it is supposed to, with bends and changes in direction throughout. It looks more like a rippling rod than a staunch upright: there are knot-holes where fallen branches once protruded, mounds where an infection once set in, even twists where another tree's branch once grew in the way.

The only way to replicate this fully is, to expand on what was said before, to grow trees using procedural content generation rather than model them. By growing them in place, they can interact with neighbouring trees, growing around each other as need be.

No two trees identical in branch arrangement

Finally, for the uniqueness of every tree, even trees of the same species rarely, if ever, put out branches in the same places as one another. If they do, when those branches themselves split off, they move in a different direction, or split in different places.

This is basically the final nail in the coffin of the idea that using the same model, twisted round, ever really works.

Lumpy, hillocky ground, with many small differences, everywhere you look

Forest ground is never flat. Frequently it exists on the side of an existing hill or hills, with streams meandering through it. Rain washes dirt away until the roots bind it in place. Trees fall over, their roots clawing up great chunks of earth, leaving a depression in their place. The fallen tree trunk sits there, slowly being covered with moss and eaten away.

Over time, leaves fall on it, and winds blow them into drifts. They press to the sides of the fallen plant, and mulch down. Eventually more leaves fall, and more. Year on year organic material mulches down, until a round, tubular, earthy mound lays where the tree once fell, following rough outlines of branches, perhaps. Another tree falls across the mound of the old, breaking the soft earth and the remains of the wood. It infills.

Another, and another, and another. Over hundreds, perhaps thousands of years, this pattern continues, taking a flattish sloping landscape and riddling it with bumps, shallow cliffs and undulations.

This could be simulated with relative ease. It is usually not, for no real reason other than not having the models to place to simulate this effect. Terrain generation systems currently in use create an effect over too wide an area, with too low a fidelity of change, to be useful.

It is pointless trying to create this kind of subtle effect, when the terrain can only handle one point change every five metres or so.

This will change in time as processing power continues to increase, and it becomes feasible to increase terrain fidelity, using algorithms to sculpt terrain rather than solely by hand.
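One classic family of such algorithms is midpoint displacement, sketched here in one dimension: the sample spacing is repeatedly halved, and each new midpoint is nudged by a shrinking random amount, so bumps appear at every scale - from the broad slope down to the small hillocks and hollows described above. Parameters are illustrative.

```python
# Midpoint-displacement sketch: refine a flat profile into bumpy terrain
# by repeatedly inserting randomly offset midpoints at shrinking amplitude.
import random

def midpoint_displace(levels, roughness=0.5, seed=42):
    rng = random.Random(seed)
    heights = [0.0, 0.0]            # two endpoints of a flat profile
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + rng.uniform(-amplitude, amplitude)
            refined += [a, mid]
        refined.append(heights[-1])
        heights = refined
        amplitude *= roughness      # finer detail gets smaller bumps
    return heights
```

Each extra level doubles the fidelity, which is precisely the trade-off in the text: more levels give the one-point-per-few-centimetres detail a forest floor needs, at a cost in memory and processing.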

Random scatterings of sticks, branches, bits of wood and stone upon the ground

Forests are not tidy places. Bits fall off trees when animals jump on them or swing from them. Storm damage, rotten wood felled by disease, branches breaking off smaller bushes. Leaves cascade down annually, and small stones, worked loose by roots, protrude.

Modelling this mess is possible by means of noise functions, laying down models of small sticks and stones semi-randomly, then letting users and AIs move them about carelessly. The problem so far has come, predictably enough, from the effort required to make tens of thousands of tiny models for the purpose.

As stated above, that will change once we are generating them procedurally.

Random tufts growing through the ground under the canopy

Small plants are often overlooked; they grow wherever they can find purchase. Realistically that is between tree roots and wherever enough dappled sunlight falls. In nature, many millions try and fail, for every one that claws its way to the surface.

In simulation, again, we just do not have the raw processing power to spare yet, to run sub-simulations around every tree or area of ground to see where millions of spores land, and which ones may or may not grow.

That's not to say that we never will, but for the foreseeable future it's not going to be a priority in development, or a use of processing resources, in all but the most academic situations.

What we could do is something similar, but with regular scans - say, once a week - of locations around a tree where spores might turn into seedlings, dismissing out of hand any location where growth is unlikely, without spending much processing power.
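Such a scan could be sketched like this. The light threshold, root clearance, sample count, and the `light_at` callback are all invented assumptions; the point is that a few dozen cheap checks per tree per week replaces simulating millions of spores.

```python
# Cheap "weekly scan" sketch: sample candidate points around a tree, discard
# any that fail quick light and root-clearance checks; survivors may sprout.
import math, random

def weekly_scan(tree_pos, light_at, root_radius=1.0, samples=32, seed=0):
    rng = random.Random(seed)
    seedlings = []
    for _ in range(samples):
        angle = rng.uniform(0, 2 * math.pi)
        dist = rng.uniform(0, 5.0)  # candidates within 5 m of the trunk
        x = tree_pos[0] + dist * math.cos(angle)
        y = tree_pos[1] + dist * math.sin(angle)
        if dist < root_radius:
            continue                # too close to the roots: dismissed cheaply
        if light_at(x, y) < 0.4:
            continue                # not enough dappled light reaches here
        seedlings.append((x, y))
    return seedlings
```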

This would of course only happen in locations where the forests are a major feature, but it is both doable and desirable. Semi-random little tufty plants bring with them a feeling of life to the forest, a feeling hard to recreate without using the same spontaneity of life itself.

Soft, bendable branches that move in the wind and under a hand

In nature, branches bend and flow. The wind rustles the leaves with a soft sighing sound. Foliage crowns toss like salad in gales whilst on the ground little dust devils dance with fallen debris. The canopy high above ebbs and flows, the very light changing as the tops of trees shift position.

A branch bearing your weight bounces springily long before it cracks, and you can push the twined masses of small bushes aside to make headway through.

In simulated forests currently, that does not happen. Each model is solid and unyielding, mimicking life yet standing like stone. You cannot push the smallest branch out of your way without moving the entire model.

The problem is that we are still using skinned models for most VR landscapes. There are other methods - particle swarms, metaball modelling - that create an organic feel. The problem, of course, is that these 'bendy' methods are extremely processor intensive compared to mere skinned solids.
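As a flavour of what a 'bendy' method involves, here is a branch tip treated as a damped spring: a push deflects it, and it springs back over time rather than standing like stone. The stiffness and damping values are invented, and a real system would run this for thousands of points at once - which is exactly where the processor cost comes from.

```python
# Damped-spring branch sketch: an initial push deflects the branch tip,
# then a spring force plus drag integrates it back toward rest.

def settle_branch(push, stiffness=8.0, damping=2.0, dt=0.01, steps=2000):
    """Apply a push to the branch tip, then step it back to rest."""
    offset, velocity = push, 0.0
    for _ in range(steps):
        accel = -stiffness * offset - damping * velocity  # spring + drag
        velocity += accel * dt
        offset += velocity * dt     # semi-implicit Euler step
    return offset                   # ~0: the branch has sprung back
```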

Flowing, bending, rippling leaves

Exactly the same as with bendable branches: if every leaf is rendered, and every leaf can curl or bend, a phenomenal amount of processor power is required. So far, even creating the individual leaves on hundreds of trees in real-time is beyond us. CGI films take ten or more minutes per frame to do that, without any of the bending.

Screenshot from Shrek 2, CGI film circa 2004.
To render each individual leaf like this took ten minutes per frame on a dedicated mainframe;
real-time demands 60 frames a second, or more

Rotting stumps and fallen trees covered in mosses

Another problem with fixed, skinned models is: how do you rot them away, or infest them with disease? Retexturing helps, to show levels of rot upon the surface, but they cannot change shape, deform, and dissolve unless they are made of something other than wireframe models.

Deformable solids are required, but yet again, for real-time work we do not have the processing power.

If it seems that everything is coming down to not yet having enough processing power, that is because it is. We are at the very beginning of being able to create realistic environments. After all, this document is attempting to showcase things we cannot yet do, and why.

Smells of damp, decay, and life

A forest smells alive. It smells damp, it smells of blossoms, of mouldy earth, of piles of animal dung, of a thousand creature odours, rotting wood and somehow a richness that is hard to describe.

Smell is a sense often overlooked in a virtual world. Developers discount its importance, as a minor sense, when in truth we rely on it as an input channel to flesh out the world around us.

As a result, most computer systems, save for dedicated simulators, do not have the hardware to provide smells, even if they are coded into the environment. No damp, clay-like earth, no scent of honeysuckle, or the remains of a fox's meal. No campfire stumbled across by smell. No stagnant pools - is that a bad thing?

Simple, primitive scent hardware does exist, but we still lack a killer application to make it a must-have in every home. Until there is one, many virtual environments will not include the capability, making this a vicious circle.

Soft, spongy ground

The ground you tread is earthy. Save for where it has been compressed by many, many travellers, it has a certain spongy feeling to it; it gives when you step, and you can feel this travelling up your frame. If your feet hurt, then treading on this ground is far more pleasant than treading on concrete. If you press down, you will leave depressions in the earth - but will probably find hard stones in the process.

This squishiness is again, very hard to represent if the terrain is a wireframe structure. It needs to have give in it, in order to work.

So, we are looking at a particle swarm system for the ground. Particle swarms are literally swarms of particles that all keep their relative place in a 2D or 3D structure. Depress one and it moves, as do the particles around it, by lesser amounts. They are used to accurately recreate water, fire, organic tissue - anything that oozes, flows, or moves freely.
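A minimal sketch of that depression behaviour on a particle grid (the falloff shape and radius are invented for illustration): pressing one point down also lowers its neighbours by lesser amounts, leaving a footprint-like dent that fades with distance.

```python
# Particle-grid ground sketch: depress one point and let neighbours
# within `radius` follow by amounts that fall off linearly with distance.

def depress(grid, cx, cy, depth, radius=2):
    """Push the particle at (cx, cy) down by `depth`; neighbours follow, fading."""
    for y in range(len(grid)):
        for x in range(len(grid[0])):
            dist = max(abs(x - cx), abs(y - cy))
            if dist <= radius:
                grid[y][x] -= depth * (1.0 - dist / (radius + 1))
    return grid

# a 5x5 patch of flat ground, then a footstep in the middle
ground = [[0.0] * 5 for _ in range(5)]
depress(ground, 2, 2, 0.3)
```

Even this toy touches every grid cell per depression; at the density needed for convincing earth, over a whole forest floor, the computational expense mentioned below becomes obvious.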

The down side is of course, the computational expense.

Hardware to feel the squishiness, like with scent, is still the preserve of dedicated systems, due to the cost involved. However, as with scent: create an environment that includes it, one that appeals to enough people, and they will acquire the hardware to deepen and enrich the experience.

Sounds of small animals, seemingly pseudo-randomly

Often we hear things coming before we see them. Hearing is not that accurate; it can usually only tell us roughly whether a sound is to the right or left, never mind in front, behind, above or below. However, the very fact that we can hear something gives us valued clues to the existence of things we cannot yet see. It fills in the world around us, painting a picture beyond our eyes.

It does not need to be huge, just little things that add to the experience: the sound of the wind blowing through the trees, of birds singing, of grass flumping underfoot, leaves crackling, or the clomp of shoe on concrete. At night, the sounds change. Crickets chirp in the bushes, owls hoot at odd, widely spaced intervals. The wind's howl gets louder, animals creep about. These are the things which add atmosphere to a world, that bring it to life.

There is no real reason why these are not added to more synthetic environments - they are added to some, but by no means all. Mostly, then, it is just something overlooked or not considered, yet it immediately doubles the realism of the forest.

Small critters occasionally seen, flowing sinewy about their business

This is something hard to achieve still, because little critters move in organic ways, as opposed to stiff-limbed mannequins swinging at the joints. Sinuous fur flows with every muscle movement, and creatures stop and start, with a degree of intelligent behaviour. At least a rudimentary AI is necessary for each, at a level we do not yet possess.

Currently, we are still at the level of barely being able to replicate half a mouse brain, so it is going to take time to create needs-based artificial intelligence complex enough to replicate a day in a squirrel's life, or even how it would handle itself for five minutes in human presence.

Beyond that, as with the trees, and ground, either metaball modelling or particle swarming is necessary for sinewy movement, and we do not have the processing power to achieve that in real-time.

It should however, be noted that we do have algorithms easily capable, just not the processing power yet.

Signs of damage to varying plants and trees

As with the signs of decay above, how do you show damage from teeth marks on static models? This is slightly different again from disease, however, as it relies on being able to move small animals realistically - to know how they approach the trees in order to create damage, or in order to simulate the damage they would create.

This one is for the more distant future, relying on a convergence of other elements that are themselves not yet in place.

Weaving, chaotic pathways through varying levels of underbrush

Woodland paths are actually something often recreated in the virtual, to allow visitors to stride through the VR version of a natural landscape, admire its beauty, or aid foraging and hunting in its depths.

Sadly, most virtual woodland paths, trails, and tracks, bear little or no resemblance to those found in physical nature. Frequent mistakes include ramrod straight trails, clear of any debris, neatly trimmed edges between the path and forest beyond, or esoteric building materials including stone blocks, marble layering, or even compacted sand or gravel. Often, in fact, virtual forest trails wind between rows of perfectly regimented trees.

When you look at forest trails in VR worlds, there is usually only one path, or a handful of paths, from A to B. To go anywhere else, if it is even possible, you have to wade through undergrowth. Yet in woodlands there are usually thousands of interwoven paths, in various states of use and disrepair: wide main paths, well trod, with splinters wandering off in all sorts of directions, petering out, criss-crossed by animal trails, and sometimes leading to little 'oases' in the undergrowth - places where the ground is muddy, or covered with leaves, with little or no undergrowth around the trunks of some particular species of tree; all around, at the outskirts, is vegetation, save for the occasional trail leading away.

All of the above elements are required for this: humpy, hillocky ground; random fallen branches and rocks; the trails hewn by forest critters; random clumps of plantlife; and bendable, breakable models. A true path system is emergent, rather than created directly.
