It began with the stereoscope of years gone by and found a high-tech revival with the blockbuster “Avatar”: 3D spatial visualization.
For us humans this means that the world we see is not only wide and high but also deep. Our visual system has developed accordingly: we can not only look up and down, left and right, but also judge the spatial distance to objects. Because this is particularly crucial when grasping objects (keyword: hand-eye coordination), the system works best at close range, usually within a few metres.
Both eyes must work together for 3D vision, also called binocular depth perception. This kind of perception is known as “stereoscopic vision”: two eyes provide two slightly different views of an object, which the brain combines into a single three-dimensional image.
Babies are not born with binocular vision; they develop rudimentary binocular depth perception at approximately 6 months of age. Between the ages of 5 and 6, stereoscopic perception is usually fully developed.
In 3D simulation systems, images are shown from two slightly different angles. The user wears glasses with filters that deliver one image to the left eye and the other to the right eye. The brain fuses both images into one and perceives spatial depth.
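The geometric principle behind this can be sketched in a few lines of code. The following is a minimal illustration of the classic pinhole stereo relation (depth = focal length × eye separation ÷ disparity); the function name and all numeric values are hypothetical examples, not parameters of any specific 3D system mentioned here.

```python
# Illustrative sketch: how two horizontally offset views encode depth.
# The closer an object is, the larger the horizontal shift (disparity)
# between its position in the left-eye and right-eye image.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Classic pinhole stereo relation: depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A near object produces a large disparity, a far object a small one
# (focal length 800 px, eye separation 6.5 cm, both assumed values):
near = depth_from_disparity(800.0, 0.065, 40.0)  # large shift -> close
far = depth_from_disparity(800.0, 0.065, 4.0)    # small shift -> distant
print(near, far)
```

This inverse relationship between disparity and distance is what the brain exploits when it fuses the two filtered images into one image with spatial depth.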
There are various filter options, for example polarized, anaglyph, or shutter glasses.
In the standard 3D learning environment of the Cyber-Classroom, users view a 3D TV through lightweight polarized plastic glasses that ensure comfortable, high-quality 3D visualization.
Autostereoscopic devices – which are intended to enable 3D vision without glasses – are still under development. They are not yet suitable for 3D learning in a team.
You will find detailed information about the terms and applications of virtual reality technology in our 3D glossary.