An Analytic Comparison of Alpha-False Eye Separation, Image Scaling and Image Shifting in Stereoscopic Displays
Wartell, Zachary Justin
Hodges, Larry F.
Stereoscopic display is a fundamental part of many virtual reality systems. Stereoscopic displays render two perspective views of a scene, each of which is seen by one eye of the user. Ideally, the user's natural visual system combines the stereo image pairs and the user perceives a single 3D image. In practice, however, users can have difficulty fusing the stereo image pairs into a single 3D image. Researchers have used a number of software methods to reduce fusion problems. Some fusion algorithms act directly on the 3D geometry, while others act indirectly on the projected 2D images or the view parameters. Compared to the direct techniques, the indirect techniques tend to alter the projected 2D images to a lesser degree. However, while the 3D image effects of the direct techniques are algorithmically specified, the 3D effects of the indirect techniques require further analysis. This analysis is important because the fusion techniques were developed for non-head-tracked displays, which have distortion properties not found in the modern head-tracked variety. In non-head-tracked displays, these distortions can mask the stereoscopic image artifacts induced by fusion techniques, but in head-tracked displays the distracting effects of a fusion technique may become apparent. This paper is concerned with stereoscopic displays in which the head is tracked and the display is stationary, attached to a desk, tabletop or wall. It rigorously and analytically compares the distortion artifacts of three indirect fusion techniques: alpha-false eye separation, image scaling and image shifting. We show that the latter two methods have additional artifacts not found in alpha-false eye separation, and we conclude that alpha-false eye separation is the best indirect method for these displays.
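To give a rough sense of what the three indirect techniques do, the following is a minimal sketch in Python. The function names and parameterization are illustrative only and are not the paper's notation: alpha-false eye separation scales the modeled eye separation about the midpoint of the eyes before projection, while image shifting and image scaling act on already-projected 2D image points.

```python
def alpha_false_eye_separation(left_eye, right_eye, alpha):
    """Scale the modeled eye separation by alpha (0 < alpha <= 1),
    moving both 3D eye points toward their midpoint before the two
    perspective projections are computed."""
    mid = [(l + r) / 2.0 for l, r in zip(left_eye, right_eye)]
    new_left = [m + alpha * (l - m) for l, m in zip(left_eye, mid)]
    new_right = [m + alpha * (r - m) for r, m in zip(right_eye, mid)]
    return new_left, new_right


def image_shift(point_2d, dx):
    """Shift a projected 2D image point horizontally; applied with
    opposite signs to the left- and right-eye images, this reduces
    on-screen disparity."""
    return (point_2d[0] + dx, point_2d[1])


def image_scale(point_2d, s, center=(0.0, 0.0)):
    """Scale a projected 2D image point about a center point,
    shrinking on-screen disparities uniformly."""
    return (center[0] + s * (point_2d[0] - center[0]),
            center[1] + s * (point_2d[1] - center[1]))
```

For example, with head-tracked eye positions 6 cm apart and alpha = 0.5, `alpha_false_eye_separation` yields a modeled separation of 3 cm; the subsequent projections are otherwise unchanged, which is why this technique perturbs the 2D images less directly than shifting or scaling them after the fact.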