What could be better for manipulating 3D objects on a screen than... a 3D screen? That is the idea behind the Sphere, which François Bérard has been developing at LIG (Grenoble Computer Science Laboratory) for some time.
This polystyrene ball equipped with markers enables the viewing and manipulation of 3D objects. The user wears glasses and a ring-like device studded with markers on the tip of a finger. These markers are tracked by several cameras positioned around the room, which follow the movements of the ball and the user. In real time, software reconstructs the image that the user should theoretically see at each moment, and this image is projected onto the ball held in the user's hands. “The secret resides in the calibrated projection and the speed at which the calculations are performed”, explains François Bérard, a researcher at LIG and lecturer at Grenoble INP - Ensimag, UGA. “The glasses are synchronised with the projector, enabling each pixel to be displayed in exactly the right place on the sphere to create an illusion of depth.”
The calculation even takes account of the projector's latency by anticipating the user's movements... A virtual pointer is also projected, which can be used to designate objects: users click by tapping on the sphere with the index finger that wears the marker-equipped ring. Users can manipulate objects in 3D, much as they would in a virtual reality environment.
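To make the latency compensation concrete, here is a minimal sketch of the kind of motion prediction such a system might use. It assumes a simple constant-velocity extrapolation and a hypothetical 30 ms end-to-end latency; the function name and values are illustrative, not the lab's actual implementation.

```python
import numpy as np

# Hypothetical end-to-end latency (tracking + rendering + projection), in seconds.
SYSTEM_LATENCY = 0.030

def predict_marker(prev_pos, curr_pos, dt, lookahead=SYSTEM_LATENCY):
    """Constant-velocity extrapolation: estimate where a tracked marker
    (on the glasses, the ball or the ring) will be when the next frame
    actually reaches the sphere, so the projected pixels land in the
    right place despite the projector's delay."""
    prev_pos, curr_pos = np.asarray(prev_pos), np.asarray(curr_pos)
    velocity = (curr_pos - prev_pos) / dt
    return curr_pos + velocity * lookahead

# Example: a ball marker moving 2 cm per tracking frame at 120 Hz.
p_prev = [0.00, 0.10, 0.50]
p_curr = [0.02, 0.10, 0.50]
print(predict_marker(p_prev, p_curr, dt=1 / 120))  # x extrapolated to ~0.092 m at projection time
```

The same prediction would be applied to the head, ball and finger markers before the viewpoint-dependent image is rendered and warped onto the sphere.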
A 3D screen... with no mouse in sight!
The screen of the future needs the mouse of the future! Another project underway at LIG involves replacing the mouse with direct finger pointing on the user's physical desktop, while the user continues to look at the vertical screen. This solution combines the precision of indirect input, as with a mouse, with the speed and efficiency of direct touch, as on a tablet or smartphone screen.
Quentin Zoppis, a third-year student at Grenoble INP - Ensimag, UGA, is doing his end-of-studies project in the laboratory on this subject. To put it simply, he fits mini-markers on the phalanges of the user's ten fingers to model their movements precisely in 3D. Cameras follow the fingers and transfer their movements to a pointer on the screen. The user clicks by tapping directly on the table.
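As a rough illustration of the mapping described above, the sketch below converts a tracked fingertip position on the desk into a pointer position on the screen and registers a click when the fingertip comes into contact with the table. The coordinate frame, active desk area and contact threshold are assumptions for the example, not the project's actual parameters.

```python
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080   # target display, in pixels
DESK_W, DESK_H = 0.40, 0.25       # hypothetical active area on the desk, in metres
TAP_HEIGHT = 0.005                # fingertip within 5 mm of the desk counts as contact

def finger_to_pointer(fingertip_xyz, desk_origin):
    """Map a tracked fingertip position (metres, desk coordinates) to a
    pointer position on the vertical screen, and report whether the
    fingertip is touching the table (i.e. a click)."""
    x, y, z = np.asarray(fingertip_xyz) - np.asarray(desk_origin)
    px = np.clip(x / DESK_W, 0.0, 1.0) * SCREEN_W
    py = np.clip(y / DESK_H, 0.0, 1.0) * SCREEN_H
    touching = z < TAP_HEIGHT
    return (px, py), touching

# Example: fingertip 10 cm right, 5 cm forward, resting on the table.
pointer, clicked = finger_to_pointer([0.10, 0.05, 0.002], desk_origin=[0.0, 0.0, 0.0])
print(pointer, clicked)   # -> (480.0, 216.0) True
```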
Initial tests carried out with participants who were required to point at progressively smaller targets showed that once they had got the hang of the technique, their pointing rate was, on average, higher than with a conventional mouse.
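Pointing at progressively smaller targets is the classic protocol for comparing pointing devices, and a common way to express a pointing rate in such tests is Fitts' index of difficulty divided by movement time. The sketch below shows that standard calculation; it is an assumption about how such a rate could be computed, not necessarily how the laboratory measured its results.

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Index of difficulty (Shannon formulation, in bits) divided by the
    time taken to reach the target, giving a pointing rate in bits per second."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time

# Example: a 2 cm target, 20 cm away, reached in 0.9 s -> about 3.8 bits/s.
print(fitts_throughput(distance=0.20, width=0.02, movement_time=0.9))
```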
*CNRS / Grenoble INP - UGA / Inria / UGA