RENCI Vis Group Multi-Touch Blog

Developing multitouch hardware and applications for research and experimentation.



Introduction to Multi-Touch

1 September, 2008 (17:32) | Multi-Touch

The multi-touch interface allows a user to interact with an application or system using more than one finger at a time, as in chording and bi-manual operations. In addition to multiple touches, such sensing devices can also accommodate multiple users simultaneously, which is especially useful for larger shared-display systems such as interactive walls and tabletops.

By transferring the interface from a physical device (such as a mouse, pointer or joystick) to a programmable virtual environment, the interaction between user and application can adapt to the user rather than the other way around. To paraphrase Bill Buxton in Multi-Touch Systems that I Have Known and Loved, the multi-touch interface has the potential to become a kind of chameleon: a single device that can transform itself into whatever interface is appropriate for the specific task at hand.

The application of multi-touch in concert with gesture recognition has taken off in recent years, largely due to the increase in computational power available from commodity computing. RENCI has recognized an opportunity to develop its own in-house expertise in multi-touch and gesture recognition technology as a powerful tool for allowing scientists and researchers to collaboratively interact with large, high-resolution data sets.


Comments

Comment from Manu Kovic
Time: February 4, 2009, 11:15 pm

Have you seen this multi-touch sensor?
http://www.youtube.com/watch?v=W0l8TckUe5g

Comment from admin
Time: February 5, 2009, 10:38 am

That’s pretty cool…

Thanks!

Comment from andol
Time: February 24, 2009, 7:23 am

I saw the multi-projector system used as a touch wall, but I have a question: how do you split the screen into several seamless screens, in the application or in hardware?

Comment from jason coposky
Time: February 24, 2009, 11:08 am

I'm not entirely sure what you mean. The MT wall at Duke is run by two machines: one handles the 8 cameras on the back end, and the other handles the applications on the front end. The back-end computer ships touches over the network to the applications running on the front end. You could split the wall into two or more separate instances by having multiple front-end computers hooked into projectors and configuring the back end to ship touches to each of the front ends respectively. Otherwise, you can set up the gesture engine to work with clustering and have separate applications running on the same front end that are far enough apart on the desktop to qualify as separate users, given the cluster radius.
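The clustering idea above can be sketched in a few lines: touches that land within some radius of an existing cluster's centroid are grouped together, and each resulting cluster is treated as a separate user. This is only an illustrative sketch, not the actual Duke gesture engine; the function name, data layout, and greedy centroid-update strategy are all assumptions for the example.

```python
import math

def cluster_touches(touches, radius):
    """Greedily group (x, y) touch points into clusters by distance.

    A touch within `radius` of an existing cluster's centroid joins
    that cluster (and shifts its centroid); otherwise it starts a new
    cluster. Each cluster can then be treated as a separate user.
    """
    clusters = []  # each cluster: {"points": [...], "centroid": (x, y)}
    for x, y in touches:
        for c in clusters:
            cx, cy = c["centroid"]
            if math.hypot(x - cx, y - cy) <= radius:
                c["points"].append((x, y))
                # Incrementally update the running centroid.
                n = len(c["points"])
                c["centroid"] = ((cx * (n - 1) + x) / n,
                                 (cy * (n - 1) + y) / n)
                break
        else:
            clusters.append({"points": [(x, y)], "centroid": (x, y)})
    return clusters

# Two hands far apart on the wall fall into two clusters (two "users"):
touches = [(100, 100), (110, 105), (900, 500), (905, 510)]
groups = cluster_touches(touches, radius=50)
```

With a suitable cluster radius, two applications placed far apart on the same desktop would each receive only the touches from their own cluster, which is the effect described in the reply above.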

Comment from Elisa
Time: March 23, 2009, 11:35 pm

I have a similar question. I wonder how you handle the seamless multi-projection. Do you use hardware or software to do the edge blending? If it is done in hardware, what equipment or model do you use? If it is done in software, what platform and algorithm do you use? I have heard that some people use vvvv to develop their multi-touch applications, but I am not sure whether the edge blending is done by vvvv as well. I would appreciate any advice on this.
Thank you very much.
