One of the big resources I've used for pushing and testing the boundaries of interactive design is working with IR sensors and dancers. It's not the only way, but it's easier to find someone willing to dance for a creative endeavor than to find a model or friend to awkwardly stand in front of a Kinect IR sensor for hours. The growing popularity of projection mapping, set design, and generative visuals for dancers has also increased the demand for people who understand the field.
About a year ago I did my first project with Crash Alchemy, a local circus/acro dance troupe. We got a dancer with a choreography and a narrative, got a costume for him, and created projections for him to dance with that helped drive the narrative. We called the outcome The Perfectionist, which is on my interactive page. It took about 2-3 weeks from start to finish, and 90% of the communication and production happened without the group being face to face. It was ambitious, and it was truly impressive that we were able to sync it all together with the time and resources we had. The production went like this:
1) Solidify the story, song, and choreography.
2) Get someone (me) to shoot the choreography with a camera from the projector's perspective and from the perspective of the audience. For this, the dancer should wear finger lights on their hands and feet, and the room should be as dark as possible.
3) Send the footage to a motion graphics artist to create video FX as in sync as possible with the choreography by following the finger lights in After Effects (I believe you can automate it to track the lights). Combine that into a video with the music.
4) Once the motion graphics are finished, set up a full production shoot with three cameras, lighting, projections onto a semi-transparent sharkstooth scrim, costume, makeup, music recording, people to run all of it, the director, and the performer. That was also the first day all of us had been in a room together.
It was a beautiful chaos, to say the least, with no monetary pressure since it was all for experience. But I had already been experimenting with the Kinect at that time, and I brought up to the production manager that there had to be a way to record the dancer's body positions using the IR 3D scanner rather than finger lights. He agreed that we should look into it, and I eventually did find a way, just not in time for that project.
Recently, nearly a year later, I decided to put that knowledge to the test. Within three hours, a dancer and I were able to rough out the choreography and narrative, set up the Kinect, record a brief performance on video along with her limb positions in 3D space, and then create a 3D representation of her in a visual program. The motion graphics side can now see exactly where she is on stage, and I can use this data to set triggers so that her movement generates the visuals itself.
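To give a rough idea of what the recording step looks like: the Kinect's skeleton stream hands you a set of named joints with (x, y, z) positions every frame, and all you really have to do is timestamp and store them. This is just a sketch of that idea; the joint names and frame format here are placeholders, not the actual Kinect SDK structures:

```python
import json
import time

class SkeletonRecorder:
    """Stores timestamped joint positions so the choreography can be
    replayed later without the dancer present. The frame format here
    is a placeholder, not the real Kinect SDK layout."""

    def __init__(self):
        self.frames = []

    def record_frame(self, joints, timestamp=None):
        # joints: dict mapping joint name -> (x, y, z) position in meters
        self.frames.append({
            "t": timestamp if timestamp is not None else time.time(),
            "joints": {name: list(pos) for name, pos in joints.items()},
        })

    def save(self, path):
        # Dump the whole session to JSON for later playback/analysis
        with open(path, "w") as f:
            json.dump(self.frames, f)

# Two frames of made-up joint data, roughly 30 fps apart:
rec = SkeletonRecorder()
rec.record_frame({"hand_right": (0.4, 1.2, 2.0), "chest": (0.0, 1.3, 2.1)}, timestamp=0.0)
rec.record_frame({"hand_right": (0.1, 1.3, 2.1), "chest": (0.0, 1.3, 2.1)}, timestamp=0.033)
print(len(rec.frames))  # 2
```

Once the frames are saved, the 3D representation is just a playback of this data, which is exactly what lets us review her movement without her in the room.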
This essentially does 3 things:
1) Allows both of us to see where and how the Kinect may lose her.
2) Lets us watch exactly where she moves her body on stage without her needing to run through the choreography again or even be present.
3) I can now create triggers for her to generate the video FX live, rather than having the FX try to stay in sync with the choreography. This allows for a truly interactive performance rather than a pre-recorded video of the FX.
This also means the interactive part can be used by anyone, as long as they know which movement triggers what (hand to chest to create a fireball, hand to foot to create glowing orbs, etc.).
This project is still in the works, but the application beyond theater is one that I believe will change the way we interact with the world around us. No buttons or wires, only our bodies to control the world around us. Stay tuned!