Novel Instrument Design | Interactive Dance Performance | Sound Art | Visual Interactive Design

"A mirror experience, or a continuation of existence?"

MetaEnsemble is a new interactive musical instrument and immersive spatial installation that interacts with the audience. It consists of matching wearable hardware, immersive video, and sound. When an audience member puts on a wearable motion-capture device, their movements trigger corresponding sound and visual transformations. Inspired by metaverse theory, we explore the relationship between human behavior in the physical world and its digital representation.

Beijing Auto Space

Sensors and battery
Wi-Fi board
Ableton Live

Xiaofei Hong
FakeRabbit (Sound Artist)
Yaang (Dancer)




Starting from metaverse theory, we try to explore the relationship between the physical world and the digital. Are human beings just physical bodies, or data? Can human beings transcend the physical body and become part of the environment? In the digital world, how can human thinking be quantified, and where is the boundary of human digitization? From the Pythagorean school's dictum that "all things are number" to the "Ultimate Ensemble" theory of everything proposed by Tegmark, whose only assumption is that every structure that exists mathematically also exists physically, this simple theory suggests that in structures complex enough to contain self-aware substructures (SAS), these SAS will subjectively believe that they exist in a physically "real" world. Our consciousness, too, may be part of a larger data structure.

For MetaEnsemble, then, we try to establish a rational philosophy and belief system through audiovisual language, turning human behavior and the consciousness that drives it into a data structure and making it the environment itself. The key is not whether the world we live in is a simulation, but how we can use simulation to understand the nature of the universe, reality, and human consciousness more deeply, connecting the physical world and the simulation through mapping.



Pictures were captured from two performances in Beijing. 


We created an immersive spatial installation that interacts with the audience. The viewer wears the corresponding wearable device, and the spatial projection and sound change with the viewer's actions. The images above show the spatial form of the project. We also envisage dancers performing real-time interactive improvisations while wearing the devices, realizing the ultimate cooperation between humans and the digital world.



We researched the basic movement forms of modern dance and simplified them into three basic movement prototypes: linear motion, curved motion, and rotary motion, each in constant-speed and accelerating states. We simulated these movements while collecting three-axis attitude angles and acceleration with an IMU sensor, and performed a preliminary analysis of the extreme and average values of the raw data. An ESP32 Feather board then handled wireless data transmission, and the device was fixed inside the wearable to capture the dancer's hand-movement data.
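The preliminary analysis described above can be sketched as follows. This is a minimal illustration, assuming a simple per-axis log of attitude angles; the field names and sample values are hypothetical, not the project's actual data format.

```python
# Hypothetical sketch: for each movement prototype we look at the
# per-axis extreme and average values of the recorded IMU data.

def summarize_axis(samples):
    """Return (min, max, mean) for one axis of IMU readings."""
    return min(samples), max(samples), sum(samples) / len(samples)

# Illustrative roll/pitch/yaw attitude angles (degrees) from a short linear motion.
recording = {
    "roll":  [1.2, 3.4, 5.1, 4.8, 2.0],
    "pitch": [-0.5, 0.3, 1.1, 0.9, 0.2],
    "yaw":   [10.0, 12.5, 15.2, 14.8, 11.1],
}

for axis, samples in recording.items():
    lo, hi, avg = summarize_axis(samples)
    print(f"{axis}: min={lo:.1f} max={hi:.1f} mean={avg:.2f}")
```

Extremes and averages like these give a first picture of how each movement prototype shapes the sensor signal before any mapping to sound is designed.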


“Max/MSP and Ableton Live are used as the sound source. Parameters captured by the Arduino are read through the serial object, then scaled into a suitable MIDI value range. These MIDI values are converted and connected to the makenote and noteout objects, acting as a virtual MIDI controller.

Sound-design inspiration comes from emotions and the environment. The diversity of body-motion feedback is taken into account in order to design multi-level, dynamic sound effects. Each sound material is chosen to complement the action: "the sound is in motion and the motion is in the sound; the sound moves, and the sound is vivid."

When dancers move their limbs, they can feel the energy feedback of the sound, and the sound in turn gives the dancer a new physical arrangement. This kind of sound design does not follow a conventional, time-linear arrangement; it is real-time, feedback-based sound creation. It is precisely this challenge that gave me new thinking about sound creation. I had to consider multi-level sound arrangement alongside the dancer's bodily feedback, and only through a large number of experiments and continuous modification of the numerically triggered instrument channels could I obtain sound material of the best quality, with hierarchy and dynamic real-time feedback.”
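The scaling step the statement describes can be sketched as a simple linear remapping into the 0–127 MIDI value range. This is a hypothetical illustration; the actual input range depends on the sensor calibration, and in the project this mapping happens inside Max/MSP rather than Python.

```python
# Minimal sketch: a raw sensor reading is clamped and linearly rescaled
# into the 0-127 MIDI range before being routed to makenote/noteout.

def scale_to_midi(value, in_min, in_max):
    """Linearly map a sensor reading into the 0-127 MIDI value range."""
    value = max(in_min, min(in_max, value))       # clamp out-of-range readings
    norm = (value - in_min) / (in_max - in_min)   # normalize to 0..1
    return round(norm * 127)

# Example: a pitch angle of 45 degrees in an assumed -90..90 degree range.
print(scale_to_midi(45.0, -90.0, 90.0))  # -> 95
```

Clamping first keeps spurious sensor spikes from producing out-of-range MIDI values.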



For the visual design, we collected the pitch and timbre of the lead channel and of two different effect (FX) tracks. The raw data were scaled into appropriate visual-signal values and connected to the noise, alpha, and other parameters of the visual effects, ultimately forming a dialogue with the dancer's body movements.
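A minimal sketch of this audio-to-visual routing, under assumptions: MIDI pitch from the lead channel is normalized and fanned out to visual parameters. The pitch range, parameter names, and the inverse noise mapping are illustrative, not the project's actual patch.

```python
# Hypothetical mapping: normalize a MIDI pitch and route it to visual
# parameters such as alpha (opacity) and noise amount.

def pitch_to_visual(pitch, low=36, high=96):
    """Map a MIDI pitch into normalized visual parameters in [0, 1]."""
    norm = (min(max(pitch, low), high) - low) / (high - low)
    return {
        "alpha": norm,        # higher notes -> more opaque
        "noise": 1.0 - norm,  # lower notes -> more turbulence
    }

print(pitch_to_visual(66))  # midpoint pitch -> alpha 0.5, noise 0.5
```

Driving several visual parameters from one normalized value keeps the sound and image changes perceptually coupled.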

The visual subject is a computer organism that engages in dialogue with self-awareness and body movement. She has a set of internal rules and responds accordingly to changes in the external acoustic environment. We hope that dancers can enter a certain state of consciousness in our preset environment, express their ideology through body language, influence the physical environment, and engage in dialogue through the ensemble. We therefore apply an emotional model based on MSVR and Arousal-Valence to divide the preset ideology into four emotional dimensions. Based on this, a line-and-dot system expresses changes along the Arousal axis, a color system expresses changes along the Valence axis, and projection shapes the immersive spatial experience.
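The four-way division of the Arousal-Valence plane can be sketched as a simple quadrant classification. The quadrant labels below are common names from the Arousal-Valence literature, used here as illustrative assumptions; the project's own preset names may differ.

```python
# Illustrative sketch: classify a point in the Arousal-Valence plane
# into one of four emotional dimensions. Arousal drives the
# line-and-dot system; valence drives the color system.

def emotion_quadrant(arousal, valence):
    """Classify an (arousal, valence) point in [-1, 1]^2 into a quadrant."""
    if arousal >= 0 and valence >= 0:
        return "excited"    # high arousal, positive valence
    if arousal >= 0:
        return "tense"      # high arousal, negative valence
    if valence >= 0:
        return "calm"       # low arousal, positive valence
    return "depressed"      # low arousal, negative valence

print(emotion_quadrant(0.7, -0.4))  # -> tense
```

Each quadrant can then select a preset for the line-and-dot density (arousal) and the color palette (valence).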