This research blog is a collection of thoughts, internet fragments and pieces of information I find noteworthy. In general, you can expect posts about technology, politics, art and design.
This demonstration from Behringer shows a mixed reality interface for real-time music manipulation. It uses Microsoft's HoloLens to display information in an augmented reality environment, while hand movements and positions are tracked by a Leap Motion and the data is fed to the DeepMind 12 synthesizer. The demonstration still seems a bit clunky here and there, and the actual music manipulation appears partially edited, but the implications of the technology are plausible and tease my curiosity. I think music production and live performance are great fields for mixed reality interfaces: their workflow is inherently haptic and sensory, so adding a new layer of interaction can benefit the creative process. And since everything still happens in a digital realm, you retain the ability to hack, customize and modify the inputs and outputs of that workflow. An example of that is the online course on Kadenze, in which Rebecca Fiebrink of Goldsmiths, University of London shows ways of combining sensors and inputs with machine learning to build custom instruments, e.g. for live performances or experimental music production.
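To make the sensor-to-synth pipeline a bit more concrete: a setup like this typically maps tracked hand coordinates onto MIDI control change values, which the synthesizer then interprets as knob movements. Here is a minimal, hypothetical sketch of such a mapping; the function name, the tracked range in millimeters, and the idea of using hand height are my own assumptions for illustration, not Behringer's actual implementation.

```python
def hand_to_cc(y_mm, y_min=80.0, y_max=400.0):
    """Map a hand height (mm above the sensor) to a 7-bit MIDI CC value (0-127).

    y_min/y_max describe an assumed usable tracking range; real setups
    would calibrate these to the sensor and the performer.
    """
    # Clamp to the tracked range so values outside it don't wrap around.
    y = max(y_min, min(y_max, y_mm))
    # Scale linearly into the 0-127 range MIDI control changes expect.
    return round((y - y_min) / (y_max - y_min) * 127)

# A hand at the bottom of the range closes the parameter fully,
# at the top it opens it fully; heights in between interpolate linearly.
print(hand_to_cc(80.0), hand_to_cc(240.0), hand_to_cc(400.0))
```

In a real patch, the resulting value would be wrapped in a MIDI control change message (e.g. via a library like mido) and sent to the synthesizer; the interesting design question is which gesture axes map to which parameters, which is exactly the kind of customization Fiebrink's course explores.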
SketchAR is an application through which the user sees a virtual image projected onto a surface and can trace a sketch over it. It is designed to help people who have always wanted to draw but could not. This iteration of the app uses depth sensors that currently only a few specialized phones have. These sensors are being developed by Google under the name Project Tango and enable a mobile device to become aware of its physical location in the world. SketchAR uses this new spatial information in combination with augmented reality for large-scale drawings, stencils and graffiti.