"We work on graphic design, video editing or CAD on a daily basis. Keyboard and mouse are great but they are far from giving you the same sensitivity and abilities as your hand. The same applies for music, browsing or presentations. We need a tool that gives us flexible shortcuts and perfect control, a tool that makes the things we love fast, precise, intuitive and fun.
That's why we created Flow, a freely programmable wireless controller."
"Adriano goes on to explore the relationship between body, sensors and sound by showing us how a piezo contact microphone can be used to transform any piece of backyard junk into a percussive and melodic instrument. Some people call it physical modeling synthesis but we just call it pretty much amazing.
Adriano's objective is clear: to create a new kinesthetic approach to sound design that totally flips our notion that music is made with a traditional instrument or by interfacing with your mouse, keyboard and screen. This kind of research in tactile computer-music embodiment is important not only for reimagining our conventional vision of an instrument, but also for cutting in half the frustration of wanting to perform in front of millions while having no idea how to play a single note."
"When historian Charles Weiner found pages of Nobel Prize-winning physicist Richard Feynman's notes, he saw it as a "record" of Feynman's work. Feynman himself, however, insisted that the notes were not a record but the work itself. In Supersizing the Mind, Andy Clark argues that our thinking doesn't happen only in our heads but that "certain forms of human cognizing include inextricable tangles of feedback, feed-forward and feed-around loops: loops that promiscuously criss-cross the boundaries of brain, body and world." The pen and paper of Feynman's thought are just such feedback loops, physical machinery that shape the flow of thought and enlarge the boundaries of mind. Drawing upon recent work in psychology, linguistics, neuroscience, artificial intelligence, robotics, human-computer systems, and beyond, Supersizing the Mind offers both a tour of the emerging cognitive landscape and a sustained argument in favor of a conception of mind that is extended rather than "brain-bound." The importance of this new perspective is profound. If our minds themselves can include aspects of our social and physical environments, then the kinds of social and physical environments we create can reconfigure our minds and our capacity for thought and reason."
Sensory neuroprostheses show great potential for alleviating major sensory deficits. It is not known, however, whether such devices can augment the subject’s normal perceptual range. Here we show that adult rats can learn to perceive otherwise invisible infrared light through a neuroprosthesis that couples the output of a head-mounted infrared sensor to their somatosensory cortex (S1) via intracortical microstimulation. Rats readily learn to use this new information source, and generate active exploratory strategies to discriminate among infrared signals in their environment. S1 neurons in these infrared-perceiving rats respond to both whisker deflection and intracortical microstimulation, suggesting that the infrared representation does not displace the original tactile representation. Hence, sensory cortical prostheses, in addition to restoring normal neurological functions, may serve to expand natural perceptual capabilities in mammals.
Perceiving invisible light through a somatosensory cortical prosthesis • Eric E. Thomson, Rafael Carra & Miguel A.L. Nicolelis
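The coupling described in the abstract (a head-mounted infrared sensor driving microstimulation of S1) can be pictured as a transfer function from detected infrared intensity to a stimulation pulse rate. The following Clojure sketch is purely illustrative; the linear mapping, the 0-400 Hz range and the function names are assumptions made for exposition, not the calibration reported in the paper.

(defn ir->pulse-rate
  "Toy mapping from a normalized infrared intensity (0.0 to 1.0) to a
  microstimulation pulse rate in Hz. A linear transfer function is assumed."
  [ir-intensity]
  (let [min-rate 0     ;; no stimulation when no infrared source is detected
        max-rate 400]  ;; hypothetical upper bound on the pulse rate
    (+ min-rate (* ir-intensity (- max-rate min-rate)))))

(ir->pulse-rate 0.25) ;; => 100.0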
"we seek less complex, more versatile tools: accessible yet fundamentally adaptable. we believe these parameters are most directly achieved through minimalistic design, enabling users to more quickly discover new ways to work, play, and connect. we see flexibility not as a feature but as a foundation."
The Tyler Laboratory of Neuroscience and Neurotechnology is based at the Virginia Tech Carilion Research Institute and the School of Biomedical Engineering and Sciences at Virginia Tech. We study the influence of mechanical forces on brain activity, develop tools and approaches for mapping functional brain activity in humans using ultrasound, and engineer neurotechnology for noninvasively stimulating brain activity using ultrasound, tDCS, and TMS while monitoring activity in human brain circuits using EEG, fMRI, and fNIRS.
The International Conference on New Interfaces for Musical Expression gathers researchers and musicians from all over the world to share their knowledge and ...
Colbert Sesanker's insight:
Many of the shapes and limitations of our established musical interfaces no longer apply. The interface is now completely decoupled from the music, whose richest future, I believe, lies in lambda calculus.
Great... Just because it has some effect on the nervous system does not mean it will have any positive qualitative effect one dreams up. At least tDCS is vaguely based on the established idea of forced synchronization between two oscillators.
Colbert Sesanker's insight:
How? Another addition to the current movement of blind exploratory therapies: we blindly direct every 'safe' form of electromagnetic radiation at people's brains and look to see if something good happens. I guess the next paper will be on sound.... oh wait!
"We describe the first direct brain-to-brain interface in humans and present results from experiments involving six different subjects. Our non-invasive interface, demonstrated originally in August 2013, combines electroencephalography (EEG) for recording brain signals with transcranial magnetic stimulation (TMS) for delivering information to the brain. We illustrate our method using a visuomotor task in which two humans must cooperate through direct brain-to-brain communication to achieve a desired goal in a computer game. The brain-to-brain interface detects motor imagery in EEG signals recorded from one subject (the “sender”) and transmits this information over the internet to the motor cortex region of a second subject (the “receiver”). This allows the sender to cause a desired motor response in the receiver (a press on a touchpad) via TMS. We quantify the performance of the brain-to-brain interface in terms of the amount of information transmitted as well as the accuracies attained in (1) decoding the sender’s signals, (2) generating a motor response from the receiver upon stimulation, and (3) achieving the overall goal in the cooperative visuomotor task. Our results provide evidence for a rudimentary form of direct information transmission from one human brain to another using non-invasive means."
The term "programming language" is often used to describe the medium we use to build software. However, to what extent can we also consider programming languages as interfaces in their own right? Are they sufficiently expressive, interactive and dynamic to, say, control a musical instrument? What if the programming language was the musical instrument? How might that challenge our perception of programming language and tools in general. For example, what happens when we consider the act of programming as a performance? What might a music programming environment which has sufficient liveness, rapid feedback and tolerance of failure look like? What benefits would such a style of programming offer business? Could live coding be beneficial for rapid prototyping, exploring big data sets, and even communicating formal business ideas? Weaving Immutable Data Structures into Ephemeral Sounds - Conference Party
Armed with laptops, monomes and a number of simple MIDI Kontrollers, we harness the full power of the SuperCollider synthesis engine through the incredible Overtone platform. We don't just use our laptops to tweak GUI sliders and pots, we write raw Clojure code into Emacs live in our performances. We also generate real-time visualisations perfectly synchronised to the sound with Quil and Shadertone. All the code for our sets is open source and available for you to play with: Meta-eX Ignite.
Live Hacking allows us to generate music on the fly, enabling us to change the direction of the sound and respond to our environment on a whim. Our sounds aren't pre-recorded and tweaked on stage; they're coded and generated live in real time. We produce real, raw noise rather than manufactured, industry-polished pseudo-perfection. Our sound isn't hardcore, breakcore or speedcore - it's multi-core and fully hyper-threaded.
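To give a flavour of what "writing raw Clojure code into Emacs live" looks like in practice, here is a minimal Overtone sketch: a throwaway instrument that can be redefined and retriggered from the editor mid-performance. This is an illustrative example only, not Meta-eX's actual code, and it assumes Overtone is on the classpath with a bootable SuperCollider server.

(ns live-set.sketch
  (:use [overtone.live]))

;; A simple instrument: two slightly detuned saws through a low-pass filter,
;; shaped by a percussive envelope that frees the synth node when it ends.
(definst wobble [freq 110 cutoff 900 dur 1.5]
  (* (lpf (saw [freq (* freq 1.01)]) cutoff)
     (env-gen (perc 0.01 dur) :action FREE)))

;; During a set these forms are (re)evaluated from the editor on the fly:
(wobble)            ;; play with the default parameters
(wobble 220 1800)   ;; retrigger with a new pitch and brightness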
Colbert Sesanker's insight:
Programming as expression! Give someone a programming language, not a user interface!
Look at 16:45 and consider the implications of "Advanced Chess" outperforming both the best human and the best computer.
"Human sensory and motor systems provide the natural means for the exchange of information between individuals, and, hence, the basis for human civilization. The recent development of brain-computer interfaces (BCI) has provided an important element for the creation of brain-to-brain communication systems, and precise brain stimulation techniques are now available for the realization of non-invasive computer-brain interfaces (CBI). These technologies, BCI and CBI, can be combined to realize the vision of non-invasive, computer-mediated brain-to-brain (B2B) communication between subjects (hyperinteraction). Here we demonstrate the conscious transmission of information between human brains through the intact scalp and without intervention of motor or peripheral sensory systems. Pseudo-random binary streams encoding words were transmitted between the minds of emitter and receiver subjects separated by great distances, representing the realization of the first human brain-to-brain interface. In a series of experiments, we established internet-mediated B2B communication by combining a BCI based on voluntary motor imagery-controlled electroencephalographic (EEG) changes with a CBI inducing the conscious perception of phosphenes (light flashes) through neuronavigated, robotized transcranial magnetic stimulation (TMS), with special care taken to block sensory (tactile, visual or auditory) cues. Our results provide a critical proof-of-principle demonstration for the development of conscious B2B communication technologies. More fully developed, related implementations will open new research venues in cognitive, social and clinical neuroscience and the scientific study of consciousness. We envision that hyperinteraction technologies will eventually have a profound impact on the social structure of our civilization and raise important ethical issues."
"John Underkoffler led the team that came up with this interface, called the g-speak Spatial Operating Environment. His company, Oblong Industries, was founded to move g-speak into the real world. Oblong is building apps for aerospace, bioinformatics, video editing and more. But the big vision is ubiquity: g-speak on every laptop, every desktop, every microwave oven, TV, dashboard. "It has to be like this," he says. "We all of us every day feel that. We build starting there. We want to change it all."
Researchers have developed pioneering ‘tweezers’ that use ultrasound beams to grip and manipulate tiny clusters of cells, which could lead to life-changing medical advances, such as better cartilage implants that reduce the need for knee replacement operations. Using ultrasonic sound fields, cartilage cells taken from a patient’s knee can be levitated for weeks in a nutrient-rich fluid.
"The International Conference on New Interfaces for Musical Expression gathers researchers and musicians from all over the world to share their knowledge and late-breaking work on new musical interface design. The conference started out as a workshop at the Conference on Human Factors in Computing Systems (CHI) in 2001. Since then, an annual series of international conferences have been held around the world, hosted by research groups dedicated to interface design, human-computer interaction, and computer music."