The 26th International Conference on Auditory Display featured a presentation by Sharath Chandra Ram on the application of auditory streaming and segregation in designing effective auditory display systems that can outperform commonly used pitch-mapping designs.
The recent article by Sharath Chandra Ram in Artnodes, the journal on art, science and technology, provides creative technologists with pathways to break the norm and innovate in the realm of AI-based listening machines. Of special interest to the Signal Cultures Lab are opportunities to question the concept of normative listening and to innovate for hearing-impaired users of cochlear implants and hearing aids.
This study explores the formation of a complex illusory conjunction in simulated speech perception, confounded by two disparate spatial modalities: a) the spatialization of competing formant features across high and low frequency bands, and b) a high-level expectation of speech content competing with the low-level processing of auditory objects (such as speech-modulated noise). This paper extends a previous pilot study on the topic by controlling for illusory conjunctions in one of the spatial dimensions (formant space), so that specific individual differences can be examined as possible correlates of individual hearing abilities across the audible frequency range. The results point in an encouraging direction towards a better understanding of hearing loss through the controlled perception of illusory conjunctions.
Selected text excerpts from the diary of Anaïs Nin were converted into a sonic representation and embedded into the soundtrack of the cult film 'Inauguration of the Pleasure Dome' (1954) in this one-of-a-kind interactive media installation with projection mapping. Visitors received the sound file as an email attachment and decoded their message back to text by playing the sound from their mobile phones into the microphone of the listening machine. The media experiment investigated innovative ways by which audiences can effectively engage with reading text.
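The text-to-sound round trip can be illustrated with a toy scheme (an assumption for illustration only; the installation's actual encoding is not documented here), in which each character is assigned its own audio frequency and is recovered by inverting the mapping:

```python
# Toy single-tone-per-character scheme (illustrative assumption, not the
# installation's actual code): each character code maps to a distinct
# frequency, and decoding finds the nearest frequency bin.

BASE = 400.0   # Hz assigned to character code 0 (assumed value)
STEP = 25.0    # Hz per character code (assumed value)

def char_to_freq(ch):
    """Encode one character as an audio frequency in Hz."""
    return BASE + ord(ch) * STEP

def freq_to_char(freq):
    """Decode a detected frequency back to the nearest character."""
    return chr(round((freq - BASE) / STEP))

encoded = [char_to_freq(c) for c in "Nin"]
decoded = "".join(freq_to_char(f) for f in encoded)   # round-trips to "Nin"
```

In a real deployment the detected frequencies would come from pitch analysis of the microphone input, so the nearest-bin rounding in `freq_to_char` is what absorbs noise and playback error.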
A set of distributable data sonification modules using Csound and Jupyter notebooks for various data types, ranging from time series and RGB heat maps to bar charts and scatter plots. A collaboration between Sharath Chandra Ram, Scot Gresham-Lancaster (Affiliate Researcher at the Center for New Music and Audio Technologies, UC Berkeley) and Roger F. Malina (ATEC Distinguished Chair and Professor at the University of Texas at Dallas).
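As a minimal sketch of the parameter-mapping idea (the instrument number, pitch range and note duration are assumptions, not the modules' actual code), a time series can be rendered as a Csound score in which each data point becomes a note whose pitch tracks the normalized value:

```python
# Minimal parameter-mapping sonification sketch: each value in a time
# series becomes one Csound "i" statement, with pitch mapped linearly
# from the normalized data value.

def series_to_score(series, base_freq=220.0, span=440.0, dur=0.25):
    """Map a numeric series to note events for Csound instr 1."""
    lines = []
    lo, hi = min(series), max(series)
    for k, v in enumerate(series):
        norm = (v - lo) / (hi - lo) if hi != lo else 0.0
        freq = base_freq + norm * span        # linear pitch mapping
        # i <instr> <start> <duration> <frequency>
        lines.append(f"i1 {k * dur:.2f} {dur:.2f} {freq:.1f}")
    return "\n".join(lines)

score = series_to_score([0.1, 0.5, 0.9, 0.3])
```

The resulting score text would be passed to a Csound orchestra whose `instr 1` reads the fourth p-field as frequency; in a notebook workflow the same mapping function can be reused for heat-map rows or scatter-plot coordinates.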
Displayed at the 23rd International Symposium on Electronic Art in Manizales, Colombia, and at Science Gallery Dublin@Bangalore Metro, this project explores the interplay between 'the climate of economies' and 'the economies of climate' in the age of networked Big Data. While critically examining the property rights behind climate infrastructures, signals from local airport radar weather stations were decoded to obtain a climate data archive for further processing and sonification. Projection mapping used OpenFrameworks (C++); data analysis and sonification used Python, SuperCollider and Ableton Live.
Shown at the Jogjakarta National Museum (International Summit for Critical and Transformative Making, 2015) and at Masquinez Palace, Goa, during the UNESCO International Story of Light event in 2014, 'Traffic' captures human orchestrations occurring in the spectrum of invisible light, intercepting maritime and air-traffic broadcasts as a sound installation, together with a visualization of air traffic entering a city's airspace obtained by decoding aircraft wireless transponder data in real time. A DIY collinear antenna at 1090 MHz was used to intercept air-traffic data for real-time data sonification using SuperCollider. The synthesized sound was re-transmitted into the local radio frequency spectrum.
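The transponder data in question arrives as Mode S frames at 1090 MHz; a minimal sketch of the first decoding step (using a well-known example frame from public ADS-B documentation, not the installation's actual code) extracts the downlink format and the aircraft's ICAO address:

```python
# Sketch of the first step in decoding an intercepted Mode S frame:
# the downlink format lives in the first 5 bits, and the 24-bit ICAO
# aircraft address in the following 3 bytes.

def decode_header(hex_frame):
    """Return (downlink_format, icao_address) from a raw Mode S frame."""
    raw = bytes.fromhex(hex_frame)
    df = raw[0] >> 3                  # downlink format: top 5 bits of byte 0
    icao = raw[1:4].hex().upper()     # 24-bit ICAO aircraft address
    return df, icao

# Widely used documentation example of an ADS-B extended squitter:
df, icao = decode_header("8D4840D6202CC371C32CE0576098")
# df == 17 marks an ADS-B message; icao == "4840D6"
```

Downstream, fields such as altitude and position (CPR-encoded) would be parsed from the 56-bit message payload before being mapped to synthesis parameters in SuperCollider.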
This software is based on Sharath Chandra's 2008 Master's thesis at the University of Edinburgh, School of Informatics, within the Institute of Perception, Action and Behaviour. The prototype is an interactive virtual reality system that enables individuals to observe their own body movements from a third-person view, while real-time computer vision algorithms enable coherent motion-based interactions with virtual multimodal objects. It provides a framework that integrates the perception-action-behaviour cycle, giving a better perspective on the 'body schema' for use in VR therapy for motion disabilities, as well as in other experimental paradigms.
A multi-channel sound installation conceived by Dr. Cathy Lane of Creative Research into Sound Arts Practice (CRiSAP), University of the Arts London; Sharath Chandra Ram was a collaborating transmission artist. BEAM was composed from archival material and maritime radio communications through the sonic translation of AIS (Automatic Identification System) data, and was exhibited as a Collateral Event during the Kochi-Muziris Biennale 2014.
A quadrifilar helical antenna was used to receive downlinks of imagery and sensor data from overpassing USGS and NASA polar satellites in the 137 MHz band. The data received was an encoded sound that had to be reprocessed and decoded to produce the overhead image. This project was part of the RadioLuna Project for Open Science Communication, in collaboration with designer Catalina Alzate Mora. Also presented at the International Astronomical Union Commission C2 CAP Conference in Medellín, as well as at the International Conference on Information and Communication Technologies and Development, 2016, Ann Arbor, Michigan.
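Weather-satellite imagery in the 137 MHz band is typically transmitted as APT: pixel brightness amplitude-modulates a 2400 Hz audio subcarrier. A crude sketch of the demodulation step (sample rate and smoothing window are assumptions, not the project's actual code) rectifies the received audio and smooths it to recover the image signal:

```python
import math

FS = 11025          # assumed audio sample rate of the recording
SUBCARRIER = 2400   # APT AM subcarrier frequency (Hz)

def envelope(samples, window=5):
    """Crude AM demodulation: full-wave rectify, then moving-average smooth."""
    rect = [abs(s) for s in samples]
    out = []
    for i in range(len(rect)):
        lo, hi = max(0, i - window), min(len(rect), i + window + 1)
        out.append(sum(rect[lo:hi]) / (hi - lo))
    return out

# One second of subcarrier whose amplitude ramps from 0 to 1,
# standing in for a row of pixels going from black to white:
audio = [(i / FS) * math.sin(2 * math.pi * SUBCARRIER * i / FS)
         for i in range(FS)]
env = envelope(audio)   # recovered brightness rises with the ramp
```

A full decoder would additionally lock onto the APT line-sync pulses, resample the envelope to the fixed words-per-line rate, and stack successive lines into the overhead image.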
'Neon Fauna' explores 'generalized symmetry' and interactions between nature and artificial reality. The soundscape of 'Neon Fauna' was self-generated in real time by motion analysis of the video artifacts.