This research investigates the technical modeling of the musical ear in our times. The pervasive digital networks that characterize twenty-first-century communicative media are effecting a mutation in affective networks, producing new models for technical forms of life. Alongside the development of various efficient digital applications that operate within these networks, the science of Music Information Retrieval (MIR) is increasingly attuned to an economic demand for models that organize and shape musical affect. Current research in this sector attempts to map human–computer interaction and interfaces in various ways. Recent examples include engineered models for instrument simulation, algorithms for modeling felt groove in music, digital modeling of style simulation, and beat induction software.
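To give a concrete sense of what a technology like beat induction reduces musical time to, the following is a minimal, hypothetical sketch (not any particular commercial system): it estimates a tempo by autocorrelating an onset-strength envelope, the kind of data input such software typically consumes. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def estimate_tempo(onset_env, frame_rate, min_bpm=60, max_bpm=180):
    """Estimate tempo (BPM) by autocorrelating an onset-strength envelope.

    onset_env: 1-D array of onset strengths, one value per analysis frame.
    frame_rate: number of envelope frames per second.
    """
    env = onset_env - onset_env.mean()
    # Autocorrelation; keep only non-negative lags.
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    # Convert the admissible tempo range into a lag range (in frames).
    min_lag = int(frame_rate * 60 / max_bpm)
    max_lag = int(frame_rate * 60 / min_bpm)
    # The strongest periodicity inside that range is taken as the beat lag.
    lag = min_lag + np.argmax(ac[min_lag:max_lag + 1])
    return 60.0 * frame_rate / lag

# Synthetic envelope: an impulse every 0.5 s at 100 frames/s, i.e. 120 BPM.
frame_rate = 100
env = np.zeros(1000)
env[::50] = 1.0
print(estimate_tempo(env, frame_rate))  # -> 120.0
```

The point of the sketch is what it leaves out: the grid it induces is a single periodic lag, which is precisely the kind of reduction the argument below examines against non-metronomic musical practices.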
My research addresses the kinds of data inputs deployed by music software currently in development, and their relation to musical practice, considered on a global scale. Far from reflecting neuroscientific axioms (in all their contemporary plasticity), however, the project shows how lines of code have been socially nurtured under quite specific conditions that are often partly decoupled from the scientific and academic protocols guiding them. Drawing on musical practices from Africa, India, and other non-Western loci as a central referent, I show how new music software exteriorizes a Euro-genetic industrial habitus. A cultural command system thereby remodels affective life, leaving a material imprint on the performing body. By modeling a particular practice of cultural mimesis, one may speak here of the technosensory modification of music's material temporality: a shift from autonomous to automatic listening.