Electronic Empathy
“aMt_A.I.”
Several albums by Artificial Memory Trace were randomly selected, segmented into 30-second clips, and used to fine-tune a MusicGen model. The audio outputs presented here are the result of this fine-tuning process. Technically, the model demonstrates both overfitting and underperformance, as the generated tracks bear little resemblance to the original Artificial Memory Trace material. However, given that the source material consists primarily of field recordings, the experiment offered a chance to probe the limits of MusicGen – a model designed for conventional music – when exposed to unconventional input.

The key questions raised are qualitative: How does an AI trained on traditional music genres respond to sonic experiments? What does the input represent to the AI? Can it identify or infer any underlying musical structure? Notably, a faint piano-like sound emerged amid the crackles – a curious outcome. Did the AI, in its pattern-seeking process, interpret environmental sounds as traces of conventional melody?
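The segmentation step described above can be sketched in a few lines. This is a minimal illustration, not the actual preprocessing pipeline: it assumes the recordings are already loaded as mono NumPy arrays, and the 32 kHz sample rate is chosen here because it matches the rate MusicGen operates at.

```python
import numpy as np

SAMPLE_RATE = 32000   # MusicGen works at 32 kHz; assumed input rate for this sketch
CLIP_SECONDS = 30

def segment_audio(samples, sample_rate=SAMPLE_RATE, clip_seconds=CLIP_SECONDS):
    """Split a mono waveform into fixed-length clips, dropping the short tail."""
    clip_len = clip_seconds * sample_rate
    n_clips = len(samples) // clip_len
    return [samples[i * clip_len:(i + 1) * clip_len] for i in range(n_clips)]

# Example: a 95-second recording yields three full 30-second clips.
recording = np.zeros(95 * SAMPLE_RATE, dtype=np.float32)
clips = segment_audio(recording)
print(len(clips))  # 3
```

Each resulting clip would then be paired with a text description and fed to the fine-tuning procedure; the trailing fragment shorter than 30 seconds is simply discarded.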
all rights reserved