Karin + Shane Denson

DIRECTOR'S STATEMENT


We begin by training AI models on a set of Karin’s paintings, produced by subjecting wildlife videos (all shot by Karin) to aleatoric “databending” processes, whereby the digital videos break and reveal their underlying protocols and infrastructures in the form of glitches and compression artifacts. Selected images are hand painted in acrylic. These paintings are then fed back into the machine, where deep learning models are prompted to generate new images in the style of Karin’s glitch paintings. But since contemporary (“diffusion”-based) AI models are effectively trained to eliminate “noise” and glitches, we are pushing them against their intended purpose, and the results are accordingly unpredictable.
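For readers curious what “databending” can mean at the level of code, the following is a minimal illustrative sketch, not our actual toolchain: it copies a video file and overwrites random bytes past a roughly estimated header region, so that decoders surface the damage as glitches and blocky compression artifacts. The file names, header offset, and corruption count are hypothetical, and older, more tolerant containers (AVI, for instance) tend to survive this treatment better than tightly indexed ones such as MP4.

import random

def databend(src_path, dst_path, corruptions=200, skip_header=4096, seed=None):
    # Read the source file into a mutable byte buffer.
    rng = random.Random(seed)
    data = bytearray(open(src_path, "rb").read())
    for _ in range(corruptions):
        # Corrupt a byte somewhere after the (roughly estimated) header,
        # so the file usually still opens but decodes with visible errors.
        pos = rng.randrange(skip_header, len(data))
        data[pos] = rng.randrange(256)
    with open(dst_path, "wb") as f:
        f.write(data)

# Hypothetical usage:
# databend("wildlife.avi", "wildlife_glitched.avi", corruptions=500, seed=7)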


These images serve as “seeds” for animations that, with the help of custom software, are randomly assembled into a real-time generative audiovisual display. Drawing on an archive of Shane’s media-theoretical writings about AI and related topics, a Markov chain algorithm generates new textual elements that appear as written and spoken text. The resulting video work becomes a kind of latent space for the production of new painted images, capturing temporal progressions in spatial forms that are equal parts digital chrono(post)photography, automated cut-up, and human resistance to automation.
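As an indication of the kind of procedure at work in the textual layer, here is a minimal word-level Markov chain sketch, written for illustration rather than drawn from our custom software: it maps each pair of consecutive words in a corpus to the words observed to follow them, then walks those statistics to generate new phrases. The corpus file name and parameters are hypothetical stand-ins for Shane’s archive of writings.

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each tuple of `order` consecutive words to the words that follow it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=40):
    # Start from a random state and repeatedly sample a plausible next word.
    state = random.choice(list(chain.keys()))
    out = list(state)
    while len(out) < length:
        followers = chain.get(state)
        if not followers:  # dead end: jump to a fresh random state
            state = random.choice(list(chain.keys()))
            followers = chain[state]
        out.append(random.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)

# Hypothetical usage with a plain-text corpus:
# corpus = open("shane_archive.txt", encoding="utf-8").read()
# print(generate(build_chain(corpus, order=2), length=60))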


Through this recursive and collaborative process, which shuttles repeatedly between the digital and the physical, the invisible and the perceptible, the archival and the novel, we hope to open for viewers a space of glitchy imagination, if not insight, into the opaque black-boxed operations of the algorithmic media that are rapidly reshaping our visual cultures and environments.

Proudly supported by Western University, through the Western Sustainable Impact Fund, and the Department of Gender, Sexuality, and Women's Studies.

