
Emerging Visions of AI, Art, and Environment

2025 SPECIAL HIGHLIGHTS

Xian’er (Chinese Immortals)


Fang Zhou

China, 2025

14 min 11 secs | 360° VR Film | Foyer installation


Ancient Chinese immortals descend to the mortal world seeking employment, competing to secure statues to inhabit, a reflection of social "involution." These statues metaphorically represent social classes: revered temple deities symbolize government authority; trendy office figurines parallel tech industry workers; urban village statues embody small business owners struggling to make ends meet; and neglected rural statues reflect elderly villagers left behind.

 

The film also highlights the pragmatic persistence of Chinese worship, driven by tangible benefits and amplified by viral online trends. Using 3D scanning and glitch-art aesthetics, Xian’er (Chinese Immortals) portrays the collision between traditional culture and modern society.

GlitchesAreLikeWildAnimalsInLatentSpace!

 

Karin + Shane Denson

USA, 2025

Pre-recorded video loop | Foyer installation

Inspired in equal parts by glitch-art vernaculars, the chronophotography of Eadweard Muybridge and Étienne-Jules Marey, the cut-up methods of Brion Gysin and William Burroughs, and generative practices from Oulipo to Brian Eno and beyond, our ongoing series GlitchesAreLikeWildAnimalsInLatentSpace! stages an encounter between human imagination and automated image-making. Crucially, this encounter is staged by way of interfacing the invisible archives of generative AI with our own archives of images and texts.


We begin by training AI models on a set of Karin’s paintings, produced by subjecting wildlife videos (all shot by Karin) to aleatoric “databending” processes, whereby the digital videos break and reveal their underlying protocols and infrastructures in the form of glitches and compression artifacts. Selected images are hand-painted in acrylic. These paintings are then fed back into the machine, where deep learning models are prompted to generate new images in the style of Karin’s glitch paintings. But since contemporary (“diffusion”-based) AI models are effectively trained to eliminate “noise” and glitches, we are pushing them against their intended purpose, and the results are accordingly unpredictable.


These images serve as “seeds” for animations that, with the help of custom software, are randomly assembled in a real-time generative audiovisual display. Drawing on an archive of Shane’s media-theoretical writings about AI and related topics, a Markov chain algorithm generates new textual elements that appear as written and spoken text. The resulting video work serves as a kind of latent space for the production of new painted images that capture temporal progressions in spatial forms that are equal parts digital chrono(post)photography, automated cut-up, and human resistance to automation.
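The artists’ custom software is not public, but the Markov chain technique it names is simple to illustrate. The sketch below is a minimal word-level version in Python: it records which words follow which in a source text, then walks those transitions at random to produce new text in the source’s style. The corpus and function names here are hypothetical, chosen only for the example.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, length, rng):
    """Walk the chain from `start`, sampling one observed successor per step."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(successors))
    return " ".join(out)

# A stand-in corpus; in the installation this role is played by an
# archive of media-theoretical writings.
corpus = ("the glitch reveals the archive and the archive "
          "reveals the machine and the machine reveals the glitch")
chain = build_chain(corpus)
print(generate(chain, "the", 8, random.Random(0)))
```

Because each step samples only transitions actually observed in the corpus, the output is locally plausible but globally aleatoric, which is what makes the method a natural fit for cut-up-style text generation.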


Through this recursive and collaborative process, which shuttles repeatedly between the digital and the physical, the invisible and the perceptible, the archival and the novel, we hope to open for viewers a space of glitchy imagination, if not insight, into the opaque black-boxed operations of the algorithmic media that are rapidly reshaping our visual cultures and environments.

Proudly supported by Western University, through the Western Sustainable Impact Fund, and the Department of Gender, Sexuality, and Women's Studies.

© 2025 RhizomeMind.

All rights reserved.
