Synesthetic Flora

Real-Time Affective Translation from Voice to Landscape

Category: Affective Computing & Brain-Computer Interfaces
Year: 2025 | Warren Records Commission

A live generative projection system analyzes songwriter lyrics in real time and translates their emotional content into animated wildflower landscapes. Each song produces a unique visual signature based on its emotional arc.

Research Question: Can real-time sentiment analysis produce a meaningful synesthetic translation that enhances emotional communication between performer and audience?

Technical Implementation:

  • Speech-to-text (Whisper API) → custom sentiment analysis per stanza (transcription sketched below)

  • Emotion detection: valence (-1 to +1), arousal (0 to 1), quadrant classification

  • Parameter mapping: valence → color palette, arousal → animation speed, quadrant → wildflower species (see the mapping sketch after this list)

  • Photogrammetry-scanned local wildflowers animated in TouchDesigner

  • Audio reactivity: FFT analysis drives real-time response to vocal dynamics (see the FFT sketch below)
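
A minimal offline sketch of the transcription and sentiment stage, assuming the openai and transformers Python packages. The blank-line stanza split and the stock sentiment classifier are stand-ins for the project's custom per-stanza analysis; the live system streams audio rather than reading files.

```python
from openai import OpenAI
from transformers import pipeline

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe(path: str) -> str:
    """Send an audio file to the Whisper API and return the transcript text."""
    with open(path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

# Default Hugging Face sentiment model as a placeholder for the custom analyzer.
sentiment = pipeline("sentiment-analysis")

def score_stanzas(transcript: str) -> list[dict]:
    """Score each stanza; stanzas are assumed to be blank-line-separated chunks."""
    stanzas = [s.strip() for s in transcript.split("\n\n") if s.strip()]
    return [{"text": s, **sentiment(s)[0]} for s in stanzas]
```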
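
The mapping stage can be sketched as a pure function from a (valence, arousal) reading to rendering parameters. The 0.5 arousal threshold, the linear hue and speed formulas, and the species table below are illustrative assumptions, not the project's actual mappings.

```python
from dataclasses import dataclass

@dataclass
class VisualParams:
    hue: float      # valence drives the color palette (cool to warm)
    speed: float    # arousal drives animation speed
    species: str    # circumplex quadrant selects the wildflower model

# Hypothetical species assignments; the real table is the artist's.
SPECIES = {
    ("pos", "high"): "poppy",
    ("pos", "low"):  "bluebell",
    ("neg", "high"): "thistle",
    ("neg", "low"):  "snowdrop",
}

def map_emotion(valence: float, arousal: float) -> VisualParams:
    """Classify the quadrant and map emotion to rendering parameters.

    Expects valence in [-1, +1] and arousal in [0, 1].
    """
    quadrant = ("pos" if valence >= 0 else "neg",
                "high" if arousal >= 0.5 else "low")
    return VisualParams(
        hue=220.0 - (valence + 1.0) / 2.0 * 190.0,  # -1 → blue (220°), +1 → orange (30°)
        speed=0.2 + 1.8 * arousal,                  # slow drift up to rapid motion
        species=SPECIES[quadrant],
    )
```

For example, map_emotion(0.7, 0.8) yields a warm hue, fast motion, and poppies, matching the high-valence/high-arousal quadrant.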
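
The audio-reactive layer runs inside TouchDesigner in the installed system; the numpy sketch below shows the equivalent per-frame FFT computation, with the vocal band limits (80 Hz to 4 kHz) as assumed values.

```python
import numpy as np

def vocal_energy(frame: np.ndarray, samplerate: int = 48000) -> float:
    """Return the fraction of spectral energy in the vocal band for one
    audio frame, usable as a normalized animation driver in [0, 1].
    """
    window = np.hanning(len(frame))                       # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / samplerate)
    band = (freqs >= 80.0) & (freqs <= 4000.0)            # assumed vocal band
    return float(spectrum[band].sum() / (spectrum.sum() + 1e-9))
```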

Key Innovation: Affective computing applied to creative amplification rather than surveillance. Emotion detection is used to enhance human expression, not to quantify it for extraction.

Contribution to RNI Research: Demonstrates voice-based affective-state detection in noisy real-world environments rather than controlled labs. Establishes a generative-translation methodology for turning internal states into supportive external experiences, with ~78% alignment between generated visuals and songwriter-described emotional arcs.

Technologies: TouchDesigner, Whisper API, custom sentiment analysis (Python/Transformers), photogrammetry, Blender
