Compositional Agents

Under the scope of the Beauty Project are three other research tracks. One of these is the Compositional Agents track. This track looks at the swarming behavior of the pattern-forming bacteria Paenibacillus dendritiformis and Bacillus subtilis and charts their movement through 2D space. Using Open Computer Vision (OpenCV), the movements, shapes, and contours of the bacterial agents are extracted, and the emergent swarming behaviors of the bacterial agents in their environments are identified and tracked. These contours are classified and sent to a generative model trained to make predictions about growth or future movement. Contour classification and optical flow analysis of the swarms, classified patterns, or live images presented to a generative model for growth prediction are sent to a multi-agent system (or time series) that makes compositional decisions based on the data received from the bacteria (e.g. it maps generative-model growth predictions, contour classes, cluster predictions, etc. to compositional/audio-visual/textural parameters or "gestures"). Data is shared among agents (e.g. how many contours there are, what class they belong to, optical flow rates, angles, and locations). Using a custom microscope, real-time or time-lapse video builds the composition over a number of days.

Compositional Agents: this pipeline (see diagrams alt 2 & 3 on Google Drive) can use both time-lapse and real-time video from the HomeScope. With real-time video, the composition is built over a number of days and "appended" to a composition (generated via time-lapse) during a live performance. The idea is that an agent can learn aspects of the bacteria's behavior or patterns by observing the cultures, and use that knowledge to guide and predict, conduct searches for interesting patterns, and anticipate where patterns will emerge, letting those observations guide compositional decisions as it spawns agents that make audio-visual scores.
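The mapping from shared swarm data (contour counts, classes, optical flow rates and angles) to compositional parameters or "gestures" might look like the sketch below. The parameter names (`pitch`, `amplitude`, `pan`, `timbre`), the value ranges, and the linear scaling are illustrative assumptions; the real mapping targets would be whatever the audio-visual engine exposes.

```python
from dataclasses import dataclass

# Hypothetical target ranges for the audio-visual engine.
PITCH_RANGE = (48, 84)      # MIDI note numbers
AMP_RANGE = (0.0, 1.0)

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] into [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

@dataclass
class SwarmObservation:
    """Data shared among agents, as listed in the notes."""
    contour_count: int
    contour_class: str        # e.g. "snake" or "swirl"
    flow_rate: float          # mean optical-flow magnitude (px/frame)
    flow_angle: float         # mean optical-flow direction (degrees)

def to_gesture(obs: SwarmObservation) -> dict:
    """Map one observation to compositional parameters ('gestures')."""
    return {
        "pitch": round(scale(obs.contour_count, 0, 50, *PITCH_RANGE)),
        "amplitude": scale(obs.flow_rate, 0.0, 10.0, *AMP_RANGE),
        "pan": scale(obs.flow_angle, 0.0, 360.0, -1.0, 1.0),
        "timbre": "granular" if obs.contour_class == "swirl" else "glissando",
    }

obs = SwarmObservation(contour_count=25, contour_class="snake",
                       flow_rate=5.0, flow_angle=90.0)
gesture = to_gesture(obs)
print(gesture)  # {'pitch': 66, 'amplitude': 0.5, 'pan': -0.5, 'timbre': 'glissando'}
```

Each agent in the multi-agent system could apply its own variant of `to_gesture`, so the same bacterial data yields different but coordinated musical and visual decisions.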

So these spawned agents get their information from the main/parent agent, which searches and predicts, sends data to the agents, and spawns new ones; that data gets mapped to sound and visuals. The spawned agents are therefore driven by data from the actual bacteria (as understood by the parent agent). [Simple version: contour detection on swarms: "snakes" and "swirls" are detected (using contour detection), put into a 3D world, and trigger text, sound, etc.]
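The parent/spawned-agent relationship in the simple version could be sketched as below. This assumes the parent receives (label, centroid) pairs from contour detection; the class names, the fixed z-coordinate, and the trigger payload are all placeholders for the 3D world and the text/sound output.

```python
class SpawnedAgent:
    """Child agent: renders one detected pattern as an audio-visual event.

    Placement in a hypothetical 3D world is taken from the contour's
    centroid; the trigger payload stands in for text/sound output.
    """
    def __init__(self, label, centroid):
        self.label = label
        self.position = (centroid[0], centroid[1], 0.0)  # z fixed in this sketch

    def trigger(self):
        return {"event": self.label, "position": self.position}

class ParentAgent:
    """Parent agent: interprets the bacterial data and spawns children."""
    def __init__(self):
        self.children = []

    def observe(self, detections):
        """detections: list of (label, (x, y)) pairs from contour detection."""
        for label, centroid in detections:
            if label in ("snake", "swirl"):      # patterns deemed interesting
                self.children.append(SpawnedAgent(label, centroid))
        return [child.trigger() for child in self.children]

parent = ParentAgent()
events = parent.observe([("snake", (12.0, 40.0)), ("swirl", (80.0, 5.0))])
print(events)
```

Here the parent's "understanding" is reduced to a label filter; in the full system it would include the generative model's growth predictions and the search for emerging patterns.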