Introduction
Self-organisation is used by many biological systems to drive the emergence of robust collective phenomena across large populations of cells, and is known to underpin processes such as tissue morphogenesis, collective motion and disease progression \cite{Gorochowski2020}. Engineering microagents – be they living cells or human-made microrobots – with similarly complex collective behaviours would have applications ranging from the design of new functional materials \cite{Slavkov2018} to novel biomedical therapies \cite{Alapan_2019,Hauert_2014}. The key component driving self-organisation is the ability of agents to react to their local environment and follow simple behavioural rules \cite{Brambilla_2013}. However, the behavioural rules that microagents follow are currently difficult to modify rapidly. As a stepping stone, in this work we propose to externally control each microagent and its reaction to the local environment, allowing for the rapid prototyping of behavioural rules that give rise to self-organisation. Light is perfectly suited to this task: at small scales it can be used to make and break bonds \cite{Chen2018}, power micromotors \cite{Palagi_2019}, alter shapes \cite{stoychev2019light}, drive the release of a cargo \cite{Erkoc_2018}, modify microenvironments \cite{Ruskowitz_2018}, and interact with light-sensing organisms \cite{J_kely_2008,Purcell_2008}, making it a powerful tool for microagent control. Moreover, its high spatio-temporal resolution makes light better suited to the simultaneous control of many agents than methods based on chemicals or magnetic fields.
Light-controlled microswarms have been shown to perform collective phototaxis \cite{Dai_2016}, self-assemble into active materials \cite{Schmidt2019}, and treat tumours \cite{Tao2020}, yet many of these systems rely on manual control of one or a few light stimuli, offering limited local control at the scale of large collectives. In a number of instances, closed-loop high-resolution spatio-temporal control has been demonstrated, achieving complex behaviours such as flocking \cite{Lavergne2019}, formation of sophisticated shapes \cite{Frangipane2018} and collective cargo transport \cite{Steager_2015}. However, realising this control typically requires a bespoke optical setup, making the approach inaccessible to most labs. Of the setups previously developed, very few are reproducible due to insufficient documentation, and many rely on components such as optical breadboards \cite{Lam_2017} or fluorescence microscopes \cite{Stirman_2012} to provide structural and mechanical functions, which can be costly and may require specialised expertise to install and operate.
To address these limitations, here we present the Dynamic Optical MicroEnvironment (DOME). The DOME is a fully integrated device that projects dynamic light patterns in response to the actions of light-reactive microagents, guiding their collective behaviour (Figure \ref{506762}). This is made possible by the DOME’s ability to continuously image a microsystem and feed this information into feedback control schemes that modify the projected light pattern in real time, interacting with and guiding the behaviour of individual agents. Furthermore, the DOME has been specifically designed to be low-cost, modular, and open source, allowing easy adaptation to new applications. It builds on existing open microscopy platforms by adding fine-grained light-based control \cite{Diederich_2020}.
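To make concrete the kind of behavioural rule the DOME is intended to prototype, a rule can be thought of as a function mapping each camera frame to the next light pattern. The following minimal Python sketch illustrates this idea; the function name, the threshold value and the assumption of greyscale NumPy images are ours for illustration and are not part of the DOME’s software:

\begin{verbatim}
import numpy as np

def illuminate_agents(frame):
    """Illustrative behavioural rule: project light onto
    regions where agents are detected, leaving the rest
    of the field dark.

    frame   -- greyscale camera image (H x W, uint8)
    returns -- light pattern of the same shape (uint8)
    """
    # Treat pixels darker than the background as agents;
    # the threshold of 100 is an arbitrary choice for
    # this sketch.
    agent_mask = frame < 100
    pattern = np.zeros_like(frame)
    pattern[agent_mask] = 255  # illuminate each agent
    return pattern
\end{verbatim}

Swapping in a different function of this form corresponds to prototyping a different behavioural rule, without any change to the underlying hardware.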
Materials and methods
The DOME (Figure \ref{650756}) consists of three major subsystems: dynamic light projection, real-time imaging, and computation of feedback control signals. The imaging module, which consists of an inverted microscopy setup and camera, observes changes in the microsystem, such as agent density or position, and communicates these changes to the projection module via feedback control. The projected light is then restructured in line with the new state of the system, creating what we term a ‘light-based augmented reality layer’ \cite{Denniss_2019} on the sample stage that can be used to influence the behaviour of any light-responsive microagents present.
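As a minimal sketch of this sense-compute-project loop, the following Python code couples a camera to a full-screen window standing in for the projection module, using OpenCV; the device index, window handling and loop structure are illustrative assumptions rather than the DOME’s actual implementation:

\begin{verbatim}
import cv2

def run_feedback_loop(rule, camera_index=0):
    """Continuously image the sample, apply a behavioural
    rule, and display the resulting pattern on the
    projection device.

    rule -- function mapping a greyscale frame to a light
            pattern (e.g. illuminate_agents above)
    """
    camera = cv2.VideoCapture(camera_index)
    cv2.namedWindow("projector", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("projector",
                          cv2.WND_PROP_FULLSCREEN,
                          cv2.WINDOW_FULLSCREEN)
    try:
        while True:
            ok, frame = camera.read()            # sense
            if not ok:
                break
            grey = cv2.cvtColor(frame,
                                cv2.COLOR_BGR2GRAY)
            pattern = rule(grey)                 # compute
            cv2.imshow("projector", pattern)     # project
            if cv2.waitKey(30) == 27:            # Esc stops
                break
    finally:
        camera.release()
        cv2.destroyAllWindows()
\end{verbatim}

In practice, the camera and projector coordinate frames must also be registered against one another before patterns can be targeted at individual agents; this calibration step is omitted from the sketch for brevity.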