Closed-loop computational setup

The projection module comprises the DLP LightCrafter Display 2000 Evaluation Module (Texas Instruments) interfaced with a Raspberry Pi Zero W through a custom PCB (Pi Zero W adapter board, Tindie). The imaging module comprises a Raspberry Pi 4, a Raspberry Pi camera and illumination LEDs. The Raspberry Pi 4 acts as the primary computing module and user interface, and can be connected to a monitor, mouse and keyboard or accessed remotely.

Crucial to the closed-loop control scheme of the DOME is two-way communication between the imaging and projection modules. Because the Raspberry Pi Zero's ports are occupied by its interface with the DLP unit, no physical connection between the two modules is available. Instead, the two Raspberry Pi computers are configured as nodes in an ad hoc wireless network. The network is established by editing the network interface files on both Raspberry Pis to specify the ad hoc connection and the IP addresses of both nodes. This configuration allows the two-way transfer of information required for closed-loop control, with the imaging module operating as a server and the projection module connecting as a client. The connection also enables the user to control the projection module from the imaging module via VNC: with the projection-module Pi running a VNC server and the imaging-module Pi running VNC Viewer, both desktops can be accessed and controlled from a single monitor, mouse and keyboard if needed.
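To make the server-client exchange concrete, a minimal sketch of the two-way link using Python TCP sockets is given below; the IP address, port number and JSON message format are illustrative assumptions and do not reproduce the exact protocol shipped with the DOME software.

# Minimal sketch of the imaging-projection link over the ad hoc network.
# The address, port and message format are assumptions for illustration.
import json
import socket

IMAGING_IP = "192.168.1.2"   # assumed IP of the imaging module (server)
PORT = 5005                  # assumed port

def run_imaging_server():
    """Imaging module: accept a connection from the projection module and
    exchange messages (e.g. light patterns out, acknowledgements in)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((IMAGING_IP, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            command = {"pattern": "calibration_square", "brightness": 255}
            conn.sendall(json.dumps(command).encode() + b"\n")
            reply = conn.recv(1024)   # acknowledgement from the projector
            print("projection module replied:", reply.decode().strip())

def run_projection_client():
    """Projection module: connect to the imaging module, receive a command
    and report back once the pattern has been displayed."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((IMAGING_IP, PORT))
        command = json.loads(cli.recv(1024).decode())
        # ...render the requested pattern on the DLP here...
        cli.sendall(b"ack\n")

In practice the imaging module would keep the connection open and send a new projector command on every control step, but a single request-acknowledge round trip is enough to illustrate the server and client roles.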

Calibration algorithm for the camera and projector

Because the camera sensor images the sample through a circular imaging column and lenses, raw camera images contain sizable “dead space” (areas containing no information): a raw frame appears as a black rectangle with a circular region at its centre in which the sample is visible. To increase image-processing efficiency and reduce file sizes, the first step of the calibration process is to crop the total FOV down to a rectangular area that fits inside this circular region of visibility. Contour detection is run to find the illuminated area, the largest square that fits within the detected contour is determined, and its parameters are written to a file in the format (centre-x, centre-y, width, height). This file can then be imported by all other programs to maintain consistency.

Critical to the operation of the DOME is the ability to translate coordinates in the camera frame of reference into the corresponding projector coordinates. For this, the camera space is mapped to the projector space through a calibration procedure. The first step locates approximately where in the projector space the camera is focused, using an iterative quadrant search. Once the appropriate sub-space has been found, a 4-point square is projected into this area and located in the camera frame using contour detection. From the resulting pairs of projector and camera coordinates, the parameters of a matrix transformation between the two spaces are extracted. The baseline calibration code is provided as part of the open-source DOME software.
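As an illustration of these two steps, a minimal sketch using OpenCV in Python is given below; the threshold value, the use of a minimum enclosing circle to find the visible region, and the choice of a four-point perspective transform are assumptions made for illustration and are not necessarily the exact routines provided with the DOME software.

# Minimal sketch of the calibration steps (OpenCV 4.x); parameter values and
# the perspective-transform model are illustrative assumptions.
import numpy as np
import cv2

def find_visible_square(raw_frame, threshold=30):
    """Locate the illuminated circular region of a raw frame and return the
    largest inscribed square as (centre_x, centre_y, width, height)."""
    gray = cv2.cvtColor(raw_frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    (cx, cy), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    side = int(radius * np.sqrt(2))      # largest square inside the circle
    return int(cx), int(cy), side, side

def camera_to_projector_map(camera_pts, projector_pts):
    """Given the four projected points as detected in the camera frame and
    their known projector coordinates, return a camera-to-projector mapping."""
    M = cv2.getPerspectiveTransform(np.float32(camera_pts),
                                    np.float32(projector_pts))
    def to_projector(points):
        pts = np.float32(points).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pts, M).reshape(-1, 2)
    return to_projector

# Example use: map a position detected in the cropped camera frame to the
# projector coordinates at which light should be targeted.
# to_proj = camera_to_projector_map(camera_corners, projector_corners)
# projector_xy = to_proj([(412.0, 377.5)])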