Methods and Protocols
The initial test phases consisted of sampling a reliable, scalable AI API that could be reused for image preprocessing and later extended to new recognition tests. An automated neural network design was configured through the IBM Watson Bluemix Visual Recognition API, which enables bulk image upload and broad recognition. The image recognition criteria were satisfied using online medical and radiological databases; in essence, sorting scans by high versus low amyloid/tau deposit frequency was necessary for classification. The first phase targeted broad feature classification (a simple positive/negative output). This was executed by amassing PET scans with highly present amyloid and tau deposits into a compressed ZIP folder: high-frequency tau and amyloid scans were allocated to a positive classifier, while benign and low-frequency cases were placed in the negative classifier. The API then used the newly created datasets to classify novel images, outputting a simple negative or positive diagnosis. Over the course of the sample phases, features and preprocessing techniques grew increasingly complex, moving toward discrete feature identification. Critical feature analysis (heat mapping) was completed by configuring a JavaScript (JS) image splitter that fed the resulting tiles into the API; after each tile was classified individually, the split JPG files were reassembled into the original image, providing critical-area detection. Efficient and scalable preprocessing was executed through the Pinetools© website platform.
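The split-classify-reassemble step above can be sketched as follows. The project's actual splitter was written in JavaScript and the per-tile classification was performed by the Watson API; this minimal Python sketch substitutes a placeholder intensity-threshold classifier (classify_tile is illustrative, not the project's code) so the grid logic is self-contained.

```python
def split_into_tiles(image, tile_h, tile_w):
    """Split a 2D pixel grid into (tile_row, tile_col, tile) triples."""
    tiles = []
    for r in range(0, len(image), tile_h):
        for c in range(0, len(image[0]), tile_w):
            tile = [row[c:c + tile_w] for row in image[r:r + tile_h]]
            tiles.append((r // tile_h, c // tile_w, tile))
    return tiles

def classify_tile(tile):
    """Placeholder for the API call: flag a tile as 'positive' when its
    mean intensity exceeds a threshold (a stand-in, not Watson output)."""
    pixels = [p for row in tile for p in row]
    return sum(pixels) / len(pixels) > 128

def heat_map(image, tile_h, tile_w):
    """Reassemble per-tile verdicts into a coarse critical-area map."""
    rows = len(image) // tile_h
    cols = len(image[0]) // tile_w
    grid = [[False] * cols for _ in range(rows)]
    for r, c, tile in split_into_tiles(image, tile_h, tile_w):
        grid[r][c] = classify_tile(tile)
    return grid

# 4x4 image split into 2x2 tiles: only the bright lower-right quadrant flags.
img = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
print(heat_map(img, 2, 2))  # [[False, False], [False, True]]
```

Mapping each tile's verdict back to its grid position is what turns a whole-image positive/negative output into coarse localization of the critical region.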
Image Collection and Verification
To obtain a substantial collection of images, the research team initially turned to a variety of online databases, such as the LONI Image Data Archive, the Alzheimer’s Disease Neuroimaging Initiative, and the NIH Data Sharing Repositories. Unfortunately, these open-access medical imaging repositories require a lengthy registration process, which was not suitable given the project’s time constraints and efficiency requirements. With this in mind, an image dataset archive was assembled on Google Drive for ease of access and efficiency. Data was collected primarily from Google Images, with the validity of each amyloid/tau PET scan closely scrutinized. All images were gathered from reliable sources, such as the Michelson Medical Research Foundation, Berkeley.edu, and Alzforum. The images selected were initially screened for high-contrast features, such as intense coloration and highlighting, that could be further enhanced by adjusting intensity and shaping (these techniques are elucidated further in the Image Preprocessing Protocols segment). This procedure of collecting and refining the images led to a more straightforward and accurate learning process for the AI, strengthening the validity of all diagnoses.
Image Preprocessing Protocols
As noted earlier in the background and literature review, image preprocessing was executed through an automatic function that processes pixel values in proportion to intensity levels. Because writing individual image-processing algorithms for each image would have been laborious, an online tool (Pinetools©) was used to adjust intensity parameters and maintain controlled variables (intensity of image processing, coloration, sharpening, etc.). All images were processed through this platform in three areas: image sharpening, gamma adjustment, and edge detection. Edge detection was applied at a constant level of 100% using the Sobel-Feldman method, while, based on preliminary data, gamma adjustment was set to an intensity of ~7.12. The image sharpening tool was set to 100% in association with edge detection, and default settings were applied (5×5 convolution mask). The resulting sample of the graphical dataset reveals critical image features made recognizable through effective preprocessing; the highlighting of red and blue regions facilitates the identification process in later testing phases. Parameters were not varied independently during trial experimentation and were held constant at high intensity. Sample images undergoing each test at constant intensity levels are shown in Table 1 below for both negative and positive classifiers.
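A minimal sketch of two of the preprocessing steps, gamma adjustment and Sobel-Feldman edge detection, is shown below on raw pixel grids. The exact transfer function Pinetools applies is not documented here, so the common form out = 255 · (in/255)^(1/γ) is assumed for the gamma step, with the ~7.12 intensity from the trials used as the default.

```python
import math

GAMMA = 7.12  # gamma intensity level from preliminary data (assumed mapping)

def gamma_adjust(image, gamma=GAMMA):
    """Brighten midtones: assumed form out = 255 * (in/255)**(1/gamma)."""
    return [[round(255 * (p / 255) ** (1 / gamma)) for p in row]
            for row in image]

# Standard Sobel-Feldman kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    """Gradient magnitude at interior pixels (borders left at zero)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical step edge: interior pixels adjacent to the step respond strongly.
edges = sobel_magnitude([[0, 0, 255, 255]] * 4)
print(edges[1][1] > 0)   # True: the edge is detected
print(edges[0][0] == 0)  # True: the flat border stays zero
```

With γ ≈ 7.12 the assumed mapping aggressively lifts midtone intensities while leaving black and white fixed, which is consistent with the high-contrast enhancement described above; the edge-detection pass then emphasizes the boundaries of the highlighted deposit regions.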