Mass spectrometry imaging has the unique ability to perform untargeted, spatial analysis of thousands of molecules in a single run. With improvements in instrument acquisition speeds and finer spatial resolution, the size of acquired data sets has increased dramatically. This has led to a need for sophisticated software tools that can compress data and rapidly handle analysis workflows to obtain meaningful biological conclusions. Additionally, the adoption of mass spectrometry imaging in clinical settings and pharmaceutical companies has created a great need to develop robust quantitation strategies, as well as to couple mass spectrometry imaging with other commonly used imaging modalities to answer new biological questions of interest. Here, we critically review the status of mass spectrometry imaging and discuss unique opportunities for new frontiers for mass spectrometry imaging in biomedicine.
1. Introduction:
Mass spectrometry imaging (MSI) is a powerful tool that allows untargeted investigation of the spatial distribution of molecular species of interest in a variety of sample sources. In a single experiment, it is capable of imaging thousands of molecules, such as metabolites, lipids, peptides, proteins, and glycans, without labeling. The combination of mass spectrometry with the ability to spatially analyze thin sample sections creates a chemical analysis tool useful for biological characterization, essentially creating a chemical microscope. In general, after proper sample preparation, an (x, y) grid is overlaid onto the thin sample section, with each square indicative of the spatial resolution dictated by the user. The MS instrument ionizes the molecules and collects a spectrum at each grid square on the section. After collecting all the spectra, computational software allows researchers to select a mass-to-charge (m/z) value from the overall, averaged spectrum for the tissue. The intensity of that m/z value from each grid point (i.e., spectrum) is then extracted and combined into a colorimetric image depicting the distribution of that m/z value (see the sketch below). To determine the identity of an m/z value, on-section fragmentation can be performed, and the fragments can be used to piece together the structure of the unknown molecule. Otherwise, accurate mass matching to databases can be done to confirm the identity of the molecule within a certain mass error range.
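As a concrete illustration of the ion-image step just described, the short Python sketch below builds an image from a hypothetical per-pixel spectrum array; the grid dimensions, toy intensities, and the `ion_image` helper are illustrative assumptions, not part of any specific software package.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical acquisition: a 50 x 40 pixel grid, each pixel holding a full
# mass spectrum recorded on a shared m/z axis (toy data for illustration).
n_rows, n_cols, n_channels = 50, 40, 2000
mz_axis = np.linspace(100, 1100, n_channels)
rng = np.random.default_rng(0)
spectra = rng.poisson(lam=5, size=(n_rows, n_cols, n_channels)).astype(float)

def ion_image(spectra, mz_axis, target_mz, tol=0.25):
    """Sum the intensity within +/- tol of target_mz for every pixel."""
    window = (mz_axis >= target_mz - tol) & (mz_axis <= target_mz + tol)
    return spectra[:, :, window].sum(axis=2)

# Extract and display the distribution of one m/z value across the section.
img = ion_image(spectra, mz_axis, target_mz=760.5)
plt.imshow(img, cmap="viridis")
plt.colorbar(label="intensity (a.u.)")
plt.title("m/z 760.5 ± 0.25")
plt.show()
```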
Based on technological advances in the past few years, MSI is becoming a more routine tool in clinical practice and the pharmaceutical industry. Advances include improvements in reproducible sample preparation and instrumentation that allow for high acquisition speeds and finer spatial resolution. Additionally, the ability to provide absolute quantitative information in MSI experiments bolsters its credibility. To handle the large imaging datasets produced by modern instrumentation, statistical workflows and machine learning algorithms have been implemented. MSI can also be combined with other imaging modalities, such as microscopy, Raman spectroscopy, and MRI, to complement the high chemical specificity of MSI with high resolution structural information, which can be applied to clinical readouts of patient diagnosis and prognosis. Additionally, researchers have been able to expand MSI methodology beyond 2-dimensional (2D) sections. With both hardware and software improvements, 3-dimensional (3D) renderings and even single cell resolution using MSI are emerging as future frontiers. With all the advances in this field, MSI is still evolving and requires continuous development to match the current demand.
Overall, the aim of this review is to provide an informative resource for those in the MSI community who are interested in improving MSI data quality and analysis. In particular, we discuss advances in sample preparation, instrumentation, quantitation, statistics, and multi-modal imaging that have allowed MSI to emerge as a powerful technique in the clinic. Several novel biological applications will also be highlighted.
2. Sample Preparation:
2.1 The Basics
As with any methodology, the most crucial step for analytical success is proper sample preparation. This is particularly true for mass spectrometry, as even subtle differences in anything from sample integrity to density can have profound effects on signal intensity, the types of molecules ionized, or their localizations. For example, in MSI, one of the greatest challenges is reducing delocalization of molecules, and this relies solely on proper sample preparation strategies. Researchers have even developed a new statistical scoring approach to determine sample preparation quality (independent assessment).
Universally, after any necessary dissection, samples require a step to halt enzyme activity to reduce degradation and delocalization of molecules. Classically, for MSI this means flash freezing, since many other preparations (e.g., formalin fixation (FF)) are not MS compatible for most molecular species, although some lipids are not cross-linked by formalin, allowing FF to preserve sample integrity for these species (inflation fixation). New method developments have made the abundant formalin-fixed paraffin embedded (FFPE) samples more accessible to MSI (see discussion below). Prior to sectioning, one unique preparation is the decellularization of tissues, allowing for improved signal from the extracellular matrix \cite{26505774}. Next, samples are sectioned thinly (6-20 micron), thaw mounted onto appropriate slides, and placed into a drying system (e.g., desiccator box). In many cases, tissues are fragile and do not section well without support, so many researchers have adopted embedding tissues prior to sectioning. Embedding media range from optimal cutting temperature (OCT) material to gelatin (precast molds, inflation fixation), but, as always, MS-compatibility is a concern. OCT, for example, is popular among histologists but tends to contaminate MS spectra and is thus not recommended. Because samples can flake or wash off the slide, O’Rourke et al. recommend coating the slide in nitrocellulose as a “glue-like” substance to help sections stay on the slides \cite{26212281}. One major assumption made here is that the samples described are 3D tissue samples, so these general steps do not apply to all samples. Researchers have found ways to image analytes in imprinted samples \cite{25914940}, plant roots \cite{26990111}, and even agar \cite{26959280}, \cite{26297185}. Others have gone beyond single tissues to whole body imaging, which has its own unique challenges \cite{26491885}.
Several different ionization techniques are compatible with MSI, although each requires a unique process to preserve the corresponding sample. Matrix-assisted laser desorption/ionization (MALDI) is the most popular ionization technique for MSI, especially for its ability to image both small and large molecules (e.g., metabolites and proteins) (localization of ginsenosides). Its requirement of a matrix for proper ionization and its production of predominantly singly charged ions often limit its applicability to larger proteins. This has prompted the development of laserspray ionization and unique matrices (e.g., 2-NPG), although these have not yet found their niche in imaging workflows (in situ characterization). Obviously, no one matrix, application method, or analyte extraction process works for all molecules, so optimization is important and will be discussed later in this review. Other varieties of MALDI MSI exist, including scanning microprobe MALDI (SMALDI) (phospholipid topography), IR-MALDESI \cite{27848143}, \cite{26402586}, and surface-assisted laser desorption/ionization (SALDI) \cite{26705612}, although they are not as popular. Other techniques worth noting include desorption electrospray ionization (DESI) and secondary ion mass spectrometry (SIMS), which require minimal sample preparation in comparison to MALDI \cite{26545296}, \cite{25799886}, \cite{27270864}, \cite{26419771}, \cite{26859000}. Unfortunately, each of these is more limited in the molecules it ionizes (peptides and metabolites, respectively). In the most general cases, both DESI and SIMS can be performed directly after sectioning, as they depend more on the instrument parameters for proper analyte extraction, although additional developments will be discussed. Even with all the ionization methods available, researchers are still developing new methodology, such as laser electrospray \cite{26931651}. Each ionization method has its own advantages and disadvantages, ranging from the molecules of interest to spatial resolution, the latter to be discussed further on in this review. Finally, after proper preparation and ionization, the instrument itself (e.g., mass analyzer) is important to consider before determining proper sample handling, and the confidence in being able to identify an analyte is just as important as the analyte being available for analysis.
2.2 Improving the Basics
2.2.1 Applying an Internal Standard
While evaluation of different tissues or different analytes within a tissue was accepted previously, appropriate normalization and internal standards are now expected if semi-quantitative comparisons are to be made. These standards can be introduced as early as dosing the animals/cells or as late as right before the ions enter the instrument (direct targeted, quantitative mass spectrometry imaging, detection and mapping). For MALDI, the standards are classically applied prior to matrix application using the same automatic sprayer systems described below \cite{26544763}, \cite{28193015}, \cite{27263025}. Chumbley et al. performed a comprehensive study to determine the proper placement of the standard (e.g., with matrix, under the tissue section, or sandwiching the section with matrix), and found that depositing the standard followed by matrix was optimal for the drug rifampicin \cite{26814665}. This protocol can also be applied to tissue sections used in DESI experiments (applying prior to analysis), or standards can be added directly to the DESI extraction solvent for inclusion into sample analysis \cite{26859000}.
2.2.2 Matrix Choice and Application (MALDI only)
For MALDI ionization, a matrix is required to allow proper ionization of the molecules of interest. As the matrix crystallizes, analytes are extracted and co-crystallized. If analytes are not incorporated into this crystal structure, it is unlikely they will be ionized and analyzed by the MS. Thus, the availability of the molecule, the matrix application, and the matrix itself can all affect this process. It should be noted that all of these preparations may be applicable to other ionization methods where appropriate. For some proteins, a fixation wash is necessary to make the molecules available for co-crystallization \cite{26505774}, \cite{26212281}. Carnoy’s solution is a common wash used for this purpose. Other washes, such as ammonium citrate, have also been utilized to analyze low molecular weight species. Besides washing, pre-spraying with solvents can also aid in the extraction of peptides. The combination of ammonium citrate washes and pre-spraying with cyclohexane proved effective in extracting clozapine from rat brain sections (pre-extraction). Vapor chambers have also been found to be effective, specifically TFA vapors for SIMS imaging of lipids \cite{25799886}.
Several matrices have found popularity for their “universal analysis,” including 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (CHCA), especially for metabolites and peptides in positive mode. A 1:1 mixture of these matrices is also commonly used \cite{26962105}. Also for positive mode, sinapinic acid has been well vetted for proteins. On the other hand, negative mode has been found useful for metabolites, for which 1,5-diaminonaphthalene (DAN) and 9-aminoacridine (9-AA) are the most accepted matrices \cite{28362367}. Based on the literature, little effort has been made toward developing or discovering new matrices for MALDI, although the use of water as a “matrix” in MALDESI has been demonstrated recently \cite{26402586}. Nanomaterials have also been utilized as an alternative, although these are considered a different ionization altogether (e.g., SALDI) \cite{26705612}. Matrix has also been used to enhance SIMS signals \cite{26419771}. Finally, since MALDI mainly produces singly-charged ions, some researchers have utilized matrices such as 2-NPG to produce multiply charged ions using a commercial MALDI source, although its quick sublimation does not allow for longer runs like imaging requires \cite{25273590}. In general, most of the focus in sample preparation has been on the matrix application process.
When applying matrix, the ideal method would provide appropriate analyte extraction, small crystal size, and homogeneous application. Unfortunately, no universal method exists. Classically, researchers sprayed matrix over the tissue slide using a painter’s airbrush. While this can be reproducible for an individual, person-to-person variability is high, and there is little adjustability. For example, the “wetness” of the application itself defines the extent of analyte extraction. An appropriate balance needs to be found, as an application that is too “wet” can cause molecular diffusion while a method that is too “dry” may not effectively extract the molecules. “Wet” versus “dry” methods also affect crystal size, with wetter methods tending toward larger crystals. Controlling substrate versus surrounding temperature has also been proposed to reduce heterogeneity, but this has only been applied to MALDI spots \cite{27126469}. Automated sprayers have allowed reproducible application methods across individuals and labs, and thus their popularity has grown in the last few years \cite{26922843}. Several application notes for different vendors exist, but researchers should take time to optimize their application methods on their specific systems. Interestingly, alternative ionization methods (SIMS) have been used to characterize analyte incorporation into spots, and, although difficult, imaging-based studies would be interesting \cite{26419771}. Homogeneous application has also been a major focus, and researchers have utilized alternative application methods to improve this facet in the last few years. One example is electrospray deposition, for which units tend to be homebuilt. This dry application method usually requires an additional “incorporation spray” after the matrix has been applied \cite{28263004}. Some electrospray devices allow control of the crystal size, which can directly relate to the spatial resolution achievable \cite{26016507}. Other methods have also benefited from the inclusion of an electric field in decreasing crystal size and increasing spatial resolution \cite{26016507}. Finally, the “driest” method used is sublimation, which is popular for its low cost, small crystal size, and high homogeneity. Commercial and partially modified apparatuses are widely published \cite{26212281}, \cite{26705612}, \cite{28362367}. Moving forward, when individuals want to use several matrices on a tissue section or perform staining, they tend to wash off the first matrix and reapply a new one, but this produces an expected signal loss and diffusion. As an alternative, using a commercial sprayer, Urbanek et al. have developed a multigrid MALDI (mMALDI) methodology, where different matrices are “printed” into predefined dots on a grid. By targeting these specific matrix dots during the imaging run, a researcher is able to gather multiple datasets (e.g., metabolites, peptides, and proteins) from a single tissue section without washing \cite{27039200}. Finally, with all the different variations in equipment and methodology, real emphasis needs to be placed on sharing automated matrix application methods and cross-lab communication to allow for reproducible results. The use of open source software and instrumentation is an example of this, although the ease of commercial instrumentation will continually compete with this notion \cite{25795163}.
2.2.3 Chemical Derivatization/On-Tissue Labeling
To those outside the field, mass spectrometry is seen as a “magic” technique, although several classes of molecules are difficult to ionize and thus to analyze directly by mass spectrometry. The concept of derivatizing molecules is commonly used in antibody-based techniques, and its inclusion in mass spectrometry sample preparation to aid in ionization was expected. The Girard T (GirT) reagent has been applied successfully to several steroids, including testosterone and triamcinolone acetonide \cite{27676129}, \cite{28193015}. Other steroids (e.g., THC) have also been targeted using 2-fluoro-1-methylpyridinium p-toluenesulfonate as a derivatization agent \cite{27648476}. N-glycans, fatty acids, and neurotransmitters have also all been targeted through unique on-tissue assays \cite{25453841}, \cite{27181709}, \cite{27145236}. Compared to traditional spraying of the reagent, which usually produces poor (>100 micron) spatial resolution, electrospray deposition has been successfully utilized to derivatize fatty acids at high (20 micron) spatial resolution \cite{27181709}. As is evident from the molecular species listed, most targets for derivatization are smaller molecules. This may be due to their poor ionization or to the fact that they are the only molecules that have so far been successfully derivatized on-tissue.
2.3 Specific Molecular Considerations
2.3.1 On-Tissue Digestion
Molecular imaging of proteins has been of major interest, but high mass resolution analysis of intact proteins has been out of reach due to the mass range limitations of current mass analyzers (e.g., Orbitraps), especially for MALDI. For extract analysis, this has been alleviated by the inclusion of an initial protein digestion, and on-tissue trypsin protocols have naturally been developed for MSI \cite{26544763}, \cite{26505774}. Please note that, as with every method developed, the steps should all be optimized for the tissue type \cite{26544763}, \cite{27485623}. For example, Heijs et al. have shown that different myelin basic protein fragments appear over longer trypsin incubation times \cite{26544763}. Until recently, trypsin digestion has been synonymous with on-tissue digestion experiments. With the recent boom of interest in glycans, PNGase F, which cleaves N-glycans, has found application in in situ digestion, and sequential enzyme application has even allowed the imaging of both glycans and protein fragments in one imaging run \cite{27373711}. Overall, while stains/immunolabelings are incredibly effective, they can be non-specific, and MALDI MSI provides an interesting cross-validation of labeling-based strategies. The trickiest part of in situ digestion is appropriately identifying the protein fragments. In some cases, on-tissue MS/MS is difficult depending on the instrumentation, and a complementary liquid chromatography experiment may need to be performed \cite{26505774} (decellularization, multimodal mass spec). It is worth noting that other ionization techniques (nanoDESI) allow for intact protein imaging up to 15 kDa on Orbitrap systems \cite{26509582}.
2.3.2 Formalin-Fixed Paraffin Embedded Samples
While freshly excised tissue is preferred, it is not always possible to obtain, especially for rare, human-based samples. Because FFPE tissues are widely available but not typically compatible with MS, researchers have been motivated to develop methods to release the analytes of interest so that these tissues can be imaged \cite{27791282}. As always, optimization for a given tissue type is important, and Oetjen et al. have provided a comprehensive, guided study to do this (an approach to optimize). Unfortunately, not all molecular species can be extracted from these tissues, although Pietrowska et al. have reported that lipids can be analyzed by avoiding paraffin embedding after fixing the tissue with formalin \cite{27001204}. Most commonly, proteins and peptides are targeted, mainly using the in situ digestion described above (tissue fixed with formalin, an approach to optimize sample prep). More recently, researchers have been able to extract metabolites and glycans \cite{27414759}, \cite{25804891}, \cite{27373711}. With more standardized protocols, the extensive FFPE samples available will be utilized more readily, allowing for a flood of new information to help guide researchers in future endeavors.
3. Developments in Instrumentation
MS imaging often requires specially developed instrumentation in order to address challenges unique to image acquisition, such as spatial resolution or surface homogeneity. Numerous advancements have been made in recent years to improve the quality and reproducibility of generated images. As the main distinction between imaging and LC-MS is the conservation of a spatial dimension, most instrumentation developments have focused on the ionization source, with several exceptions related to ion accumulation. The two main ionization methods for MSI are laser based and secondary ion based, and most of the progress in recent years has focused on these sources. As such, they will be the focus of discussion.
3.1 Laser-based ionization
3.1.1 Spatial Resolution
Arguably the most sought-after improvements in MSI are related to spatial resolution, which is the area of an imaged sample that comprises a single mass spectrum in an imaging acquisition. Improving the spatial resolution enables more discrete localization patterns to be observed throughout a tissue, but since improving spatial resolution decreases the area of tissue ionized, there is a tradeoff between spatial resolution and sensitivity. The resolution can be changed by adjusting the optics of the ionization source or otherwise changing the instrument’s geometry to decrease the laser diameter. Numerous groups have recently reported drastic improvements in spatial resolution. One paper reports a lateral spatial resolution of 1.4 micron on an atmospheric pressure MALDI source by adjusting its geometry, allowing for the visualization of subcellular lipid, metabolite, and peptide distributions \cite{27842060}. Another group achieved a spatial resolution of 5 microns on a vacuum pressure MALDI instrument using a simple modification to the optics. The system was easily interchangeable between various laser spot sizes, allowing for more flexibility in the tradeoff between sensitivity and resolution based on each individual experiment’s needs \cite{28050871}. These two papers highlight some of the recent advances in spatial resolution.
However, with the rapid developments in spatial resolution, it was found that spatial resolution was being defined differently between groups, instruments, and samples. As this makes it difficult to form a standard of comparison between methods and instruments, developing a universal method for both defining and measuring spatial resolution is crucial to proper data reporting and comparison of images acquired on different instruments, with different sample preparation methods, or by different users. Typically, the limiting factor in spatial resolution is the laser, as the laser width determines the ablation area. Therefore, investigations have looked into characterizing the ablation pattern in imaging experiments, particularly with MALDI-MSI, the most widespread imaging technique. It was found that laser ablation patterns follow a Gaussian distribution, with incomplete ionization around the outside of the pixel. Furthermore, laser ablation can “shear” matrix crystals, scattering debris across the sample. This finding led to the assertion that MSI resolution should be defined by (1) the homogeneity of the matrix crystals once they have been applied and co-crystallized with the analyte and (2) the effective ablation diameter of the laser {O'Rourke, 2017, The Characterization of Laser Ablation Patterns and a New Definition of Resolution in Matrix Assisted Laser Desorption Ionization Imaging Mass Spectrometry (MALDI-IMS)}. The hope is that this new definition will allow for more uniform reporting of spatial resolution between research laboratories on different instruments and with different sample preparation methodologies.
Several research groups have developed methods for measuring the actual spatial resolution achievable on an instrument, which can differ from the pixel size reported in the instrument acquisition parameters. A simple and effective way to do this is with a standard slide that can be used to determine the working spatial resolution of an instrument based on user-defined instrumental parameters. One group developed such a slide that incorporated a pattern of crystal violet using lithography in order to measure the beam diameter in MALDI-MSI experiments by visually inspecting the ablation pattern \cite{27299987}. Another slide for measuring spatial resolution was developed using a slightly different technique, in which a sample solution is dragged over the slide’s surface, allowing it to be automatically retained in hydrophilic grooves of the slide. The slide can then be imaged on the instrument in order to determine the lower threshold of the instrument’s spatial resolution \cite{26044268}. These devices can serve as a valuable method for testing the spatial resolution when adjusting instrumental parameters or performing quality assurance on images to ensure that proper resolution is being reported.
3.1.2 Matrix-free laser-based ionization
Though highly beneficial in many regards, MALDI MSI’s requirement for a matrix coating is often a major drawback in imaging experiments. Matrix application can be a limitation because it requires an additional sample preparation step, it suffers from poor homogeneity that can affect spatial resolution, and it results in excessive noise peaks in some ranges of the spectrum due to the interference of matrix ions. As a result, ionization sources are being developed to utilize laser ablation techniques without the requirement of matrix. Laser desorption post-ionization mass spectrometry, though still in its early stages of development, has demonstrated promising potential as a complementary tool for in situ localization and quantification. It has the benefit of not requiring matrix application or sample preparation, though currently its resolution and mass accuracy are 500 micron and 300 ppm, respectively, which is not competitive with commercial instruments \cite{28294229}. However, with further development, it may earn its place as a prominent ionization source. Another method for ionization without the application of matrix is nanophotonic laser desorption ionization, which ionizes analytes from a highly uniform silicon nanopost array \cite{26929010}. This method has achieved 40 micron spatial resolution for over 80 molecular species, giving it the potential to be competitive with MALDI upon further exploration.
3.1.3 Throughput
Another frequently cited challenge with MSI is the long analysis time typically required, which can range from several hours to several days, depending on the tissue area and pixel size. These long analysis times limit the practicality of MSI for routine applications, particularly in clinical settings. As a result, developments have been made to increase throughput without sacrificing image quality. One notable example utilized a solid state laser with a 5 kHz repetition rate to perform continuous laser raster sampling on a MALDI-TOF/TOF instrument. This method achieved an acquisition rate of up to 50 pixels per second, an 8 to 14-fold improvement over conventional lasers \cite{28239976}. Throughput becomes even more of a challenge when molecules in the same tissue ionize differently, thus requiring different polarities for acquisition. This is particularly the case with lipid analysis, as lipids are a diverse class with high structural variability. Methods have been developed for imaging in both positive and negative polarity while minimizing analysis time using high speed MALDI-MSI technology and precise laser control \cite{27041214}. The field is moving toward real-time imaging capabilities for immediate spatial analysis, for example to provide guidance during surgeries. As an example, Fowble and colleagues have applied a laser ablation imaging approach in ambient conditions in order to obtain spatial distributions of metabolites with a range of polarities in real time without the use of any matrix or sample pretreatment \cite{28234459}. Another method couples a picosecond IR laser to an ESI source in order to provide ambient MS imaging without causing thermal damage to tissue. This allows molecules to remain in their native state, allowing better insight into the tissue’s condition \cite{26561279}. These developments demonstrate great potential in moving MSI technology from laboratories to clinical settings for improved patient treatment.
3.2 SIMS
3.2.1 Resolution and Mass Accuracy
The other most common method of ionization is SIMS, which has seen notable improvements in instrumentation. In SIMS imaging, spatial resolution is often quite good, but at the expense of sensitivity. This is largely a consequence of the ion beam, either due to low ionization probability or beam focusing difficulties. An argon gas cluster ion beam is typically used for TOF-SIMS, but, despite its many benefits, it suffers from poor sensitivity, often causing a tradeoff between spatial resolution and mass resolution. Delayed extraction, a method widely used in MALDI, has been explored as a means to improve this tradeoff, although it often makes mass calibration difficult, resulting in poor mass accuracy. Nevertheless, it is becoming more prominent in TOF-SIMS imaging and has been shown to be successful in maintaining both high mass resolution and high spatial resolution \cite{26395603}. By implementing external mass calibration, the mass accuracy can also be preserved \cite{26861497}. Other groups have explored alternative primary ion sources, such as a CO2 cluster ion beam, which possesses many similarities to argon but improved the imaging resolution by more than a factor of 2 due to increased stability of the beam \cite{27324648}.
3.2.2 Parallel Imaging MS/MS
With the inferior mass resolution of SIMS compared to other ionization methods, the mass accuracy is usually not high enough to make confident identifications of detected molecules. Therefore, it is usually necessary to acquire MS2 spectra in order to make identifications. Collecting MS2 spectra is difficult in imaging experiments, however, because performing sequential MS2 scans after a full-MS scan causes misalignment between spectra and spatial information. To address this, parallel imaging MS/MS has been implemented, in which MS2 spectra are collected simultaneously with MS1 spectra using two mass analyzers. This acquisition method differs from traditional MS/MS acquisitions, in which all ions other than the precursor ions are discarded. As a result, MS1 and MS2 images are in perfect alignment with each other, allowing for more precise mapping of molecular distributions \cite{27181574}, {Fisher, 2016, Parallel imaging MS/MS TOF-SIMS instrument}. With fully optimized parallel imaging, identification confidence can be drastically improved without sacrificing the integrity of localization information.
3.2.3 Ambient/Low-vacuum TOF-SIMS
As MSI is very commonly used for the analysis of biological tissue, it is highly desirable for analyses to be conducted in near-native environments, such as in the presence of water, in order to get an accurate understanding of the chemical environment. Low-vacuum and ambient MALDI imaging have already been well-explored, but progress has recently been made with SIMS, denoted Wet-SIMS {Seki, 2016, Ambient analysis of liquid materials with Wet-SIMS}. Currently, the technique is able to acquire images at 80 Pa {Suzuki, 2016, Development of Low-vacuum SIMS instruments with large cluster Ion beam}. With further development, this technique could be used to analyze biomolecules in their native environment, allowing for analysis in biologically relevant experimental conditions.
3.3 Separation
A significant limitation of MS imaging compared to LC-MS analysis is the lack of separation capabilities, as retaining spatial information typically requires ablating all ions present in a pixel of sample at the same time for a single scan. This often leads to problems such as ion suppression, but techniques that allow post-ionization separation are being developed to overcome this challenge. To separate analytes from noise or undesired compounds, a simple sample cleanup step was incorporated into MALDI MSI by first introducing laser ablation with vacuum capture followed by C18 elution onto the MALDI target plate. The method demonstrated improved sample signal and decreased background interference compared to direct MALDI MSI, resulting in higher quality MS/MS data, cleaner spectra, and more confident identification power \cite{26374229}. For separation of analytes, ion mobility has been a popular choice, as it can be, and has been, seamlessly integrated into MALDI MSI workflows. It has also recently been demonstrated to be highly effective for ambient ionization techniques, such as LESA and DESI \cite{27228471}, \cite{27782388}. The results showed an increase in detected molecules and the ability to select specific classes to image. An alternative, pseudo-separation method has also been employed, in which subsequent MS scans covered differing m/z windows in order to detect low-intensity ions characteristic of specific ranges, providing the effect of gas-phase fractionation. By implementing a spiral plate motion during imaging, the integrity of spatial information was not lost with this method \cite{26438126}.
3.4 Depth profiling
Another challenge specific to imaging is achieving uniform ionization over the surface of the tissue, which is difficult if the tissue is not perfectly flat. While extra care in sample preparation can help alleviate this to an extent in some sample types, slight variations in the height of the tissue are often unavoidable. To remedy this, modifications to instruments have been made that allow for height correction. For example, a novel LAESI source was recently developed that incorporated a confocal distance sensor that both moved the sample to a constant height and recorded the height information to generate a topography map {Bartels, 2017, Mapping metabolites from rough terrain: laser ablation electrospray ionization on non-flat samples}. Another method combined shear force microscopy with a nano-DESI source to measure and adjust the voltage magnitude to enable a stable feedback signal over surfaces with complex topographies {Nguyen, 2017, Constant-Distance Mode Nanospray Desorption Electrospray Ionization Mass Spectrometry Imaging of Biological Samples with Complex Topography}. If uniform sampling can be ensured over the surface of a tissue, it not only preserves spatial integrity throughout the plane of the sample, but can also allow for three-dimensional imaging. With 3D imaging, it is imperative that the depth profile of the sample be preserved to ensure an accurate record of the tissue profile. Several significant advances have been made in this respect in the area of elemental imaging, such as the development of a femtosecond laser ionization source for multielemental imaging with 7 micron depth resolution \cite{27976851}. Submicron depth resolution, down to 20 nm, has been demonstrated using extreme ultraviolet laser light, allowing for 3D imaging of bacterial colonies \cite{25903827}. It is expected that these capabilities will continue to be developed and applied to 3D imaging of more complex systems.
4. Quantitation
4.1 Comparison to LC-ESI-MS/MS: The Past
With the push toward multi-modal imaging, it is clear that obtaining several pieces of information from a single tissue is imperative. While MSI is mainly qualitative, with the appropriate conditions, processing, and software, quantitative information can be extracted, although this is still under question. Items such as tissue inhomogeneity, ion suppression, and sample topography are all considered significant challenges in this field (aspects of quantitation). Before the development of quantitative MSI, the analytes of interest were separately extracted from another tissue section and run on a liquid chromatography (LC)-electrospray ionization (ESI)-based instrument, an approach still used regularly in MSI to aid in the identification of unknown, interesting m/z values \cite{27181709}. Once the absolute quantity of the analyte is calculated, these values can then be applied to the tissue of interest. This can also be a starting point for studies, allowing for more targeted imaging studies \cite{25542581}. This methodology is still present in the current literature, although it is more commonly utilized for confirmation of the MSI results, similar to a Western blot for other LC-MS quantitative results \cite{26814665}. Quantitative MSI is now expected, as many application-based MSI publications focus on the comparison between two or more sample types. With proper sample preparation, comparisons can be made with the appropriate considerations.
4.2 Relative
4.2.1 Direct Comparison (with or without Normalization)
As alluded to above, direct comparisons between different tissue sections are commonly done. While these “relative” comparison methods lean towards being “semi-quantitative,” several techniques and data processing strategies have perpetuated their use. For example, matrix effects and other interfering molecules tend to cause more deviation in the quantitative accuracy, although some researchers have shown that the correlation between MALDI-MSI and LC-MS/MS can be quantitative for fatty acids and proteins (on-tissue deriv, a proof of concept). While these assessments of different molecules in a single tissue are interesting, differences in ion suppression and ionization efficiencies between molecules should always be questioned, although the addition of an internal standard can aid in the normalization of the signal (spatial localization and quantitation). This can also be done with the same molecules within different tissues, where normalization again aids in more confident comparisons (spatial localization and quantitation). The inclusion of a normalization procedure in pre- and post-processing is now an expectation. This strategy is applicable to several other molecular species, including neurotransmitters, nucleotides, lipids, and tryptic peptides (direct targeted, MSI reveals, brain region specific). Almost all software available for MS imaging provides the ability to normalize. For example, the SciLS software tool allowed for normalization to the total ion current (TIC) before further statistical analysis (mass spectrometry imaging of metabolites). After differentiation, several metabolites were found to differ between the cortex, outer medulla, and inner medulla of the rat kidney between control and furosemide-treated animals (mass spectrometry imaging of metabolites). It should be noted that care should be taken when comparing different regions of a tissue, as their matrices can vary slightly (aspects of quantitation). There are, however, publications that still make comparisons without normalization (\cite{26475201}, pioneering ambient, imaging of proteins). Finally, software is obviously an important component in any imaging-based quantitative strategy, and Renslow et al. have further developed tools to help nanoSIMS transition from qualitative to quantitative analysis of elemental incorporation into biofilms (quantifying elemental incorporation).
4.2.2 On-tissue labeling – Using Reporter Ions
For ESI-based quantitation, two techniques are employed. Label-free quantitation directly compares samples in different runs, which is analogous to the “direct comparison” MSI described in the previous section. While label-free quantitation is commonly employed, instrument variability, instrument limitations, and other factors can lead to inconsistent and incorrect comparisons. In contrast, the incorporation of stable isotopes allows for same-spectrum relative quantitation, although its application to MSI is extremely limited. One example in the literature, entitled stable-isotope-label based mass spectrometric imaging (SILMSI), utilizes light and heavy chromogens to differentiate between different cancer biomarkers of interest (SILMSI). After labeling with a primary and secondary antibody, the addition of the chromogen produces an azo dye that, when ionized by the laser, fragments into distinct, duplex reporter ions. The ratio of these reporter ions can then be used to calculate relative abundance compared to another molecule, in this case the estrogen receptor and progesterone receptor (SILMSI). While classically reporter ions are seen in MS/MS spectra via isobaric labeling, this same concept is not applied in MSI experiments, not only due to the poor fragmentation of ions but likely due to the incompatibility of the methods for relative quantitation. In comparison, MS1-based labeling methods can easily be transitioned to on-tissue MSI applications, although the process of derivatizing molecules on-tissue has primarily been used for increasing ionization of different molecules (direct targeted, on-tissue derivatization, linkage-specific).
4.3 Absolute
4.3.1 Internal Standard
Whereas relative comparisons are commonplace, absolute quantitation is relatively underdeveloped. While obtaining the true concentration of a molecule is much more difficult, it is also more desirable, since it allows for true comparisons between different molecular species without worries about varying ionization efficiencies. As with ESI-based measurements, the easiest method is to incorporate a deuterated internal standard into the sample. As explained previously, internal standards are now being used extensively to normalize MSI data sets, and the inclusion of a very specific standard (e.g., a deuterated version of an analyte of interest) facilitates absolute quantitation of that analyte. This has been done primarily for DESI samples, with the standards incorporated into the solvent stream (quantitative mass spectrometry imaging of small). A minimal sketch of this single-point approach appears below.
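The sketch assumes, purely for illustration, equal response factors for the analyte and its deuterated standard and a homogeneously applied standard of known amount; all values are hypothetical.

```python
import numpy as np

# Per-pixel intensities of the analyte and of a deuterated internal standard
# (IS) applied homogeneously at a known surface density (toy values).
analyte = np.array([[120.0, 80.0], [200.0, 60.0]])
internal_standard = np.array([[100.0, 95.0], [105.0, 98.0]])
is_amount = 2.0  # pmol/mm^2 of IS applied to the section (assumed known)

# The IS corrects pixel-to-pixel ion suppression, and its known amount
# converts the intensity ratio into an absolute amount per pixel.
amount_map = analyte / internal_standard * is_amount
print(amount_map)
```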
4.3.2 Calibration Curve
In general, the creation of a calibration curve is the most confident way to obtain the absolute quantity of an analyte. This has been done with ESI in separate runs and in the same run (iDiLeu). One might initially assume that producing an external, separately spotted calibration curve would work for MALDI, but the lack of sample matrix and the matrix heterogeneity lead to inaccurate concentrations. Thus, researchers have adopted an on-tissue spotting technique that takes both of these considerations into account. The standard of interest (isotopic or non-isotopic) is spotted/applied on a separate, “control” section (absolute quantitation, direct targeted, direct imaging). This section is usually a serial section of the one being analyzed, as having the same matrix is important for accurate quantitation (aspects of quantitation). For example, many researchers choose liver tissue for initial optimization studies, as it is considered extremely homogeneous (aspects of quantitation, absolute quantitation). Interestingly, in the case of elemental analysis, before spotting on the sample, the sections are washed to remove excess elements (e.g., sodium) (direct imaging). To increase homogeneity of the areas where the standards are placed, researchers have developed methods where the standards are spiked into tissue homogenates themselves. These samples are then placed into a mold, frozen, sectioned, and placed near the imaged section; quantitation accuracy is similar, although it was noted that the dried droplet spotting method referenced above is much faster and easier (aspects of quantitation). All of these methods require sophisticated computational tools, and several software packages exist for processing region of interest quantitation (MSiReader, msIQuant). msIQuant, for example, has been used to absolutely quantify drugs and neurotransmitters (msIQuant). A sketch of the curve-fitting step follows.
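To show the idea in code, the sketch below fits a least-squares line through a hypothetical dilution series spotted on a control serial section and inverts it to quantify a region of interest; the numbers and the `quantify` helper are assumptions for illustration.

```python
import numpy as np

# Hypothetical dilution series spotted onto a control serial section:
# known amounts per spot and the measured intensity in each spot,
# normalized to a co-applied internal standard.
amounts = np.array([0.5, 1.0, 2.5, 5.0, 10.0])          # pmol per spot
intensities = np.array([0.11, 0.22, 0.57, 1.09, 2.21])  # analyte/IS ratio

# Least-squares calibration line through the spotted standards.
slope, intercept = np.polyfit(amounts, intensities, deg=1)

def quantify(pixel_ratio):
    """Invert the calibration line: normalized intensity -> amount (pmol)."""
    return (pixel_ratio - intercept) / slope

print(f"ROI mean ratio 0.80 -> {quantify(0.80):.2f} pmol")
```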
5. Data Analysis
MSI data is difficult to process for a number of reasons, including the large size of the data files and the high degree of dimensionality, as acquisitions retain spatial information as well as spectral information. This is becoming more of a problem as increases in spatial resolution cause exponential growth in data file sizes. As such, key software developments have been made to address these challenges and ensure that effective analyses can be done without the loss of valuable information in the process.
5.1 Visualization
The most important information obtained from an imaging experiment is a visualization of the distribution of various molecules throughout the tissue. As each pixel of an imaging experiment contains an entire mass spectrum, special software is required to handle this specific need in the field. While there have been numerous advancements in this respect, the influx of progress caused a lack of uniformity. This means that typically the software could not be applied to large data sets, expensive commercial software was required, or the software required the end user to have some degree of programming knowledge to fit his or her data to the software input. However, recent efforts have been made to design open-source visualization tools that are user-friendly and applicable to multiple instrument platforms, particularly in the area of LA-ICP-MS, which is not as routinely implemented as MALDI-MSI or TOF-SIMS {Sforna, 2017, MapIT!: a simple and user-friendly MATLAB script to elaborate elemental distribution images from LA-ICP-MS data}, \cite{27917244}, {Uerlings, 2016, Reconstruction of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) spatial distribution images in Microsoft Excel 2007}. MSiReader is a key player in open source visualization, providing both a graphical user interface and MATLAB open source code for users \cite{23536269}. Additionally, even open source microscopy imaging software like ImageJ has plugin scripts capable of handling MSI data sets \cite{22347386}. These new open source tools show promise for making the processing of imaging data more widely accessible and customizable to the broader mass spectrometry imaging community.
New methods have also been explored for expanding the capabilities of visualization tools. For example, 3D MALDI imaging has been limited by the inability to reconstruct 3D images, but Patterson and colleagues designed an open-source method for 3D reconstruction using multivariate segmentation \cite{26958804}. Others have expanded the knowledge gained in a different direction: instead of using imaging to track a single molecule, they developed a tool to view the localization of biological indices (e.g., energy charge to indicate the energy status of the cell), mapping the relationship between several specified molecules \cite{27542771}.
An important note is that careful visualization of MSI data is critical to ensuring that the image shown is an accurate representation of the molecular distribution. It has been found that cropping images to eliminate background can cause the emergence of distribution patterns not observed in the entire image. As a result, data can become skewed if the analyzed area is too small and does not contain sufficient background area for reference \cite{27730748}. With MS imaging making an increasing presence in biomedical applications as a diagnostic tool, appropriate representation and statistical analysis of visual data is essential.
5.2 Preprocessing
Prior to data processing, several steps can be taken to ensure accurate and efficient data analysis. These steps include normalization, baseline correction, spectral recalibration, smoothing, and data compression. Normalization is considered required for data analysis, while baseline correction, spectral recalibration, smoothing, and data compression (unsupervised and supervised) are considered optional, but may be necessary depending on the chosen statistical analysis and the mass spectrometry instrumentation used to collect the data \cite{17541451}. The choice of preprocessing steps can also depend on the analytical and biological aims of an individual project. Overall, preprocessing can help reduce experimental variance within the data set and helps draw meaningful conclusions from subsequent statistical analysis.
5.2.1 Normalization
Normalization is used to remove systematic artifacts that can affect the mass spectra \cite{21479971}. Sample preparation, matrix application, ion suppression, and differential ionization efficiencies in complex samples can all influence the peak intensities of mass spectra. Some of these random effects in data acquisition can be minimized by proper normalization. Not applying normalization can introduce misleading artifacts and ultimately depict inaccurate ion distributions, statistical analyses, and conclusions about biological significance. There are a few different normalization methods for mass spectrometry imaging data sets, chosen based on the purpose of the analysis. Normalization to the total ion current (TIC) is the most commonly implemented method. Normalization to the TIC ensures that all spectra have the same integrated area and is based on the assumption that there is a comparable number of signals in each spectrum \cite{17541451}, \cite{22148759}. However, in an imaging experiment, it cannot always be assumed that this condition is met. TIC normalization can improve the ability to compare expression levels across samples of similar type; however, it is not applicable when comparing very different tissue types. In addition to normalization to the TIC for similar sample types, the TIC normalized data can be further normalized to matrix-related peaks for MALDI imaging experiments to correct for uneven matrix coating. This may be necessary depending on how the matrix is applied to the sample. For example, manual airbrush matrix application cannot produce as homogeneous a crystal coating across the whole tissue as matrix applied with an automated sprayer or automated microspotter \cite{25331774}. For samples with different tissue types, such as whole body imaging, an externally applied labeled calibration molecule similar to the compound of interest is ideally used as a reference molecule and is applied during matrix application. For this normalization method, each spectrum is normalized to the intensity of the reference molecule for analysis. Choice of reference molecule can be complicated by deposition methods and choice of compound, and may require optimization. Normalization to an internal standard reduces the impact of ion suppression that arises from tissue inhomogeneity and improves pixel-to-pixel variability. TIC normalization is not recommended for whole body imaging or for differing sample compositions, where internal standard normalization is considered the best option \cite{25318460}. Other options include normalization to an endogenous molecule that is expected to be consistently expressed throughout the whole tissue, such as a phospholipid head group. Additionally, some researchers have calculated tissue extinction coefficients or relative response factors to determine the relative amount of a compound in whole body imaging or across different tissue types. The tissue extinction coefficient takes into account ion suppression related to the compound of interest and the tissue of interest and is then compared to LC-MS/MS data \cite{22842155}. The advantage of this method is that no expensive labeled standards of the compounds of interest are needed, although the accuracy of tissue extinction coefficients is still being investigated. The two most common schemes are sketched below.
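A minimal sketch of TIC and internal standard normalization (the array shapes and the channel index for the standard are illustrative assumptions):

```python
import numpy as np

def tic_normalize(spectra):
    """TIC normalization: scale each pixel's spectrum so its integrated
    intensity is 1. `spectra` has shape (n_pixels, n_channels)."""
    tic = spectra.sum(axis=1, keepdims=True)
    tic[tic == 0] = 1.0  # leave empty pixels unchanged rather than divide by 0
    return spectra / tic

def internal_standard_normalize(spectra, is_channel):
    """Normalize each spectrum to the intensity of an internal standard peak
    (indexed by its channel), preferred for whole body imaging."""
    ref = spectra[:, [is_channel]].astype(float)
    ref[ref == 0] = 1.0
    return spectra / ref
```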
5.2.2 Baseline Correction
Spectra resulting from imaging experiments can be noisy, with large variations in spectral intensity, even within the same sample. Noise in the baseline can affect peak detection algorithms and sample-to-sample comparisons. Typically, a baseline algorithm is implemented in the data preprocessing step to reduce this noise prior to statistical analysis. Baseline noise occurs because at low m/z values there is more chemical noise, leading to a higher baseline than at higher m/z values \cite{17541451}. The effect of chemical noise can be suppressed by estimating the baseline and then subtracting it using a polynomial function or moving average. A new baseline is calculated and signal levels are adjusted. New sliding window baseline algorithms are being developed with automatic adjustments based on mass range \cite{27980460}. Even after correction, a residual baseline might still be observable in the low m/z ranges. The choice of baseline algorithm that best reduces the baseline of an individual dataset may depend on the complexity and acquisition of the data. A simple sliding-window scheme is sketched below.
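A moving-window baseline subtraction might look like the following sketch; the window width and the rolling-minimum scheme are illustrative choices, not a specific published algorithm.

```python
import numpy as np

def estimate_baseline(spectrum, window=101):
    """Estimate the baseline as a moving minimum over a sliding window,
    then smooth the estimate with a moving average of the same width."""
    n, half = len(spectrum), window // 2
    padded = np.pad(spectrum, half, mode="edge")
    mins = np.array([padded[i:i + window].min() for i in range(n)])
    kernel = np.ones(window) / window
    return np.convolve(np.pad(mins, half, mode="edge"), kernel, mode="valid")

def baseline_correct(spectrum, window=101):
    """Subtract the estimated baseline and clip negative intensities."""
    return np.clip(spectrum - estimate_baseline(spectrum, window), 0, None)
```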
5.2.3 Spectral recalibration
Spectral recalibration, also called spectral realignment, is applied after data acquisition as a method to improve the mass accuracy of the data by calibrating to one or several internal standards to realign the spectra. Such an internal standard must be present in at least 90% of the spectra to be used for recalibration. Some use a matrix peak for MALDI imaging, consistently expressed m/z values in the tissue, or an applied internal standard to perform the recalibration, as in Heijs et al. \cite{26544763}. Multiple internal standard peaks should be selected if a large mass range is used in the data setup. The spectra are then realigned using a quadratic calibration algorithm based on the median value of the selected peaks. This typically results in a 5-10 fold reduction of the range of centroid values following alignment \cite{17541451}. Spectral recalibration is especially important for instruments with low mass accuracy, such as a linear TOF instrument, where one might expect a 100-200 ppm mass accuracy \cite{17541451}. This can be compared with an Orbitrap, where one would expect 5 ppm mass accuracy and spectral recalibration is not as necessary for m/z identifications. However, spectral recalibration can also help to correct for irregularities in tissue thickness, which can further magnify variations in the mass measurement. A quadratic recalibration can be sketched as follows.
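The sketch below fits a quadratic mapping between observed and theoretical calibrant masses and applies it to the m/z axis; the calibrant values are hypothetical.

```python
import numpy as np

# Hypothetical calibrants: theoretical m/z of reference peaks (e.g., matrix
# ions or a sprayed standard) and their observed centroids in one spectrum.
theoretical = np.array([379.0930, 760.5851, 1046.5418, 1570.6774])
observed = np.array([379.1121, 760.6203, 1046.5902, 1570.7512])

# Fit a quadratic mapping from observed to theoretical m/z.
coeffs = np.polyfit(observed, theoretical, deg=2)

def recalibrate(mz_axis):
    """Apply the fitted quadratic correction to a full m/z axis."""
    return np.polyval(coeffs, mz_axis)

ppm_before = (observed - theoretical) / theoretical * 1e6
ppm_after = (recalibrate(observed) - theoretical) / theoretical * 1e6
print("mass error before (ppm):", np.round(ppm_before, 1))
print("mass error after  (ppm):", np.round(ppm_after, 1))
```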
5.2.4 Smoothing
The application of a smoothing algorithm can reduce fluctuations by increasing the signal-to-noise ratio. Mass spectrometry imaging can produce salt-and-pepper noise, in which sharp, sudden disturbances appear in image pixels that do not correspond to the signal seen in surrounding pixels. To help reduce these sudden fluctuations, denoising algorithms are applied to reduce pixel-to-pixel variability and to allow the local scale of features to be resolved. Commonly used algorithms include (1) Savitzky-Golay smoothing \cite{27791282}, \cite{27256770}, which fits successive windows of data points with a low-degree polynomial, using the polynomial order and the number of points to compute a smoothed output value, and (2) boxcar smoothing \cite{22743164} (also known as moving average smoothing), which replaces each data point with the average of neighboring values \cite{26680279}. These work to reduce noisy data sets with significant inter-pixel variation; both are sketched below.
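Both filters are available off the shelf, for example in SciPy; the sketch below applies them, plus a median filter that handles isolated spikes well, to a toy spectrum (the peak shape and noise levels are illustrative assumptions).

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter

# Toy spectrum: one Gaussian peak plus broadband noise and isolated spikes.
rng = np.random.default_rng(1)
x = np.arange(1000)
spectrum = 100 * np.exp(-0.5 * ((x - 500) / 20) ** 2)
spectrum += rng.normal(0, 2, x.size)            # broadband noise
spectrum[rng.integers(0, x.size, 10)] += 50     # salt-and-pepper spikes

# (1) Savitzky-Golay: least-squares polynomial fit over a sliding window.
smooth_sg = savgol_filter(spectrum, window_length=11, polyorder=3)

# (2) Boxcar / moving average: mean of each point's neighborhood.
smooth_box = np.convolve(spectrum, np.ones(11) / 11, mode="same")

# A median filter is often more robust to isolated salt-and-pepper spikes.
smooth_med = medfilt(spectrum, kernel_size=11)
```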
5.2.5 Unsupervised data compression
As MSI acquisitions tend to create large data files (up to several terabytes per sample), data processing becomes more difficult and requires more strenuous computational methods. To alleviate this problem and make the data files easier to handle and distribute, several compression strategies have been implemented to reduce the size of the data while still retaining the important information. Binning mass spectra for each pixel of an imaged tissue and compression based on region of interest (ROI) are the most successful methods, with ROI compression requiring the least computation \cite{28842033}. Autoencoders have also been useful for unsupervised non-linear dimensionality reduction of imaging data by reducing each pixel's spectrum to its core features {Thomas, 2016, Dimensionality Reduction of Mass Spectrometry Imaging Data using Autoencoders}. Once the size of the data has been reduced, it can be more easily processed in subsequent steps of the processing pipeline. A minimal binning sketch is shown below.
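The sketch below compresses spectra into fixed-width m/z bins; the bin width and array shapes are illustrative assumptions.

```python
import numpy as np

def bin_spectra(spectra, mz_axis, bin_width=0.1):
    """Compress spectra by summing intensities into fixed-width m/z bins;
    shape goes from (n_pixels, n_channels) to (n_pixels, n_bins)."""
    edges = np.arange(mz_axis.min(), mz_axis.max() + bin_width, bin_width)
    idx = np.digitize(mz_axis, edges) - 1  # bin index of every m/z channel
    n_bins = len(edges) - 1
    binned = np.zeros((spectra.shape[0], n_bins))
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            binned[:, b] = spectra[:, mask].sum(axis=1)
    return binned, edges
```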
Unsupervised clustering of the data is also used to compress data into features for statistical analysis. Unsupervised analysis can be divided into manual, component, or segmentation analysis. Manual analysis is carried out by selecting m/z values unique to the region of interest, extracting the image for each single m/z, and manually cataloguing them. Component analysis requires a statistical or machine learning algorithm to cluster the data. Principal component analysis (PCA) is used to reduce the dimensionality of the dataset by converting possibly correlated variables into a set of linearly uncorrelated values, known as principal components. PCA is an unsupervised statistical method that distinguishes the principal components responsible for the greatest variance in the data. PCA plots the principal component that captures the greatest variation on the x-axis and the principal component that captures the second greatest variation on the y-axis to induce groupings of related pixels in the data sets \cite{21980364}. PCA can also be used to remove signals that are poorly connected with variability between groups.

Spatial segmentation helps bin similar spectra into regions of interest and identify co-localized m/z values. Hierarchical clustering segmentation partitions the image into its constituent regions at hierarchical levels of allowable dissimilarity between regions; it only requires a similarity measure between groups of data points. Hierarchical clustering rearranges multiple variables to visualize possible groups in the data, providing the possibility of rapidly identifying specific markers from different histological samples. It classifies the mass spectra according to similarities between their profiles and thus provides the ability to highlight regions containing differences in molecular content. Another segmentation method is k-means clustering, the most commonly used for mass spectrometry imaging. K-means partitions the n observations into k clusters based on the distances between mass spectra; following clustering, each observation belongs to the cluster with the nearest mean. K-means clustering can be used to create spatially localized clusters to which feature extraction can be applied (Winderbaum, 2015, "Feature Extraction for Proteomics Imaging Mass Spectrometry Data"). Bisecting k-means is a combination of k-means and hierarchical clustering, although computationally more complex: it applies k-means repeatedly to the parent cluster to determine the best possible split into the next two daughter clusters, yielding more uniformly sized clusters. These methods can help detect important, biologically relevant features that may otherwise go undetected due to the difficulty of extracting information and segmenting large data sets so that statistical analysis is computationally reasonable. Cardinal, an R-based statistics package, can be used for data compression and statistical analysis \cite{25777525}.
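As a hedged illustration of the component-plus-segmentation workflow, the following sketch applies PCA and k-means to a placeholder pixels-by-features matrix; the component and cluster counts are arbitrary assumptions, not recommendations.
\begin{verbatim}
# PCA followed by k-means segmentation on a (pixels x m/z bins) matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((4096, 500))      # placeholder for real binned spectra

scores = PCA(n_components=10).fit_transform(X)  # component analysis
labels = KMeans(n_clusters=5, n_init=10).fit_predict(scores)
segmentation = labels.reshape(64, 64)  # clusters back on the (x, y) grid
\end{verbatim}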
5.2.6 Supervised data compression
Supervised clustering is better suited when a specified set of classes is known and the goal is to classify a new data set into one of those classes. Supervised methods use predefined classes or categories, while unsupervised methods use similarity between spectra to generate classes. Supervised classification is used to determine whether the groups are actually different, and which m/z values best differentiate the groups. Some studies use both supervised and unsupervised statistical analysis \cite{28361385}. Partial least squares regression is a supervised classification method in which classes of data are annotated with known labels \cite{25462628}. Partial least squares regression is similar to PCA; however, instead of separating into components based on the maximum variance, it uses a linear regression model to project predicted variables and observable variables into a new space. This type of supervised clustering requires a training data set for the classification of groups.
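A minimal PLS-DA-style sketch follows, assuming binary class labels and a spectra-by-features training matrix (both placeholders); thresholding the continuous PLS prediction at 0.5 is one simple way to recover class assignments.
\begin{verbatim}
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.random((100, 500))    # spectra x m/z features (training data)
y = rng.integers(0, 2, 100)   # known labels, e.g. tumor (1) vs. normal (0)

pls = PLSRegression(n_components=2).fit(X, y)
predicted = (pls.predict(X).ravel() > 0.5).astype(int)  # hard class calls
\end{verbatim}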
Both supervised and unsupervised classification methods reduce the data down to the most important m/z value distributions. Data compression projects the data onto a lower-dimensional subspace while maintaining the essence of the data for statistical analysis. With the high dimensionality associated with MS imaging data, especially of biomedical samples, extracting important, relevant features becomes increasingly difficult. Machine learning algorithms for feature detection developed for LC-MS data can be limiting with imaging data, as they do not account for differences in spatial regions of the tissue of interest. A context-aware feature-mapping machine learning algorithm was recently developed that takes the spatial region of features into account when ranking them \cite{27764717}.
5.3 Statistical Analysis
5.3.1 Tests of Significance
Statistical analysis of large imaging data sets is incredibly important for the implementation and utility of mass spectrometry imaging. Determining whether samples differ significantly involves statistical hypothesis testing of whether a certain difference exists between samples or between spatial regions of the tissues. Univariate analysis tests whether one m/z value, corresponding to a compound of interest, differs between samples. If the data have a Gaussian distribution, a t-test is used to determine the difference between two samples, and ANOVA is used to determine if there is any difference within a group of samples \cite{27485623} \cite{Marczyk_2015}. A Gaussian distribution of mean intensities cannot be assumed for clinical samples; mean values may still be used if the central limit theorem is satisfied. If the data have a non-Gaussian distribution, nonparametric tests like the Mann-Whitney U-test can be used as a statistical test of the hypothesis. These tests are useful for finding peaks with an observable change, caused by the experimental design, between different regions or experimental conditions \cite{25877011}.
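The corresponding tests are readily available in scipy; the sketch below runs them on synthetic intensity vectors standing in for one m/z value measured across pixels in different groups.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(10, 2, 50)   # intensities of one m/z in condition A
group_b = rng.normal(12, 2, 50)   # ... in condition B
group_c = rng.normal(11, 2, 50)   # ... in condition C

t_stat, p_t = stats.ttest_ind(group_a, group_b)          # Gaussian, 2 groups
f_stat, p_f = stats.f_oneway(group_a, group_b, group_c)  # ANOVA, >2 groups
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)       # nonparametric
\end{verbatim}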
5.3.2 Discriminant Analysis
Data reduction methods such as PCA or PLS are preprocessing steps for discriminant analysis; these analyses are commonly performed together and abbreviated PCA-DA or PLS-DA, respectively. Discriminant analysis is a statistical tool to assess the adequacy of a classification system. For any kind of discriminant analysis, the groups need to be assigned beforehand or, in the case of PCA, preprocessed prior to discriminant analysis. Discriminant analysis is particularly useful in determining whether a set of variables is effective in predicting category membership. This is different from an ANOVA or multivariate ANOVA, which is used to predict one or more continuous dependent variables from one or more categorical independent variables \cite{26604989}.
5.3.3 Biomarker Tests
Even if statistical differences exist between two conditions for a single m/z, this does not necessarily mean that this m/z value can act as a biomarker to distinguish the two classes. For univariate biomarker analysis, to confirm whether an m/z can be used as a diagnostic test to distinguish two regions of interest, a receiver operating characteristic (ROC) curve analysis is performed. In ROC analysis, the true positive rate (sensitivity) is plotted as a function of the false positive rate (1 - specificity) \cite{20978390} \cite{20821157} \cite{16550707}. The area under the curve (AUC) of these plots indicates whether the m/z marker can be used for diagnostics. This is a test of accuracy, where an AUC value between 0.90-1.0 is excellent, 0.80-0.90 is good, 0.70-0.80 is fair, 0.60-0.70 is poor, and 0.50-0.60 is a failed test. This test is used to assess the ability of a specific marker (m/z) to correctly classify groups of interest. MALDI imaging was used to reveal thymosin beta-4 as an independent biomarker in flash-frozen colorectal cancer compared with normal tissue, using ClinProTools software to perform the ROC analysis \cite{26556858}.
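A minimal sketch of such an ROC analysis, on synthetic labels and intensities, is shown below using scikit-learn.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
labels = rng.integers(0, 2, 200)                  # 1 = disease, 0 = normal
intensity = labels * 1.5 + rng.normal(0, 1, 200)  # candidate m/z intensities

auc = roc_auc_score(labels, intensity)            # diagnostic accuracy
fpr, tpr, thresholds = roc_curve(labels, intensity)
\end{verbatim}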
However, often in biomarker discovery one biomarker is not able to correctly classify groups with an AUC high enough for clinical analysis. In this case, multiple biomarkers (multiple m/z values) are used for analysis; this is known as multivariate analysis. Here, machine learning is used to examine multiple biomarkers and look for correlated structures in the mass spectra that also correlate with the target outcome. This multivariate analysis provides a single ROC curve that is derived from multiple biomarkers. Additionally, an indicator of how much each m/z contributes to the score from the resulting algorithm is calculated for each m/z value \cite{7628115} \cite{23054242}. For regression-based methods such as PLS, the importance of an m/z value is a direct result of the model's loading vector. Additionally, colocalization of two individual m/z values in a tissue can be calculated in a correlation analysis to see how well m/z components of the multivariate analysis align based on spatial distributions \cite{18570456}. One problem for mass spectrometry imaging is that salt adducts of the m/z values of interest are identified separately. Therefore, in biomarker analysis, it would be ideal to combine m/z values corresponding to the same molecular compound into a single peak for analysis. For instance, two m/z values separated by 21.982 Da are indicative of the same compound carrying a sodium ion in place of a proton ([M+Na]+ versus [M+H]+). This can also happen for potassium salts, the loss of ammonia, the loss of water, oxidation of methionine, and other common modifications. This can complicate identification and statistical analysis as well as univariate and multivariate biomarker analysis. For MALDI, Alexandrov introduced a method called masses alignment, which groups masses corresponding to a single peak and then represents them as one m/z value. This also reduces the size of the dataset, making computation and biological understanding of the data more attainable \cite{23176142}.
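The sketch below illustrates the general idea of grouping peaks separated by known adduct or neutral-loss mass shifts; it is a simplified, hypothetical illustration rather than the cited masses-alignment algorithm, and the tolerance value is an assumption.
\begin{verbatim}
# Pair up peaks separated by common adduct / neutral-loss mass shifts (Da).
KNOWN_SHIFTS = {
    "Na vs. H ([M+Na]+ / [M+H]+)": 21.9819,
    "K vs. H ([M+K]+ / [M+H]+)":   37.9559,
    "loss of H2O":                 18.0106,
    "loss of NH3":                 17.0265,
}

def group_adducts(peaks, tol=0.005):
    pairs = []
    peaks = sorted(peaks)
    for i, a in enumerate(peaks):
        for b in peaks[i + 1:]:
            for name, shift in KNOWN_SHIFTS.items():
                if abs((b - a) - shift) <= tol:  # within tolerance (Da)
                    pairs.append((a, b, name))
    return pairs
\end{verbatim}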
5.3.4 Machine Learning Algorithms
Machine learning is starting to play a larger role in developing algorithms that quantify relationships in mass spectrometry imaging data and then use these learned relationships to make predictions for new data sets. First, the data are converted from a population of profiles into an n by m data matrix, where n is the number of individuals and m is the number of biomolecules of interest. Following conversion, they can be analyzed using different algorithms that look for correlated structure in the measured data that also correlates with a target outcome.
This is currently being implemented for automated decision making, modeling, and computer-aided diagnosis. Supervised learning is being used to help the computer identify patterns in the known categories. This can be done in two separate ways: classification and regression. Classification refers to decisions among a typically small and discrete set of choices (e.g., tumor vs. normal tissue), while regression refers to an estimation of possibly continuous-valued output variables (e.g., diagnosis of the severity of disease). Neural networks, support vector machine algorithms, recursive maximum margin criterion, and genetic algorithms build statistical models from training data to predict the classification of new data sets; this is commonly applied for tumor classification \cite{25750696} \cite{27322705}.
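As a sketch of the classification route, the following trains a linear support vector machine on placeholder spectra and labels and evaluates it on held-out data; the matrix shapes and labels are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.random((200, 500))    # n spectra x m biomolecular features
y = rng.integers(0, 2, 200)   # tumor (1) vs. normal (0), placeholder

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="linear").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # fraction of held-out spectra correct
\end{verbatim}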
5.3.5 Complete data pipelines
Because processing imaging data requires numerous treatments different from those used for conventional LC-MS data, software with a complete data analysis pipeline is useful for streamlining the entire data analysis process. While there are numerous open-source and freely available software packages for processing data, functionality tends to be restricted and there typically are no export options for the data. A new MSI software package, SpectralAnalysis, strives to expand the reach of data processing by incorporating all processing steps, from preprocessing to multivariate analysis, within a single package, allowing for the analysis of single experiments as well as large-scale experiments spanning multiple instruments and modalities \cite{27558772}. Improved data processing pipelines are also being developed in efforts to make full use of the spatial information unique to imaging experiments. One such pipeline, EXIMS, strives to reveal significant molecular distribution patterns by treating the dataset as a collection of intensity images for various m/z values. The process incorporates preprocessing, sliding-window normalization, de-noising and contrast enhancement, spatial distribution-based peak picking, and clustering of intensity images \cite{26063840}. This pipeline emphasizes the importance of special treatment for imaging data compared with LC-MS data.
5.4 Repositories
Finally, data storage and sharing of final results allow the community to move forward and build upon the ever-growing wealth of knowledge. To further drive this, imaging repositories are necessary for allowing researchers access to imaging data for comparison of results and for discovering new answers to biological questions. Previously, such repositories were difficult to implement due to the large requirements of storage space and computational power, but technological advancements have allowed for the emergence of at least one such repository \cite{25542566}, with the promise of more becoming available in the near future. Currently, the European project METASPACE for bioinformatics for spatial metabolomics has developed an online engine based on big-data technologies that automatically translates millions of ion images into molecular annotations. The estimated completion time for this project is June 2018.
6. Multi-modal Imaging Systems
MSI is useful for analyzing the spatial distributions of small molecules, lipids, peptides, proteins, and glycans. The combination of MSI with other imaging modalities helps to multiplex imaging analyses into a comprehensive analysis to answer biological questions that could not otherwise be addressed with a single imaging modality. Multimodal technologies are very commonly implemented in diagnostic imaging, and the concept has been expanded into MSI analysis pipelines, where MSI serves as an essential complement providing untargeted chemical analysis. Because MSI has high chemical specificity but lower spatial resolution compared with other imaging modalities, it is typically combined with modalities that complement these features, i.e., modalities that are low in chemical specificity but high in spatial resolution or tissue structural information. The result of combining complementary imaging modalities is greater than the sum of its parts \cite{26070717}.
Multi-modal imaging can be approached either by acquiring images at different times (asynchronous), where the images are fused in the data processing step, or by simultaneously acquiring images (synchronous) and merging them during the data acquisition step \cite{20812286}. Asynchronous post-processing can present difficulties arising from the positioning of the same samples between different scans at different times, which can complicate co-registering images for analysis \cite{Meyer_2013}. Co-registration is especially difficult if data acquisitions are not acquired at the same spatial resolution; however, advances in computational annotation help to improve image analysis \cite{Eliceiri_2012}. Image co-registration can be achieved by aligning known regions of interest, using calibration points to perform a rigid registration, or by selecting a variety of points to perform moving least squares registration \cite{Huhdanpaa_2014}. Additionally, different imaging platforms have different sample preparation protocols, which can interfere with other imaging modalities. Synchronous imaging is advantageous because consistency is achieved in both time and space; however, combining instrumentation to accommodate synchronous acquisitions can require advanced skill and can be very expensive, especially for mass spectrometry instrumentation. The next steps for multimodal imaging involve integrating quantitative information from multiple existing functional modalities to create composites of not just two modalities, but three, four, or even five imaging modalities in a single data analysis pipeline. Additionally, advances in technology and instrumentation will allow synchronous integration to be expanded to multiple imaging modalities.
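As a minimal sketch of landmark-based co-registration, the following fits an affine transform between matched calibration points by least squares; the landmark arrays are placeholders, and a strictly rigid fit or moving least squares registration would constrain or generalize the transform further.
\begin{verbatim}
import numpy as np

def fit_affine(src, dst):
    # src, dst: (n, 2) arrays of matched (x, y) landmarks, n >= 3.
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    # Solve A @ T ~= dst for the 3x2 affine transform, least squares.
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T

def apply_affine(points, T):
    return np.hstack([points, np.ones((len(points), 1))]) @ T
\end{verbatim}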
6.1 Microscopy Multi-Modality
MSI is often combined with microscopy, which provides high-resolution morphological and structural information, while MSI is used to visualize and identify distributions of specific molecules. Additionally, Van de Plas et al. describe a method for fusing microscopy data with mass spectrometry imaging data to enable prediction of a molecular distribution at both high chemical specificity and high spatial resolution \cite{Van_de_Plas_2015}. This is done after data acquisition, using the microscopy data to sharpen the molecular images and perform out-of-sample prediction \cite{25707028}. Here, we describe the use of light and fluorescence microscopy to evaluate tissue structure and specific markers. Microscopy is the most common multi-modal system currently paired with mass spectrometry imaging and is particularly useful for identifying regions of interest.
6.1.1 Histology
Although tissue sections used for MSI can be scanned to produce a structural overlay, important structural information at the cellular level is obtained from histological analysis of a sample using light microscopy, which can be important for region-of-interest analysis of MSI data. Light microscopy is used to view fine details and enlarged portions of a tissue section, which are then captured with a camera. Samples are treated with specific dyes to stain tissue structures. Histology overlay is the most common multimodal combination with mass spectrometry imaging in the current literature \cite{26216958} \cite{25488653} \cite{20170166}.
The most traditional stain, hematoxylin and eosin (H&E), distinguishes nucleic acids in blue and proteins in red, allowing the pathologist to visualize the difference between cells and the surrounding extracellular matrix \cite{21356829}. Other commonly used stains include Masson's trichrome stain for connective tissue, Alcian blue for mucins, and the periodic acid-Schiff reaction for staining carbohydrate-rich tissue regions \cite{4184780}. Trained pathologists use stained slides to identify different disease states of the tissues. Tissue morphology, cell structure, and staining distribution are analyzed by pathologists to stratify patient specimens and provide diagnostic indices for the patient \cite{28416487} \cite{28117928}.
Berikut ini adalah usulan-usulan riset dari peserta kuliah sesuai dengan rencana mereka saat mendaftar sebagai mahasiswa magister. Harap diperhatikan bahwa apapun yang tertulis di bawah ini adalah Plan B. Plan B adalah rencana riset dengan sumber daya paling minimum yang data dikerjakan oleh para mahasiswa. Sumber daya minimum yang dimaksud adalah dana minimum, kebutuhan piranti keras dan piranti lunak minimum, serta perjalanan/akomodasi yang juga minimum.
Deskripsi rencana riset awal
Dominicus Vincent: melanjutkan program riset S1 dengan topik simulasi air tanah, S1 Teknik Pertambangan ITB, menggunakan Visual Modflow, disarankan menggunakan kode Modflow orisinal keluaran USGS.
Rendi Ermansyah: eksplorasi hidrogeologi untuk pencarian sumber air, S1 Teknik Geologi Unpad. Ybs perlu menyampaikan nilai kebaruan teknik eksplorasi agar tidak berkesan biasa.
Meila Puspita: hidrogeologi untuk geotermal, S1 Teknik Geofisika Unsyiah. Ybs perlu memutuskan untuk bekerja di lapangan geotermal yang telah dieksplorasi atau yang masih baru (green fields). Disarankan untuk memilih lapangan baru, karena berbagai komponen lingkungan dapat menjadi nilai originalitas riset. Lapangan dewasa (brown fields) dinilai telah terlalu sering dibahas dalam tugas akhir.
Felice Dagelardini Wopari: hidrogeologi untuk pertambangan mineral, kasus aliran "lumpur basah" dari rekahan, S1 Teknik Pertambangan Uncen. Lumpur basah ini, atau wet mud atau mud rush atau dewatering sludge perlu didefinisikan dengan lebih baik untuk dapat merumuskan berbagai komponen riset yang berkaitan. Ybs baru dapat menceritakan dampak adanya banjir lumpur basah.
Anggi Rustini: tentang perubahan iklim dan ketersediaan air di Kab. Subang atau melanjutkan riset S1 di zona tak jenuh di lahan gambut, S1 Meteorologi Terapan IPB, pernah kerja di CIFOR. Catatan: untuk tema 1, apakah memang anda yakin iklim telah berubah? Indikatornya apa? Apakah potensi air di Subang memang terpengaruh oleh perubahan iklim itu? Air tanah yang bersumber mata air dan/atau sumur, kalau iya, berapa kali pengukuran?
Kenali rencana riset sejak dini
Studi anda hanya berlangsung selama dua tahun atau empat semester. Terlambat menyusun rencana riset akan berarti menunda kelulusan anda hingga waktu yang tidak dapat ditentukan. Ilustrasi di bawah ini menggambarkan anekdot mengenai pembagian waktu anda. Data bisa jadi tidak cukup atau analisis menjadi kurang dalam, adalah beberapa hal paling sering dijumpai saat anda menunda riset pada saat yang paling akhir. Output riset anda tidak maksimal, yakni hanyalah sebuah buku tesis. Padahal mestinya tidak begitu. Output riset anda dapat sangat bervariasi bila anda memulainya sejak dini.
The goal of this experiment was to analyze and identify metallic samples. We use an X-ray diffraction machine to analyze the crystal structure of five different samples. We analyzed the diffraction peaks of each sample and calculated their lattice constants. By comparing to literature values, we were able to validate the identity of silicone, bronze, brass, and pure copper powder, and identify an unknown substance to be tantalum.
Skripsi adalah salah satu bentuk tulisan ilmiah. Tahapan ini mau atau tidak harus anda lalui untuk mengakhiri karir anda sebagai mahasiswa. Di sisi yang lain, masalah utama mahasiswa adalah kesulitan untuk menulis.
Menulis dalam arti luas sebenarnya adalah bercakap secara sistematis. Inilah bedanya dengan percakapan bebas yang biasa anda lakukan. Sekali anda salah, maka ucapan akan terlanjur keluar dari mulut anda. Tapi dengan menulis, percakapan akan mengalir tapi juga memiliki waktu untuk direnungkan, sebelum pada akhirnya dirilis ke pembaca.
Artikel ini merupakan sari dari beberapa karya tulis yang telah dihasilkan sebelumnya, implementasi open science \cite{Irawan_2017}, Status makalah berbahasa Indonesia di DOAJ \cite{irawan_dasapta_erwin_2017_376762}, dan sebuah buku berjudul Menulis Ilmiah itu Menyenangkan yang diawali dengan sebuah blog (baca juga reviewnya oleh Nursatria Vidya Adikrisna).
Tulisan pendek ini akan mencoba mengubah pikiran anda dari sulit menulis menjadi tidak dapat berhenti menulis. Semoga.
(Makalah ini ditulis untuk Kolom Opini Majalah Retorika Kampus)
Modelo de negocio para la aplicación “home services”
Antecedentes:
Internet presta servicios que van más allá de consultas de información, transferencia de ficheros y chat, entre otros. Mediante este canal, es posible realizar transacciones electrónicas que van desde comprar un libro, hasta el pago de facturas; estas actividades en Internet, a las cuales se accede usando dispositivos móviles, se conocen como comercio móvil. El concepto no es común en nuestra sociedad y, por ende, es importante ahondar en lo que este tipo de comercio implica desde su infraestructura hasta la percepción de usuarios. Metodología: la revisión bibliográfica se realizó mediante una investigación documental. Resultados: el comercio móvil contempla funcionalidades, estrategias, e infraestructura necesarias para la realización de transacciones electrónicas exitosas y seguras. Conclusiones: el comercio móvil, al igual que el comercio electrónico tradicional, cumple con la infraestructura necesaria para ofrecer una gran variedad de servicios a los usuarios. Robayo-Botiva, D. M. (2012). El comercio móvil: una nueva posibilidad para la realización de transacciones electrónicas. Memorias, 10(17), 57-72.
La indefinición del modelo de negocio contribuye sin duda a agravar la sensación de riesgo e incertidumbre por parte de los productores provenientes de las industrias mediáticas. En el caso de las marcas informativas, por ejemplo, esa indefinición se traduce en la dependencia excesiva de los modelos de negocio del Internet convencional, caracterizados por la gratuidad de buena parte –o la totalidad– de los contenidos. Que esos mismos medios ofrezcan contenidos adaptados gratuitos (versiones ligeras de sus web o aplicaciones dedicadas que remodelan esos mismos contenidos) les obliga, a la postre, a generar nuevos contenidos con valor añadido –o bien fórmulas de integración de publicidad– que les permitan monetizar la plataforma móvil más allá de su contribución como complemento secundario del medio online (como El País Plus u Orbyt, de El Mundo). Existe, no obstante, un tercer perfil emergente entre los productores de contenido y aplicaciones móviles: el de las marcas que utilizan los contenidos como elementos de imagen, en una suerte de versión móvil de patrocinio. Son los contenidos y aplicaciones de marca (appvertising), entre los cuales destacan los videojuegos de marca (advergaming). ( Juan Miguel Aguado, Claudio Feijóo e Inmaculada J. Martínez, 2012)
Este trabajo tiene como propósito fortalecer el Lienzo del Modelo de Negocio de Osterwalder y Pigneur (que presenta en su libro “Generación de Modelos de Negocio), haciendo uso de los elementos de análisis que aporta Fuentes Zenón en su trabajo “Diseño de la Estrategia Competitiva”, conservando la esencia grafica sencilla y esquemática del lienzo original, al fin de favorecer la participación de los practicantes o responsables de los negocios.
En la primera parte se plantea el porqué de la conveniencia y popularidad del Modelo de Negocio, así como la necesidad de aportar mayores elementos de análisis, en la segunda se hace una rápida revisión de los antecedentes, estructura y papel que ocupan los Modelos de Negocio en el campo de los enfoques de negocio, para luego en la tercera parte ofrecer los elementos de análisis para el examen de cada uno de los 9 bloques del Modelo de Negocio, resultados que se resumen la cuarta parte para dar forma a una guía breve. ( Silvia Núñez Corona, 2014)
El presente informe de trabajo de grado, tiene por objetivo presentar el plan de negocios para la creación de la aplicación móvil destway, que permita gestionar los viajes terrestres intermunicipales. Este plan de negocio se pudo desarrollar utilizando la guía para la generación de modelos de negocio propuesto por Alexander Osterwalder. Los elementos que se han tenido en cuenta para la elaboración del presente informe abarcan el análisis del mercado en el uso de SmartPhones y de aplicaciones móviles, revisión de las aplicaciones móviles desarrolladas para el sector transporte y el análisis del movimiento del transporte terrestre de pasajeros. ( Galvis Escalante, Julián Andrés; Giraldo Betancur, Julián Andrés, 2015)
Este artículo revisa el uso del Plan de Negocios como herramienta para enseñar emprendimiento y propone un nuevo enfoque basado en el diseño del Modelo de Negocios acompañado de actividades significativas como un medio para promover el espíritu emprendedor entre los estudiantes universitarios.
La propuesta se basa en información de autores destacados sobre el tema, pero sobre todo en el enfoque de Alexander Osterwalder e incluye la aplicación de la teoría del Design Thinking donde los emprendedores requieren pensar de forma tanto divergente como convergente a través de diferentes etapas del conocimiento. Para la validación de la propuesta se basó en una metodología exploratoria para la cual se entrevistó a egresados con empresas. Así mismo, durante un semestre se corrió un piloto en el cual se aplicó el Modelo de Negocio en conjunto con otras actividades en lugar del Plan de Negocios y al término del mismo se entrevistó a alumnos activos de la materia de emprendimiento, y profesores del curso. Se obtuvieron resultados satisfactorios en cuanto al aprendizaje de los alumnos y el fomento al espíritu emprendedor al aplicar el modelo de Negocios como base del curso, así como el resultado y aceptación de las actividades significativas. ( Eugenia del Carmen Aldana Fariñas, Ma. Teresa del Carmen Ibarra Santa Ana, Ingrid Loewenstein Reyes, 2012)
Los trabajos de investigación actuales en ingeniería de requisitos buscan mecanismos que permitan establecer la relación entre la funcionalidad esperada de un sistema de información y los procesos de negocios a los que éste dará soporte. Este enfoque permitirá asegurar que el sistema de información a desarrollar sea realmente útil en las tareas de los actores organizacionales. Los trabajos de investigación en esta área han determinado que las metas organizacionales son una buena base para establecer la relación entre los objetivos perseguidos por el negocio y los requisitos del sistema de información a desarrollar, ya que todos estos requisitos (funcionales y no funcionales) deben corresponderse con tareas que se desean desempeñar dentro de un proceso de negocios. Los procesos de negocio a su vez, permiten el cumplimiento o satisfacción de alguna o algunas de las metas del negocio. En este trabajo se presenta una propuesta para la obtención de requisitos de software a partir de modelos de negocios. El artículo se divide en dos secciones principales: (a) la construcción de modelos de negocios a partir de un análisis orientado a metas (b) la obtención de un modelo de requisitos de software a partir del modelo de negocios. Este trabajo permite tener un punto de partida sólido para la construcción del sistema de información, donde cada requisito tiene su origen en las metas del negocio. (Hugo estrada, 2012)
La micro, pequeña y mediana empresa (MIPYME) ha sido en los últimos años el centro de atención de numerosos trabajos de investigación, no obstante, aún sigue necesitada de fundamentos estratégicos, operativos y de alianzas que, de forma continua, le brinden oportunidades para mejorar su competitividad. El Cuerpo Académico “Gestión de PYME” ha diseñado y puesto en funcionamiento un Laboratorio Empresarial que realiza estudios regionales dentro del Estado de Coahuila, entre ellos el diagnóstico del modelo de negocio y la definición de estrategias de cambio; diseña estrategias cooperativas de innovación ofreciendo un sitio de conexión virtual entre la empresa y la universidad. El objetivo del trabajo es mostrar los resultados alcanzados mediante una encuesta a directivos de 212 PYME del Estado de Coahuila sobre la percepción que ellos tienen acerca de sus manejos financieros mediante el análisis de doce variables (de las 28 que componen el modelo de negocios) que toda empresa debe controlar. Los resultados muestras las relaciones entre cada componente (como variable dependiente) y sus elementos (como variable independiente) en la estructura del modelo de negocios. Los resultados evidencian similitudes y diferencias en estos manejos según los sectores y tamaños de las empresas. La evaluación realizada permite definir estrategias para mejorar el desempeño económico y social de las MIPYME.( Víctor Manuel Molina Morejón, Lourdes J. García Hernández, Valeria Viridiana Salas Jaramillo, 2013)
Child Witness: Autobiography, Trauma, Social Justice
Introduction
Child Witness explores the emergence of the child as a testimonial site and figure in autobiographical projects by adults who seek to represent trauma and call for justice. From Harriet Jacobs’s slave narrative Incidents in the Life of a Slave Girl to contemporary comics like Phoebe Gloeckner’s A Child’s Life to picture book memoirs likeRuby Bridges’ Through My Eyes, authors often incorporate childhood experience as a critical feature of shaping a life story for diverse audience. These are not stories that merely recollect childhood or burnish it nostalgically. Instead, autobiographical narratives of childhood by adults mark a site where the values associated with self-representation in politics, aesthetics, and everyday life -- truth telling, the authority of experience, reliability – attach to the child and permit adult readers to connect with the authors’ larger social justice projects. The child witness -- credible, trustworthy, and vulnerable – offers authors and audience a means of connection they would not otherwise achieve. The child in the life writing projects of Jacobs, Gloeckner, and Bridges is employed as a witness to the horrors of slavery, deprivation, rape, and segregation. The child is positioned to testify to experience rather than to suffer it. The adult author recounts what the child experienced: not by ascribing naïve authenticity to the child’s voice, but by centering the childhood experience and knowledge upon which the authority of the adult autobiographer builds. Our focus on the emergence of the child witness as a testimonial figure and site reveals how authors leverage the affective power of their own childhoods to connect with diverse audiences. Autobiographical literature that uses the child witness in this way offers a pedagogical form that educates about injustice and calls for ethical witnessing and social change. It provides for new relations to emerge between authors and audiences through which previously silenced histories of personal and collective trauma are represented.
Child Witness will reveal a history of the child’s centrality to struggles for social justice, especially anti-racist, feminist, and human rights movements, and the significance within this history of autobiographical literature that connects childhood to adult activism. The book is guided by an overarching question: How does this literature disrupt the symbolic and political meanings of the child in the service of social justice and activism? Given the cultural judgments that attach to women’s autobiographical accounts, for example, how does the figure of the child and the narrative of childhood address the limits of persuasiveness and authority that damage women’s testimony? To answer these questions we chart a feminist history of life writing that foregrounds a child witness on whose behalf readers learn to demand justice.
The major theoretical intervention of the book lies in our fusion of insights from childhood studies and studies of autobiographical literature through which we reveal the centrality of the child (as witness and activist, as testimonial site and figure) in a testimonial tradition of auto/biographical work that seeks to make visible and/or remedy inequity. Child Witness takes up the child – a familiar figure in literary studies and humanitarianism alike – in order to place it in a new critical context by pulling visual and verbal forms into new proximities through feminist interdisciplinary analysis. We propose that a new formation around “the child” emerges at the intersections of life writing, children’s literature, and visual culture. Specifically, our focus on the child within the history of feminist life writing reveals new examples of how to bear witness to individual and social trauma.
Many will associate the words “witness” and “trauma” with Shoshana Felman and Dori Laub’s psychoanalytic and literary analyses rooted in Freud and focused on the Holocaust. Our project is rooted in a different strand of trauma studies that is based in the feminist theory and clinical practice of Judith Herman, Laura S. Brown, and others who elaborate an antiracist feminist criticism of trauma that looks at systems of inequality. Extending this work to the study of self-representation, Child Witness draws on Leigh Gilmore’s (2001, 2107) elaboration of a feminist intersectional analysis of the chronic, pervasive, and everyday quality of trauma in the lives of those who experience a range of material forms of insecurity and risk. Gilmore’s focus on testimony, everyday violence, and systemic sexism and racism is shared by other scholars who use the terms trauma and testimony without primarily referencing the work of Felman and Laub, including Judith Butler, Hillary Chute, Wendy Kozol, Nicholas Mirzoeff, and Gillian Whitlock. We define trauma here as harm that unfolds over time, is hidden in plain sight, and permitted by social norms of violence against women, children, and people of color. Child Witness engages directly with how trauma structures testimony, and it does so by attending to a range of dynamic and sometimes controversial visual-verbal strategies. Our analysis of visual culture also moves us away from Felman and Laub, as we attend to how photographic portraits in the 19th century documented slavery and visualized the subject of abolition, how comics and graphic memoirs challenge the all-too-pervasive sexual abuse of girls and women, and how auto/biographical picture books about civil rights define children as political agents.
The critical term “witness” is drawn from scholarship in life writing on “human rights and narrated lives” (Smith and Schaeffer), from the analysis of race, gender, and culture in intersectional feminism, and from visual and verbal studies of ethical witnessing (Kozol; Hesford; Mirzoeff; Neary, et al). We add to previous theorizations of ethical witnessing an analysis of the child as the site and figure of testimony to the everyday trauma that the girl experiences and documents. The self-representational strategies of writers and illustrators motivate different publics to activism. We chart examples of ethical witnessing with the child at the center of autobiographical projects from slave narratives in the mid-19th century in the U.S. to contemporary memoirs and picture books. Our critical framework and archive are well-suited to each other: we document how authors use narratives and images of their own childhoods to reach diverse and often distant audiences, thereby placing familiar texts in a new critical narrative and incorporating unfamiliar texts to flesh out this history. There is no single child witness in the history we lay out; rather, autobiographers return to their childhoods and use the child as a site of testimony in a range of ways that we seek to name in each chapter. The origins and locations of meanings of childhood will shift within and among historical time periods, especially given our focus on women and girls of color. Our method is an emergent one that adapts to the flexible genre of autobiography and to the themes and strategies each author and artist employs in a text.
Our use of the term social justice is an essential element of the theoretical framework of intersectional feminism. This interdisciplinary feminist frame fits our project’s focus on situated personal experiences as a way to create new knowledge, affiliations, and forms of justice that exceed courts or other formal venues. The autobiographies in this project place the girl in a political context. Here, the child is not an innocent being to be saved; rather, women name the intersections among race, class, gender, citizenship, and other variables to highlight and resist larger systems of oppression in which the child is embedded. Autobiographers use the child as a testimonial site to create narratives and images that critically interrogate systems of meaning and intersections hidden in plain sight. These are often shocking because we are trained to read the child as vulnerable, in need of saving, immune to adult conflicts, and somehow not raced or classed. For example, Rigoberta Menchú as a girl who is Indigenous, poor, colonized, and an activist who makes claims on behalf of numerous victims of torture and murder, tells a personal life story in order to draw attention to U.S. involvement in the conflict in Guatemala. Through her use of the girl, she testifies to violence and demands justice for the victims of harm. Through this example, we can see the ways in which social justice is at the heart of intersectional feminism’s commitments to examining structures of inequity that frame how and who is heard. The feminist history of life writing we propose begins with women of color. The critical and historical trajectory extends from Harriet Jacobs to Black Lives Matter in one conceptual breath and argues that when some men have focused on or embedded childhood within their autobiographical projects, they do so in relation to women’s writing. Thus the gendered discourse of autobiographical narratives of childhood develops in authority and innovation in demonstrable ways through the work of women.
Critical studies of both childhood, including children’s literature and queer theory, and autobiographical narrative, including graphic memoirs and picture books, represent a provocative and important intersection for at least two reasons. First, adult autobiographers politicize childhood in ways that challenge “certain stylized and largely unquestioned assumptions about childhood” (Duane 8). Second, adults writing about their own childhoods bring attention to abuses often hidden from view and encourage adult readers to ally with them and advocate for change in the public sphere. Scholars in this area have theorized the child as a symbolic and contested social category rather than a biological certainty (Bruhm and Hurley; Driscoll; Duane; Dubinsky; Gittins; Higonnet; Kehily; Sánchez-Eppler; Steedman). Scholars of childhood maintain that while there are actual children who need protection from those positioned to provide it, the meanings a culture gives to childhood, and the harm or protections solidified in institutions and policy, will differ across time, culture, and location, as well as across the variables of race, class, gender, and sexuality. This multiplicity of meanings has been captured in cultural studies of childhood that note how the figure of the child often serves as a means to elicit a wide range of competing emotions, from sympathy to patriotism (Berlant; Edelman; Stockton). Children’s literature scholars, in particular (Capshaw; Kincaid; Kidd; Mickenberg), have been instrumental in drawing attention to how the imagined child reflects larger social and political ideologies, histories, and movements. To this field, we contribute an analysis of how the figure of the child witness enables readers to connect the private act of reading to the collective project of social change.
The interdisciplinary field of autobiographical literature examines how people represent their lives in relation to history and do so in creative and innovative ways (Chaney; Chute; Gilmore; Smith and Watson; Whitlock). Historically, this practice has taken numerous nonfictional forms, including autobiography, memoir, slave narrative, and other testimonial discourses, and has also paralleled the development of fictional forms interested in the first person, including the bildungsroman, first person fiction, lyric poetry, and ‘zines (Gilmore; Rak and Polletti). We draw upon and amplify Leigh Gilmore’s analysis of limit cases in life writing in order to offer a critical frame for theorizing the use of the child witness within the larger historical and creative project of life narrative in different media.
To this end, we recognize the diverse linguistic and visual strategies that authors and illustrators employ within a complex history of socio-political movements. Thus Child Witness connects an analysis of slave narratives of the 19th century to contemporary graphic memoirs and children’s picture books by historicizing and theorizing the emergence of the child witness as testimonial figure and site of cultural judgment. By design, we place autobiographical narrators like Harriet Jacobs and Rigoberta Menchú and comics artists like Marjane Satrapi and Phoebe Gloeckner alongside works often read in K-12 contexts, including fairy tales, and graphic life writing in picture book format, such as Duncan Tonatiuh’s Separate is Never Equal: Sylvia Mendez and Her Family’s Fight For Desegregation in order to capture the broad use of the child witness. This cross section of texts allows us to make visible the dynamic interrelations of gender, genre, race, and class in the context of testimony and its investments in social justice. We have been struck in our previous research (Gilmore, “Witnessing Persepolis”; Gilmore and Marshall; Marshall) by the wide range of textual and visual strategies writers and artists use to politicize childhood. Among these, we have observed how writers and artists pose ethical demands as an outgrowth of shared affect, offer up radical pedagogies that blur the soft borders between childhood and adulthood, and teach alternative lessons about history, trauma, and resistance through life writing. Our previous work examined how adults use texts and images of their own childhoods to make larger claims in the public sphere and allowed us to further analyze feminist interventions in the symbolic and cultural meanings of childhood through the media of life writing and graphic memoirs. Here we elaborate a framework for understanding how feminist autobiographical projects disrupt the symbolic and political meanings of the child.
Chapter One, “Girlhoods, Crisis, and Autobiography,” examines three linked cases that introduce readers to how adult women use girlhood as a category to compel social activism. In each, the authors draw on and insert the child as central to the political activism for which they seek witness. From slave narratives to the Latin American testimonio I, Rigoberta Menchú,and autobiography in comics form, such as Marjane Satrapi’s Persepolis, in the global literary marketplace, women use autobiographical narratives of childhood to elicit readers’ ethical engagement with political topics and cultural critique. In this chapter, we chart a feminist history of how women of color use autobiographical narratives of their own girlhood to elicit sympathy from a mostly white and often geographically distant readership. These popular autobiographical narratives reach across national borders to call for political action, including the abolition of slavery in the U.S., humanitarian intervention in the civil war in Guatemala, and understanding of revolution in Iran. In these texts, women argue that political and moral autonomy develops from their responses to childhood experience and crisis.
We begin with Harriet Jacobs’s critique of the destruction of childhood for enslaved children. Jacobs shifts the focus from race to racism and slavery by describing her own happy young life. Childhood, for Jacobs, offers a way to interrogate her white readers’ assumptions about race and racism. Rigoberta Menchú uses her childhood to establish a complex network of testimony, truth-telling, and privacy. She contrasts the independence and respect children are accorded and the work they are relied upon to do in Quiché culture with the exploitation of their labor on coastal plantations. Marjane Satrapi offers her child-self as a witness to the rise of the Ayatollah in Iran even as the childhood she knows disappears when her parents send her into exile. The symbolic and political meanings of the child differ in each example as do they ways in which they are unsettled; yet, taken together, they represent a history of feminist representation of the child as a testimonial figure and site.
We connect these texts to make clear that how life writing, children’s literature, and visual culture are co-producing the child is broadly intersectional along the lines Kimberlé Crenshaw adumbrated. Our work can be read alongside previous theorizations of the child by scholars such as Robin Bernstein, Anna Mae Duane, Caroline Levander, Kathryn Bond Stockton and others, who also recognize the multiple systems of oppression that motivate a diverse range of equally intersectional responses by authors, artists, and activists. As with these critical projects, our method is less concerned with naming a particular child figure (e.g., the suffering child) in a particular historical moment, or communicating the authentic perspective of the child; rather, our intersectional feminist frame allows for a focus on the unique formation of an adult rendering his/her/their own childhood as a testimonial site from which to agitate for social justice. No longer representative of static subaltern silence, girls emerge in these narratives as figures of sympathy represented by politically active women autobiographers.
Chapter Two, “ Soft Borders and the Feminist Politics of Girlhood,” shifts focus from the use of the child figure to draw attention to injustice and to compel the action of others on behalf of the child in order to examine the strategic use of the girlhood as a category with soft temporal borders. Here, we connect Susanna Kaysen’s popular memoir Girl, Interrupted about her confinement in McLean hospital, Lucy Grealey’s Autobiography of a Face about her experience of jaw cancer in childhood through multiple surgeries and hospitalizations, and David Small’s graphic memoir Stitches about his childhood experience of throat cancer, surgery, and its consequences. In each example, experiences of illness take the authors out of one form of time, in suspending one childhood temporality and supplanting it with another that moves in the tempo of diagnosis and treatment. One form of childhood time –growing in relation to siblings and peers, for example, schooling, neighborhood life—is replaced with the rhythms and routines of the hospital, routines that offer new markers for charting life. Childhood in these texts is a borderline category. Kaysen fuses childhood and young adulthood to make a feminist critique about the white middle class family and about mental illness. In Autobiography of a Face, Grealey offers different trajectories of growth for her body and her face, deftly revealing how the medicalization of her childhood lacked a developmental language adaptable to her sexuality. Viewed as a childhood patient, yet living in a maturing body, Grealey’s face emblematizes a complex site of traumatic experience and testimony. Whereas Susanna Kaysen represents her young adulthood as being interrupted by her institutionalization for borderline personality disorder, Grealey’s life is interrupted by a narrative of her difference, a condition she can neither leave nor outgrow but must address through narrative. We read David Smalls’ graphic memoir in relation to Kaysen and Grealey to place him within the gendered market of contemporary life writing about trauma and to highlight the feminist strategies he adapts to narrate childhood trauma.
Chapter Three, “Fairy Tale Girlhoods: Sexual Violence and Feminist Graphic Knowledge,” considers the category of girlhood as a site for feminist critique through a reading of Virginia Phoebe Gloeckner’s A Child’s Life and Other Stories.Here we identify specific formal strategies Gloeckner crafts in the service of testimony, including the telescoping of child and adult perspectives and temporalities, and the use of children’s literature, especially fairy tales in Gloeckner’s graphic autobiographical project. The connection between nonfictional narratives of endangered children and the canon of children’s literature may seem tenuous, but adult life writers often rely upon familiar texts from childhood (Marshall). Fairy tale characters like Little Red Riding Hood and All-Fur experience evil stepmothers, threats of rape and rape, and other forms of violence, and provide a familiar touch point for life writing about childhood and trauma. Gloeckner returns to the sexual violence of traditional fairy tales to rupture the façade of the unknowing child. In comics like “Magda Meets the Little Men in the Woods” Gloeckner remediates the fairy tale in contemporary comics form to offer a pedagogy in which the child witness refuses the position of resilient being who grows out of or forgets trauma. This chapter offers a method for reading the visual and verbal strategies of feminist resistance that Gloeckner employs through the child witness. Specifically we note how Gloeckner creates a feminist graphic knowledge of sexual violence through her use of the gutter (the white space between panels in comics) and scale. She uses the figure of the child to intervene in the epistemology of children’s sexual precarity within families by illustrating it explicitly. She reaches out to readers visually to counter the claim that such violence is invisible and unknown. To contextualize Gloeckner’s graphic strategy, we consider Virginia Woolf’s imposed reticence about being her experience of sexual abuse as a child in her autobiographical essay, “Sketch of the Past,” and demonstrate how Una’s graphic memoir about abuse, Becoming Unbecoming present sexual abuse as defining childhood and adulthood for women rather than as an isolated or episodic interruption.
In the previous chapters, we examine texts published for an adult or young adult audience and the figure of the child as witness. The final chapter, “Witnessing Social Violence for Children: Picture Books, Auto/Biography and Social Change,” takes up children’s nonfictional picture books as a unique and radical form of graphic life writing in which indigenous writers and authors and illustrators of color center a child figure who is both witness and activist. These texts represent social histories often left out of official social studies curricula. Often dismissed as simple or solely for a young audience, picture books have a history of providing “necessary cover” (Capshaw 103) for the child witness to speak and relay lessons about discrimination, violence, and activism. For instance, Duncan Tonatiuh’s biography of Sylvia Mendez and her family in Separate Is Never Equal, Ruby Bridges’ memoir Through My Eyes, and Christy Jordan-Fenton and Margaret Pokiak-Fenton’s co-authored auto/biography When I Was Eight recuperate and reclaim histories through counter-storytelling in image and narrative (Solórzano and Yosso). The child witness-as-activist is central to counter histories of racialized misrepresentation in text and image and to the creation of culturally specific stories of resistance that have radical potential for social justice education. In each of these auto/biographical picture books, a child witness who is also an activist child.
In the conclusion, “New Child Witnesses,” we turn our attention to current events and movements in which the child witness is crucial to forwarding human rights. Nobel Peace Prize winner Malala Yousafzai’s representation of her childhood experience and activism in I am Malala: The Girl Who Stood up For the Taliban and Was Shot (Youfsazi and Lamb) emerges alongside the representation of her by others, including picture books, such as Malala Yousafzai: Warrior With Words (Abouraya and Wheatley) and Malala, A Brave Girl From Pakistan (Winter) and enables a comparison of the autobiographical and biographical child witness.In addition, we examine how the feminist and anti-racist movements of #BlackLivesMatter and #SayHerName protest not only the expendability of black boys and girls, but also how these subjects are denied their status as children. Tied to representational strategies in Harriet Jacobs’s Incidents, the black child remains a critical figure for social justice and a contested site of interpretation. Police officers typically see black children and adolescents as older than they are and link imputed age to the risk they pose to officers. Under these conditions, children of color and indigenous youth are at heightened risk of police violence. Social justice activism aimed at reclaiming Trayvon Martin, Michael Brown, and Tamir Rice as children politicizes the category of the child and clarifies its potent use in calls for justice. These new child witnesses circulate in a range of visual verbal circuits, draw on the strategies we outline, and also highlight emergent uses of social justice life writing that compel readers and viewers toward activism. They connect to the earlier histories of abolition and demonstrate the significance of children lives in testimonial projects.
Associate Professor
Departamento de Economia e Relações Internacionais
Faculdade de Ciências Econômicas
Universidade Federal do Rio Grande do Sul
Av. João Pessoa, 52, 1o. andar, sala 18-D.
Centro, Porto Alegre, Rio Grande do Sul, Brazil
CEP 90040-000
Tel: +55(51)3308-3332
e-mail: [email protected]
URL: http://professor.ufrgs.br/nelsonseixas
Research Project submitted to the Alfred P. Sloan School of Management to apply for a Visiting Scholar position with Professor Rodrigo Verdi.
\label{intro} The classification of supernovae (SNe) has remained a challenging task in astrophysics since the first type distinction was made in 1941 by \cite{Minkowski_1941}. Minkowski split SNe into two groups: Type I spectra do not have any Hydrogen features, while Type II spectra exhibit strong Hydrogen features. Since then, SNe classification has grown more complex as the number of observed spectra has increased, along with the quality of observing instruments. Figure 1 illustrates the basic classification scheme, excluding unique spectra that might define their own subtypes.
In this paper, we focus on Type I SNe, specifically types Ib, Ic, broad-line Ic (Ic-BL), and IIb. These non-Type Ia SNe are generally less studied than Type Ia because they are not used as standard candles. Type Ib SNe are generally characterized by the presence of a strong HeI line, while Type Ic spectra lack this feature. Type IIb spectra resemble Ib spectra, but with the presence of a weak Hydrogen line at early phases. More information on the historical classification of SNe can be found in \cite{Filippenko_1997} or \cite{Matheson_2001}, and more recent discussion can be found in \cite{Modjaz_2014}, \cite{Liu_2016}, and \cite{Modjaz_2016}. These four SNe types are examples of stripped-envelope core-collapse supernovae (SESNe). An SESN is classified by the lack of Hydrogen layers, and often Helium layers, in the progenitor star. The progenitors lose their outer shells either through strong winds \cite{Woosley_1993} or binary interactions \cite{Podsiadlowski_2004}, and then explode when their cores collapse.
As a newly minted graduate student at MIT, I am among the many who suffer from "imposter syndrome." How did I get here? Ask anyone at MIT what made them special and the probable answer is, "I don't know how or why I got in, but how about you?" While your research advisor, academic advisor, or counselor may put emphasis on the tangibles, the series of acronyms like GPA, GRE, and CV that define your intellectual prowess, the real separation between those accepted and those declined lies in their habits. Viewed through these habits, the hot mess of graduate students appears ordered and coherent, and the reasoning behind their acceptance seems obvious.
There are four real ways graduate students get into top schools.
I'd like to find a solution where, starting from any initial value at time zero, the derivative approaches zero as \(t\) approaches infinity, so all roads lead to \(\frac{b}{a}\), which is a steady state.
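A minimal sketch, assuming the underlying equation is the linear ODE \(\dot{x} = b - ax\) with \(a > 0\) (the equation itself is not stated above, so this form is an assumption): the solution is \(x(t) = \frac{b}{a} + \left(x(0) - \frac{b}{a}\right)e^{-at}\), whose derivative \(\dot{x}(t) = -a\left(x(0) - \frac{b}{a}\right)e^{-at}\) approaches zero as \(t \to \infty\), so every trajectory converges to the steady state \(x^* = b/a\) regardless of the initial condition.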
Journal Article: Comparative membrane interaction study of viscotoxins A3, A2 and B from mistletoe (Viscum album) and connections with their structures (10.1042/BJ20030488)
Recently, the scientific community has been emphasizing the impact of human activity on climate change. There is evidence that greenhouse gas emissions have caused an increase in temperature at the global level. For this reason, in the present work we aim to offer an opinion on the aforementioned studies. This change may have catastrophic consequences.\ref{467629}
There are many similarities between Qi Gong and Eurithme in energetic creativity, form, and movement. There is a zeitgeist beginning to explore this cross-over. Eurithme practitioners find that Qi Gong improves their grounding, whilst Qi Gong practitioners may find that Eurithme deepens their understanding of the energetic influence and the subtlety of movement in Qi Gong.
This article presents a set of diagrams that provide an insight into Prime Frequencies, or the Harmonic Frequency of Primes as Thomas Mario Kalmar would put it. It explains how the diagrams are made and how they come together. The diagrams are intimately wrapped up with prime numbers and provide a visual description of how prime numbers occur. Principally, given a prime number, and assuming all other prime numbers up to that prime are known, the diagrams will reveal the primes in the series up to the square of the next prime number found. It is uncertain whether this is an efficient method of divining primes, or whether the diagrams actually add anything new to the theory of prime numbers or number theory in general. However, I have every reason to believe the diagrams are unique and interesting in themselves from a purely visual perspective, although they are most similar in appearance to Sacks' Spirals. My hope is that they may pique the curiosity of people interested in prime numbers. The diagrams were discovered by thinking carefully (obsessively) about the Discrete Fourier Transform and the famous musical concept of the Circle of Fifths.
The photocatalytic ozonation (PH-OZ) process using a TiO2 photocatalyst in an acidic water environment often leads to a synergistic effect in terms of decomposition and mineralisation of aqueous organic contaminants. The synergism is greatly influenced by the photocatalyst's physicochemical properties and the pollutant type, besides pH, temperature, O3 concentration, and other factors. Herein, five different commercial TiO2 photocatalysts (P25, PC500, PC100, PC10 and JRC-TiO-6) were used in photocatalysis (O2/TiO2/UV), catalytic ozonation (O3/TiO2), and PH-OZ (O3/TiO2/UV) advanced oxidation systems for the degradation of two pollutants (dichloroacetic acid, DCAA, and thiacloprid) simultaneously present in water. The synergistic effect in PH-OZ was much more pronounced for thiacloprid, which did not significantly adsorb on the photocatalyst surface, in contrast to DCAA with its stronger adsorption. Faster PH-OZ kinetics correlated with the higher exposed surface of TiO2 agglomerates, regardless of the (lower) BET surfaces of primary particles. However, DCAA mineralisation reactions on the TiO2 surface were much faster than thiacloprid degradation reactions in the solution bulk. Hence, we propose that a high BET surface area of the photocatalyst is crucial for fast surface reactions (DCAA mineralisation), while good dispersity (a high exposed surface of aggregates) and charge separation play the major role in the photocatalytic degradation or PH-OZ of less-adsorbed organic pollutants (thiacloprid).
Review of "APE: An Annotation Language and Middleware for Energy-Efficient Mobile Application Development"
Suggestion for acceptance
Strongly accept (maybe I'm too optimistic, but I think all of the reviewed papers so far have been very interesting and have made worthwhile contributions)
Summary of the paper
Annotated Programming for Energy-efficiency (APE) is a service for specifying and implementing system-level power management policies. The policies are written as special APE Java annotations by the developer and can specify complex power management policies in very few lines of code compared to writing the equivalent power management code by hand. This is very beneficial because mobile developers are often on an agile development cycle and don't want to refactor complicated power management policies throughout the development cycle. With APE, policies can be quickly and easily added after the requirement code is complete.
Positive points
The introduction was well written, following the method we discussed in class where the author describes current problems in the field and then their own contributions. APE's main strength is that it removes the complexity of writing power management policies for continuously-running mobile (CRM) application developers, with negligible overhead. One example on page 8 shows a policy that takes 19 LOC when written directly in Java rewritten as only three lines of APE annotations. I think the manageability and flexibility of APE make it very valuable to CRM application developers.
Negative points
Figure 8 seemed out of place, as it wasn't referenced in the section where it appeared but rather in the following section. With energy conservation comes data loss: the application does not send as many updates as it would if APE were disabled. It is ultimately up to the programmer to decide which areas of code need to send or receive updates frequently, however, so this is not really a negative point of the APE service, just a negative point of energy-saving techniques in general. The authors only tested on one application, CitiSense. They got good results from one instance, and then created several instances to show that APE works well with "multiple APE-enabled applications", which seems a little presumptuous. They also used only one energy-saving APE policy on these instances; it made enough of a difference to prove their point, but demonstrating more than one of their policies would have been more interesting. I also feel that the study presents some highly optimistic graphs that can be somewhat misleading at first glance: Figures 9 and 10 show drastic improvement, but on further investigation this improvement applies to a very specific case.
Potential future work
The authors mention that they will conduct a study to examine how well experienced developers adapt to using APE to define power-management policies. This is a worthwhile experiment and could also yield valuable user feedback in real-life situations. It would also be useful to implement other APE policies across various applications and run them concurrently to see how these policies interact with each other.
In this paper, we are going to learn how to use R for statistical computing. You will need an instance of RStudio to work with the modules. RStudio is free and open-source software that uses R as its back end. To work with this series of examples, just fire up RStudio and copy and paste the code from this page into the script window.
Set up
Create three directories: data, code, and documents
Use script (this) and console
Keep all data in the data directory
Keep all code and scripts in the code directory
You can write a document using the script window
In the script window you type the contents of your document
You write using Markdown syntax
The following table explains the Markdown syntax
| Markdown syntax | Meaning |
|-----------------|-----------------------------|
| Headers | Use a number of hash marks |
| Table | This is an example of table |
| Figures | ![name](filename.jpg) |
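For example, a minimal sketch of these elements in a script (the header text and filename are placeholders, not files from this tutorial):
# My Analysis (one hash mark makes a top-level header; more hashes make deeper levels)
![weights](data/weights.jpg) (embeds the image file data/weights.jpg from the data folder)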
Type math in console:
(3 + 5) # type these in the console, not here
Assign values to objects
wt_kg <- 55 # will not print anything in the console
Note the following about object names
You can give an object almost any name you want
Do not start with a number
Object names are case sensitive
Do not use reserved names
Use nouns for variable names
Use verbs for function names
Avoid dots in object names
When you create objects, R will not print anything in the console
If you want to print, use parentheses ()
What can you do with variables?
Do arithmetic with them
Change a variable's value by assigning a new value to it
If you use a variable to compute another variable, then:
Changing the first variable's value afterwards does not change the other variable
R code: variables
wt_kg <- 100
wt_lb <- wt_kg * 2.0
wt_kg <- 120
(wt_lb) # what do you think wt_lb will print? 200 or 240? It prints 200: wt_lb was computed while wt_kg was 100 and is not re-evaluated
Functions and arguments
Functions
Automate complex and repeated sets of commands
Are canned scripts
Can be predefined, such as mean()
Can be accessed by loading packages
Each function has inputs called arguments
Functions return a value
The values functions return can be numeric or non-numeric
To run a function, you first call it
R code: example of a function
a <- 9 # assign 9 to variable a
b <- sqrt(a) # call the function sqrt with argument a (which is 9); the result, 3, is assigned to b
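Functions can also take named arguments; a quick sketch using base R's round() (the value here is just an example):
( round(3.14159, digits = 2)) # returns 3.14; 'digits' is a named argument whose default is 0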
Vectors and data types
Vectors are the most common and basic data type in R
A vector holds a single value or a series of values
Elements are either numbers or characters
Assign using the c() function
For a character vector it is essential to have quote marks; otherwise R thinks the words are objects and throws error messages
length() tells you how many elements are present in a vector
class() tells you what type of element this object holds
str() tells you the structure of the object
Some examples to run
wt_g <- c(50, 60, 70, 80)
animals <- c("mouse", "rat", "cat", "dog")
(length(wt_g)) # returns 4
(length(animals)) # returns 4
(class(wt_g)) # returns "numeric" as every element is a number
(class(animals)) # returns "character" as it is a character vector
(str(wt_g)) # gives more information about the vector: its type, length, and first elements
wt_g <- c(wt_g, 90) # we can add more elements this way to the end
wt_g <- c(30, wt_g) # add an element to its front
# other types of vectors are logical (TRUE/FALSE),
# integer = whole numbers,
# complex = complex numbers,
# raw = raw bytes
Data structures
Vectors contain elements of a single, identical type
Columns of data sets are vectors
Lists can contain a mix of element types (like rows of data sets); see the sketch after this list
The first position is 1, so the indexing starts at 1
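As a minimal sketch of a list mixing element types (the element names are made up for illustration):
my_list <- list(id = 1, species = "mouse", alive = TRUE) # a list can mix numeric, character, and logical
(my_list$species) # access a named element; returns "mouse"
(my_list[[1]]) # double brackets return the element itself; returns 1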
R code example of subsetting vectors
ans <- c("mice", "rats", "dogs", "cats")
(ans[2]) # will return "rats"
(ans[c(3,2)]) # will return "dogs" "rats"
Conditional subsetting
Subset from a vector by defining different conditions
R code example of conditional subsetting
weight_g <- c(21, 34, 39, 54, 55)
(weight_g[c(TRUE, FALSE, TRUE, TRUE, FALSE)]) # we only want 1st, 3rd and 4th element
(weight_g > 50) # returns a logical vector: TRUE where weight > 50
(weight_g[weight_g > 50]) # subset using that logical vector
(weight_g[weight_g > 50 | weight_g < 30]) # | is the OR operator
(weight_g[weight_g > 50 & weight_g < 30]) # & is the AND operator; returns an empty vector, as no value is both > 50 and < 30
How to search for strings in a vector
animals <- c("cat", "rat") # define what you want to search for
statement <- c("a", "cat", "sat", "on a", "mat", "to catch a", "rat") # the vector to search in
(animals %in% statement) # are animals in statement?
( animals[animals %in% statement]) # which animals?
How do you analyse real-world data sets with missing data?
Missing data in R are presented as NA
If you operate on a vector that has NA, the operation will result in NA
You have to remove NA in these cases
For those operations, set na.rm = TRUE
R code example of missing data
height <- c(2,4,4,NA, 6)
( mean(height)) # will return NA
( mean(height, na.rm = T)) # T is shorthand for TRUE
What will you do to remove missing values from data sets?
( height[!is.na(height)] ) # will return 4 values
( na.omit(height)) # remove missing data
( height[complete.cases(height)]) # similar to !is.na()
lengths <- c(10, 24, NA, 18, NA, 20) # vector
lengths_without_NA <- lengths[!is.na(lengths)]
( median(lengths_without_NA)) # can you think of one other way of doing this?
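One other way, as the comment above hints, is to let median() drop the NAs itself:
( median(lengths, na.rm = TRUE)) # same result without building an intermediate vector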
Working with data sets
We will analyse a data set that has the following variables
| Column          | Description                |
|-----------------|----------------------------|
| record_id       | unique ID                  |
| month           | month of observation       |
| day             | day of observation         |
| year            | year of observation        |
| plot_id         | ID of a particular plot    |
| species_id      | ID of a particular species |
| sex             | sex (male or female)       |
| hindfoot_length | length of the hindfoot     |
| weight          | weight in grams            |
| genus           | genus of the animal        |
| species         | species of the animal      |
| taxa            | the taxonomy               |
| plot_type       | type of plot               |
Use download.file() function to download the file
Store it in the data folder
Read the data into R using read.csv() function
This will save the data as a data.frame object
How to read data in R
download.file("https://ndownloader.figshare.com/files/2292169", "data/portal_data_joined.csv") # download the data
surveys <- read.csv("data/portal_data_joined.csv", stringsAsFactors = TRUE) # read character columns as factors; since R 4.0 the default is FALSE, and the factor examples below rely on factors
What do we do with the data set?
( head(surveys)) # first six rows
( tail(surveys)) # last six rows
( str(surveys)) # get the data structure
(nrow(surveys)) # number of rows
(ncol(surveys)) # number of columns
( names(surveys)) # lists variables
(colnames(surveys)) # lists variables another style
( summary(surveys)) # get a summary of the data set
Indexing and subsetting data sets
(surveys[1,1]) # first row first column
( surveys[1,6]) # element in row 1 and column 6
( surveys[, 1]) # contents of the first column
( surveys[c(1:3), 7]) # first three rows, column 7
( surveys[, -1]) # data set minus the first column
( surveys[c(1:6), ]) # keep only the first 6 rows
( surveys["species_id"]) # return a column by name
( surveys[, "species_id"]) # returns a vector values
How to deal with factors
Used to represent categorical data
Can be ordered or unordered
Factors are stored as integers
These integers have labels associated with them
Even though they behave like characters, they are integers
R sorts factors in alphabetical order
R code samples to deal with factors
sex <- factor(c("male", "female", "female", "male"))
( levels(sex)) # R assigns 1 to female and 2 to male
( nlevels(sex)) # returns number of levels
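# a quick check that factors are stored as integers underneath (using the sex factor above):
( as.integer(sex)) # returns 2 1 1 2: "female" maps to 1 and "male" to 2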
plot(surveys$sex) # plot the number of observations
( levels(surveys$sex)) # returns "", "F", "M"
levels(surveys$sex)[1] <- "not known" # change "" to "not known"
How to format dates
Convert dates and times to an appropriate and usable format
Use the lubridate package and ymd() function
How to format dates with R
library(lubridate) # load the lubridate package
surveys$date <- ymd(paste(surveys$year, surveys$month, surveys$day, sep = "-")) # ymd converts dates
( str(surveys$date)) # returns the structure of date object
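Once converted, lubridate's accessor functions can pull components back out; for instance:
( head(year(surveys$date))) # extract the year from the first few dates
( head(month(surveys$date))) # extract the month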
1750-1869: David Hume, Adam Smith, Thomas Malthus, David Ricardo, J.S. Mill and Karl Marx
1870-1939: Alfred Marshall, Joseph Schumpeter, Colin Clark, Kuznets, Hoffman, and Roy Harrod
1940-1985: Solow and Swan
1986 to present: Paul Romer, Robert Barro, Philippe Aghion, Oded Galor, Daron Acemoglu...
Stylized facts about growth (what reality are you trying to explain):
Economies grow over time in the long-run
There are vast differences in the standard of living across countries
Factors of production's (K and L) shares of total income are roughly constant: \(F(cK, cAL) = cF(K, AL)\)
Ratio of capital to labor (K/L) is roughly constant over time (1:3): \(F(K, AL) = K^\alpha (AL)^{1-\alpha}\)
Marginal return to capital is more or less constant (mostly true for the UK, the longest time series)
Productivity tends to increase over the very long run (history of world GDP, 444 \(\rightarrow\) 6000)
Share of consumption in GDP has remained constant (~70%): \(\dot K(t) = sY(t) - \delta K(t)\)
Financial development precedes economic development and growth (Hamilton's predictions)
There are structural transformations in the development process that a generalized production function cannot capture (different industries)
There are growth miracles and disasters (i.e., China vs. Buenos Aires)
Important issues:
What does the production function capture? Work at home? Is it OK to assume labor and knowledge are exogenously determined?
Solow Model:
\(Y(t) = F(K(t), A(t)L(t))\)
Assumptions: constant returns to scale (CRS), and the growth rates of L and A are exogenous (something the model is not intending to explain)
Properties of the production function: it has three arguments, all of which are functions of time; AL ("effective labor") enters multiplicatively; the specification implies that K/Y is eventually constant; it is homogeneous of degree 1 (constant returns to scale); and inputs other than K, A, and L are unimportant (no land or natural resources, which would not change the implications anyway)
\(\therefore\) Cobb-Douglas is convenient and easy to understand
\(F(\frac{K}{AL}, 1) = \frac{1}{AL} F(K,AL)\)
\(y \equiv \frac{Y}{AL}; k \equiv \frac {K}{AL}\)
\(y = f(k)\)
Remember, \(F(K, AL) = K^\alpha (AL)^{1-\alpha}\)
\(\therefore f(k) = k^\alpha\)
Evolution of inputs:
\(\dot L (t) = nL(t)\), growth rate of labor (exogenous)
\(\dot A(t) = gA(t)\), growth rate of knowledge (exogenous)
\(\dot K(t) = sY(t) - \delta K(t)\), growth of capital: savings minus depreciation (endogenous; the model asks what drives K)
Dynamics of the model (embedded all three equations into one):
the model is determined by the movement of k over time
So k is replenished by savings/investment and diminished by the growth of A (at rate g), the growth of L (at rate n), and depreciation (\(\delta\)): \(\dot k(t) = sf(k(t)) - (n + g + \delta)k(t)\). In steady state \(\dot k = 0\), so capital per unit of effective labor is constant and total capital K grows at rate \(n + g\).
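A short worked step, using the Cobb-Douglas case \(f(k) = k^\alpha\) from above: setting \(\dot k = 0\) gives \(s (k^*)^\alpha = (n + g + \delta) k^*\), so \(k^* = \left(\frac{s}{n + g + \delta}\right)^{\frac{1}{1-\alpha}}\). A higher savings rate therefore raises the steady-state level of capital per effective worker, but not its long-run growth rate.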
E.g.: a change in savings rate
Temporarily increases the growth rate of output per effective worker
Impact on long-term consumption will depend on whether the new steady-state level of capital is above or below the "Golden Rule" level
Consumption will be equal to: \(c = y - (n+g+ \delta)k\), or the amount of output that is not saved
Consumption is maximized where \(f'(k) = n + g + \delta\), i.e., where the slope of \(f(k)\) equals the slope of the break-even investment line
^THAT is the Golden Rule
Problems with Solow Model
Implied differences across countries in capital per worker are extremely large (US to India would be over 100x)
Implied differences across countries in the marginal return to capital are also implausibly large
g is exogenous (even though improvements in knowledge account for the bulk of growth)
Growth Accounting
Useful for identifying the proximate causes of growth: it seeks to find out what fraction of growth is due to increases in the factors of production, and what is left over from everything else
Works by taking the partial derivatives of output with respect to each input to weight that input's growth rate, attributing the remainder to a residual, as sketched below:
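A standard sketch of the decomposition, with \(\alpha_K\) and \(\alpha_L\) denoting the output elasticities of capital and labor: \(\frac{\dot Y(t)}{Y(t)} = \alpha_K \frac{\dot K(t)}{K(t)} + \alpha_L \frac{\dot L(t)}{L(t)} + R(t)\), where \(R(t)\), the Solow residual, captures growth not explained by input accumulation and is usually read as a measure of technological progress.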
Size does matter. With global resources on the verge of depletion and all kinds of pollution on the rise, a growing part of the population is slowly turning towards smaller things for salvation, namely nanotechnology. Perhaps the National Nanotechnology Initiative describes it best when it defines nanotechnology as 'the manipulation of matter with at least one dimension sized from 1 to 100 nanometers', matter that exhibits distinctive properties at the nanoscale. It is one of those feats of science that fits into every field of daily life; the sky is the limit when it comes to its benefits, as it boasts immense potential in medicine, energy, and food, to name a few.
"In thinking about nanotechnology today, what's most important is understanding where it leads, what nanotechnology will look like." - Kim Eric Drexler
That is the very essence of our research. In this paper we discuss the role nanotechnology is destined to play in our food industries. Today's agricultural system is being badly affected by climatic changes, both natural and man-made. The banes of this system are polluted irrigation and groundwater reserves and rapidly changing climatic variables leading to uncontrolled plant growth and infertility. Our paper analyzes what nanotechnology can do to avert this crisis, which would otherwise lead to less and less food production, breaking down lives and economies. Its most remarkable and rare properties can allow us to maintain and control our crops far better and more efficiently at the minutest of scales. In the end, this research paper recommends solutions to some of the problems faced while deploying the technology, mainly toxicity and its environmental impact. Suffice to say, regulated use of nanotechnology can offer a promising future, provided its potential is completely understood and exploited.
Introduction
The 21st century has brought about a fast-paced revolutionary era that has produced incredible marvels in every field of science and the humanities, including the introduction of many new ones. Yet it has also presented seemingly unsolvable problems, many of which have victimized our agricultural industries. These challenges include global warming resulting in uncontrolled farming, impure water for irrigation, soil infertility, and the accumulation and runoff of industrial and fertilizer chemicals resulting in toxicity and contamination. All of the above-mentioned problems, along with many others, have squeezed global food production and struck the economies of developing countries hardest, as agricultural production is the backbone of such countries' economies.
In time, the demand for food will inevitably rise to feed 9.8 billion mouths by 2050 [1]. Also, with the high prices and consequences of fossil fuels, countries will soon start viewing agricultural products as the new big thing in international trade. Furthermore, efficient and promising biofuels, especially algae, will become a global trend as a petroleum supplement owing to their rare and multi-use properties. For all this to become a reality, food production has to keep increasing with population growth, which will strain already weak agricultural systems and take a heavy toll on land that is becoming more infertile by the minute. Innovative scientific solutions, like nanotechnology, offer possible ways to increase farm productivity and reduce its environmental strain.
Richard Feynman was the first to describe the magnitude of nanotechnology's potential in real-world applications, during his lecture producing the famous statement, "There's plenty of room at the bottom." [2] Nanotechnology membranes can effectively filter contaminated irrigation water. To gain greater control over plants' growth, nanosensors and nanobots can be employed to maintain healthy growth even in exceptional conditions like those presented by global warming. For better crop yield and fertiliser efficiency, nutrient delivery can be implemented with nanotechnology. However, the conventional method of producing all the nanomaterials to build such technology has adverse environmental impacts as well as high production costs. In this context, plants can act as bioreactors and 'green synthesize' nanomaterials, relieving burdens on other industries related to the process. Although such technologies have weak public support, there is a dire need for their acceptance as, perhaps, right now they are the only cost-effective, environment-friendly, and productive means of meeting global demand for agricultural products.
Water purification using nanotechnology exploits nanoscopic materials such as carbon nanotubes and alumina fibers for nanofiltration.
It also utilizes the nanoscopic pores in zeolite filtration membranes, as well as nanocatalysts and magnetic nanoparticles.
Nanosensors, such as those based on titanium oxide nanowires or palladium nanoparticles, are used for analytical detection of contaminants in water samples.
Nanofiltration can be used for the removal of sediments, chemical effluents, charged particles, bacteria, and other pathogens.
"The main advantages of using nanofilters, as opposed to conventional systems, are that less pressure is required to pass water across the filter, they are more efficient, and they have incredibly large surface areas and can be more easily cleaned by back-flushing compared with conventional methods," - Alpana Mahapatra and colleagues Farida Valli and Karishma Tijoriwala
Carbon nanotube membranes can remove almost all kinds of water contaminants, including turbidity, oil, bacteria, viruses, and organic contaminants.
Although their pores are significantly smaller, carbon nanotube membranes have been shown to have an equal or faster flow rate compared with larger-pored filters, possibly because of the smooth interior of the nanotubes.
Nanofibrous alumina filters and other nanofiber materials also remove negatively charged contaminants such as viruses, bacteria, and organic and inorganic colloids at a faster rate than conventional filters.
It has been estimated that in the near future the average water supply per person will drop by a third, resulting in the avoidable premature deaths of millions of people.
Conventional desalination technologies like reverse osmosis (RO) membranes are in use, but these are costly due to the large amount of energy required.
Nanotechnology has played a very important role in developing a number of low-energy alternatives, among which three are most promising: (i) protein-polymer biomimetic membranes, (ii) aligned carbon nanotube membranes, and (iii) thin-film nanocomposite membranes.
These technologies have shown up to 1000 times better desalination efficiencies than RO, as the carbon nanotube membranes in their structure give them high water permeability.
Some of these membranes also integrate other processes, like disinfection, deodorizing, de-fouling, and self-cleaning.
For a 1% increase in P, quantity demanded falls by 16.7%
Example #2:\(P = 940 - 48(Q) + Q^2\)
\(Q = 10, P = 560\)
\(\frac{dQ}{dP} = \frac{1}{\frac{dP}{dQ}} = \frac{1}{-48 + 2Q} = -\frac{1}{28}\) at \(Q = 10\), so \(E = \frac{dQ}{dP} \cdot \frac{P}{Q} = -\frac{1}{28} \cdot \frac{560}{10}\)
\(E = -2\)
Cross-elasticity: \(E_{xy} = \frac{dQ_x}{dP_y} \cdot \frac{P_y}{Q_x}\); if positive, the goods are substitutes, but if negative, they are complements
Income elasticity: \(E_I = \frac{dQ}{dI} \cdot \frac{I}{Q}\); as incomes rise, people have fewer children (an inferior good? or fewer children because each becomes a higher-'quality' kid?)
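A hypothetical worked case (the numbers are made up for illustration): if \(Q = 100\) at income \(I = 50{,}000\) and \(\frac{dQ}{dI} = -0.001\), then \(E_I = -0.001 \cdot \frac{50{,}000}{100} = -0.5\); the negative income elasticity is the signature of an inferior good.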