This paper summarizes the discussion at the OPD Forum held by the West Java Plantation Office (Dinas Perkebunan Jawa Barat) on 15-16 March 2017 at the Prime Park Hotel, Bandung. The discussion session (on the second day) was part of that larger event.
Several resource persons were invited to this discussion:
This paper is a review of the Guidelines for Scientific Publication (Pedoman Publikasi Ilmiah) document (Kemristekdikti, 2017), which will be developed further by adding related references. The document was prepared collaboratively, coordinated by Dasapta Erwin Irawan (ITB). This brief review will gradually be supplemented with context for each review point.
Contributors:
Robbi Rochim (Institut Teknologi Medan),
…., …., …. (please add your name and affiliation)
The Axiom of Comparison: Consumers can compare options and form preferences
The Axiom of Transitivity: if a consumer prefers A to B and B to C, then he or she prefers A to C
Definitions: a good is a commodity for which more is preferred to less; a bad is a commodity for which the reverse holds
3.2: Utility and Preference
Bentham: "Nature has placed mankind under the governance of two sovereign masters, pain and pleasure... the principle of utility recognizes this subjection." Equating happiness with pleasure.
Definition: Utility is the variable whose relative magnitude indicates the direction of preference
Indifference curves can be called constant utility curves and amount to the combinations of goods X and Y that bring about equal utility
3.3: Characteristics of Indifference Curves
Properties of indifference curves:
Negative slope: to stay on the same curve, the consumer must give up some of Y to gain more of X (opportunity cost)
Indifference curves never intersect: more is always preferred to less, so curves that overlap contradict themselves
Coverage of indifference curves: the indifference map fills the whole commodity space, with each curve representing a distinct level of utility
Indifference curves are convex to the origin: based upon empirical observation of "diversity in consumption"
3.4: More on Goods and Bads
Portfolios are chosen with an eye to desired features (average percent yield r) and undesired features (riskiness s)
Indifference curves between a good and a bad always have a positive slope
3.5: The Sources and Content of Preferences
Some preferences are stable (e.g., benevolence towards children). One interesting study shows that women leave their inheritance to their children more often than to their surviving spouse (and vice versa for men)
Many are transitory-- how does marketing drive this?
4.1: The Optimum of the Consumer
Budget lines show the baskets attainable for consumers who spend their whole incomes on two goods (a budget line generally crosses a given indifference curve twice, except at a tangency)
Consumer choice equation: \(P_x x + P_y y = I\)
Slope of budget line: \(-\frac{P_x}{P_y}\)
Conclusion: the optimum of the consumer is the point on the budget line that touches the highest attainable indifference curve. When the curves are convex, the optimum can be an interior solution, where positive amounts of both commodities are bought, or a corner solution, where one of the commodities is not bought at all
Remember, can only use marginal utility when thinking of it as a cardinal variable (adding up vs. ranking) and diminishing marginal utility is assumed
For ordinal utility, we can use Marginal Rate of Substitution in Consumption (MRSc): \(MRS_c \equiv - \frac{\Delta y}{\Delta x}|_U\)
\(MRS_c = \frac{P_x}{P_y}\)
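As a worked illustration of this tangency condition, the short sketch below solves the consumer's problem numerically; the Cobb-Douglas utility, prices, and income are illustrative assumptions, not values from the notes.

```python
# Consumer optimum: maximize U(x, y) = x**0.5 * y**0.5 on the budget line
# Px*x + Py*y = I, then check the tangency MRS = y/x = Px/Py.
from scipy.optimize import minimize_scalar

Px, Py, I = 2.0, 1.0, 100.0          # illustrative prices and income

def neg_utility(x):
    y = (I - Px * x) / Py            # remaining income is spent on y
    return -(x ** 0.5) * (y ** 0.5)

res = minimize_scalar(neg_utility, bounds=(1e-9, I / Px), method="bounded")
x_star = res.x
y_star = (I - Px * x_star) / Py
print(x_star, y_star)                # ~25, ~50: an interior solution
print(y_star / x_star, Px / Py)      # MRS equals the price ratio: 2, 2
```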
4.2-4.3: Complements, Substitutes, and the Consumer's Response
Perfect complements curves will be at right angles
Perfect substitutes curves are straight lines
If we assume preferences do not change, the optimum of the consumer can vary only in response to changes in opportunities: his or her income and commodity prices
Conclusion: For goods X and Y, a positively sloped Income Expansion Path indicates that the consumption of both goods rises as income grows (normal or superior goods).
The curve connecting all the optimum positions of the goods is called the Price Expansion Path
Properties of PEP:
As Px falls with income I held constant, the consumer attains higher utility.
When the PEP slopes downward, the consumer responds to a fall in Px by choosing more X but less of the numeraire good Y
The intercepts are the "choke prices" where the consumer buys none of the good at the high price
It may even curl backward, as with a Giffen good, where the lower price causes the consumer to buy less of X (violating the law of demand)
4.4: Income and Substitution Effects of a Price Change
Income effect: A fall in Px increases the consumer's real income-- meaning, he or she could buy the same bundle of goods as before but have some left over
Pure Substitution Effect: Even if real income or utility had remained the same, more X would have been purchased anyway at the lower Px
4.5: From Individual Demand to Market Demand
Conclusion: the market demand curve is the horizontal sum of the individual demand curves: \(X \equiv \sum_{i=1}^{N} x_i\)
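A minimal sketch of this horizontal summation, assuming two hypothetical linear individual demand curves:

```python
# Market demand as the horizontal sum of individual demands: at each price,
# add the quantities demanded by each consumer. The demands are illustrative.
def x1(p):
    return max(0.0, 10 - p)       # consumer 1's demand

def x2(p):
    return max(0.0, 8 - 2 * p)    # consumer 2's demand (choke price p = 4)

def market_demand(p):
    return x1(p) + x2(p)

for p in [0, 2, 4, 6, 9]:
    print(p, market_demand(p))
# The market curve kinks at p = 4, where consumer 2 drops out.
```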
The treatment and degradation of organic effluents has received growing attention in recent years, especially in areas of intensive agro-industrial activity. These effluents, originating for example from pig farms or slaughterhouses, contain high concentrations of recalcitrant organic matter as well as high levels of nitrogen and phosphorus \cite{P_ramo_Vargas_2015,Villamar_2011}, and are treated by a combination of physico-chemical (coagulation/flocculation) and biological (anaerobic digestion) techniques. However, the performance of these techniques is not entirely satisfactory, owing to the large volumes of effluent to be treated and the toxicity of some pollutants. Alternative technologies such as Advanced Oxidation Processes (AOPs) have therefore been increasingly explored for the environmental remediation of wastewater. Applied alone or in combination with other processes, these techniques aim to increase the stability of effluents prior to discharge by oxidizing the organic matter present, and they can also be used as a pre-treatment. Among the various AOP-based remediation methodologies, particular attention has been given to the Fenton process, which uses hydrogen peroxide in the presence of iron salts to generate hydroxyl radicals, a powerful oxidizing agent. The Fenton process thus involves the generation of HO• radicals by the catalytic decomposition of hydrogen peroxide through the action of the Fe2+ ion in acidic medium, according to the equation \cite{P_ramo_Vargas_2015}: \begin{equation}
Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + HO^- + HO^{\bullet}
\end{equation}
Once generated, the HO• radicals are able to oxidize the organic pollutants (R) present: \begin{equation}
HO^{\bullet} + R \rightarrow \text{final products}
\end{equation}
This homogeneous catalysis process is not necessarily activated by light, but it can be significantly accelerated by ultraviolet or visible radiation (< 580 nm). The resulting photocatalytic process, known as photo-Fenton, generates an additional amount of HO• radicals and leads to the reduction of the photocatalyst, according to the equations \cite{Oliveira_2012}: \begin{equation}
H_2O_2 + h\nu_{UV\text{-}vis} \rightarrow 2\,HO^{\bullet}
\end{equation} \begin{equation}
Fe^{3+} + H_2O + h\nu_{UV\text{-}vis} \rightarrow Fe^{2+} + HO^{\bullet} + H^+
\end{equation}
This process has the advantage of being quite efficient and economical, since sunlight can be used. It has, however, the disadvantage of being relatively aggressive, owing to the low pH required for the reaction and the high consumption of hydrogen peroxide. In addition, the ferrous ions must be removed at the end of the treatment \cite{Oliveira_2012}. Owing to its climatic characteristics, the Alentejo is a region well suited to the implementation of AOP-based remediation technologies, including the photo-Fenton process.
This is the only value that changes, which makes sense. The effective diffusion is lowered because the particle now has a much higher probability of reaching the ring of nutrients stochastically in 2D vs. 1D, with the frequency and length of tumbling kept the same. The velocity of the particle isn't changing, nor is the time it needs to reach the nutrients, nor is the MSD.
[2a]
Find MSD over long time-scales
i. \(U(x) = \frac{1}{2}kx^2\); potential energy of the trapped particle
ii. \(Z = \int_{-\infty}^{\infty} \exp\left(\frac{-U(x)}{k_B T}\right) dx\); partition function for normalization
iii. \(\langle x^2 \rangle = \frac{\int_{-\infty}^{\infty} x^2 \exp\left(\frac{-kx^2}{2k_B T}\right) dx}{\int_{-\infty}^{\infty} \exp\left(\frac{-kx^2}{2k_B T}\right) dx}\); the normalized mean squared displacement from elementary probability theory
iv. \(\langle x^2 \rangle = \sqrt{2\pi}\left(\frac{k_B T}{k}\right)^{3/2} \cdot \left[\sqrt{2\pi}\left(\frac{k_B T}{k}\right)^{1/2}\right]^{-1}\); integrals solved using the Gaussian integral
v. \(\langle x^2 \rangle = \frac{k_B T}{k}\) as \(t \rightarrow \infty\); simplified MSD expression
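A quick numerical check of this result, evaluating the Boltzmann-weighted average by quadrature; the values of \(k_B T\) and \(k\) are arbitrary illustrative units.

```python
# Check <x^2> = kB*T/k for a harmonically trapped particle by computing the
# Boltzmann-weighted average numerically (arbitrary consistent units).
import numpy as np
from scipy.integrate import quad

kBT, k = 1.0, 2.5   # illustrative values

weight = lambda x: np.exp(-k * x**2 / (2 * kBT))
num, _ = quad(lambda x: x**2 * weight(x), -np.inf, np.inf)
den, _ = quad(weight, -np.inf, np.inf)

print(num / den, kBT / k)   # both print 0.4: <x^2> = kBT/k
```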
[2b]
Over long time-scales, particle diffusion is limited by the optical trap, causing the MSD to plateau at the trap length scale. A free particle undergoing Brownian diffusion would show a much higher MSD over long time-scales by comparison. At short time-scales, the trapped particle's motion looks similar to a free particle's motion because it has not yet felt the effect of the optical trap, so one cannot yet interpret the motion as trapped. Observations at long time-scales show the effect of optical trapping through the plateau in the MSD. The long-time MSD does not depend on lag time because it is constrained by the optical trap, which sets the MSD over sufficiently long time-scales.
A study of regional efficiency in abalone aquaculture using SFA (Sept. 2012, Kim Hye-sung and Song Jung-heon)
No prior study has used real-time data to predict production within the same day.
3. Data Analysis
The oyster production and worker-count observations used in this paper comprise 116 daily records, observed directly from November 2016 to March 2017 [1]. Air temperature, water temperature, and wind speed were considered as factors that may affect working conditions; these data were taken from the Korea Meteorological Administration for the Tongyeong area, where the oyster production data were observed.
Table x presents summary statistics for Korean shallow-sea cultured oysters (shucked).
Total production is the sum of weighed-in weights up to 16:00, with a mean of 3,620.28 kg and a standard deviation of 940.62 kg, ranging from 1,779.60 kg to 6,452.50 kg. Morning production is the sum of weighed-in weights up to 12:00, with a mean of 2,237.44 kg and a standard deviation of 573.04 kg, ranging from 1,115.90 kg to 3,956.20 kg. The morning worker count, taken from the weigh-in records up to 12:00, has a mean of 107 workers and a standard deviation of 573.04, ranging from 53 to 130. The afternoon worker count, taken from the weigh-in records between 12:00 and 16:00, has a mean of 102 workers and a standard deviation of 17.53, ranging from a minimum of 13 to a maximum of 130. Air temperature ranges from -0.67 ℃ to 16.66 ℃, humidity from 26.67% to 98%, and wind speed from 1.15 ㎧ to 6.38 ㎧.
To facilitate comparison between models, each variable was standardized. Records with an afternoon worker count of zero were also excluded, since predicting the day's total production from morning production is meaningless in that case.
4. Methods
This study aims to build a model that best predicts same-day oyster production by analyzing the factors judged to be related to oyster production.
To build an efficient multiple linear regression model, it is important to select appropriate explanatory variables and to estimate optimal regression coefficients for each.
To determine the range of data used to build the model, models were built from the previous one to four months of data and compared to settle on an efficient data range. (To select appropriate explanatory variables, separate models using morning production and hourly production were also built, compared, and examined.) Finally, regression coefficients were estimated by ordinary least squares and by stepwise selection to build an efficient oyster production prediction model.
4.1 Data Selection (delete)
(Compare how many previous months of data to use) -> move into the same table as the coefficient-estimation methods.
To determine an efficient amount of training data for model building, models were built from the previous one to four months of data and then compared and tested. To examine how the amount of data affects the model, regression coefficients were estimated by ordinary least squares.
As the table shows, the error rate decreased as the training data grew, and prediction accuracy also tended to improve with larger training sets.
However, when the February data were used for testing and the rest for training, the previous one month of data gave the highest prediction accuracy and the previous four months gave the lowest error rate. The February totals ranged widely, from a minimum of 395.9 kg to a maximum of 6,452.5 kg, unlike total production in the other periods, which ranged from about 1,500 kg to 5,000 kg.
4.2 Regression Coefficient Estimation
(Comparison of ordinary least squares and stepwise selection)
4.2.1 Ordinary Least Squares
To predict same-day oyster production, the explanatory variables were morning oyster production, the morning and afternoon worker counts, and air temperature, humidity, and wind speed, which were judged to affect worker efficiency; all data outside the test range were used. Ordinary least squares was used to estimate the optimal coefficient for each independent variable, and t-statistics were used to judge each variable's influence on the dependent variable, oyster production. A variable with a higher t-statistic is judged more influential; it is computed as in equation x.
t-statistic = estimated regression coefficient / standard error of the coefficient
Table x shows the model built for total oyster production using all data except the test set.
The t-statistics show that morning production is the independent variable with the greatest influence on the total-production model. Holding other conditions constant, a 1% increase in morning production was associated with an average 0.85% increase in total oyster production.
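For reference, a minimal sketch of this OLS step in Python with statsmodels; the data below are synthetic stand-ins for the 116 observed records (which are not reproduced here), and the column names simply mirror the explanatory variables listed above.

```python
# OLS with a t-statistic for each coefficient (t = coefficient / std. error).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 116                                         # number of daily records
df = pd.DataFrame({
    "morning_prod":      rng.normal(0, 1, n),   # standardized, as in Sec. 3
    "morning_workers":   rng.normal(0, 1, n),
    "afternoon_workers": rng.normal(0, 1, n),
    "temperature":       rng.normal(0, 1, n),
    "humidity":          rng.normal(0, 1, n),
    "wind_speed":        rng.normal(0, 1, n),
})
# Synthetic target in which morning production dominates, as in the text.
df["total_prod"] = (0.85 * df["morning_prod"]
                    + 0.10 * df["afternoon_workers"]
                    + rng.normal(0, 0.3, n))

X = sm.add_constant(df.drop(columns="total_prod"))
fit = sm.OLS(df["total_prod"], X).fit()
print(fit.tvalues)    # exactly: coefficient / standard error, as in eq. x
```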
4.2.2 Stepwise Selection
Coefficients were estimated under the same conditions as for ordinary least squares; the results are shown in Table x.
5. Results and Performance Comparison
Checking for multicollinearity among all variables used in the multiple regression model, the VIFs of all variables and the constant were below 10, so no multicollinearity problem arose. The fitted models were used to predict the test data, and the adjusted coefficient of determination, root mean square error, mean absolute error, and Nash-Sutcliffe efficiency coefficient were used to compare the models. Table x describes these statistical measures.
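For concreteness, the comparison metrics named above written out as functions, plus the VIF check; `y_obs` and `y_pred` are assumed to be arrays of observed and predicted total production on the test data.

```python
# RMSE, MAE, and Nash-Sutcliffe efficiency, plus the VIF multicollinearity
# check (X is a design matrix including the constant column).
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

def rmse(y_obs, y_pred):
    return np.sqrt(np.mean((y_obs - y_pred) ** 2))

def mae(y_obs, y_pred):
    return np.mean(np.abs(y_obs - y_pred))

def nse(y_obs, y_pred):
    # 1 = perfect prediction; 0 = no better than predicting the mean.
    return 1 - np.sum((y_obs - y_pred) ** 2) / np.sum((y_obs - np.mean(y_obs)) ** 2)

# vifs = [variance_inflation_factor(X, i) for i in range(X.shape[1])]
# All VIFs below 10 indicates no serious multicollinearity, as reported.
```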
5.1 Regression Coefficient Estimation
5.1.1 Ordinary Least Squares
Table x shows the statistical measures for models whose coefficients were estimated by ordinary least squares from the previous one to four months of data relative to the test month.
As the table shows, prediction accuracy improves as more previous data are used to build the model. However, when the test month is February, the model built from a two-month data range performed best, with a root mean square error of 429.4 and a Nash-Sutcliffe efficiency of 0.924. This is judged to be because February production varies widely, from a minimum of 395.9 kg to a maximum of 6,452.5 kg, compared with the other months, whose production varies by only about 2,000 to 3,000 kg.
5.1.2 Stepwise Selection
Models were built in the same way as for ordinary least squares, and regression coefficients were estimated by adding and removing significant independent variables through stepwise selection, judged by their contribution to predicting the dependent variable.
Table x shows the statistical measures for models whose coefficients were estimated by stepwise selection.
As the table shows, even with independent variables screened through stepwise selection, prediction accuracy remained proportional to the amount of previous data. Model explanatory power, measured by the adjusted coefficient of determination, increased by about 0.01 under stepwise selection, but depending on the test month, prediction accuracy was sometimes lower than for the models built by ordinary least squares.
This group page was created specifically for the Research Methods course of the Master's Program in Groundwater Engineering (PS S2 TAT). Its members are all students of PS S2 TAT.
The total energy associated with a system composed of a star and a planet is the sum of its kinetic energy plus the potential energy of their mutual attraction, i.e.,
For this experiment, the echo client sends out a packet of n bytes fifty times to the echo server. The echo server reads the length of the message first, then reads as many bytes from the input stream as specified by the message length. Afterwards, it writes the received message back to the output stream. During this process, the echo client begins its benchmark with a time stamp before writing to the output stream and ends with a time stamp after reading from the input stream. When the echo client is finished, it returns the average round-trip time over the fifty echo requests. Although only message sizes of 10, 1,000, and 10,000 bytes were requested, sizes of 100, 100,000, and 1,000,000 bytes were also tested for completeness.
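A minimal sketch of the client's timing loop as described; the host, port, and exact wire framing are assumptions (the text specifies a length-prefixed message and fifty repetitions, but not the implementation language or protocol details).

```python
# Sketch of the echo benchmark: send a length-prefixed payload fifty times
# and average the round-trip time. Host/port and 4-byte framing are assumed.
import socket
import struct
import time

def benchmark(host, port, n_bytes, trials=50):
    payload = b"x" * n_bytes
    total = 0.0
    with socket.create_connection((host, port)) as s:
        for _ in range(trials):
            t0 = time.perf_counter()                 # stamp before writing
            s.sendall(struct.pack("!I", n_bytes) + payload)
            received = b""
            while len(received) < n_bytes:           # read the full echo back
                chunk = s.recv(65536)
                if not chunk:
                    raise ConnectionError("server closed early")
                received += chunk
            total += time.perf_counter() - t0        # stamp after reading
    return total / trials                            # average round-trip time

# e.g. benchmark("localhost", 7777, 10_000)
```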
Most important exogenous-growth model built on micro-foundations
Basis of modern Dynamic (Stochastic) General Equilibrium models (DSGE)
Permits the occurrence of business cycles
Because of micro foundations, we can do welfare analysis
Model ingredients: households, firms, governments
Households:
Large number of identical households
Grows at rate n
Owner of labor returns
Owner of capital returns
Maximize infinitely-lived utility function
Firms:
At every moment in time they use capital and labor to make output
Constant returns to scale-- factors of production are paid their marginal product
Economy is competitive so zero profits
Real interest rate is equal to the marginal product of capital (K)
Wage per unit of effective labor is equal to the marginal product of labor (AL)
Solving the model:
Need a budget constraint before the maximization problem: lifetime consumption cannot exceed the sum of initial wealth plus the present discounted value of lifetime labor income. No-Ponzi-game condition.
Constant relative risk aversion: the coefficient \(\theta\) is independent of the level of consumption
\(e^{- \rho t}\): assumes present consumption is more desirable, so future utilities are discounted. The rate at which future utility is discounted is called the discount rate, i.e., it brings future utility into today's terms as a present value
\(e^{(1 - \theta)gt}\): assuming exogenous growth rate of technology (A) with g
\(e^{nt}\): assuming exogenous growth rate of population (L) with n
Objective: maximize the present discounted value of the utility function, subject to the budget constraint.
Should look familiar: it is just like the Solow model, but with savings endogenized as income minus consumption (minus the growth of A and L), and with no depreciation. At the end of the day, we've just brought in a second differential equation, for consumption, since consumption determines savings.
\(\therefore\) We have a system of differential equations that cannot be solved analytically. Thus we must analyze it graphically, with a phase diagram.
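Since the system must be studied qualitatively, a short numerical sketch can stand in for the graphs; the Cobb-Douglas technology and all parameter values below are illustrative assumptions, not part of the lecture.

```python
# The Ramsey system in (k, c) per unit of effective labor:
#   kdot = f(k) - c - (n + g)*k
#   cdot = c * (f'(k) - rho - theta*g) / theta
# with an assumed Cobb-Douglas f(k) = k**alpha.
import numpy as np

alpha, rho, theta, n, g = 0.33, 0.04, 2.0, 0.01, 0.02

f = lambda k: k ** alpha
fp = lambda k: alpha * k ** (alpha - 1)

# Steady state: the cdot = 0 locus gives f'(k*) = rho + theta*g,
# then c* follows from kdot = 0.
k_star = (alpha / (rho + theta * g)) ** (1 / (1 - alpha))
c_star = f(k_star) - (n + g) * k_star
print(k_star, c_star)

def rhs(state):
    k, c = state
    kdot = f(k) - c - (n + g) * k
    cdot = c * (fp(k) - rho - theta * g) / theta
    return np.array([kdot, cdot])

# Euler steps from a point off the saddle path drift away from (k*, c*),
# illustrating why only the saddle path converges.
state = np.array([0.5 * k_star, 0.9 * c_star])
for _ in range(100):
    nxt = state + 0.05 * rhs(state)
    if nxt[0] <= 0:          # k driven to zero: the path is infeasible
        break
    state = nxt
print(state)
```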
The process of tuning a metaheuristic can be responsible for its success or failure on the problem it is intended to solve, which is why great effort goes into techniques for the fine adjustment of these algorithms. However, the absence of a formal definition of tuning, as well as the variety of implementations and types of metaheuristics, leaves tuning dependent on a specialist or on trial and error, which is inefficient and makes results difficult to validate. This article presents a tuning method named Cross-Validated Racing (CVR) that is independent of the problem and of the instance set, making tuning results more reliable and more general. To validate CVR, we developed BRKeCS, a hybrid algorithm that combines the advantages of the BRKGA (biased random-key genetic algorithm) with ECS (Evolutionary Clustering Search), and applied it to permutational flow shop scheduling problems. Continue...
a) The break-even investment line, \((n+g+ \delta)k\), will decrease in slope as \(\delta\) falls. The actual investment line, \(sf(k)\), is unaffected.
b) The break-even line, \((n + g+ \delta)k\), will increase in slope as \(g\) increases. The actual investment line, \(sf(k)\), is unaffected.
c) The break-even investment line will be unaffected. However, the impact on the actual investment line can be determined by assessing \(\frac{\partial sk^\alpha}{\partial \alpha} = sk^\alpha \ln k\). Since \(0< \alpha < 1\), if \(\ln k > 0\), then the new actual investment curve will lie above the old one. If \(\ln k < 0\), then the new curve will lie below the old.
d) The break-even investment line is unaffected. However, \(f(k)\) will increase by a factor of the increased output per unit of effective labor, shifting the actual investment curve up. Mathematically, if the production function shifts from \(C_{old}f(k)\) to \(C_{new}f(k)\) with \(C_{new} = C_{old} + 1\), then \(sC_{new}f(k) > sC_{old}f(k)\).
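A quick numerical check of the sign argument in (c), with illustrative values of s and k:

```python
# d(s*k**alpha)/d(alpha) = s * k**alpha * ln(k):
# negative for k < 1, zero at k = 1, positive for k > 1.
import numpy as np

s, alpha = 0.2, 1/3
for k in [0.5, 1.0, 2.0]:
    print(k, s * k ** alpha * np.log(k))
```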
1.4
a) Since the increase in workers is greater than the increase in technological progress (\(n > g\)), then output per unit of effective labor will fall at the time of the jump. This will reduce the amount of capital per unit of effective labor, causing \(k^*\) to fall to \(k_{new}\). Likewise, the fall in capital causes a fall in output per unit of effective labor: \(y^* \rightarrow y_{new}\).
b) After the initial change, the lower \(k_{new}\) means that actual investment per unit of effective labor will be higher than the break-even line. Thus, the economy is saving/investing more than enough to offset depreciation and technological progress, meaning \(k_{new}\) will slowly grow back toward \(k^*\). Then, output per unit of effective labor will rise again toward the balanced growth path (\(y_{new} \rightarrow y^*\)).
c) At a balanced growth path, the level of output per unit of effective labor will be as high as it needs to offset the technological progress and depreciation, so \(k\) returned to its original level. This is because although the new workers joined the workforce, capital has been accumulated toward the steady state again so output per unit of effective labor is the same.
1.5
a) Cobb-Douglas: \(f(k) = k^ \alpha\)
Substituting this into the equation describing the evolution of the capital stock per unit of effective labor: \(\dot k = sk^ \alpha - (n + g+ \delta)k\)
On balanced growth path, \(\dot k = 0\) and \(sk^{*\alpha} = (n + g+ \delta)k^*\)
Solving this first for \(k^*\): \(k^* = \left[\frac{s}{n+g+\delta}\right]^{\frac{1}{1-\alpha}}\)
Solving this for \(y^* = k^{*\alpha}\): \(y^* = \left[\frac{s}{n+g+\delta}\right]^{\frac{\alpha}{1-\alpha}}\)
Consumption per unit of effective labor is \(c^* = (1 - s)y^*\)
Thus, with Cobb-Douglas, the savings rate required to reach the golden rule equals the elasticity of output with respect to capital
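The closed forms above are straightforward to evaluate; the parameter values in this sketch are illustrative, and the last lines check the golden-rule result \(s = \alpha\) numerically.

```python
# Balanced-growth-path values for the Cobb-Douglas Solow model.
from scipy.optimize import minimize_scalar

s, n, g, delta, alpha = 0.2, 0.01, 0.02, 0.03, 1/3

k_star = (s / (n + g + delta)) ** (1 / (1 - alpha))
y_star = k_star ** alpha
c_star = (1 - s) * y_star
print(k_star, y_star, c_star)

# Golden rule: the s maximizing c* = (1 - s)*(s/(n+g+delta))**(alpha/(1-alpha)).
neg_c = lambda s_: -(1 - s_) * (s_ / (n + g + delta)) ** (alpha / (1 - alpha))
print(minimize_scalar(neg_c, bounds=(0.01, 0.99), method="bounded").x)  # ~1/3 = alpha
```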
1.6
a) Capital per worker, output per worker, and consumption per worker would increase as a result of the fall in rate of population growth.
b) The fall in population growth would ultimately cause the path of output to grow at a permanently lower rate since Y will gradually shift down until K, AL, and Y are all growing at the new lower n.
Using the evolution of capital stock per unit of effective labor, \(\dot k = sf(k) - (n + g + \delta)k\), and the fact that on a balanced growth path, \(\dot k = 0; k = k*; sf(k*) = (n + g+ \delta)k*\)
Taking the derivative with respect to n gives:
\(\frac{\partial k*}{\partial n} = \frac{k*}{sf'(k*) - ( n + g + \delta)}\)
By plugging in the numbers given: \(\alpha _K = \frac{1}{3}; g = .02; \delta = .03\) we calculate the effect on y* of a fall in n from 2% to 1%, using n = 0.015:
Thus, the 50% drop in the population growth rate leads to about a 6% increase in the level of output per unit of effective labor (\(-0.5 \cdot -0.12 = 0.06\)).
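The arithmetic behind the 6% figure, spelled out with the problem's values:

```python
# Elasticity of y* with respect to n, evaluated at the midpoint n = 0.015:
#   elasticity = -(alpha/(1-alpha)) * n / (n + g + delta)
alpha, g, delta = 1/3, 0.02, 0.03
n_mid = 0.015

elasticity = -(alpha / (1 - alpha)) * n_mid / (n_mid + g + delta)
print(elasticity)          # ~ -0.12

# n falls from 2% to 1%, a 50% drop:
print(-0.5 * elasticity)   # ~ +0.06, i.e. y* about 6% higher
```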
1.8
a) We know that an increase in the fraction of output that is devoted to investment from 0.15 to 0.18 is a 20% increase in the saving rate. The elasticity of output with respect to the saving rate is:
Substituting the given that \(\alpha_k(k*) = 1/3\) we have:
\(= \frac{1/3}{1 - 1/3} = \frac{1}{2}\)
So we know that the elasticity of output with respect to the saving rate is 1/2, meaning that the 20% increase in saving raises output by about 10% in the long run.
b) Consumption rises less than output, since increasing savings means that consumption accounts for a smaller part of output. Since consumption on the balanced growth path is: \(c* = (1-s)y*\)
By taking the derivative with respect to s and multiplying both sides by \(\frac{s}{c*}\), we have the elasticity:
Therefore the elasticity of consumption with respect to the saving rate is approx. 0.3, meaning that consumption will be about 6% above where it would have been.
c) The immediate effect of the rise in investment is that consumption falls simultaneously. Though y* doesn't rise at once, it begins to move toward the new, higher balanced-growth path level. The text on the speed of convergence helps to determine the time it takes for consumption to return to what it would have been without the saving rate increase. Since \(c = (1-s)y\), consumption will grow at the same rate as y on the way to the new balanced growth path. The rate of convergence of k and y is: \(\lambda = (1 - \alpha_k)(n + g+ \delta) \)
We know that \((n + g+ \delta)\) equals 6% per year and \(\alpha _ k = 1/3\), yielding \(\lambda \) = 4%. Thus, k and y move 4% of the remaining distance toward their balanced growth path values each year. Since consumption falls initially by 3.5% and will eventually be 6% higher than the original level, it must be 36.8% up the growth path (3.5/9.5). Thus, we can determine the length of time this will take by using this formula:
\(e^{- \lambda t*} = 0.632\)
Taking the natural log: \(- \lambda t* = ln(0.632)\)
\(t* = 11.5\) years for consumption to return to what it would have been.
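The convergence arithmetic, checked numerically with the problem's values:

```python
# Consumption starts 3.5% below and ends 6% above its old path, a 9.5-point
# gap of which 3.5/9.5 ~ 36.8% is covered at t = 0; the remaining fraction
# decays as exp(-lambda*t) with lambda = (1 - alpha)(n + g + delta).
import numpy as np

alpha, n_g_delta = 1/3, 0.06
lam = (1 - alpha) * n_g_delta      # 0.04 per year

remaining = 1 - 3.5 / 9.5          # ~0.632
t_star = -np.log(remaining) / lam
print(lam, t_star)                 # 0.04, ~11.5 years
```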
1.9
a) First we define the marginal product of labor to be \(w \equiv \frac{\partial F(K, AL)}{\partial L}\)
The partial derivative of output with respect to L yields:
So we know the share of output going to capital is constant. Then, we can take the time derivative of the log of the marginal product of labor to find the growth rate:
So, on a balanced growth path, \(\dot k = 0 ; \frac{\dot w}{w} = g\). This means that marginal product of labor rises at the growth rate of the effectiveness of labor.
d) As we showed, the growth rate of the marginal product of labor is \(g + \frac{-kf''(k)\dot k}{f(k) - kf'(k)}\). Thus, if \(k < k^*\), then \(\frac{\dot w}{w} > g\). As k moves toward k*, the amount of capital per unit of effective labor rises, so labor is more productive and the marginal product of labor grows even faster than g. The growth rate of the marginal product of capital is \(\frac{\dot r}{r} = \frac{f''(k) \dot k}{f'(k)}\). Therefore, as k rises toward k*, this growth rate is negative and the marginal product of capital falls.
1.10
a) Balanced growth occurs when all the variables are growing at constant rates. Taking the time derivative of k gives us:
\(\dot k = \frac{d}{dt}\left(\frac{K}{AL}\right) = \frac{\dot K (AL) - K[\dot A L + A \dot L]}{(AL)^2} = \frac{\dot K}{AL} - k(n + g)\)
Then we can substitute to find: \(\dot k = [f'(k) - (n + g + \delta)]k\). This means that the balanced growth path level of the capital stock per unit of effective labor is implicitly defined by \(f'(k^*) = (n + g + \delta)\). Therefore, all variables of the model grow at constant rates (n + g).
To show that the economy actually converges to this balanced growth path, note that since \(f''(k) < 0\), \(f'(k)\) falls as k rises. Thus, if \(k > k^*\), then \(f'(k) < (n + g + \delta)\) and k falls (and vice versa). Therefore, regardless of the initial value of k, the economy converges to the balanced growth path at k*.
b) The golden-rule level of k is where consumption is maximized per unit of effective labor (\(f'(k^{GR}) = (n + g + \delta)\)). This means that the slope of the production function is equal to the slope of the break even line. In this model, we save capital's contribution to output, and if it exceeds break-even then k rises. We see that it will always settle to the point on the balanced growth path where \(f'(k) = (n + g+ \delta)\).
1.11
We can write \(\dot y\) as a function of y, \(\dot y = \dot y(y)\), with \(\dot y = 0\) at \(y = y^*\) (i.e., at \(k = k^*\)). Using a first-order Taylor-series approximation:
\(\dot y \cong [\frac{\partial \dot y}{\partial y}|_{y=y*}] (y - y*)\)
By using k(t), n and \(\mu\) for the changed variables: \(\dot k(t) = \frac{\dot K(t)}{A(t)^\phi L(t)} - (\phi \mu + n)k(t)\)
Finally, by using the equation \(y(t) = k(t) ^ \alpha\) we have:
\(\dot k (t) = sk(t) ^\alpha - (\phi \mu + n + \delta)k(t)\)
This means that when actual investment per unit of effective labor exceeds break-even investment, k rises toward k*. Since \(y = k^\alpha\) with \(\alpha\) constant, y likewise converges, to \(y^* = k^{*\alpha}\), as the economy converges to k*.
b) Now the production function is \(Y(t) = [A(t) \bar J(t)]^ \alpha L(t) ^{1 - \alpha}\)
Just as in part a, we divide and simplify to obtain: \(\frac{Y(t)}{A(t)^{\frac{\alpha}{1-\alpha}} L(t)} = \left[\frac{\bar J(t)}{A(t)^{\frac{\alpha}{1-\alpha}} L(t)}\right]^\alpha\)
Now with the similar definitions as part a: \(y(t) = \bar j(t) ^ \alpha\)
To analyze the time dynamics of j, we take the time derivative:
Just like the Solow model, we can graph this case where the economy does converge to a balanced growth path since all the variables of the model are growing at constant rates:
c) On a balanced growth path, \(\dot{\bar j}(t) = 0\) and so,
\(\bar j^{1 - \alpha} = \frac{s}{[n+ \delta + \mu(1 + \phi)]}\) and then we have,
Now we know that the economy moves a fraction (the exponent) of the remaining distance toward y* each year.
e) The elasticity of output with respect to s is the same in this model as in the basic Solow model, although the speed of convergence is faster since \(\phi = \frac{\alpha}{1 - \alpha}\)
1.3
a) Growth rate of output per worker: \(\frac{\dot Y(t)}{Y(t)} - \frac{\dot L(t)}{L(t)} = \alpha_K(t)[\frac{\dot K(t)}{K(t)} - \frac{\dot L(t)}{L(t)}] + R(t)\) and \(\alpha_K(t)\) is the elasticity of output with respect to capital at time t and R(t) is the residual. If it is on the balanced growth path, the growth rates of output and capital per worker are equal to g, which is the growth rate of A. So, growth accounting attributes 67% of growth in output per worker to tech. progress and 33% to the growth in capital per worker.
b) The reason that the capital-labor ratio grows at g is because the effectiveness of labor is growing at g. Growth accounting attributes the rise in output to the way that raising output will raise the capital-labor ratio since it raises resources devoted to capital accumulation. Therefore, it does not highlight the direct determinants of this growth.
1.14
a) OLS gives a biased estimate of the slope coefficient of a regression if there is correlation between the explanatory variable and error term. When we substitute the equations given in section 1.7 into each other:
\(\ln[(\frac{Y}{N})_{1979}] - \ln[(\frac{Y}{N})_{1870}] = a + b \ln[(\frac{Y}{N})_{1870}] + [\epsilon - (1 + b)u]\)
If the value of b is -1, the error term is \(\epsilon\). This means that it will not be biased since the explanatory variable will not be correlated with the error term.
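A small Monte Carlo sketch of this bias argument (all numbers illustrative): growth is computed using measured 1870 income, so the measurement error u enters both sides and biases the OLS slope unless b = -1.

```python
# True relation: ln(Y/N)_1979 = a + (1 + b) * true1870 + eps.
# We regress computed growth on measured1870 = true1870 + u.
import numpy as np

rng = np.random.default_rng(1)
a, b, N = 1.0, -0.3, 100_000       # illustrative; the bias vanishes at b = -1

true1870 = rng.normal(0, 1, N)
u = rng.normal(0, 0.5, N)          # measurement error in 1870 income
measured1870 = true1870 + u
eps = rng.normal(0, 0.1, N)

y1979 = a + (1 + b) * true1870 + eps
growth = y1979 - measured1870      # growth computed from mismeasured income

b_hat = np.polyfit(measured1870, growth, 1)[0]
print(b_hat, b)                    # b_hat ~ -0.44 < b: biased toward -1
```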
b) Measurement error in the dependent variable alone is not problematic for the OLS estimate. But if the measurement error is in the base year's income per capita, which appears on both sides, the results will be biased. For example, if 1870 income per capita is understated, growth is overstated.
1.15
When growing at a constant rate on the balanced growth path, \(\frac{\dot K(t)}{K(t)} = s \frac{Y(t)}{K(t)} - \delta\)
If you take a log of both sides of the production function:
Substituting, and using that the growth rate of L equals n and the growth rate of A equals g, gives: \(g_Y(t) = \alpha g_K(t) + (1 - \alpha - \beta - \gamma)(n + g)\)
then, using the fact that the growth rate of output and capital are equal on a balanced growth path means:
- Considering that you are currently employed, what are the reasons for your job search?
One of the most important reasons is the restructuring of the company, where there have been many layoffs, which has generated a state of uncertainty. In the last 2 years there were 3 or 4 rounds of layoffs, the most recent one quite significant.
Another reason is to improve on my current position or role. Although it is not a bad position, my studies include a support component, so I can work as a Support Analyst, but there are other areas I would like to develop more, and I find that many of them are part of this position offered by the institute.
- Why did you respond to this posting? What attracted you most?
From my point of view it brings together many areas in which I already have good experience, such as teaching at UTU, but which I would like to develop further while applying my IT knowledge. I like the idea of being a facilitator, supporting teachers with IT tools, and also having a positive impact on and influence over the students. The teaching part was a genuinely pleasant experience, and I learned many things beyond the technical side.
- If we move forward and schedule an interview, what is your availability this week (please give me more than one option)?
This week my availability is limited because I have an important visit from Dallas that I cannot skip. Even so, I can make any time on Tuesday or Wednesday, since I have the option of speaking with my boss.
This Thursday and Friday I could meet after 5 pm.
We also need to know your approximate salary expectation (net), or the salary band you are currently in.
Mass spectrometry imaging has the unique ability to perform untargeted, spatial analysis of thousands of molecules in a single run. With improvements in instrument acquisition speeds and ever finer spatial resolution, the size of acquired data sets has increased dramatically. This has led to a need for sophisticated software tools that can compress data and rapidly handle analysis workflows to obtain meaningful biological conclusions. Additionally, the adoption of mass spectrometry imaging in clinical settings and pharmaceutical companies has created a great need for robust quantitation strategies, as well as for coupling mass spectrometry imaging with other commonly used imaging modalities to answer new biological questions of interest. Here, we critically review the status of mass spectrometry imaging and discuss unique opportunities and new frontiers for mass spectrometry imaging in biomedicine.
1. Introduction:
Mass spectrometry imaging (MSI) is a powerful tool that allows untargeted investigation of the spatial distribution of molecular species of interest in a variety of sample sources. In a single experiment, it is capable of imaging thousands of molecules, such as metabolites, lipids, peptides, proteins, and glycans, without labeling. The combination of mass spectrometry with the ability to spatially analyze thin sample sections creates a chemical analysis tool useful for biological characterization, essentially a chemical microscope. In general, after proper preparation of the thin sample section, an (x, y) grid is overlaid onto the tissue, with each square corresponding to the spatial resolution dictated by the user. The MS instrument ionizes the molecules and collects a spectrum at each grid square on the section. After collecting all the spectra, computational software allows researchers to select a mass-to-charge (m/z) value from the overall, averaged spectrum for the tissue. The intensity of that m/z at each grid point (i.e., spectrum) is then extracted and combined into a colorimetric image depicting the distribution of that m/z value. To determine the identity of an m/z value, on-section fragmentation can be performed, and the fragments can be used to piece together the structure of the unknown molecule. Alternatively, accurate mass matching against databases can confirm the identity of the molecule within a certain mass error range.
Based on technological advances in the past few years, MSI is becoming a more routine tool in clinical practice and the pharmaceutical industry. Advances include improvements in reproducible sample preparation and instrumentation that allow for high acquisition speeds and finer spatial resolution. Additionally, the ability to provide absolute quantitative information in MSI experiments bolsters its credibility. To help with large computational endeavors, statistical workflows and machine learning algorithms have been implemented to handle the large imaging datasets produced with modern-day instrumentation. MSI can also be combined with other imaging modalities, such as microscopy, Raman spectroscopy, and MRI, to complement the high chemical specificity of MSI with high-resolution structural information, which can be applied to clinical readouts of patient diagnosis and prognosis. Additionally, researchers have been able to expand MSI methodology beyond 2-dimensional (2D) sections. With both hardware and software improvements, 3-dimensional (3D) renderings and even single-cell resolution are emerging as future frontiers for MSI. With all the advances in this field, MSI is still evolving and requires continuous development to match current demand.
Overall, the aim of this review is to provide an informative resource for those in the MSI community who are interested in improving MSI data quality and analysis. In particular, we discuss advances in sample preparation, instrumentation, quantitation, statistics, and multi-modal imaging that have allowed MSI to emerge as a powerful technique in the clinic. Several novel biological applications are also highlighted.
2. Sample Preparation:
2.1 The Basics
As with any methodology, the most crucial step for analytical success is proper sample preparation. This is particularly true for mass spectrometry, as even subtle differences in anything from sample integrity to density can have profound effects on signal intensity, the types of molecules ionized, or their localizations. For example, one of the greatest challenges in MSI is reducing the delocalization of molecules, and this relies solely on proper sample preparation strategies. Researchers have even developed a new statistical score to determine sample preparation quality (independent assessment).
Universally, after any necessary dissection, samples require a step to halt enzyme activity to reduce degradation and delocalization of molecules. Classically, this means flash freezing for MSI, since many other preparations (e.g., formalin fixation (FF)) are not MS compatible for most molecular species, although some lipids are not cross-linked and FF can thus preserve their integrity (inflation fixation). New method developments have made the abundant formalin-fixed paraffin-embedded (FFPE) samples more accessible to MSI (see discussion below). Prior to sectioning, one unique preparation is the decellularization of tissues, allowing improved signal from the extracellular matrix \cite{26505774}. Next, these samples are sectioned thinly (6-20 micron), thaw mounted onto appropriate slides, and placed into a drying system (e.g., a desiccator box). In many cases, tissues are fragile and do not section well without support, so many researchers have adopted embedding tissues prior to sectioning. Embedding media range from optimal cutting temperature (OCT) material to gelatin (precast molds, inflation fixation), but, as always, MS compatibility is a concern. OCT, for example, is popular among histologists but tends to contaminate MS spectra and is thus not recommended. Because samples can flake or wash off the slide, O’Rourke et al. recommend coating the slide in nitrocellulose as a “glue-like” substance to help sections stay on the slides \cite{26212281}. One major assumption made here is that the samples described are 3D tissue samples, so these general steps are not accurate for all samples. Researchers have found ways to image analytes in imprints \cite{25914940}, plant roots \cite{26990111}, and even agar \cite{26959280},\cite{26297185}. Others have gone beyond single tissues to whole-body imaging, which has its own unique challenges \cite{26491885}.
Several different ionization techniques are compatible with MSI, although each requires a unique process to preserve the corresponding sample. Matrix-assisted laser desorption/ionization (MALDI) is the most popular ionization technique for MSI, especially for its ability to image both small and large molecules (e.g., metabolites and proteins) (localization of ginsenosides). Its requirement of a matrix for proper ionization, and its production of mostly singly charged ions, often limit its applicability to larger proteins. This has prompted the development of laserspray ionization and unique matrices (e.g., 2-NPG), although they have not yet found their niche in imaging workflows (in situ characterization). Obviously, no one matrix, application method, or analyte extraction process works for all molecules, so optimization is important and will be discussed later in this review. Other varieties of MALDI MSI exist, including scanning microprobe MALDI (SMALDI) (phospholipid topography), IR-MALDESI \cite{27848143},\cite{26402586}, and surface-assisted laser desorption/ionization (SALDI) \cite{26705612}, although they are not as popular. Other techniques worth noting include desorption electrospray ionization (DESI) and secondary ion mass spectrometry (SIMS), which require minimal sample preparation in comparison to MALDI \cite{26545296},\cite{25799886},\cite{27270864},\cite{26419771},\cite{26859000}. Unfortunately, each of these is more limited in the molecules it ionizes (peptides and metabolites, respectively). In the most general cases, both DESI and SIMS can be performed directly after sectioning, as they depend more on the instrument parameters for proper analyte extraction, although additional developments will be discussed. Even with all the ionization methods available, researchers are still developing new methodology, such as laser electrospray \cite{26931651}. Each ionization method has its own advantages and disadvantages, ranging from the molecules of interest to spatial resolution, the latter to be discussed further on in this review. Finally, after proper preparation and ionization, the instrument itself (e.g., the mass analyzer) is important to consider before settling on sample handling, and confidence in identifying an analyte is just as important as the analyte being available for analysis.
2.2 Improving the Basics
2.2.1 Applying an Internal Standard
While evaluation of different tissues, or of different analytes within a tissue, was previously accepted as-is, appropriate normalization and internal standards are now expected if semi-quantitative comparisons are to be made. These standards can be introduced as early as dosing the animals/cells or as late as right before the ions enter the instrument (direct targeted, quantitative mass spectrometry imaging, detection and mapping). For MALDI, the standards are classically applied prior to matrix application using the same automatic sprayer systems described below \cite{26544763},\cite{28193015},\cite{27263025}. Chumbley et al. performed a comprehensive study to determine the proper placement of the standard (e.g., mixed with matrix, under the tissue section, or sandwiching the section with matrix), and found that depositing the standard first, followed by matrix, was optimal for the drug rifampicin \cite{26814665}. This protocol can also be applied to tissue sections used in DESI experiments (applied prior to analysis), or standards can be added directly to the DESI extraction solvent for inclusion in the sample analysis \cite{26859000}.
2.2.2 Matrix Choice and Application (MALDI only)
For MALDI ionization, a matrix is required for proper ionization of the molecules of interest. As the matrix crystallizes, analytes are extracted and co-crystallized. If analytes are not in this crystal structure, it is unlikely they will be ionized and analyzed by the MS. Thus, the availability of the molecule, the matrix application, and the matrix itself can all affect this process. It should be noted that all of these preparations may be applicable to other ionization methods where appropriate. For some proteins, a fixation wash is necessary to make the molecules available for co-crystallization at all \cite{26505774},\cite{26212281}. Carnoy’s solution is a common wash used for this purpose. Other washes, such as ammonium citrate, have also been utilized to analyze low-molecular-weight species. Besides washing, pre-spraying with solvents can also aid the extraction of peptides. The combination of ammonium citrate washes and pre-spraying with cyclohexane proved effective in extracting clozapine from rat brain sections (pre-extraction). Vapor chambers have also been found to be effective, specifically TFA vapors for SIMS imaging of lipids \cite{25799886}.
Several matrices have found popularity for their “universal analysis,” including 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (CHCA), especially for metabolites and peptides in positive mode. A 1:1 mixture of these matrices is also commonly used \cite{26962105}. Also for positive mode, sinapinic acid has been well vetted for proteins. Negative mode, on the other hand, has been found useful for metabolites, for which 1,5-diaminonaphthalene (DAN) and 9-aminoacridine (9-AA) are the most accepted matrices \cite{28362367}. Based on the literature, little effort has been put into developing or discovering new matrices for MALDI, although the use of water as a “matrix” in MALDESI has been demonstrated recently \cite{26402586}. Nanomaterials have also been utilized as an alternative, although these are considered a different ionization altogether (e.g., SALDI) \cite{26705612}. Matrix has also been used to enhance SIMS signals \cite{26419771}. Finally, since MALDI mainly produces singly charged ions, some researchers have utilized matrices such as 2-NPG to produce multiply charged ions using a commercial MALDI source, although their quick sublimation does not allow for the longer runs that imaging requires \cite{25273590}. In general, most of the focus in sample preparation has been on the matrix application process.
When applying matrix, the ideal method would provide appropriate analyte extraction, small crystal size, and homogeneous application. Unfortunately, no universal method exists. Classically, researchers would spray matrix over the tissue slide using a painter’s airbrush. While this can be reproducible for an individual, person-to-person variability is high, and there is little adjustability. For example, the “wetness” of the application itself determines the degree of analyte extraction. An appropriate balance needs to be found, as a too “wet” application can cause molecular diffusion while a too “dry” method may not effectively extract the molecules. “Wet” vs. “dry” methods also affect the crystal size, with wetter methods tending toward larger crystals. Controlling substrate versus surrounding temperatures has also been proposed to reduce heterogeneity, but this has only been applied to MALDI spots \cite{27126469}. Automated sprayers have allowed reproducible application methods across individuals and labs, so their popularity has grown in the last few years \cite{26922843}. Several application notes from different vendors exist, but researchers should take time to optimize application methods for their specific systems. Interestingly, alternative ionization methods (SIMS) have been used to characterize analyte incorporation into spots, and, although difficult, imaging-based studies of this kind would be interesting \cite{26419771}. Homogeneous application has also been a major focus, and researchers have utilized alternative application methods to improve this facet in the last few years. One example is electrospray deposition, for which units tend to be home-built. This dry application method usually requires an additional “incorporation spray” after the matrix has been applied \cite{28263004}. Some electrospray devices have allowed control of the crystal size, which relates directly to the achievable spatial resolution \cite{26016507}. Other methods have also benefited from the inclusion of an electric field to decrease crystal size and increase spatial resolution \cite{26016507}. Finally, the “driest” method is sublimation, which is popular for its low cost, small crystal size, and high homogeneity. Commercial and partially modified apparatuses are widely published \cite{26212281},\cite{26705612},\cite{28362367}. Moving forward, when researchers want to use several matrices, or a stain, on one tissue section, they tend to wash off the first matrix and reapply a new one, but this produces an expected signal loss and diffusion. As an alternative, using a commercial sprayer, Urbanek et al. developed a multigrid MALDI (mMALDI) methodology, in which different matrices are “printed” into predefined dots on a grid. By targeting these specific matrix dots during the imaging run, a researcher can gather multiple datasets (e.g., metabolites, peptides, and proteins) from a single tissue section without washing \cite{27039200}. Finally, with all the variations in equipment and methodology, real emphasis needs to be placed on sharing automated matrix application methods and on cross-lab communication to allow reproducible results. The use of open-source software and instrumentation is an example of this, although the ease of commercial instrumentation will continually compete with this notion \cite{25795163}.
2.2.3 Chemical Derivatization/On-Tissue Labeling
To those outside the field, mass spectrometry is seen as a “magic” technique, yet several classes of molecules are difficult to ionize and thus to analyze directly by mass spectrometry. The concept of derivatizing molecules is commonly used in antibody-based techniques, and its inclusion in mass spectrometry sample preparation to aid ionization was expected. The Girard T (GirT) reagent has been applied successfully to several steroids, including testosterone and triamcinolone acetonide \cite{27676129},\cite{28193015}. Other steroids (e.g., THC) have also been targeted using 2-fluoro-1-methylpyridinium p-toluenesulfonate as a derivatization agent \cite{27648476}. N-glycans, fatty acids, and neurotransmitters have also all been targeted through unique on-tissue assays \cite{25453841},\cite{27181709},\cite{27145236}. Compared with traditional spraying of the reagent, which usually yields poor (>100 micron) spatial resolution, electrospray deposition has been successfully utilized to derivatize fatty acids at high (20 micron) spatial resolution \cite{27181709}. As the list of molecular species suggests, most targets for derivatization are smaller molecules. This may be due to their poor ionization, or simply because they are the only molecules that have so far been successfully derivatized on-tissue.
2.3 Specific Molecular Considerations
2.3.1 On-Tissue Digestion
Molecular imaging of proteins has been of major interest, but high mass resolution analysis of intact proteins has been out of reach due to the mass range limitations of current mass analyzers (e.g., Orbitraps), especially for MALDI. For extract analysis this has been alleviated by an initial protein digestion, and on-tissue trypsin protocols have naturally been developed for MSI \cite{26544763},\cite{26505774}. Please note that, as with every method developed, the steps should all be optimized for the tissue type \cite{26544763},\cite{27485623}. For example, Heijs et al. showed different myelin basic protein fragments appearing over longer trypsin incubation times \cite{26544763}. Until recently, trypsin digestion has been synonymous with on-tissue digestion experiments. With the recent boom of interest in glycans, PNGase F, which cleaves N-glycans, has found application in in situ digestion, and sequential enzyme application has even allowed the imaging of both glycans and protein fragments in one imaging run \cite{27373711}. Overall, while stains/immunolabelings are incredibly effective, they can be non-specific, and MALDI MSI provides an interesting cross-validation of labeling-based strategies. The trickiest part of in situ digestion is appropriately identifying the protein fragments. In some cases, on-tissue MS/MS is difficult depending on the instrumentation, and a complementary liquid chromatography experiment may need to be performed \cite{26505774} (decellularization, multimodal mass spec). It is worth noting that other ionization techniques (nanoDESI) allow intact protein imaging up to 15 kDa on Orbitrap systems \cite{26509582}.
2.3.2 Formalin-Fixed Paraffin Embedded Samples
While freshly excised tissue is preferred, it is often unobtainable, especially for rare, human-based samples. Given the wide availability of FFPE tissues, which are typically not compatible with MS, researchers have been motivated to develop methods to release the analytes of interest so that these tissues can be imaged \cite{27791282}. As elsewhere, optimization for the tissue type is important, and Oetjen et al. have provided a comprehensive, guided study for doing so (an approach to optimize). Unfortunately, not all molecular species can be extracted from these tissues, although Pietrowska et al. reported that lipids can be analyzed by avoiding paraffin embedding after fixing the tissue with formalin \cite{27001204}. Proteins and peptides are the most common targets, mainly via the in situ digestion described above (tissue fixed with formalin, an approach to optimize sample prep). More recently, researchers have been able to extract metabolites and glycans \cite{27414759},\cite{25804891},\cite{27373711}. With more standardized protocols, the extensive FFPE archives available will be utilized more readily, allowing a flood of new information to guide researchers in future endeavors.
3. Developments in Instrumentation
MS imaging often requires specially developed instrumentation in order to address challenges unique to image acquisition, such as spatial resolution or surface homogeneity. Numerous advancements have been made in recent years to improve the quality and reproducibility of generated images. As the main distinction between imaging and LC-MS is related to the conservation of a spatial dimension, most instrumentation developments have been focused on the ionization source, with several exceptions related to ion accumulation. The two main ionization methods for MSI are laser based and secondary ion based, and most of the progress in recent years has focused on these sources. As such they will be the focus of discussion.
3.1 Laser-based ionization
3.1.1 Spatial Resolution
Arguably the most sought-after improvements in MSI are related to spatial resolution, i.e., the area of an imaged sample that comprises a single mass spectrum in an imaging acquisition. Improving the spatial resolution enables more discrete localization patterns to be observed throughout a tissue, but since it decreases the area of tissue ionized per pixel, there is a tradeoff between spatial resolution and sensitivity. The resolution can be changed by adjusting the optics of the ionization source or otherwise changing the instrument’s geometry to decrease the laser diameter. Numerous groups have recently reported drastic improvements in spatial resolution. One paper reports a lateral spatial resolution of 1.4 micron on an atmospheric pressure MALDI source by adjusting its geometry, allowing for the visualization of subcellular lipid, metabolite, and peptide distributions \cite{27842060}. Another group achieved a spatial resolution of 5 microns on a vacuum pressure MALDI instrument by using a simple modification of the optics. The system was easily interchangeable between various laser spot sizes, allowing more freedom in the tradeoff between sensitivity and resolution based on each individual experiment’s needs \cite{28050871}. These two papers highlight some of the recent advances in spatial resolution.
However, with the rapid developments in spatial resolution, it was found that spatial resolution was being defined differently between groups, instruments, and samples. As this makes it difficult to form a standard of comparison between methods and instruments, developing a universal method for both defining and measuring spatial resolution is crucial to proper data reporting and to comparing images acquired on different instruments, with different sample preparation methods, or by different users. Typically, the limiting factor in spatial resolution is the laser, as the laser width determines the ablation area. Therefore, investigations have looked into characterizing the ablation pattern in imaging experiments, particularly with MALDI-MSI, the most widespread imaging technique. It was found that laser ablation patterns follow a Gaussian distribution, with incomplete ionization around the outside of the pixel. Furthermore, laser ablation can “shear” matrix crystals, scattering debris across the sample. This finding led to the assertion that MSI resolution should be defined by (1) the homogeneity of the matrix crystals once they have been applied and co-crystallized with the analyte and (2) the effective ablation diameter of the laser {O'Rourke, 2017, The Characterization of Laser Ablation Patterns and a New Definition of Resolution in Matrix Assisted Laser Desorption Ionization Imaging Mass Spectrometry (MALDI-IMS)}. The hope is that this new definition will allow more uniform reporting of spatial resolution between research laboratories on different instruments and with different sample preparation methodologies.
Several research groups have developed methods for measuring the actual spatial resolution achievable on an instrument, which can differ from the pixel size reported in the acquisition parameters. A simple and effective way to do this is with a standard slide that can be used to determine the working spatial resolution of an instrument under user-defined instrumental parameters. One group developed such a slide incorporating a pattern of crystal violet produced by lithography in order to measure the beam diameter in MALDI-MSI experiments by visually inspecting the ablation pattern \cite{27299987}. Another slide for measuring spatial resolution was developed using a slightly different technique, in which a sample solution is dragged over the slide’s surface and automatically retained in hydrophilic grooves. The slide can then be imaged on the instrument to determine the lower threshold of the instrument’s spatial resolution \cite{26044268}. These devices can serve as a valuable method for testing spatial resolution when adjusting instrumental parameters or for quality assurance, ensuring that proper resolution is being reported.
3.1.2 Matrix-free laser-based ionization
Though highly beneficial in many regards, MALDI MSI’s requirement for a matrix coating is often a major drawback in imaging experiments. Matrix application can be a limitation because it requires an additional sample preparation step, it suffers from poor homogeneity that can affect spatial resolution, and it produces excessive noise peaks in some ranges of the spectrum due to interference from matrix ions. As a result, ionization sources are being developed that utilize laser ablation techniques without requiring matrix. Laser desorption post-ionization mass spectrometry, though still in its early stages of development, has shown promising potential as a complementary tool for in situ localization and quantification. It has the benefit of not requiring matrix application or sample preparation, though currently its resolution and mass accuracy are 500 micron and 300 ppm, respectively, which is not competitive with commercial instruments \cite{28294229}. With further development, however, it may earn its place as a prominent ionization source. Another method for ionization without matrix application is nanophotonic laser desorption ionization, which ionizes analytes from a highly uniform silicon nanopost array \cite{26929010}. This method has achieved 40 micron spatial resolution for over 80 molecular species, giving it the potential to be competitive with MALDI upon further exploration.
3.1.3 Throughput
Another frequently cited challenge with MSI is the long analysis time typically required, which can range from several hours to several days depending on the tissue area and pixel size. These long analysis times limit the practicality of MSI for routine applications, particularly in clinical settings. As a result, developments have been made to increase throughput without sacrificing image quality. One notable example utilized a solid-state laser with a 5 kHz repetition rate to perform continuous laser raster sampling on a MALDI-TOF/TOF instrument. This method achieved an acquisition rate of up to 50 pixels per second, an 8- to 14-fold improvement over conventional lasers \cite{28239976}. Throughput becomes even more of a challenge when molecules in the same tissue ionize differently, thus requiring different polarities for acquisition. This is particularly the case with lipid analysis, as lipids are a diverse class with high structural variability. Methods have been developed for imaging in both positive and negative polarity while minimizing analysis time, using high-speed MALDI-MSI technology and precise laser control \cite{27041214}. The field is moving toward real-time imaging capabilities for immediate spatial analysis, for example for guidance during surgeries. Fowble and colleagues have applied a laser ablation imaging approach under ambient conditions to obtain the spatial distribution of metabolites with a range of polarities in real time, without the use of any matrix or sample pretreatment \cite{28234459}. Another method couples a picosecond IR laser to an ESI source to provide ambient MS imaging without causing thermal damage to tissue. This allows molecules to remain in their native state, giving better insight into the tissue's condition \cite{26561279}. These developments demonstrate great potential for moving MSI technology from laboratories to clinical settings for improved patient treatment.
3.2 SIMS
3.2.1 Resolution and Mass Accuracy
The other most common method of ionization is SIMS, which has seen notable improvements in instrumentation. In SIMS imaging, spatial resolution is often quite good, but at the expense of sensitivity. This is largely a consequence of the ion beam, either due to low ionization probability or beam focusing difficulties. An argon gas cluster ion beam is typically used for TOF-SIMS but, despite its many benefits, it suffers from poor sensitivity, often forcing a tradeoff between spatial resolution and mass resolution. Delayed extraction, a method widely used in MALDI, is becoming more prominent in TOF-SIMS imaging and has been shown to maintain both high mass resolution and high spatial resolution \cite{26395603}. Delayed extraction methods often make mass calibration difficult, resulting in poor mass accuracy, but by implementing external mass calibration, the mass accuracy can also be preserved \cite{26861497}. Other groups have explored alternative primary ion sources, such as a CO2 cluster ion beam, which possesses many similarities to argon but improved the imaging resolution by more than a factor of 2 due to the increased stability of the beam \cite{27324648}.
3.2.2 Parallel Imaging MS/MS
With the inferior mass resolution of SIMS compared to other ionization methods, the mass accuracy is usually not high enough to make confident identifications of detected molecules. Therefore, it is usually necessary to acquire MS2 spectra in order to make identifications. Collecting MS2 spectra is difficult in imaging experiments, however, because performing sequential MS2 scans after a full-MS scan causes misalignment between spectra and spatial information. To address this, parallel imaging MS/MS has been developed, in which MS2 spectra are collected simultaneously with MS1 spectra using two mass analyzers. This acquisition method differs from traditional MS/MS acquisitions, in which all ions other than the precursor ions are discarded. As a result, MS1 and MS2 images are in perfect alignment with each other, allowing for more precise mapping of molecular distributions \cite{27181574}{Fisher, 2016, Parallel imaging MS/MS TOF-SIMS instrument}. With fully optimized parallel imaging, identification confidence can be drastically improved without sacrificing the integrity of localization information.
3.2.3 Ambient/Low-vacuum TOF-SIMS
As MSI is very commonly used for the analysis of biological tissue, it is highly desirable for analyses to be conducted in near-native environments, such as in the presence of water, in order to get an accurate understanding of the chemical environment. Low-vacuum and ambient MALDI imaging have already been well explored, but progress has recently been made with SIMS, denoted Wet-SIMS {Seki, 2016, Ambient analysis of liquid materials with Wet-SIMS}. Currently, the technique is able to acquire images at 80 Pa in imaging experiments {Suzuki, 2016, Development of Low-vacuum SIMS instruments with large cluster Ion beam}. With further development, this technique could be used to analyze biomolecules in their native environment, allowing for analysis under biologically relevant experimental conditions.
3.3 Separation
A significant limitation of MS imaging compared to LC-MS analysis is the lack of separation capabilities, as retaining spatial information typically requires ablating all ions present in a pixel of sample at the same time for a single scan. This often leads to problems such as ion suppression, but techniques that allow post-ionization separation are being developed to overcome this challenge. To separate analytes from noise or undesired compounds, a simple sample cleanup step was incorporated into MALDI MSI by first performing laser ablation with vacuum capture followed by C18 elution onto the MALDI target plate. The method demonstrated improved sample signal and decreased background interference compared to direct MALDI MSI, resulting in higher-quality MS/MS data, cleaner spectra, and more confident identification power \cite{26374229}. For separation of analytes, ion mobility has been a popular choice, as it has been seamlessly integrated into MALDI MSI workflows. It has also recently been demonstrated to be highly effective for ambient ionization techniques, such as LESA and DESI \cite{27228471} \cite{27782388}. The results showed an increase in detected molecules and the ability to select specific classes to image. An alternative, pseudo-separation method has also been employed, in which subsequent MS scans covered differing m/z windows in order to detect low-intensity ions characteristic of specific ranges, providing the effect of gas-phase fractionation. By implementing a spiral plate motion during imaging, the integrity of the spatial information was preserved \cite{26438126}.
3.4 Depth profiling
Another challenge specific to imaging is achieving uniform ionization over the surface of the tissue, which is difficult to accomplish if the tissue is not perfectly flat. While extra care in sample preparation can alleviate this to an extent in some sample types, slight variations in the height of the tissue are often unavoidable. To remedy this, modifications to instruments have been made that allow for height correction. For example, a novel LAESI source was recently developed that incorporates a confocal distance sensor that both moves the sample to a constant height and records the height information to generate a topography map {Bartels, 2017, Mapping metabolites from rough terrain: laser ablation electrospray ionization on non-flat samples}. Another method combined shear force microscopy with a nano-DESI source to measure and adjust the voltage magnitude, enabling a stable feedback signal over surfaces with complex topographies {Nguyen, 2017, Constant-Distance Mode Nanospray Desorption Electrospray Ionization Mass Spectrometry Imaging of Biological Samples with Complex Topography}. Ensuring uniform sampling over the surface of a tissue not only preserves spatial integrity throughout the plane of the sample but can also enable three-dimensional imaging. With 3D imaging, it is imperative that the depth profile of the sample be preserved to ensure an accurate record of the tissue profile. Several significant advances have been made in this respect in the area of elemental imaging, such as the development of a femtosecond laser ionization source for multielemental imaging with a 7 micron depth resolution \cite{27976851}. Submicron depth resolution, down to 20 nm, has been demonstrated using extreme ultraviolet laser light, allowing for 3D imaging of bacterial colonies \cite{25903827}. It is expected that these capabilities will continue to be developed and applied to 3D imaging of more complex systems.
4. Quantitation
4.1 Comparison to LC-ESI-MS/MS: The Past
With the push toward multi-modal imaging, it is clear that obtaining several pieces of information from a single tissue is imperative. While MSI is mainly qualitative, with the appropriate conditions, processing, and software, quantitative information can be extracted, although its reliability is still debated. Tissue inhomogeneity, ion suppression, and sample topography are all considered significant challenges in this regard (aspects of quantitation). Before the development of quantitative MSI, the analytes of interest were separately extracted from another tissue section and run on a liquid chromatography (LC)-electrospray ionization (ESI)-based instrument; this is still done regularly in MSI to aid in the identification of unknown, interesting m/z values \cite{27181709}. Once the absolute quantity of the analyte is calculated, these values can then be applied to the tissue of interest. This can also be a starting point of studies, allowing for more targeted imaging studies \cite{25542581}. This methodology is still present in the current literature, although it is more commonly utilized for confirmation of MSI results, much as the Western blot is used to confirm other LC-MS quantitative results \cite{26814665}. Quantitative MSI is now expected, as many application-based MSI publications focus on the comparison between two or more sample types. With proper sample preparation, comparisons can be made with the appropriate considerations.
4.2 Relative
4.2.1 Direct Comparison (with or without Normalization)
As alluded to above, direct comparisons between different tissue sections are commonly done. While these "relative" comparison methods lean toward being "semi-quantitative," several techniques and data processing strategies have perpetuated their use. For example, matrix effects and other interfering molecules tend to degrade quantitative accuracy, although some researchers have shown that the correlation between MALDI-MSI and LC-MS/MS can be quantitative for fatty acids and proteins (On-tissue derv, a proof of concept). While these assessments of different molecules in a single tissue are interesting, differences in ion suppression and ionization efficiency between molecules should always be questioned, although the addition of an internal standard can aid in the normalization of the signal (spatial localization and quantitation). The same molecules can also be compared across different tissues, where normalization again enables more confident comparisons (spatial localization and quantitation). The inclusion of a normalization procedure in pre- and post-processing is now an expectation. This strategy is applicable to several other molecular species, including neurotransmitters, nucleotides, lipids, and tryptic peptides (direct targeted, MSI reveals, brain region specific). Almost all software available for MS imaging provides the ability to normalize. For example, the SciLS software tool allowed for normalization to the total ion current (TIC) before further statistical analysis; after differentiation, several metabolites were found to differ between the cortex, outer medulla, and inner medulla of the rat kidney in control versus furosemide-treated animals (mass spectrometry imaging of metabolites). Care should be taken when comparing different regions of a tissue, as their matrices can vary slightly (aspects of quantitation). There are, however, publications that still make comparisons without normalization (\cite{26475201}, pioneering ambient, imaging of proteins). Finally, software is obviously an important component of any imaging-based quantitative strategy, and Renslow et al. have further developed tools to help nanoSIMS transition from qualitative to quantitative analysis of element incorporation into biofilms (quantifying elemental incorporation).
4.2.2 On-tissue labeling – Using Reporter Ions
For ESI-based quantitation, two techniques are employed. Label-free quantitation directly compares samples in different runs, which is analogous to the "direct comparison" MSI described in the previous section. While label-free quantitation is commonly employed, instrument variability, instrument limitations, and other factors can lead to inconsistent and incorrect comparisons. In contrast, the incorporation of stable isotopes has allowed for same-spectrum relative quantitation, although its application to MSI is extremely limited. One example in the literature, entitled stable-isotope-label based mass spectrometric imaging (SILMSI), utilizes light and heavy chromogens to differentiate between different cancer biomarkers of interest (SILMSI). After labeling with a primary and secondary antibody, the addition of the chromogen produces an azo dye that, when ionized by the laser, fragments into distinct, duplex reporter ions. The ratio of these reporter ions can then be used to calculate relative abundance compared to another molecule, in this case the estrogen receptor and progesterone receptor (SILMSI). While reporter ions are classically seen in MS/MS spectra via isobaric labeling, this concept is not applied in MSI experiments, not only due to the poor fragmentation of ions but likely also due to the incompatibility of the methods with relative quantitation. In comparison, MS1-based labeling methods can easily be transitioned to on-tissue MSI applications, although the process of derivatizing molecules on-tissue has primarily been used for increasing the ionization of different molecules (direct targeted, on-tissue derivatization, linkage-specific).
4.3 Absolute
4.3.1 Internal Standard
Whereas relative comparisons are commonplace, absolute quantitation is relatively underdeveloped. While obtaining the true concentration of a molecule is much more difficult, it is also more desirable, since it allows for true comparisons between different molecular species without worries about varying ionization efficiencies. As with ESI-based measurements, the easiest method is to incorporate a deuterated internal standard into the sample. As explained previously, internal standards are now being used extensively to normalize MSI data sets, and the inclusion of a very specific standard (e.g., a deuterated version of an analyte of interest) facilitates absolute quantitation of that analyte. This has been done primarily for DESI samples, with the standards incorporated into the solvent stream (quantitative mass spectrometry imaging of small).
4.3.2 Calibration Curve
In general, the creation of a calibration curve is the most confident way to obtain the absolute quantity of an analyte. This has been done with ESI in both separate and the same runs (iDiLeu). One might initially expect an external, separately spotted calibration curve to work for MALDI, but the absence of sample matrix and matrix heterogeneity lead to inaccurate concentrations. Thus, researchers have adopted an on-tissue spotting technique that takes both of these considerations into account. The standard of interest (isotopic or non-isotopic) is spotted/applied on a separate, "control" section (absolute quantitation, direct targeted, direct imaging). This section is usually a serial section of the one being analyzed, as having the same matrix is important for accurate quantitation (aspects of quantitation). For example, many researchers choose liver tissue for initial optimization or studies, as it is considered extremely homogeneous (aspects of quantitation, absolute quantitation). Interestingly, in the case of elemental analysis, before spotting on the sample, the sections are washed to remove excess elements (e.g., sodium) (direct imaging). To increase the homogeneity of the areas where the standards are placed, researchers have developed methods in which the standards are spiked into tissue homogenates themselves. These samples are then placed into a mold, frozen, sectioned, and placed near the imaged section; quantitation accuracy is similar, although it was noted that the dried-droplet spotting method referenced above is much faster and easier (aspects of quantitation). All of these methods require sophisticated computational tools, and several software packages exist for processing region-of-interest quantitation (MSiReader, msIQuant). msIQuant, for example, has been used to absolutely quantify drugs and neurotransmitters (msIQuant).
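As a rough illustration of the calibration-curve idea, the sketch below fits a linear curve to hypothetical intensities measured from standards spotted on a control section and inverts it to estimate the amount behind an intensity observed in the imaged tissue; all values and names are invented for illustration, not taken from any cited study.

```python
import numpy as np

# Hypothetical on-tissue calibration data: standard amounts (pmol) spotted
# on a control serial section, and the measured (normalized) intensities.
conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0])
intensity = np.array([120.0, 260.0, 610.0, 1240.0, 2450.0])

# Fit a linear calibration curve: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

def estimate_amount(pixel_intensity):
    """Invert the calibration curve to estimate the analyte amount behind
    an intensity measured in a pixel or region of interest."""
    return (pixel_intensity - intercept) / slope

print(f"slope={slope:.1f}, intercept={intercept:.1f}")
print(f"estimated amount: {estimate_amount(800.0):.2f} pmol")
```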
5. Data Analysis
MSI data is difficult to process for a number of reasons, including the large size of the data files and the high degree of dimensionality, as acquisitions retain spatial information alongside the spectral information. This is becoming more of a problem as increases in spatial resolution cause exponential growth in data file sizes. As such, key software developments have been made to address these challenges and ensure that effective analyses can be performed without losing valuable information in the process.
5.1 Visualization
The most important information obtained from an imaging experiment is a visualization of the distribution of various molecules throughout the tissue. As each pixel of an imaging experiment contains an entire mass spectrum, special software is required to handle this specific need in the field. While there have been numerous advancements in this respect, the rapid influx of new tools has led to a lack of uniformity. Typically, the software could not be applied to large data sets, expensive commercial software was required, or the end user needed some degree of programming knowledge to fit his or her data to the software input. However, recent efforts have been made to design open-source visualization tools that are user-friendly and applicable to multiple instrument platforms, particularly in the area of LA-ICP-MS, which is not as routinely implemented as MALDI-MSI or TOF-SIMS {Sforna, 2017, MapIT!: a simple and user-friendly MATLAB script to elaborate elemental distribution images from LA-ICP-MS data}\cite{27917244}{Uerlings, 2016, Reconstruction of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) spatial distribution images in Microsoft Excel 2007}. MSiReader is a key player in open-source visualization, providing both a graphical user interface and MATLAB open-source code for users \cite{23536269}. Additionally, even open-source microscopy imaging software like ImageJ has plugin scripts capable of handling MSI data sets \cite{22347386}. These new open-source tools show promise for making the processing of imaging data more widely accessible and customizable for the broader mass spectrometry imaging community.
New methods have also been explored for expanding the capabilities of visualization tools. For example, 3D MALDI imaging has been limited by the inability to reconstruct 3D images, but Patterson and colleagues designed an open-source method for 3D reconstruction using multivariate segmentation \cite{26958804}. Others have expanded visualization in a different direction: instead of using imaging to track a single molecule, they developed a tool to view the localization of biological indices (e.g., energy charge to indicate the energy status of the cell), mapping the relationship between several specified molecules \cite{27542771}.
Visualization of MSI data must also ensure that the image shown is an accurate representation of the molecular distribution. It has been found that cropping images to eliminate background can cause the emergence of distribution patterns not observed in the entire image. As a result, data can become skewed if the analyzed area is too small and does not contain sufficient background area for reference \cite{27730748}. With MS imaging making an increasing presence in biomedical applications as a diagnostic tool, appropriate representation and statistical analysis of visual data are essential \cite{Bro_2014}.
5.2 Preprocessing
Prior to data processing, several steps can be used to ensure accurate and efficient data analysis. These steps include normalization, baseline correction, spectral recalibration, smoothing, and data compression. Normalization is considered a requirement for data analysis, while baseline correction, spectral recalibration, smoothing, and data compression (unsupervised and supervised) are considered optional, though they may be necessary depending on the chosen statistical analysis and the mass spectrometry instrumentation used to collect the data \cite{17541451}. The choice of preprocessing steps can also depend on the analytical and biological aims of an individual project. Overall, preprocessing helps reduce experimental variance within the data set and helps draw meaningful conclusions from subsequent statistical analysis.
5.2.1 Normalization
Normalization is used to remove systematic artifacts that can affect the mass spectra \cite{21479971}. Sample preparation, matrix application, ion suppression, and differential ionization efficiencies in complex samples can all influence the peak intensities of mass spectra, and some of these random effects in data acquisition can be minimized by proper normalization. Omitting normalization can lead to misleading artifacts and ultimately depict inaccurate ion distributions, statistical analyses, and conclusions about biological significance. There are a few different normalization methods for mass spectrometry imaging data sets, chosen based on the purpose of the analysis. Normalization to the total ion current (TIC) is the most commonly implemented method. It ensures that all spectra have the same integrated area and is based on the assumption that there is a comparable number of signals in each spectrum \cite{17541451} \cite{22148759}. However, in an imaging experiment, it cannot always be assumed that this condition is met. TIC normalization can improve the ability to compare expression levels across samples of a similar type, but it is not applicable when comparing very different tissue types. In addition, TIC-normalized data from MALDI imaging experiments can be further normalized to matrix-related peaks to correct for uneven matrix coating. This may be necessary depending on how the matrix is applied: for example, manually sprayed airbrush matrix application cannot produce as homogeneous a crystal coating across the whole tissue as matrix applied with an automated sprayer or automated microspotter \cite{25331774}. For samples with different tissue types, such as whole-body imaging, an externally applied, labeled calibration molecule similar to the compound of interest is ideally used as a reference molecule and is applied during matrix application. For this normalization method, each spectrum is normalized to the intensity of the reference molecule. The choice of reference molecule can be complicated by deposition methods and by the choice of compound, which may require optimization. Normalization to an internal standard reduces the impact of ion suppression that arises from tissue inhomogeneity and reduces pixel-to-pixel variability. TIC normalization is not recommended for whole-body imaging or for differing sample compositions, where internal standard normalization is considered the best option \cite{25318460}. Other options include normalization to an endogenous molecule that is expected to be consistently expressed throughout the whole tissue, such as a phospholipid head group. Additionally, some researchers have calculated tissue extinction coefficients or relative response factors to determine the relative amount of a compound in whole-body imaging or across different tissue types. The tissue extinction coefficient accounts for ion suppression related to the compound of interest in the tissue of interest and is then compared to LC-MS/MS data \cite{22842155}. The advantage of this method is that no expensive labeled standards of the compounds of interest are needed, although the accuracy of tissue extinction coefficients is still being investigated.
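To make the two main strategies concrete, below is a minimal sketch (Python with NumPy; array shapes and indices are invented for illustration) of TIC and internal-standard normalization applied to a pixels-by-m/z intensity matrix.

```python
import numpy as np

def tic_normalize(spectra):
    """TIC normalization: scale each pixel's spectrum so all spectra share
    the same integrated area. Assumes a comparable number of signals per
    spectrum (spectra: n_pixels x n_mz_bins intensity matrix)."""
    tic = spectra.sum(axis=1, keepdims=True)  # total ion current per pixel
    tic[tic == 0] = 1.0                       # guard against empty pixels
    return spectra / tic

def internal_standard_normalize(spectra, is_column):
    """Normalize each spectrum to the intensity of a reference peak at
    column index `is_column` (e.g., an applied labeled standard), the
    preferred option for whole-body imaging or differing tissue types."""
    ref = spectra[:, [is_column]].astype(float)  # fancy indexing copies
    ref[ref == 0] = 1.0
    return spectra / ref
```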
5.2.2 Baseline Correction
Imaging experiments can yield noisy acquisitions with large variations in spectral intensity, even within the same sample. Noise in the baseline can affect peak detection algorithms and sample-to-sample comparisons, so a baseline correction algorithm is typically implemented in the preprocessing step prior to statistical analysis. Baseline noise arises because there is more chemical noise at low m/z values, leading to a higher baseline than at higher m/z values \cite{17541451}. The effect of chemical noise can be suppressed by estimating the baseline with a polynomial function or moving average and subtracting it; a new baseline is calculated and signal levels are adjusted. New sliding-window baseline algorithms are being developed with automatic adjustments based on the mass range \cite{27980460}. Even after correction, a residual baseline might still be observable in the low m/z range. The baseline algorithm that best reduces the baseline of an individual dataset may depend on the complexity and acquisition of the data.
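As a rough illustration of the estimate-then-subtract approach, the sketch below uses one of several possible estimators (a sliding-window minimum smoothed by a moving average); the window size is an invented tuning parameter, not a value from the cited work.

```python
import numpy as np

def baseline_correct(intensities, window=50):
    """Estimate and subtract the baseline of a single spectrum.

    A sliding-window minimum approximates the baseline underneath the
    peaks; a moving average then smooths that piecewise estimate.
    Wider windows give smoother baselines but can clip broad peaks."""
    n = len(intensities)
    local_min = np.array([intensities[max(0, i - window):i + window].min()
                          for i in range(n)])
    kernel = np.ones(2 * window + 1) / (2 * window + 1)
    baseline = np.convolve(local_min, kernel, mode="same")
    corrected = np.clip(intensities - baseline, 0.0, None)
    return corrected, baseline
```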
5.2.3 Spectral recalibration
Spectral recalibration, also called spectral realignment, is applied after data acquisition as a method to improve the mass accuracy of the data by calibrating to one or several internal standards in order to realign the spectra. The internal standard must be present in at least 90% of the spectra to be used for recalibration. Some use a matrix peak for MALDI imaging, consistently expressed m/z values in the tissue, or an applied internal standard to perform the recalibration, as in Heijs et al. \cite{26544763}. Multiple internal standard peaks should be selected if a large mass range is acquired. The spectra are then realigned using a quadratic calibration algorithm based on the median value of the selected calibration peaks. This typically results in a 5-10 fold reduction of the range of centroid values following alignment \cite{17541451}. Spectral recalibration is especially important for instruments with low mass accuracy, such as a linear TOF instrument, where one might expect 100-200 ppm mass accuracy \cite{17541451}. By comparison, on an Orbitrap, where one would expect 5 ppm mass accuracy, spectral recalibration is not as necessary for m/z identifications. However, spectral recalibration can also help correct for irregularities in tissue thickness, which can further magnify variations in the mass measurement.
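A minimal sketch of the quadratic-recalibration idea, assuming three hypothetical calibrant peaks with known theoretical masses (the fitted polynomial maps the observed m/z axis onto the corrected one; real implementations additionally use the median of centroid values per peak, which is omitted here for brevity):

```python
import numpy as np

def recalibrate(mz_axis, observed_refs, theoretical_refs):
    """Recalibrate an m/z axis with a quadratic function fitted on
    matched reference peaks (e.g., matrix peaks or applied standards)."""
    # quadratic calibration: mz_true ~ a*mz_obs^2 + b*mz_obs + c
    a, b, c = np.polyfit(observed_refs, theoretical_refs, 2)
    return a * mz_axis**2 + b * mz_axis + c

# Three hypothetical calibrant peaks spanning the mass range
obs = np.array([379.092, 772.463, 1296.685])
theo = np.array([379.093, 772.461, 1296.689])
mz_axis = np.linspace(300, 1500, 5)
print(recalibrate(mz_axis, obs, theo))
```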
5.2.4 Smoothing
The application of a smoothing algorithm can reduce fluctuations by increasing the signal-to-noise ratio. Mass spectrometry imaging can produce salt-and-pepper noise: sharp, sudden disturbances in image pixels that do not correspond to the signal of the surrounding pixels. To reduce these fluctuations, denoising algorithms are applied to reduce pixel-to-pixel variability and allow the local scale of features to be resolved. Commonly used algorithms include 1) Savitzky-Golay smoothing \cite{27791282} \cite{27256770}, which fits successive windows of data points with a low-degree polynomial, using the polynomial order and the number of points to compute a smoothed output value, and 2) boxcar smoothing \cite{22743164} (also known as moving average smoothing), which replaces each data point with the average of its neighboring values \cite{26680279}. These work to reduce noisy data sets with significant inter-pixel variation.
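Both algorithms are available in standard libraries; the sketch below, on invented toy data, applies SciPy's Savitzky-Golay filter to a single spectrum and a 3 x 3 boxcar (moving average) to a single-ion image.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.ndimage import uniform_filter

# Savitzky-Golay on one spectrum: window length and polynomial order are
# the two tuning knobs of the local polynomial fit.
rng = np.random.default_rng(0)
spectrum = rng.poisson(5, size=500).astype(float)
smoothed = savgol_filter(spectrum, window_length=11, polyorder=3)

# Boxcar on a single-ion image: each pixel is replaced by the mean of its
# 3 x 3 neighborhood, suppressing salt-and-pepper noise.
ion_image = rng.random((64, 64))
denoised = uniform_filter(ion_image, size=3)
```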
5.2.5 Unsupervised data compression
As MSI acquisitions tend to create large data files (up to several terabytes per sample), data processing becomes more difficult and requires more demanding computational methods. To alleviate this problem and make the data files easier to handle and distribute, several compression strategies have been implemented to reduce the size of the data while still retaining the important information. Binning mass spectra for each pixel of an imaged tissue and compression based on regions of interest (ROI) are the most successful methods, with ROI compression requiring the least computation \cite{28842033}. Autoencoders have also been useful for unsupervised non-linear dimensionality reduction of imaging data by reducing each pixel, one at a time, to its core features {Thomas, 2016, Dimensionality Reduction of Mass Spectrometry Imaging Data using Autoencoders}. Once the size of the data has been reduced, it can be more easily processed in subsequent steps of the processing pipeline.
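A minimal sketch of the binning idea for a single pixel (the bin width is an invented parameter; real implementations operate on the full data cube and instrument-specific formats):

```python
import numpy as np

def bin_spectrum(mz, intensity, bin_width=0.1):
    """Compress one pixel's profile spectrum by summing intensities into
    fixed-width m/z bins; bin_width trades file size against effective
    mass resolution."""
    edges = np.arange(mz.min(), mz.max() + bin_width, bin_width)
    binned, _ = np.histogram(mz, bins=edges, weights=intensity)
    centers = edges[:-1] + bin_width / 2
    return centers, binned
```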
Unsupervised clustering of the data is also used to compress data into features for statistical analysis. Unsupervised analysis can be divided into manual, component, or segmentation analysis. Manual analysis is carried out by selecting m/z values unique to the region of interest, pulling out each image for a single m/z, and manually cataloguing them. Component analysis requires a statistical or machine learning algorithm to cluster the data. Principal component analysis (PCA) is an unsupervised statistical method that reduces the dimensionality of the dataset by converting possibly correlated variables into a set of linearly uncorrelated values, known as principal components, ordered by how much of the variance in the data they explain. Plotting the first principal component on the x-axis against the second on the y-axis reveals groupings of related pixels in the data sets \cite{21980364}. PCA can also be used to remove signals that are poorly connected with variability between groups. Spatial segmentation helps bin similar spectra into regions of interest and identify co-localized m/z values. Hierarchical clustering partitions the image into its constituent regions at hierarchical levels of allowable dissimilarity between regions, requiring only a similarity measure between groups of data points. It classifies the mass spectra according to similarities between their profiles and rearranges multiple variables to visualize possible groups in the data, thus highlighting regions containing differences in molecular content and offering rapid identification of specific markers from different histological samples. Another segmentation method is k-means clustering, the most commonly used for mass spectrometry imaging. K-means partitions the n observations into k clusters based on the distances between mass spectra; after clustering, each observation belongs to the cluster with the nearest mean. K-means clustering has been used to create spatially localized clusters to which feature extraction can be applied {Winderbaum, 2015, Feature extraction for proteomics imaging mass spectrometry data}. Bisecting k-means is a combination of k-means and hierarchical clustering, although computationally more complex: it applies k-means repeatedly to the parent cluster to determine the best possible split into the next two daughter clusters, yielding uniformly sized clusters. These methods can help detect important, biologically relevant features that might otherwise go undetected, given the difficulty of extracting information from, and segmenting, large data sets so that statistical analysis remains computationally reasonable. Cardinal, an R-based statistics package, can be used for data compression and statistical analysis \cite{25777525}.
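To make the component-plus-segmentation workflow concrete, here is a minimal sketch with scikit-learn on random placeholder data (all dimensions invented): PCA compresses the pixels-by-m/z matrix, and k-means then segments the image in component space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy data cube: 40 x 40 pixels, 500 m/z bins, flattened to pixels x features
rng = np.random.default_rng(0)
data = rng.random((40 * 40, 500))

# PCA converts correlated m/z channels into a few uncorrelated components
scores = PCA(n_components=10).fit_transform(data)

# k-means assigns each pixel to the cluster with the nearest mean,
# producing a segmentation map of the tissue
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
segmentation_map = labels.reshape(40, 40)  # view the clusters as an image
```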
5.2.6 Supervised data compression
Supervised clustering is better suited when a specified set of classes is known and the goal is to classify new data into one of those classes. Supervised methods use predefined classes or categories, while unsupervised methods use similarity between spectra to generate classes. Supervised classification is used to determine whether the groups are actually different and which m/z values best differentiate them. Some studies use both supervised and unsupervised statistical analysis \cite{28361385}. Partial least squares (PLS) regression is a supervised classification method in which classes of data are annotated with known labels \cite{25462628}. PLS regression is similar to PCA; however, instead of separating components based on maximum variance, it uses a linear regression model to project the predicted variables and the observable variables into a new space. This type of supervised clustering requires a training data set for the classification of groups.
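A minimal sketch of PLS used for two-class discrimination (PLS-DA) with scikit-learn on random placeholder data; class labels are encoded as the regression response and the continuous predictions thresholded at 0.5, one common convention among several.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Training data: spectra (rows) with known class labels
# (0 = control, 1 = treated) used as the regression response.
rng = np.random.default_rng(0)
X_train = rng.random((60, 200))
y_train = np.repeat([0, 1], 30)

pls = PLSRegression(n_components=2)
pls.fit(X_train, y_train)

# New spectra are classified by thresholding the continuous prediction
X_new = rng.random((5, 200))
predicted_class = (pls.predict(X_new).ravel() > 0.5).astype(int)
```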
Both supervised and unsupervised classification methods reduce data down to the most important m/z value distributions. Data compression projects the data onto a lower-dimensional subspace while maintaining the essence of the data for statistical analysis. With the large degree of dimensionality associated with MS imaging data, especially of biomedical samples, extracting important, relevant features becomes increasingly difficult. Machine learning algorithms for feature detection developed for LC-MS data can be limiting with imaging data, as they do not account for differences in spatial regions of the tissue of interest. A context-aware feature-mapping machine learning algorithm was recently developed that takes the spatial region of features into account when ranking them \cite{27764717}.
5.3 Statistical Analysis
5.3.1 Tests of Significance
Statistical analysis of large imaging data sets is incredibly important for the implementation and utility of mass spectrometry imaging. Comparing samples involves statistical hypothesis testing to determine whether a significant difference exists between samples or between spatial regions of the tissues. Univariate analysis tests whether one m/z, corresponding to a compound of interest, differs between samples. If the data have a Gaussian distribution, a t-test is used to test for a difference between two samples, and ANOVA is used to test for any difference within a group of samples \cite{27485623} \cite{Marczyk_2015}. A Gaussian distribution of mean intensities cannot be assumed for clinical samples; mean values may still be used if the central limit theorem is satisfied. If the data have a non-Gaussian distribution, nonparametric tests such as the Mann-Whitney U-test can be used as the statistical hypothesis test. These tests are useful for finding peaks with an observable change, caused by the experimental design, between different regions or experimental conditions \cite{25877011}.
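The sketch below shows the corresponding SciPy calls on simulated intensity values (all numbers invented): a t-test and one-way ANOVA for Gaussian data, and the Mann-Whitney U-test as the nonparametric alternative.

```python
import numpy as np
from scipy import stats

# Mean intensities of one m/z across pixels/regions of two groups
rng = np.random.default_rng(0)
group_a = rng.normal(100, 10, 50)  # e.g., control region
group_b = rng.normal(110, 10, 50)  # e.g., treated region

# Parametric test when a Gaussian distribution can be assumed
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Nonparametric alternative for non-Gaussian data
u_stat, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# More than two groups: one-way ANOVA
group_c = rng.normal(105, 10, 50)
f_stat, p_f = stats.f_oneway(group_a, group_b, group_c)
```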
5.3.2 Discriminant Analysis
Data reduction methods such as PCA or PLS serve as preprocessing steps for discriminant analysis. These analyses are commonly performed together and abbreviated as PCA-DA or PLS-DA, respectively. Discriminant analysis is a statistical tool to assess the adequacy of a classification system. For any kind of discriminant analysis, the groups need to be assigned beforehand or, in the case of PCA, derived in preprocessing prior to discriminant analysis. Discriminant analysis is particularly useful in determining whether a set of variables is effective in predicting category membership. This is different from an ANOVA or multivariate ANOVA, which is used to predict one or multiple continuous dependent variables from one or more independent categorical variables \cite{26604989}.
5.3.3 Biomarker Tests
Even if statistical differences exist between two conditions for a single m/z, this does not necessarily mean that this m/z value can act as a biomarker to distinguish the two classes. For univariate biomarker analysis, to confirm that an m/z can be used as a diagnostic test to distinguish two regions of interest, a receiver operating characteristic (ROC) analysis is performed. In ROC analysis, the true positive rate (sensitivity) is plotted as a function of the false positive rate (1 - specificity) \cite{20978390} \cite{20821157} \cite{16550707}. The area under the curve (AUC) of this plot indicates whether the m/z marker can be used for diagnostics. This is a test of accuracy, where an AUC value between 0.90 and 1 is excellent, 0.80-0.90 good, 0.70-0.80 fair, 0.60-0.70 poor, and 0.50-0.60 a failed test. This test assesses the ability of a specific marker (m/z) to correctly classify the groups of interest. MALDI imaging was used to reveal thymosin beta-4 as an independent biomarker in flash-frozen colorectal cancer compared with normal tissue, using ClinPro Tools software to perform the ROC analysis \cite{26556858}.
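As a concrete illustration of univariate ROC analysis (here with scikit-learn on simulated marker intensities rather than ClinPro Tools; all numbers invented):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 40)  # 0 = normal, 1 = tumor
intensity = np.concatenate([rng.normal(0.8, 0.4, 40),   # normal samples
                            rng.normal(1.2, 0.4, 40)])  # tumor samples

# True positive rate vs. false positive rate at every threshold
fpr, tpr, thresholds = roc_curve(labels, intensity)
auc = roc_auc_score(labels, intensity)
print(f"AUC = {auc:.2f}")  # by the scale above, >= 0.90 would be excellent
```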
However, often in biomarker discovery, one biomarker alone cannot classify groups with an AUC high enough for clinical analysis. In this case, multiple biomarkers (multiple m/z values) are used, which is known as multivariate analysis. Here, machine learning is used to examine multiple biomarkers, looking for correlated structures in the mass spectra that also correlate with the target outcome. This multivariate analysis provides a single ROC curve derived from multiple biomarkers. Additionally, an indicator of how much each m/z contributes to the score of the resulting algorithm is calculated for each m/z value \cite{7628115} \cite{23054242}. For regression-based methods such as PLS, the importance of an m/z value follows directly from the model's loading vector. Additionally, the colocalization of two individual m/z values in a tissue can be calculated in a correlation analysis to see how well the m/z components of the multivariate analysis align in their spatial distributions \cite{18570456}. One problem for mass spectrometry imaging is that salt adducts of the m/z values of interest are identified separately. Therefore, in biomarker analysis, it would be ideal to combine m/z values corresponding to the same molecular compound into a single peak for analysis. For instance, two m/z values separated by approximately 22 Da can indicate the same compound detected with and without a sodium ion. The same can happen with potassium adducts, the loss of ammonia, the loss of water, oxidation of methionine, and other common modifications, which complicates identification and statistical analysis as well as univariate and multivariate biomarker analysis. For MALDI, Alexandrov introduced a method called masses alignment, which groups masses corresponding to a single peak and then represents them as one m/z value. This also reduces the size of the dataset, making computation and biological interpretation of the data more attainable \cite{23176142}.
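A minimal sketch of grouping m/z values by known adduct and loss shifts, in the spirit of, though not identical to, the masses-alignment method; the tolerance and peak list are invented, while the mass shifts are standard monoisotopic differences.

```python
import numpy as np

# Monoisotopic mass shifts (Da) of common adducts/losses relative to [M+H]+
SHIFTS = {"[M+Na]+": 21.9819, "[M+K]+": 37.9559,
          "-NH3": -17.0265, "-H2O": -18.0106}

def group_adducts(peaks, tol=0.01):
    """Group m/z values that differ by a known adduct/loss shift
    (within `tol` Da) so one compound contributes a single peak."""
    groups = []
    for mz in sorted(peaks):
        placed = False
        for g in groups:
            base = g[0]  # first member acts as the group's reference mass
            if any(abs(mz - base - s) < tol for s in SHIFTS.values()):
                g.append(mz)
                placed = True
                break
        if not placed:
            groups.append([mz])
    return groups

# 522.282 (Na adduct) and 482.289 (water loss) join the 500.300 group
print(group_adducts([500.300, 522.282, 482.289, 610.400]))
```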
5.3.4 Machine Learning Algorithms
Machine learning is starting to play a larger role in developing algorithms that quantify relationships in mass spectrometry imaging data and then use those relationships to make predictions for new data sets. First, the data are converted from a population of profiles into an n by m data matrix, where n is the number of individuals and m is the number of biomolecules of interest. Following conversion, the data can be analyzed using different algorithms that look for correlated structure in the measured data that also correlates with a target outcome.
This is currently being implemented for automated decision making, modeling, and computer-aided diagnosis. Supervised learning is used to help the computer identify patterns in known categories. This can be done in two separate ways: classification and regression. Classification refers to decisions among a typically small and discrete set of choices (tumor vs. normal tissue), while regression refers to estimation of possibly continuous-valued output variables (diagnosis of the severity of disease). Neural networks, support vector machines, recursive maximum margin criterion, and genetic algorithms build statistical models that use training data to predict the classification of new data sets. This is commonly applied to tumor classification \cite{25750696} \cite{27322705}.
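As a small illustration of supervised classification on the n by m matrix described above (random placeholder data; a linear support vector machine stands in for any of the listed algorithms):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# n x m matrix: rows are annotated pixels/patients, columns are biomolecule
# intensities; labels mark tumor (1) vs. normal (0) tissue.
rng = np.random.default_rng(0)
X = rng.random((100, 300))
y = rng.integers(0, 2, 100)

clf = SVC(kernel="linear")
# Cross-validation estimates how well the trained model will classify
# new, unseen spectra before deployment for computer-aided diagnosis.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```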
5.3.5 Complete data pipelines
Because processing imaging data requires numerous treatments different from those for conventional LC-MS data, software with a complete data analysis pipeline is useful for streamlining the entire analysis process. While there are numerous open-source and freely available software packages for processing data, functionality tends to be restricted and there typically are no export options for the data. A new MSI software package, SpectralAnalysis, strives to expand the reach of data processing by incorporating all processing steps, from preprocessing to multivariate analysis, within a single package, allowing for the analysis of single experiments as well as large-scale experiments spanning multiple instruments and modalities \cite{27558772}. Improved data processing pipelines are also being developed in an effort to make full use of the spatial information unique to imaging experiments. One such pipeline, EXIMS, strives to reveal significant molecular distribution patterns by treating the dataset as a collection of intensity images for the various m/z values. The process incorporates preprocessing, sliding-window normalization, de-noising and contrast enhancement, spatial distribution-based peak picking, and clustering of intensity images \cite{26063840}. This pipeline emphasizes the importance of treating imaging data differently from LC-MS data.
5.4 Repositories
Finally, data storage and sharing of final results allow the community to move forward and build upon the ever-growing wealth of knowledge. To drive this further, imaging repositories are necessary to give researchers access to imaging data for comparing results and discovering new answers to biological questions. Previously, such repositories were difficult to implement due to the large storage and computational requirements, but technological advancements have allowed for the emergence of at least one such repository \cite{25542566}, with the promise of more becoming available in the near future. Currently, the European project METASPACE, on bioinformatics for spatial metabolomics, has developed an online engine based on big-data technologies that automatically translates millions of ion images into molecular annotations. The estimated completion time for this project is June 2018.
6. Multi-modal Imaging Systems
MSI is useful for analyzing the spatial distributions of small molecules, lipids, peptides, proteins, and glycans. Combining MSI with other imaging modalities helps multiplex imaging analyses into a comprehensive analysis that can answer biological questions that could not otherwise be addressed with a single imaging modality. Multimodal technologies are very commonly implemented in diagnostic imaging techniques, and the concept has been expanded into MSI analysis pipelines, where MSI can serve as an essential complement for untargeted chemical analysis coupled with other imaging modalities. Because MSI has high chemical specificity but lower spatial resolution compared with other imaging modalities, it is typically combined with modalities that complement these features, i.e., modalities that are low in chemical specificity but high in spatial resolution or tissue structural information. The result of combining complementary imaging modalities is greater than the sum of its parts \cite{26070717}.
Multi-modal imaging can be approached either by acquiring images at different times (asynchronous), where the images are fused in a data processing step, or by simultaneously acquiring images (synchronous) and merging them during data acquisition \cite{20812286}. Asynchronous post-processing can present difficulties arising from positioning the same samples between different scans at different times, which can complicate co-registering the images for analysis \cite{Meyer_2013}. Co-registration is especially difficult if the data acquisitions are not acquired at the same spatial resolution; however, advances in computational annotation are helping to improve image analysis \cite{Eliceiri_2012}. Image co-registration can be achieved by aligning known regions of interest, using calibration points to perform a rigid registration, or by selecting a variety of points to perform moving least squares registration \cite{Huhdanpaa_2014}. Additionally, different imaging platforms have different sample preparation protocols, which can interfere with other imaging modalities. Synchronous imaging is advantageous because consistency is achieved in both time and space; however, combining instrumentation to accommodate synchronous acquisitions can require advanced skill and can be very expensive, especially for mass spectrometry instrumentation. The next steps for multimodal imaging involve integrating quantitative information from multiple existing functional modalities to create composites of not just two modalities, but three, four, or even five imaging modalities in a single data analysis pipeline. Additionally, advances in technology and instrumentation will allow synchronous integration to be expanded to multiple imaging modalities.
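A minimal sketch of rigid co-registration from calibration points, using the Kabsch/Procrustes least-squares solution (the fiducial coordinates are invented; moving least squares would generalize this to non-rigid warps):

```python
import numpy as np

def rigid_registration(src, dst):
    """Estimate the rotation R and translation t mapping calibration
    points `src` (one modality) onto `dst` (the other) in a least-squares
    sense; both are (n, 2) arrays of fiducial coordinates."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = (U @ np.diag([1, d]) @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Three fiducial markers visible in both images (hypothetical coordinates):
# the second set is the first, rotated by 7 degrees and shifted.
src = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 48.0]])
theta = np.deg2rad(7.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([3.0, -2.0])

R, t = rigid_registration(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True: points are co-registered
```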
6.1 Microscopy Multi-Modality
MSI is often combined with microscopy, which provides high-resolution morphological and structural information, while MSI is used to visualize and identify the distributions of specific molecules. Additionally, Van de Plas et al. describe a method for fusing microscopy data with mass spectrometry imaging data to enable prediction of a molecular distribution at both high chemical specificity and high spatial resolution \cite{Van_de_Plas_2015}. This is done after data acquisition, using the microscopy data to sharpen the MSI data and perform out-of-sample prediction \cite{25707028}. Here, we describe the use of light and fluorescence microscopy to evaluate tissue structure and specific markers. Microscopy is the most common modality currently paired with mass spectrometry imaging and is particularly useful for identifying regions of interest.
6.1.1 Histology
Although tissue sections used for MSI can be scanned to produce a structural overlay, important structural information at the cellular level is obtained from histological analysis of a sample using light microscopy, which can be important for region-of-interest analysis of MSI data. Light microscopy is used to see details and enlarged portions of a tissue section, which are then captured with a camera. Samples are treated with specific dyes to stain tissue structures. Histology overlay is the most common multimodal imaging system combined with mass spectrometry imaging in the current literature \cite{26216958} \cite{25488653} \cite{20170166}.
The most traditional stain, hematoxylin and eosin (H&E), renders nucleic acids in blue and proteins in red, allowing the pathologist to distinguish cells from the surrounding extracellular matrix \cite{21356829}. Other commonly used stains include Masson's trichrome stain for connective tissue, Alcian blue for mucins, and the periodic acid-Schiff reaction for staining carbohydrate-rich tissue regions \cite{4184780}. Trained pathologists use stained slides to identify different disease states of the tissue. Tissue morphology, cell structure, and staining distribution are analyzed by pathologists to stratify patient specimens and provide diagnostic indices for the patient \cite{28416487} \cite{28117928}.
The following are research proposals from the course participants, based on the plans they submitted when registering as master's students. Please note that everything written below is a Plan B. Plan B is the research plan requiring the fewest resources that the students can carry out: minimal funding, minimal hardware and software requirements, and minimal travel/accommodation.
Description of initial research plans
Dominicus Vincent: continuing his undergraduate research program on groundwater simulation (B.Sc. in Mining Engineering, ITB), using Visual Modflow; he is advised to use the original Modflow code released by USGS.
Rendi Ermansyah: hydrogeological exploration to locate water sources (B.Sc. in Geological Engineering, Unpad). He needs to articulate the novelty of the exploration technique so that it does not come across as routine.
Meila Puspita: hydrogeology for geothermal systems (B.Sc. in Geophysical Engineering, Unsyiah). She needs to decide whether to work in a geothermal field that has already been explored or in a new one (green field). A new field is recommended, since its various environmental components can provide the research with originality; mature (brown) fields are considered to have been covered too often in final projects.
Felice Dagelardini Wopari: hydrogeology for mineral mining, a case study of "wet mud" flowing from fractures (B.Sc. in Mining Engineering, Uncen). This wet mud, also called mud rush or dewatering sludge, needs to be defined more precisely in order to formulate the related research components. So far, the student can only describe the impact of wet-mud flooding.
Anggi Rustini: on climate change and water availability in Subang Regency, or continuing her undergraduate research on the unsaturated zone in peatlands (B.Sc. in Applied Meteorology, IPB; formerly with CIFOR). Notes on the first theme: are you certain the climate has changed, and by what indicators? Is the water potential in Subang actually affected by that change? Is the groundwater sourced from springs and/or wells, and if so, how many measurements have been taken?
Get to know your research plan early
Your studies last only two years, or four semesters. Being late in preparing a research plan means postponing your graduation indefinitely. The illustration below depicts an anecdote about how your time is divided. Insufficient data or shallow analysis are among the problems most often encountered when research is postponed until the last moment. Your research output is then not maximized: it is merely a thesis book. It should not be that way; your research output can be highly varied if you start early.
The goal of this experiment was to analyze and identify metallic samples. We used an X-ray diffraction machine to analyze the crystal structure of five different samples. We analyzed the diffraction peaks of each sample and calculated their lattice constants. By comparing to literature values, we were able to validate the identities of silicon, bronze, brass, and pure copper powder, and identify an unknown substance as tantalum.
A thesis is one form of scientific writing. Like it or not, you must pass through this stage to conclude your career as a student. On the other hand, the main problem students face is the difficulty of writing.
Writing, in the broad sense, is actually systematic conversation. This is what distinguishes it from the free conversation you usually engage in: once you misspeak, the words have already left your mouth. With writing, the conversation still flows, but there is time for reflection before it is finally released to the reader.
This article is a distillation of several earlier works: an implementation of open science \cite{Irawan_2017}, the status of Indonesian-language papers in DOAJ \cite{irawan_dasapta_erwin_2017_376762}, and a book entitled Menulis Ilmiah itu Menyenangkan, which began as a blog (see also the review by Nursatria Vidya Adikrisna).
This short piece will try to change your mindset from finding writing difficult to being unable to stop writing. Hopefully.
(This paper was written for the Opinion Column of Retorika Kampus magazine)
Modelo de negocio para la aplicación “home services”
Antecedentes:
Internet presta servicios que van más allá de consultas de información, transferencia de ficheros y chat, entre otros. Mediante este canal, es posible realizar transacciones electrónicas que van desde comprar un libro, hasta el pago de facturas; estas actividades en Internet, a las cuales se accede usando dispositivos móviles, se conocen como comercio móvil. El concepto no es común en nuestra sociedad y, por ende, es importante ahondar en lo que este tipo de comercio implica desde su infraestructura hasta la percepción de usuarios. Metodología: la revisión bibliográfica se realizó mediante una investigación documental. Resultados: el comercio móvil contempla funcionalidades, estrategias, e infraestructura necesarias para la realización de transacciones electrónicas exitosas y seguras. Conclusiones: el comercio móvil, al igual que el comercio electrónico tradicional, cumple con la infraestructura necesaria para ofrecer una gran variedad de servicios a los usuarios. Robayo-Botiva, D. M. (2012). El comercio móvil: una nueva posibilidad para la realización de transacciones electrónicas. Memorias, 10(17), 57-72.
La indefinición del modelo de negocio contribuye sin duda a agravar la sensación de riesgo e incertidumbre por parte de los productores provenientes de las industrias mediáticas. En el caso de las marcas informativas, por ejemplo, esa indefinición se traduce en la dependencia excesiva de los modelos de negocio del Internet convencional, caracterizados por la gratuidad de buena parte –o la totalidad– de los contenidos. Que esos mismos medios ofrezcan contenidos adaptados gratuitos (versiones ligeras de sus web o aplicaciones dedicadas que remodelan esos mismos contenidos) les obliga, a la postre, a generar nuevos contenidos con valor añadido –o bien fórmulas de integración de publicidad– que les permitan monetizar la plataforma móvil más allá de su contribución como complemento secundario del medio online (como El País Plus u Orbyt, de El Mundo). Existe, no obstante, un tercer perfil emergente entre los productores de contenido y aplicaciones móviles: el de las marcas que utilizan los contenidos como elementos de imagen, en una suerte de versión móvil de patrocinio. Son los contenidos y aplicaciones de marca (appvertising), entre los cuales destacan los videojuegos de marca (advergaming). ( Juan Miguel Aguado, Claudio Feijóo e Inmaculada J. Martínez, 2012)
Este trabajo tiene como propósito fortalecer el Lienzo del Modelo de Negocio de Osterwalder y Pigneur (que presenta en su libro “Generación de Modelos de Negocio), haciendo uso de los elementos de análisis que aporta Fuentes Zenón en su trabajo “Diseño de la Estrategia Competitiva”, conservando la esencia grafica sencilla y esquemática del lienzo original, al fin de favorecer la participación de los practicantes o responsables de los negocios.
En la primera parte se plantea el porqué de la conveniencia y popularidad del Modelo de Negocio, así como la necesidad de aportar mayores elementos de análisis, en la segunda se hace una rápida revisión de los antecedentes, estructura y papel que ocupan los Modelos de Negocio en el campo de los enfoques de negocio, para luego en la tercera parte ofrecer los elementos de análisis para el examen de cada uno de los 9 bloques del Modelo de Negocio, resultados que se resumen la cuarta parte para dar forma a una guía breve. ( Silvia Núñez Corona, 2014)
El presente informe de trabajo de grado, tiene por objetivo presentar el plan de negocios para la creación de la aplicación móvil destway, que permita gestionar los viajes terrestres intermunicipales. Este plan de negocio se pudo desarrollar utilizando la guía para la generación de modelos de negocio propuesto por Alexander Osterwalder. Los elementos que se han tenido en cuenta para la elaboración del presente informe abarcan el análisis del mercado en el uso de SmartPhones y de aplicaciones móviles, revisión de las aplicaciones móviles desarrolladas para el sector transporte y el análisis del movimiento del transporte terrestre de pasajeros. ( Galvis Escalante, Julián Andrés; Giraldo Betancur, Julián Andrés, 2015)
This article reviews the use of the Business Plan as a tool for teaching entrepreneurship and proposes a new approach, based on Business Model design accompanied by meaningful activities, as a means of fostering an entrepreneurial spirit among university students.
The proposal draws on leading authors on the subject, above all on Alexander Osterwalder’s approach, and incorporates Design Thinking theory, in which entrepreneurs must think both divergently and convergently across different stages of knowledge. The proposal was validated using an exploratory methodology in which graduates who own companies were interviewed. Likewise, a one-semester pilot was run in which the Business Model, together with other activities, was applied in place of the Business Plan; at its conclusion, active students in the entrepreneurship course and the course instructors were interviewed. Satisfactory results were obtained with respect to student learning and the fostering of an entrepreneurial spirit when the Business Model was used as the basis of the course, as well as in the outcome and acceptance of the meaningful activities. (Eugenia del Carmen Aldana Fariñas, Ma. Teresa del Carmen Ibarra Santa Ana, Ingrid Loewenstein Reyes, 2012)
Current research in requirements engineering seeks mechanisms for establishing the relationship between the expected functionality of an information system and the business processes it will support. This approach helps ensure that the information system to be developed is genuinely useful for the tasks of organizational actors. Research in this area has found that organizational goals are a good basis for linking the objectives pursued by the business to the requirements of the information system to be developed, since all of these requirements (functional and non-functional) must correspond to tasks to be performed within a business process. Business processes, in turn, enable the fulfillment or satisfaction of one or more business goals. This work presents a proposal for deriving software requirements from business models. The article is divided into two main sections: (a) the construction of business models based on goal-oriented analysis, and (b) the derivation of a software requirements model from the business model. This work provides a solid starting point for building the information system, in which every requirement originates in the goals of the business. (Hugo Estrada, 2012)
In recent years, micro, small, and medium-sized enterprises (MIPYME) have been the focus of numerous research studies; nevertheless, they still need strategic, operational, and alliance-related foundations that continuously provide opportunities to improve their competitiveness. The “Gestión de PYME” Academic Body has designed and launched a Business Laboratory that conducts regional studies within the State of Coahuila, among them the diagnosis of the business model and the definition of change strategies; it also designs cooperative innovation strategies, offering a virtual connection point between companies and the university. The objective of this work is to present the results of a survey of executives at 212 SMEs in the State of Coahuila on their perception of their financial management, through the analysis of twelve variables (out of the 28 that make up the business model) that every company must control. The results show the relationships between each component (as a dependent variable) and its elements (as independent variables) within the structure of the business model, and they reveal similarities and differences in these practices across company sectors and sizes. The evaluation makes it possible to define strategies to improve the economic and social performance of MIPYMEs. (Víctor Manuel Molina Morejón, Lourdes J. García Hernández, Valeria Viridiana Salas Jaramillo, 2013)
Child Witness: Autobiography, Trauma, Social Justice
Introduction
Child Witness explores the emergence of the child as a testimonial site and figure in autobiographical projects by adults who seek to represent trauma and call for justice. From Harriet Jacobs’s slave narrative Incidents in the Life of a Slave Girl to contemporary comics like Phoebe Gloeckner’s A Child’s Life to picture book memoirs like Ruby Bridges’ Through My Eyes, authors often incorporate childhood experience as a critical feature of shaping a life story for diverse audiences. These are not stories that merely recollect childhood or burnish it nostalgically. Instead, autobiographical narratives of childhood by adults mark a site where the values associated with self-representation in politics, aesthetics, and everyday life -- truth telling, the authority of experience, reliability -- attach to the child and permit adult readers to connect with the authors’ larger social justice projects. The child witness -- credible, trustworthy, and vulnerable -- offers authors and audience a means of connection they would not otherwise achieve. The child in the life writing projects of Jacobs, Gloeckner, and Bridges is employed as a witness to the horrors of slavery, deprivation, rape, and segregation. The child is positioned to testify to experience rather than to suffer it. The adult author recounts what the child experienced: not by ascribing naïve authenticity to the child’s voice, but by centering the childhood experience and knowledge upon which the authority of the adult autobiographer builds. Our focus on the emergence of the child witness as a testimonial figure and site reveals how authors leverage the affective power of their own childhoods to connect with diverse audiences. Autobiographical literature that uses the child witness in this way offers a pedagogical form that educates about injustice and calls for ethical witnessing and social change. It provides for new relations to emerge between authors and audiences through which previously silenced histories of personal and collective trauma are represented.
Child Witness will reveal a history of the child’s centrality to struggles for social justice, especially anti-racist, feminist, and human rights movements, and the significance within this history of autobiographical literature that connects childhood to adult activism. The book is guided by an overarching question: How does this literature disrupt the symbolic and political meanings of the child in the service of social justice and activism? Given the cultural judgments that attach to women’s autobiographical accounts, for example, how does the figure of the child and the narrative of childhood address the limits of persuasiveness and authority that damage women’s testimony? To answer these questions we chart a feminist history of life writing that foregrounds a child witness on whose behalf readers learn to demand justice.
The major theoretical intervention of the book lies in our fusion of insights from childhood studies and studies of autobiographical literature through which we reveal the centrality of the child (as witness and activist, as testimonial site and figure) in a testimonial tradition of auto/biographical work that seeks to make visible and/or remedy inequity. Child Witness takes up the child – a familiar figure in literary studies and humanitarianism alike – in order to place it in a new critical context by pulling visual and verbal forms into new proximities through feminist interdisciplinary analysis. We propose that a new formation around “the child” emerges at the intersections of life writing, children’s literature, and visual culture. Specifically, our focus on the child within the history of feminist life writing reveals new examples of how to bear witness to individual and social trauma.
Many will associate the words “witness” and “trauma” with Shoshana Felman and Dori Laub’s psychoanalytic and literary analyses rooted in Freud and focused on the Holocaust. Our project is rooted in a different strand of trauma studies, based in the feminist theory and clinical practice of Judith Herman, Laura S. Brown, and others who elaborate an antiracist feminist criticism of trauma that looks at systems of inequality. Extending this work to the study of self-representation, Child Witness draws on Leigh Gilmore’s (2001, 2017) elaboration of a feminist intersectional analysis of the chronic, pervasive, and everyday quality of trauma in the lives of those who experience a range of material forms of insecurity and risk. Gilmore’s focus on testimony, everyday violence, and systemic sexism and racism is shared by other scholars who use the terms trauma and testimony without primarily referencing the work of Felman and Laub, including Judith Butler, Hillary Chute, Wendy Kozol, Nicholas Mirzoeff, and Gillian Whitlock. We define trauma here as harm that unfolds over time, is hidden in plain sight, and is permitted by social norms of violence against women, children, and people of color. Child Witness engages directly with how trauma structures testimony, and it does so by attending to a range of dynamic and sometimes controversial visual-verbal strategies. Our analysis of visual culture also moves us away from Felman and Laub, as we attend to how photographic portraits in the 19th century documented slavery and visualized the subject of abolition, how comics and graphic memoirs challenge the all-too-pervasive sexual abuse of girls and women, and how auto/biographical picture books about civil rights define children as political agents.
The critical term “witness” is drawn from scholarship in life writing on “human rights and narrated lives” (Schaffer and Smith), from the analysis of race, gender, and culture in intersectional feminism, and from visual and verbal studies of ethical witnessing (Kozol; Hesford; Mirzoeff; Neary et al.). We add to previous theorizations of ethical witnessing an analysis of the child as the site and figure of testimony to the everyday trauma that the girl experiences and documents. The self-representational strategies of writers and illustrators motivate different publics to activism. We chart examples of ethical witnessing with the child at the center of autobiographical projects from slave narratives in the mid-19th century in the U.S. to contemporary memoirs and picture books. Our critical framework and archive are well-suited to each other: we document how authors use narratives and images of their own childhoods to reach diverse and often distant audiences, thereby placing familiar texts in a new critical narrative and incorporating unfamiliar texts to flesh out this history. There is no single child witness in the history we lay out; rather, autobiographers return to their childhoods and use the child as a site of testimony in a range of ways that we seek to name in each chapter. The origins and locations of meanings of childhood will shift within and among historical time periods, especially given our focus on women and girls of color. Our method is an emergent one that adapts to the flexible genre of autobiography and to the themes and strategies each author and artist employs in a text.
Our use of the term social justice is an essential element of the theoretical framework of intersectional feminism. This interdisciplinary feminist frame fits our project’s focus on situated personal experiences as a way to create new knowledge, affiliations, and forms of justice that exceed courts or other formal venues. The autobiographies in this project place the girl in a political context. Here, the child is not an innocent being to be saved; rather, women name the intersections among race, class, gender, citizenship, and other variables to highlight and resist larger systems of oppression in which the child is embedded. Autobiographers use the child as a testimonial site to create narratives and images that critically interrogate systems of meaning and intersections hidden in plain sight. These are often shocking because we are trained to read the child as vulnerable, in need of saving, immune to adult conflicts, and somehow not raced or classed. For example, Rigoberta Menchú, a girl who is Indigenous, poor, and colonized, and an activist who makes claims on behalf of numerous victims of torture and murder, tells a personal life story in order to draw attention to U.S. involvement in the conflict in Guatemala. Through her use of the girl, she testifies to violence and demands justice for the victims of harm. Through this example, we can see the ways in which social justice is at the heart of intersectional feminism’s commitments to examining structures of inequity that frame who is heard, and how. The feminist history of life writing we propose begins with women of color. The critical and historical trajectory extends from Harriet Jacobs to Black Lives Matter in one conceptual breath and argues that when some men have focused on or embedded childhood within their autobiographical projects, they have done so in relation to women’s writing. Thus the gendered discourse of autobiographical narratives of childhood develops in authority and innovation in demonstrable ways through the work of women.
Critical studies of both childhood, including children’s literature and queer theory, and autobiographical narrative, including graphic memoirs and picture books, represent a provocative and important intersection for at least two reasons. First, adult autobiographers politicize childhood in ways that challenge “certain stylized and largely unquestioned assumptions about childhood” (Duane 8). Second, adults writing about their own childhoods bring attention to abuses often hidden from view and encourage adult readers to ally with them and advocate for change in the public sphere. Scholars in this area have theorized the child as a symbolic and contested social category rather than a biological certainty (Bruhm and Hurley; Driscoll; Duane; Dubinsky; Gittins; Higonnet; Kehily; Sánchez-Eppler; Steedman). Scholars of childhood maintain that while there are actual children who need protection from those positioned to provide it, the meanings a culture gives to childhood, and the harm or protections solidified in institutions and policy, will differ across time, culture, and location, as well as across the variables of race, class, gender, and sexuality. This multiplicity of meanings has been captured in cultural studies of childhood that note how the figure of the child often serves as a means to elicit a wide range of competing emotions, from sympathy to patriotism (Berlant; Edelman; Stockton). Children’s literature scholars, in particular (Capshaw; Kincaid; Kidd; Mickenberg), have been instrumental in drawing attention to how the imagined child reflects larger social and political ideologies, histories, and movements. To this field, we contribute an analysis of how the figure of the child witness enables readers to connect the private act of reading to the collective project of social change.
The interdisciplinary field of autobiographical literature examines how people represent their lives in relation to history and do so in creative and innovative ways (Chaney; Chute; Gilmore; Smith and Watson; Whitlock). Historically, this practice has taken numerous nonfictional forms, including autobiography, memoir, slave narrative, and other testimonial discourses, and has also paralleled the development of fictional forms interested in the first person, including the bildungsroman, first-person fiction, lyric poetry, and ‘zines (Gilmore; Rak and Poletti). We draw upon and amplify Leigh Gilmore’s analysis of limit cases in life writing in order to offer a critical frame for theorizing the use of the child witness within the larger historical and creative project of life narrative in different media.
To this end, we recognize the diverse linguistic and visual strategies that authors and illustrators employ within a complex history of socio-political movements. Thus Child Witness connects an analysis of slave narratives of the 19th century to contemporary graphic memoirs and children’s picture books by historicizing and theorizing the emergence of the child witness as testimonial figure and site of cultural judgment. By design, we place autobiographical narrators like Harriet Jacobs and Rigoberta Menchú and comics artists like Marjane Satrapi and Phoebe Gloeckner alongside works often read in K-12 contexts, including fairy tales and graphic life writing in picture book format, such as Duncan Tonatiuh’s Separate Is Never Equal: Sylvia Mendez and Her Family’s Fight For Desegregation, in order to capture the broad use of the child witness. This cross section of texts allows us to make visible the dynamic interrelations of gender, genre, race, and class in the context of testimony and its investments in social justice. We have been struck in our previous research (Gilmore, “Witnessing Persepolis”; Gilmore and Marshall; Marshall) by the wide range of textual and visual strategies writers and artists use to politicize childhood. Among these, we have observed how writers and artists pose ethical demands as an outgrowth of shared affect, offer up radical pedagogies that blur the soft borders between childhood and adulthood, and teach alternative lessons about history, trauma, and resistance through life writing. Our previous work examined how adults use texts and images of their own childhoods to make larger claims in the public sphere and allowed us to further analyze feminist interventions in the symbolic and cultural meanings of childhood through the media of life writing and graphic memoirs. Here we elaborate a framework for understanding how feminist autobiographical projects disrupt the symbolic and political meanings of the child.
Chapter One, “Girlhoods, Crisis, and Autobiography,” examines three linked cases that introduce readers to how adult women use girlhood as a category to compel social activism. In each, the authors draw on and insert the child as central to the political activism for which they seek witness. From slave narratives to the Latin American testimonio I, Rigoberta Menchú, and autobiography in comics form, such as Marjane Satrapi’s Persepolis, in the global literary marketplace, women use autobiographical narratives of childhood to elicit readers’ ethical engagement with political topics and cultural critique. In this chapter, we chart a feminist history of how women of color use autobiographical narratives of their own girlhood to elicit sympathy from a mostly white and often geographically distant readership. These popular autobiographical narratives reach across national borders to call for political action, including the abolition of slavery in the U.S., humanitarian intervention in the civil war in Guatemala, and understanding of revolution in Iran. In these texts, women argue that political and moral autonomy develops from their responses to childhood experience and crisis.
We begin with Harriet Jacobs’s critique of the destruction of childhood for enslaved children. Jacobs shifts the focus from race to racism and slavery by describing her own happy young life. Childhood, for Jacobs, offers a way to interrogate her white readers’ assumptions about race and racism. Rigoberta Menchú uses her childhood to establish a complex network of testimony, truth-telling, and privacy. She contrasts the independence and respect children are accorded and the work they are relied upon to do in Quiché culture with the exploitation of their labor on coastal plantations. Marjane Satrapi offers her child-self as a witness to the rise of the Ayatollah in Iran even as the childhood she knows disappears when her parents send her into exile. The symbolic and political meanings of the child differ in each example, as do the ways in which they are unsettled; yet, taken together, they represent a history of feminist representation of the child as a testimonial figure and site.
We connect these texts to make clear how life writing, children’s literature, and visual culture co-produce the child in broadly intersectional terms, along the lines Kimberlé Crenshaw adumbrated. Our work can be read alongside previous theorizations of the child by scholars such as Robin Bernstein, Anna Mae Duane, Caroline Levander, Kathryn Bond Stockton and others, who also recognize the multiple systems of oppression that motivate a diverse range of equally intersectional responses by authors, artists, and activists. As with these critical projects, our method is less concerned with naming a particular child figure (e.g., the suffering child) in a particular historical moment, or communicating the authentic perspective of the child; rather, our intersectional feminist frame allows for a focus on the unique formation of an adult rendering his/her/their own childhood as a testimonial site from which to agitate for social justice. No longer representative of static subaltern silence, girls emerge in these narratives as figures of sympathy represented by politically active women autobiographers.
Chapter Two, “Soft Borders and the Feminist Politics of Girlhood,” shifts focus from the use of the child figure to draw attention to injustice and compel the action of others on the child’s behalf, in order to examine the strategic use of girlhood as a category with soft temporal borders. Here, we connect Susanna Kaysen’s popular memoir Girl, Interrupted, about her confinement in McLean Hospital; Lucy Grealy’s Autobiography of a Face, about her experience of jaw cancer in childhood through multiple surgeries and hospitalizations; and David Small’s graphic memoir Stitches, about his childhood experience of throat cancer, surgery, and its consequences. In each example, experiences of illness take the authors out of one form of time, suspending one childhood temporality and supplanting it with another that moves in the tempo of diagnosis and treatment. One form of childhood time -- growing in relation to siblings and peers, schooling, neighborhood life, for example -- is replaced with the rhythms and routines of the hospital, routines that offer new markers for charting life. Childhood in these texts is a borderline category. Kaysen fuses childhood and young adulthood to make a feminist critique of the white middle-class family and of mental illness. In Autobiography of a Face, Grealy offers different trajectories of growth for her body and her face, deftly revealing how the medicalization of her childhood lacked a developmental language adaptable to her sexuality. Viewed as a childhood patient, yet living in a maturing body, Grealy’s face emblematizes a complex site of traumatic experience and testimony. Whereas Susanna Kaysen represents her young adulthood as being interrupted by her institutionalization for borderline personality disorder, Grealy’s life is interrupted by a narrative of her difference, a condition she can neither leave nor outgrow but must address through narrative. We read David Small’s graphic memoir in relation to Kaysen and Grealy to place him within the gendered market of contemporary life writing about trauma and to highlight the feminist strategies he adapts to narrate childhood trauma.
Chapter Three, “Fairy Tale Girlhoods: Sexual Violence and Feminist Graphic Knowledge,” considers the category of girlhood as a site for feminist critique through a reading of Phoebe Gloeckner’s A Child’s Life and Other Stories. Here we identify specific formal strategies Gloeckner crafts in the service of testimony, including the telescoping of child and adult perspectives and temporalities, and the use of children’s literature, especially fairy tales, in Gloeckner’s graphic autobiographical project. The connection between nonfictional narratives of endangered children and the canon of children’s literature may seem tenuous, but adult life writers often rely upon familiar texts from childhood (Marshall). Fairy tale characters like Little Red Riding Hood and All-Fur experience evil stepmothers, threats of rape and actual rape, and other forms of violence, and provide a familiar touch point for life writing about childhood and trauma. Gloeckner returns to the sexual violence of traditional fairy tales to rupture the façade of the unknowing child. In comics like “Magda Meets the Little Men in the Woods,” Gloeckner remediates the fairy tale in contemporary comics form to offer a pedagogy in which the child witness refuses the position of the resilient being who grows out of or forgets trauma. This chapter offers a method for reading the visual and verbal strategies of feminist resistance that Gloeckner employs through the child witness. Specifically, we note how Gloeckner creates a feminist graphic knowledge of sexual violence through her use of the gutter (the white space between panels in comics) and scale. She uses the figure of the child to intervene in the epistemology of children’s sexual precarity within families by illustrating it explicitly. She reaches out to readers visually to counter the claim that such violence is invisible and unknown. To contextualize Gloeckner’s graphic strategy, we consider Virginia Woolf’s imposed reticence about her experience of sexual abuse as a child in her autobiographical essay, “Sketch of the Past,” and demonstrate how Una’s graphic memoir about abuse, Becoming Unbecoming, presents sexual abuse as defining childhood and adulthood for women rather than as an isolated or episodic interruption.
In the previous chapters, we examine texts published for an adult or young adult audience and the figure of the child as witness. The final chapter, “Witnessing Social Violence for Children: Picture Books, Auto/Biography and Social Change,” takes up children’s nonfictional picture books as a unique and radical form of graphic life writing in which Indigenous authors, and authors and illustrators of color, center a child figure who is both witness and activist. These texts represent social histories often left out of official social studies curricula. Often dismissed as simple or solely for a young audience, picture books have a history of providing “necessary cover” (Capshaw 103) for the child witness to speak and relay lessons about discrimination, violence, and activism. For instance, Duncan Tonatiuh’s biography of Sylvia Mendez and her family in Separate Is Never Equal, Ruby Bridges’ memoir Through My Eyes, and Christy Jordan-Fenton and Margaret Pokiak-Fenton’s co-authored auto/biography When I Was Eight recuperate and reclaim histories through counter-storytelling in image and narrative (Solórzano and Yosso). The child witness-as-activist is central to counter-histories of racialized misrepresentation in text and image and to the creation of culturally specific stories of resistance that have radical potential for social justice education. In each of these auto/biographical picture books, the child witness is also an activist.
In the conclusion, “New Child Witnesses,” we turn our attention to current events and movements in which the child witness is crucial to forwarding human rights. Nobel Peace Prize winner Malala Yousafzai’s representation of her childhood experience and activism in I Am Malala: The Girl Who Stood Up For Education and Was Shot By the Taliban (Yousafzai and Lamb) emerges alongside representations of her by others, including picture books such as Malala Yousafzai: Warrior With Words (Abouraya and Wheatley) and Malala: A Brave Girl From Pakistan (Winter), and enables a comparison of the autobiographical and biographical child witness. In addition, we examine how the feminist and anti-racist movements of #BlackLivesMatter and #SayHerName protest not only the expendability of black boys and girls, but also how these subjects are denied their status as children. Tied to representational strategies in Harriet Jacobs’s Incidents, the black child remains a critical figure for social justice and a contested site of interpretation. Police officers typically see black children and adolescents as older than they are and link imputed age to the risk they pose to officers. Under these conditions, children of color and Indigenous youth are at heightened risk of police violence. Social justice activism aimed at reclaiming Trayvon Martin, Michael Brown, and Tamir Rice as children politicizes the category of the child and clarifies its potent use in calls for justice. These new child witnesses circulate in a range of visual-verbal circuits, draw on the strategies we outline, and also highlight emergent uses of social justice life writing that compel readers and viewers toward activism. They connect to the earlier histories of abolition and demonstrate the significance of children’s lives in testimonial projects.
Works Cited
Berlant, Lauren. The Queen of America Goes to Washington: Essays on Sex and Citizenship. Durham: Duke University Press, 1997. Print.
Bridges, Ruby. Through My Eyes. New York: Scholastic, 1999. Print.
Bruhm, Steven, and Natasha Hurley, eds. Curiouser: On the Queerness of Children. Minneapolis: University of Minnesota Press, 2004. Print.
Capshaw, Katharine. Civil Rights Childhood: Picturing Liberation in African American Photobooks. Minneapolis: University of Minnesota Press, 2014. Print.
Chaney, Michael, ed. Graphic Subjects: Critical Essays on Autobiography and Graphic Novels. Madison, WI: University of Wisconsin Press, 2011. Print.
Chute, Hillary. Graphic Women: Life Narrative & Contemporary Comics. New York: Columbia University Press, 2010. Print.
Driscoll, Catherine. Girls: Feminine Adolescence in Popular Culture and Theory. New York: Columbia University Press, 2002. Print.
Duane, Anna Mae, ed. The Children’s Table: Childhood Studies and the Humanities. Athens: University of Georgia Press, 2013. Print.
Dubinsky, Karen. “Babies Without Borders: Rescue, Kidnap, and the Symbolic Child.” Journal of Women’s History 19 (2007): 142-150. Print.
Farley, Lisa, and Julie C. Garlen. “The Child in Question: Childhood Texts, Cultures, and Curriculum.” Curriculum Inquiry 46 (2016): 221-229. Print.
Gilmore, Leigh. Autobiographics: A Feminist Theory of Women’s Self-Representation. Ithaca, NY: Cornell University Press, 1994. Print.
Gilmore, Leigh. The Limits of Autobiography: Trauma and Testimony. Ithaca, NY: Cornell University Press, 2001. Print.
Gilmore, Leigh. “Limit-Cases: Trauma, Self-Representation and the Jurisdictions of Identity.” Biography 24.1 (2001): 128-139. Print.
Gilmore, Leigh. “Witnessing Persepolis: Comics, Trauma, and Childhood Testimony.” Graphic Subjects: Critical Essays on Autobiography and Graphic Novels. Ed. Michael Chaney. Madison, WI: University of Wisconsin Press, 2011. 157-163. Print.
Gilmore, Leigh, and Elizabeth Marshall. “Trauma and Young Adult Literature: Representing Adolescence and Knowledge in David Small’s Stitches: A Memoir.” Prose Studies: History, Theory, Criticism 35.1 (2013): 16-38. Print.
Gloeckner, Phoebe. A Child’s Life and Other Stories. Berkeley: Frog, Ltd., 1998/2000. Print.
Harrison, Kathryn. The Kiss. New York: Avon, 1997. Print.
Higonnet, Anne. Pictures of Innocence: The History and Crisis of Ideal Childhood. London: Thames & Hudson, 1998. Print.
Jacobs, Harriet A. Incidents in the Life of a Slave Girl. Ed. Jean Fagan Yellin. 1861. Cambridge, MA: Harvard University Press, 2000. Print.
Jordan-Fenton, Christy, and Margaret Pokiak-Fenton. When I Was Eight. Toronto & Vancouver: Annick, 2013. Print.
Kaysen, Susanna. Girl, Interrupted. New York: Random House/Vintage Books, 1994. Print.
Kehily, Mary Jane. An Introduction to Childhood Studies. 2nd ed. New York: Open University Press, 2009. Print.
Kidd, Kenneth. Making American Boys: Boyology and the Feral Tale. Minneapolis: University of Minnesota Press, 2005. Print.
Kincaid, James. Child-Loving: The Erotic Child and Victorian Culture. New York: Routledge, 2002. Print.
Marshall, Elizabeth. “The Daughter’s Disenchantment: Incest as Pedagogy in Fairy Tales and Kathryn Harrison’s The Kiss.” College English 66.4 (2004): 395-418. Print.
Menchú, Rigoberta. I, Rigoberta Menchú. New York: Verso, 1984. Print.
Mickenberg, Julia L. Learning From the Left: Children’s Literature, the Cold War, and Radical Politics in the United States. New York: Oxford University Press, 2005. Print.
Rak, Julie, and Anna Poletti. Identity Technologies: Constructing the Self Online. Madison: University of Wisconsin Press, 2014. Print.
Sánchez-Eppler, Karen. Dependent States: The Child’s Part in Nineteenth-Century American Culture. Chicago: University of Chicago Press, 2005. Print.
Satrapi, Marjane. Persepolis: The Story of a Childhood. New York: Pantheon, 2003. Print.
Searle, Ronald. The Terror of St. Trinian’s and Other Drawings. New York: Penguin, 1959. Print.
Searle, Ronald. To the Kwai -- and Back: War Drawings 1939-1945. London: William Collins Sons & Co., 1986. Print.
Schaffer, Kay, and Sidonie Smith. Human Rights and Narrated Lives: The Ethics of Recognition. New York; Basingstoke: Palgrave Macmillan, 2004. Print.
Smith, Sidonie, and Julia Watson. Reading Autobiography: A Guide for Interpreting Life Narratives. 2nd ed. Minneapolis: University of Minnesota Press, 2010. Print.
Solórzano, Daniel G., and Tara J. Yosso. “Critical Race Methodology: Counter-Storytelling as an Analytical Framework for Educational Research.” Foundations of Critical Race Theory in Education. Eds. Edward Taylor, David Gillborn, and Gloria Ladson-Billings. New York: Routledge, 2009. 131-47. Print.
Steedman, Carolyn. Strange Dislocations: Childhood and the Idea of Human Interiority, 1780-1930. Cambridge: Harvard University Press, 1998. Print.
Stockton, Kathryn Bond. The Queer Child, Or Growing Sideways in the Twentieth Century. Durham: Duke University Press, 2009. Print.
Tonatiuh, Duncan. Separate Is Never Equal: Sylvia Mendez & Her Family’s Fight For Desegregation. New York: Abrams, 2014. Print.
Whitlock, Gillian. Soft Weapons: Autobiography in Transit. Chicago: University of Chicago Press, 2006. Print.
Winter, Jeanette. Malala: A Brave Girl From Pakistan. New York: Beach Lane Books, 2014. Print.
Yousafzai, Malala, and Christina Lamb. I Am Malala: The Girl Who Stood Up For Education and Was Shot By the Taliban. New York: Little, Brown and Company, 2013. Print.