\section{Introduction}
Eucalypt species are fast-growing and can produce high-quality timber for appearance, structural and engineered wood products. However, the large growth-strains displayed in eucalypts are associated with log splitting, warp, collapse and brittleheart, imposing substantial costs on processing \cite{yamamoto2007slides}. Costly mitigation strategies have been developed to reduce growth-strain-induced wood defects, but these have been only partially effective \cite{yamamoto2007slides}. Because growth-strain is highly heritable, as shown here, an alternative approach is to select and grow individuals that display low growth-strain. Until now, however, measuring growth-strain has been difficult, time-consuming and expensive, preventing the assessment of the large number of trees needed in a successful breeding programme; the largest previously reported studies \cite{Murphy_2005,naranjo2012early} --dundii and teak ref-- conducted growth-strain testing on xxx trees.
Utilising the developments of \cite{Chauhan_2010} and \cite{Entwistle_2014}, a rapid growth-strain testing procedure has been developed. To minimise the time taken to test each individual, the rapid procedure cannot record negative values, which occur where the wood in the centre of the stem is under tension rather than compression, resulting in a left-censored dataset.
Left-censored data are common in research areas where detection limits are high relative to the measured values, such as testing for the presence of drugs within an animal --ref--. Bayesian methods can be used to simulate the censored values from the observed data, reducing the bias that arises when censored observations are simply recorded at the limit, which produces a zero-inflated dataset.
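To make the censoring idea concrete, the sketch below is an illustrative assumption rather than the analysis used in this study: it imputes left-censored values with a simple data-augmentation (stochastic-EM) loop, repeatedly redrawing censored observations from a normal distribution truncated below the detection limit and re-estimating the mean and standard deviation from the completed data. The simulated data, the detection limit, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical growth-strain data: true values ~ Normal(mu, sigma),
# left-censored at a detection limit (values below it recorded AT the limit).
mu_true, sigma_true, limit = 0.5, 1.0, 0.0
y_true = rng.normal(mu_true, sigma_true, size=500)
censored = y_true < limit
y_obs = np.where(censored, limit, y_true)

def draw_truncated_below(mu, sigma, upper, size, rng):
    """Rejection-sample Normal(mu, sigma) truncated to (-inf, upper]."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        cand = rng.normal(mu, sigma, size=4 * size)
        cand = cand[cand <= upper]
        take = min(size - filled, cand.size)
        out[filled:filled + take] = cand[:take]
        filled += take
    return out

# Data-augmentation loop: alternate (1) re-estimating mu and sigma from the
# completed data and (2) re-imputing the censored values from the truncated
# normal implied by those estimates.
y = y_obs.copy()
mu_trace = []
for step in range(2000):
    mu_hat, sigma_hat = y.mean(), y.std(ddof=1)
    y[censored] = draw_truncated_below(mu_hat, sigma_hat, limit,
                                       int(censored.sum()), rng)
    mu_trace.append(mu_hat)

mu_imputed = float(np.mean(mu_trace[500:]))  # average after burn-in
naive_mean = float(y_obs.mean())             # censored values left at the limit
```

Averaging the mean estimate after a burn-in recovers something close to the uncensored mean, whereas the naive mean of the censored data is biased upward. A full Bayesian treatment would additionally draw the mean and standard deviation from their posterior distributions rather than plugging in point estimates, but the plug-in version is enough to show how imputation removes the censoring bias.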