
\usepackage[margin=1.0in]{geometry}
\bibliographystyle{plain}
\usepackage{indentfirst}
\usepackage{pgfgantt}

\begin{document}

\par This project also offered many options for which responding variables to examine against the increase in carbon dioxide. Rather than using wildfires as the indicator of climate change, the project could have been based on the melting of ice sheets, rising sea levels, or reduced snowfall in the mountains. Wildfire severity was chosen because it is a more indirect indicator: it should increase as the climate becomes warmer and drier, so a trend in severity points back to climate change.

\section{Implementation Phase}

\subsection{Assumptions and Boundary Conditions}
This project covers a broad range of subjects, and finding data for all of them proved difficult; this was the main restriction on the project. For example, there are only two data sets available for the wildfire analysis. The carbon dioxide data, by contrast, was the easiest to access because it is kept by the EPA on its .gov site, which means the data is open access and free to use. The other data sets did not have the same advantages.

\par The other main restriction was the time the group had to work. Climate change is one of the biggest and broadest issues there is; exploring every avenue was simply not possible, so the workload was pared down to a level reasonable for a three-week project. This was also the hardest boundary to deal with, because it is very easy to go overboard when gathering information, and deciding what is actually useful is difficult.

\subsection{Timeline}
% Gantt chart
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Gannt}
\end{figure}

\subsection{Data Acquisition}
Finding and downloading all of the necessary data took considerable time and effort. During the project's initial planning, possible locations for data were written down, so the search for data started from those bookmarked websites.
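The timeline figure above is included as an image; since the \texttt{pgfgantt} package is already loaded in the preamble, the chart could instead be typeset directly. A minimal sketch follows, with hypothetical task names and day ranges standing in for the project's actual schedule:

```latex
% Hypothetical three-week timeline; task names and day ranges are
% placeholders, not the project's actual schedule.
\begin{figure}[h]
\centering
\begin{ganttchart}[hgrid, vgrid, bar/.append style={fill=gray!40}]{1}{21}
  \gantttitle{Three-Week Project}{21} \\
  \gantttitlelist{1,...,21}{1} \\
  \ganttbar{Data acquisition}{1}{7} \\
  \ganttbar{Data reduction}{6}{14} \\
  \ganttbar{Analysis and write-up}{13}{21}
\end{ganttchart}
\end{figure}
```

Typesetting the chart natively keeps it legible at any scale and lets the schedule be revised without regenerating an image.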

\par The data acquired for the carbon dioxide analysis came from flask data sets. Flask data is taken by scientists who, knowing the volume of the flask, run tests on that volume of air to determine the carbon dioxide concentration in parts per million. This collection method was consistent across all of the sets, although other options were available that were not specifically flask data.

\par The last parameter used when gathering the carbon dioxide data was to make sure the data was up to date. This was not difficult thanks to NOAA's filtering options: when searching, an option was selected requiring that data sets had been updated within the last 365 days. This means all of the data sets have measurements from within roughly the last year. Updating a data set, however, did not always mean new points were added; the worst set, in terms of new data, had its newest point at the end of 2012.

%Graph of some raw CO2 data?
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{co2all}
\end{figure}

%Luke and Whitney put your data acquisition here, I would have done it but I have no idea how you guys collected your data.

\subsection{Data Reduction and Processing}