\subsection{Data Acquisition}
Finding and downloading all of the necessary data took a lot of time and effort. During the initial planning of the project, possible sources of data were recorded, so the search for data began with those websites and locations.
\par
The carbon dioxide data was found on the website originally recorded. NOAA holds data on carbon dioxide concentrations from around the world. The data was taken from locations such as the Black Sea, Barrow, and even Guam. Taking data from such varied locations gives a better global perspective on how much carbon dioxide concentrations are changing.
\par
The data acquired for the carbon dioxide analysis came from flask data sets. Flask data is collected by scientists who fill a flask of known volume with air and then run tests on that sample to determine the concentration in parts per million. This collection method was consistent across all of the sets, although there were also other options available that were not specifically flask data.
\par

\par
The carbon dioxide data was the first to be downloaded. To start, the data sets were downloaded and placed into individual LaTeX files in Emacs. For this project's research purposes, only two of the columns of data were needed: the date and the concentration of carbon dioxide in parts per million. There were also quite a few columns of straight zeros that had to be dealt with. To remove these systematically, a keyboard macro was used in Emacs. This was done to all of the data sets to reduce them to just the date and the concentration.
\par
The next step in the process was to place these data sets in Matlab for analysis. All of the data was imported as cell arrays because the dates contained symbols that Matlab could not use, and the column of concentrations was then converted into a numeric Matlab array. When this was finished, all of the data sets were plotted. All of the graphs looked great, except that two of them had a lot of sharp peaks and outlying values. To fix this, those two data sets were gone through with a fine-tooth comb to find the outlier points and adjust them to reasonable values. After the major spikes were adjusted, a smoothing function was applied to each data set.
\par
The first data set which required smoothing was the Baring Head data. An averaging function was used across all of the data points. This smoothing takes the values around a certain point, averages them, and then replaces the point in question with that average value. The details of this script are in the appendix under ``script 1''. Both the original version and the smoothed version are shown below.
\begin{center}
Figure 3
\end{center}
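\par
To illustrate how such a neighbourhood-averaging smoother can be written in Matlab, a minimal sketch is given below. The actual code is ``script 1'' in the appendix and is not reproduced here, so the function name, variable names, and window size are assumptions rather than the project's own script.
\begin{verbatim}
function smoothed = smoothByNeighbors(ppm, halfwin)
% Replace each value with the mean of the points within +/- halfwin
% positions of it, clipping the window at the ends of the data set.
% (Illustrative sketch only; the project's actual routine is 'script 1'.)
smoothed = ppm;
for k = 1:numel(ppm)
    lo = max(1, k - halfwin);
    hi = min(numel(ppm), k + halfwin);
    smoothed(k) = mean(ppm(lo:hi));
end
end
\end{verbatim}
Assuming the imported cell array is called \texttt{raw}, with dates in the first column and concentrations in the second, \texttt{ppm = cell2mat(raw(:,2))} gives the numeric vector described above, and \texttt{plot(ppm); hold on; plot(smoothByNeighbors(ppm, 2))} overlays the smoothed curve on the original.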

\par
The smoothing effect is also very noticeable here: the curves are much smoother and easier to read than before.
\par
Lastly, all of the data sets downloaded consisted of monthly recorded data over time, except that the Ascension Island data had been recorded almost daily. This meant that the Ascension Island set contained more than 6000 points, which is much larger than the other sets, which have only a few hundred values each. To reduce the number of values in the set, a reducing averaging function was used on the data. This function took every fifteen points, averaged them together, and saved the result as a single point in a new matrix. All of the dates were discarded for this data set, except for the initial and final dates. This function is depicted under ``script 3'' in the appendix. In total, this brought the data set down to a more reasonable 400 points.
\par
The carbon dioxide data was located and downloaded from NOAA's (????) climate change website. The data sets contained very few errors which required adjustment.
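\par
A minimal sketch of this point-reducing average is given below. Again, the real code is ``script 3'' in the appendix, so the function and variable names, and the assumption that any leftover points at the end are simply dropped, are illustrative only.
\begin{verbatim}
function reduced = reduceByAveraging(ppm, blockSize)
% Collapse every blockSize consecutive points into their mean, producing
% one value per block; any leftover points at the end are dropped.
nBlocks = floor(numel(ppm) / blockSize);
reduced = zeros(nBlocks, 1);
for k = 1:nBlocks
    block = ppm((k-1)*blockSize + 1 : k*blockSize);
    reduced(k) = mean(block);
end
end
\end{verbatim}
With \texttt{blockSize} set to fifteen, a set of just over 6000 points comes out at roughly 400 values, which matches the reduction described above.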

\par
All of the graphs of the carbon dioxide above include a red trend-line. Evaluating these is the easiest way to see the upward trend in the concentration of carbon dioxide in the atmosphere. There was also a second major pattern recognized across all of the carbon dioxide data. This secondary pattern is the sinusoidal look that most of the data sets have. The Barrow data (Figure 11) shows this particularly well. However, there is a big question as to what could be creating this pattern. By looking at the years along the bottom of the figure, one can see that there is almost exactly one cycle per year in the carbon dioxide. This most likely means that it is a seasonal occurrence within a given year. The important thing to notice with this pattern is that it gets higher and higher each year; it looks like an $x\sin(x)$ style function. However, there were a couple of exceptions to the sinusoidal pattern. For example, the Pacific Ocean 30 South data did not show much of a pattern except that it was increasing with time.
\par
Next, a brief analysis of all the trend-lines was done to see how close the slopes truly are. This was done through Matlab's graphing system. These values are depicted in the table below.\\
*The original data sets, not smoothed versions.
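\par
The slopes themselves were read off through Matlab's built-in plot fitting; an equivalent command-line sketch using \texttt{polyfit} is shown below for reference. The variable names \texttt{years} and \texttt{ppm} are assumptions, not the actual workspace variables used in the project.
\begin{verbatim}
% Fit a first-order (linear) trend to one data set and read off its slope.
coeffs = polyfit(years, ppm, 1);          % coeffs(1) is the slope (ppm per year)
trend  = polyval(coeffs, years);
plot(years, ppm, 'b', years, trend, 'r'); % red trend-line over the data
slope  = coeffs(1)                        % displayed for the table of slopes
\end{verbatim}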

\par
Overall, the carbon dioxide trends support the research objective, because the ocean temperature anomaly data is increasing as well. This means that a correlation exists between the increasing carbon dioxide concentration and warmer temperature anomalies.
\par
Shown above in Figure 8 is the plot of the number of fires every year, and it has some interesting anomalies. The red trend line for the graph has a negative slope, which means that, overall, the number of fires has been decreasing since 1960. This is intriguing considering that, in Figure 9, the number of acres burned every year has been increasing. Another anomaly in Figure 8 is the sudden drop around 1983. Before this drop, the figure shows that the number of fires was consistently very high for about 10 years. After this low, the number of fires slightly increases, but it stays below any of the previous years.
\par
To better compare Figures 8 and 9, they were combined without the trend lines in Figure 15; a rough sketch of how such an overlay can be produced is given at the end of this subsection. This plot clearly shows similar spikes in the data from around 1975 to 1982 and what appears to be a divergence between the two series starting in about 2000. This divergence indicates an increasing severity of fires, as shown in the next figure.
\par
As analyzed above, global temperature is increasing, and in areas that are hot and dry, increasing temperatures make those areas even hotter and drier. With drier conditions there is more fuel for the fires to burn. This accounts for the increasing severity of the fires.
\par
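\par
The sketch below shows one way the two yearly series in Figure 15 could be overlaid in Matlab on shared axes with separate left and right scales. The variable names are assumptions, not the project's actual workspace variables, and \texttt{yyaxis} requires a reasonably recent Matlab release (older versions would use \texttt{plotyy} for the same purpose).
\begin{verbatim}
% Overlay the yearly fire counts and acres burned on one set of axes,
% using separate left/right y-scales so both series remain readable.
yyaxis left
plot(years, numFires);
ylabel('Number of fires');
yyaxis right
plot(years, acresBurned);
ylabel('Acres burned');
xlabel('Year');
\end{verbatim}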