This project consists of a month's work of data research, analysis, and results. The project's intent is to demonstrate how increasing carbon dioxide concentrations in the atmosphere are affecting the global temperature, and then to cross-analyze the ocean temperature anomaly data against the wildfire data to find a correlation between increasing temperatures and the drier conditions that facilitate wildfires.
The data was sifted through using a few labor-saving tools, such as Matlab and Emacs. The usefulness of Emacs comes from its ability to record keyboard macros that can be repeated thousands of times to remove columns of data or adjust a set as a whole. The data comes from several different sources across the internet. The errors and uncertainties were evaluated to gain a better understanding of how accurate the results are. Plots of the data are shown in various figures throughout the document. A strong majority of the graphs include trend lines that show an obvious upward trend. The final results indicate a strong correlation between the increase of carbon dioxide concentration in the atmosphere, the temperature anomaly data, and an increase in wildfire severity.
Talk of climate change is everywhere: it is on the news, in the papers, and in publications by researchers presenting new findings or backing up old ones. Climate change was originally called global warming, but now we know that the climate is doing more than just getting warmer. Assertions heard most commonly about climate change include sea level rise, warming oceans, glacial retreat, more frequent extreme events, and rising carbon dioxide (CO2) levels. This project focuses on three big aspects of climate change: a major cause, evidence of warming global temperatures, and a specific effect of the increasing temperatures.
Carbon dioxide has the largest overall effect on the Earth's atmosphere. The CO2 molecule is significantly larger than other molecules like hydrogen and oxygen. The amount of energy carbon dioxide holds is the standard for the greenhouse effect, with a rating of 1. Other molecules, like methane, have a much higher greenhouse rating; methane rates a 20 on the scale (1). However, their volume in the atmosphere is considerably less than that of carbon dioxide. When heat radiates from the Earth's surface, CO2 and other large molecules absorb the radiation and re-emit it back towards the surface (1). This is the main contributor to the greenhouse effect. In 2013 the US produced 5,505 million metric tons of CO2 (5), 70% of which came from transportation and from producing electricity by burning coal and other fossil fuels. The most effective way to reduce CO2 emissions is to reduce fossil fuel consumption (1).
An effective way to observe the warming effect from carbon dioxide is to analyze the ocean temperature anomalies. These anomalies are a good indicator of global average temperatures because water has a high heat capacity and does not change temperature without a large heat input (2). Looking at the temperature anomalies through time is the best option because those records are more reliable than absolute temperature records. A temperature anomaly is the difference between the expected average temperature and the actual temperature. Rising ocean temperatures lead to higher sea levels and have a strong effect on the climate and environment of certain areas. Higher ocean temperatures lead to stronger ocean storms and can destabilize marine habitats by aiding the spread of invasive species and marine diseases (2). The ocean temperature anomalies combined with the carbon dioxide data should show a significant correlation.
A specific effect of increasing temperatures could be an increase in wildfire severity. It is not a direct cause-and-effect relationship, but there are many ways that the changing climate is affecting wildfires. Fire seasons are becoming longer, conditions are becoming drier, creating more fuel for fires, and there is an increase in lightning (3). Wildfires have a very important role in nature: the fires return nutrients to the soil by burning dead or dying plants. Wildfires can become troublesome when they infringe on populated areas, where they threaten the lives and property of residents and firefighters. In the past, an average of more than 100,000 wildfires burned 4 to 5 million acres of U.S. land in a given year. Recently that average has peaked at 9 million acres (4). In addition, the overall area burned by fires per year in the United States is projected to double by late this century if the average summertime temperature increases by a mere 2.9 degrees F (3).
Is there a strong correlation between the carbon dioxide levels in the atmosphere and the temperature anomalies in the ocean? If there is a significant correlation, how does it relate to the severity of wildfires within the US?
The main objective of this project is to prove that climate change does exist, and to show that increased concentrations of carbon dioxide do increase the temperature of the Earth. The increase in temperatures will be demonstrated using temperature anomaly data of the ocean. With these two data sets in mind, an actual effect of the changing climate will be shown using wildfire data. The correlation between the increase in carbon dioxide and the ocean temperature anomalies should follow the same trend as the severity of wildfires. A secondary objective is to show this correlation using data from more recent sources. It is important for the purposes of this project to stick to data that is less than 150 years old, to show that the climate change is a modern occurrence rather than a long-term trend.
The data used in this project is modern compared to the time scale over which climate change occurs. The oldest data set used dates back to the late 1800s, because the data consists of modern instrumental recordings. An alternative would have been exploring ice-core data, which goes back thousands of years and could show a larger picture of the climate cycle. This approach was not used because one of the main objectives is to show that the correlation is a modern trend.
This project also had a lot of options in terms of which responding variables to examine against the increase in carbon dioxide. Rather than use wildfires as the indication of climate change, the project could have been based around the melting of ice sheets, rising ocean levels, or even reduced snowfall in the mountains. Wildfire severity was chosen because it is a less direct consequence, and it should increase as the climate gets warmer and drier, tying the correlation back to climate change.
This project covers a broad range of subjects, and finding data for all of the subjects can be difficult. This was the main restriction on the project. For example, there are only two data sets to use for the wildfire data. However, the carbon dioxide data was the easiest to access because it is kept by the EPA on their .gov site. This means that the data is free to use, and is open access. The other sets of data did not have the same advantages.
The other main restriction was the time the group had to work. Climate change is a very broad issue with many different datasets from around the world. Exploring every avenue was simply not possible, so the workload was fine-tuned and filed down to a reasonable level for a three-week project. This was also the hardest boundary to deal with, because it is very easy to go overboard when gathering information, and figuring out what is actually useful is difficult.
Finding and downloading all of the necessary data took a lot of time and effort. When the project was in its initial stages, possible sources of data were recorded, so the search for data started with those websites and locations.
The carbon dioxide data was found on the website originally recorded. NOAA holds data on carbon dioxide concentrations from around the world. The data was taken from locations like the Black Sea, Barrow, and Guam. Taking the data from such varied locations gives a better global perspective on how much carbon dioxide concentrations are changing.
The data acquired for the carbon dioxide analysis came from flask data sets. Flask data is taken by scientists who know the volume of the flask and then analyze that volume of air to determine the CO2 concentration in parts per million. This collection method was consistent across all of the sets, although there were multiple options to choose from that were not specifically flask data.
The last parameter used when gathering the carbon dioxide data was to make sure the data was up to date. This was not extremely difficult thanks to NOAA's filtering options: a filter was applied so that the data sets had to have been updated within the last 365 days. This means that all of the data sets have measurements within the last year, or slightly longer. However, updating a data set did not always mean adding new points; the worst set, in terms of new data, had its newest data point at the end of 2012.
Global sea surface temperature (SST) data was supplied from NOAA's Extended Reconstructed Sea Surface Temperature (ERSST) database. This data was collected both in situ and using satellite radiometer SST data, though there is historical trouble with cloud cover giving the satellites' infrared sensors a slight cold bias.
The temperature anomaly data is the difference between the actual temperature and the expected average temperature, so a positive anomaly means that a given year was warmer than the expected average. The expected average is a baseline usually built from the last three decades of temperatures. Anomalies are more relevant than absolute temperatures because they can be used to describe greater areas and can be more accurate than absolute temperature data. The idea is that saying March is warmer than February does not say much, but saying both months were warmer than the same months last year gives relevance and perspective.
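As a concrete sketch of this definition (written in Python here, though the project's analysis used MATLAB), an anomaly is simply the observed value minus a baseline average; the baseline length below is illustrative, not NOAA's exact base period:

```python
# Compute temperature anomalies against a baseline average.
# baseline_len is an assumed, illustrative window; real anomaly
# products typically use a fixed 30-year base period.
def anomalies(temps, baseline_len=30):
    baseline = sum(temps[:baseline_len]) / baseline_len
    return [t - baseline for t in temps]
```

A positive result means the observation was warmer than the baseline; over the baseline window itself, the anomalies average to zero by construction.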
The fire data was collected from the National Interagency Fire Center website. This site has many different data sets and information on fires from around the United States. The data set used has the total number and acreage of wildfires in the U.S. from 1960 to 2014. The use of wildfires is important because these are fires that were strictly caused by natural forces; no human-caused fires or prescribed fires were included in this report. Including those fires would significantly alter the analysis, and they do not accurately reflect the effects of climate change on the environment. Only U.S. fire data was used so this project could narrow its focus and give a better analysis of fires in this country.
The carbon dioxide data was the first to be downloaded. To start, the data sets were downloaded and placed into individual LaTeX files in Emacs. For this project's purposes, only two of the columns of data were needed: the date and the concentration of carbon dioxide in parts per million. There were quite a few columns of straight zeros that had to be dealt with. To remove these systematically, a keyboard macro was used in Emacs. This was done to all of the data sets to get them down to just the date and the concentration.
The next step was to place these data sets into Matlab for analysis. All of the data was imported as cell arrays because the dates contained symbols that Matlab could not use, and the concentration data was then converted into a Matlab array. When this was finished, all of the data sets were plotted. The graphs looked good, except that two of them had a lot of sharp peaks and valleys. To fix this, the two data sets were gone through with a fine-tooth comb to find outlier points and adjust them to reasonable values. After the major spikes and dips were adjusted, a smoothing function was run over each data set.
The first data set which required smoothing was the Baring Head data. An averaging function was used across all of the data points. This smoothing takes the values around a certain point, averages them, and replaces the point in question with that average value. The details of this script are in the appendix under 'script 1'. Both the original version and the smoothed version are shown below.
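The averaging smoother described above can be sketched as follows (a Python stand-in for the MATLAB script in the appendix; the window half-width is an assumed parameter, and the window shrinks at the ends of the series):

```python
# Centered moving-average smoothing: each point is replaced by the
# mean of itself and its neighbors within `half` points on each side.
# half=2 is an assumed value, not taken from the original script.
def smooth_mean(data, half=2):
    out = []
    for i in range(len(data)):
        lo = max(0, i - half)
        hi = min(len(data), i + half + 1)
        window = data[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

Sharp single-point spikes get pulled toward their neighbors, which is exactly the effect visible when comparing the raw and smoothed Baring Head plots.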
It is very easy to see the difference that the smoothing made on the sharper parts of the graph. This process made it much easier to analyze.
Next, the Black Sea data also needed some smoothing. For this data set a median smoothing method was used. This entails taking the values around a point in question and determining the median of that set; this median is then plugged in for the point in question. The details of this script are shown under 'script 2' in the appendix. The two graphs are shown below.
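A median smoother of the kind described might look like this (again a Python sketch of the appendix script, with an assumed window half-width):

```python
import statistics

# Median smoothing: each point is replaced by the median of a window
# around it. Unlike mean smoothing, a single extreme outlier barely
# shifts the result. half=1 below is an assumed parameter.
def smooth_median(data, half=1):
    out = []
    for i in range(len(data)):
        window = data[max(0, i - half):i + half + 1]
        out.append(statistics.median(window))
    return out
```

The design choice matters: a median filter discards isolated spikes entirely rather than spreading them into neighboring points, which suits a set with a few sharp bad values.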
The smoothing effect is very noticeable here as well. The curves are much smoother and easier to read than before.
Lastly, all of the data sets downloaded consisted of monthly recorded data, except the Ascension Island data, which had been recorded almost daily. This meant that the Ascension Island set contained more than 6000 points, far more than the other sets, which have only a few hundred values each. To reduce the number of values, a reducing averaging function was used on the data set. This function took every fifteen points, averaged them together, and saved the result as a single point in a new matrix. All of the dates were discarded for this data set except the initial and final dates. This function is depicted under 'script 3' in the appendix. In total, this brought the data set down to a more reasonable 400 points.
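The block-averaging reduction works like this (a Python sketch of 'script 3'; the block size of 15 matches the description, and a short final block is simply averaged over fewer points):

```python
# Downsample by block averaging: every `block` consecutive points
# collapse into one averaged point, so ~6000 daily readings reduce
# to ~400 points, as described for the Ascension Island set.
def block_average(data, block=15):
    out = []
    for i in range(0, len(data), block):
        chunk = data[i:i + block]
        out.append(sum(chunk) / len(chunk))
    return out
```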
The carbon dioxide data was located and downloaded from NOAA's climate change website. The data sets contained very few errors which required adjustment.
The temperature anomaly data was supplied from the NOAA EITTC database. The raw data had eight columns, though this project only required two of them. Using Emacs, a text editor, the data was copied into a file that could be easily read into the MATLAB files. The relevant data columns were loaded into MATLAB as cell arrays and converted into arrays of doubles. The array of temperature anomalies was plotted against a corresponding time array to show the temperature anomalies by year.
Immediately it is evident that the data is very dense and unfiltered. All of the records for each latitude band go back into the 1800s, with a point for every month of the year, adding up to a very detailed, large dataset with many small spikes and fluctuations. In order to reduce and simplify the data to focus on the bigger trends, the data was smoothed. This was accomplished by using a while loop over every data point, replacing each point with the average of itself and the two points before and after it. This averaging procedure brought the more radical spikes and dips down to more reasonable slopes that made more sense at such a large scale.
The fire data was in good shape and required very little processing. It was a small dataset and did not need reducing. The only processing required was flipping the data so it ran from 1960 to 2014 instead of the other way around, and separating the years, acres, and fires into three different vectors. The numbers of fires and acres burned were first plotted separately, and lines were fitted to them to show the overall trends. The equations of the lines are shown on the figures below.
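The line fitting described above (done in the project with MATLAB's polyfit) amounts to ordinary least squares; a self-contained Python equivalent, offered as an illustrative sketch, is:

```python
# Fit a straight line y = m*x + b by least squares, the same
# operation as MATLAB's polyfit(x, y, 1). Pure Python, no libraries.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - m * mx
    return m, b
```

Applied to the fire counts and acreage vectors, the sign of the returned slope is what distinguishes the decreasing number of fires from the increasing acres burned.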
The analysis of the carbon dioxide started in a very simple manner. The graph of all of the data was examined first, and it is very easy to see the trend that all of the data shares (Figure 2): all of the general trends follow each other nearly exactly. One thing which made the data slightly harder to analyze was the differing date ranges of the data sets. Some of the sets had their first recording as late as 1999, and some as early as 1979. This was solved using the linspace() command in Matlab, which takes a start value, an end value, and the number of points in between. This allowed the data sets to all be plotted on the same graph with consistent time values. As far as the trends go, it is very easy to see the upward trend as time goes on. The rest of the data graphs are shown below, excluding the Black Sea and Baring Head sets because they are shown earlier as Figures 3-6.
All of the graphs of the carbon dioxide above include a red trend line. Evaluating these is the easiest way to see the upward trend in the concentration of carbon dioxide in the atmosphere. There was a second major pattern recognized across all of the carbon dioxide data: the sinusoidal look that most of the data sets have. The Barrow data (Figure 11) shows this especially well. There is, however, a big question of what could be creating this pattern. By looking at the years on the bottom of the figure, one can see that there is almost exactly one cycle per year in the carbon dioxide, which most likely means it is a seasonal occurrence within a given year. The important thing to notice is that the pattern climbs higher each year; it looks like an x*sin(x)-style function. There were a couple of exceptions to the sinusoidal pattern. For example, the Pacific Ocean 30 South data did not show much of a pattern except that it was increasing with time.
Next, a brief analysis of all the trend lines was done to see how close the slopes truly are. This was done through Matlab's graphing system. The values are depicted in the table below.
*The original data sets, not smoothed versions.
| Data Location | Figure number | Slope of trend-line |
| Pacific Ocean 30 South | 14 | 1.8 |
Overall, the carbon dioxide trends support the research objective because the ocean temperature anomaly data is increasing as well. This means that a correlation exists between the increasing carbon dioxide concentration and warmer temperature anomalies.
After smoothing the temperature anomaly graphs, linear trend lines and higher-degree trend lines were fitted to show how the patterns have taken shape over the past 100 years.
Interesting things to note are how the 7th-degree trend line reveals that the temperature anomalies are cyclical in nature, while the linear trend line shows that there is an overarching upward trend: in the 1980s there is a major departure from the zero line, and almost all years after that have been warmer than the expected average.
The cyclical pattern of the SST has a period of approximately 60 years, with even smaller fluctuations within these overarching cycles, with amplitudes of approximately half a degree. The fluctuations tend to run warmer and warmer through time, and when the temperature anomaly cools down in the regular cycle, it does not cool off as much, making the cooling downtrend unable to compensate for the complementary upward warming trends. In the end this creates the positive linear trend shown, where even though the cycle still has a 30-year cooling period, the warming trend overpowers it. Though these are the trends in the middle latitudes, there have been different trends elsewhere.
In this plot, there actually appears to be a cooling trend that contradicts all the other data sets, yet this seemingly contradictory data actually coincides with the warming trends of the middle latitudes. The atmospheric cells all interact with each other and affect temperatures and winds in the neighboring cells. An increase in temperature in the tropics leads to more heat rising up into the circulation, and eventually this creates air with lower temperatures and higher wind speeds in the polar cell. These cold, strong winds push out to the coast of Antarctica, carrying lower temperatures to the oceans and subsequently creating more ice that is also pushed out into the ocean. This combination of new ice and cold winds decreases the temperature of the oceans, which all draws back to the rising temperatures at the equator.
Both the CO2 emissions and the SST anomalies have recent upward trends globally, and there also seems to be a correlation between the locations of the CO2 emissions and the sea temperatures. The latitude band 30N to 60N contains both the Black Sea and Cold Bay in Alaska; both of these areas had high CO2 emission rates, which correlates with the high temperature anomalies from 30N to 60N. However, there again seem to be some troubles with correlation in the south: the Pacific Ocean 30 South location had a relatively high CO2 emission rate listed, yet the ocean temperature anomalies there have a downward trend, which could be explained by the atmospheric cells.
Shown above in Figure 8 is the plot of the number of fires every year, and it has some interesting anomalies. The red trend line has a negative slope, which means that overall the number of fires has been decreasing since 1960. This is intriguing considering that in Figure 9 the number of acres burned every year has been increasing. Another anomaly in Figure 8 is the sudden drop around 1983. Before this drop the figure shows that the number of fires was consistently very high for about 10 years. After this low, the number of fires slightly increases, but it stays below any of the previous years.
To better compare Figures 8 and 9, they were combined without the trend lines in Figure 18. This plot clearly shows similar spikes in the data around 1975 to 1982 and what appears to be a divergence between the plots starting around 2000. This divergence indicates an increasing severity of fires, as shown in Figure 19.
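One simple way to quantify that divergence, assuming severity is taken as average acres burned per fire in each year (this ratio is an illustrative choice, not necessarily the exact metric behind Figure 19), is:

```python
# Average acres burned per fire, year by year: fewer fires burning
# more land shows up as a rising ratio. Input numbers here are
# made up for illustration, not real NIFC values.
def severity(acres, fires):
    return [a / f for a, f in zip(acres, fires)]
```

With the number of fires falling and acreage rising, this ratio necessarily increases, which is the divergence the combined plot makes visible.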
As analyzed above, the global temperature is increasing, and in areas that are hot and dry, increasing temperatures make the areas even hotter and drier. With drier conditions there is more fuel for the fires to burn. This accounts for the increasing severity of the fires.
The carbon dioxide data did have a couple of issues that needed to be dealt with. To start, a few of the data sets had some -9999.0 values, which were eliminated and replaced. The replacement process is where some error could have been introduced: it consisted of deleting the original value and putting in a value similar to the points around it. The error here would arise if the point was genuinely anomalous and significant; however, given the strong pattern elements in the data sets, this does not seem to be the case.
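The replacement step can be automated rather than done by hand; a Python sketch of one plausible approach (averaging the nearest valid neighbors, which mirrors the "similar value to the points around it" rule described above) is:

```python
# Replace -9999.0 sentinel values with the mean of the nearest valid
# neighbors on each side. This is one plausible automation of the
# manual fix described in the text, not the project's exact method.
SENTINEL = -9999.0

def replace_sentinels(data):
    out = list(data)
    for i, v in enumerate(out):
        if v == SENTINEL:
            left = next((out[j] for j in range(i - 1, -1, -1)
                         if out[j] != SENTINEL), None)
            right = next((out[j] for j in range(i + 1, len(out))
                          if out[j] != SENTINEL), None)
            neighbors = [x for x in (left, right) if x is not None]
            out[i] = sum(neighbors) / len(neighbors)
    return out
```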
Next, a couple of the data sets had very little natural curvature. The Baring Head and Black Sea data show how spiky the data can be at its worst. This was fixed by running a smoothing function over both of these sets; the two smoothed graphs are depicted as Figures 4 and 6. This process could have removed some important data spikes, but it did not have a major impact on the analysis of the data.
The last error that could have existed in the carbon dioxide data came from creating the time lines on the bottoms of the graphs. Matlab did not work well when the data sets repeatedly used the same numbers, such as a year: the data would have ten or twelve measurements for a given year. To fix this, Matlab's linspace() method was used. It takes an initial value, an ending value, and how many points to generate between those two values. For example, if a data set has 400 points and dates from 1979 to 2014, it creates 400 values evenly incremented from 1979 to 2014. This worked well for plotting the data and getting the years on the bottom axis. The error comes from the fact that not all years have the same number of points; some have fewer than others, so evenly spacing the points is not one hundred percent accurate. However, this error would not be very significant, because it would not affect the slopes of the trend lines by more than 0.1, which was the main purpose of all the data graphs. In total, there was not a significant amount of error within the carbon dioxide data to make a real difference.
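For reference, linspace() is simple enough to write out; this Python sketch reproduces the behavior used for the time axis described above:

```python
# A minimal pure-Python version of MATLAB's linspace(start, stop, n):
# n evenly spaced values from start to stop inclusive, as used to
# build the year axis for a 400-point data set.
def linspace(start, stop, n):
    step = (stop - start) / (n - 1)
    return [start + i * step for i in range(n)]
```

The uniform spacing is exactly the approximation discussed in the text: real measurements are not evenly distributed within each year, so the generated dates are only approximately correct.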
Error in the global temperature anomaly data could come from both the physical data acquisition and the data processing. Some of the more recent data points could have been collected by satellite, which has been known to carry a cold bias from its infrared readings being skewed by cloud cover. During processing, the smoothing could have slightly changed the data; comparing the linear trends of the raw and smoothed data indicates how large this effect is.