Public Articles
A Review of Exoplanet Discovery and the Structure of Exoplanet Systems
Throughout the history of exoplanet research there have been two main competing theories about how exoplanets form around their host stars. The first is the monistic theory, which holds that planet formation is closely tied to the formation of the host star, with planets forming in the outer rings of the protostar. The dualistic theory, by contrast, holds that exoplanet and star formation are completely separate events: either the exoplanets formed at a later time around the host star, or they formed elsewhere and were captured by it. We will discuss the evidence that observers have collected over the years and give an overview of how the data can give us insight into exoplanet formation.
Since the first astronomers, we have gradually come to know our own solar system well. We know that there are 8 planets (sorry, Pluto), four being smaller and rocky and four being larger and gaseous. The orbits of its planets are very close to circular, with a mean eccentricity of 0.06 and individual eccentricities ranging from 0.0068 to 0.21. Even the mean inclinations vary by only a little. We know this and more about our own solar system, but we do not know whether these properties are necessary for other systems to form, only that they may be clues to help discover them. Here we hope to add to our understanding of other systems.
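The figures quoted above are easy to check. A minimal Python sketch, using standard ephemeris values for the eight planets' eccentricities (illustrative values, not taken from this review):

```python
# Orbital eccentricities of the eight planets (approximate standard values).
ECCENTRICITIES = {
    "Mercury": 0.2056, "Venus": 0.0068, "Earth": 0.0167, "Mars": 0.0934,
    "Jupiter": 0.0489, "Saturn": 0.0565, "Uranus": 0.0457, "Neptune": 0.0113,
}

def mean_eccentricity(ecc=ECCENTRICITIES):
    """Arithmetic mean of the planetary eccentricities."""
    return sum(ecc.values()) / len(ecc)

print(round(mean_eccentricity(), 3))                # close to the quoted 0.06
print(min(ECCENTRICITIES.values()),
      max(ECCENTRICITIES.values()))                 # the quoted 0.0068-0.21 range
```

The mean and range reproduce the 0.06 and 0.0068-0.21 figures cited in the text.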
First we will review several different methods of exoplanet detection and the significance of each method. Each method has its own set of distinct properties that can help detect a variety of planets. Next we will discuss the geometry of exoplanet systems, including eccentricity and orbital distances, systems containing multiple planets, and systems with multiple stars.
A vast majority of exoplanet discoveries come from the Kepler Space Telescope. Kepler is a 0.95 meter photometric telescope that orbits above Earth's atmosphere. Its main goal is to find exoplanets, specifically Earth-like terrestrial planets that orbit in their host star's habitable zone. Kepler's first task was to survey over 150,000 stars in search of planets transiting in front of their star, which was a success. Early 2013 brought Kepler to the end of its first mission. Two of the four reaction wheels used to carefully position the spacecraft had failed, and after several months of attempts to repair the craft NASA was forced to admit defeat. However, that was not the end of Kepler. Kepler can still work with just two working reaction wheels: by positioning the craft in the plane of its orbit, it needs only the Y- and Z-axis wheels along with the X-axis thrusters. Kepler takes continuous data while pointed at the same small portion of the sky, but must cycle through different fields of view during the year. To avoid interference from the Sun, Kepler must reposition its target field of view roughly every 80 days so that the view is never blocked. To this day Kepler has discovered well over 2,000 exoplanets using the transit method of planet detection (NASA, 2017).
Here we will delve into the methods of detecting exoplanets. Each method has a selection bias toward planets with certain properties, which highlights the importance of having a diverse arsenal of detection methods.
Figure 1: Giant planets dominate the observed systems even though small planets are more abundant when simulated. Hot Jupiter-like planets pile up in places, implying biases in our methods.
The transit method requires that planets pass through the line of sight between us and the host star, which clearly introduces a bias. Since this method measures light blocked by planets, it will tend to favor planets that have a larger radius and a smaller orbital distance. Planets that do not cross the line of sight cannot be detected at all, which is troubling when trying to derive an accurate occurrence rate. To account for these biases we factor in efficiencies; for the transit method they are the probability of the planetary orientation, the efficiency of the survey in detecting the planets, and the rate of false positive events.
These efficiencies do not remain constant; we must factor in the properties of the star as well as the properties of the planet, such as radius and orbital period.
The transit probability attempts to account for the random orientation of a planet around a host star. The probability is
\begin{equation} \eta_{tr} = \dfrac{R_*}{a} = \left(\dfrac{3\pi}{G\rho_* P^2}\right)^{1/3} \quad \text{for } R \ll R_* \end{equation}
with stellar radius $R_*$, semi-major axis $a$, orbital period $P$, and stellar mean density $\rho_*$. Here the mean density can be approximated as that of the Sun for simplicity, as uncertainties only enter the result as $\rho_*^{-1/3}$. This equation implies that planets with shorter periods have a higher probability of being discovered. It should be noted that multi-planet systems do not have a different probability, because systems are randomly oriented \cite{Youdin_2011}.
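The transit probability above can be evaluated numerically. A minimal sketch, using standard solar constants (the specific numbers are illustrative, not taken from the cited paper):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
RHO_SUN = 1408.0  # mean solar density, kg m^-3
R_SUN = 6.957e8   # solar radius, m
AU = 1.496e11     # astronomical unit, m

def transit_probability(period_days, rho_star=RHO_SUN):
    """Geometric transit probability eta_tr = (3*pi / (G * rho * P^2))**(1/3)."""
    P = period_days * 86400.0  # convert days to seconds
    return (3.0 * math.pi / (G * rho_star * P**2)) ** (1.0 / 3.0)

# Sanity check: for an Earth analogue the density form should match R_sun / 1 AU.
print(transit_probability(365.25))  # about 0.5%
print(R_SUN / AU)                   # same, computed geometrically
```

The two printed values agree, and shorter periods give larger probabilities, as stated in the text.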
Figure 2: Discovery efficiency.
The Doppler method uses the Doppler shift of the light received from stars as a means of detection: an orbiting planet tugs its host star back and forth, periodically shifting the star's spectral lines. Distinguishing these shifts is extremely difficult due to noise in the starlight, so the Doppler method is best suited to massive planets in close orbits, which produce the largest velocity signals.
The work done by \cite{Cumming_2008} has given good insight into planet detection with the Doppler velocity method. They observed nearly 600 stars for 8 years in search of potential exoplanets. They found that planets with very large masses and short periods were rare, although some candidates still need more observing time. Their data also show that gas giants are about 5 times more likely to be detected with a period of 300 days or more, which differs from the theoretical prediction of a smooth increase in the likelihood of gas giants as the period increases.
In contrast, it has been found that small planets with short periods tend to occur in multiples around their host star. \cite{Howard_2010} conducted a study in 2010 with a survey of almost 200 stars. Each star was required to be well characterized, meaning its properties fell within a set of bounds deemed ideal for the survey, including brightness, radial distance, luminosity, and observability.
Taking several measurements of radial velocity over a five year span with the Keck Telescopes, they then matched the velocity measurements with the maximum planetary mass that would be possible given the velocity. For the stars where no planets were observed, a calculation was done to determine a mass limit: above the limit we can say with a high degree of certainty that no planet is possible, while below the limit we cannot rule out the possibility of a planet.
After accounting for the possible missed planets, they calculated the occurrence rate, focusing on the planets with a period of 50 days or less, and fit the data to a power law,
\begin{equation} \dfrac{df}{d\log{M_E}} = k M_E^{\alpha}, \end{equation}
where $k = 0.39 \substack{+0.27 \\ -0.16}$ and $\alpha = -0.48 \substack{+0.12 \\ -0.14}$. Their results supported an occurrence rate of 1.2% for hot Jupiters within 0.1 AU, a rate of $14 \substack{+8 \\ -5}\%$ for planets of 1 to 3 $M_\oplus$, and a rate of $23 \substack{+16 \\ -10}\%$ for Earth-like planets of 0.5 to 2 $M_\oplus$.
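The quoted percentages can be recovered by integrating the power law over each mass bin. A sketch using the central values of $k$ and $\alpha$ from the fit (the closed form below follows from substituting $M = 10^x$):

```python
import math

def occurrence_fraction(m_lo, m_hi, k=0.39, alpha=-0.48):
    """Integrate df/dlogM = k * M**alpha between two masses in Earth masses.

    The closed form of the integral from m_lo to m_hi is
    k * (m_hi**alpha - m_lo**alpha) / (alpha * ln(10)).
    """
    return k * (m_hi**alpha - m_lo**alpha) / (alpha * math.log(10))

print(occurrence_fraction(1.0, 3.0))  # close to the quoted 14% bin
print(occurrence_fraction(0.5, 2.0))  # close to the quoted 23% bin
```

Both bins land within a couple of percent of the central values reported by \cite{Howard_2010}.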
The microlensing method is much more effective than either of the previous methods for detecting planets at extreme distances. Unlike those methods, microlensing detections are based on mass rather than luminosity, meaning that we do not need to collect light from the planet or from the host star. This provides the ability to detect extremely distant planets as well as planets that have no host star at all. The magnification is characterized by the Einstein radius crossing time,
\begin{equation} t_E = \sqrt{\dfrac{M}{M_J}}\ \text{days}, \end{equation}
where $M_J$ is Jupiter's mass, $9.5 \times 10^{-4} M_\odot$.
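The crossing-time scaling can be sketched directly. Masses here are in solar units; the Earth-mass value is a standard illustrative figure, not from the cited survey:

```python
import math

M_J = 9.5e-4     # Jupiter's mass in solar masses, as quoted above
M_EARTH = 3.0e-6 # Earth's mass in solar masses (approximate standard value)

def einstein_crossing_time_days(mass_solar):
    """t_E ~ sqrt(M / M_J) days, the scaling quoted in the text."""
    return math.sqrt(mass_solar / M_J)

print(einstein_crossing_time_days(M_J))      # 1 day for a Jupiter-mass lens
print(einstein_crossing_time_days(M_EARTH))  # only a few hours for an Earth-mass lens
```

This makes clear why the short-timescale events discussed below are attributed to planetary-mass lenses.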
In 2007, with data from the Microlensing Observations in Astrophysics (MOA) survey, several thousand microlensing events were observed. After narrowing these down to about 500 that fit within well defined criteria, the events were further narrowed to just 10 that had a $t_E$ of about 2 days. The Optical Gravitational Lensing Experiment (OGLE), another microlensing survey, was able to observe seven of those ten events and confirm six of them \cite{Sumi_2011}.
Several features of exoplanet systems have now been probed with several methods. We see that although there is some discrepancy in the exact rate, gas giants tend to have long periods compared to small, rocky planets.
The eccentricity of a planet describes the shape of its orbit around its host star. An eccentricity of zero means that the orbit is perfectly circular, while an eccentricity of one means the orbit is a parabola. Planets generally do not have an eccentricity of one, however, as that would cause the planet to become detached from the host star's gravitational pull and be sent off into space.
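A quick way to build geometric intuition for eccentricity is the ratio of a planet's farthest to closest approach, $(1+e)/(1-e)$, which follows from the apoapsis $a(1+e)$ and periapsis $a(1-e)$ of an ellipse. A small sketch (the 0.927 value corresponds to the high-eccentricity planet discussed later in this review):

```python
def apsidal_ratio(e):
    """Ratio of apoapsis to periapsis distance, (1+e)/(1-e), for a bound orbit."""
    if not 0.0 <= e < 1.0:
        raise ValueError("bound orbits require 0 <= e < 1")
    return (1.0 + e) / (1.0 - e)

print(apsidal_ratio(0.0))    # 1.0: a perfectly circular orbit
print(apsidal_ratio(0.0167)) # Earth: nearly circular
print(apsidal_ratio(0.927))  # roughly 26 times farther at apoapsis than periapsis
```

As $e \to 1$ the ratio diverges, matching the statement that such an orbit is no longer closed.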
\cite{Pourbaix_2004} used the data of the Ninth Catalogue of Spectroscopic Binary Orbits (SB9) to try to show a correlation between a planet's period and eccentricity. The catalogue adds to the previous edition published three years prior and contains data from 2386 systems, current through May of 2004.
\cite{Naef_2001} conducted a study of HD 80606 b, the highest-eccentricity planet known at the time, with an eccentricity of 0.927 and a mass of 3.9 $M_{Jup}$. \cite{Naef_2001} formulated several ideas on the cause of the abnormally high eccentricity. First, the eccentricity could be due to interactions between the planet and a disk; however, a recent study suggests that this would only happen for objects with a much higher mass, such as a brown dwarf. Another possible cause would be interactions with other planetary objects. Certain dynamical instabilities could lead to this high eccentricity and even eject a planet out of the system altogether.
\cite{Juri__2008} builds on the latter theory by running simulations of forming solar systems. They used two different computer clusters, one for short-lived systems spanning about $10^6$ years and one for longer-lived systems spanning from $10^6$ to $10^8$ years. The initial conditions for semimajor axis and planetary mass were drawn from a large but reasonable distribution. The initial number of planets was chosen to be 3, 10, or 50 for different runs. Planets were modeled as homogeneous spheres with densities of 1 g cm$^{-3}$. In all but one run the planets were allowed to collide, with collisions assumed to be inelastic.
They make several, admittedly "aggressive", interpretations of the final results. First, they discuss the early stage of the formation process, where the initial conditions are set before the simulations run. Quite plainly, they state that there may not be a "good" set of initial conditions and that the conditions might be much more random than previously thought. They also note that the systems must already be dynamically active before the simulation begins.
Next, planet-planet interactions do change the orbital planes of the planets, widening the distribution of inclinations. The average number of planets in a system should be 2 to 3 after a period of $10^8$ years, regardless of the number of planets at the start of the simulation. Just 5% of systems had four or more planets. This suggests that if one exoplanet is detected in a system, the system is likely to host at least one other similar planet.
For e ≥ 0.2 the final results of the simulations agree very well with the observed eccentricities of exoplanet systems. The results also showed a deficiency of planets with e ≤ 0.2, which might suggest that the population of systems is not dynamically active or that the eccentricities are damped by gas and other planetesimals. Observation tells us that only about a quarter of exoplanet systems are inactive. They also found very little correlation between semimajor axis and eccentricity.
The Roche Limit is the distance at which an object held together only by its own gravity will be torn apart by the tidal force of a second object. The limit clearly plays a significant role in determining how close a planet can safely orbit a host star. The Roche Limit can be expressed by,
\begin{equation} d = 1.26 R_M \left(\dfrac{\rho_M}{\rho_m}\right)^{1/3} \end{equation}
where $R_M$ is the radius of the primary object, $\rho_M$ is the primary object's density, and $\rho_m$ is the secondary object's density.
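The limit is straightforward to evaluate. A sketch assuming the standard rigid-body coefficient of 1.26, illustrated with approximate Earth-Moon values (these constants are illustrative, not from the cited study):

```python
def roche_limit(r_primary_m, rho_primary, rho_secondary):
    """Rigid-body Roche limit d = 1.26 * R_M * (rho_M / rho_m)**(1/3).

    r_primary_m   -- radius of the primary, in meters
    rho_primary   -- density of the primary, kg/m^3
    rho_secondary -- density of the secondary, kg/m^3
    """
    return 1.26 * r_primary_m * (rho_primary / rho_secondary) ** (1.0 / 3.0)

# Earth-Moon illustration with approximate mean values:
R_EARTH = 6.371e6    # m
RHO_EARTH = 5514.0   # kg/m^3
RHO_MOON = 3344.0    # kg/m^3
print(roche_limit(R_EARTH, RHO_EARTH, RHO_MOON) / 1e3)  # roughly 9,500 km
```

A rigid Moon would thus be disrupted inside roughly 9,500 km of Earth's center, well within the Moon's actual orbit.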
The density ratio is, by inspection, a defining factor of the limit. A study by \cite{Rappaport_2013} reinforces this by examining the planet with the shortest known orbit, 4.2 hours. Assuming an iron core and silicate mantle, they find possible planetary densities of 3.5 to 3.9 g/cm$^3$ and 10 to 12 g/cm$^3$, leading to minimum allowed periods of roughly 6.0 and 3.6 hours, respectively.
The vast majority of our knowledge of eccentricities in other solar systems comes from the Doppler detection method. The transit method cannot directly tell us a planet's eccentricity, and radial velocity measurements are often impossible because the targets are too faint. \cite{Dawson_2012} takes a particular interest in investigating the eccentricities of hot Jupiters, planets that are so close to their host star that they could not have formed there. The main theory of how these planets came to be in such close vicinity to their star is that they were moved there by gravitational perturbations from other celestial objects. Therefore these planets should show up as failed and partially formed as well as complete planets, since the perturbations may have taken place at any point in the planet's formation cycle.
\cite{Barnes_2007} shows that the eccentricity of a transiting planet can be accurately measured using a light curve. They demonstrated this on a planet with a known eccentricity, HD 17156 b with e = 0.67 ± 0.08, fitting it with a few simulated light curves using data from both partial and full transit observations. Barnes describes the process: first, create light curves using some assumed data about the planet, such as the period if the eccentricity were zero, as well as a best fit light curve. The eccentricity induces varying planetary velocities throughout the orbit, and those variations show up as asymmetries in the best fit light curve. The two can then be compared to determine the eccentricity. Barnes also makes the important note that this will not work for very small eccentricities, as the small effect likely will not show on the light curve.
The main takeaway from these papers is that light curves from transit observations can give a reasonably accurate estimate of a planet's eccentricity. The highlight is that this light curve method does not require radial velocity measurements, which often cannot be made due to faintness. Furthermore, exoplanets that are bright enough for radial velocity measurement will benefit from more precise eccentricity measurements with more accurate constraints.
There are some planets that orbit with irregular inclinations; however, the majority of discovered planets share an inclination with their star's disk remnant. This does not necessarily mean that the systems formed that way, as their inclinations may have changed due to various perturbations after the system's formation. Given a monistic theory of planet formation, planets should form coplanarly in a single disk of gas. \cite{Kennedy_2013} makes this simple but powerful point.
\cite{Kennedy_2013} discusses one system in particular, HD 82943, a Sun-like star that hosts two Jupiter-like planets at roughly 1 AU. Recent data show that the orbital inclination of these two planets is 20° ± 4°. It must be assumed that these planets share the same inclination; otherwise no constraints can be made on the inclination. The evidence also shows that the star has an inclination of 28° ± 4°. This implies an extreme likelihood that HD 82943 and its two planets have the same or nearly the same inclination.
The planets in our own Solar System travel on roughly coplanar orbits around the Sun; however, there is nothing that says we cannot have systems of planets with widely varied orbital inclinations. This suggests that the closeness of inclination is related to the initial conditions of the formation of the Solar System. It is possible that systems of highly inclined planets are prevalent, but instabilities in the orbits cause a number of the planets to collide or otherwise be destroyed.
\cite{Veras_2004} investigates how exoplanet systems form with respect to their inclination. We simply cannot know whether these systems formed in the same plane together or formed with high mutual inclinations. They explain that there are some known multiple-planet systems where constraints can be placed on the system, but these only rule out extreme variations. They also acknowledge that planet-planet gravitational scattering could play a key role in producing high-inclination planets and, to a lesser extent, eccentricities. While scattering inward is an unlikely explanation for hot Jupiters, it is a very plausible theory to explain outward scattering given observations of stellar disk debris. Large radii would be fairly common for these planets, accompanied by higher eccentricities.
\cite{Veras_2004} finally discusses how the calculations done, as well as the overall prediction, are consistent with our observations of exoplanetary systems. However, gravitational scattering would take place on a shorter scale than the typical timescale of disk dissipation. This tells us that there must be some other forces that contribute to forming a planet's inclination and eccentricity. Planets forming with relatively large separations would lack this gravitational scattering and be subject to fewer dynamical changes. If the separations were random, then only a fraction of the systems would be subject to the scattering, which is the case given past observations. This leads to the conclusion that planet-planet gravitational scattering as well as disk-driven migration both play a part in planetary system formation.
The Titius-Bode Law can accurately predict the orbital spacing of every planet in our own Solar System, with the exception of Neptune. This law, $a_n = 0.4 + 0.3 \times 2^n$ in AU, not only works for our eight well known planets, but also predicts Ceres and the asteroid belt. The dwarf planet Pluto also fits well into the series if Neptune is excluded. So does Bode's Law only apply to our system of planets, or is there more to it?
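The series is easy to tabulate. A short sketch comparing it with approximate observed semi-major axes (the observed values are standard illustrative figures, not from the cited work; Mercury conventionally takes the 0.4 AU floor of the series):

```python
def titius_bode_au(n):
    """Predicted semi-major axis in AU: a_n = 0.4 + 0.3 * 2**n.

    By convention Mercury takes the 0.4 AU floor (pass n=None),
    Venus n = 0, Earth n = 1, and so on.
    """
    return 0.4 if n is None else 0.4 + 0.3 * 2**n

observed_au = {  # approximate observed semi-major axes, AU
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.2,
    "Neptune": 30.1,
}
indices = [None, 0, 1, 2, 3, 4, 5, 6, 7]
for (name, a_obs), n in zip(observed_au.items(), indices):
    print(f"{name:8s} predicted {titius_bode_au(n):6.2f} AU, observed {a_obs:6.2f} AU")
```

Every body fits to within about 6% except Neptune, whose predicted 38.8 AU badly misses the observed 30.1 AU; the same 38.8 AU slot instead fits Pluto, exactly as described above.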
Simple simulations can be used to test Bode's Law. First, we must impose a few constraints. Planets must not destabilize one another's orbits; in other words, they need to be a sufficient distance apart in order to avoid instabilities. Results of the simulation will be highly dependent on the distances that are determined to meet those conditions. The simulations are then run several thousand times per set of conditions. Some exceptions are allowed in these simulations, mirroring the exceptions that are often made to Bode's Law when it concerns the Solar System. A gap in the models is also allowed, as occurs in the Solar System in the form of the asteroid belt.
From the data, \cite{Hayes_1998} concludes that with stricter conditions on the radii, the simulated systems tend to fit better with Bode's Law. With no exceptions or gaps, the Solar System does not fit much better than near-random simulations with very relaxed conditions. With gaps and up to three exception planets allowed, the Solar System fits very well. However, these exceptions were made specifically to improve the fit to Bode's Law. \cite{Hayes_1998} admits that the approach used is very simplistic; if the simulations were done with realistic orbit integrations instead of random ones, the results would likely fit a general Bode Law much better.
Another aspect to be considered deals with mean motions and resonance. Using data taken from the Connaissance des Temps in 1949, the goal of \cite{Roy_1954} was to find every resonance between two bodies in the Solar System up to an arbitrary upper limit of 7. With this they could determine the frequency of resonances with respect to random planetary spacings. $\epsilon$ was defined as a degree of commensurability, comparing the actual mean-motion ratio with nearby integer ratios. It can be represented by,
\begin{equation} \epsilon = \left\lvert \dfrac{n_2}{n_1} - \dfrac{A_2}{A_1} \right\rvert \end{equation}
where $n_1 \le n_2$ are the mean motions, and $A_1$ and $A_2$ are integers up to 7 with $A_2 \ge A_1$.
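The search for the best commensurability can be sketched as a brute-force minimization over integer pairs. The Neptune and Pluto periods below are standard approximate values used only for illustration:

```python
from itertools import product

def commensurability(period1_yr, period2_yr, max_int=7):
    """Smallest eps = |n2/n1 - A2/A1| over integer pairs up to max_int.

    Mean motion n is proportional to 1/P, so with P_slow >= P_fast the
    mean-motion ratio n2/n1 equals P_slow / P_fast >= 1.
    Returns (eps, A2, A1).
    """
    p_slow, p_fast = max(period1_yr, period2_yr), min(period1_yr, period2_yr)
    ratio = p_slow / p_fast
    return min(
        ((abs(ratio - a2 / a1), a2, a1)
         for a1, a2 in product(range(1, max_int + 1), repeat=2)
         if a2 >= a1),
        key=lambda t: t[0],
    )

# Neptune-Pluto: the well-known 3:2 mean-motion resonance.
eps, a2, a1 = commensurability(164.79, 247.94)
print(eps, a2, a1)  # a small eps with the ratio 3:2
```

The search recovers the familiar 3:2 Neptune-Pluto resonance with a very small $\epsilon$, the kind of near-commensurability \cite{Roy_1954} catalogued.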
From here we can find any resonances between celestial bodies. Each of the eight planets, as well as Pluto, shares several resonances with the others. Furthermore, in a sample of 46 pairs, several moons of the Solar System share resonances: one between the two moons of Mars, seven among the moons of Jupiter, 16 among the moons of Saturn, and 9 among the moons of Uranus. The probabilities of these resonances were then calculated and compared against a control distribution of random spacings. \cite{Roy_1954} finds that the universe has some preference toward resonances, as there is a much lower chance of finding planets and moons with these spacings and resonances than by random chance.
With the more recent discoveries of exoplanetary systems, the question of whether binary systems can also form planets has become increasingly interesting. The notion of binary systems is nothing new to us; they can be comprised of anything from white dwarfs to main sequence stars like the Sun to black holes. The majority of current models of planet formation use only one star, a bias that stems from our own Solar System. We will examine how the addition of a second star would likely affect the formation of planets in its system.
S-type Orbits
S-type orbits are systems in which the planet orbits one of the two stars in the system. \cite{Roell_2012} gives an occurrence rate of at least 12% for S-type orbit exoplanet host systems based on the most recent Extrasolar Planets Encyclopaedia. Prior to this, it was calculated to be 17% and 23% by \cite{Mugrauer_2009} and \cite{Raghavan_2006}, respectively.
\cite{Thebault_2015} discusses how the formation of S-type planets can be altered as the accretion disk develops. From observation, such disks are less frequent and less massive than those in single-star systems. This especially applies to close binaries, at a separation distance of 40 AU or less. Extreme cases can leave disks with insufficient gas and dust to form planets. Furthermore, accretion in multi-star systems can be accelerated, shortening the timescale in which planets can form.
In the intermediate stages, planetesimals would form and grow through impacts with one another. This would happen at an increasing rate, requiring very few perturbations. The key is the low impact velocity required for planetesimals to combine, which must be lower than their escape velocity. The gaseous disk exerts a drag force on the developing planetesimals, which is size dependent and causes different orbits depending on the size of the object. The larger planetesimals tend to have less stable orbits and are prone to oscillations inside the disk. The addition of a second star, however, stirs up the relative velocities of the orbiting objects and damps their growth. This in turn makes planet formation difficult, which is exactly what \cite{Th_bault_2008} found when simulating planet formation in the α Centauri A and B system. It is noted that planets could much more plausibly form very near the host star, as the effects of the binary companion are much weaker in regions around 0.5 AU.
A majority of planets found in multiple star systems were found before the second star of the binary system. Once a planet has been detected by Kepler with the transit method, another search is conducted using Doppler measurements and direct imaging. In a study using this exact method, it was found that the occurrence rate for multiple star systems differs from that of single star systems \cite{Wang_2014}. For binary systems with less than 1500 AU separation, planets are on average less than half as likely to be found as around single stars, and the probability decreases as the separation decreases. Once the separation exceeds 1500 AU the difference in occurrence rate is negligible; at such distances the binary companion likely has a very small effect on planet formation.
P-Type Orbits
P-type orbits also involve a system of two orbiting stars; however, instead of orbiting a single star, the planet orbits both. P-type orbits seem to be rarer than S-type orbits: Kepler has so far found only several P-type planets, and it is difficult to get an idea of the occurrence rate for these types of planets. All P-type planets found so far are greater than 3 Earth radii. It is thought that this is because smaller planets in such orbits are very hard to detect, rather than because smaller radii are devoid of planets.
The study of exoplanets is still relatively new in the field of astronomy, though we are quickly making leaps and bounds in our research. We have developed and supported a monistic Solar System formation theory and have made countless observations in hopes of applying them to systems beyond our own. Detection methods such as the transit method using the Kepler Space Telescope have been crucial to extending our understanding of exoplanets. It is of paramount importance that we have several methods of detection, not only to avoid as much bias as possible, but so that we can continue to improve upon the data and discoveries that we make every day.
Orbital eccentricity and inclination both play a major role in the architecture of exoplanetary systems. The focus needs to be on what happens before the formation of planets, as well as during and after. Interactions during these stages need to be heavily considered in formation theory. The methodology behind discovering both is just as important, if not more so, as it can tell us how these systems came to be. Even simplistic models shed light on the creation of such systems, including our own, and can fit our monistic model. From there we can revise and retest our theories and refit our data.
Many of the papers and studies mentioned here are still in their early stages. Data is still being collected; Kepler, for example, is still searching for more exoplanet systems and will be for as long as it lasts. We are by no means done simulating eccentricities or inclinations, and we are far from having definitive proof of how planets form in binary systems. This is encouraging: despite the considerable progress already made, there is still much, much more to do.
Observation and Detection of Gravitational Waves through Binary Mergers
Gravitational waves are produced by the merging of binary cosmic bodies such as neutron stars and black holes. Building on Einstein's theory of relativity, astronomers and scientists alike have tried to develop efficient and accurate methods to detect and observe gravitational waves, resulting in detectors ranging from resonant mass detectors to laser interferometers. Since the recent discovery of gravitational waves, it has become imperative to capitalize on the data and construct templates that will make it easier to detect gravitational waves in the future. The dichotomy of detecting gravitational waves is that while they help uncover mysteries of the universe that would otherwise be impossible to study with electromagnetic waves alone, there is no direct way of knowing the details of their source. This paper explores how the waves from different combinations of neutron stars and black holes with varying masses, radii and orbital frequencies are analyzed to accurately categorize and profile their sources. Moreover, this paper intends to provide an insight into the methods and theories that have been or are being used, what they imply and their impact on future research.
Gravitational waves are ripples in the curvature of space-time travelling at the speed of light. It was Oliver Heaviside in 1893 who first drew the analogy between the inverse-square laws of gravity and electricity. The curvature of space-time is generally caused by the presence of a mass, in an astrophysical sense a stellar mass: the greater the mass in a given volume, the greater the curvature of space-time at its boundary. As these masses move, the curvature changes to accommodate the movement, producing gravitational waves. In 1905, Henri Poincaré compared them to accelerating electric charges producing electromagnetic waves, but it was not until 1915, when Einstein published his theory of general relativity, that the concept of gravitational waves gained traction. Poincaré's ideas implied that, for gravitational waves to exist, there could be no such thing as gravitational dipoles. Einstein, on the other hand, believed there were three types of gravitational waves, namely longitudinal-longitudinal waves, longitudinal-transverse waves and transverse-transverse waves. Einstein and Eddington then studied the problem further and predicted that very small amounts of energy would be radiated by a spinning rod or a double star, laying the foundation for future research on the topic. Einstein later worked on a paper with Rosen in which they derived rigorous solutions for undulatory gravitational fields with cylindrical gravitational waves in Euclidean space. They used the approximate method of integration of the gravitational equations of general relativity to demonstrate the existence of gravitational waves. Einstein's final result, obtained after analyzing radiation in the nonlinear theory, is represented by the leading-order terms of the quadrupole formula for gravitational-wave emission.
He started with the integration of the gravitational equations of general relativity to establish the existence of gravitational waves, defined as \begin{equation*}
R_{\mu\nu} - {1/2}g_{\mu\nu}R = -T_{\mu\nu},
\end{equation*} where gμν can be replaced by the equation \begin{equation*}
g_{\mu\nu} = \delta_{\mu\nu} + \gamma_{\mu\nu},
\end{equation*} and \begin{align*}\delta_{\mu\nu} &= 1 & \mu = \nu\\
&= 0 & \mu \ne \nu,
\end{align*} as theorized by \cite{Einstein_1937}.
A small value of γμν indicates that the gravitational field is weak, with its derivatives appearing at higher powers. These higher-power derivatives can be neglected as insignificant, and the remainder of the equation is represented as \begin{equation}
\overline{\gamma}_{\mu\nu} = \gamma_{\mu\nu} - \frac{1}{2}\delta_{\mu\nu}\gamma_{\alpha\alpha}
\end{equation} which further expands into the relation \begin{equation}
\overline{\gamma}_{\alpha\alpha,\mu\nu} + \overline{\gamma}_{\mu\nu,\alpha\alpha} - \overline{\gamma}_{\mu\alpha,\alpha\nu} - \overline{\gamma}_{\nu\alpha,\alpha\mu} = -2T_{\mu\nu},
\end{equation} as stated in \cite{Einstein_1937}.
The intention of these equations was to establish the conditions under which they reduce to the gravitational equations for empty space, which can then be written as \begin{align*}
\overline{\gamma}_{\mu\nu,\alpha\alpha} &= 0\\
\gamma_{\mu\alpha,\alpha} &= 0,
\end{align*} thus providing the plane gravitational waves moving in a positive direction along the x-axis with the following conditions satisfied \begin{align*}
\overline{\gamma}_{11} + i\overline{\gamma}_{14} &= 0\\
\overline{\gamma}_{41} + i\overline{\gamma}_{44} &= 0\\
\overline{\gamma}_{21} + i\overline{\gamma}_{24} &= 0\\
\overline{\gamma}_{31} + i\overline{\gamma}_{34} &= 0,
\end{align*} The plane gravitational waves are then categorized into three types based on the values of $\overline{\gamma}$. Pure longitudinal waves have only $\overline{\gamma}_{11}$, $\overline{\gamma}_{14}$ and $\overline{\gamma}_{44}$ not equal to zero. Half-longitudinal, half-transverse waves have only $\overline{\gamma}_{21}$ and $\overline{\gamma}_{24}$, or only $\overline{\gamma}_{31}$ and $\overline{\gamma}_{34}$, not equal to zero. Pure transverse waves have only $\overline{\gamma}_{22}$, $\overline{\gamma}_{23}$ and $\overline{\gamma}_{33}$ not equal to zero. These categories of gravitational waves from Einstein's theories laid a foundation that aided the observation of gravitational waves many years later.
On September 14th, 2015, the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors in Livingston, Louisiana and Hanford, Washington both detected gravitational waves emitted by a cataclysmic event in the distant universe. LIGO scientists estimated that the black holes whose merger produced the waves were about 29 and 36 times the mass of the sun. The merger occurred 1.3 billion years ago, and about 3 times the mass of the sun was converted into gravitational waves in a very short amount of time. At its peak, the power output of the binary black hole merger was about 50 times that of the entire visible universe.
A quadrupole formula was hence developed which defined the loss of orbital energy and angular momentum due to gravitational-wave emission. The importance of the discovery of gravitational waves is that the waves carry information about their origin and about the nature of gravity that could not otherwise be obtained, and their detection holds potential applications for future research.
Gravitational waves reach Earth continuously from great distances, but their effect is negligible. The gravitational waves produced by the merger GW150914 travelled over a billion light-years, and their largest effect on Earth was to change the length of one of the LIGO detectors by a ten-thousandth of the width of a proton. LIGO states that any accelerating object with mass produces gravitational waves, even on scales as small as humans and vehicles; they are simply too small to detect. The gravitational waves that are detected normally result from incredibly dense objects such as neutron stars spinning in the deep regions of space. The main sources of gravitational waves in space are neutron stars, black holes and, to a smaller extent, supernovae, because the events that form neutron stars and black holes produce a background in the audio-frequency part of the spectrum. Earlier evidence for the emission of gravitational waves came from the Hulse-Taylor binary system, which consisted of a pulsar with a 17 Hz radio emission in an 8-hour orbit and a companion neutron star. This led astronomers to find more sources of gravitational waves.
Gravitational waves are similar to electromagnetic waves in many ways: they carry energy and angular momentum away from their source. Additionally, gravitational waves also carry off linear momentum, whose 'kick' of approximately 4000 km/s can knock a coalescing black hole out of its host galaxy \cite{PhysRevLett.98.231101}. Gravitational waves also show redshifts and blueshifts, not only due to the relative velocities of observer and source but also because of changes in space-time and the expansion of the universe.
All sources can be broadly classified into three distinct classes based on the signal that can be extracted. One category is burst sources, consisting of the formation of neutron stars, black holes and supernovae: short events with one or very few cycles of signal over a broad bandwidth. Another category is narrow-band sources, consisting of the rotation of non-axisymmetric stars such as pulsars and accreting neutron stars; these are much weaker than burst sources and are affected by the Doppler shift. The third category is stochastic backgrounds, which consist of weak periodic sources from galaxies or burst sources at very large distances. Binary neutron star systems are known to produce gravitational waves in all three classes. Also present are low-frequency sources, normally the result of massive black holes of $10^{6}$–$10^{9}$ solar masses, with waves produced by low-mass objects orbiting them. Neutron stars and black holes are known to have high spin rates, and these spin rates imprint properties on the gravitational waves that are observed.
Bursts are transients with poorly known phase evolutions, and detectors set up to observe them require high signal-to-noise ratios. For these waves to be detected, the merger or supernova explosion that causes them must be asymmetric. Stochastic waves mainly arise from a superposition of sources, according to \cite{Riles_2013}. Short-period binary systems produce an abundance of gravitational waves as long as their masses are greater than 0.6 solar masses, below which detection becomes much harder. Binary systems behave quite similarly until the collision, so an enveloping theory can accurately model them for future research.
Compact binary systems, namely neutron star/neutron star (NS/NS), neutron star/black hole (NS/BH) and black hole/black hole (BH/BH), are the most promising sources of gravitational waves in the universe. Under the effect of gravitational radiation, a neutron star binary (consisting of two neutron stars) with an orbital period of generally less than half a day will spiral together and merge in less than a Hubble time. A Hubble time is the reciprocal of the Hubble parameter, which is defined as the ratio between the size of the universe at a certain time and its size at a chosen reference time. Many X-ray binary systems show neutron stars spinning at around 250 Hz, with spin rates determined by mass accretion and angular momentum. The inspiral can take place over millennia, as it is their mutual orbital energy that keeps these binary systems from colliding; over time the bodies lose energy, move closer to one another and revolve faster. During the last few minutes of inspiral, before the tidal disruption/coalescence stage begins, a strong gravitational-wave signal is emitted. With the LIGO and VIRGO detectors at optimum sensitivity, this is observable to a high degree of accuracy. In the past, three binary mergers have been observed, which, when extrapolated to the rest of the universe, lead to an estimated merger rate of about $10^{2}\,\mathrm{yr}^{-1}\,\mathrm{Gpc}^{-3}$. The merger of two black holes of 10 solar masses and higher can be detected up to distances with redshifts of about 2, based on the calculations of \cite{Cutler_1994}.
Stellar evolution studies suggest that globular clusters are responsible for the formation of binary black holes, which tend to coalesce without an electromagnetic signature within a Hubble time.
Binary systems, while being the best source of gravitational waves, carry inherently large uncertainties. One of these is the coalescence rate of neutron stars and/or black holes. Data from LIGO suggests estimated merger rates of about $2\times10^{-4}$ per year for NS/NS binary systems, $7\times10^{-5}$ per year for NS/BH binary systems and $2\times10^{-4}$ per year for BH/BH mergers, assuming they are within the boundary masses. A NS/NS binary is normally of the order of 1.4-1.4 solar masses, while a BH/BH binary is usually 10-10 solar masses. Observers also have theoretically designed templates, built from previous experiments and theories such as Einstein's, against which they compare the observed waveforms. A single detector provides enough output to determine the masses of the bodies in the binary, but unfortunately not their distance or position in the sky.
Detectable gravitational waves have some very identifiable properties. The signal from gravitational waves is proportional to the size of the merging objects. They also have very low frequencies, with correspondingly low graviton (ℏω) energies. When passing through ordinary matter, only a negligible amount of gravitational-wave energy is scattered or absorbed; the waves pass unobstructed through dust clouds and stars. With that in mind, detectors were created to analyze these waves.
Resonant mass detectors were first used in experiments to detect gravitational waves in the 1960s. They can be described as simple harmonic oscillators driven by gravitational waves. An example of a resonant mass detector is the mass quadrupole detector, which, by measuring the displacement amplitude and the power absorbed, provided important components of the Riemann tensor. These detectors were later improved into interferometric detectors by reducing the background noise and adopting a long-baseline broadband design. Today the detectors are widely separated in order to distinguish observed gravitational waves from environmental and background noise; multiple detectors are needed because gravitational waves are so difficult to detect. Each detector consists of mirrors separated by about 4 kilometers. For increased sensitivity, each detector has a resonant optical cavity that multiplies the effect. The detectors use two search methods, namely the generic transient search and the binary coalescence search. The generic transient search does not use a specific waveform model but rather reconstructs the signal waveforms of common gravitational-wave signals using a maximum likelihood method. The statistic is defined by the equation \begin{equation}
\eta_c = \sqrt{2E_c/(1+E_n/E_c)},
\end{equation} where Ec is the dimensionless coherent signal energy observed from the reconstructed waveforms, and En is the dimensionless residual noise energy.
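As a quick illustration, the statistic above is straightforward to evaluate once $E_c$ and $E_n$ are in hand. The function below is a minimal sketch of the formula only (the name and interface are illustrative, not part of any actual LIGO pipeline):

```python
import math

def coherent_statistic(E_c, E_n):
    """Compute eta_c = sqrt(2*E_c / (1 + E_n/E_c)).

    E_c: dimensionless coherent signal energy of the reconstructed waveform.
    E_n: dimensionless residual noise energy.
    """
    return math.sqrt(2.0 * E_c / (1.0 + E_n / E_c))
```

Note that when the residual noise energy equals the signal energy ($E_n = E_c$), the statistic reduces to $\sqrt{E_c}$.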
Each event is then classified into search classes: class C1 being the known population of noise transients, C3 being events whose frequency increases with time, and C2 being all remaining events \cite{PhysRevD.49.2658}.
The binary coalescence search uses emissions from binary systems with individual masses lower than 99 solar masses. This model assumes that the spins of the binary components are aligned with the orbital angular momentum, and a template bank is made that covers an extremely large parameter space. It is safe to assume that the orbits are circular, because the gravitational radiation reaction causes the orbital eccentricity to decrease during the inspiral, according to the equation \begin{equation}
\epsilon^2 \propto P^{19/9},
\end{equation} where P is the orbital period and ϵ is the eccentricity.
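Since $\epsilon^2 \propto P^{19/9}$ implies $\epsilon \propto P^{19/18}$, the eccentricity at a new orbital period follows from a simple ratio. A hedged sketch (the function name and interface are illustrative, not from any cited code):

```python
def circularized_eccentricity(e_old, P_old, P_new):
    """Scale the eccentricity with orbital period using eps ∝ P^(19/18),
    which follows from eps^2 ∝ P^(19/9)."""
    return e_old * (P_new / P_old) ** (19.0 / 18.0)
```

As the orbit decays ($P_\mathrm{new} < P_\mathrm{old}$) the eccentricity shrinks, which is why circular-orbit templates are a safe assumption late in the inspiral.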
The bodies can be treated as structureless, spinning point masses because tidal interactions between them have been shown to be negligible. For the most frequent gravitational-wave occurrences, the detectors calculate the strain amplitude with the equation \begin{equation}
s(t) = h(t) + n(t),
\end{equation} where h(t) is a potential signal, and n(t) is the detector noise.
After convolving with Wiener's optimal filter, $s(t) \rightarrow \int \omega(t-\tau)\,s(\tau)\,d\tau$, the signal-to-noise ratio $S/N$ can be measured, represented by the equation \begin{equation}
\frac{S}{N}[h] = \frac{\int h(t)\,\omega(t-\tau)\,s(\tau)\,d\tau\,dt}{\mathrm{rms}\int h(t)\,\omega(t-\tau)\,n(\tau)\,d\tau\,dt}
\end{equation} where the denominator is the root-mean-square value of the numerator if the detector measured only noise.
In the absence of gravitational waves, $S/N[h]$ has a root-mean-square value of 1. If the value is greater than or equal to 6.0 in each of the detectors, then a gravitational wave has been detected with near-certain confidence. This is done by comparing the value to around $10^{15}$ templates of waveforms, with more added every year. The signal, however, is limited in the distance over which it can be measured, so a signal-to-noise ratio is defined for the network of detectors by the equation \begin{equation*}
\rho \equiv \sqrt{\displaystyle \sum_{a}\rho_{a}^2},
\end{equation*} where ρa is the signal to noise ratio in the ath detector.
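The quadrature sum above is trivial to compute per event; this is a minimal sketch of the combination rule only (the function name is illustrative):

```python
import math

def network_snr(rhos):
    """Combine per-detector SNRs: rho = sqrt(sum over detectors of rho_a^2)."""
    return math.sqrt(sum(r * r for r in rhos))
```

For example, two detectors each reporting $\rho_a = 6$ give a network value of $\sqrt{72} \approx 8.49$, so marginal single-detector triggers can still combine into a near-threshold network statistic.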
The detection threshold ρ is nearly equal to 8.5, the point at which the most distant binary system can still be observed by the detectors; this distance is usually about 100 Mpc. Since the detected gravitational waves are determined mainly by the masses of the spinning bodies, an equivalent mass is defined for the combination of masses in the system, accounting for the errors introduced by post-Newtonian effects. The equivalency is \begin{equation*}
\mathcal{M} = \mu^{3/5} M^{2/5},
\end{equation*} defined as the chirp mass, where μ is the reduced mass and M the total mass; it in turn helps define each of the masses in the binary system \cite{Tutukov_1993}.
The chirp mass in general relativity is the leading-order amplitude and frequency evolution of the gravitational wave signal that is emitted by a binary system. It helps in accurately determining the individual masses in the binary system.
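The chirp mass is easy to evaluate for a candidate binary. A minimal sketch, assuming component masses are given in solar units:

```python
def chirp_mass(m1, m2):
    """Chirp mass M_chirp = mu^(3/5) * M^(2/5), where mu = m1*m2/(m1+m2)
    is the reduced mass and M = m1 + m2 is the total mass."""
    mu = m1 * m2 / (m1 + m2)
    M = m1 + m2
    return mu ** 0.6 * M ** 0.4
```

For a canonical 1.4 + 1.4 solar-mass neutron star binary this gives roughly 1.22 solar masses.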
The information collected, however, represents only a small portion of the data that needs to be measured, because the detector will have noise with Gaussian and non-Gaussian components. The non-Gaussian noise is considered unimportant since it will rarely be detected simultaneously in the twin detectors. The remaining Gaussian part is defined by the spectral density Sn(f), where f is the frequency, such that \begin{equation*}
S_n(f) =
\begin{cases} \infty & f<10Hz\\
S_{0}[(f_{0}/f)^4 + 2(1+(f^2/f_{0}^2))] & f>10Hz
\end{cases}\end{equation*} where $S_0 = 3\times10^{-48}\,\mathrm{Hz}^{-1}$ and $f_0 = 70\,\mathrm{Hz}$.
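The piecewise noise model above translates directly into code; the defaults below mirror the stated $S_0$ and $f_0$, and the function name is illustrative:

```python
def detector_noise(f, S0=3e-48, f0=70.0):
    """Gaussian noise spectral density S_n(f): infinite below 10 Hz,
    S0 * [(f0/f)^4 + 2*(1 + f^2/f0^2)] above."""
    if f < 10.0:
        return float("inf")
    return S0 * ((f0 / f) ** 4 + 2.0 * (1.0 + f ** 2 / f0 ** 2))
```

At $f = f_0 = 70$ Hz the bracket evaluates to 5, so the noise floor there is $5S_0 = 1.5\times10^{-47}\,\mathrm{Hz}^{-1}$.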
The detector noise determines the strength of the signal that can be detected and the distance to which it can be observed. With this noise model, a NS/NS merger can be detected out to distances of about 1 Gpc. The detectors then use the Fourier transform of the Riemann tensor for further detection of the radiation, which accurately identifies the binary system. The Riemann tensor is a common way to describe the curvature of space-time and is central to the theory of general relativity.
The merger rates of neutron stars are usually estimated using binary pulsar statistics and supernova rates, while merger rates of black holes correspond to the category of star clusters. Theoretically, there are two broad frameworks under which gravitational waves can be treated.
Classical physics involving stellar objects can be solved using the classical, Newtonian gravitational two-body problem, which leads to an accurate model of a large range of astrophysical systems. Gravitational waves, however, are based on general relativity, which is nonlinear: many systems are not symmetric, and because the theory includes radiation, the bound solutions change as energy and angular momentum are carried away. They therefore cannot be represented by a two-body solution, and gravitational waves as a concept cannot exist in the Newtonian system, where bodies follow closed elliptical orbits. The finite coupling coefficient introduced by Einstein relates matter to curvature in the equation \begin{equation*}
\textbf{T} = \frac{c^{4}}{8\pi G}\textbf{G},
\end{equation*} where T is the stress energy tensor, G is the Einstein curvature tensor, c is the speed of light and G is Newton’s gravitational constant.
This equation represents the high rigidity of space, which is a good explanation of why gravitational waves have small amplitude even at high energy density. Gravitational waves couple weakly to matter, further demonstrating the massive elastic stiffness of space-time.
General relativity differs vastly from Newtonian physics. Unlike Newtonian gravity, in general relativity the frequency and amplitude of the gravitational waves increase as the binary shrinks and the objects merge. The detection methods rest on the fact that waves are produced by a system of masses interacting with one another, described by the action principle \begin{equation}
\delta I = \delta [ -cm \int{ds} + W ] = 0,
\end{equation} where m is the rest mass and W results from the motion of the mass interacting with another \cite{PhysRev.117.306}. The element ds is defined by \begin{equation}
ds^2 = g_{\mu\nu}d\chi^{\mu}d\chi^{\nu},
\end{equation} which can further be approximated as the gravitational field equation, \begin{equation*}
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu},
\end{equation*}
where $\eta_{\mu\nu}$ is the metric of the flat background and $h_{\mu\nu}$ is the perturbation on the background.
The equations of general relativity then become a system of linear equations,
\begin{equation*}
\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right) h_{\mu\nu} = 0,
\end{equation*} providing a three-dimensional wave equation with no z-component.\cite{Ju_2000}
The formation of gravitational waves is an exciting event even in theory. That the collision of two massive stellar objects such as stars or black holes produces waves detected light-years away, by instruments created on a hypothesis, is extraordinary in itself. Moreover, the detection of gravitational waves provides the ability to view the universe in a completely different way, and potentially provides evidence that the universe expanded in a process known as cosmic inflation. Prior to their discovery, astronomical observations were restricted to electromagnetic waves and particle-like entities. These unfortunately have a limitation: they can be obscured or hidden behind other stellar objects that absorb such 'light' waves. Gravitational waves are not. With one eye on the future, LIGO intends to detect five more black hole mergers in the next observing campaign and possibly 40 more binary star mergers each year. With these objectives come many related advancements in the technology: the signal-to-noise ratio is expected to double, improving detections by a factor of ten. A Laser Interferometer Space Antenna (LISA) is proposed to be placed in space with the main goal of detecting gravitational waves. LISA would detect mergers up to 1000 years before they coalesce, allowing astronomers to create classes of previously unknown sources. The information received from the amplitudes of the waves from a black hole merger makes the measurement of distances more accurate.
Coalescing compact binaries are currently the best sources to detect and understand gravitational waves. Extrapolation from the data collected by the detectors shows that around several hundred binary neutron star mergers happen every year. Binary mergers hence are much better than stellar core collapses or pulsars and supernova as sources of gravitational waves.\cite{PhysRevLett.98.231101}
The LIGO detectors have observed waves from the coalescence of a binary black hole system and the resultant waveforms match the theoretical templates that have been created and stored for such mergers. This observation shows the existence of binary black hole systems. The results can then be used to trace back to what binary systems led to the detection of the waves.
If the merging objects are black holes, they form a single perturbed black hole that radiates gravitational waves as a superposition of quasinormal ringdown modes, according to \cite{Abbott_2016}. The oscillation is eventually damped after the merger and reaches a constant frequency as the black hole settles into its final state. Black hole binaries also exhibit a distinct phenomenon that helps in their identification and in the detection of gravitational waves: the 'kick'. This recoil is due to the emission of anisotropic gravitational radiation and the loss of linear momentum as it travels away. A big difference between neutron star mergers and black hole mergers is that black hole mergers are much rarer. Currently, it is also harder for LIGO to measure black holes, since they exist at much greater distances and, in some cases, their masses are too large for accurate detection; more accurate data will come if and when LISA is activated. The present evidence uses the eccentricity of the orbits and the asymmetric mass loss in the merger to extrapolate the findings. If tidal deformations or rotation-induced quadrupole moments are present in the analysis of companion stars, it becomes highly unlikely that the system is a neutron star binary; this is consistent with the mass distribution normally present in neutron star binaries. A system containing two neutron stars thus has relatively low masses and an orbit inclined at nearly 90 degrees, and the decay of the orbital period due to gravitational waves is measurable within 15 months. As many as five neutron star binary systems are known so far, but only three have orbits tight enough to cause a merger within a Hubble time. Two of these neutron star binaries are located in the galactic field, while the third is on the outskirts of a globular cluster.
The two binary systems in the galactic field will merge with their companion stars because of the emission of the gravitational waves that have been detected, and they are on the right timescale for a merger. These properties also affect the prediction of the rate of mergers between binary systems \cite{Burgay_2003}.
If the mass of the objects after calculations turns out to be greater than 3 solar masses, then with high probability the binary system consists of a black hole.\cite{Portegies_Zwart_2000}
Each of these measurements and calculations bear uncertainties especially in the case of neutron stars, because of their high spin rates, low mass and angular momentum, but with improvements in the detectors, it has become easier to extrapolate the data back to the events that created the gravitational waves. The equations that Einstein hypothesized in his theory of general relativity are used even now as a base when it comes to observing gravitational waves. The analysis of the data has allowed astronomers to categorize properties of the results that correspond to black hole binaries. With coalescing binaries being the main source of gravitational waves it becomes easier to distinguish between neutron star binaries and black hole binary systems by using the chirp mass data due to their massive size and distance differences.
Improving Open Data Quality Through Manual and Automated Classification of User Comments
The user comments were classified into the following categories: Broken Link, Incorrect Link, Data Availability, Data Validity, Incorrect Metadata, Obsolescence, Lack of Documentation, and Unclear.
A Look Back at LibrePlanet 2017
My First Article
Create a STEPmod change request
Upgrade Thesis
Bioconjugate Chemistry Template
Trad Olsen
"We are always using concepts that were originally conceived in spatial terms, yet which nevertheless have a temporal meaning. Thus we can speak of refractions, frictions, and the rupture of certain enduring elements that have an effect on the chain of events, or we can refer to events acting retrospectively on their persistent presuppositions. Here, our expressions are taken from the spatial realm, even from geology. They are undoubtedly very vivid and graphic, but they also illustrate our dilemma. It concerns the fact that history, insofar as it deals with time, must take its concepts from the spatial realm as a matter of principle. We live by naturally metaphorical expressions, and we cannot escape them, for the simple reason that time is not manifest and cannot be intuited."
Nelder-Mead Method (\(n\) dimensions) - Downhill Simplex
Let f(x1, x2, …, xn) be the function to be minimized. To "evaluate" a point means to compute the value of the function at that point.
1 - Define n + 1 initial points with n dimensions, xi = (xi1,xi2,…,xin), where 1 ≤ i ≤ n + 1. Sort and relabel so that f(x1)<f(x2)<…<f(xn + 1).
2 - Compute the centroid xg = (xg1,xg2,…,xgn) of the n points with the lowest evaluations: $\displaystyle x_{gj} \leftarrow \frac{1}{n}\sum_{i = 1}^{n}x_{ij}$, 1 ≤ j ≤ n.
3 - Compute the reflection point xr = (xr1,xr2,…,xrn): xri ← xgi + α(xgi−x(n + 1)i), 1 ≤ i ≤ n. Evaluate this point: f(xr).
4 - If f(x1)<f(xr)≤f(xn), then set x(n + 1)j ← xrj, 1 ≤ j ≤ n. Sort the points in increasing order of evaluation and go to step 2.
5 - If f(xr)≤f(x1), then compute the expansion point xe = (xe1,xe2,…,xen): xej ← xrj + β(xrj−xgj), 1 ≤ j ≤ n. Evaluate this point: f(xe).
6 - If f(xe)≤f(xr), then set x1j ← xej and xij ← x(i − 1)j, 1 ≤ j ≤ n + 1, and go to step 2. Otherwise, set x1j ← xrj and xij ← x(i − 1)j, 1 ≤ j ≤ n + 1, and go to step 2.
7 - If f(xr)>f(xn), then compute the contraction point xc = (xc1,xc2,…,xcn): xcj ← xgj + γ(x(n + 1)j − xgj), 1 ≤ j ≤ n. Evaluate this point: f(xc).
8 - If f(xc)≤f(xn + 1), then set x(n + 1)j ← xcj, 1 ≤ j ≤ n. Sort the points in increasing order of evaluation and go to step 2.
9 - If f(xc)>f(xn + 1), then contract along all dimensions toward the point x1: xij ← x1j + ν(xij − x1j), 2 ≤ i ≤ n + 1, 1 ≤ j ≤ n. Sort the points in increasing order of evaluation and go to step 2.
Recommended values: α = 1, β = 1, γ = 0.5 and ν = 0.5.
Massive Submission Template
Why should I use Authorea to write my paper?
Scientists are busy people. We have deadlines to meet, meetings to attend, lectures to give. And of course we need to write papers, not only because we are excited to share our findings, but also because scientific papers are the currency of the academic world. Authorea was created by scientists and for scientists. The idea: improving the process of writing and sharing the results of research. While Authorea has big plans for the paper of the future, in this post I want to focus on the here and now. This is because when I talk about the platform with my colleagues, by far the most common question I get is “Why should I use Authorea to write my paper?”
Great question! Here are some highlights that should make you curious:
With Authorea, your paper is accessible from any computer anywhere in the world.
You can write it from your browser, no installations required.
You can write in rich text (WYSIWYG), LaTeX, or Markdown.
Your paper is also a beautiful web page.
Collaboration is made easy. Managing your co-authors is straightforward.
Authorea is version controlled. Again, no installations required.
Adding citations has never been easier. Believe me, you will never want to go back.
You can include data and code in your paper. This allows for transparency and reproducibility of results.
Export to any journal format with just one click.
Powerful commenting system. For internal or even external review.
OK, if you got this far you deserve more than a list of fancy features, so here's my personal experience and why I think you should start using Authorea.
I switched to writing papers with Authorea about a year ago and I noticed a number of immediate improvements: first of all my papers get written faster. Then I noticed that I have no need to exchange emails with collaborators concerning the paper. This is fantastic. All the action happens (and it’s logged) on Authorea, including discussions about revisions and suggestions for improvements. This said, I didn’t really expect the most important upturn. By getting rid of the overhead I previously considered necessary, unavoidable parts of the scientific writing process, something remarkable happened. I actually started enjoying writing more! And I do not mean just publishing; I had experienced that joy before. The difference is I now cherish the time I spend putting my science into words. It might sound crazy, but Authorea did something amazing: it made me discover the pleasure of writing science together with my collaborators.
Interactive and discoverable preprints
Alternatives to Google Docs
Online Markdown Editor
ShareLaTeX vs Overleaf
Microsoft Word vs Authorea
Alternatives to Scrivener
Paperpile vs Authorea
Endnote vs Authorea
Zotero vs Authorea
Alternatives to HackPad
Academic Writing