Areas in cities have their own profiles. In this study we define a way to measure the signature of a place and a way to compare different areas. Finally, we apply those methods to three practical applications: first, the comparison of areas in different cities ("What is the SoHo of Stockholm?"); second, the comparison of areas within the same city ("How similar is South Kensington to Richmond?"); third, the characterisation of the evolution of an area, by comparing different snapshots of it.
I want to add a table.

------ ------ ------
asdf   asdf   asdf
fda    fds    fdsa
------ ------ ------

: This is a table caption

Can you see me typing? It doesn’t update in real time like Google does, but it did show you.... This is how you add an accent: Holá. Can I make a reference to a table? This is a ref to Table [tab:test_table_1].
Rationale Advances in MRI technology and image segmentation algorithms have enabled researchers to begin to understand the mechanisms of healthy brain development and neurological disorders, such as multiple sclerosis. Due to the wide variability of brain morphology, coupled with a pathological process in the case of neurological disorders, increasingly large sample sizes are necessary to confidently answer the progressively complex biomedical questions the research community is interested in. Automated algorithms have been developed to reduce information-rich 3D MRI images to one-dimensional summary measures that describe tissue properties and are easy to interpret, such as total gray matter volume. Automated segmentation algorithms save considerable time compared to manual human inspection, but lack the advanced visual system of humans. As a result, these algorithms often make systematic errors, especially when analyzing brains with pathology or those in the early stages of development. Data science is poised to facilitate complex neuroscience research by fusing a crowdsourcing strategy with machine learning methods; automatic quantification can perform the bulk of the work efficiently, and errors can be resolved by non-expert “citizen-scientists” with the advantage of the human visual system. Crowdsourcing has been successful in many other disciplines, including mathematics, astronomy, and biochemistry. Recently, over 200,000 “citizen-neuroscientists” from over 147 countries helped identify neuronal connections in a mouse retina through the Eyewire game. This crowdsourced game led to a new understanding of how mammalian retinal cells detect motion. I propose to implement three key features of the Eyewire paradigm and adapt them for the segmentation of MRI data. First, by breaking up the problem into smaller “micro-tasks”, Eyewire scientists were able to access a much larger user pool of non-experts.
In a similar vein, 3D MRI data can be divided into 2D slices to be segmented by users. Second, machine learning algorithms were trained to help with the task, which improved the speed of manual neuronal tracing and validated non-expert input in the Eyewire game. Deep learning methods have already been shown to be successful at segmenting MRI data, and similar models could be built to support manual segmentation. Lastly, Eyewire transformed a dull, monotonous task for experts into a fun, competitive game that trained non-experts and acquired valuable scientific data. The University of Washington is an ideal place to develop a similar game platform for MRI segmentation, using the resources at the Center for Game Science, led by Zoran Popovic. I propose to create an open-source platform for efficiently crowdsourcing brain tissue classification problems in order to answer neuroscience research questions with more precision.
Specific Aims
1. SCALABLE AND SECURE MICRO-TASKS: A scalable database system and server backend that keeps data private by dividing it into small “micro-tasks”
2. LEARNING BY EXAMPLE: A machine learning algorithm that learns from human curation to improve the efficiency of manual tasks
3. TRAINING THROUGH GAMIFICATION: A user interface that trains users to solve a specific problem and keeps them engaged through a reward system
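The slicing of a 3D volume into 2D micro-tasks described above can be sketched in a few lines. This is an illustrative sketch only; the function name and task format are assumptions, not part of an existing pipeline:

```python
import numpy as np

def make_micro_tasks(volume, axis=2):
    """Split a 3D volume into independent 2D slice micro-tasks.

    Each task carries only one slice and an index, so no single
    non-expert user ever handles the full volume at once.
    `volume` is any 3D array, e.g. data loaded from a NIfTI file.
    """
    slices = np.moveaxis(volume, axis, 0)  # bring the slicing axis first
    return [{"task_id": i, "image": s} for i, s in enumerate(slices)]

# Example: a toy 4x4x3 "volume" yields 3 axial micro-tasks
tasks = make_micro_tasks(np.zeros((4, 4, 3)))
```

Each dictionary could then be serialized and handed to the game client as one unit of work.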
1 INSTRUCTIONS FOR CO-AUTHORS The author list is not final; it is just the order in which people were added to this article, and will be cleaned up later. This is my first experience with collaborative online article editing. We write the article here online; when we are done, I will download the LaTeX and bring it into the right form to submit. So please edit and comment as you like and don’t worry about design and appearance. You can get started by double-clicking a text block and beginning to edit. You can also click the Text button below a block to add new block elements, or you can drag and drop an image right onto this text. Click the little speech bubble icon on the top right of a block to leave comments. REMEMBER TO PRESS “SAVE AND CLOSE” AFTER EDITING A TEXT BLOCK! Delete as you see fit - it’s all in the git version control system and I can revert any edits we don’t like. You can use “export document” on the top right of the screen to see a PDF version. If you don’t see your changes, press “Reload” in the browser.
TYC8241 2652 1 is a young star that showed a strong mid-infrared (mid-IR, 8-25 μm) excess in all observations before 2008, consistent with a dusty disk. Between 2008 and 2010 the mid-IR luminosity of this system dropped dramatically by at least a factor of 30, suggesting a loss of dust mass of an order of magnitude or more. So far there is no conclusive explanation for this observational fact; possibilities include removal of disk material by stellar activity processes, a collisional cascade that rapidly grinds dust of all sizes down to where radiative blowout is effective, or a run-away accretion event spurred by the presence of gaseous material in the disk. We present new X-ray observations, optical spectroscopy, near-IR interferometry, and mid-IR photometry of this system to constrain its parameters and identify the cause of the dust mass loss. In X-rays TYC8241 2652 1 has all the properties expected of a young star: its luminosity is in the saturation regime and the abundance pattern shows enhancement of O/Fe. The photospheric Hα line is filled in with a weak emission feature, indicating chromospheric activity consistent with the observed level of coronal emission. Interferometry does not detect a companion and sets upper limits on the companion mass of 0.2, 0.35, 0.1, and 0.05 M⊙ at distances of 0.1-4 AU, 4-6 AU, 6-11 AU, and 11-34 AU, respectively. Our mid-IR measurements, the first of the system since 2012, are consistent with the depleted dust level seen after 2009. The new data confirm that stellar activity is unlikely to destroy the dust in the disk and show that scenarios where either TYC8241 2652 1 heats the disk of a binary companion or a potential companion heats the disk of TYC8241 2652 1 are highly unlikely.
This collaborative document has been created for the panel discussion on “Rotation in massive stars” (FOE 2015), held on Thursday 6/4/2015 in Raleigh. All conference participants have been added to the document and can edit, comment, add figures (just drag & drop) and references, and even write LaTeX equations if needed (check the help page for more info on how to edit the document). Hopefully this will capture the essential ideas and interactions that emerge during and after the discussion. The document can be forked at any time, so that particular discussions can be taken further and potentially lead to active collaborations.
SUMMARY The 2013-2015 Ebola virus disease (EVD) epidemic is caused by the Makona variant of Ebola virus (EBOV). Early in the epidemic, genome sequencing provided insights into virus evolution and transmission, and offered important information for outbreak response. Here we analyze sequences from 232 patients sampled over 7 months in Sierra Leone, along with 86 previously released genomes from earlier in the epidemic. We confirm sustained human-to-human transmission within Sierra Leone and find no evidence for import or export of EBOV across national borders after its initial introduction. Using high-depth replicate sequencing, we observe both host-to-host transmission and recurrent emergence of intrahost genetic variants. We trace the increasing impact of purifying selection in suppressing the accumulation of nonsynonymous mutations over time. Finally, we note changes in the mucin-like domain of EBOV glycoprotein that merit further investigation. These findings clarify the movement of EBOV within the region and describe viral evolution during prolonged human-to-human transmission.
The first studies of the HL-LHC physics program and the performance of the upgraded LHC detectors were documented for the European strategy meeting in Cracow, the Snowmass workshop in the US, and the first ECFA HL-LHC workshop in 2013. The second ECFA HL-LHC workshop was a significant step forward in developing an understanding of the performance of the upgrade detectors in the harsh HL-LHC environment. The studies were organized to motivate a number of specific performance-related detector upgrades and to address points raised during the first ECFA HL-LHC workshop. Below only some key aspects of the programme are discussed. We will examine progress and prospects regarding Higgs and BSM physics analyses at the general-purpose detectors, continue with a discussion of detector aspects and close with heavy-flavour and heavy-ion prospects. An important component of the HL-LHC is to carry out studies involving the recently discovered Higgs boson at 125 GeV mass. One aspect is precision measurement of the properties of this scalar, in order to test the Standard Model pattern of couplings to elementary particles. Additionally, because of the hierarchy problem, a quantum instability of the Higgs sector, many models of new physics affect precision Higgs observables, even in cases where the corresponding new particles are hard to discover experimentally. The ATLAS and CMS experiments project comparable precision, with an estimated uncertainty of 2-5% for many of the investigated Higgs boson couplings to elementary fermions and bosons, demonstrating that with an integrated luminosity of 3000 fb−1 the HL-LHC is a very capable precision Higgs physics machine. Figure [fig:Higgs1] shows the reduced coupling scale factors, yi, for weak bosons and fermions expected with 3000 fb−1 of data. To fully benefit from the higher experimental precision in the Higgs sector it will be necessary to reduce theory uncertainties relative to today’s state-of-the-art.
Progress is currently being made on many fronts: for example, steps towards an N³LO computation of the Higgs-boson cross section in gluon fusion, or various NNLO computations for more exclusive final states (e.g. Higgs+jet), both of which can affect Higgs parameter extraction. A reduction in the uncertainty of parton distribution functions (PDFs) will also be needed, and here too the advent of new calculations at NNLO may help, as may detailed studies of the origins of systematic differences between different PDF sets (see e.g. Ref. ). Other progress that will be relevant includes higher-order merging of parton showers and fixed-order calculations (e.g. Refs. ), as well as improvements in methods to estimate higher-order uncertainties, such as Ref. . As well as improving the precision of Higgs-sector measurements, the substantial luminosity of the HL-LHC will make it possible to probe important rare processes involving the Higgs boson. Some examples involve rare decays, such as the Zγ decay, or those involving second-generation fermion couplings, which can open a novel window on the problem of flavour. Off-shell and high-transverse-momentum Higgs production can provide sensitivity to new physics near the TeV scale that may otherwise be hidden, in a way that overlaps only partially with precision Higgs measurements. Finally, the HL-LHC may have the potential to study di-Higgs production. In the Standard Model, with the Higgs boson mass and the Higgs-field (ϕ) vacuum expectation value now both known, the structure of the Higgs potential is fully predicted. This is because the potential involves just two terms, proportional to ϕ² and ϕ⁴. An elementary field potential of this kind has never been seen before in nature and it is crucial to test whether it is indeed the potential associated with our vacuum. A study of the Higgs boson self-coupling provides one such test, because the self-coupling is related to the third derivative of the Higgs potential at its minimum, uniquely predicted in the Standard Model.
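To make the two-term statement concrete (a standard textbook rewriting, not taken from the studies cited above), the potential and its parameters are fixed by the measured masses:

$$V(\phi) = -\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2, \qquad v^2 = \mu^2/\lambda, \qquad m_H^2 = 2\lambda v^2,$$

so that $m_H \approx 125$ GeV and $v \approx 246$ GeV give

$$\lambda = \frac{m_H^2}{2v^2} \approx \frac{125^2}{2\times 246^2} \approx 0.13,$$

and expanding around the minimum fixes the coefficient of the trilinear $h^3$ term to $\lambda v = m_H^2/(2v)$, the quantity probed by di-Higgs production.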
One main avenue for studying the self-coupling is through di-Higgs production, which is sensitive to the (off-shell) H* → HH process. One should be aware, however, that this interferes with other mechanisms for the production of two Higgs bosons, which complicates the determination of the self-coupling. One should also keep in mind that new physics can modify the relation between the Higgs potential and di-Higgs production: for example, di-Higgs production can be greatly enhanced in cases where the Higgs is composite rather than elementary. Preliminary studies of the rare di-Higgs process, only accessible at the HL-LHC, have been performed by the ATLAS and CMS experiments, considering the HH → bbγγ and bbWW final states. The findings of these analyses show the challenge that this physics process represents: high performance in mass resolution, primary vertex reconstruction, b-tagging and photon identification efficiencies, as well as mitigation of event pileup effects, are crucial to the success of this measurement. As an example, Figure [fig:Higgs2] shows the relative uncertainty on the di-Higgs boson production cross section measurement as a function of the b-tagging efficiency. Studies of other di-Higgs final states, such as bbττ and bbbb, are needed to improve the accuracy of this analysis. As well as exploring the Higgs sector, with its corresponding scope for sensitivity to BSM physics, the HL-LHC also presents opportunities for the direct discovery of new particles. The HL-LHC will extend the discovery reach of the LHC in a wide range of new phenomena, from modified Higgs sectors to dark matter and new resonances. The sensitivity to BSM models is significantly enhanced at the HL-LHC compared to the corresponding sensitivities at lower integrated luminosities.
One class of particles whose discovery potential benefits especially from the extra factor of ten in luminosity provided by the HL-LHC is those with production rates that are suppressed, for example by small couplings. The electroweak production of neutralino-chargino pairs provides such a striking example. Both ATLAS and CMS studied the production of χ₁± χ₂⁰ with decay to WZ and WH final states in the context of a simplified SUSY model. In the case of the WZ final state, the mass reach increases by approximately 50% from Run 3 at the LHC to the HL-LHC, while the reach approximately doubles in the WH(bb) final state. This significant increase would not be achievable with the current detector due to degraded performance beyond 300 fb−1 from radiation damage. Other scenarios with suppressed rates to conventional final states include models such as supersymmetry and compositeness with a split spectrum (see a recent search based on charm-tagging ) or models with compressed spectra and/or kinematic degeneracies in the decays. The ATLAS and CMS Collaborations have also shown that the mass reach for the discovery of new heavy states like gluinos/squarks in standard supersymmetric scenarios or new gauge bosons (W′ or Z′) typically increases by about 20% at the HL-LHC. In a detailed study of five full-spectrum SUSY models, CMS showed that a combination of nine different experimental signatures is able to establish discovery with differing amounts of integrated luminosity, but only the HL-LHC is capable of discovering the physics nature of all five models. In the event of a discovery with the Run 2 or 3 datasets, it is likely to be difficult to distinguish between different new physics interpretations.
For example, the CMS Collaboration has shown that a spin-1 dilepton resonance with a mass of 4 TeV could be discovered at the LHC, but spin-0 or spin-2 interpretations could only be discriminated against at the 0.5 to 2 standard deviation level with 300 fb−1 of data using both angular and rapidity distributions of the dilepton system. The level of discrimination reaches 2 to 5 standard deviations with the data set of the HL-LHC. The mitigation of pileup at the HL-LHC is of fundamental importance to be able to deliver the physics goals of the LHC luminosity upgrade. An instantaneous luminosity of 5 × 10³⁴ cm−2s−1 is assumed to correspond to an average number of proton-proton interactions per bunch crossing (pileup), μ, of 140 events. The ATLAS performance has been evaluated using the baseline Phase II central tracker in full detector simulation. CMS have shown the impact of aging on the detector after 1000 fb−1 of data, collected by 2019, and compared this to the Phase II detector performance. In both experiments the efficiency for finding the primary vertex in top-pair events with pileup is expected to be 96%. The b-tagging performance of the Phase II detectors with μ = 140 is close to that of the Phase I detectors with μ = 50. ATLAS showed that the performance continues to degrade with more pileup; 10 times more light jets are mistagged as b-jets with μ = 300 compared to μ = 140. Once the correct primary vertex is identified, the long, flat beam spot seemed to offer no further improvement for b-tagging. As anticipated, the CMS muon performance is strongly degraded by detector aging. The Phase II tracking detectors bring an improvement for both experiments in muon pT resolution compared to Run 1. The Phase II algorithms for e, γ, and τ-lepton identification need more development work by both collaborations.
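As a back-of-the-envelope check on the quoted pileup level (assuming an inelastic pp cross section of about 85 mb, 2808 colliding bunch pairs, and the LHC revolution frequency of 11.245 kHz; these machine parameters are not stated in the text above), the assumed luminosity does correspond to μ of order 140:

$$\mu = \frac{\mathcal{L}\,\sigma_{\rm inel}}{n_b\,f_{\rm rev}} = \frac{5\times 10^{34}\ {\rm cm^{-2}\,s^{-1}} \times 8.5\times 10^{-26}\ {\rm cm^{2}}}{2808 \times 11245\ {\rm s^{-1}}} \approx 135.$$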
CMS showed that the Puppi algorithm to reject pileup is particularly effective for jet reconstruction, giving the best jet energy resolution of the algorithms studied and eliminating jets from pileup. ATLAS have also used track and vertex matching algorithms to reject pileup jets. In addition, ATLAS have demonstrated that jet substructure algorithms continue to be effective at large pileup. A jet mass algorithm for large-radius jets allows the identification of top-quark jets. The efficiency and resolution gradually degrade as μ increases from 140 to 300. Most of these studies are performed with Run 1 algorithms. A range of further methods is also being considered, some of which were discussed at a recent dedicated pileup workshop . The ATLAS and CMS collaborations are studying upgrades of forward tracking and calorimetry by understanding their piecewise impact on physics analysis, leading to an optimized set of upgrades for each detector. The primary upgrades towards exploiting the forward region are proposed extensions to the tracking detectors of each experiment. The ATLAS collaboration is studying multiple forward tracker upgrade options, varying the gross detector geometry as well as the sensor granularity. Moreover, the ATLAS tracking extension studies demonstrate a 90% selection efficiency for forward jets from the primary vertex, with a background rejection factor of 500 up to pseudorapidity |η| = 3.2 and of 100 in the region 3.2 < |η| < 4.0, showing that jets can be selected with high purity by exploiting forward tracking. The impact of the forward tracking extensions on physics analysis is demonstrated in a vector boson fusion (VBF) Higgs → ττ study, with a reduction of up to a factor of 3 in the expected signal strength uncertainty, assuming a 90% rejection of jets from pileup collisions. ATLAS also showed preliminary results on the impact of extensions to the muon spectrometer, including the addition of a new warm toroid at large pseudorapidity.
Improvements to the muon momentum resolution are typically dominated by the contribution of the extended inner tracker at small values of muon transverse momentum. However, when using the full proposed set of upgrades there is a significant improvement in the muon transverse momentum resolution at high energies from the increased lever arm. The impact of this upgrade is shown in an analysis studying the H → ZZ → 4ℓ final state, where the acceptance is increased by 35% by extending the tracking coverage to pseudorapidity |η| = 4.0. The CMS collaboration is pursuing a similar tracking extension in pseudorapidity up to |η| = 4.0. The extended tracker being proposed also provides a factor of three improvement in the tracking fake rate and track momentum resolution. These improvements are critical to the performance of _particle flow_ reconstruction in the Phase II upgrades, which attempts to reconstruct with the best precision all charged and neutral particle content in the event. CMS is investigating two endcap calorimeter designs. At present, both calorimeters are being studied for their performance benefits on their own and within the context of a global event reconstruction. Updated results for the HL-LHC from both experiments will be presented after evaluating the physics impact of each of the proposed upgrades and finalizing the physics simulation studies whose preliminary results have been presented in this year’s and last year’s workshops. The study of the heavy-flavour sector represents one of the most interesting domains for indirect searches for new physics. Substantial progress is expected in the next two decades, both on the experimental and on the theoretical side, with anticipated progress in lattice QCD calculations. The main experimental advances should come from the LHC experiments, thanks to the planned luminosity increase with the HL-LHC upgrade, and from the BELLE-II experiment in Japan.
Analyses of Run 1 data have firmly established the great impact of the LHC experiments. LHCb has produced a plethora of results on a broad range of flavour observables in the charm and beauty sectors, and ATLAS and CMS have made significant contributions to the beauty sector, mainly using final states containing muon pairs due to constraints dictated by the trigger. One of the most striking results of Run 1 is the observation of the Bs → μ+μ− decay, through the combined analysis of CMS and LHCb data. CP violation induced by Bs mixing has been measured with astonishing precision by LHCb, with relevant contributions also from ATLAS and CMS. The measurement of the γ angle performed by LHCb now dominates the world average, and further improvements are expected in Run 2 and beyond. The aforementioned observables are particularly interesting at the HL-LHC, as they are sensitive to new physics while not being dominated by theoretical uncertainties. Selected observables associated with the B → K(*)ℓ+ℓ− decay channels, and observables sensitive to CP violation (CPV) in the charm sector, are also promising probes of new physics. In the context, e.g., of supersymmetry with a non-trivial squark spectrum and a light, natural third generation, flavour physics plays a key role: effects of the order of 5 − 20% are expected in CPV in Bs, d mixing, Bs → μ+μ−, and Bs → K*μ+μ− . The large number of top pairs expected at the HL-LHC further opens the possibility of new studies in B physics, especially in the case of CPV, using “top-tagged b decays” . By incorporating knowledge emerging from the analyses of Run 1 data sets, sensitivity projections for the HL-LHC phase have been updated. These show that improvements in sensitivity will come not only from increased data samples, but also from improved detection capabilities. This will lead to results of fundamental importance in the search for physics beyond the Standard Model (BSM).
If BSM particles are seen to be produced in LHC collisions, results in the flavour sector will provide crucial information to determine the couplings and the flavour structure of the new physics. If new particles are not observed, higher mass scales can be probed through quantum effects, giving a complementary approach to discovering BSM physics. Finally, the heavy-ion physics studies possible at the HL-LHC enrich and complete the physics program of this machine and the detector upgrades. The goal of the ALICE, ATLAS and CMS experiments is to integrate, for Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.5$ TeV, a luminosity of more than 10 nb−1 during LHC Runs 3 and 4. This represents an increase by an order of magnitude with respect to the expectation for Run 2. In the case of the ALICE experiment, the upgrade of the detector read-out capabilities will allow for the recording of all interactions, increasing the minimum-bias statistics by two orders of magnitude. The experiments stress the importance of a proton-proton reference sample at the same energy as the Pb-Pb collisions. The ALICE requirement, driven by low-pT measurements, is 6 pb−1; the ATLAS and CMS requirement, driven by high-pT measurements, is 300 pb−1. After LS2, the study of the Quark-Gluon Plasma formed in nucleus-nucleus collisions will focus on rare probes, on their coupling with the medium, and on hadronization processes. These include, but are not limited to, heavy-flavour particles, quarkonium states, real and virtual photons, jets, and their correlations with other probes. New studies are now available, for example for low-pT heavy-flavour and charmonium measurements at forward rapidity in ALICE. High-luminosity p-Pb collisions will be an essential part of the heavy-ion program after LS2. The LHCb experiment will also contribute to this part of the program, as demonstrated by its excellent performance during the 2013 p-Pb run.
Proton-nucleus collisions allow, on the one hand, the exploration of initial-state effects and low Bjorken-x gluon dynamics and, on the other hand, the study of the interplay between the initial conditions and the development of collective effects in the final state of high-particle-density collisions. In order to fully exploit this potential, the ALICE Collaboration is considering the technical feasibility and the physics case for a high-granularity calorimeter in the forward region (η ∼ 3 − 5), to be installed during LS3. This detector would give access to forward-rapidity photons and neutral pions, which are predicted to be very sensitive to small-x gluon dynamics and the possible onset of gluon saturation.
The possibility to analyze, quantify and forecast epidemic outbreaks is fundamental when devising effective disease containment strategies. Policy makers are faced with the intricate task of drafting realistically implementable policies that strike a balance between risk management and cost. Two major techniques policy makers have at their disposal are epidemic modeling and contact tracing. Models are used to forecast the evolution of the epidemic both globally and regionally, while contact tracing is used to reconstruct the chain of people who have potentially been infected, so that they can be tested, isolated and treated immediately. However, both techniques might provide limited information, especially during an already advanced crisis when the need for action is urgent. In this paper we propose an alternative approach that goes beyond epidemic modeling and contact tracing, and leverages behavioral data generated by mobile carrier networks to evaluate contagion risk on a per-user basis. The individual risk represents the loss incurred by not isolating or treating a specific person, both in terms of how likely it is for this person to spread the disease and of how many secondary infections he or she will cause. To this aim, we develop a model, named _Progmosis_, which quantifies this risk based on movement and regional aggregated statistics about infection rates. We develop and release an open-source tool that calculates this risk based on cellular network events. We simulate a realistic epidemic scenario, based on an Ebola virus outbreak; we find that gradually restricting the mobility of a subset of individuals reduces the number of infected people after 30 days by 24%. While these results are promising, it is important to underline that this is only initial foundational work and to stress some key points. First, this paper focuses on a theoretical model, rather than on its actual translation into a real-world system.
In particular, centralized deployments of this model would pose several ethical questions, as they would require access to user data. Decentralized deployments, in which user mobility data never leaves the mobile device of a user, are possible and should be preferred, as they fully protect user privacy. Second, results are generated from computer-based simulations, under specific assumptions. Social factors and technical difficulties might greatly affect results obtained in the real world. Third, this risk-assessment tool is not designed specifically for implementing containment measures based on mobility restrictions. For example, it could be used to advise users about the most appropriate behavior given their risk profile (e.g., willingly change their own behavior, see a doctor, and similar); users would then choose whether to follow the advice or not. Finally, the simulations were run on call data records from a country that is, according to the WHO, Ebola-free, and this work has not been commissioned by Orange or by any other entity in preparation for a real-world disease outbreak.
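A minimal sketch of how a per-user risk score of this kind could combine individual mobility with regional infection statistics. The weighting, function name, and field names are illustrative assumptions, not the actual Progmosis model:

```python
def contagion_risk(visits, infection_rate, reproduction=1.5):
    """Toy per-user contagion risk score.

    visits: dict region -> fraction of time the user spends there
    infection_rate: dict region -> current fraction of infected population
    reproduction: expected secondary infections per case (illustrative value)

    The score is the time-weighted exposure across visited regions,
    scaled by the expected number of secondary infections.
    """
    exposure = sum(t * infection_rate.get(region, 0.0)
                   for region, t in visits.items())
    return exposure * reproduction

# A commuter splitting time between a low- and a high-incidence district
risk = contagion_risk({"A": 0.7, "B": 0.3}, {"A": 0.01, "B": 0.10})
```

Ranking users by such a score is what would drive the gradual mobility restrictions simulated above.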
LabCraft is a community-driven mod for the popular video game Minecraft (published by Mojang). LabCraft is meant to be used as a teaching tool and to expose children to various biology-related disciplines. Feel free to contact any listed author if you’re interested in helping develop or deploy LabCraft. GitHub Source
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space, where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage when implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, the modeling of photoemission, optimal control of quantum systems, the simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
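The real-space grid idea can be illustrated in a few lines: discretize the fields on a uniform grid and replace differential operators by finite differences. This is a generic sketch (1D Schrödinger equation for a harmonic potential in atomic units), not code from the Octopus package itself:

```python
import numpy as np

# Uniform real-space grid (atomic units)
n, L = 201, 10.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# Second-order finite-difference kinetic operator, -1/2 d^2/dx^2
T = (-0.5 / h**2) * (np.diag(np.ones(n - 1), 1)
                     + np.diag(np.ones(n - 1), -1)
                     - 2.0 * np.eye(n))

# Harmonic potential; exact eigenvalues are 0.5, 1.5, 2.5, ...
V = np.diag(0.5 * x**2)

# Diagonalize the discretized Hamiltonian
E, psi = np.linalg.eigh(T + V)
```

The lowest eigenvalues converge to the exact spectrum as the grid spacing `h` is reduced, which is what makes systematic convergence tests so simple in this representation.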
In this splinter session, ten speakers presented results on solar and stellar activity and how the two fields are connected. This was followed by a lively discussion and supplemented by short, one-minute highlight talks. The talks presented new theoretical and observational results on mass accretion on the Sun, the activity rate of flare stars, the evolution of the stellar magnetic field on time scales of a single cycle and over the lifetime of a star, and two different approaches to model the radial-velocity jitter in cool stars that is due to the granulation on the surface. Talks and discussion showed how much the interpretation of stellar activity data relies on the Sun and how the large number of objects available in stellar studies can extend the parameter range of activity models.
Young stars accrete mass from circumstellar disks and in many cases, the accretion coincides with a phase of massive outflows, which can be highly collimated. Those jets emit predominantly in the optical and IR wavelength range. However, in several cases X-ray and UV observations reveal a weak but highly energetic component in those jets. X-rays are observed both from stationary regions close to the star and from knots in the jet several hundred AU from the star. In this article we show semi-analytically that a fast stellar wind which is recollimated by the pressure from a slower, more massive disk wind can have the right properties to power stationary X-ray emission. The size of the shocked regions is compatible with observational constraints. Our calculations support a wind-wind interaction scenario for the high energy emission near the base of YSO jets. For the specific case of DG Tau, a stellar wind with a mass loss rate of 5 ⋅ 10−10 M⊙ yr−1 and a wind speed of 800 km s−1 reproduces the observed X-ray spectrum. We conclude that a stellar wind recollimation shock is a viable scenario to power stationary X-ray emission close to the jet launching point.
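As a plausibility check on the quoted wind speed (our estimate, using the standard strong-shock jump condition with an assumed mean molecular weight μ ≈ 0.6, not a result from the calculations above), the post-shock temperature of a fully shocked 800 km s⁻¹ wind is

$$k T_{\rm shock} = \frac{3}{16}\,\mu\,m_{\rm H}\,v^{2} \approx 0.75\ {\rm keV}\,\left(\frac{v}{800\ {\rm km\,s^{-1}}}\right)^{2},$$

i.e. $T \approx 9$ MK, comfortably in the range of soft-X-ray-emitting plasma.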
INTRODUCTION The Higgs boson with mass around 125 GeV recently discovered by the ATLAS and CMS experiments at the LHC is found to have properties compatible with the Standard Model predictions, as shown for example in Fig. [fig:ellis]. Coupled with the absence of any other indication so far for new physics at the LHC, be it through precision measurements or via direct searches, this fundamental observation seems to push the energy scale of any physics beyond the Standard Model above several hundred GeV. The higher-energy LHC run, which is expected to start in 2015 at $\sqrt{s} \sim 13$-14 TeV, will extend the sensitivity to new physics to 1 TeV or more. Fundamental discoveries may therefore be made in this energy range by 2017-2018. Independently of the outcome of this higher-energy run, however, there must be new phenomena, albeit at unknown energy scales, as shown by the evidence for non-baryonic dark matter, the cosmological baryon-antibaryon asymmetry and non-zero neutrino masses, which are all evidence for physics beyond the Standard Model. In addition to the high-luminosity upgrade of the LHC, new particle accelerators will be instrumental in understanding the physics underlying these observations.