Logbook Summer School 2016

Top Mass Precision Measurements at the FCC-ee

**Goal of the work**

From the cross-section scan as a function of energy, extract the top mass, width, Yukawa coupling, and \(\alpha_s\).

Study the efficiency and the background level as a function of the energy.

What detector features are most important (e.g. b-tagging, momentum resolution), and what precision do we need?

Study how much luminosity we need to scan the cross section, and how many scan points we need.

Study how to split the luminosity available in the \(4^{th}\) run between the top mass and the top couplings.

**What we need to achieve this goal**

Write the heppy script to create the ntuples, and the selection script to reject the background.

Generate all the events (signal and bg).

**What we did during this week**

fcc-pythia8 is ready to hadronize LHE files (tested with MadGraph: OK! Waiting for Whizard.)

completed the missing energy class in the heppy framework

a lot of knowledge concerning the FCC software, but not enough!

**Next steps**

Get Whizard working.

Write a heppy script that finds at least the isolated lepton and the missing energy, and reconstructs the leptonic W (and the tops?).

Write the b-tagging algorithm.

Understand how to deal with the beam energy spread (put it inside Whizard, but how to take it into account when calculating the missing energy-momentum?).

I understood how to pick the recoil (vs. the missing energy: the difference is not that big).

I need to understand how to create my analyzer to build the resonances.

I don’t understand how b-tagging is implemented at the moment, nor how I can implement the b-tagging algorithm, because we don’t seem to have the track parameters.

Last year we got our ntuples from the reconstruction, then created new ntuples with the useful variables and used topConstrainer (it was written by Patrick, so how do we deal with it now?).

In my opinion we must put the lepton smearing inside papas.

It seems that the energy sum in the isolation doesn’t work.

Understood how to create my own analyzers.

Isolation cut: how to do that?
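As a first pass, the isolation cut could be a plain cone sum around the lepton; here is a minimal sketch (the dict-based particles, the 0.4 cone and the 10% relative cut are illustrative assumptions, not the heppy interface):

```python
import math

def iso_sum(lepton, particles, cone=0.4):
    """Scalar sum of particle energies within DeltaR < cone of the
    lepton, excluding the lepton itself."""
    total = 0.0
    for p in particles:
        if p is lepton:
            continue
        deta = p["eta"] - lepton["eta"]
        dphi = abs(p["phi"] - lepton["phi"])
        if dphi > math.pi:                 # wrap the azimuthal difference
            dphi = 2.0 * math.pi - dphi
        if math.hypot(deta, dphi) < cone:
            total += p["e"]
    return total

def is_isolated(lepton, particles, cone=0.4, rel_cut=0.1):
    """Relative isolation: cone energy small compared to the lepton's."""
    return iso_sum(lepton, particles, cone) < rel_cut * lepton["e"]
```

This might also help debug the suspicious energy sum mentioned above, by comparing against the framework's value on a few events.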

How does FastJet cluster the particles?

Construct the best tops and W bosons: the leptonic W is constrained, but to find the best combination I decided to minimize the quantity \( \sum (m_{res} - m_{pdg})^2\), where the sum runs over w_had, top_lep and top_had.
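The minimization above can be sketched as a brute-force loop over jet assignments; the `FourVector` class is only a stand-in for the framework's Lorentz vectors, and the mass values are approximate:

```python
import math
from itertools import permutations

M_W, M_TOP = 80.4, 173.0  # approximate PDG masses in GeV

class FourVector:
    """Minimal stand-in for the framework's Lorentz-vector objects."""
    def __init__(self, px, py, pz, e):
        self.px, self.py, self.pz, self.e = px, py, pz, e
    def __add__(self, other):
        return FourVector(self.px + other.px, self.py + other.py,
                          self.pz + other.pz, self.e + other.e)
    def m(self):
        m2 = self.e**2 - self.px**2 - self.py**2 - self.pz**2
        return math.sqrt(max(m2, 0.0))

def best_combination(jets, w_lep):
    """Try every assignment of 4 jets to (b_lep, b_had, q1, q2) and keep
    the one minimizing sum (m_res - m_pdg)^2 over w_had, top_lep, top_had."""
    best, best_chi2 = None, float("inf")
    for b_lep, b_had, q1, q2 in permutations(jets, 4):
        w_had = q1 + q2
        top_lep = w_lep + b_lep
        top_had = w_had + b_had
        chi2 = ((w_had.m() - M_W)**2
                + (top_lep.m() - M_TOP)**2
                + (top_had.m() - M_TOP)**2)
        if chi2 < best_chi2:
            best, best_chi2 = (b_lep, b_had, q1, q2), chi2
    return best, best_chi2
```

With 4 jets this is only 24 permutations (and the q1/q2 swap is redundant), so the combinatorics itself is cheap.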

Patrizia managed to compile Whizard: let’s think about the theory and how to run it on lxplus.

What to do:

Some questions for Patrick:

Understand how to redo the analysis (isolation, photon recovery, cuts...)

Lepton smearing? Look at Delphes!

Kinematics: fit? constraint?

Find the track parameters and the generator info in the ntuples

**Wednesday - Thursday - Friday**

Understood the question about fit and constraint.

Wrote an analyzer that calls TopConstrainer and searches for the best jet combination in each event; unfortunately the analyzer is really slow, processing at the moment only 2 events/second.

Attended the visit at the antiproton deceleration center and the CMG introduction meeting.

**Week end: to do**

Useful plotting program from Giovanni Petrucciani, plus useful heppy hints: LEARN them!

Understand with Giovanni Franzoni the best way to simulate my events and process them.

Study tracks and b-tagging paper.

Try to understand what to do with the b-tagging algorithm: how to study efficiency and purity while varying the cut, for instance, and of course the distributions for signal and background (I could generate, for instance, \(Z \rightarrow q \bar q\) for each quark species and test the b-tagging algorithm, then try to understand how to deal with the \(t \bar t\) system).

Moreover, I NEED to understand how I can get the MC info from the simulation.

And try to install Whizard on lxplus.

**Doubts**

Beamstrahlung?

Lepton smearing?

Photon recovery?

Isolation?

SOLVED: the two tops don’t necessarily have the same mass; they can have slightly different ones.

How to deal with the fact that the W and the Top have a non-zero width?

**Week end: results**

Understood that the IP calculation in the Helix class is not what we need, and got the idea of writing a new one.

Read almost all of the b-tagging paper.

**Monday**

Wrote the calculation of the IP and its significance, with a smearing like the one in the ILD TDR, \(\sigma = 5 \oplus 10 / (p \sin^{3/2} \theta)\ \mu\)m. It consists of a part in the Helix class and another part in the analyzer.
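A minimal sketch of that smearing, assuming the quoted parametrization with \(p\) in GeV and \(\sigma\) in \(\mu\)m (the function names are mine, not the ones in the Helix class or the analyzer):

```python
import math
import random

def ip_resolution_um(p, theta):
    """IP resolution in micrometers from the ILD-TDR-like parametrization
    sigma = 5 (+) 10 / (p sin^{3/2} theta), with p in GeV; (+) denotes
    addition in quadrature."""
    return math.hypot(5.0, 10.0 / (p * math.sin(theta) ** 1.5))

def smear_ip(d0_um, p, theta, rng=random):
    """Gaussian smearing of a true impact parameter (micrometers)."""
    return rng.gauss(d0_um, ip_resolution_um(p, theta))
```

The constant 5 μm term dominates at high momentum and the 10/(p sin³ᐟ²θ) term models the multiple-scattering growth at low momentum and shallow angles.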

I have to speak with Lucas to understand how to merge the two parts (he considered a lot of material and beam-pipe effects).

I have to understand a few things in the paper, in particular:

It seems that if a b-hadron is measured well, the D is 0.

Why is the resolution 0 at 0?

What resolution on the IP do I have to implement?

Test the code, debug it, and look at some distributions of the IP, for instance, because I need to decide which function to use to calculate the probability.

**Tuesday**

I spoke with Lucas concerning the b-tagging algorithm:

The IP is calculated in a different way... In particular, Lucas defines the IP as the distance of minimum approach to the primary vertex, whereas the ALEPH paper gives this definition on page 2 (chapter 3, second paragraph) but actually uses something more complicated, as shown in fig. 1 on page 4. At the moment I don’t understand the definition in fig. 1 either.

I have to understand what the ILD resolution formula \(\sigma = 5 \oplus 10 / (p \sin^{3/2} \theta)\ \mu\)m takes into account.

He says that, for the moment, he is considering the multiple scattering through the beam pipe and the finite resolution of the algorithm that finds the point of minimum approach. The resolution of the tracker is already taken into account because we calculate the IP with the smeared tracks (smeared by papas; but this is not done for the leptons, whereas the impact parameter of the leptons could be useful (maybe) __leptons are of course a second-order correction__). If this is true, I understand that the ILD formula is a parametrization of the effect of the tracker resolution. In any case we need to know this resolution... I don’t understand whether we need to extract it from the simulation...

He calculates the b-tag with the ATLAS and CMS algorithms, which are more complicated than the one used by ALEPH... To do what is done in the ALEPH paper I need to know the resolution function, so as to extract the significance and the probability for each track.
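For reference, my understanding of the ALEPH-style combination is the track-counting method: turn each track's IP significance into a probability of being prompt, then combine the per-track probabilities into a jet probability. A sketch, assuming a purely Gaussian resolution function (which is exactly the assumption the real analysis replaces with a resolution function measured from data):

```python
import math

def track_probability(significance):
    """Probability for a prompt track to have an IP significance at
    least this large, assuming Gaussian resolution (two-sided tail)."""
    return math.erfc(abs(significance) / math.sqrt(2.0))

def jet_probability(track_probs):
    """Combine N per-track probabilities into one jet probability:
    P_J = Pi * sum_{j=0}^{N-1} (-ln Pi)^j / j!, where Pi is the product
    of the track probabilities."""
    prod = 1.0
    for p in track_probs:
        prod *= p
    log_prod = -math.log(prod)
    return prod * sum(log_prod**j / math.factorial(j)
                      for j in range(len(track_probs)))
```

For light-quark jets this jet probability should be roughly flat in [0, 1], while b-jets pile up near 0, which is what makes it a tagging variable.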

I don’t know how to test the code I have written... really no idea... -> SOLVED: I tried the simple case of a particle whose velocity is parallel to the magnetic field, and it works. Now I’m dealing with more complicated cases, so I’m writing a simple event display.
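The parallel-to-B cross-check has a simple analytic reference: with the Lorentz force vanishing, the track is a straight line and the IP is pure geometry. A sketch of that reference calculation (a hypothetical helper for testing, not part of the Helix class):

```python
import math

def straight_line_ip(origin, direction):
    """Transverse impact parameter w.r.t. the beam axis for a
    straight-line track (e.g. velocity parallel to B along z, where the
    Lorentz force q v x B vanishes). origin and direction are (x, y, z)."""
    x0, y0 = origin[0], origin[1]
    vx, vy = direction[0], direction[1]
    vt = math.hypot(vx, vy)
    if vt == 0.0:
        # purely longitudinal motion: IP is just the transverse offset
        return math.hypot(x0, y0)
    # perpendicular distance from (0, 0) to the 2D line through (x0, y0)
    return abs(x0 * vy - y0 * vx) / vt
```

Comparing the Helix-class IP against this in the limit of a field-aligned momentum is the test described above; the more complicated curved cases then need the event display.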

**Wednesday**

I managed to debug the code by writing a simple vertex display in the transverse plane. I produced a simple generation of \(e^+ e^- \rightarrow Z \rightarrow b \bar b\) at \(\sqrt{s} = 91\) GeV to look at the IP distribution.

**Thursday**

Looking at the list of the MC particles, I don’t know how to get the MC truth (in particular I cannot find the MC parent relationship; I asked Colin whether it’s possible or not). In this sense it doesn’t seem possible to have an MC-matching algorithm, or to study the isolation and so on.

Last year we performed the photon recovery to add the photons produced by bremsstrahlung and FSR. I don’t actually know whether these are implemented (it seems possible to enable the FSR option in the PYTHIA card), and whether it’s worth studying them.

I don’t understand how often we might have one extra jet (produced by a gluon, for instance)... in this case it could be useful to cluster the particles exclusively (but we need to know which algorithm FastJet uses when it clusters jets in exclusive mode).
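For exclusive \(e^+e^-\) clustering, FastJet's `ee_kt_algorithm` is, as far as I know, the Durham algorithm; a sketch of its distance measure, assuming particles as plain `(E, px, py, pz)` tuples:

```python
import math

def durham_y(p1, p2, q2):
    """Durham (ee-kt) distance y_ij = 2 min(Ei^2, Ej^2)(1 - cos theta_ij) / Q^2.
    Exclusive clustering repeatedly merges the pair with the smallest y_ij
    until the requested number of jets remains (or until y_ij > y_cut)."""
    e1, e2 = p1[0], p2[0]
    n1 = math.sqrt(sum(c * c for c in p1[1:]))
    n2 = math.sqrt(sum(c * c for c in p2[1:]))
    cos_th = sum(a * b for a, b in zip(p1[1:], p2[1:])) / (n1 * n2)
    return 2.0 * min(e1, e2) ** 2 * (1.0 - cos_th) / q2
```

With this measure, forcing exactly 6 jets in the fully hadronic case (or 4 in the semileptonic one) would absorb an extra gluon jet into the nearest cluster rather than leaving the jet count ambiguous.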

Concerning the b-tagging, it would be nice to have, for each track, the simulated parameters, so that we can calculate the IP, smear it with a resolution parametrized by some function, calculate the significance, and obtain something like in the ALEPH paper.
