**<font size="6">Associated Production of J/$\psi$ and D$^*$ in CMS with Full Run 2 Data</font>**
[TOC]
## Introduction
Due to the high energy and luminosity of the LHC, charm production studies can be performed with great precision, and the cross sections for open charm [1] and charmonium [2] production are large. Besides that, the studies of double charmonium production and of charmonium produced in association with open charm probe the quarkonium production mechanism [3] and help the understanding of Multiple Parton Scattering (MPS) and Nonrelativistic QCD (NRQCD).
The contributions to this process can come from Double Parton Scattering (DPS) and from the intrinsic charm content of the proton.
### Motivations
The LHCb experiment has measured the channels with charmonium produced in association with open charm hadrons (D$^0$, D$^+$, D$^*$, D$_s^+$, $\Lambda_c^+$) in the forward direction [5]. Looking at the figure below, it is easy to see that the CMS kinematic region is complementary to the one covered by LHCb.
### Parton Scattering
## Monte Carlo
### Particle gun
This sample is produced with the Pythia8PtGun without pileup. The events are generated with flat p$_T$ and $\eta$ distributions. It is used to study the muon, J/$\psi$, and D$^*$ acceptances, in order to define the best region in which to produce the DPS and SPS MC samples. The instructions to run the entire chain are described on GitHub.
**J/$\psi$ and muons cfi fragment:**
```
import FWCore.ParameterSet.Config as cms

generator = cms.EDFilter("Pythia8PtGun",
    PGunParameters = cms.PSet(
        ParticleID = cms.vint32(443),
        AddAntiParticle = cms.bool(False),
        MinPt = cms.double(0.0),
        MaxPt = cms.double(100.0),
        MinEta = cms.double(-4.0),
        MaxEta = cms.double(4.0),
        MinPhi = cms.double(-3.14159265359),
        MaxPhi = cms.double(3.14159265359),
    ),
    maxEventsToPrint = cms.untracked.int32(1),
    pythiaPylistVerbosity = cms.untracked.int32(0),
    pythiaHepMCVerbosity = cms.untracked.bool(False),
    PythiaParameters = cms.PSet(
        py8JpsiDecaySettings = cms.vstring(
            '443:onMode = off',               # turn OFF all J/psi decays
            '443:addChannel = 1 1. 0 13 -13', # turn ON J/psi -> mu+ mu- only
        ),
        parameterSets = cms.vstring('py8JpsiDecaySettings')
    )
)

ProductionFilterSequence = cms.Sequence(generator)
```
**D$^*$ cfi fragment:**
```
import FWCore.ParameterSet.Config as cms

generator = cms.EDFilter("Pythia8PtGun",
    PGunParameters = cms.PSet(
        ParticleID = cms.vint32(413),
        AddAntiParticle = cms.bool(True),
        MinPt = cms.double(0.0),
        MaxPt = cms.double(100.0),
        MinEta = cms.double(-4.0),
        MaxEta = cms.double(4.0),
        MinPhi = cms.double(-3.14159265359),
        MaxPhi = cms.double(3.14159265359),
    ),
    maxEventsToPrint = cms.untracked.int32(1),
    pythiaPylistVerbosity = cms.untracked.int32(0),
    pythiaHepMCVerbosity = cms.untracked.bool(False),
    PythiaParameters = cms.PSet(
        py8DstarDecaySettings = cms.vstring(
            '413:onMode = off',                 # turn OFF all D* decays
            '413:addChannel = 1 1. 0 421 211',  # turn ON D* -> D0 pi_slow only
            '421:onMode = off',                 # turn OFF all D0 decays
            '421:addChannel = 1 1. 0 -321 211', # turn ON D0 -> K- pi+ only
        ),
        parameterSets = cms.vstring('py8DstarDecaySettings')
    )
)

ProductionFilterSequence = cms.Sequence(generator)
```
### DPS
#### Generating events
*(To be written.)*
#### Generation Summary
##### Sample: dimuon p$_T$ 9-30 GeV/c
| Parameter | Value |
|----------------------------------------------|----------------------------------------------|
| Filter Efficiency (event-level) | 1617 / 2.94×10⁷ = **5.50×10⁻⁵ ± 1.368×10⁻⁶** |
| Final Cross Section (after filter) | **373.3 ± 9.3 pb** |
| Equivalent Luminosity for 1M Events (after filter) | **2.68 ± 0.67 fb⁻¹** |
##### Sample: dimuon p$_T$ 30-50 GeV/c
| Parameter | Value |
|----------------------------------------------|----------------------------------------------|
| Filter Efficiency (event-level) | 4517 / 5.05×10⁷ = **8.95×10⁻⁵ ± 1.33×10⁻⁶** |
| Final Cross Section (after filter) | **0.135 ± 0.002 pb** |
| Equivalent Luminosity for 1M Events (after filter) | **7.40×10³ ± 1.11×10² fb⁻¹** |
##### Sample: dimuon p$_T$ 50-100 GeV/c
| Parameter | Value |
|----------------------------------------------|----------------------------------------------|
| Filter Efficiency (event-level) | 6611 / 3.52×10⁷ = **1.88×10⁻⁴ ± 2.31×10⁻⁶** |
| Final Cross Section (after filter) | **1.47×10⁻³ ± 1.80×10⁻⁵ pb** |
| Equivalent Luminosity for 1M Events (after filter) | **6.82×10⁵ ± 8.42×10³ fb⁻¹** |
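For reference, a minimal sketch of how the numbers in these tables are related. The pre-filter cross section is an assumed input here (in practice it is reported by the GenXSecAnalyzer at the end of generation, see the genXsec_cfg.py command in the appendix):
```
import math

# Dimuon pT 9-30 GeV/c sample. sigma_gen_pb is the pre-filter generator cross
# section (assumed value, chosen to be consistent with the table above).
n_pass, n_gen = 1617, 2.94e7
sigma_gen_pb = 6.79e6

eff = n_pass / n_gen                            # filter efficiency
eff_err = math.sqrt(eff * (1.0 - eff) / n_gen)  # binomial uncertainty
sigma_filt_pb = sigma_gen_pb * eff              # cross section after the filter

# Equivalent luminosity for 1M filtered events (1 fb^-1 = 1000 pb^-1).
lumi_fb = 1.0e6 / sigma_filt_pb / 1000.0
print(f"eff = {eff:.2e} +- {eff_err:.2e}")
print(f"sigma = {sigma_filt_pb:.1f} pb, L(1M events) = {lumi_fb:.2f} fb^-1")
```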
### SPS
In this section, the setup for producing single parton scattering events is described. The Helac-Onia package is used to generate the matrix elements for the calculation of the heavy quarkonium helicity amplitudes within the nonrelativistic QCD factorization framework [^sps1].
#### Helac-Onia
This tutorial is based on Helac-Onia 2.7.1 [^sps2] (to get HELAC-Onia-2.5.3-VFNS you can ask the Helac-Onia developer). It is strongly recommended to have a look at the README file. There are two ways to compile; here the second way, via Python script, is shown. It is worth mentioning that the machine must have a valid <i>gfortran</i> compiler, <i>e.g.</i> from <i>gcc</i>. If the machine does not have a <i>gfortran</i> compiler, it can be installed with:
```
sudo apt-get install gfortran
```
Before compiling Helac-Onia, the package LHAPDF [^sps3] must be installed. This is a tool to evaluate parton distribution functions that Helac-Onia relies on. Instead of downloading it with <i>wget</i>, go directly to the link. The instructions here are prepared with version 6.4.0. Follow the rest of the commands given on the webpage and it should work normally. Also here, make sure the correct compiler is being used. Then, finally, in the HELAC-Onia-2.7.1 directory, run:
```
./config
```
This command may print some warnings; don't worry. Now, HELAC-Onia can be run with:
```
./ho_cluster
```
Output similar to this should appear:
```
============================================================
= =
= H E L A C - O n i a =
= =
= VERSION 2.7.1 2021-11-26 =
= =
= =
= Author: Hua-Sheng Shao =
= Affliation: LPTHE =
= Contact: huasheng.shao@lpthe.jussieu.fr =
= Reference: Comput.Phys.Commun. 184 (2013) 2562 =
= arXiv: 1212.5293 =
= Reference: Comput.Phys.Commun. 198 (2016) 238 =
= arXiv: 1507.03435 =
= Webpage: http://helac-phegas.web.cern.ch/helac-phegas =
= =
============================================================
```
To test it, the following commands can be used,
```
set preunw = 100000
set nmc = 1000000
set nopt = 100000
set nopt_step = 100000
set noptlim = 1000000
generate p p > cc~(3s11) c c~
define cc = c c~
generate g cc > cc~(3s11) cc
generate cc g > cc~(3s11) cc
decay jpsi > m+ m- @ 1d0
launch
```
This generates an LHE file containing J/$\psi$ + c$\overline{c}$ events. The LHE file will be located at: PROC/results//results.lhe
It is possible to create a .ho file and provide it as input to Helac-Onia. Suppose <i>generate_Jpsi_3S11_ccbar_seed_2240.ho</i> is a file with the following content:
```
set seed = 2240
set preunw = 100000
set nmc = 1000000
set nopt = 100000
set nopt_step = 100000
set noptlim = 1000000
set minptconia = 6d0
generate p p > cc~(3s11) c c~
define cc = c c~
generate g cc > cc~(3s11) cc
generate cc g > cc~(3s11) cc
decay jpsi > m+ m- @ 1d0
launch
```
To provide this file as input to Helac-Onia, the following command is used
```
./ho_cluster < generate_Jpsi_3S11_ccbar_seed_2240.ho
```
**!!!IMPORTANT!!!**
In HELAC-Onia-2.7.1 (or HELAC-Onia-2.5.3-VFNS), the file input/user.inp must have the parameter cmass set to:
```
cmass 1.54845d0
```
This way the J/$\psi$ peak mass will be centered at ~3.097 GeV, since the generated mass is twice the charm quark mass: 2 × 1.54845 GeV ≈ 3.097 GeV.
#### Using condor to generate LHE files
To generate many LHE files, a condor script must be used. The repository is at this link [^sps5]. In the main directory (SPS_MC) two directories are used: condor_sps (contains the condor setup) and jpsi_ccbar_3FS_4FS (contains fragments, etc.). Sections **(1)** and **(2)** give a description of the files used in the generation. In section **(3)** the steps and commands used to generate the files are described.
**1. condor_sps directory**
First of all, the folders containing both Helac-Onia packages (HELAC-Onia-2.7.1 and HELAC-Onia-2.5.3-VFNS) must be in this directory. The other files are:
* **quick_setup.sh:** Used to activate the grid certificate and copy it to the current area.
* **jobs_template.jdl:** Contains condor options. **You need to change the variable <i>x509userproxy</i> and set the correct path to the grid certificate.**
* **submit.sh:** Condor submission file. The <i>cd</i> command should point to the **condor_jobs** directory.
* **condor.py:** Python script that is used to generate the LHE files.
**2. jpsi_ccbar_3FS_4FS directory**
It is suggested to create this directory on EOS, where more disk space is available.
* **condor_jobs/**: Directory where the condor jobs are produced. **IMPORTANT**: the script **run_helac.py** is in this directory and it should not be deleted! This script is used by the condor code to call the Helac-Onia package.
* **condor_jobs_organized/:** Directory to store all jobs produced
* **fragments/**: Directory to store the fragments.
* **lhe_final/**: Directory to store the final .lhe files after the proper reweighting.
* **generate_Jpsi_3FS_4FS_ccbar_model.ho**: fragment with the information about the process. To control the pT of the generated quarkonia <i>set minptconia</i> should be changed.
* **create_frags.py**: Generates many <i>.ho</i> files (controlled by the variable <i>nrun</i>) in the folder **fragments/**; a sketch of this logic is shown after this list.
* **lhe_combine_seeds_my.py/lhe_combine_vfns.py:** Combine the LHE files produced in the condor_jobs_organized/ directory, moving the final files to lhe_final/. In this step, the <i>results.lhe</i> files are combined and the reweighting is performed. The file **lhe_combine_seeds_my.py** is used to combine the LHE files produced by Helac-Onia 2.7.1, and **lhe_combine_vfns.py** is used to combine the LHE files produced by HELAC-Onia-2.5.3-VFNS.
* **rest_job.py:** Used to detect failed condor jobs.
* **run_combine_vfns.py:** Used only with HELAC-Onia-2.5.3-VFNS generation. It combines the Helac-Onia subprocess into a single lhe file (results.lhe).
* **take_jobs.py:** Takes the .lhe files produced in condor_jobs/job_x/PROC_HO_X/results and moves them to condor_jobs_organized/.
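As referenced above, a minimal sketch of what **create_frags.py** does, assuming the seed line of the model file has the form `set seed = N` (the real script may differ in details):
```
# Hypothetical sketch of create_frags.py: read the model .ho file and write
# nrun copies, each with a different seed.
import os
import re

nrun = 200                                    # number of fragments to create
model = "generate_Jpsi_3FS_4FS_ccbar_model.ho"
os.makedirs("fragments", exist_ok=True)

with open(model) as f:
    template = f.read()

for seed in range(nrun):
    # Replace the 'set seed = ...' line, keeping everything else untouched.
    frag = re.sub(r"set seed = \d+", f"set seed = {seed}", template)
    name = f"fragments/generate_Jpsi_3FS_4FS_ccbar_seed_{seed}.ho"
    with open(name, "w") as out:
        out.write(frag)
```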
**3. Running the code**
Go to **jpsi_ccbar_3FS_4FS** directory, then:
**a)** Open **generate_Jpsi_3FS_4FS_ccbar_model.ho**. The code should look like this (for both Helac-Onia versions):
```
set seed = 7
set preunw = 100000
set nmc = 1000000
set nopt = 100000
set nopt_step = 100000
set noptlim = 1000000
set minptconia = 6d0
generate p p > cc~(3s11) c c~
define cc = c c~
generate g cc > cc~(3s11) cc
generate cc g > cc~(3s11) cc
decay jpsi > m+ m- @ 1d0
launch
```
The line **set minptconia = 6d0** is the p$_T$ threshold (6 GeV/c) for the quarkonium production (in this case, J/$\psi$). The higher this value, the higher the p$_T$ of the generated particle. It is suggested to set it to at least 10 GeV/c below the wanted p$_T$ (when possible). For instance:
- p$_T^{J/\psi}$ > 10 GeV/c $\rightarrow$ set minptconia = 6d0; (6 is chosen because for lower thresholds the production efficiency is very low)
- p$_T^{J/\psi}$ > 30 GeV/c $\rightarrow$ set minptconia = 20d0
- p$_T^{J/\psi}$ > 50 GeV/c $\rightarrow$ set minptconia = 40d0;
**b)** Open **create_frags.py**. It reads the **generate_Jpsi_3FS_4FS_ccbar_model.ho** file and creates many fragments with different seeds. It is suggested to generate 200 fragments to provide to condor. To run it, just do:
```
python3 create_frags.py
```
**c)** Go to the fragments/ directory and do:
```
ls -d $PWD/* > jpsi_ccbar_pt6_0to200_path.txt
```
The txt file created above contains the paths to the fragments. Note that pt6 refers to fragments with **set minptconia = 6d0** and 0to200 refers to seeds from 0 to 200. (This is just a suggestion; you can name the file whatever way you want.)
**d)** Choose the Helac-Onia version
Go to the condor_jobs directory and edit the file **run_helac.py** at line 36:
```
helac_version = '/afs/cern.ch/work/m/mabarros/public/MonteCarlo/SPS/condor_sps/HELAC-Onia-2.5.3-VFNS/ho_cluster' # HELAC-Onia-2.5.3-VFNS ; HELAC-Onia-2.7.1
```
The path should point to the **condor_sps** directory and you should set the Helac-Onia version: either HELAC-Onia-2.7.1 (for 3FS + 4FS production) or HELAC-Onia-2.5.3-VFNS (for VFNS production).
**e)** Go to condor_sps directory
Move the file **jpsi_ccbar_pt6_0to200_path.txt** to here. Then,
```
. quick_setup.sh
python3 condor.py -n jpsi_ccbar_pt6_0to200 -s
```
That will run 200 condor jobs. The jobs will be created in **jpsi_ccbar_3FS_4FS/condor_jobs/**. Each job directory should look like this (taking job_0 as an example):
```
job_0/PROC_HO_0
```
In the directory PROC_HO_0 you can find several folders with the Helac-Onia subprocesses, P0_calc_1, P0_calc_2, ..., P0_calc_10, and a folder called results.
**!!!IMPORTANT!!!**
**f)** Depending on the Helac-Onia version, the next steps are not the same.
##### Working with Helac-Onia 2.7.1 (3FS + 4FS)
If you are working with Helac-Onia 2.7.1 the **results/** directory will have three files: **results.HwU**, **results.out**, and **results.lhe**. Move to the **jpsi_ccbar_3FS_4FS** directory. Now you need to edit **take_jobs.py**: at line 50 you choose the number of jobs by setting the variable <i>n</i>. The variable **transfer_model_path** should point to the **condor_jobs_organized/** directory, the variable **range_f** is a list with the numbers of the jobs to be organized, and **jobs_model_path** should point to **condor_jobs/job_X/PROC_HO_0/results/results.lhe**. Please note that a problematic job will sometimes create a folder PROC_HO_1, PROC_HO_2, etc. Therefore, you will sometimes need to call the function take_lhe(jobs_model_path, transfer_model_path, range_f) again (you will see several commented lines that do this). After that, simply do:
```
python3 take_jobs.py
```
All files **condor_jobs/job_X/PROC_HO_0/results/results.lhe** will be organized on **condor_jobs_organized/** with the names results_job_0.lhe, results_job_1.lhe,..., results_job_200.lhe.
##### Working with HELAC-Onia-2.5.3-VFNS (VFNS)
If you are working with HELAC-Onia-2.5.3-VFNS the **results/** directory will have only *two* files: **results.HwU** and **results.out**. Before running **take_jobs.py** you need to run **run_combine_vfns.py** to produce the **results.lhe** files. This script calls **lhe_combine_vfns.py** and merges the subprocess files located in **condor_jobs/job_X/PROC_HO_0/P0_calc_x/output/** into a single **results.lhe** file. You just need to set the number of jobs and run:
```
python run_combine_vfns.py
```
**!!!IMPORTANT!!!**
This program calls **lhe_combine_vfns.py**, which runs with Python 2.
After that you can do the same as was done for Helac-Onia 2.7.1 (read the steps on the previous topic):
```
python3 take_jobs.py
```
**g)** Running failed jobs
Before weighting the .lhe files you can rerun the failed jobs. You need to edit **rest_job.py**. This script compares the files in the **condor_jobs_organized/** directory with the fragments you produced in **fragments/**. The jobs that are not in the **condor_jobs_organized/** directory but have a fragment will be added to a new .txt file. The name of this file can be set on line 28. To run the code, just do:
```
python3 rest_job.py
```
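A minimal sketch of the detection logic, assuming the file naming conventions described above (the real **rest_job.py** may differ):
```
# Hypothetical sketch of rest_job.py: any seed that has a fragment but no
# combined LHE file is written to a new txt file of fragment paths.
import glob
import re

def seeds(pattern, regex):
    return {int(re.search(regex, p).group(1)) for p in glob.glob(pattern)}

produced = seeds("condor_jobs_organized/results_job_*.lhe", r"results_job_(\d+)")
expected = seeds("fragments/*_seed_*.ho", r"_seed_(\d+)\.ho")

# Output file name is an example (set on line 28 of the real script).
with open("jpsi_ccbar_failed_jobs_path.txt", "w") as out:
    for seed in sorted(expected - produced):
        out.write(f"fragments/generate_Jpsi_3FS_4FS_ccbar_seed_{seed}.ho\n")
```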
**h)** Combining and weighting the lhe files
Finally, after producing the .lhe files you want, you need to weight them properly. To do this, you need to edit the **lhe_combine_seeds_my.py** script. This code combines and weights the files in the **condor_jobs_organized/** directory and moves the output to the **lhe_final** directory. To run it, do (you need to install the <i>psutil</i> package):
```
python3 lhe_combine_seeds_my.py
```
Depending on the number of lhe files to combine, this may take a long time. After it finishes, you will find the final files in the **lhe_final** directory. Now the lhe files are prepared and the hadronization can be performed.
#### Converting LHE files to ROOT format
In this step, the fragment used to perform the hadronization of the process is shown. The **results.lhe** file is converted to a ROOT file. After this conversion, the ROOT file is provided as the input to the GEN-SIM step. From there, all the other steps can be followed: DIGI2RAW, HLT, AOD, and NanoAODPlus. The commands are highlighted in this document [^sps4]. It is worth mentioning that each year (2016, 2017, 2018) has its own particularities. All fragments, as well as the other codes, are in this repository [^sps5].
### HelacOnia Datacards
#### J/$\psi$ + $b\overline{b}$
The datacard for producing both 3FS + 4FS and VFNS is:
```
set seed = 7
set preunw = 100000
set nmc = 1000000
set nopt = 100000
set nopt_step = 100000
set noptlim = 1000000
set minptconia = 40d0
generate p p > cc~(3s11) b b~
define cc = c c~
define bb = b b~
generate g cc > cc~(3s11) bb
generate cc g > cc~(3s11) bb
decay jpsi > m+ m- @ 1d0
launch
```
### b -> J/psi: background MC
A fraction of the events contains non-prompt J/$\psi$, which come from b quarks. Therefore, in order to properly compare data and MC (before applying sPlot weights!) this sample needs to be simulated. It is produced with Pythia 8 and the strategy for generating it can be accessed in this repository [^btojpsi]. In this case, the phase space cuts are:
| Particle | p$_T$ range (GeV/c) | η range |
|----------|---------------------------|--------------------|
| **J/ψ** | 25 < pT < 100 | -1.2 < η < 1.2 |
| **μ±** | 3 < pT < 150 | -2.4 < η < 2.4 |
The table below shows the generator parameters obtained with this sample.
#### Generation Summary
| Parameter | Value |
|----------------------------------------------|----------------------------------------------|
| Filter Efficiency (event-level) | (356) / (5.53×10⁷) = **6.438×10⁻⁶ ± 3.412×10⁻⁷** |
| Final Cross Section (after filter) | **578.3 ± 30.65 pb** |
| Equivalent Luminosity for 1M Events (after filter) | **1.729 ± 0.09166 1/fb** |
#### Calculating the number of events for central production
The number of events to be requested is estimated as N$_{fit}$ = L × σ and N$_{before\_cuts}$ = N$_{fit}$ / $\epsilon_{cuts}$:
| Dataset | Luminosity (fb⁻¹) | $\epsilon_{cuts}$ | Cross-section (pb) | N$_{fit}$ | N$_{before\_cuts}$ |
| ------------ | ----------------- | --------------- | ------------------ | ---------- | ------------------ |
| 2016-pre-VFP | 13.09 | 0.070 | 578 | 7,566,020 | 108,086,000 |
| 2016-post-VFP | 13.26 | 0.060 | 578 | 7,664,280 | 127,738,000 |
| 2017 | 41.48 | 0.080 | 578 | 23,975,440 | 299,693,000 |
| 2018 | 57.69 | 0.084 | 578 | 33,344,820 | 396,962,142 |
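A quick cross-check of this arithmetic for the 2017 row (values taken from the table above):
```
# N_fit = L x sigma, and N_before_cuts = N_fit / eps_cuts.
lumi_fb = 41.48      # integrated luminosity, fb^-1
sigma_pb = 578.0     # filtered b -> J/psi cross section, pb
eps_cuts = 0.080     # analysis selection efficiency

n_fit = lumi_fb * 1000.0 * sigma_pb     # 1 fb^-1 = 1000 pb^-1
n_before_cuts = n_fit / eps_cuts
print(f"N_fit = {n_fit:,.0f}, N_before_cuts = {n_before_cuts:,.0f}")
```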
### Monte Carlo cross sections - gen level
Using code xxxx, the generator-level cross sections are obtained from the GEN-SIM samples. The values are shown in the table below.
### **Cross sections for different processes and dimuon pT ranges**
| Process Type | Dimuon p$_T$ Range (GeV/c) | $\sigma$ (pb) |
|----------------------------------|------------------------------|--------------------|
| DPS ($c\bar{c}$) | 9–30 | 373.3 |
| DPS ($c\bar{c}$) | 30–50 | 1.35 $\cdot$ 10$^{-1}$ |
| DPS ($c\bar{c}$) | 50–100 | 1.47 $\cdot$ 10$^{-3}$ |
| DPS ($b\bar{b}$) | 9–30 | 6.25 $\cdot$ 10$^{-1}$ |
| DPS ($b\bar{b}$) | 30–50 | 7.05 $\cdot$ 10$^{-4}$ |
| DPS ($b\bar{b}$) | 50–100 | 7.83 $\cdot$ 10$^{-6}$ |
| SPS (3FS) ($c\bar{c}$) | 25–100 | 8.14 $\cdot$ 10$^{-3}$ |
| SPS (3FS) ($b\bar{b}$) | 25–100 | 5.70 $\cdot$ 10$^{-3}$ |
| b $\rightarrow$ (J/$\psi$) | 25–100 | 578 |
### Pile up
In this section one can find the strategy for obtaining the ROOT files with the pileup content for the **Run II Ultra Legacy processing**. At the end, a Jupyter notebook reads the two ROOT files (one for data, one for MC), calculates the weights, and makes the plots.
#### Data
In order to get details concerning the pileup histograms for data one can go to this twiki [^pu1].
In order to get the histograms one can either execute the command:
```
pileupCalc.py -i MyAnalysisJSON.txt --inputLumiJSON pileup_latest.txt --calcMode true --minBiasXsec 69200 --maxPileupBin 100 --numPileupBins 100 MyDataPileupHistogram.root
```
where,
* MyAnalysisJSON.txt is the JSON file defining the lumi sections that your analysis uses. This is generally the appropriate certification JSON file from PdmV or processedLumis.json from your CRAB job.
* **2016:** '/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/Legacy_2016/Cert_271036-284044_13TeV_Legacy2016_Collisions16_JSON.txt'
* **2017:** '/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/Legacy_2017/Cert_294927-306462_13TeV_UL2017_Collisions17_GoldenJSON.txt'
* **2018:** '/afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions18/13TeV/Legacy_2018/Cert_314472-325175_13TeV_Legacy2018_Collisions18_JSON.txt'
* pileup_latest.txt is the appropriate pileup file for your analysis. See below for the locations of the central files.
* minBiasXsec defines the minimum bias cross section to use (in μb). The current run 2 recommended value is 69200. Please see below for discussion of this value.
* MyDataPileupHistogram.root is the name of the output file.
Or you can simply take it from the paths:
* **2018:** /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions18/13TeV/PileUp/UltraLegacy/
* **2017:** /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions17/13TeV/PileUp/UltraLegacy/
* **2016:** /afs/cern.ch/cms/CAF/CMSCOMM/COMM_DQM/certification/Collisions16/13TeV/PileUp/UltraLegacy/
#### Monte Carlo
In order to get the MC histogram one needs to install this tool [^pu2].
For UL 2016,17,18 datasets in CMSSW_10_6_X v1 the installation steps are:
```
wget https://raw.githubusercontent.com/UHH2/UHH2/RunII_106X_v1/scripts/install.sh
source install.sh
```
If this exceeds the disk quota, do:
```
export CMSSW_GIT_REFERENCE=<DIRECTORY_WITH_ENOUGH_SPACE>/cmssw.git
```
and then run the first step again.
After installing it, compile:
```
cmsenv
cd ../../../SFrame
source setup.sh
make -j4
cd ../CMSSW_*/src/UHH2
make -j9
```
Then, in the UHH2 folder:
```
cd scripts/
```
The mixing modules with the pileup content for UL are not present in the CMSSW release. So, before running the code <em>makeMCPileupHist.py</em>, one should provide the mixing modules as arguments. The Python files with each mixing module are:
* **2018:** https://github.com/cms-sw/cmssw/blob/master/SimGeneral/MixingModule/python/mix_2018_25ns_UltraLegacy_PoissonOOTPU_cfi.py
* **2017:** https://github.com/cms-sw/cmssw/blob/master/SimGeneral/MixingModule/python/mix_2017_25ns_UltraLegacy_PoissonOOTPU_cfi.py
* **2016:** https://github.com/cms-sw/cmssw/blob/master/SimGeneral/MixingModule/python/mix_2016_25ns_UltraLegacy_PoissonOOTPU_cfi.py
Put these three files on the <em> scripts/ </em> folder and then run (don't forget cmsenv):
* **2018:** ./makeMCPileupHist.py mix_2018_25ns_UltraLegacy_PoissonOOTPU_cfi --Nbins=99 --min=0 --max=99
* **2017:** ./makeMCPileupHist.py mix_2017_25ns_UltraLegacy_PoissonOOTPU_cfi --Nbins=99 --min=0 --max=99
* **2016:** ./makeMCPileupHist.py mix_2016_25ns_UltraLegacy_PoissonOOTPU_cfi --Nbins=99 --min=0 --max=99
This will return a root file called <em> PileupMC.root </em> containing the pileup profile for the MC (normalized). Note that as the data histograms have 99 bins the MC histograms must have the same number of bins!
#### Producing plots and getting the weights
A Jupyter notebook has been developed for getting the weights and producing the plots. You can find it here [^pu3]. It provides two functions:
* **pileup_reweight(hist_data, hist_mc):** plots the comparison between data and MC **before** the reweighting and returns two numpy arrays containing the bins and the weighted MC values per bin.
* **plot_mc_weight(hist_data, hist_mc):** plots the comparison between data and MC **after** the reweighting.
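A minimal sketch of the reweighting logic behind these functions, assuming both histograms are named `pileup` in their files (check the actual names in the ROOT files):
```
import numpy as np
import uproot

# Read the data and MC pileup profiles (histogram names are assumptions).
data_vals, edges = uproot.open("MyDataPileupHistogram.root")["pileup"].to_numpy()
mc_vals, _ = uproot.open("PileupMC.root")["pileup"].to_numpy()

# Normalize both profiles to unit area, then take the bin-by-bin ratio.
data_norm = data_vals / data_vals.sum()
mc_norm = mc_vals / mc_vals.sum()
weights = np.divide(data_norm, mc_norm, out=np.zeros_like(data_norm),
                    where=mc_norm > 0)   # avoid division by empty MC bins
```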
## Data
## Processing data and monte carlo
In this section, the procedure used to perform the analysis is described. It shows how to read the NanoAODplus files, apply the selection criteria, and produce data and MC histograms and distributions for fitting, comparison, and other purposes.
### Processing monte carlo
Once the NanoAODPlus samples are ready, a dedicated Python code is used to analyse the samples. In this step, a condor framework is used to read the NanoAODPlus ROOT files and perform the analysis using coffea and awkward array. The GitHub page with the content can be found here [^gitmccond].
The directory OniaOpenCharmRun2ULAna contains all codes used for the analysis:
* **config/config_files.py**: To choose the dataset. Possibilities: 2016APV, 2016, 2017, 2018.
* **nanoAODplus_condor.py**: Contains the technicalities to run many files using uproot. In principle, you don't need to change anything.
* **nanoAODplus_processor/EventSelectorProcessor.py**: This is the main script. This is where the analysis is performed (reweighting, cuts, histograms, etc.).
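For orientation, a minimal sketch of what a coffea processor of this kind looks like; the branch names and cuts below are purely illustrative, the real selection lives in **EventSelectorProcessor.py**:
```
import awkward as ak
from coffea import processor

class MinimalSelectorSketch(processor.ProcessorABC):
    """Illustrative event selection; not the actual EventSelectorProcessor."""

    def process(self, events):
        # Hypothetical cuts on the muon collection of a NanoAOD-like input.
        muons = events.Muon
        good = muons[(muons.pt > 3.0) & (abs(muons.eta) < 2.4)]
        # Count events with at least two selected muons (dimuon candidates).
        return {"n_events": len(events),
                "n_selected": int(ak.sum(ak.num(good) >= 2))}

    def postprocess(self, accumulator):
        return accumulator
```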
## Luminosity Calculation
## Signal extraction
As explained in AN-21-143, the signal extraction is performed with the RooFit package [^fi1].
*(To do: explain the PDFs, the fitting strategy, and the yield extraction; start the GitHub documentation.)*
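Until that documentation is written, here is a minimal PyROOT/RooFit sketch of an extended unbinned mass fit. All shapes, ranges, and values are illustrative placeholders, not the PDFs used in the analysis:
```
import ROOT

# Dimuon invariant mass around the J/psi peak (illustrative range).
mass = ROOT.RooRealVar("mass", "m(#mu#mu) [GeV]", 2.9, 3.3)

# Placeholder shapes: Gaussian signal plus exponential background.
mean = ROOT.RooRealVar("mean", "mean", 3.097, 3.05, 3.15)
sigma = ROOT.RooRealVar("sigma", "sigma", 0.03, 0.005, 0.1)
signal = ROOT.RooGaussian("signal", "signal", mass, mean, sigma)
slope = ROOT.RooRealVar("slope", "slope", -1.0, -5.0, 0.0)
background = ROOT.RooExponential("background", "background", mass, slope)

# Extended model: the yields are free parameters of the fit.
n_sig = ROOT.RooRealVar("n_sig", "signal yield", 1000, 0, 1e7)
n_bkg = ROOT.RooRealVar("n_bkg", "background yield", 1000, 0, 1e7)
model = ROOT.RooAddPdf("model", "sig+bkg", ROOT.RooArgList(signal, background),
                       ROOT.RooArgList(n_sig, n_bkg))

data = model.generate(ROOT.RooArgSet(mass), 5000)  # toy data for the sketch
model.fitTo(data, ROOT.RooFit.PrintLevel(-1))
print(f"N_sig = {n_sig.getVal():.0f} +- {n_sig.getError():.0f}")
```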
## Acceptance
## Efficiency
The efficiencies are calculated using the equations shown in Section 5 of the Analysis Note. In this section, the codes used for calculating it are presented.
## Systematic uncertainties
### Trigger
#### DPS
When running nanoAODplus_efficiency.py, the paths to the NanoAODPlus files should be changed in lines 41-43. In this case, they are:
```
/eos/user/m/mabarros/Monte_Carlo/dps_official/ccbar/JpsiPt9To30ToMuMuDstarToD0pi
/eos/user/m/mabarros/Monte_Carlo/dps_official/ccbar/JpsiPt30To50ToMuMuDstarToD0pi
/eos/user/m/mabarros/Monte_Carlo/dps_official/ccbar/JpsiPt50To100ToMuMuDstarToD0pi
```
#### Getting files - ccbar
**A) 2016-pre-VFP**
* **Dimuon p$_T$ range: 50-100 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/DPS_D0ToKPi_JPsiPt-50To100_JPsiFilter_TuneCP5_13TeV-pythia8-evtgen/DPS_D0ToKPi_JPsiPt-50To100_JPsiFilter_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL16RECOAPV/220823_051151/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
**C) 2017**
* **Dimuon p$_T$ range: 9-30 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/DPS_D0ToKPi_JPsiPt-9To30_JPsiFilter_TuneCP5_13TeV-pythia8-evtgen/DPS_D0ToKPi_JPsiPt-9To30_JPsiFilter_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL17RECO/230124_161004/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/DPS_D0ToKPi_JPsiPt-9To30_JPsiFilter_TuneCP5_13TeV-pythia8-evtgen/DPS_D0ToKPi_JPsiPt-9To30_JPsiFilter_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL17RECO/230124_161004/0001/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 30-50 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/DPS_D0ToKPi_JPsiPt-30To50_JPsiFilter_TuneCP5_13TeV-pythia8-evtgen/DPS_D0ToKPi_JPsiPt-30To50_JPsiFilter_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL17RECO/230124_161011/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 50-100 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/DPS_D0ToKPi_JPsiPt-50To100_JPsiFilter_TuneCP5_13TeV-pythia8-evtgen/DPS_D0ToKPi_JPsiPt-50To100_JPsiFilter_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL17RECO/220823_050416/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
#### Getting files - bbbar
**A) 2016-pre-VFP**
**B) 2016-pos-VFP**
**C) 2017**
* **Dimuon p$_T$ range: 9-30 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/D0ToKPi_Jpsi9to30_HardQCD_TuneCP5_13TeV-pythia8-evtgen/D0ToKPi_Jpsi9to30_HardQCD_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL17RECO/241128_165410/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 30-50 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/D0ToKPi_Jpsi30to50_HardQCD_TuneCP5_13TeV-pythia8-evtgen/D0ToKPi_Jpsi30to50_HardQCD_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL17RECO/241128_165417/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 50-100 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/D0ToKPi_Jpsi50to100_HardQCD_TuneCP5_13TeV-pythia8-evtgen/D0ToKPi_Jpsi50to100_HardQCD_TuneCP5_13TeV-pythia8-evtgenRunIISummer20UL17RECO/241128_165423/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
**D) 2018**
#### SPS
##### Getting files
Below, the commands used to easily transfer the files to your lxplus/EOS area are shown.
Remember, as you are going to use grid services, do this:
```voms-proxy-init --rfc --voms cms -valid 192:00```
**A) 2016-pre-VFP**
* **Dimuon p$_T$ range: 9-30 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/SPS_D0ToKPi_JPsiPt-9To30_TuneCP5_13TeV-helaconia-pythia8-evtgen/SPS_D0ToKPi_JPsiPt-9To30_TuneCP5_13TeV-helaconia-pythia8-evtgenRunIISummer20UL16RECOAPV/230211_202103/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 30-50 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/SPS_D0ToKPi_JPsiPt-30To50_TuneCP5_13TeV-helaconia-pythia8-evtgen/SPS_D0ToKPi_JPsiPt-30To50_TuneCP5_13TeV-helaconia-pythia8-evtgenRunIISummer20UL16RECOAPV/230211_202109/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 50-100 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/SPS_D0ToKPi_JPsiPt-50To100_TuneCP5_13TeV-helaconia-pythia8-evtgen/SPS_D0ToKPi_JPsiPt-50To100_TuneCP5_13TeV-helaconia-pythia8-evtgenRunIISummer20UL16RECOAPV/230211_202116/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
**B) 2016**
* **Dimuon p$_T$ range: 9-30 GeV/c**
* **Dimuon p$_T$ range: 30-50 GeV/c**
* **Dimuon p$_T$ range: 50-100 GeV/c**
**C) 2017**
* **Dimuon p$_T$ range: 9-30 GeV/c**
* **Dimuon p$_T$ range: 30-50 GeV/c**
* **Dimuon p$_T$ range: 50-100 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/SPS_D0ToKPi_JPsiPt-50To100_TuneCP5_13TeV-helaconia-pythia8-evtgen/SPS_D0ToKPi_JPsiPt-50To100_TuneCP5_13TeV-helaconia-pythia8-evtgenRunIISummer20UL17RECO/230211_204309/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
**D) 2018**
* **Dimuon p$_T$ range: 9-30 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/SPS_D0ToKPi_JPsiPt-9To30_TuneCP5_13TeV-helaconia-pythia8-evtgen/SPS_D0ToKPi_JPsiPt-9To30_TuneCP5_13TeV-helaconia-pythia8-evtgenRunIISummer20UL18RECO/230211_204657/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 30-50 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/SPS_D0ToKPi_JPsiPt-30To50_TuneCP5_13TeV-helaconia-pythia8-evtgen/SPS_D0ToKPi_JPsiPt-30To50_TuneCP5_13TeV-helaconia-pythia8-evtgenRunIISummer20UL18RECO/230214_124749/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
* **Dimuon p$_T$ range: 50-100 GeV/c**
```
xrdfs root://k8s-transfer-18.ultralight.org:1094 ls /store/group/uerj/mabarros/SPS_D0ToKPi_JPsiPt-50To100_TuneCP5_13TeV-helaconia-pythia8-evtgen/SPS_D0ToKPi_JPsiPt-50To100_TuneCP5_13TeV-helaconia-pythia8-evtgenRunIISummer20UL18RECO/230211_204713/0000/ | grep '\.root$' | while read file; do
xrdcp root://k8s-transfer-18.ultralight.org:1094/$file .
done
```
#### Running the efficiency script
```
python3 nanoAODplus_efficiency.py -y 2016APV
python3 nanoAODplus_efficiency.py -y 2016
python3 nanoAODplus_efficiency.py -y 2017
python3 nanoAODplus_efficiency.py -y 2018
```
#### Getting the efficiency values
## Important References
Intrinsic charm of the proton:
https://www.sciencedirect.com/science/article/abs/pii/0370269380903640?via%3Dihub
https://journals.aps.org/prd/abstract/10.1103/PhysRevD.81.051502
[^sps1]: "HELAC-Onia: An automatic matrix element generator for heavy quarkonium physics", H.-S. Shao
[^sps2]: <http://hshao.web.cern.ch/hshao/helaconia.html> "Hua-Sheng Shao's home page. Retrieved April 25, 2022"
[^sps3]: <https://lhapdf.hepforge.org/install.html> "Installation instructions - LHAPDF. Retrieved April 25, 2022"
[^sps4]: <https://docs.google.com/document/d/1ddvL_9gktE_tZrjvrtJ6h8esvosE-xlosJSV_1gHY2Q/edit> "UL LogBook. Retrieved April 25, 2022"
[^sps5]: <https://github.com/Mapse/SPS_MC> "SPS MC production guide"
[^pu1]: <https://twiki.cern.ch/twiki/bin/viewauth/CMS/PileupJSONFileforData> "Data Pile up"
[^pu2]: <https://github.com/UHH2/UHH2/wiki/Installing,-Compiling,-Ntuples-(UL-2016,17,18-datasets-in-CMSSW_10_6_X-v1)> "UHH2 ntuples"
[^pu3]: <https://github.com/Mapse/OniaOpenCharmRun2ULAna/blob/newProcessing/pileup_reweight/pileup_reweight.ipynb> "pu reweighting"
[^gitmccond]: <https://github.com/Mapse/condor_mc_lxplus> "Condor MC"
[^fi1]: <https://root.cern.ch/download/doc/RooFit_Users_Manual_2.91-33.pdf> "RooFit Users Manual"
[^ap1]: <https://gitlab.cern.ch/cms-bph/nanoaodplus_charm> "NanoAODPlus"
[^ap2]: <https://github.com/Mapse/NanoAOD> "NanoAODPlus + CRAB"
[^vpn1]: <https://docs.google.com/document/d/1w_PWWS-05BODuxkyS7BQZCbk25DC1NlG13RXXzbSQqM/edit#> "VPN Configuration"
[^xrdcal]: <https://gitlab.cern.ch/SITECONF/T2_US_CALTECH/-/blob/master/storage.json?ref_type=heads> "Caltech grid commands"
[^btojpsi]: <https://github.com/Mapse/BToJpsi/tree/master> "Strategy for producing b -> Jpsi MC"
## Analysis Note
The LaTeX files can be accessed in this AFS path:
```
/afs/cern.ch/work/m/mabarros/public/Analysis_note/AN-21-143
```
The main file is called AN-21-143.tex, where one can find the AN title, authors, and abstract. In this document, the other modules are included with the **\input{module}** command. Today (31 March 2025) the current modules are: introduction.tex, data_monte_carlo_sample.tex, selection_strategy.tex, signal_extraction.tex, efficiency.tex, systematic_uncertainties.tex, cross_section.tex, disc_results.tex, conclusions.tex, AN-21-143.bib (bibliography file), and appendix.tex.
After editing the tex files, the compilation is performed with the following command:
```
utils/tdr b
```
This will compile the tex code and generate the PDF file:
```
/afs/cern.ch/work/m/mabarros/public/Analysis_note/wkdir/AN-21-143_temp.pdf
```
## Appendix
### Number of events: SPS-MC
To determine the number of reconstructed events contained in the SPS-VFNS LHE files, the following steps are performed:
* Run the LHE-GEN steps to see how many events are in the LHE file.
* Based on the number of events per LHE file, take a good amount of LHE files (say, 1000) and run all the steps.
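Since each event in an LHE file is wrapped in `<event>` tags, the per-file count can also be obtained directly with a few lines (a simple sketch; the file name is an example):
```
# Count the number of events in an LHE file by counting <event> tags.
def count_lhe_events(path):
    with open(path) as f:
        return sum(1 for line in f if line.strip() == "<event>")

print(count_lhe_events("results.lhe"))
```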
### Running NanoAODPlus
The C++ code can be found here [^ap1]. This GitLab repository is constantly updated, so it is important to check with the developers which version should be used. To run this code using CRAB, the following repository should be used [^ap2]. The instructions for running NanoAODPlus on data and Monte Carlo are found there.
### Grid Communication
#### UERJ Grid
To connect to the analysis machine you need to set up the VPN [^vpn1].
```
ssh -Y mabarros@analysis7.acad.hepgrid
xrdfs xrootd.hepgrid.uerj.br ls -u /store/user/mabarros
xrdcp -R root://xrootd.hepgrid.uerj.br:1095//store/user/mabarros/Data18UL//Charmonium//CharmoniumRun2018A_AOD//220115_005257 .
ssh -Y mabarros@analysis7.acad.hepgrid -L 7000:localhost:7000
gfal-ls -l gsiftp://xrootd.hepgrid.uerj.br:1095/storage/cms/store/user/mabarros   # not confirmed
```
To see files (read only):
```
ls ~/../../mnt/hadoop/cms/store/user/mabarros/
```
With Hadoop:
```
hadoop fs -ls /cms/store/user/mabarros
hadoop fs -mkdir /cms/store/user/mabarros/LHE
```
#### Caltech Grid
The latest version of the commands can be found here [^xrdcal].
The commands are:
```
k8s-redir-stageout.ultralight.org:1094 (for stageout - writing/delete)
k8s-redir.ultralight.org:1094 (for read)
```
Using XRootD to access files (using my account as an example). Don't forget:
```
voms-proxy-init --rfc --voms cms --valid 192:00
```
```
xrdfs k8s-redir.ultralight.org:1094 ls -u /store/group/uerj/mabarros
```
For copying:
```
xrdcp -R root://k8s-transfer-13.ultralight.org:1094//store/group/uerj/mabarros .
```
Using gfal:
```
gfal-ls -l davs://k8s-redir-stageout.ultralight.org:1094/store/group/uerj/mabarros/
```
#### Cernbox
```
xrdfs eosuser.cern.ch ls -u /eos/user/m/mabarros/Monte_Carlo/SPS/jpsi_ccbar_3s11/lhe_final
xrdcp root://eoshome-m.cern.ch:1094//eos/user/m/mabarros
```
#### Deleting files (to be done)
### Repositories
https://github.com/Mapse/SPS_MC
https://github.com/Mapse/NanoAOD
https://github.com/Mapse/Analysis_General
https://github.com/Mapse/OniaOpenCharmRun2ULMC
https://github.com/Mapse/condor_mc_lxplus
https://github.com/Mapse/ParticleListDrawer
https://github.com/Mapse/Particle-gun
https://github.com/Mapse/OniaOpenCharmRun2ULAna/tree/newProcessing
### Important links
**Twiki:** https://twiki.cern.ch/twiki/bin/viewauth/CMS/QuarkoniaCharmAssociatedProduction
**GitLab - Analysis note:** https://gitlab.cern.ch/tdr/notes/AN-21-143
**Gitlab - nanoaodplus_charm:**
**Rucio:** requesting a transfer: https://cms-rucio-webui.cern.ch/r2d2/request
**Caltech tier2:** https://tier2.hep.caltech.edu/?page_id=141
**CMS Data aggregation:** https://cmsweb.cern.ch/das/
**NanoAOD plus charm twiki:** https://twiki.cern.ch/twiki/pub/CMS/DPOANanoAODlike/nanoAODcharmv0.8.html
**CRAB in CMSSW:**
1- https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookCRAB3Tutorial
2- https://twiki.cern.ch/twiki/bin/view/CMSPublic/CRAB3ConfigurationFile
3- https://twiki.cern.ch/twiki/bin/view/CMSPublic/CRAB3Commands
#### Analysis note
v1: http://cms.cern.ch/iCMS/jsp/openfile.jsp?tp=draft&files=AN2021_143_v1.pdf
v2: http://cms.cern.ch/iCMS/jsp/openfile.jsp?tp=draft&files=AN2021_143_v2.pdf
v3: http://cms.cern.ch/iCMS/jsp/openfile.jsp?tp=draft&files=AN2021_143_v3.pdf
v4: http://cms.cern.ch/iCMS/jsp/openfile.jsp?tp=draft&files=AN2021_143_v4.pdf
v5: http://cms.cern.ch/iCMS/jsp/openfile.jsp?tp=draft&files=AN2021_143_v5.pdf
### Presentations
BPH Jamboree: [15-December-2020](https://indico.cern.ch/event/979184/contributions/4151704/attachments/2163077/3650176/Quarkonia_Open_Charm-Jamboree_15_Dec_2020.pdf)
P&P meeting: [09-August-2021](https://indico.cern.ch/event/1065317/contributions/4479397/attachments/2292692/3898370/Charmonium_Open_Charm_Production%26Properties_09.Aug.2021.pdf)
P&P meeting: [17-January-2022](https://indico.cern.ch/event/1108863/contributions/4684974/attachments/2374068/4055158/Quarkonia_Open_Charm_PP_Meeting.pdf)
P&P meeting: [23-May-2022](https://indico.cern.ch/event/1154900/contributions/4881765/attachments/2449358/4197423/May_23_2022_Charmonium_Open_Charm_PP_Meeting_-1.pdf)
P&P meeting: [10-October-2022](https://indico.cern.ch/event/1198393/contributions/5081709/attachments/2525440/4343434/October_10_2022_Charmonium_Open_Charm_PP_Meeting_.pdf)
### Analysis Note
Analysis note: AN-21-143
##### Configuration
```
scl enable rh-git29 bash
git clone --recursive https://:@gitlab.cern.ch:8443/tdr/notes/AN-21-143.git
export TDR_TMP_DIR=/afs/cern.ch/work/m/mabarros/public/Analysis_note/wkdir
```
**To build:** utils/tdr b
##### Problems/solutions
**Problem:**
Unable to rewind rpc post data - try increasing http.postBuffer
error: RPC failed; result=65, HTTP code = 401
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
**Solution:**
git config http.postBuffer 524288000
#### Comments
##### 12-July-2022
[Comments](https://docs.google.com/presentation/d/1ghZPBfGI1NBT_vau0TrM1WHhJTLtu0JFaftMi7oKDS8/edit#slide=id.g131809f1d6c_0_103)
##### 30-March-2023
[Comments](https://docs.google.com/presentation/d/1jsY09LzcIZdGfF5icTa6VwqokLnVSGXMZ0tTNKmZTkY/edit?usp=sharing)
##### 24-April-2023
[Comments](https://docs.google.com/presentation/d/1_R5EFuZ0rjKRdFdiMkBUOqO4B0VkUJybvnNCZjvfJ0M/edit#slide=id.g1612443bb36_0_0)
##### 11-October-2023
[Comments](https://docs.google.com/presentation/d/1DO_p25OTdrOpobhQo8gJdp0v_OaItvR0GEsd99nl1PQ/edit#slide=id.g1612443bb36_0_0)
##### 24-November-2023
[Comments](https://docs.google.com/presentation/d/16SIUGF-nDC7jyccJXGe2DEq6waagJ3HmbnovKk5dhU0/edit?usp=sharing)
##### 05-December-2023
[Comments](https://docs.google.com/presentation/d/1EPpDxLwAX6b59ZkVRtCGPAeIP2aMuwap5NjNdLR3KiQ/edit?usp=sharing)
### P&P meeting comments
#### 10-October-2022
[Comments](https://docs.google.com/presentation/d/1qZ0KpHvkLOlSppPs5JU-klwXXXVyHjrotX1mS9tiO1g/edit#slide=id.p1)
### Useful tools
#### Rucio transfers
**1. Select data identifiers (DIDs)**
Choose container and write:
cms:/Charmonium/*/*
**2. Select Rucio Storage Elements (RSEs)**
Just to check the available quota
**3. Options**
Tick the options: Grouping (All), Notifications (yes)
#### Useful commands
CERNBox - checking quota
```
export EOS_MGM_URL=root://eosuser.cern.ch
eos quota
```
Voms certificate - CMS
```
voms-proxy-init --rfc --voms cms -valid 192:00
```
Using the wc command to count the number of files in a directory. Say you want to count the number of .lhe files in a directory; you can use the following command:
```
ls -l *.lhe | wc -l
```
To get the file size, do:
```
du -sh file
```
Search words:
```
grep -i -r word path/
```
ex:
```
grep -i -r Simulation output/
```
Path of all files in a directory:
```
ls -d $PWD/*
```
To change the owner of a folder:
```
chown username folder
```
ex:
```
chown mapse oniaopencharm-docs/
```
To give permissions to folders and subfolders:
```
sudo chmod -R a+rwx path/
```
To check that a ROOT file is not corrupted (you must activate the CMSSW environment using cmsenv):
```
edmFileUtil -f path/file_name.root
```
To give permissions in lxplus (AFS)
```fs setacl /path nicelogin all ```
#### Code snippets
Using gfal to delete files at Caltech (currently not working due to gfal problems! **[26/04/2024]**):
```
import os

# Retry gfal-rm until it succeeds (exit code 0).
run = True
while run:
    x = os.system('gfal-rm -r davs://xrootd-redir-stageout.ultralight.org:1095/store/group/uerj/mabarros/BToJPsiKMu_JpsiMuMu_TuneCP5_13TeV-pythia8-evtgen')
    if x == 0:
        run = False
```
Running the code for getting the cross section:
```
cmsRun genXsec_cfg.py inputFiles="file:xxxx.root" maxEvents=-1
```