ORCA for Dummies
----------------
by Jason Mumford

The following will tell you how to install ORCA and run the Trigger/L1CSCTrigger package. Setting up other packages will be very similar to the procedure outlined below. I start with a quick setup, and then get into more detail further down.

To get set up quickly:
----------------------

Log in to the CERN machines using:

   telnet lxplus.cern.ch

and log in with your user name and password. If you have a scratch directory, go there; otherwise go to ~/public. If you already have cvs access, do:

   project ORCA
   scram list

Look for the most recent release of ORCA (not the prerelease). As of March 2002 this is ORCA_6_0_1. Then type:

   scram project ORCA ORCA_6_0_1

If this doesn't work, first log in to cvs anonymously:

   setenv CVSROOT :pserver:anonymous@cmscvs.cern.ch:/cvs_server/repositories/ORCA
   cvs login
   password: 98passwd

Once you have ORCA installed, do:

   cd ORCA_6_0_1/src
   cvs co Trigger/L1CSCTrigger
   cd Trigger/L1CSCTrigger/src
   scram b
   cd ../test
   scram b bin

There are two ways to run the job: interactively or in batch. Interactive is the easiest:

   source int.csh
   RunL1CSCStubs

If you want to run the batch version, you need to edit fed1.csh. Replace the line with /m/mumford to reflect your own account name. The ORCA_RELEASE variable should also be set properly. You can modify the number of events in 'MaxEvents' (also possible in int.csh). Also, fed1.csh contains a cp to an /analysis/ subdirectory. If this doesn't exist, just go to ~/public and type:

   mkdir analysis

And finally, to run, type:

   bsub fed1.csh

(make sure you type this from the Trigger/L1CSCTrigger/test directory). There is also a GUI way to run:

   xbsub &

This brings up a window where you can enter the information about your job. When the menu comes up, enter fed1.csh for the command line and 8nm for the queue. Go to the advanced menu, enter your email address, and click on the boxes for notification of the beginning and end of the job. After this, press submit on the main menu.

When your job has finished running, if you ran in batch mode you will receive an email notifying you that your job is done. If you ran interactively, you won't get an email. Either way you will end up with a file called 'csc_stub.hbook'. If you ran interactively, it will be in your Trigger/L1CSCTrigger/test directory. In fed1.csh, the last command copies the .hbook file into your /public/analysis directory.

I have limited space in my public directory, so for bigger jobs I modify the last line in fed1.csh: instead of copying the .hbook into the /public/analysis directory, I use

   cp ./csc_stub.hbook ~/scratch0/analysis/csc_stub.hbook_{$LSB_JOBID}

This places the .hbook into my scratch directory, which is much bigger (something like 500 MBytes). You should check where your scratch is (it can be different on different machines) and set this line accordingly.
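To recap the batch workflow in one block, a typical submission looks roughly like this. This is only a sketch: it assumes ORCA is installed under ~/public and that you made the scratch-directory modification above; with the stock fed1.csh the output goes to ~/public/analysis instead.

   # Sketch of a typical batch submission; adjust the paths to your own setup.
   cd ~/public/ORCA_6_0_1/src/Trigger/L1CSCTrigger/test   # submit from the test directory
   mkdir -p ~/scratch0/analysis      # destination used by the modified cp line above
   bsub fed1.csh                     # submit the job; you will get an email when it is done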
Now you have your csc_stub.hbook file. Since it is an .hbook file, you can open it and look at the ntuple using PAW, or you can type:

   h2root csc_stub.hbook

to convert it to csc_stub.root for analysis in ROOT. To give an example of our plots, if you like ROOT, type:

   root                                  - This starts root
   TFile f("csc_stub.root")              - This loads the file into a root TFile
   f.ls()                                - This gives a list of all the ntuples and histograms
   h10->Print()                          - This gives a list of all the variables in the h10 ntuple
   h10->Draw("stubphivalue:cathodephi")  - This plots the reconstructed phi vs simulated phi of our LCTs
   h61111->Draw()                        - This plots one of the many, many histograms stored in
                                           csc_stub.root; this one happens to be the number of LCTs
                                           into the portcard of endcap 1, station 1, sector 1,
                                           subsector 1
   .q                                    - This is the command for exiting root

http://root.cern.ch is a very useful web page if you want to learn more about using root.

------------------------------------------------------------------------------------

Hopefully everything works fine up to this point. If so, the next few sections contain more detailed explanations about various things, in no particular order of importance. If you were not able to run your job, there is something that needs more explanation: how to choose the proper dataset.

Choosing the Proper Dataset
---------------------------

As ORCA continues to evolve, various packages get upgraded, bugs get fixed, and eventually datasets need to be recreated to accommodate the fixes. As such, a dataset that worked in ORCA_5 WILL NOT work in ORCA_6.

Also, when running the Trigger/L1CSCTrigger package, there are two types of datasets that can be run on: simhits and digis. The difference is that simhits are an earlier stage of the simulation: they are the entry and exit points of particles passing through the detector (with other information such as energy, transverse momentum, etc.), produced by a package called CMSIM (based on GEANT3). Digis are the electronic pulses in the wires and strips of a CSC made by the particles that pass through the gas layers in the chambers. The bottom line: I strongly recommend running over digis whenever possible, since running over simhits takes longer and is basically a waste of your time (this statement may not apply to other subsystems/packages).

To run over digis, look in the Trigger/L1CSCTrigger/test/BuildFile. If the executable there is set up with 'RecReader', you are in business: when you type 'scram b bin', an executable will be made which is designed to run over a digi dataset. If instead the BuildFile refers to 'SimReader', then 'scram b bin' will make an executable designed to run over simhits. You can run over simhits if you want; otherwise just change 'SimReader' to 'RecReader', do 'scram b bin' again, and you will be set up to run over digis.

When you have chosen which kind of dataset you want to run over, you will need to change the fed1.csh script or the int.csh script in order to choose the appropriate dataset. The first thing that needs to be changed is the OO_FD_BOOT variable. There are comments in the scripts which say which federations can be used with which versions of ORCA; for example, suncms88.cern.ch can be used for ORCA_5 but not ORCA_6. Make sure you only have one federation selected. Once you have selected the federation, look below it for the InputCollections and select the appropriate line depending on whether you want to run over simhits or digis. Everything is labeled very clearly in the files. (A quick way to double-check both choices is sketched at the end of this section.)

Are you looking for the official productions? There is a link (which hasn't been updated in a while):

   http://cmsmuon.web.cern.ch/cmsmuon

This is supposed to be the official page for the Muon PRS production.

If after reading this section things still don't work, try checking out a different version of ORCA. Sometimes libraries are inadvertently removed, or packages become obsolete. If things still don't work, send me an email (jason.mumford@cern.ch) and I'll see if I can help solve the problem.
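As a quick sanity check before submitting, something like the following (run from your ORCA_x_x_x/src directory) shows which reader the executable is set up for and which federation and input collections are currently selected. This is only a sketch; the grep patterns assume the BuildFile and scripts spell these names the way described above.

   # Which reader is the test executable built with?  RecReader = digis, SimReader = simhits.
   grep -i -n 'Reader' Trigger/L1CSCTrigger/test/BuildFile

   # Which federation and input collections are currently uncommented in the batch script?
   grep -i -n -e 'OO_FD_BOOT' -e 'InputCollections' Trigger/L1CSCTrigger/test/fed1.csh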
Minbias Events
--------------

I leave this section here for completeness, but most of it is out of date. As of March 03, 2002, in order to run a min-bias set it is necessary to run on the Florida events. Please email Darin Acosta (acosta@phys.ufl.edu) for more information about running on the Florida machines.

   rsh -l cdfuser dip01.cern.ch
   password: (ask me directly for the password, I can't post it here)

There is a minbias set of 99,566 events. By minbias we mean muons produced from QCD (u, d, s, c, b quark decays); b and c quark decays dominate below 1 GeV. Look for the following scripts: orca5test.csh (used for non-pileup) and orca5pileup.csh (used for pileup). A sample containing neutrons is also available through orca5pileup.csh; its input file is '/System/SimHits/neutron1034/neutron1034' and it contains 6000 events.

As of Oct 16, 2001, the federation that is working is:

   setenv OO_FD_BOOT /grinraid/raid1/acosta/databases/muon1/ORCATEST.boot

The data file is:

   InputCollections = /System/SimHits/minbias/minbias

To get the number of events to run on in pileup, take the total number of events and divide by the number of interactions/bx (17.3 for 10^34 luminosity). For the 99,566 events, this gives 5,755.

Other Topics
-----------------------------------

CVS Access
----------

You can check out any package in ORCA that you want (once you have CVS access). If you make a change to a file and want it to be part of the official ORCA package, you need to have developer status. To get it, email Hans-Peter Wellisch (Hans-Peter.Wellisch@cern.ch) or the cvs administrator (cvsadmin@cmscvs.cern.ch).

It is useful to know a few cvs commands. Say you have checked out a package (e.g. the Trigger/L1CSCTrigger package), you start editing a bunch of files, go home, come back the next day, and forget what you have changed. Go to the directory which contains the edited files and do:

   cvs -n update

This will give a list of all the files that are different from the official package that you checked out. If you want to see how these files differ from the originals, do:

   tkdiff filename

This brings up a split window with the original file on one side, the modified file on the other side, and the differences highlighted in blue.

If you decide that everything you had done to a file was crap and you want to start over, delete the file, then do:

   cvs update filename

This will restore the original version of the file.

If one of your coworkers made a change to one of your files, committed it, and you want to pick it up, also do:

   cvs update filename

(Warning!!! If you had also made changes to the same file, cvs will merge both sets of changes into the same file. Sometimes the changes conflict; in that case you will need to look for the places in the file which cvs has marked as conflicts.)

If you have developer status, and IF YOU KNOW WHAT YOU ARE DOING, and you want your changes committed into the repository, type:

   cvs commit -m "enter a brief message about the changes you made" filename
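Putting those commands together, a typical edit-review-commit cycle looks roughly like this (a sketch only; 'SomeFile.cc' is just a stand-in file name):

   cd Trigger/L1CSCTrigger/src
   cvs -n update                    # list everything I have touched since checkout
   tkdiff SomeFile.cc               # eyeball my changes against the repository version
   cvs update SomeFile.cc           # pull in anything a coworker has committed (may merge)
   cvs commit -m "describe the fix" SomeFile.cc   # developers only!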
There are also other, trickier things that you can do with cvs. For example, if you want to see all the log messages for a certain file:

   cvs log filename

If you want to compare the changes between two different revisions of a file (say the latest revision is 1.7 and you want to compare it with revision 1.2):

   cvs diff -r1.7 -r1.2 filename

If you want to check out a previous version of the code (say you have installed ORCA_6_0_0 but you want the ORCA_5_1_0 version of the Trigger/L1CSCTrigger package), go to the ORCA_6_0_0/src directory, make sure you haven't already checked out the package, then type:

   cvs co -r ORCA_5_1_0 Trigger/L1CSCTrigger

You can also check out the code using a date as the tag (e.g. to check out the March 21, 2001 version of the code):

   cvs co -D 03/21/2001 Trigger/L1CSCTrigger

The Batch Job
-------------

I really prefer running my job interactively, but under certain circumstances it is necessary to run in batch mode. This is everything I know about batch mode.

When running the GUI version, xbsub, you may notice that there are different values in the Queue field. 'nm' stands for normalized minutes of CPU time, so 8nm should take around 8 minutes (it usually doesn't, because other jobs keep butting in front of yours). If your job needs more time than you have allotted, it won't finish completely. You don't want to specify too long a queue either, because the longer the queue you specify, the lower the priority your job will have. So it would be unwise to specify 1nw (1 normalized week) for a job that only has 100 events. The longest job you probably have the authority to run is 1 normalized week.

Sometimes you want to run a really big job of, say, 100,000 events or more. It's hard to estimate how long a job like this will last, and if it is terminated before it has finished processing, you risk losing the entire job. So it is a good idea to split a really big job up into smaller jobs; for instance, the 100,000-event job can be split into five 20,000-event jobs of 8nh each. You can specify the first event and the maximum number of events in fed1.csh in order to split your job up accordingly (one way to script this is sketched below).
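As an illustration, here is one way the splitting could be scripted. It assumes (hypothetically) that you have edited fed1.csh so that it reads its first event and event count from environment variables called FIRST_EVENT and MAX_EVENTS; the stock script expects you to edit those numbers by hand instead.

   #!/bin/csh -f
   # Sketch: split a 100,000-event run into five 20,000-event batch jobs.
   # FIRST_EVENT and MAX_EVENTS are hypothetical names -- fed1.csh must be
   # modified to pick them up.  LSF normally passes the submission
   # environment along to the batch job.
   @ first = 1
   @ i = 0
   while ($i < 5)
       setenv FIRST_EVENT $first
       setenv MAX_EVENTS 20000
       bsub -q 8nh fed1.csh
       @ first = $first + 20000
       @ i = $i + 1
   end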
It's a pain in the ass to have to re-enter your email address and other settings every time you open the xbsub window, so save your preferences under the 'File' tab.

Now you have submitted your job and you want to see how things are going. Type 'xlsbatch &'. This brings up a window that shows all of the jobs currently running. You probably don't care what some random Joe is doing, so click on 'Job', then 'Filter'; select yourself as the user you want to see and click OK, and the window will show the status of all of your jobs. If the status says 'SSUSP', someone butted in front of you and your job has been momentarily suspended. 'PEND' means your job is waiting to be run. 'EXIT' means you probably terminated the job, or it didn't run properly. 'DONE' doesn't always mean it ran properly, but it's usually a good sign. By clicking on a job, a new set of buttons comes up that allows you to (among other things) see the job history, terminate the job, or, my personal favorite, peek at the output of the job so far. Sometimes you will submit a job that has problems you're not aware of; by clicking on this button you can see whether data is really coming out, or whether the job is hung up on some error and needs to be killed.

If you are running your job over a modem from home, you probably don't want to deal with the slowness of a GUI interface. 'xbsub' and 'xlsbatch' have non-graphical counterparts. Instead of using xbsub, simply type 'bsub fed1.csh'. If you want to see the output of your job so far, type 'bpeek'. A very useful web page which contains all the options for running your jobs this way is:

   http://cmsdoc.cern.ch/comp/help/doc/Hints-Faq/Batch_Job/lsf-batch-job.shtml

When the job has terminated, go to the test directory and look for a new directory with a name like LSFJOB_ followed by a bunch of numbers. Inside this directory you can type 'more STDOUT'. This will show you what the output of your job looked like. If it ran correctly, it should be a rather big file; if not, the error messages will show up here. If the job is big (say 10,000 events), you will have to wait all day to scroll down to the bottom of the file. You want to see the bottom of the file, because that is where it tells you whether your .hbook file was copied to the right place. To see just the last lines, type 'tail -1000 STDOUT | more', which pages through the last 1000 lines of your STDOUT file.

How to Plot All the Trigger/L1CSCTrigger Histograms
---------------------------------------------------

We added so many histograms that we had to put most of them behind a compiler option: by default, many of our histograms will not be produced unless you ask for them at compile time. To do this, open up your ORCA_x_x_x/config/compilers.mk and add -DL1MUCSC_HISTOS to the flags at the top so that you have:

   CXXFLAGS+= -O2 -DL1MUCSC_HISTOS

Then do a 'scram b' in the Trigger/L1CSCTrigger/src directory and a 'scram b bin' in the test directory.

Now you will most likely run into another problem: COBRA doesn't have enough memory to keep track of all our stuff. You can tell that you are having this problem if you get an 'ERROR in HNBUF' type message. Here is the recipe to increase the memory (a condensed command-line version is sketched after the list):

   1. Go to your ORCA_x_x_x/src directory and type 'project COBRA'.
   2. Do 'cvs co Utilities/CHBook4'.
   3. Go to the Utilities/CHBook4/interface directory.
   4. Open up CHObject.h in emacs.
   5. Note the size of SizeOfPawCommon. By default it is 200000. Change it to something like 1000000.
   6. Do a 'scram b' in the Utilities/CHBook4/src directory.
   7. Do a 'scram b' in the Trigger/L1CSCTrigger/src directory.
   8. Finally, do a 'scram b bin' in the test directory.
   9. Run your job.

If you see in your output a message like:

   **** ERROR in HNBUF: Not enough space in memory
   * ID = 10

then you still don't have enough memory, and you should increase the size again. Note: I don't know if there is a limit to the size of SizeOfPawCommon.
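The same recipe, condensed into commands (a sketch only; it assumes ORCA lives under ~/public and release ORCA_6_0_1, and the CHObject.h edit is still done by hand at the marked step):

   cd ~/public/ORCA_6_0_1/src
   project COBRA
   cvs co Utilities/CHBook4
   # Edit Utilities/CHBook4/interface/CHObject.h here:
   #   change SizeOfPawCommon from 200000 to something like 1000000
   cd Utilities/CHBook4/src
   scram b
   cd ../../../Trigger/L1CSCTrigger/src
   scram b
   cd ../test
   scram b bin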
Miscellaneous
----------------------------------------------------------------------------

Backup
------

There is a nice, elegant way to back up your files when you work for CMS. Type 'hsm help' and you will learn about the tape backup system at CERN. This program is a newer, better version of the obsolete 'cmstape'.

PAW
---

Here are some commands for those who want to use PAW:

   histo/file 1 csc_stub.hbook         - Opens the file
   hi/li                               - Gives a list of the ntuples and histograms
   n/print 10                          - Gives a list of the variables in ntuple 10
   n/plot 10.stubphivalue              - Plots the reconstructed phi of our LCTs
   n/plot 10.stubphivalue%cathodephi   - Plots reconstructed phi vs simulated phi of our LCTs
   histo/plot 61111                    - Plots histogram 61111
   zone 2 2                            - Splits the canvas into a 2 x 2 grid for viewing
                                         multiple plots
   exit                                - Exits PAW

If at any time you want to know the different uses for a command, just type:

   usage command_name

This will tell you the different options for that command. For example, if you type:

   usage n/pl

you will see that there is a variable called 'option' that can be specified. If you type:

   n/pl 10.stubetavalue%cscid option=box

you will get a box plot of these values. Other interesting options to try for n/pl are option=lego, option=cont, and option=surf. These are just a few examples of the variables that can be set for different commands in PAW.

More Info
---------

If you want to see any of the files in ORCA, the official page for all CMS reconstruction software can be found at:

   http://cmsdoc.cern.ch/cmsreco