Since there is now some interest in generating MC as well as analyzing it, let me remind you of the computing resources in Florida.

The main server for our CDF needs is the machine cdf.phys.ufl.edu. If you have an account in the Physics Dept., you should be able to log into it using ssh. It runs RH 7.1 and has most frozen releases of the CDF software available; in particular, it has 4.5.0. If there is a particular release that is not there and you want it, let me know. To get started, after logging in, you must:

    source /grincdf/software/cdf2.cshrc

This server matches the original version of the server you now have at FNAL. In particular, our RAID array is working. It is partitioned into two spaces: /grincdf/software/, which holds the CDF software, and /grincdf/tmp1/, which is the main user scratch space. From this machine you can also reach these areas by replacing "grincdf" with "cdf" (the real name), but in general, from the farm (described next), you should use the name "grincdf" so that the traffic goes over the gigabit ethernet link. You should be able to create a directory on the tmp1 partition; let me know if you have trouble.

By the way, we have Kerberos in Florida for the RH7 machines (i.e. cdf). You don't need it to log in here, but you can't go the other way to Fermilab without it or a cryptocard. When you source /grincdf/software/cdf2.cshrc, you should get a set of Kerberized tools such as kinit, ftp, ssh, ...

To submit jobs on the 140-node CPU cluster, you need a special account on the machine grinulix.phys.ufl.edu. Ask Jorge Rodriguez (jorge@phys.ufl.edu) to create an account for you. The farm and grinulix run RH 6.2, so you must compile and build your code on grinulix as well. You should not run jobs on grinulix itself; instead, submit them to the PBS batch system (qsub, qstat, qdel, ...). Dmitri and I have some scripts and can help get people started with more sophisticated productions. Here are the basics.

First, compile your job on grinulix. Then, to find the list of nodes that are available, type:

    pbsnodes -a

This shows a list of nodes; the ones marked "free" are not doing anything. You can ssh to them to test your executable (but don't run long jobs this way). It should let you in with no password if you got onto grinulix. Once you are sure your executable and scripts work, submit a script that runs your job to the PBS batch queue:

    qsub myjob.csh

You can check the status with "qstat" and delete jobs with "qdel".

In your script, you should source ~cdfsoft/cdf2.cshrc as usual and issue "setup cdfsoft2" (with whatever version you are using). You ALSO need this:

    setenv LD_LIBRARY_PATH ~cdfmc/grinux-shlib/:"$LD_LIBRARY_PATH"

because the Redhat OS on the farm nodes is missing some shared libraries. You should write your output to /export/scratch/$USER, the local scratch disk on the node; otherwise all the nodes will generate too much traffic on the /cdf/tmp1/ disk. Then copy the output to your main area when the job is done (and clean up the scratch area). The first time you run on a node, you'll have to create your directory on the local disk; after that you don't.
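Putting these pieces together, a minimal myjob.csh could look like the sketch below. The release version, executable name, tcl file, and output area are placeholders; substitute whatever your own job actually uses.

    #!/bin/csh
    # Sketch of a PBS job script (placeholder names throughout)
    source ~cdfsoft/cdf2.cshrc
    setup cdfsoft2 4.5.0                        # or whatever version you use
    setenv LD_LIBRARY_PATH ~cdfmc/grinux-shlib/:"$LD_LIBRARY_PATH"

    # run on the node's local scratch disk
    if (! -e /export/scratch/$USER) mkdir /export/scratch/$USER
    cd /export/scratch/$USER
    myExecutable myjob.tcl >& myjob.log         # placeholder executable and tcl file

    # copy the output back over the gigabit link, then clean up
    cp -f myjob.root /grincdf/tmp1/$USER/
    rm -f myjob.root myjob.log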
To handle the scratch directory, I do this in my scripts:

    if ($HOST != grinulix) then
        setenv SCRATCH /export/scratch/cdfmc/
    else
        setenv SCRATCH .
    endif
    if (-e $SCRATCH) then
        cd $SCRATCH
    else
        mkdir $SCRATCH
        cd $SCRATCH
    endif

At the end of my job, I just cp the files I want over to the cdf machine:

    cp -f job2.root /grincdf/tmp1/cdfmc/cdfSim/job2.root

If you are copying back to FNAL, you may want to first copy the files to an area on cdf and then ftp them back by hand, because you won't be able to use Kerberos on the farm.
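For example, once the files are sitting on cdf, something like the following would move them back with the Kerberized tools (run on cdf.phys.ufl.edu after sourcing /grincdf/software/cdf2.cshrc; the principal, remote host, and remote path below are placeholders, not real names):

    kinit myuser@FNAL.GOV          # placeholder principal
    ftp mynode.fnal.gov            # placeholder: whichever FNAL node you normally use
    ftp> cd /my/area/at/fnal
    ftp> mput *.root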