Advanced physics is essentially a study in precision, and the particle accelerators working today at the cutting-edge Intensity and Energy Frontiers battle approximations every day. The particles they synthesize, track and study are so small, so quick and so short-lived that they might as well have popped in and out of existence without anything changing. Fortunately, though, that is not the point of studying them at all: what is key is understanding why the "popping" happens in the first place.
At the world's most powerful collider, the LHC at CERN, two proton beams are shot in opposite directions around a 27-km-long ring. These are not continuous beams but ones segregated into bunches, like a pulse. Each beam carries up to 2,808 of these bunches (of the hadrons in question), and each bunch contains about 115 billion protons. Bunches from the two beams are made to cross each other – "collide" – once every 25 nanoseconds. At this rate, around 9.2 billion billion protons – 4.6 billion billion from each side – are brought face to face every second. This is what every particle accelerator makes possible: a rendezvous.
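That arithmetic is easy to check. A back-of-envelope sketch in Python, using the LHC's design figures (25-ns bunch spacing, roughly 1.15 × 10^11 protons per bunch) – these are approximations, not measured values:

```python
# Back-of-envelope LHC bunch arithmetic (design figures, approximate).
BUNCH_SPACING_S = 25e-9        # one bunch crossing every 25 nanoseconds
PROTONS_PER_BUNCH = 1.15e11    # ~115 billion protons per bunch (design value)

crossings_per_second = 1 / BUNCH_SPACING_S  # 40 million crossings per second

# Each crossing brings one bunch from each beam face to face.
protons_meeting_per_second = 2 * PROTONS_PER_BUNCH * crossings_per_second

print(f"{crossings_per_second:.1e} crossings per second")
print(f"{protons_meeting_per_second:.2e} protons meeting per second")
```

Multiplying it out gives roughly 9.2 × 10^18 protons meeting per second – a number so large that even minuscule interaction probabilities yield usable event counts.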
Once this is done, the detectors take over, and they are the real measure of an accelerator's performance. The accelerator will have ensured that enough collisions occur for the detector to record at least a few – the ratio of collisions recorded to collisions produced is really quite small. So whether a detector's performance counts as good, bad, or somewhere in between comes down to one thing: how much it is capable of seeing. This is where luminosity comes in.
The generic definition of luminosity is that it is a measure of the quantity of light that passes through an area each second, so its units are per square metre per second. Accelerator physics adopted this definition and modified it a little: accelerator luminosity is a measure of the number of particles that pass through a given area each second, multiplied by the opacity of the target. This final parameter is necessary because it accounts for the tendency of some particles to escape detection by passing right through the target: if the target's opacity is high, most particles will be "seen", and if it is low, most particles will be invisible to the detector's eyes.
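The modified definition boils down to one line of arithmetic. A minimal sketch, with entirely made-up numbers: the rate of particles actually "seen" is the incident rate scaled down by the target's opacity.

```python
# Seen rate = incident flux x opacity (all numbers here are made up).
incident_per_second = 1_000_000  # particles passing through the target area each second
opacity = 0.25                   # fraction of incident particles the target "sees"

seen_per_second = incident_per_second * opacity
print(seen_per_second)  # 250000.0
```

Three-quarters of the particles in this toy example sail straight through undetected, which is why the opacity factor belongs in the definition at all.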
(Even though the definition of luminosity indicates the number of particles that pass through an area per second, its meaning in the confines of an accelerator changes: it is the number particles that are seen by a detector irrespective of how many particles there are in total.)
Inside the accelerator and in the presence of the detector, the following differential equation dictates the machine's luminosity:

dN/dt = L × σ
Here, σ is the total cross section of the detector – the area that is exposed to and receives the stream of particles – N the number of particles, L the instantaneous luminosity, and t the duration over which the detector remains in operation. The opacity affects σ. (The 'd' denotes that the value of the parameter is being considered over an infinitesimal period of time, as indicated by the dt in the denominator. If it were dx or dy instead of dt, the value of N would be considered over a very small distance in the x or y direction.)
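When the instantaneous luminosity is constant, this equation integrates trivially: N = σ × L × t. A sketch with illustrative numbers (not real LHC values) for a hypothetical process:

```python
# dN/dt = L * sigma, integrated over a run with constant L (illustrative numbers).
sigma = 1e-36        # total cross section of the process, in cm^2 (hypothetical)
L_inst = 1e34        # instantaneous luminosity, in cm^-2 s^-1
run_seconds = 1e7    # roughly a year's worth of beam time

# With L constant, integrating dN/dt = L * sigma over the run gives:
events = L_inst * sigma * run_seconds  # about 1e5 recorded events
print(events)
```

The product L × t is exactly the "integrated luminosity" that experiments quote; multiply it by a cross section and out falls an expected event count.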
If Ω (omega) is the solid angle through which the detector's cross section is exposed, the same equation can be written in terms of the differential cross section dσ/dΩ:

d²N/(dt dΩ) = L × dσ/dΩ
This formula gives the luminosity with respect to the angular cross section (as opposed to a planar surface) as the number of particles per steradian – the unit of solid angle – per second, and from here the number of particles per volume of space can be computed. The formula also shows that the greater the detecting cross section per unit solid angle, the greater the luminosity into that angle (or, "particle-seeability"). And for the detector to be useful at all, the instantaneous luminosity has to be high enough to detect particles so small that… well, they're incredibly small. Therefore, the smaller the particle being studied, the larger the detector has to be.
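The differential form can be used the same way as the total one. A sketch, again with made-up numbers: multiplying the differential cross section by the luminosity, and by the solid angle a single detector element covers, gives the event rate into that element.

```python
# d2N/(dt dOmega) = L * dsigma/dOmega; multiplying by the solid angle an
# element covers gives its event rate. All numbers below are made up.
L_inst = 1e34               # instantaneous luminosity, in cm^-2 s^-1
dsigma_dOmega = 1e-33       # differential cross section, cm^2 per steradian
element_solid_angle = 0.01  # steradians covered by one detector element

rate = L_inst * dsigma_dOmega * element_solid_angle  # events per second
print(rate)
```

In a real detector this calculation is repeated element by element, which is why coverage over the full solid angle matters so much to a detector's design.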
There is no better way to illustrate this conclusion than to point, again, to the LHC, where the Higgs boson, one of the smallest particles conceivable, a veritable building block of nature, is being hunted by one of the largest detectors ever built (which also has a misleading name): the Compact Muon Solenoid (CMS). The CMS, weighing 12,500 tons, has been able to achieve an astounding integrated (as in not instantaneous) luminosity of 1 per femtobarn: 1 barn is a ten-billion-billion-billionth of a square metre; 1 femtobarn is a million-billionth of that!
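To put those units in perspective, here is a quick conversion in Python, along with the standard trick of multiplying an integrated luminosity (in inverse femtobarns) by a cross section (in femtobarns) to get an expected event count. The 50-fb cross section below is a made-up example, not the Higgs's.

```python
BARN_IN_M2 = 1e-28  # 1 barn = 1e-28 square metres
FEMTO = 1e-15

# One femtobarn, as an area: a staggeringly small ~1e-43 m^2.
femtobarn_in_m2 = BARN_IN_M2 * FEMTO

integrated_lumi_invfb = 1.0  # integrated luminosity in inverse femtobarns
sigma_fb = 50.0              # a hypothetical process's cross section, in femtobarns

expected_events = integrated_lumi_invfb * sigma_fb
print(expected_events)  # 50.0
```

The inverse-area unit looks odd at first, but it makes event counting a one-line multiplication: inverse femtobarns times femtobarns is a pure number.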
Another detector at the site, the much more prolific A Toroidal LHC ApparatuS (ATLAS), weighs 7,000 tons and has reached an integrated luminosity of 50 per femtobarn. The under-construction Iron Calorimeter (ICAL) detector at the India-based Neutrino Observatory (INO) in Theni, Tamil Nadu, will weigh 50,000 tons when completed in 2015 and will be used exclusively to track and study neutrinos. Neutrinos are particles even more elusive than the Higgs, and, though ICAL's luminosity hasn't been disclosed, we can expect the device to be one of the pioneers in detector technology simply because neutrinos interact so rarely that wringing a usable luminosity out of them will demand nothing less.
This much and more can be said of accelerator luminosity. While the media goes gaga over the energies to which the beams are accelerated, a silent revolution in detector technology is happening in the background, one that is spawning brilliant techniques to spot the fastest, smallest and most short-lived particles. Detectors also consume the greater part of accelerator budgets to build and the greater part of total maintenance time. Some of the most advanced detectors in existence include hadronic calorimeters (HCAL), ring-imaging Cherenkov (RICH) detectors and muon spectrometers.