You may (or more likely, may not) have heard the big news this week about one of the oldest players in the nuclear magnetic resonance (NMR) game. Agilent Technologies, owner of Varian Inc., has decided to close down its long-standing NMR business, leaving Bruker as the only sizable company still making the instruments.
This wasn’t always the case, however. JEOL, Bruker, Varian, Oxford Instruments, even IBM and General Electric were all once major players in the NMR market.
Now Bruker stands pretty much alone. Carrying on with this nostalgic theme, I’ve decided to compile a digital museum of sorts on the history of NMR instrumentation. I’ve gathered images and information from across the web, including others’ blogs, advertisements, reports, and even journal articles (credit given where credit is due). I hope you enjoy the walk down memory lane with me.
Nuclear Magnetic Resonance: From First Principles to Applications
NMR, at least in principle, dates back to 1925, when the idea of a spin magnetic moment was first theorized. Electrons, neutrons, and protons (among other particles) all possess a property called “spin,” but not in the classical sense of the Earth spinning on its axis; particles do not have an axis to “spin” around. Spin is simply an intrinsic property of certain particles, like mass or charge (for a more detailed, but still simplified, explanation of spin, click here).
Because the particles that make up a nucleus (protons and neutrons) have spin, the nucleus as a whole can also have spin (as long as the particles’ spins do not sum to zero). Nuclear spin was first measured in 1937 by Isidor Isaac Rabi in lithium isotopes and protons, a discovery that won him the Nobel Prize in 1944. With the underlying principles of nuclear magnetic resonance established, it was only a matter of time before Edward Mills Purcell (and, independently, Felix Bloch) extended the technique to bulk materials in 1945, which won them a shared Nobel Prize as well.
In 1949, Varian’s F6 Nuclear Fluxmeter became the first commercially available product to employ the principles of NMR. From there, the field exploded. Engineering developments in magnet coil designs and field stabilizers allowed crude commercial NMR spectrometers to enter the market by the mid-1950s. Unfortunately, the practical limits of NMR spectroscopy were soon realized by the late 1950s. Up to this point, NMR data was acquired by sweeping a sample across a broad range of radio frequencies (RF), in sequence, over and over, until enough signal was obtained to be useful. This technique, called continuous-wave or “CW” NMR, was a time-consuming process: a single scan took several minutes, and dozens, hundreds, or even thousands of scans could be required to resolve a sample. Running 5,000 scans (often required for dilute samples) at 5 minutes per scan would require over two weeks of continuous scanning, clearly not tenable from a resource-management perspective.
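The arithmetic behind that estimate is easy to check (the scan count and per-scan time are the figures quoted above):

```python
# Back-of-the-envelope check on the CW-NMR time cost described above.
scans = 5000            # scans often needed for a dilute sample
minutes_per_scan = 5    # one CW sweep across the RF band

total_minutes = scans * minutes_per_scan
total_days = total_minutes / (60 * 24)
print(f"{total_days:.1f} days of continuous scanning")  # ≈ 17.4 days
```

Just over seventeen days of magnet time for a single dilute sample, which is exactly why the field needed a faster acquisition scheme.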
Two solutions to this problem began to develop, largely side-by-side, in the 1960’s.
Fourier Transform NMR
What if, instead of scanning the RF band sequentially, the entire band could be excited at once? Short-pulsed radio-frequency excitation had been known since the early days of NMR: a single RF pulse can probe an entire frequency range at once, in a matter of seconds. The signal that follows the pulse is called a free induction decay (FID). Unfortunately, an FID is a function of time, and the raw time-domain signal is of no practical use to chemists.
However, other forms of chemical analysis, such as infrared spectroscopy, had successfully employed a mathematical operation known as the Fourier transform to convert data from the time domain (the FID) to the frequency domain. In 1957 it was shown that, in theory, it should be possible to convert FID data to the frequency domain, giving a spectrum identical to the one obtained by CW-NMR.
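As a rough illustration of the idea, the sketch below simulates an FID as two decaying sinusoids (standing in for two resonances) and Fourier-transforms it into a frequency spectrum. The frequencies, amplitudes, and decay constant are arbitrary illustrative values, not real chemical shifts:

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)      # 1 s of "acquisition"

# Simulated FID: two resonances at 50 Hz and 120 Hz, both decaying
# with a time constant of 0.2 s (playing the role of T2*).
fid = (np.exp(2j * np.pi * 50 * t)
       + 0.5 * np.exp(2j * np.pi * 120 * t)) * np.exp(-t / 0.2)

# The Fourier transform turns the time-domain decay into a spectrum.
spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), d=1.0 / fs))

peak = freqs[np.argmax(np.abs(spectrum))]
print(f"tallest peak at {peak:.0f} Hz")   # the stronger 50 Hz resonance
```

The time-domain wiggle becomes two clean peaks in the frequency domain, which is exactly the conversion Ernst and Anderson had to perform on tape with room-sized computers.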
In 1966, Ernst and Anderson published the results of their extensive effort to perform a Fourier transform on FID data. The two employed minicomputers to generate tapes that could be processed by larger computers. The results were groundbreaking, and CW-NMR was rapidly phased out in favor of Fourier-transform nuclear magnetic resonance (FT-NMR). The advent of smaller, cheaper, and faster computers in the early 1970s made FT-NMR all but ubiquitous.

Superconducting Magnets
Early NMR spectrometers used copper or iron-core electromagnets to generate a roughly uniform magnetic field, limited by the resistance of the coils. A stronger magnetic field means higher resolution and faster acquisition times; however, generating a more powerful field requires pumping more electric current through the coil, producing large amounts of heat and consuming a huge amount of power. Furthermore, iron magnetically saturates at around 2 tesla, putting a hard ceiling on the progress of iron-core high-field spectrometers.
Superconductors had been known since 1911, but for decades no known material was practical for building magnets. In theory, replacing the iron-core magnet in a spectrometer with a superconducting coil would provide a massive increase in field strength. And, again in theory, if you wanted a higher-field spectrometer, you could simply build a bigger superconducting coil.
The problem became an engineering one: superconductors must operate at cryogenic temperatures. To achieve these temperatures, the superconducting coil is immersed in liquid helium (4 K, −452 °F) in a dewar, which is itself contained in a dewar of liquid nitrogen (77 K, −320 °F). The first such commercial instrument, Varian’s HR-200, became available in 1964.

Field strength in NMR is generally quoted in megahertz (MHz), even though frequency is not a direct measure of magnetic field magnitude; the convention makes it easy to compare the resolving power of instruments of different field strength. The quoted frequency is the resonance frequency of a proton in that instrument’s magnetic field: a proton in a 7.05 tesla field resonates at 300 MHz, so a 7.05 T instrument is referred to as a “300 MHz.” The HR-200 (200 MHz) represented a massive increase in resolving power over previous non-superconducting magnets, which clocked in around 50 MHz.
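The tesla-to-MHz convention follows from the proton Larmor relation ν = (γ/2π)·B, using the standard value γ/2π ≈ 42.577 MHz/T for ¹H:

```python
# Converting magnetic field strength (tesla) to a spectrometer's "MHz"
# rating via the proton Larmor frequency: nu = (gamma / 2*pi) * B.
GAMMA_OVER_2PI_MHZ_PER_T = 42.577   # proton gyromagnetic ratio / 2π, MHz/T

def field_to_mhz(b_tesla: float) -> float:
    """Proton resonance frequency (MHz) in a field of b_tesla."""
    return GAMMA_OVER_2PI_MHZ_PER_T * b_tesla

print(f"7.05 T -> {field_to_mhz(7.05):.0f} MHz")   # ~300 MHz
```

The same relation puts a 23.5 T magnet at roughly 1000 MHz, consistent with the largest instruments mentioned below.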
Onward and Upward
After the advent of FT-NMR, coupled with superconducting magnets, the practical constraints previously placed on NMR largely disappeared. Increasing a magnet’s resolving power from 200 MHz to 500 MHz was simply a matter of scaling up existing technology, and improvements in computing, software, and programmed pulse sequences allowed more efficient use of the magnet’s hardware. The largest NMR spectrometer currently in existence (to my knowledge) operates at 23.5 T, a whopping 1000 MHz of resolving power.
For routine work, we have reached a point where it is no longer necessary to go bigger. A 300 MHz instrument (what I use every day) is more than capable of performing all the basic 1-D and 2-D NMR experiments; there’s little you could throw at an 800 MHz instrument that a 300 couldn’t also handle.
Moving away from purely chemical applications, NMR showed significant promise in medical diagnostics. An analytical NMR spectrometer has a sample chamber made to fit a glass tube a few millimeters in diameter and 8–10 inches long. So what’s to stop us from building an instrument with a giant sample chamber and putting a whole person inside? If you’ve ever been to the hospital and had an MRI, you’ve done exactly that. MRI stands for magnetic resonance imaging; the “nuclear” was dropped from the name as a public-relations move.
Hope you enjoyed this brief history of NMR, and maybe learned something along the way.
Decker, E. Analytical Chemistry 1980, 901A.
Decker, E. Analytical Chemistry 1993, 295.
Arnold, J. T.; et al. J. Chem. Phys. 1951, 507.