## Monday, June 29, 2015

### How much information can you cram down an optical fiber?

A cool new result showed up in Science this week, implying that we may be able to increase the information-carrying capacity of fiber optics beyond what had been thought of as (material-dependent) fundamental limits.  To appreciate this, it's good to think a bit about the way optical fiber carries information right now, including the bits of this blog post to you.  (This sort of thing is discussed in the photonics chapter of my book, by the way.)
Information is passed through optical fibers in a way that isn't vastly different from AM radio.  A carrier frequency is chosen (corresponding to a free-space wavelength of light of around 1.55 microns, in the near-infrared) that just so happens to correspond to the frequency where the optical absorption of ultrapure SiO2 glass is minimized.   Light at that frequency is generated by a diode laser, and the intensity of that light is modulated at high speed (say 10 GHz or 40 GHz), to encode the 1s and 0s of digital information.  If you look at the power vs. frequency for the modulated signal, you get something like what is shown in the figure - the central carrier frequency, with sidebands offset by the modulation frequency.   The faster the modulation, the farther apart the sidebands.   In current practice, a number of carrier frequencies (colors) are used, all close to the minimum in the fiber absorption, and the carriers are offset enough that the sidebands from modulation don't run into each other.  Since the glass is very nearly a linear medium, we can generally use superposition nicely and have those different colors all in there without them affecting each other (much).
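The carrier-plus-sidebands picture is easy to see numerically.  Here's a toy sketch (my own illustration, in arbitrary units rather than real optical frequencies): amplitude-modulate a carrier, take the power spectrum, and the three dominant peaks sit at the carrier and at the carrier plus or minus the modulation frequency.

```python
import numpy as np

# Toy amplitude-modulated carrier (arbitrary units, not real fiber-optic numbers).
fs = 10000.0          # sample rate
t = np.arange(0, 1.0, 1.0 / fs)
f_carrier = 1000.0    # stand-in for the optical carrier frequency
f_mod = 50.0          # stand-in for the data-modulation rate

signal = (1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * f_carrier * t)

# Power spectrum: carrier peak plus two sidebands offset by +/- f_mod.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

# The three largest peaks land at f_carrier - f_mod, f_carrier, f_carrier + f_mod.
peak_freqs = sorted(freqs[np.argsort(spectrum)[-3:]])
```

Doubling `f_mod` pushes the sidebands twice as far from the carrier, which is the "faster modulation, farther apart sidebands" statement above.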

So, if you want to improve data carrying capacity (including signal-to-noise), what can you do?  You could imagine packing in as many channels as possible, modulated as fast as possible to avoid cross-channel interference, and cranking up the laser power so that the signal size is big.  One problem, though, is that while the ultrapure silica glass is really good stuff, it's not perfectly linear, and it has dispersion:  The propagation speed of different colors is slightly different, and it's affected by the intensity of the different colors.  This tends to limit the total amount of power you can put in without the signals degrading each other (that is, channel A effectively acts like a phase and amplitude noise source for channel B).  What the UCSD researchers have apparently figured out is, if you start with the different channels coherently synced, then the way the channels couple to each other is mathematically nicely determined, and can be de-convolved later on, essentially cutting down on the effective interference.  This could boost total information carrying capacity by quite a bit - very neat.
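As a cartoon of why coherent syncing helps (a toy model of my own, not the actual UCSD scheme; the coupling strength and intensity pattern below are made-up illustrative parameters): if the nonlinear phase shift that channel A imprints on channel B is a known, deterministic function of A's waveform, the receiver can simply apply the inverse phase and undo the "interference" exactly.

```python
import numpy as np

# Toy cross-phase modulation: channel B's symbols pick up a phase shift
# proportional to channel A's instantaneous intensity (a Kerr-like effect).
rng = np.random.default_rng(0)
n = 1000
data_b = rng.choice([1.0, -1.0], size=n).astype(complex)  # BPSK symbols on channel B
intensity_a = rng.random(n)                               # channel A's intensity pattern
gamma = 0.8                                               # made-up coupling strength

# If the channels are coherently synced, intensity_a is known at the receiver,
# so the phase corruption is deterministic...
received_b = data_b * np.exp(1j * gamma * intensity_a)

# ...and can be de-convolved by applying the inverse phase.
recovered_b = received_b * np.exp(-1j * gamma * intensity_a)
```

If `intensity_a` were instead unknown and random shot-to-shot, there would be nothing to invert, and the phase shifts would act as irreducible noise on channel B.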

## Wednesday, June 24, 2015

### What is quantum coherence?

Often when people write about the "weirdness" of quantum mechanics, they talk about the difference between the interesting, often counter-intuitive properties of matter at the microscopic level (single electrons or single atoms) and the response of matter at the macroscopic level.  That is, they point out how on the one hand we can have quantum interference physics where electrons (or atoms or small molecules) seem to act like waves that are, in some sense, in multiple places at once; but on the other hand we can't seem to make a baseball act like this, or have a cat act like it's in a superposition of being both alive and dead.  Somehow, as system size (whatever that means) increases, matter acts more like classical physics would suggest, and quantum effects (except in very particular situations) become negligibly small.  How does that work, exactly?

Rather than comparing the properties of one atom vs. $10^{25}$ atoms, we can gain some insights by thinking about one electron "by itself" vs. one electron in a more complicated environment.   We learn in high school chemistry that we need quantum mechanics to understand how electrons arrange themselves in single atoms. The 1s orbital of a hydrogen atom is a puffy spherical shape; the 2p orbitals look like two-lobed blobs that just touch at the position of the proton; the higher d and f orbitals look even more complicated.  Later on, if you actually take quantum mechanics, you learn that these shapes are basically standing waves - the spatial state of the electron is described by a (complex, in the sense of complex numbers) wavefunction $\psi(\mathbf{r})$ that obeys the Schroedinger equation, and if you have the electron feeling the spherically symmetric $1/r$ attractive potential from the proton, then there are certain discrete allowed shapes for $\psi(\mathbf{r})$.  These funny shapes are the result of "self interference", in the same way that the allowed vibrational modes of a drumhead are the result of self-interfering (and thus standing) waves of the drumhead.

In quantum mechanics, we also learn that, if you were able to do some measurement that tries to locate the electron (e.g., you decide to shoot gamma rays at the atom to do some scattering experiment to deduce where the electron is), and you looked at a big ensemble of such identically prepared atoms, each measurement would give you a different result for the location.  However, if you asked, what is the probability of finding the electron in some small region around a location $\mathbf{r}$, the answer is $|\psi(\mathbf{r})|^2$.  The wavefunction gives you the complex amplitude for finding the particle in a location, and the probability of that outcome of a measurement is proportional to the magnitude squared of that amplitude.  The complex nature of the quantum amplitudes, combined with the idea that you have to square amplitudes to get probabilities, is where quantum interference effects originate.
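To make the $|\psi(\mathbf{r})|^2$ statement concrete, here's a quick numerical check (my own sketch, working in units of the Bohr radius $a_0$): integrate $|\psi|^2$ for the hydrogen 1s state over all space to confirm it's normalized, and over a sphere of radius $a_0$ to get the probability of finding the electron within one Bohr radius, which should come out to $1 - 5e^{-2} \approx 0.32$.

```python
import numpy as np

# Hydrogen 1s: psi(r) = exp(-r/a0) / sqrt(pi * a0**3), in units where a0 = 1.
# The probability of finding the electron between r and r + dr is
# |psi|^2 * 4*pi*r^2 * dr  (the radial probability density times dr).
a0 = 1.0
r = np.linspace(0.0, 50.0, 200001)   # integrate far enough out that the tail is negligible
dr = r[1] - r[0]
radial_density = (np.exp(-2 * r / a0) / (np.pi * a0**3)) * 4 * np.pi * r**2

total = radial_density.sum() * dr               # normalization: should be ~1
within_a0 = radial_density[r <= a0].sum() * dr  # P(r < a0): should be ~1 - 5*exp(-2) ~ 0.323
```

So even in the ground state, roughly two-thirds of the probability lies outside one Bohr radius - each individual measurement gives a different location, with this distribution.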

This is all well and good, but when you worry about the electrons flowing in your house wiring, or even your computer or mobile device, you basically never worry about these quantum interference effects.  Why not?

The answer is rooted in the idea of quantum coherence, in this case of the spatial state of the electron.  Think of the electron as a wave with some wavelength and some particular phase - some arrangement of peaks and troughs that passes through zero at spatially periodic locations (say at x = 0, 1, 2, 3.... nanometers in some coordinate system).   If an electron propagates along in vacuum, this just continues ad infinitum.

If an electron scatters off some static obstacle, that can reset where the zeros are (say, now at x = 0.2, 1.2, 2.2, .... nm after the scattering).  A given static obstacle would always shift those zeros the same way.   Interference between waves (summing the complex wave amplitudes and squaring to find the probabilities) with a well-defined phase difference is what gives the fringes seen in the famous two-slit experiment linked above.

If an electron scatters off some dynamic obstacle (this could be another electron, or some other degree of freedom whose state can be, in turn, altered by the electron), then the phase of the electron wave can be shifted in a more complicated way.  For example, maybe the scatterer ends up in state S1, and that corresponds to the electron wave having zeros at x=0.2, 1.2, 2.2, .....; maybe the scatterer ends up in state S2, and that goes with the electron wave having zeros at x=0.3, 1.3, 2.3, ....  If the electron loses energy to the scatterer, then the spacing between the zeros can change (x=0.2, 1.3, 2.4, ....).  If we don't keep track of the quantum state of the scatterer as well, and we only look at the electron, it looks like the electron's phase is no longer well-defined after the scattering event.  That means if we try to do an interference measurement with that electron, the interference effects are comparatively suppressed.
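The suppression of interference by an untracked scatterer can be illustrated numerically.  Here's a toy two-path model of my own (not a real scattering calculation): sum two complex path amplitudes, square to get a probability, and compare a static phase shift, which is the same on every shot, to a random one averaged over many shots.

```python
import numpy as np

# Two-path interference: probability = |amplitude_1 + amplitude_2|^2 / 4,
# where x plays the role of detector position (it sets the path-length difference).
x = np.linspace(0, 4 * np.pi, 500)

# Static obstacle: the phase is shifted the same way every time, so fringes survive.
p_coherent = np.abs(1 + np.exp(1j * (x + 0.4))) ** 2 / 4

# Dynamic obstacle: the scatterer ends up in a different state on each shot;
# not tracking it means averaging the probability over random phase shifts.
rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, size=5000)
p_decoherent = np.mean(
    [np.abs(1 + np.exp(1j * (x + ph))) ** 2 / 4 for ph in phases], axis=0
)

# Fringe visibility = (max - min)/(max + min): near 1 when coherent,
# near 0 once the phase is scrambled.
vis_coherent = (p_coherent.max() - p_coherent.min()) / (p_coherent.max() + p_coherent.min())
vis_decoherent = (p_decoherent.max() - p_decoherent.min()) / (p_decoherent.max() + p_decoherent.min())
```

The averaged pattern flattens out to a featureless probability of about 1/2 everywhere - the interference isn't "destroyed" in any one shot, but it washes out of the ensemble once the scatterer's state goes untracked.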

In your house wiring, there are many many allowed states for the conduction electrons that are close by in energy, and there are many many dynamical things (other electrons, lattice vibrations) that can scatter the electrons.  The consequence of this is that the phase of the electron's wavefunction only remains well defined for a really short time, like $10^{-15}$ seconds.    By contrast, in a single hydrogen atom, the electron has no states available close in energy, and in the absence of some really invasive probe, doesn't have any dynamical things off which to scatter.

I'll try to write more about this soon, and may come back to make a figure or two to illustrate this post.

## Monday, June 15, 2015

### Brief news items

In the wake of travel, I wanted to point readers to a few things that might have been missed:
• Physics Today asks "Has science 'taken a turn towards darkness'?"  I tend to think that the physical sciences and engineering are inherently less problematic (because others can try to reproduce results in a controlled environment) than biology/medicine (incredibly complex, and therefore difficult or impractical to study with controlled experiments) or the social sciences.
• Likewise, Physics Today's Steven Corneliussen asks, "Could the evolution of theoretical physics harm public trust in science?"  This gets at the extremely worrying (to me) tendency of some high energy/cosmology theorists these days to claim that the inability to test their ideas is really not a big deal, and that we shouldn't be so hung up on the idea of falsifiability.
• Ice spikes are cool.
• Anshul Kogar and Ethan Brown have started a new condensed matter blog!  The more the merrier, definitely.
• My book is available for download right now in Kindle form, with hard copies available in the UK in a few days and in the US next month.

## Wednesday, June 10, 2015

### Molecular electronics: 40+ years

More than 40 years ago, this paper was published, articulating clearly from a physical chemistry point of view the possibility of making a nontrivial electronic device (a rectifier, or diode) out of a single small molecule (a "donor"-bridge-"acceptor" structure, analogous to a pn junction - see this figure, from that paper).  Since then, there has been a great deal of interest in "molecular electronics".  This week I am at this conference in Israel, celebrating both this anniversary and the 70th birthday of Mark Ratner, the tremendous theoretical physical chemist who coauthored that paper and has maintained an infectious level of enthusiasm about this and all related topics.

The progress of the field has been interesting.  In the late '90s through about 2002, there was enormous enthusiasm, with some practitioners making rather wild statements about where things were going.  It turned out that this hype was largely over-the-top - some early measurements proved to be very poorly reproducible and/or incorrectly interpreted; being able to synthesize $10^{22}$ identical "components" in a beaker is great, but if each one has to be bonded with atomic precision to get reproducible responses that's less awesome; getting molecular devices to have genuinely useful electronic properties was harder than it looked, with some fundamental limitations;  Hendrik Schoen was a fraud and his actions tainted the field; DARPA killed their Moletronics program, etc.    That's roughly when I entered the field.  Timing is everything.

Even with all these issues, these systems have proven to be a great proving ground for testing our understanding of a fair bit of physics and chemistry - how should we think about charge transport through small quantum systems?  How important are quantum effects, electron-electron interactions, electron-vibrational interactions?   How does dissipation really work at these scales?  Do we really understand how to compute molecular levels/gaps in free space and on surfaces with quantitative accuracy?  Can we properly treat open quantum systems, where particles and energy flow in and out?  What about time-dependent cases, relevant when experiments involve pump/probe optical approaches?  Even though we are (in my opinion) very unlikely to use single- or few-molecule devices in technologies, we are absolutely headed toward molecular-scale (countably few atom) silicon devices, and a lot of this physics is relevant there.  Similarly, the energetic and electronic structure issues involved are critically important to understanding catalysis, surface chemistry, organic photovoltaics, etc.

## Friday, June 05, 2015

### What does a molecule sound like?

We all learn in high school chemistry or earlier that atoms can bind together to form molecules, and like a "highly sophisticated interlocking brick system", those atoms like to bind in particular geometrical arrangements.  Later we learn that those bonds are dynamic things, with the atoms vibrating and wiggling like masses connected by springs, though here the (nonlinear) spring constants are set by the detailed quantum mechanical arrangement of electrons.  Like any connected set of masses and springs, or like a guitar string or tuning fork, molecules have "normal modes" of vibration.  Because the vibrations involve the movement of charge, either altering how positive and negative charge are spatially separated (dipole active modes) or how the charge would be able to respond to an electric field (roughly speaking, Raman active modes), these vibrations can be excited by light.  This is the basis for the whole field of vibrational spectroscopy.  Each molecule has a particular, distinct set of vibrations, like a musical chord.
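The masses-and-springs picture can be made quantitative with a toy calculation (my own sketch, not anything specific from the post): for a coupled system, the normal-mode frequencies come from the eigenvalues of the force-constant matrix divided by the mass.  The simplest case is two equal masses joined by three identical springs (wall-mass-mass-wall), a crude stand-in for a linear triatomic stretch.

```python
import numpy as np

# Two equal masses m, three identical springs k: wall--m--m--wall.
# Newton's laws give m*x'' = -K x with the force-constant matrix K below.
m, k = 1.0, 1.0
K = np.array([[2 * k, -k],
              [-k, 2 * k]])

# Normal-mode angular frequencies are sqrt(eigenvalues of K/m):
# the in-phase mode at sqrt(k/m) and the out-of-phase mode at sqrt(3k/m).
freqs = np.sqrt(np.linalg.eigvalsh(K / m))
```

A real molecule works the same way, except the "springs" come from the electronic structure and the matrix is built from second derivatives of the molecular potential energy surface.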

Because the atoms involved are quite light (one carbon atom has a mass of $2\times10^{-26}$ kg) and the effective springs are rather stiff, the vibrations are typically at frequencies of around $10^{13}$ Hz and higher - that's 10 billion times higher than a typical acoustic frequency (1 kHz).  Still, suppose we shifted the frequencies down to the acoustic range, using a conversion of 1 cm$^{-1}$ (a convenient unit of frequency for molecular spectroscopists) $\rightarrow$ 1 Hz.  What would molecules sound like?  As an example, I looked at the (surface enhanced) Raman spectrum of a small molecule, pMA. The Raman spectrum is from this  paper (Fig. 3a), and I took the three most dominant vibrational modes, added the pitches with the appropriate amplitudes, and this is the result (mp3 - embedding audio in blogger is annoying).
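If you'd like to try this mapping yourself, here's a sketch of the recipe (my own code; the mode frequencies and relative amplitudes below are illustrative placeholders, not values read off the cited paper): reinterpret each cm$^{-1}$ value as a frequency in Hz, sum the sine tones, and write out a short audio file.

```python
import numpy as np
import wave

# Hypothetical mode list: "cm^-1 -> Hz" mapping with made-up relative amplitudes.
modes_hz = [1077.0, 1590.0, 390.0]
amplitudes = [1.0, 0.6, 0.4]

rate = 44100
t = np.arange(2 * rate) / rate       # two seconds of audio
chord = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(modes_hz, amplitudes))
chord /= np.abs(chord).max()         # normalize to [-1, 1]

# Write a mono 16-bit WAV file of the "molecular chord".
with wave.open("molecule_chord.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                # 16-bit samples
    w.setframerate(rate)
    w.writeframes((chord * 32767).astype(np.int16).tobytes())
```

Swapping in the mode frequencies and peak heights from any published Raman spectrum gives that molecule's own chord.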

I thought I was being clever in doing this, only to realize that, as usual, someone else had this same idea, beat me to it, and implemented it in a very cool way.  You should really check that out.

## Tuesday, June 02, 2015

### Anecdote 3: The postdoc job talk and the Nobel laureate

Back when I was finishing up my doctoral work, I made a trip to New Jersey to interview in two places for possible postdoc positions.  As you might imagine, this was both exciting and nerve-wracking.  My first stop was Princeton, my old undergrad stomping grounds, where I was trying to compete for a prestigious named fellowship, and from there I was headed north to Bell Labs the following day.

As I've mentioned previously, my graduate work was on the low temperature properties of glasses, which share certain universal properties (temperature dependences of the thermal conductivity, specific heat, speed of sound, and dielectric response, to name a few) that are very distinct from those of crystals.  These parameters were all described remarkably well by the "two-level system" (TLS) model (the original paper - sorry for the paywall that even my own university library won't cover) dreamed up in 1971 by Phil Anderson, Bert Halperin, and Chandra Varma.  Anderson, a Nobel laureate for his many contributions to condensed matter physics (including Anderson localization, the Anderson model, and the Anderson-Higgs mechanism) was widely known for his paean to condensed matter physics and for being a curmudgeon.  He was (and still is) at Princeton, and while he'd known my thesis adviser for years, I was still pretty nervous about presenting my thesis work (experiments that essentially poked at the residual inadequacies of the original TLS model trying to understand why it worked so darn well) to him.

My visit was the standard format - in addition to showing me around the lab and talking with me about what projects I'd likely be doing, my host (who would've been my postdoc boss if I'd ended up going there) had thoughtfully arranged a few 1-on-1 meetings for me with a couple of other postdocs and a couple of faculty members, including Anderson.  My meeting with Anderson was right before lunch, and after I got over my nerves we had what felt to me like a pretty good discussion, and he seemed interested in what I was going to present.  My talk was scheduled for 1:00pm, right after lunch, always a tricky time.  I was speaking in one of the small classrooms in the basement of Jadwin Hall (right next to the room where I'd had undergrad quantum seven years earlier).  I was all set to go, with my binder full of transparencies - this was in the awkward period when we used computers to print transparencies, but good laptops + projectors were rare.   Anderson came in and sat down pointedly in the second row.  By my third slide, he was sound asleep.  By my fifth slide, he was noticeably snoring, though that didn't last too long.  He did revive and ask me a solid question at the end of the talk, which had gone fine.  In hindsight, I realize that my work, while solid and interesting, was in an area pretty far from the trendiest topics of the day, and therefore it was going to be an uphill battle to capture enthusiasm.  At least I'd survived, and the talk the next day up at Murray Hill was better received.