The Stochastically Driven Underdamped Harmonic Oscillator for Dummies

Last updated 2018-01-28 21:11:46 SGT

Tiago Campante likes to talk about a stochastically excited harmonic oscillator, but there doesn't seem to be a lot of information about that online1, so here are some personal notes about what I imagine that could possibly look like. Expect errors.

Damped harmonic oscillator

In introductory physics it is customary to discuss the damped harmonic oscillator, which can be written (without loss of generality) as

[\left({\mathrm d \over \mathrm d t}\right)^2 X_t + 2\gamma {\mathrm d \over \mathrm d t} X_t + \omega_0^2 X_t = \eta(t).\tag{0}]

In the underdamped limit, the solutions to the homogeneous problem are spanned by the basis functions [\exp\left(-\gamma t \pm \mathrm{i} \omega t\right)], with [\omega = \sqrt{\omega_0^2 - \gamma^2}]. This expression also holds in the overdamped limit, and in what follows we will ignore the critically damped case.

For general [\eta] and zero initial conditions, we have the unique solution

[X_t = \int^t \eta(t') {1 \over \omega} e^{- \gamma (t-t')} \sin \left(\omega (t - t')\right) \mathrm{d} t',]

where we are here leaving the initial time unspecified. This clearly has the structure of a convolution against some Green's function, and a similar expression exists for the overdamped case. As the initial time tends to [-\infty], the initial conditions are exponentially suppressed, and this driving term dominates asymptotically; consequently we will ignore them.
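
As a quick sanity check of this Green's-function representation, here is a minimal Python sketch (with arbitrarily chosen illustrative parameters) that integrates (0) directly for a smooth test forcing and compares against the discrete convolution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (underdamped: omega0 > gamma)
gamma, omega0 = 0.1, 1.0
omega = np.sqrt(omega0**2 - gamma**2)

t = np.linspace(0.0, 50.0, 5001)
dt = t[1] - t[0]
eta = np.sin(0.3 * t)                               # any smooth test forcing

# Direct integration of x'' + 2*gamma*x' + omega0**2 * x = eta(t), zero initial conditions
def rhs(ti, y):
    x, v = y
    return [v, -2.0 * gamma * v - omega0**2 * x + np.sin(0.3 * ti)]

sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)

# Convolution against the Green's function G(s) = exp(-gamma * s) * sin(omega * s) / omega
G = np.exp(-gamma * t) * np.sin(omega * t) / omega
x_conv = np.convolve(eta, G)[: len(t)] * dt

print(np.max(np.abs(sol.y[0] - x_conv)))            # small, up to discretisation error
```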

Now, suppose [X] to be a stochastic process parameterised by the time [t]. Let us consider the case where the excitation happens to be [\delta]-correlated white noise satisfying

[\left<\eta(t)\right> = 0,\text{ and } \left<\eta(t)\eta(t')\right> = \eta_0^2 \delta(t - t').]

From the above expression it is clear that the first moment (ensemble average) of [X] vanishes: [\left<X_t\right> = 0]. Likewise, since the velocity (obtained by differentiating it with respect to [t]) is also linear in [\eta], its first moment must also vanish. Explicitly, if we define

[Y_t = \int^t \eta(t') {1 \over \omega} e^{- \gamma (t-t')} \cos \left(\omega (t - t')\right) \mathrm{d} t',]

we can write the mean velocity2 as,

[\left<{\mathrm d \over \mathrm d t}X_t\right> = - \gamma \left<X_t\right> + \int^t \left<\eta(t')\right> e^{- \gamma (t-t')} \cos \left(\omega (t - t')\right) \mathrm{d} t' = -\gamma \left<X_t\right> + \omega \left<Y_t\right> = 0. \tag{1}]
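
As a crude Monte-Carlo check of these vanishing means, we can discretise (0) with an Euler-Maruyama scheme, in which the white-noise force contributes [\eta_0\,\mathrm{d}W] per step; the parameter values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, omega0, eta0 = 0.1, 1.0, 0.5                 # illustrative values only
dt, n_steps, n_paths = 1e-2, 20_000, 5_000

x = np.zeros(n_paths)
v = np.zeros(n_paths)
for _ in range(n_steps):
    # Euler-Maruyama: dx = v dt, dv = (-2*gamma*v - omega0**2 * x) dt + eta0 dW
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    x, v = x + v * dt, v + (-2.0 * gamma * v - omega0**2 * x) * dt + eta0 * dW

# Both ensemble means should be zero up to Monte-Carlo noise
print(x.mean(), v.mean())
```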

Second moments

We now wish to find the two-time correlation function for the positions and the velocities; since the means vanish everywhere, these second moments coincide with the second cumulants, and can also be interpreted as the ensemble covariances. Inspecting the above integrals, it is clear that the positions are easier to evaluate. Explicitly, we have3 (from squaring the integral expression for [X_t]):

[\begin{aligned}K(t_1, t_2) &= \left<X_{t_1} X_{t_2}\right> \\ &= \int^{t_1} \int^{t_2} \mathrm{d} t' \mathrm{d} t'' \left<\eta(t')\eta(t'')\right> {1 \over \omega^2} e^{- \gamma (t_1-t')} e^{- \gamma (t_2-t'')} \sin \left(\omega (t_1 - t')\right) \sin \left(\omega (t_2 - t'')\right) \\ &= {\eta_0^2e^{-\gamma(t_1 + t_2)} \over \omega^2} \int^{\mathrm{min}(t_1, t_2)} \mathrm{d} t'~e^{2\gamma t'} \sin \left(\omega (t_1 - t')\right) \sin \left(\omega (t_2 - t')\right) \\ &= \left(\eta_0 \over 2 \omega\right)^2 e^{-\gamma |t_1 - t_2|} \left[ {1\over \gamma} \cos\left(\omega |t_1 - t_2|\right) - {1 \over \omega_0} \cos\left(\omega |t_1 - t_2| + \arccos{\gamma \over \omega_0}\right)\right].\end{aligned}]

Here we have taken the limit of the initial time tending to [-\infty] — a “dynamically relaxed” system — thereby exponentially suppressing any dependence on the initial conditions, and leaving behind only stationary terms (i.e. terms depending only on [|t_1 - t_2|]). This gives a (co)variance of [{1 \over \gamma} \left(\eta_0 \over 2 \omega_0\right)^2] at [t_1 = t_2].
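
The closed form on the last line is easy to mistype, so here is a small sketch that checks it against direct numerical quadrature of the single integral in the line above it, with a large negative number standing in for the initial time at [-\infty] (parameter values again purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

gamma, omega0, eta0 = 0.1, 1.0, 0.5                 # illustrative values only
omega = np.sqrt(omega0**2 - gamma**2)

def K_integral(t1, t2):
    # The single integral in the second-to-last line, with -200 standing in for -infinity
    integrand = lambda tp: (np.exp(2 * gamma * tp)
                            * np.sin(omega * (t1 - tp)) * np.sin(omega * (t2 - tp)))
    val, _ = quad(integrand, -200.0, min(t1, t2), limit=500)
    return eta0**2 * np.exp(-gamma * (t1 + t2)) / omega**2 * val

def K_closed(t1, t2):
    # The closed form in the last line
    tau = abs(t1 - t2)
    return (eta0 / (2 * omega))**2 * np.exp(-gamma * tau) * (
        np.cos(omega * tau) / gamma
        - np.cos(omega * tau + np.arccos(gamma / omega0)) / omega0)

for t1, t2 in [(0.0, 0.0), (3.0, 1.0), (10.0, 2.5)]:
    print(K_integral(t1, t2), K_closed(t1, t2))     # the two columns should agree
```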

The covariances of the velocities are a little more complicated; we need to find the cross-correlation of [X_t] and [Y_t], since

[\left<\dot{X}_{t_1} \dot{X}_{t_2}\right> = \gamma^2\left<X_{t_1} X_{t_2}\right> - \gamma\omega \left(\left<X_{t_1} Y_{t_2}\right> + \left<Y_{t_1} X_{t_2}\right> \right) + \omega^2 \left<Y_{t_1} Y_{t_2}\right>.]

The resulting expression has essentially the same structure as the covariances of [X], except we need to compute the integral of [\sin(\cdot) \cos(\cdot)] for the cross-correlation [\left<X_{t_1} Y_{t_2}\right>], and of [\cos(\cdot) \cos(\cdot)] for the autocorrelation [\left<Y_{t_1} Y_{t_2}\right>]. With some work, we eventually arrive at

[\begin{aligned}\left<\dot{X}_{t_1} \dot{X}_{t_2}\right> &= \left(\eta_0 \over 2 \omega\right)^2 e^{-\gamma |t_1 - t_2|} \left[\gamma^2 \left({1\over \gamma} \cos\left(\omega |t_1 - t_2|\right) - {1 \over \omega_0} \cos\left(\omega |t_1 - t_2| + \arccos{\gamma \over \omega_0}\right)\right)\right.\\ &- 2\gamma{\omega \over \omega_0} \sin \left(\omega |t_1 - t_2| + \arccos {\gamma \over \omega_0}\right) \\ &+ \left.\omega^2 \left({1\over \gamma} \cos\left(\omega |t_1 - t_2|\right) + {1 \over \omega_0} \cos\left(\omega |t_1 - t_2| + \arccos{\gamma \over \omega_0}\right)\right)\right]\\&= {\eta_0^2 \over 4 \omega} e^{-\gamma |t_1 - t_2|} \left({\omega \over \gamma} \cos \left(\omega(t_1 - t_2)\right) - \sin \left(\omega|t_1 - t_2|\right)\right). \hskip 5em \text{(2)}\end{aligned}]

As [t_1 \to t_2], (2) gives us a mean-squared velocity of [\left<\dot{X}^2\right> = {\eta_0^2 \over 4 \gamma} \equiv {k_B T \over m}] by equipartition, whence we get the fluctuation-dissipation theorem

[2\gamma = {m \over 2 k_B T}\int \mathrm{d}t' \left<\eta(t) \eta(t + t')\right>.]

Note that this does not depend on [\omega_0], and we should therefore expect it to hold also in the overdamped limit. In fact, the expression (2) coincides with what we would get in the Brownian-motion limit ([\omega_0 \to 0 \implies \omega \to \mathrm{i} \gamma]):

[\left<\dot{X}_{t_1} \dot{X}_{t_2}\right> = {\eta_0^2 \over 4 \gamma} e^{-2 \gamma |t_1 - t_2|}.]
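
Before moving on, we can check equipartition and expression (2) against an ensemble of Euler-Maruyama paths in the same crude spirit as before (illustrative parameters; agreement is only expected at the percent level for these settings):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, omega0, eta0 = 0.1, 1.0, 0.5                 # illustrative values only
omega = np.sqrt(omega0**2 - gamma**2)
dt, n_paths = 1e-2, 20_000

def step(x, v):
    # One Euler-Maruyama step of dx = v dt, dv = (-2*gamma*v - omega0**2*x) dt + eta0 dW
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    return x + v * dt, v + (-2.0 * gamma * v - omega0**2 * x) * dt + eta0 * dW

x, v = np.zeros(n_paths), np.zeros(n_paths)
for _ in range(20_000):                             # relax for many damping times 1/gamma
    x, v = step(x, v)

v1, tau = v.copy(), 2.0
for _ in range(int(round(tau / dt))):               # evolve a further lag tau
    x, v = step(x, v)

print((v1 * v1).mean(), eta0**2 / (4 * gamma))      # equipartition: k_B T / m
analytic = eta0**2 / (4 * omega) * np.exp(-gamma * tau) * (
    omega / gamma * np.cos(omega * tau) - np.sin(omega * tau))
print((v1 * v).mean(), analytic)                    # expression (2) at lag tau
```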

Power Spectra

We can relate the spectral contents of the stationary covariances derived above to the power spectra of [X] and [\dot{X}] through the Wiener-Khinchin theorem4. Since we have used [\omega] for the oscillator frequencies, we will use [\sigma] as our conjugate variable to [\tau = t_1 - t_2] in the Fourier transform. Examining the covariances above, we observe that they are of the form

[K(\tau) = A e^{-\gamma|\tau|} \left(\text{ sum of sinusoidal terms in } \omega |\tau|\right).]

We therefore construct the power spectrum through the convolution theorem by translating copies of some characteristic line profile to [\sigma = \pm \omega / 2\pi]. Under the unitary Fourier transform, we get this line profile as

[\int_{-\infty}^{\infty}e^{-\gamma|\tau| - 2\pi i\sigma\tau}\mathrm{d} \tau = {2 \gamma \over \gamma^2 + (2 \pi \sigma)^2};]

that is, the line profile is a Lorentzian.
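
As a numerical check of this line-profile integral (parameter values purely illustrative), truncating the range once the exponential has decayed to negligible values:

```python
import numpy as np
from scipy.integrate import quad

gamma = 0.3                                         # illustrative line width
for sigma in [0.0, 0.1, 0.5]:
    # The imaginary part vanishes by symmetry, so only the cosine part survives
    val, _ = quad(lambda tau: np.exp(-gamma * abs(tau)) * np.cos(2 * np.pi * sigma * tau),
                  -80.0, 80.0, limit=400)           # exp(-gamma*80) is negligible
    print(val, 2 * gamma / (gamma**2 + (2 * np.pi * sigma)**2))
```

One could also estimate the spectrum of a simulated realisation (e.g. with scipy.signal.welch) and look for Lorentzian lines of half-width [\gamma/2\pi] centred at [\sigma = \pm\omega/2\pi], but the quadrature above already pins down the profile.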

Fokker-Planck equation

The corresponding Fokker-Planck equation can be found by noting that our definition of the harmonic oscillator corresponds to the trajectory of a particle of unit mass in a quadratic potential.5 Then [m=1, p = \dot{x}], and in what follows I will write things in the standard coordinates [(x, p)].

Beginning with Boltzmann's Equation for some distribution in phase space, let us suppose that the scattering term [w(x \to x + x', p \to p + p') \equiv w(p,p')] does not affect or depend on the position coordinate. Then

[\begin{aligned}{\partial f \over \partial t} &+ {p \over m} {\partial f \over \partial x} - {\partial V \over \partial x} {\partial f \over \partial p} = \int \mathrm{d} p'~ \left(f(x, p-p')w(p-p',p') - f(x, p)w(p,p')\right) \\ &\sim -{\partial \over \partial p}\left(\int \mathrm{d} p'~p' f(x, p)w(p,p')\right) + {1 \over 2} {\partial^2 \over \partial p^2}\left(\int \mathrm{d} p'~(p')^2 f(x, p)w(p,p')\right),\end{aligned}]

where we have expanded in powers of [p'] assuming [w] to be significant only where [\left|{p' \over p}\right| \ll 1]. We relate the terms on the RHS of this equation to the quantities in our above formulation (0) in the following way:

[-{\partial \over \partial p}\left(\int \mathrm{d} p'~p' f(x, p)w(p,p')\right) = {\partial \over \partial p}\left(2\gamma p f(x,p)\right).]

In fact, if we cheat a little (examining the limit of no forces and ignoring the dissipative term), we realise that this is a heat equation for [f] in [(p, t)] with the integral in the second term playing the role of the diffusion coefficient. If we knew the “answer at the back of the book” for Brownian motion6, we would already be done. Otherwise, however, with [\alpha] as some (so far) undetermined constant, we write

[\begin{aligned}{\partial f \over \partial t} + {p \over m} {\partial f \over \partial x} &- m \omega_0^2 x {\partial f \over \partial p} = 2 \gamma {\partial \over \partial p}\left(p f(x,p)\right) + \alpha{m^2 \eta_0^2 \over 2} {\partial^2 \over \partial p^2}f(x, p) \\ &= 2 \gamma {\partial \over \partial p} \left(p f(x, p) + \alpha m^2 {\eta_0^2 \over 4\gamma} {\partial \over \partial p} f(x, p)\right)\\ &= \alpha m^2 {\eta_0^2 \over 2} {\partial \over \partial p} \left(e^{-{p^2 \over 2}{4\gamma \over \alpha m^2 \eta_0^2}} {\partial \over \partial p} \left(e^{{p^2 \over 2}{4\gamma \over \alpha m^2 \eta_0^2}}f(x, p) \right)\right).\end{aligned}]

Observing that the RHS vanishes if the term in the inner parentheses is independent of [p], and that [f] tends towards the Boltzmann distribution at equilibrium7, we argue that

[\begin{aligned}{p^2 \over 2 m k_B T} &= {p^2 \cdot 4\gamma \over 2 \alpha m^2 \eta_0^2} \\ &= {p^2 \over 2 \alpha m k_B T}\\ \implies \alpha &= 1.\end{aligned}]

Then the Fokker-Planck equation reads

[\begin{aligned}{\partial f \over \partial t} + {p \over m} {\partial f \over \partial x} &- m \omega_0^2 x {\partial f \over \partial p} = 2\gamma m k_B T {\partial \over \partial p} \left(e^{-{p^2 \over 2 m k_B T}} {\partial \over \partial p} \left(e^{{p^2 \over 2 m k_B T}}f(x, p) \right)\right),\end{aligned}]

or, in velocity coordinates,

[\begin{aligned}{\partial f \over \partial t} + v {\partial f \over \partial x} &- \omega_0^2 x {\partial f \over \partial v} = {\eta_0^2 \over 2} {\partial \over \partial v} \left(e^{-{2 v^2 \gamma \over \eta_0^2}} {\partial \over \partial v} \left(e^{{2 v^2 \gamma \over \eta_0^2}}f(x, v) \right)\right) \\ &= 2 \gamma {\partial \over \partial v} \left(v f(x, v)\right) + {\eta_0^2 \over 2}{\partial^2 \over \partial v^2} f(x, v).\end{aligned}]
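
As a consistency check on the equilibrium argument above, we can verify symbolically that the Boltzmann weight [f \propto \exp\left(-2\gamma(v^2 + \omega_0^2 x^2)/\eta_0^2\right)] (i.e. [e^{-H/k_B T}] with [k_B T/m = \eta_0^2/4\gamma]) is a stationary solution of the velocity-coordinate equation above. A minimal sympy sketch:

```python
import sympy as sp

x, v, gamma, omega0, eta0 = sp.symbols('x v gamma omega0 eta0', positive=True)

# Candidate stationary solution: Boltzmann weight with k_B T / m = eta0**2 / (4 gamma)
f = sp.exp(-2 * gamma * (v**2 + omega0**2 * x**2) / eta0**2)

streaming = v * sp.diff(f, x) - omega0**2 * x * sp.diff(f, v)
collision = 2 * gamma * sp.diff(v * f, v) + eta0**2 / 2 * sp.diff(f, v, 2)

# Both sides of the stationary equation vanish separately for this f
print(sp.simplify(streaming), sp.simplify(collision))
```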

On closer inspection, however, this clearly does not reduce to the “canonical” result in the Brownian-motion limit! In particular, projecting down to the probability density function in only the position coordinate fails to yield the simple heat equation. The reason for this becomes apparent when we consider the defining Itō SDE

[\mathrm dX_t = \eta_0 \mathrm dW_t,]

for which the Wiener process enters the master equation through an integral over [x]. Conversely, in our construction, the RHS of the Boltzmann equation (effectively the master equation over phase space) contained only momentum terms. That is to say: in Brownian motion we have random [\delta]-correlated impulses, while here we have [\delta]-correlated forces.

Finally, the expression here is suggestive of the Fokker-Planck equation resulting from an arbitrary potential and dissipative forces under [\delta]-correlated excitation, which can be written in terms of the combined force [F] as8

[\boxed{{\partial f \over \partial t} + \sum_{\mu}{\partial \over \partial x^{\mu}} \left(v^\mu f(x, p)\right) + \sum_\mu{\partial \over \partial p_{\mu}} \left( F_{\mu} f(x, p) \right) = \sum_{\mu,\nu, \kappa} {\partial \over \partial p_\mu}{\partial \over \partial p_\nu}{m^2\eta_{\mu\kappa}\eta_{\nu\kappa} \over 2} f(x, p).}]
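
This boxed form is consistent with the single-oscillator result above: taking one degree of freedom, the combined force [F = -m\omega_0^2 x - 2\gamma p], and a single noise amplitude [\eta_{11} = \eta_0], it reduces to the Fokker-Planck equation we derived. A quick sympy sketch of that reduction (nothing here is new, it just re-checks the algebra):

```python
import sympy as sp

x, p, t, m, gamma, omega0, eta0 = sp.symbols('x p t m gamma omega0 eta0', positive=True)
f = sp.Function('f')(x, p, t)

F = -m * omega0**2 * x - 2 * gamma * p              # combined conservative + dissipative force

# Boxed equation, written as LHS - RHS, for one degree of freedom and eta_{11} = eta0
boxed = (sp.diff(f, t) + sp.diff(p / m * f, x) + sp.diff(F * f, p)
         - sp.diff(m**2 * eta0**2 / 2 * f, p, 2))

# The harmonic-oscillator Fokker-Planck equation derived above, also as LHS - RHS
earlier = (sp.diff(f, t) + p / m * sp.diff(f, x) - m * omega0**2 * x * sp.diff(f, p)
           - 2 * gamma * sp.diff(p * f, p) - m**2 * eta0**2 / 2 * sp.diff(f, p, 2))

print(sp.simplify(boxed - earlier))                 # 0
```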

  1. At least, at a level accessible to me; I have a very short attention span, very little mathematical training, and things to do on weekdays. Hopefully I'll have more free time after quals. 

  2. I'm not actually sure if it's valid to write things this way, but let's run with it for now. 

  3. Fun anecdote: I evaluated this integral (and the other ones that go [\sin(\cdot)\cos(\cdot)] and [\cos(\cdot)\cos(\cdot)]), by hand, during a train ride on Christmas Eve 2017. While typesetting this blog post, I decided to change the defining ODE from reading [\gamma \dot{X}] to [2\gamma\dot{X}] (so that the exponential terms read [e^{\gamma t}] instead of [e^{{\gamma\over 2} t}]), and re-computed them in Mathematica to check correctness; this took all of maybe 5 seconds per integral. Mathematica also found the reduced expression (2) basically instantaneously, as opposed to me staring very hard at my tablet for half the train ride. That sure was a real confidence booster… 

  4. In turn, this also gives us the power spectrum of a single realisation of [X], under assumption of ergodicity (which permits us to identify ensemble and long-time averages). 

  5. We could also have turned this into a system of coupled first-order Itō SDEs, but I'm not familiar enough with them to use that to recover the corresponding Fokker-Planck equations directly. Instead I follow the construction taught by BG Englert in his exceptional stat mech course, PC4241. 

  6. Strictly speaking as [\omega_0 \to 0] we should recover an “Ornstein-Uhlenbeck” process, in which case (as in Brownian motion) the diffusion constant is immediately given as [\eta_0^2 \over 2] with respect to the velocities, or with an extra factor of [m^2] in momentum coordinates. 

  7. This argument is lifted directly from Englert's notes, and I'm not able to think of a better one at this time that doesn't explicitly invoke the Boltzmann distribution, and/or use results about first-order stochastic differential equations that may not apply in this case. 

  8. This is in fact what we would have gotten if we had set up a coupled system of Itō SDEs in the first place: compare this to the usual result of

    [{\partial p \over \partial t} = -{\partial \over \partial x} \left(\mu p\right) + {\partial^2 \over \partial x^2}\left({\sigma^2\over 2} p\right)]

    associated with the SDE (driven by the standard Wiener process)

    [\mathrm dX_t = \mu(X_t, t) \mathrm{d} t + \sigma(X_t, t) \mathrm d W_t.]

    It is now obvious how to add random impulses in addition to random forces; the result of this is simply to add an additional diffusion term in the position coordinate on the RHS of the resulting Fokker-Planck equation. 

