The Stochastically Driven Underdamped Harmonic Oscillator for Dummies
Last updated 2018-01-28 21:11:46 SGT
Tiago Campante likes to talk about a stochastically excited harmonic oscillator, but there doesn't seem to be a lot of information about that online^{1}, so here are some personal notes about what I imagine that could possibly look like. Expect errors.
Damped harmonic oscillator
In introductory physics it is customary to discuss the damped harmonic oscillator, which can be written (without loss of generality) as
[\left({\mathrm d \over \mathrm d t}\right)^2 X_t + 2\gamma {\mathrm d \over \mathrm d t} X_t + \omega_0^2 X_t = \eta(t).\tag{0}]In the underdamped limit, the solutions to the homogeneous problem are spanned by the basis functions [\exp\left(-\gamma t \pm \mathrm{i} \omega t\right)], with [\omega = \sqrt{\omega_0^2 - \gamma^2}]. This expression also holds good in the overdamped limit, and in what follows we will ignore the critically damped case.
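As a quick numerical sanity check (the parameter values here are arbitrary illustrative choices, not anything fixed by the text), we can verify by finite differences that [e^{-\gamma t}\cos(\omega t)] indeed satisfies the homogeneous equation:

```python
import math

# Illustrative parameters; any underdamped choice (gamma < omega0) works
gamma, omega0 = 0.3, 2.0
omega = math.sqrt(omega0**2 - gamma**2)

def x(t):
    """Real part of the homogeneous basis solution exp(-gamma t + i omega t)."""
    return math.exp(-gamma * t) * math.cos(omega * t)

h = 1e-4  # finite-difference step

def residual(t):
    """LHS of the homogeneous ODE, x'' + 2 gamma x' + omega0^2 x, by central differences."""
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    xddot = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return xddot + 2 * gamma * xdot + omega0**2 * x(t)

# The residual should be zero up to discretisation error at any time
print(max(abs(residual(t)) for t in (0.1, 1.0, 5.0)))
```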
For general [\eta] and zero initial conditions, we have the unique solution
[X_t = \int^t \eta(t') {1 \over \omega} e^{-\gamma (t-t')} \sin \left(\omega (t - t')\right) \mathrm{d} t',\tag{1}]where we are here leaving the initial time unspecified. This clearly has the structure of a convolution against some Green's function, and a similar expression exists for the overdamped case. As the initial time tends to [-\infty], the initial conditions are exponentially suppressed, and this driving term dominates asymptotically; consequently we will ignore them.
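We can check the Green's-function solution numerically by comparing the convolution integral against direct integration of the ODE from zero initial conditions; the deterministic drive and parameters below are illustrative choices only:

```python
import math

gamma, omega0 = 0.3, 2.0
omega = math.sqrt(omega0**2 - gamma**2)

def eta(t):
    # an arbitrary smooth test drive, for illustration only
    return math.sin(1.7 * t)

def green(u):
    # Green's function of the underdamped oscillator
    return math.exp(-gamma * u) * math.sin(omega * u) / omega

def x_convolution(t, n=4000):
    """X(t) = integral of eta(t') G(t - t') over [0, t], by the midpoint rule."""
    dt = t / n
    return dt * sum(eta((k + 0.5) * dt) * green(t - (k + 0.5) * dt) for k in range(n))

def x_direct(t, n=40000):
    """Semi-implicit Euler integration of the ODE with X(0) = X'(0) = 0."""
    dt = t / n
    x = v = 0.0
    for k in range(n):
        v += (eta(k * dt) - 2 * gamma * v - omega0**2 * x) * dt
        x += v * dt
    return x

print(x_convolution(10.0), x_direct(10.0))  # these should agree closely
```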
Now, suppose [X] to be a stochastic process parameterised by the time [t]. Let us consider the case where the excitation happens to be [\delta]-correlated white noise satisfying
[\left<\eta(t)\right> = 0,\text{ and } \left<\eta(t)\eta(t')\right> = \eta_0^2 \delta(t - t').]From the above expression it is clear that the first moment (ensemble average) of [X] vanishes: [\left<X_t\right> = 0].
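This is easy to see in a minimal ensemble simulation (a sketch with illustrative parameter values; the white noise is discretised so that each step receives an independent Gaussian kick of standard deviation [\eta_0/\sqrt{\Delta t}]):

```python
import math, random

rng = random.Random(42)  # fixed seed for reproducibility
gamma, omega0, eta0 = 0.3, 2.0, 1.0   # illustrative values
dt, nsteps, ntraj = 0.01, 2000, 400

means = [0.0] * nsteps
for _ in range(ntraj):
    x = v = 0.0
    for k in range(nsteps):
        # discretised white noise: std eta0/sqrt(dt), so <eta(t) eta(t')> -> eta0^2 delta(t-t')
        f = rng.gauss(0.0, eta0 / math.sqrt(dt))
        v += (f - 2 * gamma * v - omega0**2 * x) * dt
        x += v * dt
        means[k] += x / ntraj

# the ensemble mean should stay at zero to within sampling noise
print(max(abs(m) for m in means))
```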
Similarly, differentiating under the integral sign in (1), we can write the mean velocity^{2} as
[\left<{\mathrm d \over \mathrm d t}X_t\right> = -\gamma \left<X_t\right> + \left<Y_t\right> = 0, \quad\text{where}\quad Y_t \equiv \int^t \eta(t')\, e^{-\gamma (t-t')} \cos\left(\omega (t - t')\right) \mathrm{d} t'.]
Second moments
We now wish to find the two-time correlation function for the positions and the velocities; since the means vanish everywhere, these second moments coincide with the second cumulants, and can also be interpreted as the ensemble covariances. Inspecting the above integrals, it is clear that the positions are easier to evaluate. Explicitly, we have^{3} (from squaring (1)):
[\begin{aligned}K(t_1, t_2) &= \left<X_{t_1} X_{t_2}\right> \\ &= {\eta_0^2 \over 4 \gamma \omega_0^2}\, e^{-\gamma |t_1 - t_2|} \left(\cos\left(\omega (t_1 - t_2)\right) + {\gamma \over \omega} \sin\left(\omega |t_1 - t_2|\right)\right).\end{aligned}]Here we have taken the limit of the initial time tending to [-\infty] — a “dynamically relaxed” system — thereby exponentially suppressing any dependence on the initial conditions, and leaving behind only stationary terms (i.e. terms depending only on [t_1 - t_2]). This gives a (co)variance of [{1 \over \gamma} \left(\eta_0 \over 2 \omega_0\right)^2] at [t_1 = t_2].
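We can verify this closed form against the defining integral directly: the stationary covariance is [K(\tau) = \eta_0^2\int_0^\infty G(u)\, G(u+\tau)\,\mathrm du] with the Green's function from before, which is easy to evaluate by quadrature (parameter values again illustrative):

```python
import math

gamma, omega0, eta0 = 0.3, 2.0, 1.0   # illustrative values
omega = math.sqrt(omega0**2 - gamma**2)

def green(u):
    return math.exp(-gamma * u) * math.sin(omega * u) / omega

def K_numeric(tau, du=1e-3, umax=40.0):
    """Stationary covariance eta0^2 * integral of G(u) G(u + tau), midpoint rule."""
    n = int(umax / du)
    return eta0**2 * du * sum(green((k + 0.5) * du) * green((k + 0.5) * du + tau)
                              for k in range(n))

def K_closed(tau):
    """The closed-form stationary covariance quoted in the text."""
    var = (1 / gamma) * (eta0 / (2 * omega0))**2
    return var * math.exp(-gamma * tau) * (math.cos(omega * tau)
                                           + (gamma / omega) * math.sin(omega * tau))

for tau in (0.0, 1.0, 3.0):
    print(K_numeric(tau), K_closed(tau))
```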
The covariances of the velocities are a little more complicated; we need to find the cross-correlation of [X_t] and [Y_t], since
[\left<\dot{X}_{t_1} \dot{X}_{t_2}\right> = \gamma^2\left<X_{t_1} X_{t_2}\right> - \gamma\left<X_{t_1} Y_{t_2}\right> - \gamma\left<Y_{t_1} X_{t_2}\right> + \left<Y_{t_1} Y_{t_2}\right>.]The resulting expression has essentially the same structure as the covariances of [X], except we need to compute the integral of [\sin(\cdot) \cos(\cdot)] for the cross-correlations (and of [\cos(\cdot)\cos(\cdot)] for [\left<Y_{t_1} Y_{t_2}\right>]). Collecting terms, we obtain
[\left<\dot{X}_{t_1} \dot{X}_{t_2}\right> = {\eta_0^2 \over 4 \gamma}\, e^{-\gamma |t_1 - t_2|} \left(\cos\left(\omega (t_1 - t_2)\right) - {\gamma \over \omega} \sin\left(\omega |t_1 - t_2|\right)\right).\tag{2}]
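As a numerical check (parameters illustrative), the equal-time value of the velocity covariance can be recovered by quadrature from the velocity impulse response [\dot G(u) = e^{-\gamma u}\left(\cos\omega u - (\gamma/\omega)\sin\omega u\right)], and notably comes out independent of [\omega_0]:

```python
import math

eta0, gamma = 1.0, 0.3   # illustrative values

def mean_sq_velocity(omega0, du=1e-3, umax=60.0):
    """eta0^2 * integral of Gv(u)^2 over [0, inf), where Gv = dG/du."""
    omega = math.sqrt(omega0**2 - gamma**2)
    def gv(u):
        return math.exp(-gamma * u) * (math.cos(omega * u)
                                       - (gamma / omega) * math.sin(omega * u))
    n = int(umax / du)
    return eta0**2 * du * sum(gv((k + 0.5) * du)**2 for k in range(n))

# the same answer, eta0^2/(4 gamma), for several different oscillator frequencies
print([round(mean_sq_velocity(w0), 6) for w0 in (0.8, 2.0, 5.0)], eta0**2 / (4 * gamma))
```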
As [t_1 \to t_2], (2) gives us a mean-squared velocity of [\left<\dot{X}^2\right> = {\eta_0^2 \over 4 \gamma} \equiv {k_B T \over m}] by equipartition, whence we get the fluctuation-dissipation theorem
[2\gamma = {m \over 2 k_B T}\int \mathrm{d}t' \left<\eta(t) \eta(t + t')\right>.]Note that this does not depend on [\omega_0], and we should therefore expect it to hold also in the overdamped limit. In fact, the expression (2) coincides with what we would get in the Brownian-motion limit ([\omega_0 \to 0 \implies \omega \to \mathrm{i} \gamma]):
[K(t_1, t_2) = {\eta_0^2 \over 4 \gamma} e^{-2 \gamma |t_1 - t_2|}.]
Power Spectra
We can relate the spectral contents of the stationary covariances derived above to the power spectra of [X] and [\dot{X}] through the Wiener-Khinchin theorem^{4}. Since we have used [\omega] for the oscillator frequencies, we will use [\sigma] as our conjugate variable to [\tau = t_1 - t_2] in the Fourier transform. Examining the covariances above, we observe that they are of the form
[K(\tau) = A e^{-\gamma|\tau|} \left(\text{ sum of sinusoidal terms in } \omega \tau\right).]We therefore construct the power spectrum through the convolution theorem by translating copies of some characteristic function to [\sigma = \pm \omega]. Under the unitary Fourier transform, we get this line profile as
[\int_{-\infty}^{\infty}e^{-\gamma|\tau| - 2\pi i\sigma\tau}\mathrm{d} \tau = {2 \gamma \over \gamma^2 + (2 \pi \sigma)^2};]that is, the line profile is a Lorentzian.
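This transform pair is easy to confirm by quadrature (the value of [\gamma] below is an illustrative choice); by symmetry the imaginary part vanishes, leaving [2\int_0^\infty e^{-\gamma\tau}\cos(2\pi\sigma\tau)\,\mathrm d\tau]:

```python
import math

gamma = 0.3   # illustrative value

def line_profile(sigma, dtau=5e-4, taumax=60.0):
    """2 * integral of exp(-gamma tau) cos(2 pi sigma tau) over [0, inf)."""
    n = int(taumax / dtau)
    return 2 * dtau * sum(math.exp(-gamma * t) * math.cos(2 * math.pi * sigma * t)
                          for t in ((k + 0.5) * dtau for k in range(n)))

def lorentzian(sigma):
    """The claimed closed form of the line profile."""
    return 2 * gamma / (gamma**2 + (2 * math.pi * sigma)**2)

for s in (0.0, 0.05, 0.2):
    print(line_profile(s), lorentzian(s))
```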
Fokker-Planck equation
The corresponding Fokker-Planck equation can be found by noting that our definition of the harmonic oscillator corresponds to the trajectory of a particle of unit mass in a quadratic potential.^{5} Then [m=1, p = \dot{x}], and in what follows I will write things in the standard coordinates [(x, p)].
Beginning with Boltzmann's Equation for some distribution in phase space, let us suppose that the scattering term [w(x \to x + x', p \to p + p') \equiv w(p,p')] does not affect or depend on the position coordinate. Then
[\begin{aligned}{\partial f \over \partial t} &+ {p \over m} {\partial f \over \partial x} - {\partial V \over \partial x} {\partial f \over \partial p} = \int \mathrm{d} p'~ \left(f(x, p-p')w(p-p',p') - f(x, p)w(p,p')\right) \\ &\sim -{\partial \over \partial p}\left(\int \mathrm{d} p'~p' f(x, p)w(p,p')\right) + {1 \over 2} {\partial^2 \over \partial p^2}\left(\int \mathrm{d} p'~(p')^2 f(x, p)w(p,p')\right),\end{aligned}]where we have expanded in powers of [p'] assuming [w] to be significant only where [\left|{p' \over p}\right| \ll 1]. We relate the terms on the RHS of this equation to the quantities in our above formulation (0) in the following way:
 The term on the LHS containing the momentum derivative of [f] corresponds to a force (in this case from the oscillator potential), so the momentum derivative on the RHS must likewise correspond to a force (here the dissipative force, proportional to [2\gamma]). For dimensional consistency we must have [\int \mathrm{d} p'~p'\, w(p,p') = -2\gamma p].
 Since [\gamma] is related to [\eta] through the fluctuationdissipation theorem, one also suspects the integral in the second term to be related to the second moment of the velocity. On the other hand, [w] must have units of inverse time. For dimensional consistency, therefore, our ansatz must be that the integral is proportional to [m^2 \cdot 2 \gamma \left<\dot{X}^2\right> \sim m^2 \eta_0^2].
In fact, if we cheat a little (examining the limit of no forces and ignoring the dissipative term), we realise that this is a heat equation for [f] in [(p, t)], with the integral in the second term playing the role of the diffusion coefficient. If we knew the “answer at the back of the book” for Brownian motion^{6}, we would already be done. Otherwise, however, with [\alpha] as some (so far) undetermined constant, we write
[\begin{aligned}{\partial f \over \partial t} + {p \over m} {\partial f \over \partial x} &- m \omega_0^2 x {\partial f \over \partial p} = 2 \gamma {\partial \over \partial p}\left(p f(x,p)\right) + \alpha{m^2 \eta_0^2 \over 2} {\partial^2 \over \partial p^2}f(x, p) \\ &= 2 \gamma {\partial \over \partial p} \left(p f(x, p) + \alpha m^2 {\eta_0^2 \over 4\gamma} {\partial \over \partial p} f(x, p)\right)\\ &= \alpha m^2 {\eta_0^2 \over 2} {\partial \over \partial p} \left(e^{-{p^2 \over 2}{4\gamma \over \alpha m^2 \eta_0^2}} {\partial \over \partial p} \left(e^{{p^2 \over 2}{4\gamma \over \alpha m^2 \eta_0^2}}f(x, p) \right)\right).\end{aligned}]Observing that the RHS vanishes if the term in the inner parentheses is independent of [p], and that [f] tends towards the Boltzmann distribution at equilibrium^{7}, we argue that
[\begin{aligned}{p^2 \over 2 m k_B T} &= {p^2 \cdot 4\gamma \over 2 \alpha m^2 \eta_0^2} \\ &= {p^2 \over 2 \alpha m k_B T}\\ \implies \alpha &= 1.\end{aligned}]Then the FokkerPlanck equation reads
[\begin{aligned}{\partial f \over \partial t} + {p \over m} {\partial f \over \partial x} &- m \omega_0^2 x {\partial f \over \partial p} = 2\gamma m k_B T {\partial \over \partial p} \left(e^{-{p^2 \over 2 m k_B T}} {\partial \over \partial p} \left(e^{{p^2 \over 2 m k_B T}}f(x, p) \right)\right),\end{aligned}]or, in velocity coordinates,
[\begin{aligned}{\partial f \over \partial t} + v {\partial f \over \partial x} &- \omega_0^2 x {\partial f \over \partial v} = {\eta_0^2 \over 2} {\partial \over \partial v} \left(e^{-{2 v^2 \gamma \over \eta_0^2}} {\partial \over \partial v} \left(e^{{2 v^2 \gamma \over \eta_0^2}}f(x, v) \right)\right) \\ &= 2 \gamma {\partial \over \partial v} \left(v f(x, v)\right) + {\eta_0^2 \over 2}{\partial^2 \over \partial v^2} f(x, v).\end{aligned}]On closer inspection, however, this clearly does not reduce to the “canonical” result in the Brownian-motion limit! In particular, projecting down to the probability density function in only the position coordinate fails to yield the simple heat equation. The reason for this becomes apparent when we consider the defining Itō SDE
[\mathrm dX_t = \eta_0 \mathrm dW_t,]for which the Wiener process appears in the master equation with respect to an integral over [x]. Conversely, in our construction, the RHS of the Boltzmann equation (effectively the master equation over phase space) contained only momentum terms. That is to say: in Brownian motion we have random [\delta]-correlated impulses, while here we have [\delta]-correlated forces.
Finally, the expression here is suggestive of the Fokker-Planck equation resulting from an arbitrary potential and dissipative forces under [\delta]-correlated excitation, which can be written in terms of the combined force [F] as^{8}
[\boxed{{\partial f \over \partial t} + \sum_{\mu}{\partial \over \partial x^{\mu}} \left(v^\mu f(x, p)\right) + \sum_\mu{\partial \over \partial p_{\mu}} \left( F_{\mu} f(x, p) \right) = \sum_{\mu,\nu, \kappa} {\partial \over \partial p_\mu}{\partial \over \partial p_\nu}{m^2\eta_{\mu\kappa}\eta_{\nu\kappa} \over 2} f(x, p).}]
At least, at a level accessible to me; I have a very short attention span, very little mathematical training, and things to do on weekdays. Hopefully I'll have more free time after quals. ↩

I'm not actually sure if it's valid to write things this way, but let's run with it for now. ↩

Fun anecdote: I evaluated this integral (and the other ones that go [\sin(\cdot)\cos(\cdot)] and [\cos(\cdot)\cos(\cdot)]), by hand, during a train ride on Christmas Eve 2017. While typesetting this blog post, I decided to change the defining ODE from reading [\gamma \dot{X}] to [2\gamma\dot{X}] (so that the exponential terms read [e^{-\gamma t}] instead of [e^{-{\gamma\over 2} t}]), and recomputed them in Mathematica to check correctness; this took all of maybe 5 seconds per integral. Mathematica also found the reduced expression (2) basically instantaneously, as opposed to me staring very hard at my tablet for half the train ride. That sure was a real confidence booster… ↩

In turn, this also gives us the power spectrum of a single realisation of [X], under assumption of ergodicity (which permits us to identify ensemble and long-time averages). ↩

We could also have turned this into a system of coupled first-order Itō SDEs, but I'm not familiar enough with them to use that to recover the corresponding Fokker-Planck equations directly. Instead I follow the construction taught by BG Englert in his exceptional stat mech course, PC4241. ↩

Strictly speaking as [\omega_0 \to 0] we should recover an “Ornstein-Uhlenbeck” process, in which case (as in Brownian motion) the diffusion constant is immediately given as [\eta_0^2 \over 2] with respect to the velocities, or with an extra factor of [m^2] in momentum coordinates. ↩

This argument is lifted directly from Englert's notes, and I'm not able to think of a better one at this time that doesn't explicitly invoke the Boltzmann distribution, and/or use results about first-order stochastic differential equations that may not apply in this case. ↩

This is in fact what we would have gotten if we had set up a coupled system of Itō SDEs in the first place: compare this to the usual result of
[{\partial p \over \partial t} = -{\partial \over \partial x} \left(\mu p\right) + {\partial^2 \over \partial x^2}\left({\sigma^2\over 2} p\right)]associated with the SDE (driven by the standard Wiener process)
[\mathrm dX_t = \mu(X_t, t) \mathrm{d} t + \sigma(X_t, t) \mathrm d W_t.]It is now obvious how to add random impulses in addition to random forces; the result of this is simply to add an additional diffusion term in the position coordinate on the RHS of the resulting FokkerPlanck equation. ↩