# Generate samples for unimodal systems with standard MCMC¶

When many prior samples are used with The Joker and the sampler returns only one sample, or all returned samples lie within the same mode of the posterior, the posterior pdf is likely unimodal. In that case, we can switch to standard MCMC (e.g., Metropolis-Hastings, or an ensemble sampler like emcee) to generate more posterior samples. In this example, we will use emcee to “continue” sampling for data that are so constraining that The Joker returns only a single sample.

[1]:

import pickle

from astropy.time import Time
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline

from thejoker.data import RVData
from thejoker.sampler.mcmc import TheJokerMCMCModel
from thejoker.sampler import JokerParams, TheJoker
from thejoker.plot import plot_rv_curves


For this example, I’ve already pre-generated some simulated data that we’re going to use – the true parameters of the main orbit are in the cell below:

[2]:

true_P = 61.1166 * u.day


The code used to generate the data, and the data values themselves, are in the next two cells:

[3]:

from twobody import KeplerOrbit
rnd = np.random.RandomState(42)

t0 = Time(2450546.80, format='jd', scale='utc')

truth = dict()
truth['P'] = true_P
truth['a'] = 0.2081*u.au * (1000.95*u.Mjup) / (0.320*u.Msun)
truth['e'] = 0.0324 * u.one
phi0 = 2*np.pi*t0.tcb.mjd / truth['P'].to(u.day).value
truth['M0'] = (phi0 % (2*np.pi)) * u.radian
truth['omega'] = 50.3 * u.degree

orbit = KeplerOrbit(**truth,
                    Omega=0*u.deg, i=90*u.deg)  # these angles don't matter for RV

n_data = 16
t = Time(55557. + rnd.uniform(0, 350, n_data),
format='mjd', scale='tcb')
t.sort()

rv = orbit.radial_velocity(t)
err = np.full(n_data, 500.) * u.m/u.s
rv = rv + rnd.normal(0, err.value) * err.unit
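Under the hood, computing radial velocities for a Keplerian orbit requires solving Kepler's equation, \(M = E - e\sin E\), for the eccentric anomaly \(E\). twobody handles this for us, but the core iteration is simple; here is a minimal Newton's-method sketch (the function name `solve_kepler` is our own, not part of twobody or The Joker):

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=100):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E with Newton's method. Illustrative helper only --
    not part of the twobody API.
    """
    E = np.array(M, dtype=float)  # E ~ M is a good start for small e
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E = E - dE
        if np.all(np.abs(dE) < tol):
            break
    return E

# For a circular orbit (e = 0), E equals the mean anomaly M exactly:
M = np.linspace(0, 2 * np.pi, 5)
E_circ = solve_kepler(M, e=0.0)
# For the small eccentricity used above (e = 0.0324), the solution
# should satisfy Kepler's equation to within the tolerance:
E_ecc = solve_kepler(M, e=0.0324)
```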

[4]:

t = Time([55688.0890416 , 55889.75000724, 55813.19787963, 55766.53046947,
          55611.60652415, 55611.59808212, 55577.32926426, 55860.16165102,
          55767.39025411, 55804.82540223, 55564.204573  , 55896.46844826,
          55848.35492428, 55631.31868874, 55620.63873852, 55621.19157845],
         format='mjd', scale='tcb')
rv = [  83.31689433,   39.6248937 ,  100.92882012,   51.26634215,
        -69.07714276,  -68.48276032,  100.59400063,  -31.12700028,
         42.74779325,   31.24570195,   69.42366188,  -38.67907851,
       -107.75242796,  108.11581448,   21.85112231,   28.52994995] * u.km/u.s
rv_err = [500., 500., 500., 500.,
          500., 500., 500., 500.,
          500., 500., 500., 500.,
          500., 500., 500., 500.] * u.m/u.s
data = RVData(t, rv, stddev=rv_err)
_ = data.plot()


We’ll start by running The Joker to see how many samples we get back:

[5]:

params = JokerParams(P_min=2*u.day, P_max=1024*u.day)
joker = TheJoker(params)
joker_samples = joker.rejection_sample(data, n_prior_samples=2**19)

INFO: 1 good sample after rejection sampling [thejoker.sampler.sampler]
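The Joker's rejection step is a specialized form of classic rejection sampling: draw candidate parameter vectors from a proposal (here, the prior) and keep each one with a probability proportional to how well it fits the data. As a generic illustration of that accept/reject logic (not The Joker's actual implementation, which is specialized to the Keplerian likelihood), consider:

```python
import numpy as np

rng = np.random.default_rng(42)

def rejection_sample(target_pdf, proposal_draw, proposal_pdf, M, n_draws):
    """Generic rejection sampler: draw from the proposal, then accept
    each draw with probability target_pdf(x) / (M * proposal_pdf(x)),
    where M bounds the ratio target/proposal everywhere.
    Illustrative only -- not thejoker's internal scheme.
    """
    x = proposal_draw(n_draws)
    u = rng.uniform(size=n_draws)
    accept = u < target_pdf(x) / (M * proposal_pdf(x))
    return x[accept]

# Sample the triangular density p(x) = 2x on [0, 1] using a uniform
# proposal (pdf = 1) with envelope constant M = 2:
samples = rejection_sample(lambda x: 2 * x,
                           lambda n: rng.uniform(0, 1, n),
                           lambda x: np.ones_like(x),
                           M=2.0, n_draws=200_000)
```

The accepted samples follow the target density; for p(x) = 2x the mean is 2/3 and the expected acceptance fraction is 1/2.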

[6]:

_ = plot_rv_curves(joker_samples, data=data)


Only one sample is returned! This either means we need to dramatically increase the number of prior samples we’re using, or it means that the data are so informative that the posterior pdf is effectively unimodal. Here, we’ll assume the latter is true, and we’ll use the one sample that is returned to initialize standard MCMC to continue generating posterior samples.
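One rough way to sanity-check the "unimodal" assumption, when The Joker returns more than one sample, is to look for large gaps between clusters of the surviving period samples. A hypothetical helper (not part of The Joker's API) might look like:

```python
import numpy as np

def count_period_modes(periods, gap=0.1):
    """Rough heuristic: count clusters of period samples by looking
    for gaps larger than `gap` in sorted log-period.
    Hypothetical helper -- not part of The Joker's API.
    """
    logP = np.sort(np.log(np.atleast_1d(periods)))
    if logP.size <= 1:
        return logP.size
    return 1 + int(np.sum(np.diff(logP) > gap))

# Samples clustered in one mode vs. two well-separated period modes:
one_mode = count_period_modes([61.0, 61.1, 61.2])
two_modes = count_period_modes([61.1, 61.2, 122.3, 122.4])
```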

We perform the MCMC sampling in a different parametrization of the Kepler problem from that used in The Joker. Motivated by this paper, we use our own parametrization defined as:

$$
\begin{aligned}
&\ln P \\
&\sqrt{K}\,\cos(M_0 - \omega), \quad \sqrt{K}\,\sin(M_0 - \omega) \\
&\sqrt{e}\,\cos\omega, \quad \sqrt{e}\,\sin\omega \\
&\ln s^2 \\
&v_0, \ldots, v_n
\end{aligned}
$$

However, we don’t want you to have to worry about these details: The Joker provides a class, TheJokerMCMCModel, that handles transforming to and from this parametrization, and deals with the MCMC sampling for you. This is all wrapped inside a convenience method on TheJoker object, so you can continue sampling with emcee by specifying the number of steps and number of walkers. This method uses samples generated from The Joker (here, joker_samples) to initialize the emcee sampling:
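To build intuition for this parametrization, here is a simplified sketch of the forward and inverse transforms for a single parameter vector. The real transforms live inside TheJokerMCMCModel; the function names below are our own, and a single systemic velocity v0 stands in for the general v_0, ..., v_n:

```python
import numpy as np

def to_mcmc_params(P, K, e, omega, M0, s, v0):
    """Map Keplerian parameters to the unconstrained MCMC
    parametrization shown above. Simplified stand-in for what
    TheJokerMCMCModel does internally.
    """
    return np.array([
        np.log(P),
        np.sqrt(K) * np.cos(M0 - omega), np.sqrt(K) * np.sin(M0 - omega),
        np.sqrt(e) * np.cos(omega), np.sqrt(e) * np.sin(omega),
        np.log(s**2),
        v0,
    ])

def from_mcmc_params(p):
    """Invert the transform back to (P, K, e, omega, M0, s, v0)."""
    lnP, xK, yK, xe, ye, lns2, v0 = p
    omega = np.arctan2(ye, xe)
    M0 = (np.arctan2(yK, xK) + omega) % (2 * np.pi)
    return (np.exp(lnP), xK**2 + yK**2, xe**2 + ye**2,
            omega, M0, np.sqrt(np.exp(lns2)), v0)

# Round-trip check on an arbitrary parameter vector:
p = to_mcmc_params(61.1, 5.0, 0.03, 0.8, 1.2, 0.1, 0.5)
P, K, e, omega, M0, s, v0 = from_mcmc_params(p)
```

Squaring the sqrt-amplitude pairs recovers K and e, and the two angles are recovered with arctan2, which is why this parametrization avoids hard boundaries at e = 0 and angle wrap-around.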

[7]:

# Note: this takes a few minutes to execute, so I've pickled the results
# and we load them for this demo
# model, samples, sampler = joker.mcmc_sample(data, joker_samples,
#                                             n_steps=256, n_burn=256,
#                                             n_walkers=256,
#                                             return_sampler=True)

# with open('mcmc_results.pkl', 'wb') as f:
#     pickle.dump((model, samples, sampler), f)

with open('mcmc_results.pkl', 'rb') as f:
    model, samples, sampler = pickle.load(f)

Let’s start by plotting the MCMC chain traces for each parameter as a function of step number:

[8]:

fig, axes = plt.subplots(sampler.chain.shape[-1], 1, figsize=(8, 16),
                         sharex=True)

for k in range(sampler.chain.shape[-1]):
    ax = axes[k]
    for walker in sampler.chain[..., k]:
        ax.plot(walker, marker='', drawstyle='steps-mid',
                color='k', alpha=0.2)


Visually, the chains appear to be converged (note that the parameters plotted are the transformed MCMC parameters, not the Keplerian parameters). We can also visualize the samples by making a “corner” plot with the corner package (you can install this with pip install corner):
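Beyond eyeballing the traces, a standard quantitative check is the integrated autocorrelation time: roughly, how many steps a chain takes to forget its past. emcee exposes this via sampler.get_autocorr_time(); the sketch below shows the underlying idea for a single 1-D chain with a fixed summation window (a simplification of emcee's adaptive windowing):

```python
import numpy as np

def integrated_autocorr_time(x, window=50):
    """Estimate the integrated autocorrelation time of a 1-D chain:
    tau = 1 + 2 * sum_k rho_k, with the sum truncated at `window`.
    Simplified sketch -- emcee's get_autocorr_time() chooses the
    window adaptively and averages over walkers.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = x @ x
    rho = [x[:len(x) - k] @ x[k:] / var for k in range(1, window + 1)]
    return 1.0 + 2.0 * np.sum(rho)

rng = np.random.default_rng(0)

# An iid (white noise) "chain" has tau ~ 1:
tau_white = integrated_autocorr_time(rng.normal(size=50_000))

# A strongly correlated AR(1) chain with phi = 0.9 has
# tau = (1 + phi) / (1 - phi) = 19:
ar = np.empty(50_000)
ar[0] = 0.0
noise = rng.normal(size=50_000)
for i in range(1, ar.size):
    ar[i] = 0.9 * ar[i - 1] + noise[i]
tau_ar = integrated_autocorr_time(ar)
```

A useful rule of thumb is to run for many autocorrelation times; the effective number of independent samples is roughly the chain length divided by tau.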

[9]:

import corner

[10]:

_ = corner.corner(sampler.flatchain)


The samples returned from the MCMC sampling have been transformed back to the Keplerian parameters, i.e. the parameters used in The Joker:

[11]:

samples.keys()

[11]:

odict_keys(['P', 'M0', 'e', 'omega', 'K', 'jitter', 'v0'])


How do the period samples compare to the true period?

[12]:

plt.hist(samples['P'].value)
plt.axvline(true_P.value, zorder=100, color='tab:orange')
plt.xlabel('period, $P$ [day]')

[12]:

Text(0.5, 0, 'period, $P$ [day]')
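Beyond the histogram, a common numeric summary is the median and the 16th/84th percentiles (a 68% credible interval) of the period samples. Since the notebook's MCMC samples aren't reproduced here, the sketch below uses synthetic numbers as a stand-in for samples['P'].value:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for samples['P'].value: synthetic draws roughly
# centered on the true period used in this example (61.1166 day):
P_samples = rng.normal(61.1166, 0.05, size=4096)

med, lo, hi = np.percentile(P_samples, [50, 16, 84])
print(f"P = {med:.4f} -{med - lo:.4f} +{hi - med:.4f} day")
```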