```
[1]:
```

```
%run notebook_setup
```

*If you have not already read it, you may want to start with the first tutorial: [Getting started with The Joker](1-Getting-started.ipynb).*

# Continue generating samples with standard MCMC

When *The Joker* is run with many prior samples and returns only one sample, or all returned samples lie within a single mode of the posterior, the posterior *pdf* is likely unimodal. In these cases, standard MCMC methods can generate posterior samples much more efficiently than *The Joker* itself. In this example, we will use `pymc3` to "continue" sampling for data that are very constraining.
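To see why rejection sampling struggles in this regime, here is a toy numpy sketch (an illustration of the general idea, not *The Joker*'s actual sampler): when the data pin down the period tightly, only a tiny fraction of prior draws survive rejection, so generating many posterior samples this way is wasteful.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy rejection sampler: keep a prior draw with probability proportional
# to its likelihood. The numbers below are hypothetical, chosen only to
# mimic a constraining data set.
n_prior = 100_000
prior_draws = rng.uniform(2.0, 1000.0, size=n_prior)  # e.g. periods in days

# Pretend the data pin the period near 51.1 d with a 0.2 d width
log_like = -0.5 * ((prior_draws - 51.1) / 0.2) ** 2
accept = rng.uniform(size=n_prior) < np.exp(log_like - log_like.max())

print(f"accepted {accept.sum()} of {n_prior} prior draws")
```

Almost all of the 100,000 draws are rejected, which is why switching to MCMC pays off once we know the posterior is unimodal.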

First, some imports we will need later:

```
[2]:
```

```
import astropy.coordinates as coord
import astropy.table as at
from astropy.time import Time
import astropy.units as u
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import corner
import pymc3 as pm
import exoplanet as xo
import thejoker as tj
```

```
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
```

```
[3]:
```

```
# set up a random state to ensure reproducibility
rnd = np.random.RandomState(seed=42)
```

Here we will again load some pre-generated data meant to represent well-sampled, precise radial velocity observations of a single luminous source with a single companion (we again downsample the data set here just for demonstration):

```
[4]:
```

```
data_tbl = at.QTable.read('data.ecsv')
sub_tbl = data_tbl[rnd.choice(len(data_tbl), size=18, replace=False)] # downsample data
data = tj.RVData.guess_from_table(sub_tbl, t0=data_tbl.meta['t0'])
```

```
[5]:
```

```
_ = data.plot()
```


We will use the default prior, but feel free to play around with these values:

```
[6]:
```

```
prior = tj.JokerPrior.default(
    P_min=2*u.day, P_max=1e3*u.day,
    sigma_K0=30*u.km/u.s,
    sigma_v=100*u.km/u.s)
```
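For intuition about the period part of this prior: to our understanding the default period prior is log-uniform between `P_min` and `P_max`, i.e. p(P) ∝ 1/P (an assumption about the exact form; check the *thejoker* documentation). A minimal numpy sketch of drawing from such a prior:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hedged sketch of a log-uniform period prior, p(P) proportional to 1/P:
# sample uniformly in log P, then exponentiate.
P_min, P_max = 2.0, 1e3  # days, matching the values above
log_P = rng.uniform(np.log(P_min), np.log(P_max), size=5)
periods = np.exp(log_P)
print(periods)
```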

The data above look fairly constraining: it would be hard to draw many distinct orbital solutions through the plotted RV data. In cases like this, *The Joker* will often return only one or a few samples, even from a huge number of prior samples. Since we are only going to use the samples from *The Joker* to initialize standard MCMC, a moderate number of prior samples is enough:

```
[7]:
```

```
prior_samples = prior.sample(size=250_000,
                             random_state=rnd)
```

```
[8]:
```

```
joker = tj.TheJoker(prior, random_state=rnd)
joker_samples = joker.rejection_sample(data, prior_samples,
                                       max_posterior_samples=256)
joker_samples
```

```
[8]:
```

```
<JokerSamples [P, e, omega, M0, s, K, v0] (1 samples)>
```

```
[9]:
```

```
joker_samples.tbl
```

```
[9]:
```

*QTable length=1*

| P | e | omega | M0 | s | K | v0 |
|---|---|---|---|---|---|---|
| d | | rad | rad | km / s | km / s | km / s |
| float64 | float64 | float64 | float64 | float64 | float64 | float64 |
| 51.1085855887787 | 0.07370493104368926 | -0.5443556813315529 | 0.6392295780006165 | 0.0 | -12.36123645081821 | -7.722156875169216 |

```
[10]:
```

```
_ = tj.plot_rv_curves(joker_samples, data=data)
```

The sample returned by *The Joker* does look like a reasonable fit to the RV data, but to fully explore the posterior *pdf* we will use standard MCMC through `pymc3`. Here we will use the NUTS sampler, but you could also experiment with other backends (e.g., Metropolis-Hastings, or even `emcee` by following this blog post):

```
[11]:
```

```
with prior.model:
    mcmc_init = joker.setup_mcmc(data, joker_samples)
    trace = pm.sample(tune=1000, draws=1000,
                      start=mcmc_init,
                      step=xo.get_dense_nuts_step(target_accept=0.95))
```

```
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [v0, K, P, M0, omega, e]
Sampling 4 chains, 0 divergences: 100%|██████████| 8000/8000 [00:19<00:00, 418.13draws/s]
```

If you get warnings from running the sampler above, they usually indicate that the sampler should be run for many more steps, both for tuning and for the main run, but let's ignore that for now. With the MCMC traces in hand, we can summarize the properties of the chains using `pymc3.summary`:
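The `r_hat` column in the summary below is a convergence diagnostic that compares between-chain and within-chain variance; values near 1.0 indicate well-mixed chains. A simplified numpy sketch of the classic Gelman-Rubin idea on toy chains (pymc3/arviz actually use a more robust rank-normalized version):

```python
import numpy as np

rng = np.random.default_rng(0)

# Four hypothetical, well-mixed chains of 1000 draws for one parameter
chains = rng.normal(0.0, 1.0, size=(4, 1000))

n = chains.shape[1]
chain_means = chains.mean(axis=1)
W = chains.var(axis=1, ddof=1).mean()  # mean within-chain variance
B = n * chain_means.var(ddof=1)        # between-chain variance
var_hat = (n - 1) / n * W + B / n      # pooled variance estimate
r_hat = np.sqrt(var_hat / W)
print(round(r_hat, 3))  # close to 1.0 for well-mixed chains
```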

```
[12]:
```

```
pm.summary(trace, var_names=prior.par_names)
```

```
[12]:
```

| | mean | sd | hpd_3% | hpd_97% | mcse_mean | mcse_sd | ess_mean | ess_sd | ess_bulk | ess_tail | r_hat |
|---|---|---|---|---|---|---|---|---|---|---|---|
| P | 51.557 | 0.211 | 51.161 | 51.940 | 0.004 | 0.003 | 2992.0 | 2992.0 | 2990.0 | 2680.0 | 1.0 |
| e | 0.096 | 0.013 | 0.070 | 0.121 | 0.000 | 0.000 | 3259.0 | 3259.0 | 3321.0 | 2135.0 | 1.0 |
| omega | 0.365 | 0.142 | 0.104 | 0.632 | 0.003 | 0.002 | 1918.0 | 1918.0 | 1917.0 | 2428.0 | 1.0 |
| M0 | 1.456 | 0.123 | 1.233 | 1.693 | 0.003 | 0.002 | 1911.0 | 1911.0 | 1910.0 | 2502.0 | 1.0 |
| s | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 4000.0 | 4000.0 | 4000.0 | 4000.0 | NaN |
| K | -12.164 | 0.135 | -12.427 | -11.919 | 0.002 | 0.002 | 3532.0 | 3517.0 | 3552.0 | 2882.0 | 1.0 |
| v0 | -7.714 | 0.113 | -7.923 | -7.495 | 0.002 | 0.001 | 3655.0 | 3655.0 | 3655.0 | 2649.0 | 1.0 |

To convert the trace into a `JokerSamples` instance, we can use the `TheJoker.trace_to_samples()` method. Note here that the sign of `K` is arbitrary, so to compare to the true value, we also call `wrap_K()` to store only the absolute value of `K` (which also increases `omega` by π, to stay consistent):
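The sign-flip convention described above can be sketched in plain numpy (an illustration of the convention, not *thejoker*'s implementation, with hypothetical values):

```python
import numpy as np

# Where K < 0, take |K| and add pi to omega so the described orbit is
# physically unchanged.
K = np.array([-12.36, 8.5])     # km/s (hypothetical)
omega = np.array([-0.54, 1.2])  # rad (hypothetical)

neg = K < 0
K_wrapped = np.abs(K)
omega_wrapped = np.where(neg, omega + np.pi, omega)
print(K_wrapped, omega_wrapped)
```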

```
[13]:
```

```
mcmc_samples = joker.trace_to_samples(trace, data)
mcmc_samples.wrap_K()
mcmc_samples
```

```
[13]:
```

```
<JokerSamples [P, e, omega, M0, s, K, v0] (4000 samples)>
```

We can now compare the samples we got from MCMC to the true orbital parameters used to generate this data:

```
[14]:
```

```
import pickle

with open('true-orbit.pkl', 'rb') as f:
    truth = pickle.load(f)

# make sure the angles are wrapped the same way
if np.median(mcmc_samples['omega']) < 0:
    truth['omega'] = coord.Angle(truth['omega']).wrap_at(np.pi*u.radian)

if np.median(mcmc_samples['M0']) < 0:
    truth['M0'] = coord.Angle(truth['M0']).wrap_at(np.pi*u.radian)
```
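For reference, `Angle.wrap_at(np.pi * u.radian)` maps angles into the half-open interval [-π, π). A minimal numpy equivalent (a sketch of the same wrapping, without astropy units):

```python
import numpy as np

def wrap_to_pi(angle):
    """Map an angle in radians into the half-open interval [-pi, pi)."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

print(wrap_to_pi(3.5))   # a bit over pi, wraps to 3.5 - 2*pi
print(wrap_to_pi(-0.5))  # already in range, unchanged
```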

```
[15]:
```

```
df = mcmc_samples.tbl.to_pandas()

truths = []
colnames = []
for name in df.columns:
    if name in truth:
        colnames.append(name)
        truths.append(truth[name].value)

_ = corner.corner(df[colnames], truths=truths)
```

Overall, it looks like we do recover the input parameters!