[1]:

%run notebook_setup


# Inferring calibration offsets between instruments

In addition to the default linear parameters (see Tutorial 1, or the documentation for JokerSamples.default()), The Joker allows adding linear parameters to account for possible calibration offsets between instruments. For example, there may be an absolute velocity offset between two spectrographs. Below, we demonstrate how to simultaneously infer and marginalize over a constant velocity offset between two simulated surveys of the same "star".

First, some imports we will need later:

[2]:

import astropy.table as at
import astropy.units as u
from astropy.visualization.units import quantity_support
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import corner
import pymc3 as pm
import exoplanet as xo
import exoplanet.units as xu

import thejoker as tj


[3]:

# set up a random number generator to ensure reproducibility
rnd = np.random.default_rng(seed=42)


The data for our two surveys are stored in two separate CSV files included with the documentation. We will load separate RVData instances for the two data sets and append these objects to a list of datasets:

[4]:

data = []
for filename in ['data-survey1.ecsv', 'data-survey2.ecsv']:
    tbl = at.QTable.read(filename)
    _data = tj.RVData.guess_from_table(tbl, t0=tbl.meta['t0'])
    data.append(_data)


In the plot below, the two data sets are shown in different colors:

[5]:

for d, color in zip(data, ['tab:blue', 'tab:red']):
    _ = d.plot(color=color)



To tell The Joker to handle additional linear parameters that account for offsets in absolute velocity, we must define a new parameter for the offset between survey 1 and survey 2 and specify a prior. Here we will assume a Gaussian prior on the offset, centered on 0 but with a 10 km/s standard deviation. We then pass this in to JokerPrior.default() (all other parameters here use the default prior) through the v0_offsets argument:

[6]:

with pm.Model() as model:
    dv0_1 = xu.with_unit(pm.Normal('dv0_1', 0, 10),
                         u.km/u.s)

    prior = tj.JokerPrior.default(
        P_min=2*u.day, P_max=256*u.day,
        sigma_K0=30*u.km/u.s,
        sigma_v=100*u.km/u.s,
        v0_offsets=[dv0_1])
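As a quick sanity check on the scale of this prior (a sketch using plain NumPy, not part of the tutorial's model), drawing from an equivalent Normal(0, 10) distribution confirms that the prior allows offsets of order ±10 km/s:

```python
import numpy as np

# Draw from a Normal(0, 10) distribution equivalent to the prior on dv0_1
# (in km/s) to confirm the scale of offsets it permits.
rng = np.random.default_rng(42)
draws = rng.normal(0.0, 10.0, size=100_000)
print(draws.mean(), draws.std())  # mean near 0, std near 10
```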


The rest should look familiar: The code below is identical to previous tutorials, in which we generate prior samples and then rejection sample with The Joker:

[7]:

prior_samples = prior.sample(size=1_000_000,
                             random_state=rnd)

[8]:

joker = tj.TheJoker(prior, random_state=rnd)
joker_samples = joker.rejection_sample(data, prior_samples,
                                       max_posterior_samples=128)
joker_samples

[8]:

<JokerSamples [P, e, omega, M0, s, K, v0, dv0_1] (1 samples)>


Note that the new parameter, dv0_1, now appears in the returned samples above.
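To build intuition for how dv0_1 enters the model, here is a toy NumPy illustration (not The Joker's internal implementation; all numbers are made up). The offset is a linear term: an extra design-matrix column that is 1 for survey-2 points and 0 for survey-1 points, so survey-2 velocities are shifted by dv0_1:

```python
import numpy as np

# Toy sketch of the linear offset term (all values invented for illustration):
rv_orbit = np.array([1.0, -2.0, 3.0, -1.0])  # Keplerian part of the RV model
v0 = 40.0                                    # systemic velocity, shared by both surveys
dv0_1 = 4.8                                  # constant offset of survey 2 relative to survey 1
survey_id = np.array([1, 1, 2, 2])           # which survey each data point came from

# The offset contributes through a 0/1 indicator column, so it is linear:
offset_column = (survey_id == 2).astype(float)
rv_model = rv_orbit + v0 + dv0_1 * offset_column
print(rv_model)  # survey-2 points are shifted up by dv0_1
```

Because the offset enters linearly, The Joker can marginalize over it analytically, just like v0.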

If we pass these samples in to the plot_rv_curves function, the data from other surveys is, by default, shifted by the mean value of the offset before plotting:

[9]:

_ = tj.plot_rv_curves(joker_samples, data=data)
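The shift applied by plot_rv_curves can be mimicked by hand (a sketch with invented numbers, not the library's code): subtract the posterior-mean offset from the survey-2 velocities so they line up with survey 1:

```python
import numpy as np

# Hypothetical posterior samples of the offset and survey-2 RVs (km/s):
dv0_1_samples = np.array([4.6, 4.8, 5.0])
rv_survey2 = np.array([45.1, 44.7])

# Align survey 2 with survey 1 by removing the mean inferred offset:
rv_aligned = rv_surv2_aligned = rv_survey2 - dv0_1_samples.mean()
print(rv_aligned)
```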


However, the above behavior can be disabled by setting apply_mean_v0_offset=False. Note that with this set, the inferred orbit will not generally pass through data that suffer from a measurable offset:

[10]:

_ = tj.plot_rv_curves(joker_samples, data=data,
                      apply_mean_v0_offset=False)


As introduced in the previous tutorial, we can also continue generating samples by initializing and running standard MCMC:

[11]:

with prior.model:
    mcmc_init = joker.setup_mcmc(data, joker_samples)

    trace = pm.sample(
        tune=1000, draws=1000,
        start=mcmc_init,
        step=xo.get_dense_nuts_step(target_accept=0.95))

Multiprocess sampling (4 chains in 4 jobs)
NUTS: [v0, K, P, M0, omega, e, dv0_1]

Sampling 4 chains for 1_000 tune and 1_000 draw iterations (4_000 + 4_000 draws total) took 23 seconds.
There was 1 divergence after tuning. Increase target_accept or reparameterize.
The number of effective samples is smaller than 25% for some parameters.

[12]:

pm.summary(trace, var_names=prior.par_names)

[12]:

         mean     sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_mean  ess_sd  ess_bulk  ess_tail  r_hat
P      67.160  0.208  66.796   67.561      0.004    0.003    2304.0  2302.0    2337.0    2351.0    1.0
e       0.139  0.018   0.107    0.173      0.000    0.000    1675.0  1527.0    1923.0    1480.0    1.0
omega   1.061  0.270   0.559    1.556      0.009    0.006     964.0   918.0     968.0    1228.0    1.0
M0      1.922  0.307   1.374    2.498      0.010    0.007     950.0   919.0     954.0    1211.0    1.0
s       0.000  0.000   0.000    0.000      0.000    0.000    4000.0  4000.0    4000.0    4000.0    NaN
K      -7.956  0.106  -8.150   -7.760      0.003    0.002    1306.0  1296.0    1451.0    1442.0    1.0
v0     40.214  0.288  39.613   40.708      0.010    0.007     895.0   895.0     929.0    1062.0    1.0
dv0_1   4.807  0.365   4.147    5.535      0.011    0.008    1124.0  1098.0    1144.0    1317.0    1.0

Here the true offset is 4.8 km/s, so it looks like we recover this value!
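As a quick consistency check (a sketch using the numbers from the summary table above), we can ask whether the true offset lies within about 2 standard deviations of the posterior mean:

```python
# Values copied from the pm.summary table above:
true_dv0_1 = 4.8   # true simulated offset (km/s)
post_mean = 4.807  # posterior mean of dv0_1
post_sd = 0.365    # posterior standard deviation of dv0_1

# Distance from the truth in units of the posterior standard deviation:
n_sigma = abs(post_mean - true_dv0_1) / post_sd
print(n_sigma < 2)  # True: the truth is comfortably within the posterior
```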

A full corner plot of the MCMC samples:

[13]:

mcmc_samples = joker.trace_to_samples(trace, data)
mcmc_samples.wrap_K()

[13]:

<JokerSamples [P, e, omega, M0, s, K, v0, dv0_1] (4000 samples)>

[14]:

df = mcmc_samples.tbl.to_pandas()
colnames = mcmc_samples.par_names
colnames.remove('s')  # drop the jitter parameter, which is constant here
_ = corner.corner(df[colnames])