API Reference

Plots

plot_autocorr(data[, var_names, …])

Bar plot of the autocorrelation function for a sequence of data.

plot_compare(comp_df[, insample_dev, …])

Summary plot for model comparison.

plot_density(data[, group, data_labels, …])

Generate KDE plots for continuous variables and histograms for discrete ones.

plot_dist(values[, values2, color, kind, …])

Plot distribution as histogram or kernel density estimates.

plot_elpd(compare_dict[, color, xlabels, …])

Plot pointwise elpd differences between two or more models.

plot_energy(data[, kind, bfmi, figsize, …])

Plot energy transition distribution and marginal energy distribution in HMC algorithms.

plot_ess(idata[, var_names, filter_vars, …])

Plot quantile, local, or evolution estimates of the effective sample size (ESS).

plot_forest(data[, kind, model_names, …])

Forest plot to compare HDI intervals from a number of distributions.

plot_hdi(x[, y, hdi_prob, hdi_data, color, …])

Plot HDI intervals for regression data.

plot_joint(data[, group, var_names, …])

Plot a scatter or hexbin of two variables with their respective marginal distributions.

plot_kde(values[, values2, cumulative, rug, …])

1D or 2D KDE plot taking into account boundary conditions.

plot_khat(khats[, color, xlabels, …])

Plot Pareto tail indices.

plot_loo_pit([idata, y, y_hat, log_weights, …])

Plot Leave-One-Out (LOO) probability integral transformation (PIT) predictive checks.

plot_mcse(idata[, var_names, filter_vars, …])

Plot quantile or local Monte Carlo Standard Error.

plot_pair(data[, group, var_names, …])

Plot a scatter, kde and/or hexbin matrix with (optional) marginals on the diagonal.

plot_parallel(data[, var_names, …])

Parallel coordinates plot showing posterior points with and without divergences.

plot_posterior(data[, var_names, …])

Plot posterior densities in the style of John K. Kruschke's book.

plot_ppc(data[, kind, alpha, mean, figsize, …])

Plot for posterior/prior predictive checks.

plot_rank(data[, var_names, filter_vars, …])

Plot rank order statistics of chains.

plot_trace(data[, var_names, filter_vars, …])

Plot distribution (histogram or kernel density estimates) and sampled values or rank plot.

plot_violin(data[, var_names, filter_vars, …])

Plot posterior of traces as violin plot.
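
Most plotting functions above take an InferenceData object (or anything convertible to one) as their first argument. A minimal usage sketch, assuming the matplotlib backend and the bundled centered_eight example dataset:

    import arviz as az

    idata = az.load_arviz_data("centered_eight")                # bundled example dataset

    az.plot_trace(idata, var_names=["mu", "tau"])               # sampled values and marginal densities
    az.plot_posterior(idata, var_names=["mu"], hdi_prob=0.9)    # posterior density with a 90% HDI
    az.plot_forest(idata, var_names=["theta"], combined=True)   # HDI intervals, chains combined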

Stats

apply_test_function(idata, func[, group, …])

Apply a Bayesian test function to an InferenceData object.

compare(dataset_dict[, ic, method, …])

Compare models based on PSIS-LOO (loo) or WAIC (waic) cross-validation.

hdi(ary[, hdi_prob, circular, multimodal, …])

Calculate highest density interval (HDI) of array for given probability.

hpd(ary[, hdi_prob, circular, multimodal, …])

Pending deprecation; use hdi instead.

loo(data[, pointwise, var_name, reff, scale])

Compute Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO-CV).

loo_pit([idata, y, y_hat, log_weights])

Compute leave-one-out (PSIS-LOO) probability integral transform (PIT) values.

psislw(log_weights[, reff])

Pareto smoothed importance sampling (PSIS).

r2_score(y_true, y_pred)

R² for Bayesian regression models.

summary(data[, var_names, filter_vars, fmt, …])

Create a data frame with summary statistics.

waic(data[, pointwise, var_name, scale])

Compute the widely applicable information criterion.
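
A short sketch of the main statistics entry points above, assuming the bundled centered_eight and non_centered_eight example datasets (both include the log likelihood values needed by loo, waic and compare):

    import arviz as az

    centered = az.load_arviz_data("centered_eight")
    non_centered = az.load_arviz_data("non_centered_eight")

    az.summary(centered, var_names=["mu", "tau"])               # means, HDIs, ESS and R-hat
    az.hdi(centered, hdi_prob=0.9)                              # 90% highest density intervals
    az.loo(centered, pointwise=True)                            # PSIS-LOO-CV with pointwise values
    az.compare({"centered": centered, "non_centered": non_centered}, ic="loo")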

Diagnostics

bfmi(data)

Calculate the estimated Bayesian fraction of missing information (BFMI).

geweke

Compute z-scores for convergence diagnostics.

ess(data, *[, var_names, method, relative, prob])

Calculate estimate of the effective sample size (ess).

rhat(data, *[, var_names, method])

Compute estimate of rank-normalized split R-hat for a set of traces.

mcse(data, *[, var_names, method, prob])

Calculate the Markov Chain Standard Error (MCSE) statistic.
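
A brief sketch of the diagnostics above; each function accepts an InferenceData object (or raw sample arrays) and returns per-variable results:

    import arviz as az

    idata = az.load_arviz_data("centered_eight")

    az.ess(idata, var_names=["mu", "tau"], method="bulk")       # bulk effective sample size
    az.rhat(idata, var_names=["mu", "tau"])                     # rank-normalized split R-hat
    az.mcse(idata, method="mean")                               # Monte Carlo SE of the mean
    az.bfmi(idata)                                              # needs energy in sample_stats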

Stats utils

autocov(ary[, axis])

Compute autocovariance estimates for every lag for the input array.

autocorr(ary[, axis])

Compute autocorrelation using FFT for every lag for the input array.

make_ufunc(func[, n_dims, n_output, …])

Make ufunc from a function taking 1D array input.

wrap_xarray_ufunc(ufunc, *datasets[, …])

Wrap make_ufunc with xarray.apply_ufunc.
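
These helpers operate on raw arrays rather than InferenceData. A small sketch, assuming the utilities are importable from the top-level arviz namespace as listed here:

    import numpy as np
    import arviz as az

    rng = np.random.default_rng(0)
    samples = rng.normal(size=(4, 1000))        # 4 chains, 1000 draws

    acov = az.autocov(samples, axis=-1)         # autocovariance at every lag, per chain
    acorr = az.autocorr(samples, axis=-1)       # FFT-based autocorrelation at every lag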

Data

InferenceData(**kwargs)

Container for inference data storage using xarray.

convert_to_inference_data(obj, *[, group, …])

Convert a supported object to an InferenceData object.

load_arviz_data([dataset, data_home])

Load a local or remote pre-made dataset.

to_netcdf(data, filename, *[, group, …])

Save dataset as a netcdf file.

from_netcdf(filename)

Load netcdf file back into an arviz.InferenceData.

from_cmdstan([posterior, …])

Convert CmdStan data into an InferenceData object.

from_cmdstanpy([posterior, …])

Convert CmdStanPy data into an InferenceData object.

from_dict([posterior, posterior_predictive, …])

Convert dictionary data into an InferenceData object.

from_emcee([sampler, var_names, slices, …])

Convert emcee data into an InferenceData object.

from_pymc3([trace, prior, …])

Convert pymc3 data into an InferenceData object.

from_pymc3_predictions(predictions[, …])

Translate out-of-sample predictions into InferenceData.

from_pyro([posterior, prior, …])

Convert Pyro data into an InferenceData object.

from_numpyro([posterior, prior, …])

Convert NumPyro data into an InferenceData object.

from_pystan([posterior, …])

Convert PyStan data into an InferenceData object.

from_tfp([posterior, var_names, model_fn, …])

Convert TensorFlow Probability (tfp) data into an InferenceData object.

from_pyjags([posterior, prior, …])

Convert PyJAGS posterior samples to an ArviZ inference data object.

concat(*args[, dim, copy, inplace, reset_dim])

Concatenate InferenceData objects.
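
A minimal round trip, assuming arrays shaped (chain, draw, *shape): build an InferenceData with from_dict, write it to netCDF, read it back, and concatenate along the chain dimension (the file name is illustrative):

    import numpy as np
    import arviz as az

    rng = np.random.default_rng(0)
    posterior = {
        "mu": rng.normal(size=(4, 500)),        # 4 chains, 500 draws
        "theta": rng.normal(size=(4, 500, 8)),  # one extra dimension of size 8
    }

    idata = az.from_dict(posterior=posterior)
    az.to_netcdf(idata, "example_idata.nc")
    idata_loaded = az.from_netcdf("example_idata.nc")
    combined = az.concat(idata, idata_loaded, dim="chain", copy=True)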

Utils

Numba()

A class to toggle numba states.

interactive_backend([backend])

Context manager to temporarily change the plotting backend in an IPython session.

rcParams

Dictionary-like container of ArviZ default settings (similar to matplotlib's rcParams).

rc_context([rc, fname])

Return a context manager for managing rc settings.
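
A sketch of temporarily overriding ArviZ defaults; the rcParams key shown is an assumption for illustration, so check az.rcParams.keys() for the exact names in your version:

    import arviz as az

    print(az.rcParams["plot.max_subplots"])      # assumed key name; inspect az.rcParams for others

    with az.rc_context(rc={"plot.max_subplots": 10}):
        # overrides apply only inside this block
        az.plot_trace(az.load_arviz_data("centered_eight"))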

Wrappers

Experimental feature

SamplingWrapper(model[, idata_orig, …])

Class wrapping sampling routines so they can be used via ArviZ.

PyStanSamplingWrapper(model[, idata_orig, …])

PyStan sampling wrapper base class.
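
A schematic sketch of subclassing SamplingWrapper; the four method names follow the documented interface, but the bodies here are placeholders rather than a working refit:

    import arviz as az

    class MyModelWrapper(az.SamplingWrapper):
        def sel_observations(self, idx):
            # split observed data into kept and excluded subsets for index idx
            raise NotImplementedError

        def sample(self, modified_observed_data):
            # refit the model on the modified data and return the fit object
            raise NotImplementedError

        def get_inference_data(self, fitted_model):
            # convert the refitted model into an InferenceData object
            raise NotImplementedError

        def log_likelihood__i(self, excluded_obs, idata__i):
            # pointwise log likelihood of the excluded observation(s)
            raise NotImplementedError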

Stats (requiring refitting)

Experimental feature

reloo(wrapper[, loo_orig, k_thresh, scale, …])

Recalculate exact leave-one-out cross-validation, refitting the model where the PSIS approximation fails.
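
Given an instance of a SamplingWrapper subclass like the sketch above, reloo refits the model only for the observations whose Pareto k diagnostic exceeds k_thresh. A hedged usage sketch; the reloo call is commented out because it needs a concrete, fitted wrapper:

    import arviz as az

    idata = az.load_arviz_data("centered_eight")
    loo_orig = az.loo(idata, pointwise=True)     # pointwise Pareto k values flag problem observations

    # wrapper = MyModelWrapper(model, idata_orig=idata)             # concrete model required
    # refitted = az.reloo(wrapper, loo_orig=loo_orig, k_thresh=0.7)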