Logdet Method Profiling Across Matrix Sizes

This notebook profiles the runtime of different log-determinant strategies used in bayespecon.

The profiling matrices come from a regular polygon grid generated by the bayespecon.dgp module. Each size n is a grid side length, yielding an n × n layout of n² spatial units with rook contiguity; the spatial graph is built from the polygon GeoDataFrame returned by the DGP function.

Methods compared:

  • exact: symbolic determinant route

  • eigenvalue: spectral precompute route

  • dense_grid: spline interpolation over a precomputed grid (was grid)

  • sparse_grid: exact sparse-LU grid (MATLAB-style lndetfull, was full)

  • spline: sparse-LU + spline interpolation (MATLAB-style lndetint, was int)

  • mc: Monte Carlo trace approximation (MATLAB-style lndetmc); a sketch of the idea follows this list

  • mc_poly: Monte Carlo trace approximation with a polynomial fit in rho, which also allows negative rho ranges

  • ilu: ILU-based approximation (MATLAB-style lndetichol analog, was ichol)

  • chebyshev: Chebyshev polynomial approximation via Clenshaw’s algorithm (Pace & LeSage 2004)

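Since mc is the only stochastic route, a minimal sketch of the idea is useful, in the spirit of the MATLAB lndetmc routine (illustrative only, not bayespecon's implementation). It uses the series ln|I − rho·W| = −sum_{k≥1} rho^k · tr(W^k) / k, valid for |rho| < 1 with row-standardized W, together with Hutchinson's estimator tr(W^k) ≈ E[v' W^k v] for random sign vectors v. The trace estimates are a one-off setup cost; each evaluation is then a cheap polynomial in rho.

import numpy as np


def mc_logdet_sketch(W, n_draws=32, order=30, seed=0):
    """Barry/Pace-style MC logdet sketch: estimate tr(W^k) once, then each
    call is a short polynomial in rho (not bayespecon's implementation)."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    tr_est = np.zeros(order)
    for _ in range(n_draws):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        u = v.copy()
        for k in range(order):
            u = W @ u                        # u = W^{k+1} v
            tr_est[k] += v @ u
    tr_est /= n_draws                        # tr_est[k-1] ~= tr(W^k)

    def logdet(rho):
        # ln|I - rho*W| = -sum_{k>=1} rho^k tr(W^k) / k
        k = np.arange(1, order + 1)
        return float(-np.sum(rho**k * tr_est / k))

    return logdet
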
For each matrix size, we report:

  • setup time: time to build and compile the callable logdet function

  • evaluation time: median per-call cost of evaluating it across many rho values (this setup/evaluation split is illustrated in the sketch below)

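The setup/evaluation split is easiest to see on the eigenvalue route. Because the row-standardized rook matrix W = D⁻¹A is similar to a symmetric matrix, its eigenvalues lambda_i are real, and ln|I − rho·W| = sum_i ln(1 − rho·lambda_i); the eigendecomposition is a one-off O(n³) setup cost and each evaluation is an O(n) sum. A minimal sketch (illustrative only, not bayespecon's implementation):

import numpy as np


def eigen_logdet_sketch(W):
    """Illustrative setup/eval split for the spectral logdet route."""
    lam = np.linalg.eigvals(W).real  # setup: one-off eigendecomposition

    def logdet(rho):
        # evaluation: O(n) per rho value
        return float(np.sum(np.log(1.0 - rho * lam)))

    return logdet
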
import time
from dataclasses import dataclass

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pytensor
import pytensor.tensor as pt
from libpysal import graph

from bayespecon import dgp
from bayespecon.logdet import make_logdet_fn
def make_grid_w(n_side: int) -> np.ndarray:
    """Create a row-standardized rook-contiguity matrix from an n_side x n_side polygon grid.

    Uses ``dgp.simulate_sar`` with ``create_gdf=True`` to generate the polygon
    geometry, then builds a contiguity graph from the returned GeoDataFrame.
    """
    gdf = dgp.simulate_sar(n=n_side, create_gdf=True)
    W = (
        graph.Graph.build_contiguity(gdf, rook=True)
        .transform("r")
        .sparse.toarray()
        .astype(np.float64)
    )
    return W


def compile_logdet_callable(W: np.ndarray, method: str, rho_min: float, rho_max: float):
    """Return a compiled callable f(rho) and its setup time in seconds."""
    t0 = time.perf_counter()
    rho = pt.scalar("rho")
    expr = make_logdet_fn(W, method=method, rho_min=rho_min, rho_max=rho_max)(rho)
    fn = pytensor.function([rho], expr)
    setup_s = time.perf_counter() - t0
    return fn, setup_s


def bench_eval_seconds(fn, rhos: np.ndarray, repeats: int = 5) -> float:
    """Median per-call evaluation latency in microseconds."""
    run_times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        for r in rhos:
            _ = fn(float(r))
        elapsed = time.perf_counter() - t0
        run_times.append(elapsed / len(rhos))
    return float(np.median(run_times))
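As a quick, hypothetical smoke test of these two helpers (the method name and rho bounds mirror the configuration defined next):

# Hypothetical smoke test: compile one logdet callable and time it.
W_demo = make_grid_w(n_side=5)
fn_demo, setup_s = compile_logdet_callable(
    W_demo, "eigenvalue", rho_min=-0.95, rho_max=0.95
)
per_call_s = bench_eval_seconds(fn_demo, np.linspace(-0.9, 0.9, 10))
print(f"setup: {1e3 * setup_s:.1f} ms, eval: {1e6 * per_call_s:.1f} us/call")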
@dataclass
class ProfileConfig:
    # Grid side lengths; obs count = n_side², e.g. 10→100, 25→625, 75→5625.
    sizes: tuple[int, ...] = (10, 20, 25, 40, 50, 75)
    methods: tuple[str, ...] = (
        "exact",
        "eigenvalue",
        "dense_grid",
        "sparse_grid",
        "spline",
        "mc",
        "mc_poly",
        "ilu",
        "chebyshev",
    )
    method_max_n: dict | None = None
    eval_points: int = 80
    eval_repeats: int = 3
    seed: int = 2026

    def __post_init__(self):
        if self.method_max_n is None:
            self.method_max_n = {
                "exact": 20,
                "eigenvalue": 75,
                "dense_grid": 40,
                "sparse_grid": 75,
                "spline": 75,
                "mc": 75,
                "ilu": 75,
                "chebyshev": 75,
                "mc_poly": 75,
            }


cfg = ProfileConfig()

# spline/mc are defined on nonnegative rho ranges; others can use symmetric ranges.
method_rho_bounds = {
    "exact": (-0.95, 0.95),
    "eigenvalue": (-0.95, 0.95),
    "dense_grid": (-0.95, 0.95),
    "sparse_grid": (-0.95, 0.95),
    "ilu": (-0.95, 0.95),
    "spline": (1e-5, 0.95),
    "mc": (1e-5, 0.95),
    "chebyshev": (-0.95, 0.95),
    "mc_poly": (-0.95, 0.95),
}

results = []
skipped = []

for n in cfg.sizes:
    W = make_grid_w(n_side=n)
    print(f"Profiling n_side={n} ({n * n} obs)...")
    for method in cfg.methods:
        if n > cfg.method_max_n[method]:
            skipped.append(
                {
                    "n_side": n,
                    "n_obs": n * n,
                    "method": method,
                    "reason": "above method_max_n cap",
                }
            )
            continue

        rho_min, rho_max = method_rho_bounds[method]
        rho_grid = np.linspace(rho_min, rho_max, cfg.eval_points)

        try:
            fn, setup_s = compile_logdet_callable(
                W, method, rho_min=rho_min, rho_max=rho_max
            )
            eval_s = bench_eval_seconds(fn, rho_grid, repeats=cfg.eval_repeats)
            results.append(
                {
                    "n_side": n,
                    "n_obs": n * n,
                    "method": method,
                    "rho_min": rho_min,
                    "rho_max": rho_max,
                    "setup_ms": 1e3 * setup_s,
                    "eval_us": 1e6 * eval_s,
                }
            )
        except Exception as exc:
            skipped.append(
                {
                    "n_side": n,
                    "n_obs": n * n,
                    "method": method,
                    "reason": f"failed: {type(exc).__name__}: {exc}",
                }
            )

res = pd.DataFrame(results).sort_values(["method", "n_obs"]).reset_index(drop=True)
if not res.empty:
    res["total_ms"] = res["setup_ms"] + (res["eval_us"] * cfg.eval_points / 1e3)
res
Profiling n_side=10 (100 obs)...
Profiling n_side=20 (400 obs)...
Profiling n_side=25 (625 obs)...
Profiling n_side=40 (1600 obs)...
Profiling n_side=50 (2500 obs)...
Profiling n_side=75 (5625 obs)...
n_side n_obs method rho_min rho_max setup_ms eval_us total_ms
0 10 100 chebyshev -0.95000 0.95 696.920199 4.073713 697.246096
1 20 400 chebyshev -0.95000 0.95 734.256888 4.317163 734.602261
2 25 625 chebyshev -0.95000 0.95 822.549548 3.787300 822.852532
3 40 1600 chebyshev -0.95000 0.95 1615.268985 4.277575 1615.611191
4 50 2500 chebyshev -0.95000 0.95 735.034846 3.830750 735.341306
5 75 5625 chebyshev -0.95000 0.95 853.410552 3.770387 853.712183
6 10 100 dense_grid -0.95000 0.95 2765.704186 5.708375 2766.160856
7 20 400 dense_grid -0.95000 0.95 284.792858 5.401050 285.224942
8 25 625 dense_grid -0.95000 0.95 1196.150999 5.807800 1196.615623
9 40 1600 dense_grid -0.95000 0.95 10674.711259 5.522650 10675.153071
10 10 100 eigenvalue -0.95000 0.95 1176.079836 5.525912 1176.521909
11 20 400 eigenvalue -0.95000 0.95 53.808049 7.725000 54.426049
12 25 625 eigenvalue -0.95000 0.95 141.234224 9.323737 141.980123
13 40 1600 eigenvalue -0.95000 0.95 962.637999 16.838713 963.985096
14 50 2500 eigenvalue -0.95000 0.95 2820.383394 23.271062 2822.245079
15 75 5625 eigenvalue -0.95000 0.95 29495.591098 44.096275 29499.118800
16 10 100 exact -0.95000 0.95 2510.569489 87.581525 2517.576011
17 20 400 exact -0.95000 0.95 17.494855 1181.247663 111.994668
18 10 100 ilu -0.95000 0.95 108.316180 5.685700 108.771036
19 20 400 ilu -0.95000 0.95 225.332153 5.650025 225.784155
20 25 625 ilu -0.95000 0.95 323.321763 5.574125 323.767693
21 40 1600 ilu -0.95000 0.95 722.053443 6.049137 722.537374
22 50 2500 ilu -0.95000 0.95 1124.994016 5.714500 1125.451176
23 75 5625 ilu -0.95000 0.95 2669.094488 6.214425 2669.591642
24 10 100 mc 0.00001 0.95 599.998990 5.507250 600.439570
25 20 400 mc 0.00001 0.95 50.829995 5.641000 51.281275
26 25 625 mc 0.00001 0.95 55.214316 5.727788 55.672539
27 40 1600 mc 0.00001 0.95 73.633551 5.873813 74.103456
28 50 2500 mc 0.00001 0.95 96.986149 5.319650 97.411721
29 75 5625 mc 0.00001 0.95 246.342920 5.703837 246.799227
30 10 100 mc_poly -0.95000 0.95 661.239353 4.288850 661.582461
31 20 400 mc_poly -0.95000 0.95 663.584305 4.283100 663.926953
32 25 625 mc_poly -0.95000 0.95 709.879232 4.307025 710.223794
33 40 1600 mc_poly -0.95000 0.95 684.056105 4.246650 684.395837
34 50 2500 mc_poly -0.95000 0.95 709.230261 4.016975 709.551619
35 75 5625 mc_poly -0.95000 0.95 871.879652 4.305500 872.224092
36 10 100 sparse_grid -0.95000 0.95 652.647329 5.697113 653.103098
37 20 400 sparse_grid -0.95000 0.95 204.246746 5.731538 204.705269
38 25 625 sparse_grid -0.95000 0.95 294.316772 5.350463 294.744809
39 40 1600 sparse_grid -0.95000 0.95 667.544790 5.918637 668.018281
40 50 2500 sparse_grid -0.95000 0.95 1040.272585 5.469050 1040.710109
41 75 5625 sparse_grid -0.95000 0.95 2533.614430 5.811675 2534.079364
42 10 100 spline 0.00001 0.95 596.371598 5.772875 596.833428
43 20 400 spline 0.00001 0.95 54.324069 5.648137 54.775920
44 25 625 spline 0.00001 0.95 64.594506 5.368875 65.024016
45 40 1600 spline 0.00001 0.95 158.314477 6.057775 158.799099
46 50 2500 spline 0.00001 0.95 152.697674 5.617950 153.147110
47 75 5625 spline 0.00001 0.95 376.927609 5.622563 377.377414
if res.empty:
    raise RuntimeError("No profiling results were generated.")

fig, axes = plt.subplots(1, 3, figsize=(16, 4.8), constrained_layout=True)

for method, grp in res.groupby("method"):
    grp = grp.sort_values("n_obs")
    axes[0].plot(grp["n_obs"], grp["setup_ms"], marker="o", label=method)
    axes[1].plot(grp["n_obs"], grp["eval_us"], marker="o", label=method)
    axes[2].plot(grp["n_obs"], grp["total_ms"], marker="o", label=method)

axes[0].set_title("Setup Time vs n")
axes[0].set_xlabel("observations")
axes[0].set_ylabel("setup time (ms)")
axes[0].set_yscale("log")
axes[0].grid(True, alpha=0.3)

axes[1].set_title("Evaluation Time vs n")
axes[1].set_xlabel("observations")
axes[1].set_ylabel("time per rho eval (us)")
axes[1].set_yscale("log")
axes[1].grid(True, alpha=0.3)

axes[2].set_title(f"Total Time vs n (setup + {cfg.eval_points} evals)")
axes[2].set_xlabel("observations")
axes[2].set_ylabel("total time (ms)")
axes[2].set_yscale("log")
axes[2].grid(True, alpha=0.3)

for ax in axes:
    ax.legend()

plt.show()
[Figure: three log-scale panels showing setup time, evaluation time, and total time versus number of observations, one line per method.]
summary = res.pivot_table(
    index="n_obs", columns="method", values=["setup_ms", "eval_us", "total_ms"]
).sort_index()
display(summary)

if skipped:
    skipped_df = (
        pd.DataFrame(skipped).sort_values(["n_side", "method"]).reset_index(drop=True)
    )
    print("Skipped combinations (due to safety caps or failures):")
    display(skipped_df)
eval_us setup_ms total_ms
method chebyshev dense_grid eigenvalue exact ilu mc mc_poly sparse_grid spline chebyshev ... spline chebyshev dense_grid eigenvalue exact ilu mc mc_poly sparse_grid spline
n_obs
100 4.073713 5.708375 5.525912 87.581525 5.685700 5.507250 4.288850 5.697113 5.772875 696.920199 ... 596.371598 697.246096 2766.160856 1176.521909 2517.576011 108.771036 600.439570 661.582461 653.103098 596.833428
400 4.317163 5.401050 7.725000 1181.247663 5.650025 5.641000 4.283100 5.731538 5.648137 734.256888 ... 54.324069 734.602261 285.224942 54.426049 111.994668 225.784155 51.281275 663.926953 204.705269 54.775920
625 3.787300 5.807800 9.323737 NaN 5.574125 5.727788 4.307025 5.350463 5.368875 822.549548 ... 64.594506 822.852532 1196.615623 141.980123 NaN 323.767693 55.672539 710.223794 294.744809 65.024016
1600 4.277575 5.522650 16.838713 NaN 6.049137 5.873813 4.246650 5.918637 6.057775 1615.268985 ... 158.314477 1615.611191 10675.153071 963.985096 NaN 722.537374 74.103456 684.395837 668.018281 158.799099
2500 3.830750 NaN 23.271062 NaN 5.714500 5.319650 4.016975 5.469050 5.617950 735.034846 ... 152.697674 735.341306 NaN 2822.245079 NaN 1125.451176 97.411721 709.551619 1040.710109 153.147110
5625 3.770387 NaN 44.096275 NaN 6.214425 5.703837 4.305500 5.811675 5.622563 853.410552 ... 376.927609 853.712183 NaN 29499.118800 NaN 2669.591642 246.799227 872.224092 2534.079364 377.377414

6 rows × 27 columns

Skipped combinations (due to safety caps or failures):
n_side n_obs method reason
0 25 625 exact above method_max_n cap
1 40 1600 exact above method_max_n cap
2 50 2500 dense_grid above method_max_n cap
3 50 2500 exact above method_max_n cap
4 75 5625 dense_grid above method_max_n cap
5 75 5625 exact above method_max_n cap

Coefficient and Fit-Time Comparison Across Logdet Methods

This section uses a regular polygon grid generated by bayespecon.dgp to simulate one SAR dataset, maps the simulated response, and estimates the same SAR model using each logdet_method.

We compare:

  • posterior mean coefficients (rho, beta_0, beta_1, beta_2)

  • total wall-clock time to estimate each model

To keep this section runnable in docs contexts, sampling is intentionally modest.
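For reference, this is standard SAR algebra rather than anything bayespecon-specific: the model is y = rho·W·y + X·beta + eps with eps ~ N(0, sigma²·I), so the log-likelihood is

ln L(rho, beta, sigma²) = −(n/2)·ln(2π·sigma²) + ln|I − rho·W| − ||(I − rho·W)·y − X·beta||² / (2·sigma²)

and ln|I − rho·W| is exactly the term logdet_method supplies. The sampler re-evaluates it at each proposed rho, so the per-evaluation costs profiled above translate directly into fit time.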

from bayespecon import SAR


def simulate_sar_data(n_side: int = 16, seed: int = 2026):
    """Simulate SAR data on an n_side x n_side polygon grid using the DGP module."""
    rng = np.random.default_rng(seed)
    beta_true = np.array([1.0, 0.8, -0.5], dtype=np.float64)
    rho_true = 0.35
    sigma_true = 0.7

    gdf = dgp.simulate_sar(
        n=n_side,
        rho=rho_true,
        beta=beta_true,
        sigma=sigma_true,
        rng=rng,
        create_gdf=True,
    )
    # Row-standardize W so the estimated model matches the SAR parameterization of the DGP.
    W_graph = graph.Graph.build_contiguity(gdf, rook=True).transform("r")
    y = gdf["y"].to_numpy()
    X_cols = [c for c in gdf.columns if c.startswith("X_")]
    X = gdf[X_cols].to_numpy()
    return gdf, y, X, W_graph, rho_true, beta_true


def fit_sar_for_method(
    y,
    X,
    W,
    method: str,
    draws: int = 400,
    tune: int = 400,
    seed: int = 2026,
    rho_lower: float = 1e-5,
    rho_upper: float = 0.95,
):
    """Fit SAR with a specific logdet method and return posterior means + runtime."""
    t0 = time.perf_counter()

    model = SAR(
        y=y,
        X=X,
        W=W,
        logdet_method=method,
        priors={"rho_lower": rho_lower, "rho_upper": rho_upper},
    )
    idata = model.fit(
        draws=draws,
        tune=tune,
        chains=2,
        cores=1,
        random_seed=seed,
        target_accept=0.95,
        progressbar=False,
        compute_convergence_checks=False,
    )

    elapsed_s = time.perf_counter() - t0
    beta_mean = idata.posterior["beta"].mean(("chain", "draw")).to_numpy()
    rho_mean = float(idata.posterior["rho"].mean(("chain", "draw")).to_numpy())

    return {
        "method": method,
        "total_time_s": elapsed_s,
        "rho": rho_mean,
        "beta_0": float(beta_mean[0]),
        "beta_1": float(beta_mean[1]),
        "beta_2": float(beta_mean[2]),
    }


methods_for_model = [
    "eigenvalue",
    "dense_grid",
    "sparse_grid",
    "mc_poly",
    "ilu",
    "chebyshev",
]
gdf_model, y_model, X_model, W_model, rho_true, beta_true = simulate_sar_data(n_side=25)

fig, ax = plt.subplots(1, 1, figsize=(7, 7))
gdf_model.plot(
    column="y", cmap="viridis", legend=True, linewidth=0.15, edgecolor="white", ax=ax
)
ax.set_title("Simulated y on 25x25 polygon grid")
ax.set_axis_off()
plt.show()

model_rows = []
for m in methods_for_model:
    print(f"Estimating SAR with method={m}...")
    try:
        model_rows.append(fit_sar_for_method(y_model, X_model, W_model, method=m))
    except Exception as exc:
        model_rows.append(
            {
                "method": m,
                "total_time_s": np.nan,
                "rho": np.nan,
                "beta_0": np.nan,
                "beta_1": np.nan,
                "beta_2": np.nan,
                "error": f"{type(exc).__name__}: {exc}",
            }
        )

coef_compare = pd.DataFrame(model_rows)
coef_compare = coef_compare.sort_values("total_time_s", na_position="last").reset_index(
    drop=True
)

coef_compare["rho_true"] = rho_true
coef_compare["abs_err_rho"] = (coef_compare["rho"] - rho_true).abs()
for j, btrue in enumerate(beta_true):
    coef_compare[f"beta_{j}_true"] = btrue
    coef_compare[f"abs_err_beta_{j}"] = (coef_compare[f"beta_{j}"] - btrue).abs()

if "eigenvalue" in coef_compare["method"].values:
    base = coef_compare.loc[
        coef_compare["method"] == "eigenvalue", ["rho", "beta_0", "beta_1", "beta_2"]
    ].iloc[0]
    for col in ["rho", "beta_0", "beta_1", "beta_2"]:
        coef_compare[f"delta_vs_eigen_{col}"] = coef_compare[col] - base[col]

display(coef_compare)
[Figure: map of the simulated response y on the 25x25 polygon grid.]
Estimating SAR with method=eigenvalue...
Estimating SAR with method=dense_grid...
Estimating SAR with method=sparse_grid...
Estimating SAR with method=mc_poly...
Estimating SAR with method=ilu...
Estimating SAR with method=chebyshev...
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 3 seconds.
method total_time_s rho beta_0 beta_1 beta_2 rho_true abs_err_rho beta_0_true abs_err_beta_0 beta_1_true abs_err_beta_1 beta_2_true abs_err_beta_2 delta_vs_eigen_rho delta_vs_eigen_beta_0 delta_vs_eigen_beta_1 delta_vs_eigen_beta_2
0 ilu 3.508033 0.382076 0.882487 0.807844 -0.520888 0.35 0.032076 1.0 0.117513 0.8 0.007844 -0.5 0.020888 6.188231e-04 -2.032757e-03 -6.876075e-04 -2.634322e-03
1 sparse_grid 4.044967 0.381457 0.884519 0.808531 -0.518253 0.35 0.031457 1.0 0.115481 0.8 0.008531 -0.5 0.018253 1.312631e-10 -2.506775e-10 -3.051681e-11 2.516268e-10
2 eigenvalue 6.246851 0.381457 0.884519 0.808531 -0.518253 0.35 0.031457 1.0 0.115481 0.8 0.008531 -0.5 0.018253 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00
3 dense_grid 8.147875 0.381457 0.884519 0.808531 -0.518253 0.35 0.031457 1.0 0.115481 0.8 0.008531 -0.5 0.018253 -8.043217e-10 5.792263e-10 5.364920e-11 -1.770358e-10
4 mc_poly 11.576404 0.381253 0.882464 0.809891 -0.520724 0.35 0.031253 1.0 0.117536 0.8 0.009891 -0.5 0.020724 -2.043695e-04 -2.055350e-03 1.359103e-03 -2.470396e-03
5 chebyshev 12.700295 0.380549 0.885094 0.811359 -0.519605 0.35 0.030549 1.0 0.114906 0.8 0.011359 -0.5 0.019605 -9.080499e-04 5.745152e-04 2.827230e-03 -1.351804e-03

Notes

  • exact is expected to scale poorly and is intentionally capped at smaller n in this notebook.

  • eigenvalue usually has moderate setup cost and very fast repeated evaluations.

  • dense_grid has an upfront grid-construction cost but can be competitive for repeated evaluation scenarios (was grid).

  • sparse_grid and spline are sparse-LU variants inspired by the MATLAB toolbox routines (were full and int).

  • mc is stochastic and may vary slightly run-to-run unless random seeds are fixed.

  • ilu uses an ILU approximation that can trade precision for speed depending on sparsity (was ichol).

  • chebyshev uses Chebyshev polynomial approximation via Clenshaw’s algorithm (Pace & LeSage 2004). It pre-computes coefficients at Chebyshev nodes (O(n³) for the eigenvalue path, O(R·n·m) for the MC path) and then evaluates in O(m) per call. It supports negative rho ranges, unlike spline and mc; a sketch follows these notes.

  • Parameter ranges: spline and mc are profiled on a nonnegative range (rho in [1e-5, 0.95]), while exact, eigenvalue, dense_grid, sparse_grid, ilu, and chebyshev are profiled on a symmetric range (rho in [-0.95, 0.95]).

  • Adjust ProfileConfig.sizes and method_max_n for deeper stress tests.
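To make the chebyshev note concrete, here is a minimal self-contained sketch of a Pace–LeSage-style approximation (illustrative only; bayespecon additionally evaluates the series with Clenshaw's recurrence and offers an MC trace path). It assumes W has real eigenvalues in [−1, 1], precomputes the rho-independent traces tr(T_k(W)), and per call only recomputes the Chebyshev coefficients of f(x) = ln(1 − rho·x):

import numpy as np


def chebyshev_logdet_sketch(W, rho, m=16):
    """Chebyshev-series logdet sketch (not bayespecon's implementation)."""
    n = W.shape[0]
    # Setup (rho-independent): traces of Chebyshev polynomials of W via
    # T_0 = I, T_1 = W, T_{k+1} = 2 W T_k - T_{k-1}.
    traces = np.empty(m)
    traces[0], traces[1] = float(n), float(np.trace(W))
    T_prev, T_curr = np.eye(n), W.copy()
    for k in range(2, m):
        T_prev, T_curr = T_curr, 2.0 * W @ T_curr - T_prev
        traces[k] = np.trace(T_curr)
    # Per-rho evaluation: Chebyshev coefficients of f(x) = ln(1 - rho*x)
    # sampled at the Chebyshev nodes x_j = cos(theta_j).
    theta = np.pi * (np.arange(m) + 0.5) / m
    fx = np.log(1.0 - rho * np.cos(theta))
    coef = (2.0 / m) * np.array([fx @ np.cos(k * theta) for k in range(m)])
    coef[0] *= 0.5  # the k = 0 term of a Chebyshev series is halved
    return float(coef @ traces)

For comparison, np.linalg.slogdet(np.eye(len(W)) - rho * W)[1] gives the exact value; agreement improves as m grows.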

import arviz as az

# Refit the SAR model once per logdet method and compare sampling efficiency
# via the bulk effective sample size (ESS) of rho, with trace plots as a
# visual check of mixing.

ess_rows = []
for m in methods_for_model:
    model = SAR(
        y=y_model,
        X=X_model,
        W=W_model,
        logdet_method=m,
        priors={"rho_lower": 1e-5, "rho_upper": 0.95},
    )
    idata = model.fit(
        draws=400,
        tune=400,
        chains=2,
        cores=1,
        random_seed=2026,
        target_accept=0.95,
        progressbar=False,
        compute_convergence_checks=False,
    )
    summary = az.summary(idata, var_names=["rho"])
    ess = summary.loc["rho", "ess_bulk"]
    ess_rows.append({"method": m, "ess_rho": ess})
    az.plot_trace(idata, var_names=["rho"], compact=True, legend=False)
    plt.suptitle(m)

ess_df = pd.DataFrame(ess_rows)
display(ess_df)

ess_df.set_index("method")["ess_rho"].plot.bar(
    ylabel="ESS (rho)", title="Effective Sample Size for rho by Logdet Method"
)
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 2 seconds.
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (2 chains in 1 job)
NUTS: [rho, beta, sigma]
Sampling 2 chains for 400 tune and 400 draw iterations (800 + 800 draws total) took 3 seconds.
method ess_rho
0 eigenvalue 585.0
1 dense_grid 585.0
2 sparse_grid 585.0
3 mc_poly 579.0
4 ilu 591.0
5 chebyshev 494.0
<Axes: title={'center': 'Effective Sample Size for rho by Logdet Method'}, xlabel='method', ylabel='ESS (rho)'>
[Figures: trace plots of rho for each of the six logdet methods.]