Technical White Paper 03 - EFT.WP.Core.Parameters v1.0

Chapter 4 — Priors, Likelihoods, and Posteriors


I. Aims and Scope


II. Symbols and Objects


III. Bayes Mother Form and Evidence (Minimal Equation S41-1)

  1. S41-1 (Bayes mother form)
    • post(theta | data) = L(data | theta) * prior(theta) / Z
    • Z = ∫_Theta L(data | theta) * prior(theta) d theta
    • log post(theta | data) = log L(data | theta) + log prior(theta) - log Z
  2. Extremum definitions:
    • theta_MLE = argmax_theta L(data | theta)
    • theta_MAP = argmax_theta post(theta | data)
  3. Dimensional closure requirement:
    prior(theta) and L(data | theta) are densities or probability masses; Z is a normalizing constant; verify with check_dim(expr:str).
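  4. Minimal numerical sketch of S41-1 (illustrative, non-normative): the snippet below evaluates Z by quadrature on a grid for a single scalar theta; the Normal data model, Normal prior, and toy data are assumptions of the sketch, not part of this volume.

    # Sketch: evidence and log posterior for one scalar parameter (assumed toy model).
    import numpy as np
    from scipy import stats

    data = np.array([0.9, 1.3, 0.7, 1.1])          # assumed observations

    def log_like(theta):
        return stats.norm.logpdf(data, loc=theta, scale=1.0).sum()

    def log_prior(theta):
        return stats.norm.logpdf(theta, loc=0.0, scale=2.0)

    # Z = ∫_Theta L(data | theta) * prior(theta) d theta, approximated on a grid
    grid = np.linspace(-10.0, 10.0, 4001)
    log_joint = np.array([log_like(t) + log_prior(t) for t in grid])
    m = log_joint.max()
    log_Z = m + np.log(np.sum(np.exp(log_joint - m)) * (grid[1] - grid[0]))

    def log_post(theta):
        # log post(theta | data) = log L(data | theta) + log prior(theta) - log Z
        return log_like(theta) + log_prior(theta) - log_Z

    theta_map_grid = grid[np.argmax(log_joint)]     # grid MAP; theta_MLE analogous with the prior dropped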

IV. Likelihood Construction Templates

  1. Conditional-independence factorization (state explicitly):
    • L(data | theta) = ∏_{k=1}^N L_k(y_k | x_k, theta)
    • If grouped: L(data | theta) = ∏_g ∏_{k ∈ g} L_{g,k}(y_k | x_k, theta)
  2. Additive Gaussian noise:
    • Observation model y_k = f(x_k; theta) + ε_k, ε_k ~ Normal(0, sigma_k^2)
    • L_k = Normal(y_k | f(x_k; theta), sigma_k)
  3. Heteroscedastic Gaussian:
    sigma_k = sigma(x_k); same likelihood as above with sigma_k varying over x_k.
  4. Multiplicative LogNormal noise:
    • y_k = f(x_k; theta) * η_k, log η_k ~ Normal(0, s^2)
    • L_k = LogNormal(y_k | log f(x_k; theta), s)
  5. Counting models:
    L_k = Poisson(y_k | λ_k(theta)) or Binomial(y_k | n_k, p_k(theta))
  6. Path/arrival-time measurements (cross-volume alignment):
    • Predictor f(x_k; theta) def= T_arr(theta) = ( ∫ ( n_eff / c_ref ) d ell ), with gamma(ell) and d ell declared explicitly
    • Observation model y_k = T_arr(theta) + ε_k, commonly ε_k ~ Normal(0, sigma^2)
  7. Mixtures/compound models:
    Component mixture L_k = Σ_j w_j * L_{k,j}, w_j ≥ 0, Σ_j w_j = 1; priors on w via Dirichlet.
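  8. Minimal likelihood sketch (illustrative, non-normative): the snippet below assembles the factorized log likelihood of item 1 for the additive-Gaussian and multiplicative-LogNormal templates; the linear predictor f is a stand-in assumption, not an EFT model.

    # Sketch: factorized log likelihoods for templates IV.2 and IV.4 (assumed toy predictor).
    import numpy as np
    from scipy import stats

    def f(x, theta):
        return theta[0] + theta[1] * x               # stand-in predictor

    def log_like_gaussian(theta, x, y, sigma):
        # y_k = f(x_k; theta) + eps_k, eps_k ~ Normal(0, sigma_k^2); heteroscedastic if sigma varies with x_k
        return stats.norm.logpdf(y, loc=f(x, theta), scale=sigma).sum()

    def log_like_lognormal(theta, x, y, s):
        # y_k = f(x_k; theta) * eta_k, log eta_k ~ Normal(0, s^2)
        return stats.lognorm.logpdf(y, s=s, scale=f(x, theta)).sum()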

V. Prior Families and Hyper-Parameters (Canonical Conventions)

  1. Real, unbounded:
    • Normal(mu, sigma); hyper-parameters {mu, sigma>0}
    • Laplace(mu, b) (L1-regularization equivalent); b>0
  2. Positive domain (0, +inf):
    LogNormal(mu, sigma); Gamma(shape, rate); HalfNormal(sigma)
  3. Bounded interval (lb, ub):
    Beta(a, b) on s ∈ (0,1); interval map theta = lb + (ub - lb) * s
  4. Covariance/correlation structures:
    InvWishart(ν, S) or factored priors Sigma = D * R * D, with R ~ LKJ(η) and diagonal D elements HalfNormal
  5. Proportions and mixture weights:
    Dirichlet(alpha_vec); alpha_vec > 0
  6. Prior-writing postulate:
    Every prior family must list all hyper-parameters and support. Example: prior(c_ref) = LogNormal(mu_c, sigma_c) with units stated.
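  7. Minimal registration sketch (illustrative, non-normative): the snippet below shows one way to record family, hyper-parameters, support, and units per the postulate in item 6; the hyper-parameter values are placeholder assumptions.

    # Sketch: prior registry carrying family, hyper-parameters, support, and units.
    import math
    from scipy import stats

    PRIORS = {
        "c_ref":       {"family": "LogNormal", "hyper": {"mu": 0.0, "sigma": 0.5},
                        "support": "(0, +inf)", "units": "m s^-1"},
        "n_eff.alpha": {"family": "Gamma",     "hyper": {"shape": 2.0, "rate": 1.0},
                        "support": "(0, +inf)", "units": "dimensionless"},
    }

    def log_prior(code, value):
        spec = PRIORS[code]
        h = spec["hyper"]
        if spec["family"] == "LogNormal":
            return stats.lognorm.logpdf(value, s=h["sigma"], scale=math.exp(h["mu"]))
        if spec["family"] == "Gamma":
            return stats.gamma.logpdf(value, a=h["shape"], scale=1.0 / h["rate"])
        raise ValueError("unregistered family: " + spec["family"])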

VI. Posterior in the Transform Domain (Minimal Equation S41-2)

  1. S41-2 (log posterior in transform space). Let phi = T_map(theta) be invertible, with J_T = ∂theta/∂phi; then
    • log post(phi | data) = log L(data | theta(phi)) + log prior_theta(theta(phi)) + log | det(J_T(phi)) | - log Z
  2. Common special cases:
    • Log transform (positive domain): theta = lb + exp(phi), log | det(J_T) | = Σ_i phi_i
    • Logit transform (interval): theta = lb + (ub - lb) * σ(phi),
      log | det(J_T) | = Σ_i [ log(ub_i - lb_i) + log σ(phi_i) + log(1 - σ(phi_i)) ]
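  3. Minimal transform sketch (illustrative, non-normative): the snippet below evaluates S41-2 for one positive parameter (log transform) and one interval parameter (logit transform); log_like and log_prior_theta are assumed to be supplied by the model.

    # Sketch: log posterior in the unconstrained phi domain with the two Jacobians above.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def to_theta(phi, lb, ub):
        t0 = lb[0] + np.exp(phi[0])                             # positive domain above lb[0]
        t1 = lb[1] + (ub[1] - lb[1]) * sigmoid(phi[1])          # interval (lb[1], ub[1])
        return np.array([t0, t1])

    def log_abs_det_jacobian(phi, lb, ub):
        s = sigmoid(phi[1])
        return phi[0] + np.log(ub[1] - lb[1]) + np.log(s) + np.log(1.0 - s)

    def log_post_phi(phi, lb, ub, log_like, log_prior_theta, log_Z=0.0):
        theta = to_theta(phi, lb, ub)
        return log_like(theta) + log_prior_theta(theta) + log_abs_det_jacobian(phi, lb, ub) - log_Z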

VII. Regularization Equivalences and Selection Guide


VIII. Hierarchies and Shared Parameters

  1. Group-level priors (partial pooling):
    • theta_g | mu, tau ~ Normal(mu, tau^{-1})
    • mu ~ Normal(mu0, s0), tau ~ Gamma(a0, b0)
  2. Sharing/coupled coefficients:
    If theta_a = r * theta_b, put a LogNormal or Normal prior on r and infer jointly.
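  3. Minimal pooling sketch (illustrative, non-normative): the snippet below writes the joint log density for item 1, treating tau as a precision so that Normal(mu, tau^{-1}) has scale tau^{-1/2}; all data and hyper-prior values are placeholder assumptions.

    # Sketch: partial-pooling log joint for group effects theta_g with shared (mu, tau).
    import numpy as np
    from scipy import stats

    def log_joint(theta_g, mu, tau, y_by_group, sigma_y=1.0,
                  mu0=0.0, s0=1.0, a0=2.0, b0=1.0):
        lp  = stats.norm.logpdf(mu, loc=mu0, scale=s0)                     # mu ~ Normal(mu0, s0)
        lp += stats.gamma.logpdf(tau, a=a0, scale=1.0 / b0)                # tau ~ Gamma(a0, b0), rate b0
        lp += stats.norm.logpdf(theta_g, loc=mu, scale=tau ** -0.5).sum()  # theta_g | mu, tau
        for g, y in enumerate(y_by_group):                                 # per-group likelihoods
            lp += stats.norm.logpdf(np.asarray(y), loc=theta_g[g], scale=sigma_y).sum()
        return lp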

IX. Model Comparison and Information Criteria (Minimal Equation S41-3)

  1. S41-3 (information-criteria short list)
    • AIC = 2k - 2 log L(data | theta_MLE)
    • BIC = k * log N - 2 log L(data | theta_MLE)
    • DIC approx= 2 * avg_theta[ -2 log L(data | theta) ] - ( -2 log L(data | theta_hat) ), with theta_hat the posterior mean
  2. Evidence and Bayes factors:
    Z = ∫ L * prior d theta; for two models M1, M2, the ratio BF_{12} = Z_1 / Z_2 (estimate numerically via bridge sampling/thermodynamic integration).
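  3. Minimal criteria sketch (illustrative, non-normative): the snippet below computes AIC, BIC, and DIC from a maximized log likelihood and posterior draws; evidence and Bayes factors still require a dedicated estimator such as bridge sampling.

    # Sketch: information criteria of S41-3 (k = parameter count, n = data count).
    import numpy as np

    def aic(log_like_mle, k):
        return 2.0 * k - 2.0 * log_like_mle

    def bic(log_like_mle, k, n):
        return k * np.log(n) - 2.0 * log_like_mle

    def dic(log_like_draws, log_like_at_post_mean):
        # D(theta) = -2 log L; DIC = 2 * avg[D(theta)] - D(theta_hat)
        d_bar = np.mean(-2.0 * np.asarray(log_like_draws))
        d_hat = -2.0 * log_like_at_post_mean
        return 2.0 * d_bar - d_hat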

X. Implementation Binding and Minimal Working Examples (I30 3 / I30 5)

  1. Prior registration (I30 3):
    • set_prior(code="c_ref", family="LogNormal", hyper={"mu":mu_c, "sigma":sigma_c})
    • set_prior(code="n_eff.alpha", family="Gamma", hyper={"shape":a, "rate":b})
  2. Inference and sampling (I30 5):
    • theta_mle = infer_mle(model=S20_arrival, data=D, params=["c_ref","n_eff.alpha"])
    • theta_map = infer_map(model=S20_arrival, data=D, params=[...])
    • samples = posterior_sample_mcmc(model=S20_arrival, data=D, params=[...], n=2000, burn=500, method="NUTS")
  3. Evidence approximations and comparison:
    After sampling, call the information-criteria or bridge-sampling module; report AIC/BIC and BF.

XI. Calibration Pipeline (Mx-2)


XII. Arrival-Time Sidebar (Cross-Volume Consistency)

  1. If parameters affect T_arr, all formulas must retain the fully parenthesized form and explicit path:
    T_arr = ( 1 / c_ref ) * ( ∫ n_eff d ell ) or T_arr = ( ∫ ( n_eff / c_ref ) d ell )
  2. Implementation bindings:
    • propagate_time(n_eff_path, ds, c_ref) -> float
    • Propagate posterior samples to the uncertainty of T_arr; avg_gamma may serve as the statistical window for path averages.
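  3. Minimal propagation sketch (illustrative, non-normative): the snippet below evaluates T_arr = ( ∫ ( n_eff / c_ref ) d ell ) for each posterior sample on a declared, discretized path; the n_eff(ell) profile and the sample format are assumptions of the sketch.

    # Sketch: push posterior samples through T_arr on a declared path gamma(ell).
    import numpy as np

    def propagate_time(n_eff_path, ds, c_ref):
        # Discrete form of T_arr = ( ∫ ( n_eff / c_ref ) d ell ) on a uniform grid
        return float(np.sum(np.asarray(n_eff_path) / c_ref) * ds)

    ell = np.linspace(0.0, 1.0e3, 101)        # arc length along gamma(ell), declared explicitly
    ds = ell[1] - ell[0]

    def t_arr_from_sample(sample):
        # sample = {"c_ref": ..., "n_eff.alpha": ...}; the n_eff(ell) model is a stand-in
        n_eff = 1.0 + sample["n_eff.alpha"] * np.exp(-ell / 300.0)
        return propagate_time(n_eff, ds, sample["c_ref"])

    # t_draws = np.array([t_arr_from_sample(s) for s in samples])
    # Report mean, std, and quantiles of t_draws as the T_arr uncertainty.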

XIII. Misuse and Conflict List


XIV. Output Anchors and Citations


Copyright & License (CC BY 4.0)

Copyright: Unless otherwise noted, the copyright of “Energy Filament Theory” (text, charts, illustrations, symbols, and formulas) belongs to the author “Guanglin Tu”.
License: This work is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0). You may copy, redistribute, excerpt, adapt, and share for commercial or non‑commercial purposes with proper attribution.
Suggested attribution: Author: “Guanglin Tu”; Work: “Energy Filament Theory”; Source: energyfilament.org; License: CC BY 4.0.

First published: 2025-11-11 | Current version: v5.1
License link: https://creativecommons.org/licenses/by/4.0/