Appendix: Prediction and Falsification

This chapter follows the publication template for the falsification program. It uses plain language, avoids equations, and preserves the fixed structure. With unified timescales and near–far calibration, we compare arrival-time structures across beam, atmospheric, and astrophysical neutrino sources and across multiple experiments and baselines. After subtracting Standard Model energy dependences (mass, spectrum, geometry), we test for a flavor- and energy-insensitive, non-dispersive common term: a constant time shift or slowly varying platform that aligns across baselines and replicates at zero lag across experiments and pipelines. If the residuals scale or flip sign with energy or flavor, or fail under held-out or blinded tests, the claim is disfavored.


I. One-Sentence Goal

Determine whether a baseline-alignable, non-dispersive timing term exists in neutrino data across sources and detectors, independent of energy and flavor, and reproducible at zero lag across analyses.


II. What to Measure

  1. Arrival-time residuals after subtracting Standard Model energy dependences, stacked per baseline × source × energy × flavor × zenith cell.
  2. The amplitude and sign of any constant or slow-slope common term in those residuals, and its zero-lag synchrony across experiments and pipelines.
  3. Stability of that term under energy/flavor slicing, near–far differencing, and held-out seasons.

III. How to Do It

  1. Observation families and data planes:
    • Beams: at least two long-baseline beams with different lengths/azimuths and near–far detectors.
    • Atmospheric: deep detectors at northern, southern, and equatorial sites, covering up-going and down-going paths.
    • Astrophysical: events time-tagged against multi-messenger networks, with independent trigger seasons held out.
  2. Unified calibration and timing:
    Use a single external timescale (common-view GNSS, two-way time transfer, or White Rabbit). Publish delay ledgers for the sensor-to-disk chain, with temperature→delay regression and inject–recover calibration (LED/laser/pulse plates) to lock absolute offsets; a minimal regression sketch follows this list. Release reconstruction templates for energy/zenith/multipath and hold them out during fits.
  3. Windows and event tomography:
    For beams, define short (ms) and long (s) windows around spill gates. For atmospheric/astrophysical, window by local time/tide phase/geomagnetic level and set control windows for bursts.
  4. Processing and stacking:
    Run ≥ 2 independent pipelines (time-domain alignment, frequency-domain coherence, wavelet/empirical-mode decomposition). Combine bandpass/high-pass filters and conditionally stack in baseline × source × energy × flavor × zenith cells to produce constant/slow-slope/synchrony text tables; a zero-lag synchrony check is sketched after this list.
  5. Forward prediction, blinding, arbitration:
    The forward team issues prediction cards using only geometry/environment variables. The measurement team independently reports non-dispersion/zero-lag/shape summaries. The arbitration team scores hit / wrong / null across baseline/experiment/source/pipeline/window and publishes decisions.
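
A minimal sketch of the temperature→delay regression and inject–recover offset correction from step 2, in Python. All arrays, values, and names here are hypothetical illustrations, not any experiment's real data format or calibration constants.

```python
# Minimal sketch: temperature->delay regression for one channel.
# All numbers below are toy values, not real calibration data.
import numpy as np

# Calibration runs: injected pulses (LED/laser) with known emission times
# let us measure the channel's absolute delay at different temperatures.
temps_c = np.array([12.0, 15.5, 18.2, 21.0, 24.7])                 # sensor temperature [deg C]
measured_delay_ns = np.array([103.1, 103.6, 104.2, 104.9, 105.5])  # inject-recover delay [ns]

# Fit a first-order delay(T) model; the slope and intercept would go
# into the published delay ledger for this channel.
slope, intercept = np.polyfit(temps_c, measured_delay_ns, deg=1)

def corrected_timestamp(raw_ts_ns: float, temp_c: float) -> float:
    """Remove the modeled channel delay from a raw timestamp."""
    return raw_ts_ns - (slope * temp_c + intercept)

# Example: an event recorded at 20.1 deg C.
print(corrected_timestamp(1_000_000_000.0, 20.1))
```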
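A minimal sketch of the zero-lag synchrony check from step 4: cross-correlate residual time series from two experiments and ask whether the correlation peak sits at lag zero. The series, sampling, and the shared "platform" component are toy constructions for illustration only.

```python
# Minimal sketch: does the cross-correlation of two experiments'
# residuals peak at lag zero? Toy data, not a real pipeline.
import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(0.0, 1.0, 512)            # shared slow platform (toy)
resid_a = common + rng.normal(0.0, 0.5, 512)  # experiment A residuals
resid_b = common + rng.normal(0.0, 0.5, 512)  # experiment B residuals

# Standardize so the cross-correlation is a correlation coefficient.
a = (resid_a - resid_a.mean()) / resid_a.std()
b = (resid_b - resid_b.mean()) / resid_b.std()

xcorr = np.correlate(a, b, mode="full") / len(a)
lags = np.arange(-len(a) + 1, len(a))
peak_lag = lags[np.argmax(xcorr)]

# A non-dispersive common term should replicate at zero lag;
# a peak away from lag 0 points to pipeline or clock artifacts.
print(f"peak lag = {peak_lag} samples, peak corr = {xcorr.max():.2f}")
```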

IV. Positive/Negative Controls and Artifact Removal

  1. Positive controls (support a non-dispersive common term):
    • Same-sign, similar-amplitude constant/slow platforms appear across multiple baselines/experiments/pipelines and are insensitive to energy/flavor slicing.
    • Near–far differencing preserves the platform, excluding source jitter.
    • The term tracks geometry/environment as monotonic/plateau/threshold, and prediction cards beat chance significantly.
    • Replication in held-out seasons/experiments/energy windows with high zero-lag synchrony.
  2. Negative controls (argue against the term):
    • Residuals scale/flip with energy or correlate with flavor layers.
    • Signals exist only in one experiment/pipeline/window, or are highly sensitive to bandpass/window/alignment/reconstruction templates.
    • Label swaps/time reversals/parameter shuffles still “detect” signals, indicating method bias (a shuffle null test is sketched after this list).
    • With tighter delay budgets/temperature regressions/energy-reconstruction corrections, signals vanish or can be reproduced by timestamp bias/trigger thresholds/buffer latency.
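
A minimal sketch of the shuffle negative control above: re-run a toy detection statistic on sign-flipped residuals (the analogue of a label swap for a constant offset) and check whether the observed value is reproduced under the null. The data and the statistic are toy stand-ins, not the program's actual pipeline.

```python
# Minimal sketch: shuffle/label-swap null calibration for a toy
# platform-detection statistic. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 1.0, 400) + 0.3   # toy residuals with a 0.3 offset

def platform_stat(x: np.ndarray) -> float:
    """Toy detection statistic: mean offset in units of its standard error."""
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

observed = platform_stat(residuals)

# Null distribution from random sign flips, which destroy any constant
# offset while preserving the noise; this mimics label swaps/shuffles.
null = np.array([platform_stat(residuals * rng.choice([-1, 1], len(residuals)))
                 for _ in range(2000)])
p_value = float(np.mean(np.abs(null) >= abs(observed)))

# If the shuffled data still "detect" a platform (large null values),
# the method is biased and the claim fails this control.
print(f"observed = {observed:.2f}, shuffle p = {p_value:.4f}")
```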

V. Systematics and Safeguards (Three Items)

  1. Timing chain: timestamp bias, trigger thresholds, and buffer latency; bounded by delay ledgers, temperature→delay regression, and inject–recover calibration.
  2. Reconstruction: energy/zenith/multipath templates can imprint spurious slopes; publish the templates and hold them out during fits.
  3. Method bias: alignment kernels, stacking, and window choices can manufacture platforms; guard with ≥ 2 independent pipelines and label-swap/time-reversal/shuffle nulls.

VI. Execution and Transparency

Pre-register baselines/azimuths/source types/detectors, timing scheme, criteria for non-dispersion/zero-lag/shapes, variable lists, positive/negative controls, exclusions, and scoring. Define held-out units by experiment/season/energy/flavor/zenith. Enable cross-team replication by sharing raw timestamps/trigger logs/delay ledgers/reconstruction weights/scripts and by running down-sampling/noise/kernel-variant/template-swap robustness tests. Release prediction cards, common-term strength tables, zero-lag indices, bandpass/alignment kernels, delay ledgers, and environment logs (text summaries), plus key intermediates.
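
A minimal sketch of one robustness test named above, down-sampling: re-estimate the common-term amplitude on random subsets and check that it stays stable as the subset shrinks. The data, subset fractions, and the 0.3 platform are hypothetical.

```python
# Minimal sketch: down-sampling robustness of a common-term estimate.
# Toy residuals with an injected 0.3 platform; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
residuals = rng.normal(0.0, 1.0, 1000) + 0.3

for frac in (0.9, 0.7, 0.5):
    # Re-estimate the platform amplitude on 500 random subsets.
    estimates = [rng.choice(residuals, int(frac * len(residuals)),
                            replace=False).mean() for _ in range(500)]
    # A robust common term keeps a stable mean; only the spread grows.
    print(f"frac={frac}: mean={np.mean(estimates):+.3f} "
          f"sd={np.std(estimates):.3f}")
```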


VII. Pass/Fail Criteria

  1. Support (passes):
    • In ≥ 2 experiments, ≥ 2 baselines, ≥ 2 pipelines and multiple source types, recover a non-dispersive, zero-lag common term.
    • The term follows predictable monotonic/plateau/threshold geometry–environment profiles and is robust to bandpass/alignment/templates/energy–flavor slicing.
    • Arbitration significantly exceeds chance and held-out units replicate (a minimal chance test is sketched after this list).
  2. Refutation (fails):
    • Results are dominated by energy/flavor dependence or timing/reconstruction systematics, or do not replicate across baselines/experiments/seasons.
    • Parameter fragility or disappearance/reversal in held-out sets.
    • Arbitration scores near chance, indistinguishable from system or method artifacts.
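
A minimal sketch of the "arbitration exceeds chance" check: score prediction cards as hit/wrong/null, exclude nulls, and test the hit rate against a pre-registered chance level with a one-sided binomial test. The counts and the 0.5 chance level are assumptions for illustration, not the program's registered values.

```python
# Minimal sketch: is the prediction-card hit rate above chance?
# Tallies and chance level are hypothetical.
from scipy.stats import binomtest

hits, wrongs, nulls = 34, 14, 12   # toy arbitration tallies
scored = hits + wrongs             # nulls are excluded from scoring
chance_rate = 0.5                  # pre-registered chance level (assumed)

result = binomtest(hits, scored, chance_rate, alternative="greater")
print(f"hit rate = {hits/scored:.2f}, one-sided p = {result.pvalue:.4f}")
```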

Copyright & License (CC BY 4.0)

Copyright: Unless otherwise noted, the copyright of “Energy Filament Theory” (text, charts, illustrations, symbols, and formulas) belongs to the author “Guanglin Tu”.
License: This work is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0). You may copy, redistribute, excerpt, adapt, and share for commercial or non‑commercial purposes with proper attribution.
Suggested attribution: Author: “Guanglin Tu”; Work: “Energy Filament Theory”; Source: energyfilament.org; License: CC BY 4.0.

First published: 2025-11-11 | Current version: v5.1
License link: https://creativecommons.org/licenses/by/4.0/