Probing optimisation in physics-informed neural networks
Nayara Fonseca, Veronica Guidetti, et al.
ICLR 2023
We introduce k-variance, a generalization of variance built on the machinery of random bipartite matchings. k-variance measures the expected cost of matching two sets of k samples from a distribution to each other, capturing local rather than global information about a measure as k increases; it is easily approximated stochastically using sampling and linear programming. In addition to defining k-variance and proving its basic properties, we provide in-depth analysis of this quantity in several key cases, including one-dimensional measures, clustered measures, and measures concentrated on low-dimensional subsets of ℝⁿ. We conclude with experiments and open problems motivated by this new way to summarize distributional shape.
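The abstract above notes that k-variance is the expected cost of optimally matching two independent sets of k samples, and that it can be approximated stochastically via sampling and linear programming. A minimal Monte Carlo sketch of that idea (not the authors' implementation; the function name, sampler interface, and use of SciPy's assignment solver are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def k_variance(sample_fn, k, n_trials=200, rng=None):
    """Monte Carlo estimate of the expected optimal bipartite matching
    cost between two independent sets of k samples from a distribution.

    sample_fn(k, rng) should return an array of shape (k, d).
    """
    rng = np.random.default_rng(rng)
    costs = []
    for _ in range(n_trials):
        x = sample_fn(k, rng)  # first batch of k samples
        y = sample_fn(k, rng)  # second, independent batch
        # pairwise Euclidean distance matrix between the two batches
        d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
        # optimal bipartite matching (the linear-programming step)
        rows, cols = linear_sum_assignment(d)
        costs.append(d[rows, cols].mean())
    return float(np.mean(costs))

# Example: for k = 1 this reduces to E|X - Y| between two draws
est = k_variance(lambda k, rng: rng.normal(size=(k, 1)), k=1,
                 n_trials=500, rng=0)
```

For a point mass the matching cost is zero, and as k grows the estimate reflects increasingly local structure of the measure, consistent with the behavior described in the abstract.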
Gabriel Rioux, Apoorva Nitsure, et al.
NeurIPS 2024
Pietro Tassan, Darius Urbonas, et al.
SPIE Photonics West 2025
Anthony Praino, Lloyd Treinish, et al.
AGU 2024