371 Publications

Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit

Jason D. Lee, Kazusato Oko, Taiji Suzuki, D. Wu

We study the problem of gradient descent learning of a single-index target function f∗(x) = σ∗(⟨x, θ⟩) under isotropic Gaussian data in ℝ^d, where the unknown link function σ∗ : ℝ → ℝ has information exponent p (defined as the lowest degree appearing in the Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with n ≳ d^{Θ(p)} samples, and such complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm (on the squared loss) learns f∗ with a complexity that is not governed by the information exponent. Specifically, for arbitrary polynomial single-index models, we establish a sample and runtime complexity of n ≃ T = Θ(d · polylog d), where Θ(·) hides a constant depending only on the degree of σ∗; this dimension dependence matches the information-theoretic limit up to polylogarithmic factors. More generally, we show that n ≳ d^{(p∗−1)∨1} samples are sufficient to achieve low generalization error, where p∗ ≤ p is the generative exponent of the link function. Core to our analysis is the reuse of the minibatch in the gradient computation, which gives rise to higher-order information beyond correlational queries.
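
For reference, the two exponents can be written explicitly. In a standard normalization (assumed here; conventions vary slightly), the link function is expanded in the probabilist's Hermite basis as

\[
\sigma^*(z) \;=\; \sum_{j \ge 0} \frac{c_j}{j!}\,\mathrm{He}_j(z), \qquad c_j \;=\; \mathbb{E}_{z \sim \mathcal{N}(0,1)}\!\left[\sigma^*(z)\,\mathrm{He}_j(z)\right], \qquad p \;=\; \min\{\, j \ge 1 : c_j \neq 0 \,\},
\]

so, for instance, σ∗(z) = He₃(z) = z³ − 3z has information exponent p = 3. The generative exponent p∗ ≤ p is, roughly speaking, the smallest information exponent attainable after applying a transformation to the labels.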

xVal: A Continuous Numerical Tokenization for Scientific Language Models

Siavash Golkar, Mariel Pettee, M. Eickenberg, A. Bietti, et al.

Due in part to their discontinuous and discrete default encodings for numbers, Large Language Models (LLMs) have not yet been commonly used to process numerically dense scientific datasets. Rendering datasets as text, however, could help aggregate diverse and multi-modal scientific data into a single training corpus, thereby potentially facilitating the development of foundation models for science. In this work, we introduce xVal, a strategy for continuously tokenizing numbers within language models that results in a more appropriate inductive bias for scientific applications. By training specially modified language models from scratch on a variety of scientific datasets formatted as text, we find that xVal generally outperforms other common numerical tokenization strategies on metrics including out-of-distribution generalization and computational efficiency.
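
To make the tokenization strategy concrete, here is a minimal sketch of the core idea as stated in the abstract: represent every number with a single learned placeholder embedding, scaled multiplicatively by its (normalized) value, so the encoding varies continuously with the number. The class name, the num_id convention, and the choice to leave ordinary tokens unscaled are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn as nn

    class XValEmbedding(nn.Module):
        # Scale a dedicated [NUM] token embedding by the numeric value.
        def __init__(self, vocab_size: int, d_model: int, num_id: int):
            super().__init__()
            self.tok = nn.Embedding(vocab_size, d_model)
            self.num_id = num_id  # id of the [NUM] placeholder token

        def forward(self, token_ids: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
            # token_ids: (batch, seq); values holds each number at its [NUM]
            # position (pre-normalized to a moderate range) and is ignored elsewhere.
            emb = self.tok(token_ids)
            scale = torch.where(token_ids == self.num_id, values, torch.ones_like(values))
            return emb * scale.unsqueeze(-1)

A numeric head on the output side would then regress the value at [NUM] positions, keeping the number pathway continuous end to end.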

Multiple Physics Pretraining for Physical Surrogate Models

Michael McCabe, B. Régaldo-Saint Blancard, Liam Holden Parker, R. Ohana, Miles Cranmer, A. Bietti, Michael Eickenberg, et al.

We introduce multiple physics pretraining (MPP), an autoregressive, task-agnostic pretraining approach for physical surrogate modeling of spatiotemporal systems with transformers. In MPP, rather than training one model on a specific physical system, we train a backbone model to predict the dynamics of multiple heterogeneous physical systems simultaneously, in order to learn features that are broadly useful across systems and facilitate transfer. To learn effectively in this setting, we introduce a shared embedding and normalization strategy that projects the fields of multiple systems into a shared embedding space. We validate the efficacy of our approach on both pretraining and downstream tasks over a broad fluid-mechanics-oriented benchmark. We show that a single MPP-pretrained transformer is able to match or outperform task-specific baselines on all pretraining sub-tasks without the need for finetuning. For downstream tasks, we demonstrate that finetuning MPP-trained models results in more accurate predictions across multiple time steps on systems with previously unseen physical components or higher-dimensional systems, compared to training from scratch or finetuning pretrained video foundation models. We open-source our code and model weights trained at multiple scales for reproducibility.
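
As an illustration of what a shared embedding and normalization strategy can look like, the sketch below normalizes each physical field per instance and projects it with a field-specific learned vector into a common embedding space. Field names, tensor shapes, and the normalization details are assumptions for illustration rather than the paper's exact architecture.

    import torch
    import torch.nn as nn

    class SharedFieldEmbedding(nn.Module):
        # Map a dict of named physical fields into one shared embedding space.
        def __init__(self, field_names, d_embed: int, eps: float = 1e-6):
            super().__init__()
            self.eps = eps
            self.proj = nn.ParameterDict(
                {name: nn.Parameter(torch.randn(d_embed)) for name in field_names}
            )

        def forward(self, fields: dict) -> torch.Tensor:
            # fields: name -> tensor of shape (batch, h, w); returns (batch, h, w, d_embed).
            out = 0.0
            for name, x in fields.items():
                mu = x.mean(dim=(-2, -1), keepdim=True)
                sd = x.std(dim=(-2, -1), keepdim=True)
                xn = (x - mu) / (sd + self.eps)                 # per-instance normalization
                out = out + xn.unsqueeze(-1) * self.proj[name]  # field-specific projection
            return out

Because missing fields are simply omitted from the sum, heterogeneous systems with different state variables can share one backbone.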

Provable Posterior Sampling with Denoising Oracles via Tilted Transport

Joan Bruna, J. Han

Score-based diffusion models have significantly advanced high-dimensional data generation across various domains, by learning a denoising oracle (or score) from datasets. From a Bayesian perspective, they offer a realistic modeling of data priors and facilitate solving inverse problems through posterior sampling. Although many heuristic methods have been developed recently for this purpose, they lack the quantitative guarantees needed in many scientific applications. This work addresses the topic from two perspectives. We first present a hardness result indicating that a generic method leveraging the prior denoising oracle for posterior sampling becomes infeasible as soon as the measurement operator is mildly ill-conditioned. We next develop the tilted transport technique, which leverages the quadratic structure of the log-likelihood in linear inverse problems in combination with the prior denoising oracle to exactly transform the original posterior sampling problem into a new one that is provably easier to sample from. We quantify the conditions under which the boosted posterior is strongly log-concave, highlighting how task difficulty depends on the condition number of the measurement matrix and the signal-to-noise ratio. The resulting general scheme is shown to match the best-known sampling methods for Ising models, and is further validated on high-dimensional Gaussian mixture models.
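
For concreteness, the quadratic structure the abstract refers to is that of a linear measurement y = Ax + ξ with Gaussian noise ξ ∼ 𝒩(0, σ²I) (the standard linear-inverse-problem setup, assumed here), under which

\[
\log p(x \mid y) \;=\; \log p(x) \;-\; \frac{1}{2\sigma^2}\,\|y - Ax\|^2 \;+\; \mathrm{const},
\]

so the posterior is the prior tilted by a quadratic potential. It is then natural that the difficulty of sampling is quantified by the condition number of A and the signal-to-noise ratio.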

An adaptive spectral method for oscillatory second-order linear ODEs with frequency-independent cost

F. Agocs, A. Barnett

We introduce an efficient numerical method for second-order linear ODEs whose solution may vary between highly oscillatory and slowly changing over the solution interval. In oscillatory regions the solution is generated via a nonoscillatory phase function that obeys the nonlinear Riccati equation. We propose a defect correction iteration that gives an asymptotic series for such a phase function; this is numerically approximated on a Chebyshev grid with a small number of nodes. For analytic coefficients we prove that each iteration, up to a certain maximum number, reduces the residual by a factor of order of the local frequency. The algorithm adapts both the stepsize and the choice of method, switching to a conventional spectral collocation method away from oscillatory regions. In numerical experiments we find that our proposal outperforms other state-of-the-art oscillatory solvers, most significantly at low to intermediate frequencies and at low tolerances, where it may use up to \(10^6\) times fewer function evaluations. Even in high-frequency regimes, our implementation is on average 10 times faster than other specialized solvers.
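
To sketch the central object (in the simplest setting, with no first-derivative term): for the oscillatory equation below, writing the solution as the exponential of an antiderivative converts the linear ODE into the nonlinear Riccati equation for the phase derivative,

\[
u''(t) + \omega^2(t)\,u(t) = 0, \qquad u(t) = \exp\!\Big(\int^{t} z(s)\,\mathrm{d}s\Big) \;\;\Longrightarrow\;\; z'(t) + z(t)^2 + \omega^2(t) = 0,
\]

whose nonoscillatory solutions behave like z ≈ ±iω(t) to leading order when ω varies slowly. Each defect-correction iteration refines such a solution, and the exponential of the accumulated, slowly varying phase then reproduces the rapidly oscillating u without resolving individual oscillations.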

On the construction of scattering matrices for irregular or elongated enclosures using Green’s representation formula

Carlos Borges, L. Greengard, Michael O'Neil, M. Rachh

Multiple scattering methods are widely used to reduce the computational complexity of acoustic or electromagnetic scattering problems when waves propagate through media containing many identical inclusions. Historically, this numerical technique has been limited to situations in which the inclusions (particles) can be covered by nonoverlapping disks in two dimensions or spheres in three dimensions. This allows for the use of separation of variables in cylindrical or spherical coordinates to represent the solution to the governing partial differential equation. Here, we provide a more flexible approach, applicable to a much larger class of geometries. We use a Green's representation formula and the associated layer potentials to construct incoming and outgoing solutions on rectangular enclosures. The performance and flexibility of the resulting scattering operator formulation in two dimensions is demonstrated via several numerical examples for multi-particle scattering in free space as well as in layered media. The mathematical formalism extends directly to the three-dimensional case and can easily be coupled with several commercial numerical PDE software packages.
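
The representation underlying the construction is classical; for a radiating solution u of the 2D Helmholtz equation (Δ + k²)u = 0 exterior to a closed curve Γ (stated here as an illustration, with the usual sign conventions),

\[
u(x) \;=\; \int_{\Gamma} \frac{\partial G_k(x,y)}{\partial n_y}\,u(y)\,\mathrm{d}s_y \;-\; \int_{\Gamma} G_k(x,y)\,\frac{\partial u}{\partial n}(y)\,\mathrm{d}s_y, \qquad G_k(x,y) \;=\; \frac{i}{4}\,H^{(1)}_0\!\big(k\,|x-y|\big),
\]

i.e. u = D[u] − S[∂ₙu] in terms of the double- and single-layer potentials. Evaluating these potentials on the boundary of a rectangular enclosure yields incoming and outgoing data for each inclusion without any separation-of-variables constraint on the enclosure's shape.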

Learning Gaussian Multi-Index Models with Gradient Flow: Time Complexity and Directional Convergence

B. Şimşek, Amire Bendjeddou, Daniel Hsu

This work focuses on the gradient flow dynamics of a neural network model that uses correlation loss to approximate a multi-index function on high-dimensional standard Gaussian data. Specifically, the multi-index function we consider is a sum of neurons $f^*(x) \!=\! \sum_{j=1}^k \! \sigma^*(v_j^T x)$ where $v_1, \dots, v_k$ are unit vectors, and $\sigma^*$ lacks the first and second Hermite polynomials in its Hermite expansion. It is known that, for the single-index case ($k\!=\!1$), overcoming the search phase requires polynomial time complexity. We first generalize this result to multi-index functions characterized by vectors in arbitrary directions. After the search phase, it is not clear whether the network neurons converge to the index vectors or get stuck at a sub-optimal solution. When the index vectors are orthogonal, we give a complete characterization of the fixed points and prove that neurons converge to the nearest index vectors. Therefore, using $n \! \asymp \! k \log k$ neurons ensures that gradient flow finds the full set of index vectors with high probability over random initialization. When $v_i^T v_j \!=\! \beta \! \geq \! 0$ for all $i \neq j$, we prove the existence of a sharp threshold $\beta_c \!=\! c/(c+k)$ at which the fixed point that computes the average of the index vectors transitions from a saddle point to a minimum. Numerical simulations show that a correlation loss and mild overparameterization suffice to learn all of the index vectors when they are nearly orthogonal; however, the correlation loss fails when the dot product between the index vectors exceeds a certain threshold.
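
A toy numerical illustration of the orthogonal case: taking σ∗ = He₃ (so that, by Hermite orthogonality, the population correlation between a neuron w and f∗ is proportional to Σⱼ (wᵀvⱼ)³), projected gradient ascent on the sphere drives a single neuron to one of the index vectors. This is an illustrative sketch under those stated assumptions, not the paper's exact experiment.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 50, 5
    V = np.eye(d)[:k]        # orthogonal index vectors v_1, ..., v_k (rows)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)   # random initialization on the sphere

    lr = 0.05
    for _ in range(2000):
        a = V @ w                    # overlaps (w . v_j)
        grad = 3.0 * (a ** 2) @ V    # gradient of sum_j (w . v_j)^3
        grad -= (grad @ w) * w       # project onto the tangent space of the sphere
        w += lr * grad
        w /= np.linalg.norm(w)

    print("overlaps with index vectors:", np.round(V @ w, 3))
    # one overlap approaches 1 (typically the largest at initialization); the rest vanish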

Dynamic allostery drives autocrine and paracrine TGF-β signaling

Mingliang Jin, Robert I. Seed, P. Cossio, et al.

TGF-β, essential for development and immunity, is expressed as a latent complex (L-TGF-β) non-covalently associated with its prodomain and presented on immune cell surfaces by covalent association with GARP. Binding to integrin αvβ8 activates L-TGF-β1/GARP. The dogma is that mature TGF-β must physically dissociate from L-TGF-β1 for signaling to occur. Our previous studies discovered that αvβ8-mediated TGF-β autocrine signaling can occur without TGF-β1 release from its latent form. Here, we show that mice engineered to express TGF-β1 that cannot be released from L-TGF-β1 survive without the early lethal tissue inflammation seen in TGF-β1 deficiency. Combining cryogenic electron microscopy with cell-based assays, we reveal a dynamic allosteric mechanism of release-free autocrine TGF-β1 signaling, in which αvβ8 binding redistributes the intrinsic flexibility of L-TGF-β1 to expose TGF-β1 to its receptors. Dynamic allostery explains the TGF-β3 latency/activation mechanism and why TGF-β3 functions distinctly from TGF-β1, suggesting that the mechanism applies broadly to other flexible cell-surface receptor/ligand systems.

Simulation-based inference of single-molecule experiments

Lars Dingeldein, P. Cossio, Roberto Covino

Single-molecule experiments are a unique tool to characterize the structural dynamics of biomolecules. However, reconstructing molecular details from noisy single-molecule data is challenging. Simulation-based inference (SBI) integrates statistical inference, physics-based simulators, and machine learning, and is emerging as a powerful framework for analyzing complex experimental data. Recent advances in deep learning have accelerated the development of new SBI methods, enabling the application of Bayesian inference to an ever-increasing number of scientific problems. Here, we review the nascent application of SBI to the analysis of single-molecule experiments. We introduce parametric Bayesian inference and discuss its limitations. We then give an overview of emerging deep-learning-based SBI methods that perform Bayesian inference for complex models encoded in computer simulators. We illustrate the first applications of SBI to single-molecule force-spectroscopy and cryo-electron microscopy experiments. SBI allows us to leverage powerful computer algorithms modeling complex biomolecular phenomena to connect scientific models and experiments in a principled way.
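
The simulate-and-compare principle at the heart of SBI can be stated in a few lines. The sketch below uses the simplest flavor, rejection ABC, rather than the neural estimators the review covers: draw parameters from the prior, run the simulator, and keep the parameters whose simulated data land near the observation. The toy simulator, summary statistic, and tolerance are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulator(theta, n=100):
        # toy "experiment": the mean of n noisy measurements of an unknown parameter
        return rng.normal(theta, 1.0, size=n).mean()

    x_obs = 0.7                                   # observed summary statistic
    theta_prior = rng.uniform(-2.0, 2.0, 50_000)  # draws from a uniform prior
    x_sim = np.array([simulator(t) for t in theta_prior])
    accepted = theta_prior[np.abs(x_sim - x_obs) < 0.05]

    print(f"posterior mean ~ {accepted.mean():.3f} from {accepted.size} accepted draws")

Deep-learning SBI methods replace the hard accept/reject step with a trained conditional density or ratio estimator, which is what makes inference tractable for expensive, high-dimensional simulators.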
