HAL Publications

2024

Software

Title
Code for reproducible research - A path-norm toolkit for modern networks: consequences, promises and challenges
Authors
Antoine Gonon, Nicolas Brisebarre, Elisa Riccietti, Rémi Gribonval
Reference
2024, ⟨swh:1:dir:119d3f903d3b6e0a776bd64c71317331839390d4;origin=https://hal.archives-ouvertes.fr/hal-04498597;visit=swh:1:snp:3b7c23b687511f3d2e4673d222c3ba96195bb004;anchor=swh:1:rel:7c123216ebb2018ea3290cbdb4cf4f4b8ddea964;path=/⟩
Abstract
In the interest of reproducible research, this is exactly the version of the code used to generate the figures in the paper "A path-norm toolkit for modern networks: consequences, promises and challenges" by the same authors, available at https://hal.science/hal-04225201. Any updates to this code will be available at https://github.com/agonon/pathnorm_toolkit
Full text access and BibTeX
https://hal.science/hal-04498597/file/pathnorm_toolkit-1.0.0.zip BibTeX

2023

Journal articles

Title
Approximation speed of quantized vs. unquantized ReLU neural networks and beyond
Authors
Antoine Gonon, Nicolas Brisebarre, Rémi Gribonval, Elisa Riccietti
Reference
IEEE Transactions on Information Theory, 2023, 69 (6), pp.3960-3977. ⟨10.1109/TIT.2023.3240360⟩
Abstract
We deal with two complementary questions about the approximation properties of ReLU networks. First, we study how uniform quantization of ReLU networks with real-valued weights impacts their approximation properties. We establish an upper bound on the minimal number of bits per coordinate needed for uniformly quantized ReLU networks to keep the same polynomial asymptotic approximation speeds as unquantized ones. We also characterize the error of nearest-neighbour uniform quantization of ReLU networks. This is achieved using a new lower bound on the Lipschitz constant of the map that associates the parameters of ReLU networks with their realization, together with an upper bound generalizing classical results. Second, we investigate when ReLU networks can, or cannot, be expected to have better approximation properties than other classical approximation families. Indeed, several approximation families share the following common limitation: their polynomial asymptotic approximation speed of any set is bounded from above by the encoding speed of this set. We introduce a new abstract property of approximation families, called infinite-encodability, which implies this upper bound. Many classical approximation families, defined with dictionaries or with ReLU networks, are shown to be infinite-encodable. This unifies and generalizes several situations where this upper bound was previously known. (A toy quantization sketch follows this entry.)
Full text access and BibTeX
https://hal.science/hal-03672166/file/v2_preprint_approximation_speed_of_quantized_vs_unquantized_ReLU_neural_networks_and_beyond.pdf BibTeX
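
To make the quantization setting concrete, here is a minimal sketch of nearest-neighbour uniform quantization with b bits per coordinate, in Python. The grid, the range K, and the function name are illustrative assumptions, not the paper's exact scheme or constants.

```python
# Toy illustration (assumed setup, not the paper's exact scheme):
# nearest-neighbour uniform quantization with b bits per coordinate.
import numpy as np

def quantize_nearest(w, b, K=1.0):
    """Round each coordinate of w (assumed to lie in [-K, K]) to the
    nearest point of the uniform grid {-K, -K+h, ..., K} with step
    h = 2K / (2**b - 1), i.e. 2**b levels = b bits per coordinate."""
    h = 2.0 * K / (2**b - 1)
    return -K + np.round((w + K) / h) * h

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=10_000)
for b in (2, 4, 8):
    h = 2.0 / (2**b - 1)
    err = np.abs(w - quantize_nearest(w, b)).max()
    print(f"b={b} bits: max per-coordinate error {err:.3e} (bound h/2 = {h/2:.3e})")
```

The per-coordinate error is at most h/2; combined with a Lipschitz bound on the parameters-to-realization map, as studied in the paper, this controls how much the realized function can move under quantization.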

Conference papers

Title
Can sparsity improve the privacy of neural networks?
Authors
Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen
Reference
GRETSI 2023 - XXIXème Colloque Francophone de Traitement du Signal et des Images, Aug 2023, Grenoble, France
Abstract
Sparse neural networks are mainly motivated by resource efficiency: they use fewer parameters than their dense counterparts while reaching comparable accuracy. This article empirically investigates whether sparsity could also improve the privacy of the data used to train the networks. The experiments show positive correlations between the sparsity of the model, its privacy, and its classification error. Simply comparing the privacy of two models with different sparsity levels can therefore yield misleading conclusions about the role of sparsity, because of the additional correlation with the classification error. From this perspective, some caveats are raised about previous works that investigate sparsity and privacy. (A sketch of one standard empirical privacy probe follows this entry.)
Full text access and BibTeX
https://hal.science/hal-04062317/file/HAL_gretsi.pdf BibTeX
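
As a concrete (and assumed) example of how privacy can be probed empirically, here is a standard loss-threshold membership-inference sketch in Python; the paper does not necessarily use this exact protocol, and all names below are illustrative.

```python
# Toy illustration (assumed probe, not necessarily the paper's protocol):
# loss-threshold membership inference. The easier it is to separate
# training losses from test losses, the less private the model.
import numpy as np

def membership_advantage(train_losses, test_losses):
    """Best accuracy of the rule 'member iff loss < t' over all
    thresholds t, minus the 0.5 random-guess baseline:
    0 = no measurable leakage, 0.5 = perfect membership recovery."""
    losses = np.concatenate([train_losses, test_losses])
    labels = np.concatenate([np.ones(len(train_losses)),
                             np.zeros(len(test_losses))])
    best = 0.5
    for t in np.unique(losses):
        best = max(best, np.mean((losses < t).astype(float) == labels))
    return best - 0.5

# Synthetic losses: an overfit model has lower losses on training points.
rng = np.random.default_rng(0)
print(membership_advantage(rng.exponential(0.1, 1000),
                           rng.exponential(0.5, 1000)))
```

Note that the paper's caveat applies to any such probe: a sparser model with a different classification error will typically also show a different train/test loss gap, so comparing raw scores across sparsity levels can conflate privacy with accuracy.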

Preprints, Working Papers, ...

Title
A path-norm toolkit for modern networks: consequences, promises and challenges
Authors
Antoine Gonon, Nicolas Brisebarre, Elisa Riccietti, Rémi Gribonval
Reference
2023
Abstract
This work introduces the first toolkit around path-norms that fully encompasses general DAG ReLU networks with biases, skip connections, and any operation based on the extraction of order statistics: max pooling, GroupSort, etc. This toolkit notably allows us to establish generalization bounds for modern neural networks that are not only the most widely applicable path-norm-based ones, but that also recover or beat the sharpest known bounds of this type. These extended path-norms further enjoy the usual benefits of path-norms: ease of computation, invariance under the symmetries of the network, and improved sharpness on layered fully-connected networks compared to the product of operator norms, another commonly used complexity measure. The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds by numerically evaluating the sharpest known bounds for ResNets on ImageNet. (A toy illustration of the path-norm computation follows this entry.)
Full text access and BibTeX
https://hal.science/hal-04225201/file/Gonon_ICLR_24.pdf BibTeX
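
To illustrate the "ease of computation" claim, here is a minimal sketch of the classical L1 path-norm identity for a toy bias-free fully-connected ReLU network: it equals a single forward pass of the all-ones input through the network with weights replaced by their absolute values. This is only the textbook special case; the paper's toolkit handles biases, skip connections, max pooling, and more.

```python
# Toy illustration: L1 path-norm of a bias-free fully-connected ReLU
# network = sum over all input-output paths of the product of |weight|
# along the path, computable with one "absolute" forward pass.
import numpy as np

def l1_path_norm(weights):
    """weights = [W1, ..., WL], where layer l maps R^{n_{l-1}} -> R^{n_l}."""
    x = np.ones(weights[0].shape[1])   # all-ones input
    for W in weights:
        x = np.abs(W) @ x              # accumulates |W_L| @ ... @ |W_1| @ 1
    return float(x.sum())

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((5, 3)), rng.standard_normal((2, 5))
# Brute-force check: enumerate all 3 * 5 * 2 input-output paths.
brute = sum(abs(W2[k, j]) * abs(W1[j, i])
            for i in range(3) for j in range(5) for k in range(2))
print(np.isclose(l1_path_norm([W1, W2]), brute))   # True
```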