On bias, variance, overfitting, gold standard and consensus in single-particle analysis by cryo-electron microscopy

Authors

SORZANO Carlos, JIMÉNEZ-MORENO Amaya, MALUENDA David, MARTÍNEZ Marta, RAMÍREZ-APORTELA Erney, KRIEGER James, MELERO Roberto, CUERVO Ana, CONESA Javier, FILIPOVIČ Jiří, CONESA Pablo, DEL CANO Laura, FONSECA Yunior, JIMÉNEZ-DE LA MORENA Jorge, LOSANA Patricia, SÁNCHEZ-GARCÍA Ruben, STŘELÁK David, FERNÁNDEZ-GIMÉNEZ Estrella, DE ISIDRO-GÓMEZ Federico, HERREROS David, VILAS Jose Luis, MARABINI Roberto, CARAZO Jose Maria

Year of publication: 2022
Type: Article in Periodical
Magazine / Source: Acta Crystallographica Section D: Structural Biology
MU Faculty or unit: Faculty of Informatics

DOI: http://dx.doi.org/10.1107/S2059798322001978
Keywords: single-particle analysis; cryo-electron microscopy; parameter estimation; image processing; bias; variance; overfitting; gold standard
Description: Cryo-electron microscopy (cryoEM) has become a well established technique for elucidating the 3D structures of biological macromolecules. Projection images from thousands of macromolecules that are assumed to be structurally identical are combined into a single 3D map representing the Coulomb potential of the macromolecule under study. This article discusses possible caveats along the image-processing path and how to avoid them in order to obtain a reliable 3D structure. Some of these problems are very well known in the community and may be referred to as sample-related (such as specimen denaturation at interfaces, or a non-uniform projection geometry leading to underrepresented projection directions). The rest are related to the algorithms used. While some of these, such as the use of an incorrect initial volume, have been discussed in depth in the literature, others have received much less attention despite being fundamental to any data-analysis approach. Chief among them are the instabilities in estimating many of the key parameters required for a correct 3D reconstruction, which occur all along the processing workflow and may significantly affect the reliability of the whole process. In the field, the term overfitting has been coined to refer to some particular kinds of artifacts. It is argued that overfitting is a statistical bias in key parameter-estimation steps of the 3D reconstruction process, including intrinsic algorithmic bias. It is also shown that the common tools (Fourier shell correlation) and strategies (gold standard) normally used to detect or prevent overfitting do not fully protect against it. Instead, it is proposed that the bias that leads to overfitting is much easier to detect at the level of parameter estimation than once the particle images have been combined into a 3D map. Comparing the results of multiple algorithms (or, at least, of independent executions of the same algorithm) can detect parameter bias, and these multiple executions can then be averaged to give a lower-variance estimate of the underlying parameters.
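To make the last point concrete, below is a minimal Python sketch, not taken from the article, of the parameter-level consensus idea described above: several independent executions produce per-particle angular estimates, particles whose estimates disagree across runs are flagged as potentially biased, and the remaining estimates are averaged to obtain a lower-variance consensus. All names here (e.g. detect_parameter_bias, angular_distance) are hypothetical placeholders for whatever alignment and comparison routines are actually used in a given workflow.

import numpy as np

def angular_distance(a_deg: np.ndarray, b_deg: np.ndarray) -> np.ndarray:
    """Smallest absolute difference between two angles, in degrees."""
    d = np.abs(a_deg - b_deg) % 360.0
    return np.minimum(d, 360.0 - d)

def detect_parameter_bias(runs: list, tol_deg: float = 5.0):
    """
    `runs` holds one array of estimated in-plane rotation angles per independent
    execution (shape: n_particles,). Particles whose estimates disagree by more
    than `tol_deg` across runs are flagged as unreliable; the rest are combined
    via the circular mean to reduce the variance of the estimate.
    """
    runs = np.stack(runs)                                  # (n_runs, n_particles)
    # Spread of each particle's estimates, measured against the first run.
    spread = angular_distance(runs, runs[0]).max(axis=0)
    reliable = spread <= tol_deg

    # Circular mean over runs gives a lower-variance consensus angle.
    rad = np.deg2rad(runs)
    consensus = np.rad2deg(np.arctan2(np.sin(rad).mean(axis=0),
                                      np.cos(rad).mean(axis=0))) % 360.0
    return consensus, reliable

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_rot = rng.uniform(0, 360, size=1000)
    # Three simulated "independent executions": the same ground truth plus
    # independent noise, with 10% of particles receiving essentially random
    # (i.e. biased) assignments.
    runs = []
    for _ in range(3):
        est = (true_rot + rng.normal(0, 2, size=true_rot.size)) % 360.0
        bad = rng.random(true_rot.size) < 0.10
        est[bad] = rng.uniform(0, 360, size=bad.sum())
        runs.append(est)

    consensus, reliable = detect_parameter_bias(runs)
    err = angular_distance(consensus[reliable], true_rot[reliable])
    print(f"{reliable.mean():.0%} of particles agree across runs; "
          f"mean angular error of consensus: {err.mean():.2f} deg")

The demo at the bottom illustrates the point of the abstract: disagreement between executions exposes the mis-assigned (biased) particles without ever reconstructing a 3D map, and averaging the agreeing estimates reduces their variance.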