Antimicrobial resistance is a growing problem. Peptides, short sequences of amino acids, are nature’s first defense against bacteria. Research on antimicrobial peptides is promising, but it could be much more productive if combined with machine learning on big data. Collecting, collating, and organizing big data, however, is a public good and underprovided. Current peptide databases are small, inconsistent, incompatible with one another, and biased against negative controls. Thus, there is scope for a million-peptide database modelled on something like the Human Genome Project or the Protein Data Bank:
ML needs data. Google’s AlphaGo trained on 30 million moves from human games and orders of magnitude more from games it played against itself. The largest language models are trained on at least 60 terabytes of text. AlphaFold was trained on just over 100,000 3D protein structures from the Protein Data Bank.
The data available for antimicrobial peptides is nowhere near these benchmarks. Some databases contain a few thousand peptides each, but they are scattered, unstandardized, incomplete, and often duplicative. Data on a few thousand peptide sequences and a scattershot view of their biological properties are simply not sufficient to get accurate ML predictions for a system as complex as protein-chemical reactions. For example, the APD3 database is small, with just under 4,000 sequences, but it is among the most tightly curated and detailed. However, most of its sequences come from frogs and other amphibians, due to path-dependent discovery of peptides in that taxon. Another database, CAMPR4, has on the order of 20,000 sequences, but around half are “predicted” or synthetic peptides that may lack experimental validation, and these entries carry less information about source organism and activity. Each of these sources uses a different format, so it is not easy to feed all the sequences into one model. More inconsistencies and idiosyncrasies stack up across the dozens of other datasets available.
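To make the incompatibility concrete, here is a minimal sketch of what merging such databases involves. The field names (`Sequence`, `seq`, `evidence`, and so on) are invented for illustration and are not the actual APD3 or CAMPR4 schemas; the point is only that each source needs its own mapping onto a shared record format before any unified model can be trained.

```python
# Hypothetical normalization layer: each source database uses its own field
# names and conventions, so every source needs a bespoke mapping onto one
# shared schema before the records can be pooled.

def normalize(record, source):
    """Map a source-specific peptide record onto a shared schema."""
    if source == "curated_db":  # an APD3-like, hand-curated source
        return {
            "sequence": record["Sequence"].upper(),
            "source_organism": record.get("Source", "unknown"),
            "validated": True,  # assume curated entries are experimentally tested
        }
    if source == "bulk_db":  # a CAMPR4-like source mixing predicted entries
        return {
            "sequence": record["seq"].upper(),
            "source_organism": record.get("organism", "unknown"),
            "validated": record.get("evidence") == "experimental",
        }
    raise ValueError(f"unknown source: {source}")

def merge(records):
    """Deduplicate by sequence, preferring experimentally validated entries."""
    merged = {}
    for rec in records:
        seq = rec["sequence"]
        if seq not in merged or (rec["validated"] and not merged[seq]["validated"]):
            merged[seq] = rec
    return list(merged.values())
```

Even this toy version shows why the work scales badly: every new source adds another mapping, another set of judgment calls about validation status, and another chance for silent duplication.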
There is even less negative training data; that is, data on all the amino-acid sequences without interesting, publishable properties. In current ML research, labs will test dozens or even hundreds of peptide sequences for activity against certain pathogens, but they usually publish and upload only the sequences that worked.
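The publication-bias problem above can be sketched in a few lines. The screening setup and the "activity" rule (peptides rich in lysine count as hits) are entirely invented for this illustration; the point is what the public record looks like afterward.

```python
import random

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def screen(n):
    """Simulate a lab screening n random 12-residue peptides.

    Returns (sequence, active) pairs. The activity rule here is a
    made-up stand-in for a real wet-lab assay.
    """
    tested = ["".join(random.choices(AMINO_ACIDS, k=12)) for _ in range(n)]
    return [(seq, seq.count("K") >= 3) for seq in tested]

results = screen(1000)
# Only the hits get published and uploaded; the negatives are discarded.
published = [seq for seq, active in results if active]
print(f"tested {len(results)}, published {len(published)}")
```

A database built from `published` alone contains only one label. A model trained on it never sees the hundreds of inactive sequences that were actually tested, so it cannot learn what separates hits from misses, which is exactly the information a systematic million-peptide screen would preserve.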
…The data problem facing peptide research is solvable with targeted investments in data infrastructure. We can make a million-peptide database.
There are no significant scientific barriers to generating a 1,000x or 10,000x larger peptide dataset. Several high-throughput testing methods have been successfully demonstrated, with some screening as many as 800,000 peptide sequences and nearly doubling the number of unique antimicrobial peptides reported in publicly available databases. These methods will need to be scaled up, not only by testing more peptides, but also by testing them against different bacteria, checking for human toxicity, and testing other chemical properties. But scaling is an infrastructure problem, not a scientific one.
This strategy of targeted data infrastructure investments has three successful precedents: PubChem, the Human Genome Project, and the Protein Data Bank.
Much more in this excellent piece of science and economics from IFP and Max Tabarrok.
The post It’s Time to Build the Peptidome! appeared first on Marginal REVOLUTION.