Benzos, barbiturates, and bioinformatics

Painkillers and anaesthetics, antibiotics and antivirals, drugs that fight afflictions from insomnia to cancer: modern pharmaceuticals have transformed human health and our quality of life, but none of these vital treatments would be available without constant advancement in the field of drug discovery. Since ancient times, humans have taken advantage of the natural world to find or produce therapeutic and healing agents. Today, drug discovery is a rigorous scientific pursuit, aided in large part by modern technology and computation. But what exactly is the role of bioinformatics in this crucial field, and what progress do we still have to make?

Before the advent of modern science and technology, pharmaceuticals were based only on naturally occurring medicines, coming from sources like herbs and fungi. Advances in chemistry in the late nineteenth and early twentieth centuries introduced the possibility of further purifying and modifying these natural pharmaceuticals, as well as of creating entirely synthetic therapeutics. Later in the twentieth century, the concept of screening began to grow more popular, with discovery efforts focusing on identifying pharmaceutically active components in natural sources and isolating them for medicinal use.

In the modern day, however, scientific and technological advances have made it possible to screen for these active compounds on a much greater scale, and from much more diverse and extensive compound libraries. Automated, miniaturized, high-throughput screening methods allow researchers to test many more compounds for biological activity faster than ever before. A single facility can screen millions of candidates for specific properties in a day, with those that demonstrate promising activity referred to as “hits.” But how is this process actually conducted?

Automated high-throughput screens typically involve the use of large plates containing hundreds or thousands of tiny wells. Each well is filled with a biological entity, whose exact identity will vary depending on the particular screen—for example, bacterial colonies, human cells, or protein solutions may be used. Each candidate compound is robotically added to a different well, and a reaction is allowed to take place. Finally, readouts for each well can be assessed either manually or automatically. In the case of bacterial cells, researchers might be looking for compounds that decrease replication; for human cells, increased cell size may be the target; or researchers may be seeking a compound that binds to a particular protein.
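To make the automated readout step concrete, here is a minimal, hypothetical sketch in Python of how hit-calling might work for a single plate: each well's signal is compared against untreated control wells, and wells that deviate by more than a chosen threshold are flagged as hits. The plate values, well names, and cutoff are illustrative assumptions, not a description of any particular screening platform.

```python
# Minimal, hypothetical hit-calling sketch for one microtiter plate.
# Assumes each well's readout is a single number (e.g. a fluorescence
# signal) and that a handful of wells contain untreated controls.

from statistics import mean, stdev

def call_hits(readouts, control_wells, z_cutoff=3.0):
    """Flag wells whose signal deviates from the control wells
    by more than z_cutoff standard deviations."""
    controls = [readouts[w] for w in control_wells]
    mu, sigma = mean(controls), stdev(controls)

    hits = []
    for well, signal in readouts.items():
        if well in control_wells:
            continue
        z = (signal - mu) / sigma
        if abs(z) >= z_cutoff:
            hits.append((well, round(z, 1)))
    return hits

# Toy plate: six wells, two of them untreated controls.
plate = {"A1": 100.0, "A2": 98.0, "A3": 97.5, "A4": 55.0, "A5": 101.0, "A6": 99.0}
print(call_hits(plate, control_wells={"A1", "A2"}))
# Only A4, whose signal drops far below the controls, is flagged as a hit.
```

Real screening software adds many layers on top of this, such as plate-level quality control and correction for positional effects, but the core idea of flagging wells that stand out from controls is the same.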

Building on this general principle, growing knowledge of molecular biology and improving technology have allowed scientists to identify and produce pharmaceuticals on a massive scale. However, high-throughput screening has not been without its obstacles, limitations, and criticisms. Screens are often expensive investments with little payoff; although thousands of compounds may be screened at a time, the vast majority will not turn out to be hits. Even compounds that are identified as hits in the initial screening stage often turn out to be unsuitable in later phases of investigation. High-throughput screening also cannot assess toxicity or bioavailability, nor the other complications that only arise in in vivo systems. For example, a compound may bind to a protein in an isolated well of a microtiter plate, but fail to bind to that same protein in a living cell or animal.

Bioinformatics approaches, however, can provide a partial solution to these limitations. Though the fundamental constraints of high-throughput screening cannot be completely overcome, they can be mitigated by taking advantage of the wealth of biochemical information that scientists have accumulated over recent decades. 

For instance, if we can narrow down the compounds being screened to only those most likely to be hits, we can improve the efficiency of screening systems. Artificial intelligence engines can comb through scientific papers and data sheets, searching for keywords and chemical profiles to find compounds with established biological properties; essentially, they conduct literature reviews far more rapidly and thoroughly than a human could. Alternatively, bioinformatics tools can analyze compounds and attempt to predict their biological activity from their three-dimensional structures, whether by comparing them to the structures of known hits or through direct molecular analysis.
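As one illustration of the “compare to known hits” idea, the sketch below uses the open-source RDKit cheminformatics toolkit to compute structural fingerprints for candidate compounds and rank them by Tanimoto similarity to a known hit. The molecules, fingerprint settings, and cutoff are arbitrary choices for illustration; real virtual-screening pipelines are considerably more involved.

```python
# Hypothetical similarity-based prioritization sketch using RDKit.
# Morgan fingerprints stand in here for the richer structural
# comparisons described above.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    """Morgan (circular) fingerprint for a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

known_hit = fingerprint("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, standing in for a known hit

candidates = {
    "salicylic acid": "O=C(O)c1ccccc1O",
    "ethanol": "CCO",
}

for name, smiles in candidates.items():
    similarity = DataStructs.TanimotoSimilarity(known_hit, fingerprint(smiles))
    print(f"{name}: Tanimoto similarity {similarity:.2f}")

# Candidates scoring above some chosen cutoff (say 0.4) would be
# prioritized for inclusion in the physical screen.
```

In this toy example, salicylic acid scores far higher than ethanol against the aspirin “hit,” so it would be the compound worth sending to the screening facility.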

Additionally, if we can narrow down the compounds being screened to only those already known to be safe in humans, we can sidestep some of the safety problems that otherwise only surface in vivo. Computational approaches can also aid in predicting a drug's molecular mechanism, its potential side effects, and the likelihood of drug resistance, all of which are vital considerations in the drug discovery pipeline. Combining all of these ideas, we can harness AI to set up high-throughput screens that are much more likely to produce hits that hold up in further tests and, eventually, in a clinical setting.

Beyond finding promising candidate compounds to improve the efficiency of high-throughput screening, bioinformatics can also help us identify therapeutic targets for these screens. More and more genes have been implicated in health and disease, and bioinformatic tools can help find additional such genes and determine their sequences and functions. Identifying promising targets is another way to raise the efficacy and success rate of high-throughput screens.

These are just a few of the invaluable ways in which bioinformatic tools can make high-throughput screening cheaper, more efficient, and more effective. The ability to rapidly analyze huge amounts of pre-existing data from a variety of sources can only grow more important as our biological knowledge continues to surge. Drug discovery has come a long way since humans first turned to natural medicines, but it still has a long way to go, and bioinformatics will be integral at every step.