What do deep neural networks and cancer immunotherapy have in common?
While both are among the most transformational areas of modern science, 30 years ago, these fields were all but ridiculed by the scientific community. As a result, progress in each happened at the sidelines of academia for decades.
In 1969, Marvin Minsky and Seymour Papert published their book “Perceptrons,” which exposed flaws in the early conceptions of neural networks (the backbone of most modern AI) and argued that the whole approach was ineffective. From the 1970s through the 1990s, many of the most prominent computer scientists shared that skepticism and doubted neural networks would ever work for most applications.
Meanwhile, during the 1980s through the 2000s, neural network pioneers and believers — Geoffrey Hinton, Yoshua Bengio and Yann LeCun — continued their efforts and pursued their intuition that neural networks would succeed. These researchers found that most of the original ideas were correct but simply needed more data (think of ImageNet), computational power and further modeling tweaks to be effective.
Hinton, Bengio and LeCun were awarded the Turing Award in 2018 (the computer science equivalent of a Nobel Prize) for their work. Today, their breakthroughs have made neural networks the most vibrant area of computer science and have revolutionized fields such as computer vision and natural language processing.
Cancer immunology faced similar obstacles. Treatment with the cytokine IL-2, one of the first immunomodulatory drugs, failed to meet expectations. These disappointing results slowed further research, and for decades, cancer immunology wasn’t taken seriously by many cancer biologists. Thanks to the effort and intuition of a few persistent researchers, however, it was shown decades later that the concept of boosting the immune system to fight cancer was valid all along. It turned out that we simply needed better drug targets and combinations, and eventually, researchers demonstrated that the immune system is one of the most powerful tools in our fight against cancer.
James P. Allison and Tasuku Honjo, who pioneered the class of cancer immunotherapy drugs known as checkpoint inhibitors, were awarded the Nobel Prize in 2018.
Though these approaches are widely accepted now, it took decades for the scientific establishment to recognize them as valid.
Machine learning and immunotherapy have more in common than historical similarities. The beauty of immunotherapy is that it leverages the versatility and flexibility of the immune system to fight different types of cancers. While the first immunotherapies showed results in a few cancers, they were later shown to work in many other cancer types. AI, similarly, utilizes flexible tools to solve a wide range of problems across applications via transfer and multitask learning. These processes are made possible through access to large-scale data.
Here’s something to remember: The resurgence of neural networks started in 2012, after the AlexNet architecture achieved 84.7% top-5 accuracy in the ImageNet competition. This level of performance was revolutionary at the time; the runner-up achieved only 73.8%. The ImageNet dataset, created by Fei-Fei Li, is robust, well labeled and high quality. As a result, it has been integral to how far neural networks have brought computer vision today.
Interestingly, similar developments are happening now in biology. Life sciences companies and labs are building large-scale datasets with tens of millions of immune cells labeled consistently to ensure the validity of the underlying data. These datasets are the analogs of ImageNET in biology.
We’re already seeing these large, high-quality datasets giving rise to experimentation at a rate and scale that was impossible before. For example, machine learning is being used to identify immune cell types in different parts of the body and their involvement in various diseases. After identifying patterns, algorithms can “map” or predict different immune trajectories, which can then be used to interpret, for example, why some cancer immunotherapies work on particular cancer types and others don’t. The datasets act as the Google Maps of the immune system.
Mapping patterns of genes, proteins and cell interactions across diseases and phenotypes allows researchers to treat molecular pathways as the building blocks of disease. The presence or absence of a functional block helps explain why a given immunotherapy works in one cancer type but not another, and how genes and proteins work together to activate specific pathways and fight multiple diseases. A single gene can be part of numerous pathways, and it can cause distinct types of cells to behave differently.
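As a toy illustration of the “functional block” idea, one could model pathway status per tumor type as a lookup and ask whether a therapy’s required pathway is intact. All names and data below are hypothetical placeholders, not real biology:

```python
# Hypothetical pathway "blocks" per tumor type (made-up data for illustration).
# The rule sketched here: a therapy can only work if the pathway it relies on
# is functional in that tumor type.
PATHWAYS = {
    "tumor_A": {"antigen_presentation": True, "t_cell_checkpoint": True},
    "tumor_B": {"antigen_presentation": False, "t_cell_checkpoint": True},
}

def likely_to_respond(tumor: str, required_pathway: str) -> bool:
    """Return True if the pathway the therapy depends on is functional."""
    return PATHWAYS[tumor].get(required_pathway, False)

print(likely_to_respond("tumor_A", "antigen_presentation"))  # True
print(likely_to_respond("tumor_B", "antigen_presentation"))  # False
```

In practice these pathway maps are learned from large-scale molecular data rather than hand-written tables, but the interpretive logic — presence or absence of a functional block — is the same.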
Moreover, different cell types can share similar gene activities, and the same functional pathways can be found in various immune-related disorders. This makes a case for building machine learning models that perform effectively on specific tasks and transfer to other tasks.
Transfer learning works in deep learning models, for example, by taking simple patterns (in images, think of simple lines and curves) learned by early layers of a neural network and leveraging those layers for different problems. In biology, this allows us to transfer knowledge on how specific genes and pathways in one disease or cell type play a role in other contexts.
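The mechanism described above can be sketched in a few lines of numpy. This is a minimal, entirely synthetic example (the tasks, dimensions and learning rate are made up for illustration): a first layer is trained on task A, then frozen and reused for task B, where only a new output head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# --- Task A: train a tiny two-layer network on synthetic data ---
X_a = rng.normal(size=(200, 8))
y_a = (X_a[:, 0] + X_a[:, 1] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.1, size=(8, 16))    # early layer: shared features
W2_a = rng.normal(scale=0.1, size=(16, 1))  # task-A output head

lr = 0.1
for _ in range(300):
    h = relu(X_a @ W1)
    p = 1 / (1 + np.exp(-(h @ W2_a)))        # sigmoid output
    grad_out = (p - y_a) / len(X_a)          # cross-entropy gradient
    grad_h = (grad_out @ W2_a.T) * (h > 0)   # backprop through relu
    W2_a -= lr * (h.T @ grad_out)
    W1 -= lr * (X_a.T @ grad_h)

# --- Task B: transfer — freeze W1, train only a new head ---
X_b = rng.normal(size=(200, 8))
y_b = (X_b[:, 2] - X_b[:, 3] > 0).astype(float).reshape(-1, 1)

W1_frozen = W1.copy()                       # reused early layer, never updated
W2_b = rng.normal(scale=0.1, size=(16, 1))  # fresh task-B head

for _ in range(300):
    h = relu(X_b @ W1_frozen)
    p = 1 / (1 + np.exp(-(h @ W2_b)))
    W2_b -= lr * (h.T @ ((p - y_b) / len(X_b)))

acc_b = np.mean((1 / (1 + np.exp(-(relu(X_b @ W1_frozen) @ W2_b))) > 0.5) == y_b)
print(f"task-B training accuracy with frozen features: {acc_b:.2f}")
```

The design choice mirrors common practice in computer vision: early layers that capture generic patterns are reused wholesale, and only the task-specific head is retrained on the new problem.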
AI research that addresses the effects of genetic changes (perturbations) on immune cells and their impact on the cells and possible treatments is increasingly common in cancer immunology. This kind of research will enable us to understand these cells more quickly and lead to better drugs and treatments.
With large-scale data fueling further research in immunotherapy and AI, we are confident that more effective drugs to fight cancer will appear soon, thus giving hope to the over 18 million people who are diagnosed with cancer every year.