Regulation

Risks And Rewards For AI Fighting Climate Change

As artificial intelligence is being used to solve problems in healthcare, agriculture, weather prediction and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy consumed by AI and improve its effectiveness when it comes to fighting climate change?

Virginia Dignum, a professor of ethical artificial intelligence at Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that often goes unexamined, pointing to Netflix and the algorithms used to recommend movies to its users. In order for these algorithms to run and suggest movies to hundreds of thousands of users, Netflix needs to operate large data centers, which store and process the data used to train the algorithms.

Dignum belongs to a group of experts advising the European Commission on how to make human-centric, ethical AI. She explained to Horizon Magazine that the environmental impact of AI often goes unappreciated, but that under the right circumstances data centers can be responsible for releasing large amounts of CO2.

‘It’s a use of energy that we don’t really think about,’ explained Prof. Dignum to Horizon Magazine. ‘We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.’

Dignum noted that one study, done at the University of Massachusetts, found that training a sophisticated AI to interpret human language led to emissions of around 300,000 kilograms of CO2 equivalent, roughly five times the lifetime emissions of the average car in the US. These emissions could grow further: Swedish researcher Anders Andrae projects that by 2025, data centers could account for approximately 10% of all electricity usage. The growth of big data, and of the computational power needed to handle it, has brought the environmental impact of AI to the attention of many scientists and environmentalists.

Despite these concerns, AI can play a role in helping us combat climate change and limit emissions. Scientists and engineers around the world are advocating for the use of AI in designing solutions to climate change. For example, Professor Felix Creutzig of the Mercator Research Institute on Global Commons and Climate Change in Berlin hopes to use AI to improve the use of space in urban environments. More efficient space usage could help tackle issues like urban heat islands. Machine learning algorithms could also be used to determine the optimal placement of green spaces, which act as carbon sinks, or to model airflow patterns when designing ventilation architecture to fight extreme heat.

Currently, Creutzig is working with stacked architecture, a method that combines mechanical modeling and machine learning, to determine how buildings will respond to temperature and energy demands. He hopes that this work can lead to new building designs that use less energy while maintaining quality of life.

Beyond this, AI could help fight climate change in several ways. It could be leveraged to build electricity systems that integrate renewable resources more effectively. AI has already been used to monitor deforestation, and its continued use for this task can help preserve forests that act as carbon sinks. Machine learning algorithms could also be used to calculate an individual’s carbon footprint and suggest ways to reduce it.

Tactics to reduce the amount of energy consumed by AI include deleting data that is no longer in use, which reduces the need for massive data storage operations. Designing more efficient algorithms and training methods is also important, as is pursuing alternatives to machine learning approaches that tend to be data-hungry.
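One simple example of a more efficient training method is early stopping, which halts training once a validation metric stops improving so that further compute (and energy) is not wasted. The sketch below is purely illustrative and not from the article; the model and data are synthetic placeholders, and PyTorch is assumed.

```python
# Illustrative sketch: early stopping cuts training energy by halting once
# validation loss stops improving. Model and data are synthetic placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x_train, y_train = torch.randn(512, 16), torch.randn(512, 1)
x_val, y_val = torch.randn(128, 16), torch.randn(128, 1)

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val).item()

    if val_loss < best_val - 1e-4:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:
        # Stop early: further epochs burn compute without improving the model.
        break
```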

Big Data

Human Genome Sequencing and Deep Learning Could Lead to a Coronavirus Vaccine – Opinion

The AI community must collaborate with geneticists to find a treatment for those deemed most at risk from coronavirus. A potential treatment could involve removing a person’s cells, editing the DNA, and then injecting the cells back in, now hopefully armed with a successful immune response. Similar approaches are already being worked on for other vaccines.

The first step would be sequencing the entire human genome from a sizeable segment of the human population.

Sequencing Human Genomes

Sequencing the first human genome cost $2.7 billion and took nearly 15 years to complete. The cost of sequencing an entire human genome has since dropped dramatically: as recently as 2015 it was around $4,000, and it is now less than $1,000 per person. The cost could drop further once economies of scale are taken into consideration.

We need to sequence the genomes of two different types of patients:

  1. Infected with coronavirus but healthy (a strong immune response)
  2. Infected with coronavirus but with a poor immune response

It is impossible to predict which data point will be most valuable, but each sequenced genome would provide a dataset. The more data available, the more options there are for locating DNA variations that increase the body’s resistance to the disease vector.

Nations are currently losing trillions of dollars to this outbreak; at $1,000 per human genome, the cost of sequencing is minor in comparison. A minimum of 1,000 volunteers for each segment of the population would arm researchers with a significant volume of data. Should the trial increase in size by an order of magnitude, the AI would have far more training data, improving the odds of success. The more data the better, which is why a target of 10,000 volunteers should be aimed for.
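A quick back-of-the-envelope check of that comparison, assuming the $1,000-per-genome figure and the two cohorts described above:

```python
# Back-of-the-envelope cost check for the sequencing proposal above
# (assumes $1,000 per genome and two cohorts, as described).
cost_per_genome = 1_000          # USD
cohorts = 2                      # healthy responders vs. poor responders

for volunteers_per_cohort in (1_000, 10_000):
    total = cost_per_genome * cohorts * volunteers_per_cohort
    print(f"{volunteers_per_cohort:>6} volunteers per cohort -> ${total:,} total")
# 1,000 per cohort -> $2,000,000; 10,000 per cohort -> $20,000,000
```

Even at the larger scale, the sequencing bill is negligible next to the economic cost of the outbreak cited above.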

Machine Learning

While multiple types of machine learning would play a role, deep learning would be used to find patterns in the data. For instance, it might be observed that certain DNA variants correspond to high immunity, while others correspond to high mortality. At a minimum, we would learn which segments of the human population are more susceptible and should be quarantined.
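As a purely illustrative sketch of this kind of pattern-finding (not a published method): assume each sequenced genome has been reduced to a fixed-length vector of variant features, and each volunteer is labelled by immune-response outcome. A small feed-forward network could then be trained to separate the two groups. All names, sizes, and data below are hypothetical placeholders.

```python
# Minimal, illustrative sketch of the pattern-finding step described above.
# Assumptions (not from the article): each genome is encoded as a fixed-length
# vector of variant features; labels are 0 (poor immune response) or 1 (healthy).
import torch
import torch.nn as nn

n_variants = 1_000  # hypothetical number of encoded DNA variants per genome

classifier = nn.Sequential(
    nn.Linear(n_variants, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),           # logit for "strong immune response"
)

# Synthetic stand-in data; real inputs would come from the sequenced cohorts.
genomes = torch.randint(0, 2, (2_000, n_variants)).float()
labels = torch.randint(0, 2, (2_000, 1)).float()

optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(classifier(genomes), labels)
    loss.backward()
    optimizer.step()

# After training, per-variant attributions (e.g. gradient-based saliency) could
# hint at which variants the model associates with a strong immune response.
```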

To decipher this data, an Artificial Neural Network (ANN) would be hosted in the cloud, and sequenced human genomes from around the world would be uploaded to it. With time being of the essence, parallel computing would reduce the time required for the ANN to work its magic.

We could even take this one step further and feed the output data sorted by the ANN into a separate system, a Recurrent Neural Network (RNN). The RNN would use reinforcement learning to identify which gene selected by the initial ANN is most successful in a simulated environment. The reinforcement learning agent would gamify the process by creating a simulated setting in which to test which DNA changes are most effective.

A simulated environment is like a virtual game environment, something many AI companies are well positioned to take advantage of based on their previous success in designing AI algorithms to win at esports. This includes companies such as DeepMind and OpenAI.

These companies can use their underlying architecture, optimized for mastering video games, to create a simulated environment, test gene edits, and learn which edits lead to specific desired changes.
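The reinforcement-learning idea above can be sketched in miniature as a bandit-style search over candidate edits against a simulated scoring function. Everything here is a toy stand-in (the "simulator" is random noise, not biology), intended only to show the shape of the loop rather than any real method used by these companies.

```python
# Toy sketch (purely illustrative): framing the gene-edit search as a simple
# bandit / reinforcement-learning problem against a simulated scoring function.
import random

CANDIDATE_EDITS = [f"edit_{i}" for i in range(20)]           # hypothetical edits from the ANN
true_effect = {e: random.random() for e in CANDIDATE_EDITS}  # hidden "ground truth"

def simulate_immune_response(edit: str) -> float:
    """Noisy reward from the simulated environment for trying one edit."""
    return true_effect[edit] + random.gauss(0, 0.1)

estimates = {e: 0.0 for e in CANDIDATE_EDITS}
counts = {e: 0 for e in CANDIDATE_EDITS}
epsilon = 0.1

for step in range(5_000):
    # Epsilon-greedy: mostly exploit the best-looking edit, sometimes explore.
    if random.random() < epsilon:
        edit = random.choice(CANDIDATE_EDITS)
    else:
        edit = max(estimates, key=estimates.get)
    reward = simulate_immune_response(edit)
    counts[edit] += 1
    estimates[edit] += (reward - estimates[edit]) / counts[edit]  # running mean

best = max(estimates, key=estimates.get)
print("Edit the agent would prioritise for lab validation:", best)
```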

Once a gene is identified, another technology is used to make the edits.

CRISPR

Recently, the first-ever study using CRISPR to edit DNA inside the human body was approved. It aims to treat a rare genetic disorder that affects one of every 100,000 newborns; the condition can be caused by mutations in as many as 14 genes that play a role in the growth and operation of the retina. In this case, CRISPR carefully targets DNA and causes slight, temporary damage to the DNA strand, prompting the cell to repair itself. It is this restorative healing process that has the potential to restore eyesight.

While we are still waiting to see whether this treatment works, the precedent of having CRISPR approved for trials in the human body is transformational. Potential applications include improving the body’s immune response to specific disease vectors.

Potentially, we can manipulate the body’s natural genetic resistance to a specific disease. The diseases that could be targeted are diverse, but the community should focus on treating the new global epidemic, coronavirus: a threat that, if unchecked, could prove fatal to a large percentage of our population.

Final Thoughts

While there are many potential paths to success, all of them will require geneticists, epidemiologists, and machine learning specialists to unite. A potential treatment may be as described above, or it may turn out to be unimaginably different; either way, the opportunity lies in sequencing the genomes of a large segment of the population.

Deep learning is the best analysis tool that humans have ever created; we need to at a minimum attempt to use it to create a vaccine.

When we take into consideration what is at risk with the current epidemic, these three scientific communities need to come together to work on a cure.


Big Data

How AI Predicted Coronavirus and Can Prevent Future Pandemics – Opinion

BlueDot AI Prediction

On January 6th, the US Centers for Disease Control and Prevention (CDC) notified the public that a flu-like outbreak was propagating in Wuhan City, in the Hubei Province of China.  Subsequently, the World Health Organization (WHO) released a similar report on January 9th.

While these responses may seem timely, they were slow when compared to an AI company called BlueDot.  BlueDot released a report on December 31st, a full week before the CDC released similar information.

Even more impressive, BlueDot predicted the Zika outbreak in Florida six months before the first case in 2016.

What are some of the datasets that BlueDot analyzes?

  • Disease surveillance, including scanning 10,000+ media and public sources in over 60 languages.
  • Demographic data from national censuses and national statistics reports (population density is a factor in virus propagation).
  • Real-time climate data from NASA, NOAA, and other sources (viruses spread faster in certain environmental conditions).
  • Insect vectors and animal reservoirs (important when a virus can spread from species to species).

BlueDot currently works with various government agencies, including Global Affairs Canada, the Public Health Agency of Canada, the Canadian Medical Association, and the Singapore Ministry of Health. The BlueDot Insights product sends near real-time infectious disease alerts. Advantages of this product include:

  • Reducing the risk of exposure for frontline healthcare workers
  • Global visibility that saves time on infectious disease surveillance
  • The opportunity to communicate crucial information clearly before it’s too late
  • The ability to protect populations from infection

How AI Predictability Could Be Improved

What’s preventing the BlueDot AI and similar AIs from improving? The number one limiting factor is the inability to access the necessary big data in real time.

These types of predictive systems rely on big data feeding into an artificial neural network (ANN), which uses deep learning to search for patterns. The more data that is fed into this ANN, the more accurate the machine learning algorithm becomes.

This essentially means that what is preventing the AI from flagging a potential outbreak sooner is simply a lack of access to the necessary data. In countries like China, which regularly monitor and filter the news, these delays are even more pronounced. Censoring each data point can significantly reduce the amount of available data and, worse, can undermine its accuracy, destroying its usefulness. Faulty data was a major reason previous efforts such as Google Flu Trends failed.

In other words, the major problem preventing AI systems from predicting an outbreak as early as possible is government interference. Governments such as China’s, and the current Trump administration, need to remove themselves from any type of data filtering and give the press full access to report on global health issues.

That being said, reporters can only work with the information available to them. Bypassing news reports and accessing sources directly would enable machine learning systems to obtain data in a timelier and more efficient fashion.

What Needs to be Done

Starting immediately, governments that are truly interested in reducing the cost of healthcare and preventing outbreaks should begin a mandatory review of how their health clinics and hospitals can distribute certain data points in real time to officials, reporters, and AI systems.

Private information can be completely stripped from each patient record, enabling the patient to remain anonymous while the important data is shared.
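A minimal sketch of what stripping that private information might look like in code; the field names are hypothetical, and real de-identification regimes (for example, HIPAA Safe Harbor) cover far more than direct identifiers.

```python
# Illustrative only: drop direct identifiers before sharing a patient record.
# Field names are hypothetical; real de-identification is more involved.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "health_card_number"}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "health_card_number": "1234-567-890",
    "age_band": "60-69",
    "symptoms": "fever, cough",
    "hospital": "Hospital 1",
    "timestamp": "2020-02-01T10:00",
}
print(anonymize(patient))  # identifiers gone, clinically useful fields kept
```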

A network of hospitals in any city that collects and shares data in real time would be able to offer superior healthcare. For example, the system could track that a specific hospital has shown an increase in patients with flu-like symptoms: 3 patients at 10:00 AM, 7 patients at 1:00 PM, and 49 patients by 5:00 PM. This data could be compared with hospitals in the same region, generating immediate alerts that a certain region is a potential hot zone.

Once this information is collected and assembled, the AI system could trigger alerts to all neighboring regions so that the necessary precautions can be taken.
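A minimal sketch of such an alert rule, assuming anonymized hourly symptom counts per hospital are already being shared; the growth threshold and the data are illustrative stand-ins, not a proposed standard.

```python
# Illustrative alert logic for the hot-zone example above: flag a region when
# flu-like symptom counts grow abnormally fast within a day.
from collections import defaultdict

# Hypothetical anonymized feed: (region, hospital, hour, flu_like_symptom_count)
reports = [
    ("Region A", "Hospital 1", "10:00", 3),
    ("Region A", "Hospital 1", "13:00", 7),
    ("Region A", "Hospital 1", "17:00", 49),
    ("Region B", "Hospital 2", "10:00", 4),
    ("Region B", "Hospital 2", "17:00", 5),
]

GROWTH_THRESHOLD = 5.0  # flag if counts grow 5x within the day (arbitrary choice)

counts_by_region = defaultdict(list)
for region, hospital, hour, count in reports:
    counts_by_region[region].append(count)

for region, counts in counts_by_region.items():
    if counts[0] > 0 and counts[-1] / counts[0] >= GROWTH_THRESHOLD:
        print(f"ALERT: {region} is a potential hot zone "
              f"(cases grew from {counts[0]} to {counts[-1]} today)")
# Only Region A trips the alert with this toy data.
```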

While this would be difficult in certain regions of the world, countries with large AI hubs and smaller population densities such as Canada could institute such an advanced system. Canada has AI hubs in the most populated provinces (Waterloo and Toronto, Ontario, and Montreal, Quebec). The advantages of this inter-hospital and inter-provincial cooperation could be extended to offer Canadians other benefits such as accelerated access to emergency medical care, and reduced healthcare spending. Canada could become a leader in both AI and healthcare, licensing this technology to other jurisdictions.

Most importantly, once a country such as Canada has a system in place, the technology/methodologies can then be cloned and exported to other regions. Eventually, the goal would be to blanket the entire world, to ensure outbreaks are a relic of the past.

This type of data collection by healthcare workers has benefits for multiple applications. There is no reason why, in 2020, a patient should have to register with each hospital individually, or why those same hospitals should not communicate with one another in real time. This lack of communication can result in lost data for patients who suffer from dementia or other conditions that may prevent them from fully communicating the severity of their illness, or even where else they have been treated.

Lessons Learned

We can only hope that governments around the world take advantage of the important lessons that coronavirus is teaching us. Humanity should consider itself lucky that coronavirus has a relatively mild fatality rate compared to some infectious agents of the past, such as the Black Plague, which is estimated to have killed 30% to 60% of Europe’s population.

The next time, we might not be so lucky. What we do know so far is that governments are currently ill-equipped to deal with the severity of an outbreak.

BlueDot was conceived in the wake of Toronto’s 2003 SARS outbreak and launched in 2013, with the goal of protecting people around the world from infectious diseases using both human and artificial intelligence. The AI component has demonstrated a remarkable ability to predict the path of infectious diseases; what remains is the human component. We need new policies in place to enable companies such as BlueDot to excel at what they do best, and as people we need to demand more from our politicians and healthcare providers.


Big Data

Allan Hanbury, Co-Founder of contextflow – Interview Series

Allan Hanbury is Professor for Data Intelligence at TU Wien, Austria, and a Faculty Member of the Complexity Science Hub, where he leads research and innovation to make sense of unstructured data. He is the initiator of the Austrian ICT Lighthouse Project, Data Market Austria, which is creating a Data-Services Ecosystem in Austria. He was scientific coordinator of the EU-funded Khresmoi Integrated Project on medical and health information search and analysis, and is co-founder of contextflow, the spin-off company commercialising the radiology image search technology developed in the Khresmoi project. He also coordinated the EU-funded VISCERAL project on evaluation of algorithms on big data, and the EU-funded KConnect project on technology for analysing medical text.

contextflow is a spin-off from the Medical University of Vienna and European research project KHRESMOI. Could you tell us about the KHRESMOI project?

Sure! The goal of Khresmoi was to develop a multilingual, multimodal search and access system for biomedical information and documents, which required us to effectively automate the information extraction process, develop adaptive user interfaces and link both unstructured and semi-structured text information to images. Essentially, we wanted to make the information retrieval process for medical professionals reliable, fast, accurate and understandable.


What’s the current dataset which is powering the contextflow deep learning algorithm?

Our dataset contains approximately 8000 lung CTs. As our AI is rather flexible, we’re moving towards brain MRIs next.


Have you seen improvements with how the AI performs as the dataset has become larger?

We’re frequently asked this question, and the answer is likely not satisfying to most readers. To a certain extent, yes, the quality improves with more scans, but after a particular threshold, you don’t gain much more simply from having more. How much is enough really depends on various factors (organ, modality, disease pattern, etc), and it’s impossible to give an exact number. What’s most important is the quality of the data.


Is contextflow designed for all cases, or to simply be used for determining differential diagnosis for difficult cases?

Radiologists are really good at what they do. For the majority of cases, the findings are obvious, and external tools are unnecessary. contextflow has differentiated itself in the market by focusing on general search rather than automated diagnosis. There are a few use cases for our tools, but the main one is helping with difficult cases where the findings aren’t immediately apparent. Here radiologists must consult various resources, and that process takes time. contextflow SEARCH, our 3D image-based search engine, aims to reduce the time it takes for radiologists to search for information during image interpretation by allowing them to search via the image itself. Because we also provide reference information helpful for differential diagnosis, training new radiologists is another promising use case.


Can you walk us through the process of how a radiologist would use the contextflow platform?

contextflow SEARCH Lung CT is completely integrated into the radiologist’s workflow (or else they would not use it). The radiologist performs their work as usual, and when they require additional information for a particular patient, they simply select a region of interest in that scan and click the contextflow icon in their workstation to open our system in a new browser window. From there, they receive reference cases from our database of patients with similar disease patterns to the patient they are currently evaluating, plus statistics and medical literature (e.g. Radiopaedia). They can scroll through their patient in our system normally, selecting additional regions to search for more information and comparing side-by-side with the reference cases. There are also heatmaps providing a visualization of the overall distribution of disease patterns, which helps with reporting findings as well. We really tried to put everything a radiologist needs to write a report in one place, available within seconds.
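contextflow does not describe its implementation here, but the general technique behind a "search by image region" workflow is content-based image retrieval: embed the selected region with a trained model, then find the nearest neighbours in a database of reference cases. The sketch below illustrates only that shape, with a placeholder feature extractor and random data; it is not contextflow's system.

```python
# Generic sketch of content-based image retrieval, the broad technique behind a
# "search by image region" workflow. NOT contextflow's implementation; the
# feature extractor and data are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def embed_region(region_voxels: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor; a real system would use a trained 3D CNN."""
    flat = region_voxels.ravel().astype(float)
    return flat[:128] if flat.size >= 128 else np.pad(flat, (0, 128 - flat.size))

# Hypothetical reference database: precomputed embeddings for known cases.
reference_embeddings = rng.normal(size=(8_000, 128))   # e.g. 8,000 lung CT regions
reference_case_ids = [f"case_{i}" for i in range(8_000)]

def search_similar(region_voxels: np.ndarray, top_k: int = 5) -> list:
    query = embed_region(region_voxels)
    # Cosine similarity between the query region and every reference case.
    ref_norm = reference_embeddings / np.linalg.norm(reference_embeddings, axis=1, keepdims=True)
    q_norm = query / (np.linalg.norm(query) + 1e-9)
    scores = ref_norm @ q_norm
    best = np.argsort(scores)[::-1][:top_k]
    return [reference_case_ids[i] for i in best]

# Example: a radiologist selects a 32x32x32 region of interest in a scan.
roi = rng.normal(size=(32, 32, 32))
print(search_similar(roi))
```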

This was initially designed for lung CT scans; will contextflow be expanding to other types of scans?

Yes! We have a list of organs and modalities requested by radiologists that we are eager to add. The ultimate goal is to provide a system that covers the entire human body, regardless of organ or type of scan.


contextflow has received the support of two amazing incubator programs, INiTS and i2c TU Wien. How beneficial have these programs been, and what have you learned from the process?

We owe a lot of gratitude to these incubators. Both connected us with mentors, consultants and investors who challenged our business model and ultimately clarified our who/why/how. They also act very practically, providing funding and office space so that we could really focus on the work and not worry so much about administrative topics. We truly could not have come as far as we have without their support. The Austrian startup ecosystem is still small, but there are programs out there to help bring innovative ideas to fruition.


You are also the initiator of the Austrian ICT Lighthouse Project which aims to build a sustainable Data-Services Ecosystem in Austria. Could you tell us more about this project and about your role in it?

The amount of data produced daily is growing exponentially, and its importance to most industries is also exploding…it’s really one of the world’s most important resources! Data Market Austria’s Lighthouse project aims to develop or reform the requirements for successful data-driven businesses, ensuring low cost, high quality and interoperability. I coordinated the project for its first year in 2016-2017. This led to the creation of the Data Intelligence Offensive, where I am on the board of directors. The DIO’s mission is to exchange information and know-how between members regarding data management and security.


Is there anything else that you would like to share with our readers about contextflow?  

Radiology workflows are not on the average citizen’s mind, and that’s how it should be. The system should just work. Unfortunately, once you become a patient, you realize that is not always the case. contextflow is working to transform that process for both radiologists and patients. You can expect a lot of exciting developments from us in the coming years, so stay tuned!

Please visit contextflow to learn more.
