November show expects to smash attendance records with 10,000+ delegates
SiGMA Group has announced that the winter edition of Malta A.I. & Blockchain Summit will take place 7th to 8th November 2019, at the InterContinental Arena Conference Centre, St Julian’s, Malta, marking the second event in 2019 for the successful expo.
Following the sold out AIBC show at the Malta Hilton in May this year, the November edition of the Malta A.I. and Blockchain Summit expects more than 10,000 attendees, 400 sponsors and exhibitors, 1500 investors and 200 top quality speakers, coming from more than 80 countries worldwide.
Once again, the conference rooms are expected to be the most subscribed activity for delegates, with speakers set to return following a rapturous reception at both the 2018 and May 2019 editions. The VIP speakers who wowed the crowds with debates and panel discussions at the May show included Ben Goertzel, Brock Pierce, Tone Vays, Roger Ver, Noel Sharkey, and many more.
The organisers are also working with the Maltese government to highlight the opportunities on the Blockchain Island for businesses in the crypto, blockchain, A.I., and emerging tech sectors. As with previous AIBC Summits, it’s expected that the event will serve as a platform for the government to renew its commitment to the future of these sectors in Malta, with the announcement of further legislation and regulation for the A.I. sector a distinct possibility.
With networking high on the agenda for all who attend, the benefits are numerous for attendees and exhibitors alike, with connections and deals being made every second of the two-day event. Workshops will add to the insights to be gleaned by attendees and, in addition to the new business opportunities, the A.I. Startup Village will give bright new companies a chance to win support and secure investment as they present their ideas for the future of the industry. The A.I. Startup pitch battle will return to give another selection of trailblazers the opportunity to win a life-changing cash prize.
Plus, there’s the prestigious Awards ceremony at the start of the event, and the renowned closing party for everyone to let their hair down once all business has been concluded.
Now firmly established as a staple in the blockchain calendar, the Malta A.I. & Blockchain Summit is the unmissable event for this forward-looking emerging tech sector.
Malta A.I. & Blockchain Summit is a bi-annual expo covering topics relating to the global sectors for blockchain, A.I., Big Data, IoT, and Quantum technologies. The event includes conferences hosted by globally renowned speakers, workshops for industry learning and discussion, an exhibit space accommodating more than 400 brands and much more.
The first Malta Blockchain Summit took place in November 2018 at the Intercontinental Hotel in St Julian’s, Malta, attracting 8,500 attendees from over 80 countries worldwide, with 300 sponsors and exhibitors, 200 speakers, and 1 A.I. VIP (Sophia, the world’s first robot citizen). With strong support from the Maltese government, the event has quickly established itself as one of the world’s leading destinations for the growing sectors of A.I., Blockchain and DLT, IoT, and other vertical industries. At the 2018 event the Maltese government introduced 3 new bills to support the growth of the sector and to promote Malta as the “Blockchain Island”.
Study Shows That Workers Now Trust A Robot More Than Their Managers
Technology giant Oracle Corporation recently published a study of workers that indicates a distinct shift in how artificial intelligence is changing the relationship between people and technology at work. Based on the analysis of this globally collected data, Oracle concluded that 64% of people now trust a robot more than their manager.
The study was done involving 8,370 employees, managers and HR leaders across 10 countries. As is stated in the company’s press release, it found that AI has changed the relationship between people and technology at work and is reshaping the role HR teams and managers need to play in attracting, retaining and developing talent.
These results counter common fears that AI will have a negative impact on jobs, employees and managers; as the release states, “HR leaders across the globe are reporting increased adoption of AI at work and many are welcoming AI with love and optimism.”
In presenting the results of the study, ItProPortal notes that the report’s results show that “the majority of people would trust a robot more than their manager. They’d rather turn to a robot for advice, than their manager.”
Summarising the main points of the report, the press release points to the following:
- AI is becoming more prominent with 50 percent of workers currently using some form of AI at work compared to only 32 percent last year. Workers in China (77 percent) and India (78 percent) have adopted AI over 2X more than those in France (32 percent) and Japan (29 percent).
- The majority (65 percent) of workers are optimistic, excited and grateful about having robot co-workers and nearly a quarter report having a loving and gratifying relationship with AI at work.
- Workers in India (60 percent) and China (56 percent) are the most excited about AI, followed by the UAE (44 percent), Singapore (41 percent), Brazil (32 percent), Australia/New Zealand (26 percent), Japan (25 percent), U.S. (22 percent), UK (20 percent) and France (8 percent).
- Men have a more positive view of AI at work than women with 32 percent of men optimistic vs. 23 percent of women.
The results also indicate that most of the interviewed people believe that, as ItProPortal says, “robots would do a better job than their managers at providing unbiased information, maintaining work schedules, solving problems and maintaining a budget. At the same time, humans are considered better at understanding employee feelings, coaching and building a work culture.”
Also, it turns out that people weren’t afraid of losing their jobs to AI, with most of them being “optimistic, excited and grateful” to be able to work with the latest advancements in technology. The report quotes Jeanne Meister, Founding Partner of Future Workplace, who said that the company’s 2019 results “reveal that forward-looking companies are already capitalizing on the power of AI. As workers and managers leverage the power of artificial intelligence in the workplace, they are moving from fear to enthusiasm as they see the possibility of being free of many of their routine tasks and having more time to solve critical business problems for the enterprise.”
What Is Reinforcement Learning?
Put simply, reinforcement learning is a machine learning technique that involves training an artificial intelligence agent through the repetition of actions and associated rewards. A reinforcement learning agent experiments in an environment, taking actions and being rewarded when the correct actions are taken. Over time, the agent learns to take the actions that will maximize its reward. That’s a quick definition of reinforcement learning, but taking a closer look at the concepts behind reinforcement learning will help you gain a better, more intuitive understanding of it.
Reinforcement In Psychology
The term “reinforcement learning” is adapted from the concept of reinforcement in psychology. For that reason, let’s take a moment to understand the psychological concept of reinforcement. In the psychological sense, the term reinforcement refers to something that increases the likelihood that a particular response/action will occur. This concept of reinforcement is a central idea of the theory of operant conditioning, initially proposed by the psychologist B.F. Skinner. In this context, reinforcement is anything that causes the frequency of a given behavior to increase. If we think about possible reinforcement for humans, these can be things like praise, a raise at work, candy, and fun activities.
In the traditional, psychological sense, there are two types of reinforcement. There’s positive reinforcement and negative reinforcement. Positive reinforcement is the addition of something to increase a behavior, like giving your dog a treat when it is well behaved. Negative reinforcement involves removing a stimulus to elicit a behavior, like shutting off loud noises to coax out a skittish cat.
Positive and Negative Reinforcement In Machine Learning
Both positive and negative reinforcement increase the frequency of a behavior; it is punishment, not negative reinforcement, that decreases it. In general, positive reinforcement is the most common type of reinforcement used in reinforcement learning, as it helps models maximize performance on a given task. Not only that, but positive reinforcement leads the model to make more sustainable changes, changes which can become consistent patterns and persist for long periods of time.
In contrast, while negative reinforcement also makes a behavior more likely to occur, it is used for maintaining a minimum performance standard rather than reaching a model’s maximum performance. Negative reinforcement in reinforcement learning can help ensure that a model is kept away from undesirable actions, but it can’t really make a model explore desired actions.
Training A Reinforcement Agent
Imagine that we are training a reinforcement agent to play a platforming video game where the AI’s goal is to make it to the end of the level by moving right across the screen. The initial state of the game is drawn from the environment, meaning the first frame of the game is analyzed and given to the model. Based on this information, the model must decide on an action.
During the initial phases of training, these actions are random but as the model is reinforced, certain actions will become more common. After the action is taken the environment of the game is updated and a new state or frame is created. If the action taken by the agent produced a desirable result, let’s say in this case that the agent is still alive and hasn’t been hit by an enemy, some reward is given to the agent and it becomes more likely to do the same in the future.
This basic system is constantly looped, happening again and again, and each time the agent tries to learn a little more and maximize its reward.
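The loop described above can be sketched in a few lines of Python. The environment here is a hypothetical stand-in for the platformer, not a real game, and the preference-update rule is a deliberately simplified illustration of how rewarded actions become more common:

```python
import random

class ToyEnvironment:
    """A 1-D 'platformer': the agent starts at position 0 and is
    rewarded for moving right toward the goal at position 10."""
    def __init__(self):
        self.position = 0

    def reset(self):
        self.position = 0
        return self.position              # initial state (the first "frame")

    def step(self, action):
        # action: +1 (move right) or -1 (move left)
        self.position = max(0, self.position + action)
        reward = 1 if action == 1 else 0  # desirable result -> reward
        done = self.position >= 10        # reached the end of the level
        return self.position, reward, done

env = ToyEnvironment()
preference = {1: 0.0, -1: 0.0}  # how strongly the agent favors each action

for episode in range(50):        # the loop runs again and again
    state, done = env.reset(), False
    while not done:
        # early on, actions are essentially random; rewarded actions
        # accumulate preference and become more common over time
        if random.random() < 0.3:
            action = random.choice([1, -1])                  # random action
        else:
            action = max(preference, key=preference.get)     # preferred action
        state, reward, done = env.step(action)
        preference[action] += reward     # reinforcement
```

After training, the agent's preference for moving right far outweighs moving left, since only rightward moves were ever rewarded.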
Episodic vs Continuous Tasks
Reinforcement learning tasks can typically be placed in one of two different categories: episodic tasks and continuous tasks.
Episodic tasks will carry out the learning/training loop and improve their performance until some end criteria are met and the training is terminated. In a game, this might be reaching the end of the level or falling into a hazard like spikes. In contrast, continual tasks have no termination criteria, essentially continuing to train forever until the engineer chooses to end the training.
Monte Carlo vs Temporal Difference
There are two primary ways of learning, or training, a reinforcement learning agent. In the Monte Carlo approach, rewards are delivered to the agent (its score is updated) only at the end of the training episode. To put that another way, only when the termination condition is hit does the model learn how well it performed. It can then use this information to update, and when the next training round starts it will respond in accordance with the new information.
The temporal-difference method differs from the Monte Carlo method in that the value estimation, or the score estimation, is updated during the course of the training episode. Once the model advances to the next time step the values are updated.
Explore vs Exploit
Training a reinforcement learning agent is a balancing act between two competing pressures: exploration and exploitation.
Exploration is the act of collecting more information about the surrounding environment, while exploitation is using the information already known about the environment to earn reward points. If an agent only explores and never exploits the environment, the desired actions will never be carried out. On the other hand, if the agent only exploits and never explores, the agent will only learn to carry out one action and won’t discover other possible strategies of earning rewards. Therefore, balancing exploration and exploitation is critical when creating a reinforcement learning agent.
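A common way to strike this balance (a standard technique, not one named in the text above) is epsilon-greedy action selection: with a small probability the agent explores at random, and otherwise it exploits its best-known action. A minimal sketch:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon, explore (pick a random action);
    otherwise exploit the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

# Example: estimated values for three actions; action 2 looks best so far.
q = [0.1, 0.5, 0.9]
choices = [epsilon_greedy(q, epsilon=0.1) for _ in range(1000)]
# Most choices exploit action 2, while roughly 10% explore at random.
```

Setting epsilon to 1 gives an agent that only explores, and setting it to 0 gives one that only exploits, reproducing the two failure modes described above; in practice epsilon is often decayed over training.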
Uses For Reinforcement Learning
Reinforcement learning can be used in a wide variety of roles, and it is best suited for applications where tasks require automation.
Automation of tasks to be carried out by industrial robots is one area where reinforcement learning proves useful. Reinforcement learning can also be used for problems like text mining, creating models that are able to summarize long bodies of text. Researchers are also experimenting with using reinforcement learning in the healthcare field, with reinforcement agents handling jobs like the optimization of treatment policies. Reinforcement learning could also be used to customize educational material for students.
Reinforcement learning is a powerful method of constructing AI agents that can lead to impressive and sometimes surprising results. Training an agent through reinforcement learning can be complex and difficult, as it takes many training iterations and a delicate balance of the explore/exploit dichotomy. However, if successful, an agent created with reinforcement learning can carry out complex tasks under a wide variety of different environments.
Risks And Rewards For AI Fighting Climate Change
As artificial intelligence is being used to solve problems in healthcare, agriculture, weather prediction and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy consumed by AI and improve its effectiveness when it comes to fighting climate change?
Virginia Dignum, a professor of ethical artificial intelligence at Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that can go unexamined. Dignum points to Netflix and the algorithms used to recommend movies to Netflix users. In order for these algorithms to run and suggest movies to hundreds of thousands of users, Netflix needs to run large data centers. These data centers store and process the data used to train algorithms.
Dignum belongs to a group of experts advising the European Commission on how to make human-centric, ethical AI. Dignum explained to Horizon Magazine that the environmental impact of AI often goes unappreciated, but under the right circumstances data centres can be responsible for the release of large amounts of CO2.
‘It’s a use of energy that we don’t really think about,’ explained Prof. Dignum to Horizon Magazine. ‘We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.’
Dignum noted that one study, done by the University of Massachusetts, found that creating a sophisticated AI to interpret human language led to emissions of around 300,000 kilograms of CO2 equivalent, approximately five times the impact of the average car in the US. These emissions could grow further: estimates by Swedish researcher Anders Andrae project that by the year 2025 data centers could account for approximately 10% of all electricity usage. The growth of big data and the computational power needed to handle it has brought the environmental impact of AI to the attention of many scientists and environmentalists.
Despite these concerns, AI can play a role in helping us combat climate change and limit emissions. Scientists and engineers around the world are advocating for the use of AI in designing solutions to climate change. For example, Professor Felix Creutzig, affiliated with the Mercator Research Institute on Global Commons and Climate Change in Berlin, hopes to use AI to improve the use of spaces in urban environments. More efficient space usage could help tackle issues like urban heat islands. Machine learning algorithms could be used to determine the optimal position for green spaces as well, or to determine airflow patterns when designing ventilation architecture to fight extreme heat. Urban green spaces can play the role of a carbon sink.
Currently, Creutzig is working with stacked architecture, a method that uses both mechanical modeling and machine learning, aiming to determine how buildings will respond to temperature and energy demands. Creutzig hopes that his work can lead to new building designs that use less energy while maintaining quality of life.
Beyond this, AI could help fight climate change in several ways. For one, AI could be leveraged to construct better electricity systems that could better integrate renewable resources. AI has already been used to monitor deforestation, and its continued use for this task can help preserve forests that act as carbon sinks. Machine learning algorithms could also be used to calculate an individual’s carbon footprint and suggest ways to reduce it.
Tactics to reduce the amount of energy consumed by AI include deleting data that is no longer in use, reducing the need for massive data storage operations. Designing more efficient algorithms and training methods is also important, as is pursuing AI alternatives to machine learning, which tends to be data-hungry.