
Blockchain Eventon – 6th Dec 2019, Novotel, Juhu Beach, Mumbai





Our previous Blockchain Eventon events were a stellar success, attracting some of the top names in the industry, with participants from companies such as Reliance Big Entertainment, Adani Group, DBS Bank India Ltd., Apex Bank, Motilal Oswal, Aditya Birla Capital, Koinex, The Himalaya Drug Company, PwC and Mercedes-Benz India. We also attracted government bodies such as SIDBI and SEBI. This year we are expecting more of the industry's leading tech visionaries, with previous partners including Oracle, IBM, Microsoft, Cognizant, Cisco, EY, ConsenSys, KPMG, Accenture, Dell and many more.

The next show, taking place on 6th December 2019, is themed around Enterprise Blockchain & AI. The Blockchain Eventon Awards ceremony, held on the eve of the summit, will be an integral part of the event, honouring the foremost thinkers, leaders and builders in enterprise Blockchain and AI technology.

Aadil Singh, the Founder of Blockchain Eventon, is eager to up the game: “I wish to thank everyone who attended Blockchain Eventon in 2018 and for the remarkable feedback we received. I look forward to contributing and helping drive Blockchain and AI forward in India by creating a community of like-minded people. The whole setup of Blockchain Eventon is focused on bringing the right people together to do business. This year, we are raising the standard with a stellar line-up of speakers and we expect an even larger crowd to attend our show. With our understanding of the Indian market and its unrelenting crypto regulations, we have decided to base this year’s show on the theme of Enterprise Blockchain and AI.”


The biggest names in the emerging tech industries will take the stage at Blockchain Eventon to share their visions for the future. We are inviting the most influential people in the industry to share their stories. The speaker line-up for December is far from finalised, but it already features top names within the industry.

A few top names from the speaker line-up:

  • Dr. Evan Singh Luthra – Founder, EL Group International.
  • Mr. Ravinder Pal Singh – Director – Digital Transformation, Dell EMC.
  • Mr. Nishith Pathak – Microsoft Regional Director.
  • Mr. Utpal Chakraborty – Head Artificial Intelligence, YES BANK Ltd.
  • Mr. Kumar Gaurav – Founder & CEO – Cashaa and Auxesis Group.
  • Mr. Jason Fernandes – COO and Co-Founder AEToken.
  • Anoop Chaturvedi – Country Manager Hewlett Packard Enterprise.
  • Aaron Tsai – Chairman, CEO & President, MAS Capital Inc.
  • Chen Xiaohua – Deputy Director and Secretary General of China National Blockchain Economy Research Group.
  • Geetansh Bamania – CEO & Founder, RentoMojo.
  • Karan Ambwani – Solutions Manager – ConsenSys.


IT professionals across India are seeing Blockchain and AI technology as the next wave of opportunity and trying to reskill for it. Indeed’s Blockchain Jobs report for 2019 revealed that India’s Silicon Valley, Bengaluru, will be the hub for crypto and Blockchain-related careers, followed by Pune, Hyderabad, Noida and Mumbai.

As the demand for talent grows, it is imperative that India focuses on fostering a Blockchain culture and trains the next generation of developers and entrepreneurs. And Blockchain Eventon Job Fair aims to do just that.



Blockchain Eventon is an opportunity for global professionals to engage, learn and explore what’s next in the realms of Blockchain, AI, Big Data, IoT and Quantum Technologies. Through a series of diverse knowledge tracks, gain vital insights by participating in thought-provoking discussions about the world-changing potential application of such technologies. Blockchain Eventon is the place to be!

Study Shows That Workers Now Trust A Robot More Than Their Managers





Technology giant Oracle Corporation recently published a study of workers that indicates a distinct shift in how artificial intelligence is changing the relationship between people and technology at work. Based on an analysis of data gathered on a global scale, Oracle concluded that 64% of people are now inclined to trust a robot more than their manager.

The study involved 8,370 employees, managers and HR leaders across 10 countries. As stated in the company’s press release, it found that AI has changed the relationship between people and technology at work and is reshaping the role HR teams and managers need to play in attracting, retaining and developing talent.

These results counter some common fears that AI will have a negative impact on jobs, employees and managers. As the release states, “HR leaders across the globe are reporting increased adoption of AI at work and many are welcoming AI with love and optimism.”

In presenting the results of the study, ItProPortal notes that the report’s results show that “the majority of people would trust a robot more than their manager. They’d rather turn to a robot for advice, than their manager.”

Summarising the main points of the report, the press release points to the following:

  • AI is becoming more prominent, with 50 percent of workers currently using some form of AI at work compared to only 32 percent last year. Workers in China (77 percent) and India (78 percent) have adopted AI at over twice the rate of those in France (32 percent) and Japan (29 percent).

  • The majority (65 percent) of workers are optimistic, excited and grateful about having robot co-workers, and nearly a quarter report having a loving and gratifying relationship with AI at work.

  • Workers in India (60 percent) and China (56 percent) are the most excited about AI, followed by the UAE (44 percent), Singapore (41 percent), Brazil (32 percent), Australia/New Zealand (26 percent), Japan (25 percent), U.S. (22 percent), UK (20 percent) and France (8 percent).

  • Men have a more positive view of AI at work than women, with 32 percent of men optimistic vs. 23 percent of women.

The results also indicate that most of the interviewed people believe that, as ItProPortal says, “robots would do a better job than their managers at providing unbiased information, maintaining work schedules, solving problems and maintaining a budget. At the same time, humans are considered better at understanding employee feelings, coaching and building a work culture.”

Also, it turns out that people weren’t afraid of losing their jobs to AI, with most of them being “optimistic, excited and grateful” to be able to work with the latest advancements in technology. The report quotes Jeanne Meister, Founding Partner of Future Workplace, who said that the company’s 2019 results “reveal that forward-looking companies are already capitalizing on the power of AI. As workers and managers leverage the power of artificial intelligence in the workplace, they are moving from fear to enthusiasm as they see the possibility of being free of many of their routine tasks and having more time to solve critical business problems for the enterprise.”




What Is Reinforcement Learning?





Put simply, reinforcement learning is a machine learning technique that involves training an artificial intelligence agent through the repetition of actions and associated rewards. A reinforcement learning agent experiments in an environment, taking actions and being rewarded when the correct actions are taken. Over time, the agent learns to take the actions that will maximize its reward. That’s a quick definition of reinforcement learning, but taking a closer look at the concepts behind reinforcement learning will help you gain a better, more intuitive understanding of it.

Reinforcement In Psychology

The term “reinforcement learning” is adapted from the concept of reinforcement in psychology. For that reason, let’s take a moment to understand the psychological concept of reinforcement. In the psychological sense, the term reinforcement refers to something that increases the likelihood that a particular response/action will occur. This concept of reinforcement is a central idea of the theory of operant conditioning, initially proposed by the psychologist B.F. Skinner. In this context, reinforcement is anything that causes the frequency of a given behavior to increase. If we think about possible reinforcement for humans, these can be things like praise, a raise at work, candy, and fun activities.

In the traditional, psychological sense, there are two types of reinforcement. There’s positive reinforcement and negative reinforcement. Positive reinforcement is the addition of something to increase a behavior, like giving your dog a treat when it is well behaved. Negative reinforcement involves removing a stimulus to elicit a behavior, like shutting off loud noises to coax out a skittish cat.

Positive and Negative Reinforcement In Machine Learning

Both positive and negative reinforcement increase the frequency of a behavior; it is punishment that decreases it. In general, positive reinforcement is the most common type of reinforcement used in reinforcement learning, as it helps models maximize their performance on a given task. Not only that, but positive reinforcement leads the model to make more sustainable changes, changes which can become consistent patterns and persist for long periods of time.

In contrast, while negative reinforcement also makes a behavior more likely to occur, it is used for maintaining a minimum performance standard rather than reaching a model’s maximum performance. Negative reinforcement in reinforcement learning can help ensure that a model is kept away from undesirable actions, but it can’t really make a  model explore desired actions.

Training A Reinforcement Agent

When a reinforcement learning agent is trained, there are four different ingredients used in the training: an initial state (State 0), a new state (State 1), actions, and rewards.

Imagine that we are training a reinforcement agent to play a platforming video game where the AI’s goal is to make it to the end of the level by moving right across the screen. The initial state of the game is drawn from the environment, meaning the first frame of the game is analyzed and given to the model. Based on this information, the model must decide on an action.

During the initial phases of training, these actions are random but as the model is reinforced, certain actions will become more common.  After the action is taken the environment of the game is updated and a new state or frame is created. If the action taken by the agent produced a desirable result, let’s say in this case that the agent is still alive and hasn’t been hit by an enemy, some reward is given to the agent and it becomes more likely to do the same in the future.

This basic system is constantly looped, happening again and again, and each time the agent tries to learn a little more and maximize its reward.
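The loop described above can be sketched in code. The environment below is a hypothetical toy stand-in for the platformer example (its names and reward scheme are illustrative, not from any real library): the agent observes a state, takes an action, receives a reward and a new state, and the cycle repeats until the level ends.

```python
import random

# Hypothetical toy environment: the agent must move right to reach the end of the level.
class ToyPlatformer:
    def __init__(self, length=5):
        self.length = length
        self.position = 0  # initial state (State 0)

    def step(self, action):
        # action: 0 = stay, 1 = move right
        if action == 1:
            self.position += 1
        reward = 1 if action == 1 else 0     # reward the desirable action
        done = self.position >= self.length  # termination: end of the level reached
        return self.position, reward, done   # new state (State 1), reward, done flag

# The basic training loop: observe state, act, receive reward and new state, repeat.
env = ToyPlatformer()
state, total_reward, done = 0, 0, False
while not done:
    action = random.choice([0, 1])  # random at first; learning would bias this choice
    state, reward, done = env.step(action)
    total_reward += reward

print(total_reward)  # 5: one reward per right-move needed to finish the level
```

In a real agent, the random `action` choice would be replaced by a learned policy that becomes less random as rewarded actions accumulate.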

Episodic vs Continuous Tasks

Reinforcement learning tasks can typically be placed in one of two different categories: episodic tasks and continuous tasks.

Episodic tasks will carry out the learning/training loop and improve their performance until some end criteria are met and the training is terminated. In a game, this might be reaching the end of the level or falling into a hazard like spikes. In contrast, continuous tasks have no termination criteria, essentially continuing to train forever until the engineer chooses to end the training.

Monte Carlo vs Temporal Difference

There are two primary ways of learning, or training, a reinforcement learning agent. In the Monte Carlo approach, rewards are delivered to the agent (its score is updated) only at the end of the training episode. To put that another way, only when the termination condition is hit does the model learn how well it performed. It can then use this information to update its estimates, and when the next training episode starts it will respond in accordance with the new information.

The temporal-difference method differs from the Monte Carlo method in that the value estimation, or the score estimation, is updated during the course of the training episode. Once the model advances to the next time step the values are updated.
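The difference between the two approaches can be made concrete with a small sketch. The episode, rewards, learning rate and discount factor below are all assumed illustrative values: the Monte Carlo update waits for the full return at the end of the episode, while the TD(0) update adjusts each state's value at every time step using the next state's current estimate.

```python
# Illustrative 3-step episode: no reward until the final step.
rewards = [0.0, 0.0, 1.0]  # reward observed after each time step
alpha, gamma = 0.5, 1.0    # assumed learning rate and discount factor

# Monte Carlo: wait until the episode ends, then update toward the full return G.
def mc_update(value, rewards):
    G = sum(gamma ** t * r for t, r in enumerate(rewards))  # total discounted return
    return value + alpha * (G - value)

# Temporal difference (TD(0)): update at every step toward reward + next estimate.
def td_updates(values, rewards):
    values = list(values)
    for t, r in enumerate(rewards):
        next_v = values[t + 1] if t + 1 < len(values) else 0.0
        values[t] += alpha * (r + gamma * next_v - values[t])
    return values

print(mc_update(0.0, rewards))               # 0.5: moved halfway toward the return of 1.0
print(td_updates([0.0, 0.0, 0.0], rewards))  # [0.0, 0.0, 0.5]: reward propagates one step
```

Note how after one episode the Monte Carlo estimate for the start state already reflects the final reward, while TD(0) has only updated the state adjacent to it; over many episodes the TD estimates propagate backward step by step.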

Explore vs Exploit

Training a reinforcement learning agent is a balancing act, involving the balancing of two different metrics: exploration and exploitation.

Exploration is the act of collecting more information about the surrounding environment, while exploitation is using the information already known about the environment to earn reward points. If an agent only explores and never exploits the environment, the desired actions will never be carried out. On the other hand, if the agent only exploits and never explores, the agent will only learn to carry out one action and won’t discover other possible strategies of earning rewards. Therefore, balancing exploration and exploitation is critical when creating a reinforcement learning agent.
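One common way to strike this balance is an epsilon-greedy policy: with small probability epsilon the agent explores by picking a random action, and otherwise it exploits by picking the action with the highest estimated value. The value estimates below are assumed for illustration.

```python
import random

# Epsilon-greedy action selection: explore with probability epsilon, else exploit.
def epsilon_greedy(q_values, epsilon):
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore: uniform random action
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit: best estimate

random.seed(0)                      # fixed seed for reproducibility
q = [0.1, 0.9, 0.3]                 # assumed value estimates for 3 actions
picks = [epsilon_greedy(q, 0.1) for _ in range(1000)]
print(picks.count(1) / len(picks))  # mostly action 1, with occasional exploration
```

In practice epsilon is often decayed over the course of training, so the agent explores widely early on and exploits its knowledge more as its value estimates improve.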

Uses For Reinforcement Learning

Reinforcement learning can be used in a wide variety of roles, and it is best suited for applications where tasks require automation.

Automation of tasks to be carried out by industrial robots is one area where reinforcement learning proves useful. Reinforcement learning can also be used for problems like text mining, creating models that are able to summarize long bodies of text. Researchers are also experimenting with using reinforcement learning in the healthcare field, with reinforcement agents handling jobs like the optimization of treatment policies. Reinforcement learning could also be used to customize educational material for students.

Concluding Thoughts

Reinforcement learning is a powerful method of constructing AI agents that can lead to impressive and sometimes surprising results. Training an agent through reinforcement learning can be complex and difficult, as it takes many training iterations and a delicate balance of the explore/exploit dichotomy. However, if successful, an agent created with reinforcement learning can carry out complex tasks under a wide variety of different environments.



Risks And Rewards For AI Fighting Climate Change





As artificial intelligence is being used to solve problems in healthcare, agriculture, weather prediction and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy consumed by AI and improve its effectiveness when it comes to fighting climate change?

Virginia Dignum, an ethical artificial intelligence professor at the Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that can go unexamined. Dignum points to Netflix and the algorithms used to recommend movies to Netflix users.  In order for these algorithms to run and suggest movies to hundreds of thousands of users, Netflix needs to run large data centers. These data centers store and process the data used to train algorithms.

Dignum belongs to a group of experts advising the European Commission on how to make human-centric, ethical AI. Dignum explained to Horizon Magazine that the environmental impact of AI often goes unappreciated, but under the right circumstances data centers can be responsible for the release of large amounts of CO2.

‘It’s a use of energy that we don’t really think about,’ explained Prof. Dignum to Horizon Magazine. ‘We have data farms, especially in the northern countries of Europe and in Canada, which are huge. Some of those things use as much energy as a small city.’

Dignum noted that one study, done by the University of Massachusetts, found that training a sophisticated AI to interpret human language led to the emission of around 300,000 kilograms of CO2 equivalent. This is approximately five times the impact of the average car in the US. These emissions could potentially grow, as estimates by a Swedish researcher, Anders Andrae, project that by the year 2025 data centers could account for approximately 10% of all electricity usage. The growth of big data and the computational power needed to handle it has brought the environmental impact of AI to the attention of many scientists and environmentalists.

Despite these concerns, AI can play a role in helping us combat climate change and limit emissions. Scientists and engineers around the world are advocating for the use of AI in designing solutions to climate change. For example, Professor Felix Creutzig, affiliated with the Mercator Research Institute on Global Commons and Climate Change in Berlin, hopes to use AI to improve the use of spaces in urban environments. More efficient space usage could help tackle issues like urban heat islands. Machine learning algorithms could be used to determine the optimal position for green spaces as well, or to determine airflow patterns when designing ventilation architecture to fight extreme heat. Urban green spaces can play the role of a carbon sink.

Currently, Creutzig is working with stacked architecture, a method that uses both mechanical modeling and machine learning, aiming to determine how buildings will respond to temperature and energy demands. Creutzig hopes that his work can lead to new building designs that use less energy while maintaining quality of life.

Beyond this, AI could help fight climate change in several ways. For one, AI could be leveraged to construct better electricity systems that could better integrate renewable resources. AI has already been used to monitor deforestation, and its continued use for this task can help preserve forests that act as carbon sinks. Machine learning algorithms could also be used to calculate an individual’s carbon footprint and suggest ways to reduce it.

Tactics to reduce the amount of energy consumed by AI include deleting data that is no longer in use, reducing the need for massive data storage operations. Designing more efficient algorithms and methods of training is also important, including pursuing AI alternatives to machine learning which tends to be data hungry.
