

Vijay Kurkal, Chief Executive Officer for Resolve – Interview Series




Vijay Kurkal serves as the CEO for Resolve where he oversees the strategic growth of the company as it helps maximize the potential of AIOps and IT automation in enterprises around the world. Vijay has a long history in the tech industry, having spent the last twenty years working with numerous software and hardware companies that have run the gamut from mainframe to bleeding-edge, emerging tech. Before joining Resolve, he held leadership positions at IBM, VMware, Bain & Company, and Insight Partners, playing a critical role in accelerating the growth of a wide array of technology companies and introducing state-of-the-art product lines.

You’ve been leading Resolve since 2018, first as COO, and now as CEO. What initially drew you to this company?

There’s a huge need for automation and AIOps today given the challenges that enterprise IT organizations face. These teams are managing increasingly complex, highly virtualized, hybrid environments and are tasked with rapidly implementing new technologies to stay competitive. Without the aid of tools like automation and AIOps, it’s impossible to effectively manage these environments, and the complexity is only going to grow.

Given the tremendous market opportunity, I was immediately drawn to Resolve’s deep roots in automation. Drawing on my 20 years of experience with a wide range of tech companies, I am incredibly excited about the possibility for automation and AIOps to truly transform IT operations. These technologies are game changers for companies — not just to survive, but to thrive in the current environment. As we’ve seen over the last few months as digital transformation has rapidly accelerated, automation is absolutely necessary to succeed. Resolve is uniquely positioned to meet these needs and usher in the next generation of IT operations.


How would you best describe what Resolve offers IT companies?

By combining cutting-edge AIOps capabilities with our industry-leading automation platform, Resolve helps IT teams achieve more agile, autonomous IT operations even as infrastructure continues to expand in scope and complexity. Our unified product offers a closed loop of discovery, analysis, detection, prediction, and automation, including prebuilt automations that can be autonomously triggered by AIOps insights to stay ahead of problems and lighten the load on IT organizations.

Our goal is to help ITOps, NetOps, and Service Desk teams meet the growing demands on IT, streamline operations, reduce costs, improve MTTR and performance, and accelerate service delivery through the power of automation and AIOps.


For readers who are not familiar with the term AIOps, can you explain what this term describes and what makes it so important?

AIOps – or AI for IT Operations – helps streamline the management of complex, hybrid IT environments by deploying AI, machine learning and advanced analytics to aggregate, analyze, and contextualize tremendous amounts of data amassed from various sources across the IT ecosystem. These insights facilitate the identification of existing or potential performance issues while spotting anomalies and pinpointing the probable root cause of incidents. Over time, machine learning can predict future issues and proactively automate fixes before they affect the business.
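One of the techniques mentioned here, dynamic thresholding for anomaly detection, can be illustrated with a small sketch. The example below flags a metric sample that strays far from a rolling baseline; the window size, multiplier, and metric are illustrative assumptions, not any vendor's actual implementation:

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window=30, k=3.0):
    """Flag a sample as anomalous when it falls outside
    mean +/- k standard deviations of a rolling window."""
    history = deque(maxlen=window)

    def check(value):
        anomalous = False
        if len(history) >= 5:  # need a few samples before judging
            mu, sigma = mean(history), stdev(history)
            anomalous = abs(value - mu) > k * max(sigma, 1e-9)
        history.append(value)
        return anomalous

    return check

check_cpu = make_detector()
for sample in [48, 51, 50, 49, 52, 50, 51, 97]:
    if check_cpu(sample):
        print(f"anomaly: CPU at {sample}%")  # fires on the 97% spike
```

Because the threshold adapts to the recent history rather than being a fixed number, a metric that normally hovers around 50% only raises an alarm when it genuinely deviates.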

Additionally, most AIOps tools offer advanced correlation capabilities that help IT pros determine how alarms are related, reducing noise by grouping similar events and surfacing the true issues so people can focus on what matters most. Some AIOps solutions also perform auto-discovery and dependency mapping to provide deep visibility into how entities are connected to one another and how outages might impact critical business services. This offers a wide range of benefits, from keeping your CMDB up-to-date and accurate to accelerating incident response and simplifying troubleshooting, change management, and compliance.


What are some of the data challenges faced by IT companies?

By far the biggest data challenge IT organizations face is managing increased complexity caused by exponential infrastructure growth and the daily onslaught of new technologies. Data volumes and alarm noise created by infrastructure growth have far exceeded human capacity to find the needle in the proverbial IT haystack. Gartner estimates a two- to three-fold increase in data volume growth per year. To survive in this dynamic environment, it’s critical for IT organizations to embrace AIOps and automation to help them cope with massive amounts of data and to streamline management of new technologies.


How can businesses overcome these challenges using Resolve?

Resolve enables businesses to manage increasing IT complexity with fewer resources through the powerful combination of AIOps and automation. The platform is designed to provide immediate relief, as well as long-term value.

Unlike many other AIOps solutions on the market, customers don’t have to wait months to start seeing value with Resolve. In fact, customers see value within minutes thanks to Resolve’s automated discovery and dependency mapping. These capabilities enable us to generate complete infrastructure visualizations, detailed cross-domain topology maps, application dependency maps, and comprehensive views of inventory. Additionally, Resolve ingests data from many other tools (such as monitoring, event management, ITSM, and logging solutions) and aggregates it with telemetry data collected natively by our own platform. This allows customers to achieve the much-sought-after ‘single pane of glass’ they need to effectively manage complex, hybrid infrastructure, and it provides significantly richer, more complete visibility across domains.

Over the course of several weeks, these insights are enhanced and enriched as Resolve “learns” the environment and leverages machine learning to perform activities like event correlation and clustering, predictive analytics, multivariate anomaly detection, dynamic thresholding, and autonomous remediation – making the product exponentially more intelligent (and valuable) over time.

Our enterprise-class automation capabilities can take action on insights from the AIOps components or can be used independently. Built for the scale and complexity of modern, hybrid environments, the platform can handle everything from simple tasks to very complex processes that go well beyond the capacity of other tools. Combining AIOps with this level of automation offers an unparalleled ability to autonomously predict, prevent, and fix issues before they impact the business, and to radically improve overall operational efficiency.


Can you describe how Resolve makes it easier to investigate security incidents?

Resolve’s automated incident validation quickly determines which alarms are actual threats versus those that are simply false positives. Hours of manual effort are eliminated by automatically collecting data across the IT environment and security tools, including SIEMs, threat feeds, antivirus systems, and logs. All of that data gets unified into a customizable dashboard, so it’s easy to see the problem and determine how to fix it. Resolve centralizes orchestration of the end-to-end triage and investigation workflows to ensure that issues can be addressed quickly. We also capture a full audit trail of incident investigation steps and results to support compliance and governance.


One of the features of Resolve is that it enables IT professionals to cut through ‘noise’ and focus on real problems. Can you discuss this?

IT pros are bombarded with alarm noise coming in from multiple systems. It’s hard to know where to focus since many of these alarms are false positives, and many others ultimately derive from the same underlying problem.

Take, for example, the case of an e-commerce system failing. Alarm bells will start ringing everywhere as IT pros frantically sort through multiple data sources to determine whether it’s the network, the application, or one of many underlying pieces of infrastructure or services causing the problem. It could take hours to determine that the culprit was high CPU utilization that led to a slowing database and ultimately the failure of the e-commerce system. Even worse, with all of the alarm noise, the IT teams might miss the events related to the e-commerce system altogether and instead focus on a much lower-priority issue that isn’t revenue related.

Resolve eliminates alarm noise by performing event correlation and clustering. Clustering machine learning algorithms identify and group events (across systems and domains) that usually occur together, which dramatically compresses event volumes. Our platform also leverages sequential pattern analysis and time-series event correlation. Millions of events across applications and infrastructure are normalized and sequenced in a time series and then analyzed by machine learning to identify patterns. These patterns enable Resolve to reduce alarm noise and help pinpoint root cause, as well as proactively detect problems before they happen. Additionally, the time-series correlations can be leveraged to play back all of the events that occurred in the period leading up to an outage.
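The compression idea can be sketched with a toy time-window clusterer. This is a deliberately simplified illustration (real event correlation uses learned patterns across many dimensions, not a single time gap), and the event data is invented:

```python
def cluster_events(events, window=30):
    """Group events whose timestamps fall within `window` seconds of the
    previous event, compressing an alarm storm into a few clusters."""
    clusters = []
    for ts, source, msg in sorted(events):
        if clusters and ts - clusters[-1][-1][0] <= window:
            clusters[-1].append((ts, source, msg))   # same storm
        else:
            clusters.append([(ts, source, msg)])     # new cluster
    return clusters

events = [
    (100, "db01",   "high CPU"),
    (112, "db01",   "slow queries"),
    (125, "app01",  "request timeouts"),
    (131, "lb01",   "5xx errors"),
    (900, "backup", "job finished"),
]
clusters = cluster_events(events)
print(len(clusters))        # 2: one alarm storm, one unrelated event
print(clusters[0][0][2])    # earliest event hints at root cause: high CPU
```

Four cascading alarms collapse into a single cluster whose earliest member points at the likely root cause, which is the intuition behind the e-commerce example that follows.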

In the case of the e-commerce example above, Resolve would be able to cluster all of the alarms related to the application failure, compressing those into a single event. The system could also track the root cause back to a spike in CPU utilization, making it fast and easy for the IT team to fix the issue rather than triaging hundreds of alarms independently as they look under every rock to get to the root of the matter. If desired, Resolve can even trigger an automated response to take care of the problem autonomously without human intervention.


Can you give us a case study of how an enterprise client used Resolve?

Fujitsu had a range of drivers for adopting automation to better deliver its suite of IT managed services to a wide range of global enterprises. Chiefly, Fujitsu needed to bring down operational costs while continuing to grow its infrastructure, improve organizational efficiency, and standardize processes. We helped them achieve all of those goals by automating key processes, and we helped them improve MTTA and MTTR to ensure they were quickly addressing issues impacting their customers and meeting their SLAs.


Is there anything else that you would like to share about Resolve?

Digital transformation has gained momentum in the wake of the global pandemic. We see an incredible need to alleviate the mounting strain on IT systems and staff that the crisis has created. Meanwhile, it’s also apparent that businesses need to be planning ahead for the next unexpected event. Automation and AIOps are both fundamental to achieving those ends as they can help safeguard business continuity and improve agility and resilience while reducing security risks and cost. Our mission is to help our customers excel even during challenging times by strategically leveraging these technologies.

Thank you for your wonderful answers. Anyone who wishes to learn more should visit Resolve.


Antoine Tardif is a Futurist who is passionate about the future of AI and robotics. He is the CEO of, and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of a news website focusing on digital securities, and is a founding partner of


Appen’s State of AI Annual Report Reveals Significant Industry Growth




Appen Limited (ASX: APX), the leading provider of high-quality training data for organizations that build effective AI systems at scale, today announced its annual State of AI Report for 2020.

The State of AI 2020 report is the output of a cross-industry, large-organization study of senior business leaders and technologists. The survey intended to examine and identify the main characteristics of the expanding AI and machine learning landscape by gathering responses from AI decision-makers.

There were multiple key takeaways:

  • While nearly 3 out of 4 organizations said AI is critical to their business, nearly half feel their organization is behind in its AI journey.
  • AI budgets greater than $5M doubled year over year.
  • An increasing number of enterprises are getting behind responsible AI as a component of business success, but only 25% of companies said unbiased AI is mission-critical.
  • 3 out of 4 organizations report updating their AI models at least quarterly, signifying a focus on the model’s life after deployment.
  • The gap between business leaders and technologists continues, despite their alignment being instrumental to building a strong AI infrastructure.
  • Despite turbulent times, more than two-thirds of respondents do not expect any negative impact from COVID-19 on their AI strategies.

One of the key findings is that nearly half of respondents feel their company is behind in its AI journey, which suggests a critical gap between the strategic need and the ability to execute.

Lack of data and poor data management were reported as main challenges. This includes training data, which is foundational to AI and ML model deployments; unsurprisingly, 93% of companies report that high-quality training data is important to successful AI.

Organizations also reported using 25% more data types (text, image, video, audio, etc.) in 2020 compared to 2019. Not only are models getting more frequent updates, but teams are using increasingly more data types, and that will translate into an increasing need for investment in reliable training data.

One key indicator of the exponential growth of AI was the rapid YoY growth in AI initiatives. In 2019, only 39% of executives owned AI initiatives. In 2020, executive ownership of AI skyrocketed to 71%. With this increase in executive ownership, the number of organizations reporting budgets greater than $5M also doubled.

Global cloud providers gained significant traction as data science and ML tools compared to 2019, which may be due to increased budgets and executive oversight. Even more impressive is the increase in respondents who report using global cloud machine learning providers, identified as: Microsoft Azure (49%), Google Cloud (36%), IBM Watson (31%), AWS (25%), and Salesforce Einstein (17%). Each of these front-runners saw double-digit adoption increases versus 2019, suggesting that as more companies move to scale, they’re looking for solutions that can scale with them.

AI developers may want to take note that the mix of languages used to build models has also shifted since 2019. While Python remained the most used language in both 2019 and 2020, SQL and R were the second and third most commonly used languages in 2019. In 2020, however, Java, C/C++, and JavaScript gained significant traction. Python, R, and SQL are often indicative of the pilot stage, while Java, C/C++, and JavaScript are more production-stage languages.

To learn more, we recommend downloading the entire State of AI and Machine Learning Report.



Omri Geller, CEO & Co-Founder of Run:AI – Interview Series





Omri Geller is the CEO and Co-Founder at Run:AI

Run:AI virtualizes and accelerates AI by pooling GPU compute resources to ensure visibility and, ultimately, control over resource prioritization and allocation. This ensures that AI projects are mapped to business goals and yields significant improvement in the productivity of data science teams, allowing them to build and train concurrent models without resource limitations.

What was it that initially attracted you to Artificial Intelligence?

When I began my Bachelor’s degree in Electrical and Electronics Engineering at Tel Aviv University, I discovered fascinating things about AI that I knew would help take us to the next step in computing possibilities. From there, I knew I wanted to invest myself in the AI space, whether in AI research or in starting a company that would help introduce new ways to apply AI to the world.

Have you always had an interest in computer hardware?

When I received my first computer, with an Intel 486 processor, at six or seven years old, I was immediately interested in figuring out how everything worked, even though I was probably too young to really understand it. Aside from sports, computers became one of my biggest hobbies growing up. Since then, I have built computers, worked with them, and gone on to study in the field because of the passion I had as a kid.

What was your inspiration behind launching Run:AI?

I knew from very early on that I wanted to invest myself in the AI space. In the last couple of years, the industry has seen tremendous growth in AI, and a lot of that growth came from both computer scientists, like myself, and hardware that could support more applications. It became clear to me that I would inevitably start a company, together with my co-founder Ronen Dar, to continue to innovate and help bring AI to more enterprise companies.

Run:AI enables machine learning specialists to gain a new type of control over the allocation of expensive GPU resources. Can you explain how this works?

What we need to understand is that machine learning engineers, like researchers and data scientists, need to consume computing power in a flexible way. Not only are today’s newest computations very compute-intensive, but there are also new workflows that are being used in data science. These workflows are based on the fact that data science is based on experimentation and running experiments.

In order to develop new solutions to run more efficient experiments, we need to study these workflow tendencies across time. For example: a data scientist uses eight GPUs one day, but the next day they might use zero; or they use one GPU for a long period of time, but then need 100 GPUs because they want to run 100 experiments in parallel. Once we understand this workflow for optimizing the processing power of one user, we can begin to scale it to several users.

With traditional computing, a specific number of GPUs is allocated to every user, regardless of whether they are in use or not. With this method, expensive GPUs often sit idle without anybody else being able to access them, resulting in low ROI for the GPU. We understand a company’s financial priorities and offer solutions that allow for dynamic allocation of those resources according to the needs of the users. By offering a flexible system, we can allocate extra power to a specific user when required, utilizing GPUs not in use by other users, creating maximum ROI for a company’s computing resources and accelerating innovation and time to market of AI solutions.
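The contrast between static quotas and dynamic allocation can be sketched in a few lines. The pool size, user names, and policy below are illustrative assumptions for the sake of the example, not Run:AI's actual scheduler:

```python
class GpuPool:
    """Toy dynamic allocator: users borrow from a shared pool instead of
    holding fixed quotas, so idle GPUs can serve whoever needs them."""
    def __init__(self, total):
        self.free = total
        self.held = {}          # user -> GPUs currently allocated

    def request(self, user, n):
        granted = min(n, self.free)          # grant whatever is available
        self.free -= granted
        self.held[user] = self.held.get(user, 0) + granted
        return granted

    def release(self, user, n=None):
        held = self.held.get(user, 0)
        n = held if n is None else min(n, held)
        self.held[user] = held - n
        self.free += n

pool = GpuPool(total=8)
print(pool.request("alice", 8))   # 8 -> alice runs 8 parallel experiments
pool.release("alice")             # next day she needs none
print(pool.request("bob", 3))     # 3 -> the idle capacity serves bob instead
```

Under a static scheme, bob would be stuck with his fixed quota even while alice's GPUs sat idle; here the same hardware follows demand.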

One of the Run:AI functionalities is that it enables the reduction of blind spots created by static allocation of GPU. How is this achieved?

We have a tool that gives us full visibility into the cluster of resources. By using this tool, we can observe and understand if there are blind spots, and then utilize those idle GPUs for users that need the allocation. The same tool that provides visibility into the cluster and control over the cluster also makes sure those blind spots are mitigated.

In a recent speech, you highlighted some distinctions between build and training workflows, can you explain how Run:AI uses a GPU queueing management mechanism to allocate resource management for both?

An AI model is built in two stages. First, there is the building stage, where a data scientist is writing the code to build the actual model, the same way that an engineer would build a car. The second is the training stage, where the completed model begins to learn and be ‘trained’ on how to optimize a specific task. Similar to someone learning to drive the car after it has been assembled.

To build the model itself, not much computing power is needed. However, it could eventually need stronger processing power for smaller, internal tests, the way an engineer would want to test the engine before installing it. Because of these distinct needs during each stage, Run:AI allows for GPU allocation regardless of whether teams are building or training a model; however, as mentioned earlier, higher GPU use is generally required for training the model, while less is required for building it.
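The build-versus-train distinction can be sketched with a simple priority queue, where small interactive build jobs jump ahead of long-running training jobs. The job kinds and priorities here are illustrative assumptions, not Run:AI's real queueing mechanism:

```python
import heapq

# Interactive "build" jobs get priority 0, batch "train" jobs priority 1;
# within the same priority, jobs run in submission (FIFO) order.
PRIORITY = {"build": 0, "train": 1}

queue, order = [], 0

def submit(kind, user, gpus):
    global order
    heapq.heappush(queue, (PRIORITY[kind], order, user, gpus))
    order += 1

submit("train", "alice", 64)
submit("build", "bob", 1)
submit("train", "carol", 16)

while queue:
    _, _, user, gpus = heapq.heappop(queue)
    print(f"schedule {user}: {gpus} GPU(s)")
# bob's 1-GPU build job runs first, then alice and carol in FIFO order
```

The effect is that a data scientist writing code gets a GPU immediately, while large training runs queue for the remaining capacity.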

How much raw computing time and resources can be saved by AI developers who wish to integrate Run:AI into their systems?

Our solutions can improve the utilization of resources by about two to three times, meaning 2-3x better overall productivity.

Thank you for the interview, readers who wish to learn more may visit Run:AI.


Artificial Neural Networks

AI-Controlled 3D Rat Could Lead To New Neuroscience Insights




Researchers from Harvard University and DeepMind have recently created a virtual, biologically accurate 3D model of a rat that can be controlled by artificial neural networks. The researchers hope that studying how an artificial neural network controls a simulated rat through a 3D environment could give neuroscientists clues as to how real brains control organisms.

As IEEE Spectrum recently reported, a new paper that will be presented this week at the International Conference on Learning Representations details the creation of a simulated, 3D environment. A 3D model of a rat exists within this environment, and the computer-generated lab-rat will be controlled by AI models. The goal of the new study is to see if the neural networks that control the rat might have analogous functions found in biological brains.

The building blocks of deep neural networks are neurons: nodes that transform data with mathematical functions. These neurons are joined together in layers in a way that resembles the synaptic connections of the brain. While there are many notable differences between artificial neural networks and real brains, a number of neuroscientists and researchers believe that the parallels between the two could provide useful insights into how brains operate, potentially improving both AI and neuroscience.
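The "neurons joined in layers" picture above can be made concrete with a minimal sketch of one dense layer: each neuron computes a weighted sum of its inputs plus a bias, passed through a nonlinearity. The layer sizes and weights are arbitrary illustrative values:

```python
import math
import random

def dense_layer(inputs, weights, biases):
    """One layer of an artificial neural network: each 'neuron' computes a
    weighted sum of its inputs plus a bias, squashed by a nonlinearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(math.tanh(z))   # tanh maps any sum into (-1, 1)
    return outputs

random.seed(0)
x = [0.5, -1.2, 3.0]                                        # 3 input features
W = [[random.uniform(-1, 1) for _ in x] for _ in range(2)]  # 2 neurons
b = [0.0, 0.0]
print(dense_layer(x, W, b))    # 2 activations, each in (-1, 1)
```

Stacking such layers, and adjusting the weights by gradient descent during training, is what lets a network like the one controlling the virtual rat map sensory inputs to motor commands.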

The 3D computer-generated environment created by the researchers is to act as a controlled, experimental platform for AI researchers. Researchers will be able to use the environment to experiment with how various neural networks deal with challenges and how they approach (or don’t approach) biological networks. As postdoctoral researcher and study co-author Jesse Marshall explained, quoted by IEEE Spectrum, the average neuroscience experiment analyzes the brains of animals as they perform one task (or just a few), and most robots are designed for only a few tasks; a more robust explanation of how flexible brains operate and arise is needed. According to Marshall, the paper “is the start of our effort to understand how flexibility arises and is implemented in the brain, and use the insights we gain to design artificial agents with similar capabilities.”

The computer-engineered rat is biologically accurate, with all the joints and muscles one would find in a real rat. The rat also has simulated senses like proprioception (a sense of one’s body parts in space) and vision. The neural network that controls the rat’s movements was trained on four different tasks: tapping on a ball with precise timing, navigating a maze, jumping over gaps, and navigating a hilly, steep region.

When the virtual rat completed the tasks, the research team analyzed recordings of the network’s activity using techniques adapted from neuroscience, in order to determine how the network had developed the motor control scheme necessary to carry out the assigned tasks.

The researchers found that the neural network reused certain representations for the different tasks, applying common patterns to different scenarios. The neural activity was often represented as discrete sequences, something that has been observed in real rodents and in birds. One unexpected finding was that neural activity in the AI model seemed to persist over a longer period of time than would be expected if the model were simply controlling the movement of limbs and muscles. This could suggest that the network represents behaviors and motion at an abstract level for things like jumping and running, which mirrors cognitive models that have been proposed for real-life animals.

Though artificial neural networks may lack the physiological embodiment and realism of real neural networks, neuroscientists such as Blake Richards from McGill University in Canada argue, as IEEE Spectrum reported, that the models share many important features of neural processing with genuine neural networks and are useful in making predictions about how neural activity might influence behavior. The recent paper’s achievement, therefore, was designing a method of experimenting with neural networks and training them in a more realistic environment, enabling a better comparison to experiments involving biological data.

Stephen Scott, a neuroscientist from Queen’s University in Canada, also believes that the framework designed in the new paper could be a useful method of examining the neural underpinnings of behavior. The virtual rat is capable of carrying out a variety of multistage, complex behaviors that can be precisely correlated with neural activity. This is an advantage over most experiments with animal models, which are limited to simple tasks because recording neural activity is so complex.

However, Scott also acknowledges that the process of harvesting neural data from animals performing complicated tasks can be extremely difficult. Therefore, Scott hopes to see the paper’s authors compare the neural activity of the virtual rat, as it carries out easy tasks, to the activity found in real-world laboratory experiments, in order to better understand how the virtual models and real-world brain patterns differ.
