

Appen’s State of AI Annual Report Reveals Significant Industry Growth




Appen Limited (ASX: APX), the leading provider of high-quality training data for organizations that build effective AI systems at scale, today announced its annual State of AI Report for 2020.

The State of AI 2020 report is the output of a cross-industry study of senior business leaders and technologists at large organizations. The survey set out to identify the main characteristics of the expanding AI and machine learning landscape by gathering responses from AI decision-makers.

There were multiple key takeaways:

  • While nearly 3 out of 4 organizations said AI is critical to their business, nearly half feel their organization is behind in their AI journey.
  • AI budgets greater than $5M doubled year over year.
  • An increasing number of enterprises are getting behind responsible AI as a component of business success, but only 25% of companies said unbiased AI is mission-critical.
  • 3 out of 4 organizations report updating their AI models at least quarterly, signifying a focus on the model’s life after deployment.
  • The gap between business leaders and technologists continues, despite their alignment being instrumental in building a strong AI infrastructure.
  • Despite turbulent times, more than two-thirds of respondents do not expect any negative impact from COVID-19 on their AI strategies.

One of the key findings is that nearly half of respondents feel their company is behind in its AI journey, suggesting a critical gap between the strategic need for AI and the ability to execute.

Lack of data and data management was reported as a main challenge. This includes training data, which is foundational to AI and ML model deployments; unsurprisingly, then, 93% of companies report that high-quality training data is important to successful AI.

Organizations also reported using 25% more data types (text, image, video, audio, etc.) in 2020 compared to 2019. Not only are models getting more frequent updates, but teams are also working with more data types, which will translate into a growing need for investment in reliable training data.

One key indicator of the exponential growth of AI was the rapid YoY growth in AI initiatives. In 2019, only 39% of executives owned AI initiatives. In 2020, executive ownership of AI skyrocketed to 71%. With this increase in executive ownership, the number of organizations reporting budgets greater than $5M also doubled.

Global cloud providers gained significant traction as data science and ML tools compared to 2019, possibly due to increased budgets and executive oversight. Even more notable is the increase in respondents who report using global cloud machine learning providers, identified as Microsoft Azure (49%), Google Cloud (36%), IBM Watson (31%), AWS (25%), and Salesforce Einstein (17%). Each of these front runners saw double-digit adoption increases versus 2019, suggesting that as more companies move to scale, they are looking for solutions that can scale with them.

AI developers may want to note that the mix of languages used to build models has also shifted since 2019. While Python remained the most used language in both 2019 and 2020, SQL and R were the second and third most commonly used languages in 2019. In 2020, however, Java, C/C++, and JavaScript gained significant traction. Python, R, and SQL are often indicative of the pilot stage, while Java, C/C++, and JavaScript are more production-stage languages.

To learn more, we recommend downloading the entire State of AI and Machine Learning Report.


Antoine Tardif is a Futurist who is passionate about the future of AI and robotics. He is the CEO of, and has invested in over 50 AI & blockchain projects. He is the Co-Founder of a news website focusing on digital securities, and is a founding partner of unite.AI. He is also a member of the Forbes Technology Council.


Artificial Intelligence Enhances Speed of Discoveries For Particle Physics




Researchers at MIT have recently demonstrated that utilizing artificial intelligence to simulate aspects of particles and nuclear physics theories can lead to faster algorithms, and therefore faster discoveries when it comes to theoretical physics. The MIT research team combined theoretical physics with AI models to accelerate the creation of samples that simulate interactions between neutrons, protons, and nuclei.

There are four fundamental forces that govern the universe: gravity, electromagnetism, the weak force, and the strong force. The strong, weak, and electromagnetic forces are studied through particle physics. The traditional method of studying particle interactions requires running numerical simulations of these interactions, typically at scales of 1/10th or 1/100th the size of a proton. These studies can take a long time to complete due to limited computing power, and there are many problems that physicists know how to tackle in theory yet cannot address because of these computational limitations.

MIT Physics professor Phiala Shanahan is the head of a research group that uses machine learning models to create new algorithms that can speed up particle physics studies. The symmetries found within physics theories (features of the physical system that stay constant even as conditions change) can be incorporated into machine learning algorithms to produce algorithms better suited to particle physics studies. Shanahan explained that the machine learning models aren’t being used to process large amounts of data; rather, they are being used to integrate particle symmetries, and the inclusion of these attributes within a model means that computations can be done more quickly.
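The idea of building a symmetry directly into a model, rather than hoping it is learned from data, can be illustrated with a toy sketch. This is a hypothetical illustration, not the group's actual code: any function on a square lattice can be made exactly invariant under 90-degree rotations by averaging its output over the four rotations.

```python
import random

def rot90(grid):
    """Rotate a square grid (list of lists) 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def raw_score(grid):
    # A stand-in for a learned function: a position-weighted sum,
    # deliberately NOT rotation-invariant on its own.
    n = len(grid)
    return sum(grid[i][j] * (i * n + j) for i in range(n) for j in range(n))

def symmetrized_score(grid):
    # Average the raw function over all four 90-degree rotations.
    # The average is exactly invariant under rotation, whatever raw_score does,
    # so the symmetry is guaranteed by construction rather than learned.
    total, g = 0.0, grid
    for _ in range(4):
        total += raw_score(g)
        g = rot90(g)
    return total / 4.0

random.seed(0)
cfg = [[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
# The symmetrized score does not change when the lattice is rotated.
assert abs(symmetrized_score(cfg) - symmetrized_score(rot90(cfg))) < 1e-9
```

Group-averaging is the simplest way to enforce invariance; in practice, equivariant network layers achieve the same guarantee more efficiently.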

The research project was led by Shanahan and includes several members of the Theoretical Physics team at NYU, as well as machine-learning researchers from Google DeepMind. The recent study is just one of a series of ongoing and recently completed studies aimed at leveraging the power of machine learning to solve theoretical physics problems that are currently impossible with modern computation schemas. According to MIT graduate student Gurtej Kanwar, the problems that the machine-learning-boosted algorithms are trying to solve will help scientists understand more about particle physics, and they are useful in making comparisons against results derived from large-scale particle physics experiments (like those conducted at CERN’s Large Hadron Collider). By comparing the results of the large-scale experiments with the AI algorithms, scientists can get a better idea of how their physics models should be constrained, and when those models break down.

Currently, the only method scientists can reliably use to investigate the Standard Model of particle physics is to take samples, or snapshots, of fluctuations occurring in a vacuum. From these, researchers can gain insight into the properties of particles and what happens when those particles collide. However, taking samples like this is expensive, and it is hoped that AI techniques can make sampling a cheaper, more efficient process. The snapshots taken of the vacuum can be used much like image training data in a computer vision model: the quantum snapshots are used to train a model that can create samples far more efficiently, by drawing samples in an easy-to-sample space and running them through the trained model.
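The sample-then-correct idea resembles independence Metropolis sampling: proposals drawn from an easy distribution (here a plain Gaussian, standing in for the output of a trained model) are accepted or rejected so that the chain still targets the correct distribution. The toy sketch below uses an assumed one-dimensional double-well density rather than a real field theory, and is only an illustration of the principle:

```python
import math
import random

def log_target(x):
    # Log of an unnormalized density we want samples from:
    # a double-well potential, a toy stand-in for a field-theory action.
    return -x**4 + 2 * x**2

SIGMA = 1.5  # width of the easy-to-sample Gaussian proposal

def propose(rng):
    # Draw from the easy base distribution and return the sample together
    # with its log-density, as a trained sampling model would.
    x = rng.gauss(0, SIGMA)
    logq = -x**2 / (2 * SIGMA**2) - math.log(SIGMA * math.sqrt(2 * math.pi))
    return x, logq

def independence_metropolis(n_steps, seed=42):
    # Accept/reject independent proposals so the chain's stationary
    # distribution is the target, correcting for proposal mismatch.
    rng = random.Random(seed)
    x, logq = propose(rng)
    samples, accepted = [], 0
    for _ in range(n_steps):
        x_new, logq_new = propose(rng)
        log_ratio = (log_target(x_new) - logq_new) - (log_target(x) - logq)
        if log_ratio >= 0 or math.log(rng.random()) < log_ratio:
            x, logq = x_new, logq_new
            accepted += 1
        samples.append(x)
    return samples, accepted / n_steps

samples, acc_rate = independence_metropolis(5000)
```

The better the trained proposal matches the target, the higher the acceptance rate, which is where the efficiency gain over traditional sampling comes from.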

The research has produced a framework intended to streamline the process of creating machine-learning models based on physics symmetries. The framework has already been applied to simpler physics problems, and the research team is now attempting to scale up the approach to work with cutting-edge calculations. As Kanwar explained:

“I think we have shown over the past year that there is a lot of promise in combining physics knowledge with machine learning techniques. We are actively thinking about how to tackle the remaining barriers in the way of performing full-scale simulations using our approach. I hope to see the first application of these methods to calculations at scale in the next couple of years.”



IoT Enhanced Processors Increase Performance, AI, & Security




Today at the Intel Industrial Summit 2020, Intel announced new enhanced internet of things (IoT) capabilities. The 11th Gen Intel® Core™ processors, Intel® Atom® x6000E Series, and Intel® Pentium® and Celeron® N and J Series bring new artificial intelligence (AI), security, functional safety and real-time capabilities to edge customers. With a robust hardware and software portfolio, an unparalleled ecosystem and 15,000 customer deployments globally, Intel is providing solutions for an edge silicon market expected to be a $65 billion opportunity by 2024.

“By 2023, up to 70% of all enterprises will process data at the edge[1]. 11th Gen Intel Core processors, Intel Atom x6000E Series and Intel Pentium and Celeron N and J Series processors represent our most significant step forward yet in enhancements for IoT, bringing features that address our customers’ current needs, while setting the foundation for capabilities with advancements in AI and 5G.” — John Healy, Intel vice president in the Internet of Things Group and general manager of Platform Management and Customer Engineering

Why it Matters

Intel works closely with customers to build proof of concepts, optimize solutions and collect feedback along the way. Innovations delivered with 11th Gen Intel Core processors, Intel Atom x6000E Series and Intel Pentium and Celeron N and J Series processors are a response to challenges felt across the IoT industry: edge complexity, total cost of ownership (TCO) and a range of environmental conditions.

Combining a common and seamless developer experience with software and tools like the Edge Software Hub’s Edge Insights for Industrial and the Intel® Distribution of OpenVINO™ toolkit, Intel helps customers and developers get to market faster and deliver more powerful outcomes with optimized, containerized packages to enable sensing, vision, automation and other transformative edge applications. For example, the combination of 11th Gen’s SuperFin process improvements, other architectural enhancements and OpenVINO software toolkit optimizations delivers up to 50% faster inferences per second than the previous 8th Gen processor using CPU mode, or up to 90% faster using its integrated GPU-accelerated mode.

11th Gen Core

Building on the recently announced client processors, 11th Gen Core processors are enhanced specifically for essential IoT applications that require high-speed processing, computer vision and low-latency deterministic computing. They bring up to a 23% gain in single-thread performance, a 19% gain in multi-thread performance and up to a 2.95x gain in graphics performance gen on gen[2]. New dual video decode boxes allow the processor to ingest up to 40 simultaneous video streams at 1080p 30 frames per second and output up to four channels of 4K or two channels of 8K video. AI inferencing algorithms can run on up to 96 graphics execution units (INT8) or on the CPU with built-in VNNI. With Intel Time Coordinated Computing and Time Sensitive Networking technologies, these processors meet real-time computing demands while delivering deterministic performance across a variety of use cases:

  • Industrial sector: Mission-critical control systems (PLC, robotics, etc.), industrial PCs and human-machine interfaces.
  • Retail, banking and hospitality: Intelligent, immersive digital signage, interactive kiosks and automated checkout.
  • Healthcare: Next-generation medical imaging devices with high-resolution displays and AI-powered diagnostics.
  • Smart city: Smart network video recorders with onboard AI inferencing and analytics.

Intel’s 11th Gen already has over 90 partners committed to delivering solutions to meet customers’ demands.

About Intel Atom x6000E Series and Intel Pentium and Celeron N and J Series

These represent Intel’s first processor platform enhanced for IoT. They deliver enhanced real-time performance and efficiency, up to 2x better 3D graphics[3], a dedicated real-time offload engine, the Intel® Programmable Services Engine (which supports out-of-band and in-band remote device management), enhanced I/O and storage options and integrated 2.5GbE time-sensitive networking (TSN). They can support 4Kp60 resolution on up to three simultaneous displays, meet strict functional safety requirements with the Intel® Safety Island and include built-in hardware-based security. These processors[4] have a variety of use cases, including:

  • Industrial: Real-time control systems and devices that meet functional safety requirements for industrial robots and for chemical, oil field and energy grid control applications.
  • Transportation: Vehicle controls, fleet monitoring and management systems that synchronize inputs from multiple sensors and direct actions in semiautonomous buses, trains, ships and trucks.
  • Healthcare: Medical displays, carts, service robots, entry-level ultrasound machines, gateways and kiosks that require AI and computer vision with reduced energy consumption.
  • Retail and hospitality: Fixed and mobile point-of-sale systems for retail and quick-service restaurants with high-resolution graphics.

Intel Atom x6000E Series and Intel Pentium and Celeron N and J Series already have over 100 partners committed to delivering solutions.



Huma Abidi, Senior Director of AI Software Products at Intel – Interview Series




Photo by O’Reilly Media

Huma Abidi is a Senior Director of AI Software Products at Intel, responsible for strategy, roadmaps, requirements, machine learning and analytics software products. She leads a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions. Huma joined Intel as a software engineer and has since worked in a variety of engineering, validation and management roles in the area of compilers, binary translation, and AI and deep learning. She is passionate about women’s education, supporting several organizations around the world for this cause, and was a finalist for VentureBeat’s 2019 Women in AI award in the mentorship category.

What initially sparked your interest in AI?

I’ve always found it interesting to imagine what could happen if machines could speak, or see, or interact intelligently with humans. Because of some big technical breakthroughs in the last decade, including deep learning gaining popularity because of the availability of data, compute power, and algorithms, AI has now moved from science fiction to real world applications. Solutions we had imagined previously are now within reach. It is truly an exciting time!

In my previous job, I was leading a Binary Translation engineering team, focused on optimizing software for Intel hardware platforms. At Intel, we recognized that the developments in AI would lead to huge industry transformations, demanding tremendous growth in compute power from devices to Edge to cloud and we sharpened our focus to become a data-centric company.

Realizing the need for powerful software to make AI a reality, the first challenge I took on was to lead the team in creating AI software to run efficiently on Intel Xeon CPUs by optimizing deep learning frameworks like Caffe and TensorFlow. We were able to demonstrate more than 200-fold performance increases due to a combination of Intel hardware and software innovations.

We are working to make all of our customer workloads in various domains run faster and better on Intel technology.


What can we do as a society to attract women to AI?

It’s a priority for me and for Intel to get more women into STEM and computer science in general, because diverse groups will build better products for a diverse population. It’s especially important to get more women and underrepresented minorities into AI, because of the biases that a lack of representation can introduce when creating AI solutions.

In order to attract women, we need to do a better job explaining to girls and young women how AI is relevant in the world, and how they can be part of creating exciting and impactful solutions. We need to show them that AI spans so many different areas of life, and that they can use AI technology in their domain of interest, whether it’s art or robotics or data journalism or television. There are exciting applications of AI they can easily see making an impact: virtual assistants like Alexa, self-driving cars, social media, how Netflix knows which movies they want to watch, and so on.

Another key part of attracting women is representation. Fortunately, there are many women leaders in AI who can serve as excellent role models, including Fei-Fei Li, who is leading human-centered AI at Stanford, and Meredith Whittaker, who is working on social implications through the AI Now Institute at NYU.

We need to work together to adopt inclusive business practices and expand access of technology skills to women and underrepresented minorities. At Intel, our 2030 goal is to increase women in technical roles to 40% and we can only achieve that by working with other companies, institutes, and communities.


How can women best break into the industry?  

There are a few options if you want to break into AI specifically. There are numerous online courses in AI, including Udacity’s free Intel Edge AI Fundamentals course. Or you could go back to school (for example, at one of Maricopa County’s community colleges for an AI associate degree) and study for a career in AI as a data scientist, data engineer, ML/DL developer or software engineer.

If you already work at a tech company, there are likely already AI teams. You could check out the option to spend part of your time on an AI team that you’re interested in.

You can also work on AI if you don’t work at a tech company. AI is extremely interdisciplinary, so you can apply AI to almost any domain you’re involved in. As AI frameworks and tools evolve and become more user-friendly, it becomes easier to use AI in different settings. Joining online events like Kaggle competitions is a great way to work on real-world machine learning problems that involve data sets you find interesting.

The tech industry also needs to put in time, effort, and money to reach out to and support women, including women who are also underrepresented ethnic minorities. On a personal note, I’m involved in organizations like Girls Who Code and Girl Geek X, which connect and inspire young women.


With deep learning and reinforcement learning recently gaining the most traction, what other forms of machine learning should women pay attention to?

AI and machine learning are still evolving, and exciting new research papers are being published regularly. Some areas to focus on right now include:

  1. Classical ML techniques, which continue to be important and are widely used.
  2. Responsible/explainable AI, which has become a critical part of the AI lifecycle, particularly for the deployability of deep learning and reinforcement learning models.
  3. Graph neural networks and multi-modal learning, which derive insights by learning from the rich relational information in graph data.


AI bias is a huge societal issue when it comes to bias towards women and minorities. What are some ways of solving these issues?

When it comes to AI, biases in training samples, human labelers and teams can be compounded to discriminate against diverse individuals, with serious consequences.

It is critical that diversity is prioritized at every step of the process. If women and other minorities from the community are part of the teams developing these tools, they will be more aware of what can go wrong.

It is also important to make sure to include leaders across multiple disciplines such as social scientists, doctors, philosophers and human rights experts to help define what is ethical and what is not.


Can you explain the AI blackbox problem, and why AI explainability is important?

In AI, models are trained on massive amounts of data before they make decisions. In most AI systems, we don’t know how these decisions were made — the decision-making process is a black box, even to its creators. And it may not be possible to really understand how a trained AI program is arriving at its specific decision. A problem arises when we suspect that the system isn’t working. If we suspect the system of algorithmic biases, it’s difficult to check and correct for them if the system is unable to explain its decision making.
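One common model-agnostic probe for black-box behavior is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. This is a general technique, not something discussed in the interview, and the model below is a hand-built stand-in rather than a real trained network:

```python
import random

def black_box_model(row):
    # A hand-built "black box": predicts 1 when feature 0 exceeds 0.5.
    # Feature 1 is ignored, so a faithful probe should score it at zero.
    return 1 if row[0] > 0.5 else 0

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(model, rows, labels, feature, seed=0):
    # Shuffle one feature column and measure the drop in accuracy.
    # A large drop means the model relies heavily on that feature.
    rng = random.Random(seed)
    column = [r[feature] for r in rows]
    rng.shuffle(column)
    shuffled = [list(r) for r in rows]
    for r, v in zip(shuffled, column):
        r[feature] = v
    return accuracy(model, rows, labels) - accuracy(model, shuffled, labels)

rng = random.Random(1)
rows = [[rng.random(), rng.random()] for _ in range(200)]
labels = [1 if r[0] > 0.5 else 0 for r in rows]
imp0 = permutation_importance(black_box_model, rows, labels, feature=0)
imp1 = permutation_importance(black_box_model, rows, labels, feature=1)
# imp0 is large (the model depends on feature 0); imp1 is zero.
```

Probes like this do not open the black box, but they do reveal which inputs drive its decisions, which is a first step toward auditing a model for bias.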

There is currently a major research focus on eXplainable AI (XAI) that intends to equip AI models with transparency, explainability and accountability, which will hopefully lead to Responsible AI.


In your keynote address during MITEF Arab Startup Competition final award ceremony and conference you discussed Intel’s AI for Social Good initiatives. Which of these Social Good projects has caught your attention and why is it so important?

I continue to be very excited about all of Intel’s AI for Social Good initiatives, because breakthroughs in AI can lead to transformative changes in the way we tackle problem solving.

One that I especially care about is the Wheelie, an AI-powered wheelchair built in partnership with HOOBOX Robotics. The Wheelie allows extreme paraplegics to regain mobility by using facial expressions to drive. Another amazing initiative is TrailGuard AI, which uses Intel AI technology to fight illegal poaching and protect animals from extinction and species loss.

As part of Intel’s Pandemic Response Initiative, we have many on-going projects with our partners using AI. One key initiative is contactless fever detection or COVID-19 detection via chest radiography with Darwin AI. We’re also working on bots that can answer queries to increase awareness using natural language processing in regional languages.


For women who are interested in getting involved, are there books, websites, or other resources that you would recommend?  

There are many great resources online, for all experience levels and areas of interest. Coursera and Udacity offer excellent online courses on machine learning and deep learning, most of which can be audited for free. MIT’s OpenCourseWare is another great, free way to learn from some of the world’s best professors.

Companies such as Intel have AI portals that contain a lot of information about AI, including offered solutions. There are many great books on AI: foundational computer science texts like Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig, and modern, philosophical books like Homo Deus by historian Yuval Harari. I’d also recommend Lex Fridman’s AI podcast, which features great conversations with experts from a wide range of fields.


Do you have any last words for women who are curious about AI but are not yet ready to leap in?

AI is the future, and will change our society — in fact, it already has. It’s essential that we have honest, ethical people working on it. Whether in a technical role, or at a broader social level, now is a perfect time to get involved!

Thank you for the interview, you are certainly an inspiration for women the world over. Readers who wish to learn more about the software solutions at Intel should visit AI Software Products at Intel.
