
Computing

Researchers Imitate Brain Neurons Using Semiconductor Material 


Computer chips are one of the most important components of artificial intelligence (AI). These powerful little pieces of hardware are foundational to automatic image recognition and help teach robots activities such as walking. With the increasing potential of AI technology, today’s computer chips need to be both extremely powerful and economical, which is difficult to accomplish.

Since conventional microelectronics can only be optimized so much due to physical limitations, researchers have turned to the human brain, as they often do, for inspiration on how to process and store information more efficiently. 

Scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have, for the first time, successfully imitated the workings of brain neurons using semiconductor materials.

The research was published in the journal Nature Electronics. 

The work, whose three primary authors include HZDR physicist Larysa Baraban, was an international collaboration between six institutions.

Today’s Microelectronics vs the Artificial Neuron

The technique most often used today to improve the performance of microelectronics is to reduce component size. In silicon computer chips, that means shrinking the individual transistors.

According to Baraban, “That can’t go on indefinitely — we need new approaches.” 

The researchers set out to mimic the brain and create an artificial neuron that could combine data processing and data storage.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban says. “So we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neuron transistor.”

This approach allows storage and information processing to happen simultaneously within a single component. In today’s most widely used transistor technology, the two processes are separated, resulting in slower processing times and performance limitations.

The Human Brain

Researchers have been working on constructing computers based on the human brain for many years, largely without success. Some of the first attempts involved linking nerve cells to electronics in Petri dishes, but as Gianaurelio Cuniberti, Professor for Materials Science and Nanotechnology at TU Dresden, puts it, “a wet computer chip that has to be fed all the time is of no use to anybody.”

The team of researchers was successful in implementing the neurotransistor. 

“We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” says Cuniberti. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect. The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
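
To make the storage effect easier to picture, here is a minimal toy simulation in Python. It is purely illustrative and not drawn from the published device data: the gain and decay parameters are assumptions, chosen only so that repeated excitation visibly strengthens the connection while it slowly relaxes between pulses.

# Toy model of hysteresis-style learning in an artificial neuron transistor.
# Illustrative only: the parameters are assumptions, not measured device values.
def simulate(pulses, gain=0.15, decay=0.02, rest_steps=5):
    """Return the conductance history after a train of excitation pulses."""
    g = 0.0            # normalized conductance, i.e. the strength of the connection
    history = []
    for _ in range(pulses):
        g = min(1.0, g + gain * (1.0 - g))   # excitation opens the transistor a bit more
        for _ in range(rest_steps):          # slow, delayed return toward baseline (hysteresis)
            g = max(0.0, g - decay * g)
        history.append(g)
    return history

if __name__ == "__main__":
    print(f"after  3 pulses: {simulate(3)[-1]:.2f}")
    print(f"after 20 pulses: {simulate(20)[-1]:.2f}")   # more excitation, stronger connection

The more often the toy device is excited, the higher its conductance settles, which mirrors the qualitative behavior Cuniberti describes: the connection strengthens, and the system learns.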

According to the team, the chip would be less precise, estimating mathematical computations rather than calculating them down to the last decimal.

“But they would be more intelligent,” Cuniberti says. “For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” 

Another major benefit of this type of computer is that its plasticity allows it to change and adapt during operation. Much like the human brain, the computer can encounter and solve problems it was never programmed for in the first place.

 


Computing

Huma Abidi, Senior Director of AI Software Products at Intel – Interview Series




Huma Abidi is a Senior Director of AI Software Products at Intel, responsible for strategy, roadmaps, requirements, machine learning and analytics software products. She leads a globally diverse team of engineers and technologists responsible for delivering world-class products that enable customers to create AI solutions. Huma joined Intel as a software engineer and has since worked in a variety of engineering, validation and management roles in the area of compilers, binary translation, and AI and deep learning. She is passionate about women’s education, supporting several organizations around the world for this cause, and was a finalist for VentureBeat’s 2019 Women in AI award in the mentorship category.

What initially sparked your interest in AI?

I’ve always found it interesting to imagine what could happen if machines could speak, or see, or interact intelligently with humans. Because of some big technical breakthroughs in the last decade, including deep learning gaining popularity because of the availability of data, compute power, and algorithms, AI has now moved from science fiction to real world applications. Solutions we had imagined previously are now within reach. It is truly an exciting time!

In my previous job, I led a Binary Translation engineering team focused on optimizing software for Intel hardware platforms. At Intel, we recognized that the developments in AI would lead to huge industry transformations, demanding tremendous growth in compute power from devices to edge to cloud, and we sharpened our focus to become a data-centric company.

Realizing the need for powerful software to make AI a reality, the first challenge I took on was to lead the team in creating AI software to run efficiently on Intel Xeon CPUs by optimizing deep learning frameworks like Caffe and TensorFlow. We were able to demonstrate more than 200-fold performance increases due to a combination of Intel hardware and software innovations.
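
For readers curious what CPU-side tuning can look like in practice, the snippet below is a rough, generic sketch rather than the specific optimizations Intel shipped in Caffe or TensorFlow: it enables TensorFlow's oneDNN-backed CPU kernels and sets thread parallelism. The thread counts are assumptions for a hypothetical two-socket Xeon machine.

# Generic sketch of CPU tuning for TensorFlow; not Intel's actual optimization work.
# The thread counts below are assumptions for a hypothetical two-socket Xeon system.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"   # use oneDNN-optimized CPU kernels (set before importing TF)

import tensorflow as tf

# Roughly: intra-op threads ~ physical cores per socket, inter-op threads ~ sockets.
tf.config.threading.set_intra_op_parallelism_threads(24)
tf.config.threading.set_inter_op_parallelism_threads(2)

print("intra-op threads:", tf.config.threading.get_intra_op_parallelism_threads())
print("inter-op threads:", tf.config.threading.get_inter_op_parallelism_threads())

Framework-level work like Intel's goes much deeper (vectorized kernels, graph optimizations, quantization), but knobs like these are where many practitioners start.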

We are working to make all of our customer workloads in various domains run faster and better on Intel technology.

 

What can we do as a society to attract women to AI?

It’s a priority for me and for Intel to get more women into STEM and computer science in general, because diverse groups will build better products for a diverse population. It’s especially important to get more women and underrepresented minorities into AI, because of the potential biases that a lack of representation can introduce when creating AI solutions.

In order to attract women, we need to do a better job explaining to girls and young women how AI is relevant in the world, and how they can be part of creating exciting and impactful solutions. We need to show them that AI spans so many different areas of life, and they can use AI technology in their domain of interest, whether it’s art or robotics or data journalism or television. There are exciting applications of AI they can easily see making an impact: virtual assistants like Alexa, self-driving cars, social media, the way Netflix knows which movies they want to watch, and so on.

Another key part of attracting women is representation. Fortunately, there are many women leaders in AI who can serve as excellent role models, including Fei-Fei Li, who is leading human-centered AI at Stanford, and Meredith Whittaker, who is working on social implications through the AI Now Institute at NYU.

We need to work together to adopt inclusive business practices and expand access to technology skills for women and underrepresented minorities. At Intel, our 2030 goal is to increase women in technical roles to 40%, and we can only achieve that by working with other companies, institutes, and communities.

 

How can women best break into the industry?  

There are a few options if you want to break into AI specifically. There are numerous online courses in AI, including Udacity’s free Intel Edge AI Fundamentals course. Or you could go back to school, for example for an AI associate degree at one of Maricopa County’s community colleges, and study for a career in AI as a data scientist, data engineer, ML/DL developer, software engineer, and so on.

If you already work at a tech company, there are likely AI teams there already, and you could explore spending part of your time on one that interests you.

You can also work on AI if you don’t work at a tech company. AI is extremely interdisciplinary, so you can apply AI to almost any domain you’re involved in. As AI frameworks and tools evolve and become more user-friendly, it becomes easier to use AI in different settings. Joining online events like Kaggle competitions is a great way to work on real-world machine learning problems that involve data sets you find interesting.

The tech industry also needs to put in time, effort, and money to reach out to and support women, including women who are also underrepresented ethnic minorities. On a personal note, I’m involved in organizations like Girls Who Code and Girl Geek X, which connect and inspire young women.

 

With deep learning and reinforcement learning recently gaining the most traction, what other forms of machine learning should women pay attention to?

AI and machine learning are still evolving, and exciting new research papers are being published regularly. Some areas to focus on right now include:

  1. Classical ML techniques, which continue to be important and widely used (a minimal sketch follows this list).
  2. Responsible/explainable AI, which has become a critical part of the AI lifecycle, especially from the point of view of deploying deep learning and reinforcement learning models.
  3. Graph neural networks and multi-modal learning, which derive insights from the rich relational information in graph data.
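
As a minimal sketch of the first item, classical ML is often only a few lines with a library such as scikit-learn; the dataset and model below are arbitrary choices made for illustration.

# Minimal classical-ML sketch (item 1 above); dataset and model are arbitrary choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")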

 

AI bias is a huge societal issue, particularly when it comes to bias against women and minorities. What are some ways of solving these issues?

When it comes to AI, biases in training samples, human labelers and teams can be compounded to discriminate against diverse individuals, with serious consequences.

It is critical that diversity is prioritized at every step of the process. If women and other minorities from the community are part of the teams developing these tools, they will be more aware of what can go wrong.

It is also important to make sure to include leaders across multiple disciplines such as social scientists, doctors, philosophers and human rights experts to help define what is ethical and what is not.

 

Can you explain the AI blackbox problem, and why AI explainability is important?

In AI, models are trained on massive amounts of data before they make decisions. In most AI systems, we don’t know how these decisions were made — the decision-making process is a black box, even to its creators. And it may not be possible to really understand how a trained AI program is arriving at its specific decision. A problem arises when we suspect that the system isn’t working. If we suspect the system of algorithmic biases, it’s difficult to check and correct for them if the system is unable to explain its decision making.

There is currently a major research focus on eXplainable AI (XAI), which aims to equip AI models with transparency, explainability, and accountability, and which will hopefully lead to Responsible AI.
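
As one small, concrete example of an explainability technique (a generic sketch, not tied to any particular XAI program), permutation importance measures how much a trained model's score drops when each input feature is shuffled, hinting at which features the black box actually relies on.

# Minimal explainability sketch: permutation importance with scikit-learn.
# Generic illustration; the dataset and model are arbitrary choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# The features whose shuffling hurts the score most are the ones the model leans on.
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")

Techniques like this do not fully open the black box, but they give creators and auditors a handle for checking whether a model's decisions rest on sensible signals.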

 

In your keynote address at the MITEF Arab Startup Competition’s final award ceremony and conference, you discussed Intel’s AI for Social Good initiatives. Which of these Social Good projects has caught your attention, and why is it so important?

I continue to be very excited about all of Intel’s AI for Social Good initiatives, because breakthroughs in AI can lead to transformative changes in the way we tackle problem solving.

One that I especially care about is the Wheelie, an AI-powered wheelchair built in partnership with HOOBOX Robotics. The Wheelie allows extreme paraplegics to regain mobility by using facial expressions to drive. Another amazing initiative is TrailGuard AI, which uses Intel AI technology to fight illegal poaching and protect animals from extinction and species loss.

As part of Intel’s Pandemic Response Initiative, we have many ongoing projects with our partners using AI. One key initiative is contactless fever detection or COVID-19 detection via chest radiography with Darwin AI. We’re also working on bots that use natural language processing in regional languages to answer queries and increase awareness.

 

For women who are interested in getting involved, are there books, websites, or other resources that you would recommend?  

There are many great resources online, for all experience levels and areas of interest. Coursera and Udacity offer excellent online courses on machine learning and deep learning, most of which can be audited for free. MIT’s OpenCourseWare is another great, free way to learn from some of the world’s best professors.

Companies such as Intel have AI portals that contain a lot of information about AI, including the solutions they offer. There are many great books on AI: foundational computer science texts like Artificial Intelligence: A Modern Approach by Peter Norvig and Stuart Russell, and modern, philosophical books like Homo Deus by historian Yuval Harari. I’d also recommend Lex Fridman’s AI podcast, which features great conversations with experts from a wide range of fields and perspectives.

 

Do you have any last words for women who are curious about AI but are not yet ready to leap in?

AI is the future, and will change our society — in fact, it already has. It’s essential that we have honest, ethical people working on it. Whether in a technical role, or at a broader social level, now is a perfect time to get involved!

Thank you for the interview, you are certainly an inspiration for women the world over. Readers who wish to learn more about the software solutions at Intel should visit AI Software Products at Intel.


Computing

Appen’s State of AI Annual Report Reveals Significant Industry Growth



Appen Limited (ASX: APX), the leading provider of high-quality training data for organizations that build effective AI systems at scale, today announced its annual State of AI Report for 2020.

The State of AI 2020 report is the output of a cross-industry, large-organization study of senior business leaders and technologists. The survey intended to examine and identify the main characteristics of the expanding AI and machine learning landscape by gathering responses from AI decision-makers.

There were multiple key takeaways:

  • While nearly 3 out of 4 organizations said AI is critical to their business, nearly half feel their organization is behind in their AI journey.
  • AI budgets greater than $5M doubled year over year.
  • An increasing number of enterprises are getting behind responsible AI as a component to business success, but only 25% of companies said unbiased AI is mission-critical.
  • 3 out of 4 organizations report updating their AI models at least quarterly, signifying a focus on the model’s life after deployment.
  • The gap between business leaders and technologists continues, despite their alignment being instrumental in building a strong AI infrastructure.
  • Despite turbulent times, more than two-thirds of respondents do not expect any negative impact from COVID-19 on their AI strategies.

One of the key findings is that nearly half of respondents feel their company is behind in its AI journey, which suggests a critical gap between the strategic need and the ability to execute.

Lack of data and poor data management were reported as main challenges. This includes training data, which is foundational to AI and ML model deployments, so, unsurprisingly, 93% of companies report that high-quality training data is important to successful AI.

Organizations also reported using 25% more data types (text, image, video, audio, etc.) in 2020 compared to 2019. Not only are models getting more frequent updates, but teams are using more data types, and that will translate into a growing need for investment in reliable training data.

One key indicator of the exponential growth of AI was the rapid YoY growth in AI initiatives. In 2019, only 39% of executives owned AI initiatives. In 2020, executive ownership of AI skyrocketed to 71%. With this increase in executive ownership, the number of organizations reporting budgets greater than $5M also doubled.

Global cloud providers gained significant traction as data science and ML tools compared to 2019, which may be due to increased budgets and executive oversight. Even more impressive is the share of respondents reporting use of global cloud machine learning providers, identified as Microsoft Azure (49%), Google Cloud (36%), IBM Watson (31%), AWS (25%), and Salesforce Einstein (17%). Each of these front-runners saw double-digit adoption increases versus 2019, suggesting that as more companies move to scale, they’re looking for solutions that can scale with them.

Something AI developers may want to take note of is that the mix of languages used to build models has also shifted since 2019. While Python remains the most-used language in both 2019 and 2020, SQL and R were the second and third most commonly used languages in 2019. In 2020, however, Java, C/C++, and JavaScript gained significant traction. Python, R, and SQL are often indicative of the pilot stage, while Java, C/C++, and JavaScript are more typical of production-stage systems.

To learn more, we recommend downloading the entire State of AI and Machine Learning Report.


Computing

Vijay Kurkal, Chief Executive Officer for Resolve – Interview Series



Vijay Kurkal serves as the CEO for Resolve where he oversees the strategic growth of the company as it helps maximize the potential of AIOps and IT automation in enterprises around the world. Vijay has a long history in the tech industry, having spent the last twenty years working with numerous software and hardware companies that have run the gamut from mainframe to bleeding-edge, emerging tech. Before joining Resolve, he held leadership positions at IBM, VMware, Bain & Company, and Insight Partners, playing a critical role in accelerating the growth of a wide array of technology companies and introducing state-of-the-art product lines.

You’ve been leading Resolve since 2018, first as COO, and now as CEO. What initially drew you to this company?

There’s a huge need for automation and AIOps today given the challenges that enterprise IT organizations face. These teams are managing increasingly complex, highly virtualized, hybrid environments and are tasked with rapidly implementing new technologies to stay competitive. Without the aid of tools like automation and AIOps, it’s impossible to effectively manage these environments, and the complexity is only going to grow.

Given the tremendous market opportunity, I was immediately drawn to Resolve’s deep roots in automation. Drawing on my 20 years of experience with a wide range of tech companies, I am incredibly excited about the possibility for automation and AIOps to truly transform IT operations. These technologies are game changers for companies — not just to survive, but to thrive in the current environment. As we’ve seen over the last few months as digital transformation has rapidly accelerated, automation is absolutely necessary to succeed. Resolve is uniquely positioned to meet these needs and usher in the next generation of IT operations.

 

How would you best describe what Resolve offers IT companies?

By combining cutting-edge AIOps capabilities with our industry-leading automation platform, Resolve helps IT teams achieve more agile, autonomous IT operations even as infrastructure continues to expand in scope and complexity. Our unified product offers a closed loop of discovery, analysis, detection, prediction, and automation, including prebuilt automations that can be autonomously triggered by AIOps insights to stay ahead of problems and lighten the load on IT organizations.

Our goal is to help ITOps, NetOps, and Service Desk teams meet the growing demands on IT, streamline operations, reduce costs, improve MTTR and performance, and accelerate service delivery through the power of automation and AIOps.

 

For readers who are not familiar with the term AIOps, can you explain what this term describes and what makes it so important?

AIOps – or AI for IT Operations – helps streamline the management of complex, hybrid IT environments by deploying AI, machine learning and advanced analytics to aggregate, analyze, and contextualize tremendous amounts of data amassed from various sources across the IT ecosystem. These insights facilitate the identification of existing or potential performance issues while spotting anomalies and pinpointing the probable root cause of incidents. Over time, machine learning can predict future issues and proactively automate fixes before they affect the business.
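
To make one ingredient of such a pipeline concrete, here is a deliberately simple anomaly-detection sketch: it flags metric samples that drift several standard deviations away from a rolling baseline. It is a toy illustration of the general idea, not any vendor's implementation, and the window size and threshold are assumptions.

# Toy AIOps ingredient: flag anomalies in a metric stream with a rolling z-score.
# Generic illustration; the window and threshold values are assumptions.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) for points more than `threshold` std devs from the rolling mean."""
    recent = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        recent.append(value)

if __name__ == "__main__":
    cpu = [40 + (i % 5) for i in range(100)]   # steady baseline CPU utilization (%)
    cpu[70] = 98                               # simulated spike
    print(list(detect_anomalies(cpu)))         # -> [(70, 98)]

Real AIOps platforms layer far more on top, such as correlation, root-cause analysis, and prediction, but the core pattern of learning a baseline and flagging deviations is the same.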

Additionally, most AIOps tools offer advanced correlation capabilities that help IT pros determine how alarms are related, reducing noise by grouping similar events and bringing the true issues to light, so people can focus on what matters most. Some AIOps solutions also perform auto-discovery and dependency mapping to provide deep visibility into how entities are connected to one another, and how outages might impact critical business services. This offers a wide range of benefits, from keeping your CMDB up to date and accurate to accelerating incident response and simplifying troubleshooting, change management, and compliance.

 

What are some of the data challenges faced by IT companies?

By far the biggest data challenge IT organizations face is managing increased complexity caused by exponential infrastructure growth and the daily onslaught of new technologies. Data volumes and alarm noise created by infrastructure growth have far exceeded human capacity to find the needle in the proverbial IT haystack. Gartner estimates a two- to three-fold increase in data volume growth per year. To survive in this dynamic environment, it’s critical for IT organizations to embrace AIOps and automation to help them cope with massive amounts of data and to streamline management of new technologies.

 

How can businesses overcome these challenges using Resolve?

Resolve enables businesses to manage increasing IT complexity with fewer resources through the powerful combination of AIOps and automation. The platform is designed to provide immediate relief, as well as long-term value.

Unlike many other AIOps solutions on the market, customers don’t have to wait months to start seeing value with Resolve. In fact, customers get value in literally minutes with Resolve’s automated discovery and dependency mapping. These capabilities enable us to generate complete infrastructure visualizations, detailed cross-domain topology maps, application dependency maps, and comprehensive views of inventory. Additionally, Resolve ingests data from many other tools (such as monitoring, event management, ITSM, and logging solutions) and aggregates it with telemetry data collected natively by our own platform. This allows customers to achieve the much-sought-after ‘single pane of glass’ that they need to effectively manage complex, hybrid infrastructure, and it provides significantly richer (and complete) visibility across domains.

Over the course of several weeks, these insights are enhanced and enriched as Resolve “learns” the environment and leverages machine learning to perform activities like event correlation and clustering, predictive analytics, multivariate anomaly detection, dynamic thresholding, and autonomous remediation – making the product exponentially more intelligent (and valuable) over time.

Our enterprise-class automation capabilities can take action on insights from the AIOps components or can be used independently. Built for the scale and complexity of modern, hybrid environments, the platform can handle everything from simple tasks to very complex processes that go well beyond the capacity of other tools. Combining AIOps with this level of automation offers an unparalleled ability to autonomously predict, prevent, and fix issues before they impact the business, and to radically improve overall operational efficiency.

 

Can you describe how Resolve makes it easier to investigate security incidents?

Resolve’s automated incident validation quickly determines which alarms are actual threats versus those that are simply false positives. Hours of manual effort are eliminated by automatically collecting data across the IT environment and security tools, including SIEMs, threat feeds, antivirus systems, and logs. All of that data gets unified into a customizable dashboard, so it’s easy to see the problem and determine how to fix it. Resolve centralizes orchestration of the end-to-end triage and investigation workflows to ensure that issues can be addressed quickly. We also capture a full audit trail of incident investigation steps and results to support compliance and governance.

 

One of the features of Resolve is that it enables IT professionals to cut through the ‘noise’ and focus on the real problems it highlights. Can you discuss this?

IT pros are bombarded with alarm noise coming in from multiple systems. It’s hard to know where to focus since many of these alarms are false positives, and many others ultimately derive from the same underlying problem.

Take for example the case of an e-commerce system failing. Alarm bells will start ringing everywhere as IT pros frantically sort through multiple data sources to determine whether it’s the network, application, or one of many underlying pieces of infrastructure or services causing the problem. It could take hours to determine that the culprit was high CPU utilization that led to a slowing database and ultimately the failure of the e-commerce system. Even worse, with all of the alarm noise, the IT team might miss the events related to the e-commerce system altogether and instead focus on a much lower-priority issue that isn’t revenue-related.

Resolve eliminates alarm noise by performing event correlation and clustering. Clustering machine learning algorithms are used to identify and group events (across systems and domains) that usually occur together, which dramatically compresses event volumes. Our platform also leverages sequential pattern analysis and time-series event correlation. Millions of events across applications and infrastructure are normalized and sequenced in a time series and then analyzed by machine learning to identify patterns. These patterns enable Resolve to reduce alarm noise and help pinpoint root cause, as well as proactively detect problems before they happen. Additionally, the time-series correlations can be leveraged to play back all of the events that occurred in a time period leading up to an outage.
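
As a rough, generic illustration of event clustering (not Resolve's actual correlation engine), the sketch below groups alarms that fire close together in time using DBSCAN, so a burst of related alarms collapses into a single incident while an unrelated alarm stays separate. The event list and the 30-second correlation window are assumptions.

# Generic event-clustering sketch, not Resolve's implementation.
# Alarms that fire within a short time window are grouped into one incident.
import numpy as np
from sklearn.cluster import DBSCAN

# (timestamp_seconds, alarm_name) pairs for a simulated alarm storm (assumed data).
events = [
    (0, "db-cpu-high"), (4, "db-slow-queries"), (7, "app-latency"),
    (9, "ecommerce-checkout-errors"), (300, "unrelated-disk-warning"),
]

timestamps = np.array([[t] for t, _ in events], dtype=float)
labels = DBSCAN(eps=30.0, min_samples=2).fit_predict(timestamps)   # 30 s correlation window

for (t, name), label in zip(events, labels):
    tag = f"incident {label}" if label >= 0 else "standalone / noise"
    print(f"t={t:>3}s  {name:<26} -> {tag}")

In the e-commerce scenario above, this is the kind of grouping that would collapse the database, application, and checkout alarms into one incident while leaving the unrelated disk warning separate.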

In the case of the e-commerce example above, Resolve would be able to cluster all of the alarms related to the application failure, compressing those into a single event. The system could also track the root cause back to a spike in CPU utilization, making it fast and easy for the IT team to fix the issue rather than triaging hundreds of alarms independently as they look under every rock to get to the root of the matter. If desired, Resolve can even trigger an automated response to take care of the problem autonomously without human intervention.

 

Can you give us a case study of how an enterprise client used Resolve?

Fujitsu had a range of drivers for adopting automation to better deliver its suite of IT managed services to a wide range of global enterprises. Chiefly, Fujitsu needed to bring down operational costs while continuing to grow their infrastructure, improve organizational efficiency and standardize processes. We helped them achieve all of those goals by automating key processes, and we helped them improve MTTA and MTTR to ensure they were quickly addressing issues impacting their customers to meet their SLAs.

 

Is there anything else that you would like to share about Resolve?

Digital transformation has gained momentum in the wake of the global pandemic. We see an incredible need to alleviate the mounting strain on IT systems and staff that the crisis has created. Meanwhile, it’s also apparent that businesses need to be planning ahead for the next unexpected event. Automation and AIOps are both fundamental to achieving those ends as they can help safeguard business continuity and improve agility and resilience while reducing security risks and cost. Our mission is to help our customers excel even during challenging times by strategically leveraging these technologies.

Thank you for your wonderful answers. Anyone who wishes to learn more should visit Resolve.
