
Artificial Neural Networks

Artificial Intelligence Is Now Engaged In Analyzing Art


Dr. Ahmed Elgammal of Rutgers University and Dr. Marian Mazzone of the College of Charleston have created a joint AI project that is now engaged in analyzing artworks and comparing its findings to the conclusions of art historians and critics.

As Techworld reports, the two scientists “teamed up to investigate how machines classify styles of art and how that relates to the analysis of art historians. They decided to create a system based on the theories of Heinrich Wölfflin (1864–1945), a Swiss professor whose principles of classification were highly influential in the development of the discipline of art history.”

As Dr. Elgammal himself explains, “It was very hard to advance AI beyond what we have right now without looking at this cultural human product, because in the end, artificial intelligence is about making a machine that has perceptual and cognitive abilities, and when you look at art, that’s what’s happening.”

The approach Dr. Elgammal and Dr. Mazzone took was to exclude subject matter from the analysis and to focus on the ‘visual schema’ of the work, so that it would be possible to identify style patterns through time. Wölfflin’s “emphasis on distinctive features and binary logic matched well with machine learning.”

As explained, “Deep convolutional neural networks were trained to classify these styles along with a number of variables. They were fed almost 80,000 digitized paintings and trained to find the patterns. The system had been given no understanding of time or who created each artwork, but it nonetheless placed the paintings along a smooth chronology that was closely correlated with the times in which they were painted.

It placed them along a timeline starting at the Renaissance and then progressing through Baroque, Neo-Classicism, Romanticism, Impressionism, Post-Impressionism, Expressionism, and Cubism, before ending with abstract art.”

The researchers also “trained the machine to measure creativity by spotting unusual data points and comparing them to what appeared in other artworks.”

The results the AI came up with mostly confirmed the conclusions art historians had already reached. What it added was “computational evidence of what had previously been based on subjective analysis.”

According to Dr. Mazzone, an AI able to analyze thousands of artworks “could identify fundamental changes in styles that a human eye could never see. It could even predict the artistic forms of the future.” She added that the AI “makes very few errors, and when it makes an error in some ways, it’s just the machine seeing something different than what the human is seeing. And that is interesting, too. What is it seeing that is unlike what human beings perceive?”


Artificial Neural Networks

Amazon Creates New Tool To Engineer AI Models With Just A Few Lines Of Code


As efforts to make machine learning easier and more accessible increase, different companies are creating tools to simplify the creation and optimization of deep learning models. As VentureBeat reports, Amazon has launched a new tool designed to help create and modify machine learning models in just a few lines of code.

Carrying out machine learning on a dataset is often a long, complex task. The data must be transformed and preprocessed, and then the proper model must be created and customized. Tweaking the hyperparameters of a model and then retraining it can take a long time. To help solve issues like this, Amazon has launched AutoGluon, an attempt to automate much of the overhead that typically comes with the creation of a machine learning system. For instance, not only do machine learning engineers have to decide on an appropriate architecture, they also need to experiment with the hyperparameters of the model. AutoGluon endeavors to make both the creation of the neural network architecture and the selection of appropriate hyperparameters easier.

AutoGluon is based on work begun jointly by Microsoft and Amazon in 2017. The original Gluon was a machine learning interface designed to let developers mix and match optimized components to create their own models, whereas AutoGluon creates a model end-to-end based on the user’s requirements. AutoGluon is reportedly capable of producing a model and selecting its hyperparameters, within a range of specified choices, with as few as three lines of code. The developer only has to provide a few arguments, such as the desired training completion time, and AutoGluon will produce the best model it can within the specified runtime and the available computational resources.
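
To make the “few lines of code” claim concrete, here is a minimal sketch of an AutoGluon tabular workflow. Note that it uses the current autogluon.tabular interface, which differs slightly from the API that shipped at launch, and that the file names and label column are placeholders for your own dataset.

```python
# Minimal AutoGluon tabular sketch (current autogluon.tabular API; the 2019
# launch API was organized differently). File names and the "label" column
# are placeholders, not values from the article.
from autogluon.tabular import TabularDataset, TabularPredictor

train_data = TabularDataset("train.csv")          # load a tabular dataset from disk

# fit() searches over models and hyperparameters within the given time budget (seconds).
predictor = TabularPredictor(label="label").fit(train_data, time_limit=600)

test_data = TabularDataset("test.csv")
predictions = predictor.predict(test_data)        # predict with the best model found
```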

AutoGluon is currently capable of creating models for image classification, text classification, object detection, and tabular prediction. AutoGluon’s API is also intended to allow more experienced developers to customize the auto-generated model and improve its performance. At the moment, AutoGluon is only available for Linux, and it requires Python 3.6 or 3.7.

Jonas Mueller, part of the AutoGluon development team, explained the reasoning behind the creation of AutoGluon:

“We developed AutoGluon to truly democratize machine learning, and make the power of deep learning available to all developers. AutoGluon solves this problem as all choices are automatically tuned within default ranges that are known to perform well for the particular task and model.”

AutoGluon is the latest in a long line of tools intended to reduce the expertise and time needed to train machine learning models. Software libraries like Theano automated the calculation of gradient vectors, while Keras let developers easily specify certain desired hyperparameters. Amazon believes that there is still more ground to cover when it comes to democratizing machine learning, such as making data pre-processing and hyperparameter tuning simpler.

The creation of AutoGluon seems to be part of a broader effort by Amazon to make training and deploying machine learning systems easier and more accessible. Amazon has also made machine learning-centric changes to its AWS suite. For example, upgrades have been made to AWS SageMaker, a toolkit that lets developers train and deploy models to the cloud. SageMaker comes with a variety of tools that let developers automatically choose algorithms, train and validate models, and improve model accuracy.


Artificial Neural Networks

Expert Predictions For AI’s Trajectory In 2020


VentureBeat recently interviewed five leading experts in the AI field and asked them to make their predictions for where AI is heading in the year to come. The individuals interviewed for their predictions were:

  • Soumith Chintala, creator of PyTorch.
  • Celeste Kidd, psychology professor at the University of California, Berkeley.
  • Jeff Dean, chief of Google AI.
  • Anima Anandkumar, machine learning research director at Nvidia.
  • Dario Gil, IBM Research director.

Soumith Chintala

Chintala, the creator of PyTorch, which is arguably the most popular machine learning framework at the moment, predicted that 2020 will see a greater need for neural network hardware accelerators and for methods of boosting model training speeds. Chintala expected that the next couple of years will see an increased focus on using GPUs optimally and on compiling code automatically for new hardware. Beyond this, Chintala expected that the AI community will begin pursuing other methods of quantifying AI performance more aggressively, placing less importance on pure accuracy. Factors for consideration include the amount of energy needed to train a model, how AI can be used to build the sort of society we want, and how the output of a network can be intuitively explained to human operators.

Celeste Kidd

Celeste Kidd has spent much of her recent career advocating for more responsibility on the part of the designers of algorithms, tech platforms, and content recommendation systems. Kidd has often argued that systems designed to maximize engagement can end up having serious impacts on how people form their opinions and beliefs. More and more attention is being paid to the ethical use of AI algorithms and systems, and Kidd predicted that in 2020 there will be an increased awareness of how tech tools and platforms are influencing people’s lives and decisions, as well as a rejection of the idea that tech tools can be genuinely neutral in design.

“We really need to, as a society and especially as the people that are working on these tools, directly appreciate the responsibility that that comes with,” Kidd said.

Jeff Dean

Jeff Dean, the current head of Google AI, predicted that in 2020 there will be progress in multimodal learning and multitask learning. Multimodal learning is when AI is trained on multiple types of media at one time, while multitask learning endeavors to allow AI to train on multiple tasks at one time. Dean also expected further progress to be made on natural language processing models based on the Transformer architecture, such as Google’s BERT and the other models that topped the GLUE leaderboard. Dean also mentioned that he would like to see less emphasis on chasing state-of-the-art performance and more emphasis on creating models that are robust and flexible.

Anima Anandkumar

Anandkumar expected that the AI community will have to grapple with many challenges in 2020, especially the need for more diverse datasets and the need to ensure people’s privacy when training on data. Anandkumar explained that while face recognition often gets the most attention, there are many areas where people’s privacy can be violated and that these issues may come to the forefront of discussion during 2020.

Anandkumar also expected that further advancements will be made regarding Transformer based natural language processing models.

“We are still not at the stage of dialogue generation that’s interactive, that can keep track and have natural conversations. So I think there will be more serious attempts made in 2020 in that direction,” she said.

Finally, Anandkumar expected that the coming year will see further development of iterative algorithms and self-supervision. These training methods allow AI systems to self-train in some respects and can potentially help create models that improve by training on unlabeled data.

Dario Gil

Gil predicted that in 2020 there will be more progress toward creating AI in a computationally efficient manner, as the way deep neural networks are currently trained is inefficient in many respects. Because of this, Gil expected that this year will see progress in terms of creating reduced-precision architectures and generally training more efficiently. Much like some of the other experts who were interviewed, Gil predicted that in 2020 researchers will start to focus more on metrics other than accuracy. Gil expressed an interest in neuro-symbolic AI, as IBM is examining ways to create probabilistic programming models using neuro-symbolic approaches. Finally, Gil emphasized the importance of making AI more accessible to those interested in machine learning and getting rid of the perception that only geniuses can work with AI and do data science.

“If we leave it as some mythical realm, this field of AI, that’s only accessible to the select PhDs that work on this, it doesn’t really contribute to its adoption,” Gil said.


AI 101

What are Convolutional Neural Networks?


Perhaps you’ve wondered how Facebook or Instagram is able to automatically recognize faces in an image, or how Google lets you search the web for similar photos just by uploading a photo of your own. These features are examples of computer vision, and they are powered by convolutional neural networks (CNNs). Yet what exactly are convolutional neural networks? Let’s take a deep dive into the architecture of CNNs and understand how they operate.

Defining Neural Networks

Before we begin talking about convolutional neural networks, let’s take a moment to define regular neural networks. There’s another article on the topic of neural networks available, so we won’t go too deep into them here. Briefly, they are computational models inspired by the human brain. A neural network operates by taking in data and manipulating it by adjusting “weights”, which are assumptions about how the input features are related to each other and to the object’s class. As the network is trained, the values of the weights are adjusted, and they will hopefully converge on values that accurately capture the relationships between features.

This is how a feed-forward neural network operates, and a CNN is composed of two parts: a group of convolutional layers and a feed-forward neural network.
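
Before moving on to the convolutional part, here is a toy illustration of what “adjusting weights” means in practice: a single linear neuron fit to made-up numbers with plain gradient descent. The data and learning rate below are invented purely for demonstration.

```python
import numpy as np

# Made-up data: the targets follow y = 2x + 1, which is the relationship
# the weights should converge toward.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

w, b = 0.0, 0.0        # the "weights" the model starts with
lr = 0.05              # learning rate

for _ in range(2000):
    pred = w * x + b                     # forward pass
    error = pred - y
    w -= lr * (2 * error * x).mean()     # gradient step on the weight
    b -= lr * (2 * error).mean()         # gradient step on the bias

print(round(w, 2), round(b, 2))          # converges toward w ≈ 2.0, b ≈ 1.0
```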

What’s A Convolution?

What are the “convolutions” that happen in a convolutional neural network? A convolution is a mathematical operation that applies a set of weights to the input, essentially creating a representation of part of the image. This set of weights is referred to as a kernel or filter. The filter is smaller than the entire input image, covering just a subsection of it. The values in the filter are multiplied with the values in the image. The filter is then moved over to form a representation of a new part of the image, and the process is repeated until the entire image has been covered.

Another way to think about this is to imagine a brick wall, with the bricks representing the pixels in the input image. A “window” is being slid back and forth along the wall, which is the filter. The bricks that are viewable through the window are the pixels having their value multiplied by the values within the filter. For this reason, this method of creating weights with a filter is often referred to as the “sliding windows” technique.

The output from the filters being moved around the entire input image is a two-dimensional array representing the whole image. This array is called a “feature map”.
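
A bare-bones version of this sliding-window process can be written in a few lines of NumPy. The tiny image and the edge-detecting filter below are made up for illustration; real convolutional layers add details such as padding, strides, and a bias term.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a filter over a grayscale image and build the feature map."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i:i + kh, j:j + kw]           # pixels visible through the "window"
            feature_map[i, j] = np.sum(window * kernel)  # multiply and sum
    return feature_map

# A tiny synthetic image: the left half is dark (0), the right half is bright (1).
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A simple vertical-edge filter; it responds where pixel values change from left to right.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

print(convolve2d(image, edge_filter))   # non-zero values mark where the window crosses the edge
```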

Why Convolutions?

What is the purpose of creating convolutions anyway? Convolutions are necessary because a neural network has to be able to interpret the pixels in an image as numerical values. The function of the convolutional layers is to convert the image into numerical values that the neural network can interpret and then extract relevant patterns from. The job of the filters in the convolutional network is to create a two-dimensional array of values that can be passed into the later layers of a neural network, those that will learn the patterns in the image.

Filters And Channels

Photo: cecebur via Wikimedia Commons, CC BY SA 4.0 (https://commons.wikimedia.org/wiki/File:Convolutional_Neural_Network_NeuralNetworkFeatureLayers.gif)

CNNs don’t use just one filter to learn patterns from the input images. Multiple filters are used, as the different arrays created by the different filters lead to a more complex, richer representation of the input image. Common numbers of filters in CNNs are 32, 64, 128, and 512. The more filters there are, the more opportunities the CNN has to examine the input data and learn from it.

A CNN analyzes the differences in pixel values in order to determine the borders of objects. In a grayscale image, the CNN only looks at differences in light and dark. When the images are color images, the CNN not only takes light and dark into account, but also the three color channels – red, green, and blue. In this case, the filters possess 3 channels, just like the image itself does. The number of channels that a filter has is referred to as its depth, and the number of channels in the filter must match the number of channels in the image.
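
The relationship between filters, channels, and depth is easy to see in code. The sketch below uses PyTorch’s Conv2d layer with 32 filters over a random RGB input; the filter count and image size are arbitrary choices for illustration, not requirements.

```python
import torch
import torch.nn as nn

# A single convolutional layer: 32 filters, each 3x3 and 3 channels deep,
# so the filter depth matches the RGB input.
conv = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)

rgb_batch = torch.randn(1, 3, 64, 64)   # one random 64x64 RGB "image"
feature_maps = conv(rgb_batch)

print(conv.weight.shape)                # torch.Size([32, 3, 3, 3]) -> 32 filters, depth 3
print(feature_maps.shape)               # torch.Size([1, 32, 62, 62]) -> one feature map per filter
```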

The Architecture Of A Convolutional Neural Network

Let’s take a look at the complete architecture of a convolutional neural network. A convolutional layer is found at the beginning of every convolutional network, as it’s necessary to transform the image data into numerical arrays. However, convolutional layers can also come after other convolutional layers, meaning that these layers can be stacked on top of one another. Having multiple convolutional layers means that the outputs from one layer can undergo further convolutions and be grouped together in relevant patterns. Practically, this means that as the image data proceeds through the convolutional layers, the network begins to “recognize” more complex features of the image.

The early layers of a ConvNet are responsible for extracting the low-level features, such as the pixels that make up simple lines. Later layers of the ConvNet will join these lines together into shapes. This process of moving from surface-level analysis to deep-level analysis continues until the ConvNet is recognizing complex shapes like animals, human faces, and cars.

After the data has passed through all of the convolutional layers, it proceeds into the densely connected part of the CNN. The densely connected layers are structured like a traditional feed-forward neural network: a series of nodes arranged into layers that are connected to one another. The data proceeds through these densely connected layers, which learn the patterns that were extracted by the convolutional layers, and in doing so the network becomes capable of recognizing objects.
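
Putting the pieces together, here is a compact, illustrative sketch of the two-part structure described above: stacked convolutional layers that extract features, followed by densely connected layers that learn from them. The layer sizes and the ten-class output are arbitrary choices, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Convolutional feature extractor followed by densely connected layers."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 32 x 16 x 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # 64 x 8 x 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),                # class scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleCNN()
scores = model(torch.randn(1, 3, 32, 32))               # one random 32x32 RGB input
print(scores.shape)                                      # torch.Size([1, 10])
```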
