What is a Decision Tree?


A decision tree is a machine learning algorithm used for both regression and classification tasks. The name “decision tree” comes from the fact that the algorithm keeps dividing the dataset into smaller and smaller portions until the data has been divided into single instances, which are then classified. If you were to visualize the results of the algorithm, the way the categories are divided would resemble a tree with branches and leaves.

That’s a quick definition of a decision tree, but let’s take a deep dive into how decision trees work. Having a better understanding of how decision trees operate, as well as their use cases, will assist you in knowing when to utilize them during your machine learning projects.

General Format of a Decision Tree

A decision tree is a lot like a flowchart. To use a flowchart, you start at the starting point, or root, of the chart, and based on how you answer the filtering criteria of that starting node, you move to one of the next possible nodes. This process is repeated until an endpoint is reached.

Decision trees operate in essentially the same manner, with every internal node in the tree being some sort of test/filtering criterion. The nodes on the outside, the endpoints of the tree, are the labels for the data point in question, and they are dubbed “leaves”. The branches that lead from the internal nodes to the next node are features or conjunctions of features. The rules used to classify the data points are the paths that run from the root to the leaves.

Steps and Algorithms

Decision trees operate on an algorithmic approach which splits the dataset up into individual data points based on different criteria. These splits are made on different variables, or the different features of the dataset. For example, if the goal is to determine whether the input features describe a dog or a cat, the data might be split on variables like “claws” and “barks”.

So what algorithms are used to actually split the data into branches and leaves? There are various methods that can be used to split a tree up, but the most common is probably a technique referred to as “recursive binary splitting”. When carrying out this method, the process starts at the root, and the number of features in the dataset determines the number of possible splits. A function is used to determine how much accuracy every possible split will cost, and the split is made using the criterion that sacrifices the least accuracy. This process is carried out recursively, and sub-groups are formed using the same general strategy.

In order to determine the cost of a split, a cost function is used. Different cost functions are used for regression tasks and classification tasks. The goal of both cost functions is to determine which branches have the most similar response values, or the most homogenous branches. This makes intuitive sense: you want test data of a certain class to follow the same paths through the tree.

In terms of the regression cost function for recursive binary split, the algorithm used to calculate the cost is as follows:

sum((y - prediction)^2)

The prediction for a particular group of data points is the mean of the responses of the training data for that group. All the data points are run through the cost function to determine the cost for all the possible splits and the split with the lowest cost is selected.
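
To make this concrete, below is a minimal Python sketch of a single step of recursive binary splitting for regression. The single-feature setup, function names, and toy data are illustrative assumptions, not any particular library’s API:

```python
import numpy as np

def sse_cost(y_left, y_right):
    """Cost of a split: sum of squared errors around each group's mean."""
    cost = 0.0
    for y in (y_left, y_right):
        if len(y) > 0:
            cost += np.sum((y - np.mean(y)) ** 2)
    return cost

def best_split(x, y):
    """Try every candidate threshold on one feature, keep the cheapest split."""
    best_threshold, best_cost = None, np.inf
    for threshold in np.unique(x):
        mask = x <= threshold
        cost = sse_cost(y[mask], y[~mask])
        if cost < best_cost:
            best_threshold, best_cost = threshold, cost
    return best_threshold, best_cost

# Toy data: responses cluster around 1.0 for small x and 5.0 for large x.
x = np.array([1.0, 2.0, 3.0, 8.0, 9.0, 10.0])
y = np.array([1.1, 0.9, 1.0, 5.2, 4.8, 5.0])
print(best_split(x, y))  # splits at x <= 3.0, where the cost is lowest
```

A full tree-builder would apply best_split recursively to each resulting sub-group, and across all features.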

Regarding the cost function for classification, the function is as follows:

G = sum(pk * (1 – pk))

This is the Gini score, and it is a measurement of the effectiveness of a split, based on how many instances of different classes are in the groups resulting from the split. In other words, it quantifies how mixed the groups are after the split. Here “pk” is the proportion of instances of class k within a group. An optimal split is when all the groups resulting from the split consist only of inputs from one class. If an optimal split has been created, the “pk” value will be either 0 or 1 and G will be equal to zero. You might be able to guess that the worst-case split is one where there is a 50-50 representation of the classes in the split, in the case of binary classification. In this case, the “pk” value would be 0.5 and G would also be 0.5.
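
As a quick illustration, here is a small Python helper that computes that score for the groups produced by a split. The per-group score follows G = sum(pk * (1 - pk)); weighting each group by its share of the data, as CART-style implementations typically do, is an added assumption here:

```python
def gini(groups):
    """Gini score for a candidate split, given lists of class labels."""
    total = sum(len(group) for group in groups)
    score = 0.0
    for group in groups:
        if not group:
            continue
        g = 0.0
        for cls in set(group):
            pk = group.count(cls) / len(group)  # proportion of class k
            g += pk * (1 - pk)
        score += g * (len(group) / total)  # weight by group size
    return score

print(gini([["cat", "cat"], ["dog", "dog"]]))  # 0.0, an optimal split
print(gini([["cat", "dog"], ["dog", "cat"]]))  # 0.5, the worst two-class case
```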

The splitting process is terminated when all the data points have been turned into leaves and classified. However, you may want to stop the growth of the tree early. Large, complex trees are prone to overfitting, but several different methods can be used to combat this. One method of reducing overfitting is to specify a minimum number of data points that will be used to create a leaf. Another method of controlling for overfitting is restricting the tree to a certain maximum depth, which limits how long a path can stretch from the root to a leaf.
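
Both of these controls are exposed as parameters in common libraries. For example, a brief sketch with scikit-learn’s DecisionTreeClassifier (the iris dataset is just a convenient stand-in):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(
    max_depth=3,         # cap the length of any root-to-leaf path
    min_samples_leaf=5,  # require at least 5 data points per leaf
)
tree.fit(X, y)
print(tree.get_depth())  # at most 3
```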

Another process involved in the creation of decision trees is pruning. Pruning can help increase the performance of a decision tree by stripping out branches containing features that have little predictive power/little importance for the model. In this way, the complexity of the tree is reduced, it becomes less likely to overfit, and the predictive utility of the model is increased.

When conducting pruning, the process can start at either the top of the tree or the bottom of the tree. However, the easiest method of pruning is to start near the leaves and replace a node with the most common class found among the data points at that node. If the accuracy of the model, typically measured on a held-out validation set, doesn’t deteriorate when this is done, the change is preserved. There are other techniques used to carry out pruning, but the method described above, reduced error pruning, is probably the most common method of decision tree pruning.
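
Reduced error pruning is straightforward to sketch in code. Below is a simplified, hedged illustration on a toy tree of nested Python dicts; the dict layout, the “majority” field, and the validation data are illustrative assumptions rather than any standard structure:

```python
def predict(node, x):
    if "class" in node:  # leaf node
        return node["class"]
    branch = "left" if x[node["feature"]] <= node["threshold"] else "right"
    return predict(node[branch], x)

def accuracy(tree, X_val, y_val):
    hits = sum(predict(tree, x) == y for x, y in zip(X_val, y_val))
    return hits / len(y_val)

def prune(tree, node, X_val, y_val):
    """Bottom-up: try collapsing each internal node into its majority class."""
    if "class" in node:
        return
    prune(tree, node["left"], X_val, y_val)
    prune(tree, node["right"], X_val, y_val)
    before = accuracy(tree, X_val, y_val)
    saved = dict(node)
    node.clear()
    node["class"] = saved["majority"]  # collapse the node into a leaf
    if accuracy(tree, X_val, y_val) < before:
        node.clear()
        node.update(saved)  # accuracy dropped, so undo the prune

# Toy tree: split on feature 0 at 0.5; "cat" is the majority class overall.
tree = {
    "feature": 0, "threshold": 0.5, "majority": "cat",
    "left": {"class": "cat"}, "right": {"class": "dog"},
}
prune(tree, tree, X_val=[[0.2], [0.9], [0.4]], y_val=["cat", "cat", "cat"])
print(tree)  # collapsed to {'class': 'cat'}: the prune didn't hurt accuracy
```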

Considerations For Using Decision Trees

Decision trees are often useful when classification needs to be carried out but computation time is a major constraint. Decision trees can make it clear which features in the chosen datasets wield the most predictive power. Furthermore, unlike many machine learning algorithms where the rules used to classify the data may be hard to interpret, decision trees can render interpretable rules. Decision trees are also able to make use of both categorical and continuous variables which means that less preprocessing is needed, compared to algorithms that can only handle one of these variable types.

Decision trees tend not to perform very well when used to determine the values of continuous attributes. Another limitation of decision trees is that, when doing classification, if there are few training examples but many classes, the decision tree tends to be inaccurate.

To Learn More

Recommended Artificial Intelligence Courses:

- Introduction to Artificial Intelligence (IBM) - 9 Hours, Beginner
- Deep Learning for Business (Yonsei University) - 8 Hours, Beginner
- An Introduction to Practical Deep Learning (Intel Software) - 12 Hours, Intermediate
- Machine Learning Foundations (University of Washington) - 24 Hours, Intermediate

What are Quantum Computers?


Quantum computers have the potential to dramatically increase the variety and accuracy of computations, opening up new applications for computers and enhancing our models of physical phenomena. Yet while quantum computers are seeing increasing media coverage, many people still aren’t sure how quantum computers differ from regular computers. Let’s examine how quantum computers work, some of their applications, and their coming future.

What Is A Quantum Computer?

Before we can meaningfully examine how quantum computers operate, we need to first define quantum computers. The short definition of a quantum computer is this: a computer, based on quantum mechanics, that is able to carry out certain complex computations with much greater efficiency than traditional computers. That’s a quick definition of quantum computers, but we’ll want to take some time to really understand what separates quantum computers from traditional computers.

Regular computers encode information with a binary system, representing each bit of data as either a one or a zero. Series of ones and zeroes are chained together to represent complex chunks of information like text, images, and audio. Yet in these binary systems the information can only ever be stored as ones and zeroes, meaning that there is a hard limit on how data can be represented and interpreted, and that as data becomes more complex it must necessarily be encoded as longer and longer strings of ones and zeroes.
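
As a quick illustration, the snippet below (plain Python, with an arbitrary example string) prints the ones and zeroes behind a short piece of text:

```python
# Every character is stored as a fixed-width string of bits.
text = "Hi"
bits = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 01001000 01101001
```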

The reason quantum computers are able to store and interpret data more efficiently is that they don’t use bits to represent data; rather, they use “qubits”. Qubits are typically realized with subatomic particles like photons and electrons, and they have two properties that computer engineers can take advantage of: superposition and entanglement.

Quantum superposition allows qubits to exist not just in the “one” state or the “zero” state, but along a continuum between these states, meaning more information can be held using qubits. Meanwhile, quantum entanglement refers to a phenomenon where pairs of qubits can be generated such that altering one qubit alters the other in a predictable fashion. These quantum properties can be used to represent and structure complex data in more efficient ways.

How Quantum Computers Operate

Quantum “superpositions” get their name from the fact that a qubit can be in more than one state at a time. While a bit can only be in one of two states, a qubit in superposition exists in a combination of states at once.

Thanks in part to the existence of quantum superpositions, a quantum computer is capable of calculating many different potential outcomes at the same time. Once the calculations are done, the qubits are measured, which creates a final result through the collapse of the quantum state to either 0 or 1, meaning the result can then be interpreted by traditional computers.
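
A toy classical simulation can make this concrete. The sketch below (NumPy, with illustrative amplitudes; real quantum hardware works nothing like this internally) samples the measurement of a qubit held in an equal superposition:

```python
import numpy as np

# |psi> = alpha|0> + beta|1>, an equal superposition here
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
probabilities = [abs(alpha) ** 2, abs(beta) ** 2]  # squared amplitudes

# Each measurement collapses the state to 0 or 1; repeat to see the spread.
outcomes = np.random.choice([0, 1], size=1000, p=probabilities)
print(np.bincount(outcomes))  # roughly 500 zeros and 500 ones
```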

Quantum computing researchers and engineers can alter the position the qubits are in by using microwaves or precision lasers.

Computer engineers can take advantage of quantum entanglement to dramatically improve the processing power of computers. Quantum entanglement refers to the fact that two qubits can be linked together in such a way that changing one of the qubits alters the other qubit in a reliable way. It’s not fully understood why qubits can establish such a relationship or how this phenomenon works exactly, but scientists do understand it well enough to potentially take advantage of it for quantum computers. Because of quantum entanglement, adding extra qubits to a quantum machine doesn’t just double the processing power of the computer; it can scale the processing power exponentially.
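
Again, a toy simulation can illustrate the correlation. This sketch samples joint measurements of an entangled Bell pair, (|00> + |11>)/sqrt(2); the mismatched outcomes 01 and 10 never occur:

```python
import numpy as np

# Amplitudes over the basis states 00, 01, 10, 11
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(state) ** 2

# Sample joint measurements: the two qubits always agree.
samples = np.random.choice(["00", "01", "10", "11"], size=1000, p=probs)
print({s: int((samples == s).sum()) for s in ["00", "01", "10", "11"]})
```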

If this has all seemed a bit too abstract, we can describe how superpositions are useful by imagining a maze. For a normal computer to attempt to solve a maze, it must try each path of the maze until it finds a successful route. However, a quantum computer could essentially explore all the different paths at once, since it isn’t tied down to any one given state.

All of this is to say that the properties of entanglement and superposition make quantum computers useful because they can deal with uncertainty: they are capable of exploring more possible states and results. Quantum computers will help scientists and engineers better model and understand situations that are multi-faceted, with many variables.

What Are Quantum Computers Used For?

Now that we have a better intuition for how quantum computers operate, let’s explore the possible use cases for quantum computers.

We’ve already alluded to the fact that quantum computers can be used to carry out traditional computations at a much faster pace. However, quantum computer technology can be used to achieve things that may not even be possible, or are highly impractical, with traditional computers.

One of the most promising and interesting applications of quantum computers is in the field of artificial intelligence. Quantum computers have the power to improve the models created by neural networks, as well as the software that supports them. Google is currently using its quantum computers to assist in the creation of self-driving vehicles.

Quantum computers also have a role to play in the analysis of chemical interactions and reactions. Even the most advanced normal computers can only model reactions between relatively simple molecules, which they achieve by simulating the properties of the molecules in question. Quantum computers, however, allow researchers to create models that have the same quantum properties as the molecules they are researching. Quicker, more accurate molecule modeling would aid in the creation of new therapeutic drugs and new materials for use in energy technology, such as more efficient solar panels.

Quantum computers can also be used to better predict weather. Weather is the confluence of many events and the formulas used to predict weather patterns are complicated, containing many variables. It can take an extremely long time to carry out all the calculations needed to predict the weather, during which the weather conditions themselves can evolve. Fortunately, the equations used to predict weather have a wave nature that a quantum computer can exploit. Quantum computers can help researchers build more accurate climate models, which are necessary in a world where the climate is changing.

Quantum computers and algorithms can also be used to help ensure people’s data privacy. Quantum cryptography makes use of the quantum uncertainty principle, where any attempt to measure an object ends up making changes to that object. Attempts to intercept communications would influence the resulting communication and show evidence of tampering.

Looking Ahead

Most of the uses for quantum computers will be confined to academics and businesses. It’s unlikely that consumers/the general public will get quantum smartphones, at least not anytime soon, because it requires specialized equipment to operate a quantum computer. Quantum computers are highly sensitive to disturbance, as even the most minute changes in the surrounding environment can cause qubits to shift position and drop out of the superposition state. This is called decoherence, and it’s one of the reasons that advances in quantum computers seem to come so slowly compared to regular computers. Quantum computers typically need to operate at extremely low temperatures, isolated from other electrical equipment.

Even with all these precautions, noise still manages to create errors in the calculations, and researchers are looking for ways to make qubits more reliable. To achieve quantum supremacy, where a quantum computer fully eclipses the power of a current supercomputer, qubits need to be linked together. A truly quantum-supreme computer could require thousands of qubits, but the best quantum computers today can typically only deal with around 50 qubits. Researchers are constantly making inroads towards creating more stable and reliable qubits. Experts in the field of quantum computing predict that powerful and reliable quantum devices may be here within a decade.


What Are Nanobots? Understanding Nanobot Structure, Operation, and Uses


As technology advances, things don’t always become bigger and better; they also become smaller. In fact, nanotechnology is one of the fastest-growing technological fields, worth over 1 trillion USD, and it’s forecast to grow by approximately 17% over the next half-decade. Nanobots are a major part of the nanotechnology field, but what exactly are they and how do they operate? Let’s take a closer look at nanobots to understand how this transformative technology works and what it’s used for.

What Are Nanobots?

The field of nanotechnology is concerned with the research and development of technology approximately one to 100 nanometers in scale. Therefore, nanorobotics is focused on the creation of robots that are around this size. In practice, it’s difficult to engineer anything as small as one nanometer in scale, and the terms “nanorobotics” and “nanobot” are frequently applied to devices which are approximately 0.1 to 10 micrometers in size, which is still quite small.

It’s important to note that the term “nanorobot” is sometimes applied to devices which interact with objects at the nanoscale, manipulating nanoscale items. Therefore, even if the device itself is much larger, it may be considered a nanorobotic instrument. This article will focus on nanoscale robots themselves.

Much of the field of nanorobotics and nanobots is still in the theoretical phase, with research focused on solving the problems of construction at such a small scale. However, some prototype nanomachines and nanomotors have been designed and tested.

Most currently existing nanorobotic devices fall into one of four categories: switches, motors, shuttles, and cars.

Nanorobotic switches operate by being prompted to switch from an “off” state to an “on” state. Environmental factors are used to make the machine change shape, a process called conformational change. The environment is altered using processes like chemical reactions, UV light, and temperature, and the nanorobotic switches shift into different forms as a result, able to accomplish specific tasks.

Nanomotors are more complex than simple switches, and they utilize the energy created by the effects of the conformational change in order to move around and affect the molecules in the surrounding environment.

Shuttles are nanorobots that are capable of transporting chemicals like drugs to specific, targeted regions. The goal is to combine shuttles with nanorobot motors so that the shuttles are capable of a greater degree of movement through an environment.

Nanorobotic “cars” are the most advanced nanodevices at the moment, capable of moving independently with prompts from chemical or electromagnetic catalysts. The nanomotors that drive nanorobotic cars need to be controlled in order for the vehicle to be steered, and researchers are experimenting with various methods of nanorobotic control.

Nanorobotics researchers aim to synthesize these different components and technologies into nanomachines that can complete complex tasks, accomplished by swarms of nanobots working together.

Photo: “Comparison of the sizes of nanomaterials with those of other common materials.” Sureshup via Wikimedia Commons, CC BY 3.0 (https://en.wikipedia.org/wiki/File:Comparison_of_nanomaterials_sizes.jpg)

How Are Nanobots Created?

The field of nanorobotics sits at the crossroads of many disciplines, and building nanobots involves the creation of sensors, actuators, and motors. Physical modeling must be done as well, and all of this must be done at the nanoscale. As mentioned above, nanomanipulation devices are used to assemble these nanoscale parts and manipulate artificial or biological components, which includes the manipulation of cells and molecules.

Nanorobotics engineers must be able to solve a multitude of problems. They have to address issues regarding sensing, control, power, communications, and interactions between inorganic and organic materials.

The size of a nanobot is roughly comparable to that of a biological cell, and because of this, future nanobots could be employed in disciplines like medicine and environmental preservation/remediation. Most “nanobots” that exist today are just specific molecules which have been manipulated to accomplish certain tasks.

Complex nanobots are essentially just simple molecules joined together and manipulated with chemical processes. For instance, some nanobots are comprised of DNA, and they transport molecular cargo.

How Do Nanobots Operate?

Given the still heavily theoretical nature of nanobots, questions about how nanobots operate are answered with predictions rather than statements of fact. It’s likely that the first major uses for nanobots will be in the medical field, moving through the human body and accomplishing tasks like diagnosing diseases, monitoring vitals, and dispensing treatments. These nanobots will need to be able to navigate their way around the human body and move through tissues like blood vessels.

Navigation

In terms of nanobot navigation, there are a variety of techniques that nanobot researchers and engineers are investigating. One method of navigation is the use of ultrasonic signals for detection and deployment. A nanobot could emit ultrasonic signals that could be traced to locate its position, and the robot could then be guided to specific areas with the use of a special tool that directs its motion. Magnetic Resonance Imaging (MRI) devices could also be employed to track the position of nanobots, and early experiments with MRIs have demonstrated that the technology can be used to detect and even maneuver nanobots. Other methods of detecting and maneuvering nanobots include the use of X-rays, microwaves, and radio waves. At the moment, our control of these waves at the nanoscale is fairly limited, so new methods of utilizing these waves would have to be invented.

The navigation and detection systems described above are external methods, relying on the use of tools to move the nanobots. With the addition of onboard sensors, the nanobots could be more autonomous. For instance, chemical sensors included onboard nanobots could allow the robot to scan the surrounding environment and follow certain chemical markers to a target region.

Power

When it comes to powering the nanobots, there are also a variety of power solutions being explored by researchers. Solutions for powering nanobots include external power sources and onboard/internal power sources.

Internal power solutions include generators and capacitors. Generators onboard the nanobot could use the electrolytes found within the blood to produce energy, or nanobots could even be powered using the surrounding blood as a chemical catalyst that produces energy when combined with a chemical the nanobot carries with it. Capacitors operate similarly to batteries, storing electrical energy that could be used to propel the nanobot. Other options like tiny nuclear power sources have even been considered.

As far as external power sources go, incredibly small, thin wires could tether the nanobots to an outside power source. Such wires could be made out of miniature fiber optic cables, with pulses of light sent down the wires and the actual electricity generated within the nanobot.

Other external power solutions include magnetic fields and ultrasonic signals. Nanobots could employ something called a piezoelectric membrane, which is capable of collecting ultrasonic waves and transforming them into electrical power. Magnetic fields can also be used to induce electrical currents within a closed conducting loop onboard the nanobot. As a bonus, the magnetic field could also be used to control the direction of the nanobot.

Locomotion

Addressing the problem of nanobot locomotion requires some inventive solutions. Nanobots that aren’t tethered, or aren’t just free-floating in their environment, need to have some method of moving to their target locations. The propulsion system will need to be powerful and stable, able to propel the nanobot against currents in its surrounding environment, like the flow of blood. Propulsion solutions under investigation are often inspired by the natural world, with researchers looking at how microscopic organisms move through their environment. For instance, microorganisms often use long, whip-like tails called flagella to propel themselves, or they use a number of tiny, hair-like limbs dubbed cilia.

Researchers are also experimenting with giving robots small arm-like appendages that could allow the robot to swim, grip, and crawl. Currently, these appendages are controlled via magnetic fields outside the body, as the magnetic force prompts the robot’s arms to vibrate. An added benefit to this method of locomotion is that the energy for it comes from an outside source. This technology would need to be made even smaller to make it viable for true nanobots.

There are other, more inventive, propulsion strategies also under investigation. For instance, some researchers have proposed using capacitors to engineer an electromagnetic pump that would pull conductive fluids in and shoot them out like a jet, propelling the nanobot forward.

Regardless of the eventual application of nanobots, they must solve the problems described above, handling navigation, locomotion, and power.

What Are Nanobots Used For?

As mentioned, the first uses for nanobots will likely be in the medical field. Nanobots could be used to monitor for damage to the body, and potentially even facilitate the repair of this damage. Future nanobots could deliver medicine directly to the cells that need them. Currently, medicines are delivered orally or intravenously and they spread throughout the body instead of hitting just the target regions, causing side effects. Nanobots equipped with sensors could easily be used to monitor for changes in regions of cells, reporting changes at the first sign of damage or malfunction.

We are still a long way away from these hypothetical applications, but progress is being made all the time. As an example, in 2017 scientists created nanobots that targeted cancer cells and attacked them with a miniaturized drill, killing them. This year, a group of researchers from ITMO University designed a nanobot composed of DNA fragments, capable of destroying pathogenic RNA strands. DNA-based nanobots are also currently capable of transporting molecular cargo. One such nanobot is made of three different DNA sections, maneuvering with a DNA “leg” and carrying specific molecules with the use of an “arm”.

Beyond medical applications, research is being done regarding the use of nanobots for the purposes of environmental cleanup and remediation. Nanobots could potentially be used to remove toxic heavy metals and plastics from bodies of water. The nanobots could carry compounds that render toxic substances inert when combined together, or they could be used to degrade plastic waste through similar processes. Research is also being done on the use of nanobots to facilitate the production of extremely small computer chips and processors, essentially using nanobots to produce microscale computer circuits.


What Are Deepfakes?


As deepfakes become easier to make and more prolific, more attention is paid to them. Deepfakes have become the focal point of discussions involving AI ethics, misinformation, openness of information and the internet, and regulation. It pays to be informed regarding deepfakes, and to have an intuitive understanding of what deepfakes are. This article will clarify the definition of a deepfake, examine their use cases, discuss how deepfakes can be detected, and examine the implications of deepfakes for society.

What Is A Deepfake?

Before going on to discuss deepfakes further, it would be helpful to take some time and clarify what “deepfakes” actually are. There is a substantial amount of confusion regarding the term “deepfake”, and often the term is misapplied to any falsified media, regardless of whether or not it is a genuine deepfake. In order to qualify as a deepfake, the faked media in question must be generated with a machine-learning system, specifically a deep neural network.

The key ingredient of deepfakes is machine learning, which has made it possible for computers to automatically generate video and audio relatively quickly and easily. Deep neural networks are trained on footage of a real person so that the network learns how the person looks and moves under the target environmental conditions. The trained network is then used on images of another individual and augmented with additional computer graphics techniques in order to combine the new person with the original footage. An encoder algorithm is used to determine the similarities between the original face and the target face. Once the common features of the faces have been isolated, a second AI algorithm called a decoder is used. The decoder examines the encoded (compressed) images and reconstructs them based on the features in the original images. Two decoders are used, one trained on the original subject’s face and the second trained on the target person’s face. In order for the swap to be made, the decoder trained on images of person Y is fed the encoded images of person X. The result is that person Y’s face is reconstructed over person X’s facial expressions and orientation.
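
As a rough illustration of this shared-encoder, two-decoder setup, here is a minimal Keras sketch. It is not a production deepfake pipeline: the image size, layer widths, and the random stand-in frame are all illustrative assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (64, 64, 3)  # assumed face-crop size
LATENT_DIM = 256

def build_encoder():
    inp = keras.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    z = layers.Dense(LATENT_DIM)(layers.Flatten()(x))
    return keras.Model(inp, z, name="shared_encoder")

def build_decoder(name):
    z = keras.Input(shape=(LATENT_DIM,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return keras.Model(z, out, name=name)

encoder = build_encoder()
decoder_x = build_decoder("decoder_person_x")
decoder_y = build_decoder("decoder_person_y")

# Each autoencoder learns to reconstruct its own person's faces, so the
# shared encoder is pushed to learn features common to both faces.
auto_x = keras.Model(encoder.input, decoder_x(encoder.output))
auto_y = keras.Model(encoder.input, decoder_y(encoder.output))
auto_x.compile(optimizer="adam", loss="mae")
auto_y.compile(optimizer="adam", loss="mae")
# (training on aligned face crops of person X and person Y would go here)

# The swap: encode a frame of person X, decode with person Y's decoder,
# yielding person Y's face with person X's pose and expression.
frame_x = np.random.rand(1, *IMG_SHAPE).astype("float32")  # stand-in frame
swapped = decoder_y.predict(encoder.predict(frame_x))
```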

Currently, it still takes a fair amount of time for a deepfake to be made. The creator of the fake has to spend a long time manually adjusting parameters of the model, as suboptimal parameters will lead to noticeable imperfections and image glitches that give away the fake’s true nature.

Although it’s frequently assumed that most deepfakes are made with a type of neural network called a generative adversarial network (GAN), this is not necessarily the case. While GANs did play a prominent role in the creation of early deepfakes, most deepfake videos are created through alternative methods, according to Siwei Lyu from SUNY Buffalo.

It takes a disproportionately large amount of training data to train a GAN, and GANs often take much longer to render an image compared to other image generation techniques. GANs are also better suited to generating static images than video, as they have difficulty maintaining consistency from frame to frame. It’s much more common to use an encoder and multiple decoders to create deepfakes.

What Are Deepfakes Used For?

Many of the deepfakes found online are pornographic in nature. According to research done by Deeptrace, an AI firm, out of a sample of approximately 15,000 deepfake videos taken in September of 2019, approximately 95% of them were pornographic in nature. A troubling implication of this fact is that as the technology becomes easier to use, incidents of fake revenge porn could rise.

However, not all deepfakes are pornographic in nature. There are more legitimate uses for deepfake technology. Audio deepfake technology could help people recreate their regular voices after they are damaged or lost due to illness or injury. Deepfakes can also be used to hide the faces of people who are in sensitive, potentially dangerous situations, while still allowing their lips and expressions to be read. Deepfake technology can potentially be used to improve the dubbing of foreign-language films, aid in the repair of old and damaged media, and even create new styles of art.

Non-Video Deepfakes

While most people think of fake videos when they hear the term “deepfake”, fake videos are by no means the only kind of fake media produced with deepfake technology. Deepfake technology is used to create photo and audio fakes as well. As previously mentioned, GANs are frequently used to generate fake images. It’s thought that there have been many cases of fake LinkedIn and Facebook profiles with profile images generated by deepfake algorithms.

It’s possible to create audio deepfakes as well. Deep neural networks are trained to produce voice clones/voice skins of different people, including celebrities and politicians. One famous example of an audio deepfake is when the AI company Dessa made use of an AI model, supported by non-AI algorithms, to recreate the voice of the podcast host Joe Rogan.

How To Spot Deepfakes

As deepfakes become more and more sophisticated, distinguishing them from genuine media will become tougher and tougher. Currently, there are a few telltale signs people can look for to ascertain if a video is potentially a deepfake, like poor lip-syncing, unnatural movement, flickering around the edge of the face, and warping of fine details like hair, teeth, or reflections. Other potential signs of a deepfake include sections of the video that are noticeably lower quality, and irregular blinking of the eyes.

While these signs may help one spot a deepfake at the moment, as deepfake technology improves the only option for reliable deepfake detection might be other types of AI trained to distinguish fakes from real media.

Artificial intelligence companies, including many of the large tech companies, are researching methods of detecting deepfakes. Last December, a deepfake detection challenge was started, supported by three tech giants: Amazon, Facebook, and Microsoft. Research teams from around the world worked on methods of detecting deepfakes, competing to develop the best detection methods. Other groups, like a joint team of researchers from Google and Jigsaw, are working on a type of “face forensics” that can detect videos that have been altered, making their datasets open source and encouraging others to develop deepfake detection methods. The aforementioned Dessa has worked on refining deepfake detection techniques, trying to ensure that the detection models work on deepfake videos found in the wild (out on the internet) rather than just on pre-composed training and testing datasets, like the open-source dataset Google provided.

There are also other strategies being investigated to deal with the proliferation of deepfakes. For instance, checking videos for concordance with other sources of information is one strategy. Searches can be done for video of events potentially taken from other angles, or background details of the video (like weather patterns and locations) can be checked for incongruities. Beyond this, a blockchain-based online ledger system could register videos when they are initially created, holding their original audio and images so that derivative videos can always be checked for manipulation.

Ultimately, it’s important that reliable methods of detecting deepfakes are created and that these detection methods keep up with the newest advances in deepfake technology. While it is hard to know exactly what the effects of deepfakes will be, if there are not reliable methods of detecting deepfakes (and other forms of fake media), misinformation could potentially run rampant and degrade people’s trust in society and institutions.

Implications of Deepfakes

What are the dangers of allowing deepfakes to proliferate unchecked?

One of the biggest problems that deepfakes create currently is nonconsensual pornography, engineered by combining people’s faces with pornographic videos and images. AI ethicists are worried that deepfakes will see more use in the creation of fake revenge porn. Beyond this, deepfakes could be used to bully and damage the reputation of just about anyone, as they could be used to place people into controversial and compromising scenarios.

Companies and cybersecurity specialists have expressed concern about the use of deepfakes to facilitate scams, fraud, and extortion. Allegedly, deepfake audio has already been used to convince employees of a company to transfer money to scammers.

It’s possible that deepfakes could have harmful effects even beyond those listed above. Deepfakes could potentially erode people’s trust in media generally, and make it difficult for people to distinguish between real news and fake news. If many videos on the web are fake, it becomes easier for governments, companies, and other entities to cast doubt on legitimate controversies and unethical practices.

When it comes to governments, deepfakes may even pose threats to the operation of democracy. Democracy requires that citizens are able to make informed decisions about politicians based on reliable information, and misinformation undermines democratic processes. For example, the president of Gabon, Ali Bongo, appeared in a video attempting to reassure the Gabonese citizenry. The president had been assumed to be unwell for a long period of time, and his sudden appearance in a likely fake video kicked off an attempted coup. President Donald Trump claimed that an audio recording of him bragging about grabbing women by the genitals was fake, despite also describing it as “locker room talk”. Prince Andrew also claimed that an image provided by Emily Maitlis’ attorney was fake, though the attorney insisted on its authenticity.

Ultimately, while there are legitimate uses for deepfake technology, there are many potential harms that can arise from the misuse of that technology. For that reason, it’s extremely important that methods to determine the authenticity of media be created and maintained.
