
Facebook AI Fails to Detect Murder


Facebook has given another update on the measures it took, and what more it is doing, in the wake of the livestreamed video of a gun massacre by a terrorist who killed 50 people in two mosques in Christchurch, New Zealand.

Earlier this week the company said the video of the killings was viewed fewer than 200 times during the livestream broadcast itself, and roughly 4,000 times before it was removed from Facebook — with the stream not reported to Facebook until 12 minutes after it had ended.

None of the users who watched the killings unfold on the platform in real time reported the stream, according to the company.

It also previously said it removed 1.5 million versions of the video from its site in the first 24 hours after the livestream, with 1.2 million of those caught at the point of upload — meaning it failed to stop 300,000 uploads at that point. We found other versions of the video still circulating on the platform 12 hours later.

In the wake of the livestreamed terror attack, Facebook has continued to face calls from world leaders to do more to make sure such content cannot be distributed via its platform.

The prime minister of New Zealand, Jacinda Ardern, told media yesterday that the video “should not be distributed, available, able to be viewed”, dubbing it “horrendous”.

She confirmed Facebook had been in contact with her government but stressed that, in her view, the company has not done enough.

She also later told the New Zealand parliament: “We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published. They are the publisher. Not just the postman.”

We asked Facebook for a response to Ardern’s call for online content platforms to accept publisher-level responsibility for the content they distribute. Its spokesman avoided the question — pointing instead to its latest piece of crisis PR, titled “A Further Update on New Zealand Terrorist Attack”.

Here it writes that “people are looking to understand how online platforms such as Facebook were used to broadcast horrific videos of the terrorist attack”, saying it therefore “wanted to provide additional information from our review into how our products were used and how we can improve going forward”, before going on to reiterate many of the details it has already put out.

These include that the massacre video was quickly shared on the 8chan message board by a user posting a link to a copy of the video on a file-sharing site, before Facebook itself had been alerted to the video being broadcast on its platform.

It goes on to imply 8chan was a hub for the broader sharing of the video — claiming that “forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan”.

So it is clearly trying to make sure it is not singled out by political leaders seeking policy responses to the challenge posed by online hate and terrorist content.

A further detail it chooses to dwell on in the update is how the AIs it uses to aid the human content review process for flagged Facebook Live streams are in fact tuned to “detect and prioritize videos that are likely to contain suicidal or harmful acts” — with the AI pushing such videos to the top of human moderators’ content queues, above all the other material they also have to review.
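To make that triage mechanism concrete, here is a minimal sketch of score-based queue prioritization. It is a hypothetical illustration, not Facebook’s actual system; the stream IDs and the harmful_act scores are invented for the example.

```python
import heapq

# Toy sketch of score-based review prioritization: flagged live streams
# are scored by a classifier, and the riskiest ones are pushed to the top
# of the moderators' queue. Stream IDs and scores here are invented.

flagged_streams = [
    ("stream_a", {"harmful_act": 0.10, "spam": 0.80}),
    ("stream_b", {"harmful_act": 0.95, "spam": 0.05}),
    ("stream_c", {"harmful_act": 0.40, "spam": 0.20}),
]

queue = []
for stream_id, scores in flagged_streams:
    # heapq is a min-heap, so negate the score to pop the highest first.
    heapq.heappush(queue, (-scores["harmful_act"], stream_id))

while queue:
    neg_risk, stream_id = heapq.heappop(queue)
    print(f"review {stream_id} (harmful-act score {-neg_risk:.2f})")
```

The catch, as the rest of the update makes clear, is that such a queue is only as good as the categories the upstream classifier has been trained to score, and the categories under which reports are filed.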

Certainly “harmful acts” were involved in the New Zealand terrorist attack.

Facebook explains the failure by saying it does not have the training data to build an algorithm that knows it is looking at mass murder unfolding in real time.

It also suggests the task of training an AI to catch such a horrific scenario is exacerbated by the proliferation of first-person-shooter video game footage on online content platforms.

“To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content — for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”

The video game element is a chilling detail to consider.

It suggests that a harmful real-life act that mimics a violent video game might just blend into the background, as far as AI moderation systems are concerned; invisible in a sea of innocuous, virtually violent content churned out by gamers. (Which in turn makes you wonder whether the Internet-steeped killer in Christchurch knew — or guessed — that filming the attack from a video game-style first-person-shooter perspective might offer a workaround to dupe Facebook’s imperfect AI watchdogs.)

Facebook’s post is emphatic that AI is “not perfect” and is never going to be.

“People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us,” it writes, reiterating yet again that it has ~30,000 people working in “safety and security”, around half of whom are doing the grim toil of content review.

This is, as we have said many times before, a fantastically tiny number of human moderators given the vast scale of content continually uploaded to Facebook’s 2.2BN+ user platform: roughly 150,000 users for every reviewer, if around half of those 30,000 staff do review work.

Moderating Facebook remains a hopeless task because so few humans are doing it.

Nor can AI really help. (Later in the blog post Facebook also writes that there are “millions” of livestreams broadcast on its platform every single day, saying that is why adding a short broadcast delay — as TV stations do — would not help catch inappropriate real-time content.)

At the same time, Facebook’s update makes it clear just how much its ‘safety and security’ systems rely on unpaid humans too: aka Facebook users taking the time and trouble to report harmful content.

Some might say that is an excellent argument for a social media tax.

The fact that Facebook did not get a single report of the Christchurch massacre livestream while the terrorist attack was unfolding meant the content was not prioritized for “accelerated review” by its systems — which, it explains, prioritize reports attached to videos that are still being streamed, because “if there is real-world harm we have a better chance to alert first responders and try to get help on the ground”.

Though it also says it expanded its acceleration logic last year to “also cover videos that were very recently live, in the past few hours”.

But it did so with a focus on suicide prevention — meaning the Christchurch video would only have been flagged for accelerated review in the hours after the stream ended if it had been reported as suicide content.

So the ‘problem’ is that Facebook’s systems do not prioritize mass murder.

“In [the first] report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures,” it writes, adding that it is “learning from this” and “re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review”.

No shit.

Facebook also discusses its failure to prevent versions of the massacre video from resurfacing on its platform — having been, as it tells it, “so effective” at preventing the spread of propaganda from terrorist organizations such as ISIS using image- and video-matching technology.

It claims its tech was outfoxed in this case by “bad actors” creating many different edited versions of the video to try to thwart its filters, as well as by the various ways “a broader set of people distributed the video and unwittingly made it harder to match copies”.

So, essentially, the ‘virality’ of the awful event created too many versions of the video for Facebook’s matching tech to cope with.

“Some people may have seen the video on a computer or TV, filmed that with a phone and sent it to a friend. Still others may have watched the video on their computer, recorded their screen and passed that on,” it writes.

In all, Facebook says it found and blocked more than 800 visually distinct versions of the video that were circulating on its platform.

It reveals it resorted to using audio-matching technology to try to detect videos that had been visually altered but had the same soundtrack. And it claims it is trying to learn and come up with better techniques for blocking content that is being re-shared widely by individuals, as well as being rebroadcast by mainstream media. So any kind of major news event, basically.
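For a sense of how “visually distinct” copies can be matched at all, here is a minimal sketch of the perceptual-hashing idea that underpins this kind of image and video matching. It is a toy illustration, not Facebook’s implementation; the random arrays stand in for video frames.

```python
import numpy as np

# Toy sketch of perceptual ("average") hashing: shrink a frame, compare
# each block to the mean, and keep the resulting bit string, which
# survives small edits such as re-encoding or mild noise.

def average_hash(frame, size=8):
    """Downsample to size x size by block-averaging, then threshold at the mean."""
    h, w = frame.shape
    small = frame[: h - h % size, : w - w % size]
    small = small.reshape(size, small.shape[0] // size,
                          size, small.shape[1] // size).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (64, 64)).astype(float)
re_encoded = original + rng.normal(0, 4, original.shape)  # lightly degraded copy
unrelated = rng.integers(0, 256, (64, 64)).astype(float)  # different clip

print(hamming(average_hash(original), average_hash(re_encoded)))  # small distance
print(hamming(average_hash(original), average_hash(unrelated)))   # large distance
```

Audio fingerprinting applies the same principle to spectrogram blocks rather than pixels, which is why a re-edited video carrying the original soundtrack can still be caught.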

In a section on next steps, Facebook says improving its matching technology to prevent the spread of inappropriate viral videos is its top priority.

But audio matching clearly will not help if malicious re-sharers both re-edit the visuals and switch the soundtrack in future.

It also concedes it needs to be able to react faster “to this type of content on a live streamed video” — though it has no firm fixes to offer there either, saying only that it will explore “whether and how AI can be used for these cases, and how to get to user reports faster”.

Another priority it claims among its “next steps” is combating “hate speech of all kinds on our platform”, saying this includes more than 200 white supremacist organizations globally “whose content we are removing through proactive detection technology”.

It is glossing over plenty of criticism on that front too, though — including research that suggests banned far right hate preachers are able to evade detection on its platform, plus its own foot-dragging on shutting down far right extremists. (Facebook only finally banned one infamous UK far right activist last month, for instance.)

In a final PR sop, Facebook says it is committed to expanding its industry collaboration to tackle hate speech via the Global Internet Forum to Counter Terrorism (GIFCT), which formed in 2017 as platforms were being squeezed by politicians to scrub ISIS content — in a collective effort to stave off tighter regulation.

“We are experimenting with sharing URLs systematically rather than just content hashes, are working to address the range of terrorists and violent extremists operating online, and intend to refine and improve our ability to collaborate in a crisis,” Facebook writes now, offering more vague experiments as politicians call for content responsibility.
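As a toy illustration of why exact content hashes are brittle on their own (the byte strings below are placeholders, and this is not GIFCT’s actual mechanism), a single altered byte produces a completely different digest — part of why URL sharing and perceptual matching are being explored alongside hash sharing:

```python
import hashlib

# A cryptographic hash changes completely after a one-byte edit, so
# sharing exact hashes between platforms cannot catch edited copies.

original = b"...video bytes..."
edited = b"...video bytes!.."  # trivially altered copy

print(hashlib.sha256(original).hexdigest()[:16])
print(hashlib.sha256(edited).hexdigest()[:16])  # completely different digest
```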



Big Developments Bring Us Closer to Fully Untethered Soft Robots


Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Caltech have developed new origami-inspired soft robotic systems that move and change shape in response to external stimuli, bringing us closer to fully untethered soft robots. Today’s soft robots typically rely on external power and control, so they must be tethered to off-board systems through hard components.

The research was published in Science Robotics. Jennifer A. Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and co-lead author of the study, spoke about the new developments.

“The ability to integrate active materials within 3D-printed objects enables the design and fabrication of entirely new classes of soft robotic matter,” she said. 

The researchers used origami as a model to create multifunctional soft robots. Origami, through sequential folds, can take on multiple shapes and functions while remaining a single structure. The team used liquid crystal elastomers that change shape when exposed to heat, and 3D-printed two types of soft hinges that fold at different temperatures and can thus be programmed to fold in a specific order.
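To illustrate how per-hinge temperature responses could translate into a programmed fold sequence, here is a toy model; it is a sketch of the idea only, not the SEAS/Caltech fabrication or control code, and the hinge names and actuation temperatures are invented.

```python
# Toy model of programmable fold order: each printed hinge actuates at its
# own temperature, so a rising temperature triggers the folds in sequence.
# All values below are hypothetical, chosen only to illustrate the idea.

hinges = [
    ("hinge_A", 90.0),   # (name, actuation temperature in °C)
    ("hinge_B", 140.0),
    ("hinge_C", 180.0),
]

def folded_so_far(hinges, temperature):
    """Hinges that have actuated by a given temperature, in fold order."""
    return sorted((h for h in hinges if h[1] <= temperature), key=lambda h: h[1])

for temp in (100.0, 150.0, 200.0):
    names = [name for name, _ in folded_so_far(hinges, temp)]
    print(f"{temp:>5.1f} °C -> folded: {names}")
```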

Arda Kotikian is a graduate student at SEAS and the Graduate School of Arts and Sciences, and co-first author of the paper.

“With our method of 3D printing active hinges, we have full programmability over temperature response, the amount of torque the hinges can exert, their bending angle, and fold orientation. Our fabrication method facilitates integrating these active components with other materials,” she said. 

Connor McMahan is a graduate student at Caltech and co-first author of the paper as well. 

“Using hinges makes it easier to program robotic functions and control how a robot will change shape. Instead of having the entire body of a soft robot deform in ways that can be difficult to predict, you only need to program how a few small regions of your structure will respond to changes in temperature,” he said.

The team of researchers built multiple soft devices, including an untethered soft robot called “Rollbot.” It starts as a flat sheet 8 centimeters long and 4 centimeters wide. When it comes into contact with a hot surface of around 200°C, one set of hinges folds, shaping the robot into a pentagonal wheel.

On each of the five sides of the wheel, there are more sets of hinges that fold when in contact with a hot surface. 

“Many existing soft robots require a tether to external power and control systems or are limited by the amount of force they can exert. These active hinges are useful because they allow soft robots to operate in environments where tethers are impractical and to lift objects many times heavier than the hinges,” said McMahan.

This research focused solely on temperature responses. In the future, the liquid crystal elastomers will be studied further, as they can also respond to light, pH, humidity, and other external stimuli.

“This work demonstrates how the combination of responsive polymers in an architected composite can lead to materials with self-actuation in response to different stimuli. In the future, such materials can be programmed to perform ever more complex tasks, blurring the boundaries between materials and robots,” said Chiara Daraio, Professor of Mechanical Engineering and Applied Physics at Caltech and co-lead author of the study.

The research included co-authors Emily C. Davidson, Jalilah M. Muhammad, and Robert D. Weeks. The work was supported by the Army Research Office, Harvard Materials Research Science and Engineering Center through the National Science Foundation, and the NASA Space Technology Research Fellowship. 

 



Modeling Artificial Neural Networks (ANNs) On Animal Brains


Cold Spring Harbor Laboratory (CSHL) neuroscientist Anthony Zador argues that evolution and animal brains can serve as inspiration for machine learning, and that looking to them can help AI solve many of its most pressing problems. With this approach, neuroscientists and those working in the AI field have a new way of tackling some of AI’s hardest challenges.

Anthony Zador, M.D., Ph.D., has dedicated much of his career to explaining the complex neural networks within the living brain, down to the level of the individual neuron. Early in his career, however, his focus was different: he studied artificial neural networks (ANNs), the computing systems, modeled loosely on the networks in animal and human brains, that underpin much of the progress in the AI sector. Until now, that is roughly where the analogy stopped.

A recent perspective piece authored by Zador was published in Nature Communications. In it, Zador details how new and improved learning algorithms have let AI systems greatly outperform humans at a variety of tasks, problems, and games such as chess and poker. Yet even though these computers handle such complex problems so well, they are often confused by things we humans would consider simple.

If researchers could solve this problem, robots could reach a point where they learn to do extremely natural and organic things, such as stalking prey or building a nest, or even washing the dishes, which has proven extremely difficult for robots.

“The things that we find hard, like abstract thought or chess-playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard,” Zador explained. “The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”

Zador thinks that if we want robots to achieve quick learning, something that would change everything in the sector, we should not look only for a perfected general learning algorithm. Instead, scientists should look to the biological neural networks that nature and evolution have handed us. These could serve as a base on which to build fast, easy learning of specific types of tasks, the ones that matter for survival.

Zador points to the squirrels in our own backyards as an example of what genetics, neural wiring, and genetic predisposition can accomplish.

“You have squirrels that can jump from tree to tree within a few weeks after birth, but we don’t have mice learning the same thing. Why not?” Zador said. “It’s because one is genetically predetermined to become a tree-dwelling creature.”

Zador believes that one thing genetic predisposition provides is the innate circuitry within an animal, which guides its early learning. One of the problems with carrying this over to AI is that the networks pursued by machine learning experts are far more generalized than the ones found in nature.
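As a rough illustration of that contrast (a toy sketch, not code from Zador’s paper; the task, layer sizes, and values are invented), the snippet below fixes a random “innate” first layer and learns only a small readout on top, so very little has to be learned:

```python
import numpy as np

# Toy illustration of "innate" wiring: the first layer is fixed at birth
# (here, random but untrained features) and only a small readout is
# learned, so the task can be picked up from very little training.

rng = np.random.default_rng(1)

# Hypothetical task: classify 2D points by a nonlinear (XOR-like) boundary.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

W_innate = rng.normal(size=(2, 64))  # fixed "genetically wired" layer
H = np.tanh(X @ W_innate)            # innate feature responses

# Learn only the readout weights, via closed-form ridge regression.
w = np.linalg.solve(H.T @ H + 0.1 * np.eye(64), H.T @ y)

accuracy = np.mean(((H @ w) > 0.5) == y)
print(f"training accuracy with a fixed innate layer: {accuracy:.2f}")
```

The point of the toy is that when useful structure is wired in from the start, only a small part of the system needs training, which mirrors the squirrel example above.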

If ANNs can be developed to the point where they are modeled on what we actually see in nature, robots could begin to do tasks that were once extremely difficult.

 



California Start-Up Cerebras Has Developed World’s Biggest Chip For AI


California start-up Cerebras has developed the world’s biggest computer chip to be used to train AI systems. It is set to be revealed after being in development for four years. 

Contrary to the normal progression of chips getting smaller, the new one developed by Cerebras has a surface area bigger than an iPad. It is more than 80 times bigger than any competing chip, and it uses a large amount of electricity.

The new development reflects the astounding amount of computing power now being used in AI. So does the $1bn investment from Microsoft into OpenAI announced last month. OpenAI is trying to develop artificial general intelligence (AGI), which would be a giant leap forward, something that would change much of what we know.

Cerebras is unique in this field because of the enormous size of its chip; other companies work endlessly to make their chips ever smaller, and most of today’s advanced systems are assembled from many such small chips. According to Patrick Moorhead, a US chip analyst, Cerebras has basically put an entire computing cluster on a single chip.

Cerebras is looking to join the likes of Intel, Habana Labs, and the UK start-up Graphcore, all of which are building a new generation of specialized AI chips. This development is reaching its biggest stage yet, as the companies will start delivering their first chips to customers by the end of the year. Among them, Cerebras will be looking to become the go-to supplier for the massive computing tasks run by the largest internet companies.

Many more companies and start-ups are involved in this space, including Graphcore, Wave Computing, and the China-based start-up Cambricon. They are all looking to develop specialized AI chips for inference: taking an already-trained AI system and running it in real-world scenarios.

Normally it takes a long time for development to finish and actual products to ship to people and companies; according to the Linley Group, a US chip research firm, there are many time-consuming technical issues. Although products take a while to develop, interest in these companies remains strong. Cerebras has raised over $200m in venture capital and, as of late last year, was valued at about $1.6bn. Global revenue for deep learning chipsets is projected to grow substantially.

These companies are focusing on this type of processor for AI because of the huge amounts of data needed to train neural networks — the networks used in deep-learning systems that are responsible for things such as image recognition.

The chip from Cerebras is a single chip made from a 300mm-diameter circular wafer, the largest silicon disc that current chip factories can produce. The norm is for these wafers to be split into many individual chips rather than used as one giant one, and anyone who tried before ran into problems laying circuitry across something so big. Cerebras got past this by connecting the different sectors of the wafer so they can communicate with each other and act as one big processor.

Looking ahead, Cerebras will try to link cores in a matrix pattern so they can communicate with each other, with the goal of connecting 400,000 cores while keeping all of the processing on one single chip.
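As a rough sketch of what a matrix-pattern layout implies (a toy model, not Cerebras’s actual interconnect; the grid size and values are invented), each core in a 2D grid exchanges data only with its immediate neighbors, so traffic never has to leave the chip:

```python
# Toy model of cores laid out in a matrix: each core talks only to its
# immediate neighbors, so data exchange stays on-chip. A tiny grid stands
# in for the ~400,000 cores Cerebras is aiming for.

ROWS, COLS = 4, 5

def neighbors(r, c):
    """Grid coordinates of the cores directly adjacent to core (r, c)."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS:
            yield nr, nc

# One step of a local exchange: every core averages its value with its
# neighbors', the kind of nearest-neighbor traffic a matrix layout favors.
values = [[float(r * COLS + c) for c in range(COLS)] for r in range(ROWS)]
new_values = [
    [
        (values[r][c] + sum(values[nr][nc] for nr, nc in neighbors(r, c)))
        / (1 + len(list(neighbors(r, c))))
        for c in range(COLS)
    ]
    for r in range(ROWS)
]
print(new_values[0][0])  # corner core averaged with its two neighbors
```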

It will be exciting to see these developments move forward with Cerebras and other companies continuing to advance our AI systems. 

 
