Google’s AI video classifiers are easily fooled by subliminal images

19:30  04 April 2017  Source: The Verge





Google is currently in a bit of hot water with some of the world’s most powerful companies, who are peeved that their ads have been appearing next to racist, anti-Semitic, and terrorist videos on YouTube. Recent reports brought the issue to light, and in response, brands have been pulling ad campaigns while Google piles more AI resources into verifying videos’ content. The problem is, the search giant’s current algorithms might just not be up to the task.

A recent research paper, published by the University of Washington and spotted by Quartz, makes the problem clear. It tests Google’s Cloud Video Intelligence API, which is used to automatically classify the content of videos using object recognition. (The system is currently in private beta, but has been “applied on large-scale media platforms like YouTube,” says Google.) The API, which is powered by deep neural networks, works very well against regular videos, but researchers found it was easily tricked by a determined adversary.
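Google has not published how the API arrives at a video-level label, but classifiers of this kind typically sample frames at a fixed rate and aggregate per-frame predictions. A minimal sketch of that idea, with all names hypothetical and a toy per-frame classifier standing in for a real neural network:

```python
from collections import Counter

def classify_video(frames, classify_frame, sample_rate=30):
    """Label a video by majority vote over frames sampled at a fixed stride.

    frames: sequence of video frames (toy stand-ins here).
    classify_frame: callable mapping one frame to a label string.
    sample_rate: keep one frame out of every `sample_rate`.
    """
    sampled = frames[::sample_rate]
    votes = Counter(classify_frame(f) for f in sampled)
    # The most common per-frame label becomes the video's label.
    return votes.most_common(1)[0][0]

# Toy example: frames are just their own labels.
video = ["animal"] * 300          # e.g. 10 seconds at 30 fps
print(classify_video(video, lambda f: f))  # → animal
```

The weakness this exposes: whichever frames happen to land on the sampling stride control the vote, regardless of how little screen time they get in the full video.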




In the paper, the University of Washington researchers describe how a test video (provided by Google and named Animals.MP4) is given the tags “Animal,” “Wildlife,” “Zoo,” “Nature,” and “Tourism” by the company’s API. However, when the researchers inserted pictures of a car into the video the API said, with 98 percent certainty, that the video should be given the tag “Audi.” The frames — called “adversarial images” in this context — were inserted roughly once every two seconds.
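The insertion step itself is simple. A sketch of the researchers' approach as described above, with frames represented as plain values (real videos would need a frame-level library; the function names and toy labels here are assumptions for illustration):

```python
def insert_adversarial_frames(frames, adv_frame, fps=25, period_s=2.0):
    """Return a copy of `frames` with `adv_frame` inserted roughly
    every `period_s` seconds of footage."""
    step = int(fps * period_s)  # number of original frames between insertions
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        if (i + 1) % step == 0:
            out.append(adv_frame)
    return out

# 4 seconds of "zoo" footage at 25 fps gets two hidden "audi" frames.
video = ["zoo"] * 100
doctored = insert_adversarial_frames(video, "audi")
print(doctored.count("audi"))  # → 2
```

At 25 frames per second, each inserted image is on screen for a single frame, which is why a human viewer barely registers it while a classifier that samples near the insertion points can be dominated by it.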

An illustration of how images are inserted into videos to fool Google’s API. Image via “Deceiving Google’s Cloud Video Intelligence API Built for Summarizing Videos” © Provided by The Verge

“Such vulnerability seriously undermines the applicability of the API in adversarial environments,” write the researchers. “For example [...] an adversary can bypass a video filtering system by inserting a benign image into a video with illegal contents.”

This work underscores a clear trend in the tech world. As companies like Google, Facebook, and Twitter deal with unsavory content on their platforms, they’re increasingly turning to artificial intelligence to help sort and classify data. However, AI systems are never perfect, and often make mistakes or are capable of being tricked. This has already been demonstrated with Google’s anti-troll filters, which are designed to classify insults but can be fooled by slang, rogue punctuation, and typos. It seems it still takes a human to reliably tell us what humans are really up to.


Source: http://au.pressfrom.com/news/tech-and-science/-16233-google-s-ai-video-classifiers-are-easily-fooled-by-subliminal-images/
