The images of Trevor Noah and Michael Kosta above show what happens when they’re run through an AI image generator with the prompt “two men ballroom dancing,” depending on whether the photo has or hasn’t been modified to resist AI image manipulation.
Like many of the world’s best and worst ideas, MIT researchers’ plan to combat AI-generated deepfakes was hatched when one of their number watched their favorite not-news news show.

On the Oct. 25 episode of The Daily Show with Trevor Noah, OpenAI’s Chief Technology Officer Mira Murati talked up AI-generated images. Though she could likely discuss OpenAI’s AI image generator DALL-E 2 in great detail, it wasn’t a very in-depth interview; after all, it was aimed at viewers who likely know little to nothing about AI art. Still, it offered a few nuggets of thought. Noah asked Murati if there was a way to make sure AI programs don’t lead us to a world “where nothing is real, and everything that’s real, isn’t?”
Last week, researchers at the Massachusetts Institute of Technology said they wanted to answer that question. They devised a relatively simple program that uses data poisoning techniques to subtly disturb the pixels within an image, creating invisible noise that renders AI art generators incapable of producing realistic deepfakes from the photos they’re fed. Aleksander Madry, a computer science professor at MIT, worked with the team of researchers to develop the program and posted their results on Twitter and his lab’s blog.
Last week on @TheDailyShow, @Trevornoah asked @OpenAI @miramurati a (v. important) Q: how can we safeguard against AI-powered photo editing for misinformation? https://t.co/awTVTX6oXf

My @MIT students hacked a way to “immunize” photos against edits: https://t.co/zsRxJ3P1Fb (1/8) pic.twitter.com/2anaeFC8LL
— Aleksander Madry (@aleks_madry) November 3, 2022
Using photos of Noah with Daily Show comedian Michael Kosta, they showed how this imperceptible noise prevents a diffusion model AI image generator from creating a new photo using the original as a template. The researchers proposed that anybody planning to upload an image to the internet could first run their photo through the program, essentially immunizing it against AI image generators.
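For the technically minded, the broad idea resembles a standard adversarial attack: nudging pixels within an invisible budget so that a diffusion model’s image encoder no longer “sees” the original photo. The sketch below is our own illustration of that idea in PyTorch, a PGD-style perturbation against a stand-in encoder; the function names and hyperparameters are assumptions, not the MIT team’s actual code.

```python
import torch
import torch.nn.functional as F

def immunize(image: torch.Tensor, encoder, eps=8/255, step=2/255, iters=40):
    """Add imperceptible adversarial noise to `image` so a diffusion
    model's image encoder maps it far from its true latent.
    `encoder` stands in for any differentiable image encoder; all
    names and hyperparameters here are illustrative assumptions."""
    clean_latent = encoder(image).detach()          # latent of the clean photo
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        # Maximize the distance between the perturbed latent and the clean one.
        loss = F.mse_loss(encoder(image + delta), clean_latent)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()       # ascend the loss
            delta.clamp_(-eps, eps)                 # keep the noise invisible (L-inf ball)
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

Because the perturbation is capped at a few intensity levels per pixel, the immunized photo looks identical to a human viewer even as the encoder’s output drifts far from the original.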
Hadi Salman, a PhD student at MIT whose work revolves around machine learning models, told Gizmodo in a phone interview that the system he helped develop takes only a few seconds to introduce noise into a photo. Higher-resolution images work even better, he said, since they include more pixels that can be minutely disturbed.
Google is creating its own AI image generator called Imagen, though few people have been able to put the system through its paces. The company is also working on a generative AI video system. Salman said they haven’t tested their system on video, but in theory it should still work, though MIT’s program would have to individually process every frame of a video, which could mean tens of thousands of frames for any video longer than a few minutes.
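To give a sense of that scale: at 30 frames per second, a ten-minute clip is 10 × 60 × 30 = 18,000 individual images, each needing its own immunization pass. A naive sketch of what that per-frame loop might look like follows; the immunize_frame helper is hypothetical, since the researchers haven’t built a video version.

```python
import cv2  # assumption: OpenCV is used only to split the clip into frames

def immunize_video(path: str) -> list:
    """Hypothetical per-frame immunization: each frame gets its own
    adversarial-noise pass, so cost scales linearly with clip length."""
    cap = cv2.VideoCapture(path)
    protected = []
    ok, frame = cap.read()
    while ok:
        protected.append(immunize_frame(frame))  # hypothetical per-frame call
        ok, frame = cap.read()
    cap.release()
    # A 10-minute clip at 30 fps means 18,000 such passes.
    return protected
```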
Salman said he could imagine a future where companies, even those that make the AI models, could certify that uploaded images are immunized against AI models. Of course, that isn’t much help for the millions of images already uploaded to open source libraries like LAION, but it could make a difference for any image uploaded in the future.
Madry also told Gizmodo via phone that, although the data poisoning has worked in many of their tests, the system is more a proof of concept than a product release of any kind. The researchers’ program proves there are ways to defeat deepfakes before they happen.
Companies, he said, need to get to know this technology and implement it into their own systems to make them even more resistant to tampering. Moreover, the companies would need to make sure that future versions of their diffusion models, or any other kind of AI image generator, won’t be able to ignore the noise and generate new deepfakes.
“What really should happen moving forward is that all the companies that develop diffusion models should provide capability for healthy, robust immunization,” Madry said.
Other experts in the machine learning field found points to critique in the MIT researchers’ work.
Florian Tramèr, a computer science professor at ETH Zurich in Switzerland, tweeted that the major difficulty is you essentially get one try to fool all future attempts at creating a deepfake with an image. Tramèr co-authored a 2021 paper published at the International Conference on Learning Representations which found that data poisoning, like what the MIT system does with its image noise, won’t stop future systems from finding ways around it. Moreover, creating these data poisoning systems will spark an “arms race” between commercial AI image generators and those trying to prevent deepfakes.
There have been other data poisoning programs meant to deal with AI-based surveillance, such as Fawkes (yes, like the 5th of November), developed by researchers at the University of Chicago. Fawkes also distorts the pixels in images, in such a way that it prevents companies like Clearview AI from achieving accurate facial recognition. Other researchers from the University of Melbourne in Australia and Peking University in China have also analyzed possible systems that create “unlearnable examples” that AI image generators can’t use.
The problem is, as Fawkes developer Emily Wenger noted in an interview with MIT Technology Review, programs like Microsoft Azure have managed to win out against Fawkes and detect faces despite its adversarial techniques.
Gautam Kamath, a computer science professor at the University of Waterloo in Ontario, Canada, told Gizmodo in a Zoom interview that in the “cat and mouse game” between those building AI models and those finding ways to defeat them, the people making new AI systems seem to have the edge, since once an image is on the internet, it never really goes away. If an AI system eventually bypasses the protections meant to keep an image from being deepfaked, there’s no real way to remedy it.
“It’s possible, if not likely, that in the future we’ll be able to evade whatever defenses you put on that one particular image,” Kamath said. “And once it’s out there, you can’t take it back.”
Of course, there are some AI systems that can detect deepfake videos, and there are ways to train people to detect the small inconsistencies that show a video is being faked. The question is: will there come a time when neither human nor machine can discern if a photo or video has been manipulated?
For Madry and Salman, the answer lies in getting the AI companies to play ball. Madry said they’re looking to touch base with some of the major AI generator companies to see whether they’d be interested in facilitating the proposed system, though of course it’s still early days, and the MIT team is still working on a public API that would let users immunize their own photos (the code is available here).
In that way, it’s all dependent on the people who make the AI image platforms. OpenAI’s Murati told Noah in that October episode that they have “some guardrails” for their system, claiming they don’t allow people to generate images based on public figures (a rather nebulous term in the age of social media, where practically everyone has a public face). The team is also working on more filters that will restrict the system from creating violent or sexual images.
Back in September, OpenAI announced users could once again upload human faces to their system, but claimed they had built in ways to stop users from showing faces in violent or sexual contexts. It also asked users not to upload images of people without their consent, but it’s a lot to ask of the general internet to make promises without crossing their fingers.
However, that’s not to say other AI generators, and the people who made them, are as game to moderate the content their users generate. Stability AI, the company behind Stable Diffusion, has shown it’s much more reluctant to introduce barriers that stop people from creating porn or derivative artwork using its system. And while OpenAI has been, ahem, open about trying to stop its system from displaying bias in the images it generates, Stability AI has kept pretty mum.
Emad Mostaque, the CEO of Stability AI, has argued for a system free of government or corporate influence, and has so far fought back against calls to put more restrictions on his AI model. He has said he believes image generation will be “solved in a year,” letting users create “anything you can dream.” Of course, that’s just the hype talking, but it does show Mostaque isn’t willing to back down from pushing the technology further and further.
I maintain image will be fully solved in a year, create anything you can Dream.

The power of open source AI, now what’s next.. https://t.co/aUWpWal6au
— Emad (@EMostaque) October 16, 2022
Still, the MIT researchers are standing firm.
“I think there’s a lot of very uncomfortable questions about what is the world when this kind of technology is easily accessible, and again, it’s already easily accessible and will be even more easy for use,” Madry said. “We’re really glad, and we are really excited about this fact that we can now do something about this consensually.”