
A report reveals AI image generators are being trained on child sexual abuse images

What's new in AI this week? The biggest story is also the most concerning.
Published on December 20, 2023

[Image: LAION. Credit: Andrew Grush / Android Authority]

Welcome to What’s New in AI, our weekly update where we bring you all the latest AI news, tools, and tips to help you excel in this brave new AI-driven world. Let’s start with the biggest (and still breaking) story of the week:

AI models may be training on real images that depict child abuse

An alarming new report from Stanford’s Internet Observatory has found that the LAION-5B dataset contains at least 3,200 images of suspected child sexual abuse, and so far at least a thousand of those images have been confirmed by Stanford in collaboration with the Canadian Centre for Child Protection and other anti-abuse groups. Most concerning, this dataset has been used to train tools like Stability AI’s Stable Diffusion and Google’s Imagen image generators.

The good news is the nonprofit behind LAION has made it clear that it has a zero-tolerance policy for harmful content and would temporarily take the datasets offline while the problem is investigated. While the images make up only a tiny fraction of the database, the harm can’t be overstated. It’s very possible this data is being used by bad actors to generate illegal pornographic images and for other nefarious purposes. In fact, there have already been incidents of AI being used to digitally remove the clothing from teens in their social media pictures. Previously it was believed this was largely done by combining training data from adult pornography with legal, ordinary images of children collected by the training data’s scrapers. As it turns out, this illegal training data may also be involved.

Unfortunately, the way AI models are trained makes it hard to prevent them from scraping illegal data they find on the internet, which makes safeguards all the more crucial. While many companies, like Stability AI, already have practices in place that reduce harm, there are plenty of unauthorized or hacked tools out there. All 50 US attorneys general have already called on Congress to address AI-generated CSAM, and hopefully this new report shines an even brighter light on the problem and gets the needed protections and regulation in place. You can read more about the story over at the Associated Press.

AI in the news: What else is going on this week

While the report above is certainly the most concerning news of the week, it’s far from the only thing happening in AI. Here’s a quick rundown:

  • Rite Aid banned from using AI facial recognition. Rite Aid has found itself in trouble with the FTC after using AI facial recognition to go after shoplifters and other bad actors. The company used the images to build a “person of interest” database, which store employees then relied on to accuse customers of misbehavior, disproportionately targeting women and people of color. Due to its “reckless use of facial surveillance systems,” the FTC says the company can’t use AI face recognition in its businesses for the next five years.
  • There’s no hiding from our AI overlords. A new project dubbed Predicting Image Geolocations was created by three Stanford students to identify where Google Street View images were taken. But when the students fed it personal photos it had never seen before, the AI was able to pinpoint each person’s location with a high degree of accuracy. To say this presents major privacy concerns would be putting it mildly.
  • EU wants to support homegrown AI startups. The European Union wants to give these startups access to the kind of processing power only a supercomputer can provide.
  • An Arkansas paper is suing Google. The lawsuit by the Helena World Chronicle claims Google’s Bard AI was trained on a dataset that included “news, magazine and digital publications,” and that this is hurting the free press with no positive return. It further asserts that Google’s recent AI efforts are designed to discourage end users from visiting news sites, steering them toward getting answers directly from Google instead.

AI Tools & Apps Spotlight

In our AI Tools & Apps spotlight, we shine a light on new apps and tools we think are worth the extra attention. We’ll be honest: this was a quieter week for tools, which makes sense this deep into the holiday season. This week’s spotlight goes to a tool that isn’t technically new, but it is getting a pretty interesting update.

Microsoft Copilot

[Image: Microsoft Copilot. Credit: Microsoft]

Microsoft Copilot doesn’t get the same level of attention as some of the other chatbots, but it’s built on GPT-4 and is actually pretty powerful. It’s also making its way into more places, including Microsoft Edge, Windows, and more. For the most part, Copilot does the same things as the rest of the major chatbots, but its new integration with Suno makes it worth highlighting this week. Simply visit Microsoft Copilot and ask it to “make music with Suno.” Through a simple plugin, you’ll be able to create your own AI-generated music from a one- or two-line text prompt.

How-to & tips: Bard vs ChatGPT, which AI chatbot does better at trip planning?

Looking to learn more about AI, how to make better use of AI tools, or how to protect your privacy from AI? Each week we bring you a new how-to guide or tip we feel is worth sharing. This week I’m doing a little self-promotion for my latest AI article, which takes a closer look at how Gemini Pro and ChatGPT compare when it comes to planning a trip itinerary. Not only does it offer some insight into how differently the two tools tend to respond, it’s also a good guide for deciding which one better fits your needs.
