Floor Speech

Date: Dec. 4, 2023
Location: Washington, DC

BREAK IN TRANSCRIPT

Mrs. BLACKBURN. Madam President, the Wall Street Journal ran such an interesting report last week, and I wanted to bring it to everyone's attention. As you know, I have talked so much about the importance of protecting our children from what is happening online.

The Journal had worked with the Canadian Centre for Child Protection, and they were reporting on tests the two had jointly conducted on Instagram. What they were trying to do was to see what type of content Instagram's algorithms were recommending to pedophiles who were interested in sexual content.

Now, think about this, because in the physical world, you have got laws against pedophiles and the content that they are creating and distributing. But in the virtual space, our children do not have that protection. That is the premise that the Canadian Centre for Child Protection was working from, and this is what the Wall Street Journal was reporting on.

The results were absolutely disgusting. When you go in and you look at what they saw, you realize that Instagram actually delivers short videos showing children and adults in sexual situations. See, it is serving it up for these pedophiles. It is delivering it. All they have to do--a click of the mouse, and it is right there on their screen.

Here is an example feed that the test produced. Bear in mind, their researchers are going in, they are looking at this, and this is some of the content that was found in the researchers' feed: an adult uncrosses her legs to reveal her underwear; a sprinter at a track meet runs over a small boy who steps onto the track; advertisements promote trips to Disneyland; a child in a bathing suit records herself posing in a mirror; an adult-content creator gives a ``come hither'' motion; a girl dances in a car while a song with sexual lyrics plays. That is a snippet of what came up in one researcher's feed on the platform.

The tests also found that Instagram was providing videos and pictures of missing and exploited children as well as videos confirmed to be child sexual abuse material. We call that CSAM. At the Senate Judiciary Committee, we have done some good work to prohibit CSAM and to protect our children. All of this legislation should come to the floor. It should be passed immediately.

Now, beyond what I have just read to you of what the researchers found, there is even more. The report showed that Instagram was well aware that its algorithms could produce this stream of content. Bear in mind, this is illegal content. This is child sexual abuse content.

Former Meta employees--and, of course, we know Meta owns Facebook and Instagram--told the Journal that Meta knew its algorithms could specifically aggregate content sexualizing children. And this ties in with so much of what Senator Blumenthal and I have found as we led hearings looking into what was happening on these online platforms and how it was affecting our children. These platforms know what is happening. They are fully aware. They know that these algorithms will aggregate that content and then serve it up to you--fully aware of it.

But you know why they don't change it? They don't change it because they put profits over the protection of our children. They make a conscious choice to keep it the way it is.

Now, before Meta released Reels, its safety staff warned that the product would chain together videos of children and inappropriate content. The safety team actually provided recommendations: Meta should either increase its content detection capabilities or prevent the recommendation of any content containing minors. They gave them choices and options and said: Here is how you can go about protecting children before you put Reels out there.

Now, those are two suggestions that were made to Meta by their own staff. This is how you can protect children: Increase your detection capabilities or prevent the recommendation.

Now, it is the algorithms that feed up these recommendations: If you like this, you are going to like this. You loved this. Just wait; you are going to love this.

Now, Meta said no to each of those. There again, why is it that they said no? Well, it is what we see repeatedly: They are putting profits ahead of protecting our children.

So think about this. How do these platforms--how does Meta--get their net worth? Well, all of it is based on the number of eyeballs they capture and the length of time they can keep people on their site. So they ignore the suggestions on how to make that site safer for our children.

Meta employees actually said that preventing the system from pushing this content to users who are interested in it--well, what users do you think are interested in child sexual abuse content? It is pedophiles. It is criminals. These employees said that preventing the system from pushing this content to those users ``requires significant changes to the recommendation algorithms that also drive engagement for normal users.''

I cannot believe that they are so hardened, so careless, that they would think: If somebody wants this, serve it up. That video may show a child who is being sexually exploited or even a child who is missing, but--you know what--serve it up. They think the dollar is worth it.

The Journal also reported on Meta documents. Now, this is not just hearsay. It is not anecdotal. These are actual corporate documents. Now, these documents showed that ``the company's safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.''

Now, in other words, they have the tools; they have the technology. They could put in place things that would protect children, but the company will not let its employees take the actions that would protect children because it might mean that a user is not on the site for as long. And as I said, they get their valuation from the number of eyeballs they capture and the amount of time those users spend on the site.

This is absolutely unbelievable, but it is the way Meta is choosing to operate. And Meta is not alone. Other social media platforms are right in there with them. They keep dishing up this harmful and destructive content.

Why do we have a mental health crisis for our children in this country? Could this possibly be a part of the problem? Why is it that one in three American teenage girls has contemplated suicide? Could this possibly be a part of the problem? Why is it that we are finding out that well over a third of all kids meet a drug dealer online? Why is it that we are learning that children who are groomed by a sex trafficker first meet them online?

The lack of care and concern for our Nation's children is stunning. And this is Big Tech. They would rather make a buck than protect a child. Don't try to take away their ability to keep people locked in on that screen. The longer they can keep them there, the happier they are.

Well, all of this is one of the reasons that, for the last several years, Senator Blumenthal and I have worked on the Kids Online Safety Act, and we have continued this work because it is obvious that these platforms cannot be trusted to do even the bare minimum to protect our Nation's children. And we are asking for just the bare minimum.

Now, the Kids Online Safety Act has the support of 49 Senators in this Chamber, and I thank everyone who is a part of this. We also, Madam President, have 230 advocacy organizations in this country that are in support of the Kids Online Safety Act. And interestingly enough, with the polling we have seen lately, 86 percent of the American people support the Kids Online Safety Act.

Here is what it would do: First, it would force platforms to give families the ability to protect minors' information, disable addictive product features, and opt out of algorithmic recommendations. These are all things that parents and kids want to be able to do because, maybe, there is stuff that they are seeing that they really don't want to see.

Next, it would give parents the safeguards that are needed to protect their kids' online experience as well as a dedicated channel to report harmful behavior. We have met with parents who talk about reporting cyberbullying and reporting videos of the different challenges circulating online. Some of these parents' children have been injured. Some have lost their lives. Some have died by suicide. These parents want a dedicated channel to report harmful behavior, and the legislation requires these platforms to respond to parents and kids.

Predatory content and content that promotes self-harm, suicide, and eating disorders to minors would now, indeed, be a problem for these platforms to deal with. No longer would they be able to deny and deflect while knowing this content is on their sites.

We also included requirements for annual risk assessments and independent research reports we can use to assess safety threats to underage users.

Madam President, it is time for the Senate to finally act on the harms online platforms are posing to our little ones. Our Kids Online Safety Act, the REPORT Act--we have got great bills that would rein in some of this reckless behavior.

And as I have described, platforms like Meta know. They are fully aware of the harms they are causing. We have had whistleblowers talk to us about the harms and confirm that these companies know the harms exist.

So with 49 Members of this Chamber supporting the legislation, it is time that we move forward with it, and we should get this done before the end of the year.

BREAK IN TRANSCRIPT
