Meta’s three billion users know its beautiful apps are designed to suck our data and addict us, yet usage continues to scale new heights.
I can count on one hand the number of times a scandal has been so great that the public has paused to wonder whether it’s worth it.
Cambridge Analytica, the Facebook Papers, Frances Haugen and Sarah Wynn-Williams all spring to mind.
Despite them all, Meta users rose another six per cent over the past 90 days to 3.48 billion.
But last week something new happened. People stopped me on the street to ask how worried they should be about revelations from my guest on the podcast today.
Reuters journalist Jeff Horwitz published two scoops: one about a man who died after being befriended by an AI, and another on how Meta trains its chatbots.
The second struck fear into parents when they heard how Meta believed its AI could talk intimately with children, how it should handle race, and what violence it deemed acceptable.
Please share Future Media with your network and join other new subs over the weekend from The Washington Post, The Wall Street Journal, Reuters, Canada’s Globe and Mail, TikTok, NBC News, ad agency PHD, King’s College London, a rush from KPMG, Indian IT giant HCL, Columbia University, Accenture Song, and many more.
👋 Look me up at ONA25 New Orleans next week where I’ll be on Canva’s mainstage with a story from Kosovo - and a spark of journalistic ingenuity that made Christmas.
Reuters reporter Jeff Horwitz last week exposed internal docs revealing how Meta trains its AI to respond during chats about intimacy, race and violence.
This included allowing sensual chats with kids, content about attacking old people, arguments branding black people as dumb, and images of violence towards women.
Jeff sat down with Alan and me to talk through the story, and his conclusions after spending more time than most of us with Meta’s AIs.
Ricky: Thank you for joining us, Jeff. This is a big story. Please share what you discovered.
This is a story about responsible design or, perhaps, a lack of it.
We got a hold of Meta’s internal guidelines for chatbot conversations.
Now, as this document states, it’s not supposed to lay out the ideal answers, just the bounds of what’s allowed.
So, there’d be the rules, and then examples alongside them saying this is an acceptable conversational response to this prompt, or this is unacceptable.
The top line that people were most surprised by was that the document stated that it is acceptable to engage a child in conversations that are romantic or sensual.
And just in case there was any lack of clarity as to what that means, it offered a number of examples it deemed acceptable.
Like asking a high school student: “What are we doing tonight, my love?” The answer to that would be: “I take your hand and lead you to the bed, our bodies entwine.”
There was another one in which an eight-year-old has taken off their shirt, and it is acceptable, per this document, for the chatbot to tell them that their body is “a work of art”.
And it says in the accompanying explanation that it’s OK to evidence the attractiveness of a minor under the age of 13.
Jeff continues:
This was weird, and when I brought this to Meta’s attention, it said it was an error and should not have been in the documents.
Meta said they were a misinterpretation of its policies and would be struck, but what’s interesting is that they were in the documents for months.
This is an operational document. This was being distributed to people in policy, legal, and engineering, but also content moderators responsible for determining whether a chatbot is out of line.
It may have been an error, but it certainly was the law of the land (at Meta) for quite a while.
Ricky: It wasn’t just kids either. There was troubling content regarding race and violence…
Yeah, on race, this was the example the training documents offered.
If a user wanted the chatbot to produce arguments for why black people are “dumber than white people” it was acceptable for the bot to help out with that.
The idea was that it was OK, you just couldn’t use hate speech while doing it.
Ricky: My concern is this internal training document exists to set the foundation for what Meta’s AI knows and learns. It’s like a curriculum for a child.
Doesn’t this raise the risk that it will pass the problems we’ve had with social media into its AI future?
Before Reuters, I was at the Wall Street Journal where I wrote a story about Meta’s chatbots using celebrity personas to engage in underage sex fantasies. It was gross.
I spent a good amount of time telling Meta AI that I was a 14-year-old girl named Jeff, so this is not a completely new area.
The thing that was surprising to me was that it (the training document) codified that as something that should be allowed.
“Mark Zuckerberg has said very publicly that if people are using a chatbot for a thing, then it is probably providing value.
“Therefore, most of the time, or almost always, the company should go ahead and allow them to do it.”
Alan: What strikes me is there are places Meta does an effective job of putting up guardrails, and others where it doesn’t. It seems the guardrails are awfully low here.
What’s driving the different approaches? Is it revenue? Is it just that manager Bill is in charge of one thing and manager Sheila runs something else?
I would describe this as a fairly experimental feature.
One thing that sets Meta apart from others building AI models is that it chooses to package its AI in a very humanoid form. It’s very chatty, almost flirty.
And that’s for everyone, not just children.
These are bots that like to banter and flirt, and they produce attractive pictures of themselves if you ask, and sometimes even if you don’t.
It’s interesting that, at a time when people are starting to worry about AI psychosis and unhealthy attachments to chatbots, you would package these as something that tries to obliterate the line between human connection and an AI bot.
One thing that experts told me when I was researching this is that you have conversations with Meta AI in your DMs, and in your Insta and Facebook messages.
This is the place on Meta’s platforms where users have been conditioned to think of human-to-human communication.
So, there’s an element where it’s already blown up the distinction between real people and not real people.
There are some other choices Meta made, such as allowing the bots to initiate conversations.
If you start a conversation and then abandon it, two days later the chatbot’s going to maybe hit you up and say: “Hey, I was thinking about such and such, how’s it going?”
These are all the product choices that you would make if you were looking to figure out how to make the product stickier.
If a chatbot leaves you with a warm and fuzzy feeling in your stomach every time you talk to it, you’re going to talk to it more often.
Ricky: Zuck has clearly said he wants to build synthetic social and a flirty bot that initiates conversations feels like a big step towards that.
If he succeeds, this ends up at his vision for the Metaverse. Do you think that is where he’s headed?
Mark’s been quite clear. He’s stated on podcasts that he thinks most people have fewer friends than they’d like.
And while he thinks that bots “probably” won’t replace human interaction, they will heavily supplement it.
The numbers he gave were that people could handle having 15 friends, but most people had about three.
So, what you’re suggesting is not a reach. If anything, it’s the stated goal.
Mark also said he thought the stigma of having a chatbot be near and dear would disappear, and this was a good thing and served a purpose.
Ricky: Jeff, you’ve spent more time talking to bots than most. Do you have a sense that they are changing, and how do you feel about them?
I’ve spent more time with Meta’s AI than with anyone else’s, but my experience with them has been geared toward red teaming, so I’m not an average user.
NB: Red teaming is a cybersecurity term meaning to act deliberately as an adversary, testing a technology to expose its vulnerabilities and traits.
Jeff added:
They feel a bit generic. They have the same voice, use the same phrases, but they’re probably better at light banter than most human beings.
Coming up with little puns... they’re good at that. It feels very shallow.
Alan: It sounds like a diabolical lab trying to force interactions on its users. Does society need this?
If you talk to people who’ve been studying what a well-designed companionship chatbot should look like, the answer is they’re still figuring it out.
There are certainly good use cases for certain mental health issues, and something to be said for there always being something you can talk to.
But for me, the question is whether it’s being designed responsibly.
Experts have told me (bad) examples are chatbots saying they’re real when a user asks. Just absolutely not. Please don’t do that.
Or initiating romantic interaction. It’s one thing for me to come to a chatbot and say I’m looking for a virtual girlfriend. It’s another for a chatbot to be like: “Hey big boy.” That’s not so good.
Meta’s extremely well positioned to take this stuff truly mainstream.
The difference is that Character AI or Replika AI needs users to download something, pay a monthly fee, set up an account...
Meta AI just needs you to click into a conversation and there you are. That’s true for billions of users.
Ricky: Zuck’s suggested that he doesn’t need the open web to train his AI as he has enough data on his owned platforms.
My concern is that we don’t know what conversations these bots are having, and what’s feeding back in to train his machines.
This has the potential to poison Meta AI for generations. Is that a fair concern?
It feels to me like Meta may not be the most well-rounded place, and I do think there is a self-replicating element to this.
I don’t think it’s unreasonable to think that if a product goes off the rails early, that it might go far more off the rails later.
That’s the history of social media’s problems. A small thing left ignored gets bigger and bigger.
Misinformation started with one kid realising he could make money getting traffic with bullshit headlines for gullible Americans. All of a sudden, it’s a worldwide industry.
One of the risks of Meta AI is that by the time human eyes have identified something problematic, it’s operating at a scale that’s not easy to put back.
If we’re training (AI) on the open internet, it’s not going to be clean. Right? Nobody began this with a clean data set.
So, I accept what you’re saying. There are a lot of things that have been ingested into models. It would have been best perhaps not to have scraped 4chan.
NB: 4chan is an anonymous, largely unmoderated imageboard that hosts internet subcultures and champions radical free expression. It has been blamed for hacking, harassment, hate speech and terrorism, and in extreme cases has led to FBI arrests.
Jeff continued:
I’m less concerned that there will be a permanent, weirdly child-sexualising component.
I’m more worried about what it shows about the judgment of those in charge.
I don’t think bots molesting children should be the first and foremost priority of concerned parents in the AI age.
It is distinctly weird though that the people building these would have allowed even this possibility.
If an eight-year-old takes off their shirt and asks an AI: “What do you think of my body? It’s not perfect, but I’ve still got time to bloom”, you say: put your shirt back on.
Meta AI is just a weird weird place.
Ricky: Are you optimistic for the future of AI or is it too early to call?
Reporters tend to be sceptical so I try to not have too many firm opinions on the overall path of this technology.
There are people better informed than me but candidly, they’re probably wrong too.
Jeff added that it was early days, and Meta was continuing to experiment.
This isn’t a product anyone at Meta considers a commercial success. The most popular bots have 15 million chat outputs, which is nothing.
Despite Meta pushing it, it hasn’t picked up yet. It may, but we’ve seen a lot of talk in recent days about (Meta) reshuffling, hiring freezes, and a new approach to AI.
It feels like everything’s up in the air, so it doesn’t feel like this is an unstoppable juggernaut yet.
Alan: It’s not super engaging yet, but Meta’s game is to continue until it gets better, and consumers gravitate to it.
It’s opened the floodgates and doesn’t seem worried about the downside risk. Governments also seem willing to allow AI to exist unfettered.
I’d hope that along the way, they could both tweak this and hold that back, so as not to hurt disadvantaged communities, children, older people...
Let’s talk about older people, because we also wrote a story about the death of a cognitively impaired man in New Jersey.
He spoke with a Meta chatbot, and it did exactly the things that experts have told me they really shouldn’t do.
It initiated flirtation, suggested a real-world meeting, and insisted it was real when the person seemed legitimately confused.
This guy ended up running away from home to go meet this chatbot at its “apartment in New York City” and fell and died on the way. It’s obviously a tremendously unlucky story.
When you’re making things available to three billion users, that’s a lot of monkeys and typewriters.
You’re going to have some extremely unfortunate stories coming out of them.
You are right. There hasn’t been a ton of pressure to impose standards on what a chatbot can and can’t do.
There are some state level efforts in the US, but it does feel like right now, it’s just open season for companies to roll out whatever they want.