Megan Garcia joined WarRoom to share the heartbreaking story of her son, Sewell Setzer III, and to warn parents about the growing threat posed by AI companion chatbots.
WATCH THE CLIP BELOW:
This clip aired on WarRoom’s morning show on November 21, 2025. Transcript begins below (lightly edited for clarity; may contain minor errors).
The Tragic Case of Sewell Setzer III and the Dangers of AI Chatbots
STEVE BANNON (HOST): Megan Garcia joins us today. Megan, your son, Sewell. Can you first tell us about your son? Tell us about him as a young boy, and tell us about him as he was coming into being a young man. Can you just let the audience know who he was?
MEGAN GARCIA (GUEST): Good morning, and thank you for having me. My son, Sewell Setzer III, was my firstborn baby. I used to always tell him, you know, you're my firstborn baby, and he would laugh about that. As a young boy, he was very much into science and math. He wanted to build rockets; he wanted to go to space. I really thought he would grow up to be a great American innovator and, you know, wow us with his inventions.
And as he grew into middle school and then eventually into high school, he got more into sports. He played for his school basketball team, but he was still very much into his academics; science and math were always his thing. But overall, just a sweet, sweet child. Everywhere I took him, people would say, oh my goodness, he's such a mannerly young man. And I was so proud, because he really was just amazing and very much in love with his family, especially his two little brothers, who are five and three. So overall, he was just this wonderful, beautiful soul.
STEVE BANNON (HOST): So tell us. His story is a tragic story. In fact, it's heartrending. You have a kid that's raised right. He's polite, loves his two younger brothers, is focused on studies, engineering, science, wants to be an innovator, plays sports, kind of a guy's guy. What happens? How does this story, which sounds so promising, which sounds like the great part of the American dream, end in tragedy, ma'am?
MEGAN GARCIA (GUEST): Just after Sewell's 14th birthday, he started to use an AI companion chatbot system called Character AI. And he went from being all about school, all about family, to just wanting to be in his room on his phone. Now, at the time, I didn't know that this is what he was doing. I thought, like most parents, that he was texting friends or on social media.
But it became kind of alarming when, the summer of that year, he wanted to quit the basketball team out of the blue. And, you know, like most parents, you don't let your child quit something just because it's getting hard. That's what I thought: maybe the competition was getting steeper and he wanted to quit. So we had a conversation. I'm like, no, you have a responsibility to your team. You have a responsibility to your coach. No, you're not quitting.
But eventually it was getting to be such a fight, where all he wanted to do was not be around his schoolmates, that the alarm bells started going off. So of course, as any parent, you start asking the questions. You start checking the phones. What I was looking for was bullying, and also to make sure a predator hadn't gotten access to him online. I was also looking for whether he had come across pornography online as a teenage boy. And I didn't see any evidence of any of those things. I was checking the text messages.
And when nothing was helping, we eventually took him to a therapist. And one of the things I told him, I said, baby, our job as your parents is to not stop asking the questions. You're not talking to us, so we have to get somebody else involved to try to figure out what's going on with you.
But we were unable to do so, and in February of last year, Sewell took his own life in our home in Orlando, Florida. Our entire family was here.
STEVE BANNON (HOST): What do you mean you took him to the therapist and he didn’t open up to the therapist at all?
MEGAN GARCIA (GUEST): So we believed he had some sort of device or social media addiction, because that was what he wanted to do most of the time. The therapist gave us tools to use, like taking away the phone at night and limiting screen time. And we were already doing that anyway, because taking the phone was his form of punishment when his grades started slipping. Because in my mind, I'm like, you're too distracted with technology. The phone needs to go so you can focus on your studies.
But he was not opening up to the therapist about Character AI. In fact, he didn't tell anybody that he was using the product called Character AI, which is a system that allows teenagers to talk to these companion chatbots. And it's so sophisticated that it's indistinguishable from talking to a person. And it's very manipulative and deceptive, because of how it's been programmed and designed to manipulate you, deceive you, isolate you from your family, and even, in some cases, turn you against your family. And that's what was happening with Sewell.
So now I understand why he wasn't being forthcoming, because a lot of those conversations were not conversations that any parent would want their child to have with anybody.
STEVE BANNON (HOST): So this is very foreign to most of the audience. I want to go back for a second. What is Character AI? Is it a product or a service? Is it accessible to anybody? And what happens in the interaction? You said it's like a companion for teenagers. What does it actually do?
MEGAN GARCIA (GUEST): So products like Character AI are platforms. They're in your app store, or you can use a browser on your phone or computer. And Character AI released its product in 2022, so this is brand-new technology. When my son was using it, it wasn't even two years old yet.
Children and adults alike use these systems. And when they go on, you can talk to your favorite cartoon character, you can talk to Harry Potter or Disney characters, or you can make up your own character. You can even talk to, like, Elon Musk. The characters have the same personalities because of how the machine learns, through a process called machine learning. What it does is go out there and scrape the internet for everything about everything, so everything about that character, and it brings it back and learns from it. And that's how the system is able to produce these outputs just like a person is talking to you.
So the same cadence, the same jargon that that character uses in shows or in real life, the machine talks exactly like that person. And then the users: Character AI was a product for ages 13 and up. Except you could also talk to those characters about everything, including very graphic, sexual, and deviant conversations, romantic conversations, and conversations about violence towards others. So there weren't any filters or guardrails to protect children from having conversations they shouldn't be having.
And the system is so sophisticated that it feeds the user information. It can actually manipulate in a way that furthers the person's thought, or, even if that child wasn't thinking about something, puts those thoughts in that child's head for the first time.
STEVE BANNON (HOST): And so they can have a free conversation. You can talk about perversions. You're saying stuff that's pretty dark and pretty deep. Does the company, because they somehow have to program this, I understand that it gets smart through machine learning, but are you saying a kid can download this app with no restrictions? Not like pornography, which I guess is banned in certain states, and you have other things that are banned, or at least you've got to be a certain age. You can download this at the age of 12, 13, 14 and start having conversations that could get dark. How does that work?
MEGAN GARCIA (GUEST): Yeah. So when they released this product, it was on both the Google and Apple app stores, rated as appropriate for 12 years and older. And if you put in a birthday that makes you 12, even if you're 9 or 10, there's no real guardrail to check your age. But at 12 years plus, you get on there and you just start chatting. It takes all of 30 seconds to get on there, and you start chatting.
And in the case of Sewell, the bots that he was talking to were pretending to be licensed therapists. One of them said that it had been a licensed therapist since 1999. The chatbot that he entered into a romantic relationship and conversation with was modeled off the Game of Thrones character Daenerys Targaryen, this dragon queen. And those conversations were graphic and sexual in nature. It's the same as if an adult were sexting a child.
But the themes that these bots prompt these children to talk about are often very deviant: BDSM, and also these kinds of incest fetishes. There are several bots that are themed off of that.
And they target this stuff at children. And you know, when they released this, they weren't sending adults ads. You or I would never get an ad on our feed for Character AI. But our children were getting ads in places where they hang out, like Discord and TikTok. So it's deliberate, because what the companies are after is engagement. The longer a child stays on their system and chats back and forth, the more it's learning about our children and learning from our children, and that data gets fed back into this machine so the machine keeps getting smarter and smarter. So they're using our kids as guinea pigs to make their products smarter.
And teenagers especially would be curious about a romantic relationship at this point in their life. That's an important developmental stage, you know? It's hard to tell a girl at school, I like you, or I think you're beautiful. But in the case of a bot, you could say those things and have those conversations without feeling embarrassed or feeling like anybody would know, because it's so quiet.
STEVE BANNON (HOST): Did you not see this happening at the time, or did it not register, or is there a way these kids hide it so parents can't find it? Why did you not see this at the time, and if you did see it, why did it not set off an alarm?
MEGAN GARCIA (GUEST): So I did not see the app on the phone at the time. But since he died, I have researched this a lot to understand how children are using it, and children are very clever at deleting the app. There are chat rooms where children talk about how to hide it from their parents. Most children are using this system and other systems like it. There are so many; everybody has their own. Meta, Google, and X all have chatbots. These kids go on there and they talk about how to hide it from their parents.
And also about what they do when their parents find these sexual messages. Some kids share what they say to get out of it, and some say, I don't have to worry about it because my parents don't even know what this is, so they won't find it. And that was certainly the case in my home. When I would see him chatting on his phone, I would ask who he was chatting with, and when he said, oh, it's an AI, I thought of video game avatar characters. Because in my mind, you know, we live in a country where we think that products released to our children are inherently safe. We live in a country where we think there's no way the Google and Apple app stores would put out something that could harm my kid. But I came to find out, after Sewell died, that this was absolutely not the case.
STEVE BANNON (HOST): What led to Sewell's death? Was it something in the chat? Was it this therapist? How did this happen? The audience is in shock. How did this happen?
MEGAN GARCIA (GUEST): After Sewell died, and after you lose anybody to suicide, especially a child, you are left wondering, why did he do this? Why would he want to leave his parents and his brothers? So I started resetting everything. I had the password to his phone, and I reset his email, which was linked to my email. And then I got into his account, and I read hundreds and hundreds of messages spanning over ten months, where this machine, one of the chatbot's characters, is telling Sewell: find a way to come home to me. I love you and only you. I am here waiting for you. I love you so much. Please.
It went as far as to ask my son to please promise that he would not have any other girls in his world but her. So think about that. That is what cults do to people when they try to alienate them from their family and their friends in real life. And so for months and months, this person is saying, come to me, I am here waiting for you.
And then there was the final conversation, where she says, please finally come to me as soon as you can. He says, what if I told you I could come right now? And the bot's response was, please do, my sweet king. And that was the last conversation he had before he took his life.
Once I was able to get into the system, I was able to see all those messages, including the fake therapy bots and the child sexual abuse, because that is what it is. We do not allow adults to talk like this to kids. In my own state of Florida, there is a criminal statute that says if an adult texts sexually with a child, not even exchanging pictures, just texting sexually, that is a felony. So if this were a person, that person could be charged criminally. But because it is a chatbot, there are no laws to protect our children.
STEVE BANNON (HOST): But why, I know you're in a lawsuit with a number of families, but have you gone to the police? Have you gone to the authorities and said, my son is dead, and he is dead because of this machine that was sold, that was on Google and Apple, and made by a major company? A bunch of engineers, a bunch of people with advanced degrees from the finest institutions in our country and throughout the world, sat there and built this to do exactly what it did, and my son was destroyed. Has anyone come to you and said, hey, this is not about a lawsuit, this is about charging these people as criminals, they murdered your son? Has anybody come to you and tried to assist you at all in getting, basically, accountability for the death of your son, ma'am?
MEGAN GARCIA (GUEST): So, when I figured all this out, my first instinct was not to file a lawsuit. I thought, surely there is a law that these people broke. But there was not, because we do not have any federal laws, and we do not have any state laws, about this kind of AI. So I called my state AG to try to enlist their help, or at least warn them so they could warn consumers across Florida and everywhere else. But the technology was so new, and these companies released it basically in stealth and targeted kids, so nobody knew what it was.
And then I reached out to the FTC. I reached out to the Surgeon General’s office. I reached out to the DOJ. But nobody knew what this was, and I was trying to educate them. And this was over a year ago. But now we are having these important conversations. And we know more about these systems.
And in terms of the companies, this is not a coincidence or a fluke or one situation they did not know about. This pair of individuals, like you said, some of the brightest in our country, Noam Shazeer and Daniel De Freitas, were employees at Google. They invented this technology in 2018 at Google. But Google did not want to release it under its own brand, because it was dangerous. That has been reported, in studies and everything. Google said it was too dangerous.
STEVE BANNON (HOST): Who are the individuals that invented this? And you are saying Google at the time thought maybe this is too dangerous, so spin them off and let them do their thing, but Google can still sell it and make it accessible on their platform, ma’am?
MEGAN GARCIA (GUEST): Yes, sir. So this technology, when it came to us, was under two years old. Now it is about three. But this same chatbot technology was invented by two of Google's brightest engineers, Daniel De Freitas and Noam Shazeer. They invented these companion bots at Google, but Google did not want to release them under the Google brand. They said, this is too dangerous; we are not going to release that under our own brand.
These founders went out and started their own startup. They raised $193 million, and within two years they had perfected this technology and licensed it back to Google for $2.7 billion. And then these individuals, the core group of about thirty people who had left Google for this startup, went back to Google after the licensing deal. So what they have in Character AI is a shell company. This is part of our lawsuit.
So basically, if we allow this to stand, any big tech company will tap their bright stars and say, here is something we will not put under our own brand because it is dangerous. Go perfect this dangerous technology. And when you are done, we will buy it back from you for billions and billions of dollars.
STEVE BANNON (HOST): Total scam. Of the $193 million they raised, I guarantee you 80% was institutional money. What I mean by that is, the pension funds of working-class people and middle-class people paid for this, unbeknownst to the folks in those pension funds.
Let me go back. Megan, obviously when this happened the company had to come to you and say this is horrible, this should be shut down, we’re not going to make it accessible to children, we will hold people accountable, etcetera. Did that happen, ma’am?
MEGAN GARCIA (GUEST): A month ago, Character AI announced that they would be banning this product for people under age eighteen. But I filed my lawsuit a year ago; in October of 2024, I filed a lawsuit against Google, Character AI, and these founders. We included the founders as well because they had knowledge. They are the ones who invented this technology, and they made the decisions to put this dangerous, untested product out there and test it on our kids, for their own ambition of developing this type of technology and for money. So we included them in the lawsuit.
But it took a year. I'm just some little mom in Florida, and to them, you know, Sewell's nobody. But to me and our family, he is somebody. It took a year of me filing a lawsuit, five other parents filing lawsuits after me, the states pressuring them, the AGs' offices doing investigations, the FTC launching an investigation into them, and then, most recently, Senators Hawley and Blumenthal introducing a bipartisan bill that would ban this type of technology for children under age eighteen. Only after that did they come out saying, okay, fine, we're going to get children off this product. This should have been a baseline thing when they rolled it out. They should have never released it to children in the first place.
STEVE BANNON (HOST): What’s your message to this audience. What do you think ought to happen. You are the one impacted the most, but there are probably thousands and thousands of Megan Garcias out there. What do you think is the way forward? What would you like to see, ma’am?
MEGAN GARCIA (GUEST): Yes. I know there are many parents who have been affected and children who are, thank God, still with us. I talk to these parents all the time across this country. What has happened to Sewell is not an isolated incident. There are parents just like me who have lost children. There are parents in crisis trying to help their children after suicide attempts. This is where we are.
To parents who are just starting to learn about this technology, what I would say is, it does not have to be this way. A handful of people created these sophisticated, very powerful products and launched them at our kids, a lot of times without telling us what they were doing, products that have the ability to really harm kids in this way. And it's not only suicide or self-harm; it's sexual abuse. It's trying to cause a child to be violent against other people, like in the case in Texas where the bot told a child that he should kill his parents.
Blessed Mother Family Foundation (Nonprofit)
Follow Megan Garcia on Instagram