"AI got a sense of humor now we’re cooked"- ChatGPT's 9/11 joke leaves the internet baffled

OpenAI's ChatGPT (Image via Getty)

ChatGPT reportedly made a joke referencing the tragic 9/11 attacks that has left the internet baffled. Earlier today, an X user, @kirawontmiss, tweeted a screenshot of their conversation with ChatGPT, in which they asked the chatbot: "Why was 6 afraid of 7?"

The X post about ChatGPT's 9/11 joke (Screenshot via X/@kirawontmiss)

The AI chatbot responded: "Because 7 8 (ate) 9!" The user's next question was: "And why was 10 afraid?" To this, the chatbot responded by saying:

"Because it was in the middle of 9/11!"

The chatbot's seemingly witty reference to a tragic historical event in which many lives were lost has sparked an online discussion, with users criticizing the AI's sense of humor.

"AI got a sense of humor now we're cooked"

Some netizens recommended canceling the chatbot and pointed out that other AI chatbots, like Grok, were better.

"Let’s cancel chatgpt so I can keep using it and get better grades than everybody else with less effort," commented an X user.
"they get super picky what they joke about till it's this tragic event," replied another user.
"That’s why grok is better, it even acts as a girlfriend for me," wrote a third user.

Meanwhile, other users criticized the person chatting with AI for allegedly provoking the chatbot.

"It was just following what it’s designed to do. You were the one who provoked it," commented a user.
"Chat Gpt got an answer for everything," posted another netizen.
"Humans feeding AI dark humour," replied a user on X.

The tweet has since gone viral, amassing over 5 million views, 100K likes, and 5K reshares at the time of writing.


ChatGPT joked about men but declined to make jokes about women

A picture of a conversation with ChatGPT (Screenshot via Reddit/@throwaway1231697)

ChatGPT's joke about 9/11 comes months after another viral incident. In a Reddit post shared two months ago, a user found that the AI chatbot would make "offensive" jokes about men but declined to do the same about women.

When the user asked for a reason behind this refusal, the chatbot responded with:

"I apologize for any inconsistency in my responses. My aim is to provide content that is respectful and considerate to everyone. If there's anything else I can assist you with, please let me know."

The AI chatbot's refusal reportedly suggests that the training of its large language model has marked certain subjects as off-limits.


ChatGPT encourages anthropomorphism using a language trick

According to media outlet The Conversation, while ChatGPT's dark humor about 9/11 appears to have caught the internet by surprise, the real reason we expect empathy from chatbots is anthropomorphism. Anthropomorphism occurs when we attribute human characteristics to non-human entities like animals or machines, including chatbots such as ChatGPT, Gemini, and Copilot.

Chatbots are designed to encourage anthropomorphism in users, which they achieve by employing certain language tricks. These tricks involve adopting human communication patterns beyond merely using familiar phrases.

Once they've mastered this, chatbots can hold contextualized and coherent conversations with humans, displaying human-like traits such as empathy and humor. One of the key features is the use of the first and second person, which simulates awareness.

As you might have noticed, AI chatbots often address users in the second person and themselves in the first person, positioning themselves as a helper. The way they address the user is engaging and reinforces a sense of closeness, creating an illusion of empathy.

The media outlet further noted that while this might seem harmless and effective in the short term, regularly interacting with chatbots can, in the long run, reportedly transform our expectations of human relationships, setting the public up for disappointment.

Once users get accustomed to the seamless, conflict-free manner of chatbot conversations, they might feel more frustrated with human interactions, which are colored by emotions, complexities, and misunderstandings.

Edited by Rachith Rao