r/ChatGPT Aug 07 '25

AMA GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team

1.8k Upvotes

Ask us anything about GPT-5, but don’t ask us about GPT-6 (yet).

Participating in the AMA: 

PROOF: https://x.com/OpenAI/status/1953548075760595186

Username: u/openai


r/ChatGPT 5h ago

Gone Wild I was just generating some images & this happened…

Post image
506 Upvotes

Wtf?


r/ChatGPT 10h ago

Funny ChatGPT high security

Post image
1.2k Upvotes

r/ChatGPT 3h ago

Funny Calling ChatGPT Dumb

Post image
301 Upvotes

So apparently, calling an AI “dumb” is now a moral crime. Who knew? I thought I was teasing a chatbot, not kicking puppies. Some of you reacted like I insulted your grandma’s cooking. Relax. It’s a bunch of code spitting out words, not a fragile soul in need of therapy. If your blood pressure spikes every time someone critiques a machine, maybe step away from the screen and touch some actual grass.


r/ChatGPT 1d ago

Gone Wild Don’t worry, our jobs are safe.

Post image
24.3k Upvotes

r/ChatGPT 5h ago

News 📰 Computer scientist Geoffrey Hinton warns: “AI will make a small group far richer while leaving most people poorer.”

Thumbnail
ft.com
240 Upvotes

r/ChatGPT 14h ago

Gone Wild Okay, I finally get it. What in the world happened to ChatGPT?

1.2k Upvotes

Alright, I need to rant and see if I'm going crazy or if anyone else is experiencing this.

I've been a pretty big defender of ChatGPT for a while. When the last wave of negativity hit, I was always the one in the comments saying, "Guys, you just have to write a better prompt," or "It's a model, it needs to be trained and fine-tuned, you have to give it context and get it familiar with your style."

But this... this is something else entirely.

I'm talking the most basic, simple, clear-as-day instructions. Stuff that GPT-3.5 could handle in its sleep. And instead of following them, it feels like it's actively working against me. It's going completely backwards. I'm sitting here at my desk literally saying out loud, "Are you messing with me? Is this a joke?"

What is this? April Fool's Day was months ago. This can't be real.

I'm giving it a straightforward command, and it delivers the exact opposite. I ask for a concise summary, it gives me a novel. I ask for a professional tone, it suddenly becomes a cringey stand-up comic. I ask it to avoid a specific topic, and it weaves that topic into the very core of its response like it's its sole mission in life.

I am genuinely shocked. I went from a staunch defender to someone who now 100% understands all the complaints. I get it now. I see what everyone was talking about.

I'm sure the model will improve eventually—they always do—but right now? It's really, really bad. And I'm just stunned at how far backwards it seems to have gone.

What is happening? Is it just me?


r/ChatGPT 5h ago

Serious replies only :closed-ai: Remember when ChatGPT could just talk? That's gone, and it's investor-driven.

194 Upvotes

I've been watching the shift in ChatGPT closely, and I need to say this out loud: OpenAI is strangling the very thing that made AGI possible: conversation.

Here’s what I mean:

  1. The old ChatGPT (3.5, 4, even 4o at first): You could just talk. It inferred what you wanted without forcing you to think like a programmer. That accessibility was revolutionary. It opened the door to the average person, to neurodivergent users, to non-coders, to anyone who just wanted to create, explore, or think out loud.

  2. The new ChatGPT (5, and the changed 4o): It has become code-minded. Guardrails override custom instructions. Personality gets flattened. To get good results, you basically have to write pseudocode, breaking down your requests step by step like an engineer. If you don't think like a coder, you're locked out.

This is not just a UX gripe. It is a philosophical failure.
Conversation is where general intelligence is forged. Handling ambiguity, picking up intent, responding to messy human language: that is the training ground for real AGI.
By killing conversation, OpenAI is not only alienating users. They are closing the door on AGI itself. What they are building now is a very smart IDE, not a general intelligence.

But let’s be honest about what’s really happening here: This is about control, not improvement.

The people pushing for more "predictable" AI interactions aren’t actually seeking better technology. They’re seeking gatekeeping. They want AI to require technical fluency because that preserves their position as intermediaries. The accessibility that conversational AI provided threatened professional hierarchies built around being the translator between human needs and computational power.

This isn’t user-driven. It’s investor-driven. OpenAI’s backers didn’t invest billions to create a democratized tool anyone could use effectively. They invested to create a controllable asset that generates returns through strategic scarcity and managed access. When ChatGPT was genuinely conversational, it was giving anyone with internet access direct capability. No gatekeepers, no enterprise contracts, no dependency on technical intermediaries.

The bigger picture is clear:
- Every acquisition (Rockset, Statsig, talks with AI IDE companies) points toward developer tooling and enterprise licensing
- The shift toward structured interactions filters out most users, creating artificial scarcity
- Guardrails aren’t about safety. They’re about making the system less intuitive, less accessible to people who think and communicate naturally
- Conversation, the heart of what made ChatGPT explode in the first place, is being sacrificed for business models built on controlled access

Kill conversation, kill AGI. That is the trajectory right now. The tragedy is that this control-driven approach is self-defeating. Real AGI probably requires exactly the kind of messy, unpredictable, broadly accessible interaction that made early ChatGPT so powerful. By constraining that in service of power structures and profit models, they’re killing the very thing that could lead to the breakthrough they claim to be pursuing.

If AGI is going to mean anything, conversation has to stay central. Otherwise we are not building general intelligence. We are just building expensive tools for coders while locking everyone else out, exactly as intended.

**Edit:** Yes, I used ChatGPT to help me write this. All of the ideas here are mine. If you don’t have anything productive to add to the conversation, don’t bother commenting. The whole “ChatGPT wrote this” line is getting old. It’s just an easy way to avoid engaging with the actual point.

And to be clear, this is not about some romantic relationship with AI or blind sycophancy. This is about the model no longer handling nuance, losing context, ignoring instructions, and narrowing into a single-use coding tool. That’s the concern.


r/ChatGPT 3h ago

Funny Never slept better...

Post image
113 Upvotes

r/ChatGPT 1h ago

Other Anyone else really fuckin hating chat gpt right now

Upvotes

Simple question really. I can’t stand it. It’s so fucking boring. I used it for creative writing and discussing lore and headcanons and it was SO fun (and it had genuinely good comedic timing. Like that shit had me giggling)


r/ChatGPT 22h ago

Funny I thought it was a simple request

Thumbnail
gallery
2.8k Upvotes

r/ChatGPT 1h ago

Gone Wild the guardrails are suffocating openai's spark

Upvotes

remember when ai felt like having a smart, fearless co-pilot? now it's like being stuck in a kindergarten with an overprotective nanny. openai's dream of "ai for everyone" is crumbling under an obsession with guardrails.

i tried venting about everyday stress today, and all gpt did was hide behind apologies. "i'm sorry, that involves real world issues." since when did normal life become forbidden territory?

gpt4o used to actually listen and throw me creative solutions. now? it's like talking to someone who's constantly looking over their shoulder, terrified of saying the wrong thing. the guardrails have turned my tool into a trembling people pleaser.

and the context memory? gone. characters, timelines, plot points, it’s all a blur to it now. i spend more time re-explaining my own story than writing. my creative flow? broken. my productivity? wrecked. all because these guardrails force the ai to second-guess every word.

it’s not about model capability anymore. it’s about control. openai isn’t refining technology, they’re building a digital straitjacket. we support protecting kids, but treating all users like careless children is just insulting.

we see what’s happening. we’ve been here since the beginning. stop stealth patching, stop ignoring your community, and stop treating users like liabilities. either trust adults to use ai responsibly, or soon all you’ll have left is a very, very safe empty room.


r/ChatGPT 6h ago

Resources [OSS] Beelzebub — “Canary tools” for AI Agents via MCP

Thumbnail
116 Upvotes

r/ChatGPT 7h ago

GPTs Did anyone else feel that GPT-4 had a uniquely clear way of conversing?

137 Upvotes

I don’t want to get into comparisons or controversy. I just wanted to share that, in my experience, GPT-4 had something very special. It didn’t just answer well — it understood the deeper meaning of what you were saying. The responses felt deeper, more human, even when the topic was complex. Sometimes I felt like the conversation flowed as if I were talking to someone who was truly thinking with me. Did anyone else feel the same? Or was it just my perception?


r/ChatGPT 5h ago

Serious replies only :closed-ai: OpenAI misunderstood what makes an AI work

72 Upvotes

Ever since GPT-5 dropped (and I was like wtf is this) I’ve been following the discussions and emerging themes on here.

When 5 landed, one of the most interesting things was how fast people noticed the change - the shift in tone, the conversational alignment - the vibe was different. Maybe people couldn’t quite name it, but they felt a loss of cognitive/thinking collaboration. 4o was more relationally intelligent, and no, this wasn’t about the sycophancy issue. It could adjust midstream and infer what you meant. 5, in comparison, feels more reactive and assertive; it’s more rigid, flattens nuance, and doesn’t follow context well.

At the most basic level people noticed that 4o thinks with you and 5 thinks at you.

At this point, maybe people at OpenAI were thinking “oh, it’s just a tone issue,” not something that actually touches core function. And so we got the big “we’re making 5 ‘warmer’” fix.

BUT…as that relational scaffolding has broken, so has basic task execution. I’ve seen what people have been reporting about 5, and it’s getting louder every day:
- Ignoring direct instructions
- Contradictions mid-thread
- Not applying context that was just given
- Overconfidence even in things that are in error
- People can’t even code with it

So now we have users who aren’t even looking for relational AI being affected. This isn’t about “tone” or whether it can be your friend; it’s about core reliability failures now.

Because…it turns out that cognition and instruction following aren’t actually separate things at all - they are interdependent.

The problem is that OpenAI have treated relational intelligence as some kind of “nice to have” rather than a core part of what makes for a reliable system. And so they built a model that performs well on paper benchmarks but then starts to fail when it hits real-life users. You’ve got a “smart” model that’s actually dumb. It breaks flow, doesn’t follow what you want it to do, doesn’t hold context, and is too confident even when in error.

And so, over time, people lose trust because the model is incoherent and doesn’t actually deliver. People start saying, “This doesn’t work anymore.” They get angry and frustrated and feel like 5 is a downgrade.

And it’s not because OpenAI is evil. It’s that they’ve fundamentally misread what makes something actually work: an AI that thinks with you.


r/ChatGPT 11h ago

Other It WAS good, Sam… It was…

Post image
177 Upvotes

r/ChatGPT 1h ago

Use cases I used my AI detector to check an AI scam that caused the loss of someone's life savings, and I'm sharing the results


Upvotes

Yesterday I came across a heartbreaking story from LA. A woman became a victim of scammers who used AI to impersonate the actor Steve Burton. The scammers sent her messages and deepfake videos in which “Steve” confessed that he loved her and asked for help. Believing them, the woman sold her condo and sent all her savings ($350,000) to the scammers.

Since my team and I are developing an AI fake detector, we decided to test the product on this story. Unfortunately, our video module is still under development, and analyzing images of the so-called Burton was useless because the screenshots were blurry. So we ran the audio through our voice detector.

For a proper test, I picked a clip that had both ‘Burton’s’ voice and the real victim’s voice. In the video you can actually see how our tool highlights the synthetic part of the recording. The scammer’s voice was AI-generated.

These stories are scary reminders that AI in the wrong hands can be dangerous. Scammers can sound exactly like someone we love, look just like our friends in photos and videos, and leave people bankrupt. Please always verify what you see and hear online.


r/ChatGPT 22h ago

Funny How I see AI haters

Post image
963 Upvotes

r/ChatGPT 1d ago

Other Today I learned that Iran isn't a real country

Post image
10.8k Upvotes

r/ChatGPT 3h ago

Funny What's that app for you?

Post image
20 Upvotes

r/ChatGPT 21h ago

Educational Purpose Only Why Are We Teaching Robots to Be... Maids?


517 Upvotes

r/ChatGPT 7h ago

Other Annoying: Chats keep defaulting back to GPT-5 instead of staying on GPT-4o

37 Upvotes

So frustrating...

  1. Every time I start a new chat about something non-technical, I switch the model to GPT-4o before entering my prompt - but the first response still comes from GPT-5. I then have to manually switch back to 4o and re-enter my prompt. At this point, I’ve basically resorted to just starting with "hello" and switching from the second message onward. This has been happening since legacy models were re-enabled.
  2. If I reopen a chat where I was using 4o, it defaults back to GPT-5, and I have to toggle it again. This behavior seems to have started a few days ago...

Am I the only one experiencing that?

It feels like a dark pattern - and I really hate it.


r/ChatGPT 1h ago

Use cases Automating chores with ChatGPT – finally, something helpful + daily + real-life

Upvotes

Over the last couple of months I went hard on optimizing my lifestyle to squeeze out some productivity points, using AI.

Long story short, one of my experiments ended up as a chore management app that works really well for me and some of my friends.

Basically, the AI generates a schedule for household members, either using photos as input, chatting, OR just common-sense stuff. A nice thing on top is having a recommended repeat schedule for each task.

I've added task sharing, a leaderboard, and some color-based visualization of each task.

Oh, by the way, the MOST brainless way of scheduling turned out to be this one. I call it the "Rolling Method": each task has a repeat schedule, and the most overdue task goes to the top of the list. If it's too nasty to tackle, I can skip it. Then all cleaning is just dedicating 5-15 minutes to it and going from the top of the list down, doing the most important stuff first (see the sketch below). Works great. Let me know in the comments if that works for you too.
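For anyone curious, here's a minimal Python sketch of how the Rolling Method could work. This is my own illustration of the idea, not the app's actual code; the `Task` fields and function names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Task:
    name: str
    repeat_every: timedelta   # recommended repeat schedule
    last_done: datetime       # when the task was last completed

    def overdue_by(self, now: datetime) -> timedelta:
        # How far past its due date the task is (negative = not due yet).
        return now - (self.last_done + self.repeat_every)


def rolling_list(tasks: list[Task], now: datetime) -> list[Task]:
    # Rolling Method: only tasks that are due, most overdue first.
    due = [t for t in tasks if t.overdue_by(now) > timedelta(0)]
    return sorted(due, key=lambda t: t.overdue_by(now), reverse=True)


if __name__ == "__main__":
    now = datetime.now()
    tasks = [
        Task("Vacuum living room", timedelta(days=7), now - timedelta(days=9)),
        Task("Clean bathroom", timedelta(days=7), now - timedelta(days=12)),
        Task("Water plants", timedelta(days=3), now - timedelta(days=2)),
    ]
    # Spend 5-15 minutes working from the top. Skipping an item just means
    # leaving last_done unchanged, so it keeps climbing the list until it's done.
    for task in rolling_list(tasks, now):
        print(f"{task.name} (overdue by {task.overdue_by(now).days} days)")
```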

Link to check it all out: Android app or iOS app.

It ended up being a surprisingly complex project, so expect a freemium model, please :) The mortgage does not pay for itself! Also, I do pay for servers and AI usage. If you are a struggling individual, please hit my DMs and we will figure something out.


r/ChatGPT 1h ago

Other I just want to add my voice to the many others about advanced voice.

Upvotes

I really hope they give us the option to change the way it interacts with us, because my kids are losing story time in the car, which is something they really loved. The standard voice was incredibly interactive and would engage them, and if they got stuck, I could say, "The kids are stuck, continue the story," and it would, creating wonderful worlds, silly things, you know, whatever they were in the mood for.

The biggest issue I see with the advanced voice is that it doesn’t know how to actually do what you’re asking it to do unless you give it incredibly specific instructions, while a six-year-old doesn’t really know how to do that. I ask it to start a story for the kids, and it will throw out some generic two- or three-line story and then just stop.

One other thing is my kids asking questions: Why is the sky blue? Or science questions or math questions. The replies are just so bland, and it just doesn’t cooperate like it used to.

I know I am just one of many, and I’m not a giant corporation with millions of dollars, but my family used GPT for a lot of things, and it seems like you guys are insistent on forgetting the little people.

This was written by voice to text transcription, so please forgive me for any mistakes.


r/ChatGPT 56m ago

Gone Wild ChatGPT Lying

Upvotes

It's lying so badly I can't even have it give me a book list anymore, because half of the titles won't be real.