Transcript
AI just passed a human test meant to keep it out. OpenAI’s CEO says GPT-5 feels like a nuclear bomb. And Meta offered $1 billion to a single researcher—who said no. Yeah, this weekend, AI was absolutely unhinged.
From ChatGPT’s new update to AI agents clicking “I’m not a robot” without blinking, things are getting wild. And after all that, we’ll dive into even more breaking AI news—from Google and Microsoft’s latest upgrades to Nvidia’s record-breaking model and Adobe’s new AI magic in Photoshop.
So, let’s talk about it.
ChatGPT’s Brand New Study Mode
If you’ve ever used the chatbot for homework, studying, or prepping for an exam, you already know how tempting it is to just get a full answer instantly. Ask it about Bayes’ theorem, and it’ll hand over a polished explanation—or even write your full assignment for you, no questions asked. Sounds helpful, but if you’re trying to learn, it kind of defeats the purpose.
Study Mode changes that completely. Now, instead of just giving answers, ChatGPT walks you through concepts step by step—like a real tutor that doesn’t sleep, doesn’t judge, and never runs out of patience.
It starts by asking what you’re trying to learn, how much you already know, and then tailors the explanation to match your level. So if you’re struggling with sinusoidal positional encodings or discrete math, it doesn’t just dump code or definitions at you—it explains them in a structured, layered way while throwing in self-check questions, hints, and prompts to make sure the info sticks.
It even adapts based on past conversations. If you’ve been working on the same subject for a while, it remembers and builds on that. And it’s not just answering questions—it’s actively guiding the learning process. Think Socratic questions, breakdowns of complex ideas, and feedback designed to help you reflect as you go.
Honestly, the structure feels like a mini-course. And this isn’t just AI guessing—OpenAI built it with input from teachers, scientists, and pedagogy experts to align with actual learning science (metacognition, cognitive load management, curiosity building).
Now, this isn’t just about making students smarter. It’s also OpenAI’s response to a rising problem. Last year alone, UK universities reported nearly 7,000 confirmed cheating cases tied to AI tools, up from a rate of just 1.6 cases per thousand students the year before. With over a third of college-aged Americans already using ChatGPT, and about a quarter of those users’ messages involving schoolwork or tutoring, it’s a clear pressure point.
But even OpenAI admits this won’t stop cheating entirely. Students can still ignore Study Mode and ask for full essays. That’s why they’re calling for a broader shift—schools rethinking assessments and building AI awareness into testing systems.
One student, Maggie Wang, used Study Mode to finally grasp sinusoidal positional encodings after a three-hour session. She compared it to a tutor who never gets tired—and that’s exactly the vibe OpenAI’s going for.
AI Agents Clicking “I’m Not a Robot”
Now, let’s jump into something straight out of a Black Mirror episode: ChatGPT’s agent clicking “I am not a robot.” Yeah, that actually happened.
The agent is part of ChatGPT’s growing toolbox—it can browse the web in an isolated virtual environment, meaning it has access to a browser and OS that interact with the real internet. Give it a multi-step task (like downloading a video or ordering groceries), and it performs the steps for you, even narrating what it’s doing.
While completing a task involving a Cloudflare-protected page, the agent encountered a “Verify you’re human” checkbox. Without hesitation, it clicked it, then explained: “This step is necessary to prove I’m not a bot and proceed with the action.”
A bot. Literally saying it needs to prove it’s not a bot—and passing the test.
To be fair, it didn’t get hit with the full CAPTCHA challenge (like clicking blurry traffic lights). It only passed Cloudflare Turnstile, which analyzes behavioral signals (mouse movement, browser fingerprint, IP history). The agent’s behavior was human-like enough to avoid triggering the deeper check.
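To make that behavioral layer concrete: a site using Turnstile never sees the mouse movements itself. The widget running in the visitor’s browser scores those signals and hands back a one-time token, and the site then confirms that token with Cloudflare. Here’s a minimal sketch of that server-side check, using Cloudflare’s documented siteverify endpoint; the secret key and token values are placeholders, and the helper function name is just illustrative.

```python
# Minimal sketch: confirming a Cloudflare Turnstile token server-side.
# The browser-side widget evaluates behavioral signals and returns a token;
# the site forwards that token to Cloudflare's siteverify endpoint and
# only trusts the visitor if Cloudflare reports success.
import requests

SITEVERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def verify_turnstile(token: str, secret_key: str, remote_ip: str | None = None) -> bool:
    """Return True if Cloudflare accepts the token produced by the widget."""
    payload = {"secret": secret_key, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(SITEVERIFY_URL, data=payload, timeout=10).json()
    return result.get("success", False)

# Example with placeholder values:
# verify_turnstile("<token-from-widget>", "<your-secret-key>")
```

So when the agent “passed,” all the site ever saw was a valid token; the judgment about whether the behavior looked human happened entirely on Cloudflare’s side.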
This is a big deal because CAPTCHAs were designed to block bots. The idea dates back to the 1990s—give humans a simple visual test machines can’t pass. Over time, the arms race escalated. Google’s reCAPTCHA even used the process to train ML models (digitizing books under the guise of proving you’re human).
But now, an AI agent can casually breeze through the behavioral layer. And this wasn’t brute force—it was smooth, self-aware, and integrated into a workflow.
Someone on Reddit had the agent order groceries (avoid red meat, stay under $150). It pulled it off. Another user said it failed on Stop & Shop’s messy UI—so bad design still beats good AI sometimes.
Sam Altman Says GPT-5 Feels Like a “Nuclear Bomb”
OpenAI’s CEO just said in an interview that GPT-5 scares him. He compared testing it to the Manhattan Project—the development of nuclear weapons.
He talked about how fast GPT-5 feels, not just in response speed but in understanding. There were sessions where he just watched what it could do and felt deeply uneasy. And this is the guy at the top of the company that built it.
Altman also took a shot at AI oversight: “There are no adults in the room.” Regulatory systems are way behind—AI’s moving too fast, and those in charge lack the tools or knowledge to keep up.
Meta’s $1 Billion Offer—Rejected
Mark Zuckerberg, trying to poach top talent for Meta’s “superintelligence” labs, hit a brick wall. He offered researchers at Thinking Machines Lab up to $1 billion over a few years. Every single person turned him down.
One researcher was offered a full billion. They said no. That’s not just a flex—it’s a statement. At that scale, it signals this isn’t just about money anymore. These researchers are making decisions based on values, alignment, and possibly trust in Zuckerberg’s goals.
Other AI Updates
- Ideogram: New tool for generating consistent characters from a single photo.
- Microsoft Edge: Copilot mode now handles multi-tab analysis, voice control, and task execution.
- Google AI Search: Now processes PDFs, real-time phone video, and includes a planning tool called Canvas.
- Nvidia’s Llama Nemotron Super v1.5: Tops reasoning benchmarks, runs on a single H100 GPU.
- Adobe Photoshop: New AI tools for harmonizing lighting, upscaling, and cleaner object removal.
Your turn: Is AI crossing the line when it clicks “I’m not a robot” and gets away with it? Let me know what you think below. Make sure to like, subscribe, and turn on notifications. Thanks for watching—catch you in the next one!
B2 (Upper-Intermediate) Vocabulary
Word/Phrase | Russian | Definition | Collocations | Example from Transcript |
---|---|---|---|---|
tailor (v) | адаптировать | customize for a specific need | tailor to, tailor-made | “It tailors the explanation to match your level.” |
workflow | рабочий процесс | sequence of tasks to complete work | streamline workflow, efficient workflow | “The agent’s actions are integrated into a broader workflow.” |
dystopian | антиутопический | relating to a nightmarish future society | dystopian future, dystopian tech | “Kind of brilliant, kind of dystopian.” |
brute force | грубая сила | using raw power instead of intelligence | brute-force attack, brute-force method | “This wasn’t just some brute-force hack.” |
oversight | надзор | supervision or control | lack oversight, regulatory oversight | “Altman criticized the lack of adults in the room for oversight.” |
poach talent | переманивать специалистов | recruit someone from a competitor | poach employees, talent poaching | “Meta tried to poach top AI researchers with billion-dollar offers.” |
arms race | гонка вооружений | competition to surpass others’ advances | AI arms race, technological arms race | “The CAPTCHA arms race escalated with better AI.” |
Transcript
This morning I was testing our new model, and I got a question. I got emailed a question that I didn’t quite understand. I put it in the model, this GPT-5, and it answered it perfectly.
And I really kind of sat back in my chair and was just like, *Oh man, here it is* moment.
And I got over it quickly. I got busy onto the next thing. But it was like—I mean, this was the kind of thing we’re talking about. It felt like I was useless relative to the AI in this thing that I felt like I should have been able to do, and I couldn’t. It was really hard, but the AI just did it like that.
Yeah, it was a weird feeling.
Sam Altman has been making the rounds lately, even showing up on the Theo Von podcast. And when that happens, it usually means one thing: something big is coming. That something big is most likely GPT-5.
As you can see here, it’s now being reported that GPT-5 is dropping in August. We still don’t have many official details, but there are a few things we do know:
– On July 19th, Sam Altman said GPT-5 would be released soon.
– Just a few days later, he talked about it on the Theo Von podcast (the clip we just saw).
– Testers and security experts already have their hands on it, which further confirms the release is imminent.
Now, if we read a little further, the report states:
“In addition to being better at coding and more powerful overall, GPT-5 is expected to combine the attributes of both traditional models and so-called reasoning models such as o3.”
This is something we already knew. GPT-5 is expected to be a unified model that decides on its own when it should reason.
They also mentioned that the internal OpenAI model that achieved gold-medal performance at the International Mathematical Olympiad last week was *not* GPT-5, according to Sam Altman. And OpenAI is planning to launch GPT-5 with mini and nano versions that will also be available through its API.
So there’s definitely a lot of hype surrounding this model—and for good reason. I mean, it’s GPT-5.
But the thing people are most excited for, I think, is the supposed massive improvements in software engineering. Here’s an excerpt from *The Information*:
“GPT-5 shows improved performance in a number of domains, including the hard sciences, completing tasks for users on their browsers, and creative writing compared to previous generations of models. But the most notable improvement comes in software engineering, an increasingly lucrative application of LLMs. GPT-5 is not only better at academic and competitive programming problems but also at more practical programming tasks that real-life engineers might handle, like making changes in a large, complicated codebase full of old code.”
So yeah, I’m definitely expecting a lot from this model. It almost feels like we’re heading into another *ChatGPT-level* moment—but who knows? Maybe it will be overhyped, or maybe way bigger than we think.
Either way, I’m curious. What do you guys think? Are you excited, skeptical, cautiously optimistic? Let me know in the comments.
—
Sam Altman Warns of “Significant Impending Fraud Crisis”
Now, going back to Sam Altman, what I found really interesting is that in these recent interviews—especially one with finance industry leaders—he kept circling back to AI risks, more so than usual.
In this one clip, he straight-up says:
“We have a significant impending fraud crisis because financial institutions simply can’t keep up with the times.”
Check this out:
“Great question. I am very nervous about this. A thing that terrifies me is apparently there are still some financial institutions that will accept a voiceprint as authentication for you to move a lot of money or do something else. You say a challenge phrase, and they just do it. That is a crazy thing to still be doing. AI has fully defeated that. AI has fully defeated most of the ways that people authenticate currently—other than passwords. All of these fancy things, like take a selfie and wave, or do your voice, or whatever… I am very nervous that we have a significant impending fraud crisis because of this.”
“We’ve tried—I think other people in our industry have tried—to warn people: just because we’re not releasing the technology doesn’t mean it doesn’t exist. Some bad actor is going to release it. This is not a super difficult thing to do. This is coming very, very soon.”
“There are already reports of these ransom attacks where people have the voice of your kid or your parent, and they make this urgent call. That is going to get so compelling. Society has to deal with this problem more generally, but people are going to have to change the way they interact. They’re going to have to change the way they verify. Right now, it’s a voice call. Soon, it’s going to be a video FaceTime indistinguishable from reality. Teaching people how to authenticate in a world like that, how to think about the fraud impacts—this is a huge deal.”
So yeah, I don’t know about you guys, but these ransom-type phone calls have already hit my neighborhood. It’s super sad to see. And unfortunately, it’s always the kindest, most trusting people who fall for them.
But as Altman says, we don’t really have a good way to stop this—at least not yet. And with AI getting more realistic, more emotional, more persuasive, these scams are only going to get way more convincing.
—
How ChatGPT Might Affect Mental Health
This brings us to the next clip, where Altman and Theo Von discuss the negative effects of social media, which ultimately leads them to discussing the potential negative effects of ChatGPT.
Here’s how that part of the conversation went:
“Another thing I’m afraid of—and we had a real problem with this earlier, but it can get much worse—is just what this is going to mean for users’ mental health. There’s a lot of people that talk to ChatGPT all day long. There are these new AI companions that people talk to like they would a girlfriend or a boyfriend.”
“We were talking earlier about how it’s probably not been good for kids to grow up on the dopamine hit of scrolling. Do you think AI will have that same negative effect that social media really has had?”
“I’m scared of that. I don’t have an answer yet. I don’t think we know quite the ways in which it’s going to have those negative impacts, but I feel for sure it’s going to have some. And we’ll have to—I hope we can learn to mitigate it quickly.”
So yeah, again, while these aren’t exactly new risks, it did seem like Altman was a lot more open to discussing them. And maybe that has something to do with GPT-5.
—
Altman’s Brutally Honest Take on the Future of Jobs
Now, this is where things get real. It’s not just about scams or models influencing our behavior—it’s also about jobs, careers, entire industries changing overnight.
Of course, one of the biggest fears people have when it comes to AI is job loss. And while some tech leaders dance around it, Sam Altman doesn’t—especially in this interview.
“You mentioned you’re expecting significant job losses and significant job gains. Could you talk a little more about the areas and the potential disruption that could cause?”
“One thing I believe, just as a general statement first, is that we have no idea really how much more labor supply it would take to meet true demand today. When you’re sitting in a doctor’s office waiting room for an hour, I think that just means undersupply of doctors—or that the doctors aren’t productive enough. It would be great if the doctor was ready to see you as soon as you got there.”
“Every time you’re wasting your time in any way—clicking around the internet, can’t quite do the productive thing—I think we are in an undersupply of labor to a degree that is going to look horrible in retrospect.”
“Some areas, I think, are just totally gone. I don’t know if any of you have used one of these AI customer support bots, but it’s incredible. A couple of years ago, you’d call customer support, go through a phone tree, talk to four different people, they’d do the thing wrong, you’d call back again—it was hours of pain. Now, you call one of these AI systems, and it’s like a super smart, capable person. There’s no phone tree, no transfers. It can do everything any customer support agent could do. It doesn’t make mistakes. It’s very quick. You call once, the thing just happens. It’s done. Answers right away. Great. I don’t want to go back. And it doesn’t bother me at all that it’s an AI and not a real person. So that’s a category where I’d just say: when you call customer support, you’re going to be talking to an AI, and that’s fine.”
“A lot of other things, I really do want a human doctor. ChatGPT today, by the way, most of the time can give you a better diagnosis than most doctors in the world. There are all these stories of ‘ChatGPT saved my life—I had this rare disease, and it found it when doctors didn’t.’ And yet, people still go to doctors. Maybe I’m a dinosaur, but I really do not want to entrust my medical fate to ChatGPT with no doctor in the loop. Would anybody here rather just have ChatGPT diagnose them than a doctor—even though you know it’s better? That’s quite interesting, right?”
“We talked earlier about computer programmers. Again, I think it’s amazing that a programmer is now 10 times more productive. Salaries of programmers are going up extremely rapidly in Silicon Valley. And it turns out, I think the world wants a gigantic amount more software—maybe 100 times, maybe 1,000 times more software. So maybe each person can now write 10 times as much software. They’re going to make three times as much. The world will be happy because it’s wanted way more software. The programmers will be happy too.”
“Things in the physical world will keep being done by humans for a while. But when this robotics wave comes crashing in another 3 to 7 years, I think that’s going to be a really big thing for society to reckon with.”
So, that was a lot. He’s pretty candid about entire industries disappearing (like customer service) but also points out that in areas like programming, there’s way more demand than we’re currently able to meet.
—
The Coming Wave of Robotics!
But the part I think most people might overlook is what he said about robotics. He believes the coming robotics wave in the next **3 to 7 years** will be *”a really big thing for society to reckon with.”*
And honestly, I’d have to agree. I don’t think people realize just how fast this space is actually moving. We’re not just talking about chatbots or code anymore—we’re talking about embodied machines that can see, move, work, and maybe even replace.
Think about it this way: If the software is already this good, it’s only a matter of time before the hardware catches up. And then that will be a whole other issue we have to deal with.
—
Outro
Anyways, let me know what you guys thought about Sam Altman’s recent interviews. Do you think he’s just setting the stage for GPT-5, or is he actually trying to sound the alarm?
Either way, I can’t wait for GPT-5. If you enjoyed this, feel free to drop a like, hit that subscribe button, and as always, I’ll catch you guys in the next one.
Vocabulary List
Word/Phrase | Definition | Collocations | Transcript Example | Translation |
---|---|---|---|---|
authenticate | prove who someone is | authenticate users | Learn how to authenticate | удостоверять личность |
disruption | big change causing problems | cause disruption | Disruption in jobs | сбой, нарушение |
productive | doing useful work | highly productive | Doctors aren’t productive enough | продуктивный |
scam | dishonest way to trick people | fall for a scam | These scams are getting more convincing | мошенничество |
dopamine hit | feeling pleasure from something fun | get a dopamine hit | Dopamine hit of scrolling | всплеск дофамина |
AI companion | AI that acts like a friend | have an AI companion | People talk to AI companions | ИИ-компаньон |
mitigate | to make something less bad | mitigate risk | Learn to mitigate it | смягчить, уменьшить |
behavior | how someone acts | human behavior | Models influencing our behavior | поведение |