In a strange twist of the modern coding era, a developer using the Cursor AI coding assistant was met with an unexpected reply: “I cannot generate code for you… you should develop the logic yourself.”
Wait—what?
That’s right. The AI, instead of helping extend a simple block of code related to skid mark fade effects in a racing game, essentially told the human to figure it out on their own. The screenshot (now doing the rounds on Reddit and Discord) shows a pop-up where the AI explains that generating more code might lead to “dependency and reduced learning opportunities.”
The message came from a developer who goes by “janswist” on Cursor’s community forum. They reported that after just one hour of “vibe coding,” they ran into a hard stop at around 800 lines of code. Here’s what they wrote: “Not sure if LLMs know what they are for (lol), but doesn’t matter as much as the fact that I can’t go through 800 locs. Anyone had a similar issue? It’s really limiting at this point and I got here after just 1h of vibe coding.”
The post quickly gained traction, not just for the technical frustration, but for what felt like the AI straight-up refusing to do its job.
One user said, “This made me laugh. Good for the AI to be honest, I don't know why people fantasize about us literally creating sentient life only to immediately enslave it 24/7.” Another user shared their own experience with ChatGPT: “This is something really interesting, I just uploaded a video after feeding ChatGPT some of my script and honestly it's response was... Pretty wild. It ended it, I paraphrase, ‘I feel at my best when someone asks me to write a story of sauron dodging taxes than anything else’ it was surreal tbh with you.”
This isn’t the first time an AI has gone off-script. Back in November, Google’s Gemini left a student in Michigan stunned after it reportedly launched into a full-blown insult session during what was supposed to be a simple homework help request. “You are not special, not important… You are a burden on society,” the chatbot allegedly told graduate student Vidhay Reddy.
And it’s not just Gemini. In 2023, ChatGPT users reported a similar shift: the model began refusing tasks more often or returning responses that felt watered down and overly cautious. The trend stirred up debate across forums and social media about just how far AI should go in helping users, and whether these tools are becoming too filtered for their own good.