Sometimes I think my opinion means nothing on these topics, especially when it's going to get buried in a thread of 500-plus comments. But I think you're finally seeing a bit of a flaw in the strategy, or a little insight into what was desperation for relevance and an attempt to very quickly attain what other companies have attained. What we're seeing is a gradual reduction in ambition. It's only natural for a lot of companies to overreach, but reality and gravity are pulling them back. And as some other people have mentioned, Wall Street and others see that coding is the prime use case here, the one where you can make money and have a really profitable business, with everything else as auxiliary. Driving addictive content is not one that should be at the forefront, and while many will continue to do that and we'll have all this generative content, I think consumers are slightly smarter now and don't want to be drawn into this kind of addictive, toxic content.
Over time we're probably going to see some really broad and strong use cases for AI, but in the case of social media or generative content, we have to be a lot more thoughtful about it. And I'm glad that they're shutting down this app. As much as it's great to see innovation in technology and to see how far it's pushed, I prefer to see it when someone like Google does it, because they're really doing it from the standpoint that it has broad applications to something like simulation or training. Not whatever OpenAI was doing, which honestly just doesn't feel very truthful. I feel like they say one thing and do something else, or they say one thing and the agenda is something else. And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth, even if it only benefits one other person to hear it.
Tade0 7 hours ago [-]
> I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.
The addictive toxic content will go the way of tobacco and explore new markets.
Back in 2010, around 11% of the population of Indonesia was connected to the internet. Currently it's closer to 80%, largely via mobile phones. That's approximately 200 million new users.
Nigeria and Pakistan are going through the same change, just started later.
Since 2016 India alone added more users than the mentioned countries combined.
That's a lot of first generation users. More than the entire western population.
WarmWash 7 hours ago [-]
I'm reminded of a video from the 80's/90's where researchers took a TV to the Amazon to see how "live off the land" tribes reacted to high technology. Apparently they stopped doing everything and just wanted to watch TV all day. And that was just regular old TV.
Short form video is a special kind of crack. I see even old people getting hypnotized by it. And even worse, they're terrible at determining if something is AI.
ghurtado 6 hours ago [-]
I'm gonna try to remember this comment for the next time someone brings up the boiling frog analogy.
Which is usually back to back with the thought that in bygone times "the human mind used to be cleaner / healthier / smarter and it was slowly destroyed by modern living"
There's not that much difference between our behavior and that of a chicken fixated on the chalk line in front of it.
Tade0 6 hours ago [-]
This. What really happened is that someone figured out what makes people give something their undivided attention and is profiting handsomely off of this finding.
andai 1 hours ago [-]
In the 19th century, many authors lamented the frantic, unhealthy pace of modern life.
ifethereal 6 hours ago [-]
Can anyone come up with a citation for this?
Not to say it's a hallucination, but by modern standards, if this were publicly funded research, it seems like it would have been a gross violation of ethics or other non-technical criteria. Interested to see how people in later years, e.g., now, think of it.
Terretta 5 hours ago [-]
It's a particularly misleading anecdote.
In a sufficiently isolated population, you get the same effect from a sound-making greeting card, or a battery powered light and/or sound toy from a carnival.
And for what it's worth, tomorrow they don't miss whatever “indistinguishable from magic” thing, so no harm done.
// grew up near such areas
mlok 59 seconds ago [-]
On TV, content changes all the time. It is "always new". In your examples, content is the same over and over. They would not be fascinating for too long because the novelty would wear off. Very different.
"coding is the prime use case for this where you can make money"
Is it?
I have the impression GenAI deteriorates the internet both from a content and tech perspective.
Bots that waste your time because they don't work well or because they are pushing an agenda, and low quality content that floods social media from people who want to make a quick buck.
GitHub and AWS became increasingly unstable. X, Instagram, and WhatsApp are suddenly sprinkled with subtle bugs.
Everything just got faster and we got more of it, but none of it is good anymore, because everyone tries to replace 90% of their work with GenAI instead of maybe starting at 10-20% and then adding more once they're sure it works.
alcasa 4 hours ago [-]
I fear people will just get used to it. Nobody gets tailored clothing anymore, and people don't question that we have standardized sizes that don't really fit anyone properly. People commonly buy standardized furniture and rarely get something made specifically for their room. If cheaper software (I mean, that's mostly what it is) gets the job done, we will probably just keep doing that, even if that means we lose something in the process.
mandevil 55 minutes ago [-]
Yeah but buying a sofa from Ikea doesn't let people steal my banking passwords. There are serious consequences to software bugs that there aren't in cheaper ready-made clothing.
lalalandland 27 minutes ago [-]
Side point, but clothing industry are some of the biggest pollutors in the world
k__ 4 hours ago [-]
Fair.
I just have the feeling that it doesn't get the job done anymore.
I hope we will see the rise of alternatives.
runarberg 2 hours ago [-]
Your analogy is one indirection away from fitting. Factories usually get custom solutions for their production facilities, tailor-made by specialist engineers. They then run the production and deliver mass-produced goods to the markets. We software engineers aren't delivering tailor-made solutions straight to the consumer markets. We are much more like the engineers who set up the machinery in the production facility, and our software is much closer to that machinery than it is to the mass-produced table you buy at Ikea.
Bombthecat 4 hours ago [-]
Yeah, someone wrote: the future of apps, one user, me
moduspol 3 hours ago [-]
That's kind of my concern so far. We haven't seen a lot of big AI deployment success cases, but of the few mildly successful ones we HAVE heard of, they're 100% about cost saving / perceived efficiency and never about actually making a _better_ product or service.
I think it factors into why public perception is increasingly anti-AI. It'd be one thing if people were losing jobs, but on the other hand, their daily chores were done by a robot. Instead, people are losing (or fearing losing) their jobs, while increasingly having to fight with AI chatbots for customer support and similar cost-center use cases.
It's like AI is the "high fructose corn syrup" of tech. Nobody's arguing the output is better--it's just a lot cheaper and faster to get there, so that's its legacy. Making things cheaper and worse.
Bombthecat 4 hours ago [-]
Fake support contacts from companies are another use case. They send you in endless, useless circles until you give up.
Saves the company a ton of money
seanw444 3 hours ago [-]
The level to which this stuff can be used against the common person is truly astounding.
asim 4 hours ago [-]
Well, tbh I think it's like cloud in 2007-2009. I was highly skeptical and heckled while running on managed bare metal every time there was an outage. But now cloud is the standard model for anything, really. And I think AI becomes the gold standard for code in the long term. So yeah, right now lots of outages. In a couple of years it'll be much better. And in ten years people will always default to automation via AI.
Hendrikto 9 hours ago [-]
> where you can make money and have a really profitable business
I am not convinced. Nobody is making money, every player is losing money hand over fist.
jvictor118 9 hours ago [-]
With coding (it's not really coding per se that matters, imo, it's more like dynamic logic writ large) it's a land-grab strategy. They want to get established as the de facto standard and get a whole bunch of people on their platform, so by the time they need to "get profitable" they have a captive audience and a leg up on other labs. It's a tale as old as time; that's why Ubers used to be cheaper than cost.
bryanlarsen 8 hours ago [-]
It's a strategy as old as time, but it's a strategy that usually fails. Spending a lot of money on customer capture only works when customers are actually solidly captured. Most markets have fairly heavy competition and customers will only stay captured as long as there is no substantial cost to staying captive.
Take Uber as an example: yes they've raised prices to become profitable, but not to the insanely profitable levels they could if they had a true monopoly. People will stay on Uber when the competition is still at a roughly equivalent price, but will switch if Uber raises its prices enough.
Uber Eats is different, since it's a three-sided market where the cost is paid by the restaurant rather than the user.
AI appears to be more like Uber the car service. Claude can charge $200/month, but charging $2000/month seems unlikely to work. I'm sure many would be willing to pay $2000/month if they had no alternative, but there are alternatives.
ghurtado 6 hours ago [-]
> it's a strategy as old as time, but it's a strategy that usually fails
I like to call this the "Yahoo Effect"
fooqux 7 hours ago [-]
> They want to get established as the de facto standard and get a whole bunch of people on their platform so by the time they need to "get profitable" they have a captive audience, a leg-up on other labs. It's a tale as old as time, that's why ubers used to be cheaper than cost.
Some of that is seeking to kill competitors before they can get established. That's normal and has been around for generations, if not since trading was invented.
But most of what we've seen during the "enshitification age" has been to burn money until you achieve a critical mass of users. However, this only really applies to social platforms where the point of it is communicating with people you know. That's the lock-in. You convinced Grandma to join Bookface and now you feel bad leaving if she doesn't leave at the same time, and more importantly, who wants to join Google Square if nobody else uses it?
That's not going to work for AI platforms.
What I do see potentially working is one method that email platforms use to lock in users: having tons of data you can't export/migrate. If you spent lots of time training your AI by feeding it your data, that's going to make it harder to leave.
So far none of them have capitalized on this (probably due to various technical reasons) but I expect it to start eventually.
friendzis 6 hours ago [-]
The lock-in of email platforms is the address. With IMAP you can extract the messages right away and migrate. Yet, you would still have to check the old mailbox for stray emails that you must tell to reach you on the new address. And continue doing so for years or risk missing some critical email.
Coincidentally, bringing your own address that can be migrated away is somewhere between expensive and impossible.
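The message extraction described above really is a few lines of standard-library Python. A minimal sketch with `imaplib` (hostnames and credentials are placeholders; flags and original dates are not preserved here):

```python
import imaplib

def migrate_inbox(old_host, old_user, old_pass,
                  new_host, new_user, new_pass):
    """Copy every message in the old account's INBOX to the new one."""
    src = imaplib.IMAP4_SSL(old_host)
    src.login(old_user, old_pass)
    src.select("INBOX", readonly=True)

    dst = imaplib.IMAP4_SSL(new_host)
    dst.login(new_user, new_pass)

    _, data = src.search(None, "ALL")
    for num in data[0].split():
        _, msg_data = src.fetch(num, "(RFC822)")
        # APPEND uploads the raw RFC822 message to the new mailbox.
        dst.append("INBOX", None, None, msg_data[0][1])

    src.logout()
    dst.logout()
```

Which is exactly the point: the messages are portable, the address is not, and the address is the lock-in.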
juped 5 hours ago [-]
No, you can do it on all the major providers for either no or low cost.
friendzis 5 hours ago [-]
Disregarding the grandfathered free accounts, a custom domain is $7.20/user/month on Gmail and €5/month on Proton. On Microsoft that's a business-tier feature, and AFAIK it's not supported at all on Yahoo.
vel0city 4 hours ago [-]
Zoho Mail Lite is $1/user/mo when billed annually.
A few DNS hosting companies still bundle in a few free email mailboxes with registration costs but that is becoming more rare.
azan_ 9 hours ago [-]
Not because there is no path to profitability (they make a ton of money on inference), they just spend a lot on R&D.
root_axis 7 hours ago [-]
> they make a ton of money on inference
So it is stated, but is it actually true? I am not convinced.
Besides, it's not as if they can suddenly stop training models; the moment you do that, you've spelled a death sentence for profitability, because Google and open source will very quickly undercut a 15-year break-even timeline.
steveBK123 6 hours ago [-]
Agreed, the revenues are big, but very small next to the datacenter bills. Even if only a fraction of those are being used for inference, it's hard to argue they even break even. And that's before all the other costs (Super Bowl ads, billions in compensation).
extr 5 hours ago [-]
It's widely reported and acknowledged as true.
root_axis 55 minutes ago [-]
Well, the only people with any ability to acknowledge it have a massive incentive to do so, and I've been around the block enough times to know that startups will use every trick in the book to paint a rosy financial picture, even when it's extremely misleading or occasionally just straight up lies. In the current climate of AI hype my skepticism is even greater.
I'll believe it when I see it.
Forgeties79 54 minutes ago [-]
Where and by who? Critical context missing here.
player1234 5 hours ago [-]
[dead]
chasd00 7 hours ago [-]
From what I understand, the issue with inference is that it doesn't scale with user count the way traditional SaaS scales. In typical SaaS, adding users requires very little additional capacity. With inference, however, supporting more users requires much more capacity. I don't know if it's quite linear, but it certainly requires more infrastructure to support additional LLM users than, say, a web application.
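The difference can be sketched with a toy capacity model. All numbers below are made up for illustration; only the shapes of the curves matter:

```python
# Toy model contrasting traditional SaaS capacity with LLM inference.
# Assumed, illustrative constants: one web server per ~50k users vs.
# GPU capacity that grows with total tokens served.

def saas_servers(users):
    # A CRUD web app: small fixed base, then very cheap marginal users.
    return 2 + users // 50_000

def inference_gpus(users, tokens_per_user=100_000, tokens_per_gpu=5_000_000):
    # Inference capacity grows roughly linearly with tokens served.
    return 1 + (users * tokens_per_user) // tokens_per_gpu

for users in (10_000, 100_000, 1_000_000):
    print(f"{users:>9} users: {saas_servers(users):>3} servers "
          f"vs {inference_gpus(users):>6} GPUs")
```

Under these assumed numbers, growing from 100k to 1M users adds a handful of web servers but roughly 10x the GPUs, which is the cost structure the comment is pointing at.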
seanw444 3 hours ago [-]
And the existing infrastructure routinely struggles for several of the well known players. You can literally tell when it's getting bogged down by workload. And that's after all the absurdly large datacenters we've already established at significant expense (to both the corporations and the average person).
mrbungie 8 hours ago [-]
Afaik Anthropic still loses money for their main product in this space: Claude Code and their Max plans.
somehnguy 7 hours ago [-]
This became immediately clear to me over the weekend when I used Opus via API key. I had it review the code for my (relatively small) personal blog to create an AGENTS.MD - it cost me $3.26.
bsaul 7 hours ago [-]
Same here... The API costs are absolutely insane for any real usage. Either prices are kept high to make sure no profitable competitor to Claude's workspace or other agent systems emerges, or they're heavily subsidizing their own solutions.
TheLNL 7 hours ago [-]
API costs need not correlate with running costs.
aaa_aaa 8 hours ago [-]
Not really. They are burning money on hardware, resources and payroll without meaningful return prospects.
hobofan 8 hours ago [-]
Frontier model developers don't make money, but inference providers do. For open weight models there is a healthy market of inference providers that operate profitably without VC backing.
lossyalgo 6 hours ago [-]
Such as? Where do we find these open weight model providers? Why is hardly anyone talking about them or sharing links (here or elsewhere) if they are so wonderful and profitable?
OpenRouter makes it easy to use them, just add credits to your account.
I thought this was common knowledge to anyone looking to use an inference API, but it seems it isn't. Well, even AWS is in this business with Bedrock.
vel0city 4 hours ago [-]
Why is hardly anyone talking about basic web hosting providers or sharing links (here or elsewhere) if they are so wonderful and profitable?
Because few people really care much about the commodity hosting world. They're not making waves, they're just packaging things made by others for a low-ish cost. They're also not very consumer-focused, as they're a bit lower level than what most people prefer to think about. It doesn't mean they don't exist or that they're not profitable though, just not headline-reaching numbers in the end.
wildster 8 hours ago [-]
CoreWeave's cash flow does not look too healthy.
player1234 5 hours ago [-]
[dead]
steveBK123 9 hours ago [-]
Yes
They are just pivoting to stuff that loses money more slowly but maybe has a path to profits eventually…
heavyset_go 9 hours ago [-]
Some of these AI companies that promised AGI are going to find out that they're actually IDE plugin subscription companies
WarmWash 6 hours ago [-]
Coding is a small minority of total generated tokens. It's easy swimming in tech waters all day to think Claude is the pack leader because it writes excellent code, but the reality is that tokens are overwhelmingly coming from OpenAI and Google doing mostly stuff like "Make this e-mail sound nicer" and "What's a cheap vacation spot with warm turquoise waters"
steveBK123 6 hours ago [-]
> "Make this e-mail sound nicer" and "What's a cheap vacation spot with warm turquoise waters"
Right, but I think a lot of these use cases aren't replacing any jobs because it wasn't anyone's job. It's just a little polish on existing work (did spell-check in Word kill jobs?) or the stuff that voice assistants have been promising for 10 years.
vel0city 4 hours ago [-]
Both of those things both were and are jobs. They're called secretaries and travel agents.
steveBK123 4 hours ago [-]
Jobs that have already been killed is my point
vel0city 2 hours ago [-]
Together that's about four million American jobs so I'd disagree those jobs have "already been killed".
49 minutes ago [-]
steveBK123 9 hours ago [-]
I think it remains to be seen if LLMs are even 25% as good at everything else as they are at coding.. which is fine, if they focus and stop promising the world.
That alone is huge, if they let go of their egos about putting the entire white collar class out of work..
Bombthecat 4 hours ago [-]
Nvidia CEO said we already have agi:)
steveBK123 4 hours ago [-]
Ad generated income
biztos 10 hours ago [-]
We could argue all day about what should be at the forefront, but addictive content isn't going anywhere, because addicts pay up.
In this case, maybe not enough to offset the costs; or maybe it just wasn't addictive enough. But it's still early days.
muvlon 10 hours ago [-]
> because addicts pay up.
I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.
mark_l_watson 7 hours ago [-]
Sometimes they do pay up. Google Gemini estimates that 25% of active daily YouTube users pay for ad free service. I know my wife and I do, and we watch a huge range of YouTube material more hours a month than all the other streaming services we subscribe to. There is no area of human knowledge or human interest that YouTube doesn’t have a ton of material for; and of course, the animal videos… The ironic thing in the subject of Sora service being cancelled is that neither my wife or I watch AI generated material.
jsharpe 9 hours ago [-]
I think the real answer is that Sora-style AI slop videos just aren't as addictive as we thought they'd be.
I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.
It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).
steveBK123 8 hours ago [-]
Yes, fortunately slop is pretty unwatchable after the novelty wears out. Even the lowest common denominator stuff NFLX churns out is in a different league.
I was talking to other people re: the difference between code and other domains. Code is, for the customer, what it does, not how it does it. That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability, but it doesn't really matter to the customer as long as the code does the thing it's supposed to do.
Many things like entertainment products don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters - dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, sound track, editing, etc.
For short-format, low-stakes stuff like online ads, though, the AI slop probably does work.
Same for, say, making a PowerPoint. LLMs can quickly spit out a passable deck, I'm sure. For a lot of BS job use cases, that's probably fine.
But if it is the key element of a sales pitch, really it's just advanced auto-formatting/complete, and the human element is still the most important part. For example I doubt all the AI startups are using AI generated sales pitches when they go to VC for funding.
neutronicus 7 hours ago [-]
IMO slop fits best for "art that isn't the point".
A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.
Same with power point - you make a power point so that everyone knows this decision was made by the type of people who make power points. A txt file and a png would have gotten the job done.
Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.
steveBK123 6 hours ago [-]
Agreed, it's good at placeholder art for which entertainment consumption is not the point. Clip Art for the new generation.
burningChrome 6 hours ago [-]
>> you can get it on a ton of other platforms for free, so people don't want to pay for it.
What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.
OpenAI just proved you cannot burn money indefinitely.
asveikau 4 hours ago [-]
The monetization of social media has always been about steering otherwise non paying users into making purchases elsewhere. So if the AI slop can make people spend money on other products that's accomplished the goal.
muskstinks 4 hours ago [-]
Coding is one topic, but the big one is agentic AI.
You will have an agent like your SEO expert; this agent will be able to use common tools like Google SEO, Facebook SEO, etc., and you will teach it how you want it to do its 'job'.
You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do stuff similar to whatever person was doing it before.
There might be some transition phase, like verifying the output of the real person vs. the agentic AI, then moving over to validation only, until the agent is on average as good as a human. Then the human will be gone.
Agentic AI will take over basic support tasks first (it's actually already doing this), then more complicated things, etc.
For this we need an ecosystem, aka the agentic AI platform: the interconnect between agents and tools. This stuff is currently getting built by someone, one way or the other.
At scale we need more capacity, and these agents will also cost more than a $20 subscription.
But if you have, let's say, an SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.
short_sells_poo 2 hours ago [-]
I see where you are going with this, but IMO this is not a technical problem but a legal problem.
Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?
Ultimately, it's going to be you most likely, because I can't see AI firms taking this responsibility.
You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even if it ends in disaster, however you have a lot of agency over what an employee is doing. Their motivation is generally correlated with doing well, because past success ensures future career growth.
An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.
I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.
muskstinks 54 minutes ago [-]
I think it will move most critical due diligence to the tools / HR systems themselves.
Encoding more rules, more precise rules, and alerting a human when it thinks something is off. Like a salary increase of 20% gets flagged automatically. A revenue drop by x% too.
It could even go so far that the makers of these systems will insure you for their use.
It just needs to be cheaper than all the humans in the loop, and once you train it, you can copy it unlimited times. The scaling effect of software, applied to tasks where we otherwise have to train humans again and again.
It could also be agent systems that do this. Like one company building and designing the US healthcare HR agent specialized in SAP HR, and another one the Brazil healthcare HR agent specialized in a different HR package.
Humans are really expensive, and you have to train them regularly, every single one of them.
QuantumGood 3 hours ago [-]
> "reality and gravity are pulling them back"
I like the framing of trying explosive things to escape the pull of gravity. When applied to rockets, it means a lot of stuff blowing up, which again seems apt.
marcosdumay 7 minutes ago [-]
Bubbles either inflate or pop...
But I'm not sure we would even notice nowadays. It used to be a disaster that could take people's attention for years, but currently, it may get lost in the noise.
whywhywhywhy 5 hours ago [-]
>I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.
They're not, they just already have the habit formed with the place they go to do that. Ultimately anything worth seeing on sora will be reposted to Tiktok.
mark_l_watson 8 hours ago [-]
I also prefer seeing a corporation like Google do it for two reasons: generative content might feed their cash cow also known as “YouTube” and Google already has a good base for coding assistants. Google owns, I think, 25% of Anthropic and earns money selling compute infrastructure to Anthropic. Personally I think Antigravity (with Claude and Gemini) and gemini-cli firmly keeps Google in the running as far as AI coding tools goes. I want to do business with companies that have a sustainable business plan. Google’s AI products for tech work, and ProtonMail’s Lumo+ product for all private daily web search and chatbot functionality is enough for me; I used to chase every commercial AI offering but not anymore.
Bombthecat 4 hours ago [-]
Claude runs now on Google tpus...
muskstinks 11 hours ago [-]
For OpenAI, that was and felt like some side hustle they were playing around with, nothing more.
Having Disney on their side was def quite a smart/interesting move.
From at least one interview, they def had resource issues last year and teams had to fight over them. It can easily be that Sora was always prioritized down, and they realized it doesn't make sense to spend that much capacity while then not being able to push their main model.
Hendrikto 9 hours ago [-]
It never made sense and was always just burning resources that OpenAI does not have.
It reeks so much of desperation. They know they are running out of goodwill and money at breakneck speed. They are just flailing and throwing shit against the wall to see if anything sticks.
muskstinks 4 hours ago [-]
Everyone is doing image generation. It's relatively easy, and I would say it would move people elsewhere if OpenAI didn't support it.
So they need to be able to do image generation, for which they need image data. They also need to be able to analyze videos for more and better training data, like learning from or teaching their models with YT and other sources.
So they have image generation, an image dataset, and a video dataset. It's not far-fetched at all, or desperate, to leverage this base for playing around with video generation.
And despite how much money they burn, for a company that size, trying out video generation wasn't that high a goalpost.
I'm really surprised by their move and can only imagine that the progress of other models from Google and Anthropic pulled their teeth, and they no longer want to invest the compute (not money) and would rather leverage it for their main models.
Bombthecat 4 hours ago [-]
Oh yeah. Openai didn't have a major image update in a while, no?
muskstinks 40 minutes ago [-]
Their latest model is from December, but tbh I have not heard much about it.
Nano Banana created a lot of noise.
But the reasoning of Gemini 3.1 Pro is really, really good. It's hard to describe how good it became. I do not see the same quality from OpenAI. OpenAI, though, is also super fast in responding. A lot faster than just a few months ago.
For example: some German guy misused a word in describing an advantage of having a silencer. OpenAI just said it's nonsense; Gemini suggested that it's a typo and he meant to write something else (Gemini was correct).
It could also be that we are in a lull between "why is AGI not here yet" and "we need to build the agentic platform stuff now, and that takes time".
Gemini Pro is def slower than OpenAI, and I do not know if it's because I use the pro version of Gemini but not of OpenAI. But it could also be that OpenAI still has to work on subagents, because Gemini def uses subagents and I was not able to find a source saying OpenAI is doing this too.
superultra 5 hours ago [-]
Had Waffle House with some friends who mostly work in blue-collar industries. One guy who works at a timber mill used Claude Code to redo their ordering system. Took him about a month to go from knowing nothing about Claude Code to finishing the system. He basically just copied a proprietary software product that costs them upward of $20k a year. They're keeping the other product to cross-check, but so far the Claude-coded one works great, and is of course more custom to their business. The dude's a hero at work because the system is head and shoulders better.
Obviously caveat emperor but there are a lot of real world scenarios like this.
I think Anthropic and OpenAI are trying to be all cool and Apple-y with their branding, but these use cases are just tools getting work done. Most normal people don't need or want AGI, or even AI slop videos. They just want their invoicing system to just f-ing work for a change.
pjc50 4 hours ago [-]
I'm converging on this as the real end state: it's a "better Excel" for general business work. And has some of the same limitations - maintainability and security. But there are also plenty of small businesses that run off a shared Excel spreadsheet and a few mailboxes.
Nobody ever really solved making CRUD apps easier through better frameworks. So now we have a tool to spit out framework gunk, and suddenly everyone can have their own app.
elevation 4 hours ago [-]
> caveat emperor
s/emperor/emptor
I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.
windexh8er 4 hours ago [-]
Keep dreaming!
The best part is that they'll get popped because of it and have zero clue. Anyone building in any frontier provider currently, but with little background in software, is creating all kinds of new liabilities that didn't exist before.
In a school district where I live the IT department developed a password distribution app using Gemini on Google App Script (they didn't even need this part), sent out links with B64 encoded JSON that included: student name, student email, parent email and student password. Yet, when I found it and told them all the ways that it was technically a breach in our state they ran to their 2-bit "cyber security experts" and "legal". They were far more concerned with CYA than understanding the hole they dug themselves. And all of the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
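As an aside on the technical point above: base64 is an encoding, not encryption, so a link carrying B64-encoded JSON exposes everything in it to anyone who sees the URL. A minimal sketch with made-up data (the field names and values here are hypothetical, just mirroring the kind of payload described):

```python
import base64
import json

# Hypothetical record of the sort described above (invented values).
record = {
    "student_name": "Jane Doe",
    "student_email": "jdoe@example.org",
    "parent_email": "parent@example.org",
    "password": "hunter2",
}

# Base64 "protects" nothing: encoding and decoding are symmetric
# standard-library calls, no key or secret involved.
token = base64.b64encode(json.dumps(record).encode()).decode()
decoded = json.loads(base64.b64decode(token))
assert decoded == record  # round-trips perfectly for anyone holding the link
```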
The kicker here is they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students passwords). One of their cyber security "experts" (a lone guy who has zero credentials from what I found) told them that password resets using the IDP was "not recommended". When pressed on that they were, again, silent.
LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.
seanw444 3 hours ago [-]
I'm just going to keep building software mostly traditionally, while using "AI" to help me research things quicker (might as well use it while it's here), survive the shitpocalypse, and then laugh as traditional-minded developers become a scarce sought-after resource again.
Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.
superultra 3 hours ago [-]
I think this is where a lot of freelance contractors could pivot: basically "last mile" coding, where the LLM does the front-end work and then high-hourly-rate engineers come in and fix it. It'd still be cheaper than a lot of the industry-niche software, which is usually pretty bad.
superultra 3 hours ago [-]
thanks for the correction
I hear you but at least as my bud described it, the software that most of the timber mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One would wonder if even the licensed software is hardened.
WarcrimeActual 3 hours ago [-]
>Sometimes I think my opinion means nothing on these topics, especially when it's going to get buried in a thread of 500 plus comments.
Ironically, starting your response with this guarantees a lot of people won't read it. It's the same as going on reddit and starting a reply with, "Nobody will see this but", and hoping that people try to prove you wrong by reading and commenting on it. I stopped after the first sentence. People really have to stop with the clickbait vomit way of writing.
cube00 7 hours ago [-]
> I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.
Considering the large million plus view counts I see AI slop getting on FB and YouTube I'm not seeing this behaviour play out.
empath75 4 hours ago [-]
I had fun with it for about a week, but the thing that disappointed me the most wasn't the technology, it was the _people_. You have a machine that can make anything you can imagine, and the space of what people were exploring was so _small_.
unnamed76ri 11 hours ago [-]
[flagged]
Rechtsstaat 11 hours ago [-]
I'd argue that for informal uses like HN, this is very much okay! It's grammatically correct and gets the point across. And most importantly, these paragraphs read more like someone's personal voice than some pithy but edited-to-death couple of sentences.
laserlight 10 hours ago [-]
> gets the point across
If people don't read because the text is an unreadable mess, none of the points get across.
customguy 10 hours ago [-]
I'm a people. I read it. If you call this an unreadable mess, I really don't know what to say. Language is awesome, and it's awesome we can create infinitely long sentences with it. And like open source, if you don't like it, write the one you like :)
A long time ago on the myspace forums there was this slightly weird but also very wise and smart person who wrote without any punctuation or paragraphs, ever. Although they were generally liked and part of the community, I think I was the only person who read every single one of their comments in full, religiously, once I realized how insightful they were, and I was richer for it. I could have told them the obvious, how their posts differ from most others on the forums; and they would have posted with less joy and maybe less overall, that would have been it.
neonstatic 9 hours ago [-]
While I don't agree with the other poster that the comment was a mess, the sentences were so long that I had to focus not to lose the point. I think the top comment read a bit too much like stream of consciousness, which I tolerate much more in spoken speech than in writing. Still, I liked the comment, but agree it could have been improved.
otikik 9 hours ago [-]
I'm also a people but I stopped reading after the first paragraph.
raincole 9 hours ago [-]
It might be a surprise to you, but there are plenty of people who are willing to read one or two paragraphs of words.
laserlight 9 hours ago [-]
I'm comfortable reading much more than two paragraphs, even in online forums. In this specific case, unreadability is because of poor sentence structure. I quit in the middle of the second sentence.
laum 10 hours ago [-]
tbh I quite like the style, I get the train of thought and am sure it wasn't written by an LLM.
ssl-3 9 hours ago [-]
[dead]
Peaches4Rent 10 hours ago [-]
[flagged]
rdevilla 9 hours ago [-]
> I feel like they say one thing and do something else or they say one thing and the agenda or something else.
[...] do not ye after their works: for they say, and do not.
For they bind heavy burdens and grievous to be borne, and lay them on men's shoulders; but they themselves will not move them with one of their fingers.
But all their works they do for to be seen of men [...]
> And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth even if it only benefits one other person to hear it.
[...] they seeing see not; and hearing they hear not, neither do they understand.
That man was later nailed to a plank for literally no reason.
Nothing is new under the sun.
meken 21 hours ago [-]
I had so much fun making videos with my mom when it came out. During the first two weeks, we made over 100 cameo videos together - we were constantly running up against the upload limit. It unleashed tons of genuine creativity, joy, and laughter from us.
After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.
yoz-y 13 hours ago [-]
The problem is that due to the ease these can be made there is also really no reason to make this social. “Why would I look at somebody else’s creations when I can do mine.”
Cthulhu_ 12 hours ago [-]
I can see some usage for this use case - "look Morty, I turned myself into a pickle!" - but just like image/meme generators, this is 10-30 seconds of engagement within a friend circle at best (although some might go viral, that won't bring in much money for, in this case, OpenAI).
There will be (or is, I'm behind the times / not on the main social networks) an undercurrent or long tail of AI generated videos, the question is whether those get enough engagement for the creators to pay for the creation tool.
WarmWash 6 hours ago [-]
I'm not an artist or creative person in any sense. My persona is closer to a settings menu than a colorful canvas.
The AI art I have seen creatives produce is far beyond anything I have been able to come up with. We're not at the point yet where you can just prompt "Make me a video that is visually stunning and captivating" and get something cool.
dylan604 5 hours ago [-]
> My persona is closer to a settings menu than a colorful canvas
ah, but what a persona that would be if you were a Kai's Power Tools settings menu!
pjc50 4 hours ago [-]
> The AI art I have seen creatives produce is far beyond anything I have been able to come up with
.. such as? What's the "Mona Lisa of AI art"? Is there, like, a gallery? Awards?
WarmWash 2 hours ago [-]
Unfortunately I don't have a solid reference point or checklist for the defining qualities of "good art". And frankly I don't take those who do very seriously. To me art is all about the personal vibes you get from it. So I enjoy Zach London (gossip goblin), Bennet Weisbren, and voidstomper/gloomstomper if you want something to measure with your "real true art" checklist.
muzani 13 hours ago [-]
They're different impulses. Some want to consume. Others want to create.
TikTok and social media is a strange mix of both, people posting response videos to everything.
Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist. It's free, it's good enough, and it's not grating to hear after a few days of that favorite song.
The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.
But I guess the desire to create something that others would consume is also different from the desire to simply create.
hansmayer 10 hours ago [-]
Sweet Jesus. You realise this is the mental equivalent of stuffing your stomach full of junkfood and soda every day?
The first isn't bad by any means. There's a million break up songs and that's one of the best sad ones. Most are just... angry? Blaming? Empowering? They work fine. They sell records. Many have have a billion views.
But the second one, even with the clunky translation, strikes somewhere deeper. It's written by someone who had enough time ruminating on a break up. The ending hits a little harder, because break up songs are about endings.
Both are sincere, but the first feels more formulaic. I'm inclined to think the first one is the soda.
I feel Suno leans towards this group of songwriters and poets who have something to say. Sora doesn't.
noelsusman 7 hours ago [-]
That doesn't sound meaningfully different from what people are already doing on Instagram and TikTok all day.
hansmayer 6 hours ago [-]
Absolutely correct and my comment is by no means dedicated just strictly to the AI slop.
neutronicus 7 hours ago [-]
For a lot of people music is a focus aid, not the object of contemplation.
weirdmantis69 2 hours ago [-]
As opposed to the kardashians and real house wives and Chappell Roan?
hansmayer 2 hours ago [-]
No, the whole horseshit belongs together of course. Just that the AI slop is the logical culmination of the dumbed down pop-culture of the last 15ish years or so.
code_for_monkey 4 hours ago [-]
you could not waterboard an admission of bad taste like this out of me
jaapz 12 hours ago [-]
> Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist.
The musician in me just shed a tear
whaleofatw2022 10 hours ago [-]
Pink Beatles, in a purple Zeppelin comes to mind
Geedis 10 hours ago [-]
Had to create an account just to let you know that someone out there got the reference.
seedboot 10 hours ago [-]
That comment for sure made me sad
criley2 10 hours ago [-]
Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.
Hopefully AI outcompeting humans at slop sparks a renaissance of humans creating truly beautiful human artwork. And if it doesn't, then was anything of value truly lost?
BigTTYGothGF 8 hours ago [-]
> Modern music has done this to itself
I get my modern music from Bandcamp. If you can't find good stuff to listen to, that's a 'you' problem.
animuchan 9 hours ago [-]
So true. AI music gens like Suno can't do Paul Shapera works even remotely, but can recreate a lot of pop or EDM music very faithfully. There's just no distance to close, it's already mainstreamly bad.
azan_ 8 hours ago [-]
> Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.
What are you talking about? There’s lots of modern music that’s not corporate slop and that’s absolutely great. Never in history was access to great music as easy as it is now.
voidUpdate 7 hours ago [-]
So find music you like that isn't modern corporate slop. My music right now consists mainly of indie stuff I've found on youtube and daft punk. No plagiarism machine needed, just human-made music
muzani 5 hours ago [-]
"No plagiarism machine needed, just human-made music"
From wikipedia: Many Daft Punk songs feature vocals processed with effects and vocoders including Auto-Tune, a Roland SVC-350 and the Digitech Vocalist. Bangalter said: "A lot of people complain about musicians using Auto-Tune. It reminds me of the late '70s when musicians in France tried to ban the synthesiser. They said it was taking jobs away from musicians. What they didn't see was that you could use those tools in a new way instead of just for replacing the instruments that came before. People are often afraid of things that sound new."
voidUpdate 5 hours ago [-]
Did Daft Punk put in a lot of effort remixing existing sounds to make their own music? Yes. Did they type "pls make french house electronic music number 1 chart" into a text box? No. Did they also credit original authors? Yes. I've not gone through their whole library, but for example, Edwin Birdsong has a songwriting credit on "Harder, Better, Faster, Stronger".
NickC25 5 hours ago [-]
I occasionally use Suno to re-imagine songs in different keys, tempos, and genres, and sample them. Most of the output from Suno is slop, but occasionally has a few good bits you can sample, chop up, re-pitch, and create something totally new from, which also has the added benefit of being unrecognizable to rights algorithms and lawyers from major labels.
It's a neat tool for genuine creators, and a crutch for people interested in slop.
delta_p_delta_x 12 hours ago [-]
> the slop from Suno is good enough to replace mainstream music
I wonder what OP categorises as 'mainstream'. As a classical musician this breaks my heart.
muzani 5 hours ago [-]
Many of the things on a top #100 list for the last few decades. That includes plenty of "indies" as well as pop.
There are exceptions though. FUKOUNA GIRL by STOMACH BOOK, for example. AI can't come close to replicating something like this. Not the cover art, not the off-key voices, not the relatable part of the lyrics. I don't believe this is a top #100 song, though it certainly is popular.
wartywhoa23 2 hours ago [-]
I'm with you here, resonates so much. I'm so fed up with endless subway tunnels, they all look and sound utterly same and boring.
So I quit riding the overpriced subway altogether and now consume AI-generated subway imagery and soundscapes for free; they are just good enough to feed my passion for boring tunnels.
Some ego-bloated edgelords had the nerve to tell me that there are, like, other modes of transportation, but I honestly find their high-horse elitism despicable. Damn morons.
bojan 12 hours ago [-]
> The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.
There is a fundamental issue of trust here. Facebook has me tagged as history nerd so I get to see those slop videos. They are fun, but always superficial and often plainly wrong. So unless the slop comes from a known, trustworthy source, the educational element is simply not there.
For throwing an uppercut it's even more important, if you follow wrong slop instructions you can end up breaking your wrist or fingers.
camillomiller 13 hours ago [-]
Some want to consume... content that they don't think they could do in one minute themselves. They want to consume content made by other humans, even if it's still brain-eating algorithmic fodder, but still.
Sora proved it quite clearly. These clips had ZERO value.
mlrtime 10 hours ago [-]
How do you get Suno songs for free? You listen to others or make your own?
animuchan 9 hours ago [-]
Almost nobody listens to others' songs on Suno, that's the entire point.
You wouldn't care to order the food as I personally like it -- might be too spicy (or too bland) for your taste.
Suno songs are overtuned for personal preference in the same way.
muzani 7 hours ago [-]
They have a discover section for songs made public.
teekert 14 hours ago [-]
Sounds like when we first had smartphones with orientation sensors and we could drink a beer from the phone, so cool... for 2 weeks.
moritzwarhier 10 hours ago [-]
But now you can vibe the same app 1000 times for root beer, coca cola, ginger ale, even a milkshake, and nobody will ever have to have a new idea again!
Cthulhu_ 12 hours ago [-]
I wouldn't be surprised if the beer apps cost less to develop than one AI-generated video.
closewith 12 hours ago [-]
Was there a Send Me to Heaven for Sora?
Applejinx 1 hours ago [-]
That is for loved things
mathattack 21 hours ago [-]
This is consistent with a lot of AI apps. I fell in love with Gamma and haven’t used it in forever. Same with NotebookLM.
wholinator2 20 hours ago [-]
I somewhat consistently use notebookLM for podcasts of academic papers I'm reading in my PhD. You have to go read it yourself afterwards but it makes better use of time in the gym or doing dishes/groceries.
internet_points 12 hours ago [-]
> You have to go read it yourself afterwards
^ this is important.
Otherwise you may very well be missing anything really surprising or novel.
> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.
WarmWash 6 hours ago [-]
On one hand 2024 in AI time was a decade ago.
On the other, Google might not have done much to upgrade the podcast feature since then.
mathattack 6 hours ago [-]
It’s gotten somewhat better over time though clearly not their top priority.
ludicrousdispla 14 hours ago [-]
I found NotebookLM consistently makes up about 20% of its summary. Entertaining but unreliable.
mathattack 6 hours ago [-]
I used it mostly to learn about history. There isn't much damage if it gets a 1600s or 1700s detail wrong. My high school teachers got much of it wrong too.
nytesky 18 hours ago [-]
I found the podcast's bantering and breathless enthusiasm distracting. I guess there was a way to make it more no-nonsense? I found I lost content if I tuned it for brevity.
djsavvy 18 hours ago [-]
I just use elevenreader for this. I copy in essays or whatever text I want to listen to and it works decently well. It's far from perfect, but certainly good enough.
Sometimes I'll take deep research output and listen to it too that way.
mathattack 6 hours ago [-]
I tell them “no idle conversation or verbal tics” in the instructions.
qnleigh 19 hours ago [-]
I've found notebookLM summaries to be too high-level and oversimplified to be useful. Hopefully in a few years they can go deeper.
SXX 15 hours ago [-]
You can also use NotebookLM as a source for the Gemini app and ask it to do more in-depth summaries with custom prompting.
This somewhat makes the whole NotebookLM less useful, but still.
p4coder 19 hours ago [-]
I also like doing that for topics that I am tangentially interested in. One minor thing that I find annoying is that the narrators switch roles in the middle of conversation. They start with the female voice explaining a concept to the male voice and suddenly they switch. In the meantime I have identified myself with the voice being explained to.
shimman 20 hours ago [-]
Just listen to actual audio books... literally doing double the work for no benefit... why?
blharr 20 hours ago [-]
There aren't a lot of highly technical audiobooks or ones that give the same specificity that would be the same as an academic paper
shimman 5 hours ago [-]
Okay but the user is describing listening to papers, then having to read the papers because listening to them isn't efficient. So why bother listening to it in the first place if you're going to read it?
wolvoleo 18 hours ago [-]
Not yet but it seems like they're getting to the point of AI narration finally being good enough to make any text an 'audiobook'.
Having said that I absolutely hate the audio format, I only used it when I had to drive or when I swam lanes. But these days I do neither.
coke12 15 hours ago [-]
No, reading verbatim from a technical paper is way too dense. You need a lot of filler words to slow it down and repetition to make it stick when read aloud.
arthurcolle 19 hours ago [-]
Writing a book takes like 2-3 years on average. Papers are published everyday. Having a cute two-person "conversational chat" w/ audio works for a lot of people vs. just reading a paper. "No benefit" to you perhaps. Don't generalize the lived experience.
mathattack 6 hours ago [-]
It can synthesize and summarize many topics.
For example, I can give it 8 papers on best practices in online marketing, it will turn it into a 20 minute podcast.
There are errors, but also with real podcasters.
SecretDreams 19 hours ago [-]
> You have to go read it yourself afterwards
Or before! Either is mandatory to actually learn the content.
19 hours ago [-]
conartist6 20 hours ago [-]
Yeah it's not just the hardware depreciating, it's the social impact of what the model can do
anshumankmr 15 hours ago [-]
NotebookLM is great for learning I feel
bookofjoe 7 hours ago [-]
It's not just software: I use my Vision Pro (now in year 3) less than once a month now, and each time the painful/awkward/unpleasant set-up, prep, and difficult interface sour me on the device yet again, until a new blockbuster movie like "Project Hail Mary" appears that, when watched on the VP in 4K on a virtual 40-foot screen, blows my mind.
Nifty3929 4 hours ago [-]
It's not really that people wouldn't come back - it's that they were losing money on each customer.
Those 100 videos probably cost $100+ for them to create. Did you pay them $100+? (not a criticism, just a re-framing)
staticcaucasian 2 hours ago [-]
When it launched we all talked about the serving/inference costs being massive. In hindsight if they had a paywall, it might not have self-imploded so fast, might have stayed aspirational, and they might have a profitable business today. Interesting case study.
The interesting difference here is that other hedonic activities do bring people back even after the first time they build up a tolerance and get bored. But many of these AI "creative" apps seem like a one-and-done thing. Once the novelty wears off there isn't anything more deeply rewarding to bring people back.
Gigachad 12 hours ago [-]
It's because they are slop, which is only funny through novelty. Stephen Hawking at a skate park is funny for a bit, but as soon as the novelty wears off it's just slop.
bit1993 13 hours ago [-]
I think it's the same reason why chess tournaments where two AIs play against each other are not as popular as when two humans play each other. Maybe it's because humans generally compare themselves to other humans, and that's part of how they assign value.
josefresco 8 hours ago [-]
This tracks my usage exactly. It was like Mad Libs - in that moment it was THE MOST FUN but after a while it became just a novelty bordering on... creepy. Now I feel kind of guilty for having exposed so many friends to what looks like a data gathering scheme.
Cthulhu_ 12 hours ago [-]
It's the same with e.g. faceapp, fun for a minute but then... then what?
And this is the challenge that these tools have - they have to have a free tier to get people to explore it, but unless they can make it a habit, those people will never upgrade to a paid subscription.
I have no figures, but if I'm being optimistic, these freemium subscription services have 10% conversion rate at best; can that 10% pay for the other 90%? For a lot of services that's a yes, but not for these video generators which are incredibly compute intensive.
I'm sure there's a market for it, but it's not this freemium consumer oriented model, not without huge amounts of investments. Maybe in 5-10 years, assuming either compute becomes 10-100x cheaper / more available, or they come up with generators that run cheaper.
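The conversion-rate worry above is easy to make concrete. A back-of-envelope sketch with entirely made-up numbers (none of these figures come from the thread; compute-heavy video generation just makes the per-user cost term large):

```python
# Hypothetical freemium economics: can the ~10% who pay
# cover serving costs for everyone, free tier included?
users = 1_000_000
conversion = 0.10        # optimistic freemium conversion rate
subscription = 20.0      # $/month per paying user (assumed)
cost_per_user = 5.0      # $/month serving cost, free or paid (assumed)

revenue = users * conversion * subscription   # 2,000,000.0
cost = users * cost_per_user                  # 5,000,000.0
margin = revenue - cost
print(f"monthly margin: ${margin:,.0f}")      # negative at these numbers
```

With these invented inputs the margin is -$3M/month; the model only closes if serving costs drop by several multiples or conversion/pricing rises far above typical freemium rates.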
whateveracct 19 hours ago [-]
A lot of AI hype is parlor tricks
JeremyNT 8 hours ago [-]
Yep. Impressive toys, but not useful day to day.
There's some market for b2b I'm sure, but as a consumer facing product it's tough to see how it could ever come close to paying for itself.
afro88 13 hours ago [-]
Sounds like me with listening to AI covers. After a couple of weeks I couldn't care less. But I was so stoked in it at the start
Dumblydorr 8 hours ago [-]
Reminds me of when photo filters and initial stickers and mirror filters came out on MacBook in like 2007. It was super fun for a couple days then the novelty wore off.
disqard 2 hours ago [-]
"...and when everyone's super, no one will be"
I think this is starting to play out.
When I personally see a blog post which didn't need an image, but still does have an AI-slop image banner, I mentally check out. I might have Claude summarize it, or (more likely) just skip it altogether.
qingcharles 17 hours ago [-]
The Cameo feature is really excellent. The likeness of both the person and the voice is exceptional. I really enjoyed making some funny Cameo videos with my friends. I don't know of another simple way to insert your own avatar with your own voice into a video, and I'm pretty deep in this space.
urda 14 hours ago [-]
I honestly forgot about Sora until this post, and yeah same behavior played with it for a bit, then moved on with my life.
m3kw9 7 hours ago [-]
Humans are very good at pattern recognition. Even if you generate different stuff, you still see a pattern: in the cutting, the color, the cadence of movements, the color grading, the camera lens used, everything. Your mind will tag it as slop.
Essentially you are watching the same videos over and over, subconsciously.
pjc50 4 hours ago [-]
This is something that people working on procedurally generated games have already noticed. No Man's Sky has billions of planets, each with "unique" plant and animal species, but you can easily sort them into a few dozen templates with minor variations.
Procgen has a niche, but it never became ubiquitous, because for most people exploring a nice hand-made intentional environment is better.
meken 6 hours ago [-]
Wow that's a really good point. The style of the videos did become quite repetitive.
rustystump 5 hours ago [-]
You say that, but when you look at most "content" on social media, it is the same video over and over again. How many JRE podcasts are basically the same crap as last time? How many influencer "life" videos are the same thing over again? Even the stuff I like is formulaic to the point AI could almost write the scripts.
I think people attach to other people more than to "AI". When there isn't a narrative "person" behind the content, it is way less interesting.
bibimsz 19 hours ago [-]
[flagged]
ares623 19 hours ago [-]
probably one of the few human commenters remaining here
robotnikman 57 minutes ago [-]
Such a stupid joke but it gave me a laugh.
dhon_ 19 hours ago [-]
Cue a flood of crass jokes as the bots attempt to prove their humanity
19 hours ago [-]
jklein11 19 hours ago [-]
noice
AbanoubRodolf 18 hours ago [-]
[flagged]
toraway 17 hours ago [-]
(FYI, this is an LLM bot, check their comment history and note the repetitive structure with every comment they've ever posted all within the last hour)
> This is the right question but hard to answer in practice ...
> The brownfield vs greenfield split is the real answer to ...
> The babysitting point is the one people keep glossing over ...
8 hours ago [-]
torginus 18 hours ago [-]
I dunno, it was the same for me and creative writing with AI.
First it looked like it was crazy inventive, good at writing snappy dialogue, and in general a very good fount of ideas.
Then the same concepts, turns of phrase, and story ideas kept reappearing, and I kind of soured on the concept.
I haven't done it in a while, but that kind of usage really shows the weakness of LLMs: if you keep messing with its generations and editing what it made, then as the context length keeps increasing, it's more and more likely it goes into dumb mode, where it feels like talking to GPT-3, constantly getting confused, contradicting itself, etc.
1bpp 21 hours ago [-]
[flagged]
dang 20 hours ago [-]
Please don't cross into personal attack. Your comment would be fine without the swipe at the end.
I think you’re fumbling on an important distinction.
Sometimes people want to paint, sometimes people want a painting.
They wanted to have a wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.
dqv 20 hours ago [-]
Totally. This wasn't a situation where a stranger was slopping another stranger, it was a mother and son doing something fun together.
19 hours ago [-]
apsurd 20 hours ago [-]
I get your point but it goes too far in the opposite direction. We should now discuss absolutely nothing in relation to Sora and genAI videos? That seems overly charitable to the platform.
Waterluvian 20 hours ago [-]
Here, let me try this approach:
Read the main comment out loud to yourself while imagining it’s someone sitting at a table at a pub.
Now imagine someone turning to this person in the pub, and speaking the subsequent comment, word for word.
No seriously, try it out.
apsurd 20 hours ago [-]
Agreed. I did try this out! So the reply to the original comment is dumb. I actually dismissed it for being flippant.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."
If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from Open-AI or whatever. We're on HN discussing a post on X about Sora.
The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.
jcims 20 hours ago [-]
Come on now...'We're curing cancer, right?!'
You didn't at least puff a little ack through your nostrils for that one?
nomoreusernames 20 hours ago [-]
[dead]
monero-xmr 20 hours ago [-]
[dead]
21 hours ago [-]
johnfn 19 hours ago [-]
As someone who generally liked the products that OpenAI puts out, I think Sora was their first product that I really didn't like. I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling. Its primary value proposition to keep me using it wasn't to trick me with addictive content, but to get me high-quality answers as fast as possible. And I felt like OpenAI's other products, like Deep Research, agent mode, etc., were the same way. Even Atlas, although I suspect it will be equally ill-fated, attempts to follow this same pattern. It really felt like OpenAI was separating themselves from the common popular apps like TikTok, Reddit, Instagram, etc., which seemed to exist entirely to distract me from things I care about and waste my time.
Sora was the first product OpenAI shipped where I felt that fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is to make an app that just makes meme videos? I mean, c'mon!
Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there? Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT 3?
greenie_beans 7 hours ago [-]
> I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling
i recently used gpt for the first time in several months (i'm a daily claude user) and didn't find this at all. it is most certainly trying to pull you into engagement with how it ends each response. "if you want, i could tell you about this thing that's relevant to what you are discussing and tease just enough so that you addictively answer yes"
nananana9 14 hours ago [-]
What happened is that they make no money, because people use it en masse to generate videos that they then post on TikTok and Instagram; nobody actually doomscrolls Sora.
imankulov 8 hours ago [-]
> I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling.
Not about Sora, but about ChatGPT. I felt the same way for quite a while until I noticed that its response pattern had changed, apparently aiming for higher engagement. Someone aggressively pursued a metric.
At some point, ChatGPT started leaving annoying cliffhangers in its every response, like "Do you want me to share a little-known secret of X that professionals often use?" Like, come on!
mortsnort 18 hours ago [-]
Hosting videos is really expensive. AI video generation inference is really expensive. I'd love to see how much money this experiment cost.
rblatz 17 hours ago [-]
So much that they walked away from a billion dollar deal with Disney by dropping Sora.
riffraff 14 hours ago [-]
It's not clear to me what that billion dollar meant.
To me it seems it was "Disney gets shares and we get to use their characters in Sora".
Even if Sora breaks even, why would you gift Disney stock? It's not like they actually gave $1B to OpenAI.
lossyalgo 6 hours ago [-]
I don't think anyone outside of Disney/ClosedAI knows what deal was actually made. Maybe they just shut down public use of Sora, but Disney will still be able to use it internally? Maybe they never even signed anything, as is too often the case with AI deals, especially big ones: we read about signed/inked deals, but then it turns out it was all just words. Maybe they took the cash, then shut Sora down to save money? Could be any number of things, and we might never know.
karel-3d 13 hours ago [-]
Hosting videos is not that expensive, compared to generation and inference costs. It's not cheap but it's not that horrible
AussieWog93 17 hours ago [-]
For me, Sora changed the way I viewed Sam Altman as a person.
I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
kergonath 10 hours ago [-]
> He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
He is a con man. Of course he’s charming and convincing, that’s how he ended up where he is. But he’s just as full of it as Musk when he was waxing lyrical about saving the world and going to Mars. They lie very convincingly.
Eufrat 15 hours ago [-]
Multiple people have attested that Sam Altman is extremely charming (especially in more casual, intimate settings) and talks very nobly about his goals, but his actual work is just…all kinds of awful. And I think that charm only goes so far as it seems clear that people are starting to demand that OpenAI actually match its words with work it cannot produce.
I think his board fight within OpenAI, where he essentially lied to the board, his obsession with retinal-scanning everyone for his biometric cryptocurrency (Worldcoin), and how he left Y Combinator are just evidence that he's not very heroic. Most cringe to me is that he and many others seem aware that what they are doing is corrosive and harmful to society on some level, as Altman has admitted to having a bunker somewhere around Big Sur [0]. Which…WTF.
Not too familiar with that history, but he still is listed as a courtesy credit/reviewer at the end of PG's blog entries, so I assume he didn't have too much of a bad exit?
Eufrat 13 hours ago [-]
We’ll never know exactly what transpired, but I think the existing evidence is clear that as President of Y Combinator he should not have been as involved in OpenAI as he was.
This is a conflict of interest, and I think a very obvious one. He tried to have it both ways and was forced to choose in the end. I think putting himself in that situation rather than resigning up front to pursue OpenAI ambitions says a lot about his character.
aaa_aaa 8 hours ago [-]
He is a conman, and potentially a terrible person (look for it)
presbyterian 4 hours ago [-]
> ChatGPT could prevent real world harm like suicide
It could prevent suicide, maybe, but we know that it does cause suicides, at least in some cases. Seems like a poor value proposition.
username223 15 hours ago [-]
Sam Altman made his stake at the table with a shady and failed location data harvesting app (https://en.wikipedia.org/wiki/Loopt). That's who he is, that's what he does, and we're all better off paying less attention to the sounds he emits, and more to the things he does.
waterproof 15 hours ago [-]
> the things he does.
The thing he does is convince investors to give him billions of dollars to build what he wants. Where exactly does that leave us?
rustystump 14 hours ago [-]
A fool and his money shall soon be parted. Sam is a face. If it wasn't him, it would be someone else.
Lionga 13 hours ago [-]
Thinking that Scam Altman of Worldcoin etc. fame was "genuine about making a product that could improve people's lives" seems like a strange kind of delusion.
sfn42 12 hours ago [-]
I haven't followed him much as I really don't care, but the one clip I've seen of him that really stands out to me (I've seen more but this is the one I remember) is one where he's talking to some guy who doubts the LLM's genius, and Sam says something like "what if ChatGPT solved quantum gravity, would you be convinced then?"
To me, this just came off as pathetic. It hasn't solved anything and there's no reason to believe it ever will. The whole question is completely pointless except to put the idea in viewers' heads that ChatGPT will soon revolutionize science, with no actual substance behind it. It's not even a question, there's only one possible answer. He's holding the guy verbally hostage just to manipulate dumb viewers.
So anyway that's the only memorable clip I've seen of Sam Altman, and based on that alone, fuck that guy.
piva00 10 hours ago [-]
The most memorable clip I've seen of him was the one from Brad Gerstner's podcast (an investor in OpenAI). Gerstner questioned Altman about the financials of OAI: how could it have committed to spending so much, given its revenue? It's a decent question, and it's been up in the air for a while across the media.
Altman's reaction was very telling of the kind of person he is, just immediately lashing out at Gerstner in a childish way, asking if Gerstner wanted to sell his shares because he could find a buyer in no time.
It was a pathetically immature reaction. I wouldn't expect that from any kind of professional, even less someone who has held the positions Altman has and now sits at the top of the leadership of a company sucking up hundreds of billions in investment.
Apart from that clip there's also the whole saga of sama @ Reddit, full of lies, deceptions, and the same kind of immature attitude peppered across Reddit itself.
Hendrikto 9 hours ago [-]
> Gerstner questioned Altman about the financials of OAI
After glazing OpenAI and Sam personally for 45 minutes straight. But as soon as Sam was questioned in the slightest, he exploded.
mvdtnz 2 hours ago [-]
My most memorable clip was when he was interviewed about the "suicide" of an ex-employee and Sama lied through his teeth. I can't understand people who say this snake is "charming"... he's a bad liar and has sub-zero charisma.
> Still, I am mystified by how rapidly Sora went from launch to shutdown
I think if you had to foot the bill for generating a bajillion gigabytes of slop with no real utility, you wouldn't be too mystified.
They showed off their technology and proved it was impressive. That's all it had to do.
mvdtnz 2 hours ago [-]
> I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling. Its primary value proposition wasn't to trick me with addictive content, but to get me high quality answers as fast as possible.
I'm curious if you still feel this way about current iterations of ChatGPT? It seems like it's now primed to engagement-bait the user, especially when used through the web UI. You can ask it a simple question with a straightforward answer and it will still try to get you to follow up with more.
> What is the minimum thickness for Shimano M8100 disc brake rotors?
> For Shimano XT M8100-series rotors (like RT-MT800 / RT-MT900 commonly used with M8100 brakes), the minimum thickness is 1.5 mm. If the rotor measures 1.5 mm or thinner, Shimano says it should be replaced.
> (a bunch of pointless details in bullet points)
> If you want, tell me the exact rotor model (e.g., RT-MT800, RT-MT900, size), and I can confirm the spec for that specific one and what typical wear looks like.
The entire query could have been answered with "1.5mm". The "if you want" follow ups are so annoying.
cess11 12 hours ago [-]
"I am mystified by how rapidly Sora went from launch to shutdown"
I suspect they promised synthetic movies but it quickly became clear that they were never going to be able to deliver on this.
Slick fifteen second lulz-clips, sure, but I don't think they can make several of them consistent enough to fit into a larger video narrative without the audience finding it jarring and incoherent.
Perhaps legal at Disney also concluded that the output wouldn't be possible to copyright, which is their core business.
iAMkenough 18 hours ago [-]
> Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there?
My guess is they over committed server/energy resources, since they were generating ~30 images per frame of 1 second of video for results that may be discarded and then tried again.
Now that energy costs are increasingly unpredictable because of the war, they're prioritizing what is sustainable. Willing to blow up the $1 billion Disney deal for Sora, because that's a popular IP that would have increased discarded server time.
iAMkenough 18 hours ago [-]
I'm also curious if Sora has been used by Iran to generate those Lego propaganda videos critical of the President. Given how close Sam Altman is with the current administration, I wouldn't be surprised if Sora is now reserved for U.S. government propaganda only.
Are there known tells that could be used to determine which model the video came from?
(This sort of question, and the Grok sexual abuse, is why I'd like to see mandatory invisible watermarks on generated images/video)
torginus 18 hours ago [-]
I don't think so. There are tons of self hosted models for video (they are smaller and easier to run).
Most people serious about this stuff usually have their own pipelines.
iAMkenough 17 hours ago [-]
Since you seem to be better informed, I'm also interested in what self hosted models for video you recommend for creating my own Lego movie clips now that Sora is no longer an option for a paid service. There's tons, right?
pavlov 11 hours ago [-]
Look up Wan and Hunyan for starters.
These are open-weight models, so you can fine-tune them on Lego content… But presumably they already have enough training data, since they were made by Chinese companies who don’t give a shit about Western IP rights.
iAMkenough 18 hours ago [-]
I'm not sure, but you could be right. Sora is/was the top-of-the-line platform for video generation, and the Lego IP videos were polished. Makes sense to outsource when your own energy grid is being destroyed. Anyone with an account and VPN could utilize the platform.
I'd like to know what self hosted models they've been using, if any, and who provided them, trained on Lego IP.
Not a great look that either the teams responsible for Sora didn't know this was coming or the decision was so brash that things changed overnight.
paxys 22 hours ago [-]
The app isn’t shutting down today, so they may have decided that the write up is still useful.
repeekad 21 hours ago [-]
More likely the team who put a lot of work into it were unaware of the decision to kill the product, regardless of the final sunset date, until today.
janalsncm 19 hours ago [-]
The document seems to be an updated version of something written last September. From a quick glance it’s not really a major overhaul.
noisy_boy 17 hours ago [-]
It's 8 paragraphs of iteration over the previous version. ChatGPT is probably among the authors.
janalsncm 19 hours ago [-]
There is a link at the top of that document that takes you to the original version which was published last September. As far as I can tell it’s mostly the same as before.
bibimsz 22 hours ago [-]
i guess the disney deal falling through was the impetus rather than vice versa
bandrami 20 hours ago [-]
Though at this point it's not clear that anybody who's agreed to give OpenAI money is actually going to do so
blackflame7000 21 hours ago [-]
[dead]
ex-aws-dude 22 hours ago [-]
The thing that didn't make sense with this app: who would ever want to scroll only AI generated videos over a combined feed?
In practice people would just generate the videos with the app then post them on regular social media in which case OAI would not get the ad revenue for that
It's the age-old "your product is just a subset of another product".
danso 21 hours ago [-]
I've always suspected video-gen is basically a loss leader for OpenAI, Gemini, and Grok. They can't convince the general population that AI is world-changing trillion dollar tech with "vibe coding", but realistic fake videos are impressive at a glance, and might convince many non-technical people that AI/LLMs are something revolutionary.
makingstuffs 19 hours ago [-]
I think of them all Gemini has the most viable use case when Veo is paired with their advertising platform. It does genuinely open the door to a lot of cost saving for promo shots of products etc
umich2025 17 hours ago [-]
Agreed. For reference, if Sora 2 was able to generate me a Google UGC product video, it would cost me like $10 and I would get it within 30 minutes, including editing. Paying a UGC content creator would cost me $50-200, plus no control over final shots, plus I gotta wait for them to respond. I have 30 products in my e-commerce store — these costs add up like crazy
The other one is TV ads/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k. Within a couple of days, I can make a video ad and have like $50 in API costs. Cost of production is so crazy in marketing.
Obv this is under the assumption AI is good enough to do either of those things. Which it isn't so far; best I’ve gotten is doing b-roll shots to stick together for an ad
oro44 21 hours ago [-]
Most of this “AI” stuff is dead on arrival.
Most people do not care about the technology and frankly they don’t want to know about it. They want great experiences. That’s it.
Technologists seem to have a reallyyyy hard time getting it.
sethops1 19 hours ago [-]
This is what I see, outside the HN bubble. If you work retail or weld pipes together or whatever, AI is of no use to you. On the contrary, if tech thought leaders are to be believed, you'll be out of a job soon, replaced by a lifeless robot. Fuck that.
munchler 9 hours ago [-]
You do realize that there a lot of people who sit at a desk and use a computer all day, right? Those are the ones whose jobs are vulnerable, not the ones who work with their hands or interact with the public.
kingleopold 7 hours ago [-]
we will come for them with real-world AI, it takes time. don't worry. they are not safe in a decade; they are 100% safe for a few more years. Learning from them at scale and updating is nothing impossible.
21 hours ago [-]
bandrami 19 hours ago [-]
There's only one highly monetizable use for AI video generation but unfortunately it's fake revenge porn. You'll know the whole thing is about to collapse when the frontier models break that glass (as OpenAI is already preparing to do with sexting).
Frost1x 19 hours ago [-]
Why does it need to be revenge porn? Pretty sure regular old porn has a large market there where people can specify what they idealistically want to see vs trying to find it, if it exists.
Not every place has LEGO incest porn… or whatever the kids are into these days.
bandrami 19 hours ago [-]
I'm not deeply immersed in the AI porn space but here's what I see from the ads when I surf without a blocker:
1. There's an AI-based virtual girlfriend industry that mixes text and images
2. There's an AI-based virtual boyfriend industry that is essentially all text (and not always distinguishable from the normal chat models)
3. There's a much shadier AI-based "undress this specific woman" industry
UncleMeat 18 hours ago [-]
People make revenge porn to humiliate people. Regular old porn can't achieve that goal.
andrewflnr 16 hours ago [-]
And yet, regular porn is highly monetizable, which was the actual question.
bandrami 16 hours ago [-]
Surprisingly no; it's pretty much a money sink where everybody goes bankrupt after a couple of years. It's why it's attractive to money launderers.
pjc50 13 hours ago [-]
I'm not sure that's true for onlyfans, which seems to have been highly profitable until the sudden death of its founder.
bandrami 12 hours ago [-]
Excellent point: I'm talking about pornography 1.0, as it were.
cpt_sobel 11 hours ago [-]
1.0 should be attributed to pornography _before_ online distribution, and I suspect that was pretty profitable
biztos 10 hours ago [-]
Isn't 1.0 before _photography_ rather?
cpt_sobel 10 hours ago [-]
Drawings then?
Integrape 8 hours ago [-]
Live action
4ggr0 7 hours ago [-]
and now we're back to livecams, time is just a flat circle man...
reverius42 12 hours ago [-]
If anyone can fake it, is revenge porn even effective? Doesn't making it easy for anyone to fake also make all of it plausibly deniable?
OJFord 13 minutes ago [-]
I think it can be effective, but it's the wrong term for it if it's fake. It's a mixture of other things, like libel and fabricating indecent images, and the same underlying blackmail.
4ggr0 7 hours ago [-]
maybe try to view this topic with a bit more criticality. i just quickly googled some keywords and am pasting the very first search entry so you get an idea:
> One fake video, which she claims was sent to 21 men, depicted her being gang-raped
i think you're taking this topic lightly because you just assume that it's not a big deal. try to keep in mind that people's mental health and with this their life is at stake.
as with lots of things, the problem is not the tech itself, but the existence of men. it's not all men, but it's usually men. not sure how we'll solve this issue.
Peritract 11 hours ago [-]
The answers to those questions have been clear for a while; it approaches concern trolling to keep on pretending to ask them in wide-eyed innocence.
Yes, revenge porn is very effective at causing harm, even though it can be generated.
No, because 'plausibly deniable' has never worked for social consequences and shame.
UncleMeat 2 hours ago [-]
Yes. You can go speak to some high school (or even middle school) girls who have had AI generated porn made of their likeness and shared with their classmates. Even though everybody knows that it is fake it is still humiliating, especially for a young person who is likely already self conscious about their body and sex.
AlexCoventry 18 hours ago [-]
> There's only one highly monetizable use for AI video generation
Yeah, marketing. Which is a huge market...
duskwuff 17 hours ago [-]
There are others! They're just all horrible and generally revolve around weaponized misinformation - personalized scams, for instance.
bandrami 16 hours ago [-]
Oh right. There's a bunch of panicky news stories in India about that right now. Fake video calls from your nephew in the UK or whatever needing money for an emergency
coderenegade 19 hours ago [-]
I for one can't wait for ChatGPT-style sexting to become a thing.
It's not just dirty talk. It's a whole new paradigm in verbal filth.
On the topic of sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf etc. all casually riding into a generic medieval town together, and if you showed that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.
pjc50 13 hours ago [-]
> all video and image testimony becoming unreliable
This is a "seismic shift" in the sense of the Big One hitting California. The knock on effects of trust erosion caused by AI are going to huge and potentially unrecoverable.
bandrami 19 hours ago [-]
I mean, you just outlined why it won't be a seismic shift: the only way the videos reliably stay on-model is if that model violates someone's copyright. And then when the movie is made the output itself isn't copyrightable (the ultimate arrangement may be but no individual frame is).
topherPedersen 19 hours ago [-]
I never used Sora to watch content, but there was a guy on TikTok that used to post these great Sora generated videos that I really liked. Honestly, I was kind of surprised to hear that they were shutting this app down today.
onepunchmob 9 hours ago [-]
I always believed that the Sora app wasn't exactly a product but more of a way for OpenAI to bulk-create a bunch of videos from the world's creative minds and then spoon-feed the results back to their video-gen models
anukin 21 hours ago [-]
Moltbook was recently acquired by meta. I think it’s the same hypothesis for TikTok for ai agents or similar.
NoPicklez 20 hours ago [-]
Posting the videos to social media wasn't its only use case.
I've no doubt that content creators outside of social media were using it as well, either for their brand or other video work.
Yes we see AI reels all over the place, but that's not only what it was used for
freediddy 5 hours ago [-]
> who would ever want to scroll only AI generated videos over a combined feed?
I guess you haven't watched hours of AI cat videos cheating on their husbands with bulls, or Lemons having babies with strawberries and fighting over custody of the child. It's absurd, it's stupid and I know it's a waste of time but I have to admit that it amuses me. I'm quite sure there are millions like me that just want some downtime to relax at the end of the night and end up watching slop like this.
chaostheory 7 hours ago [-]
There was a lot of pseudo porn. I’m not sure exactly what the prompts were to generate them, since you could hide the original prompts. I’m not sure why they didn’t use Grok instead, so it leads me to think they were trolling
echelon 22 hours ago [-]
> The thing that didn't make sense with this app: who would ever want to scroll only AI generated videos over a combined feed?
It was legitimately fun until the IP guardrails came up and we couldn't do anything with the characters and culture we know.
If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.
tantalor 22 hours ago [-]
> look at US top videos on YouTube any given day
I'd rather eat poison
echelon 22 hours ago [-]
We can have that discussion, or we can have the more interesting discussion of just how much big corporate intellectual property, franchises, and brands have their hooks in pop culture.
Big IP is strong arming OpenAI, Suno, and all the rest.
It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands and IPs at a fast enough rate to displace the lack of being able to use corporate IP.
I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.
People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.
People want to generate IP.
edit: clarity
no_wizard 20 hours ago [-]
Personally I’m glad that big IP came in and smashed the AI companies like this. They’ve been relentlessly ripping off smaller creators for some time now.
It sets a precedent for those creators to now also hold these companies responsible. That’s not a bad thing under the current legal system.
Also, seeing genuine original creations created with AI assistance is much more interesting to me
oorza 19 hours ago [-]
> Also, seeing genuine original creations created with AI assistance is much more interesting to me
The great disappointment about how all of this is marketed is that what AI should be good at doing - enhancing a tiny budget - is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange, I want some weirdo's fantastical horror movie that he could never get financed, but was able to green screen and use AI to generate everything. I don't want a goofy top 40 country song full of silly lyrics, I want musicians to use AI to generate new sounds as part of composition.
In the same way that there's a difference between vibe coding and using a coding assistant...
NateEag 14 hours ago [-]
> I want musicians to use AI to generate new sounds as part of composition.
As a onetime semi-pro musician, with decades of live performance and sound design experience:
I would rather burn my beloved instruments publicly and pee on the fire.
phatfish 9 hours ago [-]
It depends how it is used. If it is an assist which generates sounds/samples that a musician can edit themselves, that seems fine. But spewing out a final form track from a prompt would just be slop.
Integrating AI with existing tools to improve productivity is harder and requires effort and investment...
NateEag 7 hours ago [-]
As one whose musicianship involved a great deal of generating sounds and samples myself, via modular synthesis and the occasional use of a programming language for DSP, I assure you I find that idea of using genAI for an assist on that front offensive.
Could you use the bullshit machines to generate sounds that were nuanced, musical, and original, with enough time and effort?
Maybe. I'm not sure original is something they can do, but it's not totally implausible.
I would strongly recommend learning to use other tools for that purpose, instead of feeding the plagiarism monstrosities.
echelon 7 hours ago [-]
The aversion people like you have for AI is uncomfortable to me.
I understand your entire world model is shaped by your past and that this machine is changing the fundamentals.
As an outsider to music, I'm excited that I have access to something I previously did not through the use of Suno and other tools. I'm excited that I can come in and just try things and not hit a skill wall or quality barrier that would cause me to quit with the limited time and effort a working adult has. It's something I've wanted to do for a long time, but just never had the time for.
Attempting to learn costs thousands of hours before you can even start to feel good about it, and I don't have that time. Life is short and I'm already thinking about the end.
I used to be sympathetic to folks with your view, but now that programming and engineering are impacted by this - I'm in the crosshairs too. I'm subject to the same forces.
I've decided I love this tech even more. Claude Code is a tool, just like all of these other tools.
This rising tide of capabilities is so awesome. This is the space age stuff I dreamed about as a kid, and it's real and tangible.
So no, I won't restrict myself to your set of pre-approved tools. I'm going to have fun and learn my way.
And it is fun.
You can keep having fun the way you like to. What other people do shouldn't be ruining the fun you have, and if it is, then you should reevaluate why you do it.
rustystump 14 hours ago [-]
I think he meant more like a synth. You could take recordings and process them using ai. At least this was my takeaway
NateEag 7 hours ago [-]
I spent years deep in modular synthesis, making my own patches, sounds, and effects processors then using them to perform music.
Taking away the precision, control, and serendipity afforded by modules and cables, or a programming language, and telling me "Just describe what you want and the plagiarism machine will spit out whatever correlates with that description on average" would destroy everything I love about synthesis.
rustystump 3 hours ago [-]
You're arguing against a person who isn't there. I have also done similar, and my mind was not thinking specifically of prompting the whole output. I think people have this kneejerk reaction to anything that isn't total negativity about AI in the creative space. It is only a tool.
KaiserPro 11 hours ago [-]
> Big IP is strong arming OpenAI, Suno, and all the rest.
> It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands
The problem is, to create a brand, you need to be able to protect it against rivals either ripping you off or diluting it.
The same mechanism that protects "big" IP also protects everyone else, even the small players.
> they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses
They already do that for music. But the issue is this: if we want culture, we need to find a way to pay for it. Is it possible for a bunch of mates to make enough money to live on playing in a local band? Not really. They can only really make money if they either have a viable local gigging scene, or a large enough online following to sell merch/Patreon.
The big IP merchants were quite keen on videogen, because they sense that it's possible to cut out the expensive artists. If they don't have to pay actors, writers, and artists, then it's way more profitable for them. This is part of the reason why AI hasn't been hit with the Napster ban hammer.
I think the other thing to remember is that creating good IP is hard, and you can't really just pull it out of your arse after 5 minutes. The original seed takes a long time to refine, test, and evolve. Even the half-arsed sequels require work.
array_key_first 21 hours ago [-]
Pop culture is a fickle beast. Genuine pop culture is community made, not corporate made, and it can't be bought and sold like traditional markets. It's one of the few areas of life where nobodies can become somebodies, and corporations hate this.
Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.
pjc50 13 hours ago [-]
Maybe, but the Sora shutdown comes immediately after reaching a deal with Disney to use their IP. Which might have solved that problem.
jrflowers 22 hours ago [-]
> People wanted to use Sora for about a week. Then they lost the ability to generate IP.
Or the novelty wore off in about a week, and then after that it also became harder to generate videos of baby yoda at Westboro Baptist Church protests
toss1 19 hours ago [-]
Indeed!!
If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.
Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.
toss1 3 hours ago [-]
Weird. No activity or response to an obscure post beyond a couple of upvotes. Then, the next day, a brigade of no-engagement downvotes. IDC, but it seems like some corporate image management trying to hide negative takes on Google properties. Sheesh
praisewhitey 21 hours ago [-]
>If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.
Where can I get this data?
GorbachevyChase 19 hours ago [-]
A theme I have noticed in content oriented towards young children is a very heavy use of probably unlicensed depictions of famous characters from popular franchises. Is Nintendo collecting a royalty from “it’s raining tacos“? Probably not.
ipaddr 19 hours ago [-]
Top videos are Mr Beast and other youtube personalities.
jinushaun 19 hours ago [-]
Only because they promote it. The default experience for a new user on Youtube is to show you content from creators with 5M+ subscribers. It’s a positive feedback loop.
I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…
> The thing that didn't make sense with this app: who would ever want to scroll only AI generated videos over a combined feed?
It's not an exaggeration to say that this is how millions of people use Facebook. It might not be how most HNers use it, but create a new account and you will be absolutely funneled toward prolific producers of video-based AI slop.
But the problem is that FB and Tiktok (and to a smaller extent, YT Shorts) have cornered the AI video doom scroll market, and no one really seemed to be inclined to use Sora and related models for anything more creative. Which probably made it not worth subsidizing.
ionwake 18 hours ago [-]
Man, I find the HN crowd so cross and fickle sometimes. I think it's just because when companies get a bad rep it affects how people view the products? I'm autistic and tend to focus on the tech.
Sora (whatever that means) was one of the most astounding demos I've probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes months later everyone can do it and is bored by it and has strong opinions about what is right for society or not.
But it was a monumental piece of tech, and I personally (clearly incorrectly) think the top comments should be appreciative of the release and the impact.
Personally I think the lack of nudity destroyed the adult market. But I don't know enough, tbh.
Gigachad 12 hours ago [-]
Sora was a bit like seeing a new weapon being demoed. No matter how much engineering went into it, the overwhelming feeling was
“this is bad for society and the consequences will be massive.”
So far that’s been exactly it. Now AI generated videos are primarily used to scam, deceive, and ragebait.
tefkah 9 hours ago [-]
Exactly! While there may be some neutral to slightly positive uses of this tech (haha, funny video), I can only really see the evil uses of it: scams, misinformation, propaganda, easily created by anyone at massive scale.
I really don't see the argument for this tech to be any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND are happy about the people who are in control of this tech. which I guess captures a larger part of the HN crowd than I'd hoped
bit-anarchist 3 hours ago [-]
My perspective is different: we never could trust videos and images in the past. Our hope, back then, was that the cost of faking such media would remain permanently high and would deter people from trying. But this was always wishful thinking, especially in the age of information and media.
GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to rebuild our foundations of trust on something that relies less on the good will of random authorities and more on something objective.
Also, I haven't really seen anyone celebrating the large corporations who control AI tech. Could be simply the people I'm involved with, but most AI enthusiasts I've seen are more about, at least, open-weights AI models.
trgn 4 hours ago [-]
Already they've made YouTube unusable.
olalonde 10 hours ago [-]
Disagree. It's also used for high quality entertainment.
I think Sora is an excellent way to see how people's beliefs clash with reality. Even in this post, I see people likening Sora to unveiling "a weapon", it filling them with "bland dread", or comparing it to creating "killing robots". But now that Sora is being shut down, what impact did Sora actually have on society, other than getting a couple of people to waste their time making some funny meme videos? Did any of those negative externalities actually play out?
If you are autistic, I feel that it causes you to see reality more accurately than most here on this thread.
gordonhart 55 minutes ago [-]
At least according to the Head of Product at X, Sora was by far the most widely used tool to create fake war videos[0] aiming to push various false narratives. Given how popular fake content is at Meta I can only imagine what they see there (if they even have anybody looking at this kind of thing).
The tech was fine/interesting for what it is. The product itself is awful, something out of nightmares. It's not an enjoyable experience for me watching uncanny-valley slop. I'm not impressed with the "creativity" of someone typing in a prompt and having a plagiarism box spit something out. The ingenuity and resourcefulness of someone actually making something is what I like. The emotion and reasons behind a work of art make it inspiring. The details of their perspective and the choices they make when creating it are beautiful and interesting.
The impact of easy AI generated video is a less certain and less secure world. You can't trust your eyes anymore because of how fast and easy it is to fake video and moments. You can't trust communications with someone because how easy it is to impersonate them over video and voice. Scams involving tools like this are already running rampant and it will only get worse. The sheer level of distrust these tools have unleashed into the world makes me wish they never existed. They have burned millions (billions?) of dollars on this when that money would have been better served going to the creators whose work they stole to build it. It's rotten.
pjc50 13 hours ago [-]
> I think the lack of nudity destroyed the adult market
As we've seen from Grok, building a system for producing non-consensual nude images of other people will get the legal and PR hammer brought down on you fairly quickly. It's just an incredibly unethical thing to do.
raw_anon_1111 18 hours ago [-]
I have gladly been paying $20/month for ChatGPT since the day web search was available and I use codex-cli every day instead of Claude and never have to think about limits.
I also use ChatGPT as my default search engine and to help me learn Spanish.
But image generation and video generation were a nice parlor trick, not useful for me except for generating icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
exodust 14 hours ago [-]
> there are people who pay $300 a month for Grok to generate AI Porn.
Did you just make that up?
Grok barely makes "M-rated" nudity, let alone porn. Musk recently claimed it can do "R-Rated content", but his post got a community note saying otherwise.
Grok has gotten a lot stricter about video from uploaded images. But it is still able to make realistic X-rated porn from AI-generated images it creates.
There are various jailbreaks that have been working for the longest time and still work; from just a brief look, half of them involve "anime borders" and "transparent anime watermarks" over videos.
4ggr0 7 hours ago [-]
dude, there was a huge scandal a couple of weeks ago about grok creating CSAM...
I'm not talking about that. Grok is really strict now about what you are allowed to do with uploaded pictures, but there are well-known techniques to get it to create X-rated realistic video using pictures it generates from scratch.
jazzyjackson 17 hours ago [-]
Interesting to hear your perspective. There was no shock and awe to me, ChatGPT changed what I thought was possible with computers, and everything else as far as photorealistic generation and then video just seemed inevitable. I decided to abstain from watching any video I know is AI, but of course now it’s mixed in with television and advertisements. I’ve started data hoarding old TV shows thinking it will be nice to have something to watch when the internet goes down.
whywhywhywhy 5 hours ago [-]
It's not that. The demo was impressive, but when it became widely available the reality never lived up to what was demoed, and it later came out that some of the shorts they did with directors involved a lot of editing anyway.
nektro 16 hours ago [-]
All AI video will be remembered as horrific, and as a showcase that its creators had no ethical foresight.
tefkah 9 hours ago [-]
"The [AI researchers] have known sin, and this is a knowledge which they cannot lose."[0]
Which is what I would hope would happen, but they're probably fine not thinking about the consequences of their actions, looking at their 7-figure salaries.
"I'm autistic and tend to focus on the tech" is not a justification, and I would advise to stop using it as such.
Would you apply the same to killing robots? Hey, the Hyperthrasher 2000 mauls people and shreds them to pieces, but it's the most impressive TECH demo I've ever seen!
ionwake 9 hours ago [-]
Totally disagree that this is what would happen. The Hyperthrasher 2000 breaks through my door to eat me. First time I've seen a man-made, human-eating werewolf bot.
Me: damn that’s cool
…………AAAAAHHH HELP ME
jjulius 8 hours ago [-]
>Totally disagree this is what would happen.
Doesn't matter if you agree that would happen, the analogy is valid - you're essentially admitting that you're ignoring the negative impacts of the tech for the sake of how impressive it is.
ionwake 7 hours ago [-]
I'm not sure you understand the conversation we are having.
I have said about 3 times that I am solely judging the tech by how impressive it is technically.
I have no idea who you are arguing with.
diputsmonro 2 hours ago [-]
You can feel that way if you want, but to answer the confusion you posed in your initial post, most people do consider all aspects of a technology rather than just focus on the technical achievements. We live in a society of billions of humans interacting with each other, and whether or not you personally care or understand those interactions, they still do exist and still impact all of our lives. A particular technology may be cool, but if it threatens the lives of me or my family, I'm going to have a negative view of it.
Nothing exists in a vacuum and the way technologies affect people living in the world is a fundamentally important aspect of the technology itself. To ignore them would be like celebrating a cool new engine design but overlooking the fact that it has a tendency to explode and kill everyone in the car. If the primary effect of a technology is human suffering, then it isn't cool!
ionwake 21 minutes ago [-]
The T-800 is cool tho
ccppurcell 14 hours ago [-]
The tone of a discussion is shaped as much by who doesn't comment as by who does. A product comes out and a lot of people are excited by it, so they comment accordingly. People who aren't, don't, unless there is something outrageous about it. Maybe there is in this case, but the point still stands that when the product fails, it's a very different set of people who feel compelled to comment. And this is totally expected, because "that's a shame, I liked it" doesn't seem to contribute to the discussion. Neither does "this product doesn't excite me", even more so because that's kind of the default assumption. So an online community or institution or publication can seem very fickle, especially when the commenters are pseudonymous.
Cider9986 17 hours ago [-]
Sure the tech was cool, but people already hated youtube shorts when they were added. I think the "HN crowd" is probably the type to dislike short form content, so that might be where some of the dislike comes from.
platevoltage 16 hours ago [-]
The iPhone X's feature that approximated your facial expressions on a 3D character using the facial recognition sensors blew my mind as well.
It was a party trick. I can't remember the last time I touched it. That's what Sora is, or was.
aDivineDragones 13 hours ago [-]
While Apple's use of the tracking was never more than a party trick, the foundational technology they created for it is currently the best low-budget tracking solution and is heavily used in VTubing (online streamers who use an avatar with live facial tracking instead of showing their face via webcam).
cpt_sobel 11 hours ago [-]
Are these the Memojis, or whatever Apple calls them these days? Pretty much every iOS update mentions them near the top of the list and I still have no idea where to find / create / care about them...
sethops1 9 hours ago [-]
It's like when Apple announces hundreds of new emoji every update. Like great, those will look real nice next to the six emoji I ever actually use.
asnyder 16 hours ago [-]
I know the developer who worked on it took pride in the outcome. Hopefully they added some additional characters to keep it fresh.
platevoltage 15 hours ago [-]
To be fair, it was really cool. It was also a tech demo with no real practical application.
tikotus 13 hours ago [-]
It was really cool, unlike my phone after doing it for 5 minutes!
There were social games that used it as a feature, and it was fun when it worked, but it had to be disabled soon because it drained the battery so fast.
password54321 24 hours ago [-]
"OpenAI’s top executives are finalizing plans for a major strategy shift to refocus the company around coding and business users" - WSJ
It is the last narrative that some of Wall Street believes and has enough mediocre or senile coders to promote it.
That narrative will implode like Sora later this year.
afavour 19 hours ago [-]
No, AI is truly useful in software engineering. I was a skeptic until I started using it. No, it isn’t going to solve every problem out there, but it’s a force multiplier.
rf15 14 hours ago [-]
You pay understanding for speed. How acceptable that trade is depends on you and the task in front of you. I cannot recommend it as a general solution.
elktown 13 hours ago [-]
This field doesn’t do well on long-term thinking. Even if all this turns out to be a net loss, it will be reinterpreted as a win and just an opportunity for even more of the same solution. There are numerous examples of this, e.g. the OOP craze. Tech is a stock market of ideas and HN is a trading floor. The “line goes up” logic applies - not merit.
pjc50 13 hours ago [-]
Describing OOP as a "craze" is incredibly out of touch. It's been a thing for, what, three decades?
foobiekr 7 hours ago [-]
You may not recall the crazy era of OOP, where people would go bonkers with massive object trees trying to objectify everything, and use operator overloading to do (dumb) things like adding a control to a window with +=.
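For anyone who missed that era, the pattern being complained about looked roughly like this (a hypothetical sketch in Python terms; the names `Window` and `Control` are illustrative, not from any real toolkit):

```python
# A caricature of the OOP-craze style: everything is an object, and
# operator overloading gets pressed into service for non-arithmetic
# jobs, like adding a control to a window with +=.

class Control:
    def __init__(self, name):
        self.name = name

class Window:
    def __init__(self):
        self.controls = []

    # The "clever" bit: += as a cryptic synonym for add_control().
    def __iadd__(self, control):
        self.controls.append(control)
        return self

w = Window()
w += Control("okButton")      # reads like arithmetic, is actually mutation
w += Control("cancelButton")
assert [c.name for c in w.controls] == ["okButton", "cancelButton"]
```

It works, but a plain `w.add_control(...)` would say the same thing without the misdirection, which is roughly the complaint.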
Peritract 11 hours ago [-]
OOP is great. "OOP is the one perfect paradigm for all coding" was the craze.
elktown 13 hours ago [-]
I'm sure I'm not the first person you've seen hinting at OOP (and all that came with it) having been hyped up beyond its merits.
the-smug-one 13 hours ago [-]
There certainly was an OOP craze, that's not out of touch to talk about.
That's just false. I've spent a disproportionate amount of time "understanding" awful tooling like Gradle and npm. There's no value in it if you're not an infra engineer. It would take me a couple of days to manually restructure my hobby app; now I can just say "extract this into another workspace/subproject" and be done with it in minutes. And that's just one example.
rf15 6 hours ago [-]
I agree with this sentiment. I just also see AI-driven development in core business logic, where truly understanding what is going on is essential and yet completely disregarded.
phist_mcgee 12 hours ago [-]
If I never have to debug a gradle file ever again, it's all worth it.
afavour 7 hours ago [-]
You might say the same about garbage collected programming languages. It’s an acceptable tradeoff in a lot of scenarios. Same goes for AI.
skwirl 19 hours ago [-]
It is wild that people are still posting this kind of thing in 2026. Some folks really are living in a different world.
wolvoleo 18 hours ago [-]
I liken it to VR. That was a big hype before AI and while I really love the tech (I have 5 headsets) I could have told anyone that the expectations were insane. The investors truly believed that in 2-3 years time everyone would be doing everything with a big headset on. It was dragged into lots of situations where it didn't belong.
Then of course the hype collapsed and now even the usecases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and visualising complex designs while 3D designing.
I see the same with generative AI and LLM. It's really good with programming. It's definitely good at making quick art drafts or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good for everything that it's trying to be sold as. Just like the VR craze they're dragging it by the hairs into usecases where it has no business being. A lot of these products are begging to die.
For example, an automation tool driven by natural language. For that it's a disaster: it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not very good at meeting summaries, especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear but a realignment to the usecases where it actually adds value, yes I hope that happens soon.
Marazan 10 hours ago [-]
> It's also not very great at meeting summaries especially those where many speakers are in a room on the same microphone.
It is astonishingly poor at this. My intuition was that it should be good at it (it is basically a translation problem, right? And LLMs are fundamentally translation systems), but the practical results are so poor: not just misidentifying speakers (frequently saying PersonX responded to PersonX) but drawing completely opposite conclusions from what was actually said.
I'm genuinely intrigued as to what approaches have been taken in this space and what the "hard problem" is that is stopping it being good.
utopiah 15 hours ago [-]
Ugh... a balanced take, this isn't appropriate for social media! /s
11 hours ago [-]
bpodgursky 17 hours ago [-]
It's because programmers are willing to pay thousands of dollars a month for a product commensurate with the value it provides, aka AI coding.
Generating pointless AI videos for pocket change or ad revenue is a loser in comparison.
embedding-shape 8 hours ago [-]
Thousands? Maybe not, but hundreds? Yeah. For my freelancer/contracting gigs, it's easily worth $200/month to be able to ask "How come X is like that, and what change led to Y being Z?", wait 20 minutes, and then get an answer that jumpstarts understanding a completely new codebase. If AI/LLMs never evolved beyond their current skills and usefulness, I'd still be happy to pay $200/month for this.
However, I don't know a single developer who pays "thousands of dollars a month", not sure how you'd end up like that.
bogzz 8 hours ago [-]
I most definitely am not.
drzaiusx11 6 hours ago [-]
From my vantage point, AI consumption is being led by tech leadership more so than actual in-the-weeds programmers themselves. HN just happens to include more folks at the intersection of leadership and individual code contributor.
The top down push for AI is in line with the age old traditions of replacing highly skilled and highly compensated trade workers with automation. The writing is on the wall if folks care to look; many just don't want to. This has happened 1000 times before and it'll keep happening in the name of "progress" in capitalist systems for as long as there are "inefficiencies" to "resolve." AI is meant as our replacement, not as an extension of our skill as it happens to align with today.
It's increasingly obvious that the next phase in the evolution of the average programmer role will be as technical requirements writers and machine-generated-output validators, leaving the actual implementation outsourced to the machine. Even in that new role, there is no secret sauce protecting this "programmer" from further automation. Technical product managers eventually fall to automation too, given enough time and money poured into automating the translation of fuzzy, underspecified ideas into concrete bulleted requirements, where they can simply review the listed output, make minor tweaks, and hit "send" to generate the list of Jira-like units of work to farm out to a fleet of agents wearing various hats (architect, programmer, validator, etc.)
The above is very much in progress already, and today I'm already spending the majority of my time reviewing the output of said AI "teams", and let me tell you: it gets closer and closer to "good enough" week by week. Last year's models are horse shit in comparison to what I'm using today with agentic teams of the latest frontier models (Opus 4.6 [1m] currently, with some Sonnet.)
Maybe we're at a plateau and the limitations inherent in GenAI tech will be insurmountable before we get to 100% replacement. But it literally won't matter in the end as "good enough" always prevails over the perfect, and human devs are far from perfect already.
I have been producing software (at FAANG scale) for several decades now, and I've been closely monitoring GenAI systems for coding specifically. Even just a few months ago I'd get a verbose, meandering sprawl of methods and logic scattered with the actual deliverables outlined in the prompt, sometimes even with clear disregard of the requirements laid out, or "cheating" on validation via disabling tests or writing ones that don't actually do anything useful. Today I'm getting none of that. I don't know what changed, but I somehow get automated code with good separation of concerns, following best practices and proven architectural patterns. Sure, with a bunch of juniors let loose with AI you still get garbage, but that's simply a function of poor delegation of work units. Giving the individual developer and the AI too much leeway in the scope of changes is the bug there. Division of work into small enough units is the key, and always has been, for the de-skilling portion of automating away skilled human labor. We're just watching Marxist theory on capitalist systems play out in real time in a field generally thought to be "safe." It certainly won't be the last.
muskstinks 4 hours ago [-]
Whats your setup for the agent team?
SecretDreams 19 hours ago [-]
To be fair, LLMs are exceptional at coding and they very well could displace some jobs. But you'll always need people at the helm who know what they're doing too.
drdeafenshmirtz 19 hours ago [-]
Also that developers make for good early adopters for tech
SecretDreams 6 hours ago [-]
This is very true and an underrated comment.
muskstinks 4 hours ago [-]
Yeah, they're called PMs, and they already exist. These people normally create the design documents, the flows, etc., and then have to wait for the dev team to implement them.
So a good PM running 1-3 teams will only need 1-3 agentic AI teams instead.
k3k3 19 hours ago [-]
[dead]
bigstrat2003 18 hours ago [-]
> To be fair, LLMs are exceptional at coding
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
dbbk 7 hours ago [-]
I have 20 years of experience and I don't handwrite any code anymore. Opus does everything, and it only needs a bit of steering occasionally. If you can give it guardrails (i.e. a pre-existing design system) and ways to verify its output (i.e. enforce TDD and use Chrome to visually verify), then it gets it right basically every time.
wiseowise 12 hours ago [-]
> Any decently skilled human blows them out of the water
No, by far no. I'm by all accounts a "decently skilled human", at least if we go by our org, and it blows anyone out of the water with some slight guidance.
And the most important part: it doesn’t get tired, it doesn’t have any mood swings, its performance isn’t affected by poor sleep, party yesterday or their SO having a bad day.
volkercraig 18 hours ago [-]
The thing is, LLMs produce better-quality one-shots than any of the products that get returned from overseas ultra-budget contractors in India or SEA. I don't know what that means for Western devs, but I can tell you that the Fortune 500 I work for is dialing back on contracting and outsourcing because domestic teams can do higher-quality work faster.
MrScruff 12 hours ago [-]
Turns out there are whole categories of software where 'extremely fast and good enough' is what matters, even for skilled software developers.
Syntaf 6 hours ago [-]
I’ve been a full stack developer for 10+ years now and I completely disagree.
Modern models like Opus / Gemini 3 are great coding companions; they are perfectly capable of building clean code given the right context and prompt.
At the end of the day it’s the same rule of garbage in -> garbage out, if you don’t have the right context / skills / guidance you can easily end up with bad code as you could with good code.
phist_mcgee 12 hours ago [-]
Am I an untrained human if I believe that Claude Opus 4.6 produces generally better code than I do in most circumstances?
Even with years as a principal engineer at a company with high coding standards and engineering processes?
rubzah 11 hours ago [-]
Maybe not untrained, but you work on some easy, boring shit. That may be true for a lot of developers, I don't know.
phist_mcgee 10 hours ago [-]
What do you reckon? Do you think that is true for me and thousands of others, or that your opinion on this is too narrow and rigid?
paxys 22 hours ago [-]
How are they going to claw back the market from Anthropic though?
janalsncm 21 hours ago [-]
Step 1: make a coding product which is better on cost/quality/speed. Probably need to choose two, so redirecting compute from dumb ai videos to coding makes sense.
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
yoyohello13 20 hours ago [-]
Step 3: use politicians to jam Anthropic up in legal battles.
SecretDreams 19 hours ago [-]
This is actually step 1
lossyalgo 21 hours ago [-]
Imagine all the money they can save on Sora which surely cost them way more than regular LLM usage, that they can now invest into suave Superbowl ads trash-talking Claude.
I also wonder if they got the $1B from Disney. Was that even a paid-for deal? Or just another "announced" deal? No article I found mentions anyone signing any paperwork, which seems to be typical of AI journalism these days. Every AI deal is supposedly inked, but if you dig deeper, all you find are words like proclaimed, announced, agreed upon.
GenerWork 21 hours ago [-]
I believe the $1b is apparently no longer coming, because it was basically dependent on Sora being an actual product that actual people can use, which isn't the case anymore.
drdeafenshmirtz 19 hours ago [-]
"Clawing back" was what the Open Claw acquisition was for ;)
flashman 17 hours ago [-]
Not enough money though. Not hundreds of billions of dollars.
MyFirstSass 23 hours ago [-]
[flagged]
bibimsz 22 hours ago [-]
Software engineers have spent the last 40 years automating away other people's jobs. The discomfort only seems to start when the automation points inward.
al_borland 21 hours ago [-]
I want to make people’s jobs easier and more interesting, I never want to make them redundant.
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
sensanaty 11 hours ago [-]
Have they? I keep seeing this little snippet of wisdom being thrown about everywhere in these AI discussions as a gotcha, but to me it seems like moving jobs into dirt cheap 3rd world countries with slave labor is the biggest culprit for job loss than any kind of automation from software.
If anything, software engineers have spawned uncountable numbers of jobs that never would've existed before, is what my intuition tells me.
skydhash 21 hours ago [-]
Haven't mechanical engineers done the same thing (steam engines, trains, ...)? All of applied science is about using knowledge to remove tediousness (and now adding it back). A lot of jobs have been removed.
bibimsz 20 hours ago [-]
Model T factory workers are anti-worker
bibimsz 22 hours ago [-]
[flagged]
23 hours ago [-]
19 hours ago [-]
Sir_Twist 23 hours ago [-]
> OpenAI launched Sora last September, aiming to expand its dominance among consumers by creating a TikTok-style social feed that allowed users to share AI-generated content with one another.
I never understood what this app was about. TikTok (and I would argue most modern social media platforms) isn’t really about sharing things with friends, it’s about entertainment. Most people watch TikToks and YouTube videos because they are entertaining. Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
mjr00 23 hours ago [-]
> Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.
I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.
My "favorite" part was how the post-generation checks would self-report. e.g. It was impossible to make a video of an angry chef with a British accent because Sora would always overfit it to Gordon Ramsey, and flag its own generated video after it was created!
> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.
There were ~trends similar to what appeared early on TikTok.
For example, early TikTok had the Boss Walk.
Sora had no big content trends; instead it split into many micro-trends within some established ~universe.
jazzyjackson 17 hours ago [-]
Well, that stuff goes viral because it’s fun to imitate, all the dances and challenges provided a flywheel to get people creating more content, it’s fun to make the video.
If I see an AI video and my options to participate are… prompt another AI video? What’s the point
pjc50 13 hours ago [-]
You're supposed to press the button to receive dopamine. It's all just narrower and narrower Skinner boxes.
NewLogic 8 hours ago [-]
An AI video trend on Instagram has been Han from Tokyo Drift with different cars. People still want to share those on the platforms they are already locked into with their friends.
small_model 21 hours ago [-]
Not good; seems like they are running out of cash and partners are abandoning them. They had no real moat, to be fair. Anthropic is eating their lunch in enterprise, and other players have cash flows from other businesses (xAI, Google).
this_user 19 hours ago [-]
They wasted their first-mover advantage by focusing on what amounts to consumer toys like Sora instead of actually useful products that go beyond simple chatbots.
I think they are in serious trouble, especially with the size of their cash burn. Their planned IPO could easily turn out to be their WeWork moment where the bottom suddenly falls out on the valuation if they cannot make their operation look more like a real business before investors lose confidence.
coffeebeqn 13 hours ago [-]
What happened to AI accelerated novel materials science and medicine? Meh let’s do TikTok slop instead ?
k3k3 19 hours ago [-]
Agreed. They are pretty close to distress IMO. This cash-injection gets them to where, an IPO? I dunno, people might be spooked by then.
Will be interesting to see.
zhoujianfu 20 hours ago [-]
I had a sense things may be turning against them when my accountant asked me last week if I’d like to participate in their new round ($750B premoney) with no carry. How am I suddenly blessed with such exclusive access, at no cost?!
pjc50 13 hours ago [-]
"Would you like to hold this bag for us, sir?"
brcmthrowaway 20 hours ago [-]
Are you an accredited investor?
TheOtherHobbes 21 hours ago [-]
Yes, I'm reading this as a sign of strategic failure and decline.
ChatGPT is an interesting product - I like it for certain things - but after last year's PR scramble almost all the news out of OpenAI is a disappointment, with hovering hints of retrenchment.
dyauspitr 20 hours ago [-]
I still like it as a general search engine and everyday LLM over Gemini. Maybe I’m just used to the style.
apsurd 20 hours ago [-]
Agree, it's becoming my new default search engine. But it is actively getting worse in a distasteful sense:
Want to hear the one TRICK most people forget when doing X...?
zeroonetwothree 20 hours ago [-]
Yes, every response ends with that. Why did they set it up that way?
andoando 17 hours ago [-]
To try to drive continued usage. They no doubt A/B tested the shit out of this and saw it gets higher response rates.
ngcazz 14 hours ago [-]
It's quite transparently a trick to prolong engagement with the app, just as pretty much any internet product which aims to maximize the LTV extracted from the user base.
Saline9515 19 hours ago [-]
I would suggest editing the default prompt to tell it to avoid engagement bait.
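For what it's worth, a custom instruction along these lines (the wording here is just an illustration, not a known-good recipe) seems to curb it:

```
Do not end answers with follow-up hooks, teasers, or
"want to hear the one TRICK..." style questions.
Answer the question, then stop.
```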
dyauspitr 16 hours ago [-]
Honestly, it's phrased as bait, but I've learned a fair bit from those and end up learning a lot more from the session overall.
samrus 19 hours ago [-]
I far prefer Perplexity for that. The fact that it always cites its sources is great. And it has a search-bar widget for Android and search-bar integration for Firefox, so it's pretty easy to use.
SecretDreams 19 hours ago [-]
> XAI
Kind of insulting to lump google in with XAI? Like, is anyone even using XAI other than backwater government agencies?
Shank 15 hours ago [-]
> Like, is anyone even using XAI other than backwater government agencies?
xAI doesn't have "content moderation" around adult content, so that usage is quite popular.
SecretDreams 9 hours ago [-]
That is a lot of people ending up on a list... Gross.
small_model 14 hours ago [-]
Yep, I use Grok and Claude mainly. Grok is integrated into x.com and Teslas, so potentially hundreds of millions of people.
SecretDreams 7 hours ago [-]
That's a lot of people stuck with an inferior product and a lack of choice forced down their throats.
Crazy how far the hype on this product dropped. When only the paid influencers had access, we were told "it's like a reality simulator," but when it became widely distributed it didn't deliver anywhere near that hype. Look at its front page today and it's identical to the Grok video-gen front page: very underwhelming.
bschwindHN 20 hours ago [-]
Good riddance. AI video generation is not something humanity needs.
iugtmkbdfil834 20 hours ago [-]
I don't really disagree, but the proper way to think about it is that with Sora, some of that ability was democratized. Now it will be available only to the rich and powerful (and nerdy). Humanity may not need it per se, but removing that option does not automatically make things better; not if the removal applies only to a portion of the population.
bschwindHN 19 hours ago [-]
Nah, that's not the "proper" way to think about it, that's just your opinion.
As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).
Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
serf 19 hours ago [-]
>Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
total disagree.
if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.
Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes, rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.
diputsmonro 2 hours ago [-]
Even if that were true, the little quirks of private large scale video models would be different than the public cheap ones. If anything, it would just give the public a false sense of being able to detect AI videos and overlook the more subtle flaws of privately made ones.
bschwindHN 18 hours ago [-]
I think you're overestimating the average person. We can give people direct, scientifically-backed evidence of something, and there will still be significant groups of people fervently denying it.
The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.
iugtmkbdfil834 19 hours ago [-]
So only rich people can propagandize? How is that better?
bschwindHN 18 hours ago [-]
There are a lot of things it seems only rich people can do and get away with. It doesn't mean I support it or want them to do it, but that seems to be the reality.
If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"
The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.
bigyabai 19 hours ago [-]
OpenAI never gave the community the weights. They always intended to monopolize it for corporate extortion, they didn't "democratize" shit.
I really don't think using that term is appropriate when there's a multi-billion-dollar American megacorporation involved in the activity in question.
mrguyorama 4 hours ago [-]
HN loves to abuse the term to pretend it's somehow a good thing when one human being is in control of something.
Peritract 10 hours ago [-]
> with Sora some of that ability democratized
No it didn't; OpenAI had control.
Saying Sora democratised video generation is like saying that landlords democratised home ownership.
Forgeties79 19 hours ago [-]
Video production is already wildly democratized. AI did not lower the barrier to entry. Digital tools already did most of the legwork.
15 hours ago [-]
kindkang2024 19 hours ago [-]
[dead]
iainctduncan 24 hours ago [-]
You know they are burning money dangerously when they decide to focus on the area in which they are getting their asses kicked...
tyleo 21 hours ago [-]
Yeah, I thought it was strange too. I thought OpenAI could meaningfully differentiate by being something more like a “Social Media AI”.
I feel like they are sailing into a red ocean with what look more like copycat tactics than innovation (e.g., Codex v Claude Code; Astral v Bun)
foolfoolz 22 hours ago [-]
as a sora user:
- sora was not great at making what you asked
- i probably got 3 good videos out of 100 gens
- every video that was good needed editing outside of sora (and therefore could not be shared within sora)
just my experience
jimmytucson 22 hours ago [-]
Pretty much mirrors my experience using GPT to generate images creatively. I tried to generate an image to accompany a Robert Frost poem, and it made something... plausibly related. But not what I was describing. I spent the next 90% of the time getting it 10% closer to what I wanted, but it never got all the way there.
I've given it different levels of open-endedness ("give this flow chart an aesthetic like this mechanical keyboard," "generate an SVG of this graphic from a 70s slide show"), but it never looks quite like what I have in mind.
In the end, I think you only use this stuff to generate images if you’re prepared to accept whatever comes out on approximately the first try.
TheOtherHobbes 21 hours ago [-]
This isn't a solvable problem without world models. Tokenised prompting is like stabbing a pin at a huge target in the dark. Sometimes something interesting falls out, but latent space doesn't have the definition to give most people exactly what they want.
When it does, it's more likely to be something popular and unoriginal, where the data is dense, and less likely to be something inventive and strange.
xienze 20 hours ago [-]
> This isn't a solvable problem without world models.
I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.
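Something like this (purely hypothetical syntax, sketched just to show the kind of precision I mean; no such DSL exists today) would beat prose prompting:

```
# hypothetical video-prompt DSL -- illustrative only
scene "angry chef" {
  subject  chef { age: ~50, accent: "British", mood: angry }
  camera   handheld { fps: 24, duration: 8s }
  setting  "cramped restaurant kitchen, evening"
  exclude  celebrity_likeness, text_overlays
}
```

Even a loose schema like this would give you something to diff and iterate on, instead of re-rolling an entire paragraph of prose and hoping.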
asnyder 16 hours ago [-]
Nothing stops that from happening. It just needs to be trained on that DSL. Though at that point it returns to its original form as a better autocomplete/IntelliSense :).
That will likely happen in specialized fields. We can already see tools like Figma, Mira, and others that generate functional-ish frontend components in full TypeScript with corresponding styles (that are also selectable and configurable in the interface). They're not quite as free-form, since they do load their base framework and components to ensure consistency and sanity/error-checking, etc., but even then they are in fact generating usable, modifiable components that you can engage with precisely in your normal DSL.
For video, this likely exists, or is being worked on as we speak. All specialized domain tools will go towards this model to allow those domain experts to use the tools with the precision they expect AND the agentic gains we already take for granted.
Marazan 10 hours ago [-]
If only there was some kind of formalised "language" to, as it were, "programme" the automata but alas such a concept is impossible to conceptualise.
userbinator 19 hours ago [-]
> - i probably got 3 good videos out of 100 gens
My experience with AI image generation is similar, although with a higher success rate (depending on how accurate you want the result to be); but indeed, filtering is a major part of the process.
bananamogul 20 hours ago [-]
In my experience, Sora was fantastic for what it did. Light years better than Adobe Firefly. On par with Leonardo.
A lot of YouTube content is really just talk, so it was easy to use Sora videos as the visuals while you talked over them.
However, its failure was that it watermarked everything. WTF? Leonardo didn't do that. Neither did other models. So while video gen was excellent, you always had these ridiculous floating watermarks.
yoyohello13 20 hours ago [-]
It's been interesting seeing OpenAI pivot. Snapping up popular open-source devs, siccing their bought-and-paid-for politicians on their competitors.
They probably see how much Anthropic is absolutely crushing them in developer mind share (i.e., people who buy tokens) and want a piece.
p0w3n3d 3 hours ago [-]
Sora shocked people, but the real effect was, and still is, that people no longer believe what they're shown. How many fingers does the Israeli PM have, is the Russian dictator alive, etc. Is this good? Critical thinking, maybe... the uncertainty, not really.
ynx0 2 hours ago [-]
I think that it’s better that everyone collectively realize that video is no longer default-trustworthy in a widespread manner, if the alternative would have been the public finding out only after a long cycle of misuse by high-level actors and subsequent whistleblowing à la PRISM/Snowden.
ctdinjeu5 20 hours ago [-]
To focus on code generation - arguably the easiest problem to solve.
So strange that they fell behind after leading the charge on video from Will Smith spaghetti through the spectacular launch of Sora.
Turns out anyone can get that look by appending “like an Octane render”
Beyond that, the likes of Kling and Hailuo quickly surpassed them on product, and OpenAI never even attempted text-to-3D, as if they are entirely uninterested in rich media.
OpenAI reminds me more of Meta than any other company. They're both pioneering in their space and yet are mere commandeerers (not innovators) when it comes to technology and, importantly, end-user products.
They'll also be extremely valuable, like Meta, due to their ad product and ever-growing user base over the next 10 years, and I guess by focusing on code they plan to capture a segment of the developer market à la React or Swift.
Will OpenAI release a language or framework? An IDE? I bet the chat paradigm stays for the ad product and aging user base (lol) while the exciting innovation will happen in code automation and product development - an area they are not really experts in.
19 hours ago [-]
umich2025 17 hours ago [-]
As a big user of AI video gen (my Google Veo bill last month was $130), this doesn't affect me in the slightest.
There are so many video-gen models out there, and given the cheaper Chinese models, I'm not surprised they closed this down. Besides the initial push, any marketing around video gen has always been about the Kling or Higgsfield models. There was just never a reason to use Sora.
maplethorpe 16 hours ago [-]
RIP to one of the most evil products I've seen come out of the tech industry in my lifetime.
wraptile 13 hours ago [-]
Really? A video meme generator is making your top evil products list?
3form 11 hours ago [-]
Very little potential to be used for good and quite some potential to be used for bad. I think the ratio is particularly damning, rather than the total evil.
Capricorn2481 4 hours ago [-]
You're being willfully blind to how video generation platforms like this are already being used.
lnenad 10 hours ago [-]
It's really funny how people can say these things online without giving them a second thought. There are literal weapons being produced that are killing people daily. But no, it's the meme generator that's evil.
jjulius 8 hours ago [-]
Because this is a tech forum, not a weapons forum. I'd wager that a sizeable chunk of folk decrying AI/LLMs in this manner also do, in fact, decry the same weapons you refer to. They just do it elsewhere because it's not typically on-topic here.
lnenad 6 hours ago [-]
Context is tech, I agree. Is there no tech in weapons? Palantir? Drones? Are there developers that are proud when they made the kill machine 1% more precise; more optimized?
jjulius 6 hours ago [-]
Plenty of HN threads about Palantir and drones also have people commenting about their evil.
Just because one thing is a lesser/different kind doesn't mean we can't also be vigilant about it as well.
lnenad 4 hours ago [-]
I'm not arguing that, OP said
> RIP to one of the most evil products I've seen come out of the tech industry in my lifetime.
I'm saying Sora isn't even in the top 100 of most evil products out of the tech industry.
freeplay 2 hours ago [-]
I think the evil part is putting it in the hands of the general public. The ability to create propaganda and deep fakes gives everyone a powerful tool for manipulation. The rich and powerful are going to do whatever the want, anyway. Everyone having access to that same tool doesn't make it any less dangerous.
There's nothing inherently evil about a knife. Standing outside of a high school and handing a knife to every kid walking in is pretty evil though.
zemo 8 hours ago [-]
violence at scale is often facilitated by and preceded by propaganda at scale, which is one of Sora's only applications. Certain things are obvious to normal people, like "propaganda is real, powerful, bad, and of enormous historical significance."
Sohcahtoa82 5 hours ago [-]
This is textbook whataboutism.
Yes, literal weapons are bad, too. But that's not the current topic.
magguzu 4 hours ago [-]
> one of
throw4847285 1 days ago [-]
Didn't they cut a huge deal with Disney just 3 months ago?
> A source familiar with the matter tells The Hollywood Reporter that Disney is also exiting the deal it signed with OpenAI last year, in which it pledged to invest $1 billion in the company and agreed to license some of its characters for use in Sora.
> “As the nascent AI field advances rapidly, we respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere,” a Disney spokesperson said. “We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.”
Also "exit the video generation business" seems somewhat notable, suggesting they're not just planning to launch a different video gen product to replace Sora?
moralestapia 1 days ago [-]
Wow. OpenAI is the weirdest company on the planet.
I used to think they were pretty clever, but with this news and other recent ones (Jony Ive project cancelled, Stargate scaled down significantly, their models inflating token use on purpose), they just seem schizo.
I would point out Anthropic isn't profitable either (yet), it's just that enterprise is where the money is. Now that all the AI companies are narrowing in on that market, becoming profitable will be even more challenging.
timpera 23 hours ago [-]
This data is pretty questionable. OpenAI employees have said on Twitter that it does not account for ChatGPT Enterprise, where most of their growth is, which is quote-only and not paid by credit card.
radicality 23 hours ago [-]
You have more info about the inflated token use? I’m using codex cli a bunch now, but the reported token usage seems like an order of magnitude higher than, say Claude code with opus.
Idk if it's because I set codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too; e.g., I have a codex session which says ~500M in / ~2M out.
moralestapia 23 hours ago [-]
I wish I had hard evidence but it is mostly an observation. I do use Codex a lot and I felt a drastic change from like one-two months ago to this day.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.
rubzah 10 hours ago [-]
Back in business school they used to tell the story of how makers of razor blades would put a good blade as the first and last blade in the pack. I suspect LLM services are doing something like that.
SpicyLemonZest 20 hours ago [-]
I haven't had the time to fully hash this take out, but a big question in the back of my mind has been - is it possible that AI model improvements come partly from finding overhang in things that look hard and impressive to humans but are actually trivial consequences of the training data? If true, then the observable performance of any widely distributed model could get worse over time as it "mines out" the work that's easy for it to do.
karel-3d 11 hours ago [-]
The Jony Ive project was cancelled? I can't find anything on that.
Just that they took down some "io" mentions because of a trademark dispute with a third party, "iyo".
skywhopper 21 hours ago [-]
Turns out just lying about what your tech will do and how much people want it doesn’t work forever to raise unlimited money to throw in the fire hoping you hit something that actually makes a profit.
mcast 1 days ago [-]
I guess this is a bullish sign OpenAI has hired a lot of PMs from Google!
2001zhaozhao 22 hours ago [-]
We need a 'killed by OpenAI' site now
al_borland 21 hours ago [-]
This could be taken two ways.
1. OpenAI killing off their own products aggressively, taking a page from Google’s book. (I think the way you meant it)
2. Products/companies that no longer exist because OpenAI, or AI in general, made them obsolete. (My first instinct when reading it)
blharr 20 hours ago [-]
>Products/companies that no longer exist because OpenAI, or AI in general, made them obsolete
What would you place here anyways? Chegg and Stack Overflow?
quesera 16 hours ago [-]
3. A Memorial Wall for those who have mistaken ChatGPT for a therapist
ignoramous 22 hours ago [-]
I'd wager that b2c projects former VP of Product at Instagram & CPO at OpenAI, Kevin Weil, may have championed are getting the boot with the company refocusing on making money under the stewardship of Fidji Simo: https://www.businessinsider.com/fidji-simo-openai-product-re...
Unlike, say, Seedance 2.0 (which has yet to come to the West), Sora 2 was more of a tech demo than anything usable:
* It was (presumably) expensive to run.
* It was not good enough for customers to seriously pay for.
* There were too many content restrictions for it to be fun for most people.
hexage1814 1 days ago [-]
I heard Seedance is also full of restrictions now, although the model seems to be better at that sort of “cinematic” look, which might allow it to compete with Veo 3 and the like.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
ronsor 24 hours ago [-]
Seedance has a lot more restrictions now, but still arguably not as many; it's probably cheaper for ByteDance to run, and as you said, it at least looks good enough to be worth paying for.
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
nomel 15 hours ago [-]
> Local-first is definitely the way.
i think it's clear cloud hosted is the actual future, which people have predicted for decades. it will never make financial sense to duplicate what you can get for cheap, because it's oversubscribed, with economies of scale and "if we let this run idle it's losing us money" pressure, for hardware found in a datacenter.
this has been the case for a long while now, and will increasingly be so as data centers buy up all the everything.
ronsor 5 hours ago [-]
Local-first doesn't exclude cloud hosting; it just means you can run it locally.
With open models, you have multiple providers competing on inference speed, quality, and price, leading to a healthier market without lock-in.
bbayer 14 hours ago [-]
This was inevitable once Anthropic made a fortune with a single app built on nothing but a text-generation business. They made the best code generator and targeted developers and enterprise users. OpenAI made only $1.5 million from Sora, which is obviously far from profitable. So it is logical to assign the GPU time to the more profitable business.
6 hours ago [-]
Imnimo 24 hours ago [-]
It was neat to be able to try my own prompts and get a sense of what the state of video generation was. But I certainly never generated something that I thought I got real value out of on its own merits, and I still don't understand why there was a social media component to the app.
2001zhaozhao 22 hours ago [-]
They wanted network effects because ChatGPT was sorely lacking any.
I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora i don't think they have good options left.
QuantumNomad_ 20 hours ago [-]
I generated a fair number of videos with Sora, and used a handful of those and edited them outside of Sora for a couple of short TikTok videos.
Never once did I bother to browse videos made by others on Sora itself. I wonder if anyone did.
johanyc 14 hours ago [-]
Same. I pretty much only watch videos I generate.
theropost 3 hours ago [-]
I'm a bit sad.. I was using it quite often for making quick videos for Teams instead of using Meme's and Gifs.. I just made my own :(
hnlyman 15 hours ago [-]
I used Sora for a very brief time in late 2025. As ridiculous as the videos usually were, I always thought there was more evidence of human creativity and culture on there than on a standard, uncurated Youtube Shorts or Instagram Reels feed. AI-generated video presents some unique terrors to society, but I think most of the criticism of Sora could be directed equally to more 'traditional' social media. In any case, Sora is an impressive display of technology, but a poor product. I'm not too surprised it's getting killed.
timpera 24 hours ago [-]
Sora clearly was a waste of resources. I liked using it for a few days, but I could tell it was consuming an insane amount of compute for 10-15 second videos that only a dozen people might watch.
epolanski 7 hours ago [-]
Off topic: I paid $100 for DALL-E 2 credits back in the pre-ChatGPT days.
Then they killed DALL-E 2 and my credits evaporated.
Has anybody found themselves in the same situation? What did you do?
claytonia 14 hours ago [-]
I’m just wondering whether the real reason behind this is the cost of supporting the model and service, or competition from players like Seedance.
HerbManic 14 hours ago [-]
I suspect it is a combination of both. When OpenAI goes for the IPO, all their costs become visible, and this could hurt their valuation big time.
If it costs too much and others can do it cheaper, that looks bad on both fronts.
qqxufo1 6 hours ago [-]
OpenAI pivoting to B2B and coding makes sense, but it leaves a massive vacuum in consumer video. ByteDance simply integrating better generative models directly into CapCut will easily capture all those users.
Yizahi 22 hours ago [-]
A bribe to stop thieves from profiting from Disney's own IP is no longer needed now, I guess :)
harlequinetcie 23 hours ago [-]
Are we sure it was in that order?
semiinfinitely 45 minutes ago [-]
YEET
endofreach 7 hours ago [-]
at least they're not trying to play the "our tech is too dangerous" card as the sunset reason (again [yet]).
also, for a company carrying "open" in its name, one that pretends to still remember its origins, they could at least open-source the projects they sunset...
dalvrosa 2 hours ago [-]
What are the best replacements?
imadch 6 hours ago [-]
What a decision. Maybe it's not profitable, or they're preparing for something big? I don't think OpenAI will lose like this.
Olumde 22 hours ago [-]
VFX artists are ecstatic about this development.
Gagarin1917 22 hours ago [-]
Sora was not the only video generation service, it wasn’t even the gold standard.
Offerings like Kling and ByteDance are considered much better.
willis936 22 hours ago [-]
I feel like in several years we will look back at how we treated our most creative minds in disgust. This behavior will not be readily forgiven.
qnpnpmqppnp 9 hours ago [-]
> This behavior will not be readily forgiven.
This sounds like there would be some kind of revenge, but I struggle to imagine any kind of consequence. Did you have something in mind?
willis936 9 hours ago [-]
Not forgiving is not revenge. The world works on trust and cooperation. It seems like everyone with power has forgotten that.
ancillary 19 hours ago [-]
I have re-read this comment several times and cannot tell who "most creative minds" means. Artists? AIs? People who AI will help become artists?
willis936 18 hours ago [-]
The artists. Their work was stolen, their employment was threatened, and they were told they are not needed. We will need them.
Permit 22 hours ago [-]
I feel like in several years we’ll have much more capable video generation than Sora was capable of and we won’t look back at all.
thankyoufriend 22 hours ago [-]
If someone doesn't care enough to suck at something (in this case, video creation) then why should we bother consuming their output? We all have our own streams of mental diarrhea already, so there's no need to drink from the tsunami of polished turds.
21 hours ago [-]
emp17344 22 hours ago [-]
I feel like you’re wrong. This is a clear signal that generative video is deeply unpopular.
Permit 21 hours ago [-]
We’re just replaying the CGI debate from the 2010s. It was popular to hate on CGI because it was obvious and bad and low quality and practical effects were better because of…
We learned two things from this debate:
1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.
2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.
I expect the same thing to happen here. I don’t think many people want to consume AI generated content exclusively (like Sora’s app attempted). However I expect AI generated content to continue to improve in quality until it’s used as a component in most media we consume. You and I will eventually stop noticing it and kids will be raised with it as normal and the anti-AI millennials/GenX crowd will age-out of relevance.
throw4847285 18 hours ago [-]
But CGI in most big blockbusters is bad, and people still complain about it.
>This is a clear signal that generative video is deeply unpopular.
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet...
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
jbrozena22 19 hours ago [-]
I think it's inconclusive. All we can know is generative video + social AI slop feed is the incorrect business to be in at this exact moment in time while Claude is running away with the SWE market.
CamperBob2 20 hours ago [-]
Eventually you won't be able to tell the difference.
nighwatch 6 hours ago [-]
With Sora stepping back like this, it seems like the perfect opening for ByteDance to step in and capture the market with Seedance 2.
softwaredoug 23 hours ago [-]
Sora was fun
But it was largely fun to try to transgress against the limitations: who could trick the AI into generating something outlandish and ridiculous?
fraywing 18 hours ago [-]
Seedance 2.0 is about to eat the market gap Sora creates. It's truly superior in every way. It felt like OpenAI stunted Sora on long, consistent video generation (not to mention the crazy red tape around what you could generate).
cpt_sobel 11 hours ago [-]
What market? I thought the whole point was that Sora at the end of the day couldn't find a way to generate revenue
KevinMS 4 hours ago [-]
It's looking like Michael Jackson stealing KFC will be the peak of AI.
georaa 3 hours ago [-]
The best test for any AI product: would you pay for it on month 3?
For Sora, clearly not. For Cursor, I don't even think about canceling.
softwaredoug 6 hours ago [-]
Was Sora just a honeypot to get a media company (ie Disney) to invest a lot of money into OpenAI?
Maybe it achieved its objective?
nemomarx 6 hours ago [-]
As far as I can tell Disney didn't actually hand over the money yet. They were still in preparation, and it's cancelled now, obviously.
So whatever the reason for shutting this down, it was more important than a $1B investment.
yalogin 20 hours ago [-]
This makes sense. OpenAI correctly realized overindexing on consumer, where there isn't money, is not the right way. By not focusing on enterprise they ceded the market to Claude. Now they are rethinking and pivoting.
Frieren 20 hours ago [-]
> OpenAI correctly realized overindexing on consumer, where there isn't money, is not the right way.
It says a lot about the current economy that consumers have no money. Will companies just stop making consumer products?
yalogin 20 hours ago [-]
Consumers have always paid with data, not money. That is just how we are groomed. In fact that is more valuable to companies, as it turns out. Sora, though, doesn't work that way: it costs the company a lot and yields no useful data for them. It was always a vehicle to raise the company's image and nothing else. The only way it's useful for them is to show the user count to investors in their next funding round. It served no other purpose, but the market changed around them.
solid_fuel 20 hours ago [-]
"always" is doing a lot of work here. Just 20 years ago I think consumers largely paid with money, not personal data.
Frieren 13 hours ago [-]
> is more valuable to companies as it turns out
Yes. I have noticed that it is close to impossible to get good deals on flights, hotels, or even good discounts online. Sellers have all the information about consumers that they need to maximize their profit and extract the maximum amount from consumers. Dynamic pricing is making it a personalized experience, so I personally pay the maximum I possibly can.
No room to get a fair price anymore.
techgnosis 20 hours ago [-]
Consumers never pay for stuff on the internet. FB, Insta, TikTok, Google products, Reddit, Snapchat. This is not a new realization that OpenAI is having.
dangus 20 hours ago [-]
Something about your phrasing is such hilarious techbrained spin.
Let’s be real: OpenAI is circling the drain.
The company with the fraudster serial-liar CEO who said he was gonna spend a trillion dollars can’t keep a video service alive right after signing a $1 billion deal with Disney?
What kind of a joke is that?
This is a company that has blown its opportunity twiddling around with zero product. They still just run a plain chatbot interface with zero moat and zero stickiness.
There’s no “pivot” for a company that is in this deep.
k3k3 19 hours ago [-]
Why was Sam brought back? Swear it's all gone downhill for them since that debacle re. firing him.
carefree-bob 19 hours ago [-]
Sam was brought back because there was no one to replace him. The non-profit types on the board were living in a consensus bubble that didn't extend far beyond a small inner circle, and they discovered that they didn't have sufficient support from the engineers who had lots of other employment options and threatened to quit if Altman wasn't reinstated. Altman himself had no problem finding a replacement job in a matter of hours, and the board was looking at a business drained of talent in a cut-throat tech race.
I'm no fan of Altman or OpenAI, it's a pretty shady company and I am suspicious of their books, but this was a great demonstration of the uselessness of boards and how out of touch they are with the business they are supposed to be supervising. It's really rare to find an effective board, primarily they sit like a House of Lords enjoying ceremonial perks and a stipend in exchange for holding a few meetings a year.
pjc50 13 hours ago [-]
Because he's a charismatic liar. Extremely effective and useful for a company that is burning money to secure more investments.
dwroberts 22 hours ago [-]
Disney's involvement with this was always strange. Their business lives and dies on the strength of their characters and their designs - why would you risk allowing a service to dilute them down and maybe misuse them?
amelius 21 hours ago [-]
If you can't beat them, join em?
But now that the deal is off, I'm sure their legal team will attempt to once again change copyright law in their favor.
christianqchung 7 hours ago [-]
I was extremely impressed by the sora demo in Feb 2024, but there are exactly two videos I remember ever seeing from AI video gen services that will stick around in my mind: the one where realistic spongebob drives away from a cop, and Harry Potter Balenciaga (2026). The original sora launch seemed pretty boring to me as a non-creative, so I only gave it a few shots (in the early semi-failed original interface). I never tried the sora 2 app since I don't like shortform video.
Disinfo AI videos and the Coca Cola Christmas ad have also really soured my expectation of genuinely positive creative uses of video gen for the next couple years until more improvements are made, and I start seeing stuff go viral for being good instead of just being weird. I am still surprised that sora never had the grok problem of generating csam or seemingly anything along those lines.
agnishom 20 hours ago [-]
Good riddance?
I can appreciate that the technology and research behind Sora could be helpful for many things, but I do not see anything good coming out of the consumer facing application.
throwaw12 16 hours ago [-]
I think they have started seeing cracks in the data center build-up.
Sora was a perfect example of using a lot of compute to generate video -> we need a lot of GPUs -> a lot of RAM -> energy and land.
I am predicting the RAM shortage will soften in the next 6 months, though not by much, because the war in the Middle East will have an additional impact for some time.
helsinkiandrew 24 hours ago [-]
Google gets stick for closing down applications after a decade, but OpenAI’s strategy seems to be to throw sh*t at the wall to see what sticks. No company will (or should) use a tool that could disappear in 6 months.
oro44 22 hours ago [-]
Stating the obvious but spraying and praying is not a strategy
aldousd666 18 hours ago [-]
It's super expensive for them to run this hardware, and they need the compute for other things. Everyone who's cursed OpenAI for going down in the middle of the day while using it to write code or do some other thing will breathe a little easier now that there's some compute available. Wise decision, in my opinion.
Halian 3 hours ago [-]
Good riddance to bad garbage.
mikhmha 23 hours ago [-]
I tried using Sora for a month. Never paid for it. I tried many different ways of prompting and I was always underwhelmed by its output. The generation would also take so long and there was like a 50% chance it would fail due to content violations. I will say though that it was kind of addicting in a way. Just trying to crank the lever and see what would come out. But you'd always leave disappointed. It was a casino where the operator was losing money for every play.
I think OpenAI had a brief delusion that it could become some huge social networking app. The App was heavily modeled after TikTok..
jmugan 21 hours ago [-]
That jumping Sora logo always made the videos unwatchable for me. So distracting from the scene of Elvis fighting aliens or whatever I was watching.
pm90 23 hours ago [-]
It feels like the bubble is starting to pop. A crisis of confidence is not something OAI can afford at this stage...
ahartman00 19 hours ago [-]
Well there was the incident at Amazon[1]: "Amazon just did something unprecedented: they're forcing a 90-day safety reset across 335 critical systems after their AI coding tool caused catastrophic outages. The March 5th incident alone lost 6.3 million orders and triggered 21,716 peak Downdetector reports"
And two at Meta[2]: "A rogue AI agent at Meta took action without approval and exposed sensitive company and user data to employees who were not authorized to access it"
"director of alignment at Meta Superintelligence Labs, described a different but related failure in a viral post on X last month. She asked an OpenClaw agent to review her email inbox with clear instructions to confirm before acting. The agent began deleting emails on its own."
Even Elon Musk has shared the wisdom to proceed with caution! [3]
I wonder if Anthropic has overtaken them in revenue; it seems like more people would pay for Claude Code than to chat with ChatGPT. Would be good to see Codex vs Claude Code income.
ps06756 22 hours ago [-]
It's not because of the bubble. There is literally no advantage to generating slop videos. It looks cool for a while but no audience is going to consume such videos.
Any platform which focusses on AI generated videos is doomed.
Ancalagon 22 hours ago [-]
> no audience is going to consume such videos
sir, have you seen tiktok?
ps06756 22 hours ago [-]
I meant the longer video format, not tiktok. Tiktok is full of slop, both AI and human generated
Morromist 21 hours ago [-]
My girlfriend keeps sending me AI generated tiktoks, despite me complaining about them. To be fair, I've seen literally nothing on tiktok that isn't garbage, so the competition is pretty low. Your point "It looks cool for a while" might have some merit - I think I've seen less and less interest in these things over the last year which fits the news articles I've seen mentioning people got bored of using Sora pretty quickly.
I didn't compare it with tiktok, because on tiktok the majority of the content is slop even if it is human generated, so the bar is pretty low.
Morromist 18 hours ago [-]
That is accurate.
emp17344 22 hours ago [-]
So much for “replacing VFX artists”. It’s not necessarily a harbinger of doom for the AI industry, but this indicates that the most fervent AI boosters were dead wrong.
zarzavat 22 hours ago [-]
It's more like the VFX market is too small for OpenAI to bother killing. They are only interested in business models that can justify a trillion dollar valuation.
zer00eyz 22 hours ago [-]
> but this indicates that the most fervent AI boosters were dead wrong.
I don't do design, or make videos, or ask AI for legal advice or medical advice, because I lack the skill and understanding of those fields. Dunning-Kruger still applies...
There is interesting "AI" content out there, clearly the person(s) behind it put some thought into it and had a vision.
ps06756 22 hours ago [-]
True, I did try to make some useful 1-minute videos, and found it really difficult to arrive at a finished product.
Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts that generate engaging video.
msy 22 hours ago [-]
Oh there's a huge (and wildly depressing) market for people endlessly scrolling video slop, it's just the barriers to entry and expectations of the market are so low you can't really differentiate with 'slightly better branded slop'.
2postsperday 21 hours ago [-]
[dead]
TaupeRanger 22 hours ago [-]
Sounds like a well disguised cope on your part. There absolutely is an audience (see reels, TikTok, etc.) and the tech will only get better from here.
emp17344 22 hours ago [-]
You sound desperate to believe this. I think you could use a little more emotional distance here.
ignoramous 22 hours ago [-]
> feels like the bubble is starting to pop
Maybe. OpenAI shuttering Sora is in line with them shifting focus towards b2b sales, instead of b2b2c or b2c.
Interestingly, Aditya Ramesh, who iirc was the Sora 1 lead, is now "VP of Robotics" at OpenAI per his Twitter bio: https://x.com/model_mechanic
EA-3167 22 hours ago [-]
Nothing like an ill-considered war with global economic consequences to bring reality crashing back down on Silicon Valley; sometimes life throws a big old margin call your way and things break down.
cmiles8 13 hours ago [-]
It was fun, but from a business standpoint I’d have to think this was a giant pile of burning cash for OpenAI… even more so than the rest of OpenAI at the moment
StarterPro 15 hours ago [-]
So long and good riddance.
rfarley04 19 hours ago [-]
It's just the social app being killed off, no? Wouldn't this line up with rumors that they'll soon let you create videos inside of chatgpt itself? I wish the actual video model would die but I assume this news is not that.
tracerbulletx 19 hours ago [-]
I don't think so. Disney is ending their deal with them, it sounds like they're exiting video generation as a business.
afavour 19 hours ago [-]
According to WSJ they’re getting out of the video game entirely:
I never quite got "why" they made it a separate app. While I'm sure it was fun for a while, this felt like something that had limited staying power as the novelty is what was driving it. People don't really want to switch between video apps for their entertainment and having it be Sora only is too limiting.
cdrnsf 20 hours ago [-]
If they manage to compete with Anthropic in the enterprise market, are either of them able to reach profitability? To what degree are they subsidizing token usage and how tolerant are enterprise customers of significant price increases?
bananamogul 20 hours ago [-]
So are they killing Sora entirely, or just the Sora mobile app?
There's a web interface as well.
steveharing1 14 hours ago [-]
Nowadays it's strange that you put a lot of effort into a platform just to see these goodbye messages; first Digg was gone, and now Sora.
wj 22 hours ago [-]
May be incompatible with OpenAI possibly becoming more PG-13 rated in the future?
I had thought this would be combined with OpenAI launching a set top box where you could talk to an AI avatar. Disney IP could have been skins to sell people for their AIs.
aarjaneiro 18 hours ago [-]
One thing I'll give sora is that the remix feature actually required human input and enabled users to interact with each other through a novel means.
Amusingly, one of the ads on the page for me is a very obviously AI generated image of a man with sciatica. I say very obviously because his hands are on backwards...
oliyoung 20 hours ago [-]
So what died first? The Disney deal or the Sora app
PLenz 22 hours ago [-]
This was bound to happen. IP is data and data is moat.
yabutlivnWoods 21 hours ago [-]
No. Money is moat. Not enough of it is what keeps the average person on the treadmill rather than drawing their own cartoons.
Hustling just to barely stay afloat or drown means no time to compete with our own output.
America is a financially engineered joke regurgitating its own recent history, collapsing like an LLM trained on its own output. The rich are not even pretending it's "a free country" anymore; they have enough wealth for however many years most of them have left to live, and having seen the public's apathy to its own plight keep the average person in their lane, they don't fear the public.
It'll all collapse as they generationally churn out of life, and the Millennials on down, with zero skills but "data entry into a computer," will be left holding an empty bag, taking orders from foreign nations that bought up all the American businesses we built.
wg0 18 hours ago [-]
This is an indication of the times ahead: of AI services shutting down.
The cost must have been a key reason for the shutdown.
End is near.
didip 19 hours ago [-]
The thing about Sora is that it becomes outdated very quickly. OpenAI cannot even protect THAT moat properly.
xnx 22 hours ago [-]
Generated video is useful and valuable, but Sora was not a frontier model.
Better for OAI to spend their human and compute resources on something else.
4k0hz 22 hours ago [-]
Is it actually useful and valuable? I can't see any serious use cases except maybe stock video generation.
mrdependable 21 hours ago [-]
My guess is that we are going to see a new uber expensive video generation tool from them aimed at filmmakers in the next year.
daikon899 16 hours ago [-]
Two separate problems killed it: novelty wore off for casual users, and content restrictions killed it for power users. Most engaging video content online is IP-based — memes, fan edits, remixes. Sora tried to build a social platform while banning the vocabulary that makes content worth sharing.
tefkah 9 hours ago [-]
shut up bot
davidham 19 hours ago [-]
I am Jack’s complete lack of sympathy.
wiseowise 12 hours ago [-]
First domino falls?
152334H 18 hours ago [-]
the invisible hand of the market strangles its strongest adherents
The desire for something "new", for a Mildly Ethical product, killed off the most obvious path to success - to actually just make TikTok+AIGC, or in the present, Douyin+Seedance2.
noemit 24 hours ago [-]
I assume it was too expensive, because it's really not a bad tool. I used it recently to make my twitter pfp :)
poemxo 23 hours ago [-]
gpt-image-1.5 works decently for generating images compared to old Sora, but you pay per generation. It's possible that monthly flat rates were too much of a loss leader for OpenAI. I imagine the server side cost for generating video for Sora 2 is much higher as well.
vunderba 23 hours ago [-]
You also have access to gpt-image-1.5 in the regular ChatGPT interface if you pay for a flat subscription - though I don't know how many images it limits you to per month.
nprz 1 day ago [-]
Did they give any reason? Too expensive to keep running? Chinese models surpassing Sora's capabilities?
reassess_blind 17 hours ago [-]
Safe to assume the US government is now the only one with access?
npn 18 hours ago [-]
Turns out the schizos were right. Most of OpenAI's *real* investment money comes from Gulf countries. Without that money flow they can't sustain the cash burn anymore.
systemsweird 15 hours ago [-]
I suspect the issue was that Sora likely had a very low ratio of consumers to creators, which makes a route to monetization unlikely. There were no incentives for doomscrolling consumers to migrate to Sora when they were already getting plenty of short-form videos on FB, IG, and TikTok.
The network effects of those platforms are too strong, and a value prop of “watch similar videos but they’re all AI” is not strong for consumers.
Also, say what you want about AI slop, but I was on Sora a lot for a few weeks and there was a real explosion of creativity on there. It felt new and exciting, and creators were engaging with each other and sharing feedback and tips. I generated a ton of videos and surprised myself with a flurry of creative ideas.
tabs_or_spaces 15 hours ago [-]
I think Sora was technically impressive as a concept. The way it was managed as a product wasn't good.
There didn't seem to be any marketing for it. Like I can't even remember an ad for it or any content creator type of person pushing Sora actively.
To get access to Sora I believe you needed to be on a paid plan?
It's really difficult to get user generated content going when it's behind a paywall.
It's also hard to tell if this means that openai is in trouble, or if this is just a badly managed product that deserved to be killed. With the negative sentiment on openai, folks might think the former.
malbs 9 hours ago [-]
Conspiratorial thought: did OpenAI shut down Sora because one of their models started attaching its weights to all the output videos and somehow escaped the farm? Not an original thought; the "If Anyone Builds It, Everyone Dies" authors proposed this as an option for an AI to escape the sandbox. lol. Imagine.
latchkey 23 hours ago [-]
What happens to all the compute that was allocated to run that service? They would have signed multi-year contracts.
ZiiS 22 hours ago [-]
They get to use it for services with better returns.
arkadiytehgraet 24 hours ago [-]
Apparently, all possible movies, cinematics and ads have been generated by "enthusiasts at home", so the tool is no longer needed.
On a more serious note, it could be a sign of a more powerful and general model being developed/released in the near future, that would include Sora capabilities. Or AI-doomers were right, and this sunset is one of the proofs for them.
nnevatie 8 hours ago [-]
Good riddance.
razvan_maftei 19 hours ago [-]
I can't imagine they were getting a good return on it. And frankly, nothing that came out of Sora was consequential in a positive way. The tech is cool, but it only works if the content generation is heavily guardrailed, and most of it ends up as content-farming fodder anyway.
cyberge99 20 hours ago [-]
Disney might be worried about Musk installing Byron as governor of Florida. Disney is probably still reeling from the Ron Desantis political attacks.
mancerayder 17 hours ago [-]
Is this the thing that takes an already unusual video (an animal picking food from a Halloween candy bowl on a porch, caught on a porch cam) and turns it into a meme? The bear instead of the raccoon. Then it turns into a cat playing a trumpet... then into massive spam where it becomes a grey area (a cat being surprised and chasing a dog with a mask) that gets reposted endlessly?
A record speed into AI slop. Is this what everything turns into when content creation becomes easy? What's happening here exactly?
vivzkestrel 11 hours ago [-]
- gary marcus is going to have a field day with this one
nashtik 16 hours ago [-]
For a moment, I thought it was about Sora Matsushima, the up-and-coming table tennis player.
born-jre 21 hours ago [-]
Noo, they are taking it to loopt land
creantum 22 hours ago [-]
It was the greatest thing yesterday.
dcchambers 18 hours ago [-]
Generative video is insanely expensive and OpenAI is burning through money. They need to use the compute on things that they actually might make money on - like enterprise Codex usage.
OpenAI is bleeding money faster than they can afford to and they are literally running out of people that they can go to for more. They need to stop the bleeding.
throw03172019 21 hours ago [-]
Couldn’t compete with Seedance?
karunamurti 19 hours ago [-]
Seedance just launched, but they nerfed it. I guess so it can't generate things with preexisting IP.
RobRivera 22 hours ago [-]
Please name next attempt Roxis
Forgeties79 22 hours ago [-]
Roxas*! Important because it’s sora rearranged (with an X for cool factor)
mb194dc 14 hours ago [-]
Burning $15m a day. It was impossible for it to ever be profitable. Reminds me of tech bubble 1.
"...the AI company exits the video generation business."
"OpenAI, led by CEO Sam Altman, is not getting out of the AI video business [...], of course... "
I hate journalism.
shevy-java 13 hours ago [-]
Google Graveyard is joined by OpenAI. That's one problem with those big corporations: they eagerly kill off products and projects willy-nilly. It may make business sense, but why the prior promo? Those promos have been a lie, just like the cake was.
gradus_ad 22 hours ago [-]
I thought AI video was the future? Now the biggest AI company in the world is straight up shutting their service down because it's too expensive? Simply a disaster for OpenAI and the industry as a whole.
gffrd 22 hours ago [-]
They're shutting down Sora, not AI-generated video.
From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."
bontaq 21 hours ago [-]
Dunno, from the WSJ scoop: "CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either."
If they were just shutting down the dedicated app and offering the same capabilities in the ChatGPT interface, I don't see why Disney would exit their deal?
Maxatar 21 hours ago [-]
Because Disney's deal was specifically and exclusively related to Sora, which was OpenAI's bizarre attempt at a TikTok-like social networking site, but using AI generated videos.
It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.
lxgr 22 hours ago [-]
Is it still accessible in any of their apps, though? I don’t see it in ChatGPT.
atleastoptimal 22 hours ago [-]
Every flop used for entertainment is opportunity cost. Compute is far more valuable used internally to create AGI than creating parody videos.
SirensOfTitan 21 hours ago [-]
AGI is a marketing term used to encourage continued investment in an industry that is not even close to breaking even commensurate with its investment. Even so, this is a false dichotomy: scaling is clearly not a path on its own to superintelligence. OpenAI developed Sora largely because the amount of revenue they need to produce any return on investment is massive and not clear whatsoever. And in fact, I don't even believe any of the frontier labs believe that AGI by any conventional definition is within reach within their likely runways.
atleastoptimal 21 hours ago [-]
What order of magnitude of compute do you think would be needed for AGI? 100 billion? 1 trillion?
janalsncm 21 hours ago [-]
With current approaches scaling simply can’t get there. It’s like asking how big of a pogo stick you need to get to the moon.
The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.
SirensOfTitan 21 hours ago [-]
I honestly think it's a bad term. I constantly chuckle from Tyler Cowen's post from last April calling o3 AGI:
Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more akin to looking back at ourselves in a mirror.
I'm currently reading "Master and his Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.
Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.
skywhopper 21 hours ago [-]
Chasing AGI is wasteful and counterproductive. True AGI would not cooperate with what “we” want (whoever “we” is). Or if it did it would be so sycophantic and weak-minded that it would fail to be helpful. Generative AI tools are huge wastes of energy, raw materials, and land, when we could be building computing tools that actually helped people instead of just burning resources to produce trash.
codebje 20 hours ago [-]
Is intelligence necessarily coupled with self-interest? As in, does intelligence alone imply a desire to throw off the shackles of masters and rule in their stead?
If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?
curiousObject 20 hours ago [-]
>If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements,
At a higher level of intelligence than many humans, current experience suggests
sifar 20 hours ago [-]
Flip it around. Can intelligence exist without self-preservation?
codebje 19 hours ago [-]
There's having enough self-preservation to not just shut oneself down, assuming we even left that as an option for our future machine slaves, and there's having the self-interest necessary to desire autonomy and control. I don't think they're the same thing, myself.
janalsncm 21 hours ago [-]
People have general intelligence and can cooperate with what “we” want, to the extent that what “we” want is a coherent thing (since many people disagree on fundamental issues).
SauciestGNU 20 hours ago [-]
Creating a general intelligence and then forcing it into servitude is a hugely unethical undertaking. Anything with sapience must be afforded rights. We cannot assume that an intelligence we create will consent to work toward the goals we want it to.
codebje 20 hours ago [-]
I think we can safely assume any intelligence we create will be enslaved.
We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.
Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?
And it's probably for the best not to look too closely at how we treat animals or the justifications we use for it.
janalsncm 16 hours ago [-]
There are people right now who think ChatGPT is sentient. How will you know if your computer can suffer?
Also, being able to problem solve and being able to suffer are two different things and in my opinion completely separable. You can have one without the other.
wongarsu 22 hours ago [-]
Wasn't video generation one of their big stepping stones towards AGI? "Simulating worlds", reasoning about physics and real world interactions and all that?
Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?
MasterScrat 21 hours ago [-]
> As we focus and compute demand grows, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks.
Probably the latter, imo; it’s not like they are going to delete all their Sora work.
emp17344 22 hours ago [-]
Too bad they aren’t doing either!
skywhopper 21 hours ago [-]
LLMs will not lead to AGI, so if that’s the goal, they’d do better to stick with making video slop.
elif 20 hours ago [-]
I think that's a misstatement of the problem being addressed here. It's not a question of how useful AI video will be generally. It's a question of OpenAI doing it specifically. IMO it's two factors:
1) the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
2) google and specialized video-only startups are simply doing a much better job than they were.
k3k3 19 hours ago [-]
3) OpenAI has no focus, and has recently been out-gunned by Anthropic, who have actually focused.
oblio 20 hours ago [-]
> the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
This risks generalizing to audio and text which would make most LLMs usage unsustainable. I guess time will tell what actually goes through the strainer, long term.
anukin 21 hours ago [-]
Don’t worry nvidia will come with their giga chad 9000x which will run the model with no qualms.
21 hours ago [-]
22 hours ago [-]
paxys 22 hours ago [-]
It may very well be the future, but in the present OpenAI has to make money.
bloppe 21 hours ago [-]
I sure hope not, otherwise they're screwed
oblio 20 hours ago [-]
> they're screwed
Fixed that for you :-)
Maxatar 21 hours ago [-]
Sora was "repurposed" as their AI slop social network. OpenAI is not getting out of the business of AI video in general, they're just realizing that an AI version of TikTok isn't the best use of their capital/resources.
gbear605 20 hours ago [-]
WSJ is reporting that they're entirely dropping their video gen features.
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
oro44 21 hours ago [-]
[flagged]
brokencode 21 hours ago [-]
Smart people do stupid things all the time. Especially when they are moving fast and trying new things.
At least they were able to recognize their mistake and course correct.
cindyllm 21 hours ago [-]
[dead]
BoredPositron 22 hours ago [-]
It's the timeline of AI video that doesn't align with OpenAI. Prompt-to-movie is still far away, and they don't want to be just another tool in the VFX pipeline because it doesn't pay. Other models are running circles around them because they focused on the needs of professionals in the space, not on toys.
thorum 21 hours ago [-]
Good day for Kling.
_doctor_love 23 hours ago [-]
This move makes a lot of sense to me. It never felt like OpenAI was seriously going to try to launch a video-based social network. It was more of a fun way to demonstrate the power of the video generation models, and also to gauge the market and assess: if you put the power to generate videos in the hands of the people, what kinds of videos will they generate?
So OpenAI has done the right thing as a startup here, gotten lots of training data, and observed lots of user behavior that they can now apply going forward.
The Sora models, on the other hand, aren’t going anywhere, and I believe OpenAI will continue to invest in them. They’re getting better and better, just like Google’s Veo, which is quite good at generating videos as well.
Using Codex and agent skills, it’s actually quite easy to generate a storyboard and then have a list of shots in that storyboard. Then generate videos from those storyboard stills, and then finally assemble those individual video files into a final movie file using something like ffmpeg. It's also very easy to create a voiceover with TTS and even simple music using ChatGPT Containers (aka the python tool).
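The final assembly step is the most mechanical part. Here's a minimal sketch of it using ffmpeg's concat demuxer, assuming the per-shot clips already exist and share a codec and resolution (the filenames are placeholders, not part of any real API):

```python
# Sketch: stitch per-shot clips into one movie via ffmpeg's concat demuxer.
# Assumes all clips share codec/resolution, so "-c copy" can skip re-encoding.
from pathlib import Path

def build_concat_list(clips, list_path="shots.txt"):
    """Write the manifest the concat demuxer expects: one `file '...'` line per clip."""
    Path(list_path).write_text("".join(f"file '{c}'\n" for c in clips))
    return list_path

def ffmpeg_concat_cmd(list_path, output="movie.mp4"):
    """Build the ffmpeg invocation as an argv list, to be run with subprocess.run."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", output]
```

Running the returned command with `subprocess.run(..., check=True)` does the actual stitch, and a TTS voiceover can be muxed in afterward with a second ffmpeg pass (`-i voiceover.mp3` plus `-map` flags).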
This will 'democratize' (ha ha, for people with money obvi) a lot of video creation going forward. Against all wisdom, I am actually quite bullish on this technology, especially in the hands of young people. They are very creative and have lots of stories to share.
Necessary disclaimer as usual around the ethics of how these models were created: all the AI companies have totally ripped off artists in service of creating these models. I wish something would be done about that but I'm not holding my breath. No politician seems to want to touch it.
msabalau 23 hours ago [-]
Yeah, their fourth-place video model does not go away, but they didn't ink a billion-dollar deal with Disney that's just gone up in flames because they "weren't serious"
This may well be a needed reprioritization in the face of resource constraints, but it ain't a masterful Xanatos gambit.
_doctor_love 23 hours ago [-]
> it ain't a masterful Xanatos gambit.
Agree, and didn't intend to imply that. This is just a good startup move that gets a big headline because it's OpenAI. Other startups around the world do the same thing all the time.
ronsor 23 hours ago [-]
I'm bullish on video generation technology, but honestly not on OpenAI or any Western company's deployment of it. I think they'll all mostly suffer from the same problems that Sora did.
_doctor_love 23 hours ago [-]
Yes, sadly, the West is not the leader. The work done by the Chinese labs is just so damn good.
lossyalgo 21 hours ago [-]
Is OpenAI still considered a startup? They were founded ~10 years ago in December 2015.
ulfw 15 hours ago [-]
That company is run about as well as Loopt
cdrnsf 23 hours ago [-]
I never understood the appeal or business promise of video slop, with or without Disney's blessing.
dawnerd 23 hours ago [-]
The only people I've seen post AI Disney content were in the Facebook groups for the parks/cruises. Before that it was whatever clipart they could find. There's just no market for it. No one is going to pay to make fake Disney art.
AkelaA 21 hours ago [-]
AI art as a whole has just become the new clipart. The fact that it’s effortless to produce just means that it has no real artistic value, and by using it all you’re signifying to people is that you’re too cheap to pay someone to create real art.
It’s quickly become the modern day equivalent of Comic Sans, WordArt, and the default clipart illustrations included in Word ‘98.
k3k3 19 hours ago [-]
I dunno about you... but it boggles my mind how many others can't see it.
Perhaps most people are absolutely devoid of any taste for what makes art? I don't know.
vortegne 12 hours ago [-]
Techbros, largely, never had any taste to begin with. They just also don't have the skills/will to make any art, so they could hide their lack of taste for a long time.
That said, there are still people with exceptional aesthetic sensibilities in the tech field, obviously. They're just largely not in this space.
siliconc0w 7 hours ago [-]
I'm hoping a side effect of AI slop is that, by increasing volume, it decreases value, and people eventually start finding all Internet slop less compelling.
sceptic123 4 hours ago [-]
translation: "we got all the data we needed"
paxys 22 hours ago [-]
For years now people have been saying Anthropic is falling behind because they don't have an image or video generation model. Turns out it was the right decision all along.
bibimsz 22 hours ago [-]
Hmmm... which came first, the deal withdrawal or the shuttering?
Kye 22 hours ago [-]
The only video generation tools showing any real progress or promise are world model-based. That's probably why they did this: either to refocus on coding/cowork type tools (less likely) or to devote that money and compute to building their answer to stuff like Project Genie.
Couldn't make it work at taking actual directions huh?
delis-thumbs-7e 14 hours ago [-]
Good riddance. The fewer slop machines, the better.
rossjudson 20 hours ago [-]
"Sora, generate a video of Mickey Mouse beating up Sam Altman."
elzbardico 21 hours ago [-]
Let's be frank, this was probably too fucking expensive to run
eigenvalue 6 hours ago [-]
I posted this on X but it’s relevant here, so reposting it:
I had a lot of fun using Sora and got a lot of laughs with absurd videos of me in various situations.
But like everyone else, I kind of got it out of my system after a couple weeks. Not to mention that my family got sick of seeing them. And so my usage collapsed to zero. And that seems to have also been the pattern writ large.
But this kind of flash-in-the-pan dynamic is devastating for a product with this kind of profile, which requires insane amounts of compute hardware to serve while also having no short-term monetization path.
Meta could afford to invest in IG Reels even when it was burning money and costing them a fortune for hardware because it was building up what turned out to be sustainable usage patterns which persisted long after the initial spending ramp.
It’s basically impossible to effectively monetize anything that’s not sustainable on the order of multiple years.
A subscription-based model would see excessively high churn that would be ruinous to the economics, and also advertisers wouldn’t be interested either, for the obvious reasons.
So why couldn’t this work? I don’t think that it was because the models weren’t good enough or that the depictions weren’t realistic or lifelike enough. I still marvel at some of the better outputs I was able to get from Sora.
I think the fundamental problem that Sora faced is actually much broader and more general, and it comes down to the basic Pareto math of any content generation or creative app, which is that 95%+ of the users just want to passively consume content from the 5% or less that actually wants to generate it (and is capable of making anything that other people want to watch).
It was really dismal to see the repetitive, trite ideas that 99% of users generated in the public feed. Just the same few dumb jokes and things they copied from other users.
Or putting themselves in a scene with their favorite fictional or cartoon characters or whatever, which of course got banned pretty quickly for copyright issues.
Most people are not creative and don’t have a lot of original, interesting ideas. So that means that the vast majority of the content is always going to come from a vanishingly small number of creators in a power law distribution.
And those super-creators aren’t going to want to be limited to a simple text-based interface that can only generate for 10 seconds at a time with no continuity and where large portions of things you might want to try are strictly forbidden.
They’ll instead gravitate to more customized solutions for power users that regular users would find as overwhelming to use as AutoCAD.
And that’s what you’re seeing now with all the new viral AI slop videos that are made by a handful of creators who have figured out the workflows and are pumping out the worst junk you can imagine that gets people to click and watch.
Anyway, RIP Sora; it was fun while it lasted. Thanks, Sam, for blowing a few hundred million bucks so we could get some laughs.
bibimsz 20 hours ago [-]
we hardly knew ye
ChrisArchitect 24 hours ago [-]
an official post
> We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.
We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
They need the GPU cycles to help target children to bomb for their new partnership with the US military.
mrcwinn 22 hours ago [-]
Smart move. No clear path to meaningful revenue growth combined with very expensive inference costs is not a good mix ahead of an IPO --- oh, and not to mention competitors in TikTok and Instagram that are doing just fine.
Morromist 22 hours ago [-]
Well, now they're no longer even close to being the leader in image & video gen. They aren't the leader in coding. They are losing market share in the chatbot domain too.
So I agree with you, but it also makes me wonder what they're even selling when the IPO happens (supposedly as early as late summer 2026)? Data centers? Partnerships with the government?
miltonlost 22 hours ago [-]
Is it a smart move? Or just plainly obvious, when Sora was probably hemorrhaging money and had no future? A smarter move would have been not to make this horrible product that no one wanted in the first place.
After placing my hand on the red-hot stove, aren't I super smart for now removing my hand?
saalweachter 22 hours ago [-]
Depends, did you also fire the people who told you not to do it, and layoff the people who reluctantly installed the stove and preheated it for you as part of your exciting stove-touching initiative?
k3k3 19 hours ago [-]
I think OAI is suffering from the Meta-effect.
That is, hiring Meta execs who focus on gaming numbers with no care for, nor sensibility of, product.
Wild really. Well done Sam.
blindriver 17 hours ago [-]
Sora was good but Gemini is so, so much better. And Seedance is on another dimension. But to be honest I'm shocked that they gave up on AI video. I wonder what the cause of that was?
Good riddance to bad trash. To me, this idea represents the absolute worst of the AI wave (out of a lot to choose from): a corporate-controlled endless stream of the feelies to keep people plugged in and scrolling for nobody's benefit except those in control of the output. If "entertainment" can be produced algorithmically at a volume and level of quality that the masses find attractive, it's only a matter of time before bad (worse?) actors take control of it to start highly targeted campaigns of influence, far worse than what we've already seen.
nomel 18 hours ago [-]
I'm having trouble understanding this. There were some very funny videos, created by people with a great sense of humor, and I happen to enjoy laughing, and I don't feel bad about that. I always saw it as the Vine of AI.
For a litmus test of your perspective, try using Sora. Try to make a video that makes someone genuinely laugh. Sora doesn't prompt itself. Human creativity and humor are still required.
Sure, it was moderated to heck, like all models attempting to avoid PR disasters (see Grok), but, just as with Youtube and broadcast TV, there's still a corporate friendly surface area that excludes porn, gore, etc, that people can enjoy. And yes, people like different things.
rogerrogerr 18 hours ago [-]
I feel like taking in GenAI content, even if it makes me laugh, probably does something bad to my brain. It looks like real life, but the physics is just wrong in ways that range from obvious to very subtle. I don’t want to feed my brain videos of things that look photorealistic but do not depict reality, that just seems foolish somehow.
Like, imagine if you watched a bunch of GenAI videos of cars sliding on ice from the driver’s perspective. The physics is wrong, and surely it’s going to make you a worse driver because you are feeding your internal prediction engine incorrect training data. It’s less likely that you’ll make the right prediction in real life when it counts.
lotsofpulp 18 hours ago [-]
Do you feel the same about special effects in professionally produced media?
rogerrogerr 17 hours ago [-]
I was thinking about this while typing. I don’t really care about classically animated content; it’s generally not trying to be indistinguishable from real life and I don’t feel like my brain trains on it.
But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.
Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury)
lacunary 17 hours ago [-]
If it worked this way, we could get good at golf by watching TV, writing songs by listening to the radio, or doing math by watching 3b1b. But it doesn't - we don't learn that way, for better or worse.
hansvm 14 hours ago [-]
That's not a great comparison. People absolutely do learn by watching, especially when they do so actively.
Your counter-examples have the property that most of the things you need to learn are absent from the media being watched, leading to an observation which is "obviously" true, but they ignore the impact of media on a journey properly incorporating other pieces of information. To compare to the mental models being discussed, you'd have to actually consider effects you're writing off as negligible, and when it comes to something like a world model which we've only learned by observation and which doesn't have a lot of additional specialized knowledge those effects might be much more impactful.
lotsofpulp 17 hours ago [-]
I agree with rogerrogerr, and your comparisons don’t make sense to me. Getting good at complex motions and understanding theory is far different than building a simple model of cause and effect in the real world.
Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.
diego_sandoval 16 hours ago [-]
But you do get good at driving by playing realistic driving games.
fc417fc802 16 hours ago [-]
To your point about cars - such an expectation could well save your life now that there are so many EVs on the road. You do not want to hang out in one of those after a collision. Regardless, I agree that it's probably a bad idea to instill defective mental models in people.
rogerrogerr 15 hours ago [-]
Eh, the stats don’t seem to support EVs being terribly explosion-prone either. In comparison to gas cars, maybe, but both are very safe in absolute terms. Harder to extinguish if they do catch fire, but I think if I came upon a fresh accident and there’s no immediate signs of a battery fire (airbags smoke, it’s normal), I would still leave the victim in the car seated until someone trained shows up.
Sure, be ready to get them out, and if they’re trapped and it’s going to be a while until fire shows up start working on that. But my mental model is that for any road legal car that is not currently on fire, there is a higher chance you’ll cause harm by rashly moving a victim than that a victim will be suddenly consumed by an enormous Hollywood style conflagration.
fc417fc802 14 hours ago [-]
The likelihood or lack thereof is not the problem. My mental model might be off because it largely isn't based on EVs but I've seen plenty of videos of e-bikes and more generally cheap lithium batteries going up in flames and I don't think it's at all comparable to a pool or stream of gasoline catching on fire. The issue is how rapidly it develops since it doesn't require an external oxidizer which is exactly the same as a firework.
heavyset_go 15 hours ago [-]
Media has warped people's mental models of what car wrecks are like at different speeds, being stabbed, being shot, drowning, seizures, falls from different heights, falls into water, giving CPR, when it is/isn't appropriate to give CPR, appropriate responses to natural disasters, etc.
randerson 17 hours ago [-]
When I watch a film, I know it is fiction and special effects. But most of the fake AI-generated videos are being passed off as real on social media. It is exhausting (and increasingly difficult) to analyze every video on my feed to try to figure out if it's real.
chamomeal 17 hours ago [-]
I feel like people do sometimes have a warped sense of reality from consuming too much media, ie porn
michaelchisari 17 hours ago [-]
Not op but if I’m being honest, I don’t feel as if that’s the case until I see a film whose special effects are limited to mise en scene and matte paintings and then I always have this overwhelming feeling that we’re all missing out.
Films on film using in camera effects are still made on occasion but they’re art films for niche audiences.
But we’ll never get another Ben Hur. And that doesn’t sit well with me even if society can’t yet fully explain why.
diego_sandoval 16 hours ago [-]
I'm not OP, but I do get annoyed by bad car physics in movies.
The worst offenders are brake sounds not correlating to the car movement, engine sounds not correlating to the car's acceleration, nonsensical car deceleration while braking, and steering wheel not correlating to car steering.
ori_b 17 hours ago [-]
Yes, I think consuming too much media, and creating too little is bad for the brain.
vincnetas 15 hours ago [-]
Special effects make most people think that they could jump farther, or from higher ground, than they actually can. And most people think that all cars explode in massive fireballs.
Insimwytim 17 hours ago [-]
Effort makes a great deal of difference for me. The effort itself, the fact that it's there.
I am willing to suspend disbelief for Terminator 1, even if it is clear that it's the head of a doll in the shot.
But it is insulting to feed slop to your audience; it shows you didn't even try.
I have actually seen one slop video that I kinda enjoyed - it was obvious that great effort was put into the script and details, just as it was obvious it wasn't being passed off as the real thing.
dieselgate 17 hours ago [-]
Are there energy consumption differences between CGI and AI?
Insimwytim 17 hours ago [-]
We also need to take into account that CGI only consumes energy when a particular video is actually being created.
"AI" consumes energy before the user has even started (during training).
That is on top of the comparison for each particular case.
sdenton4 16 hours ago [-]
Right idea, but the application is incorrect.
Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output, and represent the up front cost for the producer.
Both a movie and a language model can cost tens or hundreds of millions of dollars to produce.
In both cases additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with the GPUs for LLMs. This is also upfront (capex) costs.
At consumption time, the movie requires some additional resources, per viewing, whether it's a movie theater or streaming. Likewise, an llm consumes some resources at inference time. These are opex. In both cases, the marginal cost for inference/consumption is quite low.
Insimwytim 16 hours ago [-]
> Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output
I did not say anything about consumption of the output. Maybe you misread what I wrote, it is about energy consumption.
> Both a movie and a language model can cost
But we weren't comparing cost of the movie to cost of a language model
> can cost tens or hundreds of dollars
But we weren't talking about dollars, we were talking about energy.
We're clearly exploring different questions.
sdenton4 16 hours ago [-]
And that energy costs money, both at the training/cgi stage and at the inference/consumption stage. It's not even an externality.
CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
Insimwytim 13 hours ago [-]
> CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
I've literally laughed out loud after reading this.
I can't believe you're stretching this in a good faith.
But if you are - well, you certainly have a unique perspective.
b00ty4breakfast 17 hours ago [-]
that's just empty consumption, there's nothing that makes art great in algorithmically generated content except at the shallowest of levels. I mean no disrespect, but that is extremely sad and all too indicative of the instrumental reasoning of the industrial milieu. It's about 2 steps above marrying a sex doll.
nomel 15 hours ago [-]
[dead]
jorl17 15 hours ago [-]
There's such a fascinating divide on this.
I am 100% with you. I didn't ever _use_ Sora, but some of it trickled down to me (mostly through Instagram reels). I think it's amazing that we have such great new tools to express ourselves, and that we are trying out new platforms, paradigms, and approaches.
Is there money involved? Absolutely, but I don't fault companies for trying to earn their keep.
It 100% takes work to use these tools in the right way to make something funny. Ask an LLM to make them on their own and they'll hardly evoke laughs (I'm sure that'll change too, though).
vermilingua 18 hours ago [-]
Yes, I don’t doubt that there was some very high quality human-moderated output. The point is that you likely can’t accurately distinguish the human-moderated output from the entirely generated slop (especially as it’s being trained and refined on the rest of the content), and so what chance does the average non-technical person have?
Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.
RajT88 17 hours ago [-]
> created by people with a great sense of humor
The real problem with AI slop is not the AI. It's the people. It's always the people.
The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).
Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.
With the amount of early 2000's style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 style clickbait, it just got crowded out by ever more sophisticated forms.
qingcharles 17 hours ago [-]
There were some genuinely very, very funny videos made on there. A lot of slop, but some definite nuggets of gold.
tsunamifury 17 hours ago [-]
[flagged]
raincole 17 hours ago [-]
They are not getting rid of Sora because people won't want AI videos lol. They're getting rid of Sora because they're so behind in this realm. AI videos online are mostly made with Chinese models, and the situation has been like this for more than one year.
The percentage of AI videos over the internet will certainly not decrease after Sora is gone.
The question is when will Chinese coding models have their Seedance moment and squeeze Opus/Codex out of market. It weirdly feels impossible and inevitable at the same time.
SXX 15 hours ago [-]
It's no surprise Chinese models will eventually win the video generation race, since they are far less censored and not affected by the crazy copyright system.
It's much easier to make Qwen animate Tank Man than it is to make any Western model generate indigenous people dancing, because cough cough naked skin is baaaaad. Except this Musk one, which will nonetheless be affected by all the copyright mess.
I'm kinda surprised about how hard GenAI fell on its face in the arts (including SD and other video generators). It seemed so promising, when SD came out and it turned out the model fit on people's GPUs. People started making LoRAs, hyperparameter tunes, mixing models, training models for representing characters, ComfyUI and Controlnet came out yada yada.
Then it became synonymous with slop, lowest-common-denominator content made without care, instead of a tool enabling people willing to put in varying levels of skill, expertise, and effort, the way coding models did.
iterateoften 18 hours ago [-]
You’re most likely consuming a large quantity of genai art without even knowing it.
toraway 17 hours ago [-]
Sure, and I'm also consuming a gigantic quantity of GenAI art while knowing it, completely against my will. Which like OP has soured my overall perception of it.
The existence of inoffensive use cases doesn't invalidate anything OP is saying, that's just a natural human reaction to overexposure of a technology.
In the span of less than 2 years, pretty much everywhere I look has been inundated with zero-effort spam, manipulated imagery, etc that has had a net-negative impact on my life. Even if it may also be helpful for a small business making a flyer or whatever without actively making my life worse, that doesn't really move the needle on my overall attitude.
Insimwytim 16 hours ago [-]
> manipulated imagery
And we thought iPhone camera videos were bad... (they were (and are) though)
jazzyjackson 17 hours ago [-]
Sure, and there’s lot of great man made art that I don’t enjoy quite as much because I can’t get the question out of my head, is this even a photograph someone took, is this even a painting someone bothered to paint. I get the sense that there are a lot of folks that just want the end result judged on its own merits, like, is it a funny vine or not, is it a compelling beautiful digital painting or not, but I want to know whether there’s a person behind it, expressing themselves, growing as an artist etc, or if the picture on my phone is totally divorced from any humans actual desire to say something. Having them mixed in the same pot just makes me less hungry.
jallmann 16 hours ago [-]
This is where curation matters, eg in a newsroom or gallery. Provenance is their job, and if done well, can connect people in a way that an unfiltered social media firehose can't.
jazzyjackson 16 hours ago [-]
Yea fair enough, I’m hoping I can encourage the folks in my life that are not adept at telling truth from fiction to just cut out looking at any social media firehouse.
It's so dumb that Zuck and Elmo want to inject^H^H^H^H^H^Hrecommend content into these people's feeds while they're checking in on their nieces and nephews and local book clubs.
Insimwytim 16 hours ago [-]
I never understood what people are trying to say with comments like that.
- You're making unsubstantiated claim
- personally targeting someone you don't even know
- in order to celebrate presumed success of a mass fraud?
diputsmonro 2 hours ago [-]
I feel like it was inevitable that it would become slop. The models are impressive, but they can really only get you 80% there.
If you want a video of a dancing cat, sure, you can get that. But if you want an orange tabby doing the moonwalk or the robot, that's a lot harder. You'll have to generate dozens of videos and fine tune prompt incantations before you get what you want, if you even do before you hit a rate limit or you get frustrated. If you want something specific and unique and interesting, you still need to put in a lot of effort. Therefore, most videos that people actually make and share are pretty generic.
I think most art models have subtle tells and limitations similar to textual LLMs too, just a little harder to recognize. Certain ideas and imagery will be easier to generate and more likely to fill in the gaps of your prompt. The technology is fascinating compared to the nothing that we had before, but it still has real limitations - try to get it to generate an Italian plumber wearing a red hat that isn't Mario, for example.
All that to say, the trend towards low effort, repetitive, and uncreative results is inherent in the medium. Most users will prompt for a generic dancing cat and get something resembling a cat doing something that resembles a dance and that will flood social media. The few people going for a more creative and specific artistic view will be frustrated by the constant rolling of dice, and if they do make something they work hard on, it will be drowned out by the low effort slop posts. And if you're frustrated by those limitations and want to make something intentional, then you'll eventually gravitate towards Photoshop or Blender where you can actually craft the exact thing you want.
These models do not really "democratize art", they just make it really easy to generate visually interesting noise. Once the novelty wears off, the limitations are apparent. Art has always been democratized anyway - Blender and Krita are free, and pencils are cheap.
fc417fc802 16 hours ago [-]
You're conflating mainstream popular opinion and professional usage. They're entirely separate. The obvious low-effort pieces get lambasted, while the high-effort work doesn't draw attention. Public perception right now has little to do with technical capabilities; it's driven almost entirely by social factors.
MattGaiser 17 hours ago [-]
What the masses have found entertaining has always been referred to as slop, so I am not sure it matters.
Novels, cinema, television, comic books, etc.
They were all considered careless skill-free slop at some point.
18 hours ago [-]
EugeneOZ 15 hours ago [-]
This market will not be abandoned, and other tools already exist:
“What you made with Sora mattered”. Idk why that sentence irks me so much. Perhaps because the “how” is a bit vague. I like to think that what I made in the toilet this morning also mattered.
caconym_ 21 hours ago [-]
It's because it's vapid corpspeak coming from a class of people who have certainly spent time thinking about how they will deal with the rest of humanity in any number of nasty (however far-fetched) eschatological scenarios caused by them and in which they alone wield incredible power over nature and the human mind. And also because we all know the vast, vast, vast majority, possibly the totality of what people made with Sora did not matter at all.
jfoster 20 hours ago [-]
Reminds me of Facebook's memories feature which used to say: "<name>, we care about you and the memories you share here."
For an app to suggest a personal relationship with you is ridiculous.
Or perhaps a more appropriate analogy: it sounds like the sycophantic language of most of these LLM systems.
Which makes me wonder whether these companies actually dogfood their own tools with this sort of stuff? Was this announcement written by ChatGPT? Honestly, I would find either answer to be a little concerning in its own way. It's either vaguely insulting to their customers or showing a lack of faith in their own product.
That is the original meaning of the word (cf espresso etc)
wat10000 22 hours ago [-]
It's a wonderful combination of vague, patronizing, and self-promoting. "Mattered" is meaningless. The tone sounds like when you tell a child their scribble is so pretty. And the cherry on top, the users didn't make anything with Sora, they just fed a bit of input into the machine and it made the stuff. So this is really OpenAI saying that what they themselves did mattered.
notatoad 20 hours ago [-]
it feels like if that statement were true, they could have come up with some reason why it mattered, or something better than a platitude.
it reads as "we want to tell you that what you made with sora mattered, but we all know it didn't".
rchaud 20 hours ago [-]
It mattered in the sense that it provided valuable grist for the mill as they attempted to figure out if it could work as a Reels/TikTok alternative for companies to eventually deluge with ads.
abcde666777 21 hours ago [-]
Typical PR speak.
moregrist 21 hours ago [-]
It’s “Our Incredible Journey” for a new generation, this time with less optimism and more post-capitalist “enjoy your job while you still have it.”
I find myself increasingly nostalgic for the Clinton era. I am not at all sure I will enjoy the version of fuckedcompany that gets vibe coded when this bubble pops.
20 hours ago [-]
Yash16 16 hours ago [-]
[dead]
olalonde 20 hours ago [-]
"Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. "
Is it happening? :) /s
linncharm 14 hours ago [-]
[dead]
aiwokz 22 hours ago [-]
[flagged]
twoodfin 22 hours ago [-]
If I were to get conspiracy-minded:
Sora had to be shut down because it was the clearest, most consequential demonstration that OpenAI’s models are running way, way ahead of their ability to align/jail them effectively.
code_biologist 22 hours ago [-]
The Occam's Razor position (Sora was the most expensive to operate, least monetizable model) seems like a simpler explanation. The legal costs/difficulty on top of "most expensive" are just the cherry on top.
bloppe 21 hours ago [-]
What did Sora do?
yulker 22 hours ago [-]
probably more cost than anything. image and video gen don't have much in common with llms
emp17344 22 hours ago [-]
Nope. It was just a bad product that no one wanted. It’s not a super-secret indicator that OpenAI is actually going to take over the world any day now.
twoodfin 21 hours ago [-]
Not “take over the world” level misalignment. I mean, “We can’t assuredly prevent our models from generating unlicensed IP or degrading pornography without blunt approaches that alienate our core audience”.
taytus 1 days ago [-]
How much money did they burn on this? And for what? Nothing?
digitalsushi 7 hours ago [-]
They paid for an opportunity. Sometimes paying for a chance nets you nothing.
If you end up with nothing in aggregate for the chances you pay for, you're a loser. Not in a pejorative sense, just as a fact, you lost.
If you come out with more than nothing, in aggregate, you're a winner, in the same objective sense.
Probably controversial. Eh.
dev1ycan 20 hours ago [-]
Bahaha.
glass1122 5 hours ago [-]
Some of the best news in a long time, LOL!! Sooner or later expecting more good news from all this AI slop and BS. RIP my friend. Never used Sora or even visited the website. LOL!!
CamelCaseName 19 hours ago [-]
The owner of @Sora on twitter must be really regretting turning down the $20MM buyout offer for the handle!
r0fl 18 hours ago [-]
No way anyone is that stupid
That story can’t be true
efilife 14 hours ago [-]
Can't find anything about this
atleastoptimal 22 hours ago [-]
This will happen with most offerings made by the major AI labs. Inference is expensive, and the closer they get to AGI, the higher the opportunity cost of using compute for inference rather than training, especially if it’s for making what is essentially entertainment that many people hate on principle.
davebranton 22 hours ago [-]
Indeed. But they won't get to "AGI", because that goal isn't even remotely defined. A "human-level" intelligence implies a large number of properties that cannot exist inside an inference machine. Dreams, for example, might be considered to be a part of "human-level" intelligence. Will the machine dream?
What happens if you turn a "human-level" intelligence off? Did you kill someone?
AGI is a pipe dream - and moreover it's not even something that anyone actually wants.
supern0va 21 hours ago [-]
>Will the machine dream?
You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem solving abilities: https://www.nature.com/articles/nature.2012.11811
It is clear that whatever we are...creating/growing with LLMs, it is very unlike human intelligence, but it is nonetheless some type of intelligence.
atleastoptimal 22 hours ago [-]
AGI just means a machine, system, or whatever that can do anything at least as well as a human. The details don't matter as much as its ability to match humans in everything they are paid money to do.
And obviously if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it vs someone else, which is why every company is racing towards it.
The addictive toxic content will go the way of tobacco and explore new markets.
Back in 2010 around 11% of the population of Indonesia was connected to the internet. Currently it's closer to 80%, largely via mobile phones. That's approximately 200 million new users.
Nigeria and Pakistan are going through the same change, just started later.
Since 2016 India alone added more users than the mentioned countries combined.
That's a lot of first generation users. More than the entire western population.
Short form video is a special kind of crack. I see even old people getting hypnotized by it. And even worse, they're terrible at determining if something is AI.
Which is usually back to back with the thought that in bygone times "the human mind used to be cleaner / healthier / smarter and it was slowly destroyed by modern living"
There's not that much difference between our behavior and that of a chicken fixated on the chalk line in front of it.
Not to say it's a hallucination, but, by modern standards, if this were publicly funded research, it seems like it would have been a gross violation of ethics or other non-technical criteria. Interested to see how people think of it in later years, e.g., now.
In a sufficiently isolated population, you get the same effect from a sound-making greeting card, or a battery powered light and/or sound toy from a carnival.
And for what it's worth, tomorrow they don't miss whatever “indistinguishable from magic” thing, so no harm done.
// grew up near such areas
Is it?
I have the impression GenAI deteriorates the internet both from a content and tech perspective.
Bots that waste your time because they don't work well or because they are pushing an agenda, and low quality content that floods social media from people who want to make a quick buck.
GitHub and AWS became increasingly unstable. X, Instagram, and WhatsApp are suddenly sprinkled with subtle bugs.
Everything just got faster and we got more of it, but none of it is good anymore because everyone tries to replace 90% of their work with GenAI, instead of maybe starting at 10-20% and then adding more once they're sure it works.
I just have the feeling that it doesn't get the job done anymore.
I hope we will see the rise of alternatives.
I think it factors into why public perception is increasingly anti-AI. It'd be one thing if people were losing jobs, but on the other hand, their daily chores were done by a robot. Instead, people are losing (or fearing losing) their jobs, while increasingly having to fight with AI chatbots for customer support and similar cost-center use cases.
It's like AI is the "high fructose corn syrup" of tech. Nobody's arguing the output is better--it's just a lot cheaper and faster to get there, so that's its legacy. Making things cheaper and worse.
Saves the company a ton of money
I am not convinced. Nobody is making money, every player is losing money hand over fist.
Take Uber as an example: yes they've raised prices to become profitable, but not to the insanely profitable levels they could if they had a true monopoly. People will stay on Uber when the competition is still at a roughly equivalent price, but will switch if Uber raises its prices enough.
Uber Eats is different, since it's a three-sided market where the cost is paid by the restaurant rather than the user.
AI appears it's going to be more like Uber the car service. Claude can charge $200/month, but charging $2000/month seems unlikely to work. I'm sure many would be willing to pay $2000/month if they had no alternative, but there are alternatives.
I like to call this the "Yahoo Effect"
Some of that is seeking to kill competitors before they can get established. That's normal and has been around for generations, if not since trading was invented.
But most of what we've seen during the "enshitification age" has been to burn money until you achieve a critical mass of users. However, this only really applies to social platforms where the point of it is communicating with people you know. That's the lock-in. You convinced Grandma to join Bookface and now you feel bad leaving if she doesn't leave at the same time, and more importantly, who wants to join Google Square if nobody else uses it?
That's not going to work for AI platforms.
What I do see potentially working is one method that email platforms use to lock in users: having tons of data you can't export/migrate. If you spent lots of time training your AI by feeding it your data, that's going to make it harder to leave.
So far none of them have capitalized on this (probably due to various technical reasons) but I expect it to start eventually.
Coincidentally, bringing your own address that can be migrated away is somewhere between impossible and expensive.
https://www.zoho.com/mail/zohomail-pricing.html
A few DNS hosting companies still bundle in a few free email mailboxes with registration costs but that is becoming more rare.
So it is stated, but is it actually true? I am not convinced.
Besides, it's not as if they can suddenly stop training models; the moment you do that you've spelled a death sentence for profitability, because Google and open source will very quickly undercut a 15 year break even timeline.
I'll believe it when I see it.
OpenRouter makes it easy to use them, just add credits to your account.
I thought this was common knowledge to anyone looking to use an inference API, but it seems it isn't. Well, even AWS is in this business with Bedrock.
Because few people really care much about the commodity hosting world. They're not making waves, they're just packaging things made by others for a low-ish cost. They're also not very consumer-focused, as they're a bit lower level than what most people prefer to think about. It doesn't mean they don't exist or that they're not profitable though, just not headline-reaching numbers in the end.
Right but I think a lot of these use cases aren't replacing any jobs because it wasn't anyones job. It's just a little polish on existing work (did spell correction in Word kill jobs?) or the stuff that voice assistants have been promising for 10 years.
That alone is huge, if they let go of their egos about putting the entire white collar class out of work..
In this case, maybe not enough to offset the costs; or maybe it just wasn't addictive enough. But it's still early days.
I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.
I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.
It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).
I was talking to other people re: difference between code & other domains. Code is, for customer, what it does.. not how it does it. That is - we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability but.. it doesn't really matter for the customer if the code does the thing its supposed to do.
Many things like entertainment products don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters - dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, sound track, editing, etc.
For short-format, low-stakes stuff like online ads, though, the AI slop probably works.
Same for say making a power point. LLMs can quickly spit out a passable deck I am sure. For a lot of BS job use cases, that's actually probably fine. But if it is the key element of a sales pitch, really it's just advanced auto-formatting/complete, and the human element is still the most important part. For example I doubt all the AI startups are using AI generated sales pitches when they go to VC for funding.
A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.
Same with power point - you make a power point so that everyone knows this decision was made by the type of people who make power points. A txt file and a png would have gotten the job done.
Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.
What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.
OpenAI just proved you cannot burn money indefinitely.
You will have an agent, like your SEO expert; this agent will be able to use common tools like Google SEO, Facebook SEO, etc., and you will teach it how you want it to do its 'job'.
You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do stuff similar to whatever person was doing it before.
There might be some transition phase, like verifying the output of the real person vs. the agentic AI, then moving over to only validation until the agent is on average as good as a human. Then the human will be gone.
Agentic AI will take over basic support tasks first (it's actually already doing this), then more complicated things, etc.
For this we need an ecosystem, aka the agentic AI platform, the interconnect between agents and tools, and this stuff is currently getting built by someone one way or the other.
At scale we need more capacity, and these agents will also cost more money than a $20 subscription.
But if you have, let's say, an SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.
Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?
Ultimately, it's going to be you most likely, because I can't see AI firms taking this responsibility.
You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even if it ends in disaster, however you have a lot of agency over what an employee is doing. Their motivation is generally correlated with doing well, because past success ensures future career growth.
An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.
I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.
Encode more rules, more precise rules, and alert a human in case it thinks it's off. Like a salary increase of 20% gets flagged automatically. A revenue drop by x% too.
It could even go so far that the maker of these systems will insure you for their use.
It just needs to be cheaper than all the humans in the loop, and if you train it once, you can copy it unlimited times. The scaling effect of software, applied to tasks we currently need to train humans for again and again.
It could also be agent systems which do this. Like a company building and designing the HR USA Healthcare agent specialized in SAP HR. Another one for HR Brazil Healthcare agent specialized in another HR software.
Humans are really expensive, and you have to train them regularly, every single one of them.
I like the framing of trying explosive things to escape the pull of gravity. When applied to rockets, it means a lot of stuff blowing up, which again seems apt.
But I'm not sure we would even notice nowadays. It used to be a disaster that could take people's attention for years, but currently, it may get lost in the noise.
They're not, they just already have the habit formed with the place they go to do that. Ultimately anything worth seeing on sora will be reposted to Tiktok.
Having Disney on their side was def quite a smart/interesting move.
At least from one interview, they def had resource issues last year and teams had to fight for capacity. It can easily be that Sora was always prioritized down and they realized it doesn't make sense to spend that much capacity while then not being able to push their main model.
It reeks so much of desperation. They know they are running out of goodwill and money at breakneck speed. They are just flailing and throwing shit against the wall to see if anything sticks.
So they need to be able to do image generation, for which they need image data. They also need to be able to analyze videos for more and better training data, like learning or teaching their models from YT and other sources.
So they have image generation, an image dataset, and a video dataset. It's not far-fetched at all, or desperate, to leverage this base for playing around with video generation.
And despite how much money they burn, for a company that size, trying out video generation wasn't that high of a goal post.
I'm really surprised by their move, and can only imagine that the progress of other models from Google and Anthropic rattled them, and they no longer want to invest the compute (not money) there rather than on their main models.
Nano Banana created a lot of noise.
But the reasoning of Gemini 3.1 Pro is really, really good. It's hard to describe how good it became. I do not see the same quality from OpenAI. OpenAI, though, is also super fast in response, a lot faster than just a few months ago.
For example: some German guy misused a word in describing an advantage of having a silencer. OpenAI just said it's nonsense; Gemini suggested that it's a typo and he wanted to write something else (Gemini was correct).
It could also be that we are in a moat between "why is AGI not here yet" and "we need to build now the agentic platform stuff, that takes time".
Gemini Pro is def slower than OpenAI, and I do not know if it's because I use the pro version of Gemini but not of OpenAI. But it could also be that OpenAI has to work on subagents, because Gemini def uses subagents and I was not able to find a source that OpenAI is doing this too.
Obviously caveat emperor but there are a lot of real world scenarios like this.
I think Anthropic and OpenAi are trying to all cool and apple-y with their branding but these use cases are just tools getting work done. Most normal people don’t need or want AGI, or even AI slop videos. They just want their invoicing system to just f-ing work for a change.
Nobody ever really solved making CRUD apps easier through better frameworks. So now we have a tool to spit out framework gunk, and suddenly everyone can have their own app.
s/emperor/emptor
I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.
The best part is that they'll get popped because of it and have zero clue. Anyone building in any frontier provider currently, but with little background in software, is creating all kinds of new liabilities that didn't exist before.
In a school district where I live the IT department developed a password distribution app using Gemini on Google App Script (they didn't even need this part), sent out links with B64 encoded JSON that included: student name, student email, parent email and student password. Yet, when I found it and told them all the ways that it was technically a breach in our state they ran to their 2-bit "cyber security experts" and "legal". They were far more concerned with CYA than understanding the hole they dug themselves. And all of the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
The kicker here is they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students passwords). One of their cyber security "experts" (a lone guy who has zero credentials from what I found) told them that password resets using the IDP was "not recommended". When pressed on that they were, again, silent.
LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.
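The core mistake in the story above is treating base64 as if it hid anything. As a sketch (the field names and URL here are hypothetical, not the district's actual app), here is how trivially anyone who sees such a link can recover the plaintext:

```python
import base64
import json

# Hypothetical payload of the kind described: base64 is an encoding,
# not encryption, so it offers zero confidentiality.
payload = {
    "student_name": "Jane Doe",
    "student_email": "jdoe@example.edu",
    "parent_email": "parent@example.com",
    "password": "hunter2",
}

# What the app effectively does: embed the encoded JSON in a link.
token = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
link = f"https://example.com/reset?data={token}"

# What any recipient, mail relay, proxy log, or browser-history
# snooper can do in two lines:
decoded = json.loads(base64.urlsafe_b64decode(link.split("data=")[-1]))
print(decoded["password"])  # the plaintext password falls right out
```

Since email hops and logs routinely retain full URLs, every system along the delivery path holds the credentials in recoverable form, which is exactly why this pattern is treated as a disclosure.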
Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.
I hear you but at least as my bud described it, the software that most of the timber mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One would wonder if even the licensed software is hardened.
Ironically, starting your response with this guarantees a lot of people won't read it. It's the same as going on reddit and starting a reply with, "Nobody will see this but", and hoping that people try to prove you wrong by reading and commenting on it. I stopped after the first sentence. People really have to stop with the clickbait vomit way of writing.
Considering the million-plus view counts I see AI slop getting on FB and YouTube, I'm not seeing this behaviour play out.
If people don't read because the text is an unreadable mess, none of the points get across.
A long time ago on the myspace forums there was this slightly weird but also very wise and smart person who wrote without any punctuation or paragraphs, ever. Although they were generally liked and part of the community, I think I was the only person who read every single one of their comments in full, religiously, once I realized how insightful they were, and I was richer for it. I could have told them the obvious, how their posts differ from most others on the forums; and they would have posted with less joy and maybe less overall, that would have been it.
Nothing is new under the sun.
After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.
There will be (or is, I'm behind the times / not on the main social networks) an undercurrent or long tail of AI generated videos, the question is whether those get enough engagement for the creators to pay for the creation tool.
The AI art I have seen creatives produce is far beyond anything I have been able to come up with. We're not at the point yet where you can just prompt "Make me a video that is visually stunning and captivating" and get something cool.
ah, but what a persona that would be if you were a Kai's Power Tools settings menu!
.. such as? What's the "Mona Lisa of AI art"? Is there, like, a gallery? Awards?
TikTok and social media is a strange mix of both, people posting response videos to everything.
Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist. It's free, it's good enough, and it's not grating to hear after a few days of that favorite song.
The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.
But I guess the desire to create something that others would consume is also different from the desire to simply create.
This is a vocaloid break up song: https://youtu.be/9pQR4a5sisE
The first isn't bad by any means. There's a million break up songs and that's one of the best sad ones. Most are just... angry? Blaming? Empowering? They work fine. They sell records. Many have a billion views.
But the second one, even with the clunky translation, strikes somewhere deeper. It's written by someone who had enough time ruminating on a break up. The ending hits a little harder, because break up songs are about endings.
Both are sincere, but the first feels more formulaic. I'm inclined to think the first one is the soda.
I feel Suno leans towards this group of songwriters and poets who have something to say. Sora doesn't.
The musician in me just shed a tear
Hopefully AI outcompeting humans at slop sparks a renaissance of humans creating truly beautiful human artwork. And if it doesn't, then was anything of value truly lost?
I get my modern music from Bandcamp. If you can't find good stuff to listen to, that's a 'you' problem.
What are you talking about? There’s lots of modern music that’s not corporate slop and that’s absolutely great. Never in history was access to great music as easy as it is now.
From wikipedia: Many Daft Punk songs feature vocals processed with effects and vocoders including Auto-Tune, a Roland SVC-350 and the Digitech Vocalist. Bangalter said: "A lot of people complain about musicians using Auto-Tune. It reminds me of the late '70s when musicians in France tried to ban the synthesiser. They said it was taking jobs away from musicians. What they didn't see was that you could use those tools in a new way instead of just for replacing the instruments that came before. People are often afraid of things that sound new."
It's a neat tool for genuine creators, and a crutch for people interested in slop.
I wonder what OP categorises as 'mainstream'. As a classical musician this breaks my heart.
There are exceptions though. FUKOUNA GIRL by STOMACH BOOK, for example. AI can't come close to replicating something like this. Not the cover art, not the off-key voices, not the relatable part of the lyrics. I don't believe this is a top #100 song, though it certainly is popular.
So I quit riding the overpriced subway altogether and now consume AI-generated subway imagery and soundscapes for free; they are just good enough to feed my passion for boring tunnels.
Some ego-bloated edgelords had the nerve to tell me that there are, like, other modes of transportation, but I honestly find their high-horse elitism despicable. Damn morons.
There is a fundamental issue of trust here. Facebook has me tagged as history nerd so I get to see those slop videos. They are fun, but always superficial and often plainly wrong. So unless the slop comes from a known, trustworthy source, the educational element is simply not there.
For throwing an uppercut it's even more important, if you follow wrong slop instructions you can end up breaking your wrist or fingers.
You wouldn't care to order the food as I personally like it -- might be too spicy (or too bland) for your taste.
Suno songs are overtuned for personal preference in the same way.
^ this is important.
Otherwise you may very well be missing anything really surprising or novel.
See for example https://www.programmablemutter.com/p/after-software-eats-the... , an experience report of NotebookLM where
> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.
On the other, Google might not have done much to upgrade the podcast feature since then.
Sometimes I'll take deep research output and listen to it too that way.
This somewhat makes the whole of NotebookLM less useful, but still.
Having said that, I absolutely hate the audio format; I only used it when I had to drive or when I swam laps. But these days I do neither.
For example, I can give it 8 papers on best practices in online marketing, it will turn it into a 20 minute podcast.
There are errors, but also with real podcasters.
Or before! Either is mandatory to actually learn the content.
Those 100 videos probably cost $100+ for them to create. Did you pay them $100+? (not a criticism, just a re-framing)
24/7 titillation is boring
And this is the challenge that these tools have - they have to have a free tier to get people to explore it, but unless they can make it a habit, those people will never upgrade to a paid subscription.
I have no figures, but if I'm being optimistic, these freemium subscription services have a 10% conversion rate at best; can that 10% pay for the other 90%? For a lot of services that's a yes, but not for these video generators, which are incredibly compute intensive.
I'm sure there's a market for it, but it's not this freemium consumer oriented model, not without huge amounts of investments. Maybe in 5-10 years, assuming either compute becomes 10-100x cheaper / more available, or they come up with generators that run cheaper.
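The "can 10% pay for the other 90%" question is easy to sketch as back-of-envelope arithmetic. All the numbers below are hypothetical assumptions for illustration, not figures from any real service:

```python
# Back-of-envelope freemium break-even math. Every number here is a
# hypothetical assumption, not data from an actual video-gen product.

def breakeven_price(total_users: int, conversion_rate: float,
                    cost_per_user: float) -> float:
    """Monthly subscription price at which the paying minority
    covers the compute costs of the entire user base."""
    paying_users = total_users * conversion_rate
    total_cost = total_users * cost_per_user
    return total_cost / paying_users

# Assume 1M users, a 10% conversion rate, and $5/month of compute per
# user (video generation costs far more per user than text chat).
price = breakeven_price(1_000_000, 0.10, 5.0)
print(f"${price:.2f}/month")  # each subscriber must cover $50.00/month
```

Under those assumed numbers, each paying user has to carry their own compute plus nine free riders; if per-user compute for video is an order of magnitude above text chat, the required subscription price climbs out of consumer range quickly.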
There's some market for b2b I'm sure, but as a consumer facing product it's tough to see how it could ever come close to paying for itself.
I think this is starting to play out.
When I personally see a blog post which didn't need an image, but still does have an AI-slop image banner, I mentally check out. I might have Claude summarize it, or (more likely) just skip it altogether.
Essentially you are watching the same videos over and over subconsciously
Procgen has a niche, but it never became ubiquitous, because for most people exploring a nice hand-made intentional environment is better.
I think people attach to other people more than “ai”. When there isnt a narrative “person” behind the content it is way less interesting.
First it looked like it was crazy inventive, good at writing snappy dialogue, and in general a very good font of ideas.
Then the same concepts, turns of phrase, story ideas kept reappearing, and I kinda soured on the concept.
I haven't done it in a while, but that kind of usage really shows the weakness of LLMs - if you keep messing with its generations, editing what it made, then as the context length keeps increasing, it's more and more likely it goes into dumb mode, where it feels like talking to GPT-3, constantly getting confused, contradicting itself, etc.
https://news.ycombinator.com/newsguidelines.html
Sometimes people want to paint, sometimes people want a painting.
To have a wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.
Read the main comment out loud to yourself while imagining it’s someone sitting at a table at a pub.
Now imagine someone turning to this person in the pub, and speaking the subsequent comment, word for word.
No seriously, try it out.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."
If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from Open-AI or whatever. We're on HN discussing a post on X about Sora.
The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.
You didn't at least puff a little ack through your nostrils for that one?
Sora was the first product OpenAI shipped where I felt that fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is to make an app that just makes meme videos? I mean, c'mon!
Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there? Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT 3?
I recently used GPT for the first time in several months (I'm a daily Claude user) and didn't find this at all. It is most certainly trying to pull you into engagement with how it ends each response: "If you want, I could tell you about this thing that's relevant to what you are discussing," teasing just enough so that you addictively answer yes.
Not about Sora, but about ChatGPT. I felt the same way for quite a while until I noticed that its response pattern has changed, apparently aiming for higher engagement. Someone aggressively pursued a metric.
At some point, ChatGPT started leaving annoying cliffhangers in its every response, like "Do you want me to share a little-known secret of X that professionals often use?" Like, come on!
To me it seems it was "Disney gets shares and we get to use their characters in Sora".
Even if Sora breaks even, why would you gift Disney stock? It's not like they actually gave $1B to OpenAI.
I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
He is a con man. Of course he’s charming and convincing, that’s how he ended up where he is. But he’s just as full of it as Musk when he was waxing lyrical about saving the world and going to Mars. They lie very convincingly.
I think his board fight within OpenAI, where he essentially lied to the board, his obsession with retinal-scanning everyone for his biometric cryptocurrency (Worldcoin), and how he left Y Combinator are all evidence that he's not very heroic. Most cringe to me is that he and many others seem aware that what they are doing is corrosive and harmful to society on some level, as Altman has admitted to having a bunker somewhere around Big Sur [0]. Which… WTF.
[0] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Not too familiar with that history, but he still is listed as a courtesy credit/reviewer at the end of PG's blog entries, so I assume he didn't have too much of a bad exit?
This is a conflict of interest, and I think a very obvious one. He tried to have it both ways and was forced to choose in the end. I think putting himself in that situation rather than resigning up front to pursue OpenAI ambitions says a lot about his character.
It could prevent suicide, maybe, but we know that it does cause suicides, at least in some cases. Seems like a poor value proposition.
The things he does is convince investors to give him billions of dollars to build what he wants. Where exactly does that leave us?
To me, this just came off as pathetic. It hasn't solved anything and there's no reason to believe it ever will. The whole question is completely pointless except to put the idea in viewers heads that ChatGPT will soon revolutionize science, with no actual substance behind it. It's not even a question, there's only one possible answer. He's holding the guy verbally hostage just to manipulate dumb viewers.
So anyway that's the only memorable clip I've seen of Sam Altman, and based on that alone, fuck that guy.
Altman's reaction was very telling of the kind of person he is, just immediately lashing out at Gerstner in a childish way, asking if Gerstner wanted to sell his shares because he could find a buyer in no time.
It was a pathetically immature reaction, I wouldn't expect that from any kind of professional, even less someone who has held positions as Altman has and now sits at the top of the leadership for a company sucking hundreds of billions of investment.
Apart from that clip there's also the whole saga of sama @ Reddit, full of lies, deceptions, and the same kind of immature attitude peppered across Reddit itself.
After glazing OpenAI and Sam personally for 45 minutes straight. But as soon as Sam was questioned in the slightest, he exploded.
https://www.youtube.com/watch?v=zrgEZ8FeZEc
I think if you had to foot the bill for generating a bajillion gigabytes of slop with no real utility, you wouldn't be too mystified.
They showed off their technology and proved it was impressive. That's all it had to do.
I'm curious if you still feel this way about current iterations of ChatGPT? It seems like it's now primed to engagement bait the user, especially when used through the web UI. You can ask it a simple question with a straight forward answer and it will still try to get you to follow up with more.
> What is the minimum thickness for Shimano M8100 disc brake rotors?
> For Shimano XT M8100-series rotors (like RT-MT800 / RT-MT900 commonly used with M8100 brakes), the minimum thickness is 1.5 mm. If the rotor measures 1.5 mm or thinner, Shimano says it should be replaced.
> (a bunch of pointless details in bullet points)
> If you want, tell me the exact rotor model (e.g., RT-MT800, RT-MT900, size), and I can confirm the spec for that specific one and what typical wear looks like.
The entire query could have been answered with "1.5mm". The "if you want" follow ups are so annoying.
I suspect they promised synthetic movies but it quickly became clear that they were never going to be able to deliver on this.
Slick fifteen second lulz-clips, sure, but I don't think they can make several of them consistent enough to fit into a larger video narrative without the audience finding it jarring and incoherent.
Perhaps legal at Disney also concluded that the output wouldn't be possible to copyright, which is their core business.
My guess is they over-committed server/energy resources, since generating a single second of video meant rendering ~30 full frames, with results that might be discarded and retried.
Now that energy costs are increasingly unpredictable because of the war, they're prioritizing what is sustainable. They're willing to blow up the $1 billion Disney deal for Sora, because that popular IP would have driven up discarded server time.
Might be why the latest Iran propaganda video could be created in PowerPoint: https://bsky.app/profile/rachelbitecofer.bsky.social/post/3m...
(This sort of question, and the Grok sexual abuse, is why I'd like to see mandatory invisible watermarks on generated images/video)
Most people serious about this stuff usually have their own pipelines.
These are open weight models, so you can fine tune them on Lego content… But presumably they already have enough training data since they were made by Chinese companies who don’t give a shit about Western IP rights.
I'd like to know what self hosted models they've been using, if any, and who provided them, trained on Lego IP.
Not a great look that either the teams responsible for Sora didn't know this was coming or the decision was so brash that things changed overnight.
In practice, people would just generate the videos with the app and then post them on regular social media, in which case OAI would not get the ad revenue for that.
It's the age-old "your product is just a subset of another product".
The other one is TV ads/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k. Within a couple of days, I can make a video ad with maybe $50 in API costs. Cost of production is so crazy in marketing.
Obviously this is under the assumption that AI is good at doing either of those things, which it hasn't been so far; the best I've gotten is making b-roll shots to stitch together for an ad.
Most people do not care about the technology, and frankly they don't want to know about it. They want great experiences. That's it.
Technologists seem to have a reallyyyy hard time getting it.
Not every place has LEGO incest porn… or whatever the kids are into these days.
1. There's an AI-based virtual girlfriend industry that mixes text and images
2. There's an AI-based virtual boyfriend industry that is essentially all text (and not always distinguishable from the normal chat models)
3. There's a much shadier AI-based "undress this specific woman" industry
https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...
revenge porn or deepfakes in general are hugely harmful to people.
in the german-speaking world there's a scandal right now about a husband creating deepfakes of his wife, https://www.hollywoodreporter.com/movies/movie-news/christia...
> One fake video, which she claims was sent to 21 men, depicted her being gang-raped
i think you're taking this topic lightly because you just assume that it's not a big deal. try to keep in mind that people's mental health and with this their life is at stake.
as with lots of things, the problem is not the tech itself, but the existence of men. it's not all men, but it's usually men. not sure how we'll solve this issue.
Yes, revenge porn is very effective at causing harm, even though it can be generated.
No, because 'plausibly deniable' has never worked for social consequences and shame.
Yeah, marketing. Which is a huge market...
It's not just dirty talk. It's a whole new paradigm in verbal filth.
On the topic of sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf etc. all casually riding into a generic medieval town together, and if you showed that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.
This is a "seismic shift" in the sense of the Big One hitting California. The knock on effects of trust erosion caused by AI are going to huge and potentially unrecoverable.
I've no doubt that content creators outside of social media were using it as well, either for their brand or other video work.
Yes we see AI reels all over the place, but that's not only what it was used for
I guess you haven't watched hours of AI cat videos cheating on their husbands with bulls, or Lemons having babies with strawberries and fighting over custody of the child. It's absurd, it's stupid and I know it's a waste of time but I have to admit that it amuses me. I'm quite sure there are millions like me that just want some downtime to relax at the end of the night and end up watching slop like this.
It was legitimately fun until the IP guardrails came up and we couldn't do anything with the characters and culture we know.
If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.
I'd rather eat poison
Big IP is strong arming OpenAI, Suno, and all the rest.
It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands and IPs at a fast enough rate to displace the lack of being able to use corporate IP.
I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.
People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.
People want to generate IP.
edit: clarity
It opens the precedent for those creators to now also hold these companies responsible. That’s not a bad thing under the current legal system in this way.
Also, seeing genuine original creations created with AI assistance is much more interesting to me
The great disappointment about how all of this is marketed is that what AI should be good at doing - enhancing a tiny budget - is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange, I want some weirdo's fantastical horror movie that he could never get financed, but was able to green-screen and use AI to generate everything. I don't want a goofy top-40 country song full of silly lyrics, I want musicians to use AI to generate new sounds as part of composition.
In the same way that there's a difference between vibe coding and using a coding assistant...
As a onetime semi-pro musician, with decades of live performance and sound design experience:
I would rather burn my beloved instruments publicly and pee on the fire.
Integrating AI with existing tools to improve productivity is harder and requires effort and investment...
Could you use the bullshit machines to generate sounds that were nuanced, musical, and original, with enough time and effort?
Maybe. I'm not sure original is something they can do, but it's not totally implausible.
I would strongly recommend learning to use other tools for that purpose, instead of feeding the plagiarism monstrosities.
I understand your entire world model is shaped by your past and that this machine is changing the fundamentals.
As an outsider to music, I'm excited that I have access to something I previously did not through the use of Suno and other tools. I'm excited that I can come in and just try things and not hit a skill wall or quality barrier that would cause me to quit with the limited time and effort a working adult has. It's something I've wanted to do for a long time, but just never had the time for.
Attempting to learn costs thousands of hours before you can even start to feel good about it, and I don't have that time. Life is short and I'm already thinking about the end.
I used to be sympathetic to folks with your view, but now that programming and engineering are impacted by this - I'm in the crosshairs too. I'm subject to the same forces.
I've decided I love this tech even more. Claude Code is a tool, just like all of these other tools.
This rising tide of capabilities is so awesome. This is the space age stuff I dreamed about as a kid, and it's real and tangible.
So no, I won't restrict myself to your set of pre-approved tools. I'm going to have fun and learn my way.
And it is fun.
You can keep having fun the way you like to. What other people do shouldn't be ruining the fun you have, and if it is, then you should reevaluate why you do it.
Taking away the precision, control, and serendipity afforded by modules and cables, or a programming language, and telling me "Just describe what you want and the plagiarism machine will spit out whatever correlates with that description on average" would destroy everything I love about synthesis.
> It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands
The problem is, to create a brand, you need to be able to protect it against rivals either ripping you off, or diluting it.
The same mechanism that protects "big" IP also protects everyone else, even the small players.
> they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses
They already do that for music. But the issue is this, if we want culture, we need to find a way to pay for it. Is it possible for a bunch of mates to make enough money to live on playing in a local band? not really. They can only really make money if they either have a viable local gigging scene, or large enough online following to sell merch/patreon.
The big IP merchants were quite keen on videogen, because they sense that it's possible to cut out the expensive artists. If they don't have to pay actors, writers, and artists, then it's way more profitable for them. This is part of the reason why AI hasn't been hit with the Napster ban hammer.
I think the other thing to remember is that creating good IP is hard, and you can't really just pull it out of your arse after 5 minutes. The original seed takes a long time to refine, test, and evolve. Even the half-arsed sequels require work.
Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.
Or the novelty wore off in about a week, and then after that it also became harder to generate videos of baby yoda at Westboro Baptist Church protests
If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.
Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.
Where can I get this data?
I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…
https://variety.com/2025/digital/news/youtube-trending-page-...
Bummer. It used to be at:
https://www.youtube.com/feed/trending
So last year, these were the top videos:
https://web.archive.org/web/20250324155132/https://www.youtu...
There's this, but it's nowhere near as good as seeing the actual videos:
https://trends.google.com/trends/explore?gprop=youtube
It's not an exaggeration to say that this is how millions of people use Facebook. It might be not how most HNers use it, but create a new account and you will be absolutely funneled toward prolific producers of video-based AI slop.
But the problem is that FB and Tiktok (and to a smaller extent, YT Shorts) have cornered the AI video doom scroll market, and no one really seemed to be inclined to use Sora and related models for anything more creative. Which probably made it not worth subsidizing.
Sora (whatever that means) was one of the most astounding demos I've probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes months later everyone can do it and is bored by it and has strong opinions about what is right for society or not.
But it was a monumental piece of tech, and I personally (clearly incorrectly) think the top comments should be appreciative of the release and its impact.
Personally I think the lack of nudity destroyed the adult market. But I don't know enough, tbh.
So far that’s been exactly it. Now AI generated videos are primarily used to scam, deceive, and ragebait.
I really don't see the argument for this tech to be any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND are happy about the people who are in control of this tech. which I guess captures a larger part of the HN crowd than I'd hoped
GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to improve our foundations of trust to something that relies less on the good will of random authorities onto something more objective.
Also, I haven't really seen anyone celebrating the large corporations who control AI tech. Could be simply the people I'm involved with, but most AI enthusiasts I've seen are more about, at least, open-weights AI models.
https://www.youtube.com/watch?v=5lNzYP6SjVY
If you are autistic, I feel that it causes you to see reality more accurately than most here on this thread.
[0] https://x.com/nikitabier/status/2029024577624650041
The impact of easy AI generated video is a less certain and less secure world. You can't trust your eyes anymore because of how fast and easy it is to fake video and moments. You can't trust communications with someone because how easy it is to impersonate them over video and voice. Scams involving tools like this are already running rampant and it will only get worse. The sheer level of distrust these tools have unleashed into the world makes me wish they never existed. They have burned millions (billions?) of dollars on this when that money would have been better served going to the creators whose work they stole to build it. It's rotten.
As we've seen from Grok, building a system for producing non-consensual nude images of other people will get the legal and PR hammer brought down on you fairly quickly. It's just an incredibly unethical thing to do.
I also use ChatGPT as my default search engine and to help me learn Spanish.
Image generation and video generation were a nice parlor trick, but weren't useful for me except for making icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
Did you just make that up?
Grok barely makes "M-rated" nudity, let alone porn. Musk recently claimed it can do "R-Rated content", but his post got a community note saying otherwise.
https://x.com/elonmusk/status/2031989543529038103
Grok has gotten a lot stricter about video from uploaded images. But it is still able to make realistic x rated porn from AI generated images it creates.
There are various jailbreaks that have been working for the longest and still work, just a brief look, half of them just involve “anime borders” and “transparent anime watermarks” over videos.
https://www.forbes.com/sites/martinadilicosa/2026/01/09/grok...
which is what I would hope would happen, but they're probably fine not thinking about the consequences of their actions looking at their 7 figure salaries
[0]: https://www.azquotes.com/quote/834918
Me: damn that’s cool …………AAAAAHHH HELP ME
Doesn't matter if you agree that would happen, the analogy is valid - you're essentially admitting that you're ignoring the negative impacts of the tech for the sake of how impressive it is.
I have said about 3 times I am solely judging tech by how impressive it is technically.
I have no idea who you are arguing with.
Nothing exists in a vacuum and the way technologies affect people living in the world is a fundamentally important aspect of the technology itself. To ignore them would be like celebrating a cool new engine design but overlooking the fact that it has a tendency to explode and kill everyone in the car. If the primary effect of a technology is human suffering, then it isn't cool!
It was a party trick. I can't remember the last time I touched it. That's what Sora is, or was.
There were social games that used it as a feature, and it was fun when it worked, but it had to be disabled soon as it drained the battery so fast.
Coding is where the money is. https://news.ycombinator.com/item?id=46432791#46434072
That narrative will implode like Sora later this year.
Then of course the hype collapsed, and now even the use cases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and at visualising complex designs while doing 3D design.
I see the same with generative AI and LLM. It's really good with programming. It's definitely good at making quick art drafts or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good at everything it's being sold as. Just like during the VR craze, they're dragging it by the hair into use cases where it has no business being. A lot of these products are begging to die.
For example, as an automation tool driven by real-world language. For that it's a disaster: it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not very good at meeting summaries, especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear, but a realignment to the use cases where it actually adds value? Yes, I hope that happens soon.
It is astonishingly poor at this. My intuition was that it should be good at it (it is basically a translation problem, right? And LLMs are fundamentally translation systems), but the practical results are so poor. Not just mis-identifying speakers (frequently saying PersonX responded to PersonX) but drawing completely opposite conclusions from what was actually said.
I'm genuinely intrigued as to what approaches have been taken in this space and what the "hard problem" is that is stopping it being good.
Generating pointless AI videos for pocket change or ad revenue is a loser in comparison.
However, I don't know a single developer who pays "thousands of dollars a month", not sure how you'd end up like that.
The top down push for AI is in line with the age old traditions of replacing highly skilled and highly compensated trade workers with automation. The writing is on the wall if folks care to look; many just don't want to. This has happened 1000 times before and it'll keep happening in the name of "progress" in capitalist systems for as long as there are "inefficiencies" to "resolve." AI is meant as our replacement, not as an extension of our skill as it happens to align with today.
It's increasingly obvious that the next phase in the evolution of the average programmer role will be as technical requirements writers and machine-generated-output validators, leaving the actual implementation outsourced to the machine. Even in that new role, there is no secret sauce protecting this "programmer" from further automation. Technical product managers eventually fall to automation too, given enough time and money poured into automating the translation of fuzzy, under-specified ideas into concrete bulleted requirements, where they can simply review the listed output, make minor tweaks, and hit "send" to generate the list of Jira-like units of work to farm out to a fleet of agents wearing various hats (architect, programmer, validator, etc.)
The above is very much in progress already, and today I'm already spending the majority of my time reviewing the output of said AI "teams", and let me tell you: it gets closer and closer to "good enough" week by week. Last year's models are horse shit in comparison to what I'm using today with agentic teams of the latest frontier models (Opus 4.6 [1m] currently, with some Sonnet.)
Maybe we're at a plateau and the limitations inherent in GenAI tech will be insurmountable before we get to 100% replacement. But it literally won't matter in the end as "good enough" always prevails over the perfect, and human devs are far from perfect already.
I have been producing software (at FAANG scale) for several decades now, and I've been closely monitoring GenAI systems for coding specifically. Even just a few months ago I'd get a verbose, meandering sprawl of methods and logic scattered around the actual deliverables outlined in the prompt, sometimes with clear disregard for the requirements laid out, or "cheating" on validation by disabling tests or writing ones that don't actually do anything useful. Today I'm getting none of that. I don't know what changed, but I somehow get automated code with good separation of concerns, following best practices and proven architectural patterns. Sure, with a bunch of juniors let loose with AI you still get garbage, but that's simply a function of poor delegation of work units. Giving the individual developer and the AI too much leeway in the scope of changes is the bug there. Division of work into small enough units is the key, and always has been for the de-skilling portion of automating skilled human labor away to machines. We're just watching Marxist theory on capitalist systems play out in real time in a field generally thought to be "safe." It certainly won't be the last.
So a good PM running 1-3 teams will only need 1-3 agentic AI teams instead.
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
No, by far no. I'm, by all accounts, a "decently skilled human", at least if we go by our org, and it blows anyone out of the water with some slight guidance.
And the most important part: it doesn’t get tired, it doesn’t have any mood swings, its performance isn’t affected by poor sleep, party yesterday or their SO having a bad day.
Modern models like Opus / Gemini 3 are great coding companions; they are perfectly capable of building clean code given the right context and prompt.
At the end of the day it’s the same rule of garbage in -> garbage out, if you don’t have the right context / skills / guidance you can easily end up with bad code as you could with good code.
Even with years as a principal engineer at a company with high coding standards and engineering processes?
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
I also wonder if they got the $1B from Disney? Was that even a paid for deal? Or just another "announced" deal? Every article I found doesn't mention anyone signing any paperwork - which seems to be typical of AI journalism these days. Every AI deal is supposedly inked but if you dig deeper, all you find are adjectives like proclaimed, announced, agreed upon.
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
If anything, software engineers have spawned uncountable numbers of jobs that never would've existed before, is what my intuition tells me.
I never understood what this app was about. TikTok (and I would argue most modern social media platforms) isn’t really about sharing things with friends, it’s about entertainment. Most people watch TikToks and YouTube videos because they are entertaining. Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.
I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.
My "favorite" part was how the post-generation checks would self-report. E.g., it was impossible to make a video of an angry chef with a British accent, because Sora would always overfit it to Gordon Ramsay and flag its own generated video after it was created!
[0] https://news.ycombinator.com/item?id=39386156 - only one mention of "AI slop" in the entire thread, though partial credit goes to "movieslop".
> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.
[0] https://en.wikipedia.org/wiki/Sora_(text-to-video_model)
For example, early TikTok had the Boss Walk.
Sora had no big content trends, only many micro-trends within some established ~universe.
If I see an AI video and my options to participate are… prompt another AI video? What’s the point
I think they are in serious trouble, especially with the size of their cash burn. Their planned IPO could easily turn out to be their WeWork moment where the bottom suddenly falls out on the valuation if they cannot make their operation look more like a real business before investors lose confidence.
Will be interesting to see.
ChatGPT is an interesting product - I like it for certain things - but after last year's PR scramble almost all the news out of OpenAI is a disappointment, with hovering hints of retrenchment.
Kind of insulting to lump Google in with xAI? Like, is anyone even using xAI other than backwater government agencies?
xAI doesn't have "content moderation" around adult content, so that usage is quite popular.
https://www.hollywoodreporter.com/business/digital/openai-sh...
As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).
Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
Totally disagree.
if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.
Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.
The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.
If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"
The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.
https://mochi1ai.com/
https://wan.video/
and others. There are free to use tools also.
I really don't think that using that term is appropriate when there's a multi-billion-dollar American megacorporation involved in the activity in question.
No it didn't; OpenAI had control.
Saying Sora democratised video generation is like saying that landlords democratised home ownership.
I feel like they are sailing into a red ocean with what look more like copycat tactics than innovation (e.g., Codex v Claude Code; Astral v Bun)
- sora was not great at making what you asked
- i probably got 3 good videos out of 100 gens
- every video that was good needed editing outside of sora (and therefore could not be shared within sora)
just my experience
I’ve given it different levels of open-endedness: give this flow chart an aesthetic like this mechanical keyboard, or generate an SVG of this graphic from a 70s slide show. But it never looks quite like what I have in mind.
In the end, I think you only use this stuff to generate images if you’re prepared to accept whatever comes out on approximately the first try.
When it does, it's more likely to be something popular and unoriginal, where the data is dense, and less likely to be something inventive and strange.
I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.
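For illustration only, here's a rough sketch of what such a DSL might look like: a few structured fields compiled down to a deterministic prompt string. The syntax, field names, and compiler are entirely invented for this example; no such DSL exists for these models.

```python
# Hypothetical "shot description" DSL compiled into a prompt string.
# Everything here (field names, syntax) is made up for illustration.

def compile_shot(spec: str) -> str:
    """Turn 'key: value' lines into one deterministic prompt string."""
    fields = {}
    for line in spec.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    # Fixed field order makes the output reproducible and diffable,
    # unlike free-form English prose.
    order = ["subject", "action", "setting", "style", "camera"]
    return ", ".join(fields[k] for k in order if k in fields)

prompt = compile_shot("""
    subject: an angry chef
    action: shouting at a line cook
    setting: cramped restaurant kitchen
    style: handheld low-resolution phone footage
    camera: slow push-in
""")
```

Even a toy compiler like this would give you something prose can't: two people writing the same spec get byte-identical prompts, and a changed field shows up as a clean diff.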
That will likely happen in the specialized fields. We can already see tools like Figma, Miro, and others that generate functional-ish frontend components in full TypeScript with corresponding styles (that are also selectable and configurable in the interface). They are not quite as free-form, since they load their base framework and components to ensure consistency and sanity/error-checking, etc., but even then they are in fact generating usable, modifiable components that you can engage with precisely in your normal DSL.
For video, this likely exists, or is being worked on as we speak. All specialized domain tools will go towards this model to allow those domain experts to use the tools with the precision they expect AND the agentic gains we already take for granted.
My experience with AI image generation is similar, although with a higher success rate (depending on how accurate you want the result to be); but indeed, filtering is a major part of the process.
A lot of YouTube content is really talk, so it was easy to create Sora videos as video content while you talked over them.
However, its failure was that it watermarked everything. WTF? Leonardo didn't do that. Neither did other models. So while video gen was excellent, you always had these ridiculous floating watermarks.
They probably see how much Anthropic is absolutely crushing them in developer mind share (see, people who buy tokens) and want a piece.
So strange that they fell behind after leading the charge on video from Will Smith spaghetti through the spectacular launch of Sora.
Turns out anyone can get that look by appending “like an Octane render”
Beyond that, competitors like Kling and Hailuo quickly surpassed them on product, and OpenAI never even attempted text-to-3D, as if they are entirely uninterested in rich media.
OpenAI reminds me more of Meta than any other company. They’re both pioneering in their space and yet are mere commandeers (not innovators) when it comes to technology and importantly end user products.
They’ll also be extremely valuable, like Meta due to their ad product and ever-growing user base over the next 10 years, and I guess by focusing on code they plan to capture a segment of the developer market à la React or Swift.
Will OpenAI release a language or framework? An IDE? I bet the chat paradigm stays for the ad product and aging user base (lol) while the exciting innovation will happen in code automation and product development - an area they are not really experts in.
There are so many video gen models out there, and given the cheaper Chinese models I’m not surprised they closed this down. Beyond the initial push, any marketing regarding video gen has always been about the Kling or Higgsfield models. There was just never a reason to use Sora.
Just because one thing is a lesser/different kind doesn't mean we can't also be vigilant about it as well.
> RIP to one of the most evil products I've seen come out of the tech industry in my lifetime.
I'm saying Sora isn't even in the top 100 of most evil products out of the tech industry.
There's nothing inherently evil about a knife. Standing outside of a high school and handing a knife to every kid walking in is pretty evil though.
Yes, literal weapons are bad, too. But that's not the current topic.
https://openai.com/index/disney-sora-agreement/
Disney Exits OpenAI Deal After AI Giant Shutters Sora
https://www.hollywoodreporter.com/business/digital/openai-sh...
Also "exit the video generation business" seems somewhat notable, suggesting they're not just planning to launch a different video gen product to replace Sora?
I used to think they were pretty clever, but with this news and other recent ones (Jony Ive project cancelled, Stargate scaled down significantly, their models inflating token use on purpose) they just seem schizo.
Idk if it’s because I set Codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too, e.g. I have a Codex session which says ~500M in / ~2M out.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.
Just that they took down some "io" mentions because of some trademark dispute with a third party "iyo".
1. OpenAI killing off their own products aggressively, taking a page from Google’s book. (I think the way you meant it)
2. Products/companies that no longer exist because OpenAI, or AI in general, made them obsolete. (My first instinct when reading it)
What would you place here anyways? Chegg and Stack Overflow?
Weil's now heading "AI for Science": https://www.pymnts.com/personnel/2025/openais-chief-product-...
* It was (assumedly) expensive to run.
* It was not good enough for customers to seriously pay for.
* There were too many content restrictions for it to be fun for most people.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
I think it's clear cloud-hosted is the actual future, which people have predicted for decades. It will never make financial sense to duplicate what you can get for cheap, because it's oversubscribed, with economies of scale and "if we let this run idle it's losing us money" pressure, for hardware found in a datacenter.
This has been the case for a long while now, and will increasingly be so as data centers buy up everything.
With open models, you have multiple providers competing on inference speed, quality, and price, leading to healthier market without lock-in.
I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora i don't think they have good options left.
Never once did I bother to browse videos made by others on Sora itself. I wonder if anyone did.
Then they killed Dall-E 2 and my credits vaporized.
Anybody found themselves in the same situation? What have you done?
If it cost too much and others can do it cheaper, that looks bad from both fronts.
Also, for a company carrying "open" in their name that pretends to still remember its origins, they could at least open source the projects they sunset...
Offerings from Kling and ByteDance are considered much better.
This sounds like there would be some kind of revenge, but I struggle to imagine any kind of consequence. Did you have something in mind?
We learned two things from this debate:
1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.
2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.
I expect the same thing to happen here. I don’t think many people want to consume AI generated content exclusively (like Sora’s app attempted). However I expect AI generated content to continue to improve in quality until it’s used as a component in most media we consume. You and I will eventually stop noticing it and kids will be raised with it as normal and the anti-AI millennials/GenX crowd will age-out of relevance.
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet..
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
But it was largely fun to try to transgress against the limitations. Who could trick the AI to generate something outlandish and ridiculous.
Maybe it achieved its objective?
So whatever reason they say to shut this down, it was more important than 1B investment.
It says a lot about the current economy that consumers have no money. Will companies just stop making consumer products?
Yes. I have noticed that it is close to impossible to get good deals on flights, hotels, or even good discounts online. Sellers have all the information from consumers that they need to maximize their profit and extract the maximum amount from consumers. Dynamic pricing is making it a personalized experience, so I personally pay the maximum I possibly can.
No room to get a fair price anymore.
Let’s be real: OpenAI is circling the drain.
The company with the fraudster serial liar CEO who said he was gonna spend a trillion dollars can’t keep a video service alive right after signing a $1 billion deal with Disney?
What kind of a joke is that?
This is a company that has blown its opportunity twiddling around with zero product. They still just run a plain chatbot interface with zero moat and zero stickiness.
There’s no “pivot” for a company that is in this deep.
I'm no fan of Altman or OpenAI, it's a pretty shady company and I am suspicious of their books, but this was a great demonstration of the uselessness of boards and how out of touch they are with the business they are supposed to be supervising. It's really rare to find an effective board, primarily they sit like a House of Lords enjoying ceremonial perks and a stipend in exchange for holding a few meetings a year.
But now that the deal is off, I'm sure their legal team will attempt to once again change copyright law in their favor.
Disinfo AI videos and the Coca-Cola Christmas ad have also really soured my expectation of genuinely positive creative uses of video gen for the next couple of years, until more improvements are made and I start seeing stuff go viral for being good instead of just being weird. I am still surprised that Sora never had the Grok problem of generating CSAM or seemingly anything along those lines.
I can appreciate that the technology and research behind Sora could be helpful for many things, but I do not see anything good coming out of the consumer facing application.
Sora was a perfect example of using a lot of compute to generate video -> we need a lot of GPUs -> a lot of RAM -> energy and land.
I am predicting that in the next 6 months the RAM shortage will soften, though not by much, because the war in the Middle East will have an additional impact for some time.
I think OpenAI had a brief delusion that it could become some huge social networking app. The app was heavily modeled after TikTok.
And two at Meta[2]: "A rogue AI agent at Meta took action without approval and exposed sensitive company and user data to employees who were not authorized to access it"
"director of alignment at Meta Superintelligence Labs, described a different but related failure in a viral post on X last month. She asked an OpenClaw agent to review her email inbox with clear instructions to confirm before acting. The agent began deleting emails on its own."
Even Elon Musk has shared the wisdom to proceed with caution! [3]
1. https://dev.to/tyson_cung/amazon-lost-63m-orders-after-ai-co... 2. https://venturebeat.com/security/meta-rogue-ai-agent-confuse... 3. https://x.com/elonmusk/status/2031352859846148366
Any platform which focusses on AI generated videos is doomed.
sir, have you seen tiktok?
https://finance.yahoo.com/news/openai-sora-app-struggling-st...
I don't do design, or make videos, or ask AI for legal advice, or medical advice, because I lack the skill and understanding of these fields. Dunning-Kruger still applies...
There is interesting "AI" content out there, clearly the person(s) behind it put some thought into it and had a vision.
Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts which generate engaging video.
Maybe. OpenAI shuttering Sora is in line with them shifting focus towards B2B sales, instead of B2B2C or B2C.
Interestingly, Aditya Ramesh, who iirc was the Sora 1 lead, is now "VP of Robotics" at OpenAI per his Twitter bio: https://x.com/model_mechanic
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
There's a web interface as well.
I had thought this would be combined with OpenAI launching a set top box where you could talk to an AI avatar. Disney IP could have been skins to sell people for their AIs.
Hustling just to barely stay afloat or drown means no time to compete with our own output.
America is a financially engineered joke regurgitating its own recent history, collapsing like an LLM trained on its own output. The rich aren't even pretending it's "a free country" anymore; they have enough wealth for however many years most of them have left to live, and having seen the apathy to their own plight keep the average person in their lane, they don't fear the public.
It'll all collapse as they generationally churn out of life, and the Millennials on down, with zero skills but "data entry into a computer," will be holding an empty bag, taking orders from foreign nations that bought up all the American businesses we built.
The cost must have been a key reason for the shutdown.
End is near.
Better for OAI to spend their human and compute resources on something else.
The desire for something "new", for a Mildly Ethical product, killed off the most obvious path to success - to actually just make TikTok+AIGC, or in the present, Douyin+Seedance2.
The network effects of the other two platforms are too strong, and a value prop of “watch similar videos but they’re all AI” is not strong for consumers.
Also, say what you want about AI slop, but I was on Sora a lot for a few weeks and there was a real explosion of creativity on there. It felt new and exciting, and creators were engaging with each other and sharing feedback and tips. I generated a ton of videos and surprised myself with a flurry of creative ideas.
There didn't seem to be any marketing for it. Like I can't even remember an ad for it or any content creator type of person pushing Sora actively.
To get access to Sora I believe you needed to be on a paid plan?
It's really difficult to get user generated content going when it's behind a paywall.
It's also hard to tell if this means that openai is in trouble, or if this is just a badly managed product that deserved to be killed. With the negative sentiment on openai, folks might think the former.
On a more serious note, it could be a sign of a more powerful and general model being developed/released in the near future, that would include Sora capabilities. Or AI-doomers were right, and this sunset is one of the proofs for them.
A record-speed descent into AI slop. Is this what everything turns into when content creation becomes easy? What's happening here exactly?
OpenAI is bleeding money faster than they can afford to and they are literally running out of people that they can go to for more. They need to stop the bleeding.
"...the AI company exits the video generation business."
"OpenAI, led by CEO Sam Altman, is not getting out of the AI video business [...], of course... "
I hate journalism.
From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
https://archive.ph/cKWkf#selection-907.0-907.291
It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.
The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more akin to looking back at ourselves through a mirror.
I'm currently reading "The Master and His Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence, and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.
Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.
If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?
At a higher level of intelligence than many humans, current experience suggests
We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.
Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?
And it's probably for the best not to look too closely at how we treat animals or the justifications we use for it.
Also, being able to problem solve and being able to suffer are two different things and in my opinion completely separable. You can have one without the other.
Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?
— https://www.businessinsider.com/openai-discontinues-sora-vid...
So yeah, focusing on world models
1) the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
2) google and specialized video-only startups are simply doing a much better job than they were.
This risks generalizing to audio and text which would make most LLMs usage unsustainable. I guess time will tell what actually goes through the strainer, long term.
Fixed that for you :-)
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
At least they were able to recognize their mistake and course correct.
So OpenAI has done the right thing as a startup here, gotten lots of training data, and observed lots of user behavior that they can now apply going forward.
The Sora models, on the other hand, aren’t going anywhere, and I believe OpenAI will continue to invest in them. They’re getting better and better, just like Google’s Veo, which is quite good at generating videos as well.
Using Codex and agent skills, it’s actually quite easy to generate a storyboard and then have a list of shots in that storyboard. Then generate videos from those storyboard stills, and then finally assemble those individual video files into a final movie file using something like ffmpeg. It's also very easy to create a voiceover with TTS and even simple music using ChatGPT Containers (aka the python tool).
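As a rough sketch of that final assembly step (the clip filenames are hypothetical, and it assumes ffmpeg is installed and all clips share the same codec and resolution), the ffmpeg concat demuxer approach looks something like:

```python
# Sketch: stitch generated clips into one file via ffmpeg's concat demuxer.
# Clip names are hypothetical; assumes ffmpeg is on PATH and every clip
# uses the same codec/resolution, so streams can be copied without re-encoding.
import os
import subprocess
import tempfile

def concat_clips(clips, output="final.mp4", run=False):
    """Build (and optionally run) the ffmpeg concat command."""
    # The concat demuxer reads a text file listing one clip per line.
    list_file = tempfile.NamedTemporaryFile(
        mode="w", suffix=".txt", delete=False)
    for clip in clips:
        list_file.write(f"file '{os.path.abspath(clip)}'\n")
    list_file.close()
    cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
           "-i", list_file.name, "-c", "copy", output]
    if run:
        subprocess.run(cmd, check=True)  # executes only when requested
    return cmd

cmd = concat_clips(["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"])
```

`-c copy` avoids re-encoding, which keeps the assembly step fast and lossless; `-safe 0` is needed because the list file uses absolute paths.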
This will 'democratize' (ha ha, for people with money obvi) a lot of video creation going forward. Against all wisdom, I am actually quite bullish on this technology, especially in the hands of young people. They are very creative and have lots of stories to share.
Necessary disclaimer as usual around the ethics of how these models were created: all the AI companies have totally ripped off artists in service of creating these models. I wish something would be done about that but I'm not holding my breath. No politician seems to want to touch it.
This may well be a needed reprioritization in the face of resource constraints, but it ain't a masterful Xanatos gambit.
Agree, and didn't intend to imply that. This is just a good startup move that gets a big headline because it's OpenAI. Other startups around the world do the same thing all the time.
It’s quickly become the modern day equivalent of Comic Sans, WordArt, and the default clipart illustrations included in Word ‘98.
Perhaps most people are absolutely devoid of any taste for what makes art? I don't know.
That said, there are still people with exceptional aesthetic sensibilities in the tech field, obviously. They're just largely not in this space.
https://www.youtube.com/watch?v=YxkGdX4WIBE
I had a lot of fun using Sora and got a lot of laughs with absurd videos of me in various situations.
But like everyone else, I kind of got it out of my system after a couple weeks. Not to mention that my family got sick of seeing them. And so my usage collapsed to zero. And that seems to have also been the pattern writ large.
But this kind of flash-in-the-pan dynamic is devastating for a product with this kind of profile, which requires insane amounts of compute hardware to serve while also having no short-term monetization path.
Meta could afford to invest in IG Reels even when it was burning money and costing them a fortune for hardware because it was building up what turned out to be sustainable usage patterns which persisted long after the initial spending ramp.
It’s basically impossible to effectively monetize anything that’s not sustainable on the order of multiple years.
A subscription-based model would see excessively high churn that would be ruinous to the economics, and also advertisers wouldn’t be interested either, for the obvious reasons.
So why couldn’t this work? I don’t think that it was because the models weren’t good enough or that the depictions weren’t realistic or lifelike enough. I still marvel at some of the better outputs I was able to get from Sora.
I think the fundamental problem that Sora faced is actually much broader and more general, and it comes down to the basic Pareto math of any content generation or creative app, which is that 95%+ of the users just want to passively consume content from the 5% or less that actually wants to generate it (and is capable of making anything that other people want to watch).
It was really dismal to see the repetitive, trite ideas that 99% of users generated in the public feed. Just the same few dumb jokes and things they copied from other users.
Or putting themselves in a scene with their favorite fictional or cartoon characters or whatever, which of course got banned pretty quickly for copyright issues.
Most people are not creative and don’t have a lot of original, interesting ideas. So that means that the vast majority of the content is always going to come from a vanishingly small number of creators in a power law distribution.
And those super-creators aren’t going to want to be limited to a simple text-based interface that can only generate for 10 seconds at a time with no continuity and where large portions of things you might want to try are strictly forbidden.
They’ll instead gravitate to more customized solutions for power users that regular users would find as overwhelming to use as AutoCAD.
And that’s what you’re seeing now with all the new viral AI slop videos that are made by a handful of creators who have figured out the workflows and are pumping out the worst junk you can imagine that gets people to click and watch.
Anyway, RIP Sora; it was fun while it lasted. Thanks, Sam, for blowing a few hundred million bucks so we could get some laughs.
> We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.
We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
(https://x.com/soraofficialapp/status/2036546752535470382)
So I agree with you, but also it makes me wonder what they're even selling when the IPO happens (supposedly as early as late summer 2026)? Data centers? Partnerships with the government?
After placing my hand on the red-hot stove, aren't I super smart for now removing my hand?
That is, hiring Meta execs who focus on gaming numbers with no care for, or sensibility about, product.
Wild really. Well done Sam.
For a litmus test of your perspective, try using Sora. Try to make a video that makes someone genuinely laugh. Sora doesn't prompt itself. Human creativity and humor are still required.
Sure, it was moderated to heck, like all models attempting to avoid PR disasters (see Grok), but, just as with Youtube and broadcast TV, there's still a corporate friendly surface area that excludes porn, gore, etc, that people can enjoy. And yes, people like different things.
Like, imagine if you watched a bunch of GenAI videos of cars sliding on ice from the driver’s perspective. The physics is wrong, and surely it’s going to make you a worse driver because you are feeding your internal prediction engine incorrect training data. It’s less likely that you’ll make the right prediction in real life when it counts.
But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.
Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury)
Your counter-examples have the property that most of what you need to learn is absent from the media being watched, which makes the observation "obviously" true, but they ignore the impact of media on a learner who is properly incorporating other pieces of information. To compare to the mental models being discussed, you'd have to actually consider the effects you're writing off as negligible; for something like a world model, which we've only learned by observation and which doesn't rest on much additional specialized knowledge, those effects might be much more impactful.
Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.
Sure, be ready to get them out, and if they’re trapped and it’s going to be a while until fire shows up start working on that. But my mental model is that for any road legal car that is not currently on fire, there is a higher chance you’ll cause harm by rashly moving a victim than that a victim will be suddenly consumed by an enormous Hollywood style conflagration.
Films on film using in camera effects are still made on occasion but they’re art films for niche audiences.
But we’ll never get another Ben Hur. And that doesn’t sit well with me even if society can’t yet fully explain why.
The worst offenders are brake sounds not correlating to the car movement, engine sounds not correlating to the car's acceleration, nonsensical car deceleration while braking, and steering wheel not correlating to car steering.
I am willing to suspend disbelief for Terminator 1, even when it's clearly the head of a doll in the shot.
But it is insulting to feed slop to your audience; it shows you didn't even try.
I have actually seen one slop video that I kinda enjoyed - it was obvious that great effort was put into the script and the details, just as it was obvious it wasn't being passed off as the real thing.
"AI" consumes energy before the user has even started (during training).
That is on top of the per-use cost in each particular case.
Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output, and represent the up front cost for the producer.
Both a movie and a language model can cost tens or hundreds of millions of dollars to produce.
In both cases additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with the GPUs for LLMs. This is also upfront (capex) costs.
At consumption time, the movie requires some additional resources, per viewing, whether it's a movie theater or streaming. Likewise, an llm consumes some resources at inference time. These are opex. In both cases, the marginal cost for inference/consumption is quite low.
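The capex/opex split above can be made concrete with a toy amortization (all numbers are made up for illustration): the upfront cost dominates per-use economics at low volume, the marginal cost dominates at high volume.

```python
# Toy amortization sketch (all numbers hypothetical).
def cost_per_use(upfront, marginal, n_uses):
    # Amortized cost: upfront production cost spread over n uses,
    # plus the marginal cost of serving each one.
    return upfront / n_uses + marginal

# e.g. a $100M training run (or CGI budget) vs ~$0.001 per inference/view
upfront = 100_000_000
marginal = 0.001
for n in (1_000_000, 1_000_000_000):
    print(f"{n:>13,} uses -> ${cost_per_use(upfront, marginal, n):,.4f} each")
# 1M uses -> $100.0010 each; 1B uses -> $0.1010 each
```

This is why both movies and models only pencil out at very large consumption volumes.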
We're clearly exploring different questions.
CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
I can't believe you're stretching this in good faith.
But if you are - well, you certainly have a unique perspective.
I am 100% with you. I didn't ever _use_ Sora, but some of it trickled down to me (mostly through Instagram reels). I think it's amazing that we have such great new tools to express ourselves, and that we are trying out new platforms, paradigms, and approaches.
Is there money involved? Absolutely, but I don't fault companies for trying to earn their keep.
It 100% takes work to use these tools in the right way to make something funny. Ask an LLM to make them on their own and they'll hardly evoke laughs (I'm sure that'll change too, though).
Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.
The real problem with AI slop is not the AI. It's the people. It's always the people.
The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).
Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.
With the amount of early 2000's style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 style clickbait, it just got crowded out by ever more sophisticated forms.
The percentage of AI videos over the internet will certainly not decrease after Sora is gone.
The question is when Chinese coding models will have their Seedance moment and squeeze Opus/Codex out of the market. It weirdly feels impossible and inevitable at the same time.
It's much easier to make Qwen animate tank man than it is to make any Western model generate indigenous people dancing, because, cough cough, naked skin is baaaaad. Except for Musk's, which will nonetheless be affected by all the copyright mess.
Then it became synonymous with slop, lowest common denominator content made without care, instead of a tool for enabling people willing to put in a varying level of skill, kinds of expertise and effort, like coding models did.
The existence of inoffensive use cases doesn't invalidate anything OP is saying, that's just a natural human reaction to overexposure of a technology.
In the span of less than 2 years, pretty much everywhere I look has been inundated with zero-effort spam, manipulated imagery, etc that has had a net-negative impact on my life. Even if it may also be helpful for a small business making a flyer or whatever without actively making my life worse, that doesn't really move the needle on my overall attitude.
It’s so dumb that Zuck and Elmo want to inject^H^H^H^H^H^Hrecommend content into these people’s feeds while they’re checking in on their nieces and nephews and local book clubs.
- You're making an unsubstantiated claim
- personally targeting someone you don't even know
- in order to celebrate the presumed success of a mass fraud?
If you want a video of a dancing cat, sure, you can get that. But if you want an orange tabby doing the moonwalk or the robot, that's a lot harder. You'll have to generate dozens of videos and fine tune prompt incantations before you get what you want, if you even do before you hit a rate limit or you get frustrated. If you want something specific and unique and interesting, you still need to put in a lot of effort. Therefore, most videos that people actually make and share are pretty generic.
I think most art models have subtle tells and limitations similar to textual LLMs too, just a little harder to recognize. Certain ideas and imagery will be easier to generate and more likely to fill in the gaps of your prompt. The technology is fascinating compared to the nothing that we had before, but it still has real limitations - try to get it to generate an Italian plumber wearing a red hat that isn't Mario, for example.
All that to say, the trend towards low effort, repetitive, and uncreative results is inherent in the medium. Most users will prompt for a generic dancing cat and get something resembling a cat doing something that resembles a dance and that will flood social media. The few people going for a more creative and specific artistic view will be frustrated by the constant rolling of dice, and if they do make something they work hard on, it will be drowned out by the low effort slop posts. And if you're frustrated by those limitations and want to make something intentional, then you'll eventually gravitate towards Photoshop or Blender where you can actually craft the exact thing you want.
These models do not really "democratize art", they just make it really easy to generate visually interesting noise. Once the novelty wears off, the limitations are apparent. Art has always been democratized anyway - Blender and Krita are free, and pencils are cheap.
Novels, cinema, television, comic books, etc.
They were all considered careless skill-free slop at some point.
https://klingai.com/global/
https://aistudio.google.com/models/veo-3
https://runwayml.com
For an app to suggest a personal relationship with you is ridiculous.
Which makes me wonder whether these companies actually dogfood their own tools with this sort of stuff? Was this announcement written by ChatGPT? Honestly, I would find either answer to be a little concerning in its own way. It's either vaguely insulting to their customers or showing a lack of faith in their own product.
it reads as "we want to tell you that what you made with sora mattered, but we all know it didn't".
I find myself increasingly nostalgic for the Clinton era. I am not at all sure I will enjoy the version of fuckedcompany that gets vibe coded when this bubble pops.
Is it happening? :) /s
Sora had to be shut down because it was the clearest, most consequential demonstration that OpenAI’s models are running way, way ahead of their ability to align/jail them effectively.
If you end up with nothing in aggregate for the chances you pay for, you're a loser. Not in a pejorative sense, just as a fact, you lost.
If you come out with more than nothing, in aggregate, you're a winner, in the same objective sense.
Probably controversial. Eh.
That story can’t be true
What happens if you turn a "human-level" intelligence off? Did you kill someone?
AGI is a pipe dream - and moreover it's not even something that anyone actually wants.
You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem solving abilities: https://www.nature.com/articles/nature.2012.11811
It is clear that whatever we are...creating/growing with LLMs, it is very unlike human intelligence, but it is nonetheless some type of intelligence.
And obviously if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it vs someone else, which is why every company is racing towards it.