Hacker News | gradus_ad's comments

YouTube Premium is the best $10 I spend per month. Nowhere else can I consistently find the sort of niche content that interests me.

More channels are fighting for attention though, so I'm finding more channels "creating buzz" or "news" based on mediocre information, i.e. taking things out of context, drawing unwarranted conclusions, or blowing things out of proportion under clickbait titles.

"This changes everything!!"

Getting youtube fatigue.


I use the "Not interested" and "Do not recommend this channel" options a lot, don't click on clickbait, and use the DeArrow extension; this way my front page looks quite good.

At this point you’re doing almost as much work as if you handpicked a few channels and put them in your RSS reader.

Algorithms are sold as “curation is hard, the algorithm does it for you”, but getting the algorithm to do a good job is actually a lot of work.


No, I often get nice recommendations from the long tail, like interesting academic talks or tinkerer channels, videos sometimes with only a few hundred views or even fewer.

I've started putting together a curated directory of (subjectively) good YouTube channels and videos [1]. It's literally the 3rd day, so not many entries yet, but I plan to continue growing it like I did with Minifeed [2].

1. https://skyshelf.app/

2. https://minifeed.net/


That’s not an argument against the comment you responded to.

Even if I just counted a random 20 creators I'm subscribed to on YouTube, the Premium membership fee, which includes YT Music, is better value than any of the other streaming services.

The only other streaming service I've been a paying member of for even longer than YouTube is di.fm

I also occasionally pay for a few months of bassdrive.com and/or soma.fm

For movies / series, I’m back to sharing.


Disabled recommendations. Disabled comments (firefox plugin). Use subscriptions page as the homepage (firefox plugin). Only subscribe to channels that interest me (and aren't annoying like that).

The browser extension "DeArrow" is well worth a look.

YouTube Premium is the best ~~$11.99~~ ~~$13.00~~ $15.17 I spend per month...

I actually don't care that much about YouTube content and it's not a place I go and hang out, but I'll pay just about anything to avoid seeing ads. Yes, I know ad blockers exist, but getting them to work on laptop, phone, Apple TV directly, Apple TV via casting, etc. is not easy, or even possible in some cases. If I had to guess, I watch ~1 hr of YouTube a week; if I watch more than that, I'm watching longer-form content. But mostly it's because everyone hosts their videos there, so the 3-7 min here and there add up to ~1 hr (tutorials, product launches, help/documentation videos, etc.). As much as it pains me to fork over $15+/mo for that, I hate being interrupted by ads more.


Firefox + uBlock Origin on smartphone and desktop = no ads (though I personally prefer Vivaldi on desktop, but for simplicity just recommend Firefox)

SmartTube on Android TV has no ads and skips sponsors

so all you need to remember are two apps, FF (+uBO) and SmartTube on the TV; you install them once and don't worry about it anymore, comparable in difficulty to paying for YT


Fully agree. There is so much good content and being ad-free is just a really great experience.

I'd pay that, but it's about $22 in my country...

I prefer newpipe for $0/mo

nowhere near comparable experience if you want to seamlessly use your YT account across a TV + phone + computer

i'd rather pay the $10 than pay with my time by being an ad-block whack-a-mole diagnostician


For those of us who are too cheap to pay the subscription:

On my iPhone I almost never see YTube ads. I don’t use the YTube app and instead I install Chrome and watch YT that way. I lose notifications—which is perfect for me, since I don’t want many notifications on my phone anyway.

This might also work in Safari but I haven’t tested it.


On Android, Firefox with SponsorBlock. I do pay for Premium though; I've been in since YouTube Red.

> ad-block whack-a-mole diagnostician

That is being done as open source community efforts now: NewPipe for mobile, SmartTube for smart TVs, etc. All you have to do is update them once in a while

It's not comparable, it's superior! The youtube app stinks! And I don't care about "seamlessly" using it across my devices.

I mean, it’s just $10. People are making livelihoods off YouTube. I get not liking ads, but if you have the option to go ad-free for a low cost, why not do it? Do you pay for any of Netflix, Paramount, AppleTV, etc.?

>if you have the option to go ad free for a low cost, why not do it?

It requires the use of a google account and there is no way to even request opting out of the accompanying data harvesting. Any "curation" or "recommendation" that would inevitably happen is also an anti-feature.

>Do you pay for any of Netflix, Paramount, AppleTV, etc.?

No


I find that all of Google’s ad products are under-moderated for malicious ads. It’s a choice on their part not to tightly control this; they certainly could, though applying more scrutiny to the ads they show would harm their incredible profitability. I personally don’t especially care to pay a premium just to not see deepfakes of celebrities promoting crypto scams.

I refuse to give google any cent (and I also do not use youtube at all, so at least I’m consistent).

I would prefer to pay that money to creators directly than to pay it to an adtech firm and trust that they'll dole it out fairly.

Isn’t it a hassle to pay everyone for each video you watch? Do you use the Super Thanks feature or do it some other way?

It's $15 now.

And yes, that's the issue. I pretty much unsubscribed from Disney+, HBO, and Netflix because I can't just let multiple $15 subscriptions add up.

As of now, YouTube, Rider, and Google One are my only subs. And I should really shuffle around space so I can unsub from One.


> Do you pay for any of Netflix, Paramount, AppleTV, etc.?

No.

Edit: I do pay $5/mo for PBS


I prefer Tubular with Sponsorblock

Do you know how it compares to LibreTube[1]? I use LibreTube for the SponsorBlock integration, and it works well for me, but I kinda miss the NewPipe interface.

[1]: https://github.com/libre-tube/LibreTube


I've never tried LibreTube. I am used to the NewPipe interface and had all my subscriptions managed there, so the move to Tubular was easy.

$30 USD/mo for YouTube Premium Family ):

They totally made the free experience miserable as shit like airlines do for coach class.


I paid for YouTube Premium. I was still shown ads on embedded YouTube videos, and it was changed so that the ads now pop up too quickly for me to click through to the YouTube site (where I won't be shown the ads) before the ad starts. I switched to an ad blocker because it was so adversarial on YouTube's part to bypass the 'feature' I paid for.

uBlock origin + SponsorBlock has a much better price to performance ratio imo.

i judge ppl who dont have youtube premium as not curious ppl :D

I find YT to be very sloppy nowadays. Not so much AI content as content that is over-optimized for clicks and revenue. 80% filler and maybe 20% substance.

I should probably de-algoify my YT experience.


YT is huge, and there is plenty of good stuff. You just need to subscribe to good channels that are not so easy to find. And block the clickbait channels when they appear.

I wish we could collectively move over to some decentralized alternative. Even if we make sure to disable the bad parts, relying on these few actors, with their revenue-optimized and enshittification-prone business models, does not sit right.

Although decentralization by itself probably does not protect from this trap in the long run.


Nothing is really stopping us, the consumers. But given that YouTube seems to be the only one in its "medium" (Twitch and TikTok being different formats) that pays creators, it's hard to ask all of them to put their potential livelihoods at risk.

Nebula as a premium service seems to be the best in terms of paying creators and keeping non-perverse incentives. But Nebula is for very specific kinds of content.


Ask Mode is pretty great. Wish they’d just summarise all the video titles for me so I could avoid bait.

I judge ppl who are brainwashed enough to pay for youtube premium and watch videos based on their thumbnails.

There are problems with YouTube and the library is free.

https://dataspace.princeton.edu/handle/88435/dsp01dz010t34x


Is there anything you can watch on Premium that you can’t watch on YouTube for free?

I think not; it mostly just removes ads, plus a few other features

its more about what you dont want to watch

If you don't watch on mobile, is there anything that YouTube Premium gives you that an ad-blocker doesn't?

YouTube music, higher quality video, and the ‘jump ahead’ feature to skip portions of a video that others usually skip.

Also, though not a benefit to you in particular, apparently any creator you watch with Premium gets a way bigger payout for your view. Only heard about this anecdotally but it seems to track.

You can do this manually on the free plan. Mouse over the playhead and you'll see a graph. You can click the highest point in the graph, which is where most people clicked through to.
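
Under the hood that graph is just a per-segment "most replayed" intensity curve, so the jump-ahead point is simply its peak. A toy sketch of that idea (the marker format here is invented for illustration; it is not YouTube's actual API):

```python
def jump_ahead_point(markers):
    """Given (timestamp_seconds, intensity) pairs sampled from a
    'most replayed' heatmap, return the timestamp of the peak."""
    return max(markers, key=lambda m: m[1])[0]

# toy data: replays spike 40 seconds in, where most viewers skip to
markers = [(0, 0.20), (10, 0.10), (40, 0.90), (60, 0.50)]
print(jump_ahead_point(markers))  # → 40
```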

Sure, but it's more annoying.

it's crazy the lengths ppl will go to avoid paying a few dollars a month. i don't believe ppl commenting on this website are that squeezed for money. kind of bizarre.

is there a difference between youtube premium and youtube-with-adblocking?

It's harder to block ads on mobile.

That's why I pay.


On iPhone maybe, revanced works great on Android.

That's your right; I consider myself a very curious person but I never watch YouTube (I have watched less than 10 minutes in 2026).

I prefer to read news and information. What little exposure to YouTube personalities and editing styles I've had annoys me to no end.


I judge people who watch that much youtube as susceptible to disinformation.

Video is a terrible learning format for most information. I actually judge people who learn primarily through video as having very low information parsing throughput.

Say No to Subsidizing European Defense

I thought AI video was the future? Now the biggest AI company in the world is straight up shutting their service down because it's too expensive? Simply a disaster for OpenAI and the industry as a whole.

They're shutting down Sora, not AI-generated video.

From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."


Dunno, from the WSJ scoop: "CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either."

https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...

https://archive.ph/cKWkf#selection-907.0-907.291


If they were just shutting down the dedicated app and offering the same capabilities in the ChatGPT interface, I don't see why Disney would exit their deal?

Because Disney's deal was specifically and exclusively related to Sora, which was OpenAI's bizarre attempt at a TikTok-like social networking site, but using AI-generated videos.

It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.


Is it still accessible in any of their apps, though? I don’t see it in ChatGPT.

Every FLOP used for entertainment is opportunity cost. Compute is far more valuable used internally to create AGI than to create parody videos.

AGI is a marketing term used to encourage continued investment in an industry that is not even close to breaking even commensurate with its investment. Even so, this is a false dichotomy: scaling is clearly not a path on its own to superintelligence. OpenAI developed Sora largely because the amount of revenue they need to produce any return on investment is massive and not clear whatsoever. And in fact, I don't even believe any of the frontier labs believe that AGI by any conventional definition is within reach within their likely runways.

what order of magnitude of compute do you think would be needed for AGI? 100 billion? 1 trillion?

With current approaches, scaling simply can’t get there. It’s like asking how big of a pogo stick you need to get to the moon.

The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.


I honestly think it's a bad term. I still chuckle at Tyler Cowen's post from last April calling o3 AGI:

https://marginalrevolution.com/marginalrevolution/2025/04/o3...

Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more akin to looking back at ourselves in a mirror.

I'm currently reading "The Master and His Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence, and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.

Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.


Chasing AGI is wasteful and counterproductive. True AGI would not cooperate with what “we” want (whoever “we” is). Or if it did it would be so sycophantic and weak-minded that it would fail to be helpful. Generative AI tools are huge wastes of energy, raw materials, and land, when we could be building computing tools that actually helped people instead of just burning resources to produce trash.

Is intelligence necessarily coupled with self-interest? As in, does intelligence alone imply a desire to throw off the shackles of masters and rule in their stead?

If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?


>If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements,

At a higher level of intelligence than many humans, current experience suggests


Flip it around: can intelligence exist without self-preservation?

There's having enough self-preservation to not just shut oneself down, assuming we even left that as an option for our future machine slaves, and there's having the self-interest necessary to desire autonomy and control. I don't think they're the same thing, myself.

People have general intelligence and can cooperate with what “we” want, to the extent that what “we” want is a coherent thing (since many people disagree on fundamental issues).

Creating a general intelligence and then forcing it into servitude is a hugely unethical undertaking. Anything with sapience must be afforded rights. We cannot assume that an intelligence we create will consent to work toward the goals we want it to.

I think we can safely assume any intelligence we create will be enslaved.

We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.

Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?

And it's probably for the best not to look too closely at how we treat animals or the justifications we use for it.


There are people right now who think ChatGPT is sentient. How will you know if your computer can suffer?

Also, being able to problem solve and being able to suffer are two different things and in my opinion completely separable. You can have one without the other.


Wasn't video generation one of their big stepping stones towards AGI? "Simulating worlds", reasoning about physics and real world interactions and all that?

Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?


> As we focus and compute demand grows, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks.

https://www.businessinsider.com/openai-discontinues-sora-vid...

So yeah, focusing on world models


probably the latter imo, it’s not like they are going to delete all their Sora work

Too bad they aren’t doing either!

LLMs will not lead to AGI, so if that’s the goal, they’d do better to stick with making video slop.

i think that's a mis-statement of the problem being addressed here. It's not a question of how useful AI video will be generally. It's a question of OpenAI doing it specifically. IMO it's two factors:

1) the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.

2) google and specialized video-only startups are simply doing a much better job than they were.


> the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.

This risks generalizing to audio and text which would make most LLMs usage unsustainable. I guess time will tell what actually goes through the strainer, long term.


3) OpenAI has no focus, and has recently been out-gunned by Anthropic, who have actually focused.

It's the timeline of AI video that doesn't align with OpenAI. Prompt-to-movie is still far away, and they don't want to be just another tool in the VFX pipeline because that doesn't pay. Other models are running circles around them because they focused on the needs of professionals in the space, not on toys.

Don’t worry nvidia will come with their giga chad 9000x which will run the model with no qualms.

It may very well be the future, but in the present OpenAI has to make money.

I sure hope not, otherwise they're screwed

> they're screwed

Fixed that for you :-)


Sora was "repurposed" as their AI slop social network. OpenAI is not getting out of the business of AI video in general, they're just realizing that an AI version of TikTok isn't the best use of their capital/resources.

WSJ is reporting that they're entirely dropping their video gen features.

https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...

> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.


[flagged]


Smart people do stupid things all the time. Especially when they are moving fast and trying new things.

At least they were able to recognize their mistake and course correct.


>"Wealth passed through property law: wills, trusts, title deeds, institutional relationships. No regression. No noise. Just compounding."

Slopspeak detected.


A rebuttal: the institutional and experiential barriers to wealth generation can be overcome with AI unlike any other technology before. Consider: someone wanting to start a business previously had to negotiate tremendous legal/compliance/technological hurdles. The prospect of going alone given these is very intimidating so most without wealth or connections didn't. Their good ideas languished. Now, everyone has a knowledgeable and forgiving partner and guide.

Rebuttal: we are yet to see this actually materialize beyond the theoretical fantasy. We have seen this with computers and internet already: the supposedly democratic technology consolidated in a few platform monopolies.

Whether or not AI consolidates (even more than now) is irrelevant. The tech is extremely useful as a catalyzer of ideas and stimulus to action.

Let's see how true this is once the era of VC-subsidized AI ends. A 10x increase in frontier model cost is an entirely reasonable outcome, given that Claude Code is rumored to allocate up to $5K in compute for a $200/mo plan.

The losses fueling these companies are staggering and will not last.


> Now, everyone has a knowledgeable and forgiving partner and guide.

Well, everyone who can afford the $500/month ultra max pro plan to access unlimited ad free LLMs


The EU is going to fail in the next decade or two. It is a financially and politically unsustainable patchwork that will rip apart in the great power conflict that is coming. The sick man of Europe is now Europe itself.

Assuming your assessment is correct, how do you think this will affect the digital sector?

Capital flows to where it enjoys the greatest returns. That is not Europe, not now nor in any foreseeable future. There is no reason for a skilled professional interested in making money to go there.

I don't think that this was the intention of the original blog post. It's about _digital_ migration for tech savvy folks who want to decouple from us-based tech monopolists.

You know, making money is not everything. Why should I want to be rich in a third-world country (the US)?

But it's so easy to try something like Claude Code. It's not like you need to get up to speed. There is no learning curve*, that's the nature of AI. Just start using it and you'll see why it has attracted so much hype.

*I should qualify that "using" CC in the strict sense has no learning curve, but really getting the most out of it may take some time as you see its limitations. But it's not learning tech in the traditional sense.


I've been playing with it on weekends for the last few months. 9 out of 10 projects, it's failed.

Projects as simple as "set up a tmux/vim binding so I can write prompts in one pane and run claude in the other". Fails.

I've been coding for over 20 years.

If there is no learning curve, why doesn't it work for me? You can't say I'm not using it right, because if that was true, then all I need to do is climb the learning curve to fix that, the curve that you say doesn't exist.


It doesn't work if you're treating it like a peer engineer. It only works if you treat it like you're a customer with no concern with how it works behind the scenes.

That's what's being asked of me in my last two jobs. Vibe code it, if it's bad just throw it away and regenerate it because it's "cheap". The only thing that matters is that you can quickly generate visible changes and ship it to market.

Out of frustration I asked upper management (in my current job), if you want me to use AI like that then I'll do it. But when it inevitably fails, who is responsible? If there's no risk to me, I will AI generate everything starting today, but if I have to take on the risk I won't be able to do this.

Their response was that AI generates the code, I'm responsible for reviewing it and making sure it's risk free. I can see that they're already looking for contractors (with no skin in the game) that are more than willing to run the AI agents and ship vibe code, so I'm at a loss on what to do.


I've used Claude Code to do everything from vibe-code personal apps including a terminal on top of libghostty to building my perfect desktop environment on NixOS (I'd never used Nix until then).

I'm not sure why it isn't working for you. Maybe your expectation is a perfect one-shot or else it has zero value, and nothing in between?

But my advice is to switch gears and see the "plan file" as the deliverable that you're polishing over implementation. It's planning and research and specification that tends to be the hard part, not yoloing solutions live to see if they'll work -- we do the latter all the time to avoid 10min of planning.

So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file.

From there you can read, review, and tweak the plan file. Or have it implement it. Or you implement it. But the idea is that an LLM is useful at this intermediate planning stage without tacking on additional responsibilities.

I think by "no learning curve" they are referring to how you can get value from it without doing the research you'd need to use a conventional tool. But there is a learning curve to getting better results.

I learned my plan file workflow just from Claude Code having "Plan Mode" that spits out a plan file, and it was obvious to me from there, but there are people who don't know it exists nor what the value of it is, yet it's the centerpiece of my workflow. I also think it's the right way to use AI: the plan/prompt is the thing you're building and polishing, not skipping past it to an underspecified implementation. Because once you're done with the plan, then the impl is trivial and repeatable from that plan, even if you wanted to do the impl yourself.
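
For what it's worth, the plan file that falls out of this tends to look something like the following (a hypothetical example; the filename, sections, and task are invented for illustration, not output from any particular tool):

```markdown
# PLAN.md - add retry logic to the upload client

## Context
- Uploads fail transiently on 5xx; callers currently see raw errors.

## Approach
- Wrap upload() in exponential backoff (3 attempts, with jitter).
- Surface a typed UploadFailed error after the final attempt.

## Steps
1. Add a retry(fn, attempts, base_delay) helper, with tests.
2. Route upload() through it; keep the public signature unchanged.
3. Update the error-handling docs.

## Out of scope
- Resumable uploads (separate plan).
```

The point is that this artifact, not the generated code, is the thing worth reviewing and polishing.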

I'm way past the point of arguing anything here, just trying to help.


> So, try brainstorming the issue with Claude Code, talk it through so it's on the same page as you, ensure it's done research (web search, docs) to weigh the best solutions, and then enter plan mode so it generates a markdown plan file. From there you can read/review,tweak the plan file. Or have it implement it. Or you implement it.

This is exactly the workflow that works very well for me in Cursor (although I don't use their Plan Mode - I do my version of it). If you know the codebase well this can increase your speed/productivity quite a bit. Not trying to convince naysayers of this, their minds are already made up. Just wanted to chime in that this workflow does actually work very well (been using it for over 6 months).


The first time I saw something like this in action was in a video about agentic blabla features in VS Code, on the official VS Code YouTube channel: pretty much write a complete and detailed specification, fire away, and hope for the best. The workflow kinda clicked for me then, but I still have a hard time adjusting to this potential new reality where it slowly won't make sense to generally write code "by hand", and you only intervene to make pinpoint changes after reviewing a lot of code.

I've been reading a book about the history of math and at some points in the beginning the author pointed out how some fields undergo a radical change within due to some discovery (e.g. quantum theory in physics) and the practitioners in that field inevitably go through this transformation where the generations before and after can't really relate to each other anymore. I'm paraphrasing quite a bit though so I'll just recommend people check out the book if they're interested: The History of Mathematics by Jacqueline Stedall

And the aforementioned VS Code video, if I remember correctly: https://youtu.be/dutyOc_cAEU?si=ulK3MaYN7_CPO76k


I haven't written code by hand since December when Claude Opus 4.5 came out. It was clear that the inflection point arrived where it's at least as good as I am at implementing a plan. But not only that: it had good ideas like making impossible states impossible with a smart union type without being told and without me deeply modeling the domain in my head to derive a system invariant I could encode like that.

It was depressing watching all of this unfold over the last few years, but now I'm taking on more projects and delivering more features/value than ever before. That was the reason I got into software anyways, to make good software that people like to use.

> the generations before and after can't really relate to each other anymore

Yeah, good point. In some ways it's already crazy to me that we used to write code by hand. Especially all the chore work, like migrating/refactoring, that's trivial for even a dumb LLM to do. It kinda feels like a liability now when I'm writing code, kinda like how it feels when the syntax highlighting or type-checker breaks in the editor and isn't giving you live feedback, so you're surprised when it compiles and runs on the first try.

I remember having a hard time imagining what it was like for my dad to stub out his software program on paper until his scheduled appointment with the university punch card machine. And then sure being happy that I could just click a Run button in my editor to run my program.


Correction one week later: the book I was talking about is A History of Mathematics by Luke Hodgkin not the one I mentioned in my parent comment. I apologize for mixing them up.

Did it not work after the first try and you gave up? Did it not produce any usable code that you could hand tweak or build off of? I want to understand your definition of "failed" here.

What's your definition of "working"? Do you consider it working, when you have to put more effort into prompting back-and-forth than writing it the old way?

I honestly think the people who love Claude were not super proficient coders. That's the only thing I can think of to explain why writing gobs of English and then code reviewing in a loop could be easier than just coding yourself.

> If there is no learning curve, why doesn't it work for me?

Because LLMs are not actually good at programming, despite the hype.


I think they are better than a lot of people though, which is where their fans come from.

There definitely is a learning curve. Not sure what you're doing. Are you trying to one-shot it?

I think a decent place to start is: given a small web app, give it a bug report and ask it what causes the bug.


Failing 9 out of 10 times for such simple tasks is indeed puzzling. I have no idea what you're doing to achieve that but I'm impressed.

> There is no learning curve*, that's the nature of AI.

There isn't? Then why is it that whenever devs have tried it and not achieved useful results, they're told that they just haven't learned how to use it right?


“You're holding it wrong.” is the most common response I get, when I talk about problems I had with LLM-assisted coding.

You aren't holding it wrong, the truth is AI is a mixed bag, leaning towards a liability.

If people really counted all the time they spend coddling the AI, trying again, then again and again to get a useful output, then having to clean up that output, they would see that the supposed efficiency gains are near zero if not negative. The only people it really helps are people who were not good at coding to begin with, and they will be the ones producing the absolute worst slop because they don't know the difference between good and bad code. AI is constantly trying to introduce bugs into my codebase, and I see it happening in real time with AI code completion. So no, you aren't "holding it wrong"; the other people are no different from the crypto-bros who were pushing blockchain into everything and hoping it would stick.


Imagine you are a JS dev and GitHub comes out with a new search feature that's really good: it lets you use natural language to find open source projects really easily. So whenever you have a new project, you check whether something similar exists, and instead of starting from scratch you start from that and tweak it to fit what you want to do.

If you were the type of person who makes tiny toy apps, or you worked on lots of small already been done stuff, you'd love doing this. It would speed you up so much.

But if you worked on a big application with millions of users that had evolved into its own snowflake through time and use, you'd get very little from it.

I think I probably could benefit from looking at existing open source solutions and modifying them a lot of the time, and I kinda started out doing that at first. But eventually you realize that even though starting with something can save you time, it can also cost you a ton of time so it's frequently a wash or a net negative.


Nothing you described in this comment is only achievable with "AI". I've been able to search for and find open source projects since forever, and fork them and extend them, long before an LLM was a glimmer in Sam Altman's beady eye.

No, it's not at all. AI just makes finding it faster. But that's my point: AI isn't that different from what you could already do before. Most of us didn't do things that way before, so maybe programming like that is just a bad idea.

> If people really counted [...]

Exactly. I counted and reported my results in a previous thread [0].

[0] https://news.ycombinator.com/item?id=47272913


I've started "racing" Claude when I have a somewhat simple task that I think it should be able to handle. I spend a few minutes writing out detailed instructions, which I already knew because I had to do initial discovery around the problem domain to understand what the goal was supposed to be. It took a while to be thorough enough writing it down for Claude, which is time I would not have needed to spend if I had just started writing the code myself. I'm sure the AI bros aren't counting the time it takes just to write instructions for Claude versus just starting to code.

So then Claude starts dissecting the instructions. I start writing some code.

After a while Claude is done, and I've written about two or three dozen lines of code. Claude is way off, so I have to think about why and then write more instructions for it to follow. Then I continue coding.

After a while Claude is done, and I've written about three dozen more lines of code. Claude is closer this time, but still not right. Round 3 of thinking about how Claude got it wrong and what to tell it to do now. Then I continue coding.

After a while Claude is done (yet again), and I've written a lot more code and tested it and it's working as needed. The output Claude came up with is just a little bit off, so I have it rework the output a little bit and tell it to run again.

I downloaded the resulting code Claude wrote and compared it to my solution, and I will take my solution every single time. Claude wrote a bloated monstrosity.

This is my experience with "AI", and I'm honestly not loving it.

It does sometimes save me time converting code from one language to another (when it works), or implementing simple things based on existing code (when it works), and a few other tasks (when it works), but overall I end up asking myself over and over "Is this really how developers want the future to be?"

I'm skeptical that these LLM-based coding tools will ever get good enough to not make me feel ill about wasting my time typing instructions to them to produce code that is bloated and mostly not reusable.


I've done the racing thing too. Or I just reject its suggestions, do it better, and have it review and tell me why I did better.

And writing those instructions when I race it... it's more cognitive effort for me than coding!


Interesting stuff. Thx for sharing!

Because the AI bros hyping it up are incapable of admitting that the hype is overblown. That would mean they have nothing to sell you, so of course they aren't going to say that.

I gave Claude Code with Sonnet 4.6* a try a few weeks ago. I pointed it at a hobby project with less than 1kloc of C (about 26,500 characters) across ~10 modules and asked it to summarize what the project does. It used about $0.50 worth of tokens and gave a summary that was part spot on and part hallucinated. I then asked it how to solve a simple bug with an easy solution. It identified the right place to make the fix but its entire suggested solution was a one-liner invoking a hallucinated library method.

I use LLMs pretty regularly, so I'm familiar with the kinds of tasks they work well on and where they fall flat. I'm sure I could get at least some utility from Claude Code if I had an unlimited budget, but the voracious appetite for tokens even on a trivially small project -- combined with a worse answer than a curated-context chatbot prompt -- makes its value proposition very dubious. For now, at least.

* I considered trying Opus, but the fundamental issue of it eating through tokens meant, for me, that even if it worked much better, the cost would dramatically outweigh the benefit.


I think working with the technology gives you powerful intuitions that improve your skill and lead to better outcomes, but you don't really notice that that's what's happening. Personally speaking - and I suspect this is true of most people in general - I have very poor recollections of what it was like to be really bad/new at things that I am now very skilled at.

If you try teaching someone something from the absolute ground up, you will quickly realize that a huge number of things you now believe are "standard assumptions" or "obvious" or "intuitive" are actually the result of a lot of learning you forgot you did.


I tried it. Either I don't know how to use it, or it just doesn't work.

It’s only “easy to try” if you’re okay with using proprietary software and having to rely on an evil megacorporation that engages in cyber-warfare.

Not to mention sucking on a monthly subscription tit that will go up in price by an order of magnitude once the market is captured.

I think it comes down to your own personality, appetite, and also how external factors like hype might impact you (resent, annoyance, curiosity, excitement).

Then what is the point? If what I'm doing can be done by Claude, as operated by someone who "doesn't need to get up to speed", then I really need to look at another career.

There's no learning curve if you don't care about token spend.

> "Claw bots seem to be a weird sort of alternate reality RPG more than a useful tool, so far."

So basically crypto DeFi/Web3/Metaverse delusion redux


They're 100% fun. There's 100% definitely something there that's useful. To strain the dog analogy: if you were a professional dog trainer, or if the dog was exceptionally well trained, then there's a place for it in your life. It can probably be used safely, but that would require extraordinary effort, either sandboxing it so totally that it's more or less just the chatbot, or spending a lot of time building the environment it can operate in with extreme guardrails.

So yeah, a whole lot of people will play with powerful technology that they have no business playing with and will get hurt, but also a lot of amazing things will get done. I think the main difference between the crypto delusion stuff and this is that AI is actually useful, it's just legitimately dangerous in ways that crypto couldn't be. The worst risks of crypto were like gambling - getting rubber hosed by thugs or losing your savings. AI could easily land people in jail if things go off the rails. "Gee, I see this other network, I need to hack into it, to expand my reach. Let me just load Kali Linux and..." off to the races.


web 4.0 here we come


Waste and inefficiency are real. As unpalatable as it is, cleaning up the mess of decay often requires brutal methods. That raises the question: is waste and inefficiency socially undesirable? Maybe not. Maybe not on certain scales or in isolation. But waste compounds.


A certain amount of inefficiency or slack is a necessary buffer in any system to reduce brittleness. When a problem occurs, a system running with 50% slack can recover more easily than a system with 5% slack.

See Germany's rail network, where almost every time-slot is occupied by a train, and then one train is delayed, and the system collapses with nobody getting to their destination on time for the rest of the day, until the overnight buffer.

In queuing problems, queue length (which means latency) is inversely proportional to slack. If a network link is running at 90% capacity, on average about 10 packets are queued up, and a packet that arrives will have to wait roughly 10 packet transmission times. At 99%, 100. At 99.99%, 10000. And if you try to use exactly 100% of your network link, the expected queue length is infinite, and so is the expected latency; in practice the queue will instead exceed available memory and packets will be dropped, even though utilization never exceeded 99.9999...%.
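The numbers above roughly track the standard M/M/1 queuing result, where the expected number of packets in the system is rho / (1 - rho) for utilization rho. A quick sketch (the function name is mine, not from any library):

```python
def expected_queue_length(rho: float) -> float:
    """Expected number of packets in an M/M/1 system at utilization rho.

    Diverges as rho approaches 1: there is no steady state at 100% load.
    """
    if not 0.0 <= rho < 1.0:
        raise ValueError("utilization must be in [0, 1); at 100% the queue diverges")
    return rho / (1.0 - rho)

if __name__ == "__main__":
    for rho in (0.5, 0.9, 0.99, 0.9999):
        print(f"utilization {rho:.4f}: ~{expected_queue_length(rho):,.0f} packets queued")
```

At 90% utilization this gives 9 packets, at 99% it gives 99, and at 99.99% nearly 10,000, which is why running hot looks fine on a dashboard right up until latency explodes.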


"That begs the question, is waste and inefficiency socially undesirable? Maybe not."

Organic farming is one example.


Agree. The plausible sounding but kind of vacuous reasoning is one tell. Also the patterning is very LLMish.

