Hacker News | new | past | comments | ask | show | jobs | submit | sanitycheck's comments | login

It's both, really.

The companies selling us the service aren't saying "you should treat this LLM as a potentially hostile user on your machine and set up a new restricted account for it accordingly", they're just saying "download our app! connect it to all your stuff!" and we can't really blame ordinary users for doing that and getting into trouble.
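For the "restricted account" approach the comment alludes to, a minimal sketch on Linux might look like this; the account name and workspace path are hypothetical, and running an agent this way still assumes you audit what that account can reach:

```shell
# Create a dedicated, unprivileged account for the agent
sudo useradd --create-home --shell /bin/bash llm-agent

# Give it a dedicated workspace and nothing else of yours
sudo mkdir -p /home/llm-agent/workspace
sudo chown llm-agent:llm-agent /home/llm-agent/workspace

# Run the agent tooling as that user, not as yourself
sudo -u llm-agent -i
```

The point is that standard OS user separation, not an app-level sandbox, is what keeps the agent away from your own files and credentials.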


There's a growing ecosystem of guardrailing methods, and these companies are contributing. Anthropic in particular puts a lot of effort into steering and characterizing its models, AFAIK.

I primarily use Claude via VS Code, and it defaults to asking first before taking any action.

It's simply not the wild west out here that you make it out to be, nor does it need to be. These are statistical systems, so issues cannot be fully eliminated, but they can be materially mitigated. And if they stand to provide any value, they should be.

I can appreciate being upset with marketing practices, but I don't see the value in pretending to have taken them at face value when you didn't, and when you think people shouldn't.


> It's simply not the wild west out here that you make it out to be

It is, though. They're not talking about users running Claude Code via VS Code; they're talking about non-technical users creating apps that pipe user input to LLMs. This is a growing thing.


The best solution to that is the aforementioned better defaults, stricter controls, and sandboxing (and less snake-oil marketing).

Better tuning of the models matters less there, although in this particular case that's probably exactly the best-fit approach.


I'm a naturally paranoid, very detail-oriented man who has been a professional software developer for more than 25 years. Do you know anyone who read the full terms and conditions for their last car rental agreement prior to signing anything? I did that.

I do not expect other people to be as careful with this stuff as I am, and my perception of risk comes not only from the "hang on, wtf?" feeling when reading official docs but also from seeing what supposedly technical users are talking about actually doing on Reddit, here, etc.

Of course I use Claude Code, I'm not a Luddite (though they had a point), but I don't trust it and I don't think other people should either.


I'm only good enough to impress people who don't know what a good guitar player sounds like.

My advice to people, which seems to work OK, is just to have the guitar out and ready to play wherever you're likely to be - maybe even in the way so it has to be moved sometimes - and just pick it up and play it as often as possible.

Waiting for the kettle to boil? Play the guitar. TV is showing ads? Mute it and play the guitar. Your partner needs to go to the bathroom before you both go out? Play the guitar.

It doesn't matter what you play, it doesn't have to be good, it can be a random improvisation, it can be scales. Your fingers are learning.


It depends on what your goals are. If you're doing it for fun or as a creative outlet this is great advice. If you're trying to actively get better you won't do it this way after a certain point. You need to be actively practicing and engaging your brain. It does matter what you play and how you play it.

Sure, there's "deliberate practice" and it matters - but so many people seem to think that if they're playing, deliberate practice is what they should be doing, or else it's a waste of time. In reality that often isn't much fun; they start to associate the instrument with a difficult and often disappointing experience, and they give up.

You are right.

I think there are quite a lot of people who are only interested in playing and never deliberately practising. They do not get that far (they do not have to!).

And then there's the vast majority of aspiring guitar players who frequent online learning material (including me), who spend all of their time practising and learning, and too little of it playing for fun and performing. Most are constantly frustrated about their progress.

Then there is a small group of people who spend a lot of time playing for fun and performing, but also a good amount of time deliberately practising. In my experience, those tend to be the ones people think of as great players.


For me it was the "it's not x"/"it's y" stuff and some other structures Claude is very fond of using all the time. Perhaps humans are starting to write like LLMs!

Perhaps, just perhaps, LLMs are just statistical models that literally can't create novel things, therefore any structure LLMs write was learnt from human writing?

But who knows!


What kind of human writing has "it's not X—it's Y" in every single paragraph?

The answer is none. LLMs haven't accurately modeled human writing for years; current models have been smacked on the head with the coding-RLHF bat so much that they all write distinctly inhuman text.


The thing is, people are screaming “AI” when they see a single “it's not X—it's Y" pattern in a post, despite this being a fairly common construct.

People are nitpicking every tiny thing in their search for proof of AI. It’s not useful and ends up dominating the conversation. AI panic is degrading the value of forums at least as much as actual AI at this point.


The user thing is what I currently do too. I've thought about containers but then it's confusing for everyone when I ask it to create and use containers itself.


So don't let them interact with anything external. You can push and pull to their git project folders over the local filesystem or network; they don't even need access to a remote.
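A minimal sketch of the local-filesystem approach: git happily treats a plain directory path as a remote, so the agent's checkout never needs network credentials. The paths below are hypothetical:

```shell
# A bare repository in a plain directory acts as the "remote" - no network needed.
git init --bare /srv/repos/myproject.git

# In the agent's checkout, point origin at that local path.
cd /home/agent/myproject
git remote add origin /srv/repos/myproject.git

# Once there are commits, pushing and pulling work exactly as with a hosted remote.
git push origin main
git pull origin main
```

You can then mirror from that bare repo to a real hosted remote yourself, from an account that does hold the credentials.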


Unless you are talking about running a local model, that’s not possible.


Obviously if you're running Claude Code you need a token for that and an internet connection, that's kind of a given. What I'm talking about is permission (OS level, not a leaky sandbox) to access the user's files, environment variables, project credentials for git remotes, signing keys, etc etc.


Any sign of AI, TBH. I don't come to HN to ask Claude, I already pay Anthropic for that.


I'll be sticking with Lightroom 6 (non-subscription) and the old cameras it supports, until the sad but inevitable day I can no longer run it.

I don't find editing takes much time, because I now have so many custom presets I can apply on import or in bulk that do 90% of the work.

What does take ages is picking out the best shots, but really the only way to make that quicker is to take fewer photos. Which I suppose shooting film actually does force you to do. (But so would a 2GB SD card.)


This is great, I discovered it a year or two ago - nice work! Excited to hear there might be more development happening.


I don't understand how they found nothing in the raid, wouldn't they normally bring drugs with them to plant? If they forgot those that's a whole new level of police incompetence.


> wouldn't they normally bring drugs with them to plant?

Why do you think they were so annoyed at all the cameras?


I guess they assumed that a musician whose whole persona is built around weed would supply the evidence.


You say that, but I’ve been watching a lot of those body cam channels over the past few years.

I remember one quite well, in which the police were raiding some suburban Texas home and decided to steal the guns and cocaine they found in it.

The kicker: They forgot to turn off their body cams while doing and discussing it.


> Common enough to be a minor plot point in a current cop show...

You've reversed cause and effect. Cop shows don't base their plots on what is real, they base them on what people will believe is plausible.


Anecdotally I know approximately zero 'normal' (non-tech) people who are intentionally using generative AI, several who have been badly misled by Google's AI summaries, and quite a few who are vehemently anti-AI (usually artists and writers).

(Except when mandated by their employers, which nobody is happy about or finds particularly useful.)


Every single person I know outside of my profession is using it, including all relatives of all ages. Even if it's at the top of the google search results :)

