Hacker News | schnitzelstoat's comments

> The boom had a common cause: after COVID cratered immigration in 2020, every major destination opened the taps to fill labor shortages.

I'm not so sure this is the cause as in the countries I am familiar with (UK, Spain), such sentiment long preceded the pandemic.

It's a complicated issue and immigrants aren't some homogeneous fungible group. Many European countries have had a very different type of immigration to what the US has had, for example.

Immigrants coming on visas to work (and having to pay all the related visa fees and other surcharges like the NHS surcharge in the UK) are generally much more accepted than the influx of asylum seekers who are dependent upon the State and have no reliable criminal background check etc.

Politics feels increasingly polarised though so I'm not sure we'll arrive at the nuanced solution it deserves.


I remember having the Voodoo card to play Thief: The Dark Project. It felt incredible at the time.

I don't think they need to learn 'AI workflows' (whatever that means). But I think it makes sense to use LLMs as a resource.

I've used them when studying new languages (human languages not programming languages) and ML algorithms and they've been really useful.

Learning to check the citations it gives you is a useful skill too. I wish many adults were more sceptical about the things they are told.


It's true that you can use LLMs as a learning resource and to unblock you. But students just aren't. They are using them as a way to avoid thinking, avoid research, and just spit out an answer they can paste into their homework.

Because the students have learned that school is designed by old morons, without understanding that writing book reports and doing math drills is intended to create students who can read and write, or to build other transferable skills.

They should at least require handwritten work, the kids will still be AI-stupid but will at least be able to write.

You remember better when you write, too.

I assume "AI workflows" means knowing how to split up a task to create a chain of agents that can complete a specific task reliably.

A bit like software development.


The problem is that the task you've defined, "split up a task to create a chain of agents", has changed dramatically in just the last six months, never mind the last two years.

You're wasting effort and teaching an obsolete technology if you try to make primary/secondary education too topical. Students can learn how to decompose a task and how to think critically without ever touching a Large Language Model.


Also when the subsidies go away it will be prohibitively expensive for most businesses, and is probably already too expensive for schools.

A lot of software products are heavily subsidized for university students because that yields lock-in when they go and get hired.

(Solidworks, Matlab, GitHub)

Primary schoolers will probably get priced out, and not a day too soon!


Probably. But the difference is the marginal cost of selling an Adobe CS license, Avid Media Composer, or the other costly software I bought at a steep discount is pretty much nothing. When you discount inference you lose money.

Pulling the plug on K-12 on the other hand, seriously, can’t happen fast enough.


True! But Adobe gives cloud compute and some gen AI credits with their edu license. Autodesk does too! They both lose money on that proposition, but clearly not thousands or tens of thousands of dollars per user.

I think it's better to use books and not have so many distractions in the classroom.

But equally it's really helpful to be able to ask ChatGPT or whatever for a different explanation when you get stuck - but that is probably better done at home when studying the homework. It stops you getting frustrated and helps keep you making progress and in the 'flow state'.

I guess a big problem for schools now will be how to get students to use AI to help them learn rather than simply getting it to do their homework so they can go and play video games or whatever. I know if I'd had it as a kid I would've been tempted to do the latter.


> But equally it's really helpful to be able to ask ChatGPT or whatever for a different explanation when you get stuck - but that is probably better done at home when studying the homework. It stops you getting frustrated and helps keep you making progress and in the 'flow state'.

Yeah sure, then get a (sometimes) wrong answer with high confidence and believe it?


It's quite rare that it gives a wrong answer nowadays. Even more so if you ask it to use the internet etc.

But yeah, it's not infallible and sometimes even when it gives you a source it will incorrectly summarise it, but you can double check the information in the source itself.

It just makes it a lot easier to do quickly rather than having to go and find the right Wikipedia article or dig through lots of documentation. Just like Wikipedia and online docs made it easier than having to go to the library or leaf through a 500-page manual etc.


Only if you are asking surface level questions. There are also certain topics that seem to be worse than others. When asking how to do things in software GUIs, modern LLMs seem to have a high rate of making up features or paths to reach them. When asking for advice in games I've seen an extremely high rate of hallucinations. Asking why something is broken in my codebase has about a 95% hallucination rate.

If you are just asking basic science questions or phone reviews then it's pretty reliable.


> Only if you are asking surface level questions.

I find it pretty accurate well beyond that level. How much of that is actually a problem in K-12 education?


I've used it for languages and studying some reinforcement learning stuff, including examples in PyTorch. I haven't had many problems with it really.

Once when I asked it some questions about a strategy game (Shadow Empire) it got them wrong, but the sources it cited had the correct information.


Why do you think children will learn anything from a remark on a specific problem? If it were that simple, teaching would be easy. (Notice that teaching smart kids is easy).

Much of education requires making errors until you get it right a few times in a row, and paying attention to the errors. Getting an explanation of your errors is only part of that process. No LLM can provide the rest of it.


Using AI is one of the worst ideas for education.

Exactly - in my company we had some NLP models in Customer Service (bag-of-words for classifying tickets) but everywhere else it was just classification or regression problems.

So yeah, the bag-of-words model got replaced with a chatbot several years ago (when chatbots were all the rage back in like 2017) and will probably get replaced again with an LLM-enhanced chatbot soon. But the meat and potatoes are those classification and regression models and they aren't going anywhere.
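For anyone unfamiliar with the idea, a bag-of-words ticket classifier like the one described is genuinely simple. Here's a stdlib-only sketch; the example tickets and labels are invented for illustration, and a production version would use a proper library model instead of this nearest-centroid scoring:

```python
# Minimal bag-of-words ticket classifier sketch (invented example data).
from collections import Counter, defaultdict

def bag_of_words(text):
    return Counter(text.lower().split())

def train(labeled_tickets):
    # Sum word counts per label to get one "centroid" bag per class.
    centroids = defaultdict(Counter)
    for text, label in labeled_tickets:
        centroids[label] += bag_of_words(text)
    return centroids

def classify(centroids, text):
    words = bag_of_words(text)
    # Score each label by word overlap with its centroid.
    def score(label):
        return sum(min(words[w], centroids[label][w]) for w in words)
    return max(centroids, key=score)

centroids = train([
    ("I was charged twice this month", "billing"),
    ("please refund my last invoice", "billing"),
    ("the app crashes when I log in", "bug"),
    ("login page shows a blank screen", "bug"),
])
print(classify(centroids, "crashes after the login screen"))  # → bug
```

The point is that the whole "model" is just word counts, which is exactly why it was easy for a chatbot to displace it.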


I think most use-cases will still use simpler models like XGBoost etc. rather than LLMs. Customer segmentation is a really common use-case with no need for an LLM. Same for revenue/LTV forecasting.

Perhaps they can use the LLM to write and deploy these models without needing a Data Scientist but that seems risky to say the least.

In my company, the most Data Scientist-adjacent people are the Data Analysts but they tend not to have programming experience beyond SQL and basic Python and they aren't used to using the terminal etc.


Do those use cases need LLMs? Probably not. But if good results can be had with a day of prompting (in addition to the stuff mentioned in the article, which you have to do anyway), and a smaller model like Haiku gives good results, why would you build a classifier before you have literally millions of customers?

The LLM solution will be much more flexible because prompts can change more easily than training data and input tokens are cheap.
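A sketch of what that looks like in practice: the categories live in the prompt, so changing them is an edit, not a retraining run. The prompt template, labels, and parsing below are illustrative, and `send_to_llm` stands in for a real API call to whatever small model you'd use:

```python
# Prompt-based classification sketch. send_to_llm is a stand-in for a real
# model call; labels and prompt wording are illustrative assumptions.
LABELS = ["billing", "bug", "feature_request", "other"]

def build_prompt(ticket_text):
    return (
        "Classify the support ticket into exactly one of these labels: "
        + ", ".join(LABELS)
        + ". Reply with the label only.\n\nTicket: "
        + ticket_text
    )

def parse_label(response):
    # Models sometimes add punctuation or extra words; fall back to "other".
    cleaned = response.strip().lower().rstrip(".")
    return cleaned if cleaned in LABELS else "other"

def classify(ticket_text, send_to_llm):
    return parse_label(send_to_llm(build_prompt(ticket_text)))

# Swapping categories means editing LABELS, not gathering new training data:
fake_llm = lambda prompt: "Billing."
print(classify("I was charged twice", fake_llm))  # → billing
```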


> Do those use cases need LLMs? Probably not.

One of the points of the article is the importance of gathering data to support your conclusions.

> prompts can change more easily than training data

Training data is real, and prompts are not. I don’t think this is an apples to apples comparison.


I don't disagree that very numerical tasks like revenue forecasting are not a good fit for LLMs. But nor did many data scientists concern themselves with such things (compared to business analysts and the like). Software to achieve this has been commoditized.

Yeah, the smaller subreddits are good. The problem is it’s basically killed off alternative forums.

I never thought I’d miss vBulletin so much.


The Something Awful forums are still very much alive.

Yeah, I don't think local LLMs will keep up with what the massive corporations put out. But they might get to a level of performance where it just doesn't matter for most users.

And people would prefer to run a model locally for 'free' (not counting the energy cost) rather than paying for an LLM subscription.


It's a winner-takes-all market and everyone wants to be the next Google and not the next Lycos or AskJeeves etc.

It'd be interesting to see what they spend all the money on though as we seem to be hitting diminishing returns and I'm not sure if the typical enterprise user really cares about small improvements on benchmarks.

It seems like it'd probably be better to spend all that on marketing, free trials, exclusivity/bundle deals etc. ChatGPT already has a strong advantage there as it has so much brand recognition. I've seen lay people refer to all LLMs as ChatGPT, like my grandparents did with Nintendo and all video game consoles.


It’s absolutely not winner take all. LLMs have become a commodity and the cost of switching models is essentially nil.

Even if ChatGPT has brand recognition amongst lay people, your grandparents aren't the ones shelling out $200/mo for a Claude Code subscription and paying for extra Opus tokens on top of that. Anthropic's revenue is now neck and neck with OpenAI's, but if tomorrow they increased the price of Opus by 5x without increasing its capabilities, many would switch to Gemini, GPT 5.4, Cursor, or any cheap Chinese model. In fact I know many engineers who have multiple subscriptions active and switch when they hit the rate limits of one, precisely because the tools are so interchangeable.

At some point it could even become cheaper to just buy 8x H100s and host Qwen/DeepSeek/Kimi/etc. yourself if you're one of those companies paying $3k/mo per engineer in tokens.


I have non-tech friends telling me about preferring other models like Gemini; this feels like the early days of search engines, when people were willing to switch to find better results.

Yep, I have non-tech friends and even the younger generation of students talking about how Claude is better at certain tasks or types of homework problems lol.

If it's used as a tool not just search, then people will definitely talk about the other stuff. Students who rely on free tiers will also definitely just have everything bookmarked.


> It's a winner-takes-all market and everyone wants to be the next Google

It absolutely isn't! If billed per token, there is no reason to be married to a single model-family provider at all. The models have very different strengths and weaknesses; you should be taking advantage of this at all times.


People used to say this about search engines and web browsers as well.

Regardless, eventually Google became the universal default for both. When it comes to software, the average person doesn't shop around for the technologically optimal choice; they just use what everyone else is using.


Google search is free to use. If they spike the models' prices up, people will look for alternatives.

AI (that is, plain chat) is always going to be free to use as well. Google and Microsoft are going to keep it that way. And make the money back via ads.

That's why ChatGPT still has a free option. If they didn't, they would lose a billion users overnight to Gemini.


My point is that today there is no clear winner. Opus, GPT 5.4, and Gemini have different strengths. Google search was running circles around the competition in basically all use cases.

Where to go next? I don't think anyone has gotten close to automating everyday PC usage, likely via screen capture and raw keyboard+mouse inputs. Imagine how much bigger that market would be than vibecoding.

tbh I don't think this use case is going to be as big as people seem to think

there are a lot of reasons, but in brief - I think AI desktop use is a product that the average person isn't going to get much value out of. to make an analogy - the creators of Segway thought people would buy them in large numbers, but it turned out most people don't mind walking manually (or at least, don't mind it enough to spend money on a scooter). I think makers of AI Desktop Use products are going to find out the same thing as it relates to everyday tasks like checking email and shopping.


I was thinking more remotely managing the computer in a warehouse, replacing the mouse of an architect, or some physical object engineer. That your grandma can finally find Discord by speaking to such a bot is just a nice side effect.

Well yeah, I wasn't even talking about professional use, since I think in professional use cases it will turn out to make a lot more sense to set up APIs that AIs use than to set up screen scraping and mouse+keyboard use.

In fact, even in rare cases where it's not possible to get an API or CLI to interface with some piece of software, I think people will find that their best bet is to first create a deterministic screen-scraping program for that specific software, then have that program serve an API for the AI to use. It would be so much cheaper to run (inference-wise) and so much more reliable than having the AI itself perform the image interpretation and clicking.
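The wrapper idea in miniature: one deterministic operation, recorded once for one specific app, exposed as a plain function the AI can call instead of interpreting screenshots. Every name here is hypothetical and the GUI-driving steps are stubbed out:

```python
# Hypothetical wrapper sketch: deterministic scraping of one legacy app,
# exposed as a tiny API. The GUI-driving internals are stubbed placeholders.

def _click_export_button():
    # A real wrapper would replay a fixed, pre-recorded action
    # (pixel coordinates or accessibility-tree path) for this exact app.
    pass

def _read_export_file():
    # Stubbed: a real wrapper would read the file the app just exported.
    return "invoice_total=1234.56"

def get_invoice_total():
    """The one deterministic operation exposed to the AI."""
    _click_export_button()
    raw = _read_export_file()
    return float(raw.split("=")[1])

# The AI calls a cheap, reliable function rather than doing vision + clicking:
print(get_invoice_total())  # → 1234.56
```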

I see AI desktop use as mainly a consumer product for that reason, since that's the situation where you have to react "on the fly" to whatever the user asks you to do and whatever program happens to be on their computer (versus professional cases which are more large-scale and repetitive, and where you can have a software developer on hand).


Automating GUI use is a silly idea when the AI can do much of the same things by getting access to a *nix command line - which is how all coding models work. It matters when driving proprietary apps or browsing websites that aren't providing a clean machine-readable API, not really otherwise.

I don't think it's winner-takes-all. Google is Google in 2026 because Lycos and AskJeeves were bad in comparison. The average user doesn't care whose LLM they're using because they're all close enough. It's hard to see past the bubble bursting, but I expect most people will use multiple of them depending on context (Copilot via the integration in windows, Gemini via Siri on their phone, etc), likely without paying.

I was wondering the same thing about Iron Maiden the other day - they seem more of a merch company than a heavy metal band these days.

You can get Iron Maiden beer, Iron Maiden wine, Iron Maiden sunglasses etc. let alone the common merch like T-shirts.

Given many more people can buy merch than can buy a concert ticket (which has inherently limited numbers) I wonder how the two revenue sources compare.


Kiss too. Kiss was a merch juggernaut.


Poor take. In the last three years alone they've played over 100 concerts. Their set is two hours. They're all in/approaching their 70s. If that's not a band, I'm a pterodactyl.


Even if they are indeed a band, that doesn't mean you are not a pterodactyl, mind you.

But pterodactyls are pretty cool to my mind, so no offense really.


You won't even notice the pterodactyls - they're often in the bathroom!

But the p is silent.


It's the same with the "Star Wars" brand - the biggest chunk of revenue comes from merchandise and licensing, not the movies/shows. Lucas famously became a billionaire by securing merchandising rights in his original contract, not because of the cultural impact of the franchise.


They're tied together - Lucas wouldn't've had the billions if Star Wars hadn't had the cultural impact allowing it to sell all that merchandise.

