
> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

I think this is the prevailing wisdom, but there's an angle OpenAI doesn't value and therefore isn't mentioning: there's far more compute sitting idle in everyone's offices, homes, and pockets than in the $100bn OpenAI cluster. It just isn't useful for training, because physics (consumer devices can't be networked at anywhere near datacenter bandwidth or latency). But it is useful for inference. Local LLMs ship this year or next in Chrome (Gemini Nano) and on Apple devices (Apple Intelligence), and they'll truly be available to everyone instead of going through OpenAI's infra. They'll be worse than GPT-4, but only for a couple more years.
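The "more compute at the edge than in the cluster" claim can be sketched as a back-of-envelope calculation. Every number below is an illustrative assumption for the sake of the sketch, not a measurement:

```python
# Rough comparison of aggregate idle consumer compute vs. one centralized
# training cluster. All figures are assumed, order-of-magnitude placeholders.

smartphones = 4e9         # assumption: ~4 billion active smartphones
tflops_per_phone = 1.0    # assumption: ~1 TFLOPS usable for on-device inference

cluster_gpus = 1e5        # assumption: ~100k accelerators in a large cluster
tflops_per_gpu = 1e3      # assumption: ~1000 TFLOPS per datacenter accelerator

edge_total = smartphones * tflops_per_phone     # aggregate edge compute (TFLOPS)
cluster_total = cluster_gpus * tflops_per_gpu   # aggregate cluster compute (TFLOPS)

# Under these assumed numbers the edge aggregate dwarfs the cluster,
# but it's only usable for embarrassingly parallel work like inference:
# gradient-synchronized training needs interconnect the edge doesn't have.
print(edge_total / cluster_total)
```

The ratio itself is meaningless beyond "edge is much bigger in aggregate"; the point of the sketch is that the gap is in connectivity, not raw FLOPS.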



Especially once you separate the rare, genuinely hard problems from the everyday queries that local LLMs can answer just as well as SOTA models, the value proposition of these expensive models plummets. If the frontier model can't solve really hard, long-horizon problems, a 10% lift on a given benchmark is not a material enough value prop for the end user to choose API costs or a monthly subscription over a free local version.





