Hacker News | echelon's comments

Before LLMs, it was a frequent haunt of fiction writers.

Work is delivering value.

Yes, we have craftsmanship, but at the end of the day everything is ephemeral and impermanent and the world continues on without remembering us.

I think both the IC and executive are correct in superposition.


Indeed. Even the ur-craftsman, John Carmack, says that delivering value to customers is pretty much the only thing that matters in development. If AI lets you do that faster, cheaper, you'd be a fool not to use it. There's a reason why it's virtually a must in professional software engineering now.

Right, but Carmack has always delivered. I need someone without Carmack’s ability to show me what value they can deliver with AI.

Local is a dead end.

Open source efforts need to give up on local AI and embrace cloud compute.

We need to stop building toy models to run on RTX and instead try to compete with the hyperscalers. We need open weights models that are big and run on H200s. Those are the class of models that will be able to compete.

When the hyperscalers reach takeoff, we're done for. If we can stay within ~6 months, we might be able to slow them down or even break them.

If there were something 80-90% as good as Opus or Seedance or Nano Banana, more of the ecosystem would switch to open source because it offers control and sovereignty. But we don't have that right now.

If we had really competitive open weights models, universities, research teams, other labs, and other companies would be able to collaboratively contribute to the effort.

Everyone in the open source world is trying to shrink these models to fit on their 3090 instead, though, and that's such a wasted effort. It's short term thinking.

An "OpenRunPod/OpenOpenRouter" + one click deploy of models just as good as Gemini will win over LMStudio and ComfyUI trying to hack a solution on your own Nvidia gaming card.

That's such a tiny segment of the market, and the tools are all horrible to use anyway. It's like we learned nothing from "The Year of Linux on Desktop 1999". Only when we realized the data center was our friend did we frame our open source effort appropriately.


> We need open weights models that are big and run on H200s.

We have this class of models already: Kimi 2.5 and GLM-5 are proper SOTA models. Nemotron might also release a larger-sized model at some point in the future. With the new NVMe-based offload being worked on as of late, you can even experiment with these models on your own hardware, but of course there are plenty of cheap third-party inference platforms for these too.


> Open source efforts need to give up on local AI and embrace cloud compute.

Oh god no, please not more slop, you're already consuming over 1 percent of human energy output, could you, like, chill a bit?


In a similar vein: seek efficiency.

I.e., /if/ I am going to consume LLM tokens, I figure that a local LLM with 10s of billions of parameters running on commodity hardware at home will still consume far more energy per token than that of a frontier model running on commercial hardware which is very strongly incentivized to be as efficient as possible. Do the math; it isn't even close. (Maybe it'd be closer in your local winter, where your compute heat could offset your heating requirements. But that gets harder to quantify.)

Maybe it's different if you have insane and modern local hardware, but at least in my situation that is not the case.


But commodity hardware that's right-sized for your own private needs is many orders of magnitude cheaper than datacenter hardware that's intended to serve millions of users simultaneously while consuming gigawatts in power. You're mostly paying for that hardware when you buy LLM tokens, not just for power efficiency. And your own hardware stays available for non-AI related needs, while paying for these tokens would require you to address these needs separately in some way.

>And your own hardware stays available for non-AI related needs, while paying for these tokens would require you to address these needs separately in some way.

^ Fair. Yep, I agree the calculus changes if you don't have _any_ local hardware and you're needing to factor in the cost of acquiring such hardware.

When I did this napkin math, I was mostly interested in the energy aspect, using cost as a proxy. I was calculating the $/token (taking into consideration the cost of a kWh from my utility, the measured power draw of my M1 work machine, and the measured tokens per second processed by a ~20B-parameter open-weight model). I then compared this to the published $/token rate of a frontier provider, and it was something like two orders of magnitude in favor of the frontier model. I get it, they're subsidizing, but I've got to imagine there's some truth in the numbers.
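That napkin math can be sketched like this. Every number below is an invented placeholder (utility rate, power draw, throughput, hosted price), not a measurement, and with different inputs the comparison can land either way; the point is the formula, not the conclusion:

```python
# Hypothetical $/token napkin math: local inference energy cost vs. a
# hosted provider's published price. All constants are illustrative
# placeholders -- plug in your own measurements.

KWH_PRICE = 0.30        # $/kWh from a residential utility (assumption)
LOCAL_POWER_W = 60.0    # power draw under inference load, watts (assumption)
LOCAL_TOK_PER_S = 20.0  # throughput of a ~20B-parameter model (assumption)

# Energy cost per token for the local setup:
# kW * ($/kWh) gives $/hour; divide by 3600 for $/second, then by tok/s.
local_cost_per_token = (LOCAL_POWER_W / 1000) * KWH_PRICE / 3600 / LOCAL_TOK_PER_S

# Published hosted price (assumption), e.g. $3 per million output tokens.
hosted_cost_per_token = 3.00 / 1_000_000

print(f"local (energy only): ${local_cost_per_token:.2e}/token")
print(f"hosted (list price): ${hosted_cost_per_token:.2e}/token")
```

Note the comparison is apples-to-oranges: the local figure is marginal energy only (no hardware amortization), while the hosted figure is a list price that bundles hardware, margin, and subsidy.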

I wonder, does (or will) the $/token ratio fall asymptotically toward the cost of electricity? In my mind I'm drawing a parallel to how the value of mined cryptocurrency approximately tracks the cost of electricity... but I might be misremembering that detail.


I doubt it, because you aren't going to get the utilisation that a commercial setup would. There's no point wasting tons of money on hardware that sits idle most of the time.

If you're running agentic workloads in the background (either some coding agent or personal claw-agent type) that's enough utilization that the hardware won't be sitting idle.

Y'all aren't seeing the same future I am, I guess.

- Our career is reaching the end of the line

- 99.9999% of users will be using the cloud

- if we don't have strong open source models, we're going to be locked into hyperscaler APIs for life

- piddly little home GPUs don't do squat against this

Why are you building for hobby uses?

Build for freedom of the ability to make and scale businesses. To remain competitive. To have options in the future independent of hyperscalers.

We're going to be locked out of the game soon.

Everyone should be panicking about losing the ability to participate.

Play with your RTXes all you like. They might as well be raspberry pis. They're toys.

Our future depends on our ability to run and access large scale, competitive, open weights. Not stuff you run with LM Studio or ComfyUI as a hobby.


I don't agree that we are being left behind with regards to AI, I believe it's simply not worth participating in. I hope it all comes crashing down.

That's not the right perspective to have.

Also, the only thing crashing down will be the economic participation of everyday people if we don't have ownership over the means of creation. Hyperscalers will be just fine.


AI isn't a means of creation. AI is a scam on the whole of humanity: big tech stole humanity's output in the form of books, research, and media, and is now selling access to the slop generator.

I have created things long, long, long before AI existed, and intend on continuing to do so, without AI.


You're going to get run over.

These tools are faster and better than you when used by an expert.

Literally will run circles around you.

Assuming you're senior and good, someone of similar stature will get 3-5x your workload done in the same number of hours.

Humans are cooked without AI. I mean it.

Please try Claude Code and figure out your place in the world. It got good. Really fucking good.

I also shouldn't have to explain Nano Banana Pro or Seedance 2.0. Those models are godlike.

Please wake the hell up and stop being blind to this. You're about to get run over by a freight train.

Your output will be diminished to arts and crafts without it.


Eventually, we are going to figure out how to do more inference with less RAM. There is simply no way that current transformer-based LLMs are the right thing to do. LLMs still rely on emergent properties that no one fully understands, where the sheer quantity of weights and duration of training are the dominant factors driving performance.

There is no reason on God's green earth why a coding model should need to ingest all of Shakespeare, five dozen gluten-free cookbooks, the complete works of Stephen King, and 30 GB of bad fanfic from alt.binaries.furry. Yet for reasons nobody understands, all of that crap is somehow needed in order to achieve the best output quality and accuracy in unrelated fields. This state of ignorance can't last. Language models shouldn't need 10% of the RAM they are taking now.

Every other point you raise is very valid, but I really don't think hardware is going to be the problem that everybody assumes it will be.


Man, going to personal computing was a mistake, we should’ve stayed jacked to the mainframes /s

Entire device categories, like smartphones, are locked down. That's our future.

Here's my retort: https://news.ycombinator.com/item?id=47543367


Gambling outside of equities and securities is a negative externality. (Investment in startups, goods, etc. makes the economy spin. Investing in what hoop a ball goes through does no good at all.)

This puts plenty of people who shouldn't gamble into debt and lowers their societal fitness.

The other side of the gamble is probably losing on average too. Only the house and infrequent insiders win.

These private companies are fleecing our economy's dynamism without reinvesting it in aligned positive externalities.

It's a cancer.

At least the lottery goes to education, in theory. My college was subsidized by the lotto, so there's that.

Kalshi and PolyMarket aren't doing anything positive.


It is not for you to decide what one should or shouldn't do. Plenty of people want to gamble, and don't give a fuck of what you think they should or shouldn't do. And they are right: it's none of your fucking business.

Kalshi and PolyMarket are doing something absolutely wonderful for those people (i.e. the only people who should care about these "prediction markets" at all): they actually make betting fair, which was impossible before. In fact, there are now much better decentralized markets than these (basically all you need for a completely decentralized betting platform are Ethereum contracts), but Kalshi and PolyMarket are handier to use and hence more popular. With traditional gambling it was impossible, since a bookmaker can set any odds and reject any bets.


> It is not for you to decide what one should or shouldn't do.

What's the limit?

Murder? Taking fentanyl? Selling fentanyl? Shitting on the sidewalk? Taking photos of kids in public? Screaming fire in a theater? Defamation? Misleading poor people into thinking they can get ahead by spending their savings?

If there was no limit to what I should or should not do, I could just kill everyone I disagree with and take their money. But we know that's preposterous.

Here's the thing though: in aggregate, that's exactly what this is. Taking money from poor and undereducated people is like killing them. It's stealing their lives, indebting them, making them less fit, putting them on a lower trajectory for life. Killing their chances.

That's about the biggest fucking negative externality big tech has introduced to the world. And for what? To make a handful of people rich?

This does nothing for society. It's worse than crypto.

> Plenty of people want to gamble, and don't give a fuck of what you think they should or shouldn't do. And they are right: it's none of your fucking business.

Plenty of people want to rape kids, and it is our business to stop them. This argument is bunk.

A low level of gambling was okay because there was a controlled lever on it. Only a handful of casinos existed, and the government had the monopoly on the lottery program.

Now anyone can do it anytime. That is not good for our economy.

This is not a harmless activity. People are losing their financial well-being. Becoming addicted. Needing to fix their debt by becoming more in debt.

People are exiting the workforce as productive members of society because they lose their shirts. That is not good for individuals or society.

We're damaging the robustness of our economy by allowing these entities to exist. They should be taxed at 100% of their gross revenue, and those funds should go to educating kids about statistics while they're young and impressionable.


Stripe wants to own you. All of the hyperscalers do.

Avoid.


We need more infra in the cloud instead of focusing on local RTX cards.

We need OpenRunPods to run thick open weights models.

Build in the cloud rather than bet on "at the edge" being a Renaissance.


> forest fires

Fire bug

> earthquakes

Dynamite the fault

> hurricanes

Crazy, but, hear me out: mirrors in space warming the Atlantic, mirrors in Africa warming the atmosphere ("solar power"), Trump wanting to nuke a hurricane, etc.

Pandemic? Go harvest bats and put them in a cage with chickens. You don't even need a molecular bio lab.

Stock market crash? Bombs. Terrorist attacks.

Energy prices? Derail a train carrying fuel cars. Bonus points if it's in a major metro and has a blast radius. Or, I dunno, start a war with Iran.

This could get really bad.


Dynamite the fault? I presume you haven't seriously thought about how much dynamite it would take and how deeply you'd have to plant it.

Mirrors in space? Again, have you done the math? How many thousands of acres of mirrors would you need, how many rockets would it take, and how much would they cost? Could you make enough on the betting market to break even?


"Dynamite the fault"? What are you, a Bond villain?

The Bond villain would be the person running the betting market.

Assassinations markets are what's next.

E.g., someone will bet $1M that Elon Musk will be assassinated in 2026.

But these don't themselves even have to be legal. Second-order wagers will be placed on SpaceX and Tesla stock prices, bets that "a hundred-billionaire will die in 2026", etc.

A bet that Putin will be assassinated could be encoded in, "there will be regime change in Russia."


Someone would need to take the opposite side of that bet. And who would do that knowing someone might try to assassinate him in order to win that bet?

In this scenario, that would be the people paying for the assassination. The people who want it to happen bet that it won't. The people who want to do it bet that it will. The net result is that if one of the people who bet on it happening makes it happen, they are being paid by the people betting against it, in a plausibly deniable way.

A country leader seeing someone suddenly take out a $50 million position on them not being assassinated is not the $50 million vote of confidence a naive read on the market might indicate, it's a $50 million payout to the assassin. Albeit inefficiently so, since others can take the other side of the bet and do nothing. But the deniability may be worth it.
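The payout mechanics described above can be sketched as a simple parimutuel pool, where winners split the losers' stakes pro rata. The names and dollar figures are invented for illustration, and real prediction markets use order books rather than parimutuel pools, but the money flow is the same:

```python
# Illustrative parimutuel payout for a binary market: a large
# "won't happen" stake becomes the payout pool for whoever bets
# "will happen" and then makes it happen. Names/amounts are invented.

def parimutuel_payouts(yes_stakes, no_stakes, outcome_yes):
    """Winners get their stake back plus a pro-rata share of the losers' pool."""
    winners = yes_stakes if outcome_yes else no_stakes
    losers_pool = sum(no_stakes.values()) if outcome_yes else sum(yes_stakes.values())
    total_winning = sum(winners.values())
    return {who: stake + losers_pool * stake / total_winning
            for who, stake in winners.items()}

# "Sponsors" stake $50M on the event not happening; an "insider"
# stakes $1M on it happening, then makes it happen.
payouts = parimutuel_payouts(
    yes_stakes={"insider": 1_000_000},
    no_stakes={"sponsor_a": 30_000_000, "sponsor_b": 20_000_000},
    outcome_yes=True,
)
print(payouts)  # the insider collects their stake plus the sponsors' $50M pool
```

This also shows the inefficiency mentioned above: any bystander who joins the "yes" side dilutes the insider's share of the pool without doing anything.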


What's even more interesting is when you consider that A) it doesn't have to be one person taking out a large position, it can be multiple people, over time, and B) the assassin doesn't have to be known or confirmed ahead of time, if someone decides their "reserve price" has been met, all they have to do to receive a payout is place the appropriate bet before performing the act.

The end result is a combination of Kickstarter and Doordash for targeted homicide.


> The end result is a combination of Kickstarter and Doordash for targeted homicide.

or kidnappers. Someone could take the opposite side, kidnap the individual and guarantee their survival for the year. When time is up they just dump them in the street and collect the bet.


I'm not sure there's any deniability in placing the "won't be assassinated" bet, when you could equally state it as "I will pay $1M to whoever accepts this bet and assassinates this person."

Anyway, how exactly is this assassin going to collect on their bet? I'm pretty sure law enforcement will be looking into the fact that somebody placed that bet and then, shortly after, the assassination happened.


This could make for fun anti-life insurance.

"I bet I won't die this year."

The only life insurance you get to collect on while you're alive.


Ah, you beat me to it. I learned about assassination markets in a previous post about the overall gambling/prediction markets topic a few weeks ago; the concept is so coherent that it has its own Wikipedia page: https://en.wikipedia.org/wiki/Assassination_market

If someone wants him dead, someone has to bet that million on the target being alive at a specific date. Unless someone plans to do the execution themselves, the bet must be lost for the target to die!

Is it actually illegal to bet on an assassination?

I am not sure, but you don't technically have to bet on the assassination. You can bet on an event which would happen as a result of said assassination: X won't get re-elected. Company Y's CEO will change in 2027. This is artist Z's last tour. Athlete K won't participate in this event, etc.

It's the inverse; here's how you have to bet (unless you plan to do the hands-on assassination work yourself): X will get re-elected. Company Y's CEO will not change in 2027. This is not artist Z's last tour. Athlete K will participate in this event, etc.

Like I said elsewhere in this thread, the bet has to be lost if you want your target dead.


I wonder if there's a critical failure mode / safety feature of our species for some percentage of the population to always dislike whatever some other large percentage of the population likes.

As if it's to prevent the species from over-indexing on a particular set of behaviors.

Like how divisive films such as "Signs", "Cloud Atlas", and even "The Last Jedi" are loved by some and utterly reviled by others.

While that's kind of a silly case, maybe it's not just some random statistical fluke, but actually a function of the species at a population level to keep us from over-indexing and suboptimizing in some local minimum, or exploring some dangerous slope, etc.


It could be tied to the "wandering gene", which is believed to ensure that we spread out and don't get stuck in some local optimum.

> I am wondering where to hang up my Techpriest robes in search of more elite pastures.

Capital and tech improvement will beat anyone chasing that.

