Hacker News | konschubert's comments

Orbital data centers are a dumb idea. Put a solar farm in the desert and add some batteries? That’s cheaper.

So I wouldn’t be too worried about this, the economics of this won’t pan out.


Datacenters need cooling. Cooling always externalizes heat to the environment. The places you want data centers are cold and have water available.

Neither space nor a typical desert qualifies.


A desert isn’t necessarily hot.

It's actually very easy to cool in deserts, because low humidity makes it very easy to move heat into the ambient air. You have to contend against ambient temperatures, but that's what insulation is for. The other big things you need for datacenters are reliable power and a low probability of infrastructure-disrupting natural disasters.

Still easier to cool in the desert than in space

You can do radiative cooling in space (you just need big radiators). You cannot do that reasonably in a desert.

Leaving aside that you certainly can do radiative cooling in a desert at night just fine: you have air, which, even if hot by desert standards during the day, is still orders of magnitude more effective for cooling via direct heat transfer than radiating heat away in a vacuum.
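Rough numbers make the comparison concrete. The figures below are my own illustrative assumptions (radiator temperature, heat transfer coefficient), not anyone's actual design:

```python
# Heat rejected per square meter: radiation into vacuum (Stefan-Boltzmann)
# vs. forced-air convection (Newton's law of cooling). Illustrative numbers.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiated_w_per_m2(t_radiator_k, t_sink_k=3.0, emissivity=0.9):
    """Net power radiated by one square meter of one-sided radiator."""
    return emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)

def convected_w_per_m2(delta_t_k, h=25.0):
    """Convective heat flux; h ~ 10-100 W/m^2/K for forced air, far higher for liquids."""
    return h * delta_t_k

space = radiated_w_per_m2(330.0)   # hot radiator in vacuum: ~600 W/m^2
desert = convected_w_per_m2(30.0)  # forced air, 30 K above ambient: ~750 W/m^2
print(f"space: {space:.0f} W/m^2, desert air: {desert:.0f} W/m^2")
```

A space radiator only matches modest forced air when it runs well above typical electronics temperatures, and liquid or evaporative cooling in a desert pushes the terrestrial side up by another order of magnitude.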

I'd think geothermal would be the way to do it in the desert; the ground is still typically cooler than the temperatures computers run at.

It's still problematic that most deserts don't have access to groundwater, so bootstrapping and maintenance are an issue.


You can try to put the heat underground. Maybe there is an aquifer you can use. Or maybe your desert is close to the coast!

Still easier than radiating it into space.


> Still easier to cool in the desert than in space

What is the physics and the math that let you conclude that?


Presumably that vacuum is an insulator.

Lack of medium in space so can't rely on convection or conduction there? This is really basic.

But what if you need compute for activities in space? Is the desert on earth still a good option?

With today's very high orbital launch costs, it's trivially true that the desert is cheaper.

With very low orbital launch costs, it's trivially true that space would become cheaper. Solar panels have no atmosphere/night/seasons and are always pointed at the Sun, no cover glass for hail, no 24h battery either. Radiators are 1/10th the area of PV which is very doable.

The question is, where exactly is the tipping point between those two extremes, and will Starship reach that? Opinions on this naturally bifurcate depending on one's feelings about Elon Musk.

I wouldn't be too worried because SpaceX engineers put a great deal of effort into reflection mitigation, including developing a space-rated mirror able to have an RF signal fire transparently through it.[1] The strategy is to bounce all the sunlight away from Earth, which makes satellites darker than even (hypothetically) covering a satellite in Vantablack.

[1] https://youtu.be/MNc5yCYth5E?t=1717


I don’t want to be foolishly dismissive, but I just don’t see how launch costs could be small enough to compensate for the huge overhead of putting things into space and maintaining things in space as opposed to literally any other place on earth.

I think the burden of proof is on the people who want to tell us that this is economical to show the numbers


Ok I’ll try:

Starship becomes “fully and rapidly reusable”, needing little to no refurbishment between launches. Then the lower bound of launch costs is just the expendables (methane, oxygen, nitrogen) which could cost as little as $1M per launch.

If SpaceX uses custom silicon (produced by “TeraFab”) that can run at higher temperatures, then the radiative cooling requirements go down significantly, and a 100 kW satellite might weigh around 1 ton.

Starship should be able to launch at least 100 t of payload. Assuming they could fit that many, that puts the launch cost per 100 kW at $10,000, which is a rounding error compared to the cost of the chips alone, even if it’s off by a factor of 10.
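Spelling out that arithmetic (same assumptions as above: $1M marginal launch cost, 100 t of payload, 1 t per 100 kW satellite):

```python
# Back-of-envelope launch cost per unit of compute power,
# using the comment's assumed numbers.
launch_cost_usd = 1_000_000   # expendables only, "fully and rapidly reusable"
payload_tonnes = 100
satellite_tonnes = 1
satellite_kw = 100

satellites_per_launch = payload_tonnes // satellite_tonnes    # 100 satellites
cost_per_satellite = launch_cost_usd / satellites_per_launch  # $10,000 per 100 kW
cost_per_kw = cost_per_satellite / satellite_kw               # $100 per kW launched
print(satellites_per_launch, cost_per_satellite, cost_per_kw)
```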

Obviously a lot needs to go right for this to happen, but it’s not impossible.


Counter argument:

Before the cost of flying very heavy shit and dealing with all the problems of operating that shit in space goes to zero, the cost of doing it terrestrially will go to zero. The idea that shooting any amount of payload into space could some how be more economical than just not doing that is completely bonkers and laughable.

It's like people completely forgot that there was 15+ years of connectivity infrastructure build out on earth before Musk did his shittier space version, not the other way around.


Transport doesn't "go to zero." Terrestrial transportation is already fully reusable, so it doesn't have the same cost headroom for improvement vs orbital launch.

Thanks, I really needed this post. I'm saving this for when people inevitably try to re-write history by saying "we didn't need Elon, because did anyone really doubt space-based AI would be the winner?? It was obvious all along because blah blah... <insert 20/20 hindsight>"


You thought you made an actual counter argument there?

The world seems to have become an abstract plaything for these billionaires why would they give a damn about practicality. This idiot shot a car into space for no good reason.

Not completely 'no good reason'—they needed to test the ability to send heavy payloads, it's great marketing for SpaceX (who intend to make money by having people pay them to put things in space for them) and brand awareness for Tesla.

Bollocks. Making the payload weird just shows the lengths that idiot will go to to give the finger to everyone

I honestly think he believes he’s doing good for mankind in a “fun and quirky way”, I don’t think he’s particularly trying to flip everyone off.

He can blow his money on 1 million satellites that will all decay back into the atmosphere within a few years

> He can blow his money on 1 million satellites that will all decay back into the atmosphere within a few years

He can also 'blow' his money on helping people by giving them opportunities:

> In 1993, Harris Rosen “adopted” a run-down, drug-infested section of Orlando called Tangelo Park. Rosen offers free preschool for all children prior to kindergarten and a free college education for high school graduates. Today, the high school graduation rate for Tangelo Park is 100 percent. And no, that is not a typo.

* https://www.ucf.edu/pegasus/harris-rosen/

* https://www.today.com/news/millionaire-uses-fortune-help-kid...


Helping people with ALS speak again seems worthy, as does helping humanity become a multi planetary species.

Is throwing up "1 million satellites" going to do those things?

How about running DOGE and gutting USAID?

Or helping Trump get elected? Was that a worthy endeavour? How's that working out for the average American (or anyone else on the planet) with four dollar gas and five dollar diesel?


Bringing satellite coverage to the world, including Iran and Ukraine is noble, yes.

As is volunteering to help get rid of waste and fraud, particularly when his time could be spent on more lucrative pursuits.

There are more things to life than the price of gas.


> Bringing satellite coverage to the world, including Iran and Ukraine is noble, yes.

Are we including cutting off Ukraine’s coverage at key times? Or Russian usage?

No need to discuss the DOGE bit, no one believes that trillion dollar saving was real.

‘Musk the Noble’ sure has a smell to it.


> cutting off Ukraine’s coverage at key times?

The only 'key times' were Ukrainian military usage of Starlink inside Russia. Ukraine was given Starlink to use to defend Ukraine, not attack Russia.

> Or Russian usage?

Which was explicitly identified and cut off.

> No need to discuss the DOGE bit

Exactly. Nobody can defend fraud and abuse. Since your main issue is that the savings weren't as big as expected it sounds like you know that.


> The only 'key times' were Ukrainian military usage of Starlink inside Russia. Ukraine was given Starlink to use to defend Ukraine, not attack Russia.

Fighting without hurting the enemy? What’s the point? The approach of the Trump administration is just letting Ukraine bleed out.

Russian starlink usage has only just been cut off, how many years did that take?

> Nobody can defend fraud and abuse

This administration is anti-fraud and anti-abuse?


> Fighting without hurting the enemy?

No. Nobody said that except you.

> What’s the point?

Getting Russia out of the Ukraine.

> Russian starlink usage has only just been cut off

No. Russians have tried to use Starlink in late 2023 early 2024, there were no direct or indirect sales and terminals were disabled on a blacklist basis. They moved from a blacklist to a whitelist in February this year.

> This administration is anti-fraud and anti-abuse

In some ways, yes. I won't defend "Trump coin" but it's pretty clear with things like USAID, Minnesota child care center scams, and the California hospice scam the democrats were in favour of and participated in fraud and abuse.


> the Ukraine.

Correcting self: Ukraine.


> Bringing satellite coverage to the world, including Iran and Ukraine is noble, yes.

The general "world" is getting connectivity just fine via mobile phones for a lot less than what it would cost them to get a Starlink system.

Useful for the war in Ukraine, not so useful in Iran:

* https://en.wikipedia.org/wiki/2026_Internet_blackout_in_Iran...

> As is volunteering to help get rid of waste and fraud, particularly when his time could be spent on more lucrative pursuits.

When did he do this? Are you referring to (LOL) DOGE? Nothing like raising unemployment without saving any money:

* https://www.cato.org/blog/doge-produced-largest-peacetime-wo...

* https://fordschool.umich.edu/news/2025/reality-doges-mediocr...

And let's not start on all the illegal actions:

* https://www.theguardian.com/us-news/2025/feb/10/elon-musk-do...

Those (supposed) efficiency cuts in (e.g.) USAID have been estimated to have caused many tens of thousands of deaths:

* https://www.thelancet.com/article/S0140-6736(25)01186-9/full...

* https://hsph.harvard.edu/news/usaid-shutdown-has-led-to-hund...

And how much of the (alleged) money that was saved is now going towards the Iran war? The Pentagon is asking for $200B:

* https://www.nationaldefensemagazine.org/articles/2026/3/25/c...

> There are more things to life than the price of gas.

That is a very privileged view. In the US specifically, with its abysmal public transportation due to car-centric {ex,sub}urban design, a lot of people will need to pay more for getting to work and will have to cut back on (e.g.) groceries.

Globally, oil prices are wreaking havoc in all sorts of ways on daily life:

> Worsening fuel shortages resulting from the war in the Middle East are threatening sacred funeral ceremonies in Thailand, where Buddhist temples are scrambling to obtain diesel for cremations.

> The abbot of Wat Saman Rattanaram in Chachoengsao province, about 80km (50 miles) east of Bangkok, warned that a suspension of cremation services was a real possibility. Some petrol stations have run out of fuel, while others allow sales only to vehicle operators.

* https://www.scmp.com/news/asia/southeast-asia/article/334692...


> Useful for the war in Ukraine, not so useful in Iran

Wikipedia isn't a source, but regardless Wikipedia confirms the utility of Starlink in the war.

> Are you referring to DOGE

Yes.

> without saving any money

Your source at the Cato Institute confirms $150B.

> USAID have been estimated to have caused many tens of thousands of deaths

Lancet is a political advocacy magazine. USAID isn't an aid agency. Not funding gay and lesbian theatre in Serbia doesn't stop anyone from dying.

> much of the (alleged) money that was saved

You said zero money was saved earlier. What is it?

> is now going towards the Iran war?

It seems like a better investment than giving Iran 1.6 billion dollars to fund terrorism across the middle east, wouldn't you say?


Decay and be replaced. You make it sound as if this is short term, like flinging confetti up in the air, instead of long term, like tiling a roof.

Why should he be allowed to pollute the night sky like that?

Not to mention the planet. Launching satellites takes an incredible amount of fuel.

We gotta build a lot of solar and batteries. And wind.

Let's go.


You have to invert the priorities. Lots of wind, then useful amounts of storage and backup energy (probably domestic coal in Germany), and a little bit of solar.

In Europe most gas consumption is in winter, when PV does not produce much. (The sun is not shining much; that's the reason outside temperatures are low...)

https://www.bruegel.org/dataset/european-natural-gas-demand-...

Also we don't build solar or batteries in any significant amount in Europe, we import them from Asia. We only install them in Europe.


Europe will burn gas in winter for a long time, but it can stop burning gas in summer.

Solar is cheaper than wind. For the energy transition, it doesn’t matter who makes the panels. I mean I agree that keeping the know how in Europe is important. But it’s not like they suddenly stop working (unless they get hacked, I guess.)


I fall strictly into camp 1, but I disagree on code quality.

Code quality makes the difference between a janky system that works most of the time and a rock solid system that is an enjoyment to use.

QA can only apply duct tape. If your state management isn’t clean, the UX will suck. If your functions aren’t clean, you will keep chasing bugs.

Luckily, AI is capable of writing good code. Today, that still requires some amount of hand holding, but it’s getting better.


Today, there is also AHSCT.

There are already clinics where they basically remove your immune system and give you a new one. If you don’t die in the process, you are likely to be cured of MS.

(Any existing damage will remain.)

Currently this is reserved for the most quickly progressing cases but if we can make this safer and cheaper, it might in future be applied as an early stage cure, so people can go on to live healthy lives.

That being said, AstraZeneca's approach does seem much safer, if it’s proven to be effective!


Yeah AHSCT is no joke. I mentioned in another comment my wife has MS - diagnosed last year in her mid 40s with thankfully no severe impairment. They discussed AHSCT with us but didn’t recommend it unless another disease modifying treatment didn’t work. Thankfully, Tysabri seems to be working well for her.

My mom passed from leukemia years ago. Or rather, from an infection as she was starting HSCT. I’m sure it’s safer than it was 30 years ago, but being without an immune system for a period of time really is still a last resort.


> There are already clinics where they basically remove your immune system and give you a new one. If you don’t die in the process

Of side effects of the process, or of opportunistic diseases during the transition?


The latter is my understanding.

I would not mind remyelination + being on a DMT, heh.

I disagree with every sentence of this.

> solves the problem of too much demand for inference

False, it creates consumer demand for inference chips, which will be badly utilised.

> also would use less electricity

What makes you think that? (MAYBE you can save power on cooling. But not if the data center is close to a natural heat sink)

> It's just a matter of getting the performance good enough.

The performance limitations are inherent to the limited compute and memory.

> Most users don't need frontier model performance.

What makes you think that?


> False, it creates consumer demand for inference chips, which will be badly utilised.

I think the opposite is true. Local inference doesn't have to go over the wire and through a bunch of firewalls and what have you. The performance from just regular consumer hardware with local, smaller models is already decent. You're utilizing the hardware you already have.

> The performance limitations are inherent to the limited compute and memory.

When you plug in a local LLM and inference engine into an agent that is built around the assumption of using a cloud/frontier model then that's true.

But agents can be built around local assumptions and more specific workflows and problems. That also includes the model orchestration and model choice per task (or even tool).

The Jevons Paradox comes into play with using cloud models. But when you have fewer resources you are forced to move into more deterministic workflows. That includes tighter control over what the agent can do at any point in time, but also per project/session workflows where you generate intermediate programs/scripts instead of letting the agent just do whatever it wants.

I'll give you an example:

When you ask a cloud based agent to do something and it wants more information, it will often do a series of tool calls to gather what it thinks it needs before proceeding. Very often you can front load that part, by first writing a testable program that gathers most of the necessary information up front and only then moving into an agentic workflow.

This approach can produce a bunch of .json, .md files or it can move things into a structured database or you can use embeddings or what have you.

This can save you a lot of inference, make things more reusable and you don't need a model that is as capable if its context is already available and tailored to a specific task.
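A minimal sketch of that front-loading step, assuming a Python codebase (the file names and fields here are hypothetical, just to show the shape):

```python
# Deterministic pre-pass: collect facts an agent would otherwise discover
# through a series of inference-driven tool calls, and write them to disk.
import json
from pathlib import Path

def gather_context(root: str) -> dict:
    """Collect repository facts up front, before any agent runs."""
    root_path = Path(root)
    readme = root_path / "README.md"
    return {
        "python_files": sorted(str(p) for p in root_path.rglob("*.py")),
        "has_tests": any(root_path.rglob("test_*.py")),
        "readme_excerpt": readme.read_text()[:2000] if readme.exists() else None,
    }

def write_context(root: str, out: str = "context.json") -> None:
    """Persist the gathered context so the agent reads one file, not the repo."""
    Path(out).write_text(json.dumps(gather_context(root), indent=2))
```

The agent then starts from `context.json` instead of spending inference on discovery, and the pre-pass itself is testable with no model in the loop.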


Parallel inference on large compute scales in superlinear ways. There is no way to beat the reduction in memory transfers that a data-center inference model provides with hardware that fits at anything called a home. It is much more energy efficient to process huge batches of parallel requests compared to having one or a handful of queries running on an accelerator.

Aren't data centers extremely energy inefficient due to network latency, memory bottlenecks and so on? I mean the models that run on them are extremely powerful compared to what you can run on consumer hardware, but I wouldn't call them efficient...

Sorry to jump into this conversation, but running a model requires orders of magnitude more computing power than the entire network stack of all the nodes involved in the internet traffic of a particular request.

Meaning: these 5000 tokens consume tiny amounts of energy being moved around from the data center to your PC, but enormous amounts of energy being generated in the first place. An equivalent webpage with the same amount of text as these tokens would be perceived as instant in any network configuration. Just some kilobytes of text. Much smaller than most background graphics. The two things can't be compared at all.

However, just last week there have been huge improvements on the hardware required to run some particular models, thanks to some very clever quantisation. This lowers the memory required 6x in our home hardware, which is great.

In the end, we spent more energy playing videogames during the last two decades, than all this AI craze, and it was never a problem. We surely can run models locally, and heat our homes in winter.


> What makes you think that?

The fact that today's and yesterday's models are quite capable of handling mundane tasks, and even companies behind frontier models are investing heavily in strategies to manage context instead of blindly plowing through problems with brute-force generalist models.

But let's flip this around: what on earth even suggests to you that most users need frontier models?


Everybody has difficult decisions to make in their daily lives and in their work.

Having access to a model that is drawing from good sources and takes time to think instead of hallucinating a response is important in many domains of life.


> False, it creates consumer demand for inference chips, which will be badly utilised.

There are so many CPUs, GPUs, RAM and SSDs which are underutilized. I have some in my closet doing 5% load at peak times. Why would inference chips be special once they become commodity hardware?


That's the point: they’re better utilized in the cloud.

"consumer demand for inference chips, which will be badly utilised"

why do you assume it will be badly utilised? It can't be worse than what we have now, which is chips already badly utilised by Windows' bloatware


> What makes you think that?

Looking at actual users of LLMs


While not everybody is a professional in YOUR domain, many people are professionals in SOME domain. And even outside of that, they deserve a smart conversation partner, for example on topics like health and politics.

Let's say I have long time series of past solar irradiation and long time series of past weather forecasts. Can this model make use of weather forecasts for time X in the future to predict electricity prices in the future?

That is, can it use one time series at time X to predict another time series at time X?

Or is this strictly about finding patterns WITHIN a time series.
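For what it's worth, the second case is forecasting with an exogenous covariate: a regressor known for time X (the weather forecast) used to predict the target at time X. The toy sketch below (made-up numbers, plain least squares, not this model) just illustrates that distinction:

```python
# Predicting one series at time X from a covariate known for time X,
# as opposed to autoregression on the target's own past. Toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
irradiance_forecast = rng.uniform(0, 1000, n)                  # covariate, known ahead
price = 80 - 0.05 * irradiance_forecast + rng.normal(0, 3, n)  # toy price relationship

# Fit price_t ~ a + b * forecast_t on history ...
X = np.column_stack([np.ones(n), irradiance_forecast])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

# ... then apply it to a future forecast value.
predicted = coef[0] + coef[1] * 800.0  # expected price when forecast says 800 W/m^2
print(f"slope {coef[1]:.3f}, predicted {predicted:.1f}")
```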


The paper suggests it’s for forecasting. How this doesn’t just represent the relatively small number of training samples isn’t obvious to me. If most of the time series for training go up and to the right then I assume that’s what the model will (generally) do, but who knows.

Yea. Cycling around self-driving cars is obviously much safer and many more people will be encouraged to do it.


I wonder if that is an opportunity to build an open-source platform focused on this, replacing GitHub as the collaboration platform of an era when code was valuable.


The bottlenecks today are:

* understanding the problem

* modelling a solution that is consistent with the existing modelling/architecture of the software and moves modelling and architecture in the right direction

* verifying that the implementation of the solution is not introducing accidental complexity

These are the things LLMs can't do well yet. That's where contributions will be most appreciated. Producing code won't be it, maintainers have their own LLM subscriptions.


I still think there is value in external contributors solving problems using LLMs, assuming they do the research and know what they are doing. Getting a well written and tested solution from an LLM is not as easy as writing a good prompt, it's a much longer, iterative process.


> assuming they do the research and know what they are doing.

This is the assumption that has almost always failed and thus has lead to the banning of AI code altogether in a lot of projects.


[flagged]


Some months back I would have agreed with you without any "but", but it really does help even if it only takes over "typing code".

Once you do understand the problem deep enough to know exactly what to ask for without ambiguity, the AI will produce the code that exactly solves your problem a heck of a lot quicker than you. And the time you don't spend on figuring out language syntax, you can instead spend on tweaking the code on a higher architecture level. Spend time where you, as a human, are better than the AI.


I don't know, I've had good experiences getting LLMs to understand and follow architecture and style guidelines. It may depend on how modular your codebase already is, because that by itself would focus/minimize any changes.


I don't know. The company I work at is inviting candidates for interviews, and we have to make compromises because we can't get the exact profiles we are looking for. Something about your comment does not add up to me.


Locality. People want to work close to where they live and not all places are bustling with all kind of activity. I suspect you're hybrid or on site only, right?


not GP, but we're hybrid but remote-first and 80% is remote and we have the same experience. Getting juniors is easy, getting seniors+ is very difficult.


The model I am mentioning matches with this. Speaking from my own personal experience as well, when you're junior and young, you can move anywhere, especially if you're ambitious. As you gain experience, you also settle down a bit in your life, you have a wife, kids, a house. Their jobs and schools. Moving then is a _big deal_.

Of course, there are other factors that make juniors more abundant on the current job market, namely, most companies don't want them.


That absolutely makes sense, but I'm not sure it is the reason. I mentioned we're remote first: we hire _everywhere_. I've been with this company for 7 years, and haven't traveled to HQ even once, and have worked from home or a spot of my choosing (but honestly, that spot is almost always home!) every day, that's how remote first we are - nobody has to uproot their life to work with us.

But it's still extremely hard to find senior+. I'm sure our tech stack plays a role, and naturally senior developers are much less common than juniors. But whenever I hear about the job market being super hard, I feel like I'm living in a parallel universe.

AI is not replacing anyone from my perspective, but AI might become our only hope at some point, because we're growing aggressively. I have to keep mediocre people because I can't even replace at that level easily - the only ones I'm pruning are the ones that are net-negative contributors.


Ah, sorry, I misunderstood your original post then, I interpreted "hybrid, remote first" as... you can be remote most days but you _need_ to be in the office a couple of days. This just goes to teach me a hybrid model has _a lot_ of variants.

Back to the point, I think I'm pretty senior, mostly embedded SW. Thankfully I still have work, but the job market seems to have cratered. I have friends that are pretty good who have been looking for jobs for about half a year now.

I'm incredibly curious now what is your tech stack. And how do you guys view people looking to switch tech stacks.


We're very boring, our stack is PHP/postgres/mysql. A lot of Symfony, a lot of Symfony-style-code on top of Wordpress (mentioning that usually puts people off but it's all PHP in the end, and you can choose to write clean code on either).

Lots of people see PHP in general as a dead end career-wise and WP specifically as almost an insult, so there aren't many that advanced their skills and have continued to work with PHP (or Wordpress, but I believe that an experienced PHP developer has no trouble picking up WP).

We're generally very neutral on how someone arrived where they are, we don't require certificates or degrees, we focus on experience and skills. I wouldn't hire someone who isn't experienced with at least one side of our stack though (unless they're extremely good) because it takes time from other developers to upskill them and that's the one resource we don't have.

I won't disclose where I work though as that would dox myself and I much prefer anonymity.


Looking for a remote-only job in 2026 is a big handicap, though not impossible.

