Hacker News | new | past | comments | ask | show | jobs | submit | cjbgkagh's comments

I waited for AI to get better before adopting Nix as it seemed to be rather arcane, a bit like Arch Linux, and I was worried I wouldn’t have the time for it. In preparation I shifted my development environments entirely to docker scripts where I can copy and paste working snippets from the internet.

Nix and AI are a match made in heaven, and I think we're going to see a lot of good software that's amenable to use by AI and that is both cheaper to build and easier to use.


On the up side, perhaps this disaster will make it clear that WWIII won’t be a cakewalk and we can avoid an even more disastrous war with China.

During the beginning of the last Iraq war, I was at a social event and somehow ended up saying that 'at least this has been so obviously foolish that it should cause us to really think twice about doing anything like it again', and someone 20 years my senior turned to me from another conversation and said, 'we said the same thing about Vietnam'.

People get complacent thinking the system just keeps working no matter what, not realizing that a lot of effort from talented people goes into keeping it working. So they vote in a total moron and discover the good times aren't innate.

I guess someone right after the Great War / the war to end all wars could be forgiven for thinking people wouldn’t be trying that again anytime soon.

What I would consider different this time is that I think the US is in the looting stages of collapse and will be unable to credibly fight such a war even if a minority wanted to.

I have a bit of a conspiracy theory about Trump starting the Iran war as a grift to get $200B appropriated only to abandon the region and have the money disappear.


My hope is that it’s bureaucratic inertia. There really is little excuse. Especially with super high voltage power lines becoming more affordable.

I mean, the US was deploying significantly more renewable energy projects during the last administration than ever before, but the corrupt Trump administration stopped many of them immediately after reentering office.

The bureaucracy was moving in the right direction - towards renewables - until the conservatives in this country deliberately changed strategy to emphasize fossil fuels again.

You can draw your own conclusions about motive, but this isn’t an accident.


I wasn’t even thinking about the US, and I consider this administration an interlude. I’m hearing other countries are putting the brakes on Chinese solar in an effort to build indigenous production capacity, which is incredibly stupid. At least solar scales down, so individuals can circumvent this and get their own.

I am curious: what are the drawbacks of building indigenous photovoltaic production?

Cost, especially if bootstrapping, as your more expensive electricity is used to make your still more expensive panels. If you are going to onshore production, it would be far cheaper to bootstrap on Chinese panels.

Some of us have insider connections

It was written in assembly, so it goes through an assembler instead of a compiler.

I assume GP is talking about the bit in the article that goes

> RCT does this trick all the time, and even in its OpenRCT2 version, this syntax hasn’t been changed, since compilers won’t do this optimization for you.


That makes more sense. I second their sentiment: modern compilers will do this. I guess the trick is knowing to use numbers that have these options.

There was a recent article on HN about which compiler optimizations would occur and which wouldn't, and it was surprising in two ways: the compiler would make some optimizations that you might not expect, and it would not make others that you would expect, because in some obscure calling path the optimization wouldn't be valid. Fixing that path would usually get the expected optimization.

I’m an unusually good programmer, I’ve worked in over 25 different programming languages and have been doing it since I was 6. I’ve spent most of my career as an applied researcher in research orgs where my full time job is study.

Finding new relevant things to learn gets progressively more difficult, and LLMs have blown that right open. Even if they have zero new ideas, the encoding and searching of existing ideas is nothing like I've seen before. If they can teach me things, they can definitely teach less experienced people things as well. Sometimes it takes a bit of prodding; for instance, one will insist something is impossible but, when presented with evidence to the contrary, will proceed to give working prototypes. Which means in these very long-tail instances it does still help to have some prerequisite knowledge. I wish they were more able to express uncertainty.

I think the primary reason Ed Tech hasn’t been disrupted is that an expensive education is a costly signal and a class demarcator; making it cheaper defeats the primary purpose. Grade creep, the reproducibility crisis, plagiarism crises, and cheating scandals fail to undermine this purpose. In fact, the worse it gets, the more it becomes a costly signal. As inequality increases, so does the importance of social signals. In many countries, universities are given special privileges to act as a gateway to permanent residency, which is extremely profitable. If anything is to replace education, it will have to either supplant this role as a social signal, or the reward for the social signal will need to be lost, and I don’t see either happening anytime soon short of a major calamity.


Very much a thing, and one of the many reasons I'm becoming more of a recluse: shared public spaces are becoming rather unpleasant. Mostly in the US and LatAm, a fair amount in the UK, not so much in Germany.

There are fewer and fewer shared public spaces every year anyway. It feels like everything is getting taken over by franchises that want to maximize customer throughput.

“Better to be feared than loved” - Niccolo Machiavelli

While the connections are important, I think the individual cell behavior is also very important, and that is driven by DNA. Brain cells last a lifetime and can modify their own DNA, so each one ends up being unique. I do wonder how much of behavior/consciousness is encoded in the cells' DNA versus the connections between the cells.

Do you have a citation for the notion they can modify their own DNA? I would fairly easily believe they can modify its expression, but I’m skeptical of the idea they can modify the sequence.

It is half true in that they can modify their epigenetics.

Right, that’s why it makes sense. And epigenetics are not changes to DNA sequences.

Surely all of behavior and consciousness are encoded in the connections between cells. I think the question you want to ask is how much those connections are determined by DNA.

The depth of complexity and innumerable interacting variables of biology make attempts to map brain function always seem like an absurdity

I worked on the Human Connectome Project.

If they freeze the vesicles that deliver transmitters and make them analyzable, you've got all the information you need. In terms of a modern ANN, it's the connections (axons) and the weights (transmitters/receptors in tandem).

That said, this article doesn't get to the point in the free section. How are they collecting the information? Slicing is inherently destructive. Someone's got to manufacture an entirely novel imaging modality. Perhaps they could scan millimeters ahead of the slice at a resolution high enough to image receptors. Not possible currently.


> If they freeze the vesicles that deliver transmitters and make them analyzable, you've got all the information you need.

How can we possibly know that the non-connectome details of the brain don't influence computation or conscious experience?

It seems we ignore these only because they don't fit neatly into our piles of linear algebra that we call ANNs.


Take a gander at the OpenWorm project. It's a great example of how simple neuronal activity is (given details like the connections, number of receptors, and transmitter infrastructure). SOTA models of neuronal activity are simple enough for problem sets in undergraduate biomedical engineering programs.

Sure, to your point, we don't know. But the worm above (nematode) swims and seeks food when dropped into a physics engine.

My main point is that the scale of the human brain is well beyond the capabilities of modern imaging modalities, and it will likely remain so indefinitely. Fascicles we can image, individual axons we cannot. I guess, theoretically, we'll eventually be able to (but it's not relevant to us or any of our remote descendants).


> But the worm above (nematode) swims and seeks food when dropped into a physics engine.

Nematode worms have an oxytocin analogue called nematocin that is known to influence learning and social behaviors like mating. As far as I can find, the project doesn't account for this, or only minimally, but aims to in the future.

It's not surprising that immediate short-term behaviors like movement depend mostly on the faster signaling of the connectome. But since we know of other mechanisms that most definitely influence the connectome's behavior, and we know we don't account for those at the moment, it is not accurate to say that the connectome is "all the information you need".

I agree that mapping the connectome of the human brain is impractical to the point of impossibility. But even if we could, the resulting "circuit diagram" would not capture all the details needed to fully replicate human cognition. Aspects of it, sure. Maybe even enough to make it do useful tasks for EvilCorp LLC while being prodded with virtual sticks and carrots. But it would be incomplete.


Why would axons be unimageable?

There's research on the translation process where cells are flash-frozen (to avoid water crystals), then imaged with cryo-electron microscopy, AFM, etc., to get snapshots of the translation process (RNA to protein) and a better understanding of how the folding proceeds and is aided.

If we can image sub-cellular features, what makes you believe we can't trace all the axons, dendrites and the synapses?

It seems more like a question of how to do it cost effectively at scale, not so much a question of "can we or not?".


I saw a putative 3D animation of a fly whose brain had been digitized and then run in a simulation. It buzzed around, sipped food it had found on the ground, even rubbed its forelegs together as flies do. A true Dixie Flyline. We live in strange times...

> If they freeze the vesicles that deliver transmitters and make them analyzable, you've got all the information you need. In terms of a modern ANN, it's the connections (axons) and the weights (transmitters/receptors in tandem).

This is exactly what I’m doubting, how can you be so sure?


Same question, answered under the other comment.

Yeah, but it wasn't, though. I found your answer unconvincing. I suppose "we don't know" is an answer, but that is nothing like "we have all the information we need".

Am I right in thinking that even if you had all of the connections and weights mapped out for a brain, the specifics of synaptic plasticity are still pretty poorly understood?

What is the state of the art in regards to how neurons learn over time? Do existing neuron models account for that? Being trapped, unable to learn anything, sounds terrible.

All the information to replicate the structure we have delineated. But what else?

It is my understanding that for the animals where we have a simulation of the full connectome the behavior you see approximates the real behavior reasonably well, so maybe the jury is still out as to whether it is sufficient or not.

> There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate. Are you genuinely saying that they'll all be left behind because they didn't learn your technology in utero?

> No. That's obviously nonsense.

That does not obviously follow. I do worry about the ever-increasing proportion of humanity who are no longer 'economically viable', and this includes people who are not yet born.

