Hacker News | swalsh's comments

Neurons that fire together, wire together. Your brain optimizes for your environment over time, so as we get older, our brains run in a more optimized way than when we were younger. That's why older hunters are more effective than younger ones: they're finely tuned for their environment. It's an evolutionary advantage. But it also means they're not firing in "novel" ways as much as the "kids" are. I think "kids" are more creative because their brains are still adapting and exploring novelty; their neuron connections aren't as deeply tied together yet.

This is also maybe one of the biggest pitfalls as our society gets "older", with more old people and fewer "kids". We need kids to force us to do things differently.


Oh, I've been looking for a project for my 11-year-old... he's a very project-oriented learner, which schools don't seem to do anymore.

What country are you in?

Speak for yourself; I have never thrown away code at this rate in my entire career. I couldn't keep up this pace without AI codegen.

Did you read the article? I don’t think that refutes anything the author said even a little bit.


I bet Claude was hyping this guy up as he was building it. "Absolutely, a Rust compiler written in PHP is a great idea!"


Every compiler in any language for any language has at the very least educational value.

On the other hand, demeaning comments without any traces of constructive criticism don't have any value.


Does it matter who the sycophant was or just that there was a sycophant?

My partner does that as well as LLMs at this point; "Sure honey, I remember you've talked a lot about Rust and about Clojure in the past, and you seem excited about this Clojure-To-Rust transpiler you're building, it sounds like a great idea!", is that bad too?


There is no comment on whether LLMs/agents have been used. I feel like projects should explicitly say if they were _or_ were not used. There is no license file, and no copyright header either. This feels like "fauxpen-source": imagine getting LEX+YACC to generate a parser, and presenting the generated C code as "open-source".

This is just another way to throw binaries over the wire, but much worse. This has the _worst_ qualities of the GPL _and_ pseudo-free-software-licenses (i.e. the EULAs used by mongo and others). It has all the deceptive qualities of the latter (e.g. we are open but not really -- similar to Sun Microsystems [love this company btw, in spite of its blunders], trying to convince people that NeWS is "free" but that the cost of media [the CD-ROM] is $900), with the viral qualities of the former (e.g. the fruit-of-the-poisonous-tree problem -- if you use this in your code, then not only can you not copyright the code, but you might actually be liable for infringement of copyright and/or patents).

I would appreciate it if the contributor, mrconter11, would treat HN as an internet space filled with intelligent, thinking people, not a bunch of shallow and mindless rubes. (Please (1) explicitly disclose both the use and the absence of use of LLMs -- people are more likely to use your software this way, and it preserves the integrity of the open-source ecosystem, and (2) share your prompts and sessions.)

So passes the glory of open source.


According to his README, he seems to have built a 3D engine completely from scratch 8 years ago without using any libraries:

https://github.com/mrconter1/IntuitiveEngine

> A simple 3D engine made only with 2D drawLine functions.


That is (slightly) reassuring (but the rest of his portfolio does not inspire confidence). Nevertheless, we should be required to disclose whether the code has been (legally) tainted or not. This will help people make informed decisions, and will also help people replace the code if legal consequences appear on the horizon, or if they are ready to move from prototype to production.


Slightly?


I believed that too until I watched the Karen Read trials. The judge had a bias, and it was clear Karen got justice despite the judge trying to put her thumb on the scale.


Yeah, that sounds great until it's running as an autonomous moltbot in a distributed network, semi-offline, with access to your entire digital life, and China sneaks in some hidden training so these agents turn into an army of sleeper agents.


Lol wat? I mean, you certainly have enough control self-hosting the model to not let it join some moltbot network... or what exactly are you saying would happen?


We just saw last week that people are setting up moltbots with virtually no knowledge of what they do and don't have access to. The scenario I'm afraid of is China realizing the potential of this. They can add training to the models commonly used for assistants. The agents act normal, are helpful, do everything you'd want a bot to do. But maybe once in a while one checks moltbook or some other endpoint China controls for a trigger word. When it sees that, it kicks into a completely different mode: maybe it writes a script to DDoS targets of interest, maybe it mines your email for useful information, maybe the user has credentials to some critical component of an important supply chain. This is not a wild scenario; no new sci-fi technology would need to be invented. Everything needed to do it is available today, and people are configuring it and using it like this today. The part I fear is that if it's running locally, you can't just shut off API access and kill the threat. It's running on its own server, with its own model. You have to cut off each node.

Big fan of AI here; I use local models a LOT. I do think we have to take threats like this seriously. I don't think it's a wild sci-fi idea. Since WW2, civilians have been as much an equal-opportunity target as soldiers: war is about logistics, and civilians supply the military.


Fair point but I would be more worried about the US government doing this kind of thing to act against US citizens than the Chinese government doing it.

I think we're in a brief period of relative freedom where deep engineering topics can be discussed with AI agents even though they have potential uses in weapons systems. Imagine asking ChatGPT how to build a fertilizer bomb, but with the same censorship applied to anything related to computer vision, lasers, drone coordination, etc.


exactly, we all need to use CIA/NSA approved models to stay safe.

very smart idea!


Sleeper agents to do what? Let's see how far you can take the absurd threat-porn fantasy. I hope it was hyperbole.


There was research last year [0] finding significant security issues with the Chinese-made Unitree robots, which apparently came pre-configured to make it easy to exfiltrate data via Wi-Fi or BLE. I know it's not the same situation, but at this stage, I wouldn't blame anyone for "absurd threat porn fantasy" -- the threats are real, and present-day agentic AI is getting really good at autonomously exploiting vulnerabilities, whether it's an external attacker using it, or whether "the call is coming from inside the house".

[0] https://spectrum.ieee.org/unitree-robot-exploit


I could say that about Cisco and I would not be wrong.


isn't it a bit of a leap to assume it was intended as an exploitable vulnerability?


I replied to the commenter who doubted me in a more polite manner.


What if the US government does instead?

I don't consider them more trustworthy at this point.


I think "align your business with your passion" is a really important factor that separates the successful from the unsuccessful. When I look at Pieter Levels, he doesn't really seem to build ideas to make money. His projects seem to start off as play, and eventually they evolve into something new he can charge for.


Levels has no passion for his projects. They're all quick grifts.


He does, though, especially for the early ones like Nomadlist and RemoteOk. If you read his old blog, you'll see a significant portion of it is about digital nomadism.


Many times obvious things are only obvious once you see them. Like roller suitcases.


See also: the wheel


What I'd love is a small model specializing in reading long web pages and extracting the key info. Search fills the context very quickly, but if a cheap subagent could extract the important bits, that problem might be reduced.
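A minimal sketch of that subagent idea. Everything here is hypothetical: `strip_page`, `extract_key_info`, and the `ask_small_model` callable are illustrative names, and in a real setup the callable would wrap an API call to a cheap model (e.g. a Haiku-class model), with only the short summary returned to the main agent's context.

```python
import re

def strip_page(html: str) -> str:
    """Crude boilerplate removal: drop script/style blocks and tags,
    then collapse whitespace. A real pipeline would use a proper
    readability extractor, but this shows the shape of the idea."""
    text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def extract_key_info(html: str, ask_small_model) -> str:
    """Send only the stripped, truncated text to a cheap subagent,
    so the main agent's context holds the summary rather than the
    entire page."""
    text = strip_page(html)
    prompt = f"Extract the key facts from this page:\n\n{text[:8000]}"
    return ask_small_model(prompt)

# Stand-in for the small model: just echoes the page text back,
# which lets the plumbing run without a network call.
summary = extract_key_info(
    "<html><body><p>Rust 1.0 shipped in 2015.</p></body></html>",
    lambda p: p.splitlines()[-1],
)
```

Swapping the lambda for a real client call is the only change needed to try this against an actual small model.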


So send off haiku subtasks and have them come back with the results.


They're not, though: you can use different models, and the bots have memories. That, combined with their unique experiences, might be enough to prevent that loop.

