
And then in vim you can spawn a shell to run ... oh, never mind.

Free software has never mattered more.

All the infrastructure that runs the whole AI-over-the-internet juggernaut is essentially all open source.

Heck, even Claude Code would be far less useful without grep, diff, git, head, etc., etc., etc. And one can easily see a day when something like a local Claude Code-style tool talking to open-weight and open-source models is the core dev tool.


It's not just that open source code is useful in an age of AI, it's that the AI could only have been made because of the open source code.

> All the infrastructure that runs the whole AI-over-the-internet juggernaut is essentially all open source.

Exactly.

> Heck, even Claude Code would be far less useful without grep, diff, git, head, etc.

It wouldn't even work. It's constantly using those.

I remember reading a Claude Code CLI install doc and the first thing was "we need ripgrep" with zero shame.

These tools also basically all run on top of Linux: on Windows, Claude Code originally required installing a full Linux environment via WSL.

It's all open-source command-line tools on an open-source OS, piping one program into another. I've been on Linux on the desktop (and servers, of course) since the Slackware days... and I was right all along.


The primary selling point of unix and unix-like operating systems has always been composability.

Without the ability to string together the basic utilities into a much greater sum, Unix would have been another blip.
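That composability is easy to demonstrate with a one-liner: a word-frequency counter assembled entirely from standard utilities. (The sample text is inline and purely illustrative, so this runs anywhere.)

```shell
# Split words onto lines, sort, count duplicates, rank by count, take the top 3.
printf 'the quick fox\nthe lazy dog\nthe fox\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -3
```

None of these tools knows anything about word frequency; the pipeline is the program.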


> Free software has never mattered more.

But the libre part of Free Software has never mattered less, or at least so TFA argues, and while I could quibble with the point, it's not wrong.


Wow, some corps get to offload some of their costs onto "the community" (unpaid labor), while end users are as disenfranchised as ever! How validating!

Why isn't LLM training itself open sourced? With all the compute in the world, something like Folding@home here would be killer.

Data bandwidth limits distributed training under current architectures. Really interesting implications if we can make progress on that.

Limits but doesn't prohibit. See https://www.primeintellect.ai/blog/intellect-3 - still useful and can scale enormously. Takes a particular shape and relies heavily on RL, but still big.

What bandwidth limits? I'm assuming the forward and backward passes have to be done sequentially?

Yes, and also passing data within each layer.
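A back-of-envelope calculation shows why gradient sync alone is the bottleneck over consumer links. The numbers below are illustrative assumptions (a 7B-parameter model, fp16 gradients, a 1 Gbit/s uplink, and naive data-parallel sync that ships the full gradient each step), not measurements:

```shell
# Estimate per-step gradient payload and sync time for naive data-parallel training.
awk 'BEGIN {
  params    = 7e9   # assumed model size: 7B parameters
  bytes     = 2     # fp16 gradient: 2 bytes per parameter
  grad_gb   = params * bytes / 1e9
  link_gbps = 1     # assumed home uplink: 1 Gbit/s
  secs      = grad_gb * 8 / link_gbps
  printf "gradient payload: %.0f GB, sync time at %d Gbit/s: %.0f s/step\n",
         grad_gb, link_gbps, secs
}'
```

Roughly 14 GB per step, i.e. on the order of minutes per optimizer step just for communication, versus microseconds-to-milliseconds over datacenter interconnects. Gradient compression and less chatty schemes (as in the DiLoCo-style approaches the sibling comment alludes to) attack exactly this gap.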

It is in some cases. NVIDIA's models are open source, in the truest sense that you can download the training set and training scripts and make your own.

It's either illegal or extremely expensive to source quality training material.

Yeah, turns out that training a model without scraping and overloading the whole Internet, while ignoring all the licenses and basic decency, is actually hard & expensive!

Well it is, it's in the name "OpenAI". /S

The power generated from Niagara river stations was traveling on an international "grid" between Canada and the US in the late 1890s.

Man, how could they not wait 2.5 weeks until April 1!!!


Emacs will solve this too:

https://github.com/tanrax/org-social

:-)


This may help, it has an example pizauth config (scroll down to "Authenticating with pass and pizauth"):

https://stuff.sigvaldason.com/email.html


This is another similar resource, with some additional material about using mu4e-org:

https://stuff.sigvaldason.com/email.html


The "watch" method is so awesome:

    mpv https://live0.emacsconf.org/gen.webm


I used Emacs for several years before I discovered "project" (it's built in). If you're navigating dired trees or similar to find files or grep for strings in groups of files, this is like magic:

C-x p f (find any file in the current "project", e.g. git repo)

C-x p g (grep the whole project super fast)

It's embarrassing how long it took me to realize it was there all along. :-)


I consistently use `m` to mark relevant files/directories in dired mode and then `A` to run a regex search across the marked files. It doesn't seem like I'm missing anything by not relying on such a project approach.


Thank you, that is greatly appreciated.

