Hacker News | trashb's comments

> The old model assumed one person, one branch, one terminal, one linear flow. Not only has the problem not been solved well for that old model, it’s now only been compounded with our new AI tools.

A bit of a strange thing to say in my book. Git isn't SVN, and I think these problems are already solved with git. I agree the interface is not always intuitive, but Git's underlying infrastructure is very much designed to support alternatives to "one person, one branch, one terminal, one linear flow".

> the problem that Git has solved for the last 20 years is overdue for a redesign.

To me it's not clear what the problem is that would require a redesign.


The problem is how to make money from something that is more or less solved.

BitKeeper tried to do that. Git was built because the commercial license of BitKeeper became unworkable for the Linux kernel community.

"Those who cannot remember the past are condemned to repeat it".


>Git was built because the commercial license of BitKeeper became unworkable for the Linux kernel community.

BitKeeper was free to Linux kernel developers with a "but no reverse engineering" clause, but Tridgell went exploring of his own volition because he wanted to and kinda sorta violated it, so BitKeeper cancelled the license.

I'm not taking sides or upset about any part of this; I just wouldn't call that "becoming unworkable for the Linux kernel community". That would be like saying "the fence around your yard became unworkable for my desire to trespass on your property, so I climbed over it".

What Tridgell discovered was pretty dumb and could be considered a distinct lack of a fence: he connected to a socket, typed "help", and it dutifully printed out a bunch of undocumented useful commands.


Yep, something that is sadly becoming more and more common: people with solutions spending insane money trying to convince others that a problem exists.

It's basically the entire context of this website.

I'm gonna go out on a limb and say these guys would never have raised if they didn't have "GitHub co-founder" on the first slide of the pitch deck

Ok, that explains everything. Who you know in the Valley is everything. Literally.

The beauty of it all is one doesn’t even have to invent a solution… they only have to invent a “problem” to be pitched for VC funding.

Have you heard of startups?

As a spoon designer, I have had some difficulty finding work lately.

It’s not solved because it’s trash. There’s no good interface for it and people find it difficult to use.

Skill issue. It's the most popular VCS in the world by a huge margin, millions of devs use it every day just fine, countless forges have been built around it, and there's only one semi-compelling alternative frontend (jj). If you honestly find Git challenging, how are you coping with software engineering? Git is the easy part.

Millions of devs use it in the most rudimentary way: they occasionally lose their stash, rm their local repo and start over, ask the office expert for help every time they need to figure out where-the-foxtrot that commit came from, and don't even attempt to use reflog, bisect, or interactive staging.
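For what it's worth, bisect is much less scary than its reputation. A minimal self-contained session (the repo, tag name, and test script below are all made up for the demo):

```shell
set -e
# Build a throwaway repo with a known-bad commit to bisect against
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
echo ok > file && git add file && git commit -qm "good: base"
git tag v1.0                        # last known-good point
echo broken > bug && git add bug && git commit -qm "bad: introduces the bug"
echo more > file && git add file && git commit -qm "later work"

# The test command: exits 0 when the bug file is absent (good commit)
printf '#!/bin/sh\n! test -f bug\n' > run-tests.sh && chmod +x run-tests.sh

# Binary-search history for the first bad commit
git bisect start
git bisect bad HEAD                 # current commit is known broken
git bisect good v1.0                # this tag is known to work
git bisect run ./run-tests.sh       # git drives the search automatically
first_bad=$(git rev-parse refs/bisect/bad)   # the commit bisect blamed
git bisect reset                    # back to where we started
```

With a scriptable test command, `git bisect run` does the whole search without any manual good/bad judgements.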

Sure, but resolving conflicts is still hard in git. That could be simplified.

> To me it's not clear what the problem is that would require a redesign.

The interface is still bad. Teaching people to use git is still obnoxious because it's arcane. It's like 1e AD&D. It does everything it might need to, but it feels like every aspect of it is bespoke.

It's also relatively difficult to make certain corrections. Did you ever accidentally commit something that contains a secret that can't be in the repository? Well, you might want to throw that entire repository away and restore it from a backup taken before the offending commit, because it's so difficult to fix while guaranteeing the secret isn't hiding in there somewhere and also not breaking something else.

It's also taken over 10 years to address the SHA-1 limitation, and the work still isn't complete. It's a little astonishing that Git was written so firmly around SHA-1 never being a problem that it has taken this long just to allow a different hash algorithm within the same basic design.
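For reference, recent Git versions can already create SHA-256 repositories; the incomplete part is interoperability, since most forges and SHA-1 remotes can't talk to them yet:

```shell
# Create a repository whose objects are hashed with SHA-256
# (supported since Git 2.29, still considered experimental for interop)
repo=$(mktemp -d)
git init -q --object-format=sha256 "$repo/demo"
cd "$repo/demo"
git rev-parse --show-object-format   # prints: sha256
```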


> Well, you might want to throw that entire repository away and restore it from a backup taken before the offending commit, because it's so difficult to fix while guaranteeing the secret isn't hiding in there somewhere and also not breaking something else.

I'm not a git expert but I can't imagine that's true.


It’s not; you just need to force push or generate a new key…

Perhaps proving the point here. That's not enough to eliminate the secret; the dangling commit will persist. Though this might be a nitpick, since it's rather hard to get it from the remote without knowing the SHA.

> generate a new key

Is absolutely the right answer. If you pushed a key, you should treat it as already compromised and rotate it.


You also need to clear the caches of the remote

Yeah, it doesn't seem hard to rewrite the commit history.

The interface can be independent of the implementation. Under the hood git does everything you need. If learning to use it at a low level isn't appealing, you can put a more ergonomic interface on top.

I'm a huge fan of lazygit

> Under the hood git does everything you need

No it doesn't. Git is buggy. It also doesn't work for anything that's not a text file. It is unbelievably slow.


> Did you ever accidentally commit something that contains a secret that can't be in the repository?

What do I need to do, on top of a git force push and some well-documented remote reflog/gc cleanup, that I can't find with a single search/LLM request? Are we at the point where we don't have enough developers who can do this without feeling it's a burden? Or where this level of basic logic isn't needed to implement anything production-ready?


> What do I need to do, on top of a git force push and some well-documented remote reflog/gc cleanup, that I can't find with a single search/LLM request?

This is a self-defeating argument. You're essentially saying we shouldn't improve something because it can be done with a handful of commands (that you'd already have to know, btw) plus prompting an LLM.

> Are we at the point where we don't have enough developers who can do this without feeling it's a burden?

That's a no-true-Scotsman.

> Or where this level of basic logic isn't needed to implement anything production-ready?

Not sure how this fits in with the rest honestly.

It was never about whether it was possible. It was about how it's being done. Juniors (and even seniors) accidentally check in secrets. Arguing that there shouldn't be a simpler way to remove an enormous security flaw feels a bit disingenuous.


No, I’m saying that you can do this without replacing git; you can even make it simpler without replacing git. Aka you just made a strawman, if you are really into naming these. Also, you answered me in an authoritative way when, even by your own admission, you don’t understand my comment. You can figure out a logical fallacy name for that too. And of course a nice fallacy fallacy.

Btw, I’m also saying that those who cannot figure out how this can be solved right now with git shouldn’t be allowed anywhere near a repo with write permission, no matter whether you use git or not. At least until now, this level of minimal logical skill was a requirement for being able to code. And regardless of the tool, the flow will be exactly the same: ask a search engine or ML model, then run the commands. The flow has been like this for decades at this point, so those minimal logical skills will be needed anyway.

The main problem is when they don’t even know that they shouldn’t push secrets. You won’t be able to help that with any tooling, at least not at the git level.


  git rebase -i <one commit before your mistake>
  git push origin mainline -f

  git log --all --reflog -- path/to/secret-file
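Spelled out, the usual purge recipe looks like this. It uses git's built-in filter-branch (deprecated in favor of the third-party git-filter-repo, which is much faster) and assumes the secret lives in a file called secret.txt; the throwaway repo is just for the demo:

```shell
set -e
# Throwaway repo with an accidentally committed secret
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "hunter2" > secret.txt && git add secret.txt && git commit -qm "oops: adds a secret"
echo "real work" > main.c && git add main.c && git commit -qm "feature"

# Rewrite every ref so secret.txt never existed
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --force --prune-empty \
  --index-filter 'git rm --cached --ignore-unmatch secret.txt' -- --all

# Remove filter-branch's backup refs, then expire reflogs and gc
git for-each-ref --format='%(refname)' refs/original/ | \
  xargs -n 1 git update-ref -d
git reflog expire --expire=now --all
git gc --prune=now --aggressive

# On a shared repo you would now force-push and rotate the secret anyway:
# git push --force --all origin
```

None of this helps with copies the remote or other clones may still hold, which is why rotating the secret is the non-negotiable step.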

More power to them for re-visiting this, but I agree with you:

> The old model assumed one person, one branch, one terminal, one linear flow.

That sounds exactly like the pre-git model that git solved.


I've always wanted a kind of broader, more integrated approach that isn't just about text diffs: the ability to link in substantial comments that are displayed optionally so they don't piss off linear readers; links to design and reference documents; bugs and PRs that are persistent and linked to the versioned code instead of being ephemeral.

think about all of the discussion around the code that gets lost. we certainly have the ability to keep and link all that stuff now. we wouldn't really need arguments about squashing or not; we could just keep the fine-grained commits for anyone who really wants to dig into them, and maybe ask that people write a comprehensive summary of the changes in a patch set in addition.

but I guess none of that has anything to do with AI


Ever since the scraping lawsuits [0] I realized LinkedIn has taken the "I take all your data, you take none of mine" idea to another level.

Also the site doesn't even work well and is one of the main examples of "dark patterns" on the web [1].

Literally one of the worst companies and websites out there. Stallman has a summary of the additional reasons [2].

[0] https://www.eff.org/deeplinks/2017/12/eff-court-accessing-pu...

[1] https://medium.com/@danrschlosser/linkedin-dark-patterns-3ae...

[2] https://www.stallman.org/linkedin.html


> macOS machine (which is BSD-like enough down below)

That's like saying an Ubuntu .deb will work on Gentoo because it's all Linux anyway. It's not that simple. There are dependencies, and there are differences in the packages, package managers, and surrounding system for a reason; it's not 1:1. Perhaps the naming scheme happened to line up for the packages you were using, but this should be checked, not assumed.

It would be nice if there were some sort of translator that could handle the most common cases; I think it would improve the usability of Jails. Perhaps that would require someone to maintain a mapping of package names between operating systems.

Something like "apt install python3-serial" -> "pkg install py311-pyserial" may suffice.

If you would use something like that yourself, implement a prototype, publish it, and perhaps someone else will build upon what you started!
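A toy sketch of such a translator (the name mappings here are illustrative examples, not verified against the real Debian and FreeBSD repositories):

```shell
# Toy translator: map a Debian package name to a FreeBSD pkg name
# via a hand-maintained table. Real mappings would need curation.
translate_pkg() {
  case "$1" in
    python3-serial)   echo "py311-pyserial" ;;
    python3-requests) echo "py311-requests" ;;
    build-essential)  echo "gcc gmake" ;;
    *) echo "no mapping for: $1" >&2; return 1 ;;
  esac
}

translate_pkg python3-serial   # -> py311-pyserial
```

The hard part isn't the lookup, it's keeping the table correct as both package trees evolve.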


> It's not that simple.

It would tremendously benefit almost everyone if it were.

> There is dependencies and there are differences in the packages, package managers and surrounding system for a reason.

Yeah, the NIH syndrome. And sometimes, of course, there are decent technical reasons as well.


This is called https://brew.sh

Maybe it's just me, but what happened to "don't send your government ID to anyone"? I am from the EU and this is what was drilled into me. It just seems very strange to all of a sudden send all this information to any company you require a service from.

Also, the person is not the company, so why is Google making the developer identify themselves while many apps are released under a company? My understanding is that Google has been mishandling this for a while, but with verification linked to a government ID that seems like a whole new can of worms.

A few scenarios to consider:

- The developer is fired/resigns and the company does not want to be associated with the developer, for example if the developer is convicted for something.

- The developer is fired/resigns and the developer does not want to be associated with the company, developer found out about certain practices of the company they don't condone.

- The developer and the company part in good faith, however one of them is being exploited/pressured by a third party to abuse the relationship to the app.

- The developer or the company is on legal hold due to legal issues, arrests, malpractice etc.

- The developer passes away or the company ceases to exist.

- How does this work if you are making an app as a developer for hire, e.g. when entering into a contract with a publisher? Who will verify, and how will that work (especially for small-scale apps)?


> No, that's "ancient regex".

wouldn't the "ancient regex" be the ed "g/re/p" version?

  -E, --extended-regexp
    Interpret PATTERNS as extended regular expressions (EREs, see below).
  -G, --basic-regexp
    Interpret PATTERNS as basic regular expressions (BREs, see below). This is the default.
  -P, --perl-regexp
    Interpret PATTERNS as Perl-compatible regular expressions (PCREs). This option is experimental when combined with the -z (--null-data) option, and grep -P may warn of unimplemented features.

From the manpage it seems my grep makes a distinction between "extended", "basic", and "Perl" regexes.
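The practical difference is easy to demonstrate: in BRE mode `+` is a literal character unless escaped, while in ERE mode it's a quantifier:

```shell
# BRE (the default): + is literal unless backslash-escaped
echo 'aaa' | grep -c 'a\+'     # \+ means "one or more a", prints 1
echo 'a+b' | grep -c 'a+'      # bare + matches the literal "+", prints 1

# ERE (-E): + is a quantifier without escaping
echo 'aaa' | grep -cE 'a+'     # prints 1
```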


>wouldn't the "ancient regex" be the ed "g/re/p" version?

That's "prehistoric regex"


Funny idea and it may make sense in some special scenarios.

However I would like to point out that you are limited to path and filename length.

Maximum file path length in Windows is 260 characters. (32767 characters with longpath enabled). Individual filenames max out at 255 characters.

Maximum file path length in Linux/Unix generally is 4096 characters. On ext4 it seems max filename length is 255 bytes.

Additionally, you will be constrained by the characters allowed in filenames. Therefore it would be awkward to pass a filepath to a program like this.
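Rather than hardcoding those numbers, a program can query the limits that apply to a given filesystem at runtime:

```shell
# Ask the OS for the limits that apply to a particular mount point
getconf NAME_MAX /    # max filename length here, commonly 255
getconf PATH_MAX /    # max path length here, commonly 4096 on Linux
```

The values can differ per filesystem, which is exactly why they are queried per path.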


That's what directories of files are for. The file system as a cognitive twitter.


I may be interpreting this wrong, but the 9p protocol describes transfer protocol operations, not data structures.

9p defines filesystem operations: attach, walk, open, create, read, write, clunk, remove, stat. And some additional handling operations: version, auth, error.

This project replaces those with RESTful (CRUD?) operations. But this repository also seems to define what 9p does not: the structure of the data. It defines which files to write to and what to write into them. That seems outside the scope of 9p, since you are defining the service behind the transfer protocol.

A RESTful API attached to a 9p backend does seem useful, since support for RESTful APIs is so widespread. But it's unclear to me how this monolithic approach is better than a "RESTful to 9p" proxy in front of a 9p service.


Modern (suburban) SUVs spectacularly suck at most tasks; you have been falsely advertised to.

A 2010 Toyota Corolla is most likely a better offroader, and a 1.8T VW Passat is a better tow vehicle.

If not for the tax benefits these SUVs enjoy, they would be useless.


I think in this case the point being made is "bad software makes the whole product bad", not just "bad software is bad".

It's similar to how bad brakes or a leak-prone roof makes the whole car bad. The "weakest link" undermines the whole system.

> software isn't the core competency

Software is an essential part of modern cars; remove the software and they don't function (or in some cases are not allowed on the road). A car manufacturer's "core competency" is making cars, so I would argue that software is definitely a core competency of a modern car manufacturer.


I agree it ruins the whole product.

I also agree traditional car manufacturers should have software as a core competency, but instead they're notoriously terrible at writing software.


Because you can choose to leave your phone at home and travel everywhere by car if you don't want to be tracked, but you can't leave your car at home and still travel everywhere.

It is true that we don't need cars sending telemetry to track us, since there is a conveniently placed identification number on the front and rear of the car, the number plate (used by the government), but that is broadcast physically, which limits its reach.

So why should the manufacturer of my car have access to (and the right to sell) a lot of my personal data, like location, weight, and age, indefinitely, just because I own a product manufactured by them?

It is an unnecessary overreach on very sensitive data and I can't really opt out (if buying a modern car) since all manufacturers are doing it.

Yes, I have also carried a phone everywhere for the last 20 years, but that doesn't make the tracking right (and on phones too, I think we should be tracked less).


I understand and agree in general, but the root issue is in the laws and what companies are permitted to do. Sharing your data with car manufacturers and third parties should, by law, be disabled by default and only enabled with proper informed consent.

