Hacker News: Topfi's comments

Honest question: did we ever get an answer as to what caused the sudden change of heart from the original TrueCrypt developer?

Even if one doesn't want to maintain the project for purely private reasons, recommending BitLocker as the drop-in replacement always made the whole thing smell fishy to me.


It's more or less commonly accepted that its creator got jailed for being an arms dealer.

https://en.wikipedia.org/wiki/Paul_Le_Roux


I knew of the speculation about him being involved in some capacity, but as the wiki page states, this was never confirmed in any substantial way.

More importantly, if development had ceased with no public comment, that would be one thing and might strengthen the "he got arrested" theory. However, there was some final communication, specific recommendations to rely on BitLocker of all things, a new version of TrueCrypt released solely for decrypting existing disks, and then the web page was removed, including a flag set in robots.txt to ensure it wouldn't appear on archive.org. All this concurrent with a crowd-funded source code audit that, in the end, did not find any severe issues or backdoors (I recall some speculation back in the day that either known code quality issues or an intentional backdoor could have caused the exodus).

That all makes it hard to link this to an arrest of the main developer. I dislike speculation without any hard evidence, though, so unless new information surfaces, I'll keep this filed under "there is no answer".


I always believed that rather than publicly stating that he was about to be arrested or worse, which might have alerted regular, non-tech-savvy people, he sent a hidden message in the arguably horrendous recommendation to replace his tool with BitLocker.

I think he was trying to scream “Run!” without actually screaming “run”.


Wasn’t there also something about 7.1a, and the canary being gone after that version?

> He subsequently admitted to arranging or participating in seven murders, carried out as part of an extensive illegal business empire.

Yikes


Makes you wonder what kind of leverage/information you have to have to only get 25 years for admitting to being involved in 7 murders.

According to Wikipedia, the DEA gave him immunity on additional charges in return for pleading guilty and running a sting against his associates, but before the DEA knew about the murders.

My theory is that Le Roux was just financing the (two?) TrueCrypt developers.

One of the greatest men of our times.

I would also like to know why it is excluded from Archive.org

https://web.archive.org/web/20260000000000*/https://www.true...


Archive.org can do this themselves for whatever reason (on request, on their own initiative, etc.), or, I believe, it can be triggered by the current owner of the domain modifying robots.txt.
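For illustration: the Wayback Machine historically honoured robots.txt, and its crawler identified itself as ia_archiver. A hypothetical exclusion of that sort can be sketched with Python's standard robotparser (the domain and paths here are made up):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt of the kind that, back then, would have made
# the Wayback Machine drop a site from its index: it singles out the
# archive's crawler ("ia_archiver") and disallows every path.
robots_txt = """\
User-agent: ia_archiver
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The archive crawler is blocked from every path...
print(parser.can_fetch("ia_archiver", "https://example.com/downloads/"))  # False
# ...while crawlers the file doesn't mention remain unaffected.
print(parser.can_fetch("Googlebot", "https://example.com/downloads/"))    # True
```

Note that the Internet Archive has since moved away from strictly honouring robots.txt, so a rule like this alone no longer guarantees exclusion.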

They likely chose to shut down rather than bend over, same as Lavabit a year prior. I find that more plausible than the other theories.

I went on a Wikipedia dive and discovered this funny bit regarding the court process surrounding Lavabit and the FBI's demand for the TLS private keys.

> The contempt of court was caused by Levison providing the keys printed in a tiny (4 point) font, which was deemed "largely illegible" by an FBI motion, which went on to complain that "To make use of these keys, the FBI would have to manually input all 2560 characters, and one incorrect keystroke in this laborious process would render the FBI collection system incapable of collecting decrypted data."

(And to be clear, that's all they ever saw of said keys)


> The court ordered Levison to be fined $5,000 a day beginning 6 August until he handed over electronic copies of the keys. Two days later Levison handed over the keys hours after he shuttered Lavabit.

I remember that. That was around the time they were using National Security Letters to make things happen that were clearly illegal. Now look at where we are. They use national-security reasoning for anything.

That's just stupid. Take 10 people, have each enter the data independently, compare their versions and select the most common value for each character. At 1 second per character they would finish in an hour, coffee break included. They just didn't want to bother.
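The majority-vote idea above is simple enough to sketch. Assuming all transcriptions come back the same length, a per-position vote corrects any character that fewer than half of the typists got wrong (the key strings below are invented for illustration):

```python
from collections import Counter

def majority_vote(transcriptions):
    """Combine independent transcriptions of the same string by taking,
    at each position, the character most entrants agreed on."""
    return "".join(
        Counter(chars).most_common(1)[0][0]
        for chars in zip(*transcriptions)
    )

# Three typists each make one different mistake; the vote recovers the key.
attempts = [
    "4A7F-C2E9-91BD",   # correct
    "4A7F-C2E9-91BO",   # 'D' misread as 'O'
    "4A7E-C2E9-91BD",   # 'F' misread as 'E'
]
print(majority_vote(attempts))  # 4A7F-C2E9-91BD
```

With 10 independent typists, a position is only lost if a majority make the same mistake at the same spot, which is vanishingly unlikely for random typos.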

Fair assumption, but unlike Lavabit, TrueCrypt never held customer/user data. The NSL/forced-shutdown theories also make little sense to me, however: the fork was up by the end of the week, and that was easy to foresee. That's kinda why this fascinates me so much; no theory I've ever read survives basic scrutiny. Perhaps some things we'll never know.

https://en.wikipedia.org/wiki/Nils_Torvalds#Linux_kernel_sta...

>When my oldest son [Linus Torvalds] was asked the same question: "Has he been approached by the NSA about backdoors?" he said "No", but at the same time he nodded. Then he was sort of in the legal free. He had given the right answer, [but] everybody understood that the NSA had approached him.

So the assumption here is that the TC developers were also asked to accept "contributions" from bioluminescent individuals, and chose not to. "Just use BitLocker" was a deafeningly loud dogwhistle, don't you think?


Agreed, that whole thing was suspicious. I still use TrueCrypt, because of the suspicious nature of how it all went down.

At least here in Austria, I honestly rarely, if ever, see them do that. It's either roads or dedicated/mixed designated cycle paths. We do have enforcement even against cyclists, though more than anything that catches the "unlocked" e-bikes, because cycling on the sidewalk is not something anyone does.

Even with bikes off the sidewalk, there is a need for a quick way of getting other pedestrians' attention.


It is amazing that they openly shared their findings [0], but one thing I am missing is what this design would cost in mass production. To this complete layman, it reads like the design is clever and, while more expensive by virtue of its materials/size alone, not impractical; maybe someone more informed about this type of manufacturing can correct my ignorance. If that's the case, hopefully we'll see these designs on the market soon, as even with music+ANC I have found certain sounds able to penetrate easily, though that is purely subjective and I don't have my music ear-bleedingly loud...

[0] https://cdn.skoda-storyboard.com/2026/04/Skoda-DuoBell-Resea...


I see frame drops when opening the start menu on a clean Windows 11 install on my work laptop (Intel quad-core with 32GB memory from two years ago). I have seen the same on 3D V-Cache Ryzen systems from people who claimed there was no lag or sluggishness. It was there, and they saw it once it was pointed out; the standards for Windows users have simply gotten so low that they’ve gotten used to the status quo.

On macOS, meanwhile, Finder refuses to reflect any major changes done via CLI operations without a full Finder restart, and search indexing is currently broken, after prior versions of Ventura were stable functionality-wise. I am, however, firm that Liquid Glass is a misstep, made more by the Figma crowd than actual UX experts. It is meant to look good in static screenshots rather than offer any UX advantage or purpose compared to, e.g., skeuomorphism.

If I may be a bit snarky, I’d advise anyone who does not see the window corner inconsistencies on current macOS or the appalling lag on Windows 11 to seek out an ophthalmologist right away…

KDE and Gnome are the only projects that are still purely UX focused, though preferences can make one far more appealing than the other.


I know there is a lot of valid criticism of GitHub's poor performance when scrolling, but in this case I think we can let them off the hook.

I'll just leave this here: https://developers.google.com/search/help/report-quality-iss...


Title was editorialised from "BACK FROM THE GRAVE" to "Unreleased LG Rollable Smartphone Teardown", as the original title does not convey what the video is about.

Having owned a Fold in the past, I remain firm that, if flexible displays become sufficiently resilient, this rollable design will win in the long term. It requires only one display, can be deployed far more seamlessly than opening via a hinge, and allows for numerous display sizes between maximum and minimum deployment, rather than a fixed number.

I had an LG Wing and a V60 and used them beyond the death of LG's smartphone division. Software/the skin was never their strong suit, but they did deviate from the pack in very interesting ways. Sadly, that likely contributed to their ultimate demise, though I maintain they could have staved off their fate had they heavily advertised being the only "normal looking" smartphone at the time that followed MIL-STD-810G. That they barely advertised a differentiator which could have caught the attention of a large consumer base in the outdoors, hunting, etc. spaces is a missed opportunity in my eyes.


After these findings, any rational person would take a step back and consider whether they are actually using these models properly.

Even if you believe that LLM code output nowadays is both 100% correct and always as performant as possible (it isn't), having the lowest LOC is still the ideal, because the simplest functional implementation will always remain the best, all else being equal. Even more so considering this is a bloody Rails blog, not a highly complex project with no existing reference point.

But Garry Tan, he isn't most people.

Instead, he doubled down, called a teenager names for doing some frankly fair, polite and professional analysis of a poor codebase, and did anything but reflect that maybe, just maybe, he might be wrong.

Mind you, this would be childish and stupid even if he had coded these offences himself. At least with handcrafted poor code, there is a sunk-cost element to it. But here there is not. His emotional involvement in this code should be zero, just like the actual effort expended.

We are talking about code he has likely never even skimmed. Code that is unusably unoptimised. Code for a simple blog that contains deficiencies such as uncompressed PNGs, broken accessibility, etc., which any decent hobbyist or old-school automated tooling would catch pretty quickly without "AI" magic. One run of, e.g., Lighthouse shows that this is unusably poor, though for that one must focus on something other than "look, I am spending thousands to get ever more unaudited output".

LLMs for coding, even agentic processes with limited intervention, are incredibly powerful and valuable. But even with me auditing every line of code I receive from a model, I have little to no emotional investment in said code and feel no issue throwing it out completely if I find any issue with it, far more so than before.

Despite all of that, rather than saying, "Yeah, this is poor, let's just get rid of it, thanks for pointing that out, egg on my face, let me just vibe code a better replacement now that I know what to look for", he became emotional and enraged, for code he never wrote.

As someone who does evals myself, gstack overall looks very odd. I view it as built by someone who struggles to see these models through any lens beyond quantity=productivity, which is the exact opposite of my goals. I will always tend towards fewer output tokens of much higher quality. Faster, less expensive, easier to audit; what's not to prefer?

In any case, if gstack makes LLMs struggle to create a maintainable blog (something these models, with all their flaws, most certainly can do), that should give major pause, as maybe this is barking up the wrong tree. Stopping gstack for a while and seeing that a solution in the hundreds of LOC is just as achievable (and likely better overall) might do a world of good.

Godspeed Garry, may we soon finish the DSM-VI with some new entries focused on the harm these LLMs can cause in certain people, so they may get the help they so desperately need. Alternatively, there is always starting his own FS and trying to get that into the Linux kernel...


It's not like the current CEO of OpenAI ran arguably the most desirable VC until a few years ago...

If the vast majority of CEOs in this industry are to be believed, any company that achieves "AGI" will be undefeatable, their model improvements and research findings impossible to catch up to. Why risk that being Anthropic, Moonshot or any other competitor to OpenAI by spending your money on this?

In the few months/years before "everyone dies", wouldn't OpenAI want to be the "anyone" that "builds it" and is in control during that time? Unless, of course, OpenAI does not actually believe that is a possibility, as suspected when they were working on social media...


I admit I'm surprised by the move, from a company that reportedly just talked about how they need to focus more on fewer, more strategic products.

But I also see the potential value. This is an entertaining and highly influential podcast; a lot of top VCs and founders watch it, and it definitely punches well above its audience KPIs in strategic value. I've seen many interviews and op-eds on the platform pretty clearly shape the startup discourse on X.

I also think it should run mostly autonomously; it'll only be as much of a distraction for OpenAI execs as they want it to be.

OpenAI just raised $122 billion (including future commitments), so whatever the purchase price was (we have no idea), it is not going to be even a rounding error on their financial resources or their ability to pay their datacenter bills.


This is some insane delusion.

Focus on building a great product and you win. All this other stuff is noise.


No disrespect to them, but unless there is a financial incentive at stake for them (beyond S&P 500 exposure), I've come to view this through the lens of sports teams, gaming consoles and religions. You pick your side early, guided by hype, and there is no way that choice can have been wrong (just like the Wii U, Dreamcast, etc. were the best).

Their viewpoint on this technology has become part of the identity for some unfortunately and any position that isn't either "AGI imminent" or "This is useless" can cause some major emotions.

Thing is, this finding being the case (along with all other LLM limits) does not mean that these models aren't impactful and shouldn't be scrutinised, nor does it mean they are useless. The truth is likely just a bit more nuanced than a narrow extreme.

Also, mental health impact, job losses for white collar workers, privacy issues, concerns of rights holders on training data collection, all the current day impacts of LLMs are easily brushed aside by someone believing that LLMs are near the "everyone dies" stage, which just so happens to be helpful if one were to run a lab. Same if you believe these are useless and will never get better, any discussion about real-life impacts is seen as trying to slowly get them to accept LLMs as a reality, when to them, they never were and never will be.


I have a friend who is a Microsoft stan who feels this way about LLMs too. He's convinced he'll become the most powerful, creative and productive genius of all time if he just manages to master the LLM workflow just right.

He's retired so I guess there's no harm in letting him try

