Hacker News | just_once's comments

Amazing that this article and an actual comment from Ronan Farrow is this far down the list while "Scientists Figured Out How Eels Reproduce" (2022) has 6 times the points.

This thread set off a software penalty called the flamewar detector.* I turned that off as soon as I saw it.

(* This was predictable from the title, because the question in it was inevitably going to trigger an avalanche of crap replies. Normally we'd change the title to something less baity, and indeed the article is so substantive that it deserves a considerably better one. But I'm not going to change it in this case, since the story has connections to YC - about that see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....)


I'm not sure what the flex is here.

Is the idea that prototypes give the Permission Granter more fidelity into a proposal and therefore can make better decisions? Whereas before, with Slide Decks, the Permission Granter couldn't experience certain things and therefore couldn't make as good decisions to grant permissions?

So in effect this remains a billionaire figure speaking from their own perspective and we're supposed to care?


It's easier for the 6-layers-removed CEO to convince themselves that "it's pretty much done" when seeing a shaky prototype vs. a PowerPoint.

Nvidia, ByteDance, Tencent and OpenAI?! Wow!

Good, hearty group right there. But how about Palantir, NSO Group, Flock and Axon? Aren't they lending a hand too?

Always good to name-drop a near-universally hated group.

Which one? NVIDIA? OpenAI? Bytedance?


It wasn't even that long ago that Trump fired the BLS Commissioner and nominated someone who would "restore GREATNESS" to the BLS.

Putting aside the slop facade placed atop the data... why would we trust the data?


This is turning into just another reality show. There are no adults anymore.


Proud of myself for recognizing this was a bot without having to inspect further than this comment!


What does that look like? Can you describe your worst case scenario?


Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI generated if they don't like it. Those same officials claiming that it is AI generated regardless of the facts on the ground to have it removed and you arrested.


If you assume the use of law will be that capricious in general, then any law at all would be considered too dangerous for fear of use as a partisan tool.

Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.


Worst case? Armed officers entering your home without warrant, taking away your GPU card?


They can do that anyway. What does that have to do with the content of the proposed law?


Most of the examples that you've used gain very little from added specificity. It's essentially linguistic laziness, and that laziness is not equally consequential in every context.


Would you feel better if people said "people of African descent are much more likely to have the genetic disease sickle cell anemia"?


Name names, George. It's the only way.

