Wasn’t it this kind of thing, the celebrity photo leaks via social engineering or whatever, that finally got them to implement competent (if overly annoying) iCloud MFA?
This article is poorly written. It’s so desperate to be clever and edgy that it’s hard to get the facts out of it.
ChatGPT isn’t really a solution because the source is both low quality and has questionable motives. Going to any of the other good articles on the subject that have been linked in this comment section is much better.
While I’ve seen plenty of silly reports from big-bank analysts, they usually have the advantage of not coming across like complete idiots when saying things like this:
> We assign a preliminary A+ rating to the notes, one notch below Meta’s issuer credit rating,
It’s hard to get away with that when the report is attributed to a company and a person that don’t seem to exist, hosted on some rando’s Substack. Wording like that works much better when it comes from a sender whose address ends in @bigbank.com.
Of course, the latter parts of the post (the Disclaimer and Limitation of Liability) reveal pretty definitively that this is not intended to be a serious report.
As for the content itself? The author tries really hard to turn a whole lot of nothing into something, and horribly misinterprets GAAP in the process.
How does it not have anything to do with the quality of the writing? The writing is supposed to convey facts, but it's so busy pushing narratives and layering on snark that it fails to. Even in this comment section, the people applauding the article don't really understand what's happening, because they soaked up so much of its narrative-pushing.
This is the future of human-written articles: they'll have to be written like this, since 99% of article comments on HN these days are “oh, this is AI written.” :)
It is not the reader's fault if the article is unreadable in the first place.
Not to mention that asking for help explaining a text is extremely common. I can read English, but I have never read a US Supreme Court ruling. There are much better ways for me, as a non-lawyer, to understand those rulings.
Many SCOTUS opinions, especially the major ones, are very readable! The justices and clerks are excellent writers.
The most publicly notable cases (on things like abortion, gerrymandering, gun control, etc.) aren’t so tied up in complex precedent or in laws the average person is unfamiliar with.
Although even some of the more obscure ones (like, for me, issues around Native American sovereignty or maritime law) are quite readable as well.
> I can read English, but I have never read a US Supreme Court ruling. There are much better ways for me, as a non-lawyer, to understand those rulings.
Having admitted to never having read a SCOTUS ruling, how can you then proclaim there are better ways for you to understand them? How could you possibly make that assertion if you've never read one?
> Having admitted to never having read a SCOTUS ruling, how can you then proclaim there are better ways for you to understand them? How could you possibly make that assertion if you've never read one?
A SCOTUS ruling is a primary source, and there's a pretty good universal rule that primary sources can be difficult to properly digest if you don't have the full context around them; for most people, a secondary or tertiary source will be a better vehicle for understanding than the primary source itself. That said, some secondary and tertiary sources do end up being utter garbage (a standard example is the university press release for any scientific paper: the actual merits of the paper are generally mangled to hell).
The difficulty in understanding this piece comes from a lack of knowledge about finance and ratings, not from an inability to read. The blog assumes a large amount of financial knowledge that is not common among the HN audience.
It seems fairly understandable even without financial knowledge?
1. Facebook creates a shell company.
2. The shell company borrows billions of dollars, and builds a data center.
3. Facebook leases the data center.
4. The fact that it is technically just a four-year lease with a single possible tenant can conveniently be ignored, since Facebook assumes essentially all the risk. The shell company can only lose money if Facebook itself goes under, so the lenders can treat the loan as being just as reliable as Facebook itself.
5. Because Facebook technically only has a four-year lease, it can pretend it doesn't actually control the shell company: after all, it can always just decide not to renew the lease. The fact that it assumes essentially all the risk can conveniently be ignored, so Facebook can treat the shell company as a separate entity and doesn't have to treat the debt as its own.
So the lenders are happy because there's no real risk to them, and Facebook is happy because it can pretend a $27B loan doesn't exist. It's a win-win, except for the part where they're lying to their shareholders about not having taken on a $27B loan.
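If it helps, here's the same idea as a toy sketch in Python. To be clear, this is just my framing of the consolidation question rendered as code, not actual GAAP/ASC 810 logic; the names (SPV, tenant_risk_share, tenant_should_consolidate), the 50% threshold, and the numbers are all made up for illustration:

    # Toy model of the consolidation question from the steps above.
    # Illustrative only: not real accounting logic, all values hypothetical.
    from dataclasses import dataclass

    @dataclass
    class SPV:
        name: str
        debt_bn: float            # debt raised by the shell company, in $B
        tenant_risk_share: float  # fraction of the downside the tenant really absorbs

    def tenant_should_consolidate(spv: SPV) -> bool:
        # If the tenant effectively bears the majority of the risk, the debt
        # is economically the tenant's, whatever the lease term says.
        return spv.tenant_risk_share > 0.5

    shell = SPV("datacenter shell co", debt_bn=27.0, tenant_risk_share=0.99)
    print(tenant_should_consolidate(shell))  # True -> the $27B is really Meta's

Real consolidation analysis is obviously far more involved; the point is just how much load-bearing work the short lease term is doing in the "separate entity" story.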
This is a subtweet in blog form. Without concrete examples or critiques, it isn’t any more substantial than whining about “kids these days.”
Edit: I admit there are plenty of concrete critiques in the article, but if we’re supposed to stand up against slop, isn’t naming names the first step?
I'm not a fan of NYT either, but this feels like you're stretching for your conclusion:
> They hired "experts" who used prompt engineering and thousands of repetitions to find highly unusual and specific methods of eliciting text from training data that matched their articles....would have been the end of the situation if NYT was engaging in good faith.
I mean, if I were doing a bunch of investigative work, and my publication was considered a source of truth for a great deal of journalism, and somebody stole my newspaper off the back of a delivery truck every day and started rewriting my articles, and then suddenly nobody read my paper anymore because they could just ask ChatGPT for free, that's a loss for everyone, right?
Even if I disagree with how they editorialize, the Times still does a hell of a lot of journalism, and ChatGPT can't, and will never be able to, actually do journalism.
> they want to insert themselves as middlemen - pure rent seeking, second hander, sleazy lawyer behavior
I'd love to hear exactly what you mean by this.
Between what and what are they trying to insert themselves as middlemen, and why is chatgpt the victim in their attempts to do it?
What does 'rent seeking' mean in this context?
What does 'second hander' mean?
I'm guessing that 'sleazy lawyer' is added as an intensifier, but I'm curious whether it means something more specific than that as well.
> Copyright law....the rest of it
Yeah. IP rights and laws are fucked basically everywhere. I'm not smart enough to think of ways to fix them, though. If you've got some viable ideas, let's go fix it. Until then, the Times kinda needs to work with what we've got. Otherwise, OpenAI is going to keep taking their lunch money, along with every other journalist's on the internet, until there's no lunch money to be had from anyone.
They are still considered a paper of record, but I chose to use a hypothetical outfit because, while I don't love the Times myself, I believe the argument is valid regardless.
I’m not interested in arguing about whether or not they deserve to fail, because that whole discussion is orthogonal to whether OpenAI is in the wrong.
If I’m on my deathbed and somebody tries to smother me, I still hope they face consequences.
This is the part the Times won't talk about, because people stopped reading their paper long before AI, and they haven't been able to point to any credible harm in terms of reduced readership as a result of OpenAI launching. They just think that people might be using ChatGPT to read the New York Times without paying. But it's not a very good hypothesis, because that's not what ChatGPT is good at.
It's like the people filing the lawsuit don't really understand the technology at all.
It'll be the lawyers who need to go through the data, and given the scale of it, they won't be able to do anything more than trawl for the evidence they need and find specific examples to cite. They don't give a shit if you're asking ChatGPT how to put a hit out on your ex, and they're not there to editorialize.
I won't pretend to guess* how they'll perform the discovery, but I highly doubt it will result in humans reading more than a handful of the records in total, beyond the ones surfaced by whatever method they use to automate the process.
If there's top-secret information in there, and it was somehow stumbled upon by one of these lawyers or a paralegal somewhere, I find it vanishingly unlikely they'd be stupid enough to do anything other than run directly to whoever is the rightful possessor of said information, say "hey, we found this in a place it shouldn't be," and then let them deal with it. Which is what we'd want them to do.
*Though if I had to speculate on how they'd do it, I do think the funniest way would be to feed the records back into ChatGPT and ask it to point out all the times they show evidence of infringement.
It sounds like the alternate path you're suggesting is for NYT to stop being wrong and let OpenAI continue being right, which doesn't sound much like a compromise to me.
> prevent the loss of being the middle-man between events and users
I'm confused by this phrase. I may be misreading, but it sounds like you're frustrated, or at least cynical, about the NYT wanting to preserve its business model of writing about things that happen and selling the publication. To me it seems reasonable they'd want to keep doing that, and to protect their content from being stolen.
They certainly aren't the sole publication of written content about current events, so calling them "the middle-man between events and users" feels a bit strange.
If your concern is that they're trying to prevent OpenAI from getting a foot in the door of journalism, that confuses me even more. There are so, so many sources of news: other news agencies, independent journalists, randos spreading word-of-mouth information.
It is impossible for ChatGPT to take over any aspect of being a "middle-man between events and users" because it can't tell you the news. It can only resynthesize journalism that it's stolen from somewhere else, and without stealing from others it would be worse than the least reliable of the above sources. How could it ever be anything else?
This right here feels like a pretty good summary of why the NYT wants OpenAI to keep its gross little paws off their content. If I stole a newspaper off the back of a truck, and then turned around and charged $200 a month for the service of plagiarizing it to my customers, I would not be surprised if the Times's finest lawyers knocked on my door either.
Then again, I may be misinterpreting what you said. I tend to side with people who sue LLM companies for gobbling up all their work and regurgitating it, and I spend zero effort trying to avoid that bias.
> preserve their business model of writing about things that happen and selling the publication. To me it seems reasonable they'd want to keep doing that
Be very wary of companies that look to change the landscape to preserve their business model. They are almost always regressive in trying to prevent the emergence of something useful and new because it challenges their revenue stream. The New York Times should be developing their own AI and should not be ignoring the march of technological progress, but instead they are choosing to lawyer up and use the legal system to try to prevent progress. I don't have any sympathy for them; there is no right to a business model.
This feels less like changing the landscape and more like trying to stop a new neighbor from building a four-level shopping complex in front of your beach-front property while also strip-mining the forest behind.
As for whether the Times should be developing their own LLM bot, why on earth would they want that?