> the idea that it's close to collapse is no better than any other online propaganda opinion
Not just that: how do you even define "the collapse of American society"?
What, exactly, do people think that would look like?
The Purge?
Complete anarchy? Riots in the streets?
The classic image of a burning metal garbage can in the street?
To the extent that a modern society like that of the US can "collapse", it's going to be a very, very slow and uneven thing. Most likely it would look like a Balkanization of the country: either de facto, or full legal (or illegal) secession of groups of states, over the course of a number of years.
I think the likely scenario is Trump digging in post-midterms, given the amount of flagrantly illegal stuff he's got floating around him and his crew.
Then two paths: he's either successful, forming the sort of "managed democracy" you see in Russia etc.
Or he's unsuccessful, and we see what happens. ICE are a militia beholden to the regime. Could get spicy.
Constitutionally, I think the framework that's supposed to check executive power is already shredded, or at least revealed for what it's been all along: pretty much norms.
If property rights are regulation, then so is anything that allows you to ignore them.
Once you get down to the level of property rights, the only alternative left is total might-makes-right anarchy.
Property rights are some of the earliest and most basic things protected by governments—indeed, to a large extent they precede governments, being protected with force by the people who wish to assert them.
Wipe out all regulations, all laws, all property rights, and try to run fiber across someone's property without their permission, and they're likely to come out with a shotgun and start shooting everyone digging. Follow the steps logically from that point, and you'll fairly quickly start reinventing governments and regulations.
> a boundary is an ultimatum you're setting on someone else's behavior
No, it's not.
A boundary is something you're saying about your behavior. "If you use racist language at me, I will have to end this conversation."
And much, much worse than someone with "a minefield full of unnecessary boundaries" is someone who has boundaries they don't tell you about.
You should only set boundaries that are real boundaries for you, not just whims or arbitrary decisions. But if you do have boundaries—and everyone does; if you think you don't, then you just haven't had someone cross them (or haven't realized that's what happened when they did)—you must communicate them in contexts where there's a real chance of them being crossed.
To do otherwise is unfair to everyone else and to yourself.
> you must communicate them in contexts where there's a real chance of them being crossed
I think this falls under de-escalation, and there are lots of approaches.
Communicating boundaries, or stating if-thens, can be an escalation in some situations.
Steering the conversation/situation away works in some situations.
Non-verbal communication can work, and be more tactful: it allows an accidentally-offensive person to recognise, pull back and show support. This smooths out conversations, and is common enough that it's expected by many.
For groups of people that use non-verbal communication less, then perhaps explicitly stating things is the only option.
But don't be surprised if non-verbal communicators interpret it as combative!
"Wow, Foo got upset at me quickly, and in front of others. Why didn't Foo make it clearer, non-verbally, that they were getting uncomfortable?"
You're making a distinction without a difference. In either case, without providing for compromise or some alternative mutual understanding, it's likely to come across as confounding and demanding.
I guess my question is: What's wrong with an ultimatum over things that are actually egregious enough to need a hard boundary? It seems like you're stuck on the word "ultimatum", as if there's nothing that could possibly be acceptable to give an ultimatum over.
I mean...I'm a pretty easygoing guy overall. My boundaries are things like, "If you come up and scream in my face, I'll tell you to sod off." "If you punch me, I'll probably shove you back." Reasonable boundaries for other people might be "if you grab my butt, I will report you to HR", or "if you ask me to work unpaid overtime, I will refuse (and report you to HR/the NLRB)".
It seems like you think "people setting boundaries" looks like telling your coworkers things like "never ever speak to me in anything but the most respectful tones" or "if you ask me about my personal life, even the tiniest bit, I will call the police". Except in extremely unusual circumstances, "boundaries" like that are actually people being abusive of their coworkers.
Any given company could stop training tomorrow, and, as some others have said here, they'd be generating quite a bit of profit until their models visibly fell behind, however long that ended up taking, at which point they'd probably just fall over completely.
Over the whole industry? No; they can never, ever stop training, or they'll cease to be useful at all very soon.
Training is what keeps the models up-to-date on current events, which includes new programming languages, frameworks, and techniques. It's already been observed that using LLM assistance on some types of programming is much more effective than others, based on how well-represented they are in the training data: if everyone stopped training tomorrow, and next month a new programming language came out, none of them would ever be able to help you program in that new language.
This can be extended to other aspects of programming, too. If training stopped, coding assistants would gradually start giving you wrong answers on how to implement code for APIs, frameworks, and languages that continued to evolve, as they will always do, in much subtler (and likely harder-to-debug) ways than how they'd deal with a new language whose existence they don't even know about.
I don't know about others, but with Amazon specifically, it's always been very clear that their "losing money" in aggregate was purely on paper, for tax purposes: their ability to undercut everyone else was initially based on being online without the brick-and-mortar costs that other stores had, then on economies of scale, and now on being the 900kg juggernaut that just has more money than God and can blow it on running you out of business if they feel like it.
Frankly? That's Google's (well, Alphabet's, I guess) problem.
They're a multibillion-dollar international monopoly with absolutely staggering amounts of money and power, actively engaging in a wide variety of activities directly aimed at making the lives of every normal person on the planet worse so that they can have more power, more control, and more money. Me blocking ads on YouTube not only costs them effectively nothing, it's also the act of a flea against a polar bear.
If Alphabet showed any signs of actually wanting to create a sustainable alternative to the surveillance economy, I might have some sympathy for them. But not only do they not do this, they are the ones who created it in the first place.
I'm not sure where you got the idea that I'm boycotting "the ad-free model".
I'm boycotting them. After all, every cent that goes their way supports surveillance advertising (among other unsavory things).
I have other subscriptions that support ad-free creators.
If they choose to misconstrue my refusal to support them with either money or ad views, that's also their problem. (Also, that's patently never going to happen, because my signal vanishes instantly into the noise.)
Next, you have to have a clear path to reaching it.
Then, you have to have the resources to actually walk that path.
Only with all three of those can you make any credible claim that AGI is near.
As it stands, we have none of them—and the lack of the second is the most damning. It's very, very clear at this point that just scaling up the existing LLMs is not going to reach some critical mass and result in AGI, like the serendipitous sapience of Mycroft in The Moon Is A Harsh Mistress.
Given that, any path to AGI necessarily includes some new breakthrough on it (or more than one). And by their essential nature, breakthroughs are not something you can predict or schedule. Indeed, you cannot even be guaranteed that they will ever happen. (It is likely, assuming that it is physically possible to build AGI, that we will figure out how at some point...but not guaranteed.)
On the other hand, approximate human-brain equivalence in computers has been predicted fairly accurately since about 1990, based on Moore's-law-type projections: mid-2020s by Moravec, 2029 by Kurzweil. The underlying assumption is that once the hardware is widely available, hackers will hack something together.
But Moore's Law went out the window over a decade ago. Oh, sure, we're still getting things smaller and faster—but nowhere near the same rate we were before. Nowadays most (not all, but most) of the advances we're getting are in more and better parallelisation, rather than faster performance on each core.
And my (rough, limited) understanding is that based on much more recent projections, it would take several orders of magnitude more computing power to genuinely simulate a human brain than what we have.
The main Moore's law like metric they track is compute in instructions per second, per dollar. That's kept on a fairly steady exponential for a century or so and doesn't seem to be slowing. (https://www.bvp.com/assets/uploads/2024/03/price-performance...)
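To make the compounding concrete, here's a back-of-the-envelope sketch of how a steady exponential in compute-per-dollar plays out. The ~2-year doubling period is my illustrative assumption, not a figure taken from the linked chart:

```python
# Illustrative only: how a steady doubling in compute-per-dollar
# compounds over multi-decade spans. The 2-year doubling period is an
# assumed round number, not data from the bvp.com chart.

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Total multiplier in compute-per-dollar after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period_years)

# Roughly 30 years separate the early-1990s projections from the mid-2020s:
print(growth_factor(30))  # 2**15 = 32768.0, i.e. ~3e4x more compute per dollar
```

The point of the sketch is just that any roughly steady doubling, whatever the exact period, makes decades-out hardware predictions far more tractable than predictions about software breakthroughs.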
We are probably not near simulating a brain, but we may be near having enough compute for similar functionality. Moravec estimated the requirement by looking at the retina, which was fairly well understood, and seeing how much compute was needed for similar object detection etc. in his robots.
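Moravec's scaling argument can be sketched in two lines of arithmetic. The specific numbers below are rough recollections of his published estimates, used only to show the method (estimate a well-understood piece of neural tissue, then scale by size):

```python
# A rough sketch of Moravec's retina-scaling estimate. Both constants are
# approximate recollections of his figures, not authoritative values.

retina_ops_per_sec = 1e9        # ~1000 MIPS for retina-equivalent vision processing
brain_to_retina_ratio = 75_000  # the brain is very roughly 75,000x the retina's bulk

brain_ops_per_sec = retina_ops_per_sec * brain_to_retina_ratio
print(brain_ops_per_sec)  # 7.5e13 ops/sec, in Moravec's ~10^14 ballpark
```

Whether that ballpark is right is exactly the open question: more recent, biophysically detailed estimates of "simulating" a brain come out several orders of magnitude higher, as noted above.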