
People overanalyze the reasons for Atlassian, Meta, Block layoffs. It's because of AI. Truly.

I work in a smaller tech company as a senior leader. We haven't replaced any of the employees who voluntarily left; we've simply doubled down on using AI more and more.

If we are seeing this, surely big tech companies are also seeing it. I don't think they're that stupid.

Many companies are in industries where their growth is capped. No matter how many more employees they hire, they're not going to grow faster because the market conditions don't allow them to. The best way to generate more profits for them is to downsize while doing more or doing just as much. AI allows them to do that.

The laid off employees will go to companies where there is more growth potential.


You shouldn't care. Just like how a VC is willing to sponsor a startup to sell their products at a loss, you should be milking that as much as possible as a logical consumer.

Yes, but quality of life gets better for Americans because they can get a superior product at a cheaper price.

Protectionism is how you end up with a country like North Korea on the extreme end.


That can only happen if the higher density coincides with equal economic growth in the neighborhood. Otherwise, the higher density could result in a negative trend in home values.

Given that uncertainty, and since higher density can bring more traffic, noise, and crime, NIMBYs are likely taking the correct position for wealth preservation and quality of life.


"Traffic" doesn't come from higher density, it comes from zoning bans on mixed-use neighborhoods which force people to drive everywhere. The "crime" argument is especially silly: why assume that higher density only ever attracts criminals? Usually, having more people around is a positive.

You can assume higher density means "more crime" because the increase in population means that, to keep the same absolute number of crimes (which is the only thing people ever notice--every violent or sexual crime will be repeated in the news), you have to correspondingly increase the efficiency of crime-fighting, and American police aren't up to the task, even if they were motivated to do so.

You can switch to another free LLM chat app that doesn't have ads. No problem until those inevitably must add ads to survive.

My hope is that we can get to the point I can run good-enough models on local hardware before they are all ad laden.

  Also, "free": "If you're not paying for it, you're the product being sold"
HN is "free" too. :)

> > Also, "free": "If you're not paying for it, you're the product being sold"

> HN is "free" too. :)

Indeed: you deliver valuable information about market trends, market sentiments, technology, ... to SV startups and investors.

Additionally, Hacker News is basically a marketing expense of YC.


Well pointed out. Sometimes being the "product" is not a bad thing.

It's all about: do you derive an appropriate value for yourself from being the product?

For example, when you use the Google search engine, you are the product (Google's customers are advertisers). I hope you derive sufficient (average) value from each Google search so that you consider this to be worth it.


For a lot of people I think it's increasingly not worth it. Not only do we never click on ads, but results are getting worse, and often a (local) LLM can answer a large percentage of our questions faster and more privately.

And crowd sourced think tank.

Eh, a think tank usually has some kind of minimum requirements, such as education or industry experience. The usefulness of Hacker News lies in "farming the opinions of the kind of dork that hangs out on Hacker News" -- this is useful data, but "crowd sourced think tank" is trumping it up a bit, I think.

> Eh, a think tank usually has some kind of minimum requirements

When paid for, I agree that is absolutely true.

When nearly free, thanks to team Daniel, it's an input that can be weighed against paid options. The free but large crowd may have thought of things that paid think tank members holding doctorates have not. Great ideas are missed all the time, and most often until it is too late. There may be only a few golden eggs among many bad eggs, but there are quicker ways to sort that out nowadays without an Eggdicator and Oompa Loompas, though I do miss the songs. One golden egg could pay for the entire cost of the staff running HN for a decade.


At least it's free for everyone to data mine (as far as I know).

That isn't the gotcha you think it is.

Y Combinator absolutely profits from encouraging groupthink and positive attitudes about things they're involved in.

How else would you get a large part of the tech world to somehow believe that suckling on the teat of Venture capital until that elusive "exit" is the holy grail of business models?


Regardless of how you think HN makes the service free, I'm pointing out to the OP that he is using a service that is free and he is the product. He seems to be OK with this concept, so why wouldn't he be OK with the Mozilla free VPN concept?

How you are "valuable" as the product is absolutely a relevant distinction to make.

When the valuable part is data collection, you don't get a choice in how your user data is sold.

When the valuable part is influencing opinions, you do get a choice in what you believe.


Not sure why you're downvoted but I absolutely do think that they changed their name for better branding. I also think they were involved in a number of antitrust lawsuits so renaming their company to Meta says "see, we're the underdog in this new big VR industry, we're not a monopoly".

  There’s a strong chance the IPO window has passed. I just don’t see investors willing to jump in here given all the questions about the financial viability of AI.
My guess is it has barely started. I think nearly all AI IPOs have done well so far.

What AI IPOs?

CoreWeave, Nebius, and most Chinese AI IPOs have done extremely well.

There was one that went up and then back down: CoreWeave.

  One thing odd, maybe just to me, is why OpenAI has been stuffing its ranks with former Facebookers who are known to juice growth, find edges, and keep people addicted. They have little background in getting enterprises to buy into a product. Simo herself ran the Facebook app. That organization’s genius is consumer engagement: behavioral hooks, dopamine loops, the relentless optimization of the feed. You can see that in the recent iterations of ChatGPT. It has become such a sycophant, and creates answers and options, that you end up engaging with it. That’s juicing growth. Facebook style.
This is because ChatGPT is gearing up to sell ads. It's the only way to sustain a free chat service in the long term. Ads require engagement and usage. Hiring former Meta employees for this is smart business - even if HN crowd doesn't like it.

People say OpenAI is burning money and is on the verge of collapse. The same people will say OpenAI building an ads business on ChatGPT is "enshittification". These people are quite insufferable, no offense to the many who are exactly as I described.


So that’s why I’m getting clickbaity last sentences in every ChatGPT response now.

Things like: “If you want, I can also show a very fast Photoshop-style trick in Krita that lets you drag-copy an area in one step (without copy/paste). It’s hidden but extremely useful.”

Every single chat now has it. Not only the conversational prompt with “I can continue talking about this”, but very clickbaity terms like: almost nobody knows about this, you will be surprised, all VIPs are now using this car, do you want to know which it is? Etc


I find, again, Claude (web) outstanding and very comfortable here:

In most of my discussions throughout the day, it doesn't ask any "follow up" questions at the end. Very often it says things like: "you have two options: A - ..... and B - while the one includes X and the other Y..."

But this is what the OP underlined: Claude is popular among businesses; most "non-tech" people don't even know that it exists.


Don't worry, Claude will follow soon enough. It's not like Anthropic faces different financial pressures than OpenAI.

In case of Anthropic I just expect them to raise prices sky-high :-D

What would be the price at which you would stop subscribing? I'm in tech, so I'd be willing to pay up to around 100-120 USD per month, I'd guess (I'm currently on the 20 USD plan, which is super cheap and currently contains enough tokens).

But most private users ("at home") would not pay 100+ USD per month? Spotify is around 240-250 USD per year.


Private users can switch to Kimi. The model performs basically the same on programming tasks and is 10x cheaper. Why pay for a fat subscription when you can get an equivalent product for less?

Same here. “Do you want the one useful tip related to this topic that most people miss? It’s quite surprising.”

If it were so useful, just tell me in the first place! If you say “Yes” then it’s usually just a regurgitation of your prior conversation, not actually new information.

This immediately smelled of engagement bait as soon as the pattern started recently. It’s omnipresent and annoying.


Yes, ChatGPT just recently started adding these engagement-phrased follow-ups: “If you want, I can also show you one very common sign people miss that tells you…”

You can tell it not to do this in your personalized context.

The model doesn’t always obey it, but 80% of the time it’s worked for me.


This and also constantly saying stupid things like “yes that is a great observation and that’s how the pros do it for this very reason!” for a specific question that doesn’t apply to anything anyone else is doing

This is not just OpenAI, though. I don’t think this is new in general for these AI chat apps. Claude, at the very least, asks a question as the last part of nearly every response, I believe.

Those "Prompt-YES-baity" last sentences are somehow counterproductive.

> One thing odd, maybe just to me, is why OpenAI has been stuffing its ranks with former Facebookers who are known to juice growth, find edges, and keep people addicted

There is a very simple answer for this: that’s how leadership ranks work in SV. When one “leader” moves from Company A to Company B, a lot of existing employees are pushed out or sidelined, and the ranks are filled with loyalists from previous companies. Sometimes this works out, but a lot of the time it doesn’t, and it stays that way until another “leader” is brought in. What’s good for the company doesn’t matter unless there are clear incentives and targets laid out for them.


AI is ubiquitous to the point where it's permeating almost every desk job in the world. Even those who don't work are using AI to help them find work, research health problems, ask questions about their daily life. I can't think of anything else since the invention of the internet that has had this much of an impact on people's lives.

People will have to pay for this. I don't see it being free for long other than a few chats a day. If most people in the world are paying 10-200 bucks a month then AI companies will make money, and I doubt they will need to rely much on ads at all.


Anecdotally I know approximately zero 'normal' (non-tech) people who are intentionally using generative AI, several who have been badly misled by Google's AI summaries, and quite a few who are vehemently anti-AI (usually artists and writers).

(Except when mandated by their employers, which nobody is happy about or finds particularly useful.)


Every single person I know outside of my profession is using it, including all relatives of all ages. Even if it's at the top of the google search results :)

Or people are just using it as much as they do because it is free.

> I can't think of anything else since the invention of the internet that has had this much of an impact on people's lives.

If you reach a bit farther back, there's opium, an impactful product with limitless demand: https://en.wikipedia.org/wiki/Opium_Wars


On the other hand, costs are getting lower with time.

Sort of like how I now have an unlimited 5G data plan for like 10 dollars, while in 2011 I didn't even have Internet on my phone. This is happening with AI, too.


The worst are the ones who say things like “OpenAI only has 5% paying users!” As if that’s a really bad number. That is the same ratio YouTube, the world’s largest media company, has. And ChatGPT has like 800m users after only a few years of existence.

And “once they sell ads, they’ll lose all their users!” As if that happened to FB, Google, YouTube, or Instagram…

Some people are really rooting for the downfall of OpenAI that will simply not happen, and their rage makes them utterly unreasonable.


> And “once they sell ads, they’ll lose all their users!” As if that happened to FB, Google, YouTube, or Instagram…

Don't all those examples have network effects as a moat? As in, once the userbase is in, they lose quite a lot of value by switching to a competitor.

What value does a ChatGPT user lose by switching to a competitor?


Do you really believe ChatGPT will lose significant users?

Do you really believe that in your heart of hearts? Or are you trying to be the HN comment contrarian?


> Do you really believe ChatGPT will lose significant users?

I didn't say I believed that, I said that the reasons provided (for people to stick with it) were, to me, insufficient reasons.

The examples of people sticking with a product undergoing enshittification are not representative of the type of product that ChatGPT is. Those other products you mentioned had a strong moat - network effects.

Users had to stick with them, or lose their network.

AI Chat is, almost by definition, a non-network product. When you switch you don't lose updates from your friends, you don't lose subscribers to your channel, you don't lose your followers.

So, what exactly does someone lose when switching from AI Chat $FOO to AI Chat $BAR? Those saved conversations aren't exactly worth much, those "memories" that the Chat AI stored about you aren't worth much either (I was surprised at how many people thought those saved chats didn't contribute to the responses they get in the current chat).


I just can’t imagine anyone really bothering to switch, tbh. Even for a less enshittified product. For a better product, sure. Like if Google hadn’t rolled out Gemini in Search, ChatGPT would’ve crushed them. But not because of lack of ads in ChatGPT, because it was a better search product.

Google Search doesn’t have a network effect right? And people still tolerate their ads… they have 90% marketshare.

People still tolerate Netflix and Hulu ads right?

I think the only people that really care about enshittification are a few HN commenters, not a broad slice of the population.

Even at my company, our testing shows no drop in usage as we roll out ads.


> Google Search doesn’t have a network effect right?

In this specific case it does:

1. People go to google because it is more likely to have the result they are looking for[1],

2. So, people can't search elsewhere, because the network of sites are on google and they lose that if they switch.

--------------------

[1] Well, until recently, anyway. Still, sites prioritise and optimise for Google search ranking above all other indexes.


> And “once they sell ads, they’ll lose all their users!” As if that happened to FB, Google, YouTube, or Instagram…

Enshittification only works for the middleman in a two-sided market, which is what those things are. LLMs are a commodity, so their path to monopoly profit is very different.


I will check back on this comment in a year to see who was right.

The only people that care about enshittification are a few crazies on HN.

Google has 90% market share.


100%. It’s about to become the sleaziest used car salesman the internet has ever seen.

> People say OpenAI is burning money and is on the verge of collapse. The same people will say OpenAI building an ads business on ChatGPT is "enshittifcation". These people are quite insufferable, no offense to the many who are exactly as I described.

I guess ignore the evidence of what I can see? If it provided the value everyone says it does, then charging the amount you would otherwise generate from ad revenue doesn't seem like a huge ask. But that's not the objective, is it? All the players want to become the de facto AI provider, and they know bait-and-switch tactics are all they have.

This sentiment comes off as an abusive relationship with the tech industry. Rewarding new ways to define a race to the bottom. We never demand or expect better, just gladly roll over and throw money at your new keeper. It's sad.


  If it provided the value everyone says it does, then charging the amount of what you would generate for ad revenue doesn't seem like a huge ask.
The vast majority of YouTube viewers do not pay for Premium. No one pays for Google search premium. No one pays for Instagram, Facebook, or WhatsApp.

There is a certain class of services that works best with an ads-driven business model. ChatGPT is one of them.

If Google and all other search engines locked search behind a subscription, it'd do a great disservice to the world since it means the poor can't use it.


Except that this product isn't comparable whatsoever to YouTube. Contrary to your point, whole businesses are popping up because people are paying for search engines, as users feel that Google's results are insufficient for serious search. I'm not sure this is a proper comparison.

In other words, they need more experts on enshittification.

I don't have a problem with the suggestions. Google search does the same at the end of searches.

It does very often suggest things I want to know more about.


Suggestions are absolutely fine. But this is baiting. ChatGPT could have easily given me that information without the bait, and I would have happily consumed it. Maybe if it did it once, it would have been fine - but it kept on doing it: bait after bait after bait.

The objective was clearly to increase the engagement "metrics". It seems to me as if the leadership will take all the 'shortcuts' required for growth.


It’s worse than baiting. What happens a lot to me is:

Me: [Explains situation, followed by a request.]

AI: [7–8 paragraphs and bullet point lists explaining the situation back to me]. Would you like me to [request]?

Me: That’s literally what I just asked you to do.


It might not even be the leadership at this stage. It’s entirely possible that “rounds of conversation” is a metric that their reinforcement learning has been told to optimise.

This seems overly cynical.

Firstly, tl;dr is a very real thing. If the user asks a question and the LLM both answers it and then writes an essay about every probable follow-up question, that would be overwhelming to most people, and few would think that's a good idea. That isn't how a conversation works, either.

Worse still, if you're on a usage quota or paying by the token and you ask a simple question and it gives you volumes of unasked-for information, most people would be very cynical about that, suspecting it's trying to saturate usage unprompted.

Gemini often ends a response with "Would you like to know more about {XYZ}", and as an adult capable of making decisions and controlling my urges, 9 times out of 10 I just ignore it and move on, having had my original question satisfied without digging deeper. I don't see the big issue here. Every now and then it piques my interest, though, and I actually find it beneficial.

The prompts for possible/probable follow-up lines of inquiry are a non-issue, and I see no issue at all with them. They are nothing compared to the user-glazing that these LLMs do.


Have you used ChatGPT lately?

What you describe is not quite what they are doing; they are adding nudges at the end of the follow-up question suggestions. For instance, I was researching some IKEA furniture and it gave suggestions for follow-up, with nudges in parentheses: "IKEA furniture many people use for this (very cool solution)", and at the end of another question suggestion: "(very simple, but surprisingly effective)". They are subtle cliffhangers trying to influence you to go on, not pure suggestions. I'm just waiting for "(You wouldn't believe what this did!)". It has soured me on the service; Claude has a much better personality, imo.


Yes, it very closely parallels the “one weird trick” bait from a decade ago.

I’ve seen it use “one weird trick” multiple times in its end of response baiting. Literally those words.

No, I don't use OpenAI products. Sam Altman is a weird creep and the company is headed into the abyss, so it isn't my cup of tea.

However the original complaint was about continuation suggestions, which are a good feature and I suspect most users appreciate them. If ChatGPT uses bait or leading teases, then sure that's bad.


The current A/B test I seem to be in is that bad. But it will likely drive the metrics they are trying to drive.

Then just write the extra paragraph rather than bait?

Bait what, exactly? Getting the user to type "yes"? Great accomplishment.

Sometimes I want the extra paragraph, sometimes I don't. Sometimes I like the suggested follow-up, sometimes I don't. Sometimes I have half an hour in front of me to keep digging into a subject, sometimes I don't.

Why should the LLM "just write the extra paragraph" (consuming electricity in the process) for a potential follow-up question a user might, or might not, have? If I write a simple question, I hope to get a simple answer, not a whole essay answering things I did not explicitly ask for. And if I want to go deeper, typing 3 letters is not exactly a huge cost.


You send all the tokens an extra time, at least.

I’m not privy to their data on what this does to engagement, but intuitively it seems like the extra inference/token cost this incurs doesn’t align with their current model.

If they were doing it to API customers, sure, but getting the free or flat-rate customers to use more tokens seems counterproductive.


It juices their "engagement" metrics, which is the drug of choice for investors, right up there with net promoter scores.

We’ll see how this plays out. It’s a turbocharged version of enshittification, at a time when other models are showing stronger growth in B2B and other valuable markets.

I canceled my ChatGPT subscription and jumped to Claude, not for silly political theater, but just because the product was better for professional use. Looking at data from Ramp and others, I’m not alone.

