Hacker News | Swizec's comments

> Probably in 500 BC they said you had to hack at stone with a chisel for cognitive development, and then someone invented the pen and paper.

You are forgetting that in 500 BC literacy rates were well under 10%. Nobody optimized for anyone’s cognitive development.

The only cognitive development people cared about was for the rich (aristocrats, royalty, some merchants, etc). Much of that happened orally through hands-on tutoring by an army of people specifically employed to create the next generation of leaders.

Anyone would thrive with that many resources thrown at them. And I’m pretty sure many of them considered reading and writing beneath them. They got people for that.


> Wheras only profit should matter

Profit is money you couldn’t figure out how to spend. During growth, you want positive operating margins with nominal profits. When the company/market matures, you want pure profits because shareholders like money. If you can find a way to invest those profits in new areas of growth, that’s better.


> Profit is money you couldn’t figure out how to spend.

Profit is the money showing your business is sustainable. Ever since the ZIRP era, US companies have kept haemorrhaging money at rates that are practically impossible to recoup.

If OpenAI plans to lose 100+ billion dollars per year for half a decade, what profits are you talking about to offset the losses?

> When the company/market matures, you want pure profits because shareholders like money.

Ah yes. Shareholders like money. And not, you know, basic accounting like "we need money to actually pay salaries, pay for equipment and offices, etc. without perpetually relying on seemingly endless investor money".


> what profits are you talking about to offset the losses?

You don’t need profit to offset the losses.

You can simply reduce spending / expenses.


In principle yes, but all metrics so far suggest they are losing money on every user interaction. There is very little network effect with these tools, so it's not like they can start cutting back on staff and feature deployment.

lol that’s a line so incredibly naive it hurts.

One does not “simply” reduce spending.


> One does not “simply” reduce spending.

Why does stock price go up after mass layoffs?


By your logic any company should just lay off everyone and profit from the stock price going to infinity.

The company would no longer function, of course, but why would that matter if the stock price is through the moon?


What happens when the only way to reduce spending is to reduce your assets? Seems like circular logic at that point. I suppose the market isn’t expected to be rational all the time, but eventually it is.

> Profit is the money showing your business is sustainable.

Notice I said you should have nominal profits.

> Ah yes. Shareholders like money. And not, you know, basic accounting like "we need money to actually pay salaries, pay for equipment and offices, etc. without perpetually relying on seemingly endless investor money".

All of these are costs that reduce your profits.

A maximally profitable business fires all employees except shareholders, closes every office, stops all R&D, and leases IP or real estate to others on long-term deals that never need to be renegotiated.


Not sure why you’re downvoted.

Everyone wants to treat OpenAI like a car wash business where they need to make a profit almost immediately. I don’t know why people can’t understand that the industry is in a rapid growth stage and investing the money is more important than making a profit now. The profits will come later.


> The profits will come later.

The nearly $1T hand wave. Forgive me if I ask how. Might give it some credence if Anthropic and Google weren't pulling even with or surpassing them in various ways or markets.

What's worse is they mostly seem to have retail market name recognition, which is arguably the hardest, or maybe an impossible, market to make money from.


  What's worse is they mostly seem to have retail market name recognition, which is arguably the hardest, or maybe an impossible, market to make money from.
That doesn't seem to be the case at all. Meta and Google are two of the most profitable companies in history, off the backs of free users.

Apple is another one that focuses almost exclusively on retail and is also one of the most profitable in history.



> profits will come later

Holy crap, is it the year 2000 again?


2000s, 2010s, and 2020s. This is how tech companies work, especially in a new industry.

A lot of investment gets amortized over many years, so even if you're investing all your free cash you'll still show a lot of profit.
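A back-of-the-envelope sketch of how that works (the numbers are made up for illustration): capex is expensed over its amortization schedule, so reported profit stays positive even when every dollar of free cash is reinvested.

```python
# Illustrative only: amortization decouples reported profit from cash flow.
revenue = 100.0
operating_costs = 60.0
capex_this_year = 40.0          # all free cash reinvested
amortization_years = 5          # capex expensed straight-line over 5 years

# Cash basis: nothing left over this year.
cash_flow = revenue - operating_costs - capex_this_year          # 0.0

# Accrual basis: only 1/5 of the capex hits this year's income statement.
amortization_expense = capex_this_year / amortization_years      # 8.0
accounting_profit = revenue - operating_costs - amortization_expense  # 32.0
```

Same year, same business: zero free cash, yet a healthy reported profit.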

> The people who typically have the most negative things to say about AI fall into camp #2 where AI is automating a large part of what they considered their art while enabling people in group #1 to iterate on their product faster.

I am in both camps. Always have been.

Code janitors about to be in high demand. We’ve always been pretty popular with leadership and it’s gonna get even more important.

Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things.

My output is org velocity.


I agree and I like how you describe it. The phrase from Django, "perfectionists with deadlines", also resonates with me.

> Treat code design and architecture as the thing that lets your slop cannons (90% of engineers, even pre-AI) move fast without breaking things

I'm currently of the opinion that humans should be laser focused on the data model. If you've got the right data model, the code is simpler. If you've got the relevant logical objects and events in the database with the right expressivity, you have a lot of optionality for pivoting as the architecture evolves.

It's about that solid foundation - and of course lots of tests on the other side.
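A toy sketch of that optionality point (all names hypothetical): if the logical events land in the store with enough expressivity, today's status and tomorrow's unanticipated questions are both just derivations rather than schema migrations.

```python
# Hypothetical sketch: an append-only event list standing in for a table.
# Because events are expressive, new views of the data are just queries.
from datetime import datetime, timezone

events = []

def record(order_id, kind, **details):
    """Append one business event with whatever details it carries."""
    events.append({
        "order_id": order_id,
        "kind": kind,
        "at": datetime.now(timezone.utc),
        **details,
    })

record(1, "placed", total=42.0)
record(1, "paid", method="card")
record(1, "shipped", carrier="UPS")

# Current status is derivable...
status = events[-1]["kind"]                                   # "shipped"
# ...and so is a question nobody thought to ask at design time.
paid_orders = {e["order_id"] for e in events if e["kind"] == "paid"}
```

Compare that with a single mutable `status` column: the history is gone the moment you overwrite it.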


> I'm currently of the opinion that humans should be laser focused on the data model

Yes. Good programmers talk about data structures; bad programmers talk about code.


How do you even converge on the right data model without refining code? Elegant code and elegant data model are the exact same thing!

"Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious."

Lots of people try to make their code better by revising their procedures and don't even think about changing their data model, and generally fail. You might not be able to change your data model without changing your code, but they're different activities.


Or not using the data model properly, zero foreign keys in databases, no triggers checking column contents etc.

"We'll do it on the app level".

sigh


No, that part is just smart. Databases have terrible support for nontrivial datastructures, doing everything at the app level is the only reasonable response.

It's called "systems analysis". Programmers are generally pretty terrible at it because it requires holistic, big-picture thinking. But it used to take up the bulk of the design activity for a new enterprise system.

And the result was usually a complex system that no one needed and no one could maintain. Success comes from building from the ground up, refining as you add features. Not locking yourself in an ivory tower drawing UML diagrams and drafting requirements. Doing and thinking go hand in hand, not separately.

Yeah, the field of software engineering has come a long way since then. But just because previous implementations of the analysis phase were flawed doesn't mean that the phase itself was flawed.

Research conducted by M. Bryce and Associates suggested that use of a structured systems analysis phase before programming began resulted in time and cost savings vs. just "hacking it together" like programmers want to do. Locking yourself in the ivory tower is an unfair way to characterize systems analysis. Systems analysts talk to people in the business to understand what the business requirements actually are, and then design an information system (NOTE: not a computer system or software system; if it cannot in principle be run on pen and paper it is not an information system) that meets those needs. Programmers only come in when the automatable parts of the system need to be implemented, and work from a detailed and precise spec.

> Programmers only come in when the automatable parts of the system need to be implemented, and work from a detailed and precise spec.

A lot of systems are complex enough that you can’t get to that stage (and if you think you have, think again), mostly because of communication issues and time constraints. Which is where the agile manifesto comes in and recommends the talk-do-show loop in short cycles. It need not be hacked together: the showing helps with communication, the talking guides the doing, and the doing is what pays for everything.


> My output is org velocity.

Amen. Slow and steady, and the feature flywheel just keeps getting faster.


> slop cannons

I am stealing that phrase haha


> That's it. He's a colleague. Not a mentor

You're missing a very important aspect of how managers impact your career: Opportunities.

The manager's job is to find you impactful work that a) gets you promoted and b) challenges you in the ways you want or need to grow.


Am I missing this, or are you assuming that I am incapable of finding opportunities myself, within or without the organization that the manager is beholden to? I honestly can't understand this framing, of the manager's job as a sort of opportunity finder for those 'under' them, and somehow being more impactful at this than the individuals themselves.

I'll give you this: some people need to be managed and, for some reason, need to be presented with opportunities by a second party. But some people just don't; they need to be collaborated with.


> Am I missing this, or are you assuming that I am incapable of finding opportunities myself

Somewhat, yes. It has nothing to do with you. Some opportunities you can create yourself, go for it. Other opportunities only arise in the context of leadership meetings you are not a part of (by definition, if you're not the manager). Having a manager in those meetings pushing for your opportunities is priceless.

Having had many managers who don't do this for me and a few that do, definitely want the second kind.


This is a pretty odd take, from my perspective.

If one of my direct reports came to me and said they were interested in working on, say... AI observability (replace with whatever interests you), and that was something I had any influence over (even if only indirectly), I'd be finding whatever way I could to connect my report with that kind of work.

It's all well and good to say that you're in control of your own career advancement, but that's not in conflict with working with your manager on supporting your career development. Even if they don't have anything to teach you, they will necessarily have some influence over your scope/area of work, so it only makes sense to work with them on aligning your work with your interests.


I believe everything you wrote about here is actually cooperation between two people, and to the point of what I said, you not actively getting in the way of your direct report's career progression.

> The manager's job is to find you impactful work that a) gets you promoted and b) challenges you in the ways you want or need to grow.

To me, the comment I responded to reads like a manager actively involved in the promotion of a direct report, and in finding a scope of work that the report might find challenging so that they grow. Your comment reads like a colleague helping out another colleague to the best of their ability. Which is exactly what I expect from a manager.


> You're missing a very important aspect of how managers impact your career: Opportunities.

Indeed! In basketball terms, a manager should be the MVP in Assists. They don't score directly but they set up plays for you so you can succeed. It's then up to the employee to act on it and score.

The best managers I've had are of this type.


> I'm not convinced that "actual logic and thought" aren't just about inferring what comes next statistically based on experience.

Often they are the exact opposite. Entire fields of math and science talk about this: causation vs. correlation, confirmation bias, the base rate fallacy, Bayesian reasoning, the sharpshooter fallacy, etc.

All of those were developed because “inferring from experience” leads you to the wrong conclusion.


Bayesian reasoning is just another algorithm for predicting from experience (aka your prior).

I took the GP to be making a general point about the power of “next x prediction” rather than the algorithm a human would run when you say they are “inferring from experience”. (I may be assuming my own beliefs of course.)

E.g. even LeCun’s rejection of LLMs in favor of world models is still running a predictor, just in latent space (predicting the next world-state instead of the next token).

And of course, under the Predictive Processing model there is a comprehensive explanation of human cognition as hierarchical predictors. So it’s a plausible general model.
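The "prior as experience" point above fits in a few lines of arithmetic (the probabilities here are illustrative, not a claim about any real base rate): the prior encodes what experience taught you, and the posterior is the revised prediction after one observation.

```python
# Minimal Bayes update: prediction from experience, made explicit.
prior_rain = 0.3                 # experience: it rains on ~30% of days
p_clouds_given_rain = 0.9        # likelihoods (illustrative numbers)
p_clouds_given_dry = 0.2

# Total probability of the observation ("clouds this morning").
p_clouds = (p_clouds_given_rain * prior_rain
            + p_clouds_given_dry * (1 - prior_rain))          # 0.41

# Bayes' rule: update the rain prediction given the observation.
posterior_rain = p_clouds_given_rain * prior_rain / p_clouds  # ~0.659
```

Nothing here conflicts with the fallacy list upthread; those fallacies are what you get when the update step is skipped or the prior is ignored.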


> under the Predictive Processing model there is a comprehensive explanation of human cognition as hierarchical predictors

It’s plausible!

But keep in mind humans have been explaining ourselves in terms of the current most advanced technology for centuries. We used to be kinda like clockwork, then a bit like a steam engine, then a lot like computers, and now we’re just like AI.

That’s why you blow a gasket or a fuse, release some steam, reboot your life, do a brain dump, feel like a cog in the machine, get your wires crossed, etc.


> Society is a compact between the dead, the living and the yet to be born. Having a piece of someone’s deepest thoughts is a treasure for future generations

If you want to be remembered, live a life worth remembering.


> At some point, I think its okay to just use account age instead of even asking

Bet you there’s already a thriving grey market for old accounts with organic history.


People going that route don't care about filling in a DOB on an account.

> everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available because we fall into a trap where we’re immediately filling it with more things

You fill a jar with sand and there is no space for big rocks.

But if you fill the jar with big rocks, there is plenty of space for sand. Remove one of the rocks and the sand instantly fills that void.

Make sure you fit the rocks first.


I think that's kind of the point though: AI is the sand, but it's the rocks that hold all of the value; the further you get away from using AI the more real value you obtain. Like, a few of the rocks have gold deposits in them, and the sand is just infinitely copious but never holds anything valuable. And you've got a bunch of people running around saying, "Behold my mountains of sand!"

You fill the bottle with water, you put a fish in it, you remove half of the water, the bottle is still half full, but if you remove the fish, it will have less water than before.

You fill the bottle with half of the water, you put the fish in, you can fill in the other half. If you start with the first half, you will end up with more water.


You write a metaphor in a comment, you remove half of it, you add another one in the middle, you add back half of the first one, and… nobody understands anything.

Is it the ultimate result of LLM use? People internalising the idea that writing is about stringing words together like a Markov chain without realising they're not saying anything of substance?

That is the philosophy of "humans are just token predictors", yes.

In a more advanced civilisation, you would be put in the pillory for the townsfolk to throw rotten cabbage at you until the Lord fixed whatever made you say that.

You put your right foot in, you put your right foot out, you put your right foot in, and you shake it all about.

The point of the metaphor is not to say "spending time is mechanically similar to putting things in a container". It is to look at spending time from a new angle, and see if it helps you understand it better. A wise person sees a metaphor as a launching point for thought, not as an expression of a metaphysical connection.

Yes, there are bad metaphors, and people who take metaphors too seriously. That you can conjure a bad metaphor with somewhat similar semantics to some other metaphor does not mean that said metaphor is bad.


you fill the 3 liter bottle up to the top, and pour the contents into the 5 liter bottle

then you fill 3 liter bottle again, and pour the contents into the 5 liter bottle until the 5 liter one is full

empty the 5 liter bottle, and pour the 1 liter in the 3 liter bottle into the 5 liter bottle

fill the 3 liter bottle again and pour that into the 1 liter already in the 5 liter bottle to get 4 liters of water
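For anyone who wants to verify those pours, here's a quick sketch simulating the states (the `pour` helper is my own, just modeling "pour until source is empty or destination is full"):

```python
def pour(src, dst, dst_cap):
    """Pour from src into dst until src is empty or dst is full."""
    moved = min(src, dst_cap - dst)
    return src - moved, dst + moved

three, five = 3, 0                  # fill the 3L bottle
three, five = pour(three, five, 5)  # pour it into the 5L -> (0, 3)
three = 3                           # fill the 3L again
three, five = pour(three, five, 5)  # top off the 5L -> (1, 5)
five = 0                            # empty the 5L bottle
three, five = pour(three, five, 5)  # move the leftover liter -> (0, 1)
three = 3                           # fill the 3L once more
three, five = pour(three, five, 5)  # -> (0, 4): exactly 4 liters
```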


Then you bring the fox back, take the hen across the river, ...


I'm so glad that style of interview was dying out right when I graduated. And I love puzzles. But I don't need wannabe IQ tests for a job that expects me to work in legacy code and coordinate with other engineers.

But if you teach the bottle how to fish it will be able to feed itself for life.

> You fill the bottle with water, you put a fish in it, [some water overflows], you remove half of water...

That water overflow step is missing / implicit. But that's an observable event.


What?

I assume the post used an extreme example to demonstrate that wise-sounding metaphors may have no inherent point or value.

Hahah, I just have to reply and say I loved the original comment and was happy for the laugh. Obviously this is the answer to the riddle of

> Given a 3-liter container and a 5-liter container, both initially empty, and access to tap water, how can you measure exactly 4 liters of water without using any additional containers?

I've offered and received some convoluted metaphors recently, love leaning hard into this one.


They're talking about Archimedes' principle, displacement of water. The fish makes the water bottle overflow, so be careful when you add the fish so that it doesn't. It's a counter analogy to the rocks one above.

They’re pointing out that if the jar was _filled_ with sand, then of course you can’t fit any rocks in because it’s full. It’s cute but misunderstands the original metaphor I think.

Psilocybin?

Not sure, I used to be better at diagnosing this type of episode.


If that ain’t the secret to solving an NP-complete problem, I don’t know what is!

> including totally trashing the old implementation and creating an entirely new one from scratch that matches all the requirements

Let me guess, you've never worked in a real production environment?

When your software supports 8, 9, 10 or more zeroes of revenue, "trash the old and create new" are just about the scariest words you can say. There's people relying on this code that you've never even heard of.

Really good post about why AI is a poor fit in software environments where nobody even knows the full requirements: https://www.linkedin.com/pulse/production-telemetry-spec-sur...


> Let me guess, you've never worked in a real production environment?

The comment to which you're responding includes a note at the end that the commenter is being sarcastic. Perhaps that wasn't in the comment when you responded to it.


It wasn’t, thanks for highlighting. It can be hard to tell online because there are a lot of people genuinely suggesting everyone should build their own software on the fly.


If the amount of code corporations produce goes even 2x there's gonna be a lot of jobs for us to fix every company's JIRA implementation because the c-suite is full of morons.


I work on a product that meets your criteria. We can't fix a class of defects because once we ship, customers will depend upon that behavior and changing is very expensive and takes years to deprecate and age out. So we are stuck with what we ship and need to be very careful about what we release.


That's why I find any effort to create specifications... cute. In brownfield software, more often than not, the code _is_ the specification.


But if you start from the beginning with a code base that is always only generated from a spec, presumably as the tools improve you'd be able to grow to a big industrial-grade app that is 100% based on a spec.

The question is how many giant apps out there have yet to be even started vs. how many brownfield apps out there that will outlive all of us.


If the spec covers 100% of the code paths, then yes, you're right. But now spec and code are entirely redundant. Changing the spec or changing the code takes the same effort.

If the spec doesn't specify all the details, then there are gaps for the code to fill. For example, code for a UI is highly specific, down to the last pixel. A spec might say "a dialog with two buttons, labelled OK and cancel". That dialog would look different every time the spec is reimplemented.

Unless of course, there was also a spec for the dialog, that we could refer to in the other spec? That's really just code and reuse.


This might be the "Steve, Don't Eat It!" version of the xkcd workflow comic.

Whatever you ship, Steve will eat, and some Steves will develop an addiction.


> When your software supports 8, 9, 10 or more zeroes of revenue, "trash the old and create new" are just about the scariest words you can say. There's people relying on this code that you've never even heard of.

Well, now it'll take them 5 minutes to rewrite their code to work around your change.


> Well, now it'll take them 5 minutes to rewrite their code to work around your change

You misunderstand. It will take them 2 years to retrain 5000 people on the new process across hundreds of locations. In some fields, whole new college-level certifications courses will have to be created.

In my specific experience it’s just a few dozen (maybe 100) people doing the manual process on top of our software and it takes weeks for everyone to get used to any significant change.

We still have people using pages that we deprecated a year ago. Nobody can figure out who they are or what they’re missing on the new pages we built


> You misunderstand. It will take them 2 years to retrain 5000 people on the new process across hundreds of locations. In some fields, whole new college-level certifications courses will have to be created.

Replace them by AI.

I’m still being sarcastic.


Ask AI for a strategy and for tools to build to figure it out.


Great, now you have a strategy (one less MBA to hire). You still need to execute the strategy.

The doing is where most of the time goes. Strategy docs are cheap, my intern can give you 5 of those by tomorrow.


That will be after it breaks, which costs money.

Also: no


> I will die on this hill: tech firms that mandated 5 days in the office was about soft layoffs, rather than a principled stance on individual performance under WFH

“True work only happens in person with human collaboration! Everyone must come back”

2 years pass

“Oh wow we can replace everyone with a chatbot this is amazing”

narrator: It was the interest rates all along. Many of these tech businesses are fundamentally bad, the ROI is smoke and mirrors, energy shocks and bad macro-economics are coming, and investors are starting to ask hard questions.


Tech has never been perfect, but there was a time when it felt more hopeful and optimistic and about building cool stuff. There's always been give and take with the money side of things that's necessary to keep fueling the building, but it feels like it all kind of went off the rails somewhere.

I'd be fine with earning less (we're pretty frugal) to work in that kind of environment with good people.


You'd also be a lot more likely to keep that kind of job post-crash, by all means take it if you find one.


I don't think it was interest rates. Tech just saw that workers were able to extract higher and higher pay and more and more benefits and the industry saw an opportunity to reverse this trend. And it has succeeded. The semi-coordinated action since 2023 has caused pay to stagnate, enabled businesses to remove benefits, and frightened workers away from changing employers.


You know, I never realized the "human collaboration" versus "AI can replace you" dissonance before, but I believe that you are completely correct.


It was the 100% write-off of R&D spending, with software salaries mandated to count as R&D. As soon as that changed, very precisely, the layoffs started.
