
It occurred to me on my walk today that a program is not the only output of programming.

The other, arguably far more important output, is the programmer.

The mental model that you, the programmer, build by writing the program.

And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.

(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)

The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.

See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)

https://www.youtube.com/watch?v=ZSRHeXYDLko



> The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.

Perhaps on a related note, I've noticed that a lot of the positive talk about AI is about quantity. On the other hand, there is disproportionately little deep discussion about quality. And I mean not just short-term, local quality, but long-term, holistic quality (e.g. managing complexity under evolving requirements in a complex system with multiple connected parts) at real production scale, where there is much less tolerance for failure.

In all the places I've worked throughout my career, I've felt there has always been a tension between those who cared more about things like the mental model and holistic quality, and those who seemed to care less or were even oblivious to it. I think one contribution of the current AI hype is that it has given a more concrete shape to this split...


> Perhaps on a related note, I've noticed that a lot of the positive talks about AI are about quantity. On the other hand, there is disproportionately very little deep discussion about quality.

And to me this is so weird, because from what I can tell, quantity hasn't been the winning factor for a very long time now.


LLM systems, being at their core probabilistic text generators, can easily produce massive amounts of output at scale.

In software engineering our job is to build reliable systems that scale to meet the needs of our customers.

With the advent of LLMs for generating software, we're simply ignoring many existing tenets of software engineering: assuming greater and greater risk in the hope of "moving faster", without setting up the guard rails we've always had. If a human sends me a PR with many changes scattered across several concerns, that's an instant rejection: close the PR and tell them to split it into multiple PRs, so reviewers aren't burned out on something beyond human comprehension limits. We should be rejecting these risky changes out of hand, with the possible exception of "starting from scratch" — but even then I'd suggest a disciplined approach with multiple validation steps and phases.

The hype is snake oil: saying we can and should one-shot everything into existence without human validation is pure fantasy. This careless use of GenAI is simply a recipe for disasters at scales we've not seen before.


Well said, thank you.


I've found LLMs decrease the friction of enabling more pedantic lints and tooling. It's a quantity problem because enabling all the aggressive warnings in the compiler creates a lot of work, and it's a quality outcome because, presumably, addressing every warning from the compiler makes the code better.
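As a concrete sketch of what "all the aggressive warnings" might mean (the specific flags here are my own illustration, not from the comment above — GCC and Clang both accept this set):

```shell
# Enable the broad warning sets, treat warnings as errors so nothing
# accumulates silently, and pin a language standard so -Wpedantic has
# something precise to check against.
gcc -std=c17 -Wall -Wextra -Wpedantic -Wshadow -Wconversion -Werror \
    -o app main.c
```

Each flag added to this line is cheap to type but can surface dozens of fixups across a codebase, which is exactly the kind of mechanical, high-volume work being described.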


Peter Naur had that realization back in 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf


>>[2019] Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)

During the Q&A, he responds: "do we really want software written that humans cannot understand?!" His steadfast doubts about a singularity seem worth revisiting, at least in light of those 2019 responses.

Certainly the speaker is correct that modern hardware allows software to be crappily written — I fondly recall the "olden times" he recounts, with the full-access operating systems of yesteryear. Those days are over...

The fact that a modern computer "needs" to be online to install an update is frustrating/concerning (e.g. macOS must be online to update unless you use a USB installer, even with a stand-alone updater already downloaded). Just use my local hardware (that I own) and install this software (that I have provided).


The phone number muscle memory example is perfect. There is a whole category of knowledge you only have if your hands did the work.


It's called "tacit knowledge", and I think we generally overindex on explicit, formal knowledge while ignoring tacit knowledge. You can see this with language learning: we treat languages as something you "learn", but in my experience it's closer to a motor skill like playing tennis.

https://en.wikipedia.org/wiki/Tacit_knowledge


The most recent phone numbers I actually remember are those I learned and used just before getting a smartphone. I guess tapping on a screen doesn't quite give you that same effect!



