
I don’t mind if my employer bought a subscription. But my personal motto is that if I have to do something multiple times, it should take less and less time each time. I reuse code heavily (which is why I learned vim, as it makes that fluid). And that’s where LLMs become useless to me: they need the entire context to generate anything, which means I have to type it all out for them in addition to what I want, and the whole thing becomes a drag. Maybe Cursor and the like could help, but the code is only half the story; there are also things like protocols, message formats, specs,…

What LLMs promise is endless drag. I try to structure my work to ensure that the final velocity is high.



I copy and paste code examples into LLMs all the time. They're extremely good at figuring out which parts of the context are relevant, so I don't find myself needing to do any editing at all - I find the right example, paste it in and add my prompt at the end.

This app for example - which runs OCR against PDF files entirely in the browser - was assembled by pasting in an example of PDF.js usage and an example of Tesseract.js usage and having it figure out the rest: https://simonwillison.net/2024/Mar/30/ocr-pdfs-images/


While I applaud the result, it's again one of those things that would be a quick script if it were only for my personal usage, because I wouldn't bother making it user friendly when I'm the single user.

The kind of project I work on is more like this: build an Android app for a quiz game. The quiz takes a list of random questions from a set. Each set is a package that can be installed and upgraded when online. While the app is free, there is an activation code required to download the main packages. The app should work offline except for activation and downloading packages. It should also notify the user when a new version of a package is ready. etc...
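To make the spec concrete, here's a minimal sketch of that domain model in Java. All names (`QuestionPackage`, `drawQuiz`, `upgradeAvailable`) are hypothetical, not from any real codebase; it just shows the pieces the spec implies: versioned installable question sets, offline random question selection, and an upgrade check gated on being online and activated.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class QuizModelSketch {
    // A question set shipped as an installable, upgradable package.
    static class QuestionPackage {
        final String id;
        final int version;
        final List<String> questions;

        QuestionPackage(String id, int version, List<String> questions) {
            this.id = id;
            this.version = version;
            this.questions = questions;
        }
    }

    // Pick n random questions for one quiz round; needs no network,
    // so this part works offline as the spec requires.
    static List<String> drawQuiz(QuestionPackage pkg, int n, Random rng) {
        List<String> pool = new ArrayList<>(pkg.questions);
        Collections.shuffle(pool, rng);
        return pool.subList(0, Math.min(n, pool.size()));
    }

    // An upgrade is only offered when the device is online, the app
    // is activated, and a strictly newer package version exists.
    static boolean upgradeAvailable(QuestionPackage installed, int latestVersion,
                                    boolean online, boolean activated) {
        return online && activated && latestVersion > installed.version;
    }
}
```

The point of the original comment stands: none of these individual pieces is hard, but the cohesion between them (activation gating downloads, downloads gating upgrades, everything else working offline) is the actual design work.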

I don't know if LLMs could have helped me at the time (pre 2020), but I doubt it. Not because the code was complex, but mostly because of how cohesive the whole thing had to be while keeping the parts loosely coupled and maintainable by a single person. The IDE was a great help once I had the design and the architecture outlined, mostly because it was deterministic and I already knew what the end result should be.


GPT-3 came out in 2020; prior to that we had just GPT-2, which was mildly interesting at best but not something that could generate usable code.

The best current LLMs (GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet) are just about at the point now where I'd expect them to get a useful chunk of that Android spec done. Which is pretty wild!



