At work I use skills to maintain code consistency. We implemented a solid "model view viewmodel" architecture for a front-end app, because without any guard rails the LLM was doing redundant data fetching and type casts, and the code was just messy overall. Having an MVVM rule and skill that defines the boundaries keeps the LLM from writing a bunch of nonsense code that happens to work.
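To make the boundary idea concrete, here's a minimal sketch of what an MVVM layering rule like that might enforce. All names (`User`, `UserService`, `UserViewModel`) are illustrative, not from the comment above; the point is that fetching and caching live in one layer, so redundant requests and ad-hoc type casts have nowhere to hide.

```typescript
// Model: the raw domain type, owned by the data layer.
interface User {
  id: string;
  name: string;
}

// Service: the only layer allowed to fetch; caches to avoid redundant requests.
class UserService {
  private cache = new Map<string, User>();

  async getUser(id: string): Promise<User> {
    const cached = this.cache.get(id);
    if (cached) return cached;
    // Simulated fetch; a real app would call fetch() against an API here.
    const user: User = { id, name: `user-${id}` };
    this.cache.set(id, user);
    return user;
  }
}

// ViewModel: adapts the model for the view; the view never touches the service
// or the model directly, so there is no place for stray casts or re-fetches.
class UserViewModel {
  constructor(private service: UserService) {}

  async displayName(id: string): Promise<string> {
    const user = await this.service.getUser(id);
    return user.name.toUpperCase();
  }
}
```

A rule like "views may only depend on viewmodels, viewmodels only on services" is mechanically checkable, which is exactly what makes it a useful guard rail for an LLM.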
Possibly, and we do use linters, but linters don't stop LLMs from going off the rails. It does end up fixing itself because of the linter, but then the results are only as good as the linter itself.
Much of my time at work is reading through quickly typed messages from my boss and understanding exactly what questions I need to ask in order to make it easy for him to answer clearly.
Engineers who lack soft skills cannot be effective in team environments.
RTFM and being able to quickly reduce problems to the simplest possible test case are my superpowers.
To be fair, LLMs can be quite useful for quickly finding the correct place in TFM to look when you don't necessarily have a function or feature name to go on.
Haha. I always say that I'm a good IC not because of my technical skills but because of my communication skills and my willingness to, as Stephen Covey says, seek first to understand.
I literally got a transfer to a more exciting position and most likely a raise (discussing it on Tuesday =P) just because I communicated things related to the new position proactively and more than other people who, after the fact, expressed interest in the position.
Treating the local user as hostile should be a user-group setting in enterprise versions, not a default across all versions.
But now that I think of it, I was pretty hostile to my computer when I was ten years old and running Windows 2000. I don't think we'd ever seen so many pop-ups before.
But even so, the admins of a computer system should have control over their computers. I can understand if my mom's user profile might have limitations, but my admin profile should not.
Security isn't an unqualified good. You're always securing something from some threat. Keeping the subject and the threat actor implicit causes confusion in the minds of many tech people, and is part of the reason we land in situations like this.
Windows is not just an operating system on your computer. It is a product (nowadays, a service) of Microsoft. Some security systems in it are meant to protect the PC/system/user from external threats. Others are meant to protect Microsoft, and Windows as a product/service, from the user.
Being specific about what is being protected and from whom is more important than the specifics of the actual security technology. After all, depending on the answers to those two questions, the very same security technology is protecting you from a cyber-criminal installing a rootkit on your PC, protecting Microsoft from you pirating Windows, or protecting copyright interests from you trying to watch a movie in a geographic location where they don't want you to watch it.
I think the sarcasm indicator is useful especially for some neurodivergent folks who may not pick up on social cues well. And the sarcasm indicator does not in any way detract from the joke.
Yeah it does. If you have to explain the joke, it makes it not funny. In the real world, people don't have explicit sarcasm markers; you have to deduce it. As a neurodivergent person, I reflexively downvote on /s because coddling people isn't going to help them grow or deal with the real world.
In the real world we have things like intonation, facial expressions, body language, and other indicators to denote sarcasm.
On the internet it is very possible and often plausible that someone can very much believe what may appear to a reasonable person to be sarcasm. Having a crutch online does not equate to an equivalent crutch offline.
Anecdotally, neurodivergent folks I know prefer, and some even require, a sarcasm indicator online.
There's even more to it — real conversations are interactive, so if a statement causes confusion, it can be cleared up immediately. Forum posts, however, must stand alone for the most part.
You probably also know the person you're talking to irl, so it's way easier to make the judgement call on whether they're serious or not, compared to a random person online.
This is a good point. I've seen people with really complex AI setups (multiple agents collaborating for hours). But what are they building? Are they building a React app with an Express backend? A Next.js app? Which is itself a layer on top of an abstraction?
I haven't tried this myself, but I'm curious whether an LLM could build a scalable, maintainable app that doesn't use a framework or external libraries. It could be dangerous due to lack of training data, but I think it's important to build stuff that people use, not stuff that people use to build stuff that people use to build stuff that....
Not that meta frameworks aren't valuable, but I think they're often solving the wrong problem.
When it comes time to debug would you rather ask questions about and dig through code in a popular open source library, or dig through code generated by an LLM specifically for your project?
The copout answer is it depends. I've debugged sloppy code in React both before and after LLMs were commonly used. I've also debugged very well-written custom frameworks before and after LLMs.
I think with proper guardrails and verification/validation, a custom framework could be easier to maintain than sloppy React code (or insert popular framework here).
My point is that as long as we keep the status quo of how software is built (using popular tools that made it fast and easy to build software before LLMs, but that were often unperformant), we'll keep heading down this path of trying to solve the problems of frameworks instead of directly solving the problems with our app.
You are going to allow a product from a company you have no reason to trust to write important software for you and put it into production without checking the code to see what it does?
I agree with you, which makes me seem like the laggard at work. The devil's-advocate position is that AI-native development will use AI to ask these questions and such. So whether it's a framework or the standard lib, I definitely agree knowing your stuff is what matters, but the tools for demonstrating that knowledge are in fast flux.
Again, I am on the slow train. But this seems to be all I hear. "code optimized for humans" is marked for death.
had another thought on my drive just now. nextjs is really fantastic for LLM usage because there's so large a body of work to source from. previously i found nextjs unbearable to work with, given its bespoke isomorphic APIs. too dense, too many nuances, too much across the stack.
with LLMs it gets spit out amazingly fast. but does it make nextjs better or worse as a framework design that an LLM is a requirement in order to navigate it?
This may be true, but define an entire application. Is it a CRUD app? Is it an app that scales to a thousand, ten thousand, a million users? Is it an app that is bug free and if not bug free, easy to fix said bugs with few to no regressions? Is it an app that is easy to maintain and add new features to without risk of breaking other stuff?
I think it is genuinely impressive to be able to build an app with AI at all. But I haven't seen evidence that someone could build a maintainable, scalable app with AI. In fact, anecdotally, a friend of mine who runs an agency had a client who vibe coded their app, figured out that they couldn't maintain it, and had him rewrite the app in a way that could be maintained long term.
Again, I'm not an AI detractor. I use it every day at work. But I've needed to harden my app and rules so that the AI cannot make mistakes when I or another engineer is vibing a new feature. It's a work in progress.
I find it useful to use a brainstorming skill to teach me X Y Z and help me understand the tradeoffs for each, and what it'd recommend.
I've learned about the outbox pattern, eventual consistency, the CAP theorem, etc. It's been fun. But if I hadn't asked the LLM to help me understand, it would have just gone with option A without my understanding why.
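For anyone unfamiliar with the outbox pattern mentioned above, here's a minimal sketch. The in-memory arrays are a stand-in I made up for this example; a real implementation writes the business row and the outbox row inside one database transaction, and a separate relay process publishes the pending events.

```typescript
interface OutboxEvent {
  id: number;
  payload: string;
  published: boolean;
}

class OrderStore {
  orders: string[] = [];
  outbox: OutboxEvent[] = [];
  private nextId = 1;

  // Write the order AND its event together. In a real DB these two writes
  // share one transaction, so you never get an order without its event.
  placeOrder(order: string): void {
    this.orders.push(order);
    this.outbox.push({ id: this.nextId++, payload: order, published: false });
  }

  // A relay process polls the outbox and publishes anything not yet sent.
  // Marking an event published only after a successful publish gives
  // at-least-once delivery; consumers must tolerate duplicates.
  relay(publish: (payload: string) => void): void {
    for (const evt of this.outbox) {
      if (!evt.published) {
        publish(evt.payload);
        evt.published = true;
      }
    }
  }
}
```

The eventual-consistency trade-off is visible right in the sketch: between `placeOrder` and the next `relay` run, downstream consumers haven't seen the event yet.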
In my instance, I’m talking more about using library X or library Y, not the difference between using an atomic versus a mutex. I want to learn the latter, but the former isn’t something I care about.
Ah, that's fair. I personally don't normally care about library usage as long as it's fairly well documented and effective (like Shadcn vs raw tailwind components vs chakra... I don't really care).
Yea I just use LLM agents as tools, I don’t kick whole features to them or have a cloud agent running all the time. I rarely use more than $100 in usage monthly, usually less than half that. I use tab completion a lot in Cursor and use agents to make mechanical changes or integrate features I don’t care about learning, like integrating several libraries together into my application. I also use it to write things I’ve already got examples for, like database APIs.
Software engineers who haven’t tried these tools don’t understand what they are, and vibe coders who never understood software are taking the mindshare in public because it sounds revolutionary to some and apocalyptic to others. You have to stop listening to the claw bros and try using these as tools yourself in small ways to see what it’s really about, IMO.
Agreed; as in most things, moderation is key. It is a new tool that is here whether we like it or not. May as well learn to coexist with it, but also not defer all our thinking to the tool.
Really? On HN I see so many AI naysayers who say either it's not useful or it's a net negative on productivity. Perhaps they are a minority, but they're certainly a vocal one.
This! I've actually learned a lot about what I don't know by using AI. It made me dig into learning proper systems design, app architecture, etc.
But at the same time, the more I read about AI, the more I realize I need to learn about AI. Thus far I'm just using Cursor and the Claude Code extension alongside obra's superpowers, and I've been quite happy with it. But on Twitter I see people with multiple instances of Claude Code or open claw talking to each other, and I don't even know how to begin to understand what's going on there. But I'm not letting myself get distracted: Claude Code and open claw are tools. They could go away at any time. But systems thinking is something that won't go away. At least, that's my gambit.
It’s telling that those people mostly talk about the complexity of the AI setup they’ve engineered to write code, much more so than about the software created by that process.