Hacker News | imiric's comments

I agree that the tone is off-putting and immature, but attacking the author and the site doesn't change the fact that it gained traction and many students seemed to enjoy it (if we can believe the claims). There's clearly a demand for this type of site, so any technical or novelty merits are irrelevant.

What the author did wrong was mishandling the negative response. If he had been open to the feedback and worked on a plan to address the concerns, the site might have stayed up. Hopefully this is a learning opportunity, as he clearly needs it.


What a ridiculous conclusion.

Why does Adobe need to exfiltrate some information from my machine anyway? If I'm a customer, then they should know this when I sign into my account. They absolutely don't need this information if I'm visiting their website without logging in.

Modifying a global system file is something their software shouldn't be doing in the first place, but relying on this abuse to track me on their website is on another level of insidious behavior.


If you're worried about device fingerprinting, Adobe has far more reliable ways to do it already. Canvas fingerprinting, IP tracking, cookies. A hosts entry tells them almost nothing they couldn't get elsewhere, provides them with almost no entropy, and attributing insidious intent to what is most plausibly a UX feature is conspiratorial.

I'm not worried about this, since I don't use Adobe products. I'm just calling out what's clearly user hostile behavior. Considering the amount of hostility Adobe has exhibited towards its users over the years, I'm inclined to believe this is yet another example. Nothing conspiratorial about that. If anything, calling this a "UX feature" without any evidence either way is suspiciously dismissive.

Make sure to use "PRETTY PLEASE" in all caps in your `SOUL.md`. And occasionally remind it that kittens are going to die unless it cooperates. Works wonders.

Can you paste the relevant section in your soul please?

Sure, as soon as I locate my soul.

I love how despite how cold and inhuman LLMs are, we've at least taught them to respect the lives of kittens

The top 40 has rarely been about "art", though. The music there is highly formulaic and derivative, made by creators who know well how to produce music that appeals to the masses.

The effect of this "AI" trend is that now humans with no musical background or experience can flood the medium, making it much more difficult for anyone to make a living from it, whether they're an artist or not.


No, back when there were actual musicians playing actual instruments, the top 40 was legitimate art.

https://en.wikipedia.org/wiki/Billboard_Year-End_Hot_100_sin...


The author contradicts themself.

> Stop thinking that finding a flaw is a contribution. It's half of a contribution at best. The other half is "and here's how we might solve that." If you're pointing out a problem without offering a path through it, that's not contributing.

So... finding a flaw is "half" of a contribution, but it also isn't a contribution?

Critical thinking is valuable. Yes, it can sometimes be counterproductive, but it can also be steered in a productive direction.

If someone criticizes something, ask them to elaborate. Why couldn't this work? What are the exact roadblocks? Is it based on prior experience, or gut feeling? If it's based on experience, how similar is the current situation to their past experience?

Chances are that after some prodding their criticism turns out to be a non-issue, or far less of an issue than they originally thought. In either case, the discussion itself often steers the group towards a better solution, so bringing up potential concerns, even if they're invalid, is often a good exercise.

The skill is aiming for the right moment to bring it up, having a bit of tact in the delivery, and not burying your head in the sand and being defensive (or offensive). Too often people attach their ideas to their identities, which is the root cause of why design discussions can be frustrating and nonproductive.


You're right, but you're also ignoring that branding, appearance, etc., are simply not important to some people. They prefer function over form, which is where I think the author is coming from. They're wrong in thinking that most people share this opinion, and the idea of LLMs creating UIs seems awful to me, but as you can see from the comments here, this is appealing to some. It's niche, but this website is not exactly mainstream either.

I partly share this opinion because most branding and UIs, products that are primarily marketed as a "lifestyle", etc., are obnoxious. Yes, appearance is a factor of anything we interact with, but when using technology my primary thought is whether it solves a practical problem. Not whether it's broadcasting an image, or even whether it's enjoyable to use. The latter is important, but companies often prioritize it over functionality, which is backwards to me.

So starting with a mostly functional product, and giving me the choice of how to style it, is appealing to me. This is why I still use RSS, custom style sheets, the CLI and simple GUI wrappers, etc.

There is an audience for this type of product, but it's of the magnitude of a rounding error, so naturally most companies don't, and likely shouldn't, focus on this segment.


I totally agree that there is an often loud minority calling for this sort of thing: "I am an expert. I don't need styling or white space. I want every last square centimeter of space filled with 8pt font. I demand information density!" (aside: these are also the same people who say that JS-based UIs are slow and server-side HTML is faster, despite the fact that backend latency is 99.999% of the problem but that is another discussion...)

And yet, in my lived experience at an unnamed Big Co, when we did lots of UXR work on on-call, monitoring, and incident management software/tooling, it turned out that for the person who is primary on-call handling a page for an incident, while the company is losing millions for every minute of downtime, the 8pt-font, information-dense UI they said they wanted actually led to increased stress, more mistakes, longer time-to-mitigation, etc. Turns out that a carefully and deliberately designed UX and information architecture and - gasp - white space (all carefully and minutely tuned to specific CUJs over many rounds of research and prototyping) are really important.

Even if you have all the information available, just throwing stuff at the screen doesn't always help IME. Less is often more.


If you considered using it in the first place, reports of security vulnerabilities wouldn't concern you.

I find it puzzling whenever someone claims to reach "flow" or "zen state" when using these tools. Reviewing and testing code, constantly switching contexts, juggling model contexts, coming up with prompt incantations to coax the model into the right direction, etc., is so mentally taxing and full of interruptions and micromanagement that it's practically impossible to achieve any sort of "flow" or "zen state".

This is in no way comparable to the "flow" state that programmers sometimes achieve, which is reached when the person has a clear mental model of the program, understands all relevant context and APIs, and is able to easily translate their thoughts and program requirements into functional code. The reason why interrupting someone in this state is so disruptive is because it can take quite a while to reach it again.

Working with LLMs is the complete opposite of this.


Thank you so much. These comments let me believe in my sanity in an over-hyped world.

I see how people think it's more productive, but honestly I iterate on my code like 10-15 times before it goes into production, to make sure it logs the right things, it communicates intent clearly, and the types are shared and defined where they should be. It's stored in the right folder and so on.

Whilst the laziness to just pass it to CC is there, I feel more productive writing it on my own, because I go in small iterations. Especially when I need to test stuff.

Let’s say I have to build an automated workflow, and for step 1 alone I need to test error handling, max concurrency, set up idempotency, and proper logging. Proper intent communication to my future self. Once I’m done I never have to worry about this specific code again (ok, some errors can be tricky, to be fair), and often this function is practically just my thought, available whenever I need it. This only works with good variable naming and also good spacing of a function. Nobody really talks about it, but if a very unimportant part takes a lot of space in a service, it should probably be refactored into a smaller service.

The goal is to have a function that I probably never have to look at again, and if I do, it answers as fast as possible all the questions my future self would ask when he’s forgotten what decisions needed to be made or how the external parts work. When it breaks I know what went wrong, and when I run it in an orchestration I have the right amount of feedback.
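To make that concrete, here's a toy sketch of my own (the names and the in-memory store are illustrative, not the code being discussed) of a workflow step that is idempotent and logs its intent, so retries are safe and failures are traceable:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("workflow")

# Illustrative only: in a real workflow this would be a durable
# store (database, KV store), not process memory.
_processed: set[str] = set()

def step_one(key: str, payload: dict) -> bool:
    """Idempotent step: calling it twice with the same key is a no-op.

    Returns True if work was done, False if it was skipped.
    """
    if key in _processed:
        log.info("skip %s: already processed", key)
        return False
    try:
        # ... the actual work for step 1 would go here ...
        _processed.add(key)
        log.info("processed %s", key)
        return True
    except Exception:
        # Log enough context for the future self debugging a failure.
        log.exception("step_one failed for key=%s", key)
        raise
```

Retrying after a crash then just means calling `step_one` again with the same key; already-completed work is skipped instead of duplicated.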

Like others, I could go on about this at length, and I’m aware of the other side of the coin, overengineering, but I just feel that having solid, composable units is what actually enables you to later build features and functionality that might be a moat.

Slow, flaky units aren’t likely to become an asset.

And even if I let AI draft the initial flow, honestly the review will never be as good as the step by step stuff I built.

I have to say AI is great for improving you as a developer: to double-check you, to answer broad questions before it gets too detailed and you need to experiment or read docs. It helps to cover all the basics.


So don't write slow, flaky unit tests? Or better yet, have the AI make them not slow and not flaky? Or if you wanna be old school, figure out why they're flaky yourself and then fix it? If it's a time thing then fix that, or if it's a database thing then mock the hell out of it and integration test, but at this point if your tests suck, you only have yourself to blame.

Sorry, I don’t get your point, and you didn’t seem to get mine.

I’m saying I would guess I’m faster building manually than letting AI write it; arguably it won’t even reach the level I feel best with in the future, i.e. the one having the best business impact on my project.

Also, the way I semantically define unit tests is that they are instant and non-flaky, since they are deterministic; otherwise it would be a service test for me.
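To illustrate that definition with an example of my own (not the parent's code): a test stays instant and deterministic when you inject the clock instead of sleeping or reading real time:

```python
import datetime

def make_token(now=None):
    """Build a token expiring in one hour; `now` is injectable for testing."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    expiry = now + datetime.timedelta(hours=1)
    return {"expires_at": expiry.isoformat()}

# Deterministic unit test: a fixed clock, no sleeps, no real time reads,
# so it runs instantly and never flakes.
fixed = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)
token = make_token(now=fixed)
assert token["expires_at"] == "2024-01-01T01:00:00+00:00"
```

Anything that needs the real clock, the network, or a live database falls outside that definition and gets tested at the service level instead.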


I switched to using LLMs exclusively around March last year and I haven’t written a line of code directly since then.

I have followed the usual autocomplete > VS Code sidebar copilot > Cursor > Claude Code > some orchestrator of multiple Codex/Claude Codes.

I haven’t experienced the flow state once in this new world of LLMs. To be honest it’s been so long that I can’t even remember what it felt like.


"My flow state is better than yours"? Point is, I get engaged with the thing and lose track of time.

I can lose track of time watching a movie or playing a video game, but it's not what Mihály Csíkszentmihályi would call "flow state", but just immersion.

LLMs deal with implementation details that get in the way of "flow".

So your solution is to deploy a black box that can be worked around with a basic lookup table for a single field?

CAPTCHAs were never meant to work 100% of the time in all situations, or be the only security solution. They're meant to block lazy spammers and low-level attacks, but anyone with enough interest and resources can work around any CAPTCHA. This is certainly becoming cheaper and more accessible with the proliferation of "AI", but it doesn't mean that CAPTCHAs are inherently useless. They're part of a perpetual cat and mouse game.

Like LLMs, they rely on probabilities that certain signals may indicate suspicious behavior. Sophisticated ones like Turnstile analyze a lot of data, likely using LLMs to detect pseudorandom keyboard input as well, so they would be far more effective than your bespoke solution. They're not perfect, and can have false positives, but this is unfortunately the price everyone has to pay for services to be available to legitimate users on the modern internet.

I do share a concern that these services are given a lot of sensitive data which could potentially be abused for tracking users, advertising, etc., but there are OSS alternatives you can self-host that mitigate this.


Tailscale is no different. It simply makes managing WG configuration easier, and adds some useful value-added features on top.

But, as you know, you can also manage this configuration yourself, either via traditional config mgmt tools, helpers like wg-meshconf, or even plain shell scripts, if you like. I'm aware this is a very HN-Dropboxy comment, but it's really not that complex[1], and is easily manageable for a small deployment.
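To give a flavor of how little there is to manage (the keys, addresses, and hostname below are made-up placeholders), a point-to-point wg-quick config on one host is just a few lines, mirrored on the peer:

```ini
; /etc/wireguard/wg0.conf on host A (all values are placeholders)
[Interface]
PrivateKey = <HOST_A_PRIVATE_KEY>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <HOST_B_PUBLIC_KEY>
AllowedIPs = 10.0.0.2/32
Endpoint = host-b.example.com:51820
PersistentKeepalive = 25
```

A mesh is just more `[Peer]` sections per host, which is exactly the part that tools like wg-meshconf or a short script can generate for you.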

Another VPN tool I used before WG gained momentum was tinc, which supports mesh networking out of the box. It's even easier to configure and maintain, and supports all platforms. It does run in userspace, which should make it slower than WG, but I found the performance acceptable for my modest use cases. Highly recommended.

[1]: https://www.procustodibus.com/blog/2020/11/wireguard-point-t... (this blog is a great WG resource!)

