I've always encouraged everyone more junior to review everything regardless of who signs off, and even if you don't understand what's going on/why something was done in a particular way, to not be shy to leave comments asking for clarification. Reviewing others' work is a fantastic way to learn. At a lower level, do it selfishly.
If you're aiming for a higher level, you also need to review work. If you're leading a team or above (or want to be), I assume you'll be doing a lot of reviewing of code, design docs, etc. If you're judged on the effectiveness of the team, reviews are maybe not an explicit part of some ladder doc, but they're going to be part of boosting that effectiveness.
Or just use Scala, which already has a robust library ecosystem, and everyone criticizes the BDFL for being an out-of-touch academic, so you don't need to worry about appeals to authority.
Why do people let tweens wander a mall unattended when there are things like brewery/restaurants inside? Because it's illegal to serve them alcohol and as a social convention you know they won't.
Society works a lot better when we stop the few bad actors who are out to exploit children, and instead expect everyone to look out for kids/generally behave in prosocial ways. Things stop working when we say "why wouldn't you assume everyone around you is out to harm your kids and act accordingly?"
We can just say "actually you're not allowed to put gambling in a game targeting 7 year olds".
Unless you mean it just can't detect objects that small, my guess is we'll see things calibrate toward a lot more birds being cooked in active war zones vs drones with explosives being let through.
Because their (or their friend's) computer can't run the anticheat, but they're interested in playing with friends? My sister and mom wanted me to play Valorant with them a few years back, but apparently it needs kernel anticheat, so I just can't run it. I'm not going to buy a new computer for a game.
And the way community policing worked in the past is that the "police" (refs) could just kick or ban you. They don't need a trial system if the community doesn't want that.
There is one: universities. They're just really expensive so you can't stay there for more than a few years, and people aren't properly advised of how important the opportunity is.
Talking to a real human seems more depressing to me, especially when they're making less than $2/hour doing it, have multiple chats going all trying to hit sales targets, and they feel bad for you in the interaction. Paying for female attention is pretty bad, but not even getting the attention you paid for is just bleak. At that point go with the machine. At least it's not thinking "what the hell am I doing here?" while it's generating messages.
> App and website developers shouldn't be burdened with extra costly liability
Why not? Physical businesses have liability if they provide age restricted items to children. As far as I know, strip clubs are liable for who enters. Selling alcohol to a child carries personal criminal liability for store clerks. Assuming society decides to restrict something from children, why should online businesses be exempt?
On who should be responsible, parents or businesses, historically the answer has been both. Parents have decision making authority. Businesses must not undermine that by providing service to minors.
Spell it out: how do ID checks for specific services (where the laws I've read all require no records be retained with generally steep penalties) create an infrastructure for total surveillance? Can't sites just not keep records like they do in person and like the law mandates? Can't in-person businesses keep records and share that with whomever you're worried about?
How do you reconcile porn sites as a line in the sand with things like banking or online real estate transactions or applying for an apartment already performing ID checks? The verification infrastructure is already in place. It's mundane. In fact the apartment one is probably more offensive because they'll likely make you do their online thing even if you could just walk in and show ID.
I mean, we're talking about age verification in the OS itself in some of these laws, so tell me how it doesn't.
Quantity has a quality all its own. We're not just seeing it for porn; it's moving to social media in general. Politicians are already talking about it for all sites that allow posts, which would include this site.
App and website developers having liability is an alternative to OS controls. Mandatory OS controls are OS/device manufacturers having liability. I agree that's a poor idea, and actually said as much about a year ago, pointing out that this California bill was the awful alternative when people were against bills like the one from Texas. It's targeting the wrong party and creates burdens on everyone, even if you don't care about porn or social media.
They're separate concepts. Clearly, obviously, mandating OS controls is creating liability for OS providers, not service operators. Other states do liability for providers without mandating some other party get involved.
California is also stupid for creating liability for service/app providers that don't even deal in age restricted apps, like calculators or maps. It's playing right into the "this affects the whole Internet/all of computing" narrative when in fact it's really a small set of businesses that are causing issues and should be subject to regulation.
It implies that the user has access to the technical infrastructure that supports age verification. Sucks to be you if you can't afford a recent Apple or Android device to run the AgeVerification app.
There is also the problem of mission creep. Once the infrastructure to control access to age-restricted content is in place, other services might become out of reach. In particular, anonymous usage of online forums might no longer be possible.
The EU Digital Wallet requires hardware attestation, so it only works on locked-down, government-approved OSes. That opens the door to government control of all electronic devices.
OS-level ability to verify the age of the person using it absolutely provides infrastructure for the OS to verify all sorts of other things. Citizenship, identity, you name it. When it's at the OS level there's no way to do anything privately on that machine ever again.
I agree that a checkbox for if the user is over 18 opens the door to a checkbox for if the user is a citizen and even a textbox for the user's full name (which already exists on Linux so you better boycott Debian now!). I don't see how such input fields are "total surveillance".
Dueling physical analogies is never a productive way to resolve a conversation like this. It just diverts all useful energy into arguing about which analogy is more accurate, and it doesn't matter: the people pushing this law don't care about any of the analogies and aren't going to stop even if the entire internet manages to agree on one. This needs to be fought directly.
How do we fight? It seems like, agree or disagree, this isn't going to stop. There's so much money behind it, at a time when the have-nots can barely survive as is.
The OS is not the club's door. The OS is unrelated. The strip club needs to hire someone to work their door and check ID, not point at an unrelated third party. They should have liability to do so as the service provider.
For one thing, it's fairly uncommon for children to purchase operating systems. As long as there is one major operating system with age verification, parents (or teachers) who want software restrictions on their children can simply provide that one. The existence of operating systems without age verification does not actually create a problem as long as the parents are at least somewhat aware of what is installed at device level on their child's computer, which is an awful lot easier than policing every single webpage the kid visits.
So I agree that operating system and device developers should not be liable. That's putting a burden on an unrelated party, and it's a bad solution that could well lead to locked-down computing. I meant that liability should lie with service providers, e.g. porn distributors: the people actually dealing in the restricted item. As a rule of thumb, we shouldn't make their externalities other people's problems (assuming we agree that their product being given to children is a problematic externality).
> Physical businesses have liability if they provide age restricted items to children.
These are often clear cut. They're physical controlled items. Tobacco, alcohol, guns, physical porn, and sometimes things like spray paint.
The internet is not. There are people who believe discussions about human sexuality (ie "how do I know if I'm gay?") should be age restricted. There are people who believe any discussion about the human form should be age restricted. What about discussions of other forms of government? Plenty would prefer their children not be able to learn about communism from anywhere other than the Victims of Communism Memorial Foundation.
The landscape of age restricting information is infinitely more complex than age restricting physical items. This complexity enables certain actors to censor wide swaths of information due to a provider's fear of liability.
This is closer to a law that says "if a store sells an item that is used to damage property in any way whatsoever, the store is liable", so now the store owner must fear that a full can of soda could be used to break a window.
That's not a problem of age verification. That's a problem of what qualifies for liability and what is protected speech, and the same questions do exist in physical space (e.g. Barnes and Noble carrying books with adult themes/language).
So again, assuming we have decided to restrict something (and there are clear lines online too like commercial porn sites, or sites that sell alcohol (which already comes with an ID check!)), why isn't liability for online providers the obvious conclusion?
> That's a problem of what qualifies for liability and what is protected speech
The crux is we cannot decide what is protected speech, and even things that are protected speech are still considered adult content.
> why isn't liability for online providers the obvious conclusion?
We tried. The providers with power and money (Meta) are funding these bills. They want to avoid all liability while continuing to design platforms that degrade society.
This may be a little tin-foil hat of me, but I don't think these bills are about porn at all. They're about how the last few years people were able to see all the gory details of the conflict in Gaza.
The US stopped letting a majority of journalists embed with the military. In the last few decades it's been easier for journalists to embed with the Taliban than with the US military.
The US government learned from Vietnam that showing people what it's doing cuts domestic support. I've seen people suggesting it's bad for Bellingcat to report on the US strike on the girls' school because it would hurt morale at home.
The end goal is labeling content covering wars/conflicts as "adult content". Removing any teenagers from the material reality of international affairs, while also creating a barrier for adults to see this content. Those who pass the barrier will then be more accurately tracked via these measures.
>Plenty would prefer their children not be able to learn about communism
Plenty of people would prefer that children not learn about scientology from pro-scientology cultists too. It's not that they can't know about scientology (they probably should, in fact, because knowledge can have an immunizing effect against cults)...
And it's not that they can't know about communism (they probably should, in fact, because knowledge can have an immunizing effect against cults)...
Would you also be against learning about capitalism from the Heritage Foundation?
This is a comment section about large corporations lobbying against our ability to freely use computers, and you break out the '80s Cold War propaganda edition of a complicated economic system, one intertwined with a methodology for historical analysis and implemented in various ways at the governmental level.
"Concise" isn't specific enough. I've primed mine on the basic architecture I want: imperative shell/functional core; don't mix abstraction levels in one function; each function should be simple to read top-to-bottom, with higher-level code doing only orchestration and no control flow. Names should express business intent. Prefer functions over methods where possible. Use types to make illegal states unrepresentable. RAII. Etc.
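To make that concrete, here's a minimal sketch of the shape I mean (Python, with illustrative names; not from a real codebase):

    from dataclasses import dataclass
    from typing import Union

    # Functional core: pure data and pure decisions. The tagged union makes
    # the illegal state "shipped but no tracking number" unrepresentable.
    @dataclass(frozen=True)
    class PendingOrder:
        order_id: str

    @dataclass(frozen=True)
    class ShippedOrder:
        order_id: str
        tracking_number: str

    Order = Union[PendingOrder, ShippedOrder]

    def ship(order: PendingOrder, tracking_number: str) -> ShippedOrder:
        # Pure function with a business-intent name: no I/O, trivially testable.
        return ShippedOrder(order.order_id, tracking_number)

    # Imperative shell: orchestration only. Effects live at the edges; the
    # interesting decision lives in the pure core.
    DB: dict[str, Order] = {"A-1": PendingOrder("A-1")}  # stand-in for real storage

    def handle_ship_request(order_id: str, tracking_number: str) -> None:
        order = DB[order_id]                             # I/O at the edge
        if isinstance(order, PendingOrder):              # narrow the union
            DB[order_id] = ship(order, tracking_number)  # pure decision

    handle_ship_request("A-1", "TRACK-123")
    print(DB["A-1"])  # ShippedOrder(order_id='A-1', tracking_number='TRACK-123')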
You need to think about what "good taste" is to you (or find others who have already written about software architecture and take the ideas of theirs that you like). People disagree on what that even means (e.g. some people love Rails; to me a lot of it seems like the exact opposite of "good taste").
Basically all of my actual programming work has been done by LLMs since January. My team demoed a PoC last week that hooks Codex up to our Slack channel to act as our first-level on-call: in the case of a defect (e.g. a PagerDuty alert, or a question that suggests something is broken), it goes and debugs, pushes a fix for review, and suggests any mitigations. Prior to that, I had pushed for my team to do the same thing with copy/paste into a prompt so we could iterate on building its debugging skills.
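The glue for something like that is thinner than it sounds. A rough sketch of the shape, with the agent call stubbed out (the webhook URL and ask_agent are placeholders, not our actual PoC):

    import json
    import urllib.request

    SLACK_WEBHOOK = "https://hooks.slack.com/services/..."  # placeholder, not a real hook

    def ask_agent(alert_text: str) -> str:
        # Placeholder for the real agent call (an LLM API request primed with
        # the team's runbook and repo context). Hypothetical, not the PoC itself.
        return f"Triage for: {alert_text}\nEvidence, hypotheses, and suggested mitigation go here."

    def handle_alert(alert_text: str) -> None:
        # Hand the alert to the agent, then post its triage back to the channel.
        # Slack incoming webhooks accept a POST with a {"text": ...} JSON body.
        triage = ask_agent(alert_text)
        req = urllib.request.Request(
            SLACK_WEBHOOK,
            data=json.dumps({"text": triage}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    # handle_alert("PagerDuty: p99 latency breach on checkout service")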
People might still code by hand as a hobby, but I'd be surprised if nearly all professional coding isn't being done by LLMs within the next year or two. It's clear that doing it by hand would mostly be because you enjoy the process. I expect people that are more focused on the output will adopt LLMs for hobby work as well.
I suspect this is more true than most people think. Today's bad code will be cleaned up by tomorrow's agents.
The other factor that gets glossed over is that LLMs create a financial incentive to write cleaner code, with tests, because the agent you pay for will be more efficient when the code is easier to understand and has clear patterns for extensibility. When I code with LLMs, a big part of it is demonstration, i.e. pseudocoding a pattern/structure, asking the model if it understands, and then having it complete the pattern. I've had a lot of success with this approach.
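Concretely, the demonstration step looks something like this: hand the model one fully worked instance of the pattern, then ask it to complete the parallel piece (illustrative Python, not a real repo):

    import sqlite3

    # Step 1 of the prompt: one worked instance of the team's pattern.
    class UserRepository:
        """Data access for users: queries only, no business logic."""

        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn

        def get(self, user_id: int) -> dict | None:
            row = self.conn.execute(
                "SELECT id, name FROM users WHERE id = ?", (user_id,)
            ).fetchone()
            return {"id": row[0], "name": row[1]} if row else None

    # Step 2 of the prompt: confirm understanding, then complete the pattern:
    #   "Does the structure of UserRepository make sense? If so, write
    #    OrderRepository to mirror it exactly: constructor takes the
    #    connection, get() returns a dict or None, queries only."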
> LLMs create a financial incentive to write cleaner code, with tests, because the agent you pay for will be more efficient when the code is easier to understand and has clear patterns for extensibility
Right, this is the kind of discussion we're having on my team: suddenly all of the already good engineering practices, like good observability, clear tests with high coverage, and clean design, act as a massive force multiplier and are that much more important. They're also easier to do if you prioritize them. We should be seeing quality go up. It's trivial to explore the solution space with throwaway PoCs, collect real data to drive your design, do all of those "nice to have" cleanups, etc.

The people who assume LLM = slop are participating in a bizarre form of cope. Garbage in, garbage out; quality in, quality out. Just accept that coding per se is not going to be a profession for long. Leverage new tools to learn more, do more, etc. This should be an exciting time for programmers.
> It's clear that doing it by hand would mostly be because you enjoy the process.
This will not happen until companies decide to care about quality again. They don't want employees spending time on anything "extra" unless it also makes them significantly more money.
> It's clear that doing it by hand would mostly be because you enjoy the process.
This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might. Like PFAS, ozone holes, global warming.
Ultimately you always have to trust people to be judicious, but that's why it doesn't make any changes itself; it only suggests mitigations (and my team knows what actions are safe, has context for recent changes, etc.). It's not entirely a black box, though. E.g. I've prompted it to collect and provide a concrete evidence chain (relevant commands+output, code paths) along with competing hypotheses as it works. Same as humans should do as they debug (don't just say "it's this"; paste your evidence as you go and be precise about what you know vs. what you believe).
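For concreteness, the structure I prompt it to emit looks roughly like this (the field names and sample values are my own illustration, not a standard format):

    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        command: str    # the exact command the agent ran
        output: str     # the relevant slice of its output
        inference: str  # what this establishes, and what it doesn't

    @dataclass
    class Hypothesis:
        claim: str                  # the "it's this", stated precisely
        status: str                 # "known" vs "believed"
        evidence: list[Evidence] = field(default_factory=list)

    # A debugging report is competing hypotheses tied to concrete evidence,
    # not a single unsupported conclusion. Illustrative values:
    report = [
        Hypothesis(
            claim="checkout latency comes from a missing index on orders.user_id",
            status="believed",
            evidence=[Evidence(
                command="EXPLAIN SELECT * FROM orders WHERE user_id = 42;",
                output="Seq Scan on orders ...",
                inference="full table scan confirmed; doesn't yet rule out lock contention",
            )],
        ),
    ]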
That sounds like the perfect recipe for turning a small problem into a much larger one. On-call is where you want your quality people, not your silicon slop generator.