* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the account that is moderated (including assigning other moderation tools of choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who let their kids online without supervision, and without setting some flag that the user agent is operated by a child, are implicitly allowing their children to interact with strangers. This tends to work out better in more controlled and limited circumstances, where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
I think this is the way. Not control; just make it simpler for parents to handle their children's devices.
You don't have to make everyone share their age; you just make it so that parents can, in a simpler way, choose what their children should be able to access.
Make it easy to do right; don't add more control.
It's kind of like the old anti-piracy copy protections. The pirates always cracked them, and in the end, the one who got to sit there trying to figure out what the word in the manual was was the user who actually paid for the game. So it made things worse for the ones who paid, and better for the cracked version.
So, make it simple.
The tech community has had pretty much free rein over the last few decades, and has always chosen adult convenience over child safety (and mostly profit over both). The "middle ground" probably involves a bigger transfer than this.
There's probably a much better solution than "adults vs children" but very few with our expertise seem seriously interested in solving for safer children, which essentially leads to inexpert solutions gaining popular support.
I won't call myself an expert in this field, and haven't given it much thought, but a couple options just off the top of my head...
1. Limit child accounts to "classic" social network functionality. They get to see things from mutual friends. No algorithmic feeds, kids aren't in the user search, and no way for messages to be sent/received unless both sides have consented.
2. Disable chat for child accounts. How many chat apps do children really need? Each one is another potential vector for issues that parents would need to monitor.
I'm sure there is a monkey's paw here, but either option seems better than no end-to-end encryption for anyone, at a time when government surveillance is a bigger issue than ever.
Frankly, I think option 1 would be better for all users, not just children. Go back to classic "social networks". This "social media" experiment has failed.
I feel like I can think of lots of situations where society puts in safeguards to protect children rather than leaving it to the parents (age ratings on films and games, YouTube Kids, regulations around advertising to children, the whole concept of school, reduced speed limits around playgrounds, to give a few examples off the cuff).
My point more was that you said it wasn't a widely shared opinion, but to my mind, it is broadly the status quo. Whether that is good or bad is a separate point.
It's not reactionary to say you don't want the state to interfere too much in your child's education.
Whether you are left or right, it's fine as long as the state aligns with you. But if you open a history book, you will see it VERY OFTEN happened that states got crazy/ideological, or just plain eugenicist/liberticidal.
I think getting the age thing correct is key to getting parental classification to work properly (I think now platforms just ask for a birth date, which is lame), e.g.
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct — and AI cannot do this; only community ombudsmen/age verifiers can, in a privacy-first way* — the app stores can easily tell the app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass this and install apps on devices that their parents hand to them. There will always be black-market devices with these apps, but existing tech offers ways to keep those to a very small minimum.
Why do you need any third parties whatsoever? Just have the parents do it. They configure a setting on the kid's device, which the device uses to determine what content to display. All you need from the app/service is a rating for the content. No third party should ever have to know anything about the user, because the user's device knows that, and the device knows it because the parents configured it.
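The mechanism described above can be sketched in a few lines: the parent sets a maximum rating locally on the device, the service only supplies a rating per piece of content, and the device does the comparison. This is a minimal illustration; the rating bands and all names here are invented, not any real platform's API.

```python
# Hypothetical sketch: device-side content filtering driven only by a
# parent-set profile. The service never learns the user's age; it just
# attaches a rating to each piece of content.

from dataclasses import dataclass

# Ordered rating bands, loosely modeled on app-store age tiers (an assumption).
RATING_ORDER = ["everyone", "teen", "mature", "adult"]

@dataclass
class ParentalProfile:
    # Set locally by the parent; never transmitted to the service.
    max_rating: str

def is_allowed(profile: ParentalProfile, content_rating: str) -> bool:
    """Return True if the content's rating is within the parent-set limit."""
    return RATING_ORDER.index(content_rating) <= RATING_ORDER.index(profile.max_rating)

profile = ParentalProfile(max_rating="teen")
print(is_allowed(profile, "everyone"))  # True
print(is_allowed(profile, "mature"))    # False
```

The point of the sketch is where the knowledge lives: the only input from the network is `content_rating`, so no identity or age ever leaves the device.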
This all depends on fantasy tech and/or totalitarian control of tech.
Who verifies that the person verifying the child's age is actually authorised to do that? Who verifies that verification? And so on up. This needs a chain of trust that can only end up at government. And that chain of trust will then be open to being abused by shitty politicians.
What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
What you propose here is the death of open computing. And I personally believe that we would be much better off as a species if we kept open computing and just taught our kids how to handle social media better.
> What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
This one is easy. You just don't require all devices to do that. The parent isn't required to give the kid a general purpose computer. You don't need to prevent every device from running DOOM, only one device, and then parents who want to impose such restrictions get the kid one of those.
- The line between "general purpose computer" and "not that" is weird. Android is an implementation of Linux, after all. Probably the best example is a Steam Deck. It's just Arch Linux, you can get to a desktop on it no problem, and you get sudo access and can install whatever you like on it. Are you saying that Responsible Parents should not get their kids a Steam Deck?
- And that raises the point of how responsible are we making parents for technical decisions that they do not necessarily have the knowledge to implement? If a child works out how to circumvent the age restriction and look at boobies (or whatever) and an authority finds out, are the parents liable? Are they likely to be prosecuted? Isn't this just adding more burden and bureaucracy to the job of parenting?
> Are you saying that Responsible Parents should not get their kids a Steam Deck?
I'm saying Authoritarian Parents should not get their kids a Steam Deck. If the kid can run arbitrary code then they can get a VPN and access websites hosted in Eastern Europe and then any of this is moot because there is no law you can impose on Facebook to do anything about it.
> If a child works out how to circumvent the age restriction and look at boobies (or whatever) and an authority finds out, are the parents liable?
No, because the parents rather than the "authorities" (who TF is that anyway?) should be the ones in charge of the decision whether the kid can look at boobies to begin with.
The devices that offer a mode that blocks all unapproved content are presumably going to advertise it. If you buy something that doesn't say it has anything like that, and then it doesn't, that's the expected result. If you buy a device that says it does and then it doesn't, now you have a bone to pick with the OEM.
It’s very hard to control kids’ internet access. Impossible, really. Even if you do it fine at home, once they go to school it’s whatever policies the school has. Most require laptops and provide internet access.
so the school takes on that responsibility, as deputized by the parents.
Kids don't get unfettered access to the streets while at school. They can't take their bikes and ride out at will. What makes the internet and devices any different? The devices provided by the school should be lockdown-able, and kids should not be provided their own device unless there's a parental lock (which is enabled during school hours, and is similarly locked down).
The school does not take responsibility. Schools will tell you that what your kid does at home is the parent's responsibility, even if it is done on the school device. Parents do not have the ability to configure the content controls on the device itself, so technically sophisticated parents resort to tweaking router settings.
I feel uncomfortable about the idea of controlling children, even my own. Certainly there is a requirement to protect children from others but I feel like putting in guard rails to prevent children from themselves only leads to making things taboo and, as a result, more interesting.
Google is basically its own private internet. It caches content so you can access all sorts of terrible stuff just from Google.com (and its related domains).
But if you cut Google you cut Google Classroom - which is required.
And Google Classroom itself has many workarounds.
This isn’t just a Google problem. The centralization of the Internet around a few mixed content domains really kills conventional filtering.
Paradoxically, there are so many centralized domains that even if you can block one, it’s just a game of whack-a-mole.
Eventually you just block the whole internet - and then what’s the point? Take away the 20 most popular mixed content platforms, messaging, etc, and you’re effectively blocking the whole internet.
The kids can’t contact their friends, watch educational videos, or any other legitimate use.
> Classifying accounts as child accounts (moderated by a parent)
Notice also that even if you do this, you still don't need the service to be able to decrypt the content, only the parent.
This could even be generically useful, e.g. you have a messenger used by business and then the messages can be read by the client company's administrator/manager but not the messaging company's.
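The structure being described is standard key wrapping: encrypt the message once under a random message key, then wrap that key separately for each authorized reader (recipient and parent/administrator), with no wrap the service itself can open. Below is a toy sketch of that structure only; the XOR "encryption" is NOT real cryptography, just a stand-in so the envelope shape is visible, and all names are invented.

```python
# Toy sketch (NOT real crypto): one ciphertext, with the message key
# wrapped once per authorized reader. A real system would use public-key
# wrapping (e.g. a sealed box per reader) instead of shared XOR keys.

import secrets

def xor(a: bytes, b: bytes) -> bytes:
    # Truncates to the shorter input; fine for this illustration.
    return bytes(x ^ y for x, y in zip(a, b))

def send(plaintext: bytes, recipient_key: bytes, parent_key: bytes) -> dict:
    msg_key = secrets.token_bytes(32)
    assert len(plaintext) <= len(msg_key)
    return {
        "ciphertext": xor(plaintext, msg_key),  # message encrypted once
        "wrapped_for_recipient": xor(msg_key, recipient_key),
        "wrapped_for_parent": xor(msg_key, parent_key),
        # Note what is absent: no wrap the service could open itself.
    }

def open_as(envelope: dict, my_key: bytes, wrap_field: str) -> bytes:
    msg_key = xor(envelope[wrap_field], my_key)
    return xor(envelope["ciphertext"], msg_key)

recipient_key = secrets.token_bytes(32)
parent_key = secrets.token_bytes(32)
plaintext = b"hello via e2e"
env = send(plaintext, recipient_key, parent_key)
```

Either the recipient or the parent can recover the plaintext from their own wrap, while the server stores only opaque blobs; the business-messenger case is the same envelope with an administrator key in place of the parent key.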
And the only reason it is permissible to presumptively treat people as underage until proven otherwise in the physical world is that there isn't a constellation of intermediaries collecting all your habits and preferences when you buy porno magazines or alcohol in person.
Why is the answer people seem to arrive at being "mandatory collection of blackmail material that will ruin careers and relationships" when it comes to the Internet?
Because "somebody has to think of the children." At this point, I am convinced parental instincts are being abused to slowly but surely install more-or-less complete public surveillance. It's a rather obvious approach. You have to appeal to something emotional. And parents, and their apparent unwillingness to take up responsibility for their parenting, are the perfect target. Even childless adults will chime in to toot the "somebody has to think of the children" mantra.
Yeah, how dare someone do or say anything that some random crazy asshole could use to threaten that person's personal or professional life or even put them in danger of physical harm.
To hell with gay kids growing up in very traditional religious areas in much of the world.
That person who made a racist joke on Discord when they were 13 years old? That should be able to ruin them when they're 30!
Someone confiding to a friend over social media DMs that they're in an abusive relationship with someone violent? Well - she shouldn't be surprised when her partner beats her within an inch of her life when he finds out. If only she did what she was told, right?
And let's not forget the cringiest or most sexual thing you've ever said online - make sure that your every utterance in private would pass scrutiny by your employer's HR department!
Seriously...I don't understand people like you. What a small, listless, and unusually safe world you must live in.
You may as well have asked why can't everyone think and act like you as well as live in your particular region of the world with the same friends, family, romantic, and professional opportunities that you've been provided throughout your life.
Unfair presentation. What they suggested was more akin to, "Assume someone with keys is an adult, and let them start the truck."
Dad should either know his children would never drive the truck without permission, or keep his keys as safe as his wallet (and if he can't trust his kids with keys, you bet his wallet needs protection).
That doesn't work, unless the system knows everyone's family relationships.
Not guesses. Not is told about and takes on trust. Knows.
There's nothing to stop a kid creating a fake adult account and using it as an adult, perhaps creating their own kid account for "official" use.
Ultimately this is an unsolvable problem without a single source of truth for verified ID and user age.
The only responsible way to do that is to create a global "ID escrow" agency, where ID details are private and aren't available to governments or corporations without a court order, but the agency can provide basic age checks and other privacy services of a limited nature.
Good luck with that idea in this culture.
Meanwhile we have the opposite - real ID is known to governments and corporations, personal habits and beliefs of all kinds can be tracked, there is zero expectation of privacy, and kids still aren't protected.
I disagree. The government SHOULD be able to force people to do things to be up to standard for the good of society. Attending a basic level of public education? Maintenance of hygiene? REASONABLE actions to prevent the spread of diseases? I believe a vaccine which has been scientifically proven to have a lower risk of death than the disease it prevents qualifies as such.
Contrast the above with a case of two lives in one package. An independently functional mother and an as yet unborn child. Is it reasonable to allow the mother to risk their own life (and endanger the linked child's life) in pursuit of some belief when that risk does not spread to others? That is a very different question than one which has an impact on risk to society as a whole.
I will say, if you support enforcing a particular outcome against 'parents rights' in this case, you had better also be for more state intervention and standards upkeep with respect to ensuring that child has sufficient resources and support to become a functioning member of society. If you're willing to go that far, then I can support the logical stance of extending said support even to the point of forcing the child out of their mother against the only individual who could consent or deny consent for that effort.
The US is allergic to taxes. Maybe it's a marketing thing. Benefits paid for by society.
Maybe a department of Return on Investment. See what those taxes pay for. Contrast to buying private versions of the services at the same SLA or better.
It’s more that the US is more like a collection of 50 little countries, and it’s supposed to be hard to accomplish much at a federal level. That separation has eroded a bit in the last 50 years but it’s still very much a part of our political ideology.
It's more of a thinker if you get past the very well done and entertaining first layer of 'slop'. (Content warning: there's some offensive potty humor and LOTS of violence in that first layer!)
The movie considers potential. A literal multiverse of potential. It also explores how society treats people using their potential and time in different ways. As fellow readers go gray and their family relations get older, they too will likely have the misfortune of knowing people entering dementia. How people are treated as they slide away from this reality is represented rather well by the film.
Society absolutely needs to more correctly incentivize smart people to 'get together' to form child creating and raising units (families).
Maybe society should focus on supporting high quality environments for raising children well.
This probably includes a bunch of budget-expensive things like...
* Rich interaction between smart adults and children, at low density
* Good breakfast and lunch at a minimum
* Year-round childcare
* Great medical care for every child
If we like the idea of biological parents bonding strongly with their children, the whole 'work from home' and 'work life balance' things should also be strongly evaluated. I happen to think that delivering strongly on the above points would also pair well with at least some 'work from home' so that parents have time to work, time for being human, and time to be a good parent. Harder to measure experimental results probably include a healthier emotional and motivational status, lower stress for everyone involved, and maybe even higher output if not just higher quality output during hours worked.
The filled out ballot is intended to be fully anonymous.
It is then slipped into a security sleeve to make it harder to read within the envelope.
The envelope is sealed and signed by the citizen.
Security is provided by the envelope which is the attestation that the citizen cast their ballot. Offhand, the county voting office is likely required to retain the ballot as part of the state/federal records. I haven't checked but that or a centralized ballot repository are the only things that make sense.
Once the ballot is removed from the envelope, it is just a sheet of paper with votes on it. There's no name, serial number, or signature on it.
Hence "stuffing" in more ballots cannot be detected.
Printing the ballots on security paper will not eliminate this risk, but it will make the fraud much harder.
I don't know if there is an auditable "chain of custody" of ballots from mailbox to the counting center. The fraud here would be "losing" ballots that are from precincts that tilt significantly in one direction or another.
There's a bigger issue than stuffing. In "rural" Hungary, chain voting is customary: people are taken to the voting place by gangs and are either rewarded with some money or a bag of potatoes, or threatened with a beating if they do not comply. The first voter of the chain goes in, takes the ballot, hides it, and takes it out. It is then pre-filled by the gang. The next voter takes the prefilled ballot in, throws it in the box, and brings a fresh clean ballot out, and so on...
In other cases, people get money/bag of potatoes for a photo of their correctly filled ballot.
That sounds good. But it doesn't account for the ballot's path from your mailbox to the processing center. Nor does it check citizenship and residency status. Ballot harvesting is also legal and takes place in Washington state.
>The envelope is sealed and signed by the citizen.
Alas, the signature must reasonably match one on file (from somewhere ... presumably a state ID) or the ballot may be rejected. Since human signatures can vary wildly for reasons, this non-deterministic feature requires a human guess for -each- ballot. No mechanism to dispute that decision.
Mine has been disputed several times (because it changed due to name change and wasn't updated). There is a very clear mechanism to dispute that decision, and in fact that's why they ask for your phone number and/or email on the envelope--so when they want to dispute it, they have a way of contact for you to do what's necessary to make the ballot count (provisionally, only if the race is close enough for your vote to matter).
This is the way I like to do it. I know bloating the logs too much can be a problem, but it's even worse to lack the information to reconstruct what happened when there ends up being a problem. And only providing that detail when there's an error isn't enough. What if the issue never triggered an error in the application, and it was only caught later, either by a person noticing something was off or by an error in a downstream system?
Also, it's helpful to log before operations rather than after, because if a step gets stuck, it's possible to know what it's stuck on.
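The log-before-the-operation pattern described above can be shown in a few lines; the function and field names here are purely illustrative.

```python
# Minimal sketch: log intent *before* each step, with enough context
# (ids, counts) to reconstruct what happened afterward. If a step hangs,
# the last log line tells you exactly what it was attempting.

import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("worker")

def process_order(order_id: str, items: list) -> int:
    log.info("processing order id=%s items=%d", order_id, len(items))
    handled = 0
    for item in items:
        # Logged before the work, so a stuck item is identifiable.
        log.info("handling order id=%s item=%s", order_id, item)
        handled += 1  # stand-in for the real per-item work
    log.info("finished order id=%s handled=%d", order_id, handled)
    return handled

process_order("A-123", ["book", "pen"])
```

Even when every step succeeds, this trail lets a later investigation (the "downstream system noticed something was off" case above) replay what the process actually did.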
Basic functions of society should never be run by 'for profit' standards. Do you want a 'for profit' fire department, medical care system, or law enforcement systems? These are core support services for an orderly society.
Citizens that rely upon these services in their time of need often have no other recourse.