vanviegen's comments | Hacker News

Ah.. I'm glad it's just a narrative then, and that there are in fact just as many good things to report and that America is not rapidly becoming an authoritarian state.

Look at the attack vectors that are actually being used, and address them specifically, with minimally invasive measures.

If the problem is apps that allow remote control of your device, that people can be socially engineered into installing, put up barriers to granting just those permissions. That approach would actually help mitigate the problem (whereas scammers can currently just use Google-approved apps for such things).

If the problem is ads that are pushing scams, Google could start with eradicating them from their own network. They seem to be the primary source. And, god forbid, perhaps even offer an ad blocker integrated in Android. (Yeah, I know.)

If the problem is scammers pretending to be a friend or family member in need of help through social apps, Google could force these apps to help users identify these cases (using local privacy-friendly heuristics, of course) as a condition for inclusion in the Play Store. And no, they wouldn't be able to demand the same from apps installed from elsewhere, but that should be firmly outside of their sphere of responsibility. And casual users would be extremely likely to stick with the default app store anyhow.
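To make the "local privacy-friendly heuristics" idea concrete, here is a toy sketch of what an entirely on-device check might look like. Everything in it is a made-up illustration: the pattern list, the scoring scheme, and the threshold are assumptions, not anything Google or any messaging app actually ships.

```python
# Hypothetical on-device heuristic for flagging likely impersonation scams.
# Patterns and threshold are illustrative placeholders only; a real system
# would be far more sophisticated, but could stay equally local and private.
import re

SCAM_PATTERNS = [
    r"\bnew (phone )?number\b",          # "hi mom, this is my new number"
    r"\burgent(ly)?\b",                  # pressure to act fast
    r"\b(wire|transfer|send) (me )?(money|\$|€)",
    r"\bgift ?card",
]

def scam_score(message: str) -> float:
    """Return a 0..1 score: the fraction of known patterns matched."""
    text = message.lower()
    hits = sum(bool(re.search(p, text)) for p in SCAM_PATTERNS)
    return hits / len(SCAM_PATTERNS)

def looks_like_scam(message: str, threshold: float = 0.5) -> bool:
    """Flag the message for a gentle in-app warning, never send it anywhere."""
    return scam_score(message) >= threshold
```

The point of the sketch is that the message never needs to leave the device: the app only decides whether to show the user a warning banner before they reply.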

Note that all three of these proposals provide a measure of safety from the problems they are addressing much larger than what Google is attempting by banning all non-Google-authorized applications.


I am quite genuinely curious: what do you think is the best solution to prevent someone from instructing a tech-illiterate person over the phone to click through every permission warning for a malicious app they're installing? No amount of scary menus will work. I feel like they only have 2 options, which are to limit some permissions without any exceptions (making their platform more closed), or to make it harder to install apps as a whole.

Do you have a better idea?


If there is literally "No amount of scary menus will work," then those people cannot use computers. So long as they can transfer money with it, or do any other action that a scammer may want done, then the scammer can tell them to do it. By that logic they should not be allowed to install banking apps, and would need a legal guardian to manage their digital belongings.

If the solution is that nobody has control of their digital life anymore (see also attempts to require client-side scanning and verify user age, which don't work if said user can override it) then we've lost sight of the bigger picture


It's not clear at all that a scammer is on the phone, instructing people to click through every warning that they see while sideloading a malicious app. As I stated up thread, the majority of these scams are happening through apps in the Play Store.

To address your question, there should be a straightforward option during device setup. If you're first attaching your account to the device, you simply check a box that says this is an advanced user's phone. You can put it behind the same kind of scary pop-ups that web browsers have when they're about to serve you an HTTP page, or when the HTTPS certificate is self-signed.

It's the most obvious, straightforward, user-friendly approach, and it was never even discussed.


> the most obvious, straightforward, user-friendly approach, and it was never even discussed

Fwiw, it was "discussed" in the sense that the person we're arguing with meant upthread ("let's discuss a good solution instead of this boring repetitive outrage"), but it's not like Google listens to that, so any such discussion is pointless anyway. It is indeed the obvious solution and it comes up in each of these threads, but believers like GP can always come up with new rationalizations for why Google doesn't implement one proposal or another.


> It's not clear at all that a scammer is on the phone, instructing people to click through every warning that they see while sideloading a malicious app.

Google claims this to be a very common or majority attack vector.

"The Global Scam Report also found that scams were most often initiated by sending scam links via various messaging platforms to get users to install malicious apps and very often paired with a phone call posing to be from a valid entity."

https://security.googleblog.com/2024/02/piloting-new-ways-to...

> If you're first attaching your account to the device, you simply check a box that says this is an advanced user's phone.

I completely agree this is a perfectly valid solution, but what about those who have already set up their device? The security of the checkbox only works if you tick it before someone attempts to scam you.


All they say is that the apps are malicious, though. The majority of malicious apps distributed on Android are through the Play Store. I really wish they would provide concrete details here because I just don't believe that this is all hinging on sideloading.

I think it's a problem where the only solutions are worse, on the whole, than the disease.

Probably the best option would be the ability to lock down your own device somehow (i.e. put the toggle in the opposite direction by default). This at least lets others around someone vulnerable to this protect them (and probably much more effectively, as the controls can be a lot tighter than 'we once saw an ID we believed was real')


I actually rather dislike having my info spread over a gazillion services, all of them having their own paid accounts or advertising. Also, a single unified search for all communication and shared notes would be very helpful.

Also, I'm not familiar with ClickUp nor Dobase, but I imagine you can have them open in multiple tabs, allowing for your preferred way of working?


I recently used Bunny for a small project. Pretty good experience. And they're rolling out new functionality at a good pace.

> A diagram is a dense way to express information.

I'd say it's a lossy way to express information. I find that architecture diagrams often cannot express the exact concepts I mean to communicate, so you're left trying to shoehorn concepts into boxes that are somewhat similar, and try to make up for the difference using a couple of cryptic words.

Prose doesn't look as nice, but allows me to describe exactly what I want to say, on any level of detail required. Of course, like with a diagram, you do need to put in significant time and effort to make it comprehensible.


> I'd say it's a lossy way to express information.

A simplified explanation of the system is by definition lossy. This equally applies to a plain English description.

I’ve been in many design reviews and similar forums where someone has attempted to present a design through written English and finally someone says “we need a diagram here; this is too much to follow” and everyone in the audience nods because they are all lost.

One of the problems with trying to communicate system design with prose is that it makes sense to the person who writes it and has full context, but the audience is often left confused. Diagrams are often easier to follow, specifically because they look underspecified when they are.


> finally someone says “we need a diagram here; this is too much to follow” and everyone in the audience nods because they are all lost.

Yes, that happens. I can't remember any occasions where the diagram actually cleared things up though.

Coming to think of it, one way that seems to be pretty effective at getting complex designs across is in an interactive presentation with the presenter drawing on a whiteboard, starting simple and adding stuff while explaining what and why. The narrative is very important though. The whiteboard drawings by themselves are absolutely useless.


> I can't remember any occasions where the diagram actually cleared things up though.

I would be very concerned about the quality of the engineers I was working with if they couldn’t produce helpful diagrams.

It’s not coincidental that discussion of system architecture is usually accompanied by diagrams. They should be helpful. And in fact…

> Coming to think of it, one way that seems to be pretty effective at getting complex designs across is in an interactive presentation with the presenter drawing on a whiteboard, starting simple and adding stuff while explaining what and why.

You seem to agree that they are helpful.

> The whiteboard drawings by themselves are absolutely useless.

This seems like sort of a straw man, though. I don’t think anyone advocates for system diagrams in the absence of any context.


> This seems like sort of a straw man, though. I don’t think anyone advocates for system diagrams in the absence of any context.

My point is that I see value in interactively building up a diagram together. The final artifact, even when provided with context in the form of prose, I've never found to be actually helpful. Apart from looking good, of course.


Yeah. If ordering a pizza also regularly involves entering BIOS setup to change boot device ordering, change SATA mode from RAID to AHCI and disable secure boot, depending on your distro.

> change SATA mode from RAID to AHCI

This is funny. I have an HP PC that has an option in the BIOS to "prepare for RAID" or some such. I wondered what that was, so I turned it on. I had Linux on it at the time, and nothing happened. I shrugged and just forgot about it.

Fast forward a few months later, when I gave this PC to my dad. He installed Windows on it, then started thinking the PC was somehow borked: "the installer sees the drive, installs, reboots, then it fails to boot". I was shocked, that PC worked perfectly.

Then I remembered about that setting, told him to untick the box in the BIOS, and he was off to the races.


As you said earlier, therapists are (thoroughly) trained on how to best handle situations. Just 'being human' (and thus empathizing) may not be such a big part of the job as you seem to believe.

Training LLMs we can do.

Though it might be important for the patient to believe that the therapist is empathizing, so that may give AI therapy an inherent disadvantage (depending on the patient's view of AI).


Socialization with other humans has so many benefits for happiness, mental health, and longevity. Conversely, interaction with LLMs often leads to AI psychosis and harms mental health. IMO, this is pretty strong evidence that interaction with LLMs is not similar to socialization with real humans, and a pretty good indicator that LLM “therapy” is significantly less helpful or even harmful than human-driven therapy.


Precisely.


> Just 'being human' (and thus empathizing) may not be such a big part of the job as you seem to believe.

The word “just” is not in my comment anywhere. Being human is necessary, but not sufficient.

And no, you cannot train an LLM to be human.

An LLM is not a therapist. Please do not confuse the two.

You cannot train an LLM on how to be human.


It used to be, but only in cases where your distro doesn't just package whatever software you require. Nowadays I prefer Flatpak or AppImage over crappy custom Windows installers for those cases. They allow for sandboxing and reliable updating/uninstallation.


These days, I equate anything that ships via docker/flatpak first as built by someone who only cares about their own computer, especially if the project is open source. As soon as a library or a tool updates, they usually rush to add a hard dependency on it for no reason other than to be on the "bleeding edge".


I'm with you on this, but I do want to point out that a big reason that people will update bundled libraries like that is because they don't want to put the effort in to see whether their bundled library versions actually have any critical vulnerabilities that affect the project. It's easier to update everything and be sure that there are no critical vulnerabilities.

In other words, the Microsoft Windows update process as applied to software development.


> But you cannot afford to maintain an important feature Google wants to remove, like MV2.

That depends on who "you" is. Maintaining extensive patch sets is still way cheaper than building and maintaining an entire browser.


> After that there will be no stopping of Japan, South Korea and Iran rightfully wanting to have their own nukes.

And I'm afraid they'll be far from the only ones...

