If this is really the concern, then the solution they offer is the wrong one. A giant red screen saying "Warning: the identity of this application's developer has not been verified, and it could be stealing your data" would have worked.
What they want is to get rid of apps like YouTube Vanced that are making them lose money (and other Play Store apps)
> What they want is to get rid of apps like YouTube Vanced
I think it is also very telling where they're rolling out first. Brazil, Indonesia, Thailand, and Singapore.
It felt weird that the official press release was quoting entities from these countries, as if it should give confidence to the rest of the world. I can't imagine what these countries would want with apps that can be traced back to a government id...
Vanced and such are more of a First World/Western issue. I don't think you're wrong, but I have a strong gut feeling there are other pressures at work. Something just doesn't smell right...
Hm, not sure about that. I know from browser add-ons that markets like Brazil do suffer from increased scams, especially banking scams. I could see that this is also an issue for scam apps.
Firefox for instance does not allow you to install unsigned extensions. You don't need to list them on their storefront, but they want to perform automated tests and have the ability to block extensions through this signing requirement.
So in principle I can see them wanting to address a legitimate issue, but the way they are going about this is way too centralized. IMO they should do something like we have for web certificates, where vendors can add more root authorities than just the one from Google, and users should be able to add their own root certificates if they want to sideload apps.
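To sketch the idea (purely illustrative; every name and fingerprint below is made up), the trust decision would work roughly like web PKI: an app's signing certificate chains up to some root, and the device accepts any root in its store, including ones the user added:

```python
# Toy model of the "multiple root authorities" idea. Everything here is
# hypothetical: a real system would use X.509 chains and actual signature
# verification, not bare string fingerprints.

VENDOR_ROOT = "vendor-root-fp"   # shipped with the device
USER_ROOT = "my-own-root-fp"     # added by the user to allow sideloading

# Device trust store: vendor-shipped roots plus any user-added roots.
trust_store = {VENDOR_ROOT, USER_ROOT}

# Issuer map: child certificate fingerprint -> parent fingerprint.
issuers = {
    "playstore-app-fp": VENDOR_ROOT,
    "sideloaded-app-fp": USER_ROOT,
    "unknown-app-fp": "shady-root-fp",
}

def chain_to_root(cert: str) -> str:
    """Walk issuer links until we hit a cert with no parent (a root)."""
    while cert in issuers:
        cert = issuers[cert]
    return cert

def is_trusted(app_cert: str) -> bool:
    """An app is trusted iff its chain terminates in a trusted root."""
    return chain_to_root(app_cert) in trust_store

print(is_trusted("playstore-app-fp"))   # True: chains to the vendor root
print(is_trusted("sideloaded-app-fp"))  # True: chains to a user-added root
print(is_trusted("unknown-app-fp"))     # False: unknown root, warn or block
```

A real implementation would verify a signature at each hop of the chain, but the trust decision itself reduces to this root-membership check, which is exactly why user-addable roots would be enough to permit sideloading without abandoning signing.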
> I could see that this is also an issue for scam apps.
I don't deny that it can be used to reduce scams, but I think there are far better ways to solve this that don't give authoritarian countries extra powers. Thing is, signing doesn't actually address the problem. It is a way to track the problem, not prevent the problem. Don't confuse the two.
> Firefox for instance does not allow you to install unsigned extensions.
That's absolutely not true[0]. You need to sign the extension to publish it to their app store but you don't need it to install. Btw, the Playstore already does this too. Which I'm totally okay with!
You can temporarily install extensions in about:debugging, but everything permanent needs to be signed.
> Add-ons need to be signed before they can be installed into release and beta versions of Firefox. This signing process takes place through addons.mozilla.org (AMO), whether you choose to distribute your add-on through AMO or to do it yourself.
What you are saying now is different from what you said before. And this exact distinction applies to the Google situation too.
I mean, test it out. Write a short example extension for Firefox. It doesn't matter that you need to open up about:debugging (just as you need to do extra things on your Android). It'll stay.
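If you want to try it yourself, a minimal test extension is just a folder containing a manifest.json like the one below (all names and values are placeholder examples), loaded via about:debugging → This Firefox → Load Temporary Add-on:

```json
{
  "manifest_version": 2,
  "name": "hello-sideload-test",
  "version": "1.0",
  "description": "Minimal unsigned test extension, for demonstration only",
  "browser_action": {
    "default_title": "Hello"
  }
}
```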
>Vanced and such is more of a First World/Western issue
What? I'm from Brazil and Vanced is as big, if not bigger here. In fact, most of my 'first world' friends just pay for YouTube Premium (or whatever it is called), and these kinds of workarounds are mostly used in countries with less purchasing power.
In addition to the other perspectives already offered here, warning screens such as the one you propose were already shown for sideloaded apps, and these screens worked against Google in their lawsuit with Epic Games. So that's another contributing factor for the policy we're discussing.
If a giant red warning saying 'THIS APP MAY BE MALWARE' doesn't stop someone, then they've either made an informed choice to proceed or it's willful negligence. In other words, users aren't 'trained' to ignore warnings; they're simply being willfully negligent.
It’s because on the other side of that warning is a cracked version of Spotify that removes the adverts.
The user can’t make an informed choice because it’s literally impossible to audit the safety of the app or the author. So they will click past any warnings and follow any number of steps to install the app that gives them something desirable for free.
As someone who is usually careful I too have found myself clicking past warnings and error notifications in recent times, mainly because I want to do something and the software is actively preventing me from doing that. It isn't negligence, it is just wanting to get something done and not having the time or the nerves to carefully read through and think about messages, dialogs, and screens.
Back in the early days of the Internet there was the Joel Spolsky article on why users will always do anything to see the dancing bunnies.
It doesn’t matter what adjectives you apply to them - they do it and they’ll do it again. Most people are not equipped to evaluate the veracity of that statement, and if a few good apps don’t register with Google (that these will exist is the whole reason this move is problematic at all, right?) and ask you to click through on the website or whatever, they’ll get used to touching the stove and not getting burned.
cf. the Windows “it could be malware” blurb. You basically can’t use any software from a small publisher without clicking through it, even if they pay for the code signing certificate.
But then you get situations like, "THIS PRODUCT MAY CAUSE CANCER," being cautioned everywhere, with no distinction between, "this is certainly harmful," and "we just haven't verified it isn't harmful".
The fact that you don't even realise why that wouldn't work is kind of telling.
> users aren't being 'trained' to ignore warnings
Of course they are. Every time they click "continue anyway" and it actually isn't malware (which is 99% of the time) they are being trained that the warning is nonsense.
And they're right! What use is a warning that an app might be malware, if a) it actually isn't almost every time you see the warning, and b) you have no way of telling if it is or isn't anyway?
I hate this move too and I don't think they should have done it, but "just make the warning even bigger!" is obviously dumb.
There aren't too many false positives, it's just that most modern android software is malware.
Saying "this will steal your data" is probably correct.
So what we're actually asking users is to install some malware, as long as it's provided by a big enough tech company, but not other malware. Of course users get confused.
Just stop downloading apps altogether and run the web views in the original web view - the web browser.
Will Google, Meta et al. do that and abandon their apps? Of course not, they need to install malware.
The way we allow paternalistic tech companies to train the consumer to abdicate personal responsibility is going to bite us in the ass sooner or later. I'm betting on sooner.
How about requiring the user to type into a text box "App Foo might be malware. I want to install it anyways."? And disable copy and paste for that box.
Maybe they shouldn't offer an "OK" button that the stupid user can blindly click. They could tell you, "this app is dangerous, go to system settings to enable it", and offer only a "Dismiss" button.
I'll point to Windows Vista that went all in on this kind of security, even giving you a big warning if you tried to change your background. The computer magazines quickly published guides on how to change a slider or registry setting to reduce the amount of stupid warnings, and the people were quickly trained to ignore and just hit OK on these screens.
Anyway, Apple already does this with unknown apps downloaded from the internet: you need to go to security settings and hit a button there.
This is something laughable that Apple does. Anytime you install something from Github it'll make you click a few extra boxes. And their tightening down of things also ends up making people look for third party software in the first place. All this really does is, like you said, teach people to ignore warnings.
You can use Sideloadly to install any .ipa you want.
If you don't have a developer account, it will sign the application using a key that is valid for seven whole days! (If you do have a developer account, it will be valid for one year instead; don't forget to pay the €99/year ransom.)
So what's wrong with that? You get warned, you ignore the warning and get hacked, that's on you for being dumb enough to download stuff from some shady website. Plus, Android is supposed to have decent isolation and permission controls, unlike desktop OSs like Windows or Linux (not counting Snap/Flatpak) where software can read your entire disk or any arbitrary file and send it via the internet.
Plus, you are not required to do that, you can just stick to Google Play and trust what Google approves there. But no need to lock down others because of your recklessness.
Exactly this. I want a big toggle that I can turn on in developer settings (perhaps make it more involved than that, but you get the gist) that says "I acknowledge that from here on in I am responsible for my data and I hereby absolve Google and other interested parties from responsibility should I blah blah blah..."
Why the hell can't I use my rooted device for payments? It's my goddamn money at risk.
My Pixel phone warns me before allowing free installs (I refuse to call it "sideloading") from any app for the first time. And others like Xiaomi show (or used to show) a more prominent warning you had to read with the consequences, waiting at least 10 seconds to enable the option.
Plus the whole "banks need to protect you by vetting your device" premise is stupid when physical cards are protected only by a PIN, while the app already requires some form of biometrics to unlock it, which is used to encrypt the underlying tokens. Banks should protect your money on their end, with clients responsible for keeping their own stuff safe, whether that's their card or their phone. The premise itself is stupid, and it's lazy engineering.
Is the point of the warning to avoid liability or to actually inform the users? If you tell people everything causes cancer (instead of only saying when you've verified it doesn't), soon enough they're going to stop caring when you say stuff like, "don't eat asbestos, that causes cancer". I think a "checkmark" system makes more sense—for verified accounts/developers, put a checkmark near their name, and for unverified ones, have nothing. There's no reason to cause alarm when 99% of the time the alarm is unfounded.
By allowing people to shoot themselves in the foot after ignoring an unmistakable warning, you are helping teach the foolish to be more careful in the future. Making mistakes is the best way to learn something.
People who just ignore big banners will just tell you that "they have been hacked", as if getting hacked is like a weather phenomenon. They won't even connect them getting hacked with the big red banner.
If they even notice, that is. It's just as possible that their phone acts as an open relay for a year before they move to a new one because the battery always dies so fast for some unknown reason.
It's such a simple and effective solution that could be implemented overnight and 'help to cut down on bad actors who hide their identity to distribute malware, commit financial fraud, or steal users' personal data' tomorrow. Mission accomplished, internet saved, and everyone's happy, just like a fairy tale out of the early 2000s.
People have no "control" over their own device if they have malware on it. The weirdo incoherent tech-chauvinism of "control" and "freedom" evidenced all over this thread is one of the most obnoxious trends on HN.
> People have no "control" over their own device if they have malware on it.
You are inadvertently reaching the true core of the question. The ones who have "control" over a device, are those who control the software running on it. Be it the bad guys (in the case of a malware-infested device), a giant corporation (in the case of a locked-down device), or yourself (when you can install and replace any software you want on the device).
Their point stands, though. The vast majority of users do not have either kind of control, so it is a very small concession to them in favor of securing their device against a malicious actor taking control.
I think this is what commenters here are missing. I agree politically with the notion that people should own their devices (having full control), but the reality is not and will never be that the majority have anything but the illusion of control. Meanwhile, as these devices become increasingly necessary for people to exist at all, and the data they store becomes increasingly sensitive, the ability to theoretically install your own software is completely irrelevant compared to the risk of anything bad happening.
Things that would be compromised if my phone is compromised: All private communication, bank accounts, stock portfolio, medical history, driver's license, criminal record, sexual history, grocery habits, all communication between my government and me, real estate deeds and mortgages, two-factor authentication keys, and I suppose my Steam library.
Like, that's a lot. People can lose their homes. The stakes are unfathomably high here.
It's pretty clearly an attempt to establish a clear chain of trust. If you are making a malicious app, the first thing you want to do is hide your identity. It is incredibly important that users can know whose code they are running, and who is responsible for the behavior of the malicious app that destroyed their life.
I can't say whether the specific implementation will be an improvement, but that is clearly the intent.
Meta and Google have not shown themselves to be "malicious" in the sense that is relevant to this discussion. Whatever shady practices they may or may not engage in are very likely entirely within the law, and they are strongly motivated to protect people's personal data, because they will not have users (i.e., their product) if their platforms are insecure.
Meta has been shown to be malicious, up to and including violating permission controls to exfiltrate cookies from the browser with the facilitation of an Android app.
The only reason, and it is the only reason, you do not view Meta as a malicious actor is because they've told you many times they are not.
Most Meta and Google products could be described as keyloggers or spyware. Many break permissions expectations - for example, Google apps have special privileges that allow them to circumvent some permissions on Android.
In addition, both Meta and Google products are primarily ad driven, with the majority of ads being scams. Again, virtually identical to other malicious apps.
Is any of this legal? Maybe, maybe not; you signed a EULA. But if all it takes is a EULA, then most Android malware is not malware, and we're back at square one: Play Protect will not do anything.
And, to be clear, this is intentional. It is not Google's intention to stamp out malware, because they rely on malware. No malware on Android and they go bankrupt.
It is their intention to further extract value out of the Google play store by leveraging their mandatory 30% cut. As well as making Android a more locked down platform and thereby more attractive to advertisers and DRM distributors.
That's just one type of malware - spyware is also malware.
The only difference between a malicious app exfiltrating your cookies through an exploit and Meta exfiltrating your cookies through an exploit is that you trust Meta won't use your cookies to impersonate you, steal your browser sessions, drain your bank account, etc.
But that's just pure trust. Meta could easily have a leak - Experian had a leak and people lost millions of dollars.
"Free" devices exist. Linux computers. Linux phones. No codesigning, minimal sandboxing, none of that "malevolent" stuff from macOS/Windows/Android. Knock your socks off. You have a choice. Ideologically wanting everyone's devices to be like this is not sensible.
This isn't like anticompetitive behavior (bundling, lock-in, fees) where "you have a choice" is irrelevant because corporate power should be minimized and competition and consumer surplus should be maximized. Tradeoffs between security and nerd-fantasy "freedom" are valid.
I still remember that piece about the tween girl getting her nudes exposed because of a RAT. True "freedom" with technology, for non-nerds, means being able to use technology to pursue your passions, learn singing, fashion, dancing, without having to be terrified that this computer might destroy your life. That's "freedom" for 99% of folks. But the high-empathy folks here will respond "user error", "personal responsibility", "you should have known not to click that". You aren't entitled to be care-free, to have a life, to pay no attention to boring nerd stuff. Become a dead-inside geek like us, you bottom-quintile person, or else.
Let's see for how long this remains true. Every step they get closer to making you watch what they want, instead of what you want, it becomes more likely they will try to even prevent you from viewing videos when you use uBlock Origin.