How to remove malicious apps from /e/Apps?

I am writing this post because I believe we need some kind of moderation of the current (and future) app store integrated into the /e/ ecosystem.

I don’t want to be misunderstood, I am really happy that the /e/ ecosystem has its own app store - no doubt it has its flaws and definitely room to improve, but it is working fairly well so far. What I personally appreciate a lot is how simply new apps can be suggested for addition.

But what I am wondering now is how to get apps out of the store again - if needed. I am thinking about (technically) harmful apps and hateful/criminal content. I don’t know how far the thinking has advanced in this respect, or whether appropriate structures are already in place to effectively and quickly stop apps from being distributed - if necessary.

Sometimes the media covers how Google and Apple (as guardians of their own app stores) deal with similar challenges: I remember that early last year Google banned an app from the Play Store that spread hate speech and xenophobic ideas. In that case Google (luckily) reacted pretty quickly.
Also last year, Google banned more than 150 malicious apps from the Play Store. After randomly checking a few items from that list of banned apps this morning, I realised that some of those scam apps are actually available in /e/Apps.

I would like to see those scam apps removed quickly from /e/Apps. But on this occasion I would also like to see rules and procedures evolve for how to deal with such apps in the /e/ ecosystem.

  1. Rules - a set of rules outlining what apps need to respect in order to be distributed via /e/Apps. We should use the occasion to make a clear statement that neither fraudulent apps nor apps that promote/support xenophobic, racist, hateful or criminal content will be distributed (this probably needs to be elaborated further). In practical terms - as anyone can suggest apps to be added to the repository used by /e/Apps - it would be good to introduce a ban list as a technical solution (to which all 151 scam apps mentioned above could be added) and otherwise rely on the community for now (see 2).
  2. Reporting - introduce a report button where users can report scam apps and those with content violations.
  3. Removal procedure - internally, there need to be procedures for dealing quickly, efficiently and transparently with incoming app reports and removal requests.
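The ban-list idea from point 1 can be sketched in a few lines. This is a hypothetical illustration only - neither /e/Apps nor CleanApk exposes such a mechanism, and the package names and the `is_distributable` helper are invented for the example:

```python
# Hypothetical sketch of a package-name ban list gating app submissions.
# The package IDs below are placeholders, not real banned apps.

BANNED_PACKAGES = {
    "com.example.scamwallet",   # a real list would contain the package IDs
    "com.example.fakecleaner",  # of the reported scam apps
}

def is_distributable(package_id: str) -> bool:
    """Return False for apps on the ban list, True otherwise."""
    return package_id not in BANNED_PACKAGES

# Filter a batch of suggested apps before they enter the repository.
submissions = ["com.example.scamwallet", "org.fdroid.fdroid"]
accepted = [pkg for pkg in submissions if is_distributable(pkg)]
print(accepted)  # only the non-banned package remains
```

The point of keeping it this simple is that a plain deny-list can be maintained by the community while the more involved reporting and removal procedures (points 2 and 3) are worked out.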

Further ideas and thoughts are welcome.

Regain your privacy! Adopt /e/, the unGoogled mobile OS and online services


Good point, Ralph. At present the only way is to create an issue on GitLab and have the team remove it. Marking this as a feature request.


The point about malicious apps I think is valid. Frankly I think if you were to implement the first point, it would have to be very tightly controlled. There’s a lot of people who use the term “hate speech” when what they really mean is “things I don’t like” and I’m not sure I trust anyone to be the arbiter of that (we are after all using /e/ to try to limit the influence of Big Tech, and one of those influences is doing everything in their power to eliminate potential competitors).


@HellsBells I agree. But as I said, further thinking is needed. Apart from technically malicious apps, there are apps that are probably unwanted because of their content. The example of the Google ban in the first link from my post above absolutely resonates with me - that’s why I was speaking about hate. But take it as a temporary term. I could imagine many other content-related reasons why apps could/should be banned (arms and drug trading, child abuse etc.). This definitely needs some more thinking and elaboration. But the better things are outlined up front, the easier it will be for the eventual deciders to actually take decisions.

There would also need to be recognition that this is a worldwide project, and laws differ. You mention arms trading; to my knowledge Gunbroker and similar sites don’t have their own mobile apps, but they do go to great pains to ensure their transactions are completed within the law. We’re also seeing the legalization of certain drugs in the US, and I assume that would need to be taken into account. Child abuse I agree needs to be stamped out, you’re cleared hot all day as far as I’m concerned.
The Google ban I’m wary of, as again, I’m wondering how much of that was a ban that was justified and how much was trying to take out a competitor, as well as the fact that in the US we have a law that protects platforms and makes them not liable for what individual users post on the service, provided they aren’t actively censoring some and not others (thus making them a “publisher”). I’m not sure I agree with the idea of an entire app being held accountable for the conduct of individuals.

/e/OS Apps gets its apps from CleanApk. There is no evidence of where CleanApk gets the apps from, but

  • they claim most of the FOSS apps come from F-Droid
  • many people believe that CleanApk gets its apps from Google’s Play Store

I believe that both F-Droid and Google’s Play Store have some protection against malicious apps. You may want to ask /e/ and/or CleanApk what protection they have in place against such apps. For /e/, you could raise a GitLab issue.

Good luck with finding a way to contact CleanApk, who are a very shy organisation, with no clue on their website about where they or their servers are based. You could request that /e/ pose the question to CleanApk, because I don’t think /e/ would be using CleanApk if they didn’t have a way of raising and discussing problems with them 🙂


I am inclined to ask:

How and why do malicious apps get into the Play store in the first place?