Mobile apps such as FaceApp must address user concerns about privacy and security. FaceApp has gone viral and now has nearly 150 million users. That is nearly twice the number of users impacted by Cambridge Analytica. This should be a major privacy concern for users. US presidential campaigns are likely to ban the use of FaceApp or similar apps.
Naysayers often respond with arguments like the following:
- Scaremongering privacy freaks
- Users trade some privacy for vanity features
- It is up to Apple, Google, and Facebook to remove apps that violate their policies
- GDPR / CCPA are over-regulation
- Press and journalists crush innovation
FaceApp, like most apps, collects information provided by the user and usually keeps it for an extended period of time. Because the photos collected by FaceApp are valuable data for AI and machine learning, it is likely that the company stores them. The original uploaded picture is usually considered owned by the user. Any transformation may be considered a derivative work and is likely owned by FaceApp.
Images and their derivative works are highly valuable for training AI-based facial recognition algorithms, and for other purposes.
The history of edits, modifications, transformations, and other actions performed in FaceApp will also be stored. These are likely considered owned by the company, as they are actions done within the app. It is very likely that each user makes several modifications to the pictures before sharing the final picture with friends or on social media. The resulting intermediate images are likely considered derivative works owned by FaceApp.
FaceApp uses third-party analytics such as Google Analytics, CloudFlare, and other tracking cookies. These analytics may not directly relate the visit history to a specific user. However, with additional cookies and device identifiers, the company can easily sell or share detailed user information to advertisers.
Device identifier information is easy to obtain in mobile apps; it takes only a few lines of code when the app fires up. Any app can create its own device identifier when first installed. The result is a detailed history of app usage: the duration of time spent in the app, what was done, when, and how. Combined with the user metadata collected, this information is a treasure for advertisers.
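The "few lines of code" pattern described above can be sketched as follows. This is a hypothetical illustration in Python, not FaceApp's actual code; the file path and function name are my assumptions, and a real mobile app would use its platform's app-private storage and identifier APIs instead.

```python
import json
import uuid
from pathlib import Path

# Hypothetical storage location; a real app would use app-private storage.
ID_FILE = Path("app_data") / "device_id.json"

def get_device_id() -> str:
    """Return a persistent per-install identifier, creating one on first launch."""
    if ID_FILE.exists():
        return json.loads(ID_FILE.read_text())["device_id"]
    # First launch: generate a random identifier and persist it.
    device_id = str(uuid.uuid4())
    ID_FILE.parent.mkdir(parents=True, exist_ok=True)
    ID_FILE.write_text(json.dumps({"device_id": device_id}))
    return device_id

# Every subsequent call (and every later app session) returns the same
# identifier, which can then be attached to each analytics event.
```

Once such an identifier exists, every event the app reports can carry it, which is what makes the detailed per-device usage history possible.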
To all the naysayers who say this is typical of any mobile or web app: they are correct. This information collection is similar to that of other mobile and web apps.
Use of information collected
The main stated purpose of information collection by FaceApp is product enhancement. This is common to other mobile and web apps. Mobile and web apps often collect information to improve the service, enhance features, add new features, highlight useful features, test new features, or fix problems as they occur.
Marketing and advertising: Unlike several popular mobile and web apps, FaceApp expressly states that it may “provide personalized content and information to you and others, which could include online ads or other forms of marketing”. The “others” could mean anyone, including nation-state actors such as Russia, China, and others.
To all the naysayers who ask why the company would share the data with Russia: see my additional commentary below under jurisdiction.
Automated updates: The information is used for automated updates. While this may look benign, just imagine that FaceApp is constantly connecting to its servers in the background even when it is not being used. The information shared by FaceApp in the background is not visible. It may not directly include location data, but tracking cookies can easily provide proximity data. In my opinion, mobile apps must, by default, ask for user permission to check for updates and to perform updates.
Sharing of information
FaceApp carves out exceptions for sharing with the following:
- Affiliates of the group of companies they belong to (there is no listing of these companies or the group)
- Affiliates have similar rights as FaceApp
- Marketers and advertisers
- 3rd party advertising partners
- Ad networks
- Service providers (most mobile and web apps usually store information with service providers and do not actively share information with them)
- And, if the company gets acquired, the acquiring company will have all the information
This can be read in two ways. One: this is a responsible company and will share information responsibly. Two: if this is not a responsible company, or the company gets pushed around, it may sell, share, license, or otherwise monetize the information however it sees possible, including licensing it to nation-state actors (see jurisdiction below).
Data sovereignty and jurisdiction
Why do sovereignty and jurisdiction matter? This is indeed the right question to ask. Let’s compare a viral mobile app from a company in California with the viral FaceApp. Let’s assume they have similar information collection, use, and sharing terms. So, a level playing field.
For the mobile app in California, existing privacy laws and CCPA come into effect. A side note: current privacy laws could not do much about the Cambridge Analytica scandal, and Facebook was slapped with a $5 billion fine for lax compliance. CCPA, similar to GDPR, provides better control to the consumer:
- Ask for details on how the information is used
- Ask for details on the third-party companies the information is shared with
- Ask to be forgotten
- Ask to not sell information
- Penalties on data breach or data loss
- Penalties on non-compliance (up to $7,500 per violation)
FaceApp does not have any of the above compliance requirements. Today, FaceApp does not state compliance with any privacy laws. To be considered a responsible steward of user information, FaceApp must voluntarily incorporate these policies. It must provide an easy way for its users to exercise their privacy rights. Today, FaceApp asks users to send an email to exercise the right to be forgotten. This is not sufficient.
Let’s assume the Russian government serves a warrant for user information. Will FaceApp’s founder(s) risk going to jail by fighting for users’ privacy? For the viral mobile app in California, there is precedent that the company will fight for the privacy rights of its users. The state and federal courts do provide protections.
So what’s next? FaceApp has responded with the following statement:
1. FaceApp performs most of the photo processing in the cloud. We only upload a photo selected by a user for editing. We never transfer any other images from the phone to the cloud.
2. We might store an uploaded photo in the cloud. The main reason for that is performance and traffic: we want to make sure that the user doesn’t upload the photo repeatedly for every edit operation. Most images are deleted from our servers within 48 hours from the upload date.
3. We accept requests from users for removing all their data from our servers. Our support team is currently overloaded, but these requests have our priority. For the fastest processing, we recommend sending the requests from the FaceApp mobile app using “Settings->Support->Report a bug” with the word “privacy” in the subject line. We are working on the better UI for that.
4. All FaceApp features are available without logging in, and you can log in only from the settings screen. As a result, 99% of users don’t log in; therefore, we don’t have access to any data that could identify a person.
5. We don’t sell or share any user data with any third parties.
6. Even though the core R&D team is located in Russia, the user data is not transferred to Russia.
Additionally, we’d like to comment on one of the most common concerns: all pictures from the gallery are uploaded to our servers after a user grants access to the photos (for example, https://twitter.com/joshuanozzi/status/1150961777548701696). We don’t do that. We upload only a photo selected for editing. You can quickly check this with any of network sniffing tools available on the internet.

Source: FaceApp
More useful links: