Everyone is talking about FaceApp - the app that can edit photos of people's faces to show younger or older versions of themselves.
Thousands of people are sharing the results of their own experiments with the app on social media.
But since the face-editing tool went viral in the last few days, some have raised concerns over its terms and conditions.
They argue that the company takes a cavalier approach to users' data - but FaceApp said in a statement that most images were deleted from its servers within 48 hours of being uploaded.
The company also said it only ever uploaded photos that users selected for editing and not additional images.
What is FaceApp?
FaceApp is not new. It first hit the headlines two years ago with its "ethnicity filters".
These purported to transform faces of one ethnicity into another - a feature that sparked a backlash and was quickly dropped.
The app can, however, turn blank or grumpy expressions into smiling ones. And it can tweak make-up styles.
This is done with the help of artificial intelligence (AI). An algorithm takes the input picture of your face and adjusts it based on patterns learned from other facial imagery.
This makes it possible to insert a toothy smile, for instance, while adjusting lines around the mouth, chin and cheeks for a natural look.
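FaceApp has not published how its algorithm works, but many face-editing tools follow a common pattern: a neural network compresses the face into a short list of numbers (a "latent vector"), nudges those numbers along a learned direction such as "older" or "smiling", and decodes the result back into an image. The sketch below illustrates that idea in Python using PyTorch; the network is untrained and the edit direction is random, both standing in for components a real app would learn from large photo datasets.

```python
# Illustrative sketch only - not FaceApp's actual model.
# The network is untrained and the "older" direction is random; a real
# app would learn both from millions of labelled face photos.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: a 64x64 RGB image -> a compact latent vector
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, latent_dim)
        )
        # Decoder: latent vector -> a 64x64 RGB image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 3 * 64 * 64), nn.Sigmoid(),
            nn.Unflatten(1, (3, 64, 64))
        )

    def edit(self, image, direction, strength=1.0):
        z = self.encoder(image)          # compress the face to numbers
        z = z + strength * direction     # move along the attribute axis
        return self.decoder(z)           # reconstruct the edited face

model = FaceAutoencoder()
photo = torch.rand(1, 3, 64, 64)         # placeholder for a user photo
age_direction = torch.randn(128)         # placeholder "older" direction
older = model.edit(photo, age_direction)
print(older.shape)                       # torch.Size([1, 3, 64, 64])
```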
So what's the problem?
Eyebrows were raised recently when app developer Joshua Nozzi tweeted that FaceApp was uploading troves of photos from people's smartphones without asking permission.
However, a French cyber-security researcher who uses the pseudonym Elliot Alderson investigated Mr Nozzi's claims.
He found that no such bulk uploading was going on - FaceApp was only taking the specific photos users decided to submit.
FaceApp also confirmed to the BBC that only the user-submitted photo is uploaded.
What about facial recognition?
Others have speculated that FaceApp may use data gathered from user photos to train facial recognition algorithms.
This can be done even after the photos themselves are deleted, because measurements of the features on a person's face can be extracted first and stored separately.
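To see why, here is how such an extraction works with the open-source face_recognition library for Python - there is no suggestion FaceApp uses this particular tool, and the file names are placeholders. Once the 128-number encoding is saved, the photo it came from is no longer needed to match the face.

```python
# Illustration of why deleting a photo does not delete its biometric value.
# Uses the open-source face_recognition library; the file names are
# placeholders and this is not FaceApp's confirmed tooling.
import face_recognition

image = face_recognition.load_image_file("user_photo.jpg")
encoding = face_recognition.face_encodings(image)[0]  # 128 numbers describing the face

# "user_photo.jpg" could now be deleted - the encoding alone is enough
# to recognise the same face in any future photo.
new_image = face_recognition.load_image_file("later_photo.jpg")
new_encoding = face_recognition.face_encodings(new_image)[0]

matches = face_recognition.compare_faces([encoding], new_encoding)
print(matches)  # [True] if both photos show the same person
```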
"No, we don't use photos for facial recognition training," the firm's chief executive, Yaroslav Goncharov told BBC News. "Only for editing pictures."
Is that it?
Not quite. Some question why FaceApp needs to upload photos at all when the app could in theory just process images locally on smartphones rather than send them to the cloud.
In FaceApp's case, the server that stores user photos is located in the US. FaceApp itself is a Russian company with offices in St Petersburg.
Cyber-security researcher Jane Manchun Wong tweeted that this approach may simply give FaceApp a competitive advantage - keeping the processing on its own servers makes it harder for others developing similar apps to see how the algorithms work.
Steven Murdoch, at University College London, agreed.
"It would be better for privacy to process the photos on the smartphone itself but it would be likely [to be] slower, use more battery power, and make it easier for the FaceApp technology to be stolen," he told BBC News.
US lawyer Elizabeth Potts Weinstein argued the app's terms and conditions suggested user photos could be used for commercial purposes, such as FaceApp's own ads.
But Lance Ulanoff, editor-in-chief of tech site Lifewire, pointed out that Twitter's terms, for example, contained a similar clause.
Are users aware of all this?
For some, this is the nub of the issue. Privacy advocate Pat Walshe pointed to lines in FaceApp's privacy policy that suggested some user data may be tracked for the purposes of targeting ads.
The app also embeds Google Admob, which serves Google ads to users.
Mr Walshe told BBC News this was done "in a manner that isn't obvious" and added: "That fails to provide people with genuine choice and control."
Mr Goncharov said the terms in FaceApp's privacy policy were generic and that the company did not share any data for ad-targeting purposes.
Instead, he added, the app made money through paid subscriptions for premium features.