YouTube’s Secret Experiment? Creators Claim Shorts Are Being Altered by AI Without Consent

YouTube is once again under scrutiny from its creator community—this time over alleged artificial intelligence (AI) modifications to Shorts videos. Several content creators have claimed that their videos were altered by the platform without their permission, sparking a debate over transparency, trust, and the fine line between machine learning and generative AI.

Creators Spot “Unwanted Effects” in Shorts

The controversy surfaced after YouTube content creator Rhett Shull noticed that his Shorts looked different from the same videos he had uploaded to Instagram. In a side-by-side comparison, he claimed that the YouTube version appeared “smoothened,” with what looked like an “oil painting effect” on his face.

His observation echoed complaints from a Reddit post dated June 27, titled “YouTube Shorts are almost certainly being AI upscaled.” The user shared screenshots showing variations in video quality across resolutions and claimed that AI had been used to add or remove details such as smoother faces, sleeker hair, and even erased wrinkles on shirts.

Both accused YouTube of deceptively altering their content without clear communication or consent.

YouTube Responds: “No Generative AI Involved”

In response to the growing backlash, YouTube admitted that some Shorts were indeed being edited as part of an ongoing experiment—but denied the use of generative AI.

Rene Ritchie, YouTube’s head of editorial and creator liaison, explained on X (formerly Twitter):

“No GenAI, no upscaling. We’re running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing—similar to what a modern smartphone does when you record a video.”

Ritchie clarified that Generative AI (GenAI) refers to advanced technologies like transformers and large language models, while upscaling typically means boosting a video’s resolution (e.g., 480p to 1080p). YouTube, he insisted, was doing neither.
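To make that distinction concrete, the sketch below contrasts traditional per-frame enhancement (denoising and sharpening, in the spirit of “unblur, denoise, and improve clarity”) with resolution upscaling. It uses OpenCV as an assumed stand-in and is purely illustrative; it is not YouTube’s actual processing pipeline.

```python
# Minimal sketch of the two techniques Ritchie distinguishes, using OpenCV as a
# stand-in. This is NOT YouTube's pipeline; it only shows that both operations
# are conventional signal processing, with no generative model involved.
import cv2

# Hypothetical single frame extracted from a Short.
frame = cv2.imread("frame.png")

# 1) "Unblur, denoise, and improve clarity": classic denoising plus unsharp masking.
denoised = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

# 2) "Upscaling": raising the resolution (e.g., 480p -> 1080p) by interpolation.
upscaled = cv2.resize(frame, (1920, 1080), interpolation=cv2.INTER_CUBIC)

cv2.imwrite("enhanced.png", sharpened)
cv2.imwrite("upscaled.png", upscaled)
```

Neither step invents new detail the way a generative model would; the first cleans up what is already in the frame, and the second simply stretches it to a higher pixel count.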

Despite the explanation, many creators remain dissatisfied. Shull, in his video, stressed that the issue was not just about technology but about trust:

“The most important thing I have as a YouTube creator is that you trust what I’m making. Replacing or enhancing my work with some AI upscaling system not only erodes that trust with the audience, but it also erodes my trust in YouTube.”

For creators, the platform’s failure to ask permission or notify them before making such alterations feels like an infringement on their creative control.

This controversy also highlights a broader question: where does one draw the line between AI and machine learning? To many, “machine learning” is still a form of AI, regardless of whether it is generative or traditional. YouTube’s insistence on the distinction while altering creators’ work has only deepened suspicions.

Some users fear that these silent experiments could become a slippery slope, leading to more aggressive content manipulation without user control in the future.

YouTube has not confirmed how long the experiment will continue or whether creators will eventually get the option to opt out. For now, the company maintains that the changes are designed to improve video clarity and viewing experience—but creators argue that any such “improvements” should not come at the cost of artistic integrity and audience trust.

As the AI-versus-machine-learning debate rages on, one thing is clear: transparency is key. Without it, even the best-intentioned experiments risk alienating the very creators who keep the platform alive.
