YouTube announced in September that new measures would be put in place to better protect children by removing data-targeting features on videos identified as being aimed at kids.
“Starting in about four months, we will treat data from anyone watching children’s content on YouTube as coming from a child, regardless of the age of the user. This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications.”
A new audience setting was introduced in November for creators to designate whether their videos are made for kids, alongside YouTube's own machine-learning system that will identify content and override the creator's designation "if abuse or error is detected".
The change was announced after the US Federal Trade Commission hit Google with a $170 million penalty last year as part of a settlement over an investigation into the privacy of children's data on the site. YouTube says that it's committed to helping creators "navigate this new landscape and to supporting our ecosystem of family content" via new assistance tools and prompts, and that it will share more information on these tools "within the coming months". However, it remains a grey area for creators trying to identify what counts as content made for kids and how their channels should be set.
It's a welcome move to protect younger audiences from those able to exploit the platform, but it leaves creators with a degree of uncertainty. It'll be interesting to see how things play out and how well the AI tool can pick up on specific content.