The UK’s Information Commissioner’s Office (ICO) has confirmed that professional social networking platform LinkedIn has suspended the processing of user data in the country to train its artificial intelligence (AI) models.
“We are pleased that LinkedIn has considered the concerns we raised about its approach to training generative AI models with information relating to UK users,” said Stephen Almond, the ICO’s executive director of regulatory risk.
“We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.”
Almond also said the ICO intends to closely monitor companies offering generative AI capabilities, including Microsoft and LinkedIn, to ensure they have appropriate safeguards in place and take steps to protect the information rights of UK users.
The development comes after the Microsoft-owned company was found to have been training its own AI models on user data without obtaining explicit consent, as part of an updated privacy policy that took effect on September 18, 2024, 404 Media reported.
“At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice,” LinkedIn said.
The company also noted in a separate FAQ that it seeks to “minimize personal data in the datasets used to train the models, including by using privacy-enhancing technologies to redact or remove personal data from the training dataset.”
Users residing outside Europe can opt out of the practice by heading to the “Data Privacy” section of their account settings and turning off the “Data to improve generative artificial intelligence” setting.
“Opting out means that LinkedIn and its affiliates will not use your personal data or content on LinkedIn to further train the models, but this does not affect the training that has already taken place,” LinkedIn said.
LinkedIn’s decision to quietly opt all users into training its AI models comes days after Meta admitted to scraping non-private user data for similar purposes going back as far as 2007. Meta has since resumed training on UK user data.
Last August, Zoom abandoned its plans to use customer content to train its AI models after concerns were raised about how that data might be used, following changes to the app’s terms of service.
The latest development underscores the growing scrutiny of artificial intelligence, specifically of how users’ data and content can be used to train large language models.
It also comes after the US Federal Trade Commission (FTC) released a report that essentially said major social media and video streaming platforms engaged in extensive surveillance of users with lax privacy controls and inadequate safeguards for children and teens.
This personal user information is then often combined with data from artificial intelligence tools, tracking pixels, and third-party data brokers to build more comprehensive consumer profiles, which are then monetized by selling them to other willing buyers.
“The companies collected and could retain data indefinitely, including information from data brokers and information about users and non-users of their platforms,” the FTC said in a statement, adding that their data collection, minimization, and retention practices were “woefully inadequate.”
“Many companies engaged in broad data sharing, which raises serious concerns about the adequacy of controls and oversight over companies’ data handling. Some companies did not delete all user data in response to user deletion requests.”