LinkedIn plans to use your data to train its AI. Here’s how to stop it

LinkedIn users might not realize they’re giving the site permission to use their personal data and any content they create on the platform to train the company’s generative AI models. But if they’re in the U.S., Canada, or one of several other countries, odds are they soon will.

Updates to the company’s user agreement that go into effect November 20 will automatically opt users into sharing their data. (A social media posting from Rachel Tobac, CEO of Social Proof Security, first brought the issue to light Wednesday afternoon.)

“Some of the changes being made are to account for updates to new products and features in the generative AI space as well as license terms that allow our creators to expand their brands even beyond LinkedIn,” said Kalinda Raina, LinkedIn’s chief privacy officer, in a video update on the site.

The update will affect a wide swath of users, including everyone in the U.S. Raina noted, however, that the company is pausing the collection and use of data from members in the European Union and Switzerland for its generative AI modeling; both regions have much stricter privacy laws.

A quick look at the company’s settings confirmed the changes. If you’d prefer that your information and posts not be part of LinkedIn’s AI efforts, getting out of the auto opt-in isn’t difficult, but it will require a little work on your part. Here’s how to do it.

Opting out of LinkedIn’s AI training on desktop

  • Once you are logged into LinkedIn’s website in your browser, click on “Settings and Privacy” under the “Me” tab. Then select “Data Privacy.”
  • From there, look under the “How LinkedIn uses your data” header. At the bottom, you’ll find “Data for Generative AI Improvement.” Click on that and toggle the switch to “Off” to opt out.
  • While you’re in those settings, you might also want to look at the Data Research tab, which gives LinkedIn permission to use data about you for social, economic, and workplace research.

Opting out of LinkedIn’s AI training on the mobile app

  • First, tap your profile picture in the upper left corner of the app, then “Settings” at the bottom of the screen.
  • Click on “Data privacy,” then “Data for Generative AI Improvement.” Toggle the switch to “Off” to opt out.

While the new user agreement doesn’t go into effect until November, a help page on LinkedIn seems to indicate data collection may already be taking place.

“Where LinkedIn trains generative AI models, we seek to minimize personal data in the datasets used to train the models, including by using privacy-enhancing technologies to redact or remove personal data from the training dataset,” the page reads.

A LinkedIn spokesperson did not say how long the AI system has been harvesting user information without the option for users to opt out.

“We believe that our members should have choices over how their data is used, which is why we are making available an opt-out setting for training AI models used for content generation in the countries where we do this,” said Greg Snapper, another LinkedIn spokesperson. “We’ve started letting our members know about these updates across multiple channels.”

LinkedIn initially introduced its AI tool last October and rolled out an expansion earlier this year. LinkedIn Learning is still in beta but already offers AI summaries and answers to questions on the site.

That could be worrisome to content creators, many of whom use LinkedIn as a platform to get broader exposure for their thoughts. Reporters and news outlets (including Fast Company) use it to promote their stories, and many business executives use it as a forum to position themselves as thought leaders in their area of specialty.

AI summaries of those posts could potentially make it harder for creators to capitalize on their work—and the summaries often do not give any credit to the original author.

Tobac found that out firsthand: sentences from her LinkedIn post warning of the practice were seemingly lifted into an AI answer below it, and the answer did not credit her for the thoughts.