Slack, the popular communication platform, has come under fire for training its machine-learning models on user messages, files, and other content without explicit user permission. Because users are included in this data collection by default and must actively opt out, the practice has raised significant concerns about privacy and data use within the platform.
Opt-out hurdles and privacy concerns
The process for opting out is notably cumbersome. Users cannot exclude their data directly; instead, they must go through their organisation's Slack administrator, who in turn must contact Slack's customer experience team. This convoluted process has been criticised for placing the burden of protecting privacy on users themselves.
I'm sorry Slack, you're doing fucking WHAT with user DMs, messages, files, etc? I'm positive I'm not reading this correctly. pic.twitter.com/6ORZNS2RxC
— Corey Quinn (@QuinnyPig) May 16, 2024
Corey Quinn from DuckBill Group first highlighted the issue after spotting a policy statement in Slack’s Privacy Principles. The statement indicated that to develop AI and machine learning models, Slack analyses “customer data” and “other information,” which encompasses messages, content, files, and usage information.
Mixed messages in privacy policies
Slack's privacy policies contain inconsistencies that add to user confusion. One section claims that Slack cannot access underlying customer content when developing AI and ML models, citing technical measures that prevent such access. Another part of the policy, however, permits training machine-learning models on customer data, contradicting that earlier assurance.
The company has responded to the backlash by clarifying that while it uses platform-level machine-learning models for features like channel recommendations and search results, customers can request to exclude their data from training these models. Nevertheless, this does not fully address the broader concerns about data privacy and the opt-out nature of the policy.
Premium tools and misleading statements
Further complicating matters, Slack's marketing for its premium generative AI tools claims: "Work without worry. Your data is your data. We don't use it to train Slack AI." Observers such as PCMag have criticised this statement as misleading, since it implies comprehensive data protection across all of Slack's AI models, which is not the case.
As the debate over Slack’s use of customer data continues, the tech community and its users are calling for more transparency and easier opt-out options to ensure that personal and organisational data remains secure and private. Slack has yet to respond to inquiries from multiple media outlets regarding these issues, leaving many waiting for further clarification.
Editor’s note: The PR team from Slack has contacted us to highlight significant updates made to their privacy principles, clarifying their practices regarding the use of customer data in AI training. These changes, made in response to the concerns raised, specify that Slack does not use customer data to train generative AI models and only utilises de-identified, aggregate data for improving platform features such as search and recommendations. For full details on Slack’s updated AI privacy principles, please see their blog post: Slack Privacy Principles.