X may train its AI models on your social media posts
X, formerly known as Twitter, has announced plans to harness user posts to train its artificial intelligence models. The revelation comes just months after tech magnate Elon Musk criticised Microsoft for a similar practice involving Twitter data. The announcement, made today, marks a significant shift in X's approach to data utilisation.
The change is set to take effect when a new privacy policy comes into force at the end of September. The policy states that X may use publicly available information, including user-generated content, to "help train our machine learning or artificial intelligence models for the purposes outlined in this policy". This language was absent from the previous version of the terms. The exact AI model X intends to refine remains undisclosed, but it is worth noting that Musk recently established his own AI company.
Responding to concerns regarding user privacy, Musk clarified that only "public data, not DMs or anything private" would be utilised in this endeavour.
This development coincides with Meta's recent announcement that user data from its applications, including Facebook, Instagram, and X's rival Threads, will be employed to enhance AI capabilities for an upcoming chatbot, slated for potential release as early as September.
While TikTok and Snapchat have both ventured into the realm of chatbots, neither has indicated plans to utilise user posts for AI training purposes. Snapchat's AI chatbot, My AI, primarily relies on its own conversations for training, excluding general posts.
It is worth considering why these social media giants are increasingly turning to user data for AI training. Artificial intelligence systems such as ChatGPT require vast volumes of text to train effectively, and the more authentically "human" that data is, the better the resulting model tends to perform.
For those concerned about their data privacy, Meta offers a dedicated opt-out form that lets users prevent their data from being used by third parties for generative AI training. However, it is unclear how far this choice extends to Meta's own products and models, which do not fall under the third-party category.
This new development raises important questions about the ethical implications of utilising user-generated content for AI advancement and the boundaries between data privacy and technological progress. As X and other tech giants move forward in their quest for more sophisticated AI models, the debate surrounding data usage and privacy is bound to intensify. Stay tuned for further updates on this evolving story.