Microsoft has updated its free AI tool, Designer, after it was used to create fake nude images of Taylor Swift that went viral on social media. The company has added "guardrails" to block the creation of non-consensual imagery and warned that users who create deepfakes will lose access to the service. Microsoft has also condemned the misuse of AI tools and called for stronger action and regulation to keep people safe online.
Designer is a text-to-image program powered by OpenAI's DALL-E 3, a state-of-the-art AI model that generates realistic and diverse images from natural language prompts. Available for free on Microsoft's website, the tool lets anyone create and download images from text descriptions, producing pictures of objects, animals, scenes, and even celebrities in a range of styles, colors, and attributes.
Designer is a popular and powerful tool with legitimate uses in education, entertainment, art, and design. However, it can also be misused for harmful purposes: creating fake and misleading images, violating privacy and consent, and spreading misinformation and propaganda.
A shocking and disturbing scandal
Designer was linked to a shocking and disturbing scandal involving the creation and dissemination of fake nude images of Taylor Swift, the famous singer and songwriter. The images showed Swift naked and surrounded by Kansas City Chiefs players, a reference to her rumored relationship with Travis Kelce, the team's tight end. The images were traced back to Designer after being shared and circulated on X, Reddit, and other websites.
The scandal sparked outrage and criticism from Swift's fans, friends, and representatives, who denounced the images as disgusting and disrespectful and demanded their removal and the prosecution of those responsible. It also raised legal and ethical issues, including the violation of Swift's rights and dignity, the infringement of her intellectual property and image rights, and the potential damage to her reputation and career.
A swift and responsible response
Microsoft responded swiftly and responsibly to the scandal, updating Designer to prevent the use of non-consensual photos. The company said it was investigating the reports and taking appropriate action to address them, and that it had large teams working on guardrails and other safety systems in line with its responsible AI principles, such as content filtering, operational monitoring, and abuse detection. Under its code of conduct, any Designer users who create deepfakes will lose access to the service.
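Microsoft has not published the internals of these guardrails, but prompt-level content filtering of the kind described above can be illustrated with a simplified sketch. The blocklist and function below are purely hypothetical and are not Microsoft's actual implementation; real systems layer trained classifiers, operational monitoring, and abuse detection on top of simple checks like this:

```python
# Illustrative sketch of prompt-level content filtering.
# NOT Microsoft's actual guardrail system -- the blocklist and
# function names here are hypothetical, for explanation only.

BLOCKED_TERMS = {"nude", "naked", "undressed"}  # hypothetical blocklist

def is_prompt_allowed(prompt: str) -> bool:
    """Reject any prompt containing a blocked term (case-insensitive)."""
    words = set(prompt.lower().split())
    return not BLOCKED_TERMS.intersection(words)

print(is_prompt_allowed("a watercolor painting of a lighthouse"))  # True
print(is_prompt_allowed("a nude portrait of a celebrity"))         # False
```

In practice a keyword blocklist alone is easy to evade with misspellings and paraphrases, which is why production systems pair it with classifier-based filtering on both the prompt and the generated image.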
Microsoft also condemned the misuse of AI tools and called for more action and regulation to ensure online safety and security. The company's CEO, Satya Nadella, said that technology companies need to "move fast" to crack down on the misuse of AI tools, and that he wanted an online world that was safe for both content creators and content consumers. Microsoft added that it was working with other stakeholders, including OpenAI, governments, and the wider industry, to establish and enforce standards and best practices for the use of AI tools.