Microsoft Joins Thorn and All Tech Is Human to enact strong child safety commitments for generative AI

By Courtney Gregoire, Chief Digital Safety Officer
While millions of people use AI to supercharge their productivity and expression, there is a risk that these technologies will be abused. Building on our longstanding commitment to online safety, Microsoft has joined Thorn, All Tech Is Human, and other leading companies in their effort to prevent the misuse of generative AI technologies to perpetrate, proliferate, and further sexual harms against children. Today, Microsoft is committing to implementing preventative and proactive principles into our generative AI technologies and products.
This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to collectively tackling tech and society’s complex problems, aims to mitigate the risks generative AI poses to children. The principles also align with and build upon Microsoft’s approach to addressing abusive AI-generated content. That includes the need for a strong safety architecture grounded in safety by design, to safeguard our services from abusive content and conduct, and for robust collaboration across industry and with governments and civil society. We have a longstanding commitment to combating child sexual exploitation and abuse, including through critical and longstanding partnerships such as the National Center for Missing and Exploited Children, the Internet Watch Foundation, the Tech Coalition, and the WeProtect Global Alliance. We also provide support to INHOPE, recognizing the need for international efforts to support reporting. These principles will support us as we take forward our comprehensive approach.
As a part of this Safety by Design effort, Microsoft commits to take action on these principles and to transparently share progress on a regular basis. Full details on the commitments can be found on Thorn’s website.
Today’s commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. This collective action underscores the tech industry’s approach to child safety, demonstrating a shared commitment to ethical innovation and the well-being of the most vulnerable members of society.
We will also continue to engage with policymakers on the legal and policy conditions to help support safety and innovation. This includes building a shared understanding of the AI tech stack and the application of existing laws, as well as on ways to modernize law to ensure companies have the appropriate legal frameworks to support red-teaming efforts and the development of tools to help detect potential CSAM.
We look forward to partnering across industry, civil society, and governments to take forward these commitments and advance safety across different elements of the AI tech stack. Information-sharing on emerging best practices will be critical, including through work led by the new AI Safety Institute and elsewhere.