Australia moves to purge AI-generated child abuse content from search engines

Australia's eSafety Commissioner, Julie Inman Grant, has raised concerns about the misuse of artificial intelligence (AI) tools to generate disturbing content such as child abuse material and terrorist propaganda. She has called on major search providers, including Google, Microsoft's Bing, and DuckDuckGo, to take decisive action to remove such harmful material from AI-powered search engines. The move is part of new industry standards aimed at addressing these issues.

Inman Grant emphasized the need for businesses to proactively mitigate the harm caused by their products, noting that "synthetic" child abuse content and generative AI-driven propaganda from terrorist organizations are already in circulation and demand immediate attention. Microsoft and Google have recently announced plans to incorporate AI tools, ChatGPT and Bard respectively, into their consumer search engines. Inman Grant pointed out that this advance in AI technology necessitates a reevaluation of the existing "search code" governing these platforms.


One significant aspect of the new regulations is that they expand the code's scope to cover not only online content that search engines retrieve in response to user queries but also material that these services might generate themselves. This broader approach will compel search engines to continually review and improve their AI tools so that "class 1A" content, which includes child sexual exploitation, pro-terror content, and extreme violence material, does not appear in search results, with measures such as delisting and blocking enforced.

The eSafety Commissioner highlighted the concern that malicious actors are harnessing new AI tools for illicit purposes, such as producing child abuse material. In response, the new rules will not only hold tech companies accountable for minimizing harm on their platforms but also encourage the development of safety tools, including the detection of deepfake images. Attorney-General Mark Dreyfus said the Australian Federal Police is using AI technology to identify child abuse material, sparing officers from manually examining photos. A related initiative encourages adults to submit images of themselves as children to help train these AI models, and Dreyfus pledged his support by contributing a childhood photo of his own.

In summary, Australia is taking significant steps to combat the misuse of AI in generating harmful content. These measures aim to promote responsible AI use and to protect individuals, especially children, from exposure to disturbing material on search engines.


© Copyright 2023. All Rights Reserved Powered by Vygr Media.