The Australian government may take a strict stance to ensure that young users cannot access AI chatbots. According to Reuters, Australian regulators may require app storefronts to block AI services that don't implement age verification to restrict mature content by March 9.
“eSafety will use the full range of our powers where there is non-compliance,” a representative of the commissioner said in a statement to the publication. Those avenues “may include actions regarding gatekeeper services such as search engines and app stores that provide key points of access to particular services.”
A review by Reuters found that only nine of the 50 leading text-based AI chat services in the region had offered or shared plans for age assurance. According to the report, eleven services "had content filters or planned to block all Australians from using their service," leaving the large majority with no public plans just a week before the country's deadline. AI companies could face fines of up to A$49.5 million ($35 million) for failure to comply.
There is ongoing debate around the world over which parties are responsible for preventing children from accessing potentially harmful material. In the US, for example, Apple and Google are lobbying for the task to be handled by the platforms themselves rather than app store operators. The language from Australian regulators regarding app stores is hardly definitive at this stage, but given the blanket ban on social media and some highly social digital platforms for citizens under 16 implemented last year, an aggressive stance appears to be in line with leaders' priorities.