Italy's privacy watchdog has launched an investigation into AI companies that scrape vast amounts of personal data from the internet to train algorithms, raising concerns about transparency and consent.
The regulator seeks to determine whether adequate safeguards govern the internet data collection underlying much of today's AI. The probe comes amid rising global scrutiny of AI ethics and of the data practices that power tech giants' systems.
Companies that indiscriminately amass photos, chats, browsing histories and other material from public platforms to train algorithms have faced mounting criticism. Experts argue that informed consent for such secondary uses is impossible to obtain at scale.
News of the Italian probe follows the regulator's earlier temporary block of ChatGPT over suspected GDPR breaches. It remains unclear which websites or companies currently face scrutiny.
The investigation aims to assess whether firms sufficiently notify individuals before scraping their online data trails for AI training. The regulator has reserved the right to intervene urgently if it finds red flags.
Global Regulatory Trends
Italy's regulator is now consulting academics and advocacy groups on potential policy changes to improve transparency around AI training datasets.
The review parallels surging attention to the governance of algorithms influencing finance, employment, law enforcement and other sectors. The EU is poised to vote on landmark AI regulations next month, with member states such as France and Germany pressing for democratic checks.
The Road Ahead
As AI advances enable new products and efficiencies, questions about the underlying data practices remain unresolved. Italy is stepping forward, seeking technical insight and public feedback to inform its course. Most agree that companies should be clear about what data they collect and how it fuels systems that touch daily life.