Agitateur de Neurones

Google takes AI security very seriously

Google believes that the security of artificial intelligence systems should be a major concern for their designers.

Below is an excerpt from Ian Goodfellow’s interview at the EmTech Digital conference.

“I want Machine Learning to be as secure as possible before we rely on it too much.” – Ian Goodfellow of Google Brain.

The talk: Ian Goodfellow, who created generative adversarial networks (GANs), wants to make AI systems more secure by training them on the same examples that could fool them in the first place.
Protect and defend: Computer vision software can be fooled by adversarial examples, which could be something as simple as a photo with a few changed pixels or as complicated as a 3D-printed turtle (which, true story, Google’s computer vision software mistook for a rifle). But if machine learning algorithms are trained on those same adversarial attacks, they can learn to spot and dismiss them (see the code sketch after this digest).
Why it matters: With older technologies, like a PC’s operating system, security could be beefed up only after bad actors exploited a weakness. That won’t work with safety-critical systems, like self-driving cars or facial recognition used for airport security, which need to be robust enough to fend off attackers from day one.
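
To make the "train on the attacks" idea concrete, here is a minimal sketch of adversarial training using the Fast Gradient Sign Method (FGSM) introduced in Goodfellow et al.'s 2015 paper "Explaining and Harnessing Adversarial Examples". The model, optimizer, and `epsilon` value are illustrative assumptions for the sketch, not details from the talk.

```python
# Minimal sketch of FGSM-based adversarial training (assumed setup:
# a PyTorch classifier over images scaled to [0, 1]).
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Perturb x one signed-gradient step in the loss-increasing direction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to
    # the valid pixel range so the result is still a plausible image.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One optimizer step on a clean batch plus its adversarial counterpart."""
    x_adv = fgsm_example(model, x, y, epsilon)
    optimizer.zero_grad()  # also clears gradients left over from FGSM
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The single signed-gradient step keeps the perturbation small enough to be imperceptible to a human while still shifting the model’s prediction; training on both the clean and perturbed batches is what teaches the model to spot and dismiss such attacks.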