You're building a cool AI (Artificial Intelligence) startup.
Suddenly, your AI system fails.
Or worse, it gets hacked.
What went wrong?
AI is becoming really powerful.
And there's an increasing dependence on it.
But companies cite security and privacy as the top barriers to adopting it.
To protect your AI, you need to know the risks.
Testing AI is hard because it's complex and expensive. And when AI isn't tested properly, it fails in production. For example, some teams only measure a single aggregate metric instead of checking performance on each subset of their data. Companies like Robust Intelligence help test and measure AI systems at scale.
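A quick way to see why a single aggregate metric can mislead: in the toy sketch below (all labels, predictions, and slice names are made up for illustration), overall accuracy looks healthy while one data slice fails completely.

```python
import numpy as np

# Hypothetical labels and predictions, split into two data slices
# (e.g. two user segments). All numbers here are invented.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])
slices = np.array(["A"] * 8 + ["B"] * 2)

# The single aggregate number looks fine: 80% accuracy.
overall = (y_true == y_pred).mean()
print(f"aggregate accuracy: {overall:.0%}")

# Per-slice evaluation reveals the model gets slice B completely wrong.
for s in ["A", "B"]:
    mask = slices == s
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"slice {s} accuracy: {acc:.0%}")
```

A model like this would pass a naive aggregate check and still fail in production for every user in slice B.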
Imagine someone hacking into your self-driving car or your Alexa. These new AI attacks target the underlying machine learning algorithms. Examples include evasion attacks (perturbing inputs at inference time to fool the model) and poisoning attacks (contaminating the training data).
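To make the evasion idea concrete, here's a minimal sketch in the spirit of the Fast Gradient Sign Method (FGSM), applied to a hypothetical toy logistic model. The weights, input, and epsilon are all made up for illustration; real attacks target far larger models, but the mechanics are the same.

```python
import numpy as np

# Toy evasion attack: nudge an input along the sign of the loss
# gradient so a trained classifier changes its answer.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a "trained" model (hypothetical)
b = 0.1
x = rng.normal(size=8)   # a legitimate input

def predict(x):
    # Sigmoid probability of class 1 for a logistic model.
    return 1 / (1 + np.exp(-(w @ x + b)))

y_true = 1 if predict(x) > 0.5 else 0

# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
# Stepping along its sign maximally increases the loss within a
# small epsilon-sized perturbation.
grad = (predict(x) - y_true) * w
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad)

print(f"clean input -> P(class 1) = {predict(x):.3f}")
print(f"adversarial -> P(class 1) = {predict(x_adv):.3f}")
```

The perturbation is tiny per feature, yet the model's confidence swings sharply, which is exactly why evasion attacks are hard to spot by eyeballing inputs.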
Counterfit: Microsoft's open-source tool for testing attacks against your own AI
Robust Intelligence: AI model monitoring
Protopia.ai: data privacy for AI
Fiddler: continuous AI monitoring
Protegrity: data protection for AI
https://www.robustintelligence.com/blog/is-your-ai-model-ready-for-production
https://www.cmswire.com/information-management/so-you-think-your-ai-deployment-is-secure/
https://www.gartner.com/smarterwithgartner/build-3-operations-management-skills-for-ai-success
https://www.synopsys.com/designware-ip/technical-bulletin/why-ai-needs-security-dwtb-q318.html
https://www.brookings.edu/research/how-to-improve-cybersecurity-for-artificial-intelligence/
This is a special edition of our newsletter. Every week, we deconstruct the best crypto trends and share those insights with you.