#AI · AI Cybersecurity · 2017-09-12 · Thomas Ott

~1 min read

File this under "no shit Sherlock," but hackers are already weaponizing machine learning.

> The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users.

via Gizmodo

In reality, the example above is just a bunch of loops and plumbing; it's what the tweets contain and to whom they're sent that makes all the difference. That 'intelligence' is probably generated from some machine-learned model, or "AI."

"Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end," says one researcher quoted in the piece. These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.

Yes, propensity-to-buy or propensity-to-click models, and even some NLP, will get people to infect their own machines.
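To make that concrete, here's a minimal sketch of what a propensity-to-click targeting loop might look like. Everything here is invented for illustration (the `Target` fields, the weights, the sigmoid scoring); SNAP_R's actual internals aren't described in the article, and a real model would learn its weights from data rather than hard-code them.

```python
# Hypothetical sketch: ranking targets with a toy propensity-to-click model.
# All feature names, weights, and data are made up for illustration.
from dataclasses import dataclass
import math

@dataclass
class Target:
    handle: str
    follower_count: int
    tweets_per_day: float
    clicked_links_before: bool

def click_propensity(t: Target) -> float:
    """Toy logistic score: active users who have clicked links before
    rank higher. Weights are invented, not learned from real data."""
    z = (0.5 * math.log1p(t.follower_count)
         + 0.3 * t.tweets_per_day
         + 2.0 * float(t.clicked_links_before)
         - 4.0)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability-like score

def rank_targets(targets: list[Target]) -> list[Target]:
    """Order targets so the most susceptible get phished first."""
    return sorted(targets, key=click_propensity, reverse=True)

targets = [
    Target("@quiet_user", 50, 0.2, False),
    Target("@power_user", 5000, 20.0, True),
    Target("@casual", 300, 3.0, True),
]
for t in rank_targets(targets):
    print(t.handle, round(click_propensity(t), 3))
```

The "loops and stuff" part is just iterating over the ranked list and firing off tweets; the model is only the scoring function deciding who gets one first.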

Chalk this up as another abusive, yet innovative, way to use machine learning.