The industrial age of website bots: How to detect and block automated attacks
Bots are a reality, and it's hard to separate your users and good bots (e.g., search) from the bad ones (brute force, fraud, scrapers, etc.). Ido Safruti and Ariel Sirota review how bots work, explain how to operate a few common bots, and, most importantly, show what you can do to detect and block malicious activity while enabling your users and good bots to work uninterrupted.
|Talk Title||The industrial age of website bots: How to detect and block automated attacks|
|Conference||O’Reilly Security Conference|
|Conf Tag||Build better defenses|
|Date||November 9-11, 2016|
Bots, or nonhuman automated tools, are growing in popularity and pose a real threat to websites and applications. Bots are ideal workers for repetitive and complex tasks; they run easily and efficiently on different hosts and cloud services, or as malware on infected machines. Bots take part in many types of attacks affecting all kinds of applications and businesses, covering the full range from scraping, account abuse (account creation or account takeover), and credit card and coupon guessing to application-layer distributed denial of service (L7 DDoS), scalping, and click fraud. Bots range in complexity and ability from simple scripts to full browser-based tools that can render complex pages and even solve CAPTCHA challenges.
To detect and protect against different bots, Ido and Ariel first introduce a few common bots used by attackers, such as PhantomJS and Selenium, to create a testing environment and verify that such tools can be detected efficiently. They then explore a variety of techniques and open source tools and libraries you can use to detect different bots and outline what to consider when blocking a request once it is suspected of originating from a bot.
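To make the detection idea concrete, here is a minimal sketch of one common technique: scoring incoming requests by header heuristics. The signature list, score weights, and threshold below are illustrative assumptions for demonstration, not the speakers' actual method or any library's API:

```python
# Illustrative request-header heuristic for flagging suspected bots.
# Signatures, weights, and the threshold are assumptions, not a real ruleset.

KNOWN_BOT_SIGNATURES = ("phantomjs", "headlesschrome", "selenium",
                        "python-requests", "curl", "wget")

def bot_score(headers: dict) -> int:
    """Return a heuristic score; higher means more likely automated."""
    score = 0
    ua = headers.get("User-Agent", "").lower()
    if not ua:
        score += 3  # real browsers always send a User-Agent
    elif any(sig in ua for sig in KNOWN_BOT_SIGNATURES):
        score += 5  # self-identifying automation tool
    if "Accept-Language" not in headers:
        score += 2  # browsers send a language preference; many scripts do not
    if "Accept-Encoding" not in headers:
        score += 1  # browsers advertise supported compression
    return score

def is_suspected_bot(headers: dict, threshold: int = 4) -> bool:
    """Flag a request whose score meets the (assumed) threshold."""
    return bot_score(headers) >= threshold
```

In practice, a score like this would be only one signal among many (JavaScript challenges, behavioral analysis, IP reputation), and flagged requests might be challenged rather than blocked outright, to avoid cutting off legitimate users and good bots.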