Defending Against Web Scraping
The risk of data loss from a website can come from multiple avenues: an outright breach in which an attacker steals content directly from a database, or an automated bot that scrapes the site, harvesting data that sits out in the open. The challenge of dealing with automated Web-content scraping is one that startup ScrapeDefender aims to tackle.
"Websites have all sorts of different types of content that is available, free to the public, and the creators of those sites intend for that content to be consumed by people to use," Robert Kane, CEO of ScrapeDefender, told eWEEK. "What has happened is there is now a whole industry of scraping with bots that harvest mass amounts of data from sites."
Those data-harvesting scraping bots can grab pricing information from retail or travel sites, for example, and then repurpose the data in ways the original content creator did not intend. ScrapeDefender is now launching its cloud-based anti-scraping and real-time monitoring service with the goal of tracking and limiting the risk of scraping.
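The article does not describe ScrapeDefender's detection methods, but one common heuristic behind this kind of real-time scraping monitor is rate analysis: bots tend to issue far more requests per minute than a human visitor. A minimal sketch of such a detector (all class and parameter names here are hypothetical, not ScrapeDefender's API) might look like this:

```python
from collections import defaultdict, deque


class RateMonitor:
    """Hypothetical sliding-window rate check: flag clients whose
    request volume within a time window exceeds a threshold, a
    common heuristic for spotting scraping bots."""

    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client IP -> request timestamps

    def record(self, ip, timestamp):
        """Record one request; return True if the client now exceeds
        the allowed rate and looks like a scraper."""
        q = self.hits[ip]
        q.append(timestamp)
        # Discard timestamps that have fallen outside the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests


# Example: a client making 6 requests in 10 seconds against a
# 5-request limit is flagged on the final request.
monitor = RateMonitor(max_requests=5, window_seconds=10)
flags = [monitor.record("203.0.113.7", t) for t in range(6)]
print(flags[-1])  # True: the sixth request crosses the threshold
```

Real services combine many such signals (headers, navigation patterns, JavaScript challenges), since sophisticated bots can throttle themselves below any single rate limit.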