The article "Modern Internet attacks" is provided by Sophos Plc and SophosLabs.
Web security solutions also usually employ some form of URL classification: they block requests to known malicious URLs or domains regardless of whether malicious content was detected in the response. This is valuable because attackers actively use automation to mutate threats constantly and evade detection. The success of blocking requests to known malicious domains depends on timely updates to the blocklist. The effectiveness of such a list is determined by a number of factors, including:
The relevance of the data. Information about malware on the Internet has to be collected as quickly as possible to detect new attacks, and the collection system should operate globally. Such solutions can use automated web crawlers or collaborate with partners to gather the maximum amount of data about Internet threats.
Server-side support. Handling the incoming URL data, checking the content and quickly publishing the results requires complex processing and publication systems. Such systems should track threats and analyse malicious Internet programs in real time, ensuring that all files used in an attack are identified and all URLs involved are blocked.
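The basic blocking step described above can be sketched as a lookup against a domain blocklist. The sketch below is illustrative, not Sophos's implementation: the domain names and the `BLOCKED_DOMAINS` set are hypothetical, and a real product would refresh the list from a server-side feed rather than hard-code it.

```python
from urllib.parse import urlsplit

# Hypothetical blocklist; a real product would update this feed frequently.
BLOCKED_DOMAINS = {"malware.example", "phish.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is blocklisted."""
    host = (urlsplit(url).hostname or "").lower().rstrip(".")
    parts = host.split(".")
    # Check "a.b.c", then "b.c", then "c", so that subdomains of a
    # blocked domain are blocked as well.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

print(is_blocked("http://cdn.malware.example/payload.exe"))  # True
print(is_blocked("https://example.org/"))                    # False
```

Checking parent domains matters because attackers rotate subdomains far faster than registered domains, which is exactly the automation the text describes.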
URL filtering can also be used to control which sites users are allowed to visit. Pornographic, gaming or entertainment sites may be blocked within an organization because they are dangerous or distracting. The accuracy of the classification data determines how successful URL filtering is; for this reason, some products license third-party data to classify URLs more accurately.
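Category-based filtering of this kind amounts to mapping each host to a category and applying an organizational policy. A minimal sketch, assuming a hypothetical category table and policy (the hosts, categories and `BLOCKED_CATEGORIES` set below are invented for illustration; commercial products license such category feeds):

```python
# Hypothetical host-to-category data; real products license such feeds.
URL_CATEGORIES = {
    "casino.example": "gambling",
    "videos.example": "entertainment",
    "news.example": "news",
}

# Categories this organization chooses to block.
BLOCKED_CATEGORIES = {"pornography", "gambling", "entertainment"}

def allowed(host: str) -> bool:
    """Allow hosts whose category is not on the organization's deny list."""
    category = URL_CATEGORIES.get(host, "uncategorized")
    return category not in BLOCKED_CATEGORIES

print(allowed("casino.example"))  # False
print(allowed("news.example"))    # True
```

Whether uncategorized hosts are allowed (as here) or blocked by default is a policy choice; stricter environments often invert it.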