Measuring False Negatives and IP Reputation

In a recent bulletin, Richi Jennings noted BorderWare's claim of achieving 98.3% spam detection using IP Reputation (DNSRBLs), while other sources suggested a figure closer to 75%.

Isode has been measuring false negative rates, with the results published in a white paper, “Measuring the False Negative Rate for Isode’s M-Switch Anti-Spam.”

Our measurements suggest that the (public) DNSRBLs we use hit about 90% of spam. Well-managed DNSRBLs seem to be an effective way to detect spam, because they have a very low false positive rate. We use DNSRBLs to mark messages (rather than rejecting them at the SMTP server), so we can examine the quarantine to check for false positives.
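For readers unfamiliar with the mechanics, a DNSRBL lookup is simply a DNS query: the octets of the connecting IP address are reversed, the list's zone is appended, and any answer means "listed". The Python sketch below illustrates the idea (it is not M-Switch code); it assumes the third-party dnspython library, and zen.spamhaus.org stands in for whichever public lists are configured.

```python
# Illustrative DNSRBL lookup, assuming the dnspython library.
import dns.resolver

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Return True if an IPv4 address is listed on the given DNSBL zone."""
    # Reverse the octets and append the zone:
    # 192.0.2.1 -> 1.2.0.192.zen.spamhaus.org
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        dns.resolver.resolve(query, "A")
        return True   # any A record (typically 127.0.0.x) means "listed"
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # no record means "not listed"
```

Because a hit only marks the message rather than rejecting it, a listing never silently loses mail, and the quarantine can be reviewed for false positives.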

A further 5% can be hit by two other reputation mechanisms:

  1. SPF (which is well known) is reasonably effective, but can produce some false positives, particularly in conjunction with mailing lists.
  2. SURBL checks URLs found within messages, looking up their domains with an underlying RBL mechanism (see the sketch after this list).
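The SURBL idea can be sketched in the same style: pull the domains out of any URLs in the message body and look each one up under a URI blacklist zone (multi.surbl.org is SURBL's public zone). This is a rough illustration only; real implementations reduce hostnames to their registered domains before querying, a detail omitted here.

```python
# Rough SURBL-style check: look up URL domains under a URI blacklist zone.
import re
import dns.resolver

URL_DOMAIN = re.compile(r"https?://([A-Za-z0-9.-]+)", re.IGNORECASE)

def surbl_hits(body: str, zone: str = "multi.surbl.org") -> list[str]:
    """Return the URL domains in the body that are listed on the zone."""
    hits = []
    for domain in set(URL_DOMAIN.findall(body)):
        # Unlike an IP-based RBL, the domain is queried as-is under the zone.
        try:
            dns.resolver.resolve(domain + "." + zone, "A")
            hits.append(domain)
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            pass
    return hits
```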

Isode’s M-Switch anti-spam can hit most of the remaining spam with a variety of other spam markers and content scoring (using tables derived from a Support Vector Machine). General-purpose content scoring appears to work very well for many users, but aggressive checking leads to false positives for others; these can be mitigated by the use of whitelists.
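The scoring side can be pictured with a toy example (the weights, threshold, and whitelist below are invented for illustration and bear no relation to M-Switch's actual tables): an SVM trained offline yields per-token weights, so scoring a message reduces to a weighted sum compared against a threshold, with whitelisted senders bypassing the check.

```python
# Toy linear content scorer; weights, threshold and whitelist are invented.
SPAM_WEIGHTS = {"viagra": 2.5, "unsubscribe": 0.4, "meeting": -1.0}
THRESHOLD = 3.0
WHITELIST = {"colleague@example.com"}  # trusted senders bypass scoring

def content_score(tokens: list[str]) -> float:
    # Sum the learned weight of every known token; unknown tokens score 0.
    return sum(SPAM_WEIGHTS.get(t.lower(), 0.0) for t in tokens)

def is_spam(sender: str, tokens: list[str]) -> bool:
    # Whitelisting mitigates false positives from aggressive scoring.
    if sender in WHITELIST:
        return False
    return content_score(tokens) >= THRESHOLD
```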

It seems conceivable that rates higher than 90% can be achieved using public DNSRBLs, although experience suggests that some (poorly managed) DNSRBLs lead to false positives.
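One way to use less trustworthy lists without inheriting their false positives is to weight each list by how well it is managed and act on an aggregate score rather than any single hit. A minimal sketch (the zones and weights are illustrative only, and dnsbl_listed() is the lookup helper sketched earlier):

```python
# Illustrative weighted aggregation across several DNSBLs.
DNSBL_WEIGHTS = {
    "zen.spamhaus.org": 3.0,   # well-managed list: a hit counts heavily
    "dnsbl.example.net": 0.5,  # hypothetical, less trusted list
}

def reputation_score(ip: str) -> float:
    # dnsbl_listed() is the DNSRBL lookup helper sketched above.
    return sum(w for zone, w in DNSBL_WEIGHTS.items() if dnsbl_listed(ip, zone))
```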

Steve Kille
