The percentage after each rejection item represents the percentage of the total number of incoming emails bounced by that rejection rule. To a limited extent, this allows comparing the effectiveness of the different rejection rules.
There are two caveats.
Although it may seem obvious, this does need to be called out:
Bear in mind that the first blocklist consulted is the most likely to get a "hit", and therefore the most likely to have the highest numbers, since the test proceeds to the next list only if the first one didn't list this guy. Or, to look at it another way: if all the lists have listed the creep we're looking up, only the first one gets the credit, because we stop looking as soon as we get a "hit".
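To make the counting concrete, here is a rough sketch (in Python, not anything sendmail actually runs) of that first-hit-wins credit and of how the report percentages come out of it; the list names, addresses, and counts are made up for illustration.

    # Rough sketch of the first-hit-wins counting described above.
    # Lists are consulted in order; the first one that lists the sender
    # gets the credit, and the later lists are never even asked.

    LISTINGS = {                      # made-up data: who lists which address
        "local-access-file":  {"192.0.2.7"},
        "list-a.example.net": {"192.0.2.7", "198.51.100.9"},
        "list-b.example.org": {"192.0.2.7"},
    }
    BLOCKLISTS = list(LISTINGS)       # consulted in this (configuration) order

    def credit_rejection(counts, client_ip):
        for bl in BLOCKLISTS:
            if client_ip in LISTINGS[bl]:
                counts[bl] = counts.get(bl, 0) + 1
                return True           # stop looking as soon as we get a "hit"
        return False                  # nobody listed this sender

    counts = {}
    incoming = ["192.0.2.7", "198.51.100.9", "203.0.113.5"]   # pretend mail
    for ip in incoming:
        credit_rejection(counts, ip)

    # The report percentage is rejections credited to a rule / total incoming.
    for bl in BLOCKLISTS:
        pct = 100.0 * counts.get(bl, 0) / len(incoming)
        print(f"{bl:22s} {counts.get(bl, 0):3d}  {pct:5.1f}%")

Note that 192.0.2.7 is listed by all three lists, yet only the first list consulted gets the credit, which is exactly the skew this caveat is about.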
I believe that the order the lists appear in the .mc file dictates the order they are checked by the server at receive time; I have not dug into the sendmail code to confirm this.
Now to really complicate matters, my mailstats script (which renders the report webpage) displays the blocklists sorted in alphabetical order, which does not match the order they appear in my .mc file.
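Just as a rough illustration of what that order looks like (this is not the mailstats script, and the path and pattern below are placeholders for a typical sendmail setup), the check order can be read back out of the .mc file by scanning for the FEATURE(`dnsbl', ...) lines:

    # Rough sketch (not the actual mailstats script): recover the DNSBL
    # check order by scanning the .mc file for FEATURE(`dnsbl', ...) lines.
    import re

    MC_FILE = "/etc/mail/sendmail.mc"      # placeholder path

    # Matches e.g.:  FEATURE(`dnsbl', `bl.example.net', `"550 rejected"')dnl
    DNSBL_RE = re.compile(r"FEATURE\(`(?:enh)?dnsbl'\s*,\s*`([^']+)'")

    def check_order(mc_path=MC_FILE):
        order = []
        with open(mc_path) as mc:
            for line in mc:
                match = DNSBL_RE.search(line)
                if match:
                    order.append(match.group(1))   # the DNSBL zone name
        return order

    if __name__ == "__main__":
        for position, zone in enumerate(check_order(), start=1):
            print(position, zone)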
The second caveat concerns my local blocklist: I have added an entry to my sendmail access file (details at sendmail.org) each time I have gotten a spam email since early 2002. The essential ingredient, statistics-wise, is that this list comes first. A listing here overrides any other blocklist. Thus, this list will always have a leg up on the other lists -- not only is it "tailor made" to the spam my system gets, it's also checked first.
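For the curious, here is a rough sketch of the mechanics of adding such an entry. It assumes the usual /etc/mail/access location, a hash-type map rebuilt with makemap, and a blanket REJECT action; it is only an illustration of the idea, not the exact procedure used here (see sendmail.org for the real details).

    # Rough sketch: append one entry to the access file and rebuild the
    # database so sendmail sees it.  The path, the hash map type, and the
    # blanket REJECT are assumptions, not the actual local procedure.
    import subprocess

    ACCESS_FILE = "/etc/mail/access"       # assumed location

    def block_sender(key, result="REJECT"):
        """Append an entry such as 'spammer.example.com<TAB>REJECT'."""
        with open(ACCESS_FILE, "a") as out:
            out.write(f"{key}\t{result}\n")
        # Equivalent of: makemap hash /etc/mail/access < /etc/mail/access
        with open(ACCESS_FILE) as src:
            subprocess.run(["makemap", "hash", ACCESS_FILE], stdin=src, check=True)

    # block_sender("spammer.example.com")   # a spamming host
    # block_sender("192.0.2")               # a whole network, access-file style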
Lastly, I started using this method before subscribing to any other blocklist. This means that the Vintners.Net local blocklist will undoubtedly get many of the hits that other lists would also get.