Bayesian message classification
Revision as of 19:12, 27 September 2009
Bayesian message classification is the process of using Bayesian statistical methods to classify documents into categories.
Bayesian message classification was proposed as a way to improve spam filtering by Sahami et al. (1998)[1] and gained wider attention in 2002 when it was described in a paper by Paul Graham.[2] Since then it has become a popular mechanism for distinguishing illegitimate spam email from legitimate email. Many modern mail programs implement Bayesian spam filtering. Server-side email filters, such as SpamAssassin and ASSP, make use of Bayesian spam classification techniques, and the functionality is sometimes embedded within mail server software itself.
The regular appearance of spam messages padded with lengthy passages of normal text taken from books and similar sources is an attempt to corrupt this process by inflating the occurrence counts of 'non-spam' words in spam messages.
Mathematical foundation
Bayesian classifiers take advantage of Bayes' theorem. Bayes' theorem, in the context of spam, says that the probability that an email is spam, given that it has certain words in it, is equal to the probability of finding those words in spam email, times the probability that any email is spam, divided by the probability of finding those words in any email:

Pr(spam|words) = Pr(words|spam) × Pr(spam) / Pr(words)
In practice, frequentist estimation of the Pr(words|spam) terms is not usually feasible, due to the vast number of possible word combinations that may occur in an email. Therefore a simplifying independence assumption is frequently made: the probability of a word occurring in a document is assumed (incorrectly) to be independent of the occurrence or absence of any other word. This assumption gives rise to the widely used Naive Bayes classifier.
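Under the independence assumption, the posterior probability factorises over individual words. The following Python sketch shows the computation; the per-word probabilities are illustrative, made-up values, not estimates from any real corpus:

```python
# Hypothetical per-word likelihoods, as if estimated during training:
# p_word_given_spam[w] ~ Pr(w | spam), p_word_given_ham[w] ~ Pr(w | ham).
p_word_given_spam = {"viagra": 0.80, "refinance": 0.60, "meeting": 0.05}
p_word_given_ham = {"viagra": 0.01, "refinance": 0.02, "meeting": 0.40}
p_spam = 0.5  # prior Pr(spam): assume half of all training email was spam

def spam_posterior(words):
    """Naive Bayes: Pr(spam | words), treating words as independent."""
    joint_spam = p_spam        # accumulates Pr(words | spam) * Pr(spam)
    joint_ham = 1.0 - p_spam   # accumulates Pr(words | ham) * Pr(ham)
    for w in words:
        joint_spam *= p_word_given_spam.get(w, 0.5)  # unseen words are neutral
        joint_ham *= p_word_given_ham.get(w, 0.5)
    # The denominator Pr(words) expands over both classes.
    return joint_spam / (joint_spam + joint_ham)
```

With these example numbers, an email containing only "viagra" scores roughly 0.99, while one containing only "meeting" scores about 0.11.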
Process
Particular words have particular probabilities of occurring in spam email and in legitimate email. Most email users might frequently encounter the word Viagra in spam email, but will seldom see it in other email. The classifier doesn't know these probabilities in advance, and must first be trained so it can build them up. To train the process, the user must manually indicate whether a new email is spam or legitimate email. For all words in each training email, the classifier will adjust the probabilities that each word will appear in spam or legitimate email in its database. For instance, Bayesian classifiers will typically have learned a very high spam probability for the words "Viagra" and "refinance", but a very low spam probability for words seen only in legitimate email, such as the names of friends and family members.
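The training step described above amounts to counting, per word, how many spam and how many legitimate messages it appears in. A minimal sketch follows; the class name and the Laplace smoothing choice are illustrative, not taken from any particular filter:

```python
from collections import Counter

class WordCounts:
    """Per-word spam/ham statistics built from manually labelled emails."""

    def __init__(self):
        self.spam_words = Counter()  # messages containing word, among spam
        self.ham_words = Counter()   # messages containing word, among ham
        self.n_spam = 0
        self.n_ham = 0

    def train(self, text, is_spam):
        words = set(text.lower().split())  # record presence, not repeats
        if is_spam:
            self.spam_words.update(words)
            self.n_spam += 1
        else:
            self.ham_words.update(words)
            self.n_ham += 1

    def p_word_given_spam(self, word):
        # Laplace smoothing: unseen words never get probability exactly 0
        return (self.spam_words[word] + 1) / (self.n_spam + 2)

    def p_word_given_ham(self, word):
        return (self.ham_words[word] + 1) / (self.n_ham + 2)
```

After a few labelled messages, a word like "viagra" seen only in spam ends up with a much higher spam likelihood than ham likelihood, exactly the behaviour the paragraph above describes.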
After training, the word probabilities (also known as likelihood functions) are used to compute the probability that an email with a particular set of words in it belongs to either category. Each word in the email contributes to the email's spam probability. This contribution is called the posterior probability and is computed using Bayes' theorem. Then, the email's spam probability is computed over all words in the email, and if the total exceeds a certain threshold (say 95%), the classifier will rate the email as spam. Email marked as spam can then be automatically moved to a "Junk" email folder, or even deleted outright.
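Combining the per-word probabilities and applying the threshold can be sketched as follows. The combining rule shown here, a product of word probabilities divided by the sum of products for both classes, is the form popularised by Graham's essay; the 95% threshold comes from the text above:

```python
def classify(word_spam_probs, threshold=0.95):
    """Combine per-word Pr(spam | word) values and apply a decision threshold.

    word_spam_probs: probabilities learned during training, one per word
    found in the email; independence between words is assumed.
    """
    prod_p = 1.0     # product of p_i
    prod_comp = 1.0  # product of (1 - p_i)
    for p in word_spam_probs:
        prod_p *= p
        prod_comp *= 1.0 - p
    combined = prod_p / (prod_p + prod_comp)
    return "spam" if combined >= threshold else "ham"
```

An email whose verdict is "spam" can then be routed to a Junk folder, as described above.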
Advantages
The advantage of Bayesian spam classification is that it can be trained on a per-user basis.
The spam that a user receives is often related to the user's online activities. For example, a user may have subscribed to an online newsletter that the user considers to be spam. This newsletter is likely to contain words that are common to all its issues, such as the name of the newsletter and its originating email address. A Bayesian spam classifier will eventually assign a higher spam probability to these words, based on the user's own classifications.
The legitimate e-mails a user receives will also tend to be distinctive. For example, in a corporate environment, the company name and the names of clients or customers will be mentioned often. The classifier will assign a lower spam probability to words that appear in such emails.
The word probabilities are unique to each user and can evolve over time with corrective training whenever the classifier incorrectly rates an email. Bayesian spam classification is more accurate after training compared to pre-defined static rules.
It can perform particularly well in avoiding false positives, where legitimate email is incorrectly classified as spam. For example, if the email contains the word "Nigeria", which frequently appeared in a long spam campaign, a pre-defined rules filter might reject it outright. A Bayesian classifier would mark the word "Nigeria" as a probable spam word, but would take into account other important words that usually indicate legitimate e-mail. The name of a spouse may strongly indicate the e-mail is not spam, which could overcome the use of the word "Nigeria."
Some spam filters combine the results of both Bayesian spam classifiers and pre-defined rules resulting in even higher filtering accuracy. Recent spammer tactics include insertion of random innocuous words that are not normally associated with spam, thereby decreasing the email's spam score, making it more likely to slip past a Bayesian spam filter.
General applications of Bayesian classification
While Bayesian classification is used widely to identify spam email, the technique can classify (or "cluster") almost any sort of data. It has uses in science, medicine, and engineering. One example is a general purpose classification program called AutoClass, which was originally used to classify stars according to spectral characteristics that were otherwise too subtle to notice. There is recent speculation that even the brain uses Bayesian methods to classify sensory stimuli and decide on behavioural responses (Trends in Neurosciences, 27(12):712–719, 2004).
See also
- Bayesian inference
- Bayes' theorem
- Naive Bayes classifier
- Recursive Bayesian estimation
- Stopping e-mail abuse
External links
- Guide to Bayesian spam filters: part 1, part 2.
- SpamBayes - Open source Bayesian spam filter.
- Detailed explanation of Paul Graham's formulas, which lack mathematical rigour
References
- ↑ M. Sahami, S. Dumais, D. Heckerman, E. Horvitz (1998). "A Bayesian approach to filtering junk e-mail". AAAI'98 Workshop on Learning for Text Categorization.
- ↑ Graham, Paul (2002). A Plan for Spam.