Full “Laplacianised” posterior naive Bayesian algorithm
© Mussa et al.; licensee Chemistry Central Ltd. 2013
Received: 24 May 2013
Accepted: 12 August 2013
Published: 23 August 2013
In the last decade the standard Naive Bayes (SNB) algorithm has been widely employed in multi–class classification problems in cheminformatics. This popularity is mainly due to the fact that the algorithm is simple to implement and in many cases yields respectable classification results. Using clever heuristic arguments "anchored" by insightful cheminformatics knowledge, Xia et al. simplified the SNB algorithm further, terming the result the Laplacian Corrected Modified Naive Bayes (LCMNB) approach; it has been widely used in cheminformatics since its publication.
In this note we mathematically illustrate the conditions under which Xia et al.’s simplification holds. It is our hope that this clarification could help Naive Bayes practitioners in deciding when it is appropriate to employ the LCMNB algorithm to classify large chemical datasets.
A general formulation that subsumes the simplified Naive Bayes version is presented. Unlike the widely used NB method, the Standard Naive Bayes description presented in this work is discriminative (not generative) in nature, which may open up further applications of the SNB method.
Starting from the standard Naive Bayes (SNB) algorithm, we have derived mathematically the relationship between Xia et al.'s ingenious, but heuristic, algorithm and the SNB approach. We have also demonstrated the conditions under which Xia et al.'s crucial assumptions hold. We therefore hope that the new insight and recommendations provided will prove useful to the cheminformatics community.
Broadly speaking, there are two conceptually different ways to solve statistical problems: the frequentist and the Bayesian approaches. On the pros and cons of each method there are numerous excellent review articles and textbooks, such as the recent book by Murphy [1]. Unlike the frequentist approach, the Bayesian approach can take into account any a priori knowledge about the probability distribution function that one assumes might have generated the given data (in the first place) when estimating this distribution function from the data at hand. If the data are noise–free and "complete", the role of the a priori information in estimating the distribution function diminishes drastically. However, the a priori information can be crucial when the data are noisy and sparse. The latter scenario is typical of realistic large chemical datasets, which, arguably, makes Bayesian statistics a powerful data analysis tool.
Unfortunately, Bayesian statistics in its fullest form is not computationally feasible in realistic cheminformatics data analyses. However, in recent years a simplified version of the Bayesian approach, commonly known as the "Naive" Bayesian algorithm, has been found to be a useful classification tool in multi–class classification problems in cheminformatics. To this end a Naive Bayesian classifier is built on a binary descriptor space: the descriptors/features x_j representing the compounds to be classified assume the binary values 0 or 1, where j = 1,2,...,L and L can typically be more than 1,000. Thus, for some cheminformatics practitioners, even the Naive Bayesian algorithm in its standard form is computationally prohibitive when the dataset is large. In this regard, Xia et al. [2] proposed a simpler version of the standard Naive Bayesian algorithm, albeit for binary classification problems; slight variants of this algorithm for multi–class classification can also be found in [3, 4]. According to Rogers et al. [5], Rogers being a co–author of the work presented in [2], "the standard Naive Bayes was modified by considering only the effect of the presence of a feature and not its absence". There are also a few more notable aspects of this proposed simplification: (a) the authors cleverly estimate directly – albeit heuristically – the a posteriori class probability for the present feature; (b) the authors (rather ingeniously) incorporate a Laplacian correction into the estimated posterior class probability; and (c) the authors deem absent features not discriminating enough and therefore discard their contributions to the estimation of the posterior class. More than anything else, it is this omission of the absent features from the Standard Naive Bayes (SNB) algorithm that makes Xia et al.'s proposed algorithm, termed the Laplacian Corrected Modified Naive Bayes (LCMNB), (and its variants by different groups) computationally fast.
It is these three points, (a), (b) and (c), that we expound on in a mathematical setting to demonstrate under which conditions they hold – not only in an abstract sense, but also in the practical sense of allowing an NB practitioner to make an informed decision as to when it is appropriate to employ SNB or LCMNB in the cheminformatics context.
Method
The Bayesian approach to classification rests on Bayes' theorem,

p(ω_i|x) = p(x|ω_i) p(ω_i) / p(x),

where x = (x_1, x_2, ..., x_L) and ω_i denote the feature vectors and class labels, respectively; x_j and L are as described before, whereas i is just an index for the class labels. The terms p(ω_i|x), p(x|ω_i), p(ω_i), and p(x) refer to the posterior probability of ω_i given x, the descriptor vector distribution conditioned on class ω_i, the a priori probability of class ω_i occurring, and the descriptor vector density function, respectively – for more details, see refs. [3, 4, 6].
In practice, it is extremely difficult to estimate p(ω_i|x) or p(x|ω_i) directly. This reality inevitably forces one to make concessions over the degree of accuracy that the estimated p(ω_i|x) or p(x|ω_i) can deliver. One widely employed scheme to obtain these probability distributions with compromised accuracy is to assume that the individual descriptors x_j, j = 1,2,...,L, are independent conditional on ω_i, that is,

p(x|ω_i) = ∏_{j=1}^{L} p(x_j|ω_i).

It is this naive assumption of independence among features to which the term "Naive" in "Naive Bayesian" refers.
Applying Bayes' theorem to each factor, p(x_j|ω_i) = p(ω_i|x_j) p(x_j) / p(ω_i), the posterior probability can be recast in a discriminative form,

p(ω_i|x) = [ ∏_{j=1}^{L} p(ω_i|x_j) p(x_j) ] / [ p(ω_i)^{L−1} p(x) ].

Clearly the factor ∏_{j=1}^{L} p(x_j) / p(x) is common to all classes and therefore plays no role in classification. Thus, in practice (in the Naive Bayes context with which this work is concerned) one is required to estimate p(ω_i|x_j) and p(ω_i).
Since generative approaches can be informative and "simpler" than their discriminative counterparts, we make use of Bayes' theorem again, i.e.,

p(ω_i|x_j) = p(x_j|ω_i) p(ω_i) / p(x_j),

and estimate p(ω_i|x_j) through this relation, where

p(x_j) = Σ_{k=1}^{C} p(x_j|ω_k) p(ω_k),

with C referring to the number of classes. p(ω_i) denotes the a priori class probability, which is relatively easy to estimate. Thus, in our Bayesian context, the estimation of p(ω_i|x_j) boils down in practice to estimating p(x_j|ω_i).
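To make the bookkeeping concrete, the following is a minimal NumPy sketch of this inversion; the function and array names (posterior_per_feature, p1, prior) are ours, not the paper's.

```python
import numpy as np

def posterior_per_feature(p1, prior):
    """Invert per-feature class-conditionals via Bayes' theorem.

    p1[i, j]  : p(x_j = 1 | omega_i)
    prior[i]  : p(omega_i)
    Returns post[i, j] = p(omega_i | x_j = 1).
    """
    joint = p1 * prior[:, None]        # p(x_j = 1 | omega_i) p(omega_i)
    return joint / joint.sum(axis=0)   # denominator p(x_j = 1), by total probability
```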
Estimation of p(x_j|ω_i), with x_j = 1 and 0
Adopting a Beta a priori distribution for the class conditional probabilities (see Appendix A) yields the estimators

p(x_j = 1|ω_i) = (N_ij + α_i) / (N_i + α_i + β_i),   (12)

p(x_j = 0|ω_i) = (N_i − N_ij + β_i) / (N_i + α_i + β_i),   (13)

where N_i and N_ij, respectively, denote the number of compounds in class ω_i and the number of compounds in this class with descriptor x_j assuming the value 1. α_i and β_i are Beta distribution hyper–parameters per class, and the valid ranges of values that these hyper–parameters can assume are defined in Appendix A. When α_i and β_i equal 1, the terms α_i and β_i + α_i in Eqs. 12–13 can be viewed as a "Laplacian correction".
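As a sanity check, here is a minimal sketch of Eqs. 12–13 applied to a binary descriptor matrix; the function name and the scalar alpha and beta arguments are our simplification (the text allows per-class hyper–parameters).

```python
import numpy as np

def class_conditionals(X, y, alpha=1.0, beta=1.0):
    """Beta-smoothed estimates of p(x_j = 1 | omega_i), Eq. 12.

    X : (N, L) 0/1 descriptor matrix; y : (N,) class labels 0..C-1.
    With alpha = beta = 1 the smoothing is the Laplacian correction.
    """
    C, L = y.max() + 1, X.shape[1]
    p1 = np.empty((C, L))
    for i in range(C):
        Xi = X[y == i]
        N_i = Xi.shape[0]          # number of compounds in class omega_i
        N_ij = Xi.sum(axis=0)      # of those, how many have x_j = 1
        p1[i] = (N_ij + alpha) / (N_i + alpha + beta)
    return p1                      # Eq. 13 is simply 1 - p1
```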
Results and discussion
Estimation of p(ω_i|x_j = 1) and p(ω_i|x_j = 0)
Estimation of p(ω_i|x_j = 1): In Our Approach
Assume that we have N chemical compounds (and their activity labels) available for training, where N_i of these compounds belong to class ω_i. Assume further that the class a priori distribution is taken as p(ω_i) = N_i/N, where N_i ≫ (α_i + β_i) (a valid assumption in any realistic large chemical dataset). Inserting Eq. 12 and this prior into Bayes' theorem then gives

p(ω_i|x_j = 1) = p(x_j = 1|ω_i) p(ω_i) / Σ_{k=1}^{C} p(x_j = 1|ω_k) p(ω_k)   (14)

≈ (N_ij + α_i) / (N_j + Σ_{k=1}^{C} α_k)   (15)

(recall that N_i ≫ (α_i + β_i)), where N_j = Σ_{k=1}^{C} N_kj is the number of times x_j assumes the value 1 in the training set.
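The counts appearing in Eq. 15 are read directly off the training matrix; a minimal sketch under the same assumptions (names are ours):

```python
import numpy as np

def laplacianised_posterior_present(X, y, alphas):
    """Eq. 15: p(omega_i | x_j = 1) ~ (N_ij + alpha_i) / (N_j + sum_k alpha_k).

    X : (N, L) 0/1 descriptor matrix; y : (N,) labels 0..C-1
    alphas : (C,) per-class Beta hyper-parameters alpha_i
    """
    C = len(alphas)
    N_ij = np.stack([X[y == i].sum(axis=0) for i in range(C)])  # (C, L) counts
    N_j = N_ij.sum(axis=0)          # number of times x_j = 1 over all classes
    return (N_ij + alphas[:, None]) / (N_j + alphas.sum())
```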
Estimation of p(ω_i|x_j = 1): In Xia et al.'s Formulation
Xia et al. estimate this posterior directly as

p(ω_i|x_j = 1) = (N_ij + K p(ω_i)) / (N_j + K),   (16)

with K a user-chosen smoothing constant. Eq. 16 constitutes what Xia et al. term "the Laplacian–Corrected Modified Naive Bayes (LCMNB)" estimator for p(ω_i|x_j = 1); comparison with Eq. 15 shows that the two coincide when α_i = K p(ω_i).
We note in passing that in Xia et al.'s case C = 2 and K = 1/p(ω_2), where p(ω_2) is, in their nomenclature, denoted by p(Active) – that is, α_2 = 1 while α_1 = K − 1.
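Under this identification the two estimators agree exactly; a quick numerical check (all counts below are invented purely for illustration):

```python
import numpy as np

# Invented toy counts: 2 classes (inactive, active), 3 features.
N_ij = np.array([[40.0, 5.0, 12.0],   # inactives containing feature j
                 [10.0, 9.0,  2.0]])  # actives containing feature j
N_i = np.array([900.0, 100.0])        # class sizes
prior = N_i / N_i.sum()               # p(omega_i) = N_i / N
K = 1.0 / prior[1]                    # Xia et al.'s choice: K = 1 / p(Active)
N_j = N_ij.sum(axis=0)                # times feature j is present overall

alphas = K * prior                    # alpha_i = K p(omega_i); note alphas.sum() == K
eq15 = (N_ij + alphas[:, None]) / (N_j + alphas.sum())
eq16 = (N_ij + K * prior[:, None]) / (N_j + K)
assert np.allclose(eq15, eq16)        # Eqs. 15 and 16 coincide
```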
Initially we employed the Beta a priori distribution for the class conditional distribution merely to ascertain the equivalence of Eqs. 15 and 16. Fortunately, however, we ended up with general equations (Eqs. 14–15) that not only encapsulate the LCMNB scheme of Xia et al., but also subsume its various variants, such as those discussed in the papers of Nidhi et al. and Nigsch et al. [3, 4].
At any rate, let us proceed to the nub of this work: identifying the conditions under which the LCMNB algorithm holds with respect to the SNB algorithm. But first we need to describe the estimation of p(ω_i|x_j = 0).
Estimation of p(ω_i|x_j = 0): In Our Approach
Proceeding exactly as above, but now starting from Eq. 13, one obtains

p(ω_i|x_j = 0) ≈ (N_i − N_ij + β_i) / (N − N_j + Σ_{k=1}^{C} β_k),   (17)

where N − N_j is the number of training compounds in which feature x_j is absent.
Naive Bayes: scoring function from Eq. 18
Now we come to the core of this work: under which conditions does the LCMNB algorithm hold with respect to the SNB algorithm? Combining Eqs. 15 and 17 with the discriminative form of the posterior yields

p(ω_i|x) ∝ [ ∏_{j=1}^{L} p(ω_i|x_j = 1)^{x_j} p(ω_i|x_j = 0)^{1−x_j} ] / p(ω_i)^{L−1}.   (18)

Before we answer this question, we deem it instructive and more insightful to map Eq. 18 monotonically to a discriminant function, a "scoring function" (so to speak),

g_i(x) = Σ_{j=1}^{L} x_j ln p(ω_i|x_j = 1) + Σ_{j=1}^{L} (1 − x_j) ln p(ω_i|x_j = 0) − (L − 1) ln p(ω_i),

in which the second sum collects precisely the absent-feature contributions that the LCMNB algorithm discards.
Thus, unless one (or more) of the above conditions – (i), (ii) and (iii) – is (are) met, the assumption on which the Modified Naive Bayesian algorithm is based is questionable, and its practitioners should pay attention to this discrepancy; clearly it is not justifiable to discard from the outset the contributions of p(ω_i|x_j = 0) simply because the features x_j are absent, i.e. x_j = 0.
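To make the comparison concrete, here is a minimal sketch of the full scoring function next to an LCMNB-style score that keeps only the present-feature terms (function names are ours; post1, post0 and prior are the quantities estimated above):

```python
import numpy as np

def snb_score(x, post1, post0, prior):
    """Log of the full 'Laplacianised' posterior, Eq. 18, up to a
    class-independent constant. post1[i, j] = p(omega_i | x_j = 1),
    post0[i, j] = p(omega_i | x_j = 0), prior[i] = p(omega_i)."""
    L = x.shape[0]
    present = np.log(post1) @ x         # features with x_j = 1
    absent = np.log(post0) @ (1 - x)    # the terms LCMNB throws away
    return present + absent - (L - 1) * np.log(prior)

def lcmnb_score(x, post1):
    """LCMNB-style score: contributions from present features only."""
    return np.log(post1) @ x
```

Unless the discarded absent-feature term (together with the prior term) happens to be effectively class-independent, ranking compounds by lcmnb_score and by snb_score can disagree.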
Starting from the standard Naive Bayes (SNB) algorithm, we have derived mathematically the relationship between Xia et al.'s ingenious, but heuristic, algorithm and the standard Naive Bayes approach. We have also described the conditions under which Xia et al.'s crucial assumption – that contributions from absent features can be discarded – holds. It is our hope that, with this new insight, cheminformaticians may now be able to use efficiently the Modified version of the standard Naive Bayes algorithm, as proposed by Xia et al. and subsequently by Nidhi et al. and Nigsch et al.
Appendix A: Estimator of p(x_j|ω_i)
Here we give, for completeness, the proof that a Beta a priori distribution leads to Eqs. 12 and 13 in the text.
ω_i: class label indexed by i, i = 1,2,...,C.
C: number of classes.
N_i: number of samples in class ω_i.
N_ij: number of samples in class ω_i with feature x_j = 1, j = 1,2,...,L.
L: number of features.
We state from the outset that in the following derivation we follow closely the description given in ref. [9]. We also note, for clarity's sake, that in the following analyses we abuse notation and use x_jk for both the random variable and its realisation.
Each binary descriptor is modelled as a Bernoulli variable,

p(x_jk|μ_ij) = μ_ij^{x_jk} (1 − μ_ij)^{1 − x_jk},   x_jk ∈ {0, 1},

where μ_ij denotes the conditional probability that feature j occurs in class ω_i, and is what we are trying to estimate given a set of compounds assumed to belong to class ω_i. (In our context, μ_ij is an estimator for p(x_j|ω_i), where p(x_j|ω_i) is as defined in the text.)
To estimate μ ij in a Bayesian framework, we first view μ ij as a random variable, then choose an “appropriate” prior and likelihood for the random variable μ ij .
As the prior over μ_ij we choose the Beta distribution,

p(μ_ij) = μ_ij^{α_i − 1} (1 − μ_ij)^{β_i − 1} / B(α_i, β_i),   α_i > 0, β_i > 0,

where B(α_i, β_i) ensures that the Beta distribution is normalised.
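The remaining step is the standard Beta–Bernoulli update; a short sketch, in which D_i is our shorthand for the N_i training compounds of class ω_i:

```latex
% Posterior over mu_ij: Bernoulli likelihood times Beta prior, renormalised.
p(\mu_{ij} \mid \mathcal{D}_i)
  \propto \mu_{ij}^{N_{ij}+\alpha_i-1}\,(1-\mu_{ij})^{N_i-N_{ij}+\beta_i-1},
\qquad\text{i.e. a } \mathrm{Beta}(N_{ij}+\alpha_i,\; N_i-N_{ij}+\beta_i).
% Its mean is exactly the estimator of Eq. 12; Eq. 13 follows from 1 - E[mu_ij].
\mathbb{E}[\mu_{ij}\mid\mathcal{D}_i]
  = \frac{N_{ij}+\alpha_i}{N_i+\alpha_i+\beta_i}.
```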
We are indebted to Dr Dave Rogers for his many useful comments on the original LCMNB approach, in particular for helping us understand more about the two–class LCMNB version.
Mussa and Glen would like to thank the Unilever Centre for Molecular Sciences Informatics for its support, whereas Mitchell would like to thank the Scottish Universities Life Sciences Alliance (SULSA).
1. Murphy KP: Machine Learning: A Probabilistic Perspective. 2012, Cambridge, MA: MIT Press
2. Xia X, Maliski EG, Gallant P, Rogers D: Classification of kinase inhibitors using a Bayesian model. J Med Chem. 2004, 47: 4463-4470. doi:10.1021/jm0303195
3. Glick M, Davies JW, Jenkins JL, Nidhi: Prediction of biological targets for compounds using multiple-category Bayesian models trained on chemogenomics databases. J Chem Inf Model. 2006, 46: 1124-1133. doi:10.1021/ci060003g
4. Nigsch F, Bender A, Jenkins JL, Mitchell JBO: Ligand-target prediction using winnow and naive Bayesian algorithms and the implications of overall performance statistics. J Chem Inf Model. 2008, 48: 2313-2325. doi:10.1021/ci800079x
5. Rogers D, Brown RD, Hahn M: Using extended–connectivity fingerprints with Laplacian-modified Bayesian analysis in high–throughput screening follow–up. J Biomol Screen. 2005, 10: 682-686. doi:10.1177/1087057105281365
6. Townsend JA, Glen RC, Mussa HY: Note on naive Bayes based on binary descriptors in cheminformatics. J Chem Inf Model. 2012, 52: 2494-2500. doi:10.1021/ci200303m
7. Duda RO, Hart PE: Pattern Classification and Scene Analysis. 1973, New York, NY: John Wiley & Sons
8. Koch KR: Introduction to Bayesian Statistics. 2007, Berlin: Springer
9. Bishop CM: Pattern Recognition and Machine Learning. 2006, New York: Springer
10. Ross SM: Introduction to Probability and Statistics for Engineers and Scientists. 1987, New York: John Wiley & Sons
11. Davison AC: Statistical Models (Cambridge Series in Statistical and Probabilistic Mathematics). 2008, Cambridge: Cambridge University Press
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.