The influence of the inactives subset generation on the performance of machine learning methods
© Smusz et al.; licensee Chemistry Central Ltd. 2013
Received: 10 January 2013
Accepted: 25 March 2013
Published: 5 April 2013
Machine learning methods have become increasingly popular in virtual screening over the past few years, in both classification and regression tasks. However, their effectiveness depends strongly on many different factors.
In this study, the influence of the way the set of inactives is formed on the classification process was examined: random and diverse selection from the ZINC database, from the MDDR database, and from libraries generated according to the DUD methodology. All learning methods were tested in two modes: with a single test set, shared across the methods of inactive-set generation, and with test sets whose inactives were prepared in the same way as those in the training sets. The experiments were carried out for 5 different protein targets, 3 fingerprints for molecule representation and 7 classification algorithms with varying parameters. The process of inactive-set formation turned out to have a substantial impact on the performance of the machine learning methods.
The degree to which chemical space was restricted determined the ability of the tested classifiers to select potentially active molecules in virtual screening tasks; for example, DUD sets (widely applied in docking experiments) did not allow proper selection of active molecules from databases of diverse structures. The study clearly showed that the inactive compounds forming the training set should be representative, to the highest possible extent, of the libraries that undergo screening.
Machine learning methods are among the most popular tools used in cheminformatic tasks [1–3]. So far, many aspects of their application in experiments connected with the classification of chemical compounds have been extensively examined: the type of molecule representation, the number of compounds from a particular class in the dataset, the parameters of the learning algorithms, the type of machine learning method, etc. Interestingly, the influence of differences in dataset composition resulting from the various ways of selecting the molecules that form the set of inactives has never been thoroughly investigated.
Databases of compounds with reported activity towards a particular target usually contain only a few molecules that are proved to be inactive. Therefore, when preparing machine learning experiments, the need arises to generate sets of compounds assumed to be inactive. Various approaches to this task have been proposed: selection from databases of known ligands [8, 9], where compounds with unconfirmed activity towards the considered receptor (active towards proteins other than the target of interest) are assumed to be inactive; generation of putative inactives; and random selection from large databases are just some of the most common examples. Only in very few cases is the number of confirmed inactive compounds sufficient to perform machine learning experiments.
In this study, the six most frequently used ways of selecting assumed inactives were tested, namely random and diverse selection from the ZINC database, from the MDDR database and from libraries generated according to the DUD methodology, in terms of their impact on the performance of machine learning methods.
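The diverse-selection step can be pictured with a simple MaxMin procedure: repeatedly pick the compound farthest (by Tanimoto distance on fingerprint bits) from everything picked so far. The sketch below is our own pure-Python illustration, not the actual selection tool used in the study; fingerprints are represented as plain sets of on-bit indices.

```python
from typing import List, Set


def tanimoto_distance(a: Set[int], b: Set[int]) -> float:
    """1 minus the Tanimoto similarity of two fingerprint bit sets."""
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 0.0)


def maxmin_pick(fps: List[Set[int]], n: int) -> List[int]:
    """MaxMin diversity selection: start from the first compound, then
    repeatedly add the compound whose minimum distance to the
    already-picked set is largest."""
    picked = [0]
    while len(picked) < n:
        best, best_d = None, -1.0
        for i in range(len(fps)):
            if i in picked:
                continue
            d = min(tanimoto_distance(fps[i], fps[j]) for j in picked)
            if d > best_d:
                best, best_d = i, d
        picked.append(best)
    return picked
```

Given a fixed starting compound, the procedure is deterministic: repeated runs return the same subset, in contrast to random selection.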
As common sense suggests, such an effect should be expected; to determine whether it is noticeable and repeatable (and how it depends on the experimental conditions), all tests were performed for 5 different protein targets, with 3 different fingerprints for molecule representation and 7 machine learning algorithms with varying parameters.
Machine learning methods used in the experiments, with the abbreviations used in the remainder of this work
Naïve Bayes (NB)
Sequential Minimal Optimization (SMO)
The complexity parameter was set to 1, the epsilon for round-off error to 1.0E-12, and the option of normalizing the training data was selected.
1) The normalized polynomial kernel,
2) The polynomial kernel
3) The RBF kernel
Instance-Based Learning (Ibk)
Brute-force nearest-neighbour search with the Euclidean distance function.
The number of neighbours used:
Decorate
One artificial example used during training, number of member classifiers in the Decorate ensemble: 10, maximum number of iterations: 10.
J48
1) With reduced-error pruning
2) With C4.5 pruning
Random Forest (RF)
Trees with unlimited depth, seed number: 1.
Number of generated trees:
Similarly to already reported findings, setting the number of trees grown by Random Forest to 100 led to the highest classification efficiency, as did setting the number of neighbours to 1 when k-NN was applied. Some authors have also pointed out that the RBF kernel is an optimal choice for SVM experiments, whereas in our tests the normalized polynomial kernel gave the best values of the evaluation parameters in the majority of cases. Naïve Bayes proved to be a more effective base classifier than J48, and C4.5 pruning outperformed reduced-error pruning (Table 1, Additional file 1: Figure S1).
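The best-performing k-NN setting described above (brute-force search, Euclidean distance, k = 1) amounts to classic 1-NN; a minimal pure-Python sketch, for illustration only (the study used the WEKA Ibk implementation):

```python
import math
from typing import List, Sequence, Tuple


def euclidean(a: Sequence[float], b: Sequence[float]) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def predict_1nn(train: List[Tuple[Sequence[float], int]],
                query: Sequence[float]) -> int:
    """Brute-force 1-NN: scan every training example and return the
    class label of the one closest to the query."""
    return min(train, key=lambda ex: euclidean(ex[0], query))[1]
```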
Common-test set mode
The influence of the way inactive molecules were selected on machine learning performance was similar across algorithms (Figure 1a). All classifiers were able to correctly indicate active molecules regardless of changes in the composition of the inactive set, as shown by the dark-red maps for this evaluation parameter. Precision, on the other hand, was very sensitive to variations in how the subsets of inactives were generated: the heat maps for this parameter are clearly more complex than those presenting recall values. Changes in precision also contributed most to variations in MCC, which were even more visible because MCC ranges from −1 to 1. For this reason, both parameters are discussed together.
For all classifiers, the highest classification effectiveness occurred for the ZINC_random sets (first columns in each part of the heat maps referring to a particular fingerprint). Although in this case the test set was generated in the same way as the training set, which makes it somewhat privileged, this type of test set illustrates well a virtual screening experiment in which large libraries of chemical structures are evaluated. This part of the study can therefore be regarded as a reference point for the rest of the results. For the experiments with COX-2 inhibitors, the MCC and precision values were similar to those obtained for ZINC_random also for both sets of inactives selected from the MDDR database when MACCSFP and ExtFP were used. Applying KlekFP left those parameters at a similar level for all classifiers but Hyperpipes (lower by ~0.3 for precision and ~0.5 for MCC compared to ZINC_random). For the remaining targets, the changes in precision and MCC (a consequence of varying the conditions of inactive selection) were very similar: their highest values (very close to ZINC_random) were obtained with random selection of compounds from the MDDR database, and the lowest with the DUD sets, which reduced classification effectiveness by as much as ~0.5 for precision and ~0.8 for MCC. In addition, similarly to COX-2, the smallest changes in precision and MCC across the different sets of inactives occurred when molecules were represented by KlekFP. All of the performed experiments indicated that random selection of inactives provided better results than the diverse approach.
This dependence was most clearly visible for the “ZINC sets” of inactives (up to ~0.8 variation in MCC values); for inactives from the MDDR database the difference was around 0.2, whereas for the “DUD sets” the two methods of selecting inactive molecules led to comparable results, although significantly less efficient than those obtained with inactives selected from the two previous databases.
Various-test sets mode
In the various-test sets mode (Figure 1b), differences between particular experiments were suppressed. It is hard to identify a set providing the best recall, or one that led to a significant fall in its values. Even so, for some methods (such as NB, Decorate and J48) a slight drop was observed for the ZINC_random and MDDR_random sets, indicated by a shift in the colour of the corresponding boxes from dark to light red.
In this part of the research, the rate of false positives was significantly lower, as illustrated by the dark-red precision maps in Figure 1b versus the light red-yellow ones corresponding to experiments in the common-test set mode (Figure 1a); this in turn yielded higher MCC values. An apparent fall in these parameters occurred for experiments with datasets containing inactives picked diversely from the ZINC database and represented by ExtFP, as well as when Hyperpipes was applied as a classifier. The highest evaluation parameter values were obtained for the DUD sets, as shown by the dark-red colour of the last two columns in the heat maps generated for each fingerprint. When KlekFP was used, the ZINC_diverse sets also led to high values of the evaluation parameters, regardless of the machine learning algorithm applied to the classification task.
Since the inactives selected from the PubChem database much more closely resemble the second part of the study (the various-test sets mode), the chemical space of inactive molecules being far more limited than in the ZINC database or any other commercially available compound library, the results obtained for these external validation sets coincide with those obtained in the various-test sets mode. The corresponding heat maps and numerical values of the evaluation parameters are available in Additional file 3: Figure S2 and Additional file 4: Table S4.
Influence of the inactives subset composition on particular machine learning method performance
In general, the changes in recall, precision and MCC were most significant when the machine learning methods were combined with ExtFP (the standard deviation of MCC was even close to 0.4, as shown by red spots on the heat maps). On the other hand, the most stable results were provided by KlekFP: the SD of the MCC values was more than 0.2 lower compared to the tests already mentioned.
Except for the classification of M1 agonists, recall values were not much influenced by changes in dataset composition. They remained at a similar level across all experiments, with SD usually below 0.1, as expressed by the corresponding dark blue squares on the heat maps (Figure 2). Changes in precision contributed the most to MCC fluctuations, and the variations of these two parameters followed very similar tendencies. Even in the experiments with COX-2 inhibitors, where all evaluation parameters were high, the MCC changes (resulting from the different ways of composing the set of inactives) were clearly visible and reached a maximum of 0.36 for SMO.
The SMO algorithm was the most dependent on the way the inactive set was constructed (indicated by red spots on the heat maps, corresponding to SDs of MCC values of ~0.3–0.4). The two methods least affected by inactive-set formation were Hyperpipes (SD < 0.1) and Ibk (SD of at most 0.2), although the former was at the same time the worst classifier in terms of MCC (an MCC of 0.1 with ExtFP for molecule representation).
The obtained results show that the size and properties of the chemical space used in the experiments undoubtedly have a great impact on the performance of machine learning methods. All tested classifiers struggled to correctly classify compounds picked from the ZINC database when they were trained on sets whose inactives were selected by a different approach (from the MDDR database or from DUD libraries). Although the ability of the machine learning methods to select active molecules remained at a very similar level (recall values did not vary much across sets), the high rate of false positives indicates errors in assigning the proper class label to inactive compounds. Strong limitation of chemical space (as in the DUD libraries) caused precision to vary at the level of 0.5–0.7. The larger the database from which the compounds were selected, the more significant the improvement of the results. It is also worth noting that the evaluation parameters were much higher when the molecules forming the inactive sets were picked randomly rather than with the assurance of maximum diversity of the selected compounds. The MCC values indicate that by far the best results were obtained for sets with inactives randomly selected from the ZINC database.
The case is different when the inactives for the test sets were selected in a way analogous to the compounds chosen for training. In those experiments, the dependencies are the reverse of the common-test set mode: the best results were provided by sets with compounds selected from libraries containing smaller numbers of molecules (MCC values even exceeding 0.9 for the “DUD sets”). In contrast to the common-test set mode, better results were also obtained when compounds were selected diversely rather than randomly, although the scale of the change varied considerably between targets.
These results show that when chemical libraries whose compounds cover chemical space to a high extent undergo a virtual screening procedure, the structures selected for training should also cover this space as fully as possible. Otherwise, machine learning methods have difficulty correctly identifying potentially active ligands.
Choosing the best machine learning method
The application of machine learning methods in virtual screening experiments is driven by the desire to correctly select potentially active compounds, including ones with structures different from the already known ligands. The commercially available databases evaluated in virtual screening procedures contain compounds with various structures and properties. It is therefore desirable that machine learning methods be able to correctly classify compounds that differ from those present in the training set.
Ibk was the method characterized by the highest efficiency in classifying compounds from the ZINC database when trained on the highly limited DUD sets with ExtFP and MACCSFP as fingerprints (MCC ~0.2–0.3 for the former fingerprint and ~0.55–0.7 for the latter). SMO was the most effective when molecules were represented by KlekFP, with Ibk, Decorate and Random Forest only slightly worse (MCC ~0.8–0.9). This combination of learning algorithm and fingerprint enables the correct classification of molecules significantly different from those present in the training set.
Although the DUD sets are useful for docking studies, applying them to virtual screening experiments that use machine learning is not as effective. The strong limitation of chemical space makes it difficult for learning algorithms to properly discriminate actives from inactives when large libraries containing a great variety of chemical structures are evaluated. However, when such libraries are limited to some extent (for example by other filters), forming training sets from narrower databases may improve classification effectiveness.
Various ways of selecting compounds assumed to be inactive are applied in computational experiments. However, this step of training-set formation may have a significant impact on the effectiveness of classification performed by machine learning methods. Although the DUD sets are widely applied in docking experiments with high-level results, they are not necessarily appropriate for virtual screening experiments that use machine learning. As libraries of commercially available compounds contain a great variety of structures, machine learning algorithms are unable to correctly select potentially active compounds when trained on inactive molecules covering chemical space only to a small extent, as is the case with DUD. Therefore, the inactive compounds chosen for the training set should be as representative as possible of the libraries that undergo screening. It is also worth noting that in the majority of studies examining the performance of machine learning methods, the compounds composing the training and test sets are selected in an analogous way. The results of such studies will not necessarily coincide with a real experiment (the DUD sets also provided the best results when tested on similarly generated compounds, yet they did not lead to effective classification of diverse compounds from a large database such as ZINC). Moreover, of the tested fingerprints, KlekFP should be applied for molecule representation, as it provided the highest classification efficiency for molecules different from those used to train a classifier; the same holds for Ibk and SMO among the learning methods.
The protein targets chosen for the experiments were: cyclooxygenase-2 (COX-2), the muscarinic receptor M1, the HIV-1 protease (HIV PR), matrix metalloproteinase and the 5-HT1A receptor. They were selected after analysing a set of papers concerning different aspects of testing machine learning methods [16–19]. We chose targets that appeared most frequently in this kind of comparative study and for which a sufficient number of known ligands exists. All compounds showing activity towards them were extracted from the MDDR database; they formed the class of positive learning examples. Sets of compounds assumed to be inactive were selected from the three databases already mentioned: ZINC, MDDR and libraries generated in a way similar to DUD.
Library generation according to the DUD approach
Number of decoys selected for each target
Number of input ligands
Number of decoys
Formation of the set of inactives
Composition of training and test sets used in the experiments
MDDR activity index
Number of actives/number of inactives
All experiments were performed in two modes:
With the use of one common test set, the same for each method of inactive molecules generation, where inactives were randomly selected out of the ZINC database – common-test set mode,
With the use of test sets with inactives prepared in the analogous way as for training – various-test sets mode.
For each part of the study that used randomization, the procedure of inactive-molecule selection was repeated 10 times (during both training and test set generation). Only 1 iteration was performed for the experiments with diverse selection, since (due to the way the diverse-selection algorithm works) the same set of compounds is always picked from a given database.
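The randomized part of this protocol can be sketched as follows; the function name and seeding scheme are our own illustrative choices, not the exact code used in the study.

```python
import random


def repeated_random_selection(pool, n_inactives, repetitions=10, base_seed=0):
    """Draw `repetitions` independent random inactive sets from `pool`,
    each with its own seed so every run is reproducible."""
    sets = []
    for rep in range(repetitions):
        rng = random.Random(base_seed + rep)
        sets.append(rng.sample(pool, n_inactives))
    return sets
```

Seeding each repetition separately keeps the 10 draws independent of one another while still allowing the whole experiment to be rerun exactly.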
Although selecting inactives for the common-test set mode favours the ZINC_random training set, this type of experiment most closely resembles virtual screening tasks, where the libraries undergoing the procedure contain very diverse compounds. In order to provide independent validation, however, an external validation test set was prepared with inactives selected from the PubChem database. True inactives were not available in sufficient numbers for all targets, so the experiments in this mode were performed only for M1 (AID: 628; 61476 compounds), metalloproteinase (AID: 618; 86197 compounds) and 5-HT1A (AID: 567; 64559 compounds), preserving the 10-fold randomization.
For each structure in the dataset, three different types of fingerprints were generated with the use of the PaDEL-Descriptor software : Extended Fingerprint (ExtFP, 1024 bits) , MACCS Fingerprint (MACCSFP, 166 bits)  and Klekota & Roth Fingerprint (KlekFP, 4860 bits) .
Machine learning experiments
Seven machine learning methods were selected for the experiments: the Naïve Bayes classifier, Sequential Minimal Optimization, Instance-Based Learning, Decorate [29, 30], Hyperpipes, J48 and Random Forest [32, 33]. For some methods, a series of tests was performed with varying settings (Table 1). The implementations of the algorithms present in the WEKA package (version 3.6) were used. All calculations were performed on an Intel Core i7 CPU 3.00 GHz computer system with 24 GB RAM running a 64-bit Linux operating system.
The evaluation of machine learning methods
Recall measures the fraction of correctly labelled positive examples, precision describes the correctness of positive instances prediction, whereas MCC is a balanced measure of binary classification effectiveness, ranging from −1 to 1, where 1 corresponds to error-free class labelling and −1 to reverse classification [7, 35].
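All three measures follow directly from the confusion-matrix counts (TP, FP, TN, FN); a minimal sketch of our own, for illustration:

```python
import math


def recall(tp, fp, tn, fn):
    """Fraction of actual positives that were labelled positive."""
    return tp / (tp + fn)


def precision(tp, fp, tn, fn):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp)


def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient: 1 for error-free labelling,
    0 for random guessing, -1 for completely reversed classification."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Note how a classifier can keep recall high while precision collapses (many false positives), which is exactly the pattern reported for the common-test set mode.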
The study was partly supported by a project UDA-POIG.01.03.01-12-100/08-00 co-financed by the European Union from the European Fund of Regional Development (EFRD); http://www.prokog.pl and by a grant PRELUDIUM 2011/03/N/NZ2/02478 financed by the National Science Centre (http://www.ncn.gov.pl)
- Geppert H, Vogt M, Bajorath J: Current trends in ligand-based virtual screening: molecular representations, data mining methods, new application areas, and performance evaluation. J Chem Inf Model. 2010, 50: 205-216. 10.1021/ci900419k.
- Melville JL, Burke EK, Hirst JD: Machine learning in virtual screening. Comb Chem High Throughput Screen. 2009, 12: 332-343. 10.2174/138620709788167980.
- Schwaighofer A, Schroeter T, Mika S, Blanchard G: How wrong can we get? A review of machine learning approaches and error bars. Comb Chem High Throughput Screen. 2009, 12: 453-468. 10.2174/138620709788489064.
- Hert J, Willett P, Wilton DJ, Acklin P, Azzaoui K, Jacoby E, Schuffenhauer A: Comparison of topological descriptors for similarity-based virtual screening using multiple bioactive reference structures. Org Biomol Chem. 2004, 2: 3256-3266. 10.1039/b409865j.
- Liu XH, Ma XH, Tan CY, Jiang YY, Go ML, Low BC, Chen YZ: Virtual screening of Abl inhibitors from large compound libraries by support vector machines. J Chem Inf Model. 2009, 49: 2101-2110. 10.1021/ci900135u.
- Bruce CL, Melville JL, Pickett SD, Hirst JD: Contemporary QSAR classifiers compared. J Chem Inf Model. 2007, 47: 219-227. 10.1021/ci600332j.
- Plewczynski D, Spieser SAH, Koch U: Assessing different classification methods for virtual screening. J Chem Inf Model. 2006, 46: 1098-1106. 10.1021/ci050519k.
- Hammann F, Gutmann H, Baumann U, Helma C, Drewe J: Classification of Cytochrome P450 Activities Using Machine Learning Methods. Mol Pharmaceutics. 2009, 33: 796-801.
- Chen B, Harrison RF, Papadatos G, Willett P, Wood DJ, Lewell XQ, Greenidge P, Stiefl N: Evaluation of machine-learning methods for ligand-based virtual screening. J Comput Aided Mol Des. 2007, 21: 53-62. 10.1007/s10822-006-9096-5.
- Han LY, Ma XH, Lin HH, Jia J, Zhu F, Xue Y, Li ZR, Cao ZW, Ji ZL, Chen YZ: A support vector machines approach for virtual screening of active compounds of single and multiple mechanisms from large libraries at an improved hit-rate and enrichment factor. J Mol Graph Model. 2008, 26: 1276-1286. 10.1016/j.jmgm.2007.12.002.
- Li H, Ung CY, Yap CW, Xue Y, Li ZR, Chen YZ: Prediction of estrogen receptor agonists and characterization of associated molecular descriptors by statistical learning methods. J Mol Graph Model. 2006, 25: 313-323. 10.1016/j.jmgm.2006.01.007.
- Irwin JJ, Shoichet BK: ZINC: A Free Database of Commercially Available Compounds for Virtual Screening. J Chem Inf Model. 2005, 45: 177-182. 10.1021/ci049714+.
- MDDR licensed by Accelrys, Inc. USA. http://www.accelrys.com
- Huang N, Shoichet BK, Irwin JJ: Benchmarking sets for molecular docking. J Med Chem. 2006, 49: 6789-6801. 10.1021/jm0608356.
- Nicholls A: What do we know and when do we know it?. J Comput Aided Mol Des. 2008, 22: 239-255. 10.1007/s10822-008-9170-2.
- Ma XH, Jia J, Zhu F, Xue Y, Li ZR, Chen YZ: Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries. Comb Chem High Throughput Screen. 2009, 12: 344-357. 10.2174/138620709788167944.
- Plewczynski D: Brainstorming: weighted voting prediction of inhibitors for protein targets. J Mol Model. 2011, 17: 2133-2141. 10.1007/s00894-010-0854-x.
- Plewczynski D, von Grotthuss M, Spieser SAH, Rychlewski L, Wyrwicz LS, Ginalski K, Koch U: Virtual high throughput screening using combined random forest and flexible docking. Comb Chem High Throughput Screen. 2007, 10: 189-196. 10.2174/138620707780126705.
- Gardiner EJ, Gillet VJ, Haranczyk M, Hert J, Holliday JD, Malim N, Patel Y, Willet P: Turbo Similarity Searching: Effect of Fingerprint and Dataset on Virtual-Screening Performance. Stat Anal Data Min. 2009, 2: 103-114. 10.1002/sam.10037.
- InstantJChem: ChemAxon. 2011, http://www.chemaxon.com
- RDKit: Open-source cheminformatics. http://www.rdkit.org
- Discovery Studio, provided by Accelrys, Inc. USA. http://www.accelrys.com
- Wang Y, Xiao J, Suzek TO, Zhang J, Wang J, Zhou Z, Han L, Karapetyan K, Dracheva S, Shoemaker BA, Bolton E, Gindulyte A, Bryant SH: PubChem's BioAssay Database. Nucleic Acids Res. 2012, 40: D400-412. 10.1093/nar/gkr1132.
- Yap CW: PaDEL-Descriptor: An Open Source Software to Calculate Molecular Descriptors and Fingerprints. J Comput Chem. 2010, 32: 1466-1474.
- Steinbeck C, Han Y, Kuhn S, Horlacher O, Luttmann E, Willighagen E: The Chemistry Development Kit (CDK): an open-source Java library for Chemo- and Bioinformatics. J Chem Inf Comput Sci. 2003, 43: 493-500. 10.1021/ci025584y.
- Ewing T, Baber JC, Feher M: Novel 2D fingerprints for ligand-based virtual screening. J Chem Inf Model. 2006, 46: 2423-2431. 10.1021/ci060155b.
- Klekota J, Roth FP: Chemical substructures that enrich for biological activity. Bioinformatics. 2008, 24: 2518-2525. 10.1093/bioinformatics/btn479.
- Platt JC: Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines. Advances in Kernel Methods – Support Vector Learning. Edited by: Scholkopf B, Burges C, Smola AJ. 1999, Cambridge: MIT Press, 185-208.
- Melville P, Mooney RJ: Constructing Diverse Classifier Ensembles using Artificial Training Examples. Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence. 2003, Morgan Kaufmann Publishers Inc, 505-510.
- Stefanowski J, Pachocki M: Comparing Performance of Committee Based Approaches to Active Learning. Recent Advances in Intelligent Information Systems. Edited by: Klopotek M, Przepiorkowski A, Wierzchon S, Trojanowski K. 2009, Warsaw: EXIT, 457-470.
- Deeb ZA, Devine T: Randomized Decimation HyperPipes. 2010, http://www.csee.wvu.edu/~timm/tmp/r7.pdf
- Breiman L: Random Forests. Mach Learn. 2001, 45: 5-32. 10.1023/A:1010933404324.
- Svetnik V, Liaw A, Tong C, Culberson JC, Sheridan RP, Feuston BP: Random forest: a classification and regression tool for compound classification and QSAR modeling. J Chem Inf Comput Sci. 2003, 43: 1947-1958. 10.1021/ci034160g.
- Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH: The WEKA data mining software: an update. SIGKDD Explorations. 2009, 11: 10-18. 10.1145/1656274.1656278.
- Savojardo C, Fariselli P, Martelli PL, Shukla P, Casadio R: Prediction of the Bonding State of Cysteine Residues in Proteins with Machine-Learning Methods. Computational Intelligence Methods for Bioinformatics and Biostatistics 7th International Meeting. Edited by: Rizzo R, Lisboa PJG. 2011, Berlin Heidelberg: Springer-Verlag, 6665: 98-111.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.