8. BREIMAN, Leo. Bagging predictors. Machine learning, v. 24, p. 123-140, 1996.
9. MUSCHELLI III, John. ROC and AUC with a binary predictor: a potentially misleading metric. Journal of Classification, v. 37, n. 3, p. 696-708, 2020. https://doi.org/10.1007/s00357-019-09345-1
10. KOIZUMI, Yuma et al. SNIPER: Few-shot learning for anomaly detection to minimize false-negative rate with ensured true-positive rate. In: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. p. 915-919.
11. MURTHY, Sreerama K. Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and Knowledge Discovery, v. 2, p. 345-389, 1998.
12. MITCHELL, Tom M. Machine Learning. New York: McGraw-Hill, 1997.
13. DOMINGOS, Pedro; PAZZANI, Michael. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, v. 29, p. 103-130, 1997.
14. AHA, David W. (Ed.). Lazy Learning. Kluwer Academic Publishers, 1997.
15. LONG, J. Scott. Regression models for categorical and limited dependent variables. Advanced Quantitative Techniques in the Social Sciences, v. 7, 1997.
16. BURGES, Christopher J. C. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, v. 2, n. 2, p. 121-167, 1998.
17. POWERS, David M. W. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv preprint arXiv:2010.16061, 2020.
18. FREEMAN, Elizabeth A.; MOISEN, Gretchen G. A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and kappa. Ecological Modelling, v. 217, n. 1-2, p. 48-58, 2008.
19. LÓPEZ, Victoria et al. An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics. Information Sciences, v. 250, p. 113-141, 2013.
20. BRADLEY, Andrew P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, v. 30, n. 7, p. 1145-1159, 1997.
21. SPACKMAN, Kent A. Signal detection theory: Valuable tools for evaluating inductive learning. In: Proceedings of the Sixth International Workshop on Machine Learning. Morgan Kaufmann, 1989. p. 160-163.
22. HOSMER, David W.; LEMESHOW, Stanley. Applied logistic regression. 2. ed. New York: John Wiley & Sons, 2000.
23. ROSENBAUM, Paul R.; RUBIN, Donald B. The central role of the propensity score in observational studies for causal effects. Biometrika, v. 70, n. 1, p. 41-55, 1983.
24. BRAY, Bethany C. et al. Inverse propensity score weighting with a latent class exposure: Estimating the causal effect of reported reasons for alcohol use on problem alcohol use 16 years later. Prevention Science, v. 20, p. 394-406, 2019.
25. GLYNN, Adam N.; QUINN, Kevin M. An introduction to the augmented inverse propensity weighted estimator. Political Analysis, v. 18, n. 1, p. 36-56, 2010.
26. VARELLA, Carlos Alberto Alves. Análise multivariada aplicada às ciências agrárias. Seropédica: Universidade Federal Rural do Rio de Janeiro, 2008.
27. ASSUNÇÃO, R. Linear Discriminant Analysis. Minas Gerais: DCC-UFMG, 2020.
28. MENOTTI, D. Classificação. Universidade Federal do Paraná (UFPR). Especialização em Engenharia Industrial 4.0. Paraná.
© INTERMATHS
CC BY 4.0
74 | https://doi.org/10.22481/intermaths.v4i2.13422 J. S. P. Ferreira