Author: Paulo Cilas Marques Filho
Date deposited: 2024-09-27
Date issued: 2022
ISSN: 0167-8655 (print); 1872-7344 (online)
Handle: https://repositorio.insper.edu.br/handle/11224/6993
Abstract: We show that the byproducts of the standard training process of a random forest yield not only the well-known and almost computationally free out-of-bag point estimate of the model's generalization error, but also open a direct path to computing confidence intervals for the generalization error that avoids data splitting and model retraining. Besides the low computational cost of their construction, these confidence intervals are shown through simulations to have good coverage and an appropriate rate of width shrinkage as the training sample size grows.
Format: Digital
Pages: p. 171-175
Language: English
Keywords: Random forests; Generalization error; Out-of-bag estimation; Confidence interval; Bootstrapping
Title: Confidence intervals for the random forest generalization error
Type: journal article
DOI: 10.1016/j.patrec.2022.04.031
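The abstract's starting point, the out-of-bag (OOB) point estimate of the generalization error, can be illustrated with a minimal sketch. This is not the paper's confidence-interval construction; it only shows the "almost computationally free" OOB estimate that a standard random forest training run produces as a byproduct, here via scikit-learn's `oob_score` option (the dataset and all parameter values are illustrative choices, not from the paper):

```python
# Sketch: out-of-bag (OOB) estimate of a random forest's generalization error.
# Each tree is trained on a bootstrap sample, so roughly 1/e of the training
# points are "out of bag" for that tree; predicting each point only with the
# trees that did not see it gives an estimate of the generalization error
# without any data splitting or model retraining.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Illustrative synthetic dataset (not from the paper).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

rf = RandomForestClassifier(
    n_estimators=200,   # illustrative ensemble size
    oob_score=True,     # collect OOB predictions during training
    random_state=0,
)
rf.fit(X, y)

# oob_score_ is the OOB accuracy; its complement estimates the
# misclassification generalization error.
oob_error = 1.0 - rf.oob_score_
print(f"OOB estimate of generalization error: {oob_error:.3f}")
```

The paper goes a step further: it reuses these same training byproducts to build confidence intervals around this point estimate, again without retraining.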