Title: Essays on model selection using Bayesian inference
Name: Chen, Guo (author); Tsurumi, Hiroki (chair); Landon-Lane, John (internal member); Swanson, Norman (internal member); Akincigil, Ayse (outside member); Rutgers University, Graduate School - New Brunswick
Subject: Bayesian statistical decision theory
Description: This dissertation is composed of three essays evaluating Bayesian model selection criteria in various models, and whenever necessary, the Bayesian criteria are compared with sampling theory criteria. In chapter two, I compare the 2-regime threshold ARMA model (TARMA) and the 2-state Markov switching model (MSM). Bayesian Markov chain Monte Carlo (MCMC) algorithms are devised to obtain coefficient estimates and conditional and unconditional predictive densities. Posterior densities and cumulative distributions of the mean square error of forecast (MSEF) of the two competing models are generated. The main finding is that for one-day conditional prediction, the 2-regime TARMA model predicts the interest rate better than the MSM. Under unconditional prediction, however, the MSM has a smaller prediction error than the TARMA model.
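The MSEF comparison described above can be illustrated with a minimal sketch: given posterior predictive draws from each model's MCMC output and the realized series, each draw yields one squared-error value, and the collection of those values is a sample from the posterior distribution of the MSEF. The arrays below are synthetic stand-ins, not the dissertation's TARMA/MSM output.

```python
import numpy as np

def msef_draws(pred_draws, actual):
    """One MSEF value per posterior predictive draw.

    pred_draws: (n_draws, horizon) array of predictive draws from MCMC
    actual:     (horizon,) realized series
    Returns an (n_draws,) sample from the posterior of the MSEF, whose
    density and cumulative distribution can be compared across models.
    """
    err = pred_draws - actual            # per-draw forecast errors
    return (err ** 2).mean(axis=1)       # mean squared error per draw

# Hypothetical predictive draws for two competing models
rng = np.random.default_rng(0)
actual = np.zeros(5)
draws_a = rng.normal(0.0, 0.5, size=(1000, 5))   # tighter predictive density
draws_b = rng.normal(0.0, 1.0, size=(1000, 5))

msef_a = msef_draws(draws_a, actual)
msef_b = msef_draws(draws_b, actual)
# Model A's MSEF distribution sits to the left of model B's here,
# so A would be preferred under this criterion.
print(msef_a.mean() < msef_b.mean())
```

In the dissertation the predictive draws come from the fitted TARMA and MSM samplers; the comparison step over the resulting MSEF samples is the same.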
In chapter three, I compare the MSEF and pseudo-Bayes factor (PSBF) obtained by the 10-fold cross-validation (CV) method with those from out-of-sample prediction at fixed points. The MSEF suggests slightly superior model-selection performance for the CV method over the traditional out-of-sample forecast in the i.i.d. sample; the PSBF, however, does not show the same result. By excluding the forecasted data when constructing coefficient estimates within the MCMC, the out-of-sample method is further improved, yielding a higher probability of selecting the true model.
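The 10-fold CV pseudo-Bayes factor can be sketched as follows: split the sample into ten folds, fit on nine, score the log predictive density of the held-out fold, sum across folds, and take the difference between two models. This toy version uses a plug-in Gaussian predictive density for a location model rather than an MCMC-based one, and the two "models" (correct vs misspecified error scale) are hypothetical.

```python
import numpy as np

def log_pseudo_ml(y, k=10, sigma=1.0):
    """10-fold CV log pseudo-marginal likelihood for a toy model
    y_i ~ N(mu, sigma^2), with mu estimated on the training folds.
    (A plug-in Gaussian stands in for the MCMC predictive density.)"""
    folds = np.array_split(np.arange(len(y)), k)
    total = 0.0
    for idx in folds:
        train = np.delete(y, idx)            # drop the held-out fold
        mu_hat = train.mean()                # fit on the other 9 folds
        z = (y[idx] - mu_hat) / sigma
        # log predictive density of the held-out observations
        total += np.sum(-0.5 * z**2 - 0.5 * np.log(2 * np.pi * sigma**2))
    return total

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=100)
# Log pseudo-Bayes factor: correct scale (sigma=1) vs misspecified (sigma=3)
psbf_log = log_pseudo_ml(y, sigma=1.0) - log_pseudo_ml(y, sigma=3.0)
print(psbf_log > 0)   # the correctly specified model wins
```

A positive log PSBF favors the first model; with posterior draws in place of the plug-in estimate, the held-out density would be averaged over the MCMC sample, matching the chapter's exclusion of forecasted data from coefficient construction.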
In chapter four, I evaluate logit and probit binary choice models. Monte Carlo experiments are conducted to compare the following five criteria for choosing between the univariate probit and logit models: the deviance information criterion (DIC), predictive DIC, Akaike information criterion (AIC), and weighted and unweighted sums of squared errors. The results show that if the data are balanced, no model selection criterion can distinguish the probit from the logit model. If the data are unbalanced and the sample size is large, the DIC and AIC choose the correct model more often than the other criteria. If unbalanced binary data are generated by a leptokurtic distribution, the logit model is preferred over the probit model; the probit model is preferred if unbalanced data are generated by a platykurtic distribution.
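One cell of such a Monte Carlo experiment can be sketched with maximum-likelihood fits and the AIC (the dissertation's DIC variants require posterior draws, which are omitted here to keep the sketch short). The data-generating parameters below are hypothetical: a negative intercept makes the binary outcomes unbalanced, and the logistic link generates the data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(beta, X, y, cdf):
    """Negative log-likelihood of a binary choice model with link CDF `cdf`."""
    p = np.clip(cdf(X @ beta), 1e-10, 1 - 1e-10)   # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def fit_aic(X, y, cdf):
    """MLE fit; returns AIC = 2k - 2*loglik."""
    res = minimize(neg_loglik, np.zeros(X.shape[1]), args=(X, y, cdf))
    return 2 * X.shape[1] + 2 * res.fun

# Hypothetical unbalanced data generated by a logit (logistic errors)
rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.0, 1.5])      # negative intercept -> unbalanced y
p = 1 / (1 + np.exp(-(X @ beta_true)))
y = (rng.uniform(size=n) < p).astype(float)

logistic_cdf = lambda t: 1 / (1 + np.exp(-t))
aic_logit = fit_aic(X, y, logistic_cdf)     # logit: logistic link
aic_probit = fit_aic(X, y, norm.cdf)        # probit: Gaussian link
# With large, unbalanced, logit-generated samples the logit AIC should
# tend to be lower; repeating over many replications gives the selection
# frequencies the chapter reports.
print(aic_logit, aic_probit)
```

The leptokurtic/platykurtic findings correspond to swapping the generating distribution: heavier-tailed errors favor the logistic link, thinner-tailed errors the Gaussian one.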
Note: Includes bibliographical references (p. 113-120)
Note: by Guo Chen
Collection: Graduate School - New Brunswick Electronic Theses and Dissertations
Organization Name: Rutgers, The State University of New Jersey
Rights: The author owns the copyright to this work