Dybowski, Richard and Roberts, Stephen J. (2001) ‘Confidence Intervals and Prediction Intervals for Feed-Forward Neural Networks’, Clinical Applications of Artificial Neural Networks.
Available under License Creative Commons Attribution No Derivatives.
The chapter opens with an introduction to regression and its implementation within the maximum-likelihood framework. This is followed by a general introduction to classical confidence intervals and prediction intervals. We set the scene by first considering confidence and prediction intervals based on univariate samples, and then we progress to these intervals in the context of linear regression and logistic regression. Since a feed-forward neural network is a type of regression model, the concepts of confidence and prediction intervals are applicable to these networks, and we examine several techniques for estimating these intervals via maximum-likelihood estimation. An alternative to the maximum-likelihood framework is Bayesian statistics, and we examine the notions of Bayesian confidence and prediction intervals as applied to feed-forward networks. This includes a critique of Bayesian confidence intervals and classification.
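The distinction between the two kinds of interval can be illustrated with the simplest regression setting the abstract mentions. The sketch below (not taken from the chapter; the data and 95% level are illustrative assumptions) fits a straight line by least squares, which is the maximum-likelihood fit under Gaussian noise, and computes both intervals at a new input: the confidence interval quantifies uncertainty in the fitted regression function, while the prediction interval additionally accounts for the noise on a new observation.

```python
import numpy as np

# Illustrative sketch: 95% confidence and prediction intervals for
# simple linear regression. Data, seed, and query point are invented
# for demonstration only.
rng = np.random.default_rng(0)
n = 30
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)

# Maximum-likelihood (least-squares) estimates of intercept and slope.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - 2)        # unbiased estimate of noise variance

x0 = 5.0                            # new input at which to predict
v = np.array([1.0, x0])
h = v @ np.linalg.inv(X.T @ X) @ v  # leverage of the new input
t = 2.048                           # approx. t-quantile, 28 dof, 95%
y0 = v @ beta

ci = t * np.sqrt(s2 * h)            # confidence-interval half-width
pi = t * np.sqrt(s2 * (1.0 + h))    # prediction-interval half-width
# The prediction interval is always the wider of the two: it adds the
# noise variance to the uncertainty in the fitted regression line.
print(f"fit at x0 = {x0}: {y0:.2f}, CI half-width {ci:.2f}, "
      f"PI half-width {pi:.2f}")
```

The same two notions carry over to feed-forward networks, where the fitted function is nonlinear in the parameters; the chapter surveys techniques for that case.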
|Item Type:||Book Section|
|Additional Information:||Citation: Dybowski, R., Roberts, S. J. (2001) ‘Confidence Intervals and Prediction Intervals for Feed-Forward Neural Networks’. In Dybowski, R. & Gant, V. (eds.), Clinical Applications of Artificial Neural Networks, Cambridge University Press, 2001, pp. 298–326.|
|Divisions:||Schools > Architecture Computing and Engineering, School of|
|Depositing User:||Mr Stephen Grace|
|Date Deposited:||02 Nov 2009 10:52|
|Last Modified:||27 Sep 2012 11:58|