Dybowski, Richard and Roberts, Stephen J. (2001) ‘Confidence Intervals and Prediction Intervals for Feed-Forward Neural Networks’, Clinical Applications of Artificial Neural Networks.
The chapter opens with an introduction to regression and its implementation within the maximum-likelihood framework. This is followed by a general introduction to classical confidence intervals and prediction intervals. We set the scene by first considering confidence and prediction intervals based on univariate samples, and then we progress to considering these intervals in the context of linear regression and logistic regression. Since a feed-forward neural network is a type of regression model, the concepts of confidence and prediction intervals are applicable to these networks, and we look at several techniques for constructing such intervals via maximum-likelihood estimation. An alternative to the maximum-likelihood framework is Bayesian statistics, and we examine the notions of Bayesian confidence and prediction intervals as applied to feed-forward networks. This includes a critique of Bayesian confidence intervals and classification.
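To make the distinction concrete, the classical univariate case mentioned above can be sketched as follows. This is an illustrative sketch only (the function name `interval_bounds` and the use of a large-sample normal critical value are our own choices, not taken from the chapter): a confidence interval bounds the unknown mean, while a prediction interval bounds a new observation and is therefore always wider.

```python
import math

def interval_bounds(sample, crit=1.96, prediction=False):
    """Classical interval for a univariate sample with assumed
    normally distributed errors.

    crit: critical value (z or t quantile); 1.96 is roughly the
          95% normal quantile, appropriate for large n.
    prediction=False -> confidence interval for the mean;
    prediction=True  -> prediction interval for a new observation.
    """
    n = len(sample)
    mean = sum(sample) / n
    # Unbiased sample standard deviation.
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    # CI half-width: crit * s / sqrt(n).
    # PI half-width adds the variance of a single new draw,
    # giving crit * s * sqrt(1 + 1/n).
    factor = math.sqrt(1.0 + 1.0 / n) if prediction else math.sqrt(1.0 / n)
    half = crit * s * factor
    return mean - half, mean + half

sample = [1.0, 2.0, 3.0, 4.0, 5.0]
lo_ci, hi_ci = interval_bounds(sample)
lo_pi, hi_pi = interval_bounds(sample, prediction=True)
```

For small samples one would replace `crit` with the appropriate Student-t quantile; the same CI/PI distinction carries over to regression models, including feed-forward networks, where the predicted mean response plays the role of the sample mean.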
|Divisions:||Schools > Architecture Computing and Engineering, School of|
|Additional Information:||Citation: Dybowski, R. and Roberts, S. J. (2001) ‘Confidence Intervals and Prediction Intervals for Feed-Forward Neural Networks’. In: Dybowski, R. and Gant, V. (eds.), Clinical Applications of Artificial Neural Networks. Cambridge University Press, pp. 298–326.|
|Date Deposited:||02 Nov 2009 10:52|
|Item Type:||Book Section|
|Creators:||Dybowski, Richard and Roberts, Stephen J.|
|Publisher:||Cambridge University Press|
|Last Modified:||27 Sep 2012 11:58|
|Depositing User:||Stephen Grace|