Ordinary least squares (OLS) linear regression gives a point estimate of the weight vector that satisfies the closed-form formula $\hat{w} = (X^T X)^{-1} X^T y$. If we assume normality of the errors, i.e., $y = Xw + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$ and a fixed point estimate of $\sigma^2$, we can also carry out confidence-interval analysis and prediction for future data (see the discussion at the end of [2]). Instead of point estimates, Bayesian linear regression assumes $w$ and $\sigma^2$ are random variables and learns the posterior distribution of $w$ and $\sigma^2$ from data. In my view, Bayesian linear regression is a more flexible method because it supports incorporating prior knowledge about the parameters, and the posterior distributions it provides enable richer uncertainty analysis and facilitate other tasks [3], for example Thompson sampling in contextual bandit problems, which we will cover in the future. However, it is not a panacea: it does not generally improve prediction accuracy if no informative prior is provided.
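As a minimal sketch of the OLS closed form above (my own code, using numpy; all names are illustrative), the point estimate can be computed by solving the normal equations:

```python
import numpy as np

def ols_fit(X, y):
    """Closed-form OLS estimate: w_hat = (X^T X)^{-1} X^T y."""
    # Solving the normal equations is more stable than inverting X^T X explicitly.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.3, size=100)
print(ols_fit(X, y))  # should be close to true_w
```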
The foundation of Bayesian methods lies in Bayes' theorem, which states:

$$P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}$$
Specifically, in Bayesian linear regression, $D$ represents the observed data $(X, y)$ and $\theta$ refers to $(w, \sigma^2)$. So the formula above can be rewritten as:

$$P(w, \sigma^2 \mid X, y) = \frac{P(y \mid X, w, \sigma^2)\, P(w, \sigma^2)}{P(y \mid X)}$$
The procedure for adopting Bayesian methods is to (1) define the likelihood and proper prior distributions of the parameters of interest; (2) calculate the posterior distribution according to Bayes' theorem; (3) use the posterior distribution for other tasks, such as predicting on new data $x_{new}$.
The likelihood function of linear regression is:

$$P(y \mid X, w, \sigma^2) = \mathcal{N}(y \mid Xw, \sigma^2 I) = \frac{1}{(2\pi\sigma^2)^{N/2}} \exp\!\left(-\frac{1}{2\sigma^2}(y - Xw)^T (y - Xw)\right),$$
which can be read as the probability of observing the data $y$ if we assume each $y_i$ is normally distributed with mean $x_i^T w$ and variance $\sigma^2$.
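As a quick illustration (my own sketch, using scipy; function and variable names are mine), the log-likelihood of the data under a candidate $(w, \sigma^2)$ can be evaluated directly:

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(w, sigma2, X, y):
    """Sum of log N(y_i | x_i^T w, sigma^2) over all observations."""
    return norm.logpdf(y, loc=X @ w, scale=np.sqrt(sigma2)).sum()
```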
To get an analytical expression of the posterior distribution $P(w, \sigma^2 \mid X, y)$, we usually require that the prior be in the same probability distribution family as the likelihood. Here we can treat the likelihood as a two-dimensional exponential family (the concept is illustrated in chapter 4 of [6]), a distribution with respect to $w$ and $\sigma^2$. Therefore, the prior, $P(w, \sigma^2)$, can be modeled as a Normal-inverse-Gamma (NIG) distribution:

$$P(w, \sigma^2) = \mathcal{N}(w \mid w_0, \sigma^2 V_0)\; \mathrm{IG}(\sigma^2 \mid a_0, b_0)$$
The inverse-gamma distribution is also called the scaled inverse chi-squared distribution; they only differ in parameterization [9]. We can denote this prior as $NIG(w_0, V_0, a_0, b_0)$. Note that we express $P(w, \sigma^2) = P(w \mid \sigma^2)\, P(\sigma^2)$, meaning that $w$ and $\sigma^2$ are not independent. For one reason, if we modeled $P(w, \sigma^2) = P(w)\, P(\sigma^2)$, we would not get a conjugate prior. Second, if you think of the data as generated by some process governed by $w$ and $\sigma^2$, then $w$ and $\sigma^2$ are dependent conditioned on the data (see section 3 in [8]).
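To make the dependence between $w$ and $\sigma^2$ concrete, here is a small sketch of drawing one sample from the NIG prior (hyperparameter names $w_0, V_0, a_0, b_0$ follow the notation above; implemented with scipy's invgamma and numpy, as an assumption about how one might code it):

```python
import numpy as np
from scipy.stats import invgamma

def sample_nig_prior(w0, V0, a0, b0, rng):
    """Draw (w, sigma^2) ~ NIG(w0, V0, a0, b0): sigma^2 first, then w | sigma^2."""
    sigma2 = invgamma.rvs(a=a0, scale=b0, random_state=rng)
    w = rng.multivariate_normal(mean=w0, cov=sigma2 * V0)
    return w, sigma2
```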
Now, given that the likelihood and the prior are in the same probability distribution family, the posterior distribution is also a NIG distribution:

$$P(w, \sigma^2 \mid X, y) = NIG(w_n, V_n, a_n, b_n),$$

where:

$$V_n = (V_0^{-1} + X^T X)^{-1}$$
$$w_n = V_n (V_0^{-1} w_0 + X^T y)$$
$$a_n = a_0 + \tfrac{N}{2}$$
$$b_n = b_0 + \tfrac{1}{2}\left(w_0^T V_0^{-1} w_0 + y^T y - w_n^T V_n^{-1} w_n\right)$$
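A sketch of this conjugate update in numpy, following the formulas above (names and structure are mine, assuming the NIG parameterization used in this post):

```python
import numpy as np

def nig_posterior(w0, V0, a0, b0, X, y):
    """Conjugate update NIG(w0, V0, a0, b0) -> NIG(wn, Vn, an, bn) given data (X, y)."""
    V0_inv = np.linalg.inv(V0)
    Vn = np.linalg.inv(V0_inv + X.T @ X)
    wn = Vn @ (V0_inv @ w0 + X.T @ y)
    an = a0 + len(y) / 2.0
    # wn^T Vn^{-1} wn, using Vn^{-1} = V0^{-1} + X^T X to avoid a second inverse
    bn = b0 + 0.5 * (w0 @ V0_inv @ w0 + y @ y - wn @ (V0_inv + X.T @ X) @ wn)
    return wn, Vn, an, bn
```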
The posterior predictive distribution (for predicting new data $x_{new}$) is:

$$P(y_{new} \mid x_{new}, X, y) = t_{2a_n}\!\left(x_{new}^T w_n,\; \frac{b_n}{a_n}\left(1 + x_{new}^T V_n x_{new}\right)\right)$$
The result is a Student-t distribution, but the derivation is quite involved; see section 6 in [10] and section 3 in [8] for details. I know from other online posts [11, 12, 13] that practical libraries don't calculate the analytical form of the posterior distribution but instead rely on sampling techniques like MCMC. However, even when they obtain the posterior distribution, I am not sure how they would implement the posterior predictive distribution. This is worth further investigation in the future.
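One common way sampling-based approaches can realize the posterior predictive (a hedged sketch of my own, not tied to any specific library) is to draw $(w, \sigma^2)$ samples from the posterior and then draw $y_{new}$ for each sample; the empirical distribution of those draws approximates the Student-t predictive above:

```python
import numpy as np
from scipy.stats import invgamma

def sample_posterior_predictive(x_new, wn, Vn, an, bn, n_samples=5000, seed=0):
    """Monte Carlo approximation of p(y_new | x_new, X, y) under the NIG posterior."""
    rng = np.random.default_rng(seed)
    y_draws = np.empty(n_samples)
    for i in range(n_samples):
        sigma2 = invgamma.rvs(a=an, scale=bn, random_state=rng)        # sigma^2 | data
        w = rng.multivariate_normal(mean=wn, cov=sigma2 * Vn)          # w | sigma^2, data
        y_draws[i] = rng.normal(loc=x_new @ w, scale=np.sqrt(sigma2))  # y_new | w, sigma^2
    return y_draws  # its histogram approximates the Student-t predictive above
```

With MCMC, the only difference is that the $(w, \sigma^2)$ samples come from the sampler's chain rather than from the analytical NIG posterior.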
Side notes
We touched upon Bayesian linear regression when introducing Bayesian Optimization [1]. [4] is also a good resource on Bayesian linear regression. However, [1] and [4] only assume $w$ is a random variable while $\sigma^2$ remains a fixed point estimate. This post goes fully Bayesian by assuming both $w$ and $\sigma^2$ are random variables whose joint distribution follows the so-called Normal-inverse-Gamma distribution. There aren't many resources in the same vein though. What I've found so far are [5] and section 2 in [7].
References
[1] https://czxttkl.com/?p=3212
[3] https://wso2.com/blog/research/part-two-linear-regression
[4] http://fourier.eng.hmc.edu/e176/lectures/ch7/node16.html
[5] A Guide to Bayesian Inference for Regression Problems: https://www.ptb.de/emrp/fileadmin/documents/nmasatue/NEW04/Papers/BPGWP1.pdf
[6] Bolstad, W. M. (2010). Understanding computational Bayesian statistics (Vol. 644). John Wiley & Sons.
[7] Denison, D. G., Holmes, C. C., Mallick, B. K., & Smith, A. F. (2002). Bayesian methods for nonlinear classification and regression (Vol. 386). John Wiley & Sons.
[8] https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf
[9] https://en.wikipedia.org/wiki/Scaled_inverse_chi-squared_distribution
[10] Murphy, K. P. (2007). Conjugate Bayesian analysis of the Gaussian distribution.
[11] https://wso2.com/blog/research/part-two-linear-regression
[12] https://towardsdatascience.com/introduction-to-bayesian-linear-regression-e66e60791ea7