Theoretical Topics

In this part, we present theoretical topics that are essential for understanding the concepts and techniques used in the preceding chapters. It consists of the following chapters:

In Chapter 25, we cover the theoretical aspects of the simple linear regression model introduced in Chapters 11 and 12. We introduce fundamental concepts of asymptotic theory, including convergence in probability, convergence in distribution, the law of large numbers, and the central limit theorem, and we show how these tools can be used to derive the asymptotic properties of the OLS estimator. We then consider extended least squares assumptions under which the OLS estimators and t-statistics have exact sampling distributions. This chapter is supplemented by Appendix C (Appendix for Chapter 25), which provides further discussion of statistical distributions.
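To give a flavor of these results, consider the simple regression model $Y_i = \beta_0 + \beta_1 X_i + u_i$ with i.i.d. observations (the notation here is purely illustrative; Chapter 25 states the assumptions and definitions formally). The law of large numbers delivers consistency of the OLS slope estimator $\hat{\beta}_1$, and the central limit theorem delivers its asymptotic normality:

$$
\hat{\beta}_1 \xrightarrow{\;p\;} \beta_1,
\qquad
\sqrt{n}\,\bigl(\hat{\beta}_1 - \beta_1\bigr) \xrightarrow{\;d\;} N\!\left(0,\; \frac{\operatorname{var}\bigl[(X_i - \mu_X)\,u_i\bigr]}{\bigl[\operatorname{var}(X_i)\bigr]^{2}}\right).
$$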

Chapter 26 consists of three main sections. In the first section, we cover the theoretical aspects of the multiple linear regression model introduced in Chapters 13 and 14. We introduce matrix notation for the multiple linear regression model and show how to derive the OLS estimator using matrix algebra. We also discuss the asymptotic and exact sampling distributions of the OLS estimator. Finally, we introduce the generalized least squares (GLS) estimator, which requires a stronger form of the zero conditional mean assumption.
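As a brief preview in generic matrix notation (again illustrative; Chapter 26 defines its own symbols), stack the $n$ observations into an outcome vector $Y$ and a regressor matrix $X$. The OLS and GLS estimators then take the forms

$$
\hat{\beta}^{\,OLS} = (X'X)^{-1}X'Y,
\qquad
\hat{\beta}^{\,GLS} = \bigl(X'\Omega^{-1}X\bigr)^{-1} X'\Omega^{-1}Y,
$$

where $\Omega$ denotes the conditional covariance matrix of the errors given $X$.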

In the second section, we revisit the IV regression model and show how to derive the TSLS estimator using matrix algebra. We then discuss the asymptotic and exact sampling distributions of the TSLS estimator. In particular, we show that the TSLS estimator is efficient under the assumption of homoskedasticity.
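For concreteness, with an instrument matrix $Z$ (the notation is again illustrative), the TSLS estimator can be written compactly as

$$
\hat{\beta}^{\,TSLS} = \bigl(X' P_Z X\bigr)^{-1} X' P_Z Y,
\qquad
P_Z = Z(Z'Z)^{-1}Z',
$$

where $P_Z$ projects onto the column space of the instruments, so the first and second stages of TSLS are captured in a single formula.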

In the third section, we cover two classes of estimators: the IV and generalized method of moments (GMM) estimators. Both are based on minimizing a quadratic form that incorporates all moment conditions. For the linear IV regression model, these two classes of estimators are equivalent. We also show that the TSLS estimator is a special case of the GMM estimator. Furthermore, we derive an efficient GMM estimator by selecting an optimal weighting matrix for the quadratic form. This efficient GMM estimator is then used to formulate a heteroskedasticity-robust J-statistic for testing overidentifying restrictions.
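Schematically, and in the same illustrative notation, the GMM estimator for the linear IV model minimizes a quadratic form in the sample moment conditions $Z'(Y - Xb)$:

$$
\hat{\beta}^{\,GMM}(A) = \operatorname*{arg\,min}_{b}\; (Y - Xb)'\,Z\,A\,Z'\,(Y - Xb),
$$

where $A$ is a positive definite weighting matrix. Choosing $A = (Z'Z)^{-1}$ reproduces the TSLS estimator, while the efficient GMM estimator replaces $A$ with a consistent estimator of the inverse covariance matrix of the moments, which leads directly to the heteroskedasticity-robust J-statistic.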

Chapter 26 is supplemented by Appendix D (Appendix for Chapter 26), which reviews linear algebra, matrix calculus, and random vectors and their distributions. We recommend that readers review this material before reading Chapter 26.