Title | Asymptotic Statistical Inference
Subtitle | A Basic Course Using R
Authors | Shailaja Deshmukh, Madhuri Kulkarni
Type | Textbook, 2021
Highlights | Presents fundamental concepts from asymptotic statistical inference theory, illustrated by R software. Contains numerous examples, conceptual and computational exercises based on R, and MCQs to clarify …
Description | The book presents the fundamental concepts from asymptotic statistical inference theory, elaborating on some basic large sample optimality properties of estimators and some test procedures. The most desirable property of consistency of an estimator and its large sample distribution, with suitable normalization, are discussed, the focus being on consistent and asymptotically normal (CAN) estimators. It is shown that for probability models belonging to an exponential family and a Cramér family, the maximum likelihood estimators of the indexing parameters are CAN. The book describes some large sample test procedures, in particular the most frequently used likelihood ratio test procedure. Various applications of the likelihood ratio test procedure are addressed when the underlying probability model is a multinomial distribution; these include tests for goodness of fit and tests for contingency tables. The book also discusses a score test and Wald's test, and their relationship with the likelihood ratio test and Karl Pearson's chi-square test. An important finding is that, while testing any hypothesis about the parameters of a multinomial distribution, a score test statistic …
1
Front Matter
2
Introduction
Shailaja Deshmukh, Madhuri Kulkarni

Abstract
Chapter 1 is introductory. It discusses the basic framework of parametric statistical inference and elaborates on the identifiability property of the probability distribution with some illustrations, as identifiability is basic to all statistical inference procedures and data analysis. An estimator is defined as a Borel measurable function from the sample space to the parameter space. In the present book, the focus is on the discussion of large sample optimality properties of estimators and test procedures. Various results from parametric statistical inference for finite sample size form a foundation of asymptotic statistical inference theory; Section 1.2 briefly discusses these results. The principal probability tool in asymptotic investigation is the convergence of a sequence of random variables. As the sample size increases, we study the limiting behavior of a sequence of estimators of a parameter and examine how close it is to the true parameter value using various modes of convergence. For ready reference, some modes of convergence are defined and various related results are listed in Sect. 1.3. The novelty of the book is the use of R software to illustrate various concepts from asymptotic inference. The last section of every chapter …
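To give a flavour of the R-based illustrations the book advertises, the following minimal sketch (not taken from the book; the exponential model, the threshold eps and the sample sizes are assumed purely for illustration) simulates convergence in probability of the sample mean: the estimated probability that the sample mean deviates from the true mean by more than eps shrinks as the sample size grows.

```r
## Minimal sketch (assumed example): convergence in probability of the sample mean
set.seed(1)
theta <- 2                       # true mean of an Exponential(rate = 1/theta) model
eps   <- 0.1                     # proximity threshold
nvals <- c(10, 50, 200, 1000, 5000)
nsim  <- 5000                    # Monte Carlo replications per sample size

prob_far <- sapply(nvals, function(n) {
  xbar <- replicate(nsim, mean(rexp(n, rate = 1 / theta)))
  mean(abs(xbar - theta) > eps)  # estimate of P(|sample mean - theta| > eps)
})
data.frame(n = nvals, P_far = round(prob_far, 3))
```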
3
Consistency of an Estimator
Shailaja Deshmukh, Madhuri Kulkarni

Abstract
As discussed in Chap. 1, in asymptotic inference theory we study the limiting behavior of a sequence of estimators of a parameter and examine how close it is to the true parameter value using various modes of convergence. The most frequently investigated large sample property of an estimator is weak consistency. Weak consistency of an estimator is defined in terms of convergence in probability: we examine how close the estimator is to the true parameter value in terms of the probability of proximity. Weak consistency is commonly referred to simply as consistency in the literature. In Sect. 2.1, we define it for a real parameter and illustrate it with a variety of examples. We study some properties of consistent estimators, the most important being the invariance of consistency under continuous transformation. Strong consistency and uniform consistency of an estimator are discussed briefly in the subsequent sections. We then define consistency when the distribution of a random variable or a random vector is indexed by a vector parameter. It is defined in two ways, as marginal consistency and as joint consistency, and the two approaches are shown to be equivalent. This result is heavily used in applications. Thus, to obtain a consistent estimator …
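As an illustration of weak consistency and of its invariance under continuous transformation, the following sketch (an assumed example, not the book's code; the Uniform(0, theta) model and the chosen constants are illustrative) estimates the probability that the sample maximum, and its square, stray from theta and theta^2 respectively, for increasing sample sizes.

```r
## Minimal sketch (assumed example): consistency of the sample maximum for theta
## in Uniform(0, theta), and of its square for theta^2 (continuous invariance).
set.seed(2)
theta <- 3
eps   <- 0.05
nvals <- c(20, 100, 500, 2500)
nsim  <- 5000

res <- t(sapply(nvals, function(n) {
  tmax <- replicate(nsim, max(runif(n, min = 0, max = theta)))
  c(P_far_Tn  = mean(abs(tmax   - theta)   > eps),   # T_n close to theta?
    P_far_gTn = mean(abs(tmax^2 - theta^2) > eps))   # g(T_n) close to g(theta)?
}))
cbind(n = nvals, round(res, 3))
```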
4
Consistent and Asymptotically Normal Estimators
Shailaja Deshmukh, Madhuri Kulkarni

Abstract
Chapter 3 addresses the concept of consistent and asymptotically normal (CAN) estimators. Suppose an estimator is consistent for a parameter. Since convergence in probability implies convergence in law, the estimator also converges in law to the true parameter value, and thus its asymptotic distribution is degenerate at that value. Such a degenerate distribution is not helpful for finding the rate of convergence or for constructing an interval estimator of the parameter. Hence, we try to find a blowing factor such that the suitably normalized estimator has a non-degenerate asymptotic distribution. In particular, we find a sequence of positive real numbers, tending to infinity with the sample size, such that the estimation error scaled by this sequence has a non-degenerate asymptotic distribution; it is of particular interest to find such a sequence for which this asymptotic distribution is normal. Estimators for which the large sample distribution of the suitably normalized estimation error is normal are known as CAN estimators. These play a key role in large sample inference theory, in particular in constructing large sample confidence intervals and in approximating the distribution of test statistics in large sample test procedures. We discuss the variance stabilization technique and the studentization technique to construct large sample confidence intervals. In the subsequent sections …
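The following sketch (an assumed Poisson example, not reproduced from the book) illustrates the CAN property of the sample mean and the two confidence interval constructions mentioned above: a studentized interval and an interval based on the square-root variance stabilizing transformation.

```r
## Minimal sketch (assumed example): CAN property of the sample mean for
## Poisson(lambda) data, with two large sample 95% confidence intervals.
set.seed(3)
lambda <- 4
n      <- 200
x      <- rpois(n, lambda)
xbar   <- mean(x)
z      <- qnorm(0.975)

## Studentized interval: Var(X) = lambda is estimated by xbar
ci_student <- xbar + c(-1, 1) * z * sqrt(xbar / n)

## Variance stabilizing transformation g(x) = sqrt(x):
## sqrt(xbar) is approximately N(sqrt(lambda), 1/(4n)), so transform back
ci_vst <- (sqrt(xbar) + c(-1, 1) * z / (2 * sqrt(n)))^2

## Monte Carlo check that sqrt(n) * (xbar - lambda) is roughly N(0, lambda)
zstat <- replicate(5000, sqrt(n) * (mean(rpois(n, lambda)) - lambda))
list(studentized = ci_student, variance_stabilized = ci_vst,
     normalized_mean = mean(zstat), normalized_var = var(zstat), target_var = lambda)
```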
5
CAN Estimators in Exponential and Cramér Families
Shailaja Deshmukh, Madhuri Kulkarni

Abstract
Chapters 2 and 3 present the theory related to consistent and CAN estimators. Chapter 4 is concerned with the study of a CAN estimator of a parameter when the probability distribution of the observations belongs to a specific family of distributions, such as an exponential family or a Cramér family. An exponential family is a subclass of a Cramér family. We prove that in a one-parameter exponential family and in a multiparameter exponential family, the maximum likelihood estimator and the moment estimator based on a sufficient statistic are the same, and these are CAN estimators. A subsequent section presents the Cramér-Huzurbazar theory for distributions belonging to a Cramér family. This theory, usually referred to as the standard large sample theory of maximum likelihood estimation, asserts that for a large sample size, with high probability, the maximum likelihood estimator of a parameter is a CAN estimator. These results are heavily used in Chaps. 5 and 6 to find the asymptotic null distribution of the likelihood ratio test statistic, Wald's test statistic and the score test statistic. In many models, the system of likelihood equations cannot be solved explicitly and we need …
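As an illustration of a Cramér-family model in which the likelihood equation has no explicit solution, the following sketch (an assumed example; the Cauchy location model, the sample size and the search interval are illustrative choices) computes the maximum likelihood estimator numerically and compares its sampling variance with the asymptotic variance 1/(n I(theta)) = 2/n predicted by the large sample theory.

```r
## Minimal sketch (assumed example): numerical MLE of the Cauchy location
## parameter, a Cramér-family model with no closed-form likelihood equation.
set.seed(4)
theta <- 1
n     <- 400

negloglik <- function(mu, x) -sum(dcauchy(x, location = mu, log = TRUE))

mle_once <- function() {
  x <- rcauchy(n, location = theta)
  ## search around the sample median, a convenient consistent starting guess
  optimize(negloglik, interval = median(x) + c(-2, 2), x = x)$minimum
}

mles <- replicate(2000, mle_once())
## Large sample theory: the MLE is CAN with variance 1/(n I(theta));
## for the Cauchy location model I(theta) = 1/2, giving 2/n.
c(mean_mle = mean(mles), empirical_var = var(mles), asymptotic_var = 2 / n)
```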
6
Large Sample Test Procedures
Shailaja Deshmukh, Madhuri Kulkarni

Abstract
In Chaps. 2, 3, and 4, we discussed point estimation of a parameter and studied the large sample optimality properties of the estimators. We also discussed interval estimation for large samples. The present chapter and the next are devoted to large sample test procedures, and all the results about estimators established in Chaps. 2, 3, and 4 are heavily used in both chapters. Most of the theory of testing of hypotheses has revolved around the Neyman-Pearson lemma, which leads to the most powerful test for a simple null against a simple alternative hypothesis. It also leads to uniformly most powerful tests in certain models, in particular for exponential families. The likelihood ratio test procedure, which we discuss in the second section, is in some sense an extension of the Neyman-Pearson lemma. The likelihood ratio test procedure is the most general test procedure, applicable whether the parameter space is a subset of the real line or of a higher-dimensional Euclidean space. Whenever an optimal test exists, such as a most powerful test, a uniformly most powerful test, or a uniformly most powerful unbiased test, the likelihood ratio test procedure leads to the optimal test procedure. In Chap. 5, we discuss the likelihood ratio test procedure when the …
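The following sketch (an assumed Poisson example, not the book's code) illustrates the likelihood ratio test procedure for a simple null hypothesis about the Poisson mean, together with a Monte Carlo check that the statistic -2 log(Lambda) behaves like a chi-square variable with one degree of freedom under the null hypothesis.

```r
## Minimal sketch (assumed example): likelihood ratio test of H0: lambda = lambda0
## for Poisson data; -2 log(Lambda) is asymptotically chi-square with 1 df.
set.seed(5)
lambda0 <- 2
n       <- 150

lrt_stat <- function(x, lambda0) {
  n       <- length(x)
  lam_hat <- mean(x)                          # unrestricted MLE
  2 * n * (lam_hat * log(lam_hat / lambda0) - lam_hat + lambda0)
}

x    <- rpois(n, lambda = 2.3)                # data generated slightly off H0
stat <- lrt_stat(x, lambda0)
c(statistic = stat, p_value = pchisq(stat, df = 1, lower.tail = FALSE))

## Under H0 the rejection rate at level 0.05 should be close to 0.05
null_stats <- replicate(5000, lrt_stat(rpois(n, lambda0), lambda0))
mean(null_stats > qchisq(0.95, df = 1))
```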
7
Goodness of Fit Test and Tests for Contingency Tables
Shailaja Deshmukh, Madhuri Kulkarni

Abstract
Chapter 6 presents the theory related to the asymptotic null distribution in goodness of fit test procedures and in tests for contingency tables. All tests related to contingency tables and all goodness of fit tests are likelihood ratio tests when the underlying probability model is a multinomial distribution. Section 6.2 is devoted to a study of the multinomial distribution, where we discuss the maximum likelihood estimation of cell probabilities and study the asymptotic properties of these estimators; some tests associated with the multinomial distribution are also developed. Section 6.3 presents the role of the multinomial distribution in goodness of fit tests, which are essentially tests for the validity of the model. In goodness of fit test procedures, the most frequently used test statistic is Karl Pearson's chi-square statistic. We prove that the likelihood ratio test statistic and Pearson's test statistic for testing some hypotheses in a multinomial distribution are equivalent, in the sense that their asymptotic null distributions are the same. In Sect. 6.4, we study Wald's test procedure and the score test procedure. It is proved that the asymptotic null distribution of the likelihood ratio test statistic …
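As an illustration of the equivalence discussed above, the following sketch (hypothetical counts, assumed for illustration only) computes Karl Pearson's chi-square statistic and the likelihood ratio (G^2) statistic for a goodness of fit problem and for a test of independence in a contingency table.

```r
## Minimal sketch (hypothetical counts): Pearson's chi-square versus the
## likelihood ratio (G^2) statistic for a multinomial goodness of fit test
## and for a test of independence in a contingency table.
observed <- c(18, 55, 27)                     # assumed cell counts
p0       <- c(0.25, 0.50, 0.25)               # hypothesised cell probabilities
expected <- sum(observed) * p0

pearson <- sum((observed - expected)^2 / expected)
g2      <- 2 * sum(observed * log(observed / expected))
c(pearson = pearson, G2 = g2,
  p_pearson = pchisq(pearson, df = 2, lower.tail = FALSE),
  p_G2      = pchisq(g2,      df = 2, lower.tail = FALSE))

## Hypothetical 2 x 3 contingency table: test of independence
tab     <- matrix(c(20, 30, 25, 15, 35, 40), nrow = 2, byrow = TRUE)
chisq.test(tab)                               # Pearson's chi-square test
exp_tab <- outer(rowSums(tab), colSums(tab)) / sum(tab)
2 * sum(tab * log(tab / exp_tab))             # likelihood ratio (G^2) statistic
```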
8
Solutions to Conceptual Exercises
Shailaja Deshmukh, Madhuri Kulkarni

Abstract
In Chapter 7, the solutions to almost all the conceptual exercises from Chaps. 2 to 6 are presented. Chapter 2 has 36 exercises, while Chap. 3 contains 37 exercises. Chapters 4, 5 and 6 contain 11, 11 and 6 exercises, respectively. There are also 85 multiple choice questions with an answer key.
9
Back Matter