Please note that the content of this book primarily consists of articles available from Wikipedia or other free sources online.

In probability theory, there exist several different notions of convergence of random variables. The convergence (in one of the senses presented below) of a sequence of random variables to some limit random variable is an important concept in probability theory and in its applications to statistics and stochastic processes.

For example, let X_n = (1/n) ∑_{i=1}^n Y_i be the average of n uncorrelated random variables Y_i, i = 1, ..., n, all having the same finite mean μ and the same finite variance. Then, as n tends to infinity, X_n converges in probability (see below) to the common mean μ of the random variables Y_i. This result is known as the weak law of large numbers. Other forms of convergence are important in other useful theorems, including the central limit theorem.

Throughout the following, we assume that (X_n) is a sequence of random variables, X is a random variable, and all of them are defined on the same probability space (Ω, F, P).
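The weak law of large numbers can be illustrated numerically. The following is a minimal sketch, not from the source, assuming i.i.d. uniform(0, 1) draws for the Y_i (so the common mean is μ = 0.5); the running average X_n gets close to μ as n grows.

```python
import random

random.seed(0)

mu = 0.5          # common mean of uniform(0, 1) draws
n_max = 100_000
ys = [random.random() for _ in range(n_max)]

def running_average(values, n):
    """X_n = (1/n) * sum of the first n values."""
    return sum(values[:n]) / n

# The deviation |X_n - mu| tends to shrink as n increases.
for n in (10, 1_000, 100_000):
    print(n, abs(running_average(ys, n) - mu))
```

Convergence in probability only says the deviation is small with high probability for large n; any single run can still fluctuate, which is why the deviation at n = 10 may be much larger than at n = 100,000.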