1
Stochastic Processes: A Brief Review
1.1 Introduction
In this chapter, we introduce the basic mathematical tools we will use. We assume the reader has a good understanding of probability spaces and random variables; for more details we refer to [67, 70]. This chapter is not meant to replace a full textbook treatment: for the fundamentals please consult [70, 117]. Here we review only the notions that are fundamental for the rest of the book.
So, what is a stochastic process? When asked this question, R.A. Fisher famously replied, "What is a stochastic process? Oh, it's just one darn thing after another." We hope to elaborate on Fisher's reply in this introduction.
We start the study of stochastic processes by presenting some commonly assumed properties and characteristics. Generally, these characteristics simplify the analysis of stochastic processes. However, a stochastic process with these properties will have simplified dynamics, and the resulting models may not be complex enough to capture real-life behavior. In Section 1.6 of this chapter, we introduce the simplest stochastic process: the coin toss process (also known as the Bernoulli process), which produces the simple random walk.
We start with the definition of a stochastic process.
Definition 1.1.1
Given a probability space $(\Omega, \mathcal{F}, P)$, a stochastic process is any collection of random variables $\{X_t\}_{t \in T}$ defined on this probability space, where $T$ is an index set. The notations $X_t$ and $X(t)$ are used interchangeably to denote the value of the stochastic process at index value $t$.
Specifically, for any fixed $t \in T$ the resulting $X_t$ is just a random variable. However, what makes the index set $T$ special is that it confers a certain structure on the collection of random variables. This will be explained next.
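Before turning to these general characteristics, here is a concrete example, anticipating the coin toss (Bernoulli) process of Section 1.6; the specific $\pm 1$ step values are only an illustrative choice. Let $\xi_1, \xi_2, \dots$ be independent random variables on the same probability space, each taking the values $+1$ and $-1$ with probability $1/2$, and define
$$X_0 = 0, \qquad X_n = \sum_{i=1}^{n} \xi_i, \quad n \in \mathbb{N}.$$
For every fixed $n$, $X_n$ is a random variable, and the index $n$ orders this collection in time; the resulting process is the simple random walk.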
1.2 General Characteristics of Stochastic Processes
1.2.1 The Index Set
The set $T$ indexes the random variables and determines the type of stochastic process. This set can be quite general, but here are some common examples:
- If $T = \mathbb{N} = \{0, 1, 2, \dots\}$ or an equivalent countable set, we obtain the so-called discrete-time stochastic processes. We shall often write the process as $\{X_n\}_{n \in \mathbb{N}}$ in this case.
- If $T = [0, \infty)$ we obtain the continuous-time stochastic processes. We shall write the process as $\{X_t\}_{t \ge 0}$ in this case. Most of the time the index $t$ represents time.
- The index set can be multidimensional. For example, with $T = \mathbb{Z} \times \mathbb{Z}$, we may be describing a discrete random field where at any combination $(i, j) \in T$ we have a value $X_{i,j}$, which may represent node weights in a two-dimensional graph. If $T = \mathbb{R} \times \mathbb{R}$ we may be describing the structure of some surface where, for instance, $X_{(x,y)}$ could be the value of the electrical field intensity at position $(x, y)$.
1.2.2 The State Space
The state space $S$ is the space in which all the random variables $X_t$ take their values. Since we are discussing random variables and random vectors, necessarily $S \subseteq \mathbb{R}$ or $S \subseteq \mathbb{R}^d$. Again, we have several important examples:
- If $S \subseteq \mathbb{Z}$, then the process is integer valued, or a process with a discrete state space.
- If $S = \mathbb{R}$, then $\{X_t\}$ is a real-valued process, or a process with a continuous state space.
- If $S = \mathbb{R}^d$, then $\{X_t\}$ is a $d$-dimensional vector process.
The state space can be more general (for example, an abstract Lie algebra), in which case the definitions work very similarly except that for each $t$ we have a measurable function $X_t : \Omega \to S$.
We recall that a real-valued function $f$ defined on $\Omega$ is called measurable with respect to a sigma algebra $\mathcal{F}$ in that space if the inverse image of a set $B$, defined as $f^{-1}(B) = \{\omega \in \Omega : f(\omega) \in B\}$, is a set in the sigma algebra $\mathcal{F}$, for all Borel sets $B$ of $\mathbb{R}$.
A sigma algebra $\mathcal{F}$ is a collection of subsets of $\Omega$ satisfying the following conditions:
- $\emptyset \in \mathcal{F}$.
- If $A \in \mathcal{F}$ then its complement $A^c = \Omega \setminus A \in \mathcal{F}$.
- If $A_1, A_2, \dots$ is a countable collection of sets in $\mathcal{F}$ then their union $\bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$.
Suppose we have a random variable $X$ defined on a space $(\Omega, \mathcal{F}, P)$. The sigma algebra generated by $X$ is the smallest sigma algebra in $\Omega$ that contains all the pre-images of Borel sets in $\mathbb{R}$ through $X$. That is,
$$\sigma(X) = \sigma\left( \left\{ X^{-1}(B) : B \text{ a Borel set of } \mathbb{R} \right\} \right).$$
This abstract concept is necessary to make sure that we may calculate any probability related to the random variable $X$.
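As a small worked example (an illustration added here, not part of the development above), take $X = \mathbf{1}_A$, the indicator of an event $A \in \mathcal{F}$. The only pre-images of Borel sets through $X$ are $\emptyset$, $A$, $A^c$, and $\Omega$, so
$$\sigma(X) = \{\emptyset, A, A^c, \Omega\},$$
which is indeed the smallest sigma algebra with respect to which $X$ is measurable.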
1.2.3 Adaptiveness, Filtration, and Standard Filtration
In the special case when the index set $T$ possesses a total order relationship,1 we can discuss the information contained in the process at some moment $t \in T$. To quantify this information we generalize the notion of sigma algebra by introducing an increasing sequence of sigma algebras: the filtration.
Definition 1.2.1 (Filtration).
A probability space $(\Omega, \mathcal{F}, P)$ is a filtered probability space if and only if there exists a sequence of sigma algebras $\{\mathcal{F}_t\}_{t \in T}$ included in $\mathcal{F}$ such that $\{\mathcal{F}_t\}_{t \in T}$ is an increasing collection, i.e.:
$$\mathcal{F}_s \subseteq \mathcal{F}_t \quad \text{for all } s, t \in T \text{ with } s \le t.$$
The filtration is called complete if its first element contains all the null sets of $\mathcal{F}$. If, for example, $0$ is the first element of the index set (the usual situation), then the requirement is that $N \in \mathcal{F}_0$ for every $N \in \mathcal{F}$ with $P(N) = 0$. If this notion of completeness is not satisfied, all sorts of contradictions and counterexamples may arise. To avoid any such case we shall assume that every filtration defined in this book is complete, and that all filtered probability spaces are complete.
In the particular case of continuous time (i.e., $T = [0, \infty)$), it makes sense to discuss what happens with the filtration when two consecutive times get close to one another. For some specific time $t$ we define the left and right sigma algebras:
$$\mathcal{F}_{t^-} = \sigma\left( \bigcup_{s < t} \mathcal{F}_s \right), \qquad \mathcal{F}_{t^+} = \bigcap_{s > t} \mathcal{F}_s.$$
The countable intersection of sigma algebras is always a sigma algebra [67], but a union of sigma algebras is not necessarily a sigma algebra. This is why we modified the definition of $\mathcal{F}_{t^-}$ slightly. The notation $\sigma(\mathcal{C})$ represents the smallest sigma algebra that contains the collection of sets $\mathcal{C}$.
Definition 1.2.2 (Right and Left Continuous Filtrations).
A filtration is right continuous if and only if $\mathcal{F}_t = \mathcal{F}_{t^+}$ for all $t$, and the filtration is left continuous if and only if $\mathcal{F}_t = \mathcal{F}_{t^-}$ for all $t$.
In general we shall assume throughout (if applicable) that any filtration is right continuous.
Definition 1.2.3 (Adapted Stochastic Process).
A stochastic process $\{X_t\}_{t \in T}$ defined on a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \in T}, P)$ is called adapted if and only if $X_t$ is $\mathcal{F}_t$-measurable for any $t \in T$.
This is an important concept since, in general, $\mathcal{F}_t$ quantifies the flow of information available at any moment $t$. By requiring that the process be adapted, we ensure that we can calculate probabilities related to $X_t$ based solely on the information available at time $t$. Furthermore, since the filtration is by definition increasing, this also says that we can calculate these probabilities at any later moment in time as well.
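To see that the requirement is not automatic, consider the following standard illustration (added here; it is not tied to any particular process in this chapter). Take the filtration $\mathcal{F}_t = \sigma(X_s : s \le t)$ generated by a process $\{X_t\}_{t \in \mathbb{N}}$ and define the shifted process $Y_t = X_{t+1}$. Each $Y_t$ is $\mathcal{F}_{t+1}$-measurable but, in general, not $\mathcal{F}_t$-measurable, since its value depends on information one step into the future; the shifted process is therefore not adapted to $\{\mathcal{F}_t\}$.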
On the other hand, due to the same increasing property of a filtration, it may not be possible to calculate probabilities related to $X_t$ based only on the information available in $\mathcal{F}_s$ for a moment $s$ earlier than $t$ (i.e., $s < t$). This is the reason why the conditional expectation is a crucial concept for stochastic processes. Recall that $E[X_t \mid \mathcal{F}_s]$ is $\mathcal{F}_s$-measurable. Suppose we are sitting at time $s$ and trying to calculate probabilities related to the random variable $X_t$ at some time $t$ in the future. Even though we may not calculate the probabilities related to $X_t$ directly (nobody can, since time $t$ is in the future), we can still describe its distribution according to our best guess based on the current information. That is precisely $E[X_t \mid \mathcal{F}_s]$.
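For a concrete calculation (a preview using the simple random walk that will be constructed in Section 1.6), let $X_n = \xi_1 + \dots + \xi_n$ with $\xi_1, \xi_2, \dots$ independent, mean-zero steps, and let $\mathcal{F}_n = \sigma(\xi_1, \dots, \xi_n)$. Sitting at time $n$, the best guess for the next value of the process is
$$E[X_{n+1} \mid \mathcal{F}_n] = E[X_n + \xi_{n+1} \mid \mathcal{F}_n] = X_n + E[\xi_{n+1}] = X_n,$$
because $X_n$ is $\mathcal{F}_n$-measurable while $\xi_{n+1}$ is independent of $\mathcal{F}_n$.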
Definition 1.2.4 (Standard Filtration).
In some cases, we are only given a standard probability space $(\Omega, \mathcal{F}, P)$ (without a separate filtration defined on the space). This typically corresponds to the case where we assume that all the information available at time $t$ comes from the stochastic process itself; no external sources of information are available. In this case, we will be using the standard filtration generated by the process itself. Let
$$\mathcal{F}_t = \sigma(X_s : s \in T,\ s \le t)$$
denote the sigma algebra generated by the random variables $X_s$ up to time $t$. The collection of sigma algebras $\{\mathcal{F}_t\}_{t \in T}$ is increasing, and obviously the process $\{X_t\}_{t \in T}$ is adapted with respect to it.
Notation
In the case when the filtration is not specified, we will always construct the standard filtration and denote it with $\{\mathcal{F}_t\}_{t \in T}$.
In the special case when $T = \mathbb{N}$, the set of natural numbers, and the filtration is generated by the process, we will sometimes substitute the notation $\mathcal{F}_n$ instead of $\mathcal{F}_t$. For example, we may write
$$\mathcal{F}_n = \sigma(X_0, X_1, \dots, X_n).$$
1.2.4 Pathwise Realizations
Suppose a stochastic process $\{X_t\}_{t \in T}$ is defined on some probability space $(\Omega, \mathcal{F}, P)$. Recall that, by definition, for every fixed $t$, $X_t$ is a random variable. On the other hand, for every fixed $\omega \in \Omega$ we obtain a particular realization $X_t(\omega)$ for each time $t$. Therefore, for each $\omega$ we can find a collection of numbers $\{X_t(\omega)\}_{t \in T}$ representing one realization of the stochastic process. That is a path. This realization may be thought of as the function $t \mapsto X_t(\omega)$.
This pathwise idea means that we can map each $\omega \in \Omega$ into a function from $T$ into the state space $S$. Therefore, the process may be identified with a subset of all the functions from $T$ into $S$.
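The pathwise viewpoint is easy to visualize by simulation. The short Python sketch below is a minimal illustration added here (it is not part of the text; it assumes the NumPy and Matplotlib libraries are available and uses the simple random walk of Section 1.6 as the underlying process). It draws three realizations $\omega_1, \omega_2, \omega_3$ and plots the corresponding functions $t \mapsto X_t(\omega_i)$, in the spirit of Figure 1.1.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)    # fixed seed so the three paths are reproducible
n_steps = 100                          # number of discrete time steps per path

for i in range(3):                     # three realizations omega_1, omega_2, omega_3
    steps = rng.choice([-1, 1], size=n_steps)        # +/-1 coin-toss steps (Bernoulli process)
    path = np.concatenate(([0], np.cumsum(steps)))   # X_0 = 0, X_n = xi_1 + ... + xi_n
    plt.step(range(n_steps + 1), path, where="post", label=f"path {i + 1}")

plt.xlabel("n")
plt.ylabel("X_n")
plt.legend()
plt.show()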
In Figure 1.1 we plot three different paths, each corresponding to a different realization $\omega_i$, $i = 1, 2, 3$. Due to this pathwise representation, calculating probabilities related to stochastic processes is equivalent to calculating the distribution of these paths over subsets of the two-dimensional space. For example, the probability
$$P\left( \left\{ \omega : X_t(\omega) \in [0, 1] \text{ for all } t \in [0, 1] \right\} \right)$$
is the probability of the paths being in the unit square. However, such a...