Quantitative Finance

 
 
Statistics in Practice (series)
  • 1st edition
  • published November 8, 2019
  • 496 pages
 
E-book | ePUB with Adobe DRM
978-1-118-62988-8 (ISBN)
 

Presents a multitude of topics relevant to the quantitative finance community by combining the best of the theory with the usefulness of applications

Written by accomplished teachers and researchers in the field, this book presents quantitative finance theory through applications to specific practical problems. It comes with accompanying coding techniques in R and MATLAB, as well as generic pseudo-algorithms for modern finance problems. It also offers over 300 examples and exercises that are appropriate for the beginning student as well as the practitioner in the field.

The Quantitative Finance book is divided into four parts. Part One begins by providing readers with the theoretical backdrop needed from probability and stochastic processes. We also present some useful finance concepts used throughout the book. In Part Two of the book we present the classical Black-Scholes-Merton model in a uniquely accessible and understandable way. Implied volatility and local volatility surfaces are also discussed. Next, solutions to partial differential equations (PDEs), wavelets, and Fourier transforms are presented. Several methodologies for pricing options, namely tree methods, the finite difference method, and Monte Carlo simulation methods, are also discussed. We conclude this part with a discussion of stochastic differential equations (SDEs). In the third part of this book, several new and advanced models from the current literature are discussed, such as general Lévy processes, nonlinear PDEs for stochastic volatility models in a market with transaction fees, PDEs for jump-diffusion models with stochastic volatility, and factor and copula models. In Part Four of the book, we conclude with a solid presentation of the typical topics in fixed income securities and derivatives. We discuss models for pricing bonds, marketable securities, credit default swaps (CDS), and securitizations.

  • Classroom-tested over a three-year period with the input of students and experienced practitioners
  • Emphasizes the volatility of financial analyses and interpretations
  • Weaves theory with application throughout the book
  • Utilizes R and MATLAB software programs
  • Presents pseudo-algorithms for readers who do not have access to any particular programming system
  • Supplemented with an extensive author-maintained website that includes helpful teaching hints, data sets, software programs, and additional content

Quantitative Finance is an ideal textbook for upper-undergraduate and beginning graduate students in statistics, financial engineering, quantitative finance, and mathematical finance programs. It will also appeal to practitioners in the same fields.



MARIA C. MARIANI, PHD, is Shigeko K. Chan Distinguished Professor and Chair in the Department of Mathematical Sciences at The University of Texas at El Paso. She currently focuses her research on mathematical finance, stochastic and non-linear differential equations, geophysics, and numerical methods. Dr. Mariani is co-organizer of the Conference on Modeling High-Frequency Data in Finance.

IONUT FLORESCU, PHD, is Research Professor in Financial Engineering at Stevens Institute of Technology. He serves as Director of the Hanlon Laboratories as well as Director of the Financial Analytics program. His main research is in probability and stochastic processes and applications to domains such as finance, computer vision, robotics, earthquake studies, weather studies, and many more. Dr. Florescu is lead organizer of the Conference on Modeling High-Frequency Data in Finance.


1
Stochastic Processes: A Brief Review


1.1 Introduction


In this chapter, we introduce the basic mathematical tools we will use. We assume the reader has a good understanding of probability spaces and random variables; for more details we refer to [67, 70]. This chapter is not meant to replace a textbook on the subject; to get the fundamentals, please consult [70, 117]. Here we only review the notions that are fundamental for the rest of the book.

So, what is a stochastic process? When asked this question, R.A. Fisher famously replied, "What is a stochastic process? Oh, it's just one darn thing after another." We hope to elaborate on Fisher's reply in this introduction.

We start the study of stochastic processes by presenting some commonly assumed properties and characteristics. Generally, these characteristics simplify the analysis of stochastic processes. However, a stochastic process with these properties has simplified dynamics, and the resulting models may not be complex enough to capture real-life behavior. In Section 1.6 of this chapter, we introduce the simplest stochastic processes: the coin toss process (also known as the Bernoulli process) and the simple random walk it produces.
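
As a small preview (our own illustration, not code from the book), the following R snippet simulates the coin toss process and the simple random walk built from it; the choice of 100 steps and the fair-coin probability 0.5 are arbitrary.

  # Simulate a Bernoulli (coin toss) process and the simple random walk it produces.
  # Convention assumed here: each toss contributes +1 with probability p and -1 otherwise.
  set.seed(1)
  n_steps <- 100
  p <- 0.5
  tosses <- ifelse(runif(n_steps) < p, 1, -1)   # the coin toss process X_1, ..., X_n
  walk   <- cumsum(tosses)                      # the simple random walk S_n = X_1 + ... + X_n
  plot(0:n_steps, c(0, walk), type = "s",
       xlab = "n", ylab = expression(S[n]),
       main = "One path of a simple random walk")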

We start with the definition of a stochastic process.

Definition 1.1.1


Given a probability space $(\Omega, \mathcal{F}, P)$, a stochastic process is any collection of random variables $\{X_t\}_{t \in I}$ defined on this probability space, where $I$ is an index set. The notations $X_t$ and $X(t)$ are used interchangeably to denote the value of the stochastic process at index value $t$.

Specifically, for any fixed $t \in I$, the resulting $X_t$ is just a random variable. However, what makes this index set special is that it confers on the collection of random variables a certain structure. This will be explained next.

1.2 General Characteristics of Stochastic Processes


1.2.1 The Index Set


The set $I$ indexes and determines the type of stochastic process. This set can be quite general, but here are some examples:

  • If $I = \mathbb{N} = \{1, 2, \dots\}$ or equivalent, we obtain the so-called discrete-time stochastic processes. We shall often write the process as $\{X_n\}_{n \in \mathbb{N}}$ in this case.
  • If $I = [0, \infty)$, we obtain the continuous-time stochastic processes. We shall write the process as $\{X_t\}_{t \ge 0}$ in this case. Most of the time $t$ represents time.
  • The index set can be multidimensional. For example, with $I = \mathbb{N} \times \mathbb{N}$, we may be describing a discrete random field where at any combination $(i, j)$ we have a value $X_{i,j}$ which may represent some node weights in a two-dimensional graph. If $I = [0, 1] \times [0, 1]$, we may be describing the structure of some surface where, for instance, $X_{(x, y)}$ could be the value of some electrical field intensity at position $(x, y)$ (a small R illustration follows this list).
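
To make the first and third bullets concrete, here is a small R illustration (ours, not from the book) of a discrete-time process indexed by $n = 1, \dots, N$ and of a discrete random field indexed by pairs $(i, j)$ on a grid; the normal distribution used to generate the values is an arbitrary choice.

  # A discrete-time process: one random variable per index n = 1, ..., N.
  set.seed(2)
  N <- 10
  X_discrete <- rnorm(N)    # X_1, ..., X_N indexed by the natural numbers

  # A discrete random field: one value per node (i, j) of a 10 x 10 grid,
  # e.g. node weights in a two-dimensional graph.
  grid_side <- 10
  X_field <- matrix(rnorm(grid_side^2), nrow = grid_side, ncol = grid_side)
  X_field[3, 7]             # the value attached to the index pair (3, 7)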

1.2.2 The State Space


The state space $S$ is the space in which all the random variables $X_t$ take their values. Since we are discussing random variables and random vectors, necessarily $S \subseteq \mathbb{R}$ or $S \subseteq \mathbb{R}^d$. Again, we have several important examples:

  • If $S \subseteq \mathbb{Z}$, then the process is integer valued, or a process with a discrete state space.
  • If $S = \mathbb{R}$, then $X_t$ is a real-valued process, or a process with a continuous state space.
  • If $S = \mathbb{R}^d$, then $X_t$ is a $d$-dimensional vector process.

The state space can be more general (for example, an abstract Lie algebra), in which case the definitions work very similarly except that for each $t$ we have measurable functions $X_t : \Omega \to S$.

We recall that a real-valued function $f$ defined on $\Omega$ is called measurable with respect to a sigma algebra $\mathcal{F}$ in that space if the inverse image of a set $B$, defined as $f^{-1}(B) = \{\omega \in \Omega : f(\omega) \in B\}$, is a set in the sigma algebra $\mathcal{F}$, for all Borel sets $B$ of $\mathbb{R}$.

A sigma algebra $\mathcal{F}$ is a collection of sets of $\Omega$ satisfying the following conditions:

  1. $\emptyset \in \mathcal{F}$.
  2. If $A \in \mathcal{F}$ then its complement $A^{c} \in \mathcal{F}$.
  3. If $A_1, A_2, \dots$ is a countable collection of sets in $\mathcal{F}$ then their union $\bigcup_{i=1}^{\infty} A_i \in \mathcal{F}$.

Suppose we have a random variable $X$ defined on a space $(\Omega, \mathcal{F}, P)$. The sigma algebra generated by $X$, denoted $\sigma(X)$, is the smallest sigma algebra in $\Omega$ that contains all the preimages of Borel sets of $\mathbb{R}$ through $X$. That is,

$$\sigma(X) = \sigma\big(\{X^{-1}(B) : B \text{ a Borel set of } \mathbb{R}\}\big).$$

This abstract concept is necessary to make sure that we may calculate any probability related to the random variable $X$.
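
As a quick worked example (ours): take a single coin toss, $\Omega = \{H, T\}$, and let $X$ be the indicator of heads, so $X(H) = 1$ and $X(T) = 0$. The preimages of Borel sets through $X$ are only $\emptyset$, $\{H\}$, $\{T\}$, and $\Omega$, so

$$\sigma(X) = \{\emptyset, \{H\}, \{T\}, \Omega\},$$

which is exactly the collection of events whose probabilities are determined by observing $X$.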

1.2.3 Adaptiveness, Filtration, and Standard Filtration


In the special case when the index set $I$ possesses a total order relationship, we can discuss the information contained in the process at some moment $t$. To quantify this information we generalize the notion of sigma algebra by introducing a sequence of sigma algebras: the filtration.

Definition 1.2.1 (Filtration).


A probability space $(\Omega, \mathcal{F}, P)$ is a filtered probability space if and only if there exists a sequence of sigma algebras $\{\mathcal{F}_t\}_{t \in I}$ included in $\mathcal{F}$ such that $\{\mathcal{F}_t\}_{t \in I}$ is an increasing collection, i.e.:

$$\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}, \qquad \text{for all } s, t \in I \text{ with } s \le t.$$

The filtration is called complete if its first element contains all the null sets of $\mathcal{F}$. If, for example, $0$ is the first element of the index set (the usual situation), then the requirement is that every $P$-null set of $\mathcal{F}$ belongs to $\mathcal{F}_0$. When this notion of a complete filtration is not satisfied, all sorts of contradictions and counterexamples may arise. To avoid any such case we shall assume that any filtration defined in this book is complete and that all filtered probability spaces are complete.

In the particular case of continuous time (i.e. $I = [0, \infty)$), it makes sense to discuss what happens with the filtration when two consecutive times get close to one another. For some specific time $t$ we define the left and right sigma algebras:

$$\mathcal{F}_{t^-} = \sigma\Big(\bigcup_{s < t} \mathcal{F}_s\Big), \qquad \mathcal{F}_{t^+} = \bigcap_{s > t} \mathcal{F}_s.$$

The countable intersection of sigma algebras is always a sigma algebra [67], but a union of sigma algebras is not necessarily a sigma algebra. This is why we modified the definition of $\mathcal{F}_{t^-}$ slightly. The notation $\sigma(\mathcal{C})$ represents the smallest sigma algebra that contains the collection of sets $\mathcal{C}$.

Definition 1.2.2 (Right and Left Continuous Filtrations).


A filtration is right continuous if and only if $\mathcal{F}_{t^+} = \mathcal{F}_t$ for all $t$, and the filtration is left continuous if and only if $\mathcal{F}_{t^-} = \mathcal{F}_t$ for all $t$.

In general we shall assume throughout (if applicable) that any filtration is right continuous.

Definition 1.2.3 (Adapted Stochastic Process).


A stochastic process $\{X_t\}_{t \in I}$ defined on a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \in I}, P)$ is called adapted if and only if $X_t$ is $\mathcal{F}_t$-measurable for any $t \in I$.

This is an important concept since, in general, $\mathcal{F}_t$ quantifies the flow of information available at any moment $t$. By requiring that the process be adapted, we ensure that we can calculate probabilities related to $X_t$ based solely on the information available at time $t$. Furthermore, since the filtration by definition is increasing, this also says that we can calculate these probabilities at any later moment in time as well.

On the other hand, due to the same increasing property of the filtration, it may not be possible to calculate probabilities related to $X_t$ based only on the information available in $\mathcal{F}_s$ for a moment $s$ earlier than $t$ (i.e. $s < t$). This is the reason why the conditional expectation is a crucial concept for stochastic processes. Recall that $X_t$ is $\mathcal{F}_t$-measurable. Suppose we are sitting at time $s$ and trying to calculate probabilities related to the random variable $X_t$ at some time $t > s$ in the future. Even though we may not calculate the probabilities related to $X_t$ directly (nobody can, since $X_t$ lies in the future), we can still calculate its best guess based on the current information. That is precisely the conditional expectation $E[X_t \mid \mathcal{F}_s]$.
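
To make the best guess interpretation concrete, here is a minimal R sketch (ours, not the authors' code) for the symmetric simple random walk $S_n$: conditionally on the history up to time $n$, the next step has mean zero, so $E[S_{n+1} \mid \mathcal{F}_n] = S_n$, and a Monte Carlo average over fresh coin tosses should land close to the observed value $S_n$.

  # Monte Carlo check that E[S_{n+1} | F_n] = S_n for a symmetric simple random walk.
  # We fix one observed history up to time n and average over many possible next steps.
  set.seed(3)
  n <- 50
  history <- cumsum(sample(c(-1, 1), n, replace = TRUE))   # one realized path up to time n
  S_n <- history[n]
  next_steps <- sample(c(-1, 1), 1e5, replace = TRUE)      # fresh, independent coin tosses
  estimate <- mean(S_n + next_steps)                       # approximates E[S_{n+1} | F_n]
  c(S_n = S_n, estimate = estimate)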

Definition 1.2.4 (Standard Filtration).


In some cases, we are only given a standard probability space $(\Omega, \mathcal{F}, P)$ (without a separate filtration defined on the space). This typically corresponds to the case where we assume that all the information available at time $t$ comes from the stochastic process itself. No external sources of information are available. In this case, we will be using the standard filtration generated by the process itself. Let

$$\mathcal{F}_t = \sigma(X_s : s \in I, \ s \le t)$$

denote the sigma algebra generated by the random variables $X_s$ up to time $t$. The collection of sigma algebras $\{\mathcal{F}_t\}_{t \in I}$ is increasing, and obviously the process is adapted with respect to it.
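
For instance (our example), for the coin toss process $X_1, X_2, \dots$ of Section 1.6 the standard filtration is

$$\mathcal{F}_n = \sigma(X_1, X_2, \dots, X_n), \qquad n \in \mathbb{N},$$

so $\mathcal{F}_n$ records exactly the information revealed by the first $n$ tosses.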

Notation


In the case when the filtration is not specified, we will always construct the standard filtration and denote it with $\{\mathcal{F}_t\}_{t \in I}$.

In the special case when $I = \mathbb{N}$, the set of natural numbers, and the filtration is generated by the process, we will sometimes substitute the notation $X_1, X_2, \dots, X_n$ for $\mathcal{F}_n$. For example we may write

$$E[X_{n+1} \mid X_1, X_2, \dots, X_n] \quad \text{instead of} \quad E[X_{n+1} \mid \mathcal{F}_n].$$

1.2.4 Pathwise Realizations


Suppose a stochastic process $\{X_t\}_{t \in I}$ is defined on some probability space $(\Omega, \mathcal{F}, P)$. Recall that by definition, for every fixed $t$, $X_t$ is a random variable. On the other hand, for every fixed $\omega \in \Omega$ we obtain a particular realization $X_t(\omega)$ for every time $t$; this outcome is typically denoted $x_t$. Therefore, for each $\omega$ we can find a collection of numbers $\{x_t\}_{t \in I}$ representing a realization of the stochastic process. That is a path. This realization may be thought of as the function $t \mapsto X_t(\omega)$.

This pathwise idea means that we can map each $\omega \in \Omega$ into a function from $I$ into $\mathbb{R}$. Therefore, the process may be identified with a subset of all the functions from $I$ into $\mathbb{R}$.
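
The sketch below (ours; it does not reproduce Figure 1.1) draws three realized paths of a rescaled random walk on $[0, 1]$ and also estimates, by Monte Carlo, a path probability of the kind discussed in the next paragraph, namely the chance that a path stays inside the unit square $[0, 1] \times [0, 1]$. The rescaling $1/2 + S_k / (2\sqrt{n})$ is our arbitrary choice, made only so that paths start at $1/2$ and typically remain inside $[0, 1]$.

  # Plot three pathwise realizations of a rescaled random walk on [0, 1] and estimate
  # the probability that such a path stays in [0, 1] for all t in [0, 1].
  set.seed(4)
  n <- 200
  times <- (0:n) / n
  one_path <- function() {
    0.5 + c(0, cumsum(sample(c(-1, 1), n, replace = TRUE))) / (2 * sqrt(n))
  }
  paths <- replicate(3, one_path())               # three realizations omega_1, omega_2, omega_3
  matplot(times, paths, type = "l", lty = 1,
          xlab = "t", ylab = expression(X[t]),
          main = "Three pathwise realizations")
  stays_inside <- replicate(2000, { p <- one_path(); all(p >= 0 & p <= 1) })
  mean(stays_inside)   # Monte Carlo estimate of P(X_t in [0, 1] for all t in [0, 1])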

In Figure 1.1 we plot three different paths, each corresponding to a different realization $\omega_i$, $i = 1, 2, 3$. Due to this pathwise representation, calculating probabilities related to stochastic processes is equivalent to calculating the distribution of these paths in subsets of the two-dimensional space. For example, the probability

$$P\big(X_t \in [0, 1], \ \text{for all } t \in [0, 1]\big)$$

is the probability of the paths being in the unit square. However, such a...
