In the early stages of derivative markets, dedicated models were typically put together to value and risk manage new transaction types as they emerged. After Black and Scholes [5] published a closed-form formula for European options under a constant volatility assumption in 1973, alternative models, like the Cox-Ross-Rubinstein binomial tree [7] of 1979 (later replaced by more efficient finite difference grids), were developed to value American options under the same assumptions.
As trading in derivatives matured, the range of complex transactions expanded and models increased in complexity so that numerical methods became necessary for all but the simplest vanilla products. Models were typically implemented in terms of finite difference grids for transactions with early exercise and Monte-Carlo simulations for products with path-dependency. Notably, models increased in dimension as they grew in complexity, making grids impractical in most cases, and Monte-Carlo simulations became the norm, with early exercises typically supported by a version of the Longstaff-Schwartz regression-based algorithm [22]. Sophisticated models also had to be calibrated before they were used to value and risk manage exotics: their parameters were set to match the market prices of less complex, more liquid derivatives, typically European calls and puts.
Most of the steps involved-calibration, Monte-Carlo path generation, backward induction through finite difference grids-were independent of the transactions being valued; therefore, it became best practice to implement models in terms of generic numerical algorithms, independently of products. Practitioners developed modular libraries, like the simulation library of our publication [27], where transactions were represented in separate code that interacted with models to produce values and risk sensitivities.
However, at that stage, dedicated code was still written for different families of transactions; adding a new product to the library required hard-coding its payoff by hand, then compiling, testing, debugging, and releasing updated software.
The modular logic could be pushed one step further with the introduction of scripting languages, where users create products dynamically at run time. The user describes the schedule of cash-flows for a transaction with a dedicated language specifically designed for that purpose, for example:
for a 1y European call with strike 100, or
for the same call with a 120 (weekly monitored) knock-out barrier.
The scripts are parsed into expression trees and visited by an evaluator, a particular breed of visitor that traverses the trees, maintaining internal state, to compute payoffs over the scenarios generated by a model.
All of this is explained in detail, with words and code, in part I.
With scripting, finance professionals could create and modify a product on the fly, while calculating its price and risk sensitivities in real time. The obvious benefits of such technology quickly made it a best practice among key derivatives players and, in itself, greatly contributed to the development of structured derivatives markets.
Early implementations, however, suffered from an excessive performance overhead and a somewhat obscure syntax that made scripting inaccessible to anyone but experienced quantitative analysts and traders. Later implementations fixed those flaws. The modern implementation in this publication comes with a natural syntax, is accessible to non-programmers, and its performance approaches that of hard-coded payoffs.
This publication builds on the authors' experience to produce a scripting library with maximum scope, modularity, transparency, stability, scalability, and performance.
Importantly, our implementation transcends the context of valuation and sensitivities; it offers a consistent, visitable representation of cash-flows that lends itself to a scalable production of risk, back-testing, capital assessment, value adjustments, or even middle office processing for portfolios of heterogeneous financial transactions. We also focus on performance and introduce the key notion of pre-processing, whereby a script is automatically analyzed, prior to its valuation or risk, to optimize the upcoming calculations. Our framework provides a representation of the cash-flows and a way of working with them that facilitates not only valuation but also pre-processing and any kind of query or transformation that we may want to conduct on the cash-flows of a set of transactions.
Scripting makes a significant difference in the context of xVA, as explained in part V, and more generally in all regulatory calculations that deal with multiple derivatives transactions of varying sophistication, written on many underlying assets belonging to different asset classes. Before xVA may be computed over a netting set, all the transactions in the netting set must be aggregated. This raises a very practical challenge when the different transactions are booked in different systems and represented in different forms. Scripting offers a consistent representation of all the transactions, down to their cash-flows, so scripted transactions are naturally aggregated and manipulated. A key benefit of scripted cash-flows is that scripts are not black boxes. Our software (more precisely, the visitors implemented in the software) can "see" and analyze scripts in order to aggregate, compress, or decorate transactions as explained in part V; extract information such as path dependence or non-linearity and select the model accordingly; implement automatic risk smoothing (part IV); or analyze a valuation problem to optimize its calculation. Our library is designed to facilitate all these manipulations, as well as those we haven't thought of yet.
The purpose of this publication is to provide a complete reference for the implementation of scripting and its application in derivatives systems to its full potential. The defining foundations of a well-designed scripting library are described and illustrated with C++ code, available online on:
Readers will find significant differences between the repository code and the code printed in the book. The repository has undergone substantial modernization and performance improvements not covered in this edition of the text, so make sure you connect to the branch Book-V1, not master. In addition, the code base evolves throughout the book, and the online repository contains the final version. While reading the book, it is advisable to type the printed code by hand rather than rely on the online repository.
This code constitutes a self-contained, professional implementation of scripting in C++. It is written in standard, modern C++ without any external dependency. It was tested to compile on Visual Studio 2017. The library is fully portable across financial libraries and platforms and includes an API, described in section 3.7, to communicate with any model.
The code as it stands can deal with a model of any complexity, as long as it is a single-underlying model. It works with the Black and Scholes model of [5] and all kinds of extensions, including local and/or stochastic volatility, like Dupire's [9] and [10], or single-underlying stochastic volatility models. The library cannot deal with multiple underlying assets, stochastic interest rates, or advanced features such as the Longstaff-Schwartz algorithm of [22]. It doesn't cover the management of transactions throughout their life cycle or deal with past fixings. All those features, necessary in a production environment, would distract us from correctly establishing the defining foundations. These extensions are discussed in detail in parts II and III, although not in code.
Our online repository also provides an implementation of Fuzzy Logic for automatic risk smoothing, an Excel interface to the library, a tutorial for exporting C++ code to Excel, a prebuilt XLL, and a demonstration spreadsheet.
The C++ implementation is discussed in part I, where we explore in detail the key concepts of expression trees and visitors. We show how they are implemented in modern C++ and define the foundations for an efficient, scalable scripting library. We also provide some (very) basic self-contained models to test the library, although the library is model agnostic by design and meant to work...