Preface
At the intuitive level, any practicing mathematician or computer scientist, indeed any student of these two fields of study, will have no difficulty at all recognizing a computation or an algorithm as soon as they see one, the latter defining, in a finite manner, computations for any given input. It is also expected that students of computer science (and, increasingly nowadays, of mathematics) will acquire the skill to devise algorithms (normally expressed as computer programs) that solve a variety of problems.
But how does one tackle the questions "is there an algorithm that solves such and such a problem for all possible inputs?" (a question with a potentially "no" answer) and "is there an algorithm that solves such and such a problem via computations that take no more steps than some fixed polynomial function of the input length?" (this, too, a question with a potentially "no" answer)?
Typical (and tangible, indeed "interesting" and practically important) examples that fit the above questions, respectively, are the following two problems.
For the first question we have a definitive "no" answer, while for the second one we simply do not know, at the present state of knowledge and understanding of what "computing" means.
But what do we mean when we say that "there is no algorithm that solves a given problem", with or without restrictions on the lengths of the algorithm's computations? This appears to be a much harder statement to validate than "there is an algorithm that solves such and such a problem": for the latter, all we have to do is produce such an algorithm together with a proof that it works as claimed. By contrast, the former statement amounts to a mathematically provable failure of a search over the entire set of all algorithms for one that solves our problem.
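The flavor of such an impossibility argument, in the style of Turing's diagonalization, can be sketched in a few lines of Python. Hand any candidate halting decider to the construction below, and it builds an "adversary" program that does the opposite of whatever the decider predicts about it, so no candidate can be correct on every input. The names `pessimist` and `make_adversary` are invented here purely for illustration; this is a sketch of the diagonal idea, not a formal proof.

```python
def make_adversary(halts):
    """Given any claimed halting decider, build a program it gets wrong."""
    def adversary():
        if halts(adversary):
            while True:      # loop forever if the decider predicts halting
                pass
        return "halted"      # halt at once if the decider predicts looping
    return adversary

def pessimist(prog):
    """A (bad) candidate decider: it claims that no program ever halts."""
    return False

adv = make_adversary(pessimist)
prediction = pessimist(adv)   # the decider predicts: adv loops forever...
outcome = adv()               # ...but adv in fact halts immediately
```

The same construction defeats every candidate: an "optimist" answering `True` would be refuted because its adversary loops forever (which is why only the pessimist's adversary is actually run here).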
One evidently needs a precise definition of the concept of algorithm, one that is neither experiential nor technology-dependent, in order to assert that we have encountered such a failed "search". This directly calls for a mathematical theory whose objects of study include algorithms (and, correspondingly, computations), so that we can construct such sets of (all) algorithms within the theory and reason about the membership problem for such sets. This theory we call the theory of computation. It contains tools which, in principle, can "search" the set of all algorithms to see whether a problem is solvable by one; or, more ambitiously, to see whether it can be solved by an algorithm whose computations are "efficient", under some suitable definition of efficiency.
The theory of computation is the metatheory of computing. In the field of computing one computes: that is, one develops programs and large-scale software that are well documented, correct, efficient, reliable, and easily maintainable. In the (meta)theory of computing one tackles the fundamental questions of the limitations of computing, limitations that are intrinsic rather than technology-dependent. These limitations may rule out outright the existence of algorithmic solutions for some problems, while for others they rule out efficient solutions.
Our approach is anchored on the concrete (and assumed) practical knowledge about general computer programming attained by the reader in a first-year programming course, as well as knowledge of discrete mathematics at the same level. The next natural step, then, is to develop the metatheory of general computing, building on the computing experience we have assumed the reader has attained. This will be our chapter on computability, that is, the most general metatheory of computing. We develop this metatheory via the programming formalism known as the Shepherdson-Sturgis Unbounded Register Machines (URMs), a straightforward abstraction of modern high-level programming languages. Within that chapter we will also explore a restriction of the URM programming language, the loop programs of A. Meyer and D. Ritchie. We will learn that while these loop programs can compute only a very small subset of "all the computable functions", they are nevertheless significantly more than adequate for programming solutions to any "practical", computationally solvable problem. For example, even restricting the nesting of loop instructions to as low as two, we can compute, in principle, enormously large functions, which with input x can produce outputs such as
(1)
The qualification above, "in principle", stems from the enormity of the output displayed in (1), even for the input x = 0, which renders the above function way beyond "practical".
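To give a concrete feel for this kind of growth, here is a sketch in Python written in the spirit of loop programs: nothing but bounded loops and simple register updates. The function `tower` below is an illustrative stand-in of our own devising, not a reconstruction of the displayed function (1); it computes a stack of x twos and already explodes for tiny inputs.

```python
def tower(x):
    """A stack of x twos: tower(0) = 1, tower(n+1) = 2 ** tower(n).

    Written loop-program style: only bounded for-loops over values
    already computed, with simple updates in the loop bodies.
    """
    result = 1
    for _ in range(x):          # outer loop: build the stack one level up
        new = 1
        for _ in range(result): # inner loop: new := 2 ** result by doubling
            new = new * 2
        result = new
    return result
```

Already `tower(4)` is 65536, and `tower(5)` is a number with nearly twenty thousand decimal digits; the nesting depth here is just two.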
The chapter —after spending considerable care in developing the technique of reductions— concludes by demonstrating the intimate connection between the unsolvability phenomenon of computing on one hand, and the unprovability phenomenon of proving within first-order logic (cf. Gödel (1931)) on the other, when the latter is called upon to reason about “rich” theories such as (Peano’s) arithmetic —that is, the theory of natural numbers, equipped with: the standard operations (plus, times); relations (less than); as well as with the principle of mathematical induction.
What to include and what not to include in an introductory book on the theory of computation is a challenge that, to some extent, is resolved by the preferences of the author. But I should like to think that the choices of topics made in this volume are more rational than simply being manifestations of "preference".
The overarching goal is to develop for the reader a “first-order” grounding in the fundamentals, that is, the theoretical limitations of computing in its various models of computation, from the most general model —the URM— down to the finite automaton.
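Since the URM will serve as our most general model, it may help to note how small its instruction set is in one common presentation of the Shepherdson-Sturgis machines: zero a register, increment a register, copy a register, and a conditional jump. The following Python interpreter is a hedged sketch of that presentation; the encoding of instructions as tuples and the 1-based program counter are choices made here for illustration.

```python
def run_urm(program, registers):
    """Interpret a Shepherdson-Sturgis-style URM program.

    program: list of instructions, each a tuple:
      ('Z', n)       -- R[n] := 0
      ('S', n)       -- R[n] := R[n] + 1
      ('T', m, n)    -- R[n] := R[m]
      ('J', m, n, q) -- if R[m] == R[n], jump to instruction q (1-based)
    registers: dict mapping register index -> initial value.
    Halts when the program counter leaves the program; returns registers.
    """
    R = dict(registers)
    pc = 1
    while 1 <= pc <= len(program):
        ins = program[pc - 1]
        if ins[0] == 'Z':
            R[ins[1]] = 0
        elif ins[0] == 'S':
            R[ins[1]] = R.get(ins[1], 0) + 1
        elif ins[0] == 'T':
            R[ins[2]] = R.get(ins[1], 0)
        elif ins[0] == 'J':
            if R.get(ins[1], 0) == R.get(ins[2], 0):
                pc = ins[3]
                continue
        pc += 1
    return R

# Addition, R1 := R1 + R2, using R3 as a counter:
add_prog = [
    ('Z', 3),        # 1: R3 := 0
    ('J', 2, 3, 6),  # 2: if R2 == R3, jump past the end (halt)
    ('S', 1),        # 3: R1 := R1 + 1
    ('S', 3),        # 4: R3 := R3 + 1
    ('J', 1, 1, 2),  # 5: unconditional jump back to 2 (R1 == R1 always)
]
result = run_urm(add_prog, {1: 3, 2: 4})
```

Even addition must be programmed from these four primitives; that austerity is exactly what makes the model tractable as a mathematical object.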
We view the technique of reductions as fundamental in the analysis of the limitations of computing, and we spend a good deal of space on this topic, a variant of which (polynomial-time reductions) the student of computer science will encounter in Subsection 5.1.2 and will re-encounter in later studies as well, for example, in a course on algorithms and complexity. On the other hand, we do not hesitate to omit combinatorial topics such as "Post's correspondence problem", which leads only to specialized results (e.g., the algorithmic unsolvability of detecting ambiguity in context-free languages) that we feel are of less fundamental technical interest. Our emphasis is on laying the foundational tools and concepts that allow us to carry out a mathematical analysis of, and acquire a thorough understanding of, the theoretical limitations of computing in both their absolute manifestation (uncomputability) and their relative manifestation (complexity and "intractability").
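The shape of a reduction can be shown with a deliberately trivial toy in Python: to decide problem A, computably transform the given instance and hand the result to an assumed decider for problem B. The problems and names below (`decides_even`, `reduce_odd_to_even`) are invented purely for illustration; the reductions in the text concern undecidable problems and polynomial-time bounds, but the schema, decider for A equals decider for B composed with a computable transformation, is the same.

```python
def decides_even(n):
    """Assumed decider for problem B: is n even?"""
    return n % 2 == 0

def reduce_odd_to_even(n):
    """Computable transformation f with: n is odd  <=>  f(n) is even."""
    return n + 1

def decides_odd(n):
    """Decider for problem A, obtained for free from B's decider via f."""
    return decides_even(reduce_odd_to_even(n))
```

Read contrapositively, this is how unsolvability spreads: if A is known to be undecidable and such a transformation from A to B exists, then B cannot be decidable either, since a decider for B would yield the impossible decider for A.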
Consistent with our stated goal and emphasis, we purposely give short shrift to the area of so-called "positive" results, apart from a few familiarization examples of "programming" with URMs, loop programs, FA, NFA, and PDA. This is not a course about writing algorithms, but mostly about what algorithms cannot do at all, and about what they have a lot of trouble doing. For example, results of Chapter 5 immediately imply that, in general, FORTRAN-like programs that allow a nesting depth of the loop instruction equal to just three have highly impractical run times, certainly as high as the enormous quantities exemplified in (1).
Thus, we leave out "applications" such as lexical scanners via finite automata; automata minimization; and parsing of context-free languages using LL, LR, recursive descent, and other parsers, deferring them to a later course on compiler-writing tools, where they belong. We would rather concentrate on what is foundationally important and omit what is not.
Another challenge is where to start building this metatheory. What should our abstraction of a computer program be? It is a straightforward observation that, since this metatheory, or "theory" as we nickname it, abstracts computing practices in order to analyze and study said abstractions mathematically, the student must first have encountered the concrete counterparts of these abstractions for the latter to make any sense.
It is hardly the case that, prior to the second year of study, students have "programmed" scanners or parsers. Rather, students have programmed solutions to less specialized problems, using a high-level general-purpose language such as C/C++, Java, or possibly Pascal. They have never programmed an automaton, a push-down automaton, or anything like a Turing machine...