Written for a wide spectrum of undergraduate students by an experienced author, this book provides a very practical approach to advanced calculus, covering multivariable calculus from the basics up to the three theorems of Green, Gauss, and Stokes, always with an eye on practical applications. It explains, clearly and concisely, partial differentiation, multiple integration, vectors and vector calculus, and provides end-of-chapter exercises along with their solutions to aid the reader's understanding. Written in an approachable style and filled with numerous illustrative examples throughout, Two and Three Dimensional Calculus: with Applications in Science and Engineering assumes no prior knowledge of partial differentiation or vectors and explains difficult concepts with easy-to-follow examples. Rather than concentrating on mathematical structures, the book describes the development of techniques through their use in science and engineering, so that students acquire skills that can be used in a wide variety of practical situations. It also has enough rigor to enable those who wish to investigate the more mathematical generalizations found in most mathematics degrees to do so.

- Assumes no prior knowledge of partial differentiation, multiple integration or vectors
- Includes easy-to-follow examples throughout to help explain difficult concepts
- Features end-of-chapter exercises with solutions

Two and Three Dimensional Calculus: with Applications in Science and Engineering is an ideal textbook for undergraduate students of engineering and applied sciences, as well as those needing to use these methods for real problems in industry and commerce.
Phil Dyke teaches mathematics to undergraduates, and marine physics to postgraduates at the School of Computing, Electronics and Mathematics, University of Plymouth, UK. He is also the author of ten other textbooks.
Revision of One-Dimensional Calculus
In this chapter, there will be a brief run-through of those bits of mathematics that will be required in subsequent chapters. Such a run-through cannot, of course, be exhaustive or even particularly thorough. Therefore, if you should find any material in this chapter that is completely new, then you should revisit fundamental reading material on single-variable calculus. When reading this chapter, you may find familiar content in which you are rusty, or a few unfamiliar nuances here and there. To understand the concepts of differentiation and integration, let us introduce what is meant by a limit and, in the process, also introduce convergence.
1.1 Limits and Convergence
The standard notation $f(x)$ for a function dates back to Leonhard Euler in the eighteenth century. The notation implies that one simply inserts the value of $x$ into the definition of the function in order to calculate the corresponding value of the function itself. Therefore, for example, if $f(x) = x^2 + 1$, its value where $x = 1$ would simply be $f(1) = 1^2 + 1 = 2$, and so at $x = 1$ the value of the function is indisputably 2. When there is no doubt that $f(x)$, when $x = 1$, has the value 2, it is usually written neatly as $f(1) = 2$. Such certainty is, however, sometimes not the case. Take the function

$$f(x) = \frac{10}{x - 1}$$
at the point where $x = 1$. We see here that the numerator is 10 but the denominator is 0, so $f(1)$ does not have a value. We could say 'it is infinite' and move on rather than worry too much. Mathematically, it is better to say that $f(x)$ increases without limit as $x$ approaches the value 1. Here, the concept of a limit appears, and the limit is written as
$$\lim_{x \to 1} \frac{10}{x - 1} = \infty,$$

which is not a good use of the mathematical equals sign, as neither the limit nor $\infty$ is a number; it's an abstract concept that at the time worried most mathematicians and was in fact responsible for the subsequent mental breakdowns of several nineteenth-century pioneers of number theory. Those interested in learning more should explore the definitions of the transfinite numbers (the use of the Hebrew letter aleph, $\aleph$, is standard), but be assured that it has no connection with madness these days. Note that $x$ can approach 1 from the right ($x > 1$) or the left ($x < 1$). If from the left, then the notation $x \to 1-$ is used; if the approach is from the right, then the notation $x \to 1+$ applies. Sometimes, the minus or plus symbols are written as suffixes or superscripts, thus $x \to 1^-$ or $x \to 1^+$. Examination of the function shows that

$$\lim_{x \to 1^-} \frac{10}{x - 1} = -\infty \quad \text{and} \quad \lim_{x \to 1^+} \frac{10}{x - 1} = +\infty.$$
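The one-sided behaviour can be watched numerically. The following Python sketch uses $f(x) = 10/(x-1)$ as a hypothetical reconstruction of the worked example, and the sample points are arbitrary choices:

```python
# Hypothetical reconstruction of the worked example: f(x) = 10/(x - 1),
# which has no value at x = 1.
def f(x):
    return 10.0 / (x - 1.0)

# Approach x = 1 from the right (x -> 1+): f increases without limit.
print([f(1.0 + 10.0**-k) for k in range(1, 6)])
# Approach x = 1 from the left (x -> 1-): f decreases without limit.
print([f(1.0 - 10.0**-k) for k in range(1, 6)])
```

The values grow without bound in magnitude, positive from the right and negative from the left, which is exactly what the two one-sided limits record.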
Another reason for having to use limits is if the function takes an indeterminate form. Perhaps the limit

$$\lim_{x \to 0} \frac{\sin x}{x} = 1$$
is familiar. Its value is 1 even though both numerator and denominator tend to 0 as $x$ approaches 0 from either side. The evaluation of such indeterminate forms can be done from first principles. For example, in the case of $\sin x / x$, simply expand $\sin x$ as a series in powers of $x$, called a McLaurin series:

$$\frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \cdots,$$
and letting $x \to 0$, all the terms on the right vanish apart from the 1, which is the value of the limit. Of course, we must state that the series is certainly convergent for small values of $x$, so that convergence of the right-hand side to 1 is assured. The use of the term convergence should be noted. There is a departure from standard texts on pure mathematics that legitimately make a great deal of what is meant by convergence and limits and give the precise definitions of both. This is not done here; not just for reasons of space, but for reasons of emphasis. In this text, the emphasis is on mathematical methods and practicalities. Books on real analysis need to be consulted for the theorems and proofs. Convergence of any series of real numbers $a_0 + a_1 + a_2 + \cdots$ is assured provided

$$\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| < 1.$$
This is the ratio test. It is not infallible, in the sense that this ratio could tend to unity yet the series still be convergent, but it is this test that comes to our aid here. The series for $\sin x / x$ can be written as

$$\frac{\sin x}{x} = \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{(2n+1)!},$$
and the absolute value of the ratio of successive terms is

$$\left| \frac{a_{n+1}}{a_n} \right| = \frac{x^2}{(2n+2)(2n+3)},$$
which always tends to zero as $n \to \infty$, no matter what the value of $x$. Therefore, by the ratio test, absolute convergence of the series is assured.
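Both claims can be checked numerically. In the Python sketch below, the evaluation point $x = 0.5$ is an arbitrary choice; the partial sums settle on $\sin x / x$, and the successive-term ratios shrink towards zero:

```python
import math

# Partial sums of the series sin(x)/x = sum_{n>=0} (-1)^n x^(2n) / (2n+1)!
def partial_sum(x, terms):
    return sum((-1)**n * x**(2*n) / math.factorial(2*n + 1)
               for n in range(terms))

x = 0.5
for terms in (1, 2, 3, 6):
    print(terms, partial_sum(x, terms))
print(math.sin(x) / x)  # the value the partial sums approach

# Ratio of successive terms: x^2 / ((2n+2)(2n+3)), tending to 0 as n grows
ratios = [x**2 / ((2*n + 2) * (2*n + 3)) for n in range(5)]
print(ratios)
```

Six terms already agree with $\sin x / x$ to many decimal places, which is the rapid convergence the ratio test predicts.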
There will be more on limits later, but the definition of derivative needs to be given now.
If a function $f(x)$ is smooth, that is, it has a tangent at a point that changes smoothly as the point moves along the graph of the curve in the plane, then the limit

$$\lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
exists and is unique and is called the derivative of $f(x)$ at the point $x = a$. Again this will do for our purposes, but pure mathematics involving $\epsilon$ and $\delta$ (due to Karl Weierstrass (1815-1897)) is required for rigour and clarity as to what smooth actually means here, and this can be found in books on real analysis. Without the mathematics, it means sharp corners and breaks in the graph of $f(x)$ are disallowed. At a corner, there are two tangents; only one can be permitted, otherwise the tangent is not unique, so there is no unique value to the above limit and hence no unique derivative. Derivatives really do come in very handy for calculations in a variety of applied areas, so the evaluation of this limit has received a great deal of attention since calculus was first proposed by Isaac Newton (1642-1727) in 1665 in England, and Gottfried Leibniz (1646-1716) a little later, around 1675, in Germany. Leibniz' approach wins here and it is his notation that is now followed; Newton used pure geometry, and only his dot notation for differentiation with respect to time, used in some areas of mechanics, survives today from his pioneering research. His fluxions are fascinating to study, but are now only of interest to historians. To do a simple example straight from the limit definition, let us find the derivative of the function $f(x) = x^2$ at an arbitrary point $x = a$. The numerator is $(a+h)^2 - a^2$, and this simplifies to $2ah + h^2$. The denominator is just $h$, and this is a factor of the numerator that can be cancelled to leave the quotient
$$\frac{2ah + h^2}{h} = 2a + h,$$

the right-hand side of which tends to $2a$ as $h \to 0$. Thus, the derivative of $x^2$ at the point $x = a$ is equal to $2a$, or more succinctly the derivative of $x^2$ is $2x$ for any value of $x$. Notice that the cancellation of $h$ occurs before it is allowed to tend to zero, so this is legal. Do not let it tend to zero first and then cancel; that is a mathematical sin, and cancelling zeros usually leads to nonsense. Thus, if the limit exists, and in this text it usually does, then write

$$f'(x) = \frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \tag{1.1}$$
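The limit can also be watched happening numerically. In the Python sketch below, the sample point $a = 3$ and the step sizes $h$ are arbitrary choices; the difference quotient for $x^2$ visibly heads towards $2a$:

```python
# Difference quotient (f(a+h) - f(a))/h for f(x) = x^2; algebraically
# this equals 2a + h, so it should approach 2a as h shrinks.
def difference_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

square = lambda t: t * t
a = 3.0
for h in (0.1, 0.01, 0.001):
    print(h, difference_quotient(square, a, h))  # heads towards 2a = 6
```

Each tenfold reduction in $h$ removes one more stray digit, exactly as the algebraic form $2a + h$ predicts.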
and Equation 1.1 is the derivative of the function $f(x)$ with respect to $x$. There are many standard results for differentiation (finding derivatives), and all can be derived by finding this limit using various mathematical expansion techniques. The derivative of $x^2$ was found above, and this generalises to

$$\frac{d}{dx} x^n = n x^{n-1},$$
where $n$ is any real number. In addition, the addition formulas for trigonometric functions can be used to derive

$$\frac{d}{dx} \sin x = \cos x \quad \text{and} \quad \frac{d}{dx} \cos x = -\sin x.$$
When finding derivatives, however, one has to be careful not to use formulas that themselves depend on differentiation. In particular, Taylor's theorem and the special case, McLaurin's theorem, come later in this chapter; they are expansions of functions in terms of polynomials that are obtained through differentiation. Let's take a look at the exponential function $e^x$, where $e$ is the base for natural logarithms. There are many ways to define $e$ mathematically: it is that value of $a$ such that the function $a^x$ differentiates to itself. Equivalently, it is the value of $a$ such that the function $a^x$ has unit slope at $x = 0$. This can be taken as the definition of the number $e$: that is, if

$$\frac{d}{dx} a^x = a^x$$
for some real number $a$, then $a = e$. Many mathematicians prefer the limit definition

$$e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n$$
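This limit converges, though rather slowly, as a short numerical sketch shows (the values of $n$ below are arbitrary choices):

```python
import math

# The limit definition e = lim (1 + 1/n)^n, sampled at increasing n.
for n in (10, 1000, 100000):
    print(n, (1 + 1/n)**n)
print(math.e)  # 2.718281828...
```

Even $n = 100000$ only pins down the first few decimal places, one reason this definition is awkward to work with directly.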
as it is clean and isolated, but then one would have to prove the first definition. It is more natural here to use the functional definition in terms of $a^x$. The inverse of differentiation is integration, so writing $y = e^x$, the above equation is

$$\frac{dy}{dx} = y,$$
from which we get

$$\frac{dy}{y} = dx,$$
and integrating both sides gives

$$\int \frac{dy}{y} = x + C,$$
but since the inverse of $e^x$ is $\ln x$, it must be the case that

$$\int \frac{dy}{y} = \ln y,$$
always allowing for an arbitrary constant, of course. The last equation can be taken as the definition of the natural logarithm, in which case the derivative of $\ln x$ with respect to $x$ is $1/x$, and so

$$\frac{d}{dx} e^x = e^x$$
is derived, in which case the function $y = e^x$, treated as dependent upon $x$, differentiates to itself. This is, therefore, the exponential function. As long as one treats either this property as a definition of the exponential function, or the integration of $1/x$ as the definition of the logarithm to the base $e$ (the natural logarithm), one can derive the other. One certainly cannot derive both out of thin air, and it is rather messy to start with the limit definition of $e$, as one has to use obscure properties of limits as well as differentiation rules yet to be introduced.
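Both properties, that $e^x$ differentiates to itself and that the natural logarithm is its inverse, can be checked numerically. In this Python sketch, the sample point and step size are arbitrary choices:

```python
import math

# Sample point and step size are arbitrary choices.
x, h = 1.3, 1e-7

# The derivative of exp is exp itself...
numeric_derivative = (math.exp(x + h) - math.exp(x)) / h
print(numeric_derivative, math.exp(x))  # agree to several decimal places

# ...and the natural logarithm is its inverse.
print(math.log(math.exp(x)))  # recovers x
```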
1.2.1 Rules for Differentiation
There are rules most will be familiar with for differentiating functions of functions, products and quotients. Here they are:

$$\frac{d}{dx} f(g(x)) = f'(g(x))\, g'(x),$$

$$\frac{d}{dx} \left[ f(x) g(x) \right] = f'(x) g(x) + f(x) g'(x),$$

$$\frac{d}{dx} \left[ \frac{f(x)}{g(x)} \right] = \frac{f'(x) g(x) - f(x) g'(x)}{g(x)^2}.$$
These can all be proved using the limit definition of derivative; here is a proof of the product rule.
$$\frac{d}{dx} \left[ f(x) g(x) \right] = f'(x) g(x) + f(x) g'(x)$$

for suitably well-behaved functions $f$ and $g$.
From the limit definition of derivative, we have

$$\frac{d}{dx} \left[ f(x) g(x) \right] = \lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x) g(x)}{h},$$
so the right-hand side becomes

$$\lim_{h \to 0} \frac{f(x+h) g(x+h) - f(x+h) g(x) + f(x+h) g(x) - f(x) g(x)}{h}$$
upon subtracting then adding $f(x+h) g(x)$ in the numerator. Hence, grouping the numerator gives

$$\lim_{h \to 0} \left[ f(x+h)\, \frac{g(x+h) - g(x)}{h} + g(x)\, \frac{f(x+h) - f(x)}{h} \right],$$
and letting $h \to 0$, the limit establishes the result.
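The product rule can also be checked numerically at a sample point. In the sketch below, the choices $f = \sin$, $g = \exp$, the sample point and the step size are all arbitrary:

```python
import math

# Check d/dx [f(x) g(x)] = f'(x) g(x) + f(x) g'(x) numerically,
# with f = sin and g = exp (whose derivatives are cos and exp).
x, h = 0.7, 1e-6
product = lambda t: math.sin(t) * math.exp(t)
numeric = (product(x + h) - product(x)) / h
analytic = math.cos(x) * math.exp(x) + math.sin(x) * math.exp(x)
print(numeric, analytic)  # agree to several decimal places
```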
The quotient rule follows from applying the product rule to $f(x) \cdot \dfrac{1}{g(x)}$ and the function of a function rule to $\dfrac{1}{g(x)}$. Note that the more mature name for the 'function of a function' rule is the 'chain rule'. This gets a lot of attention later in this book, starting with the next chapter.