Chapter 1
The Road to Test-Driven Development
WHAT'S IN THIS CHAPTER?
Test-Driven Development (TDD) has become one of the most important concepts and practices in modern software development. To understand why, consider the history of the practice of creating software. TDD came about through an almost evolutionary process: it arose as a response to the difficulties and challenges of writing software, with no real plan behind its creation. It's a classic case of the traits that make a practice stronger and more successful being propagated, while the traits that lead to failure are discarded. The practices of TDD were not created by any single company or individual; they rose from countless discussions (or, more likely, arguments) about what was done in the past, why it failed, and what could be done better. If TDD is a structure, such as a house, its foundation is created from failure. Failed projects, whose developers knew there had to be a better way, are what TDD has been built upon.
In this chapter you learn about the history of software development and how the management of software projects has moved from waterfall to iterative and, ultimately, agile methodologies. You'll learn how the practice of Test-Driven Development is a key component of agile methodologies, ensuring that quality code that addresses the business needs is produced. I will explain the tenets of Test-Driven Development, outline its benefits, and show you an example of how Test-Driven Development is done.
THE CLASSICAL APPROACH TO SOFTWARE DEVELOPMENT
To understand the importance of TDD, it's necessary to see the road that led to it. Over the past 50 years the practice of software development has constantly evolved in an effort to find a balance between the needs of the business, the capabilities of the current technology, and the methodology in which developers are most productive. Missteps have occurred along the way, but even they were important as a means of determining which techniques and methodologies were evolutionary dead ends. This chapter reviews the road to TDD.
A Brief History of Software Engineering
Software development for business began during the age of the mainframe. Each hardware vendor seemed to have its own unique platform and paradigm for developing software. Sometimes these systems were similar enough to each other that developers could move from job to job and platform to platform with very little friction. Other times it was like starting from scratch. Although the basic concepts of computing were the same, each vendor had its own, sometimes very different, take on those concepts. Languages were archaic, often requiring many lines of code to do the simplest things that we take for granted today. And many times what worked in one implementation of a language or platform didn't work quite the same way in another.
The mainframe was a large, expensive piece of equipment. Many companies didn't own one, so the concept of the service bureau was born: Companies with a mainframe would lease time on their computer to customers. Unfortunately, this sometimes meant waiting for access to the computer. Imagine if you wrote a program today but couldn't compile it until next Monday. It would be very hard to be a productive developer with that kind of constraint. Suppose you attempted to compile on Monday but encountered an error. You could fix it, but you wouldn't know if your fix was correct for three more days. The limited access to computing resources often meant that testing, out of necessity, took a backseat to getting the product out the door.
These were also the days before the concept of waterfall development. Developers, left to their own devices, often worked in an iterative manner: scoping out specific pieces of a system, completing those, and then adding new features and functionality later. This method worked well because it allowed developers to approach application development in a logical manner that kept things in terms they could understand and manage. Unfortunately, the business users, and what was logical and comprehensible to them, were often not taken into consideration.
The second generation of microcomputers emerged in 1977, but they didn't really take off in business until 1979 with the release of VisiCalc. VisiCalc was the first spreadsheet application available for the personal computer. It demonstrated that PCs weren't just toys for the home, but machines that could provide real value to business. PCs offered many advantages over mainframes, the first being that they were much less expensive. A business that couldn't afford even one dedicated mainframe could afford dozens of PCs. And although PCs weren't as fast as mainframes, their availability made them ideal for day-to-day tasks that didn't require the power of the mainframe. Developers could write applications for the PC and know right away whether their code worked. They also didn't have to wait days to have their jobs scheduled and run.
Things got even better with third- and fourth-generation programming languages. They abstracted some of the more mundane tasks of their predecessors and allowed developers to be more productive by focusing on the business problem at hand. These languages also opened software development to a wider audience who didn't want to deal with the friction of languages such as Assembler and C. Business and the business computer industry ultimately settled on a few base languages and their derivatives. This helped developers become more attractive and marketable to business as their skills became more portable.
Ultimately, the business need to plan brought about the waterfall project methodology. The concept behind waterfall was that every software project, whose average time span was about two years, should have every phase from inception to delivery planned from the start. This necessitated a long period at the beginning for requirements gathering. After all the requirements were gathered, they were "thrown over the wall" to the architects. The architects took the requirements and designed the system that would be built, down to the smallest detail. When they completed this task, they threw the design over the wall to the developers, who built the system. After the developers completed their work, they threw the system over the wall to the Quality Assurance (QA) department, which tested the application. As soon as the application was validated, it was deployed to the users.
Software testing in a waterfall methodology was often a long, difficult, inefficient, and expensive process. QA testers would test applications by manually running through test scripts, which were documents that instructed the tester to carry out an action in the system and described the result the tester should observe. Sometimes these scripts ran into hundreds of pages. When a change was made to the system, it could take a tester two or more weeks to completely regression-test the system. Another issue was that often these test scripts were written by the developer who created the system. In these cases the scripts usually described how the system would act, not how it should act.
The first step toward TDD happened with the proliferation of automated QA testing tools. These tools recorded a series of actions a user took on a user interface (UI) and allowed them to be played back later to verify that the UI still worked correctly. Once the initial tests were recorded, they could be run repeatedly, which made regression testing much faster than manual scripts. A large failing of many of these early tools was that the tests they created were brittle. When an aspect of the UI changed, the test usually couldn't handle the change and would break. For tools that used the record/playback model, that meant the test had to be discarded and a new one created. Later versions of these tools allowed for scripting that made some of these changes easier to absorb, but the tests still remained fragile.
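To make that brittleness concrete, here is a minimal sketch using Selenium WebDriver in Python as a modern stand-in for those early record/playback tools. The application URL and element locators are hypothetical, invented purely for illustration.

```python
# Minimal sketch of why recorded UI tests are brittle.
# Selenium WebDriver stands in for early record/playback tools; the URL and
# locators below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/login")  # hypothetical application under test

# A recorder pins the exact page structure at recording time, typically as an
# absolute XPath. If a designer wraps the form in one more <div>, this locator
# no longer matches and the whole test fails.
login_button = driver.find_element(
    By.XPATH, "/html/body/div[2]/form/div[3]/button"
)

# A scripted locator tied to a stable attribute survives layout shuffles,
# though it still breaks whenever the UI itself is redesigned:
# login_button = driver.find_element(By.ID, "submit-button")

login_button.click()
driver.quit()
```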
From Waterfall to Iterative and Incremental
Software development doesn't happen in a void. It doesn't matter if it's an 18-month project to create an application to collate the Enterprise's Testing Procedure Specification (TPS) reports or a website that you built for your child's peewee hockey team; you are using a methodology. You have requirements, you plan features, and you build the application. After it's tested, you deploy it to a grateful user base.
A problem with the waterfall methodology is that all the requirements are gathered early on. In business, requirements often change for a variety of reasons. Changes in the law, a shift in the company's strategic direction, or even something as simple as a mistake in the requirements-gathering phase could have serious repercussions for the downstream process. The planned-out nature of waterfall does not respond well to change. A change request to the system generally must go through the same requirements/design/development/QA process that the rest of the system did. This creates a ripple effect that causes the rest of the plan to become inaccurate.
To create the upfront plan, the work must be estimated early - sometimes years before the actual work is to be done, and usually by someone who won't actually do the work. This creates a house of cards in which one wrong...