Jeremy C. Ganz
The purpose of this chapter is to outline the medical facilities that were available to the inventors of radiosurgery at the time when the technique was being developed. This is achieved by describing in brief the timeline of discoveries relevant to clinical neurology and the investigation of neurological diseases. This provides a background understanding for the limitations inherent in the early days when investigations and imaging in particular were fairly primitive. It also helps to explain the choices that were made by the pioneers in those early days. The limitations of operative procedures and institutions designed to treat neurological diseases are also mentioned.
Keywords
clinical neurology
radiology
contrast studies
operating theaters
neurological hospitals
Radiosurgery was first defined by Lars Leksell in the following terms: "Stereotactic radiosurgery is a technique for the non-invasive destruction of intracranial tissues or lesions that may be inaccessible to or unsuitable for open surgery" (Leksell, 1983). No human activity, including the development of medical technology, occurs in a vacuum. Radiosurgery grew out of the perceptions and efforts of a small group of men who passionately believed that such a method was urgently needed in the battle against a large number of then-untreatable diseases. The possibility of developing radiosurgery was a spin-off of nuclear physics, a field so characteristic of the first half of the twentieth century. What was required would not be clear at the start, but would become so. There were five essential elements. The first chapters of this book concern the journey toward understanding and eventually implementing these elements; and it was a long journey:
1. Images that enable the visualization of the lesion to be treated.
2. A three-dimensional reference system common for imaging, treatment planning, and treatment.
3. A treatment planning system by means of which the irradiation of each case can be optimized.
4. A means of producing well-defined narrow beams of radiation that selectively and safely deliver the radiation dose under clinical conditions.
5. Adequate radiation protection.
This book concerns neurosurgery and neuroradiosurgery, that is, surgery of the central nervous system (CNS). At the time when the processes that would lead to neuroradiosurgery were beginning, around 1930, neurosurgery's contribution to patient welfare, while more rational and scientifically based than at any time in its previous history, still had relatively little to offer. Certainly, cell theory had permitted the analysis of the cellular components of the CNS and their architecture and interrelationships. Based on this new knowledge, clinical neurology had made great strides with the development of the examination of the CNS, grounded in the understanding of how its different components were interconnected (Compston, 2009). John Madison Taylor had introduced the reflex hammer in 1888 (Lanska, 1989). A gradual understanding of how to examine the CNS was propounded by Joseph Babinski (1857-1932) in 1896 (Koehler, 2007). Ernst Weber (1795-1878) and Heinrich Adolf Rinne (1819-1868) had introduced means of distinguishing between conductive and neurogenic hearing loss, although the precise dates of their tests have proved impossible to determine. These tests require tuning forks, originally invented by John Shore (ca. 1662-1752), who lived to the then-advanced age of 90 and was distinguished enough that parts were written for him by both Händel and Purcell (Shaw, 2004). The tuning fork was first applied to neurological testing in 1903 (Freeman and Okun, 2002). The ophthalmoscope was invented by Helmholtz in 1851 (Pearce, 2009); it was developed, and its source of illumination improved, over succeeding decades. During my time at the National Hospital for Nervous Diseases, Queen Square, London, I was told that such was the value placed on ophthalmoscopy that there was a time when junior doctors at Queen Square were required to examine the fundus of patients suspected of raised intracranial pressure (ICP) every 15 min.
In 1841, Friedrich Hofmann invented the otoscope (Feldmann, 1995, 1997).
In the 1930s, the examination of the CNS was becoming fairly precise, and this precision would improve over the decades to come until the arrival of computerized imaging in the 1970s and 1980s. Until then, clinical examination was the most accurate method for localizing pathological processes. However, not all clinical symptoms arise from identifiable foci of disease. Thus, subacute combined degeneration of the cord gives a complex picture, with some tracts affected more than others. Again, in multiple sclerosis, with lesions disseminated in time and space, a simple localization from clinical information would be difficult. However, this matters little for a surgical technique such as radiosurgery, because surgically treatable conditions are single and focal in the vast majority of cases.
The advances described in the previous paragraphs greatly increased the accuracy with which a skillful clinician could localize a pathological process within the CNS. Even so, the first systematic monograph on clinical neurological localization was published as late as 1921 by a Norwegian, Georg Herman Monrad-Krohn (1884-1964), writing in English (Monrad-Krohn, 1954). In 1945, the more or less definitive text by Sir Gordon Holmes (1876-1965) was published (McDonald, 2007).
As far as functional investigations were concerned, electroencephalography (EEG) became commercially available in 1935, and electromyography (EMG) arrived in 1950.
In terms of further radiological investigations, the first visualization of the CNS came with the contrast-enhanced X-ray studies introduced by Cushing's student Walter Dandy (1886-1946): ventriculography (1918) (Dandy, 1918) and pneumoencephalography (1919) (Dandy, 1919). While these examinations were undoubtedly an improvement, to modern eyes they still look primitive. Then, in 1927, came carotid angiography, which, while a further improvement, was still limited and not without risk. Vertebral angiography became routine in the early 1950s. A brief description of the way these methods work follows. Since the first radiosurgery reports were published in the early 1950s, it is necessary to see how the imaging required for radiosurgery could be achieved at that time. Bearing in mind that the technique was used solely for intracranial targets, there were basically three imaging techniques.
Plain skull X-rays existed but were of little value in showing targets suitable for radiosurgery. The right side of Figure 4 shows an X-ray of the skull, taken from the side, and indicates that the only intracranial soft-tissue structure whose position can be reliably located is the pituitary gland.
Following 1918, it became clear that parts of the brain could be demonstrated using what are called contrast media. These are substances (liquid or gas) that affect the passage of X-rays through the skull. Either they let the rays pass more easily, in which case they darken the part of the image where they lie, or they stop the rays passing so easily, in which case the portion of the image containing the medium will appear lighter. The most frequently used medium in this context was air, and how it worked requires some explanation.
It is necessary to digress a little and explain some facts about intracranial anatomy. The brain sits tightly enclosed within the skull, yet it floats in a bath of fluid called cerebrospinal fluid (CSF), which is produced at roughly 0.32 ml/min. Figure 1 is a diagram of the anatomy of the brain and the fluid-filled spaces (called ventricles) that it contains. Figure 2 illustrates how the CSF is made in the ventricles and flows through the brain. It leaves the ventricles and flows over the brain between two membranes, the pia mater and the arachnoid. Pia mater means "soft mother," so called because it embraces the brain as a mother embraces her child. The arachnoid is so called because some imaginative anatomists, looking through the microscope, considered that the membrane and the space under it looked like a spider's web. In Greek mythology, a skillful but arrogant young lady called Arachne challenged Athena, the goddess of, among other things, weaving, to a weaving contest. The girl inevitably lost and was turned into the world's first spider. Thus, spiders are called arachnids, which explains the use of the term arachnoid in the current context. It should be remembered that at any one time there is about 150 ml of CSF in the system, and two-thirds of it is outside the brain in the subarachnoid space.
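The figures quoted above imply a brisk turnover of the fluid, which is worth making explicit. The following is a minimal back-of-envelope sketch (in Python, using only the production rate and volumes stated in this paragraph; the rounding and variable names are my own):

```python
# Back-of-envelope CSF turnover, using the figures quoted in the text:
# production ~0.32 ml/min, total volume ~150 ml, two-thirds of which
# lies outside the brain in the subarachnoid space.

PRODUCTION_ML_PER_MIN = 0.32
TOTAL_VOLUME_ML = 150.0
SUBARACHNOID_FRACTION = 2 / 3

daily_production_ml = PRODUCTION_ML_PER_MIN * 60 * 24   # ml produced per day
turnovers_per_day = daily_production_ml / TOTAL_VOLUME_ML
subarachnoid_ml = TOTAL_VOLUME_ML * SUBARACHNOID_FRACTION
ventricular_ml = TOTAL_VOLUME_ML - subarachnoid_ml

print(f"Daily CSF production: {daily_production_ml:.0f} ml")
print(f"Entire CSF volume replaced about {turnovers_per_day:.1f} times per day")
print(f"Subarachnoid space: {subarachnoid_ml:.0f} ml; ventricles: {ventricular_ml:.0f} ml")
```

In other words, at these rates the entire CSF volume is replaced roughly three times a day, with about 100 ml in the subarachnoid space and 50 ml in the ventricles at any moment.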