Natural language processing (NLP) is a subfield of artificial intelligence (AI) that seeks to improve interactions between humans and machines by enabling machines to communicate using human language. It is the science of training machines to understand, interpret, and communicate in human language through natural conversation. Some computer scientists have studied NLP to improve system interfaces by allowing humans to communicate with machines using the same language(s) they already know. Others, however, have used NLP to simulate interactions in order to better understand complex social concepts such as language, learning, and social psychology. In recent years, we have seen an explosion of growth in the capabilities of NLP due to the scaling up of deep-learning language models, also referred to as large language models (LLMs).
At this very moment, as you read this, LLMs are being trained in deep-learning GPU powerhouses to become sophisticated systems that establish extremely complex logical connections. These machines are equipped with learning algorithms and optimization routines that allow them to teach themselves how to accomplish their objectives. They are given no explicit instructions for completing the task; instead, their learning algorithms provide a framework through which they work out for themselves the optimal way to accomplish those objectives.
At their simplest unit level, these models are just composed of individual statistical calculations. But at scale, something else seems to emerge, something that is not so easily explained. When you combine billions of these individual statistical calculations into a single centralized system, they are not merely the sum of their parts. This je ne sais quoi manifests itself as the confluence of all these complex interconnections between neural nodes. During model training, billions of neural pathways are activated, etched, and refined. These refined pathways determine the activation configuration of the neural nodes, each connection carrying a precise weight drawn from an effectively continuous (non-integer) range of values. In the same way that aggregate data becomes information, and an understanding of data relationships becomes knowledge, the result is something that is probably not alive or conscious, but is certainly far more than just statistics.
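To make the preceding two paragraphs concrete, the following is a minimal, hypothetical sketch of the "individual statistical calculation" at the heart of these systems: a single artificial neuron that computes a weighted sum of its inputs, squashes the result with a sigmoid activation, and adjusts its weights through a simple gradient-descent update rather than through explicit instructions. The inputs, target value, and learning rate are arbitrary toy numbers chosen purely for illustration; real models chain billions of such units together.

```python
import math
import random

def neuron(inputs, weights, bias):
    """One 'neural node': a weighted sum of inputs passed through a sigmoid.

    The sigmoid squashes the output into the open interval (0, 1);
    the weights themselves can be any real number.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train_step(inputs, target, weights, bias, lr=0.5):
    """One gradient-descent update: nudge each weight so that the neuron's
    output moves slightly closer to the target value."""
    output = neuron(inputs, weights, bias)
    error = output - target                 # how far off the last guess was
    grad = error * output * (1.0 - output)  # gradient through the sigmoid
    new_weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
    new_bias = bias - lr * grad
    return new_weights, new_bias

# Toy training loop: the neuron "teaches itself" to map the input (1.0, 0.0)
# to an output close to 0.9, with no explicit rule for how to do so.
random.seed(0)
weights = [random.uniform(-1.0, 1.0) for _ in range(2)]
bias = 0.0
for _ in range(1000):
    weights, bias = train_step([1.0, 0.0], 0.9, weights, bias)

print(round(neuron([1.0, 0.0], weights, bias), 3))  # converges toward 0.9
```

Nothing in this loop tells the unit how to solve its task; it is only told how far off its last guess was. Scaling this same adjust-and-repeat process across billions of weights is what produces the "etched and refined" pathways described above.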
LLMs have introduced incredible new efficiencies to creative workflows and processes and have been rapidly adopted for personal and operational uses. But as with any exceedingly powerful technology, great opportunity often comes with great risk. Any time you are manipulating a highly complex system, there are unexpected consequences. One of the surprising by-products of scaling LLMs is that in learning language, these systems have also been able to make complex connections related to a sense of humanity. Our humanity is often transcribed into our words, our writing, and even our daily use of language. And so, while supplying these models with terabytes of language data, we have also inevitably supplied them with an obscenely large learning sample of our own humanity. While these systems are (presumably) unable to actually experience the human condition, they have nonetheless become highly effective at communicating using language that would suggest a sense of emotions, sympathy, kindness, and consideration. When you interact with an LLM system, it is easy to feel the unsettling sense of a seemingly "human" connection with something that you know is anything but. And herein lies the real risk: in creating a system that we can speak to in our own language, we have also created a system that can just as effectively speak to us with the same.
In making it easier for us to interface with and control machines, we have simultaneously and inadvertently created a means by which the machines can more easily control us. We have now equipped machines with "natural language" (i.e., with our language). This new interface to computer systems uses the same languages and the same methods of communication that we use to interact with one another. By speaking their language, humans have hacked machines for decades. But now, with the machines speaking our language, the era when machines will hack humans is upon us.
In 1992, Isaac Asimov and Robert Silverberg wrote The Positronic Man, a book about an advanced humanoid AI named Andrew who engaged in a lifelong pursuit to become more human-like. In 1999, the book was adapted into the film Bicentennial Man. In both the book and the film, the depicted AI androids were well equipped with basic general intelligence. They were adaptive and able to learn simple new tasks in real time: how to toast a bagel, how to make a cup of coffee, or how to sweep the floor, for example. However, these androids sorely lacked basic social intelligence. They could carry on simple conversations but struggled with complex social concepts like humor and emotion. They routinely failed to identify and appropriately respond to simple social cues and were depicted as awkward in unexpected or nonstandard social interactions. And Asimov was not alone in this vision of the future. Much of the early science fiction about AI depicted a future in which AI systems could engage in tasks requiring general intelligence at a level on par with humans but remained very apparently robotic in their social interactions. They were often depicted engaging in stiff, mechanical, awkward, and seemingly emotionless interactions.
Asimov's early depiction of AI was, ironically, quite the opposite of what actually came to be. In the decades since The Positronic Man, basic artificial general intelligence (AGI) has proved far more difficult to achieve (especially in a kinetic sense) than basic artificial social intelligence. Real-world AI systems lack the ability to adapt to ordinary daily tasks like taking out the garbage or washing the dishes. While these tasks can be automated by nonintelligent machines specifically designed to accomplish each individual task, AI systems lack the general intelligence to adapt to unusual circumstances in order to accomplish simple or unexpected tasks. Instead, machine learning algorithms must be used to train AI systems on very specific types of tasks. And while AGI still has not been achieved (at least not at the time of this writing), thanks to machine learning algorithms specifically focused on the syntactical structures of human language, referred to as natural language processing, AI systems are already displaying signs of something akin to social intelligence. These modern AI systems can write persuasive arguments, tell clever jokes, and even provide words of comfort in times of distress. They can also effectively engage in creative tasks like writing a poem or a story. This strange and unexpected reversal of Asimov's foretelling of the future is precisely the world we find ourselves in today. Even more fascinating is that the key to finally unlocking AGI may be hidden within the advancement of these language processing capabilities. By integrating LLMs with other operational services (i.e., service-connected LLMs), and thanks to the unexpected capabilities that emerge naturally from scaling these models, we are witnessing the emergence of what some have begun to consider the early sparks of AGI (Bubeck et al., 2023). So, not only was artificial social intelligence easier to achieve, but it may become the very gateway to achieving AGI as well.
The term social intelligence refers to a person's ability to understand and effectively navigate social situations, manage interpersonal relationships, and adapt their behavior in response to the emotions, intentions, and actions of others. In an article in Psychology Today, Ronald Riggio (2014) describes what social intelligence is and how it differs from general intelligence:
Intelligence, or IQ, is largely what you are born with. Genetics play a large part. Social intelligence (SI), on the other hand, is mostly learned. SI develops from experience with people and learning from success and failures in social settings. It is more commonly referred to as "tact," "common sense," or "street smarts."
So, could it be possible for an AI system to have or exhibit social intelligence? And to refine this question even further, it is probably more appropriate to ask specifically whether modern LLMs are capable of social intelligence, since those seem to be the most likely candidates within the current spectrum of AI. Riggio described social intelligence as something that, unlike general intelligence, is learned. He also described it as something that develops from experiences with past social interactions. We know that AI systems do learn, at least insofar as you consider "machine learning" to be actual learning. And just like human learning, machine learning does involve receiving information, processing it to identify patterns and trends, and establishing models of understanding derived from those processes. Moreover, while modern LLMs do not (at least at the time of their initial training) have their own social experiences to learn from, they are nonetheless able to learn from a large pool of non-personal social experiences (albeit the social experiences of others). These social experiences come in the form of multiparty text-based communications...