A. M. Abirami1* and A. Askarunisa2
1Department of Information Technology, Thiagarajar College of Engineering, Madurai, Tamil Nadu, India
2Department of Computer Science and Engineering, KLN College of Information Technology, Madurai, Tamil Nadu, India
Abstract
The internet contains a large volume of text data, and web sources must be integrated to derive the information a user needs. Manual annotation is difficult and tedious, so automated processing is necessary to make these data machine-readable. Most of these data, however, are available in unstructured format and must first be converted into structured form. Retrieving structured information from unstructured or semi-structured text is defined as text analytics. Many Information Extraction (IE) techniques are available to model such documents (e.g., product or service reviews). The vector space model uses only the content of a document, not its contextual representation. This limitation is addressed by the Semantic Web, an initiative of the World Wide Web Consortium. The Semantic Web eases communication between businesses and supports process improvement.
Keywords: Ontology, semantic-web, decision making, healthcare, service, reviews
Text analysis is defined as deriving structured data from unstructured text. Additional information, such as customer insight about a product or service, can be retrieved from unstructured data sources using text analytics techniques. These techniques have various applications, including insurance claims assessment, competitor analysis, and sentiment analysis, and many industries use them for business improvement. In recent years, social media has brought tremendous change to industries such as product businesses [1, 2], tourism [3, 4], and healthcare services [5].
Retrieving and summarizing web data that are dispersed across different web pages is a difficult and complex process that consumes considerable manual effort and time. No standard data model exists for web documents, which increases the need to annotate the huge number of text documents on the World Wide Web (WWW). Extracting and collating information from these texts is a complex task. Unlike numerical datasets, text documents contain a large number of features. The resources required to represent a big dataset can be reduced by representing each text document with only its most relevant, non-redundant features. Classification or clustering algorithms may be used to identify these features from the text documents. The documents are then analyzed, modeled, and used in business improvement or for personal interest. Annotated text thus improves the automated decision-making process, which in turn reduces the manual effort and time required for text analysis.
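The feature weighting described above can be sketched with a minimal TF-IDF scorer. This is an illustrative pure-Python sketch, not the chapter's own implementation; the tokenized example documents are invented for the demonstration.

```python
import math
from collections import Counter

def tfidf(docs):
    """Score each term in each document by TF-IDF.

    docs: list of token lists. Returns one {term: score} dict per document.
    Terms that occur in many documents (low discriminative power) receive
    low scores, so they can be dropped to obtain a compact, non-redundant
    feature representation.
    """
    n = len(docs)
    # document frequency: in how many documents does each term occur?
    df = Counter(t for doc in docs for t in set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

docs = [["battery", "life", "poor"],
        ["battery", "charges", "fast"],
        ["screen", "bright", "fast"]]
weights = tfidf(docs)
# "battery" appears in 2 of 3 documents, so in the first document it
# scores lower than the more distinctive term "poor"
```

A real pipeline would typically use an optimized library implementation, but the weighting logic is the same.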
A report from the British Columbia Safety and Quality Council states that when patients and healthcare service entities engage on an online platform, healthcare services improve considerably. Improvement in healthcare services becomes visible when insights from patients' experiences are analyzed [5]. Hence, it becomes necessary to consolidate the opinions of customers or clients in order to improve business, support decision-making, and increase revenue. Figure 1.1 gives an overview of the decision-making process based on online product/service reviews, using different information extraction and text analysis techniques.
Analyzing social media text or user-generated content poses many challenges. In languages like English, the same word can have multiple meanings (polysemy), and different words can have the same meaning (synonymy). People show "variety" and use heterogeneous words when expressing their views, which often complicates processing of the textual data. Most feature extraction techniques do not consider the semantic relationships between terms. The subjectivity inherent in text processing adds further complexity, which in turn affects the evaluation of results. The scarcity of gold-standard or annotated text data for different domains adds still more challenges [6]. Hence, identifying and applying suitable Natural Language Processing (NLP) techniques is the main research focus in text data analysis.
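The synonymy problem above is often attacked with a normalization lexicon that collapses heterogeneous surface terms onto one canonical feature name. The lexicon below is a hypothetical hand-built mapping for a hotel-review domain, shown only to illustrate the idea:

```python
# Hypothetical synonym lexicon; the groupings are illustrative,
# not drawn from a real linguistic resource.
SYNONYMS = {
    "cheap": "price", "pricey": "price", "expensive": "price",
    "staff": "service", "waiter": "service", "reception": "service",
    "tidy": "cleanliness", "spotless": "cleanliness", "dirty": "cleanliness",
}

def normalize(tokens):
    """Map each token to its canonical feature name, if one is known."""
    return [SYNONYMS.get(t, t) for t in tokens]

print(normalize(["the", "staff", "were", "tidy", "but", "pricey"]))
# ['the', 'service', 'were', 'cleanliness', 'but', 'price']
```

Polysemy cannot be handled by a flat lexicon like this; it needs context, which is one motivation for the Ontology-based modeling discussed next.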
Figure 1.1 Decision-making process from social media reviews.
Text analytics supports context matching between the reader and the writer. This challenge can be managed if the different vocabularies of features and their relationships are well represented in the data model. For example, content-based contextual analysis of user feedback helps users decide whether to buy a new product or avail a service by highlighting the best features of those products or services. Challenges in information retrieval are overcome when Ontology representation and topic modeling techniques are used to model the text documents. This chapter focuses on extracting relevant features from a set of documents and building a domain Ontology for them. The Ontology helps in building predictive or sentiment analysis models using suitable information retrieval (IR) techniques and a contextual representation of the data, enabling an automated decision-making process before buying a new product or availing a new service, as shown in Figure 1.2.
Figure 1.2 User-generated content analysis (UCA) model.
Ontology describes a domain in terms of classes; it is defined as a conceptual model of knowledge representation. The concepts of the domain (classes), their attributes, their properties, and their relationships are well described by the Ontology model, which also explains the meanings of the terms applicable to the domain. Ontology is one of the key components of Semantic Web technology. Semantic Web technologies such as Ontology, RDF, and SPARQL are used to describe different words and their dependencies by modeling the textual data. Components of an Ontology include classes (concepts), instances (individuals), attributes, relationships among classes, and axioms or constraints.
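In RDF, such a model is serialized as subject-predicate-object triples. The following sketch mimics a tiny triple store in plain Python for a hypothetical camera-product ontology; in practice a library such as rdflib and SPARQL queries would be used, and the class and property names here are invented:

```python
# Toy RDF-style statements for an invented "Camera" ontology.
triples = {
    ("Camera",  "subClassOf",   "Product"),
    ("Battery", "partOf",       "Camera"),
    ("Lens",    "partOf",       "Camera"),
    ("Battery", "hasAttribute", "life"),
    ("Lens",    "hasAttribute", "zoom"),
}

def query(s=None, p=None, o=None):
    """Pattern-match over the triples; None acts as a wildcard,
    analogous to a variable in a SPARQL basic graph pattern."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# All components (partOf relations) of a Camera:
parts = sorted(s for s, _, _ in query(p="partOf", o="Camera"))
```

The same wildcard query answers "which attributes does a Battery have?" by binding the subject instead of the object, which is the essential mechanism behind SPARQL triple patterns.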
Information Extraction (IE) and Ontology are mutually related: Ontology is used in information extraction as part of understanding the domain, while IE is used to design and enrich the Ontology [7]. Ontology enables a common vocabulary and a shared understanding among different people, and it describes the contextual representation of data semantics well [8]. UML diagrams together with Ontology support biologists by classifying entities and the interactions between proteins and genes [9]. The terms (vocabularies) and concepts (classes) in a source Ontology are used in term matching, and thereby in tagging text documents. Thus, Ontologies and their specifications are used throughout the information extraction process.
Knowledge is data that represents the outcome of computer-based cognitive processes such as perception, learning, association, and reasoning, or the translation of knowledge acquired by humans [10]. It is the language by which humans express their understanding of a concept. The concepts and instances of a particular domain are expressed in a knowledge base, also referred to as a semantic knowledge dictionary, which is one of the most important means of representing the knowledge of a domain. A domain Ontology is developed to formally define the concepts, relationships, and rules that capture the semantic content of the domain. The semantic approach establishes contextual relationships using the concepts in the documents rather than the raw terms; issues such as synonymy and polysemy cannot be resolved if terms alone are used as indices when modeling text documents. Semantic information extraction approaches such as Latent Semantic Indexing [11] and Latent Dirichlet Allocation [12, 13] build relationships among the indexed terms so as to represent the contexts between concepts. This chapter focuses on developing a domain Ontology to represent the features, and their related terms, mentioned in product/service reviews generated on social media web sites.
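The difference between term-based and concept-based indexing can be made concrete with a miniature "semantic knowledge dictionary". The concept/term groupings below are invented for the sketch; a real system would derive them from the domain Ontology or from techniques such as LSI/LDA:

```python
# Illustrative concept dictionary: each concept groups synonymous terms.
CONCEPTS = {
    "affordability": {"cheap", "inexpensive", "affordable"},
    "quality":       {"sturdy", "durable", "well-made"},
}
TERM_TO_CONCEPT = {t: c for c, terms in CONCEPTS.items() for t in terms}

def concept_index(docs):
    """Build an inverted index on concepts rather than raw terms, so
    that documents using different synonyms ('cheap', 'affordable')
    land under the same index entry."""
    index = {}
    for i, doc in enumerate(docs):
        for token in doc:
            concept = TERM_TO_CONCEPT.get(token)
            if concept:
                index.setdefault(concept, set()).add(i)
    return index

docs = [["cheap", "but", "sturdy"], ["very", "affordable"]]
idx = concept_index(docs)
# both documents are retrievable under 'affordability',
# even though they share no surface term
```

With a term index, a query for "cheap" would miss the second document; the concept index resolves exactly the synonymy issue described above.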
Ontology facilitates a shared understanding among people by formalizing the conceptualization of a specific domain. It defines domain concepts using a common vocabulary and describes their attributes, behavior, relationships, and constraints. The interactions between proteins and genes are well explained by an Ontology representation, which supports biologists in classification [9]. Reviews on hotels and movies have been classified using rule-based systems and Ontology [14-16]. Document annotation and rules were used to create a knowledge base of web documents by extracting semantic data such as named entities [14, 17, 18]. Ontology learning and RDF repositories were used for building the...
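The rule-based review classification cited above [14-16] can be sketched as matching Ontology features against nearby sentiment words. The word lists, feature set, and window rule here are all illustrative assumptions, not the cited systems' actual rules:

```python
# Invented word lists for a hotel-review sketch.
POSITIVE = {"great", "clean", "friendly", "excellent"}
NEGATIVE = {"dirty", "rude", "poor", "noisy"}
FEATURES = {"room", "staff", "location"}  # features from a domain Ontology

def feature_sentiment(tokens, window=2):
    """Assign a polarity to each Ontology feature found in a review.
    The 'rule' is simply: a sentiment word within `window` tokens of
    the feature determines its polarity."""
    result = {}
    for i, tok in enumerate(tokens):
        if tok in FEATURES:
            nearby = set(tokens[max(0, i - window): i + window + 1])
            if POSITIVE & nearby:
                result[tok] = "positive"
            elif NEGATIVE & nearby:
                result[tok] = "negative"
    return result

review = ["the", "room", "was", "dirty", "but", "staff", "friendly"]
print(feature_sentiment(review))
# {'room': 'negative', 'staff': 'positive'}
```

Real systems refine such rules with negation handling and Ontology-driven synonym expansion, but the feature-plus-rule structure is the same.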