This tutorial offers an in-depth exploration of Relation Hallucination in abstractive text summarization and introduces the Relation Hallucination Index (RHI), a novel metric for evaluating hallucination in summarization models. Hallucination, the generation of information not grounded in the source text, poses significant challenges for NLP applications, especially in domains that require high accuracy, such as medicine, law, and finance. Relation Hallucination, where models fabricate relationships between entities, is particularly problematic. This tutorial will walk attendees through the concept of Relation Hallucination, the motivation for developing RHI, and its practical application in assessing state-of-the-art summarization models such as GPT-3.5, BART, T5, and Pegasus. We will cover the metric's foundations, calculation, and visualization to provide a hands-on understanding of RHI, enabling researchers and practitioners to apply it for more rigorous model evaluation.
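Since the abstract does not give RHI's formal definition, the following is a minimal illustrative sketch of how a relation-hallucination score could be computed: extract (subject, relation, object) triples from both the source and the summary, then measure the fraction of summary relations with no support in the source. The function names are hypothetical, and exact triple matching is a simplification; a real metric would likely use softer matching or entailment between relations.

```python
from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def relation_hallucination_index(source_triples: Set[Triple],
                                 summary_triples: Set[Triple]) -> float:
    """Fraction of summary relations not found among the source relations.
    0.0 = every summary relation is grounded; 1.0 = all are hallucinated."""
    if not summary_triples:
        return 0.0
    unsupported = {t for t in summary_triples if t not in source_triples}
    return len(unsupported) / len(summary_triples)

# Hypothetical usage with triples from an OpenIE-style extractor:
source = {("Acme", "acquired", "BetaCorp"), ("BetaCorp", "based_in", "Berlin")}
summary = {("Acme", "acquired", "BetaCorp"), ("Acme", "based_in", "Berlin")}
print(relation_hallucination_index(source, summary))  # 0.5
```

In this toy example the summary correctly reports the acquisition but transfers "based_in Berlin" to the wrong entity, a fabricated relation between real entities, which is exactly the error class the tutorial targets.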
Supervised ranking models, although generally more effective than traditional approaches, often require intricate multi-stage processing. This has motivated researchers to explore simpler pipelines leveraging large language models (LLMs) that can rank in a zero-shot manner. Current zero-shot re-rankers demonstrate promising results, achieving effective performance without training data and operating with streamlined pipelines. However, because zero-shot inference relies only on the model's pre-existing knowledge and generalization ability, without access to a task-specific training set, its performance is typically less robust than that of supervised models. This tutorial covers a technique that improves the zero-shot ranking performance of LLMs using few-shot in-context learning. The technique uses a similar query and a pair of documents as an in-context example, which provides context for the query and a definition of the downstream task. Providing these localized in-context examples is effective and offers a non-parametric way of steering the LLM's ranking predictions.
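As a rough illustration of the idea, the sketch below prepends one localized in-context example (a similar query plus a labeled document pair) to a pairwise ranking prompt. The prompt template and the commented-out `llm_complete` helper are assumptions for illustration, not the tutorial's exact recipe.

```python
# Sketch of few-shot in-context pairwise re-ranking with an LLM.
# `llm_complete` is a hypothetical placeholder for any LLM client call.

PAIRWISE_TEMPLATE = (
    'Query: "{query}"\n'
    'Passage A: "{doc_a}"\n'
    'Passage B: "{doc_b}"\n'
    "Which passage is more relevant to the query? Answer A or B.\n"
    "Answer: {answer}"
)

def build_prompt(example: dict, query: str, doc_a: str, doc_b: str) -> str:
    """Prepend one localized in-context example (a similar query with a
    labeled document pair) to the test-time pairwise comparison."""
    shot = PAIRWISE_TEMPLATE.format(**example)
    test = PAIRWISE_TEMPLATE.format(
        query=query, doc_a=doc_a, doc_b=doc_b, answer=""
    ).rstrip()
    return shot + "\n\n" + test

example = {
    "query": "symptoms of vitamin D deficiency",
    "doc_a": "Low vitamin D can cause fatigue and bone pain.",
    "doc_b": "Vitamin C is abundant in citrus fruits.",
    "answer": "A",
}
prompt = build_prompt(
    example,
    "how to treat vitamin D deficiency",
    "Supplements and sunlight exposure raise vitamin D levels.",
    "Citrus fruits are a rich source of vitamin C.",
)
print(prompt)
# completion = llm_complete(prompt)  # expected to emit "A" or "B"
```

Because the example is chosen to be similar to the test query, it both demonstrates the task format and supplies domain context, without updating any model parameters.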
This tutorial delves into the transformative role of Generative AI in processing and interpreting unstructured textual data, a critical need in data-driven organizational decision-making. It highlights the integration of Large Language Models (LLMs), knowledge graphs, and liquid neural networks to automate and enhance business text processing across domains such as finance, law, and corporate governance. Key applications include sentiment analysis of financial market reports, extraction of insights from regulatory filings, and risk analysis using diverse data sources. The tutorial explores Retrieval-Augmented Generation methods for integrating external knowledge with LLMs and addresses the challenge of detecting adverse media, which is crucial for managing corporate reputation and compliance. Finally, it examines the automation of corporate compliance tasks using AI, ensuring organizations meet dynamic regulatory demands efficiently.
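To make the Retrieval-Augmented Generation pattern concrete, here is a minimal sketch under stated assumptions: a TF-IDF retriever stands in for a production vector store, the document snippets are invented, and the commented-out `generate` call is a placeholder for any LLM. The tutorial's actual methods may differ.

```python
# Minimal RAG sketch: retrieve relevant passages, then condition the LLM
# on them so the answer is grounded in retrieved business text.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 revenue rose 12% on strong demand in the cloud segment.",
    "The regulator fined the bank for inadequate AML controls.",
    "The board approved a new share buyback program.",
]

def retrieve(query: str, k: int = 2) -> list:
    """Return the k passages most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer().fit(documents + [query])
    doc_matrix = vectorizer.transform(documents)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def build_rag_prompt(query: str) -> str:
    """Assemble a prompt that grounds the answer in retrieved context."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_rag_prompt("What compliance risks were reported?"))
# answer = generate(build_rag_prompt(...))  # hypothetical LLM call
```

Grounding the generation step in retrieved passages is what makes this pattern attractive for the compliance and adverse-media use cases above, where answers must be traceable to source documents.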