Summary Natural Language Generation Exam Notes
This document contains notes and summaries covering the content of the course Natural Language Generation in the Artificial Intelligence Master's programme at Utrecht University.

Voorbeeld 2 van de 7  pagina's

  • 22 november 2022
  • 7
  • 2022/2023
  • Samenvatting
Alle documenten voor dit vak (5)
avatar-seller
massimilianogarzoni
Examples of NLG applications
• Weather forecast, road maintenance, automatic journalism, reporting on sports results, textual feedback on
health, agents and dialogue systems, financial reporting for companies, image labelling
• NLG can produce higher-quality texts than mail-merge, especially when there is a lot of variation in the output texts
• NLG systems can be easier to update when the content or structure of generated documents needs to change regularly

NLG systems’ pipeline
• Most common architecture in NLG systems is a three-stage pipeline with following stages:
o Text planning: combines content determination and discourse planning
o Sentence planning: combines sentence aggregation, lexicalization, and REG
o Linguistic realization: involves syntactic, morphological, and orthographic processing
• Issue of intermediate representations: how the inputs and outputs of the different stages should be represented and
what to pass from one stage to the next (a minimal sketch follows below)
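A minimal Python sketch of how these stages and the intermediate representations passed between them might be wired together, using a toy weather domain; all class, function and attribute names are hypothetical illustrations, not the API of any existing NLG toolkit:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """One unit of content: a relation plus attribute-value arguments."""
    relation: str
    arguments: dict = field(default_factory=dict)

@dataclass
class TextPlan:
    """Tree whose leaves are Messages and whose internal nodes group them."""
    relation: str = "sequence"
    children: list = field(default_factory=list)

def text_planning(data: dict) -> TextPlan:
    """Content determination + discourse planning: raw data -> text plan."""
    messages = [Message("temperature", {"city": city, "value": value})
                for city, value in data.items()]
    return TextPlan("sequence", messages)

def sentence_planning(plan: TextPlan) -> list:
    """Aggregation, lexicalization and REG: text plan -> abstract sentence plans."""
    return [{"subject": m.arguments["city"],
             "verb": "be",
             "object": f"{m.arguments['value']} degrees"}
            for m in plan.children]

def realization(sentence_plans: list) -> str:
    """Syntactic, morphological and orthographic processing: plans -> text."""
    # Toy morphology: the verb "be" is realized as "is" for these singular subjects.
    return " ".join(f"{p['subject']} is {p['object']}." for p in sentence_plans)

weather_data = {"Utrecht": 12, "Amsterdam": 11}
print(realization(sentence_planning(text_planning(weather_data))))
# -> Utrecht is 12 degrees. Amsterdam is 11 degrees.
```

The point of the sketch is the interfaces: text planning emits a tree of messages, sentence planning turns the leaves into abstract sentence plans, and realization sees only those plans, never the raw input data.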
• Data analytics and interpretation: making sense of data, often discarded
• Content Determination: decide on content and structure of text
⁃ Content selection:
⁃ Deciding what information should be communicated in the text
⁃ Creating a set of messages from system’s inputs or underlying data sources
⁃ Largely consists of filtering and summarizing input data
⁃ Messages created are expressed in some formal language that labels and distinguishes entities,
concepts and relations in domain
⁃ Represent each message as an attribute–value matrix; each describes some relation that holds
between those entities or concepts specified as arguments of that relation
⁃ Most NLG systems base content determination on domain-specific rules acquired from domain
experts; easier and more faithful to human texts
⁃ Document structure:
⁃ How should I organize this content as a text? What order do I say things in? What rhetorical
structure?
⁃ Impose ordering and structure over set of messages to be generated
⁃ Structuring the messages produced by content determination into a coherent text
⁃ Text plans (output of text planner):
⁃ Usually represented as trees whose leaf nodes specify individual messages, and whose internal
nodes show how messages are conceptually grouped together
⁃ Most common strategy is to represent messages in a form as similar as possible to the
representation used for sentence plans
⁃ The clustering decisions made in tree will have an impact on determination of sentence and
paragraph boundaries in resulting text
⁃ Sentence plans:
⁃ Classic template systems simply insert parameters into boilerplate text without doing any further
processing (newer systems might perform limited linguistic processing as well)
⁃ Abstract sentential representations: represent sentence plans by using an abstract
representation language which specifies the content words (nouns, verbs, adjectives and adverbs)
of a sentence, and how they are related
⁃ Sentence Planning Language (SPL): characterizes the sentence by means of named attributes and
their values, and allows values themselves to consist of named attributes and their values (see the
sketch below)
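A rough sketch of what such a nested attribute-value sentence plan could look like, written here as a plain Python dict; the attribute names and values are invented for illustration and do not reproduce actual SPL notation:

```python
# Illustrative abstract sentence plan as a nested attribute-value structure,
# in the spirit of SPL. All attribute names are made up for this example.
sentence_plan = {
    "speech-act": "assertion",
    "process": "rain",               # the main content word (a verb)
    "tense": "future",
    "location": {                    # a value that itself has named attributes
        "type": "city",
        "name": "Utrecht",
    },
    "time": {
        "type": "day-part",
        "name": "tomorrow",
    },
}

# A realizer would turn this into something like:
#   "It will rain in Utrecht tomorrow."
```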
• Microplanning: decide how to linguistically express text (which words, sentences, etc. to use; how to identify
objects, actions, times)
⁃ Input: a tree-structured text plan whose leaf nodes are messages
⁃ Output: a new text plan whose leaf nodes are combinations of messages that will eventually be realized
as sentences
⁃ Lexical/syntactic choice: which words and linguistic structures to use?

⁃ Lexicalization: deciding which specific words and phrases should be chosen to express domain
concepts and relations which appear in messages
⁃ Often simply done by hard coding a specific word or phrase for each domain concept or relation
⁃ Sometimes improve fluency by allowing NLG system to vary words used to express a concept or
relation, either to achieve variety or accommodate subtle pragmatic distinctions
⁃ Especially important when NLG system produces output texts in multiple languages
⁃ Aggregation: how should information be distributed across sentences and paragraphs?
⁃ Sentence aggregation: grouping messages together into sentences; not strictly necessary, but if
done well it can significantly enhance fluency and readability
⁃ Reference: how should text refer to objects and entities?
⁃ REG: task of selecting words or phrases (linguistic forms) to identify domain entities
⁃ Unlike lexicalization, REG is usually formalized as a discrimination task, where the system needs to
communicate sufficient information to distinguish one domain entity from the others; this requires
taking account of contextual factors
⁃ Goal is to include enough information in the description to enable the hearer to unambiguously
identify the target entity (see the sketch below)
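A toy Python sketch of REG as a discrimination task, loosely in the spirit of incremental property selection: properties of the target are added, in a fixed preference order, until every distractor has been ruled out. The entities, attributes and preference order are all invented for the example:

```python
def generate_reference(target: dict, distractors: list, preference: list) -> dict:
    """Return a small set of properties (following a preference order)
    that distinguishes `target` from every entity in `distractors`."""
    description = {}
    remaining = list(distractors)
    for attr in preference:
        value = target.get(attr)
        # Keep a property only if it rules out at least one remaining distractor.
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:            # target is now unambiguously identified
            break
    return description

entities = [
    {"id": "d1", "type": "dog", "colour": "brown", "size": "small"},
    {"id": "d2", "type": "dog", "colour": "black", "size": "small"},
    {"id": "c1", "type": "cat", "colour": "brown", "size": "small"},
]
target, distractors = entities[0], entities[1:]
print(generate_reference(target, distractors, ["type", "colour", "size"]))
# -> {'type': 'dog', 'colour': 'brown'}   ("the brown dog")
```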
• Linguistic Realization:
⁃ Applying rules of grammar to produce a text which is syntactically, morphologically, and orthographically
correct
⁃ Generating grammatically correct sentences to communicate messages
⁃ Realizer:
⁃ Module where knowledge about the grammar of the natural language is encoded
⁃ Activates the syntactic, morphological and orthographic components (a toy realizer is sketched below)
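A deliberately simplistic Python sketch of a realizer applying a few syntactic, morphological and orthographic rules; a real realizer encodes far more grammatical knowledge, and the function below is only an illustration of the kinds of decisions involved:

```python
def realize(subject: str, verb: str, obj: str, plural_subject: bool = False) -> str:
    # Morphological processing: crude third-person-singular agreement.
    if not plural_subject and not verb.endswith("s"):
        verb = verb + "s"
    # Syntactic processing: fixed subject-verb-object order for this toy case.
    sentence = f"{subject} {verb} {obj}"
    # Orthographic processing: capitalization and final punctuation.
    return sentence[0].upper() + sentence[1:] + "."

print(realize("the temperature", "exceed", "20 degrees"))
# -> The temperature exceeds 20 degrees.
print(realize("the values", "exceed", "20 degrees", plural_subject=True))
# -> The values exceed 20 degrees.
```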

Building NLG systems
• Need knowledge of language and application:
⁃ Imitate a corpus of human-written texts
⁃ Manually examine the corpus, or use machine learning if the corpus is large enough
⁃ Ask domain experts, although they are better at critiquing what the system does
⁃ Experiments with users, very nice in principle, but lots of work
• Evaluation of output texts:
⁃ Does the system help people? Do people like the texts and believe they are useful? When should output
texts be compared with human texts?
• Requirement analysis and system specification:
⁃ Developer uses a collection of example inputs and associated output texts to describe to users the system
she proposes to build
⁃ Corpus-based approach where corpus contains examples of system inputs and corresponding output texts
and should cover full range of texts expected to be produced by system, including boundary, unusual and
typical cases
• Analyzing information content of corpus texts:
⁃ Important step: identify the parts of human-authored corpus texts conveying info not available to the NLG
system; this analysis requires classifying each sentence of a corpus text into one of the following
categories (a small sketch of this classification follows the list):
⁃ Unchanging text: text always present in the output; easiest to generate
⁃ Directly available data: text with info already in input data (or DB/KB)
⁃ Computable data: text with info that can be derived from input data via computation or reasoning
⁃ Unavailable data: text with info not present in or derivable from the input data; causes the most
problems and is impossible to generate (if it is not in the input, it cannot be in the output)
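A small Python sketch of these four categories as labels applied to invented corpus sentences; in practice this labelling is a manual analysis step performed by the developer, not something computed automatically:

```python
from enum import Enum

class DataStatus(Enum):
    UNCHANGING = "always present in the output; easiest to generate"
    DIRECTLY_AVAILABLE = "info already in the input data (or DB/KB)"
    COMPUTABLE = "info derivable from the input data by computation or reasoning"
    UNAVAILABLE = "info neither present in nor derivable from the input data"

# Hand-labelled (invented) example sentences from a hypothetical weather corpus.
corpus_analysis = [
    ("This report was generated automatically.", DataStatus.UNCHANGING),
    ("The temperature at 14:00 was 21 degrees.", DataStatus.DIRECTLY_AVAILABLE),
    ("Today was 3 degrees warmer than yesterday.", DataStatus.COMPUTABLE),
    ("Residents enjoyed the pleasant spring weather.", DataStatus.UNAVAILABLE),
]

for sentence, status in corpus_analysis:
    print(f"{status.name:<20} {sentence}")
```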


Metrics
