Lecture notes

Lecture Notes NLP (all lectures)

All NLP lectures summarized. There was no textbook for this course, so these notes cover all the exam material.

  • 19 October 2024
  • 18 pages
  • 2023/2024
  • Lecture notes
  • Hartmut Fitz
  • All lectures
Seller: donjaschipper
Lecture notes Natural Language Processing
Lecture 1
Why is language special?
- Language emerged in Homo sapiens between 30,000–100,000 years ago
o Other intelligent species have communication, but language is uniquely human
o Human language conveys an unbounded number of meanings
- Every child can learn any of the ~6,000 languages that exist today during the “critical period”
o Learning the “mother tongue” takes no effort at all, compared to a second language (SL)
- Transmission of knowledge; writing systems were invented ~5,000 years ago

Important tools for NLP
- Finite-state automata (FSA)
o An abstract machine that can process sequences of symbols
o Consists of states and transitions
o Reads sequences of symbols, one at a time
o An automaton accepts a sequence of symbols if and only if it stops in one of the
predefined final states after processing the entire sequence
o A finite-state automaton is a five-tuple ⟨Σ, S, q0, F, δ⟩ where:
 Σ is a set of symbols – the alphabet
 S is a finite set of states
 q0 ∈ S is the initial state
 F ⊆ S is the set of final states
 δ is the transition function
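The five-tuple above can be sketched directly in code. A minimal illustration (the even-number-of-1s automaton is my own example, not from the lecture):

```python
# A minimal sketch of the five-tuple FSA ⟨Σ, S, q0, F, δ⟩.
# This toy automaton accepts binary strings with an even number of 1s.

def make_fsa():
    sigma = {"0", "1"}                 # Σ: alphabet
    states = {"even", "odd"}           # S: finite set of states
    q0 = "even"                        # q0: initial state
    final = {"even"}                   # F ⊆ S: final (accepting) states
    delta = {                          # δ: (state, symbol) -> next state
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    return sigma, states, q0, final, delta

def accepts(fsa, string):
    """Accept iff the automaton halts in a final state after reading the whole string."""
    sigma, _, state, final, delta = fsa
    for symbol in string:
        if symbol not in sigma:
            return False               # symbol outside the alphabet: reject
        state = delta[(state, symbol)]
    return state in final

fsa = make_fsa()
print(accepts(fsa, "1011"))  # → False (three 1s)
print(accepts(fsa, "11"))    # → True
```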
- Regular languages
o If Σ is an alphabet, then:
 The empty set Ø (= { }) and {ε} are regular languages (ε is the empty string)
 The set {x} is a regular language for every x ∈ Σ
 If A and B are regular languages, then so is their union A U B
 If A and B are regular languages, then so is their concatenation A • B
 If A is a regular language, then so is A* (“Kleene closure”), where
A* = ⋃ᵢ≥₀ Aⁱ, with A⁰ = {ε}, A¹ = A, and Aⁱ = A • ... • A (i times)
 Nothing else is a regular language
 If A is a regular language, then so is its complement Σ* \ A
 If A and B are regular languages, then so is their intersection A ∩ B
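Since regular expressions denote regular languages, the inductive definition above can be illustrated with Python's `re` module: union corresponds to `|`, concatenation to juxtaposition, and Kleene closure to `*` (the concrete patterns below are my own examples):

```python
import re

# Build regular languages compositionally, mirroring the inductive definition:
a, b = "a", "b"                    # {x} is regular for every x ∈ Σ
union = f"(?:{a}|{b})"             # A ∪ B
concat = f"{a}{b}"                 # A • B
star = f"(?:{a})*"                 # A* = ⋃ᵢ≥₀ Aⁱ

assert re.fullmatch(union, "a")    # 'a' ∈ A ∪ B
assert re.fullmatch(concat, "ab")  # 'ab' ∈ A • B
assert re.fullmatch(star, "")      # ε ∈ A* (the i = 0 case)
assert re.fullmatch(star, "aaa")   # 'aaa' ∈ A*
print("all closure examples matched")
```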




- Regular expressions

o Can be used to represent regular languages, or (equivalently) finite-state automata
o A regex is a sequence of characters that defines a search pattern
o Cheat sheet: slide 40/41
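As a small illustration of a regex defining a search pattern (the pattern and text are my own example, not from the slides):

```python
import re

# A regex as a search pattern: match capitalized words.
pattern = re.compile(r"\b[A-Z][a-z]+\b")
text = "Alice met Bob in New York."
print(pattern.findall(text))  # → ['Alice', 'Bob', 'New', 'York']
```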

Lecture 2
- Phonology - study of phonemes, the basic units of speech sounds, and
phonotactics
- Morphology – studies words and the parts from which they are composed,
i.e., morphemes
- Pragmatics – studies the difference between literal and implied meaning.
E.g., “I feel like pizza today.”

FSAs can be used for testing if strings belong to a language or not (classification).
However, many natural language processing tasks require converting one string to another:




For these tasks we need rewrite systems – finite-state transducers (FSTs)

- If a transducer has to look ahead an unbounded distance in
order to correctly generate the output on the first decision, it
is nondeterministic.
- Unlike FSAs, a nondeterministic FST cannot always be converted
to a deterministic equivalent!
- A finite-state transducer (FST) is an FSA that can produce
output strings:
o FST → ⟨Σ, O, S, q0, F, δ⟩
 Σ is a set of input symbols (alphabet)
 O is the set of output symbols
 S is a set of states
 q0 ∈ S is the initial state
 F ⊆ S is the set of final states
 δ is a function from (Σ ∪ {ε}) × S into P(S × (O ∪ {ε})),
i.e., each (input symbol, state) pair maps to a set of possible
(next state, output symbol) pairs
 Example transition rule
o ⟨⟨a, q0⟩, ⟨q1, b⟩⟩
o when reading symbol a in state q0, go to state q1 and write b
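A minimal sketch of such a transducer, following the tuple and transition-rule format above (the toy rewrite rules are my own example):

```python
# δ maps (input symbol, state) to a set of (next state, output symbol) pairs,
# matching the transition-rule format ⟨⟨a, q0⟩, ⟨q1, b⟩⟩.
# This toy transducer rewrites every 'a' to 'b' and copies 'b' unchanged.

delta = {
    ("a", "q0"): {("q0", "b")},   # reading 'a' in q0: stay in q0, write 'b'
    ("b", "q0"): {("q0", "b")},   # reading 'b' in q0: stay in q0, write 'b'
}
q0, final = "q0", {"q0"}

def transduce(string, state=q0, out=""):
    """Return all outputs reachable on accepting paths (handles nondeterminism)."""
    if not string:
        return {out} if state in final else set()
    results = set()
    for nxt, sym in delta.get((string[0], state), set()):
        results |= transduce(string[1:], nxt, out + sym)
    return results

print(transduce("aba"))  # → {'bbb'}
```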

Applications of FST in NLP
1. Tokenization
2. Stemming
3. Lemmatization
4. Morphological processing
- Tokenization

o Turning a string, document or text into tokens (smaller chunks), e.g., sentences,
words, characters, or subwords. Sometimes also called “segmentation”
o Important first step in preprocessing of text for more complex applications (every
state-of-the-art NLP system uses tokenization to prepare input)
o Rules for tokenization can be created with regex + FST
o Sentence tokenization often based on punctuation.
 Does not always work as intended: “Wow!”, said Alice. gets split into “Wow! / ”, said Alice.
o In English, words are often separated from each other by blanks (whitespace).
 But whitespace is not always sufficient for word tokenization: New York ->
New / York, but you want them together.
o Very hard for languages that do not use whitespace, such as Thai,
Japanese, etc.
o Non-alphabetic languages use logograms (a.k.a. characters, glyphs) that correspond
to words or morphemes
 Use a weighted FST
 Each path has an associated cost
 Find the path that minimizes overall cost
 Costs can be learned from data (annotated corpora)
o Decoding problem: given a model and a
sequence of observations, what is the most likely state sequence in the model that
produces these observations?
 The Viterbi algorithm finds the optimal path in the least
number of steps
 Always choose the cheapest path to get somewhere (prune a path if it
reaches the same state at the same time as another path but at a
higher cost)
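The pruning idea above can be sketched as follows; the two-state model and all costs are invented for illustration and are not from the lecture:

```python
# Viterbi sketch: per state and time step, keep only the lowest-cost path,
# pruning costlier paths that reach the same state at the same time.
# Toy model: label each observation as part of a WORD or a SPACE.

states = ["WORD", "SPACE"]
start_cost = {"WORD": 0.0, "SPACE": 2.0}
trans_cost = {("WORD", "WORD"): 1.0, ("WORD", "SPACE"): 2.0,
              ("SPACE", "WORD"): 0.5, ("SPACE", "SPACE"): 3.0}
emit_cost = {("WORD", "x"): 0.5, ("WORD", "_"): 3.0,
             ("SPACE", "x"): 3.0, ("SPACE", "_"): 0.5}

def viterbi(obs):
    # best[s] = (cost of cheapest path ending in s, that path)
    best = {s: (start_cost[s] + emit_cost[(s, obs[0])], [s]) for s in states}
    for o in obs[1:]:
        new = {}
        for s in states:
            # prune: keep only the cheapest way to be in state s now
            cost, path = min(
                (best[p][0] + trans_cost[(p, s)] + emit_cost[(s, o)],
                 best[p][1] + [s])
                for p in states)
            new[s] = (cost, path)
        best = new
    return min(best.values())   # overall cheapest accepting path

cost, path = viterbi(["x", "x", "_", "x"])
print(path)  # → ['WORD', 'WORD', 'SPACE', 'WORD']
```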
- Inflectional Morphology
o The morphology of a language defines how words are built up from smaller parts.
Morphemes are the smallest units that have some meaning in a language
o There are many productive patterns in language (but also many exceptions). It is
much more efficient in NLP to store these rules than all word forms. Rules are then
used to derive one form from another, or to decompose unknown words.
o Morphological rules are useful if we want to decompose words into subword units
(for example into the stem of the word -> stemming).
- Stemming
o cats = cat + s, walking = walk + ing
o Widely used - Porter algorithm
 Based on a series of cascaded rewrite rules implemented as an FST:
 ATIONAL → ATE (e.g., relational → relate)
 ING → ε if and only if the stem contains a vowel (e.g., motoring → motor, but
not bring → br)
 Lexicon-free (no dictionary or vocabulary needs to be used)
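The two quoted rules can be sketched as a small cascade of rewrite rules (the real Porter algorithm has many more steps and conditions; this is only an illustration):

```python
import re

def has_vowel(stem):
    """Crude vowel check used to guard the ING rule."""
    return bool(re.search(r"[aeiou]", stem))

def stem(word):
    # Rule 1: ATIONAL → ATE   (relational → relate)
    word = re.sub(r"ational$", "ate", word)
    # Rule 2: ING → ε, only if the remaining stem contains a vowel
    # (motoring → motor, but bring stays bring)
    if word.endswith("ing") and has_vowel(word[:-3]):
        word = word[:-3]
    return word

print(stem("relational"))  # → relate
print(stem("motoring"))    # → motor
print(stem("bring"))       # → bring
```

Note that, like the Porter algorithm itself, this is lexicon-free: no dictionary is consulted, only rewrite rules.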
- Lemmatization
