Text mining summary
1 – Search is divided into two parts: a metadata index and a text index
Metadata index = index based on the metadata associated with a document: title, author, date
and keywords. Like a library catalogue computer where you can search by "author"
Text index = index based on the full text of the documents themselves. Often used in search
engines, e.g. to find a keyword anywhere in a text.
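For illustration (not from the lecture material), a text index is often implemented as an inverted index that maps every token in the full text to the documents containing it. A minimal Python sketch on a made-up two-document corpus:

```python
from collections import defaultdict

# Toy corpus (made up for illustration)
docs = {
    1: "the cat sat on the mat",
    2: "the dog chased the cat",
}

# Build an inverted index: token -> set of document ids containing it
inverted_index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.lower().split():
        inverted_index[token].add(doc_id)

print(inverted_index["cat"])  # {1, 2}: both documents mention "cat"
print(inverted_index["dog"])  # {2}
```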
Event index = type of index used to organize and retrieve information about events, like
meetings or concerts. Typically involves metadata such as date, time, location, etc., as well as
any related documents such as agendas or presentations. Like checking, when planning a
vacation, what will happen during your time there.
Anomalies = data points that differ significantly from the norm or expected behaviour, e.g.
when a sentiment analysis model identifies a piece of text as positive while most other
similar texts are negative
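A minimal sketch of spotting such an anomaly, assuming we already have sentiment scores for a set of similar texts (the scores and the z-score threshold below are made up for illustration):

```python
import statistics

# Sentiment scores of similar texts; the last one is positive while the rest are negative
scores = [-0.8, -0.7, -0.9, -0.6, 0.7]

mean = statistics.mean(scores)
stdev = statistics.stdev(scores)

for i, s in enumerate(scores):
    z = (s - mean) / stdev
    if abs(z) > 1.5:  # threshold chosen for illustration only
        print(f"text {i} looks anomalous (score={s}, z={z:.2f})")
```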
A short text can be a complex message containing many relations and much information;
NLP technology is used to extract this information.
Computational linguistics = algorithms that model language data and define notions like
similarity, information value and sequence probabilities (e.g. developing chatbots that can
understand and respond to user queries)
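As a rough illustration of one of these notions, sequence probabilities can be estimated from bigram counts. A minimal sketch on a made-up toy corpus:

```python
from collections import Counter

# Toy corpus (made up for illustration)
corpus = "the cat sat on the mat the cat ran".split()

bigram_counts = Counter(zip(corpus, corpus[1:]))
unigram_counts = Counter(corpus[:-1])

# P("sat" | "cat") = count(("cat", "sat")) / count("cat")
p = bigram_counts[("cat", "sat")] / unigram_counts["cat"]
print(p)  # 0.5: after "cat" we saw "sat" once and "ran" once
```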
Natural Language Processing (NLP) = engineering to address aspects of natural language,
like tokenisation, lemmatisation, compound splitting and sentiment analysis (=> uses a
machine learning algorithm to determine the emotional tone of a given text)
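A quick way to get sentiment scores in Python is NLTK's VADER module; note that VADER is lexicon- and rule-based rather than a trained machine learning model, so this is just one possible approach (assumes NLTK is installed and the lexicon is downloaded):

```python
import nltk
nltk.download("vader_lexicon")  # one-time download
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("This course is surprisingly good!"))
# Returns negative/neutral/positive proportions and a compound score in [-1, 1]
```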
NLP toolkits = software packages and resources that provide and/or combine collections of
NLP modules (e.g. NLTK in Python)
Language applications = machine translation, summarisation, chatbots, text mining (e.g.
Google Translate or Siri)
Text mining = going from unstructured text to structured data (information or knowledge)
(our focus: understand the technology and its limitations, and build applications) (e.g. topic
modelling, which uses statistical algorithms to identify topics and patterns within a large
corpus of text)
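A minimal topic-modelling sketch with scikit-learn's LDA implementation, on a made-up four-sentence corpus (real topic modelling needs a much larger corpus):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

texts = [
    "the stock market fell sharply today",
    "investors worry about the market and shares",
    "the team won the football match",
    "the coach praised the football players",
]

# Bag-of-words counts, then fit a 2-topic LDA model
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the three highest-weighted terms per topic
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    print(f"topic {i}:", [terms[j] for j in topic.argsort()[-3:]])
```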
Week 2&3 – part 1:
NLP:
A complex problem (extracting information from texts) is broken down into a number of
smaller problems
Simple, structural problems are solved first; higher-level semantic tasks are solved
later, using the output of earlier modules as input
o So-called pipeline architecture, with dependencies across modules
o Error propagation
For each problem, different techniques can be used:
o Knowledge bases & rules (linguistic knowledge)
o Machine learning (supervised and unsupervised), data-driven
We always need to do preprocessing:
Even for the current state-of-the-art deep learning systems
First problem: what is a word, what is a sentence? Not trivial
Tokenization (examples of problems, see the sketch after this list):
o 21st century, quotes, don't, hyphens, $100,45, etc.
Sentence splitting (examples of problems):
o Dr., bol.com, etc., white spaces, tables, HTML markup, <h1></h1>, new lines
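A minimal sketch of how an off-the-shelf tokenizer handles a few of the cases above, using NLTK's default sentence and word tokenizers (assumes NLTK is installed; newer NLTK versions may need the 'punkt_tab' resource instead of 'punkt'):

```python
import nltk
nltk.download("punkt")  # one-time download of the tokenizer models
from nltk.tokenize import sent_tokenize, word_tokenize

text = "Dr. Smith spent $100,45 at bol.com. Don't ask why."
print(sent_tokenize(text))              # the abbreviation "Dr." should not end a sentence
print(word_tokenize("Don't do that."))  # ['Do', "n't", 'do', 'that', '.']
```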
Example pipelines: a named entity recognition pipeline and a sentiment analysis pipeline.
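As a hedged illustration of such a pipeline (not the exact one from the lecture), spaCy runs tokenization, tagging and parsing as earlier steps before named entity recognition; this assumes spaCy and its small English model en_core_web_sm are installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # tokenizer, tagger, parser and NER in one pipeline
doc = nlp("Google opened a new office in Amsterdam in 2021.")

for ent in doc.ents:
    print(ent.text, ent.label_)
# e.g. Google ORG, Amsterdam GPE, 2021 DATE
```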
Some issues:
Dependencies across modules result in error propagation
Ambiguities (multiple values with confidence scores, e.g. POS tagging: 80% noun, 20%
verb) are often not exploited by the next levels (see the sketch after this list)
Conflicts: different modules state information that is not compatible
Complex and difficult to maintain, e.g. input and output need to be interoperable
across modules
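A minimal sketch of the POS ambiguity mentioned above, using NLTK's default tagger (assumes the tokenizer and tagger resources are downloaded; resource names may differ slightly between NLTK versions). The tagger commits to a single tag per token, so a wrong choice is inherited by later modules:

```python
import nltk
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")  # one-time downloads
from nltk import word_tokenize, pos_tag

print(pos_tag(word_tokenize("They run the company.")))  # 'run' should be tagged as a verb here
print(pos_tag(word_tokenize("She went for a run.")))    # 'run' should be tagged as a noun here
```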
Text mining is like solving a puzzle. You have to put all the pieces together to understand
what the puzzle is trying to show you.
But sometimes there are problems that can make solving the puzzle difficult. One problem is
when the different pieces of the puzzle depend on each other, so if one piece is wrong, it
affects all the other pieces.
Another problem is when there are different meanings for the same word, like "run" can
mean to jog or to manage. This can make it hard to understand what the puzzle is trying to
say.
Also, sometimes there can be different parts of the puzzle that don't match up or agree with
each other. This can make it even harder to understand the puzzle.
Lastly, text mining is complex and requires a lot of work to make sure everything fits together
properly. It's like putting together a big Lego castle where each block has to fit with the
others.
For example, imagine you are trying to understand a book about a dog named Max. One
piece of the puzzle might be the word "bark." Depending on how it's used, it could mean
that Max is barking at someone, or it could refer to the bark of a tree. If the wrong meaning
is chosen, it affects the rest of the puzzle.