NLP - Chapter 11 Questions and Answers 2024

NLP - Chapter 11

Transformer decoder - answer takes its own previous outputs and the encoder's output
as input
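
As an illustration (not part of the original card), a minimal Python sketch of the decoder's two inputs, assuming PyTorch's built-in decoder modules and illustrative dimensions:

import torch
import torch.nn as nn

# a small decoder stack; sizes are illustrative, not from the card
layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=6)

memory = torch.randn(1, 10, 512)  # the encoder's output
tgt = torch.randn(1, 4, 512)      # embeddings of the decoder's own previous outputs

out = decoder(tgt=tgt, memory=memory)  # attends over both inputs
print(out.shape)  # torch.Size([1, 4, 512])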

BERT - answer Bidirectional Encoder Representations from Transformers

BERT Paper - answer Pre-training of Deep Bidirectional Transformers for Language
Understanding

How are input embeddings calculated in BERT? - answer the sum of the token
embeddings, the segment embeddings, and the position embeddings
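
A minimal sketch of that sum (sizes follow BERT_BASE; PyTorch assumed, token ids illustrative):

import torch
import torch.nn as nn

tok_emb = nn.Embedding(30522, 768)  # WordPiece token embeddings
seg_emb = nn.Embedding(2, 768)      # segment A/B embeddings
pos_emb = nn.Embedding(512, 768)    # learned position embeddings

token_ids = torch.tensor([[101, 7592, 2088, 102]])  # e.g. [CLS] hello world [SEP]
segment_ids = torch.zeros_like(token_ids)           # all segment A here
positions = torch.arange(token_ids.size(1)).unsqueeze(0)

inputs = tok_emb(token_ids) + seg_emb(segment_ids) + pos_emb(positions)
print(inputs.shape)  # torch.Size([1, 4, 768])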

BERT_BASE and BERT_LARGE - answer BERT_BASE has 12 layers, a hidden size of
768, and 110M total parameters. BERT_LARGE has 24 layers, a hidden size of 1024,
and 340M total parameters
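
These totals can be roughly reproduced by instantiating both configurations (a sketch assuming the Hugging Face transformers library; the encoder alone counts slightly differently from the published totals):

from transformers import BertConfig, BertModel

base = BertModel(BertConfig())  # defaults match BERT_BASE
large = BertModel(BertConfig(num_hidden_layers=24, hidden_size=1024,
                             num_attention_heads=16, intermediate_size=4096))

for name, model in [("BERT_BASE", base), ("BERT_LARGE", large)]:
    n = sum(p.numel() for p in model.parameters())
    print(f"{name}: ~{n / 1e6:.0f}M parameters")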

difference between BERT and DistilBERT - answer DistilBERT is a distilled version of
BERT, with fewer layers and parameters. It aims to retain 97% of BERT's performance
while reducing its size by about 40%. Both use a vocabulary of 30K subwords.
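
The size difference can be checked directly (a sketch assuming the transformers library and the standard pretrained checkpoints):

from transformers import AutoModel

bert = AutoModel.from_pretrained("bert-base-uncased")
distil = AutoModel.from_pretrained("distilbert-base-uncased")

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"BERT: ~{count(bert) / 1e6:.0f}M, DistilBERT: ~{count(distil) / 1e6:.0f}M")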

Bidirectional encoders - answer have access to tokens both before and after the current
one, allowing for greater context understanding

masked language model (MLM) - answer instead of next-word prediction, we use a cloze
task where a word is masked and we ask which word fits best; this is how BERT is trained
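
The cloze task in practice (a sketch assuming the transformers fill-mask pipeline and a pretrained BERT checkpoint):

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))  # "paris" should rank first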

What is the purpose of the next sentence prediction (NSP) task in training BERT? -
answer The NSP task determines if a given sentence naturally follows another sentence.
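
A minimal NSP sketch (assuming transformers' BertForNextSentencePrediction; logit index 0 means "is the next sentence"):

import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

enc = tok("The man went to the store.", "He bought a gallon of milk.",
          return_tensors="pt")
logits = model(**enc).logits
print(torch.softmax(logits, dim=-1))  # high probability on index 0 ("is next")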

Explain how contextual embeddings are extracted from BERT. - answer We feed input
tokens into a bidirectional self-attention model like BERT, and the output layer provides
the contextual vectors for each token.
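
A sketch of that extraction (assuming transformers and bert-base-uncased):

import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

enc = tok("Time flies like an arrow.", return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

vectors = out.last_hidden_state  # one 768-d contextual vector per input token
print(vectors.shape)             # torch.Size([1, num_tokens, 768])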

pretraining - answer spend a long time training the LM on massive corpora; roughly 40
epochs (passes over the full training set) minimum

domain-adaptive pretraining - answer after pretraining, continue pretraining using a
domain-specific corpus
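
A compact sketch of that recipe, continuing the MLM objective on in-domain text (transformers and datasets assumed; the file name and hyperparameters are illustrative):

from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# load and tokenize a domain-specific corpus (path is hypothetical)
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})
tokenized = corpus["train"].map(
    lambda b: tok(b["text"], truncation=True, max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-bert", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()  # keep pretraining, now on the domain corpus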

keep pretraining - answer many studies show that continuing pretraining beyond
standard limits still yields downstream benefits
