Machine Learning Exam with perfect answers 2024
Exam (elaborations) · Questions & answers · 13 pages · August 4, 2024 · Seller: HopeJewels
Machine Learning Exam
t/f A neural network with multiple hidden layers and sigmoid nodes can form non-linear
decision boundaries. correct answers true
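As a sketch of why this is true, a tiny 2-2-1 sigmoid network with hand-picked (not learned, purely illustrative) weights can compute XOR, a function no single linear boundary can separate:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hand-picked illustrative weights: hidden unit 1 acts like OR,
# hidden unit 2 like NAND, and the output unit ANDs them -> XOR.
W1 = np.array([[20.0, 20.0],     # ~ OR
               [-20.0, -20.0]])  # ~ NAND
b1 = np.array([-10.0, 30.0])
W2 = np.array([20.0, 20.0])      # ~ AND
b2 = -30.0

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

for x in ([0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]):
    print(x, round(float(forward(np.array(x)))))
```

The hidden layer carves two half-planes and the output unit intersects them, which is exactly the non-linear decision boundary the question refers to.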

t/f If you increase the number of hidden layers in a Multi-Layer Perceptron (fully
connected network), the classification error on test data always decreases. correct
answers false

t/f The number of neurons in the output layer must match the number of classes (Where
the number of classes is greater than 2) in a supervised learning task. correct answers
false

t/f Using mini-batch gradient descent, the model update frequency is higher than with
batch gradient descent, which allows for more robust convergence, avoiding local minima.
correct answers true
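The update-frequency difference can be seen in a minimal sketch (toy linear-regression data and hyperparameters are illustrative assumptions): with the same number of epochs, mini-batches of 10 over 100 examples perform ten times as many weight updates as full-batch descent.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: y = 3x + noise (illustrative)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=100)

def train(batch_size, epochs=20, lr=0.1):
    w, updates, n = 0.0, 0, len(X)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # Gradient of mean squared error on this (mini-)batch
            grad = 2.0 * np.mean((w * X[b, 0] - y[b]) * X[b, 0])
            w -= lr * grad
            updates += 1
    return w, updates

w_batch, u_batch = train(batch_size=100)  # full batch: 1 update per epoch
w_mini, u_mini = train(batch_size=10)     # mini-batch: 10 updates per epoch
print(u_batch, u_mini)
```

The noisier per-batch gradients are also what gives mini-batch descent its chance to jump out of shallow local minima.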

Which of the following gives non-linearity to a neural network?

(a) Stochastic Gradient Descent.

(b) Rectified Linear Unit

(c) Convolution function

(d) None of the above correct answers (b) Rectified Linear Unit
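A quick way to see why (b) is the answer: a linear map f satisfies f(a + b) = f(a) + f(b), and ReLU visibly violates this (SGD is an optimizer, not a layer, and convolution is a linear operation). A minimal check:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Linearity would require relu(a + b) == relu(a) + relu(b); it fails:
a, b = np.array([2.0]), np.array([-3.0])
print(relu(a + b))        # relu(-1) = 0
print(relu(a) + relu(b))  # 2 + 0 = 2
```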

For a classification task, instead of random weight initializations in a neural network, we
set all the weights to zero. Which of the following statements is true?

(a) There will not be any problem and the neural network will train properly

(b) The neural network will train but all the neurons will end up recognizing the same
thing

(c) The neural network will not train as there is no net gradient change

(d) None of these correct answers (b) The neural network will train but all the neurons
will end up recognizing the same thing
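The symmetry problem behind answer (b) can be demonstrated directly. In this sketch (network size, data, and learning rate are illustrative assumptions), a 2-2-1 sigmoid network initialized with all-zero weights is trained for 100 steps; both hidden units receive identical gradients at every step, so their weight rows never diverge:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-2-1 network, ALL weights initialized to zero (illustrative sizes)
W1 = np.zeros((2, 2)); b1 = np.zeros(2)
W2 = np.zeros(2);      b2 = 0.0

x, t, lr = np.array([1.0, 2.0]), 1.0, 0.5  # one training example
for _ in range(100):
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    dy = (y - t) * y * (1 - y)       # squared-error loss, sigmoid output
    dW2 = dy * h
    dh = dy * W2 * h * (1 - h)       # backprop into the hidden layer
    dW1 = np.outer(dh, x)
    W2 -= lr * dW2; b2 -= lr * dy
    W1 -= lr * dW1; b1 -= lr * dh

# Training did happen (W2 moved), but the two hidden units are clones:
print(W2)
print(W1[0], W1[1])
```

Both rows of `W1` stay identical: every hidden neuron ends up "recognizing the same thing", which is exactly what random initialization exists to prevent.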

What are the steps for using a gradient descent algorithm in training neural networks?
1. Calculate the error between the actual value and the predicted value
2. Reiterate until you find the best weights of the network
3. Pass an input through the network and get values from the output layer
4. Initialize random weights and biases
5. Go to each neuron that contributes to the error and change its respective values
to reduce the error.

(a) 1, 2, 3, 4, 5

(b) 5, 4, 3, 2, 1

(c) 3, 2, 1, 5, 4

(d) 4, 3, 1, 5, 2 correct answers (d) 4, 3, 1, 5, 2
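The correct ordering 4, 3, 1, 5, 2 is just the standard training loop. A minimal sketch for a single linear neuron (the toy data and hyperparameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy task: learn y = 2x with one linear neuron (illustrative)
X = rng.normal(size=50)
Y = 2.0 * X

# Step 4: initialize random weight and bias
w, b, lr = rng.normal(), rng.normal(), 0.1
# Step 2: reiterate until the weights are good
for _ in range(200):
    # Step 3: pass inputs through the network, get the outputs
    pred = w * X + b
    # Step 1: calculate the error between actual and predicted values
    err = pred - Y
    # Step 5: adjust each contributing parameter to reduce the error
    w -= lr * np.mean(err * X)
    b -= lr * np.mean(err)

print(round(w, 3), round(b, 3))  # close to 2.0 and 0.0
```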

In a neural network, knowing the weight and bias of each neuron is the most important
step. If you can somehow get the correct value of weight and bias for each neuron, you
can approximate any function. What would be the best way to approach this?

(a) Assign random values and hope they are correct

(b) Search every possible combination of weights and biases till you get the best value

(c) Iteratively check that after assigning a value how far you are from the best values,
and slightly change the assigned values to make them better

(d) None of these correct answers (c) Iteratively check that after assigning a value how
far you are from the best values, and slightly change the assigned values to make them
better

t/f A perceptron is guaranteed to perfectly learn a given linearly separable function
within a finite number of training steps. correct answers true
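The perceptron convergence theorem can be exercised on toy data. In this sketch (data generation and margin filter are illustrative assumptions), the classic perceptron update runs until an epoch produces zero mistakes, which is guaranteed to happen in finitely many steps on linearly separable data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Linearly separable toy data: label = sign(x1 + x2), with a clear margin
X = rng.normal(size=(40, 2))
X = X[np.abs(X.sum(axis=1)) > 0.5]
y = np.sign(X.sum(axis=1))

w, b, steps = np.zeros(2), 0.0, 0
while True:
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:  # misclassified example
            w += yi * xi            # classic perceptron update
            b += yi
            errors += 1
            steps += 1
    if errors == 0:                 # a full clean pass: converged
        break

print(steps)  # finite, as the convergence theorem guarantees
```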

t/f A multiple-layer neural network with linear activation functions is equivalent to a
single-layer perceptron that uses the same error function on the output layer and has
the same number of inputs. correct answers true
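Why this is true: the composition of linear maps is itself a linear map, so stacked linear layers collapse algebraically into one layer. A minimal check (layer sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two linear (identity-activation) layers, arbitrary sizes (illustrative)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

# Their composition is a single linear layer with these parameters:
W = W2 @ W1
b = W2 @ b1 + b2

x = rng.normal(size=3)
two_layer = W2 @ (W1 @ x + b1) + b2
one_layer = W @ x + b
print(np.allclose(two_layer, one_layer))  # True
```

This is why non-linear activations are essential: without them, depth adds no representational power.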

Which of the following is NOT correct about ReLU (Rectified Linear Unit) activation
function?

(a) Does not saturate in the positive region.

(b) Computationally efficient compared to sigmoid and Tanh activation functions

(c) Usually converges faster than sigmoid and Tanh activation functions

(d) Zero-centered output. correct answers (d) Zero-centered output.
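Options (a) and (d) can both be checked numerically: ReLU grows without bound on the positive side (no saturation), but its output is never negative, so it cannot be zero-centered. A minimal sketch:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

z = np.linspace(-5, 5, 11)
out = relu(z)
# (a) no saturation in the positive region: output grows without bound
print(relu(np.array([10.0, 100.0])))
# (d) NOT zero-centered: the minimum output is 0 and the mean is positive
print(out.min(), out.mean())
```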
