Summary: all lectures summarized
Lecture 1: Introduction to Computational Intelligence
Intelligence: the ability to acquire and apply knowledge and to solve problems

Computation: the action of mathematical calculation; the use of computers

Knowledge: experience (= data) represented by facts and information

Computational intelligence: computers acquire knowledge and solve problems

Artificial intelligence:

- Symbolic AI: logic, knowledge representations,…
- Sub-symbolic AI: neural nets, evolutionary algorithms, nature-inspired algorithms
- Statistical Learning
- Probabilistic methods
- Optimization

Computational intelligence (can be seen as a subfield of AI):

- Neural Networks
- Evolutionary algorithms
- Nature-inspired algorithms (swarm intelligence)
- Probabilistic methods
- Optimization

The most important point is that optimization is common to both.

Why is AI so successful? Accessible hardware (which was not available in the past), powerful hardware,
an intuitive programming language (Python), and specialized packages (PyTorch, NumPy, SciPy, ...).

The components of AI/CI systems: knowledge representation (how do we represent and process data?),
knowledge acquisition (how do we extract knowledge?), and learning problems (what kinds of problems
can we formulate?).

Optimization: find the optimal solution (minimum or maximum) from a given set of possible
solutions, i.e. the one that minimizes/maximizes a given objective function.

Learning as an optimization task: for given data D, find the best data representation from a given class
of representations, i.e. the one that minimizes a given learning objective (loss). Optimization algorithm = learning
algorithm!

Learning tasks:

- Supervised learning
o We distinguish inputs and outputs
o We are interested in prediction
o We minimize the prediction error
- Unsupervised learning
o No distinction among variables
o We look for data structure
o We minimize reconstruction error, compression rate,..
- Reinforcement learning
o An agent interacts with an environment

o We want to learn a policy
o Each action is rewarded
o We maximize the total reward

Examples of supervised learning: Classification, Regression

Examples of unsupervised learning: Clustering, Reconstruct a function

Examples of reinforcement learning: How to control drones, Pac-Man

Lecture 2: Optimization
Example of an optimization problem is the travelling salesman problem: minimize the total distance
such that each place is visited exactly once. Another optimization problem is the coverage problem: we
know the range of each tool, and we have to place the tools such that they cover as many locations as
possible (the coverage is as big as possible). Another optimization problem is curve fitting: fitting a
curve through data points. A different optimization problem is radiation optimization: how to maximize
the radiation through an antenna. Another famous problem is the knapsack problem: we have a bag (that
can hold a certain volume) and a set of objects; find the best combination of objects that fits in the bag.
Recap: find the optimal solution (minimum or maximum) from a given set of possible solutions, i.e. the
one that minimizes/maximizes a given objective function:

minimize f(x) over x in the search space X, subject to the constraints g_i(x) ≤ 0

Continuous optimization is easier than discrete. Optimal solution: the optimal solution is smaller
than (or equal to) any other feasible solution AND the constraints hold. Constraints are very important, because they can
shift the optimal solution to another, local optimum.
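
As a concrete illustration of this formulation, here is a minimal brute-force sketch of the knapsack problem (the item values, volumes, and capacity below are made-up toy numbers, not from the lecture): the candidate solutions are all subsets of the objects, the objective is the total value, and the constraint is the bag's capacity.

# Minimal sketch: the knapsack problem as "search space + objective + constraint".
from itertools import combinations

values  = [6, 5, 4, 3, 2]     # value of each object (toy numbers)
volumes = [5, 4, 3, 2, 1]     # volume of each object (toy numbers)
capacity = 8                  # volume the bag can hold

best_value, best_subset = 0, ()
for r in range(len(values) + 1):
    for subset in combinations(range(len(values)), r):     # set of possible solutions
        if sum(volumes[i] for i in subset) <= capacity:     # constraint
            value = sum(values[i] for i in subset)          # objective function
            if value > best_value:
                best_value, best_subset = value, subset

print(best_subset, best_value)   # best combination of objects and its total value

Enumerating all subsets like this is exponential in the number of objects, which is exactly why discrete (combinatorial) problems are considered harder in the taxonomy below.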

Taxonomy of optimization problems:

- Constrained vs unconstrained: constrained problems are the ones that have explicit constraints. If there are no constraints, basically everything is allowed.
- Convex vs non-convex: convex functions are, for example, exponential functions: if you follow the curve downhill, you always end up in the optimal solution. Convex sets: if you take a step between any two points of the set, you always stay inside the set (a circle is a convex set; a circle with a hole in it is non-convex).
- Deterministic vs stochastic: deterministic means that the objective function and the constraints are known and nothing can change. Stochastic: randomness is involved (e.g. the objective function is a random function).
- Global vs local: sometimes having a local optimum is already fine. The global optimum is the best solution overall; a local optimum is the best in its neighborhood.
- Continuous vs discrete: in discrete (combinatorial) optimization the number of possible solutions is exponential (it requires considering many different combinations).
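
Convexity of a function can be checked numerically with the defining inequality f(t*x + (1-t)*y) ≤ t*f(x) + (1-t)*f(y). The sketch below (the test range, number of trials, and tolerance are my own choices) applies it to the lecture's convex example, the exponential, and to a non-convex sine.

# Minimal sketch: numerically probing the convexity inequality on random points.
import numpy as np

def looks_convex(f, trials=10_000, lo=-3, hi=3):
    rng = np.random.default_rng(0)
    x, y = rng.uniform(lo, hi, trials), rng.uniform(lo, hi, trials)
    t = rng.uniform(0, 1, trials)
    # Small tolerance to absorb floating-point rounding error.
    return np.all(f(t * x + (1 - t) * y) <= t * f(x) + (1 - t) * f(y) + 1e-12)

print(looks_convex(np.exp))   # True: the exponential is convex
print(looks_convex(np.sin))   # False: the sine is not convex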

Before tackling any problem, formulate the problem as well as possible (do not think about the
solution/method yet). The method should be adjusted to the problem.

Taxonomy of optimization methods

- Derivative-free methods (0th-order methods): you do not need to know the mathematical form of the objective function. You just "look" for candidate solutions and try to make them better. E.g. hill climbing, iterated local search, ...
- Gradient-based methods (1st-order methods): more specialized, because information about the gradient of the objective function is used. A gradient is a vector of first derivatives. E.g. gradient descent, ADAM, ...
- Hessian-based methods (2nd-order methods): require calculating the Hessian, a matrix of second derivatives. E.g. Newton's method, ... Hessian-based methods are expensive to compute; if you have a small problem, you should start here.
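
A minimal sketch of the three classes on an assumed toy objective f(x) = (x - 3)^2 + 1, for which the first and second derivatives are known analytically: a derivative-free step just compares nearby candidates, a gradient step moves against the derivative, and a Newton (Hessian-based) step rescales it by the second derivative.

# Minimal sketch: one step of a 0th-, 1st- and 2nd-order method on a toy function.
f      = lambda x: (x - 3) ** 2 + 1
grad_f = lambda x: 2 * (x - 3)     # first derivative (the "gradient" in 1D)
hess_f = lambda x: 2.0             # second derivative (the "Hessian" in 1D)

x = 0.0

# 0th order: just "look" at nearby candidate solutions and keep the best one.
step = 0.5
x0th = min([x - step, x, x + step], key=f)

# 1st order: move against the gradient with a learning rate.
lr = 0.1
x1st = x - lr * grad_f(x)

# 2nd order: Newton's method rescales the step by the (inverse) Hessian.
x2nd = x - grad_f(x) / hess_f(x)

print(x0th, x1st, x2nd)   # 0.5, 0.6, 3.0 -- the Newton step jumps straight to the minimum

On this quadratic the Newton step lands exactly on the minimizer, which is the pay-off that can make the extra cost of the Hessian worthwhile for small problems.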

Iterative optimization methods: we are interested in numerical methods; therefore, we consider
iterative optimization methods. You start with a random possible model in the model space and from
there iteratively look for better models. The general procedure: look at the current point, generate the
next candidate solution with the algorithm step ψ, and check whether it is better or not; if not, go back
and look for other (better) points. The crucial part is the formulation of ψ: this is the algorithm step.
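
A minimal sketch of this general procedure, with ψ instantiated as a simple random perturbation (hill climbing); the toy objective, the step size, and the fixed iteration budget are assumptions, and swapping in a different ψ gives a different algorithm.

# Minimal sketch: generic iterative optimization with an algorithm step psi.
import random

def iterative_optimize(f, psi, x0, iterations=1000):
    x = x0                          # start from an initial (random) candidate
    for _ in range(iterations):
        candidate = psi(x)          # propose the next solution via the algorithm step psi
        if f(candidate) < f(x):     # check whether it is better ...
            x = candidate           # ... accept it; otherwise keep looking from x
    return x

f   = lambda x: (x - 3) ** 2 + 1               # toy objective
psi = lambda x: x + random.uniform(-0.1, 0.1)  # algorithm step: small random move

print(iterative_optimize(f, psi, x0=random.uniform(-10, 10)))   # approximately 3.0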

Gradient descent: the derivatives of the objective function with respect to the optimization variables form the
gradient: ∇f(x) = [∂f(x)/∂x_1, ..., ∂f(x)/∂x_D]. If you have (analytical) access to the objective function, always use gradient
descent. Intuition behind the gradient
