Machine Learning according to Herbert Alexander Simon:
Learning is any process by which a system improves performance from experience.
Machine Learning is concerned with computer programs that automatically improve their
performance through experience.
For example, spam e-mail filtering: after analyzing several spam e-mails, the machine learns to recognize spam e-mails by their shared characteristics (unsupervised learning).
In supervised learning, by contrast, rules and restrictions are given to the machine.
Why machine learning?
Develop systems that can automatically adapt and customize themselves to individual
users.
Discover new knowledge from large databases (data mining).
Mimic humans and replace them in monotonous tasks that require some intelligence.
Develop systems that are too difficult/expensive to construct manually because they require detailed skills or knowledge tuned to a specific task (knowledge engineering bottleneck).
Statistics, machine learning and data mining: distinctions are fuzzy!
Statistics: more theory-based, more focused on testing hypotheses.
Machine learning: more heuristic, focused on improving the performance of a learning agent; also looks at real-time learning and robotics.
Data mining: integrates theory and heuristics; focuses on the entire process of knowledge discovery, including data cleaning, learning, and the integration and visualization of results.
Machine learning:
Study of algorithms that improve their performance at some task with experience.
Optimize a performance criterion using example data or past experience.
Role of statistics: inference from a sample
Role of computer science: efficient algorithms to solve the optimization problem, and to represent and evaluate the model for inference.
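A minimal sketch of "improving performance with experience": fit a simple least-squares model on growing amounts of data and watch the test error fall. The synthetic data and the model here are illustrative assumptions, not from the notes.

```python
# Minimal sketch: performance improves with experience (more training data).
# The synthetic data and the least-squares model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=1000)
y = 3.0 * X + rng.normal(scale=0.5, size=1000)  # hidden rule: y = 3x + noise

X_test, y_test = X[800:], y[800:]               # held-out test set

for n in (10, 50, 200, 800):                    # growing "experience"
    # Estimate the slope by least squares on the first n training examples.
    slope = np.sum(X[:n] * y[:n]) / np.sum(X[:n] ** 2)
    mse = np.mean((y_test - slope * X_test) ** 2)
    print(f"n={n:4d}  learned slope={slope:.3f}  test MSE={mse:.3f}")
```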
Machine learning is the preferred approach to:
Speech recognition, natural language processing, computer vision, medical outcome
analysis, robot control.
This trend is accelerating because of:
Improved machine learning algorithms; improved data capture, networking, and faster computers; software too complex to write by hand; new sensors/IO devices; demand for self-customization to the user/environment; and the failure of expert systems in the 1980s.
Origins of data mining:
Draws ideas from machine learning/artificial intelligence, pattern recognition, statistics and database systems.
Traditional techniques may be unsuitable due to the enormity of the data, its high dimensionality, and its heterogeneous and distributed nature.
What is data mining?
Extraction of implicit, previously unknown and potentially useful information from data.
Exploration & analysis, by automatic or semi-automatic means, of large quantities of data
in order to discover meaningful patterns.
Knowledge discovery in databases (KDD) is…
The non-trivial process of identifying
implicit (by contrast to explicit)
valid (patterns should be valid on new data)
novel (novelty can be measured by comparing to expected values)
potentially useful (should lead to useful actions)
understandable (to humans)
… patterns in data
Data mining is a step in the KDD process
Steps of a KDD process:
1. Data cleaning: handle missing values, noisy data, inconsistent data
2. Data integration: merge data from multiple data stores
3. Data selection: select the data relevant to the analysis
4. Data transformation: aggregation (daily to weekly sales) or generalization (street to city)
5. Data mining: apply intelligent methods to extract patterns
6. Pattern evaluation: interesting patterns should contradict the user's beliefs or confirm a hypothesis
7. Knowledge presentation: visualization and representation techniques to present the mined ideas
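As a rough sketch, steps 1 to 4 above might look like this in pandas; the table, its column names, and the weekly aggregation are assumptions chosen for illustration.

```python
# Hedged sketch of KDD steps 1-4 on a hypothetical sales table.
# Column names (date, store, item, amount) are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "date":   pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-09", None]),
    "store":  ["A", "A", "B", "B"],
    "item":   [15, 15, 7, 7],
    "amount": [10.0, None, 5.0, 8.0],
})

# 1. Data cleaning: drop rows with missing values (one simple strategy).
clean = raw.dropna()

# 2. Data integration: merge in a second (hypothetical) store-info table.
stores = pd.DataFrame({"store": ["A", "B"], "city": ["Delft", "Leiden"]})
merged = clean.merge(stores, on="store")

# 3. Data selection: keep only the columns relevant to the analysis.
selected = merged[["date", "city", "amount"]]

# 4. Data transformation: aggregate daily sales to weekly sales.
weekly = selected.groupby([pd.Grouper(key="date", freq="W"), "city"])["amount"].sum()
print(weekly)
```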
Good to know…
60 to 80% of the KDD effort goes into preparing the data; only the remainder is the actual mining.
A data mining project always starts with an analysis of the data with traditional query tools:
• 80% of the interesting information can be extracted using SQL (see the sketch after this list):
• how many transactions per month include item number 15?
• show me all the items purchased by Mandy Smith.
• the remaining 20% of hidden information requires more advanced techniques:
• which items are frequently purchased together by my customers?
• how should I classify my customers in order to decide whether future loan applicants will be given a loan or not?
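The two "plain SQL" questions above could be answered as follows; the schema (a purchases table with transaction, customer, item and month columns) is a hypothetical assumption.

```python
# Sketch: the two "80% via SQL" questions against a hypothetical schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE purchases
               (tx_id INTEGER, customer TEXT, item INTEGER, month TEXT)""")
con.executemany("INSERT INTO purchases VALUES (?, ?, ?, ?)", [
    (1, "Mandy Smith", 15, "2024-01"),
    (1, "Mandy Smith", 7,  "2024-01"),
    (2, "John Doe",    15, "2024-01"),
    (3, "Mandy Smith", 15, "2024-02"),
])

# How many transactions per month include item number 15?
for row in con.execute("""SELECT month, COUNT(DISTINCT tx_id)
                          FROM purchases WHERE item = 15 GROUP BY month"""):
    print(row)

# Show me all the items purchased by Mandy Smith.
for row in con.execute("""SELECT DISTINCT item FROM purchases
                          WHERE customer = 'Mandy Smith'"""):
    print(row)
```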
Data mining tasks
Prediction Tasks
• Use some variables to predict unknown or future values of other variables
Description Tasks
• Find human-interpretable patterns that describe the data.
Common data mining tasks
• Classification [Predictive]
• Clustering [Descriptive]
• Association Rule Discovery [Descriptive] (see the sketch after this list)
• Sequential Pattern Discovery [Descriptive]
• Regression [Predictive]
• Deviation Detection [Predictive]
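As a toy sketch of association rule discovery ("which items are frequently purchased together"), one can simply count item pairs across baskets; the baskets below are made-up data and this is a simplified stand-in for a real algorithm such as Apriori.

```python
# Toy frequent-pair count: a minimal stand-in for association rule discovery.
# The baskets are made-up example data.
from itertools import combinations
from collections import Counter

baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "beer"},
    {"bread", "butter", "beer"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Pairs with support >= 2 (appearing together in at least 2 baskets).
for pair, count in pair_counts.items():
    if count >= 2:
        print(pair, count)
```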
Classification of ML algorithms
There are two main types of ML algorithm: supervised learning and unsupervised learning (reinforcement learning, discussed below, is a third setting).
Supervised learning (Dependent techniques)
• Training data includes both the input and the desired results.
• For some examples the correct results (targets) are known and are given as input to the model during the learning process.
• The construction of a proper training, validation and test set is crucial (see the sketch after this list).
• The model has to be able to generalize: give the correct results when new data are given as input, without knowing the target a priori.
• (Logistic) Regression, Neural Networks, SVM
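A brief sketch of the train/validation/test point, assuming synthetic data and scikit-learn (neither is prescribed by the notes): the validation set is used to select the model, the held-out test set to estimate generalization.

```python
# Sketch: build train / validation / test sets and check generalization.
# Synthetic data and the SVM choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Split off a test set first, then carve a validation set out of the remainder.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_tmp, y_tmp, test_size=0.25, random_state=0)  # roughly 60/20/20 overall

model = SVC().fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))    # for model selection
print("test accuracy:      ", model.score(X_test, y_test))  # generalization estimate
```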
Unsupervised learning (Interdependent techniques)
• The model is not provided with the correct results during training.
• Can be used to cluster the input data into classes on the basis of their statistical properties only (see the sketch after this list).
• Examples: clustering, factor analysis
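A minimal sketch of learning from statistical properties alone, here with factor analysis (one of the listed examples); the synthetic data with two hidden factors is an assumption for illustration.

```python
# Sketch: unsupervised learning with no targets, e.g. factor analysis.
# Synthetic data (two latent factors behind five observed variables) is assumed.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))           # two hidden factors
loadings = rng.normal(size=(2, 5))           # how the factors drive observations
X = latent @ loadings + 0.1 * rng.normal(size=(300, 5))

fa = FactorAnalysis(n_components=2).fit(X)   # no labels are ever given
print(fa.components_.round(2))               # recovered factor loadings
```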
Example supervised learning (algorithms)
Logistic regression
Data: bankruptcy of firms over years
• Firm level variables (leverage, liquidity)
• Industry level variables (market growth, recession)
• Marketing related variables (capabilities, assets)
Notes: logistic regression fits binary outcomes such as bankruptcy better than linear regression, and it is a kind of machine learning.
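A hedged sketch of this example: logistic regression on made-up firm-level variables (leverage, liquidity). The data-generating rule and all numbers below are invented for illustration, not the course data.

```python
# Sketch of the bankruptcy example: logistic regression on made-up firm data.
# The variables (leverage, liquidity) follow the notes; the numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
leverage = rng.uniform(0, 1, 200)
liquidity = rng.uniform(0, 1, 200)
# Assumed rule for the toy data: high leverage + low liquidity -> bankruptcy.
bankrupt = (leverage - liquidity + rng.normal(scale=0.2, size=200) > 0.2).astype(int)

X = np.column_stack([leverage, liquidity])
model = LogisticRegression().fit(X, bankrupt)
# Predicted bankruptcy probability for a highly leveraged, illiquid firm:
print(model.predict_proba([[0.9, 0.1]])[0, 1])
```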
The uses of supervised learning
Example: decision trees, tools that create rules (see the sketch after this list)
Prediction of future cases: use the rule to predict the output for future inputs
Knowledge extraction: the rule is easy to understand
Compression: the rule is simpler than the data it explains
Outlier detection: exceptions that are not covered by the rule, e.g., fraud
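A small sketch of a decision tree as a rule-creating tool, using scikit-learn's text export to show the learned rule; the Iris dataset stands in for any labeled data.

```python
# Sketch: a decision tree whose learned rule is easy to read (knowledge extraction).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# The fitted tree printed as human-readable if/else rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```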
Unsupervised learning
Learning “what normally happens”
No output
Clustering: grouping similar instances
Other applications: summarization, association analysis
Example applications
• Customer segmentation in CRM (see the sketch after this list)
• Image compression: colour quantization
• Bioinformatics: learning motifs
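A sketch of the customer segmentation application with k-means clustering; the two features (annual spend, visits per month) and the customer data are illustrative assumptions.

```python
# Sketch: customer segmentation in CRM via k-means clustering.
# The two features (annual spend, visits per month) are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two made-up customer groups: low-spend/low-visit and high-spend/high-visit.
low  = rng.normal(loc=[200, 1],  scale=[50, 0.5], size=(100, 2))
high = rng.normal(loc=[2000, 8], scale=[300, 2],  size=(100, 2))
customers = np.vstack([low, high])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.cluster_centers_.round(1))  # one center per discovered segment
```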
Reinforcement learning
Topics:
• Policies: what actions should an agent take in a particular situation
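A minimal sketch of learning a policy by reinforcement, assuming a made-up 5-state corridor task and tabular Q-learning; the states, rewards, and hyperparameters are all illustrative.

```python
# Minimal sketch: tabular Q-learning of a policy on a toy 5-state corridor.
# States 0..4, actions 0=left / 1=right, reward only at the right end.
import numpy as np

rng = np.random.default_rng(3)
Q = np.zeros((5, 2))               # Q[state, action] value estimates
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != 4:                  # state 4 is terminal
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning update toward reward plus discounted best next value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# The learned policy: the best action in each non-terminal state.
print(["left" if np.argmax(q) == 0 else "right" for q in Q[:4]])
```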