USER MODELS = needed bc every U is different with goals, preferences, needs, and knowledge. It is an internal representation of
user characteristics used by a system as a base for adaptation. Many characteristics possible (age, gender, location, habits,
personality,...) but not all relevant. Built thru USER MODELLING = process of creating and updating UM by deriving characteristics
from user data which can be collected EXPLICITLY OR IMPLICITLY or thru STEREOTYPING. 4 stages of U Modelling = Acquisition,
Inference, Representation, and Updating. KNOWLEDGE INFERENCE = interpretation of events and observations of a U, making sense
of the stored knowledge. Possible thru 3 approaches: Detecting patterns (sys. should respond to recurrent behavior, need to define
which behav. is relevant), Matching user behavior with behavior of others, and Classifying users or content based on user behavior
(stereotyping or modeling user interests). User Modelling Structures = Flat, Hierarchical, Stereotype, DOMAIN OVERLAY (for each
item in a domain, attributes represent the users' knowledge/interest in this item), Logic-based model.
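The domain-overlay idea can be sketched in a few lines; the domain items, update rule, and weights below are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch of a domain-overlay user model (illustrative names/values).
# For each item in the domain, the overlay stores an attribute describing
# the user's estimated interest/knowledge in that item, e.g. in [0, 1].

domain = ["python", "statistics", "databases"]  # hypothetical domain items

def new_overlay(domain):
    """Start with no evidence: zero interest for every domain item."""
    return {item: 0.0 for item in domain}

def observe(overlay, item, weight=0.1):
    """Update the overlay implicitly, e.g. after the user views an item."""
    overlay[item] = min(1.0, overlay[item] + weight)
    return overlay

profile = new_overlay(domain)
observe(profile, "python")
observe(profile, "python")
# profile["python"] is now about 0.2; the other items stay at 0.0
```

The same structure works for knowledge (tutoring systems) or interest (recommenders); only the interpretation of the attribute changes.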
CONTEXT MODELING = collected data needs to be modeled and interpreted in a meaningful way that is machine-readable. A
context model (static = predefined, or dynamic = changes over time) represents a subset of context in the app in the form of attributes. 5 Ws =
WHO (the user, but also the ppl in their environment - social assessment), WHAT (perceiving human activity), WHERE (location, and
sequence of actions), WHEN (time, passage of time, change of behav. thru time), WHY. Context lifecycle = acquisition (Physical,
Virtual, Logical sensors), modeling, reasoning, distribution, update. context awareness = sys. is aware of context and can adapt to it,
3 types = Presentation of info and services to U, Automatic execution of services, Annotation of context info for later retrieval.
Degree of context awareness = User adaptation, Passive context-awareness, and Active context-awareness. Prefiltering = context
info influences the data selection/construction. Postfiltering = ratings predicted using all data without context, then adjusted for
each user based on the context. Contextual Modeling = contextual info used directly as a part of the algo.
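Pre- vs post-filtering can be contrasted on toy data; the rating tuples, context labels, and the flat-boost adjustment rule below are illustrative assumptions:

```python
# Contrast of contextual pre- vs post-filtering on toy data (illustrative).
# Each rating is (user, item, rating, context); context here is just a label.

ratings = [
    ("u1", "pizza", 5, "weekend"),
    ("u1", "salad", 3, "weekday"),
    ("u2", "pizza", 4, "weekend"),
    ("u2", "salad", 4, "weekday"),
]

def mean(xs):
    return sum(xs) / len(xs)

def prefilter_predict(item, context):
    """Pre-filtering: context influences data selection, so keep only the
    ratings given in the target context, then predict from those."""
    relevant = [r for (_, i, r, c) in ratings if i == item and c == context]
    return mean(relevant) if relevant else None

def postfilter_predict(item, context, boost=0.5):
    """Post-filtering: predict from ALL ratings regardless of context, then
    adjust the result for the context. The adjustment rule here (a flat
    boost when the item was ever rated in this context) is a deliberately
    simple stand-in for a learned adjustment."""
    base = mean([r for (_, i, r, _) in ratings if i == item])
    seen = any(i == item and c == context for (_, i, _, c) in ratings)
    return base + boost if seen else base - boost
```

Contextual modeling, the third paradigm, would instead pass the context attribute directly into the prediction algorithm.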
RECOMMENDER SYSTEMS = CONTENT-BASED (model-based, uses item meta-data to recommend), COLLABORATIVE (memory - uses
all raw data, model - creates a dedicated model), HYBRID (3 lvls) and KNOWLEDGE (constraint-based or case-based). Problems and solutions =
cold start, sparseness (not everyone gives ratings - automatic generation, implicit profiling), diversity (serendipity - positive surprise,
content and collaborative as solution, fairness - items that keep getting recommended lead to higher ratings, so reshuffle), Scalability -
(high comp. eff. - use model-based, limit users or items), Privacy, changing user interests (short vs long term interests - context-
aware as solution).
COLLABORATIVE FILTERING = based on ratings expressed by other similar users, user-item matrix. Process = users rate items; 1. Find set S of similar Us which have rated similarly to the U in the past (neighborhood) 2. Generate candidate items to recommend, not yet rated by U but rated in the neighborhood 3. Predict rating of U for the candidate items 4. Select and display the n best items. MEMORY-BASED CF
ALGORITHM = USER-USER SIMILARITY metrics are Mean-Squared Difference, Cosine, or Pearson corr. Similar users can also be found via demographics/stereotypes. Neighborhood of similar users can be determined with Center-based (S has the k most similar users,
predefined #, maybe some are not really similar, shouldn't be too large) or Similarity threshold (S has all users with similarity >
threshold, maybe too few Us above threshold). To combine both: apply the similarity threshold first; if S is then too small, determine a centroid (vector containing the avg. ratings of all items rated in S) and add the users most similar to the centroid. Clustering = process of grouping a
set of objects into classes of similar objects. Prediction of CF can come from the arithmetic mean, weighing based on similarity and
Considering the deviation from avg. ratings for a user. MODEL-BASED CF ALGORITHM = ITEM-ITEM CF = find items that are similar
to item X then use our user's ratings to predict.
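The CF steps above (Pearson similarity over co-rated items, a similarity-threshold neighborhood, and a prediction that weighs each neighbor's deviation from their own average) can be sketched like this; the ratings and names are toy data:

```python
# Sketch of memory-based user-user CF on toy data (illustrative values).
from math import sqrt

ratings = {  # user -> {item: rating}
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 5, "d": 4},
    "carol": {"a": 1, "b": 5, "c": 2, "d": 1},
}

def pearson(u, v):
    """Pearson correlation over the items both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][i] for i in common) / len(common)
    mv = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
    den = sqrt(sum((ratings[u][i] - mu) ** 2 for i in common)) * \
          sqrt(sum((ratings[v][i] - mv) ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item, threshold=0.0):
    """Mean-centered weighted prediction:
    r_hat(u,i) = avg(u) + sum_s sim(u,s)*(r(s,i) - avg(s)) / sum_s |sim(u,s)|
    where s ranges over the similarity-threshold neighborhood."""
    avg_u = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        sim = pearson(user, other)
        if sim <= threshold:  # similarity-threshold neighborhood
            continue
        avg_o = sum(ratings[other].values()) / len(ratings[other])
        num += sim * (ratings[other][item] - avg_o)
        den += abs(sim)
    return avg_u + num / den if den else avg_u
```

Here bob (who rates like alice) pulls the prediction for item "d" above alice's average, while carol (negatively correlated) falls outside the neighborhood and is ignored.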
CONTENT BASED = matching user profile with itemset, recommend items that are similar to the items that user has liked in the
past, items in the item model are represented thru vectors of features. Match what we know about user to what we know about the
item. We want to compute the similarity of unseen item with user profile based on the keyword overlap using Dice coefficient
(normalized by the amount of keywords in the user and in the item). Search queries are boolean expressions and we get all
documents containing the terms. We want to rank the documents to be most useful to the searcher. Score measure how well do the
query and document match but need to consider the frequencies of terms (number of occurrences of a term in the doc.). Each doc.
is represented by a vector which does not consider the order of the words in a doc = bag of words model! Problem = longer
documents have a higher chance of overlapping so we need to normalize the freq. of the term in one document by how often the
term appears in other documents = TF-IDF = measures how important a term is within a document relative to the collection of
documents. TF = how often t appears in d, but the relevance of d does not increase proportionally with TF. IDF = inverse of the proportion of docs in the collection that contain t (e.g. log(N/df)); words unique to a smaller number of documents receive a higher weight while common words have a lower
weight.
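Both scoring ideas above, Dice-coefficient keyword overlap and TF-IDF term weighting, can be sketched as follows; the term lists and documents are illustrative:

```python
# Sketch of Dice overlap and TF-IDF weighting on toy data (illustrative).
from math import log

def dice(profile_terms, item_terms):
    """Dice coefficient: 2*|overlap|, normalized by the keyword counts
    of the user profile and of the item."""
    a, b = set(profile_terms), set(item_terms)
    return 2 * len(a & b) / (len(a) + len(b))

docs = [  # each document as a bag of words (order is ignored)
    ["cat", "sat", "mat"],
    ["cat", "cat", "dog"],
    ["dog", "barks"],
]

def tf(term, doc):
    """Raw term frequency: occurrences of the term in the document."""
    return doc.count(term)

def idf(term, docs):
    """Rare terms get a higher weight, common terms a lower one."""
    df = sum(1 for d in docs if term in d)
    return log(len(docs) / df) if df else 0.0

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)
```

"barks" (one document) ends up weighted higher than "cat" or "dog" (two documents each), matching the intuition that distinctive terms matter more.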
KNOWLEDGE BASED = similarity functions to determine matching degree between query and item. CONSTRAINT-BASED - a set of
rules defined to match user preferences to suitable items. User specifies their initial preferences - user is presented with a matching set -
user can revise requirements. Variables are user features and item features, constraints logically look like: IF user requires A, THEN
proposed item should possess feature B that matches that requirement. Finding a set of suitable items: Rule-based (if U want low
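The constraint idea (IF user requires A, THEN the proposed item should possess feature B) can be sketched as a rule filter; the items, features, and rule below are invented for illustration:

```python
# Sketch of constraint-based matching on toy data (illustrative names).

items = [
    {"name": "cam-basic", "price": "low",  "zoom": "3x"},
    {"name": "cam-pro",   "price": "high", "zoom": "10x"},
]

# IF user requires A, THEN proposed item should possess feature B:
# (user requirement key, value) -> (item feature key, required value)
rules = [
    (("budget", "low"), ("price", "low")),
]

def matching_items(user_reqs, items, rules):
    """Keep the items satisfying every triggered constraint; if the result
    were empty, the user would be asked to revise their requirements."""
    result = []
    for item in items:
        ok = True
        for (req, feat) in rules:
            rkey, rval = req
            if user_reqs.get(rkey) == rval:  # rule is triggered
                fkey, fval = feat
                if item.get(fkey) != fval:
                    ok = False
        if ok:
            result.append(item)
    return result
```

A stated budget of "low" triggers the price constraint and filters the set down; with no triggered rules, every item remains a candidate.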