Week 1 Web-lectures
The central question of the articles is: how can we apply theory to practice?
Instructional design & evaluation is about theories of learning, theories of instruction, and theories of
evaluation; they all have to align.
Introduction 1.1 Theories – Bert Slof
Learning Pyramid (is more of a Learning PyraMYTH)
There isn’t any scientific literature backing this up, so be a
critical reader and take a critical stance towards such infographics/literature.
Where is the theory (in educational practices)?
(Bertolini et al., XX) (Margaryan et al., 2015)
How can we apply theory to educational practices?
The first article was about instructional design and how to apply it to MOOCs (Margaryan et al., 2015). They
differentiated between cMOOCs and xMOOCs. cMOOCs were more or less the first version of the
MOOC and were usually offered by (commercial) companies. They have certain
characteristics (visualised in the table’s blue columns; you don’t have to remember these by heart).
Compared to that, you see a tendency that universities also try to develop MOOCs built around web lectures
and the like (the newer version): xMOOCs. These rely more on web lectures and
assessment questionnaires, and they are less focused on a broad community of learners who can
more or less voluntarily join a community and then discuss things with each other, earning only some
kind of badge, in contrast to a formal certificate stating whether or not you passed the course. (Bertolini et
al.)
• cMOOCs = the original MOOCs, designed by companies to get outsiders to learn
• xMOOCs = made by universities, less focused on a learning community; traditional learning
online
Margaryan → So what did they do? They said: okay, Merrill is a big name in instructional design, so we
took five of his main principles (problem, activate, demonstrate, apply and integrate).
• Merrill’s first five principles
o Problem-centred
o Activation
o Demonstration
o Application
o Integration
Take a problem (that is the central point of instruction) → you activate prior knowledge → you as a
teacher give a demonstration → then you let the learners apply it → then you let them reflect
(integrate) on how they applied it and whether or not they mastered the competence or skill.
They then said: if we take those five principles plus five other ones, we are going to see
whether or not the selected MOOCs meet the criteria of those instructional design principles.
Conclusion = NOT! Main findings:
• MOOCs are well organised
• (but content-wise) instructional design quality is low
• Findings seem to be comparable for both types of MOOCs (no substantial differences
between the two types)
• Difficult to transfer theory to educational practice
If you try to be a critical reader, what might be something you noticed while reading the article?
It’s quite a straightforward article: we’ve got this, we’ve got that, no difference, that’s it. But I (Bert Slof), for
example, find it quite striking that, in order to be a good MOOC, a course
had to satisfy all ten principles. Why those ten principles? For example, principles 6–10, on
collaborative learning and feedback, were not really explained compared to the principles of
Merrill. That is something you might raise some questions about.
Another question you might raise: we have principles based on problem-based learning, and
we apply those principles to MOOCs. So are those MOOCs really problem-based? Because that is
quite a big assumption behind the Merrill principles. So aren’t you comparing apples and oranges?
The curriculum at Maastricht University is really problem-based learning; you might easily have
obtained a different outcome because of the criteria you chose to assess the instructional quality of a
specific course.
Note for the long-term assignment: the criteria you choose have to match/align in some way with the
course in question before you start assessing it, because otherwise you can already tell beforehand
that the quality is bad.
Other questions were raised about the methodology: is there inter-rater reliability? How did they rate
(code) it, and were there different raters? (High inter-rater reliability means that the two persons who
rated the principles for each course agreed, which gives some quality check on how they applied the
method.) I did not read much on how they rated it. If only one person is coding – well, perhaps it
was Merrill himself, and he might have very strict opinions about applying his principles compared to when
someone else is doing it.
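The article itself says little about this, but inter-rater reliability is usually quantified with a chance-corrected agreement statistic such as Cohen’s kappa. A minimal sketch in Python; the rating data here are hypothetical and purely for illustration, not from the article:

from sklearn.metrics import cohen_kappa_score

# Hypothetical codings: two raters judge whether each of ten courses
# meets a given design principle (1 = met, 0 = not met).
rater_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
rater_b = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

# Cohen's kappa corrects the raw proportion of agreement for the
# agreement that would be expected by chance alone.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")

With a single coder there is nothing to compute such a statistic on, which is exactly the worry raised above.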
Is there any explanation, besides the fact that only the Merrill-based principles were applied, for the
finding that there is no difference between the types of MOOCs? Because the two types do seem to differ.
Discussion
• Do MOOCs have to incorporate all principles?
o Why those specific principles? Do they all need to be incorporated?
• Rationale for including principles 6 – 10
• The principles are for problem-based learning, but are MOOCs problem-based as well?
• Applied coding scheme?
o Selective?
o Dependency?
• Inter-rater reliability
o Was the inter-rater reliability sufficient?
• Explanation why findings seem to be comparable?
o There is no explanation for why the findings on cMOOCs and xMOOCs are similar
o (Margaryan et al., 2015)
The other article:
Also about theories, but about how theories are used when writing scientific articles. In that article
we get some kind of definition of theory. A theory should be able to describe, explain or predict
phenomena (in this case: learning). It can also be prescriptive: how can it affect those processes, so as
to offer principles for the actual design of the learning? What is also important: theories
can be refined and adapted. A theory should be verifiable, but it should also be possible to show it
is untrue (falsifiable).
Theories
• Describe, explain and/or predict phenomena
• Offer guidelines: prescriptive and descriptive
• Can be refined and generalized to the discipline
• Can be verified / falsified
Conclusion of this article: Do we use a lot of theory in scientific articles? No! Only 174 of the total
number of articles used theory explicitly. Then you can see where they used theories: in the theoretical
framework, for gathering the data, in the discussion, and a little in refinement (use decreases in that order). Is
this troublesome? There is certainly room for improvement. They also compared whether it
differs per type of study conducted. If you have a really descriptive study, you see the
grey bar (no theoretical evidence) is quite high. If you look at correlational studies, you see the
bars with the most positive balance for use of theory. But if you look, for example, at comparison
studies, it is less.
Discussion: Bert Slof had a hard time reading the article, especially the distinction between explicit, vague
and none. How did they make that distinction? And how can they make the final claim that educational
technology does not appear to be a mature discipline (an undeveloped field of science)?
And if it is unclear how they rated it, how they distinguished
explicit, vague and none, then how is it possible to have agreement between raters?
If you look at the percentages in Table 2 (only the overall percentages), the use of theory is unclear.
Critiques
• What is the difference between explicit, vague and none? What are the cut-off points?
• How was the inter-rater reliability achieved?
• If knowledge evolves, then what is it based on? Without a solid conceptualisation, everyone
could make up their own definition of what knowledge is
There actually are theories: theories about expertise development, about learning, about instructional
design, and about assessment and evaluation. We are going to talk about these theories and about aligning them.