August 4, 2017
EdTech Factotum is a weekly newsletter of 3 interesting things I have read, watched or listened to in the world of educational technology, online & blended learning, and open education.
Here are 3 things I read & found interesting this week.
Fiona Hollands, WCET, August 3, 2017
Following up on an EdSurge story I included in last week's newsletter (How Much Do Educators Care About Edtech Efficacy? Less Than You Might Think) comes more about how (un)important empirical research is to people who make EdTech decisions.
Unlike last week's story, which focused on K-12, Hollands' article & research focuses on higher education and how decisions around EdTech are made at various public and private institutions.
Much like the results from the K-12 study, peer-reviewed research journals and empirical studies were rarely used as a source of information by those who make decisions about EdTech deployments at higher ed institutions. Peer-reviewed research articles were cited by only 9% of the respondents as being a source of information about EdTech.
One reason Hollands gives for the low number is the belief among institutions that their use cases and particular contexts are special & different from other institutions, and that research done at one institution is not necessarily generalizable to their local context. Ok, I get that to a point. But that belief in uniqueness certainly did not stop the LMS from becoming the entrenched learning technology across almost all institutions. And I know from my own personal experience that there are institutions that will always follow the lead of other institutions. So, the unique snowflake rationale, while true to a point, generally rings hollow. There are many points of pedagogical commonality among institutions.
9% could also be seen as a damning indictment of the value that EdTech decision makers find in the quality and type of research being done by EdTech researchers, and of the (lack of) value they find in the traditional academic peer-review journal system. Indeed, Hollands herself makes this point when she says she decided to write this article as a blog post rather than another academic journal article because, as an article, the same content just won't be read by the people who should be reading it.
Peer-reviewed academic journals were listed as a source of EdTech information in only 9% of interviews (which is one reason I am writing this blog post instead of revising and resubmitting a journal article I wrote previously).
Hollands also notes that, while some technology choices are made to fit a specific need, it is sometimes the other way around.
There were a number of situations in which an EdTech administrator came across an EdTech product or service that seemed too appealing to pass up. They purchased the product and then engaged faculty members in trying to figure out how to make it useful in the classroom.
Technology is chosen first and then presented to faculty to use – the "garbage can" method, which is a new term for me.
Is there a lack of practical and pragmatic research being done by EdTech researchers? Is there a failure to effectively disseminate that research to the right people? Or is it a failure on the part of decision makers to dig into the research that is available? Likely a bit of all of this. Whatever the reasons, it would do all of us in this field good to pause and reflect on why this is the case – that the very institutions that support and create the research don't actually use the research.
See also EdTech Decision Making in Higher Ed for more on this research.
A critical meta-analysis of 26 research papers on the use of student-facing learning dashboards. Learning dashboards are often the primary user interface where teachers and students view and interact with the learning data collected by online learning systems.
What is refreshing to see is that there has been a great deal of learning theory that has gone into the design of learning dashboards. The table below is a thematic summary of what the researchers found as the underlying learning theories that influenced the design of the learning dashboards (apologies for the quality of the screen grab, which includes pre-print watermark).
However, some of the outcomes of the application of these theories are problematic and may have the opposite effect of what was intended. For example, Collaborative Learning is a fairly dominant theme and often manifests itself in learning dashboards by making all learners aware of each other's dashboard data.
Dashboard developers argue that for effective collaboration, learners need to be aware of their teammates’ learning behaviour, activities and outcomes.
Additionally, Self-Regulated Learning is a popular theory applied to student learning dashboards. The researchers note that:
Social comparison has a stronger presence on this level as it is used to reveal the behaviour of peers as a source of suggestions on how learners could improve.
On the long-term, there is the threat that by constantly being exposed to motivational triggers that rely on social comparison, comparison to peers and “being better than others” becomes the norm in terms of what defines a successful learner.
Instead, the researchers argue that dashboards should focus more on the individual learner and their achievements & goals, and better support differentiated instruction.
Sharon O’Malley, Inside Higher Ed, July 12, 2017
Nice Delphi-method article interviewing 4 experts in online learning to come up with this list of 7 guidelines for effective online learning. They are:
- Make it a group effort
- Focus on active learning
- Chunk the lessons
- Keep group sizes small
- Be present
- Parse your time
- Embrace multimedia assignments
A good starting point for anyone wishing to become a competent online instructor.
I've got summer holidays coming up, so there will not be a newsletter for the next couple of weeks.
Thanks for reading,