Hosting Studio20 using OBS with Zoom

This week I hosted the three-day BCcampus online event Studio20: Engaging Learners Online. The workshop was designed to push the boundaries of what can be done in a synchronous online environment and to inspire educators to think differently about facilitating online, focusing on three themes: Vision, Voice, and Active Learning.

I had a few roles for the event.

First, we asked the participants to do quite a bit of creating (images, audio) over the three days, and we wanted a central place where people could share their creations. Sounds like the perfect use of a SPLOT. So, I set up a Studio20 SPLOT on the OpenETC to act as a resource collector for people during Studio20 activities and created a short 60-second video on how to SPLOT, co-produced with my 13-year-old son, who helped me with a few finer points of DaVinci Resolve to edit the final product.

It worked perfectly and it didn’t take long for it to begin to fill with contributions from participants.

My main role for the conference, however, was to act as the host and MC for the event and, in keeping with the theme of experimentation, I wanted to try doing something different with Zoom.

Earlier this spring, I spruced up my work-from-home setup, purchasing a green screen, a good mic, and an external webcam that was a bit better quality than my built-in webcam. Seeing some of the work that Ken Bauer was doing with his TechEdTips, I began experimenting with an open source platform called OBS – Open Broadcaster Software – as a way to add some more visual appeal to my weekly course videos (here’s an early example of a weekly course update video I did for my RRU course).

Even though OBS is made for live streaming, you can record MP4 videos with it as well, and that was primarily how I used it in the spring, as I didn’t feel comfortable enough with the tech to use it in a live streamed session. But by the time Studio20 came along, I was feeling comfortable enough to use it as my video source for live streaming.

The advantage of using OBS as your video source should become apparent once you watch the video below, but in a nutshell it gives you much more control over what you share on the screen and, once you get comfortable enough, you can create different scenes and seamlessly switch between them during a live synchronous session. And, as Alan Levine points out in his post on using OBS for his hosting role at the recent OEGlobal conference:

Well, anyone using Zoom can attest to the fumbling around needed to do screen sharing. And to compound things, when you screen share, you end up in some kind of disconnected space because you do not see the full Zoom interface.

OBS gives you much more control over the video and screen sharing experience, and allows you to add a few extras like on-screen timers (Zoom, honestly, why is this simple yet powerful tool not a thing in your platform?).
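In fairness, OBS doesn’t ship a timer either, but its built-in Python scripting makes one fairly easy to bolt on. Here’s a minimal countdown sketch, assuming you’ve added a text source named “Countdown” to a scene; the source name and duration are placeholder assumptions, not something from my actual setup:

```python
# Minimal OBS Python script sketch: update a text source named "Countdown"
# once a second. Load it via Tools > Scripts in OBS. The source name and
# duration are placeholder assumptions.
import obspython as obs

remaining = 10 * 60  # ten minutes, in seconds

def tick():
    global remaining
    if remaining > 0:
        remaining -= 1
    text = "%02d:%02d" % divmod(remaining, 60)
    source = obs.obs_get_source_by_name("Countdown")
    if source is not None:
        settings = obs.obs_data_create()
        obs.obs_data_set_string(settings, "text", text)
        obs.obs_source_update(source, settings)
        obs.obs_data_release(settings)
        obs.obs_source_release(source)

def script_description():
    return "On-screen countdown timer for a text source named Countdown."

def script_load(settings):
    # Fire tick() every 1000 ms for the life of the script.
    obs.timer_add(tick, 1000)
```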

One of the other reasons I wanted to use OBS was that, as a virtual host, one of your roles is to help keep participants oriented and progressing through the event, and you often provide the instructions people need to orient themselves to what is happening. I hoped that a different visual look would provide a small signal to participants that my role in the event was different from that of the other participants: that when this distinct-looking video feed came on the screen, it was to provide an important piece of information about the event; announcing breaks, introducing sessions, that kind of stuff. So, I wanted to look a bit different in order to gather participants’ attention and keep them oriented.

Here’s a behind-the-green-screen glimpse of what I put together in OBS for Studio20.

 

In the video I mention using PowerPoint in Reading mode to give you more control over the slides and prevent it from taking over your whole desktop. This only works for Windows users, but Alan Levine found a similar workaround for Mac users.

To use OBS in Zoom, you need to add a virtual camera plugin to OBS (UPDATE: Tim Owens has posted a comment on Alan’s post noting that the virtual camera source is now available by default for Windows users, so you may not have to install it separately). Like WordPress, OBS is open source, and there is a robust developer and user community creating extensions, plugins, and tutorials. The virtual camera plugin turns OBS into a virtual camera; once you have it installed, OBS appears as a camera source in Zoom.

screenshot of Zoom showing location of OBS Virtual Camera under Zoom camera settings

Once that is set up and you click “Start Virtual Camera” in OBS, whichever scene you are running in OBS will appear in Zoom as your video feed.

When I am running OBS with multiple scenes, I like to run it in Studio mode, which gives me two different scene windows. On the right is the Program, or live, screen; whatever scene is loaded on the Program side is the one that is live in OBS. On the left is a Preview screen where I can preload the next scene I want to switch to. Between the two screens is the Transition button that makes the Preview scene live. This way I can see ahead of time what scene I am about to switch to.

Here is a screenshot of OBS running & the different areas I use. Click for a larger image.

Screenshot of OBS showing different regions of the OBS interface

While this all seems complicated, when you are running an actual event it is no more complicated to switch between scenes than it is to switch between slides in a PowerPoint presentation. Click the pre-built scene and it appears in the Preview. Click Transition and it is live.
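You can even script that preview-then-transition flow if clicking feels like too much during a busy session. Here’s a minimal sketch, assuming the obs-websocket server is enabled in OBS and the community obsws-python client is installed; the connection details and scene name are placeholders:

```python
# Sketch: drive the Studio mode preview/transition flow from Python,
# assuming obs-websocket is enabled (Tools > WebSocket Server Settings)
# and the obsws-python package is installed. Connection details and the
# scene name are placeholders.
import obsws_python as obs

client = obs.ReqClient(host="localhost", port=4455, password="your-password")

# Load the next scene into the Preview window on the left...
client.set_current_preview_scene("Break Timer")

# ...then push it live, exactly like clicking the Transition button.
client.trigger_studio_mode_transition()
```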

In my next post I’ll touch on how I built the scenes in OBS. Alan also has a very good post that walks you through the process.

 

On the Historical Amnesia of Ed Tech #25YearsOfEdTech

Today marks the start of the audio podcast version of 25 Years of Ed Tech, a project I have been working on for the past six months or so. If you are interested in the origin story behind the podcast and the accompanying series Between the Chapters, there is a show coming later this week where I talk about the project with Laura Pasquini and Martin Weller. I’ll link to it here when it’s released. What I am hoping to do with this post is encourage you to contribute your voice to the conversation by blogging or tweeting your own observations about the chapters as we progress, using the hashtag #25YearsOfEdTech.

The historical amnesia of ed tech is a timely title for the introduction of the book, given that it feels like we are experiencing that amnesia again. During the COVID spring pivot, educational technologists with a specialization in online learning struggled to make their experienced voices heard as the move to emergency remote learning happened. A good example was the rush to immediately procure services like virtual proctoring without critically considering the Pandora’s box of privacy issues these platforms unleashed. Or the rush to embrace synchronous platforms and turn all learning into bandwidth-chomping virtual lecture sessions without considering the digital divide inequities this would introduce for learners. Educational technologists have seen these issues before, albeit perhaps not at the scale or pace of the spring, and were well positioned to help those who listened.

Education technology (as a professional field or academic sub-discipline, take your pick) is a relatively new field. You can map the rise of ed tech onto the rise of networked technologies in general. As networked technologies have moved from the fringes of society to the mainstream, they have become more commonplace in teaching and learning practice. Indeed, the field is so new that Martin notes many of the experienced practitioners in the field have arrived in ed tech from other disciplines (pg. 4) and, as a result, we may lack the “shared set of concepts or history” (pg. 4) that acts as the historical grounding for most fields, which forms part of his rationale for writing a book on the history of ed tech (and Martin is quick to caveat that this is “a” and not “the” definitive history).

That said, while we are a relatively new field, you can easily argue that there have been educational technology thinkers going back to the earliest days of education. Whoever first saw the affordances of slate and chalk as a learning tool was thinking like an educational technologist. I sometimes wonder if that person was put into the position of change agent for slate-and-chalk-based educational reform, like their digital ed tech counterparts were decades later. Although I imagine they were lucky enough to avoid getting tagged with job titles like Lead Slate Evangelist and Top Chalk Guru for the Wonderful Chalk Company, and writing puff pieces in One Room Schoolhouse Daily on how their new chalk and slate system increased learning engagement among students by 48% through exciting new chalkification pedagogy.

Which brings me to one of the changes I have witnessed in our field that Martin touches on in his introduction. Many of the ed techs I know who have worked in the field for the past 25 years have seen their role change from being that technology evangelist (I still wear the emotional scars after going on about blogs to a colleague who effectively shut me up with a loud and dismissive “BLAH, BLAH, BLAHGS!”) to being much less enthusiastic and far more critical about the role of technology in education. We may have begun as technology enthusiasts embracing much of the early ethos of the web (OPENNESS AND TRANSPARENCY FTW! THE INTERNET IS A DEMOCRATIZING FORCE FOR GOOD! KNOWLEDGE WILL NOW BE FREE AND ACCESSIBLE TO EVERYONE! DISRUPTION! TED TALKS! KHAN ACADEMY!) but now understand that things are not that simple and… good. That technology is, in the words of Neil Postman, “…both a burden and a blessing; not either-or, but this-and-that.”

My own journey on the ed tech history path reflects this shift from evangelist to critic and maps closely to Martin’s experiences, so the book does resonate with me. But I am also aware that, like Martin, I am a person of a certain type: a white, middle-aged, heterosexual male, educated, employed, and a product of all that privilege has brought with it. And when I first began embracing the web ethos of openness and transparency in the early 2000s, I did so unaware of just how much that privilege allowed me to do so. Today, I’m not quite as evangelical.

That said, there are still reasons to be optimistic about technology’s role in education. These days, my optimism is mostly rooted in small, human-scale ed tech as opposed to massive, at-scale ed tech; technologies that amplify human qualities and place human beings at the centre of the learning process rather than removing or obfuscating humanity through scale. For example, last week a group of students in my RRU course brought a guest speaker into class via Collaborate for a robust Q&A session on privacy, data, and ed tech ethics, something that would have been virtually impossible for students to do 25 years ago. There is still good to be found in ed tech in 2020.

Here’s the book intro, read by Martin.

 

Ed Tech Vox Populi

Here’s what I am working on.

A serialized audio version of Martin Weller’s book 25 Years of Ed Tech featuring volunteer contributions from 25 different people in education, ed tech, and open education.

Martin doesn’t know this, but he planted the seeds of this project with me long before he published the book. Each year, Martin writes a blog post recapping the books he reads, and I have always been in awe of his prodigious annual book consumption: 93+ last year! I am lucky if I get through two.

I asked Martin how he managed to do this. His secret? Audiobooks. So on his recommendation, I decided to give the format a try and discovered three things:

  1. It makes a big difference in how quickly I can get through a book. I am still nowhere near 100 like Martin, but am seeing a big uptick in the amount of reading I am doing.
  2. I really enjoy the format. I mean, the full cast version of American Gods by Neil Gaiman? Epic. Many of you know I used to have a radio career, so I have an affinity for audio, despite getting burned out by doing it for a living. Honestly, the old “do what you love and you’ll never work a day in your life” axiom doesn’t really fly with me. But… well, another post.
  3. It has really expanded the physical spaces I read/listen in. Run on the treadmill? In goes the audiobook. Walk Tanner? In goes an audiobook. No longer is my book reading limited to the five minutes in bed before I fall asleep, which used to be the case.

Thanks to Martin, I’m hooked and a fan of the format now, too.

So, when his book was nearing the final stages of being (openly) published by Athabasca Press, I thought, “Well, for as huge a fan of audiobooks as Martin is, his own book isn’t available as one. That’s a shame.”

Wait a sec. I have a radio background. I’ve done audio work before. I wonder how much work it would be to do an audiobook version of it? Hmmmm…..

Light. Bulb.

I emailed Martin and said, “Hey, I have an idea….” To which he immediately sent me an advance PDF copy of the book so I could test out how much effort might be involved in creating an audio version.

But then I thought, “You know, I am not the only person with audio experience in my network. There are a ton of wonderful educators who do podcasts. I wonder if they might want to participate?”

So I started sending out some exploratory messages to people in my network whom I thought might be interested in taking part. And before I knew it, I had 25 people lined up, each to read a chapter. Given the enthusiastic response, I am sure I could have easily found 25 more (and I am sorry to the dozens of other people on my list to contact; I ran out of chapters).

I was also trying to be mindful of ensuring that we had a diversity of voices participating. And by voices, I mean that in the literal sense of the word, as the overall narrative voice is, of course, Martin’s. It is his book and these are his words, which makes it an odd sort of thing when you are asking people to read someone else’s words in something as intimate as their own voice.

Maha Bali was the first to notice this. Shortly after she test read her chapter, she emailed to tell me that she felt the urge to comment on the contents of her chapter. However, we are adhering closely to the No Derivatives restriction on the book (more on this in a moment), so our readings need to be word-for-word renditions of the original. But it was Maha who first tossed out the idea that maybe there could be some way to have a discussion about each chapter that was separate from the audio version of the book.

And into the project comes Laura

Laura Pasquini was also someone I had asked to participate, knowing that she was an active podcaster. She picked up on Maha’s comments.

“Hey Clint. What do you think about doing a podcast that is kind of like a book club where we could invite people to discuss the chapters? Oh, and by the way, I have a pro account for Transistor and I would be happy to host the podcast and can take care of setting up all the feeds and such for syndication.”

Um….yes please!

So the plan is to release the book as a serialized podcast, with one chapter released every Monday read by a different volunteer narrator, followed by a second episode each Thursday discussing that chapter. I am gathering the book chapters and Laura is producing the supplemental podcast. Both will be pushed out on the same feed. It will soon be available via the podcast tool of your choice, but for now, if you know how to manually set up a podcast subscription you can use this RSS feed.

Open Win

I alluded to the copyright license earlier, but want to hit on it explicitly here, as the open license that Martin and Athabasca Press have released the book under (CC BY-NC-ND) plays an incredibly important role in making a project like this possible. The possibility lies in a nuanced detail of the Creative Commons license that may not be obvious when you see an -ND restriction.

At first blush, many might think that the -ND clause would prevent this type of activity. However, what we are doing in this project is something called format-shifting: moving from print to audio, one format to another. We are being very careful not to alter the actual words in the book, which would start to veer into adaptation/derivative territory.

But format-shifting is allowed under all Creative Commons licenses, even those with an -ND restriction. So while the -ND may seem restrictive, it is still flexible enough to allow us to redo the book in audio form and redistribute it as a weekly podcast without having to ask for permission ahead of time*, which illustrates the value that ANY CC license can bring to a piece of content vs. an All Rights Reserved copyright.

I should also note that the artwork we are using by Bryan Mathers is also CC licensed, as is the background music for the podcast. Open wins all around.

The launch date is November 4th and you can see a full list of all the wonderful people who are participating on the podcast website. Martin has also written about the project.

* side note: even though CC licenses don’t require asking for permission, I do like to keep people in the loop when their stuff is involved and I have been in regular contact with Athabasca Press & Martin and they have been very supportive.

 

Domo Arigato EdTech Roboto

Back in the days P.C. (Pre-COVID) I was starting to do a deeper dive into the world of Mastodon and set up an instance to play around with. One of the things I did was build a bot that automatically posts blog posts from edtech bloggers to Mastodon.

I’m going to try to CogDog walk through the process as much as I can in this post, hazy memories and all, as I did this work 6-9 months ago. But to start, in true CogDog style, here’s the live demo or, if you don’t want to click away just yet, a screenshot below of what the EdTech bot Mastodon account looks like.

Screenshot of a Mastodon social media account showing the home page of the EdTech bot

What you are looking at in the screenshot is the homepage of the EdTech bot, built on a Mastodon instance hosted on Cloudron by the OpenETC. The bot is fed blog posts from an aggregated list of ed tech blogs in my Inoreader account, which I have connected to Mastodon using an IFTTT recipe. The bot watches my Inoreader blog list for new posts from the ed tech bloggers on my list. When it sees a new post, it automatically posts it to Mastodon.

Holy moving parts, Batman. I’m going to do my best to break this down.

But first, some Mastodon context

Now, I do have a Mastodon user account on the mastodon.social server, from which I cross-post content to Twitter as a way to begin stepping away from relying so heavily on Twitter without losing my network (you can read some of my rationale in a post I wrote for OER20). Mastodon.social is the instance of Mastodon maintained by the developer of Mastodon.

But Mastodon is an open-source application, meaning I don’t have to rely on the mastodon.social instance. I can set up my own Mastodon server and run my own Twitter-like service. I can keep it closed to just a few people, or open it up and connect it to other Mastodon instances, creating a wider social network.

I do think that federated models, which can be controlled at the local level while still connecting to other instances, are a good alternative to commercial social media platforms, especially when they are hosted and controlled at the community level.

I get that people are still hesitant to take the plunge – re-creating networks is difficult. So part of my experimenting was to see if I could find ways to make Mastodon a bit more useful for others who may eventually find their way there. One of the value-adds I wanted to attempt was a Mastodon bot that could provide a source of relevant, updated content for educational technologists on Mastodon.

The dark side of bots

Now, before I go further, I should acknowledge that there is a definite dark side to building bots for social media sites. Much of the misinformation and disinformation being spread on social media today comes from bots. Which, in itself, is a pretty good reason to learn how to build them: to understand how they work and how they can be used to spread misinformation.

That said, I also think bots can be useful in a trusted network. In my case, the goal was to provide a useful account for others interested in ed tech to follow, to give them some value on a new platform where their network is not yet present. I thought a bot posting curated blog posts might be a way to bring an ed tech network into the Mastodon world, where new people using the service only have to follow an account to connect with a relevant network via their blog posts. That was the rationale behind my experiment to build a useful bot.

You might be asking yourself, “So, aren’t you worried about others building a bot that may be malicious?” Here is the great thing about running your own software – you control the environment and who you want to let into your instance. Don’t want your instance to be used to build bots? Lock it down. Restrict registration to a trusted group of people, or even just yourself. Run it like you do a WordPress blog if you wish. No one else can create an account on your Mastodon instance, but you can still interact with the rest of the Mastodon world because of the federated aspect of the application. Lock it up…

Screenshot showing nobody can sign up

And only invite those on to your instance whom you trust.

Screenshot showing admin field that allows administrators of mastodon to invite select people in

Now, this does not prevent the creation of nasty bots on other Mastodon instances. But by controlling your own environment you do limit and control bot creation on your own instance, which is a heck of a lot more bot control than you get on open public platforms like Twitter.

OK, on to the steps.

Step 1: Mastodon

So, the first step was setting up my own instance of Mastodon, which I did via the OpenETC. Grant Potter has been experimenting with an application deployment framework called Cloudron, which makes deploying web-based applications fairly straightforward.

One of the applications in Cloudron is Mastodon, so with a click and a subdomain, I was able to get a sandbox instance of Mastodon up, configured, and running on the OpenETC.

Screenshot showing the homepage of the OpenETC test mastodon instance

I spent a few weeks of spare time poking around and figuring out the administration side of the instance, and invited a few people in my network to create test accounts to give it a try. I’m glossing over a lot of the setup and configuration details here, but that is a blog post on its own and I want to get to the bot-building part.

Step 2: Make the bot

Once I had a local instance of Mastodon up and running, I needed to create a new account on the instance to act as the bot. Once the account was created, I logged in to the new bot account and went into the preferences screen, where I customized the bot page to make it clear to people that this was a bot and what it did. Within the account preferences, there is a toggle switch that designates the account as a bot.

screenshot showing the bot toggle

This toggles a notification on the bot profile page that lets others know that this is an automated bot account.

screenshot of bot profile page letting others know this account is a bot

Step 3: Feed the bot with Inoreader

Around this time I was also switching my RSS reader from Feedly to Inoreader after hearing Laura Gibbs rave about it on Twitter (it’s a very good product and I am happy I listened to Laura). One of the features of Inoreader is the ability to generate a single RSS feed from a collection of RSS feeds. It’s a wonderful way to aggregate a lot of information sources and share them as a single feed. In Inoreader I set up a collection of blogs from the 20 or so ed tech bloggers I read on a regular basis and called the collection EdTech Blogs. From there I am able to get a single RSS feed for all those blogs.

animated gif showing how to access a single RSS feed from a collection
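If you want to sanity-check what the bot will see from that combined feed, a few lines of Python with the feedparser library will do it. A minimal sketch; the feed URL is a placeholder, since Inoreader generates a unique one for each collection:

```python
# Peek at the combined Inoreader feed the bot watches. The URL is a
# placeholder; use the feed URL Inoreader generates for your collection.
import feedparser

FEED_URL = "https://www.inoreader.com/stream/user/XXXX/tag/EdTech%20Blogs"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:
    print(entry.title, "by", entry.get("author", "unknown"))
    print(entry.link)
```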

Step 4: Connect the do…um, bot using an IFTTT webhook

OK, the bot account has been created. I have a source feed. Now I just need to connect the source RSS feed to the bot account and let it do its thing.

To do this, I turned to teh interwebs, and they delivered this wonderful blog post that explained, in detail, how to set up a webhook to connect the bot to Inoreader through IFTTT. At a high level (you can read the blog post for the specific details), you log into your bot account on Mastodon and create an application via the Development tab.

Once you create your new application in Mastodon, it will generate an access token that you need in order for IFTTT to send the blog posts from Inoreader to the correct account on your Mastodon instance.

Once you have that token, go over to IFTTT and create a webhook applet. Enter the Inoreader RSS feed you want to monitor, the Mastodon instance URL and access token, and set a few parameters for what you want included in the Mastodon post. In this case, I have the blog title, blog author, and a link back to the original post.

Which results in a Mastodon post from the EdTech bot that looks like this one from JR Dingwall.
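Under the hood, that IFTTT applet boils down to a single POST to Mastodon’s statuses endpoint, authenticated with the bot’s access token. Here’s a minimal sketch of the equivalent call; the instance URL, token, and post details are all placeholders:

```python
# Roughly what the IFTTT webhook sends: one POST to the Mastodon
# statuses API, authenticated with the bot's access token. The instance
# URL, token, and post details below are placeholders.
import requests

INSTANCE = "https://mastodon.example.org"
ACCESS_TOKEN = "your-access-token"

status = "Example Post Title by Example Author\nhttps://example.org/post"

resp = requests.post(
    f"{INSTANCE}/api/v1/statuses",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    data={"status": status},
)
resp.raise_for_status()
```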

There are some things I can’t control with the bot that I wish I could, like the author names, so that people can tell at a glance who wrote the blog post. However, the bot can only post the info it is given, and quite a few people attribute their blog posts to a generic “admin” account, so you end up seeing posts attributed to admin. And if the person creating the blog post does not have an image associated with the post, you end up with a generic-looking page document in the image placeholder. But if they do add an image, the bot picks it up and adds it to the post, like Alan’s example here.

I also wish I had more programming chops to write a script that does what IFTTT does as the middleware piece, so I wouldn’t have to rely on IFTTT. But still, a fun project that will hopefully pay off someday and make it a bit more enticing for some ed techs to check in on their long-dormant Mastodon accounts, knowing that there is a bot in the background providing fresh, relevant content for them.
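For what it’s worth, that middleware piece may not need much code. Here’s a rough sketch combining the two snippets above: poll the combined feed, remember what has already been posted, and toot anything new. Everything here (URLs, token, file name) is a placeholder, and this is untested scaffolding rather than what actually runs behind the bot:

```python
# Sketch of an IFTTT-free middleware loop: fetch the combined RSS feed,
# skip entries we've already posted, and post the rest to Mastodon.
# All URLs, tokens, and file names are placeholders.
import json
import pathlib

import feedparser
import requests

FEED_URL = "https://www.inoreader.com/stream/user/XXXX/tag/EdTech%20Blogs"
INSTANCE = "https://mastodon.example.org"
ACCESS_TOKEN = "your-access-token"
SEEN_FILE = pathlib.Path("seen_entries.json")

def load_seen():
    # Previously posted entry IDs, persisted between runs.
    return set(json.loads(SEEN_FILE.read_text())) if SEEN_FILE.exists() else set()

def save_seen(seen):
    SEEN_FILE.write_text(json.dumps(sorted(seen)))

def main():
    seen = load_seen()
    for entry in feedparser.parse(FEED_URL).entries:
        entry_id = entry.get("id", entry.link)
        if entry_id in seen:
            continue
        status = f'{entry.title} by {entry.get("author", "unknown")}\n{entry.link}'
        requests.post(
            f"{INSTANCE}/api/v1/statuses",
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            data={"status": status},
            timeout=30,
        ).raise_for_status()
        seen.add(entry_id)
    save_seen(seen)

if __name__ == "__main__":
    main()
```

Run something like this from cron every few minutes and it would behave much like the IFTTT applet does.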

If you have a Mastodon account and want to follow the bot, hit it up here, at least for the near future.

 

Could a Canadian MOOC provider have helped higher ed this fall?

Efforts appear to be well underway at most Canadian post-secondary institutions to offer a good bulk of their courses online this fall in response to COVID-19 concerns. Many institutions in Canada signalled their intent early in the pandemic and, from my own personal experience, I know a few who have used the early announcement to hire extra staff and bolster their online technology to accommodate the surge of online learning that will be occurring in the next few weeks.

I don’t know what the summer has been like for institutions preparing for the fall. I can only imagine, based on my own personal interactions and the kinds of conversations I am hearing and seeing in my network on topics like academic burnout, that this summer has been incredibly difficult for those who support teaching, learning, and educational technology at their institutions. I wish I could say things are about to get better, but my own experience tells me that the stretch from mid-August, when instructors begin to return from their summer hiatus (those lucky enough to have one this summer), until the end of September is usually the most insane 6-8 weeks of the year.

The Canadian post-secondary system is about to undergo a massive surge in online learning that will eclipse the spring, and I suspect that over the summer a lot of energy at individual institutions has been poured into the development of high-enrollment first- and second-year online courses. A lot of new “Introduction to…” courses will be coming online this fall, all slight variations of one another. Which, systemically, represents a lot of redundant work.

In the spring, sensing this massive duplication of effort coming, Alex Usher floated an idea around the collaborative development of a core set of shareable high-enrollment first-year courses. Alex’s idea represents a more systematic approach to developing online courses to help alleviate the development and delivery pressure many institutions will be feeling acutely over the next few weeks. It is a laudable idea. Indeed, it is one that has found some purchase within my own organization, where a project is currently underway to identify OER content that could be used to develop Open Courseware (OCW) for high-enrollment courses.

However, as years of experience have taught many of us who have worked on OCW-type projects, the road to OCW is not an easy one. The idea of reshareable course content has driven the OER side of open education for the better part of 20 years and, while there has been a great deal of success in open education since OERs first began, the idea of OCW has had limited results. I won’t get into the myriad technical, cultural, pedagogical, and administrative hurdles that exist (some of which were touched upon by George Veletsianos in his response to Alex’s original post), but they are not insignificant.

This made me wonder about additional ways the Canadian post-secondary system could collaborate and scale up online learning without such massive duplication of effort, and I began to think about MOOCs and MOOC providers. Might a centralized MOOC provider in Canada have given institutions another way to prepare for the fall?

If there were a publicly funded (important, imo) national MOOC provider in Canada, say in the form of an inter-provincial institutional consortium, could it have been in a position to help the wider post-secondary system respond more nimbly to the fall onslaught by making high-enrollment first-year courses available broadly and at scale? Instead of developing course content to be copied and moved from one institutional platform to another – which is still a significant hurdle in 2020 – why not a centralized learning platform, open to thousands of students from any institution, that could scale to handle a massive influx of online learners all looking for those same foundational courses?

Not that MOOCs are the be-all and end-all. But they have shown that they can play a role in the delivery of scalable online learning, which is a need that all Canadian post-secondary institutions have right now and might have for the foreseeable future. Say what you will about MOOCs, but they are built for scale and, this fall, online learning is going to need to scale.

All this Monday morning quarterbacking also got me wondering why a national MOOC provider has yet to emerge in Canada, as they have in the US and the UK. It seems to me that, regardless of the COVID-driven demand for online learning, there is an appetite among Canadian institutions for a MOOC service, judging from the number of publicly funded post-secondary institutions that have partnered with for-profit and non-profit providers in the US.

There are likely a lot of reasons why one has not emerged in the publicly funded landscape, not the least of which is that post-secondary education is a provincial, not a federal, responsibility in Canada, and projects like MOOCs that rely on scale to be successful would likely require a national effort. But perhaps the timing is right to begin asking whether there is a role in the Canadian higher education landscape for a publicly funded MOOC provider, what a Canadian MOOC provider might look like, and why nothing has emerged in the years since MOOCs moved into the mainstream.