Pressbooks cloning now with added H5P goodness

Steel Wagstaff at Pressbooks has been telling me for the past few months about a relatively new feature of Pressbooks: when you clone a Pressbooks book, it will now bring over all the H5P activities within that book.

While this feature has been available to Pressbooks network users for a while, at BCcampus we host our own instances of Pressbooks, which run a bit behind the update schedule for Pressbooks network clients, so we had to wait for the feature while we worked our way through updating our Pressbooks network. Thanks to the work of the BCcampus DevOps team and Josie Gray, the update to our networks happened a few weeks ago, and I was finally able to test out the H5P cloning feature on our instance.

Caveat: if you are self-hosting your own Pressbooks network, you need to have the H5P plugin installed, and both the H5P plugin and the Pressbooks plugin need to be up to date. At the time of writing, that means H5P WordPress plugin version 1.15.0 and Pressbooks version 5.13.0, although, as you will see below, during my testing I uncovered a bug in the H5P WordPress plugin that is scheduled to be fixed in its next release.

I tested the cloning feature using the great Body Physics textbook from Open Oregon, which I knew had a lot of H5P interactions in it. Cloning a book is not that difficult (Pressbooks has a good step-by-step guide) and, with the exception of the issue below, worked quite well.

The cloning process itself does take some time: about 10 minutes for this particular book. For testing, I focused specifically on how the H5P content came over (as opposed to the Pressbooks content) and, for the most part, the cloning routine did a very good job of copying all the H5P content included in the book. When I went into the H5P Content section, I saw all the H5P interactions there, indicating that everything came over in the clone.

Screenshot of H5P Content area in Pressbooks showing all cloned H5P activities

If I check out the H5P interactions in the book, I can see that they kept their original context. That is, they appear in the same place in the cloned book as in the original source book. Additionally, when I looked at the embed URL for each H5P content type in the cloned book, I could see that the H5P content now lives within my book, separate from the source book. It has not simply been embedded in my book from the source book; a full copy has been created within my book, so I can modify and customize it without affecting the upstream version.

Screenshot of H5P element showing the URL of where the H5P element is located, confirming that the H5P element has fully copied over

In the future, it might be useful to see the copy as a fork of the original, with links back to the original version, instead of a separate and discrete copy of the element. But that would require a lot of development work, so put it on the wishlist (and, imo, at low priority: while forking is an elegant way to trace derivatives, I think it is something very few users would ever actually use. I'd like to be proven wrong with that assumption someday, but right now I don't think there is the level of engagement with revising OERs that would warrant the development effort).

However, all is not perfect. I noticed that the author attribution in the copied H5P elements is incorrect: I am listed as the author of all of them.

Screenshot showing me attributed as author of all H5P interactions

One of the features I love about H5P is that you can add a CC license to each individual H5P element, which makes the rules around sharing and attributing content much clearer. In my opinion, this is one of the big reasons H5P works so well as an OER platform: the H5P developers have taken care to build licensing in as default functionality. But it is problematic if imported H5P content loses its attribution information and is falsely attributed to the person doing the importing.

At first I thought the attributions might not have been set correctly in the source material and, sure enough, when I went back to the original book I did not see any visible license information where I would expect to see it.

Screenshot of H5P element showing missing license information

So maybe there was no actual attribution information included with the original content, and that is why the author info in the copy reverted to me, the default author of the book?

I tested with another book, with the same result: I was credited as the author of all the imported H5P elements. I contacted Steel at Pressbooks to double-check that this was unexpected behaviour, and we did a quick test together, creating a new book with a new H5P element with proper attribution and importing it into a new Pressbooks book. Again, the H5P element was attributed to the person importing the book and not to the original author.

Steel confirmed that this was unusual behaviour and created an issue in the Pressbooks GitHub repo. After some business analysis on the Pressbooks side, it was discovered that the issue was upstream in the H5P WordPress plugin. A report was filed there, and the developers of the H5P WordPress plugin have developed a fix that will ship in the next release of the plugin. So, with luck, this cloning attribution issue should be fixed soon.

Other than that issue, however, the cloning routine itself is very slick and sets the foundations to make the reuse and adaptation of H5P content in Pressbooks much easier.


When your project goes to the dogs

After something like 15 years of connecting both virtually and face-to-face, I am very happy to finally be working with Alan Levine (@cogdog) on a project. I needed some technical help installing and configuring three open source math-based homework systems that are being evaluated as part of the BCcampus Open Homework System project. Even though these are not WordPress projects, I know there is more to the dog than WordPress, and I approached him to see if he would be able to help out. I am very glad he said yes, as he has brought exactly what I needed to make this part of the project go smoothly.

His first order of business was to configure some web space for the project. I would have expected an email with three IP-based URLs pointing to the different installs (or some other user-unfriendly URLs of the kind we often have to work with on sandbox servers). Instead, Alan set up a landing page and worked with our internal network tech to configure subdomains for each of the installations, so that the faculty testers would have some sane links to follow while testing the three systems.

Screenshot of the "this is not a test" test site for the BCcampus open homework systems

Not only does this provide a fine landing place for testers to have everything in one neat and tidy package, but Alan has gone one step further to turn this landing page into a project status page so that, at a quick glance, everyone involved with the project can see the status of the technical work he is doing in a very open and transparent way.

Screenshot showing example of project status

He has also set up a way for testers to report issues back to him using a simple Google Form. Anyone who hits an issue during the testing period can fill out the form and send him a screenshot of the problem. This collects all the technical issues for each platform in a single handy spreadsheet that will be invaluable at the end, when we are evaluating the testing results.

Along the way, you see little glimpses of the human touch that marks it as an Alan project. Like the disclaimer he included at the bottom of the landing page with an animated homage to Mission Impossible.

Screenshot of the Mission Impossible self-destructing tape from the 60s TV show

I really appreciate little touches like this that add a bit of levity and fun to a project.

I also appreciate that Alan has not hesitated to jump in and make contact with the developers of the three applications we are testing. So far he has discovered at least two minor issues in the code of one of the applications and has been working with the developers to fix them. That is value added. Regardless of whether or not we decide to use that particular piece of software, Alan has made the project better by uncovering these errors and working with the developers to fix them.

Working in open source has its own challenges. One of them is that you need to add value to the community to have a voice in it: build some goodwill with the development community and establish a relationship. This kind of direct communication with developers to help them improve their code goes a long way toward establishing the foundation for long-term relationships, which puts you in good stead with the community, and Alan does it very well.

I am really happy this project has gone to this particular dog.


On the network effect and PLNs

As I work more with Mastodon, I am noticing more and more feature parity with Twitter, and a few features that I wish Twitter had (content warnings, for example). I’ll be writing more about specific features of the platform in coming weeks, part of my master plan to build a good, compelling reason to convince people in my network to begin using the service more (yes, I am looking at you).

I get it. Building networks is hard work. That has always been the Achilles' heel of whatever new service gets introduced. Even Twitter. In the early days, many people came and went, and it took a long time for Twitter to become a service of value to many people, until the network effect began to kick in.

It is not the platform. It is the people on the platform that make the platform useful. It’s classic chicken and egg. Not enough users and you end up with a platform that is….

Tumbleweed blowing across an empty desert landscape

Which is what Mastodon can feel like, especially compared to what you are currently experiencing on Twitter. So there is little incentive for users to participate.

But it doesn't have to be all or nothing. I'm not completely giving up on Twitter just yet, and neither should you. I am going into this with the realization that building networks takes time; this is a long game, not a quick win. Which is why I am working hard to figure out how to do little things like cross-post from Mastodon to Twitter. I want to maintain a presence on Twitter as I slowly cultivate a network on Mastodon. But I also want to work on ways to make Mastodon the default and Twitter the platform I check only once or twice a day. And then once or twice a week. And then once or twice a month.

I am looking at this as a long-term project: a slow weaning from one platform to another. Yeah, it means working in both for a while, but I am okay with the trade-off because I know that building networks takes time. I've been around this block before. It takes effort. It takes time. But most importantly, it takes people. So I am willing to put in the work, as I want to provide some value to others who put in the effort. As Digenti (1999) notes, building PLNs is a reciprocal affair:

To have a truly valuable PLN, investments in time and resources are essential. This requires an extension of the typical transactional business mind-set. If, as a business manager or change agent, we “do the deal” and fail to consider building our PLN, we have lost much of the value of our interactions. This is particularly true in the activities of collaborative learning, where each project we engage in should enhance and broaden the PLN of each member.

Where each project we engage in should enhance and broaden the PLN of each member.

I know better people than me have tried and failed to convince people to start using Mastodon.

And I know that there are many valid reasons to keep with the status quo. But I feel like I have to start somewhere. We have to start somewhere. If we are concerned about the effects of the commercialization and monetization of our personal data, we need to start making efforts to move away from platforms that use us as the product. For me, that means decentralized federated services that are controlled by people and not corporations.

I see my main role in the network right now as trying to provide value-added information to my network, in the hope that someday others may be convinced to begin doing the same. This is how PLNs are built: one person at a time, adding value with intent. Participating. Contributing.

How do you build a PLN? First, it is important to overcome the hesitation around “using” people. If you are building a PLN, you will always be in a reciprocating relationship with the others in the network. Ideally, you should feel that your main job in the network is to provide value-added information to those who can, in turn, increase your learning (Digenti, 1999).

This will be a long process. But then again, relationship building always is.

Source: Digenti, D. (1999). Collaborative Learning: A Core Capability for Organizations in the New Economy. Reflections, 1(2), 45–57.


Trying another way to cross post from Mastodon to Twitter

I wrote last week about using IFTTT to cross-post from Mastodon to Twitter. After playing around with IFTTT for a week, I discovered some limitations, including the inability to cross-post photos from Mastodon to Twitter, leaving a lot of my tweets this week looking like this:

When they should have actually looked like this:

Fortunately, Wayne Mackintosh from the OERu (which has been running its own Mastodon instance as part of the open source OERu tech stack Dave Lane administers) suggested another Mastodon to Twitter cross-posting tool built specifically for this task. My first cross-post test a few minutes ago suggests this new tool may work better than IFTTT, at least as far as images are concerned.

Original Mastodon toot:

Cross posted tweet on Twitter:

And it looks like this application can also gracefully truncate longer Mastodon posts to fit Twitter's shorter length limit (Mastodon gives you 500 characters vs Twitter's 280). So I'm going to test this cross-poster out this week.
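For the curious, "graceful" truncation is simple enough to sketch. This is a minimal illustration of the idea, not the actual code the cross-posting tool uses: trim a 500-character toot to fit in 280 characters, backing up to a word boundary and adding an ellipsis rather than chopping mid-word.

```python
def truncate_for_twitter(text: str, limit: int = 280, ellipsis: str = "…") -> str:
    """Trim a long Mastodon post to fit Twitter's character limit,
    breaking at a word boundary where possible."""
    if len(text) <= limit:
        return text
    # Reserve room for the trailing ellipsis character.
    cut = text[: limit - len(ellipsis)]
    # Back up to the last space so we don't chop a word in half.
    if " " in cut:
        cut = cut.rsplit(" ", 1)[0]
    return cut + ellipsis
```

A real cross-poster also has to account for Twitter's special counting of links and media, so treat this as the bare-bones version of the idea.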


Setting up a test instance of Mastodon for the OpenETC using Cloudron

One of the things I want to blog more about this year is the work we are doing with the OpenETC.

Last week, I wrote about how I am looking to use Mastodon more than Twitter in an attempt to gain more control over how my data is used by corporations. The lovely thing about Mastodon is that it is a federated service: much like email, anyone can set up a Mastodon server and connect with other Mastodon servers, so control of the platform can happen at a local community level. Here is a good overview of how federation works, using Mastodon as an example.

I am currently using a Mastodon account on the Mastodon server, which is maintained and operated by the main developer of Mastodon, Eugen Rochko. I've made the decision to actually pay for the service by signing up for a small monthly Patreon payment to Eugen to help support the development and administration of the platform. But this week I wanted to see what it would take to set up an instance of Mastodon for the OpenETC, with the idea of making the platform available to other educators and edtechs in BC. As it turns out, it wasn't that difficult, thanks to Cloudron, the administrative dashboard Grant has set up with Digital Ocean. Cloudron is an application that lets you quickly launch and configure web applications, and it is one of the tools we are looking to use more within the OpenETC to launch new services.

<cue the geeking out>

Logging into Cloudron, Mastodon is one of the applications that I can one-click install.

Screenshot that shows Cloudron interface

It is labelled as Unstable because it is still fairly new in Cloudron. But unstable is all the more reason to take it for a test spin before actually launching it.

Clicking the button, I am greeted with a prompt asking me what subdomain I want to install Mastodon on. In this case, I am going to use a subdomain that denotes a test instance. After a few minutes of Cloudron installing, I am greeted with a new instance of Mastodon in Cloudron, ready to log in to and administer.

Screenshot of Cloudron admin area showing Mastodon has been installed and is running

From here I need to create a user account on the public-facing Mastodon front end, which looks like this:

Mastodon log in screen

User accounts take a form that is a cross between a Twitter handle and an email address. My account name is a bit longer than a simple Twitter handle, but that is necessary: in a federated system where others can set up Mastodon on their own domains, the domain name is needed in your Mastodon handle so that messages get routed to the correct user at the correct Mastodon instance. Hence why the domain comes after the username, just as it does with an email address.
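The email analogy is quite literal: a server just splits the handle on the `@` to find out which instance to deliver to. Here is a toy sketch of that routing logic; `route_mention` is a hypothetical helper (this is not Mastodon's actual code), and `mastodon.example` / `mastodon.social` are used purely as illustrative domains.

```python
def route_mention(handle: str, local_domain: str = "mastodon.example"):
    """Split a federated handle like 'user@mastodon.social' into
    (username, domain) so a server knows where to deliver a message.
    Hypothetical illustration of federation routing, not Mastodon's code."""
    handle = handle.lstrip("@")  # accept '@user@domain' as well as 'user@domain'
    if "@" in handle:
        user, domain = handle.split("@", 1)
    else:
        # A bare username is assumed to live on the local instance,
        # just as a bare local-part in email stays on the local mail server.
        user, domain = handle, local_domain
    return user, domain
```

If the domain matches the local instance, the message stays home; otherwise it gets federated out, the same way a mail server relays to a remote host.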

The only tricky bit in creating an account was that the verification email got redirected to my spam folder. But once I found it there, setting up the first user account on the instance was easy.

Once the user account is created, I need to elevate that account to site administrator. To do this, I actually need to get into a terminal and run a command.

Screenshot of command line that needs to be run in a terminal to elevate my account from user to administrator

Where <username> is replaced by my username on the system. This is the only time I need to get into the terminal, which can be intimidating. Fortunately, Cloudron has a built-in web terminal interface that makes connecting at the terminal level easy.
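For anyone trying this themselves, the exact command depends on your Mastodon version, so check the docs for your release. On versions that ship the tootctl admin CLI (2.6 and later), promoting a user looks something like the following sketch, run from the Mastodon application directory:

```shell
# Run inside the Mastodon app's shell (e.g. via Cloudron's web terminal).
# Promote an existing account to the admin role; substitute your own
# username for <username>. Flag names can vary between Mastodon releases.
RAILS_ENV=production bin/tootctl accounts modify <username> --role admin
```

After the command runs, log out and back in to Mastodon for the new role to take effect.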

Screenshot of Cloudron application administration area for Mastodon

I click on Terminal and a web-based terminal opens up.

Screenshot of terminal window in Cloudron

I copy and paste the command to elevate my account to a Mastodon administrator account, and log out of the terminal after the command runs.

Then I log back into Mastodon and can see that I now have access to the administrative section of Mastodon under my user preferences.

Screenshot of administrative section of Mastodon under user preferences

The server is now set up and running and available for testing by the OpenETC community.

The one thing I noticed right away was how quickly our test instance connected to other Mastodon instances in the fediverse. It wasn't long before I was seeing follow requests from users on other Mastodon instances, and I could follow my new account from my existing account. No additional administrative configuration was needed on my end to make that connection happen, which is great, as I thought it might be tricky to get connected to users on other instances, but that isn't the case.

Right now I have a few users from the OpenETC community creating accounts and logging in. I am moderating all requests for now so that I can get a better sense of how this thing works, and am limiting the number of users I invite to make accounts and test out the features. I am going to start poking around the admin interface a bit more to see what options I want to set as defaults, and how I can control the flow of information to and from other federated Mastodon instances. I am sure there are ways an administrator can moderate and control federation. But for now, this was a good start to a new project that will hopefully put a powerful Twitter alternative in the hands of OpenETC users in the near future.