What AI tools have you been using regularly?


I’m coming back to work after what has felt like a very long period away. It started with a 4-month leave of absence in the spring. When I came back to work, our HR folks informed me that I had a lot of accumulated vacation time I needed to use up (there is likely a connection between my spring leave and the amount of vacation I had not taken over the past year, but perhaps that is better left to a different blog post). So much of my summer was 2 weeks on/2 weeks off, which was great for my mental health, but not so good for generating any kind of rhythm at work.

One area I feel like I haven’t taken a deep enough dive into is the world of AI. While I have experimented with ChatGPT, Copilot and a few other genAI tools, I really haven’t incorporated much into my day-to-day workflow. So I am on the hunt for genuinely useful AI tools that folks are using regularly.

One I am exploring is a tool called Recall, which is a combination summarization tool, knowledge management tool, and question generator to help you remember your saved content. It is essentially a bookmarking tool on steroids, and the ability to generate questions from the content you save in Recall turns it into a potentially powerful study tool for learners.

For example, I recently watched David Wiley’s OTESSA talk on “Why Open Education Will Become Generative AI Education” (which I suspect is very similar to the talk he will be giving next week at the University of Regina).

I then ran the video through Recall, which created this summary of the talk. The summary was ok, although there were some issues. Recall did note David’s criticisms of GenAI, but it failed to connect them to the early criticisms of OER in the same explicit way that David did in his talk.

Recall also failed to pick up on David’s specific example of “think-pair-share” which, for educators, would be a very resonant comparison. As well, it made a critical terminology error. In the talk, David suggests that we are in a transition time from the information age to the generative age. Recall replaced the word “generative” with “generation”, which completely alters the meaning of the phrase. So, not great, and a distinction that would have gone unnoticed had I not actually watched the original talk.

This mistake was echoed in the auto-generated questions Recall created, where the first question asks for the main difference between the “information age” and the “generation age” when it should have been “generative age”. As far as questions go, it would have been a good one had it not gotten the term wrong.

The rest of the auto-generated questions look like this:

Screenshot showing the 8 questions auto-generated by Recall:

1. What is the main difference between the 'information age' and the 'generation age' according to the text?
2. What was the speaker's profession when they first recognized the potential of digital information as a non-rivalrous resource?
3. What is the primary concern regarding the quality of information generated by AI?
4. What is 'capability deprivation' in the context of open educational resources?
5. What is the proposed future direction for open educational resources?
6. What is one of the advantages of using open-weight language models?
7. What is the name of the tool mentioned in the text that provides access to numerous open-weight models?
8. What type of license, inspired by the open-source software movement, was proposed for educational materials?

True to its name, all the questions are fairly low-level recall questions, and some of these are more pertinent than others. Is the speaker’s profession at the time they had their eureka moment really the most relevant thing to ask about? No. I mean, it is maybe interesting that he was a webmaster in the early days of the web, but if I were writing review questions for recall study, that is not something that would be a high priority. Still, asking a question like “what is capability deprivation in the context of OER” is a good question, as it forces the learner to not only recall what two key concepts are (capability deprivation and open educational resources), but also understand something of the relationship between those two concepts. So, a bit deeper level of analysis than recalling someone’s profession.

Still, I couldn’t help but read these questions and wonder whether they were really asking me to recall the most pertinent information from the talk. David provided a lot of historical context, which is important to frame his argument, but that context really shouldn’t be the basis for the recall questions, as it is somewhat ancillary to his main argument. And there were very few questions that actually asked me to recall the major points of his argument, clearly stated in the title of the talk. While there is a question about the future of open educational resources, it feels like there should have been more of those kinds of questions that demonstrate understanding of the major concepts of his argument.

That said, this was my first foray into Recall, and I like the concept of how it should work; I also think there are some interesting pedagogical components to it. Indeed, I think it is exactly the kind of tool that supports the ideas David puts forth in his talk, minus the very important local model aspect (watch the talk). Despite the limitations, I do want to continue experimenting with it for a while.

So, what AI tools are you using on a regular basis – tools that you started experimenting with that have now found a way into your regular workflow? Tools that genuinely scratch an itch for you, and why?

10 Comments

CogDog September 12, 2024 Reply

On the cautious side here, with a blog post pending for months. I use Descript.com for my podcast editing. It has AI features, though I understand from Emily Bender’s podcast that transcription is more machine learning in action than AI. I tried the Descript feature to generate show notes purely as a test.

I have found Elicit, on occasional use, pretty good at pulling from research papers.

My thinking is that ChatGPT et al are great when they almost cannot get it wrong (I got help converting a chemical mixture of ml/L to a large gallon amount) or when it matters not if it is wrong or imagining wildly (creating filler text to test a web form, like Lorem Ipsum; it generated a bio for my dog).
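
(If you want to sanity-check that kind of conversion, it is one line of arithmetic. A quick sketch in Python with made-up numbers, since I did not note the actual rate and batch size of my mixture:)

```python
# Scale a dilution rate given in ml per litre up to a batch sized in US gallons.
# The 5 ml/L rate and the 50 gallon batch are placeholder numbers, not the real mixture.
LITRES_PER_US_GALLON = 3.785411784

rate_ml_per_litre = 5.0   # concentrate per litre of water
batch_gallons = 50.0      # size of the large batch

batch_litres = batch_gallons * LITRES_PER_US_GALLON
concentrate_ml = rate_ml_per_litre * batch_litres

print(f"{batch_gallons:g} gal = {batch_litres:.1f} L -> {concentrate_ml:.0f} ml of concentrate")
```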

I still find the results from AI image generators boring, but also acknowledge that I am not an expert prompter by any stretch. I am sure someone who does this a lot, or is a pro at MidJourney, could blow my results away. I guess, though, I am content being more a representative of what a casual user can produce than trying to be a high-flying guru.

So I reach for Adobe Firefly or Craiyon mainly when I need an image to mock AI.

I know others are digging in deeper and doing interesting things, like running LLMs locally on sourced content. I am ready to be behind the curve for now.

Clint Lalonde September 12, 2024 Reply

Elicit looks promising, thanks! Local seems to be where I should focus some attention, as there are benefits that counteract some of the negatives of LLMs (commercialization, huge resource usage). David mentioned a tool called LM Studio for running local LLMs. I have to admit the talk of running a service locally makes me feel a bit like the early days of the web & configuring a local webserver.
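
(The webserver comparison seems apt: as I understand it, LM Studio serves whatever model you have loaded through an OpenAI-compatible HTTP endpoint on localhost, so talking to it is a single request. A minimal sketch; the default port and the placeholder model name below are assumptions on my part, not details from David’s talk:)

```python
import requests

# LM Studio's local server speaks the OpenAI chat-completions format.
# http://localhost:1234 is its default address; the "model" field is a
# placeholder, as the server typically answers with whichever model is loaded.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [
            {"role": "user", "content": "In one sentence, what makes digital information non-rivalrous?"}
        ],
        "temperature": 0.2,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```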

D'Arcy Norman September 12, 2024 Reply

Elicit is really good – I used it a bit at the tail end of writing my dissertation and it found stuff I was unable to get through Google Scholar etc. We also use Scite.ai (our library is just wrapping up a pilot evaluation of it).

And Soroush Sabbaghan has built some useful AI tools for teaching and research:
https://www.corei.dev (research)
https://www.smartie.dev (course design etc)

D'Arcy Norman September 12, 2024 Reply

I’ve been using Ollama with llama3.1 to run GenAI locally on my laptop (integrating it with Obsidian), and ChatGPT-4o to play around with a more frontier model.

Clint Lalonde September 12, 2024 Reply

I really like the idea of locally run LLMs, and your comment reminded me that I did briefly play with Mozilla’s llamafile project last year. David also mentioned LM Studio in his talk, which is another way to run local LLMs that I want to play with. I’m curious as to whether you have incorporated Ollama into your workflow. Like, is it something you use daily, weekly, or sporadically? What kind of tasks do you give it?

D'Arcy Norman September 12, 2024 Reply

I’m just messing around with it to see what I might use it for, but being able to just write notes during a meeting and then click a toolbar icon to have it pull out action items and set them up as tasks in markdown format? That’s been super useful so far – and it seems surprisingly good at finding things that I need to do.
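
(The core of that meeting-notes trick can be approximated with a single call to Ollama’s local REST API. A sketch, assuming llama3.1 is pulled and Ollama is running on its default port; the prompt wording and sample notes are stand-ins, not D’Arcy’s actual Obsidian setup:)

```python
import requests

MEETING_NOTES = """
Discussed the Recall pilot; Clint to write a follow-up post.
Need to book a room for the October workshop.
D'Arcy to share his Ollama + Obsidian config.
"""

# Ask a locally running Ollama model to pull out action items as Markdown tasks.
# http://localhost:11434 is Ollama's default address; "llama3.1" must already
# be pulled (ollama pull llama3.1). The prompt wording is an assumption.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",
        "prompt": (
            "Extract the action items from these meeting notes and return them "
            "only as a Markdown task list (- [ ] item):\n" + MEETING_NOTES
        ),
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])  # e.g. "- [ ] Write follow-up post on Recall pilot"
```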

Grant September 12, 2024 Reply

I have not seen Recall & you picked a great example to share its affordances and pitfalls... thanks for sharing this – will definitely check it out.

Clint Lalonde September 12, 2024 Reply

It’s not perfect, but I have only given it a single piece of data to work with so far. And, well, still a commercial platform being driven for profit, which I’m not super keen on. The local models seem to be more in line with some open values so I want to explore those a bit more.

