Why are Third Party Vendors Such Arses?

The short answer is that they’re not. They’re experiencing culture shock between higher education and capitalism. Their goals and higher education’s are entirely different, and sometimes diametrically opposed. Sometimes they’re not, but I’ll leave that for the Marxists out there to critique.

I’ll outline a few examples, no names except for where I need a name for a tool, because it’s too hard to keep using “middleware” that could mean anything from a database to an API connector to something like IFTTT. I’m not writing this to shame edtech vendors or name call, but if you are a vendor and you do these sorts of things – maybe consider stopping.

Hyper-aggressive sales.

You’ve all seen this, or gotten emails day after day from the same vendor telling you about their great product. Or, you’ve been a teacher, and they call you periodically. Or more frequently. Daily even. I’ve gotten relentless edtech bros emailing me on LinkedIn then at work. By the way, if you do this and it’s part of your company culture, you do know that I mark that stuff as spam, right? All it does is create one of two things for a relationship… you either gain someone who just capitulates to you (but resents you) or you anger someone (who then holds a grudge for longer than an eon). Neither of those are great, but one is a sale. In an extreme case, you might get a cease and desist from a CIO who is tired of your harassment.

Circumventing process.

EdTech workers have definitely been asked for this sort of stuff continually. Move fast and break things is not a good mantra for education, nor for public institutions. If your company wants to do it your way, rather than a standard LTI 1.3 kind of way, and then refuses to budge because your API way (to simply manage single sign-on!) is already built, you’re an ass. If you are ever told, “we don’t just enable every option in LTI 1.3 settings” and you turn around and suggest you need all those data options – you most definitely don’t. If we have a process that we tell you takes months to go through, no, it can’t go quicker. It’s literally my job to ensure the security of the data in the system you’re trying to connect to, so work with me, not against me. It’s not my fault you left it to the last minute before semester and are trying to rush the integration through, literally using teachers as a sacrificial wedge to bypass security, privacy and accessibility. You know what that makes you.

Oh, and when the vendor agreement allows an instructor to sign off for an entire institution? That’s no good.

Data greediness.

Outlined above a little bit, but when you ask for an API integration, you should be able to easily answer “What API calls are you making?” If you have an LTI 1.3 integration, and we ask “what do you use this data for?” you should be able to answer that within minutes of asking. Dancing around that question just raises my suspicions. You might actually need all that data – in 20 years of doing this work, and probably working on 100+ integrations with the LMS and other tools, it’s happened twice. Those two vendors were very quick to respond with what they use each data point for, how long they kept it, and why they needed it for those functions. That’s excellent service. Also, that wasn’t the sales person… so yeah. Oh, and 99% of integrations between the LMS and something else can be done with LTI 1.3. Vendors out there, please use the standards. And get certified by IMS Global/1EdTech. It goes a long way to building your reputation.
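
To put the data-minimization point in concrete terms, here’s a rough sketch of what I mean. The claim URIs come from the 1EdTech LTI 1.3 core spec, but the values and the little helper are hypothetical placeholders, not any vendor’s actual payload. A minimal launch identifies the deployment, the link and the role; anything like name or email is an optional claim a vendor should be able to justify:

```python
# Sketch only: claim URIs are from the 1EdTech LTI 1.3 core spec;
# all values and the helper below are hypothetical placeholders.

LTI_CLAIM = "https://purl.imsglobal.org/spec/lti/claim/"

# A minimal resource link launch: deployment, link, role, opaque user id.
minimal_launch = {
    "iss": "https://lms.example.edu",              # placeholder platform issuer
    "sub": "opaque-user-id-123",                   # pseudonymous user identifier
    LTI_CLAIM + "message_type": "LtiResourceLinkRequest",
    LTI_CLAIM + "version": "1.3.0",
    LTI_CLAIM + "deployment_id": "dep-1",
    LTI_CLAIM + "resource_link": {"id": "link-42"},
    LTI_CLAIM + "roles": [
        "http://purl.imsglobal.org/vocab/lis/v2/membership#Learner"
    ],
}

# Optional identity claims a data-hungry tool might ask the platform to release.
PII_CLAIMS = {"name", "given_name", "family_name", "email"}

def released_pii(launch: dict) -> set:
    """Return which personally identifying claims a launch actually releases."""
    return PII_CLAIMS & launch.keys()

print(released_pii(minimal_launch))  # set() - nothing beyond an opaque id
```

If a vendor can’t tell me which of those optional claims they need and why, that’s exactly the conversation we end up having.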

Third-party malfeasance.

OK, it’s not that bad, but a new trend I’ve started seeing is a vendor using another vendor to manage something (usually data). EdLink is the sort of thing I’m thinking about here. EdLink allows easy connections between two unrelated vendors with no established connection method. So think: connecting McGraw Hill to your Student Information System (not the actual example I’m thinking of, to be clear – we don’t have, or want, to connect McGraw Hill to our SIS). To be honest, this doesn’t bother me as much as some of the other grievances I’ve got, but obfuscating your partnerships and running all your data through a third party that we don’t have an agreement with is definitely something that raises an eyebrow or three. As one starts to think about what-if scenarios (also my job), it makes clarity around who has your data, at what time, and for how long all the more difficult. The service doesn’t bother me, as long as the middle-person in the scenario is an ethical partner of the vendor you’re engaging with. In many cases, you need to have a level of trust in the partner, and if they’ve shown themselves to be less than trustworthy, then well, you’ve got a problem.
Again, I’m sure EdLink is fine, but when a vendor uses EdLink and is presented with that fact, it’s a challenge for security experts, as they not only have to do one analysis, but two. I understand why a vendor might try to frame EdLink as their own service, but it’s undeniable that it isn’t. So just be honest and upfront. You may get past a team that doesn’t prioritize this level of detail, but we are not blind. We will figure it out.

One other big challenge with third-parties acting on behalf of a vendor is that if there’s a problem, you typically have to go through the vendor to access the middle person’s support team to get it rectified. This adds a layer of complexity AND time to something that was likely intended to save time and hassle for the vendor.

(Ex) Twitter

I was an early adopter of Twitter, circa 2007 or so, but didn’t really get going for a couple of years. I really loved it. While Jack Dorsey wasn’t an ideal CEO, and his vision and mine often differed, I didn’t believe his motives were anything but to build a business that he could eventually sell and disappear off into the clouds with millions of dollars. So, I understood why he and his company left hateful speech up (it engages/enrages people), though eventually someone would be heinous enough to get it taken down. Then 2016 happened, and well, America elected a guy who says the racist, sexist, hateful things out loud. In 2022, that Elon Musk fella bought it and I knew it was going to become a worse place. To be honest, I kinda stopped with Twitter (and all social media) during the pandemic. Sure, I’d browse through feeds, and out of all of them, Twitter was still useful – someone would post something at least once a day that would be click-worthy.

Twitter is no longer useful. With different people scattered across a multitude of places (Mastodon for some, Bluesky for others, and any number of other places for many more), educational technology is worse off.

In the face of the rampant racism, led by a man-child who does it all for the lols, I can’t be there any more, and I don’t want my 14-year-old account to represent me there anymore. I left it on the slight chance this wouldn’t happen, but it did.

I haven’t really thought about how I’ll replace the serendipity of Twitter. Of finding someone working on bullshit detection, or commentary about how Google’s ethics team was rapidly fired – or more important things like the Arab Spring uprising and the Hong Kong protests. Or more mundane things like Mandarake’s posts about keshi from their store in Nakano Broadway. If you follow that link, welcome to my new obsession, Onion Fighter.

Now I did the right thing and downloaded my archive. I don’t know if they make sense to republish without the context of the surrounding conversation… or if it’s even feasible. Oh well, end of an era. Hope we find each other, wherever we end up!

Ch-Ch-Changes

I’m trying really hard to write more and this summer has been a challenge because I swore off doing what I normally do, which is too much. I took a semester or two off from my Master’s and got my head down in some work at work. I should probably update y’all on what I am doing because I typically write about that periodically – maybe not often enough?

So, I’m currently in my third year as Lead Learning Technologist – which is kind of a middle management, kind of a feet-on-the-ground type of role. I don’t have budgetary oversight, but I do oversee the LMS team and play a role in guiding that ship through the stormy weather. The LMS team is small (two front-line support analysts and one senior systems administrator who manages the backend processes) but rockstars from top to bottom. My role is to plan out the projects, lend a hand where possible, do some of the long-term visioning, put out some short-term fires, and help clean up messes. Ultimately, I’m not entirely responsible, but I am in many ways entirely responsible. That kind of middle spot is somewhat frustrating, but also rewarding. I get some freedom still, but I still have to run the major decisions up the flagpole. I am also the conduit between central IT and the LMS team, which sits within Teaching and Learning.

It took a while, but about six months ago I let go of the last piece of the old job (which was front-line support and LMS administration, plus more): doing the LMS updates for the university. It felt weird, and I will admit I left that until last because I frankly loved keeping my toe in the water. Perspective is a weird thing: the closer I come to the top (and it’s only been one minor step up), the more I can see that everyone is as frustrated as I am at the middle management problem – you’re always constrained by someone else’s decisions. I’m lucky to have that clarity; I couldn’t imagine how I would function if it were hidden from me.

We’re doing two large projects at the moment: finally launching a course outline library tool (and re-architecting some decisions that weren’t exactly built into the first iteration of the tool’s design) and doing an LMS RFP – which is overdue, because we were in a funded project for several years which precluded our ability to RFP, and then the pandemic happened. So this is the first time since… forever. I’m acting as technical lead on both those projects. Technical lead is a weird thing that probably deserves its own chunk of writing (so wait for it, coming in 2025!).

I’ve been trying my best to help guide some things around AI without getting too involved in the policy, making diversity and equity a component of technological purchasing, and thinking a lot about process. I also spent a good chunk of time on this chapter in this ebook/anthology celebrating the 50th Anniversary of the Teaching and Learning centre at McMaster.

Other than that, it’s just business as usual.

ETEC 512: Applications of Learning Theories to the Analysis of Instructional Settings

This course was good, well designed, facilitated by a helpful, gracious, insightful instructor – and still sapped me of the will to live. I guess the big take-home for me is that I’m not particularly keen to chat about theorists, especially psychologists (who are doing a difficult job in trying to understand the most complex part of the human body).

While I understand the need to fill out this course to ensure that people have some knowledge of how people learn, it missed a lot of the educational theorists (maybe thinking that we’d already know them?) and in our offering, no one took Vygotsky and did a group project on his ZPD, so we missed a huge chunk of what I think a lot of modern course designs take into consideration.

As I’m writing this and reflecting back on the last few courses, I’m feeling like some curmudgeon, complaining about every little thing. While that’s not entirely untrue, I’m prone to that sort of whinging. So here’s a little more balanced attempt at what I think.

The course was structured in a way that gave a sampling of several different viewpoints of how people learn, and while it’s difficult to demand both depth and breadth in a subject, this course and its readings, as designed, tried to do that. I think ultimately it was unsatisfying (and again, a course doesn’t have to be satisfying in the sense of eating a nice meal) because the things that I wanted out of it (discussion about learning design in the context of online learning, different theories of how and why people learn online, how theories impact educational technology) I didn’t get explicitly. Now, it’s pretty easy to see that LMSs replicate the very teacher-centric approach to technology in the classroom. Has there not been any more done to expand this in the last two decades? I cannot fault the course, the facilitator or anything else, and there is in fact nothing stopping me (except time) from diving into Vygotsky and just reading the ever-loving hell out of it. It just felt so jam-packed with theory that it didn’t really dip into the practical side of it. Again, I could’ve done that myself, but when a course is framed with grades, well, you’re going to try to achieve good grades and some of the ancillary learning (and reading) will go by the wayside.

Actually another takeaway from the course is that Vygotsky’s ZPD probably has more applications in an online learning context than any other theory outside of the more modern ones (Connectivism, Rhizomatic Learning) that attempt to describe learning.

ETEC 511: New Foundations of Educational Technology

This was a core course, and to me the framing of the course was slightly confusing. We talked about tools, and the two phenomenological positions that tools might occupy (tools control and condition us; tools are controlled by us). To me that was the key feature of the course, but it was clouded with some distracting approaches to the readings – there was never a key linkage back to the core concept of the course, and while that makes for a challenging course… it also makes for a confusing effort. The assessments never made a clear connection to the theoretical approach – in fact the rubrics had to be consulted to see the connections, which again could be the way the instructor approaches the course, or could be the way the course itself was constructed. I liked the use of other tools; however, I really, really wish this program would be truly student-centered and allow US to select the tools we want to use for communication. There’s a lot of hand-waving about being student focused (at one point, the instructor made a point of saying “the LMS is terrible for teaching,” to which I wanted to respond: the LMS isn’t doing the teaching… it’s the place we the students are looking to keep track of stuff). We used Slack, which I have a personal set of problems with (the threading of the chat is limited at best; search is abysmal; I really have a problem with the way sub-channels? group conversations? are managed), and which seemed to be more of the instructor’s choice than a collaborative effort.

And if one were concerned about student data sitting in a private, for-profit, US-hosted system like Slack, using Mattermost – which is available free to any UBC user – makes a ton of sense…. but alas.

Technical choices aside (although in an educational technology course I don’t think you can put them aside), this course was disjointed and the assessments were all over the place – the individual assignments worth 5% apiece; some were written, some required media elements to be designed. There was no equivalency in the time spent between them. I can write a page in about a couple of hours of focused work. I can create a video in about a day. In the end, I didn’t really want to engage with any of them, as they were all duplicating effort based on the weekly readings and discussions we had already had on the topics. While I did find the variety of topics engaging, some of the assignments made some gross errors of assumption. Like assuming I can’t control the use of my phone. Or that I don’t use my technology critically. I’ve been working in technology related fields since the late 90’s. I was early in on designing web pages. I saw some of the first JavaScript written to alter people’s behaviour on webpages (this was in 1997 – advertising designed to draw people’s mouse pointers to elements; think image maps with gravity wells to slow mouse speed and subtly draw the pointer to hover over objects with pop-up descriptions). I taught a course on searching the web as Google moved to a semantic engine for analyzing search results, shifting their focus from quality search to engagement on search and selling advertising. The majority of the general populace may not be attentive to attention; but the people in a Master’s level program about technology should be paying attention. Professionals in the field damn well better be. I’m sure that particular assignment about attention could be framed more neutrally.

I realize the design has to hit both audiences for these courses – teachers new to the field and educational professionals who are seeking a post-graduate level degree (like myself).

I was shocked that there were no readings whatsoever about danah boyd’s work, or Ursula Franklin or Neil Postman (beyond the one article), or, well, any of the history of the Internet. I’m lucky to have lived through it, but if you’re talking about the foundations of educational technology, you’re talking about the foundations of the world wide web. If you’re talking about the foundations of educational technology beyond the basic roots of web-based instruction, you really need to start talking about Audrey Watters’ most recent book, Teaching Machines. If you’re talking about online communities, you need to include Howard Rheingold’s works. I guess the foundations course I’d design is wildly different from what UBC has done. That’s fine, and probably the perspective I need to hear, rather than the perspective I’d want to hear. Most of that work was done outside academia. It’s not lost on me that most of the educational technology work is historically at-risk, as it’s been published on the open web and not in academic journals.

Outside of that, I really, really loved the first thing we did in the course, which was take time to think about settler relationships with Indigenous populations through text analysis. It was a thoughtful exercise and I’m constantly thinking about how I can fold that into our work as educational technologists.

How To Use D2L Awards Across Multiple Courses

Parts of this post were drafted in 2018. I’ve left them as-is and finished the blog post because I think it’s kind of crucial to understand that things have (and haven’t) changed a lot in this space. The D2L Awards tool is in desperate need of improvements, which is related to the limitations of the underlying release conditions logic and the way courses are positioned in the LMS. Those are structural issues, and not D2L’s fault at all. In fact, it’s an education problem. Anyways, to the post:

I’ve been working with D2L Awards since they became available at my institution, around the Fall of 2014. I’d spent some time prior to that adding my two cents on the Open Badges Community calls, and tried to add the higher education perspective where I could (around the badges specification and higher education policy). One of the great values of badges is that they are very transportable – they essentially belong to the earner, with some caveats (like the badge hasn’t been revoked, or expired). To me this makes a lot of sense when you think of documenting skill development and learning, which are the areas I’ve been working in over the last few years.

So when D2L announced that the Awards tool would issue Digital Badges, I was very, very happy. Well, truth be told, I wasn’t happy at all, because I had been working on installing our own badge issuing server and integrating it with D2L, so all of that work (the previous year or so) was down the tubes. But the upside was that it was an integrated experience and worked out of the box (so to speak). One of the first challenges was to get global badges – the sort of thing that might transcend a course – to work. The theory was that if you earned several badges from several courses (in D2L admin-speak, course offerings), you’d need to have some sort of way to know that. The somewhat simple approach is to use a higher level organization unit to manage that for you.

Typically, in an LMS structure you have the top level, or Organization level; underneath that, some form of Department or Faculty; and underneath that, Courses. D2L Brightspace also has these things called Templates above the Courses; other LMSs might have those structures, maybe not. Much of that structure is determined by your institution’s Student Information System (typically Banner or PeopleSoft, but it may be renamed to suit the institution’s whims).

[Image: An example organization structure with various levels of groups]

To allow badges to be issued as a result of other courses, or as part of an unofficial grouping (think HR-related training), you will have to create a shadow structure that connects the Courses and Templates to a shadow Department. You could use the existing Departments to do this as well, but it’s generally safer to do this in a shadow organization rather than the real one. There’s little danger of doing anything damaging in this space, but you will need to be in and out of here doing enrollments. Some SIS systems already have enrollments at Department levels (we don’t), so you definitely don’t want to mess with what your SIS does. If your SIS doesn’t do enrollments at the higher levels (excepting Organization and Course levels), then you could use existing structures, but you then risk breaking things if the SIS changes Departments or enrollments shift.
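
If you’d rather script the shadow structure than click through the admin tools, something along these lines should work against the Brightspace (Valence) API. Treat it as a sketch only: the host, API version, token, org unit type ID and org unit IDs below are all placeholders for your own instance, and the routes should be confirmed against the current Valence documentation before running anything.

```python
import requests

# Sketch only: every value below is a placeholder for your own Brightspace instance.
BASE = "https://lms.example.edu"       # your Brightspace host
LP_VERSION = "1.43"                    # Learning Platform API version (check yours)
TOKEN = "replace-me"                   # OAuth 2.0 bearer token for an admin-scoped app
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

DEPARTMENT_TYPE_ID = 102               # org unit type id for "Department" (instance-specific)
ORGANIZATION_ID = 6606                 # root Organization org unit id (instance-specific)

def create_shadow_department(name: str, code: str):
    """Create a custom org unit to act as the shadow Department."""
    body = {
        "Type": DEPARTMENT_TYPE_ID,
        "Name": name,
        "Code": code,
        "Parents": [ORGANIZATION_ID],
    }
    r = requests.post(f"{BASE}/d2l/api/lp/{LP_VERSION}/orgstructure/",
                      headers=HEADERS, json=body)
    r.raise_for_status()
    return r.json()["Identifier"]

def attach_course(shadow_dept_id, course_offering_id) -> None:
    """Make an existing course offering a child of the shadow Department."""
    r = requests.post(
        f"{BASE}/d2l/api/lp/{LP_VERSION}/orgstructure/{shadow_dept_id}/children/",
        headers=HEADERS, json=course_offering_id)
    r.raise_for_status()

# Example: one shadow Department collecting the courses that should feed a badge.
shadow_id = create_shadow_department("Shadow - HR Training", "SHADOW_HR")
for offering_id in (12345, 12346, 12347):  # hypothetical course offering ids
    attach_course(shadow_id, offering_id)
```

The same two calls (create an org unit, attach children) are all you need to build out the rest of a shadow hierarchy.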

[Image: An example organization structure with various levels of groups, with a shadow structure to facilitate outcome achievement at levels above the course]

The other benefit of a shadow structure is that you could combine things in unofficial ways. For instance, you could connect all the Community Education courses together across the institution, or connect experiential learning, or co-op… you get the idea.

Essentially, you don’t use the Awards tool to create the relationship; you use the Competencies tool, with the Award as the outcome of that relationship. The competency feeds up the hierarchy of organizational units, and the activity that satisfies it (in whole or in part) can then trigger awarding a Badge or Certificate at a higher level. The student would get a badge from an organization they may or may not see (depending on the D2L permissions at that level, if any).
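
Here’s a toy sketch of that chain – just a conceptual model of the rollup logic, not anything resembling D2L’s internals, and all the names are made up:

```python
# Toy model of the competency -> award rollup; a mental model only, not D2L internals.
from dataclasses import dataclass, field

@dataclass
class Competency:
    name: str
    org_unit: str                       # lives at the shadow Department level
    required_activities: set = field(default_factory=set)
    completed: set = field(default_factory=set)

    def record(self, activity: str) -> None:
        """An activity completed in a course offering rolls up to the competency."""
        if activity in self.required_activities:
            self.completed.add(activity)

    @property
    def achieved(self) -> bool:
        return self.completed >= self.required_activities

@dataclass
class Award:
    name: str
    org_unit: str
    competency: Competency

    @property
    def earned(self) -> bool:
        # Release condition: the badge/certificate issues once its competency is achieved.
        return self.competency.achieved

# Activities live in individual course offerings; the competency and the award
# both live one level up, in the shadow Department.
comp = Competency("Completed all HR modules", "Shadow - HR Training",
                  required_activities={"COURSE-A quiz", "COURSE-B quiz"})
badge = Award("HR Essentials Badge", "Shadow - HR Training", comp)

comp.record("COURSE-A quiz")
print(badge.earned)   # False - only one course's activity is done
comp.record("COURSE-B quiz")
print(badge.earned)   # True - the award releases at the Department level
```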

ETEC 511 – Project Retrospective

I feel that this was an interesting project to be a part of, and coming into it late eliminated some of my ability to influence the direction of the group, but I think my role became that of an early critic – asking pointed questions so that I could understand why decisions were made – and then helping support them. One of the things I often did in meetings was try to help tie our decisions back to course readings, course content, and thinking about the project less as a project and more as an assignment. I think I also served as a bit of a wrangler of ideas, trying to limit scope creep.

One other thing I did, in my section (accessibility legislation), was really try to refine the information into questions and answers, so that particular section became conversational. I recall talking about making our language plain, simple and understandable, and how we could use that as an engagement strategy, in turn making it more usable – which was, I think, my most important contribution to the process. I will admit my passenger-ness to the project, and I did not feel 100% like it was my project (again, coming in late to it made it a bit of a challenge), so I always saw my role as a bit of a servant to the whole.

From a Nielsen (2003) perspective, Twine was at the same time easy to use and quick to learn the basics of. The simple design of the tool (questions and answers with expositional text) was easy to construct in Twine. The complexity of the subject matter, however, made Twine a difficult choice for managing how each component piece linked. Had we not limited the choices we discussed, we absolutely would have been tangled in a web of, well, Twine. I think that the affordances of Twine’s output, especially in the way we designed the tool, kept the complexity down, which in turn allowed the tool to ultimately be more usable.

I don’t know if Twine ended up being the best choice, as we had to bend it quite a lot to make it do what we wanted it to do. The tool ultimately configured us as designers, as we were pretty locked into Twine. While it ended up perfectly fine, it did limit us in some ways; given our technical understanding of what was possible, and the time it would have taken to push Twine beyond what it is built for, I wonder if this would have been simpler to build using HTML, and whether that would have delivered a more accessible tool in the end.

References

Nielsen, J. (2003). Usability 101: Introduction to usability. Useit. 

ETEC 511: IP#8 – Attention

This assignment includes a requirement to do an attentional record. Here’s that:

| Time | Activity | Distractions |
|------|----------|--------------|
| 9-10 | Work tasks, email, tickets | Moving from task to task – completed chunks then moved to a different task |
| 10-11 | Work meeting | Cataloging was a distraction |
| 11-12 | 11-11:30 lunch; 11:30-12 work tasks, email, MS Teams | Moving from task to task – completed chunks then moved to a different task |
| 12-1 | Work meeting | Discussion raised some questions that I searched for. MS Teams message came in from another team member, answered it |
| 1-2 | Work meeting (different) | |
| 2-3 | Video Interview (2-6) | Needed to be present during this, phone turned off, no distractions |
| 3-4 | | |
| 4-5 | | |
| 5-6 | | |
| 6-7 | Dinner prep and dinner, finish work | No phone/computer during dinner prep, work required focus and attention |
| 7-8 | TV (listen to TV, not actively engaged) | Doodle on phone, check email, play games |
| 8-9 | TV (listen to TV, not actively engaged) | Doodle on phone, check email, play games |

Attentional Record

This is a bit of a curious exercise, as it wants you to turn this data into some visual, but all my visual storytelling skills tell me that it’s not going to add any additional information; abstracting this information one step further actually obfuscates it and makes analysis harder. So I’m not going to do that.

This exercise was interesting for me, as it was more distracting than my normal process. Typically, I am not a distracted person. I quite often choose not to look at my phone, or check email, or get distracted from what I am doing. If I am “distracted,” chances are I’m bored (which is also how I relax – just not paying attention to anything, and not being actively engaged). Setting out an activity where I have to pay attention to my attention – well, that’s going to be a recipe for doubling down on my already disciplined approach to work, tasks and life. So, I don’t know that I have some great revelatory technique to deal with distractions – I’m not some ascetic monk, I just believe that being in the moment and present is important. In many ways, that’s what Citton (2017) is talking about in the Joint Attention section of the book – “we are always attentive in a particular situation” (p. 83). In educational situations, attention varies depending on the student and their role – as if attention is social and co-constructed. However, there are some social norms that drive attention (albeit younger students might adhere to this better than middle school students – who are more likely to be testing social norms). While I don’t necessarily agree that attention is co-constructed, it is (and our current social media world confirms this) most certainly socially constructed. Peer groups can “pay attention” to certain musical acts, and ensuring you know those musical acts secures your social status. Those relationships are social. Families and friends are often the most important people driving attention, and in my chart, the times when I’m with friends and family are also the ones where my attention is most undivided.

That sounds so high and mighty to write… but it’s true. The attention that I pay has the most value when I value the people around me. Thinking beyond this particular chart, but into the territory when I do use my phone for entertainment – it’s in transit, between places, and alone.

I will also say that Citton missing out on Neil Postman’s critiques of mass media for entertainment (and thus attention) is a gap that I paid attention to after reading the chapter.

ETEC 511: Tipping Point: A Critical Case Study of the LMS replacing websites

Dating back to the early use of the world wide web to produce websites, academic institutions provided faculty with space and access to servers. Naturally, this early exploration of the space included a number of uses – personal spaces, spaces dedicated to research, community and academic teaching. The archive.org McMaster University archive goes back to 1997; however, I know that website use for teaching purposes existed prior to 1997 and went back as far as 1994 (Cuneo et al, 2000). Outside of a brief resurgence of website use in 2010 (while the on-premises Blackboard LMS installation was besieged with technical issues), the decline has continued to the present day.

One challenge in providing proof of this decline is that there is no comprehensive way to catalog and decipher each department’s and faculty’s approach to personal webhosting for academic faculty, and no standard approach to naming and hosting. To this end, I did encounter a deep archive of links for the Faculty of Humanities websites.

In a sampling of pages that I could find archived for the Languages and Linguistics department within the Faculty of Humanities (out of the 26 courses listed) (Department of Modern Languages and Linguistics, 2001), there were 4 courses with individual webpages, and a further 16 courses with course outlines linked from the page. Some of the courses with web presences beyond a course outline were last archived in 2005; however, the last successful archiving of an actual active course was 2003 (Solo Testo, 2003). The later archives were of a redirect to the Faculty of Humanities homepage, likely as a result of a restructuring of the website.

By 2004, as the proliferation of WebCT and First Class (known locally as Learn Link) at McMaster became more prevalent, use of websites for teaching started to decline. It makes some sense that this trend starts to appear at this point – historically, the Learning Technologies Resource Centre (LTRC) began to support central LMS tools around this time, and nearly a decade after the technology started to be used at McMaster, there is likely some combination of the maturity of the tools and a convergence as commercial software began to become commonplace at higher education institutions. At McMaster, while there was some standardization of tools provided by the university to faculty at no additional charge, there was also no mandate to use the centrally supported tools. In fact, the only mention of LMS-like systems – Learn Link (First Class) and WebCT – in the central IT report for 2005 was a note on both systems’ server requirements as part of the LTRC inventory (Barret, 2005, p. 187). Meanwhile, there was no mention of hosting websites as a service across any of the independent or central IT units at the time. Further to this idea, the archives I could find on the Wayback Machine for McMaster all seem to indicate websites were not to be used for teaching; rather, it seems the commonplace use of websites was for communication. It is reasonable to say website hosting for the purposes of teaching was far less used than the central LMSs during this first decade of the World Wide Web. If we look back at one of the first major commercial LMSs, WebCT, it was designed initially by Murray Goldberg in the mid-90s as a way to supplement his in-person lectures (Chan, 2005). Most web-based LMS systems replicate the functionality of WebCT – certainly most of the ones used in higher education (Blackboard, D2L’s Brightspace, Canvas and Moodle) – and so a system of software based on the experiences and design of an instructor who was looking to improve grades in courses rooted in a lecture-based approach has become the de facto location for teaching to happen in an online environment.

While websites as course sites were not a priority of the university, it is likely that the impetus for this displacement in teaching tools came from several different areas, in addition to the lack of support from the university, converging slowly from the mid-2000s to the present day:

  1. Instructors were increasingly becoming sessional, and the labour to build one’s own site is not compensated. The LMS is provided (although not necessarily the simplest for all).
  2. Using an LMS is easier than managing your own website for teaching as much of the infrastructure is provided by someone else.
  3. Some instructors who are interested in using the LMS for teaching were early adopters and acted as champions for the tool.
  4. Instructors who moved to McMaster from institutions that mandated LMS use brought along that practice, expecting McMaster to also have mandated the use of the LMS.
  5. Students who experienced LMS use in secondary school have now graduated into higher education and expected the use of an LMS to provide materials and activities.
  6. Faculties and departments each have suggested over the years tool use based on their own experiences and desire to introduce efficiencies which the LMS can provide.
  7. Educational institutions were interested in reducing paper in the name of sustainability.
  8. Vendors of LMS products convinced institutions of the efficiency, redundancy, and security of the LMS.

While each of those individual claims can be discussed at length, this transition has definitely re-entrenched the transmission model of teaching and learning. The LMS reinforces the transmission model by having roles that allow certain members in the system more or less power to control other users. For instance, an instructor role could be designed to allow for posting of materials, whereas a student role could view those materials. In some LMS systems these are determined by the local system administrators (often with the help of the vendor), while others have these archetypes predefined. While a vendor might argue it is up to the institution to configure the system how they wish, the archetypal uses of the system are designed around fundamental assumptions of what an instructor might be. Very rarely are LMS structures built with features that promote constructive sense-making, and as such, they are often designed for behaviourist approaches to teaching. This is an example of what Woolgar (1990) might describe as configuring the user, especially when considering that the history of the development of the LMS comes from a limited perspective and for a limited purpose.

While this shift from a more open website to a closed, more secure but also more deterministic LMS does not necessarily negatively impact teaching per se, it does make it more challenging to use an LMS with certain pedagogical approaches. If, for instance, you are teaching in a constructivist manner, and at every turn it is difficult and time-consuming to make the LMS do what you want to do pedagogically, and the education system is rewarding certain choices you make with your labour, it becomes even more challenging to teach in ways that might be more constructivist, and you might come to believe that it is impossible to teach in that way through the LMS. It is not impossible, but it is more difficult – perhaps impractical, or more time-consuming, to the point of making the LMS itself unusable for that approach. In those scenarios, a stand-alone website would probably be a superior choice.

In our class readings, there is a similar issue with labour being reduced by choice being taken away – by making the undesirable behaviour more costly for the worker. This was raised in Crawford (2021) with Amazon workers being restricted from unauthorized actions, one example of an unauthorized action being taking too many breaks (p. 53). By the design (and in many ways the usability) of the LMS rewarding efficiency and punishing inefficiency, there is a pervasive enforcement of “traditional” teaching methods. Individually designed and constructed websites have no preconceived notions of what teaching is, and in fact, if the teacher is the designer of the site themselves, it can accurately reflect their pedagogy in as much as they can use HTML, CSS and JavaScript to bring that reflection to life. While the course’s content is clearly about digital labour, one of the underlying currents of that module is how digital labour is often done not by machines but by humans, in often inhumane conditions. In much the same way humans are doing the mechanical turk work or ghost work (Gray and Suri, 2019), the work of building courses in LMS systems is often not done by the instructor themselves, but by unseen labour. It might be offloaded to a teaching assistant, or an instructional designer, or someone hired from outside the institution delivering the course to design a course virtually on demand. Ghost work typically involves moving the production of labour to countries that have lower labour costs than Western countries; this does happen in educational institutions, where labour shifts from full-time, tenured and senior professors to teaching assistants and lower paid members of the department or faculty. In an institution, it is entirely possible for some teachers to teach courses designed by someone else and built by yet another different person. This undoubtedly happened with the people who were using websites to teach as well – getting a TA or instructional designer (or learning technologist) to build a website that the teacher could use to deliver content. Even in those cases, though, more design choices and more personalization would require at least some input from the teacher, and thus the system used to teach would still be closely shaped by the teacher. With increasing frequency, the labour of putting courses into the LMS has become the responsibility of teaching assistants. In 2021, McMaster and CUPE (the union for Teaching Assistants) agreed to paid training on various teaching related topics, including a module on LMS use (An Introduction to Avenue to Learn) (McMaster University, 2021, p. 2). While this labour previously might have gone uncompensated, and the teaching assistant may have had to learn this on their own or attend training, clearly the recognition from the university is that this activity is no longer in the realm of teachers alone.

Another aspect of LMS use is the ease of recycling course materials from semester to semester. While this is not a practice of all teachers using LMSs, there are sessional instructors who are not compensated for the development of courses. If a fellow instructor or teacher chooses to share their content, and intellectual property, then the recipient is lucky. If the sessional instructor does not have access to content from a previously taught course, they are often designing the course as it goes, which does not provide an ideal learning experience. This is essentially a labour issue. As the university lowers the number of full-time and secure teaching jobs, it also creates a precarious market for labour, with faculty becoming sessional. If one can separate the course from the individual, and keep the course in a centrally maintained place, then that labour can be passed on without regard for the individual who designed it. I am not saying that was the explicit goal of McMaster in instituting the LMS, but it is possible to separate the labour of creating a course from the delivery of a course using an LMS, which makes it easier to retain that labour beyond the employment of the developer.

While the shift from open websites to a more closed LMS also mirrors the academic labour changes over the last two decades, in some ways both developments have been symbiotic – allowing teaching to become more prescriptive, and the culture of teaching to be more limited.

References

Barret, D. (2005). University Technology Strategy. McMaster University. https://www.mcmaster.ca/cio/UTSMar05.pdf

Chan, L. (2005). WebCT revolutionized e-learning. UBC News. https://news.ubc.ca/2005/07/07/archive-ubcreports-2005-05jul07-webct/

Crawford, K. (2021). Atlas of AI. Yale University Press.

Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Houghton Mifflin Harcourt.

McMaster University (2001). Department of Modern Languages and Linguistics @ McMaster University. Wayback Machine. https://web.archive.org/web/20010214022946/http://www.humanities.mcmaster.ca/~modlang/mlhome.htm

McMaster University (2003). Solo Testo. Wayback Machine. https://web.archive.org/web/20030928012724fw_/http://www.humanities.mcmaster.ca/~gargann/text.htm

McMaster University. (2021). McMaster University CUPE Local 3906, Unit 1 Mandatory Training Frequently Asked Questions. https://hr.mcmaster.ca/app/uploads/2021/08/Mosaic-TA-Training-Module-FAQs_FINAL.pdf

Woolgar, S. (1990). Configuring the user: The case of usability trials. The Sociological Review, 38(1, Suppl.), S58–S99.

AI Writing

So I saw some tech-guru on Twitter pontificating about how awesome AI writing assistants were and I thought, let’s give that a try. So, I signed up for a demo of a product, and it’s pretty slick. (HASHTAG NOT AN AD #notsponsored). The thing that I used is called Moonbeam, which is aimed at helping you become a better writer (and yes, it does help with blank page syndrome).

My initial thought is: slick, but not quite there yet for niche interests (which holds out hope for teaching – more on that in a minute).

I gave it the prompt “The Disasterous Choice to Teach Using an LMS”.

It then provided me with a fleshed-out draft – three arguments against, three arguments for, and a conclusion. I edited out the three arguments for and generated this:

The Disasterous Choice to Teach Using an LMS

Introduction: Discuss the reasons why some instructors choose to use an LMS, or learning management system, to teach their students and how this negatively impacts learning.

Some instructors choose to use an LMS because it is a convenient way to deliver course content and track student progress. While this may be appealing at first, using an LMS can have negative consequences for learning. For example, a more passive learning environment can create difficulty for students in accessing and understanding the material. Additionally, using an LMS can limit instructor creativity and flexibility in designing and delivering instruction. As a result, the choice to use an LMS should be based on what will best meet the needs of the instructor and the students.

There are several potential disadvantages to using an LMS, including the following:

There are several potential disadvantages to using an LMS, including the following:

Increased potential for cheating: With an LMS, it is easier for students to find and copy answers from each other, which can lead to decreased grades and a loss of learning.

Limited interaction with instructors: If problems arise or clarification is needed, students may have difficulty getting in touch with their instructors in a timely manner through an LMS.

Technical difficulties: An LMS can be prone to glitches and crashes, which can interfere with classes and disrupt learning.

High cost: Some LMSs can be expensive for schools or individual teachers to implement and maintain.

a) It can be difficult to keep track of all the different features and functions of an LMS.

It can be difficult for educators to keep track of all the different features and functions of an LMS. This can lead to frustration and confusion for both teachers and students. An LMS can also be expensive, and many school districts cannot afford to purchase one. There is also a lack of research on the effectiveness of using an LMS in the classroom. Some educators believe that an LMS can actually hinder learning rather than promote it.

b) An LMS can be time-consuming to set up and maintain.

An LMS can be time-consuming to set up and maintain, which can take away from time that could be spent teaching.

An LMS can be difficult to use, which can lead to frustration for both teachers and students.

An LMS can be expensive, which can limit its use to only those who can afford it.

An LMS can be inflexible, which can make it difficult to adapt to changing needs or requirements.

c) Some students may find it difficult to navigate an LMS due to design of the course or the LMS itself.

Some students may find it difficult to navigate an LMS due to its design or the LMS itself, leading to frustration and a feeling of being overwhelmed. This can impact their ability to learn effectively.

An LMS that is designed with user-friendliness in mind can help alleviate some of these issues. Additionally, providing support and training for students on how to use an LMS can also be beneficial.

Ultimately, it is important to consider the needs of all students when choosing to use an LMS for teaching purposes.

Lack of control over the learning environment, dependence on technology, and limited interaction with students are some of the challenges that arise when using an LMS for instruction. These issues require significant investment of time and resources if they are going to be overcome.

Ineffective for certain types of learners is another potential downside of using an LMS for teaching purposes; however, this can be remedied by taking into account the specific needs of each student group before designing a course content strategy around an LMS platform

Now that’s not a bad starting point. I could flesh that out and expand on the points. And that’s the value that Moonbeam proposes – giving folks an option to edit all the way through (which likely improves their AI model). If I were generating content as a content provider, I definitely would be using this as a way to flesh out ideas and keep things a little more focused while working rapidly.

Now, if I were a niche content producer (oh say, talking about educational technologies), then the downside to this is that you’re not providing much additional value over a Google search. I’m sure the AI is using scraped data as one of its sources, and so for the niche stuff that doesn’t have a lot of linkage or major traction – like most of the educational technology focused blogs – it probably won’t deliver anything but a surface understanding of the subject. I can’t imagine the wrongness of something about punk (actually I can, because I generated a blog post about how 1982 is the best year for punk – it included the sentence: 1984 was a crucial year for punk music, as it saw the rise of crossover poppunk and hardcore metal.)

The testimonials on the front page of the site, well, they tell you that (if you’re an educator) your students are already using the tool. A quick social media search shows that is true as well. If it’s on Twitter, chances are there’s thousands more on private Discord servers and SnapChat.

The hope for education? Well, it’s the same argument we had about the web destroying education – access to all the information in the world does not provide the context one needs to form a deep understanding of the subject. Instead of moving away from rehashed, crappy essays (and please, can we get away from essays) and towards more participatory ways of demonstrating learning, I’m sure higher education will simply try to outlaw these things and adhere to stricter, more surveillance-y kinds of approaches to solving the problem. The problem with more participatory ways of demonstrating learning is that they aren’t easily scalable – or are they? One thing that MOOCs taught me is that with a large enough bank of assessments, it is totally possible to demonstrate learning at scale (now assessing… that’s a different matter). Look at the DS106 Assignment Bank. It’s got clear assignments linked to loose themes (which could be your learning outcomes, if that’s how you roll) that students can select from. I suspect that these would be difficult to do through AI. Another strategy might be to use the AI and show how you’d improve that output as part of the essay building process.

Will we get there though? The labour costs of doing the DS106 assignment bank are huge – and the assessment of 600+ students (thinking about first year folks) would be considerable. Maybe there are ways around it; I’m sure that people could come up with clever ways to assess at scale – but anything offloading assessment to students or peers is a band-aid on the way higher education is running.