Screenshot from The Simpsons "Treehouse of Horror" episode where a Krusty the Clown doll tries to murder Homer because it was on the "Evil" setting.

Welcome back to University of Hard Knocks, your friendly neighborhood higher ed newsletter. Hope everyone's having a good Friday.

This week I'm going to provide a debrief on a busy week over here. You'll find some thoughts on AI, digital workshops, and academic conferences.


On Thursday I virtually attended a conference titled "Pens & Pixels: Generative AI in Education" (an event sponsored by UC-Irvine and the Spencer Foundation). There were some really interesting perspectives and insightful presentations focused on use-cases for AI in education, though I was left a little uneasy by some of the folks who seemed to want to put to bed any opposition to using AI in classroom contexts. One presenter literally said "resistance is futile." I'm not some kind of monster opposed to weaving pop culture references into academic spaces, and the comment was a bit tongue-in-cheek, but I would love to encounter fewer folks who are apparently sympathetic to The Borg lifestyle.

I did appreciate the longer view taken by Mark Warschauer (Director of UC-Irvine's Digital Learning Lab) in his opening remarks on the ways that "tools become part of human agency and activity." Like Warschauer, I have seen critiques of AI, and attendant justifications for banning AI use by students, that construct idealized and romanticized visions of human expression and experience. Using the work of folks like Gregory Bateson and Walter Ong, Warschauer productively complicated tidy binaries between humans and technologies, highlighting (via Ong, for instance) ways in which "[t]echnologies are not mere exterior aids but also interior transformations of consciousness." While many educators may already be on board with these bigger questions concerning technology, human experience, and creativity, I've seen a fair share of rhetorical moves made in the wake of AI's rise that would justify bringing some "what we talk about when we talk about technology" framings to these conversations.

That being said, as someone who is on board with these ways of thinking about technologies, I do wish more time had been spent thinking about AI through the lenses of labor and capitalism. Some educators are less resistant to the idea that tools become part of human agency and activity and more concerned about the ways these tools become part of human agency and activity in relation to the topics we teach: placing additional pressures on laborers in hypercapitalist systems where labor is already devalued, for instance.

I was thinking a lot about, say, the financial motivations driving desires to use AI to create digital replicas of background actors in Hollywood productions. I was also thinking about the repairman in the classic "Treehouse of Horror" Simpsons episode about the Krusty the Clown doll that keeps trying to murder Homer. The repairman comes in and flicks a switch on the doll so it becomes "good," not "evil." Problem solved. Sometimes it can feel like advocates for AI in various contexts assume that these kinds of switches will be built into company workflows and use-cases. I don't think it's too cynical to be worried about corporations behaving badly when it comes to AI. We've already seen plenty of bad actors: publishers replacing writers and editors with shoddy AI articles, nonprofits replacing human specialists with chatbots that offer harmful advice.

The dire labor conditions in higher ed have been bubbling to the surface even more over the last few years. Early pandemic teaching highlighted issues like a widespread lack of support – staffing, tools, but especially money and time – for educators forced to acclimate to new learning modalities. Capitalist motivations fueled frantic efforts to maintain active semester calendars at all costs...well, "at all costs" save, you know, paying living wages to educators and creating conditions of labor that benefit staff and students alike. We did see money materialize via emergency grants and temporary funding for staffing and faculty training in a number of higher ed sectors. And I am sure that some institutions have been taking a long hard look at what made these last few years so challenging, with an eye towards transformative change. I would love to know the names of those institutions, by the way.

AI is viewed by many higher ed practitioners as a disruptive force in part because many of us know that our responses to AI – developing new course and institutional policies, improving individual and cross-campus digital literacies, assessing particular tools and data policies and use-cases, exploring contexts where AI can benefit or do harm to our specific campus communities – will invariably be underfunded, labor-intensive, and not as good as they could be if we had more time and money. Many educators have nonetheless gone to great lengths to create and disseminate resources on their home campuses and across the web. Some of them are even making some money off these efforts via keynotes and webinars and publications and startups.

We're also seeing in real time how AI tools and the benefits these augmentations allegedly bring will not be equally distributed (sorry), in educational contexts or elsewhere. For every institution that has the monetary and staffing resources to do research and development, offer digital literacy training to students and staff, and even develop AI use-cases that model the benefits of these technologies to their communities, we'll likely see schools where staff and even instructors are replaced by AI tools with sleek marketing and lower costs.

Despite learning a lot at the virtual conference regarding AI and seeing some of the promising developments happening in certain education sectors, stuff like the strikes in Hollywood and the generally dire state of things in the world has me feeling pretty worried about what's on the horizon regarding AI and labor. I will wrap up this section with a few positive takeaways:

  • In a discussion of student digital literacies and misinformation, Warschauer talked about the importance of distinguishing between "vertical" and "lateral" reading methods in online contexts. Vertical reading stays within a single source; a lateral approach involves, say, reading a news story or a piece of social media or an AI-generated output and then opening a new browser tab to dig further into the claims made in that initial piece of writing. Many digital readers have learned to adopt these strategies when verifying information or expanding their perspectives beyond an individual narrative, but many of our students (or even peers) may benefit from cultivating, refining, and discussing these strategies in our classrooms. I don't think lateral reading is a magic bullet that addresses all concerns about misinformation and hallucinations in current AI outputs (and I don't think Warschauer suggested that!), but I appreciated the descriptions of these tendencies and the emphasis on their potential value.
  • I really enjoyed a presentation by Dora Demszky (Stanford Assistant Professor in Education Data Science) on how researchers are thinking about AI applications in student feedback (and instructor feedback) contexts. Demszky and her collaborators also seem heavily invested in ethical workflows and methodologies: collaborating with fellow educators, digging into training data, reflecting on specific use-cases and concerns about privacy, etc. Demszky also recently co-authored an open-access article (with Rose E. Wang) titled "Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance for Scoring and Providing Actionable Insights on Classroom Instruction."
  • It was fascinating to learn about what Duolingo is doing with AI tools via Cindy Berger, a Lead Learning Designer / Senior Learning Design Manager at the company. While I'm sure the company benefits from touting their wares, it was cool that the conference organizers invited her to speak about these experiences and contexts beyond the classroom, given the widespread use of Duolingo and the conversations about AI's potential value to educational efforts in language learning.  

So while I was left feeling bummed out, I was glad to take the time to attend some of the sessions at this event. I am getting a little fatigued from attending virtual programming on AI and higher ed, but please feel free to share any information on upcoming events like this one.


I spent the past week taking a free virtual course on Python Basics, taught by Nathan Kelber as part of Constellate's Text Analysis Pedagogy Institute (TAP). I signed up in large part to move my knowledge of Python from "I can read a Python Notebook and kind of get what is happening there" to...more than that, lol. The last time I dealt with Python in earnest was at the tail end of grad school, when I talked a bit with a data scientist who was very nice but also clearly aware that I was out of my element. That experience left me feeling like Python was just something I would never really be able to use in a hands-on way. At the very least, I wanted to be more comfortable in collaborative contexts where I might end up working with folks who were well-versed in Python so I had a better sense of what to talk about and think about.

In short, Nathan was a fantastic instructor and I feel so much more confident now! As someone who often runs this kind of programming (workshops and the like), I was particularly impressed by the way he managed a large and varied group of learners (I think we got close to 200 participants in one of the first Zoom class sessions!) and by the design of each day's tasks and attendant documentation. The notebooks offered us chances to test out our emerging knowledge in nice, compartmentalized chunks, and each lesson wrapped up with some challenges to help us break a sweat after learning the basics. But I especially loved Nathan's commitment to what he called at one point "an atmosphere of playfulness and discovery" in this learning environment. Messing things up and writing incorrect code was part of the process of education and discovery. As someone who hasn't had a ton of formal training in coding, I was delighted to be a part of this type of class. And while I started the Basics course wondering if I even had a need to learn more about this topic, I was left humming and thinking about potential use-cases and small projects that might help me in edtech and in research contexts.
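To give a flavor of those compartmentalized exercises, here's a tiny sketch of the sort of beginner-friendly task the course built toward. This is my own made-up example, not one of the actual TAP notebook challenges, and it nods to Constellate's text analysis focus by counting word frequencies in a snippet of text:

    # A toy word-counting exercise (my own invention, not a TAP notebook challenge).
    def count_words(text):
        """Return a dictionary mapping each lowercased word to how often it appears."""
        counts = {}
        for word in text.lower().split():
            word = word.strip('.,!?;:"')  # drop common punctuation stuck to words
            if word:
                counts[word] = counts.get(word, 0) + 1
        return counts

    sample = "Messing things up is part of the process. Messing up is fine!"
    # Print the words from most to least frequent.
    for word, total in sorted(count_words(sample).items(), key=lambda pair: pair[1], reverse=True):
        print(word, total)

Nothing fancy, but it's the kind of small, self-contained task that helped the basics (loops, dictionaries, string methods) click for me.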

We worked in Constellate's Lab environment, but you can access the notebooks we used in the course (and other resources) via GitHub as well. There are some additional TAP courses happening this summer too. I wish I had time to take more, but I'm really glad I made the time for the Basics course at the very least.

Nathan also recommended Al Sweigart's Python books as a set of helpful resources, so I think I'm going to check those out when I have some free time.


This week is the 2023 Digital Humanities Conference, an academic event regularly attended by a number of folks doing DH work across the globe. I've come across a few interesting reads via the conference's #dh2023 hashtag on Twitter and Mastodon. Not as much as I've found in previous years, but I'd likely chalk that up to the cost of conference travel, pandemic conditions, an uptick in other virtual gatherings and regional / hybrid offerings, Twitter being a tire fire, etc. I did ask folks on Twitter if there were particular sessions or discussions about DH and labor; if you heard about one (or participated in one!), let me know!

  • Earlier this year Ashley Champagne, Director of the Brown University Library Center for Digital Scholarship, published an academic article in the open-access journal Interdisciplinary Digital Engagement in Arts and Humanities (IDEAH) titled "Planning for Uncertainty: Building Trust in the Midst of Uncertainty in Digital Scholarship Projects." I admire this piece because it demonstrates how Ashley (who was a great collaborator during my time at Brown as a postdoc) is helping her team navigate and document the accomplishments as well as the challenges that digital support staff can face in collaborations on long-term initiatives.

    There's often a need for support staff to move a bit out of their comfort zones depending on the project's components, for instance. And Ashley is clearly trying to create communication channels (internally and in conversations with faculty) that allow her collaborators in the library to be transparent about these conditions of uncertainty. Project management and communication skills can often be viewed as "soft skills" in digital initiative contexts, but it's clear that they are pretty essential, especially in professional contexts with folks who don't always collaborate on these kinds of multi-year, evolving projects.
  • Digital Humanities Workshops: Lessons Learned is a new collection of essays from Routledge, edited by Laura Estill and Jen Giuliano (the link back there goes to an OA edition of the book). I haven't dug in yet but it looks like there's a great range of perspectives and contexts in there. I dig the Where / Who / How organization of the collection too.

Woof, that was a long one! I'm still working through the kinks of this newsletter project thing but hopefully some of what is in here made sense or seemed useful.

Outside of higher ed stuff, it was kind of a quiet week. I spent most of it alternating between air-conditioned office work and taking my dog / son Charles to local parks and dog-friendly breweries (thank you, Notch!).

I did watch The Return of the Living Dead for the first time last night. Super fun movie! I almost shoehorned some references to it into the AI discussion, but I don't think I needed any zombies or new wave punks in there. Maybe next time.


Thanks for reading! Feel free to check in via Twitter while it still lasts (or Mastodon), or you can email me: jimmcgrath[dot]us[at]gmail[dot]com. You can also learn more about my interests and work on my website. Oh yeah, I'm on Bluesky now too (jimmcgrath.bsky.social).

UHK #2: AI and Academic Labor