Open World
Lorna M Campbell
The 16th annual Open Education Conference (OER25) is taking place in London next week, and the theme, “Speaking truth to power: open education and AI in the age of populism”, could not be more urgent or important. Chaired by Sheila MacNeill and Dr Louise Drumm, both of whom have a long-standing commitment to critical engagement with ed tech, the conference features keynotes by Helen Beetham and Joe Wilson.
Helen’s keynote, “When speaking truth is not enough: repurpose, rebuild, refuse”, will explore the links between the AI industry and the politics of populism. Helen’s thoughtful, contextual approach to education technology, and AI in particular, has already made me step back and question the foundational concepts of artificial intelligence. I’m still thinking about her keynote at the 2023 ALT Winter Conference, “Whose Ethics? Whose AI? A relational approach to the challenge of ethical AI”.
Joe Wilson has been my Open Scotland partner in crime for over a decade now, and I’m continually inspired by his optimism and his commitment to openness. Joe’s keynote, “Shaping Open Education”, will focus on the challenges of closing the attainment gap, promoting social mobility, using AI ethically, and keeping open education at the heart of change.
I’m also really pleased to see that Natalie Lafferty and Sharon Flynn will be leading a workshop on reviewing ALT’s Framework for Ethical Learning Technology, which is more critically important now than ever. The workshop will inform an updated version of the framework, which is due to be launched at the end of the year.
I’ve been hugely privileged to attend all fifteen OER Conferences, going right back to OER10 in Cambridge, but unfortunately I won’t be able to go to London this year. I’ve had to step back from all work commitments as I was diagnosed with stage two throat cancer earlier in the year. I’ve already completed six weeks of radiotherapy treatment and am now (hopefully!) on the slow and convoluted road to recovery. (The jury is still out as to whether and how this relates to the autoimmune disease I was diagnosed with last year.) Over the last six months I’ve been deeply moved by how immensely kind people have been; I really can’t express my gratitude enough.
I haven’t had much energy to focus on anything other than recovery for the last six months, but during occasional bright spots I’ve found myself turning more and more to independent writing and journalism in an attempt to find some respite from endless doomscrolling. Shout out to Audrey Watters’s Second Breakfast, Rebecca Solnit’s Meditations in an Emergency, Carole Cadwalladr’s How to survive the Broligarchy, and Helen Beetham’s imperfect offerings for keeping me sane, more or less. All inspiring women with fearless voices speaking truth to power.
I’ve also been enthralled by the Manchester Mill’s tenacious investigative journalism, which led to the suspension of two members of the University of Greater Manchester’s senior leadership team, including the vice chancellor, and the subsequent police enquiry into “allegations of financial irregularity”. As a former (brief) employee of the University of Greater Manchester, when it was better known as the University of Bolton, I’ll be watching with interest to see how this investigation develops.
I’ve been making a rather half-hearted attempt at following the progress of the government’s questionable Data (Use and Access) Bill, particularly as it relates to AI and copyright, but I haven’t got the brainpower or willpower to write about that right now.
In the meantime, I’ll hopefully be able to follow some of the OER25 Conference online and I’ll be with everyone in spirit, if not in person, this year.
(This post was previously published on the
Open.Ed Blog
.)
With many image and media applications now integrating AI tools, it’s easier than ever to generate all kinds of eye-catching graphical content for your presentations, blog posts, teaching materials, and publications. Want a picture of a cartoon mouse to liven up your slides? No problem! Stable Diffusion, Midjourney, DALL-E, or Media Magic can create one for you. And if your AI generated rodent happens to bear a striking resemblance to another well known cartoon mouse, well, that’s just a coincidence, no?
Copyright and AI
The relationship between ownership, copyright and AI is still highly contested, both in terms of the works ingested by the data models driving these tools and the content they generate. Many of these data models ingest content that has been scraped from the web with scant regard for intellectual property, copyright and ownership. Whether this constitutes legal use of protected works is a moot point. Creative Commons’ position is that “training generative AI constitutes fair use under current U.S. law”. Not everyone agrees; several artists and media organisations are attempting to sue various AI companies that they claim have used their creative works without their consent. Creative Commons believe that preference signals could offer a way to enable creators to indicate how their works can be used above and beyond the terms of the licence, and are exploring the practicalities of this approach (Preference signals for AI training). It remains to be seen whether this is likely to be an effective solution to an intractable problem.
The European Union has taken a slightly different approach to copyright and AI with the EU Artificial Intelligence Act. Broadly speaking, the Act permits GenAI providers to use copyright content to train data models under the terms of the text and data mining exceptions of the existing Directive on Copyright in the Digital Single Market (DSM Directive). However, rights holders are able to reserve their rights to prevent their content from being used for text and data mining and training GenAI. Furthermore, providers must keep detailed records and provide a public summary of the content used to train their data models. In short, it’s a compromise; GenAI models can scrape the web, but they must keep a public record of all the content they use, and they must allow copyright holders to opt out. How this will work in practice remains to be seen.
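To make the shape of that compromise concrete, here is a minimal sketch of what honouring an opt-out while keeping a provenance record might look like. The opt-out registry and the log fields are hypothetical illustrations; the real reservation mechanisms (and the form of the Act’s “public summary”) are still being worked out in practice.

```python
from urllib.parse import urlparse

def filter_and_log(urls, reserved_domains):
    """Skip URLs whose domain has reserved its rights; log what remains.

    `reserved_domains` stands in for whatever machine-readable opt-out
    mechanism a provider actually checks; the provenance log stands in
    for the Act's required public summary of training content.
    """
    provenance_log = []
    usable = []
    for url in urls:
        domain = urlparse(url).netloc
        if domain in reserved_domains:
            continue  # rights holder has opted out: do not ingest
        usable.append(url)
        provenance_log.append({"url": url, "domain": domain})
    return usable, provenance_log

# Hypothetical example: one site has reserved its rights, one has not.
urls = [
    "https://example.org/essay.html",
    "https://optedout.example/photo.jpg",
]
usable, log = filter_and_log(urls, {"optedout.example"})
```

Trivial as this looks, it captures why the compromise is contested: the burden of discovering and respecting every reservation, at web scale, falls entirely on the provider’s pipeline.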
The UK is one step behind the EU; the government is undertaking an open consultation on Copyright and Artificial Intelligence, which appears to be broadly following the EU’s approach.
Copyright of AI Generated Content
Then there’s the issue of who owns the copyright of AI generated content. One common assumption is that AI generated images are not subject to copyright because they are not creative works produced by humans. Creative Commons’ perspective is that “creative works produced with the assistance of generative AI tools should only be eligible for protection where they contain a significant enough degree of human creative input to justify protection” (This is not a bicycle: Human creativity and generative AI). The problems start when AI tools generate images that are almost indistinguishable from the content they have ingested. Take that AI generated cartoon mouse, for example. The reason it’s so similar to Disney’s famous, and famously copyrighted, mouse is that the AI data models are likely to have scraped millions of images of Mickey Mouse from the web, with little regard for Disney’s intellectual property. Rights holders may be able to argue that an AI generated image infringes their copyright on the basis of substantial similarity (The complex world of style, copyright, and generative AI). This represents a risk which AI application developers are keen to shift onto their users. It’s not uncommon for AI applications to explicitly make no copyright claim over the images generated by their tools. For example, with regards to the copyright of AI generated images, Canva states:
“The treatment of AI-generated images and other works under copyright law is an open question and the answer may vary depending on what country you live in.
For now, Canva does not make any copyright claim over the images you create with our free AI image generator app. As between you and Canva, you own the images you create with Text to Image (subject to you following our
), and you give us the right to host them on our platform and to use them for marketing our products.”
So if Disney does happen to spot your AI generated cartoon mouse and decides to sue, it’s you, or your employer, that’s going to be liable, not the tool you used to generate the image.
OER Service Guidance
The University of Edinburgh’s
OER Service
currently provides the following advice and guidance on using AI generated images:
We also recommend consulting the University of Edinburgh’s Generative AI Guidance for Staff.
Public Domain Images
A more ethical, and environmentally friendly, alternative to using AI generated images is to use public domain images, of which there are millions, with more entering the commons every year. Public domain works are creative works that are no longer under copyright protection, either because copyright has expired and they have entered the public domain, or because they have been dedicated to the public domain by creators who choose to give up their copyright. This means that they can be used free of charge, by anyone, for any purpose, without any restrictions whatsoever. You don’t even have to provide attribution to the creator, though we always recommend that you do.
There are many fabulous sources of easily discoverable public domain images on the web, including:
Flickr Commons
Wikimedia Commons
Rijksmuseum
Getty Museum Open Content Program
Europeana
Openverse
British Library on Flickr
Public Domain Review
Public Domain Image Archive
Public Domain Day is celebrated on the 1st of January each year. In many countries, this is the day that copyright expires on creative works and they become part of the public domain.
This year, on Public Domain Day, the Public Domain Review launched a new interface to their
Image Archive
to enable users to search and explore their collections.
Public Domain Image Archive
And if you do happen to be looking for a cartoon mouse to use in your slides you’ll find one in the public domain that you can use with no restrictions or risk of copyright infringement, either for you or your employer. The original version of Mickey Mouse from the 1928 cartoon
Steamboat Willie
entered the public domain in 2024.
Mickey Mouse by Walt Disney, public domain image from the 1928 cartoon Steamboat Willie.
Further Reading
CC Responds to the United States Copyright Office Notice of Inquiry on Copyright and Artificial Intelligence
AI Act of the European Union
AI, the Artificial Intelligence Act & Copyright
European Parliament Directive on copyright and related rights in the Digital Single Market
The complex world of style, copyright, and generative AI
This is not a bicycle: Human creativity and generative AI
Preference signals for AI training
The Power of Open Culture
Happy Public Domain Day 2025
Mickey’s Adventure into the Public Domain
Mickey Mouse Is Now in the Public Domain After 95 Years of Disney Copyright
(This post previously appeared on the
Open Scotland
blog and on
Open.Ed
.)
The 3rd UNESCO World OER Congress took place in Dubai last week. The previous two congresses, held in Paris in 2012 and Ljubljana in 2017, resulted in the Paris OER Declaration and the Ljubljana OER Action Plan, which was the forerunner of the 2019 UNESCO Recommendation on OER. The output of the 3rd OER Congress is the Draft Dubai Declaration on OER.
The theme of the Dubai congress was “Digital Public Goods: Open Solutions and AI for Inclusive Access to Knowledge”. Digital public goods (DPG) are defined by the UN’s Roadmap for Digital Cooperation as “open-source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the sustainable development goals (SDGs)”. In this context open educational resources are regarded as digital public goods that “support the enrichment of the global knowledge commons”.
In addition to the
Sustainable Development Goals
, the
UNESCO Recommendation on OER
, and the
Road Map for Digital Cooperation,
the Dubai Declaration also references Commitment 7 of
Our Common Agenda
: to “Improve digital cooperation”.
Key themes of the Declaration are harnessing the opportunities afforded by emerging technologies such as AI and blockchain to create new OER, curate and index existing OER, translate OER, and “ensure the provenance, integrity, and lawful use of OER”.
The Declaration outlines Recommendations in five areas (paraphrased from the draft):
Capacity Building
Support professional development for educators, content creators and those working on Gen AI projects, on copyright (inc. exceptions and limitations) and open licensing, to understand challenges posed by emerging technologies and ensure sharing and collaboration that respect copyright laws.
Promote digital literacy for users and developers to engage in the responsible creation and use of emerging technologies for OER.
Develop technologies such as cryptographic signing, semantic interoperability, and machine learning to improve attribution and discoverability of OER, e.g. embedding metadata into OER, identifier generation standards, author-identity credentials, time-stamping mechanisms and signing OER packages.
Prioritise digitally signed works for OER repositories, and their use in the training of open AI models.
Implement strategies grounded in human rights that are open, accessible, multistakeholder and gender inclusive to ensure respect for user generated data, metadata, privacy and attend to ethical practices and respect copyright rules.
Policy
Policy environments should focus on the protection and verifiability of authorship of OER and other Digital Public Goods.
Open licensing should be incorporated into the Terms of Use of AI applications specifying that it is only to be used by humans to generate openly licensed content.
Support embedding licensing information of training content in the output generated by AI tools. When open licensed materials are used to train AI models, the resulting generated content should be made available under compatible open licenses, and attribution to the copyright owner(s) of the training materials should be reflected in the generated content.
Encourage and support research into next generation attribution systems to enable tracing the use and re-use of OER.
Ensuring inclusive and equitable access to quality OER
Support the development of AI-enabled OER that is accessible in low-bandwidth scenarios and designed to enhance the accessibility of vulnerable groups.
Include cryptographic signing into quality criteria for the production of OER. Emphasise the connection of signatures to real-world identity of authors – to create incentives for publication and counter misinformation.
Support the translation and contextualisation of OER with the participation of different user communities.
Encourage the engagement of diverse participants in communities of open practice.
Sustainability Models for OER
Support approaches to IPR protection and OER development driven by the ROAM-X principles of human rights, openness, accessibility, and multi-stakeholder participation.
Promote sustainable environmental approaches for digital public goods to minimise energy consumption and reduce the carbon footprint, recognising when the use of AI-tools is not necessary or appropriate.
Practice participatory governance, active transparency, public reporting and regular audits for the complete OER ecosystem (including technological, legal, and pedagogical aspects) to build trust among stakeholders.
Prioritise public infrastructure and public-private partnerships, while also supporting private initiatives for OER using emerging technologies, that adhere to the principles of digital public goods and openness.
International cooperation
Promote human-centred use of emerging technologies, including AI, for the implementation of the UNESCO Recommendation on OER.
Engage with the open community and legal experts on open licensing and IP law to ensure that emerging technologies adhere to legal terms and address the demands of diverse stakeholders.
Develop ethical frameworks and new technologies to promote OER, including more effective identification of provenance and tracking using AI-based techniques.
Encourage OER repositories and content sources to implement policies that prioritise digitally signed works, and define how they may be processed and used, including criteria for the training of AI models.
Develop AI platforms to create OER adhering to the UNESCO Recommendation on OER.
A few thoughts
As with previous congresses, there were no representatives present from UK government ministries, education authorities, or institutions. While this is disappointing, we do hope that the new Declaration will prompt the education sector in Scotland to reconsider the benefits and affordances of open educational resources. It was the Paris OER Declaration that originally inspired the development of the Scottish Open Education Declaration, and Joe Wilson and I were fortunate to attend the 2nd World Congress in Ljubljana to represent Open Scotland. Though we had limited success persuading the Scottish Government of the benefits of supporting OER, the Scottish Open Education Declaration did prove to have some influence further afield, particularly in Morocco, where it informed the development of a similar initiative. I was pleased to see that Morocco was an active participant in the Dubai Congress, highlighting their “national OER and Open Science strategy that aims to modernise education and expand research accessibility, driven by strong engagement from educators” (Latifa bint Mohammed inaugurates 3rd UNESCO World OER Congress in Dubai).
I’m very encouraged that the Declaration highlights the importance of developing digital skills and copyright literacy to ensure everyone is able to understand the impact of AI and emerging technologies. Supporting digital skills development has always been one of the cornerstones of the University of Edinburgh’s OER Policy and OER Service. Our approach is to empower staff and students to develop the skills and confidence to make informed decisions about creating and using open educational resources and open licensed content.
I’m also pleased that the Declaration recognises the importance of supporting diverse communities of open practice, though I do feel that supporting open practice should underpin all the recommendations of the Declaration.
I’m a bit surprised by the prioritisation of digital signatures and cryptographic technologies, and I’m alarmed by the recommendation that signatures should connect to authors’ real-world identities. While this approach does have the potential to address issues relating to attribution and verification, and to combat misinformation, it’s also potentially ripe for abuse.
It’s interesting that embedding metadata in open content and tracking OER have reappeared. Both are great ideas, but neither is straightforward to implement. I know; I worked with educational metadata standards for many years, and also managed a programme of small OER tracking projects way back in 2010. Part of the problem is that open educational resources are such a diverse class of things and, by their very nature, they are scattered all over the internet. I can’t help feeling that many of these recommendations presuppose that OERs exist in curated repositories. While some do, the vast majority don’t, and never will. Semantic search services have long been seen as the key to enabling cross searching and discovery of heterogeneous resources distributed across the web, but I’m not sure how much progress has been made towards making this a reality.
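The kernel of the Declaration’s metadata-and-signing idea is simple enough to sketch, even if deploying it across the scattered OER landscape is not. Here is a minimal, hypothetical illustration: a metadata sidecar whose content digest ties the licence and attribution information to one exact version of a resource. The field names are my own invention; real implementations would use established schemas such as LRMI / schema.org rather than ad hoc keys.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_sidecar(oer_bytes, title, author, licence_url):
    """Build an illustrative metadata sidecar for an OER file.

    The SHA-256 digest binds the metadata to one exact version of the
    content: change a single byte and the digest no longer matches.
    """
    return {
        "title": title,
        "author": author,
        "licence": licence_url,
        "sha256": hashlib.sha256(oer_bytes).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

content = b"An open textbook chapter..."
sidecar = make_sidecar(
    content,
    "Chapter 1",
    "A. Author",
    "https://creativecommons.org/licenses/by/4.0/",
)
print(json.dumps(sidecar, indent=2))
```

Note that a digest like this only proves integrity, not authorship; connecting it to an author’s real-world identity requires public-key signatures and some identity infrastructure, which is exactly where the concerns about abuse come in.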
While I’m not surprised that the Declaration focuses on the affordances and challenges of generative AI and emerging technologies, I am concerned that it rather glosses over the many problematic ethical issues, including algorithmic bias, exploitative and extractive labour practices, and environmental impact. The Declaration does reference the ROAM-X principles and sustainable environmental approaches, and highlights the importance of recognising when the use of AI tools is not necessary or appropriate, but I feel it could have gone a lot further. I would like to have seen some acknowledgement of the risks of rapidly embracing these new technologies, risks that are not evenly distributed across the globe, and a stronger focus on human centred approaches to achieving the aims of the UNESCO Recommendation on OER and the Sustainable Development Goals.
Resources
3rd UNESCO World OER Congress
3rd UNESCO World OER Congress livestream recordings
Draft Dubai Declaration on OER: Digital Public Goods and Emerging Technologies for Equitable and Inclusive Access to Knowledge
Latifa bint Mohammed inaugurates 3rd UNESCO World OER Congress in Dubai
I know it’s a crowded field, but I came across an AI / open data development recently that really made me stop and take a breath.
The Living Museum
introduces itself as follows:
If the artifacts in museums could talk, what would you say to them? Would you ask about their origins, or what life was like back in their eras? Or would you simply listen to their stories?
Created by an independent developer, Jonathan Talmi, The Living Museum is an experimental AI interface that uses content from the British Museum’s openly licensed digital collections database to enable users to curate personalised exhibits and “talk” to individual artefacts about their history and origins. The developer is unaffiliated with the British Museum and makes it clear that the data is used under the terms of the CC BY-NC-SA licence.
In an
introductory blog post
Talmi says
I hope this project demonstrates that technology like AI can increase immersion, thereby improving educational outcomes, without sacrificing authenticity or factuality.
The app was launched on the Museums Computer Group mailing list and Twitter a couple of weeks ago, and it was met with a generally favourable response. However, there were some dissenting voices from curators, art historians, and authors, who pointed out the problematic nature of imposing AI generated voices onto artefacts of deep spiritual and cultural significance, whose presence in the British Museum’s collections is hugely contested.
Others questioned the macabre ethics of foisting an artificial voice on actual human remains, such as the museum’s collection of mummies. I had a surreal conversation with the mummy of Cleopatra, who died in Thebes aged 17, during the reign of Trajan. It was a deeply unsettling experience.
This is where “authenticity and factuality” were both sacrificed…
The response actually acknowledges the disrespectful and ethically questionable nature of the whole project. My head was starting to melt at this point.
Pressing the question of repatriation prompts the voice to “step out of the artificial artifact persona”…
The whole experience was as surreal as it was disturbing
There was also criticism from some quarters that the developer had “exploited” the work of professional curators by using the British Museum’s data set without their explicit knowledge or permission. It’s important to note that the CC BY-NC-SA licence does explicitly allow anyone to use the British Museum’s data within the terms of the licence; however, just because the licence says you can, doesn’t necessarily mean you should. When it comes to reusing open content, the licence is not the only thing that should be taken into consideration. This is one of the key points raised by the
Ethics of Open Sharing
working group commissioned by Creative Commons in 2021, and led by Josie Fraser. The report of the working group acknowledges that not everything should be shared openly, and highlights issues relating to cultural appropriation:
Ethical open sharing may require working in partnership with individuals, communities and groups and ensuring their voices are heard and approaches respected. While in some cases openly sharing resources can help to promote cultural heritage and redress gaps in knowledge, in others it may be experienced as cultural insensitivity, disrespect or appropriation — for example, in relation to sacred objects or stories and funerary remains.
Something that both the British Museum and developers using its digital collections should perhaps consider.
By coincidence, the launch of The Living Museum coincided with the release of
Mati Diop
‘s film
Dahomey
, winner of the Berlin Film Festival’s Golden Bear award.
Dahomey
, also gives a voice to sacred cultural artefacts; a collection of looted treasures being repatriated from France to the former kingdom of Dahomey, in current day Benin. In Diop’s absorbing and hypnotic film the power figure of the Dahomeyan king Ghezo speaks in
Fon
, his voice disembodied and electronically modified.
In an interview with Radio 4’s
Screenshot
(23:20), Diop spoke eloquently about “the violence of the absence of the artefacts from the African continent.”
“These artefacts are not objects, they have been objectified by the Western eye, by the colonial perspective, locked into different stages, art objects, ethnographic objects, even locked into beauty.”
“To me it was immediate to give back a voice to these artefacts because I felt that the film is what restitution is about, which is giving back a voice, which is giving back a narrative, a perspective. The film tries to embody the meaning of restitution.”
I was lucky enough to see
Dahomey
at the GFT accompanied by a conversation with Giovanna Vitelli, Head of Collections at
The Hunterian
, and Dr Christa Roodt and Andreas Giorgallis, University of Glasgow. The Hunterian is just one of a number of museums interrogating the harms perpetuated by their colonial legacy, through their
Curating Discomfort
intervention. The conversation touched on power, control and sacredness, with Vitelli noting
“Possession means power. We, the museums, hold the power, and control the power of language. The film speaks powerfully about voices we in the global north do not hear.”
I’ve written in the past about the importance of considering
whose voices are included and excluded
from open spaces and the creation and curation of open knowledge. On the surface it may appear that AI initiatives facilitated by the cultural commons, like The Living Museum, have the potential to bring collections to life and give a voice to marginalised subjects, however it’s important to question the authenticity of those voices. By imposing inauthentic AI generated voices on culturally sensitive artefacts there is a serious risk of perpetuating exploitative colonial legacies and racist ideology, rather than addressing harms and increasing knowledge equity. Something for us all to think about.
I’ve been dipping my toes back into the debate about open education and AI over the last few weeks. I stepped back from this space earlier in the year both for personal reasons and because I was getting a bit dispirited by the signal to noise ratio. It’s still a very noisy space, more so if anything, but there are some weel-kent voices emerging that are hard to ignore.
David Wiley laid out his stall last month in the webinar
Why Open Education Will Become Generative AI Education
, and his views have been predictably polarising. There have already been several thoughtful responses to David, which I can highly recommend reading:
Openness isn’t just about product
~ Martin Weller
Is Open Education becoming Gen-AI Education?
~ Robert Schuwer
The Soul of Open is In Danger
~ Heather M. Ross
I don’t want to repeat the very pertinent points that have already been made, but I do want to add my concerns about the starting point of David’s argument, which is:
“the primary goal of the open education movement has been to increase access to educational opportunities. The primary strategy for accomplishing this goal has been to increase access to educational materials. And the primary tactic for implementing this strategy has been to create and share OER.”
Why Generative AI Is More Effective at Increasing Access to Educational Opportunity than OER
This is certainly one view of the open education movement (which is by no means a homogeneous entity), but open education isn’t just about goals, strategies and tactics; there are other perspectives that need to be taken into consideration. I find this content centric view of open education a bit simplistic and reductive, and I had hoped that we’d moved on from this by now. I would suggest that the primary purpose of open education is to improve knowledge equity, support social justice, and increase diversity and inclusion. While content and OER have an important role to play, the way to do this is by sharing open practice.
This slide in particular made me pause…
Leaving aside the use of the Two Concepts of Liberty, which is not unproblematic, I’m presuming “users” equates here to teachers and learners, which is a whole other topic of debate. It’s certainly true that open licences alone don’t grant the skills and expertise needed to engage in “high-demand revise and remix activities”, but I’m not sure anyone ever claimed they did? And yes, GenAI could be a way to provide users with these skills, but at what cost? There’s little discussion here about the ethical issues of copyright theft, algorithmic bias, exploitation of labour, and the catastrophic environmental impact of AI. Surely a more responsible and sustainable way to gain these skills and expertise is to connect with other teachers and learners, other human beings, and to share our pedagogy and practice? While there’s a certain logic to David’s hypothesis, it doesn’t take into account the diversity of practice that can make open education so empowering.
Aside from the prediction that Generative AI Education will save / replace / supersede OER, I couldn’t help feeling that there is still an underlying assumption that OER = open textbooks. (This was also an issue I had with one of the keynotes at this year’s OER24 Conference.) It shouldn’t need saying, but there are myriad kinds of open resources above and beyond open textbooks. What about student co-created OER, for example? It’s through the process of creation, of gathering information, of developing digital and copyright literacy skills, of formulating knowledge and understanding, that learning takes place. The OER, the content created, is a valuable tangible output of that process, but it’s not the most important thing. If we ask GenAI to produce our OER, what happens to the process of learning by doing, creating and connecting with other human beings?
This issue was touched on by Maren Deepwell and Audrey Watters in the most recent episode of Maren’s brilliant
Leading Virtual Teams
podcast. It’s been really inspiring to see Audrey
re-enter the fray
of
education technology criticism
. We need her clear incisive voice and fearless critique now more than ever.
Touching on the language we use to talk about AI, Audrey reminded us that “Human memory and computer memory are not the same thing.” And in her
The Extra Mile
newsletter she says:
“I do not believe that the machine is or can be “intelligent” in the way that a human can. I don’t think that generative AI and LLMs work the same way my mind does.”
This very much called to mind Helen Beetham’s thoughtful perspective on ethics and AI at the
ALT Winter Summit
last year where she said that “generative”, “intelligence”, and “artificial” are all deeply problematic concepts.
“Every definition is an abstraction made from an engineering perspective, while neglecting other aspects of human intelligence.”
Towards the end of the podcast, Maren and Audrey talked about the importance of the embodied nature of being and learning, how we tap into such a deep well of embodied knowledge when we learn. It’s unthinkable to outsource this to AI, for the simple reason that AI is stupid.
The embodied human nature of learning was also the theme of Marjorie Lotfi’s beautiful six-part poem,
Interrogating Learning
, commissioned by Edinburgh Futures Institute for the inaugural event of their Learning Curves
Future of Education
series. Marjorie weaves together the voices of displaced women and, I believe, speaks more deeply about what it means to learn than any disembodied “artificial intelligence” ever could.
What have you learned?
When asked this question how will a woman answer?
For a moment she’s back in her mother’s belly
a heart beating out a rush of cortisol
or a warm dream of sleep listening through a barrier of skin and blood
before even her own first breath.
And then the day she’s born
blinking at the bright of daylight, candle, bulb,
hearing the low buzz of electric
and the sudden clarity of a voice she knows already.
Learning it again.
There have been a thousand things to learn in every day I’ve been alive,
the woman thinks,
and I am 53 this year.
Hands of Hope, Cork, CC BY, Lorna M. Campbell
Last week the OER24 Conference took place at the Munster Technological University in Cork and I was privileged to go along with our OER Service intern Mayu Ishimoto.
The themes of this year’s conference were:
Open Education Landscape and Transformation
Equity and Inclusion in OER
Open Source and Scholarly Engagement
Ethical Dimensions of Generative AI and OER Creation
Innovative Pedagogies and Creative Education
The conference was chaired with inimitable style by MTU’s Gearóid Ó Súilleabháin and Tom Farrelly, the (in)famous Gasta Master.
The day before the conference I met up with a delegation of Dutch colleagues from a range of sectors and organisations for a round table workshop on knowledge equity and open pedagogies. In a wide-ranging discussion we covered the value proposition and business case for open, the relationship between policy and practice, sustainability and open licensing, student engagement and co-creation, authentic assessment and the influence of AI. I led the knowledge equity theme and
shared experiences and case studies from the University of Edinburgh.
Many thanks to Leontien van Rossum from SURF for inviting me to participate.
A Cautionary Fairy Tale
The conference opened the following day with Rajiv Jhangiani’s keynote, “
Betwixt fairy tales & dystopian futures – Writing the next chapter in open education
“, a cautionary tale of a junior faculty member learning to navigate the treacherous path between commercial textbook publishers on the one hand and open textbooks on the other. It was a familiar tale to many North American colleagues, though perhaps less relatable to those of us from UK HE, where the model of textbook use is rather different, OER expertise resides with learning technologists rather than librarians, OER tends to encompass a much broader range of resources than open textbooks, and open resources are as likely to be co-created by students as authored by staff. However, Rajiv made several points that were universal in their resonance. In particular, he pointed out that it’s perverse to use the moral high ground of academic integrity to defend remote proctoring systems that invade student privacy, and tools that claim to identify student use of AI, when these companies trample all over copyright and discriminate against ESL speakers. If we create course policies that are predicated on mistrust of students, we have no right to criticise them for being disengaged. Rajiv also cautioned against using OER as a band aid to cover inequity in education; it might make us feel good, but it distracts us from reality. Rajiv called for ethical approaches to education technology, encouraging us not to be distracted by fairy tales, but to engage with hope and solidarity while remaining firmly grounded in reality.
Rajiv Jhangiani, OER24, CC BY Lorna M. Campbell.
Ethical Dimensions of Generative AI and OER Creation
Generative AI (GAI) loomed large at the conference this year and I caught several presentations that attempted to explore the thorny relationship between openness and GAI.
UHI have taken a considered approach by developing policy, principles and staff- and student-facing guidance that emphasises ethical, creative, and environmentally aware use of generative AI. They are also endorsing a small set of tools that provide a range of functionality and stand up to scrutiny in terms of data security. These include MS Copilot, Claude, OpenAI ChatGPT, Perplexity, Satlas and Semantic Scholar. Keith Smyth, Dean of Learning & Teaching at UHI, outlined some of the challenges they are facing, including AI and critical literacy, tensions around convenience and creation, and the relationship between GAI and open education. How does open education practice sit alongside generative AI? There are some similarities in terms of ethos; GAI repurposes, reuses, and remixes resources, but in a really selfish way. To address these ambiguities, UHI are developing further guidance on GAI and open education practice and will try to foster a culture that values and prioritises sharing and repurposing resources as OER.
Patricia Gibson gave an interesting talk about “Defending Truth in an Age of AI Generated Misinformation: Using the Wiki as a Pedagogical Device”. GAI doesn’t know about the truth; it is designed to generate the most accurate response from the available data, and if it doesn’t have sufficient data, it simply guesses or “hallucinates”. Patricia cautioned against letting machines flood our information channels with misinformation and untruth. Misinformation creates inaccuracy and unreliability and leads us to question what truth is. However, awareness of GAI is also teaching us to question the images and information we see online, enabling us to develop critical digital and AI literacy skills. Patricia went on to present a case study about Business students working collaboratively to develop wiki content, which echoed many of the findings of Edinburgh’s own Wikipedia in the curriculum initiatives. This enabled the students to co-create collaborative knowledge, develop skills in sourcing information, curate fact-checked information, engage in discussion and deliberation, and counter misinformation.
Interestingly, the Open Data Institute presented at the conference for what I think may be the first time. Tom Pieroni, ODI Learning Manager, spoke about a project to develop a GAI tutor for use on a Data Ethics Essentials course:
Generative AI as an Assistant Tutor: Can responsible use of GenAI improve learning experiences and outcomes?
CC BY SA, Tom Pieroni, Open Data Institute
One of the things I found fascinating about this presentation was that while there was some evaluation of the pros and cons of using the GAI tutor, there was no discussion about the ethics of GAI itself. Perhaps that is part of the course content? One of the stated aims of the Assistant AI Tutor project is to “Explore AI as a method for personalising learning.” This struck me because earlier in the conference someone, sadly I forget who, had made the sage comment that all too often technology in general, and AI in particular, effectively removes the person from personalised learning.
Unfortunately I missed Javiera Atenas and Leo Havemann’s session on
A data ethics and data justice approach for AI-Enabled OER
, but I will definitely be dipping into the slides and resources they shared.
Student Engagement and Co-Creation
Leo Havemann, Lorna M. Campbell, Mayu Ishimoto, Cárthach Ó Nuanáin, Hazel Farrell, OER24, CC0.
I was encouraged to hear a number of talks that highlighted the importance of enabling students to co-create open knowledge as this was one of the themes of the talk that OER Service intern Mayu Ishimoto and I gave on
Empowering Student Engagement with Open Education
. Our presentation explored the transformative potential of engaging students with open education through salaried internships, and how these roles empower students to go on to become radical digital citizens and knowledge activists. There was a lot of interest in Information Services Group’s programme of student employment and several delegates commented that it was particularly inspiring to hear Mayu talking about her own experience of working with the OER Service.
Open Education at the Crossroads
Laura Czerniewicz and Catherine Cronin opened the second day of the conference with an inspiring, affirming and inclusive keynote
The Future isn’t what it used to be: Open Education at a Crossroads OER24 keynote resources
Catherine and Laura have the unique ability to be fearless and clear-sighted in facing and naming the crises and inequalities that we face, while never losing faith in humanity, community and collective good. I can’t adequately summarise the profound breadth and depth of their talk here; instead I’d recommend that you watch their
keynote
and read their accompanying
essay
. I do want to highlight a couple of points that really stood out for me though.
Laura pointed out that we live in an age of conflict, where the entire system of human rights is under threat. The early hope of the open internet is gone; a thousand flowers have not bloomed. Instead, the state and the market control the web, Big Tech is the connective tissue of society, and the dominant business model is extractive surveillance capitalism.
AI has caused a paradigmatic shift and there is an irony around AI and open licensing; by giving permission for re-use, we are giving permission for potential harms, e.g. facial recognition software being trained on openly licensed images. Copyright is in turmoil as a result of AI and we need to remember that there is a difference between what is legal and what is ethical. We need to rethink what we mean by open practice when GAI is based on free extractive labour. Having written about the contested relationship between invisible labour and open education in the past, this last point really struck me.
HE for Good
was written as an antidote to these challenges. Catherine & Laura drew together the threads of
HE for Good
towards a manifesto for higher education and open education, adding:
“When we meet and share our work openly and with humility we are able to inspire each other to address our collective challenges.”
CC BY NC, Catherine Cronin & Laura Czerniewicz, OER24
Change is possible they reminded us, and now is the time. We stand at a crossroads and we need all parts of the open education movement to work together to get us there. In the words of Mary Robinson, former President of Ireland, former UN High Commissioner for Human Rights, and current Chair of the Elders:
“Our best future can still lie ahead of us, but it is up to everyone to get us there.”
Catherine Cronin & Laura Czerniewicz, OER24, CC BY, Lorna M. Campbell.
The Splintering of Social Media
One theme that emerged during the conference is what Catherine and Laura referred to as the “splintering of social media”, with a number of presenters exploring the impact this has had on open education community and practice. This splintering has led people to seek new channels to share their practice, with some turning to the fediverse, podcasting and internet radio. Blogging didn’t seem to feature quite as prominently as a locus for sharing practice and community, but it was good to see Martin Weller still flying the flag for open ed blogging, and I’ve been really encouraged to see how many blog posts have been published reflecting on the conference.
Gasta!
The Gasta sessions, overseen by Gasta Master Tom Farrelly, were as raucous and entertaining as ever. Every presenter earned their applause and their Gasta! beer mat. It seems a bit mean to single any out, but I can’t finish without mentioning Nick Baker’s
Everyone’s Free..to use OEP,
to the tune of Baz Luhrmann’s “Everybody’s Free (To Wear Sunscreen)”, Alan Levine’s
Federated
, and Eamon Costello’s hilarious
Love after the algorithm: AI and bad pedagogy police
. Surely the first time an OER Conference has featured Jon Bon Jovi sharing his thoughts on the current state of the pedagogical landscape?!
Eamon Costello, Jon Bon Jovi, Tom Farrelly, Alan Levine, OER24, CC BY, Lorna M. Campbell
The closing of an OER Conference is always a bit of an emotional experience and this year more so than most. The conference ended with a heartfelt standing ovation for open education stalwart Martin Weller who is retiring and heading off for new adventures, and a fitting and very lovely impromptu verse of
The Parting Glass
by Tom. Tapadh leibh a h-uile duine agus chì sinn an ath-bhliadhna sibh! (Thank you all, and see you next year!)
Martin Weller, Tom Farrelly, Gearóid Ó Súilleabháin, CC BY, Lorna M. Campbell, OER24.
The title of this blog post is taken from this lovely tweet by Laura Czerniewicz.
Last week I joined the
ALT Winter Summit on Ethics and Artificial Intelligence
. Earlier in the year I was following developments at the interface between ethics, AI and the commons, which resulted in this blog post:
Generative AI: Ethics all the way down
. Since then, I’ve been tied up with other things, so I appreciated the opportunity to turn my attention back to these thorny issues. Chaired by Natalie Lafferty, University of Dundee, and Sharon Flynn, Technological Higher Education Association, both of whom have been instrumental in developing ALT’s influential
Framework for Ethical Learning Technology
, the online summit presented a wide range of perspectives on ethics and AI, both practical and philosophical, from scholars, learning technologists and students.
Whose Ethics? Whose AI? A relational approach to the challenge of ethical AI – Helen Beetham
Helen Beetham opened the summit with an inspiring and thought-provoking keynote that presented the case for relational ethics. Positionality is important in relational ethics; ethics must come from a position, from somewhere. We need to understand how our ethics are interwoven with relationships and technologies. The ethics of AI companies come from nowhere. Questions of positionality and power engender the question “whose artificial intelligence”? There is no definition of AI that does not define what intelligence is. Every definition is an abstraction made from an engineering perspective, while neglecting other aspects of human intelligence. Some kinds of intelligence are rendered as important, as mattering, others are not. AI has always been about global power and categorising people in certain ways. What are the implications of AI for those that fall into the wrong categories?
Helen pointed out that DARPA have funded AI intensively since the 1960s, reminding me of many learning technology standards that have their roots in the defence and aeronautical industries.
A huge amount of human refinement is required to produce the training data for these models; this is the black box of human labour, mostly involving labourers in the global south. Many students are also working inside the data engine in the data labelling industry. We don’t want to think about these people because it affects the magic of AI.
At the same time, tools are being offered to students to enable them to bypass AI detection, to “humanise” the output of AI tools. The “sell” is productivity, that this will save students’ time, but who benefits from this productivity?
Helen noted that the terms “generative”, “intelligence”, and “artificial” are all very problematic and said she preferred the term “synthetic media”. She argued that it’s unhelpful to talk about the skills humans need to work alongside AI, as these tools have no agency, they are not co-workers. These approaches create new divisions of labour among people, and new divisions about whose intelligence matters. We need a better critique of AI literacy and to think about how we can ask questions alongside our students.
Helen called for universities to share their research and experience of AI openly, rather than building their own walled gardens, as this is just another source of inequity. As educators we hold a key ethical space. We have the ingenuity to build better relationships with this new technology, to create ecosystems of agency and care, and empower and support each other as colleagues.
Helen ended by calling for spaces of principled refusal within education. In the learning of any discipline there may need to be spaces of principled refusal, this is a privilege that education institutions can offer.
Developing resilience in an ever-changing AI landscape ~ Mary Jacob, Aberystwyth University
Mary explored the idea of resilience and why we need it. In the age of AI we need to be flexible and adaptable, we need an agile response to emerging situations, critical thinking, emotional regulation, and we need to support and care for ourselves and others. AI is already embedded everywhere, we have little control over it, so it’s crucial we keep the human element to the forefront. Mary urged us to notice our emotions and think critically, bring kindness and compassion into play, and be our real, authentic selves. We must acknowledge we are all different, but can find common ground for kindness and compassion. We need tolerance for uncertainty and imperfection and a place of resilience and strength.
Mary introduced Aberystwyth’s AI Guidance for staff and students and also provided a useful summary of what constitutes AI literacy at this point in time.
Achieving Inclusive education using AI – Olatunde Duruwoju, Liverpool Business School
Tunde asked how we address gaps in equity and inclusion. Time and workload are often cited as barriers that prevent these issues from being addressed; however, AI can help reduce these burdens by improving workflows and capacity, which in turn should help enable us to achieve inclusion.
When developing AI strategy, it’s important to understand and respond to your context. That means gathering intersectional demographic data that goes beyond protected characteristics. The key is to identify and address individual students’ issues, rather than just treating everyone the same. Try to understand the experience of students with different characteristics. Know where your students are coming from and understand their challenges and risks; this is fundamental to addressing inclusion.
AI can be used in the curriculum to achieve inclusion. For example, AI can be helpful for international students who may not be familiar with specific forms of assessment. Exams trigger anxiety, so how do we use AI to move away from exams?
AI Integration & Ethical Reflection in Teaching – Tarsem Singh Cooner
Tarsem presented a fascinating case study on developing a classroom exercise for social work students on using AI in practice. The exercise drew on the
Ethics Guidelines on Reliable AI
from the European Group on Ethics, Science and New Technologies and mapped this against the
Global Social Work Ethical Principles.
The assignment was prompted by the fact that practitioners are using AI to uncritically write social work assessments and reports. Should algorithms be used to predict risk and harm, given they encode race and class bias? The data going into the machine is not benign and students need to be aware of this.
GenAI and the student experience – Sue Beckingham, Louise Drumm, Peter Hartley & students
Louise highlighted the lack of student participation in discussions around AI. Napier University set up an anonymous padlet to allow students to tell them what they thought. Most students are enthusiastic about AI. They use it as a dialogue partner to get rapid feedback. It’s also helpful for disabled and neurodivergent students, and those who speak English as a second language, who use AI as an assistive technology. However, students also said that using AI is unfair and feels like cheating. Some added that they like the process of writing and don’t want to lose that, which prompted Louise to ask if we’re outsourcing the process of critical thinking. Louise encouraged us to share our practice through networks, adding that collaboration and cooperation are key and can lead to all kinds of serendipity.
The students provided a range of different perspectives:
Some reported conflicting feelings and messages from staff about whether and how AI can be used, or whether it’s cheating. Students said they felt they are not being taught how to use AI effectively.
GCSEs and the school system just don’t work for many students, not just neurodivergent ones; it’s all about memorising things. We need more skills-based learning rather than outcome-based learning.
Use of AI tools echoes previous concerns about the use of the internet in education. There was a time when there was considerable debate about whether the internet should be used for teaching & learning.
AI can be used to support new learning. It provides on hand personal assistance that’s there 24/7. Students create fictional classmates and partners who they can debate with. A lot of it is garbage but some of it is useful. Even when it doesn’t make sense, it makes you think about other things that do make sense.
A few thoughts…
As is often the case with any new technology, many of the problematic issues that AI has thrown up relate less to the technology itself, and more to the nature of our educational institutions and systems. This is particularly true in the cases of issues relating to equity, diversity and inclusion; whose knowledge and experiences are valued, and whose are marginalised?
It’s notable that several speakers mentioned the use of AI in recruitment. Sue Beckingham noted that AI can be helpful for interview practice, though Helen highlighted research suggesting that applicants who used ChatGPT’s paid functionality performed much better in recruitment than those who didn’t. This suggests that we need to be thinking about authentic recruitment practices in much the same way we think about authentic assessment. Can we create recruitment processes that mitigate or bypass the impact of these systems?
I particularly liked Helen’s characterisation of AI as synthetic media, which helps to defuse some of the hype and sensationalism around these technologies.
The key to addressing many of the issues relating to the use of AI in education is to share our practice and experience openly and to engage our colleagues and students in conversations that are underpinned by contextual ethical frameworks such as ALT’s Framework for Ethical Learning Technology. Peter Hartley noted that universities that have already invested in student engagement and co-creation are at an advantage when it comes to engaging with AI tools.
I’m strongly in favour of Helen’s call for spaces of principled refusal, however at the same time we need to be aware that the genie is out of the bottle. These tools are out in the world now, they are in our education institutions, and they are being used by students in increasingly diverse and creative ways, often to mitigate the impact of systemic inequities. While it’s important to acknowledge the exploitative nature and very real harms perpetrated by the AI industry, the issues and potential raised by these tools also give us an opportunity to question and address systemic inequities within the academy. AI tools provide a valuable starting point to open conversations about difficult ethical questions about knowledge, understanding and what it means to learn and be human.
CC BY, Lorna M. Campbell unless otherwise indicated.