IGF 2024 - Day 1 - Workshop Room 9 - OF 30 - Harnessing GenAI to Transform Education for All

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Okay.  So welcome.  Good afternoon, everyone.  I hope everybody had a good lunch and a good coffee break.  And so, today's session is about generative AI and education: Harnessing Generative AI to Transform Education for All.

So, we take a different approach.  We take the systems approach, because for the use of generative AI, there are different perspectives to look at it.  So from our end, it's more of a whole society, multi-stakeholder approach.  So, I'll explain to you why I say it this way.

Before I talk about that, I have to introduce my organization.  So that's my job.  I'm the director of United Nations University Research Institute in Macau.  How many of you have heard of UNU?  One, two, great.

>> We've heard of them.

>> MODERATOR: Wonderful, thank you.  So, U.N. University headquarters is in Tokyo.  We have 13 research institutes in 12 different countries.  We are the U.N.  We also have an academic identity, and that's why we do research and training and education.

13 research institutes in 12 countries, and we have different institutes covering different expertise.  The one, as you can see, on the map, those are the locations of our institutes.

So, the institute that I'm heading is UNU Macau, which specializes in digital technology and the Sustainable Development Goals.  We have been around for more than 30 years.  Recently, we have been working more towards AI governance and AI ethics, in addition to digital technology and women and gender, cybersecurity, and growing up online.  So we have a huge portfolio, and if you're interested, we can talk later.

And today's approach is a systems approach to talk about generative AI and education.

Multi-stakeholder.  So when we talk about generative AI, we certainly will look at the system itself, but beyond systems, the technical background is usually less important, I would say, than people.  So, people have to be at the centre.

So, let's look at the people picture.  On the right side, you can see that there are teachers, certainly, and teachers have been taking advantage of tools, trying to develop personalized education tools using generative AI.  And we also have learners, students, and maybe nowadays we talk about life-long learners; actually, everybody has been using it.

And we also look at the schools and school administrators.  For example, universities.  Now, generative AI transforms how we learn and how we teach drastically.  What kind of curriculum would be relevant?  How do we train people for future generations?  These are the questions that administrations need to think about.

And if we look at the bigger outer ecosystem, we need to look at the policymakers, ministry of education also and also the regulators.

And parents.  Some of you are parents, and you understand that sometimes you want to know what your children are doing online with generative AI.

So, this is -- also, we have the technology companies.  They are actually the ones who develop the technology.

With this people map, I'm very happy to introduce to you our panellists, because we represent actually all the roles here.  We also have researchers.

So, this is what I mean by a systems approach or whole-society approach to discuss generative AI and education topics.

So, I would like to introduce you to our wonderful panel, in alphabetical order.

First one is Dr. Antonios Saravanos.  Later, you will see him; he is an associate professor of information systems management from New York University.  And sitting next to me is Dr. Eliamani Laltaika, judge of the High Court of Tanzania and faculty member of Nelson Mandela University.  And he's from Tanzania.  And we have Professor Mike Perkins, associate professor and head of the Centre for Research and Innovation at British University Vietnam.

And we have Mr. Mohamed Shareef, director of government and information relations at OXIQA and former minister of state from the Maldives.  Unfortunately, Dr. Gao cannot make it with us, so we'll stay with the five people here.

The first one I would like to ask our panellists.

We will have our presentation and then later, I would encourage you to interact with us and share your best practices with us.  Later, I will invite you to speak.

First, I would like to ask you.

Would you please share how educators in the Maldives use generative AI in their classrooms?

>> MOHAMED SHAREEF: I had the opportunity during the last year, year and a half, to interact with educators in the Maldives, mostly K-12 educators, but also faculty from the main universities in the Maldives.  There has been an increasing interest among educators in AI.  Now AI for practitioners like me and you I'm -- (Audio breaking up)

And everyone's looking to see how they can be supercharged with AI.  And the same is true in the Maldives.  The Maldives is a small island developing nation.  It's an upper-middle-income nation, so it's not a least developed nation, but -- (Audio cutting out) -- technology adoption.

Now when we look over the last year, I interacted with about two -- the first thing I asked -- I know what generative AI is.  They have a sense of this is something they need.  This is something they want.

But then I ask do you use it?  And what I found was quite surprising.  That nearly 85% of educators in the Maldives already use it.

But not as I hoped they would, but they use it.

So I asked them: what do you do with it?  AI can make beautiful slides.  This is the first thing.  Because creating slides is a big headache, I guess, for educators.

And with AI, you can just give it your notes and it will create the bullet points.  And if you have a better AI, it will even put it all into PowerPoint or whatever tool you want.

So it is the idea that teachers are already taxed in terms of time they have, but what I found interesting was that about 15% of educators both in higher education and in K-12 were already using generative AI on a daily basis to teach or to aid them in their teaching duties.  And this was surprising, because I didn't really expect that they would be using this outside of, say, casual kind of exploratory work.  But this is quite surprising.

But what is even more surprising is that when I asked them what are your concerns?  Because I thought there would be concerns.  I thought there would be concerns with generative AI, because generative AI could replace them.  But when I asked them K-12, they're concerned most that they don't have the knowledge to leverage generative AI.  And they don't have the training opportunities.

And their second concern was how can we access AI?  Access to AI is limited.  And their third concern was accuracy.  They're teachers, when they create something, they know if it's not accurate.  They're really concerned about the accuracy of AI.

And at the bottom of the list, only 2% of the respondents told me they have any concerns about being replaced.  I think because at the top -- when I asked in higher education, there is a contrast.

The top concern for them is plagiarism and cheating.  For higher education, the concern is: how am I going to assess these students when they are trying to pass AI's work off as their own?  So there is definitely a lot of concern.  There is a lot of general demand for it.  So in the Maldives, there are two things that educators are looking for.  One, AI.  Two, how to keep children safe online: cybersecurity.  These topics have been in high demand, and they go hand in hand.

And this is from a developing country.  So, especially the fact that educators are putting at the top of their concerns their own capacity and plagiarism and how can we assess students that use AI.

>> MODERATOR: And this is very interesting.  Technicians, can you please bring up the Zoom?  We will invite our second speaker, since Mohamed mentioned plagiarism and how faculty members are going to assess their students.  Let's invite Dr. Saravanos.  As a professor, researcher, and computer scientist, he can probably easily discern when students submit work made by generative AI.

How can you teach your students to use their generative AI judgment so they can better use generative AI to enhance learning?

>> ANTONIOS SARAVANOS: So, you bring up an excellent point.  Unfortunately, it's quite easy to detect the use of ChatGPT or another artificial intelligence, specifically natural language processing, at the student level.

I was teaching an intro to programming course, and I would repeatedly see submissions where the solution used elements of the language (I was teaching Python) that we hadn't yet covered.

And when you have a discussion with the students it's clear that they don't really understand what the material is.  So, it's easy to catch them.  And there are many ways to catch the use of AI.

For example, when students submit essays.  You see them citing resources that don't exist, and that's quite common for ChatGPT to just make up references.

So, I think someone more experienced can kind of catch them out.

As an educator, I recognize the rise of GenAI tools as both a challenge and an opportunity in an academic environment.  The teaching approach that I've adopted focuses on reframing the challenges as opportunities in order to empower students, guiding them to use GenAI not as a shortcut for producing answers but as a tool to deepen their understanding, creativity, and problem-solving abilities.  Because whether we want to or not, when they go into industry, they'll be relying on this tool, so they need to be able to use it effectively.

There are many dimensions to this, and we're short on time, so I would say my foundation begins with helping the students understand the capabilities and the limitations of GenAI to begin with.

The first thing is to make sure that they understand that AI tools aren't like this omnipotent source of knowledge and there are inherent flaws.  So, we need to begin with that.  And then once they have that, we can move forward.

So, to illustrate this, I'll present case studies in class where the AI outputs contained mistakes, biases, fallacies, first emphasizing the importance of the human element.

So, I may have students generate a solution to a problem with ChatGPT and the class goes over and critiques a solution with me, identifying mistakes.

But you could even generalize this exercise.  With students, anything works where you have a GenAI response compared to a peer-reviewed article, highlighting the discrepancies, and challenging them to identify where the AI might have produced flawed or incomplete responses.  This is what the AI gave us.  How do we tell there's a mistake there?

Developing these metacognitive abilities, thinking critically, is where it's at.

Hopefully, this answers the question.

>> MODERATOR: Thank you.  Antonios has been incorporating and embracing AI into his teaching.  Let's move on to Mike.  One of your research interests is academic integrity.  Would you please share with us some strategies for detecting AI-generated content in academic settings?  Are the current tools effective?  What are your insights on the responsible and ethical use of generative AI?

>> MIKE PERKINS: I'm going to start off by saying: how can we detect it?  You can't.  I'm going to disagree with what Antonios said, then.  Educators cannot effectively detect the use of generative AI tools.  There have been several studies which have demonstrated this.  Earlier this year, a University of Reading study found that 94% of test submissions produced with GenAI were not detected.

We created a series of GenAI-produced assessments using GPT-4, submitted these into the piles that the faculty were marking, gave them the GenAI detection tools, and said: just tell us if you spot any assessments that have been created using AI.

Performance was extremely low in terms of people being able to pick this up.

Some of the comments you do hear: people saying, as Antonios mentioned, that ChatGPT makes up fake sources.  With the original ChatGPT 3.5, that was true.  It's getting less and less true now.  And now we have new tools, such as the one from Google Research last week, which actually carries out an agent-based search, compiles literature from real web sources, and will produce a full literature review for you.

So this sort of story, that with AI tools you can always tell, that we can detect them, is simply not true.  If you think you're spotting a piece of work that has been created through GenAI, you may be wrong.

Now you might say, okay, I've got an AI detection tool.  Also wrong.  From the research that I've carried out, and among many researchers in the academic integrity field, there's actually a consensus that these tools are not suitable for accusing students of committing plagiarism, as we say.

You might say, well, these software companies tell me that their tools have a 98% accuracy rating.  Okay, so you have 1,000 students.  How many students are you willing to falsely accuse of plagiarism?  You mark them a zero, they maybe fail an assessment, drop out of university.  Is that acceptable to you?  Certainly not to me.
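
The arithmetic behind this worry can be made concrete.  The sketch below is purely illustrative: the 10% share of students actually using AI and the detector error rates are assumptions, not figures from the talk.

```python
# Hypothetical illustration of the false-accusation arithmetic
# behind a vendor's "98% accuracy" claim.
students = 1000
cheaters = 100                      # assumed: 10% actually used AI
honest = students - cheaters

true_positive_rate = 0.98           # assumed: flags 98% of real AI use
false_positive_rate = 0.02          # assumed: wrongly flags 2% of honest work

true_positives = cheaters * true_positive_rate    # real AI use caught
false_positives = honest * false_positive_rate    # honest students flagged

# Of everyone flagged, what fraction actually used AI?
precision = true_positives / (true_positives + false_positives)

print(false_positives)          # 18.0 honest students falsely accused
print(round(precision, 3))      # 0.845
```

Even under these generous assumptions, roughly one in six accusations would hit a student who did nothing wrong, which is why detector scores alone are a poor basis for formal misconduct findings.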

And the research I've been carrying out highlights time and time again that it's actually the students already in the most precarious situations at their institution who are most at risk: maybe they're neurodivergent, maybe they speak English as a second language, and these are the students who write in the style that people say is GenAI.

People write in lists or write in a certain structured way.  Sometimes, GenAI tools replicate that, but that's because they're standardized forms of producing text.

And especially when you're an ESL speaker, you've been taught in this particular way of using certain words in a certain format.

What you end up saying is that these students have been caught using GenAI tools and are cheating, when they haven't been, and then they suffer some really severe consequences.

We've also got to really consider broader issues of equity and access around these tools, because you can take GenAI output, even if it is detected as GenAI-produced, and with a few simple techniques turn it into text that is not going to be detected by any AI text detector.

And we carried out this research.  We created pieces using GenAI tools.  We tested them against the seven most popular and research-backed AI detectors and found that they simply had very low accuracy to begin with: a 44% accuracy rating for unchanged text.  But if you're a student who's wanting to cheat, to get away with something, that's not how you use AI.  You don't just copy and paste your prompt and say, there you go.

But if you do, you're probably a struggling student who needs more support, who doesn't need to be told you've cheated, I'm going to throw you out of university now.  If you give me 15 minutes and a piece of text, I will make that text completely undetectable.

It might be 1,000, 2,000 words.  We demonstrated this with a few simple prompts, integrating these directly into our created prompts: just say write this in a more complex way, add some spelling errors to this, make this less complex, make this sound more human.  Add some verbosity to it.

Change the sentence length, the paragraph length.  What you're doing here is causing temperature changes in the underlying model.  If you have API access, you can set the temperature for the model, and you'll find a higher temperature will give you higher variation.  We're talking about models here that try to predict what's going to be the next word in a sequence, but if you just change that up, add in some additional words, rewrite some sections, you're not going to get detection that is going to be acceptable in really any formal academic integrity process.
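
The temperature mechanism Mike refers to can be sketched in a few lines.  This is a minimal, vendor-neutral illustration, not any particular API: temperature divides the model's raw scores (logits) before they are turned into next-word probabilities, so a higher temperature flattens the distribution and sampled text varies more.  The logits here are made-up values for three candidate next words.

```python
import math

def next_word_probs(logits, temperature):
    """Turn raw model scores (logits) into next-word probabilities.
    Dividing by a higher temperature flattens the distribution,
    so sampling produces more varied, less predictable text."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for three candidate next words.
logits = [2.0, 1.0, 0.1]

low = next_word_probs(logits, 0.5)    # low temperature: top word dominates
high = next_word_probs(logits, 2.0)   # high temperature: flatter distribution

print([round(p, 3) for p in low])     # top word gets ~0.86 of the probability
print([round(p, 3) for p in high])    # the same word drops to ~0.50
```

This is why higher-temperature output looks less like the statistically "average" text that detectors are tuned to flag: the model stops picking the single most likely word so consistently.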

If you take a look at The Guardian yesterday, I was quoted in there about this subject, and it's an interesting article which talks about these challenges.  It's the students who get falsely accused, and these are the ones who are struggling.  Or it's students who admit to taking some shortcuts, but is that their fault?  If they're using ChatGPT or other GenAI tools to do the assessment, why haven't you changed your assessment to account for these tools?  They've been out for two years now.  What's going on?

There are some really big changes that need to be made in education more broadly to recognize these tools and see how we can integrate them.  Thank you.

>> MODERATOR: We've been talking about academic integrity.  Let's move on to our lawyer.  You're an expert in intellectual property.  GenAI tools could be trained on material protected by IP.  There's a significant question about whether AI tools represent IP infringement.  What are some implications for education?

>> ELIAMANI LALTAIKA: Thank you very much for that question.  Before I go into it, I appreciate the last speaker, the professor, for opening this up.  As a judge, I was overseeing a case where a student was suing the university after being accused of using GenAI; they could not graduate because their work was accused of being AI-generated.

And as a judge, I need to determine this for the university and the student.  Surprisingly, the professor has said it's impossible to detect.  We are trying to take the AI, which has been so hyped -- at the centre.  And copyright has been at the centre of the conversation.

Whenever you come across something very impressive, you should be sure that someone has -- ChatGPT or any other generative AI and you get a wonderful text that suits your expectations -- you should be sure (?) (Audio cutting out) copyrighted work was heavily violated.

Say I'm the professor of sociology (?) ChatGPT has violated copyright, so that has an implication for education.

Secondly is the issue of attribution or ownership.  Academia is known for acknowledging other people's work.  This professor has cited so many works that half a page is footnotes.

-- and sadly, we are seeing a big overreaction.  We have extremely restrictive rules.

Some people think whatever a judge says is binding.  (?) should we encourage open licensing?  I very much support monetary compensation to authors and creatives, but this is shooting ourselves in the foot.

I also think there's a need to establish ethical guidelines at every level.  We should have ethical guidelines at the university level, at the ministry level (?) for the next two minutes or so, but AI is a blessing in disguise in developing countries.

In Tanzania (Audio technical difficulties)

>> MODERATOR: The issues related to GenAI in the global south.  The second round of questions will focus more on the perspective of the global south.  In the U.N., leave no one behind is a central value.

Let's look at GenAI from the global south for a second.

The first question will also go to Mohamed.  Would you please reflect on how generative AI creates a digital divide between the global north and the global south?  What should policymakers consider to reduce the divide and promote more equitable access to GenAI, particularly in small island countries like the Maldives?

>> MOHAMED SHAREEF: The potential of GenAI is undoubtedly huge.

Today, everyone expects actually and should expect AI to support them in the sustainable development goals.

AI is already supporting us in the darkest of our times.  But what I fear most is AI is a new -- we are already facing a lot of challenges trying to catch up to the rest of the world.  AI is like Red Bull.  It gives digital transformation wings.

If you don't have the Red Bull, how are you going to catch up with the folks who do?  In the global south, we just get a drip, drip, drip of this Red Bull.

There are three aspects we talk about when we talk about the digital divide.  One is access to technology.  Another is educational gaps.  The north has a huge advantage.  The north is already investing its wealth today in AI, and in particular in generative AI.

And not only that, but there's also a huge educational gap, because they're investing in the ecosystems; in this part of the world, that becomes an educational gap.  Then again, access to this is extremely limited.  I'm sure you've already heard that 26% of the world still remains offline.  And 50% of that is in the Asia Pacific.  Island communities are particularly impacted.

So, what do I suggest as someone who has been for a long time a practitioner in digital transformation in the developing part of the world?

I would say we've got to find a narrative where even the low-income communities or the least wealthy countries need to prioritize investment in IT infrastructure.

In Maldives, we've gone from having one submarine cable to the rest of the world to five submarine cables.  So, we want to make sure that we are connected, and we are connected everywhere possible, from under the water and from the sky down.

This is extremely hard, as well.  I'm working with higher education in particular to develop AI modules that are multidisciplinary and taught to every student; from nurses to finance specialists, AI needs to be taught.

And we've got to develop an ecosystem where we can retain our smartest rather than lose them to the West.

So, governments need to partner.  Nation states and educational institutions need to come up with policies for the ethical use of AI.

We cannot just jump into AI.  We need guidelines that safeguard our data and our privacy and finally, we cannot do it alone.

And that is why we are very glad to be working alongside institutions like the UNU.  We've got to work together if we are actually going to bridge the AI divide and not let it divide us even further.  Thank you.

>> MODERATOR: The next question for Antonios: if you were teaching teachers from the global south to integrate generative AI tools into their classrooms, what would you tell them about the nature of the tools to help them understand the benefits and limitations?  We're talking about teachers' education in the global south.

>> ANTONIOS SARAVANOS: An excellent question.  I would begin by highlighting that there are two sides to the tool.  So on the one hand, we have the solutions being used by teachers to make them more productive and that was mentioned by other panellists, as well.  Generating slides, generating these types of resources, perhaps assessments and so on.

So, in that sense, it's quite powerful and then there needs to be training for that.

And then the other half is okay so how can we use it in assignments for the students?  So how can the students be using it to make them more productive, and so on and so forth?

And so, one part is understanding that there are two dimensions.  The other, again mentioned by other panellists, is the digital divide.  Luckily, there are free tools that one can use, but it's important also to recognize that they're restricted.  This goes a bit tangentially, but again, it would be quite wise for academics to work together to figure out ways to gain access to the more advanced solutions, and also to develop their own local tools that might not be as limited.

A lot of the work is open source.  Can we run our own AI solutions locally, with support and so on, and learn and develop in that area, so as not to be left behind?

For teaching teachers from the global south, I would begin by highlighting the technical nature of these tools: their benefits and their limitations.  I think a good starting point is: what is GenAI in general, right?

It can be used to generate text, images, music, code, so they get a good perspective of everything that it's possible to do.  Sometimes one is limited to the use cases they've heard from others.

So, I think a good overview is a great starting point.  And then highlighting the advantages and disadvantages.

So, it can summarize, explain, create content, but it doesn't think or understand.  It makes one think that it's thinking, but it's not really a human.  It's just guessing probabilities: what should come next.

So, talk about what the paid solutions are, the open solutions.  Google Colab has a solution for generating code.

And not everything will be appropriate for every instructor.  It depends on what subject matter you're teaching.

As was mentioned before, it may not be as easy to catch someone plagiarizing using ChatGPT in sociology, but it might be easier if it's an intro to programming course.

It really depends on the context.

I'm running short on time, but happy to expand on the conversation offline if anyone is interested.

>> MODERATOR: May I please ask our technicians to put the PowerPoint back up again.  The next question is for Mike.  You developed the AI assessment scale, which allows GenAI to be integrated -- would you please introduce the scale and explain whether it can be used in the educational setting in the global south and if so, how?

>> MIKE PERKINS: Thank you very much.  So I was just telling you earlier about how it's not feasible to detect whether students use generative AI tools, especially if you're in the global south.  These detection tools also cost a lot of money.

And the most accurate tools are going to be the ones that are most expensive.  And you in the global south may not be able to do this.  So what's the alternative?  What can we do to change things up?

What I've developed is a framework for how we can actually introduce generative AI tools in an ethical way into assessments.

What this is, is a conversation starter between academics and students to say: look, we know that GenAI tools exist.

We can't put the genie back in the bottle as much as some academics would like to and say let's go back where we've been.

So, what we have is a situation where, in the last two years, academics have been saying, oh, these students are cheating using GenAI, yet still setting the same essay questions they've set for 20 years.

Now is the time to change and the AI assessment scale is a way to say look, what are the important things you need to know and how can you introduce GenAI in an effective way?

So, let's start off right from the very beginning, where we say, look, there are times where we can't use any AI at all.  If you are in medical education and you are training future nurses and doctors, you want to ensure that when that student graduates and becomes a doctor, they actually know the fundamental biological aspects of a human.

So how are you going to test that?  You're not going to say here's an assignment and write me about the human heart, you're going to put them in an exam hall or a face-to-face assessment situation or a presentation and say tell me about this.

There's a corpse; demonstrate that you know how to cut it up.  You're not going to give them an assignment.  Hopefully, they're going to be able to deal with live humans.  Sometimes we need this fundamental knowledge to be tested, and there are times where there's no AI, and this is a secured assessment.

As soon as we go from a secured assessment, it is no longer possible to really control student use of GenAI.

So, if you can't control how the students are using GenAI tools, what you need to do is change your assessment so that they focus on the things you want to train them about.

For example, at level 2, this is where we're talking about process-based assessments.  You're a writing instructor and you want to teach students about how to plan an essay.  What tools can you use to help you plan an essay?  And you submit that as an assessment, and we explore it.

When students graduate, they are going to be asked to do tasks by their employers, and what we want is for them to be able to finish that output.  We don't say you're not allowed to use the internet to do your job.

We've got to train students how to use them effectively for different situations.

At the next level, we want to have AI as a collaborator.  We want to train students how to use GenAI tools to draft text, to adjust what they are creating, to give them feedback on their work, and what we're looking at is this co-creation element rather than trying to say, well, you wrote that part and the AI wrote that part.

I write using GenAI tools and by the time I finish writing my journal paper, I can't tell which part I wrote myself and which part the AI wrote and it's all my ideas and all my voice.  We're trying to train students on maintaining their voice and maintaining a critical approach to what is the best way to use AI.

And then we can go beyond that and say there are times where we want students to use AI tools specifically, and therefore we want to assess how well they're actually using GenAI tools.

We might say rather than just oh, yeah, you can use AI for this, we say you must use AI for this or show me your use of this tool.

We say this is full AI.

And this used to be the final version of the scale.  Then we recognized that technology is changing so rapidly, we need to recognize the increasingly multimodal use of AI, and that's why we have this final AI exploration level.

Now in this AI exploration level, what we're looking at is solving problems that we don't necessarily even know exist yet.  How can we use GenAI to solve problems that have been created by GenAI or to do things in a different way.  To fundamentally use GenAI to have this element of co-design and co-working between an educator, a student and a GenAI tool to actually solve something new?

We're not going to be talking about K-12 students here on this, but we may be talking final projects, maybe undergraduate dissertations, Ph.D. students, master's students.  This is how we can bring all of these together into this five-point scale, which hopefully supports students in using AI in a different way.

So that's how I think we can use it in the global south.  It's a free framework.  It doesn't require any licensing.  If you want to take this and adapt it, we actually have tools available; from the link on the bottom, you can download a translation.  We have this translated into 12 different languages already, with more to come, and we also have the design linked there: an asset that you can change and adapt to your own context.  Not everybody is the same.  Every country has different requirements, so we've made it possible to change it accordingly.

So, some information about the AI scale.

Thanks.

>> MODERATOR: I think the tools are very useful and thinking about the global south, probably the capacity building for the teachers and the students will be a challenge compared to the global north.  That's just my impression.

The last question, we go back to our legal expert.  The global south is expected to account for 62% of the global population in the coming years.  Tanzania is expected to grow its population by 50-90%.  How can Africa benefit from and contribute to the development of GenAI in education?

>> ELIAMANI LALTAIKA: Thank you very much for that question and I will start by (?) and innovation providing the next generation of Africans and increasing competitiveness and as a result, formal institutions were named after him.

And I come from one of them, the Institution of Science and Technology; I'm an adjunct faculty member there, and this has been our way of positioning ourselves to benefit from the global innovation scale, but also to contribute.

So, just as we established the Nelson Mandela institutions (this one was established in 2011, the one in Nigeria in 2007), (?) contributing meaningfully to the next generation of Africans.  Our current president, who happens to be a female head of state, and we are very proud of our president, has pioneered STEM education: science, technology, engineering, and mathematics, to prepare the next generation to contribute meaningfully to innovation.

And just last week, she reshuffled the cabinet.  When addressing the nation, she told the minister of ICT that the ministry has been narrowed so that he can focus specifically on ICT, to ensure that he explores whatever is going to help Tanzania compete in terms of ICT.  We have a long way to go, and this is how I want to finish my contribution: by asking everyone to ensure that you give a hand to the global south.

Personally, I got my Ph.D. in Germany by the generous scholarship of the Max Planck Society.

We are not seeing this happening anymore.  So we still need this so those in the south can share expertise and knowledge.  That's how we can position ourselves to benefit and contribute meaningfully to GenAI and STEM in general.

>> MODERATOR: Thank you.  So conscious of time, the session has to end.  But I would like to encourage all of you, whoever would like to have an exchange of ideas with the panel members, please come up and we can have individual conversations.

And for those online, sorry, we cannot accommodate questions.  Feel free to write to us and we can have an exchange later.

Thank you very much, panellists, and thank you for being here and listening to the session.  Thank you.