How can AI protect young people online instead of putting them at risk? Fellows from Youths IRL, Andesh Thevendran and Ephesean Tan Yin Kah, share their insights on youth-driven solutions, policy priorities, and the future of digital safety.
Transcript
00:00Hello and welcome to It's About Youth. I'm your host, Fah Nashay. Now, as AI becomes more deeply
00:13woven into our digital lives, the risks young people face from online threats are evolving very
00:20fast. Now, while policy makers and tech giants are working to solve some of the problems concerning
00:26AI, youth voices are also stepping up to shape this conversation. And on today's show, we're joined
00:32by two youth online safety advocates who aren't just raising the alarm but also offering real
00:38solutions. Thank you so much for joining me, Andesh and Effie, Fellows from Youths IRL. Thank you
00:44so much for having us. Let's draw the landscape here first before we
00:52start going to some of the solutions, right? How do you see AI impacting youth online experience
00:57differently than, let's say, the older generations or, you know, the time before? Are there issues
01:04you think are being overlooked right now in the broader AI debate? Let's start with you, Effie.
01:08Well, if you ask me, right, in my community,
you know, the older generation, when you come to, like, a family gathering or you just go for,
01:19you know, a coffee shop conversation, when you talk about AI, for them it's like, huh? AI? Is that
01:24a robot who's going to do my cleaning at home? Right. Things like that? Because let's be honest,
01:29like those AI robots, it's also considered AI. And so for me, the older generation in my community,
01:36they do not really have the thoughts or the knowledge of AI. But if we are talking about the millennials
01:43or my generation, my era, the youths, we do heavily use AI. I mean, my surroundings, they do use it,
01:50like it can go from grammar checking. ChatGPT, for example. Yes, ChatGPT, Gen AI, yes. And then they
01:57will use it for email drafting. And then it goes to like, even they plan their travel itinerary with that.
02:03So if you are looking at the impact on the youth experience, it's actually a very huge topic, like how online or AI
02:11is affecting them. So if I were to narrow it down, I would give you two. The first one will be the
02:18erosion of their critical thinking skills. And the second is the privacy concern. Right. Yes. Yes. So if,
02:26because most people will go on ChatGPT and basically write their whole information on there. Yes. And they get
02:31over-reliant on ChatGPT. There's a lot of Gen AI tools, but mostly they use ChatGPT. So like assignments
02:38or even creative arts, they just throw everything into ChatGPT and then they just put things like that
02:43and submit to the lecturers. And I heard there's also a way for you to sort of disguise it so that it's not
plagiarism and all that. Yes, yes. As long as you have the money to pay for the AI tool, there's a lot of
02:56things you can do. So yeah. Andesh, what's your thought on this? For me, when it comes to the
03:01older generation, you have to understand that they're actually adapting to it. Like nowadays,
03:06I've seen the older generation, like for example, even my grandma, she sees that AI is like a tool.
03:12It's like a blessing because with AI tools, you can now generate art or create music. So it's a very
03:18essential tool when it comes to the entertainment sector. Right. But for us, like me, for the younger
03:25generation, we see that AI is now actually something that we're growing along. I believe it was in 2022
03:31when ChatGPT was really popular. So as we're growing along with it, we are much more aware about what
03:37are the negative impacts of AI. I see. Like for example, because of people using AI too much, people
03:44from the creative sector are facing the impact. The ironic thing is that this AI, they're actually
03:49trained based off real art produced by real artists and even musicians. So one issue that's always
03:55overlooked is for the older generation, they might just want to create like a social media post. So
04:01rather than commissioning an artist and waiting like one or two days for them to create the art,
04:06they can just simply go to like ChatGPT or even now Google Veo and just prompt it to create an art and
04:13then just publish it in one second. There's something called the butterfly effect where one
04:18small action can create a huge difference. Maybe not instantly, but it'll take time. So when one
04:24person does this and he thinks like, okay, maybe it's just one post, they won't do much. The real
04:28issue comes in when a lot of other people also do the exact same thing. Right. I think the best example
04:34for this is like during, I think, right, when they started the Ghibli trend with AI. So they put their photo
04:42into ChatGPT and then ask it to recreate the art style. And I hear the producers aren't happy as
04:47well, like the studio themselves aren't happy as well. Of course, and they can't do anything
04:51because there actually isn't a law, you know, like copyright, on such things for AI tools. Yeah.
04:58They're like also wanting to keep up with the trend. So they kind of just overlook about
05:02what's the issue like in the longer run. Yeah. Yeah. You mentioned like copyright,
05:07I mean, deep fakes, for example, all that comes along with this advancement as a lot of people
05:13are using it, but at the same time, there's a lot of damage you can do as well. Right. Let's move on.
05:18So what are some of the misconceptions people still have about what online safety
05:25means, you know, coming from youth online safety advocates, especially in this AI-driven world right now?
05:32Well, if you ask me, I think the biggest misconception is that if you want to avoid harm, you just don't use it.
05:38But that's the thing, you know, nowadays, when we talk about youths, like, we grow up digitally,
05:45but the thing is not everyone is AI literate. You grow up, yes, you grow up in this
05:52era, but do you know what you're doing? So my take is the misconception for usually parents is, oh,
05:59I have to limit their screen times with parental controls on the app, this and that. Yes, it's helpful,
06:04but there is only so much, you know, it can be done, you know. So I believe that for this kind of
06:09misconception, the key to break this is actually communication. Right. Like create a safe space for
06:16your child, you know, so that when they are facing, for example, if they are sexually groomed,
06:21you have an open communication with your kid, they will actually come and tell you instead of, you know,
06:25they just hide it and go through the difficult situation themselves. Right. I mean, I have nieces
06:31and nephews and somehow they always find a way to get online, even with parental controls or even
06:38with limiting screen time, because they're smart these days, you know, they always find a way to get
06:43online. What about you, Andesh? What's the misconception when it comes to online safety that people just don't
06:49quite understand? From what I've observed and based on my perspective, I think the most general
06:56misconception is that when you mention online safety, people always correlate it with like scams,
07:01like sketchy links, like don't click on that link, don't reply to that message because you'll get scammed.
07:06But when we bring in the topic of AI, people are using AI to create deepfakes and even fake voice models,
07:12like pretending to be someone trusted in order to scam money from them. So people would say that
07:18online safety is about protecting your finance. But one misconception is that online safety is also
07:25about protecting your minds. Because nowadays, when you go to like a video platform, because of the AI,
07:31even younger kids, they're being exposed to videos that are actually masked or they have hidden
07:36inappropriate content and themes inside of it. So rather than just, you know, thinking online safety
07:42is about protecting your money or just don't go to that link or this link, it's also about protecting
07:46your mind and beware of what you're exposed to. And just like you mentioned, like for younger kids or
07:51even like your nephews and whatnot, they should know and even their parents should take precautions in
07:55order to avoid their kids and younger ones to not be exposed to such content. Because in the longer run,
08:01when you're exposed to all of these kind of contents, it can actually bring them like
08:05emotional pressure and even affect their cognitive thinking skills.
08:09Right. So I guess that brings me to my next question, actually, which you have addressed a
08:13little bit here, but maybe you can elaborate a little bit more. So what connections do you find
08:18when it comes to online safety and things like mental health, digital rights, or even inequality,
08:24for example, you know, these are some things that you've touched on. But paint me a picture of the
08:29connection or the impacts when it comes to, you know, some of the issues that we mentioned just now.
08:38Okay, well, if you ask me, right, being online itself, it's like a blessing and also a curse, you know,
08:45because if we talk about digital rights, like online and digital rights, online is nice because you can
08:52stay anonymous. That is like your freedom of being private, that's your privacy, right?
08:58But at the same time, the curse comes in when people start to misuse the anonymity they have.
09:03So like, you know, they can start being anonymous, they think it's fun to go around and just abuse
09:09this kind of privacy they have and start with cyber bullying, cyber grooming. And this, like, it
09:17eventually is a domino effect, it leads to the mental health issue, whether as a kid or even
09:24as an adult. If you are, you know, let's say for today, my photo is being, you know,
09:29put in an AI tool, deepfaked and made into a nude, and just spread out, inappropriate content,
09:36just spread it out. And it will affect me as well in the long run, like, it will give me a lot of trauma.
09:41I will feel very anxious, like, oh, what's going to happen if I can't even use social media anymore.
09:46So that's where I think the digital rights and the mental health connection comes from.
09:51And if you talk about the inequality in the online space, there is, of course, inequality,
09:56because like we mentioned just now, like certain people, they can pay for the Gen AI tools,
10:01I mean, they subscribe to it, and then they can ensure that their work doesn't get flagged for plagiarism.
10:07But what about those who are not digitally literate or who are coming from rural areas
10:13without much financial support? They can't even do that.
10:16Yeah. And Andesh, if you don't mind explaining a little bit, do you think youths are aware of some
10:25of the issues that were raised by Effie just now? Do you think, oh, are they too engrossed with being
10:32online and keeping up with the trend that, you know, all of these are being sidelined?
10:37I believe so, because like we've mentioned earlier, when a trend happens, people just kind of overlook
10:43everything else just to be a part of like a group, like you don't want to be left out.
10:47So when it comes to youths and like what you've mentioned on safety, they just kind of focus on
10:52what are the pros rather than on the cons. So like, for example, when it comes to mental health,
10:57like I've mentioned earlier, it's also about protecting your mind. So when you just consume all this
11:04content, you kind of forget about like how you're actually being manipulated and how you're being
11:09exposed to inappropriate content, which can affect your cognitive thinking skills. So when you're just
11:15exposed to all of this and you're being addicted to it, you just kind of forget that, oh, there's
11:19actually such an issue which I should, you know, raise my concern about and even voice out on.
11:24Right. Actually, to add on to this, I do think youths are aware of what's going on, but it comes back to
11:31mental health, like the peer pressure. Every one of my friends is doing it. I want, I FOMO, the fear of
11:37missing out. You know, they have these kind of thoughts. Oh, they are, for example, they are putting
11:41the photos into ChatGPT to generate, say, Sailor Moon art. I want to do it as well because I have FOMO. If
11:48someone goes into the big group and says, I don't want to do it because it's not good to put your photo into
11:53the Gen AI, it will keep your data. But everyone is doing it. They will look at her like, you sure?
11:58I think you are just, you know, making a fuss about it, paranoid. It's not real. So I think the peer
12:04pressure, I think youths are aware. Also, if I may add earlier, you asked about what are the connections
12:08between online safety and also digital rights, right? I think something that's really popular is that
12:14when everyone is using services on a huge platform, you tend to believe that everything on that platform
12:19can be trusted. Like, I mean, for example, if everyone is using it, you obviously wouldn't
12:24think something is wrong with it, right? Right. So in return, you kind of ignore and don't realize
12:29what digital rights you have with it, because you believe that if everyone
12:34is using it, then there's nothing to be worried about. Right. But in return, you should be aware
12:38of how exactly your data is being used and how exactly you're being manipulated with the content
12:43that's been recommended to you. I mean, let's be honest, who among us read terms and conditions?
12:49We really don't. And I think they make it purposely in small fonts, so we don't read them.
12:55So yeah, I completely, I completely agree with you with regards to, you know, how youths are handling
13:01it and the FOMO bit. I'm a bit older, so I probably don't have FOMO anymore. But yeah, I mean, I see it in
13:07young people when it comes to all these trends, they do want to jump on it as soon as a lot more
13:13people are on it because they don't want to miss out. Right. Okay. We want to talk more about
13:20these issues and the current safeguards as well as some of the solutions that you guys have presented.
13:24But we'll go for a quick break first.
13:43Hello and welcome back to It's About Youth. I'm your host, Fah Nashay. And today we are joined by
13:47Fellows of Youths IRL, Andesh as well as Effie. Thank you so much for being here and for explaining to us
13:53just now some of the issues related to AI that youths are facing right now. Now, we talked about
14:00the issues and we talked about, you know, some of the impacts it has. But let's talk about the current
14:06safeguards here, right? Because, you know, the government has announced a few safeguards like
14:10licensing, for example, or, you know, making sure social media platforms are accountable and, you know,
14:18making sure there's basic AI learning in schools, for example. So what do you make of the
14:23current safeguards in place right now to protect people, especially youth when it comes to threats
14:28that comes with AI? Maybe, Andesh, we can start with you. For me, I believe that the safeguards are
14:34there, but they're not enough. They're more so just, like, surface level. Right. Like, for example,
14:39platforms, you know, they have just like, okay, you're supposed to just have parental control. So just
14:45like a kid's mode for them to use. But that's like I said, it's just a surface level. Because when you
14:50dive deeper, there are still cracks here and there where inappropriate content still slips
14:55through the recommendation system, especially for users who are kids. So rather than just telling,
15:01you know, platforms should do this and that, the government should actually come up with like
15:05guidelines or even like laws where you're supposed to be aware and also you should like disclose how
15:11exactly your AI recommendation system is working. So that it can actually improve trust for the users
15:18towards the platforms. And also parents, especially, can be aware of how exactly the data is being used
15:24and how exactly such content is being recommended to kids. Right, right. I mean, I think there's a lot
15:30of information out there that is missing when it comes to AI, because a lot of people use it, but they
15:37like the convenience of it, but they don't really know what goes into that convenience, right? Effie,
15:43what's your thought on this? I mean, I agree with what Andesh mentioned, like it's pretty much on the
15:48surface level, because the thing is, AI, it's an evolving ecosystem. Every day there's something
15:55new, every day they are feeding on our data. So I would say right now, what Malaysia is doing, it's
16:01okay, it's not that bad. I would say they're doing quite well, because Malaysia is one of the
16:07frontrunners in Southeast Asia, you know, when it comes to AI, these kinds of things, they're actually
16:13doing it. But I believe it can't just be the government who's doing all this. It comes back
to, you know, everyone has to take part, like the tech developers, the youths themselves, the parents,
or even schools, I would say. So right now, it's just like that. But I believe as it goes further
16:31and further, you know, with the help of other parties coming in, it will be better, the safeguard,
16:36especially when the youth itself, the user itself have the knowledge of what's going on.
16:41Right. Okay, so you guys have presented some policy recommendations to the Ministry of
16:45Communications recently. Let's hear them. So could you walk us through your top two or three? And
16:52I understand there's quite a few. So what are they? And more importantly,
16:56why did you prioritise these recommendations? Okay, for me, my top two: the first is one of the proposed
17:03amendments to PDPA 2010, the Personal Data Protection Act. The other one is a content moderation tool
17:11that we suggested. So I will start with the one, the amendments that we suggested to PDPA, which is
17:17Notice and Choice. So for that, we actually, like we talked about just now, platforms,
17:26they will just give you the smallest guideline ever, but you would take a lot of time to read
17:32through the terms and conditions of how they're going to process your data, how they collect your data.
17:36What we are suggesting, you know, under the PDPA, is that this kind of platform
17:41should actually make it more user friendly, easier for people to read, or maybe some graphics,
17:48some diagram will be nice, because we are looking at it, we read it, and we understand it,
17:53and we'll be able to make a much more informed decision when we know what they're going to do
17:58with our data. And the second one is the content moderation tool, which is the opt-out option.
18:04Now, this opt-out option, it's a bit different, because it's not like the usual mute or hide button
18:12you see on Facebook, for example. This one, it's to actually empower
18:19victims or minors that, you know, have their pictures turned into CSAM, child sexual abuse material.
18:26So if it goes viral and they see it, what they can do is, with the opt-out option,
18:30once they click on it, they'll be able to, you know, like, take that thing down from the platform,
18:37until the platform moderators step in to, you know, remove everything. And when we were presenting this
18:44to the Ministry of Communication, right, one of the questions they ask, what's so different about
18:50this, when you can just request the platform to remove the content? The thing is, a lot of people,
18:56yes, a lot of people didn't know that making that request takes a long time. Yeah, it's not easy,
19:01it's complicated. So the opt-out option tool mostly makes it easy for people to do it. And faster.
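[Editor's note: the takedown flow described here — hide the content the moment the victim clicks, then let platform moderators confirm — can be sketched roughly as follows. This is a hypothetical illustration; all class and method names are invented, not an actual platform API.]

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    visible: bool = True
    pending_review: bool = False


class OptOutModeration:
    """Victim-triggered takedown: content is hidden the moment the
    opt-out button is clicked, then queued for human moderators."""

    def __init__(self):
        self.posts = {}
        self.review_queue = []

    def publish(self, post_id):
        self.posts[post_id] = Post(post_id)

    def opt_out(self, post_id):
        # One click hides the content immediately, instead of the
        # slow manual takedown request the speakers contrast it with.
        post = self.posts[post_id]
        post.visible = False
        post.pending_review = True
        self.review_queue.append(post_id)

    def moderator_resolve(self, post_id, confirmed_abuse):
        # Moderators step in afterwards: permanently remove confirmed
        # material, or restore the post if the report was mistaken.
        post = self.posts[post_id]
        post.pending_review = False
        if confirmed_abuse:
            del self.posts[post_id]
        else:
            post.visible = True
```

The key design point is the ordering: visibility is cut off before any human review happens, which is what makes this faster than a conventional takedown request.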
19:06Yes. Andesh, do you mind sharing? For me and my team, my top two would be, first of all,
19:13AI content moderation, where basically, the problem stems from, even though there's like AI in this
19:19algorithm or moderation, right, to take good care of like the content on social media platforms,
19:25still, inappropriate content, especially content that is directed at kids, still slips through,
19:31and it still ends up in your recommendation systems. So in order for us to come up with this policy,
19:35where it's regarding AI content moderation, we propose that for every social media platform that
19:41uses AI content moderation, there should be a human team behind it. So whenever there's like a report,
19:47like let's say, this should be flagged or that should be flagged, a human team should look over it.
19:52So when they look over it, they can now like decide whether it's actually true, like what's flagged by the
19:59AI, is it actually correct or is it wrong? And in return, they can improve the AI system, which in
20:04return can actually improve their reliability, and also the accuracy of the AI content moderation
20:10system. So the Ministry of Communications did ask how exactly can this be done. So we propose that
20:16there should be two human teams, one would be more generalized, and another one would be more
20:21specialized. Like let's say, if the AI flags like something that is known as child sexual abuse
20:27material, that should be the highest priority. So that should be more focused on, and the human team,
20:32the specialized human team, will decide whether that is correct or not. As for others such as like
20:37spam or even scams, a smaller or more general human team will look over it. And with this,
20:44the AI content moderation, which as we know is still imperfect, can be improved over time.
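[Editor's note: the two-team design described here — a specialized team for the highest-priority flags such as CSAM, a general team for spam and scams, with human verdicts feeding back into the AI's accuracy — can be sketched as below. Names and categories are hypothetical, chosen only to mirror the proposal.]

```python
# Categories routed to the dedicated, highest-priority review team.
SPECIALIZED_CATEGORIES = {"csam"}


def route_flag(category: str) -> str:
    """Send high-priority AI flags to the specialized human team,
    and the rest (spam, scams, ...) to the general team."""
    return "specialized" if category in SPECIALIZED_CATEGORIES else "general"


def record_verdict(stats: dict, category: str, ai_was_correct: bool) -> None:
    """Human verdicts accumulate per-category accuracy counts, which
    could later drive retraining of the AI moderation system."""
    entry = stats.setdefault(category, {"correct": 0, "total": 0})
    entry["total"] += 1
    if ai_was_correct:
        entry["correct"] += 1
```

Over time the `stats` dictionary shows where the AI flags reliably and where it does not, which is the feedback loop the proposal relies on to improve accuracy.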
20:49Next, we propose that there should be AI algorithm transparency and regulation. As we know, and as I've
20:57mentioned earlier, Malaysia government should come up with guidelines for social media platforms to
21:02disclose how exactly these algorithms work and what kind of data are being taken, especially from
21:08users that are kids. The main idea for this is to improve trust between the user and also the social
21:14media platform so that they can better understand what exactly is being taken in and how exactly are
21:21such content being recommended to these kids. So if the Malaysian government comes up with guidelines,
21:27or even laws, they can actually motivate the social media companies to disclose how exactly
21:33their AI works. This in return can improve the trust, like I've mentioned, between the user and also
21:39the social media platform. Right. These are brilliant suggestions, coming from youths yourselves.
21:47And you know, I think if you bring this to parliament, I'm pretty sure you'll get some uproar there.
21:52But okay, so talk to me about, not to say the reaction, but when it comes to youth voices being
21:59taken seriously, do you think they are being taken seriously enough to shape AI and online
22:06safety policies? Or what needs to change? Because you mentioned just now that, you know, a lot of
22:12youth representation when it comes to policy making is a bit missing. So what needs to change here?
22:16For me, right, I think the thing that they have to change is the mindset. It's like,
22:25you know, youths, we grew up with these kind of things, we grew up with technology. Although sometimes
22:31the way, you know, we recommend might not be as professional, as how, you know, the older people
22:35think it is. But I think they need to stop having the mindset where I'm old enough, I have a lot of
22:42experience, so I don't think you, a young kid, know what you're saying. So that's the thing
22:48that I think they have to change. It's just like what we've mentioned earlier, that if this
22:55policy is actually regarding the youth themselves, then the policy itself should be developed and even
23:01made with the youth themselves from an earlier stage, rather than just consulting them in the end.
23:06Right, I think we have time for one last question. So let's talk about, you know, imagining a world
23:17that's safe, or what does a safe and empowering digital future look like for you? And if you have a message
23:24for young viewers as well, on advocating for online safety and to shape the digital way forward for
23:32Malaysia, let's do that to the camera. So let's tackle the first question first.
23:37To answer your first question, how I visualise, you know, a safe and empowered online environment is
23:44trust. Okay, when we talk about a lot of things, like a lot of parties can do different things,
23:50like the platform should, you know, do something about the content moderation, the government should
23:56come up with the rules, this and that. But at the end of the day, it's the trust, you know,
23:59we have among each other. When we are online, we need the same trust as when
24:05you're offline. Like, just like a normal person, you go out from your house, you trust the country,
24:12like you trust, like, you will be safe. It's the same concept. That's what I'm thinking.
24:16Right. Andesh, what does a safe and empowering digital future look like to you?
24:22Well, if it goes for trust, I'll go for comfort. Because like I've said, these platforms themselves
24:29are the ones providing all these services. So, if they themselves know that they have quite a
24:34huge responsibility to bear, and they can ensure that their platforms are now safe to use by even
24:39young users, then now we can, you know, be sure and assure ourselves that all these AI-driven
24:45algorithms or even AI content moderation, they're all done with empathy. So, these platforms, they also
24:51prioritize user safety over clicks. Right. I like that, trust and comfort. Now, very brief, short
24:58message to the viewers out there about advocating for online safety, online safety, especially from
25:04youths itself. Which camera can we look at? Maybe this camera? All right. For me, my message to the
25:11younger viewers who are watching this right now is: your voice is bigger than you think it is. You know,
25:16don't let FOMO, or people not doing it, stop you from doing it. Do it. You know,
25:22your voice matters. You have to think: do you want to be in a world where they shape
25:28the platform for you, or where you shape the platform for yourself? And you can do that by joining Youths IRL,
25:36because we have another application batch opening up soon. Yes. Pass to you, Andesh.
25:43Right. So, Youths IRL, they'll be opening their fellowships starting next month. So, be sure to be
25:49on the lookout for the applications. And make sure to follow their socials, which is at Youths IRL,
25:54on TikTok and Instagram. Maybe we can share what Youths IRL stands for?
26:00Youth in real life? No, it's like, what is your mission or vision? Oh, yeah. The purpose of the
26:07organization. Okay. Youths IRL is actually a fellowship, you know, for youths aged
26:1318 to 25, you know, for them to shape policies that can make the online environment safer. And also empowering
26:22them, give them a platform, give them a place for them, for youths to voice out their opinion and
26:28also, you know, their thoughts on how the future of AI could be.
26:32Right. It's a very good platform if you believe that you could be a change maker and also bring
26:36more awareness that no matter who you are or how old you are, every voice there is actually very
26:41valuable. And if I may add, this fellowship is actually organized by Ratio Cause, a social impact
26:48agency, and CelcomDigi, with the support of the Content Forum. Right. Okay. I don't think there's that much
26:56else for me to say. Applications are open. You can get more information. I'm pretty sure if you
27:02Google and go to the social media platforms, you'll get more information on that. Thank you so much,
27:07Effie and Andesh, for your time and for enlightening us on, you know, the importance of online safety in
27:13this AI-driven world. That's it from me. Thank you so much for watching. Bye.
27:26Bye.
