Joint Committee on Children, Equality, Disability, Integration and Youth debate -
Tuesday, 13 Feb 2024

Protection of Children in the Use of Artificial Intelligence: Discussion

The agenda item for consideration this afternoon is engagement with stakeholders on the protection of children in the use of artificial intelligence. We are joined this afternoon by the following stakeholders: Professor Barry O'Sullivan of University College Cork; Ms Caoilfhionn Gallagher KC, special rapporteur on child protection; Ms Alex Cooney, chief executive officer, and Ms Clare Daly, board director, from CyberSafeKids; Dr. Johnny Ryan, director and senior fellow, and Dr. Kris Shrishak, senior fellow, at Enforce, a unit of the Irish Council for Civil Liberties. Ms Cooney is joining us remotely. I welcome our witnesses and thank them for their attendance. I apologise for the late start as a vote in the Dáil delayed us slightly.

I will go through our normal housekeeping matters before we start. I advise members that the chat function on MS Teams should only be used to make the team on-site aware of any technical issues or urgent matters that may arise during the meeting and should not be used to make general comments or statements. I remind members of the constitutional requirement that they must be physically present within the confines of the Leinster House complex in order to participate in public meetings. I will not permit a member to participate where he or she is not adhering to that constitutional requirement. I ask any member who joins us through Teams to first confirm he or she is on the grounds of the Leinster House campus.

In advance of our guests giving their opening statements, I advise you all of the following in relation to parliamentary privilege. I must point out to witnesses appearing before the committee virtually that there is uncertainty as to whether parliamentary privilege will apply to evidence given from a location outside of the parliamentary precincts of Leinster House. Therefore, if you are directed by me to cease giving evidence on a particular matter, it is imperative that you comply with any such direction. The evidence given by witnesses and members from within the parliamentary precincts is protected pursuant to the Constitution and statute by absolute privilege. Witnesses and members are reminded of the long-standing parliamentary practice that they should not criticise or make charges against any person or entity by name or in such a way as to make him, her or it identifiable, or otherwise engage in speech that might be regarded as damaging to the good name of the person or entity. Therefore, if your statements are potentially defamatory in relation to an identifiable person or entity, you will be directed to discontinue your remarks and it is imperative that you comply with any such direction.

As that completes our housekeeping matters, I will proceed with the meeting. The order for our witnesses will be Professor O'Sullivan first, followed by Ms Gallagher, Ms Daly and Dr. Ryan. When the opening statements are finished, we will open it up to members. I call Professor O'Sullivan.

Professor Barry O'Sullivan

I am honoured to appear as a witness today. I am a full professor at the school of computer science in UCC and have worked in the field of artificial intelligence for more than 25 years. I am founding director of the Insight SFI research centre for data analytics at UCC and the Science Foundation Ireland centre for research training in AI. I served as vice chair of the European Commission high-level expert group on AI from 2018 to 2020, which formulated the EU's ethical approach to artificial intelligence. I currently represent the European Union at the global partnership on artificial intelligence. I am a fellow and a past president of the European Association for Artificial Intelligence and a fellow of the Association for the Advancement of Artificial Intelligence, as well as a member of the Royal Irish Academy. I hold a number of ministerial appointments, including chair of the national research ethics committee for medical devices and membership of the Government's recently constituted AI advisory council. In 2016, I was recognised as SFI researcher of the year and I also received its best international engagement award in 2021. In 2023, I was the first Irish person to receive the European AI association's distinguished service award. In addition to my academic work, I contribute to several global track two diplomacy efforts and related activities at the interface of the military, defence, intelligence and the geopolitics of AI. I am, for example, senior technology advisor to INHR in Geneva, New York and Washington DC. I serve on the AI governance forum at the centre for new American security in Washington DC, and I am one of three polymath fellows at the Geneva centre for security policy.

The term "artificial intelligence" was coined in 1955 by John McCarthy - the son of a Kerry immigrant , Marvin Minsky, and others. They proposed that in the context of the Dartmouth summer research project on AI, which took place in 1956. The field of AI is challenging to define and there is no agreed definition. I normally define it as a system that performs tasks normally associated with requiring human intelligence. These include, for example, the ability to learn, reason, plan, understand language and vision. Much recent interest in AI has been as a result of the success of a subfield of AI called machine learning, and specifically the success of deep learning, a subfield of machine learning. The general public has become aware of specific recent success stories in AI through systems such as ChatGPT, one of many large language models, LLMs. LLMs are one of many forms of generative AI, which are systems that can generate text, images, audio, video, and so on, in response to prompts or requests. Despite the hype, while the field of AI has made progress over the past decade or so, major obstacles still exist to building systems that really compete with the capabilities of human beings.

Over the past decade there has been considerable focus on the governance and oversight of AI systems. As part of our work at the European Commission's high-level expert group on AI, for example, we developed the EU's approach to trustworthy AI, built on a set of strong ethical principles. We also proposed a risk-based approach to the regulation of AI. Over the past few weeks, the European Union has finalised the AI Act, which will govern all AI systems deployed in the Union. The Act builds strongly upon our work at the high-level expert group on AI, HLEG-AI. There are specific considerations regarding the protection of children in the AI Act, including some specific use cases that will be prohibited in the EU. I had the pleasure of participating in the national youth assembly on AI in October 2022, which was hosted by the Departments of Children, Equality, Disability, Integration and Youth, and Enterprise, Trade and Employment in partnership with the National Participation Office. The assembly brought together a diverse group of 41 young people from across the country aged between 12 and 24 years. At the national youth assembly on AI, delegates considered the issues affecting young people and provided a set of recommendations to the Minister of State, Deputy Calleary, and the Department of Enterprise, Trade and Employment on Government policy on AI. A key objective of the assembly was to discuss the role, impact and understanding of AI in the lives of children and young people, and their opinions, thoughts and possible fears about the technology and its potential. The recommendations fall along four dimensions: AI and society, governance and trust, AI serving the public, and AI education, skills and talent. They have produced a nice poster.

Children encounter AI systems every day when they are online, using smart devices or gaming, but there are many other modalities. The content they are presented with on their social media accounts, for example, is recommended to them using AI technology known as recommender systems. The movies suggested to them on Netflix and other platforms are curated using AI methods. Smartphones are packed with AI systems such as image editing, image filtering, video production, facial recognition and voice assistant technology. The technology itself is not problematic per se, but it is powerful and can, therefore, be abused in ways that are extremely impactful. Combined with the reach of social media, the effects can be devastating. Children can also encounter AI-generated content. This can range from harmless memes to more sinister uses of deep-fake technology. A deep fake is essentially a piece of content, often generated using AI methods, that does not correspond to something real and may be generated for nefarious purposes. Nudify apps, for example, which generate fake images of people in the nude that are often impossible to recognise as fake, are becoming readily available. Technology to create pornographic videos from input images of a third party is also available and is among the most concerning and harmful uses of AI technology. It is also possible to encounter fake content designed to create a false impression, such as the belief that an online profile belongs to a person known to the user, or something else the user might be comfortable interacting with.

UNICEF issued its policy guidance on AI for children in 2021, building on the UN Convention on the Rights of the Child. This guidance proposed nine requirements for child-centred AI: support children's development and well-being; ensure inclusion of and for all children; prioritise fairness and non-discrimination for children; protect children's data and privacy; ensure safety for children; provide transparency, explainability and accountability for children; empower governments and businesses with knowledge of AI and children's rights; prepare children for present and future developments in AI; and create an enabling environment.

Educating children, parents, guardians and wider society on the responsible use of AI technology and how AI might be encountered is key. I chaired a committee for the expert group on future skills needs focused on AI skills, which reported in May 2022. Our report assesses the skills that are required by a variety of personas in respect of AI and how skills development initiatives could be delivered. At UCC we host a free online course called Elements of AI, which teaches the basics of AI to anyone interested in the topic. It is our aim to educate at least 1% of the Irish population on the basics of AI. Both English- and Irish-language versions of the course are available. There are, of course, many educational benefits to AI. Personalised learning experiences can help students achieve higher grades and competence. AI technology can be used to search for additional relevant material and to search through vast sources of information. We often do not regard the Google search engine as an AI system, but that is exactly what it is, so people have been using AI for a very long time. However, AI technology also has the potential to undermine the integrity of assessment processes. It is, for example, becoming trivial to use AI to produce content that can be submitted as part of an assessment at school or university. Dealing with these issues can be challenging.

Finally, while not an instance of children using AI, it is important to note that AI is also widely used to protect children. There are, for example, many systems that filter out harmful content before it reaches children, including several AI content moderation platforms. I point to one of those in my notes to this statement. AI systems are also used in the detection of child sexual abuse material, CSAM, online. I previously chaired an advisory board for a project at the invitation of Europol. The Global Response Against Child Exploitation, GRACE, project was aimed at equipping European law enforcement agencies with advanced analytical and investigative capabilities to respond to the spread of online child sexual exploitation material. The project was successful.

I look forward to answering questions.

Ms Caoilfhionn Gallagher

I thank the committee for extending the invitation to appear before it today in my capacity as special rapporteur on child protection. I thank members, first and foremost, for considering this important topic. To follow Professor O'Sullivan, I will quote from the UNICEF document he referred to, which states:

Today's children are the first generation that will never remember a time before smartphones. They are the first generation whose health care and education are increasingly mediated by AI-powered applications and devices, and some will be the first to regularly ride in self-driving cars. They are also the generation for which AI-related risks, such as an increasing digital divide, job automation and privacy infringements, must be addressed before becoming even more entrenched in the future.

That is why UNICEF says it is essential that child-specific considerations are front and centre in AI development. As special rapporteur, and bearing in mind the special expertise of my fellow witnesses, a key focus of my role is ensuring that children's rights principles are embedded in legislative and policy frameworks to comply with the UNCRC and with Article 42A(1) of the Constitution with respect to child protection.

On many other issues which fall within my mandate, there is an abundance of international material and very clear guidance from the UN Committee on the Rights of the Child. On this topic, however, there has long been a clear gap in the international policy debate at the intersection of children's rights and artificial intelligence, AI, resulting in children's rights often being overlooked or added as a belated afterthought in guidance and policy documents. All too often, children are simply left out of the policy conversation entirely. Although the rights of children are recognised by the UN Secretary General as needing "acute attention in the digital age", I agree with and adopt UNICEF's criticism that this is "not being reflected in the global policy and implementation efforts to make AI systems serve society better".

I will focus on three topics in my opening statement, bearing in mind the detailed and specific expertise my colleagues have. First, I will focus upon gaps in the policy discourse concerning AI and children's rights. Second, I will highlight a number of key international materials that may assist the committee in considering the issues before it. I note that there is an overlap between my remarks and Professor O'Sullivan's on that point. Finally, I will briefly note a number of specific issues arising in the Irish context that require careful consideration. I welcome the views of the subsequent witnesses on those issues.

As I indicated at the outset, in the international policy debate there has long been a clear gap at the intersection of children's rights and AI. The UN Secretary-General's remarks to the Security Council on AI last July, for example, contained no mention of the rights of the child or the threats posed thereto by the proliferation of artificial intelligence technologies. This is quite a stark example of children being left out of the AI conversation at the highest level internationally. In 2021, we saw the publication of the UN Committee on the Rights of the Child's general comment No. 25, which addresses children's rights in respect of the digital environment but fails to comprehensively address the unique threats posed to children by AI or the unique opportunities that arise for children in respect of AI. I recognise that the UN special rapporteur on the right to privacy specifically addressed AI and children in his 2021 report, but that, of course, rightly reflects the limits of his mandate, with the focus being upon privacy and data protection issues.

The gap I have referred to is also apparent in the most recent draft of the Council of Europe's Framework Convention on AI from December 2023, which includes only a generic, catch-all reference to "the rights of persons with disabilities and of children". The Council of Europe has taken steps towards rectifying this gap by adding a supplementary chapter on AI to its 2020 Handbook for Policy Makers on the Rights of the Child in the Digital Environment. The handbook did not have that chapter at first; it was added later, so this is itself an example of the afterthought approach to children's rights on this issue.

Following what is a clear international pattern, in the Government of Ireland's 2021 AI strategy, the section dedicated to "risks and concerns" is brief and there is no dedicated focus upon child protection issues or children's rights. The overall focus of the document is upon building public trust and engagement with AI. From reviewing AI policies in over 60 countries worldwide, UNICEF says this is a common theme: the focus is upon the economy and the opportunities presented by AI, and children's rights are largely sidelined.

I recognise, of course, that this concern at domestic level has to an extent been overtaken by the extensive consultation of young people at the National Youth Assembly on Artificial Intelligence in October 2022, which I welcome and support. The involvement of young people in AI policy, as literate yet vulnerable users of digital technologies, is crucial. I welcome further consultation, with established pathways for the integration of young people's perspectives on these issues. I also acknowledge and welcome the work of Coimisiún na Meán, which I addressed when I appeared before the committee previously. I also recognise and welcome the EU work, and the superb work of Professor O'Sullivan and colleagues in that regard, including the very recent EU developments.

While many relevant international guidance and policy documents concerning AI fail to deal with children's rights and AI's impact on them, those that do address children's rights often follow a restricted approach, considering only the potential threats that AI may pose to children's privacy, children's exposure to harmful content and the risk of online exploitation. These are, of course, vitally important issues and need to be explored but they are far from the only issues arising. In order to ensure that the best interests of the child are at the heart of the development of policy, legislation and practice concerning AI, it is vital that the breadth of both the risks that AI poses and the opportunities that AI presents are considered through a children's rights lens. I recognise that the gaps in the international discourse on this topic pose unique challenges for the Government, the Legislature, policymakers and this committee in ensuring that both risks and opportunities are considered in a child-centred way because there is no ready international yardstick to which they can point.

Following on from what Professor O'Sullivan said, I commend to the committee three international policy documents because they buck the trend I identified above. The first, the Policy Guidance on AI for Children from UNICEF and the Ministry for Foreign Affairs of Finland, from November 2021, is a superb document that is very helpful. The second is the JRC Science for Policy Report from the European Commission, Artificial Intelligence and the Rights of the Child, from 2022. It is also important to have regard to the Council of Europe Draft Framework Convention on AI, Human Rights, Democracy and the Rule of Law from 2023. I flag the importance of the UNICEF policy guidance in particular because it takes as its basis the UNCRC, which sets out the rights that must be realised for every child to develop to his or her full potential. Importantly, this guidance recognises that AI systems can uphold or undermine children's rights depending on how they are used, and it addresses how to minimise the risks and leverage the opportunities in ways that recognise the unique position of children and, importantly, the different contexts for certain groups of children, particularly those from marginalised groups and communities. There are specific sections in it concerning girls, LGBTQI+ teenagers and children from ethnic minorities. The guidance uses three child-specific lenses when considering how to develop child-centred AI: protection, provision and participation. As Professor O'Sullivan said, it sets out nine requirements for child-centred AI, which he has addressed. It is a helpful and important document and I hope it will be useful to the committee. The documents I have referenced recognise the importance of protective measures, ensuring that children are safe, but also the importance of ensuring non-discriminatory inclusion for children in technology that already profoundly affects their lives and will have unknown and far-reaching ramifications for their futures, and of respect for children's agency.

I am conscious that Ms Daly and Dr. Ryan are going to address some of these issues in more detail. Finally, I note three issues in particular in the Irish context that merit careful consideration as this topic is being explored by this committee. First, as my opening statement makes clear, it is important that the full range of risks posed by AI is considered within the framework set out by UNICEF, in particular in the document I referenced. This must include the risks of systemic and automated discrimination and exclusion through bias, and the limitation of children's opportunities and development by AI-based predictive analytics and profiling. I note in particular UNICEF's warning that profiling and micro-targeting based upon past data sets "can reinforce, if not amplify, historical patterns of systemic bias and discrimination". UNICEF gives the example of AI systems that may "reinforce stereotypes for children and limit the full set of possibilities which should be made available to every child, including for girls and LGBT children. This can result in, or reinforce, negative self-perceptions, which can lead to self-harm or missed opportunities." Any of us who are parents of teenagers may well have seen examples of that, where a child who is interested in gaming or military history may then receive material that suggests that he or she is going to be interested in white supremacy or racism. This is something I have seen with my 13-year-old son. It is a very important topic and one to bear in mind.

Second, a specific issue of serious concern relates to recommender algorithms. I am conscious that other witnesses are going to deal with this in more detail. This includes social media algorithmic recommender systems that may "push" harmful content to children. In my opening statement, I referred to the 2022 study by the Center for Countering Digital Hate, CCDH, on TikTok's recommendation algorithm. That study concluded that it pushes self-harm and eating disorder content to teenagers within minutes of them expressing interest in the topics. The study is worth looking at. It found that TikTok promoted content that included dangerously restrictive diets, pro-self-harm content and content romanticising suicide to users showing a preference for the material, even if they were registered as under 18. The study was based on accounts registered as age 13 in the US, UK, Canada and Australia. The researchers set up both "standard" and "vulnerable" accounts; the "vulnerable" accounts included the term "loseweight" in their usernames. Over an initial 30-minute period after the accounts launched, the accounts "paused briefly" on videos about body image, eating disorders and mental health and liked them.

On the standard accounts, content about suicide followed within three minutes and eating disorder material was shown within eight minutes. That research also found that accounts registered for 13-year-olds were proactively shown videos advertising weight loss drinks and tummy tuck surgery. For the vulnerable accounts, the researchers found that the content was even more extreme, including detailed methods of self-harm and young people discussing plans to kill themselves. CCDH said that a mental-health- or body-image-related video was shown every 27 seconds to vulnerable user accounts. This requires urgent attention and I welcome the attention that other witnesses will bring to it. I also welcome Coimisiún na Meán's detailed focus on the issue of the use of recommender algorithms.

Finally, I will emphasise that AI systems also have great potential to safeguard children. Dedicated services and products using AI technologies plainly have the potential to protect children. I have seen some of that in my work involving children who were abused cross-border in south-east Asia and Uganda. UNICEF has highlighted, for example, the ability to identify abducted children and detect known child abuse material and to detect and block livestreamed abuse and potentially identify the perpetrators, users and affected children. When considering the issue of AI and child protection, it is vitally important to consider AI’s potential to proactively vindicate children’s rights and not only defensive concerns regarding how AI can threaten children’s rights. That reflects the provisions of the UNCRC and Article 42A(1) of the Constitution, which states that the State shall “as far as practicable, by its laws protect and vindicate” children’s rights. It is important to look at the defensive issues and how to protect children from risks but it is also important to look proactively at AI's potential to vindicate children's rights and to give them greater protection.

Ms Clare Daly

I am on the board of directors of CyberSafeKids. My colleague Ms Alex Cooney, who is our CEO, joins us online. We thank the Cathaoirleach and members of the committee for inviting us here today. We welcome the opportunity to talk about this very important topic.

Established in 2015, CyberSafeKids is the only Irish charity dedicated to enhancing online safety for children nationwide. Our mission is to ensure that children are safer online and that the online world is made safer for children. At our core is an education and research programme for primary and post-primary schools, providing expert guidance to pupils aged eight to 16 and to teachers and parents. We also publish trends and usage data annually, which helps to paint a picture of what children are actually doing online, the levels of access they have and the areas of vulnerability. Our education programme has directly reached 65,000 children and 15,000 parents and educators across Ireland.

I will begin by acknowledging the highly important role the Internet plays in all of our lives and recognising that it is a very beneficial resource for children for learning, creating, socialising and entertainment purposes. In 2021, the UN Committee on the Rights of the Child formally adopted general comment No. 25. This recognised children's rights in the digital environment to be the same as their rights offline, including the right to participate, the right to access accurate information, the right not to be exploited and the right to be protected from harm. While the Internet brings us opportunities that we could not have imagined 20 years ago, it also brings risks, particularly for children. The Internet was not designed with children in mind. It creates environments that many adults struggle to understand and manage effectively, let alone children and young people.

While much of the current discussion around AI focuses on the latest developments in generative AI, the technology has been around for years and has been actively affecting children in their use of technology over the past ten years. Machine learning, a subfield of AI, drives the algorithmic recommender systems that dominate feeds across social media. The likes of Facebook, Instagram, Snapchat, X, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content to their users. The main aim is to keep eyes on screens. While social media and gaming companies might argue that it is all interest-driven and designed to ensure that we are getting the content and targeted ads that are best for us, it can be deeply problematic for children when inappropriate content related to self-harm, suicide, pro-anorexia material and sexual content is recommended.

Frances Haugen, the ex-Facebook employee turned whistleblower, said Instagram's algorithms can lead to addiction in its young users by creating "little dopamine loops". Children get caught in the crosshairs of the algorithm and are sent down rabbit holes, engaging with sometimes frightening or enraging content because, as Haugen further stated, "it's easier to inspire people to anger than it is to other emotions". One mother we recently worked with in regard to her 13-year-old daughter said:

As a mother I have huge concerns for our teenage children. Last summer it was brought to my attention that my 13-year-old daughter had been bullied during First Year and, after she expressed her sadness in a video posted on TikTok, the app started flooding her daily feed with images of other sad teenage girls referencing suicide, eating disorders and self-harm. The damage and sadness this has caused my family has been immense as we discovered that my daughter saw self-harm as a release from the pain she was suffering from the bullying, through the information this app is openly allowing. Anti-bullying efforts by schools are of no use unless these social media platforms are held responsible for openly sharing all this hugely damaging content with children.

Cybercriminals seeking to sexually extort online users, including children, are using advanced social engineering tactics to coerce their victims into sharing compromising content. A recent report from the Network Contagion Research Institute noted an exponential increase in this type of criminal activity over the past 18 months and further found that generative AI apps were being used to target minors for exploitation. We know that this is impacting children in this country because we have had calls from families whose children have been affected. One such case involved a teenage boy who thought he was talking to a girl of his own age in a different county. He was persuaded to share intimate images and was immediately told in the aftermath that if he did not pay several thousand euro, the images would be shared in a private Instagram group of his peers and younger siblings. The threat is very real and terrifying and has led, in some cases, to truly tragic consequences.

To make matters worse, there are new apps facilitating such efforts, including ones that remove clothing from photographs, which bypasses the need to coerce people into compromising positions. The photos can be taken from social media accounts and then sent to the individual to begin the process of extorting him or her. Such sophisticated technology is greatly increasing the proliferation and distribution of what the UK's Internet Watch Foundation describes as "AI-generated child sexual abuse material". There is a real fear, highlighted in the Internet Watch Foundation's report, that this technology will evolve to be able to create video content too.

We know from recent headlines regarding celebrity deepfakes that the problem is becoming more widespread. Deepfake software can take a person's photos and face-swap them onto pornographic videos, making it appear as if the subject is partaking in sexual acts. Research in this area points out that while much of the abuse is image-based, such as the exploitation of broadly shared open-source content to generate CSAM, the technology can also be used in grooming and sexual extortion, which poses significant risks to children.

The rise in AI technology also poses risks as regards peer-on-peer abuse, which, according to figures from CARI, has been snowballing into a very significant area of risk over the last number of years. The courts in Ireland have reported underage access to online pornography as being a major contributing factor in serious crimes. In September 2023, 28 Spanish girls between the ages of 11 and 17 were subjected to peer abuse when their social media images were altered to depict them as nude and the resulting images were then circulated on social media. The reports suggest these images were created and circulated by 11 boys from their own school.

Over the past year, we have seen new AI features being rolled out into the hands of children with little thought as to the consequences. Snapchat added its "My AI" feature onto every subscriber's account in March 2023. It should be borne in mind that 37% of eight- to 12-year-olds in Ireland have Snapchat accounts. It was touted as being like a friend of whom you could ask anything. If you read the small print, you could see that it was still being tested and might return wrong or misleading information. Further testing by external experts found that, very quickly into a conversation, it forgot it was talking to a child and started returning inappropriate information. Nine months later, in January 2024, Snapchat added a parental control to restrict the use of My AI.

Children are being treated like guinea pigs in the digital world. This was put succinctly by the Harvard professor and author of The Age of Surveillance Capitalism, Shoshana Zuboff, who wrote:

Each day we send our children into this cowardly new world of surveillance economics, like innocent canaries into Big Tech’s coal mines. Citizens and lawmakers have stood silent, as hidden systems of tracking, monitoring, and manipulation ravage the private lives of unsuspecting kids and their families, challenging vital democratic principles for the sake of profits and power. This is not the promising digital century that we signed up for.

Why do the companies behind these services not do more to protect children using them? One simple answer is money. They would need to invest a lot more money to bring about real change and in the meantime, they are making billions of dollars of profit off the back of advertising to children. A recent Harvard study found that collectively in 2022, Meta, X, Snapchat and TikTok made $11 billion from advertising to children in the US, $2 billion of which was to children under the age of 12.

We acknowledge that there are no easy solutions, and this is further complicated by the fact that the EU and the US have very different regulatory approaches, with the former being more bureaucratic and heavily protective of the individual's right to privacy. That said, we do have some suggestions. First, we should try to harness the power of AI to better protect children in online spaces, for example by relying on age assurance to determine the age of child users. We know that technology companies are able to market to users based on age. Further investment in accuracy could see this technology being used to better safeguard children. It could be used, for example, to prevent underage users from accessing the platforms. We know from our trends and usage data that 84% of eight- to 12-year-olds in Ireland have their own social media profile. AI can also be used to better protect child users on platforms from exposure to harmful content, targeted advertising and data profiling.

Second, we must ask how well existing legislation mitigates the risks. Does existing law include artificially created images? The emergence of deepfake technology means there is no longer a requirement for a perpetrator to possess real intimate images of a victim. Non-consensual pornographic deepfakes are alarmingly easy to access and create. A report by Sensity AI found that 96% of deepfakes were non-consensual sexual deepfakes and that, of those, 99% were of women. The Harassment, Harmful Communications and Related Offences Act 2020 was enacted, in part, to criminalise non-consensual intimate image abuse. Section 3 prohibits the recording, distribution or publication of an intimate image of another person without that person's consent. The definition of "intimate image" in relation to a person covers any visual representation, including any accompanying sound or document, made by any means, including any photographic, film, video or digital representation. Section 3 does not appear to clearly extend to images generated without consent. Notably, the new EU directive on child sexual abuse will revise the 2011 directive to update the definition of the crime to include child sexual abuse material in deepfakes or AI-generated material. Ireland should be leading the charge in this arena given that we are regarded as one of Europe's leading tech hubs. Our legislation needs to match this status.

In terms of policy, regulation and enforcement, safety by design is a key criterion in devising technologies that are being accessed by children. We know that technology companies are compliance-oriented but, generally speaking, as commercial entities they will not go beyond basic compliance where legislation does not demand that they do so. How can these powerful concepts of safety by design be included in regulation? Coimisiún na Meán is currently drafting binding safety codes and, in our response to its public consultation, CyberSafeKids recommended that definitions be extended to include AI-generated images. We also suggest that the regulations in this area be brought into line with any such definitions. Algorithm-based recommender systems should not be allowed to serve content to child users. Regulation is only beneficial when it is properly enforced and there needs to be a greater focus on how to do so.

In terms of finding fresh perspectives, we need new thinking and the confidence to believe we can make real progress on this very tough issue. As with the 2015 Paris climate agreement, which has been made to work, there needs to be skin in the game and a financial incentive. We suggest that the Government set up and fund a research and development laboratory with representatives from academia, industry and the not-for-profit sector to look at how to better protect users in meaningful ways. Ireland can and should be a trailblazer in online child protection given our data protection status, our Europe, Middle East and Africa headquarters status and the strides taken to protect children through legislation over the past 20 years. This could include economic incentives that would change the behaviours of tech companies. For example, if the companies collaborated with academics and other stakeholders, they could get some kind of financial reward or grant based on outcomes, not just on participation.

Policymakers are faced with an enormous and urgent challenge that is growing at pace. There are no quick fixes, but a meaningful solution will involve legislation, regulation, education and innovative approaches. Nothing that we have in place currently is good enough or strong enough to take on this challenge properly. We remain hopeful that this will change, but it needs to happen quickly. None of the digital rights referenced in general comment No. 25 is being upheld for children in the current online environment. They are exposed to a wealth of misinformation and disinformation and are being bombarded with harmful content. This is having a genuine impact on their mental health, contrary to what Mr. Zuckerberg said at the congressional hearing just two weeks ago. At that hearing, he was shamed into making an apology to parents who can testify to the tragic consequences for their children of insufficient regulation and oversight. AI offers children opportunities but, if it is not properly regulated from the outset, we will see similar scenarios play out, where children unwittingly test dangerous, unregulated products for the profit of corporations.

I thank members for their time and look forward to answering any questions they may have.

Thanks very much. The final speaker is Dr. Ryan.

Dr. Johnny Ryan

Thank you, Cathaoirleach. It is a privilege to be here with colleagues today. The standard of information provided thus far has been very high.

As the committee has already heard, AI is not a tomorrow or future technology. TikTok, YouTube, Snapchat, Instagram and others use it to shape the world that our children see through their platforms every day. The AI of these corporations builds a tailored diet of content and pushes it into each child's feed. There is an action here. They are pushing it into the feeds, and that system is known, loosely, as a recommender system. A recommender system builds a feed based on each person's estimated likelihood of engaging with material. Often that favours salacious or outrageous content, or things that play upon the individual's sensitivities and vulnerabilities. That is very bad news for society but it is excellent news for the tech companies because it keeps the person on the platform for longer, which massively increases advertising opportunities. This is how the companies make money today.
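
To make that mechanism concrete, the following is a minimal illustrative sketch, in Python, of the kind of engagement-driven ranking just described. Every name, field and number in it is a hypothetical assumption for illustration, not any platform's actual code. The structural point is that the objective contains only predicted engagement; nothing in it accounts for the user's age, wellbeing or the harmfulness of the content.

    # Illustrative sketch only: a toy engagement-ranked feed.
    # All names and numbers are hypothetical, not any platform's real system.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        item_id: str
        predicted_engagement: float  # model's estimate that this user will engage

    def build_feed(candidates: list[Candidate], top_k: int = 10) -> list[Candidate]:
        # Rank purely by predicted engagement: nothing here asks whether the
        # content is appropriate for the user's age or state of mind.
        return sorted(candidates, key=lambda c: c.predicted_engagement, reverse=True)[:top_k]

    feed = build_feed([
        Candidate("calm-explainer", 0.10),
        Candidate("outrage-clip", 0.87),  # provocative items often score highest
    ])
    print([c.item_id for c in feed])  # ['outrage-clip', 'calm-explainer']

A default-off rule of the kind discussed below would, in effect, stop such profile-based ranking from being applied to a child's feed unless it were deliberately switched on.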

Let me add to the examples that the committee has heard, all of which were excellent. I have a few more equally horrifying tales. I will begin with the fact that the United Nations has said that Meta played a determining role in Myanmar's 2017 genocide. This month, lawyers for Rohingya refugees put the blame firmly on Facebook’s recommender system which, they stated, “magnified hate speech through its algorithm”. My next example is the fact that nearly three quarters of problematic YouTube content seen by more than 37,000 test volunteers was shown to them because it was pushed at them by YouTube’s own recommender system. This is not passive; it is a push. A recent investigation by the Anti-Defamation League showed that Facebook, Instagram, and X late last year were pushing hate and conspiracy content into the feeds of 14-year-old test users. Investigations by our colleagues at the Institute for Strategic Dialogue found, similar to what Ms Gallagher referenced, that young boys were having extremely hateful misogynistic content routinely pushed at them through YouTube's new video shorts feature. Our colleagues at Uplift shared a story from a member about recommender systems and I will read two lines from it: “My beautiful, intelligent, accomplished niece was encouraged, incited to see suicide as a romantic way to end her life. She did end it.” Entirely separately, Amnesty International published a remarkable study which was so simple and elegant and not dissimilar to the one described earlier.

The organisation set up an account posing as a 13-year-old girl and the account started to look for mental health content. It took a little longer than in the study described earlier: on TikTok, it was 27 minutes before she was getting material pushed at her that glamourised suicide. In the case of Meta, YouTube, Instagram, X and TikTok, their AI recommender systems are manipulating and addicting our kids. They are promoting childhood hurt, hate, self-loathing and suicide, so what can be done?

The first step is for us to acknowledge, at long last, that we cannot put our faith in voluntary action by the tech companies. The technology corporations have proven they have a remarkably poor record of self-improvement and responsible behaviour, even when they know their technology is harmful and even when lives are at risk, as they were in their tens of thousands in Myanmar. The lesson is that tech corporations will not save our children. We must have learned that by now. We have to stare this problem in the face and pick up the tools to face it. Coimisiún na Meán, in its forthcoming binding code for video platforms, is anticipated to introduce an important new rule, although we will have to see whether it does so. Under the rule, AI recommender systems that are based on a profile of you, on your being a child, or on so-called special category data about you will be turned off by default. If you want to switch such a system on, maybe you can do so, but it will be off by default, no longer on automatically, until a person makes the decision.

We and more than 60 other organisations in Ireland have written to urge Coimisiún na Meán to do this, to introduce its rule and also to go further, namely, to make that rule inescapably binding such that the tech firms cannot wriggle out of it. We know from our polling, which was commissioned by Uplift with Ireland Thinks just a month ago, that 82% of the Irish public support such a rule. That support for a binding rule to switch off these recommender systems by default crosses the divisions in Ireland of age, education and income. Everyone wants this; there is consensus. There is also overwhelming international support for it. Coimisiún na Meán, if it proceeds as envisaged, will be leading the world. In Brussels, in December, a group of senior MEPs from across the political spectrum formally wrote to the European Commission and urged it to take Coimisiún na Meán's rule, if it is binding, and apply it as a model throughout Europe. Ireland, therefore, can finally lead the world. We can at long last hold our heads up high on digital regulation if we do this. A commissioner of the United States Federal Trade Commission, Alvaro Bedoya, took to X recently to praise Coimisiún na Meán's proposed rule as a model for the White House to follow, because the White House is considering how to protect kids online.

It is clear we want binding rules to switch off these AI recommender systems by default, but it remains to be seen whether Coimisiún na Meán will, in fact, introduce this rule and whether it will be introduced in a strictly binding way. It is inevitable that it will be strongly opposed by the tech corporations that put our children in harm's way. Coimisiún na Meán will have to be resolute and will need the support of individual committee members in that. We at the ICCL are urging members of the committee, and the committee as a body, to strongly support this rule being made strict and binding. We have the tools to address this crisis. We need to pick them up and confront this problem. Ireland can and should lead the world.

I thank Dr. Ryan.

I apologise if I have to leave when the witnesses are responding. I am a member of the justice committee and we are working in a similar vein, on facial recognition technology at 4 o'clock. I will look back at the Official Report but I wanted to apologise in advance.

Professor O'Sullivan highlighted some of the UNICEF guidelines, one of which related to providing transparency, explainability and accountability. Will he speak a little more to them, especially in regard to who is responsible in the context of those guidelines?

In respect of the UNICEF guidelines on creating an enabling environment, Ms Gallagher and others spoke to how we also need to look at that enabling aspect in the context of being able to measure and remedy some of the risks while also leveraging some of the advantages of AI. Where AI is used to aid accessibility and inclusivity, whether in speech, alt text, closed captions or other ways, are we doing enough to fully leverage the benefits while also fully mitigating the risks?

Turning to CyberSafeKids, I was struck by the mention of addiction in young users through the creation of dopamine loops. It made me think of categories of children who may be more affected by or susceptible to that, such as kids who have ADHD. In the context of the education or awareness aspect, is any work or research ongoing into particular categories of kids or young people, that dopamine addiction aspect and how it relates to young people who have ADHD, especially in the context of hyperfocus and the seeking out of that dopamine hit?

On recommender systems, Dr. Ryan or another witness might like to come in on this. Is there any jurisdiction currently that disables these systems for young people? If the system is disabled, does that have to be done individually for each website or app, or is there a way we can programme each device to do it generally? I am probably not asking this question well because I am still trying to understand the issue. How do we best achieve what I am asking about? Dr. Ryan referred to Coimisiún na Meán and so on, but does the disabling function have to be app and website specific, or is there a way to address recommender systems across the board? If I am not asking the question correctly, the witnesses might let me know.

Professor Barry O'Sullivan

UNICEF guideline 6 relates to providing transparency, explainability and accountability for children, and this requirement is common among almost all ethical guidance on AI. The explainability part is the easiest one to explain, if you will pardon the expression. A user can ask why they are getting certain content, where it came from and why the company is showing it to them. It is a surprisingly difficult problem, and we can explore that in greater detail if the Senator wishes. Under the GDPR, there are all sorts of rights to explanation, but the explanations have become quite diluted because they are technologically very difficult to generate.

Transparency relates to issues such as what information there is about me and others, the provenance of the content and so on. It is about being able to look into the system and get a sense of why somebody is seeing something. Accountability concerns both what happens within the system and what mechanisms there are for seeking redress. Ireland is doing fantastically well on issues of redress generally but, unfortunately, our Online Safety and Media Regulation Act does not do a good job of this in the digital space because it says that if someone is harmed, they need to get into line with everybody else. Of course, the scale of these technologies means 200 million people can be harmed in a heartbeat, so it is a rather long line. We need to think of creative ways of dealing with that. Transparency, explainability and accountability are really about understanding where the content is coming from, why the person is getting it, what data was used and so on.

These are not easy problems to solve but they should be solved.
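
As a purely illustrative sketch of what the explainability requirement might demand of an engagement-ranked feed like the one sketched earlier, each recommendation could carry a human-readable record of why it was made. The field names and wording here are hypothetical assumptions; as noted above, real explanations are far harder to generate.

    # Illustrative sketch only: attaching a human-readable explanation to each
    # recommendation. Field names and wording are hypothetical.
    def build_feed_with_explanations(candidates, top_k=10):
        ranked = sorted(candidates, key=lambda c: c["predicted_engagement"], reverse=True)
        return [
            {
                "item": c["item_id"],
                # Record what drove the decision so it can be surfaced to the user
                # (transparency, explainability) and audited later (accountability).
                "why": f"ranked by predicted engagement of {c['predicted_engagement']:.2f}",
            }
            for c in ranked[:top_k]
        ]

    print(build_feed_with_explanations([
        {"item_id": "outrage-clip", "predicted_engagement": 0.87},
        {"item_id": "calm-explainer", "predicted_engagement": 0.10},
    ]))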

A couple of things are often overlooked in the context of recommender systems. Everybody uses recommender systems every day of the week. When people go home and use Netflix, the reason they get a movie is that the platform thinks they will like it. One of the real challenges around children in the context of recommender systems is that there is essentially no serious age verification technique online. If the online world were a nightclub, someone could basically come up and say, "I am 18, boss", and in he or she would go. Companies do not verify that people are the age they say they are.
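
To make the nightclub analogy concrete, the following minimal sketch shows what self-declared age checking amounts to in practice. The sign-up flow and names are hypothetical assumptions rather than any platform's actual code; the point is that the claimed age is never tested against any independent evidence.

    # Illustrative sketch only: a self-declared age gate of the kind commonly
    # deployed online. All names here are hypothetical.
    MINIMUM_AGE = 13

    def can_register(claimed_age: int) -> bool:
        # The claim is taken at face value; no document or independent signal is checked.
        return claimed_age >= MINIMUM_AGE

    print(can_register(18))  # True, whatever the user's real age is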

The other problem is that there is no technique for ensuring that the content children get is age appropriate. We really need to look at that. Of course, the technology companies are a problem, but we also need to reflect on ourselves and ask the question: where does the content come from? I agree with everything that has been said about the challenges of recommender systems, but society as a whole is in some sense complicit because, unfortunately, people, both younger and older, are generating content that is very poisonous. We very much need to look at that issue as well.

Recommender systems are such a ubiquitous technology that we come into contact with them constantly. In every Google search, we are using a recommender system of some sort. I am happy to come back to discuss those issues, if necessary, later.

Does Ms Daly wish to contribute on the cyber issue?

Ms Clare Daly

Yes. As regards the little dopamine loops that were identified in Frances Haugen's commentary, and the risk that these pose to children with additional educational needs, the question was around education, the different categories of children and the different impacts the algorithms might have. I do not have details of any research on that in front of me, but the question very much identifies emerging risks and challenges that are only now coming to the fore. These AI technologies are continually evolving and presenting new challenges and risks.

Another difficulty is that AI systems can be highly complex, which makes it challenging to assess and mitigate the potential risks they pose to children's safety, particularly for children who have additional needs and might be more susceptible to the addictive nature of the algorithm. Perhaps Ms Cooney has something further to add on this issue.

I see Ms Cooney has her hand up online there. Does she wish to contribute on this issue?

Ms Alex Cooney

Yes, to add to what Ms Daly said, there is some evidence that children with, for example, attention deficit hyperactivity disorder, ADHD, are more likely to become addicted. The challenge is that the algorithm is designed to be addictive for all users. Children are more vulnerable, perhaps, because they do not have such high awareness of the techniques being used to hold their attention, keep eyes on screens and so on. There are greater vulnerabilities within that wider cohort for sure. Then we have vulnerable adults to consider also. We have done some educational work with vulnerable adults because, even though they are technically adults, there is, unfortunately, a great deal of risk for them online too.

There has been some evidence and, if it would be helpful, we can certainly put together something for Senator Ruane on some of the studies that have been undertaken. I believe that children with ADHD are more vulnerable.

I thank Ms Cooney and I believe Ms Gallagher wishes to make a point.

Ms Caoilfhionn Gallagher

I would like to follow up on the question about the UNICEF guidelines. The two principles raised by Senator Ruane were principle 6, on providing transparency, explainability and accountability for children and principle 9, on creating the enabling environment.

To add to what Professor O'Sullivan said on principle 6, one of the key things the UNICEF guidelines make clear is that age-appropriate language should be used to describe AI and that children should be explicitly addressed when promoting the transparency of AI systems. Something the committee might find very helpful is a series of pilot case studies from different countries. One that is particularly good on principle 6, which I would recommend, is on page 49 of the guidance document. It is from Helsinki University Hospital in Finland and it is called Millie the chatbot. Millie is an AI-powered chatbot that uses natural language processing to help adolescents and teenagers in Finland open up and learn about mental health issues. The application was the result of precisely the kind of collective effort Ms Daly was talking about earlier, where interdisciplinary experts and practitioners worked together on the design with children also involved. As a result, Millie's avatar was redesigned to appear in a way that worked for children: experts assisted in the design, but children were also a key part of it. That is a very good example.

On principle 9, creating an enabling environment, the short answer is that, no, we are not doing enough. I agree about Ireland having the potential to be world leading, and having an obligation to try to be world leading, in this space. On that, pages 42 and 43 of the guidance are very good on this final topic of creating an enabling environment. They set out four standards that countries should strive to meet. The first is supporting infrastructure development to address the digital divide, aiming for equitable sharing of the benefits of AI. The guidance emphasises there that AI-related policies, strategies and systems exist within a broad ecosystem; focusing only on policy and practice at the end is not enough, and we need to think about the infrastructure from the outset. The second standard is to provide funding and incentives for child-centred AI policies and strategies. The third is supporting research on AI for and with children across the system's life cycle, including in the design phase. The final standard is engaging in digital co-operation across borders and learning lessons from one another.

I hope those pages are particularly useful. I realise that we have all given the committee members a long homework list of documents to read, but pages 42 to 50 are very good on those two principles raised by Senator Ruane.

I thank Ms Gallagher and call Deputy Brady to speak now, please.

I thank and welcome all of our witnesses. This has been very insightful for me and very concerning. AI has huge potential and has proved positive, as has been alluded to here, in safeguarding children online from sexual predators and all of that. There are benefits, but there are real concerns too. I wrote down four or five areas these could be condensed into. Privacy is a huge one, as children's data privacy can be compromised. That leads to my first question, on the GDPR legislation. Is it strong enough to protect against those threats around AI?

I will go through the questions and ask the witnesses to take note of them. There are the questions of safety in general, inappropriate content and interactions, ethical considerations, dependency, which was alluded to, physical health and social media. The goal of these platforms is to get children, in particular, to engage in more screen time, and I am worried about the psychological impact of this. These are the main areas of concern for me.

Some data was given out and I want to drill down into it as it was quite concerning. A total of 84% of children between the ages of eight and 12 have social media accounts. According to all the major social media platforms, whether it is TikTok, Instagram or Snapchat, users are supposed to be aged 13 and above. That says quite clearly that self-regulation does not work. I presume everyone agrees with that. Separately, 37% of children between the ages of eight and 12 are on Snapchat. Is data available for each of the major platforms?

A survey carried out by Harvard University in 2022 probably sums up why these platforms are very slow at, or are not enforcing, their own guidelines. It found that Meta, Twitter, Snapchat and TikTok made a combined profit of $11 billion from children under the age of 18, $2 billion of which was from children under the age of 12.

That is absolutely startling. Do we have that data specifically in respect of Ireland? If we do, it would certainly be very helpful.

As legislators, the critical piece for us is the legislation that is required to be put in place. One example was given, with concerns cited around section 3 of the Harassment, Harmful Communications and Related Offences Act in the context of AI. Is that Act critical legislation that should be amended immediately? Should it be given priority? What other legislation, and this is a very broad question, do the representatives deem priority legislation to be enacted?

Coimisiún na Meán is obviously very welcome. Are the terms of reference given to that body extensive enough to deal with all these issues around AI? Are there concerns that it will not have enough powers or there will not be enough focus on such issues? I thank our witnesses.

Dr. Kris Shrishak

I will take up the issue of AI-generated images that was mentioned. There are disconnected elements in the legislative tools that exist. For instance, there is the GDPR when it comes to the personal data aspect, although enforcement is a question there. While the nearly complete artificial intelligence Act has an element on labelling deep fakes, it is still difficult to see how unlabelled deep fakes will be detected. That is a challenge. In addition, there is the EU Digital Services Act, which is now coming into application. Article 16 of that Act allows people to alert a platform about illegal content. That is one element that is possible. In addition to the Irish regulation that was mentioned, there are similar regulations in some other countries, such as Germany, which also criminalise the distribution of deep fakes. The key in all of this, however, is that none of it touches on the production or generation of deep fakes. That is a key element. It is circulation and distribution that are primarily targeted by the different legislative tools; generation is not necessarily targeted. We do not have a complete solution, but there are pieces here and there.

Dr. Johnny Ryan

I thank the Deputy for these questions. I will get into the question of the GDPR and whether it is strong enough. Our view is that it is very much strong enough. It has some useful tools, even before the AI Act, which we could use right away. For example, Article 9 of the GDPR defines what specially protected personal data is. Anything that could reveal, for example, a person's sexual life, ethnicity, or political views or outlook, or any interesting data that lets us know what makes them tick so their buttons can be pushed, which is what we are talking about, falls into that category. We have strong protections for those data, if they were enforced. We also have protections that relate to so-called automated processing. We have some useful law. We are just waiting to see it enforced. On the enforcement question, the ICCL has found it necessary to litigate directly. Luckily, the GDPR allows us to do that in Germany, Ireland, Luxembourg and Belgium, as we have this big deficit in enforcement. We are waiting for big news about the new leadership of the DPC today. Let us see what happens there.

On the question about the framework of legislation that applies and the role of Coimisiún na Meán, we have another important law at European level, namely, the audiovisual media services directive, AVMSD, which was last updated in 2018. That directive, together with the Digital Services Act, is bundled domestically into the Online Safety and Media Regulation Act, which empowers Coimisiún na Meán. Under this binding code we have all spoken about at some point today, the coimisiún cannot do anything on Netflix in particular, in Professor O'Sullivan's example, but it is empowered to act on social media video platforms, which arises from the AVMSD. Its powers are very broad in that respect. We have two players: the DPC and Coimisiún na Meán. We have tools to solve these problems.

Professor Barry O'Sullivan

I will make a couple of points. Ireland has a good story on privacy, to some extent, when it comes to children's data. I was before the committee six years ago to the very day with a colleague, Professor Mary Aiken, when we argued that Ireland should set the digital age of consent at 16. Deputy Sherlock was very helpful in that respect and argued very strongly for it in the Dáil. We have a digital age of consent that is among the highest in Europe. The great thing about that is it forces these companies to ensure they are not processing the data of children inappropriately. We need to make sure that parents and guardians behave in the right way in that respect, but the challenge always is how we know that someone online is the age he or she claims to be. As a country, we should be figuring out how to require these companies to demonstrate they have rigorous age verification techniques in place. I encourage the committee to bring in representatives from those companies and ask them to show members how they verify that a 13-year-old is a 13-year-old or a nine-year-old is a nine-year-old. There are many ways of doing that, but it would be interesting to ask them how. I think it will be found that they simply ask users to declare a date of birth, so, for example, a child can claim to be 72 and come onto the platform. Age verification is very challenging, but those companies must take it seriously.

As was mentioned, the AI Act requires that deep fakes be labelled, but that does not address the generation issue. I previously said that as well as looking at the role the technology companies play, society also needs to take responsibility for the production of this content. We need to come up with some mechanism whereby it is effectively a crime against society to generate content that is harmful. If I am at home, sitting down and producing a harmful deep fake that will impact an election or some person in school, or whatever the case might be, I am effectively guilty of some crime against the fabric of society. How one frames that is not my area of expertise, but we need to take it seriously. We do not have to wait for Europe to do it. We can do it ourselves. That is something we should be doing here. I encourage the committee to look into that.

When it comes to inappropriate content, one of the things we have not discussed is why these systems do what they do. This is what is called the value alignment problem. What do social media platforms want to do? They want to make money. What does Google want to do? It wants to make money. It does that by getting people to engage. The problem is that the content that generates the engagement and the money is not aligned with what we consider consistent with our values. There needs to be some regulatory instrument that ensures value alignment between what these companies do, including their algorithms and business models, and the well-being of society. That has to be demonstrated in some way.

We need to deal with the value alignment problem and age verification. We need to think about ways in which the production of harmful content is in some sense a crime.

Ms Alex Cooney

I thank the Deputy for his questions. I will pick up on Professor O'Sullivan's points on the GDPR and the digital age of consent in Ireland being set at 16.

We did research on this in 2019 and 2020. We looked at what had changed with regard to the sign-up procedures of the social media platforms and their efforts to check the age of users signing up. What we concluded was that while it got slightly harder to sign up by 2020, for a determined child it was extremely easy to bypass the age verification systems that had been put in place. We know they are not effective. Even though we have a digital age of consent in Ireland of 16, it is not being enforced or adhered to.

To pick up on the data points the Deputy referenced, we have been tracking this over the past six or seven years. In our trends and usage data reports that we publish each year, we look at what social media platforms and gaming platforms children are engaging with. The figure of 84% in this regard relates to eight- to 12-year-old children who have accounts on the likes of YouTube, TikTok, Snapchat, Instagram and WhatsApp. Those tend to be the five most popular apps year on year. YouTube was always the most popular, followed closely by TikTok and Snapchat. That is where those figures come from. All those children are under the minimum age requirement for those services. WhatsApp has a minimum age requirement of 16.

I will pick up again on Professor O'Sullivan's point about the people who are creating this content. To my knowledge, under the e-safety legislation in place in Australia, some onus or responsibility is put on the individual who posted whatever harmful content is in question. Maybe that is something to consider. When we were presenting to a committee here a few years ago, we suggested that it be considered for inclusion in the legislation that governs us. It was not considered necessary, however, so we do not have it in place. It might be worth referencing what the Australians have in place under that legislation. It could be a way of deterring people from posting some of this content. That is everything I want to say at this point.

There are a few other witnesses looking to come in, but I need to keep it moving in order to allow members to ask their questions. I will come back to them. They might make a note of the points they wish to make and perhaps come in on some of the other questions. I am just seeking to ensure that we do not run into a problem with time. Deputy Creed is next. He might confirm that he is on the grounds of Leinster House.

Yes, I am. Like Senator Ruane, I am trying to keep track of a couple of committee meetings that are ongoing.

I thank the Cathaoirleach and our guests for the very compelling subject matter that, obviously, is critical in the context of the safety of children. I am sure the Cathaoirleach is aware that some time ago, Rishi Sunak, the UK Prime Minister, asked Elon Musk what he thought of artificial intelligence. His reply was that there will come a time when no job that we currently know will be necessary and everything will be done by artificial intelligence. What is abundantly clear is that while that is somewhat true, there is an oversight role for somebody. A question arises as to how we construct that most effectively. Is there international best practice that we can emulate? If we can cut and paste rather than, if you will pardon the regressive pun, reinvent the wheel in the context of artificial intelligence, it would be a much easier job.

Like many others, I watched the news coverage of the recent hearings on Capitol Hill about social media companies and the adverse impact of their platforms on child health, and the gamut of adverse incidents, including suicide, which was referenced earlier, that have occurred. It was suggested in one of the interactions that TikTok's operation in China is much more benign than might be the case in less regulated countries where the worst excesses can occur. I suspect that authoritarian regimes like that in China might not exactly attract favourable analysis from the ICCL. If it is true, however, it certainly suggests it is technologically possible to move step for step with these social media companies in terms of control.

Much is expected of Coimisiún na Meán, and that was referenced earlier as well. Is there best practice out there or are we all learning on the job? There is undoubtedly an increased appetite, certainly among the public and, I think, politicians, for greater oversight and regulation. However, some of us are intimidated - and I put my hand up in this regard - because we are not perhaps as technologically literate as younger generations. We struggle sometimes to comprehend what is possible from a legislative point of view and what is feasible when it comes to the interface between legislation and technology. Because these things are changing so rapidly, we also have to look at how best to design the oversight aspect in order to take that into account. This is a roundabout way of saying that, as parents and as legislators, we want to do the right thing for the betterment of society.

Is there international best practice? I will hone in specifically on the point that came across at the hearings on Capitol Hill. These companies have different offerings in different regulated areas. In the context of the questioning of the CEO of TikTok at the hearings in question, it seemed to be suggested that there is a much more benign offering in mainland China, because of the level of oversight there, than is the case in the US or here. I am interested in any comments in that regard.

People who were looking to come in last time can come in on this round. They might also be conscious of time. We will start with Ms Gallagher, Dr. Ryan and Ms Daly, and then others may indicate if they wish to come in.

Ms Caoilfhionn Gallagher

I will answer questions from Deputy Brady and then touch briefly on the question from Deputy Creed. I know Dr. Ryan wants to come in on that point, so I will not take up too much time on it. I have three points I want to make to Deputy Brady-----

(Interruptions).

Ms Caoilfhionn Gallagher

I apologise. That was my watch. The technology is talking back to me.

The first issue was privacy. I agree with the points made by Dr. Shrishak, Dr. Ryan, Professor O'Sullivan and Ms Cooney. I just want to add one additional point on privacy that has not been raised, which is important. In the UNCRC general comment mentioned earlier, when it considers privacy, there is an additional issue to be alert to: it highlights that there are sometimes risks to children from caregivers. That is extremely important, and it is why a nuanced approach is essential. We must ensure that robust practices with regard to privacy do not, for example, prevent a child from accessing a helpline or searching for sensitive information, particularly where a child may be at risk in his or her home environment or within his or her community. That is a particular issue with regard to girls in some communities. It is a particular issue, for example, with regard to LGBT youth and in a number of other areas. It is in paragraphs 76 and 77 of the general comment. Also in respect of privacy, the general comment makes the point that one of the rights protected in the UNCRC is the principle of evolving capacities and the idea that as children grow older, they develop in a different way. They are not mini-adults. A child who is eight is very different to a child who is 15, of course. Children also have a right of access to information and freedom of expression. It is a nuanced area, and I think it is dealt with well in the general comment. I hope that is of use.

The next point was the issue of age verification. In principle, I of course agree that this is a topic with which we have not yet grappled adequately and to which we do not yet have an answer. I am not aware of a magic bullet that answers this properly. Often, some of the proposals people come up with regarding age verification create additional problems. They are either unworkable or they may impact on privacy rights themselves, for example, by creating databases of information. This is an issue that has been dealt with for a long time in the offline world. I am in my late 40s. When I was under 18, I spent many evenings attending the Grove on the northside with a fake ID and, for some reason, learning my horoscope and what my zodiac sign was.

There was a view that somehow the doorman at the Grove knew a lot about the signs of the zodiac and you could say, "I am a Scorpio," or something, and that was the key. It is an issue we have grappled with for a long time. It is a key issue, but I am afraid I do not know of a magic bullet which answers it.

The last point was on the production of harmful content. Inspired by what Professor O'Sullivan said, the committee will be pleased to hear I am not going to come up with a criminal law on the hoof. However, there is a very important issue here, and not just in Ireland. Many laws in relation to cyber-harassment, etc., focus on the idea of a pattern of conduct aimed at a single particular victim, and require you to show such a pattern, but what Professor O'Sullivan was talking about, and what we are all talking about here, relates to a pattern of conduct which is dangerous to the public in general, or dangerous particularly to women or girls, for example, or to particular minorities. There is a real challenge there because many of our laws assume a pattern of conduct against one victim over a period of time and do not look at a pattern of conduct defined by the perpetrator rather than by the victim.

Finally, on international best practice, Dr. Ryan wants to deal with this and will deal with it far more comprehensively than I will. I just wanted to point to the UNICEF guidance on that issue. I am afraid it makes clear that UNICEF and the Government of Finland, when they conducted their detailed review of 20 national AI strategies internationally, concluded that engagement on children's issues was immature. Their conclusion was that there were problems. There are some good examples in the case studies, particularly from Finland and Sweden and the Nordic countries, and they can be found from page 48 onwards. They are worth looking at, but they are quite sector specific. The short answer is that there is not any magic bullet in that field either.

Dr. Johnny Ryan

It is useful to distinguish in our minds between the medium we are talking about now and what we used to have, or still have. Once upon a time, I worked for a newspaper in this country. There are illegal things one can publish. We have quite a lot of content law and plenty of things are illegal. If I published something illegal about you and Sean Sherlock was my editor, you would phone him up and say, "I am going to sue you." Sean Sherlock would ding his employee and the content would be removed. That is how we have been. The verb that mattered was "publish". The problem now is that you have millions of people and they are all publishing all the time, and if I put up a post on social media, no one is going to notice it because, for a start, I am wearing all my clothes so I am immediately less interesting than my competitors. When someone walks into a building and sprays machine-gun fire and videotapes it, as we had in Christchurch, you do not necessarily know for sure that it is going to be of interest to people unless you are a system that is checking the content, seeing who it might tickle, artificially pulling it out and amplifying it by pushing it into everyone's feeds. The verb that matters in this problem is not so much "publish" - unamplified content is like a tree falling in the wood when no one is there to hear it, because no one might see it. What matters is that content is artificially selected and amplified. The verb is "amplification".

On the question of best practice in dealing with amplification, we are best practice. We just do not practise it. We have the GDPR, which solves a lot of these problems if enforced, and we have the AVMSD, which I referred to before. We have the DPC and we have Coimisiún na Meán. The international best practice we should be thinking about is the change we have had in the past three to four years at the US Department of Justice and the US FTC. If we had a similar change here - a real culture of enforcement, of dedicated investigation and of people who are willing to actually do the work of taking a scalp - we would be best practice because we have the law.

Ms Clare Daly

I would like to make a number of clarifications around the questions raised by Deputy Brady. The Harassment, Harmful Communications and Related Offences Act 2020, which Deputy Brady mentioned and which was in our opening statement, is known as Coco's law. It is a ground-breaking piece of legislation. The question Deputy Brady asked was whether there should be amendments made to the law. The issue is that it is not sufficiently clear whether it extends to images generated without consent. That is the difficulty. It prohibits the recording, distribution or publishing of an intimate image of another person, but the position on images that are generated is not clear. That is the clarification there.

In relation to the questions raised by Deputy Creed about whether we see different mechanisms of enforcement in other jurisdictions and whether that illustrates that certain social media companies are able to adjust the service or the product per regulator and per jurisdiction, that hits the nail on the head. This appears to be capable of being done. We see that in France, for example, I understand from headlines, there is a requirement of parental consent if a child is to sign up to social media. We also saw a lot of headlines over the last year coming from different US states imposing different types of laws on social media state by state, so it appears that different products can be aimed at different jurisdictions by the social media companies.

Professor Barry O'Sullivan

I will come back to Deputy Creed's questions. On the question of whether there is international best practice, I would say that AI governance is a topic divided by a common language. What I mean by that is that if you were to pick up the ethics guidelines in China, in Europe and in the United States, they all look like they are saying the same thing, but when you peel away at what they are really saying, for example, when they all say they respect human rights, they each have a slightly different definition of what human rights are. If you sit in an ethics debate at international level about human rights, you will spend 80% of your time arguing, believe it or not, in 2024, about whether women have the same rights as men or whether LGBTQ rights are a thing. It is amazing that there is not international consensus on the very basic things you would expect there to be international consensus on. That is the first point. Even though things look the same, they are not the same. That makes it very difficult to have international best practice.

This year is going to be a very interesting year. We are now in the Super Bowl of democracy. In the next year, 3 billion people will go to the ballot box. There will be an insane amount of AI technology generating deep fakes on an hourly, if not minute-by-minute, basis. Legislators, such as the committee members, and society in general will become extremely concerned about this technology. We will be having a very different debate about AI technology in six months' time than we are now.

If I may say so, it is regrettable that the big AI companies went to Congress last year asking the US to regulate them. These are the wealthiest companies in the world and they really did not believe what they were saying, because they did not go home and turn off their AI systems. In fact, they upgraded them and made more of them. At the international level, unfortunately, there is not best practice.

There is a concern in Europe we should be aware of: while we are fantastic at regulation, we produce some of the leading lights in AI but they do not work here. They work in China and in the United States. Europe is essentially an importer of this technology. We need to be cognisant of the impact of regulation in essentially making Europe a user of imported and, in some sense, secondary technologies. It is really challenging.

Dr. Ryan raises a very interesting issue around the responsibility of different platforms. One of the challenges we have in Ireland is that the Online Safety and Media Regulation Act 2022 regulates two completely different worlds that share a common statute. On one hand, there is the regular media, with editorial control, publication and responsibility. On the other hand, we have no editorial control and no form of publication. These are not the same. We really need to think about them differently because not only are they regulatorily different, but their potential for harm is different. If Deputy Sherlock is defamed in an article, there is someone he can go to. However, if 500,000 children are harmed by some content that has been produced in a garage somewhere, how do we go after the person who produced the content? That is not so clear.

Senator Clonan has kindly agreed to swap with Deputy Costello, who is also a member of the Joint Committee on Justice. Therefore, Deputy Costello is next. I see Ms Cooney indicating, but I will come back to her.

I apologise to our guests; like Senator Ruane, I am trying to attend two committees at once. In a way, they are on the same issue. I apologise if I ask questions that have already been asked. I want to start where Professor O'Sullivan finished. There is clear liability here and it is beyond reckless. These companies know. In 2014, Facebook published research in academic journals about emotional contagion and the manipulation of users' feeds. That was ten years ago. They know exactly what they are doing to make significant money. Surely they are liable for the harm they have caused. If they are not, why not? Do we need legislative reform to look at this? I look to the special rapporteur on child protection regarding legislative reform in this regard. Regulation needs enforcement at the same time. We have seen underregulation and underenforcement of data protection measures. The justice committee has spent a lot of time speaking about the challenges in this regard. The committee should look at this and make recommendations not only on regulation but on the enforcement that goes with it, and ensure these things have proper teeth.

Finland has produced its own policy guidance and adaptations. I am curious about the value of us doing this. Many times when we as TDs and Senators tell the Government it needs to look at X, Y and Z we are told that Europe is doing it. We are told we do not need to do it because Europe will do it or that we do not need to legislate on something because Europe will do it for us. I would love to hear a bit about the interaction in this regard. What can we do here that will not be surpassed by Europe? What should we do here to ensure there is effective regulation coming at European Union level?

I ask people to be mindful of the time.

Professor Barry O'Sullivan

We have regulatory instruments in Europe. We have the GDPR, the AI Act and the Digital Services Act. There are also many other regulations relevant to AI; AI was not unregulated even before the GDPR. We have the instruments and we need enforcement; Deputy Costello is absolutely right. We do not do a good job on enforcement. We spoke about the digital age of consent earlier. We do not enforce it enough. Nothing has changed sufficiently.

With regard to how we put some teeth into enforcement against companies that deliver content that is harmful to society, we need to think about some sort of charge of a crime against society. This becomes difficult to define. People who create content that is harmful should be accountable for it. Organisations that disseminate it should be accountable. Nobody is really looking at this. Another aspect we should consider is value alignment. Immediate work we could do on societal harm is to look at how we ensure the business models and processes of these large companies are aligned with the values of society. How do we define this? Unless we can define what the values are, the regulation that would characterise people as being in violation of them, and as having committed a crime, is secondary. Work can be done on this and we could do it in the context of elections or young people. We have heard about a lot of fantastic research that has been done. We can point to examples where harm is being done and we can think about what we want to do about it. There are ways of doing it.

Dr. Kris Shrishak

It is great that Deputy Costello has mentioned enforcement. Many of the regulations are coming from the EU but much of the enforcement must happen in member states such as Ireland. We know the issues with GDPR enforcement but perhaps this will change given the changes in the DPC. There is also an opportunity to set up a fantastic enforcement body and regulator for AI. This could start early through recruiting great technical and legal people. They will be very important. This would set the tone for the rest of the EU. This is something that can be done and I emphasise this.

I would like to touch on what Professor O'Sullivan mentioned. When it comes to business models, there are other tools we can use, such as competition. Competition in the marketplace may not be something that Ireland has to take up; it could be done at EU level. In some cases the European Commission has to intervene in the AI marketplace. That is another tool because it touches on some of the business models. It may not be directly relevant, but it is certainly indirectly relevant.

Ms Alex Cooney

I thank Deputy Costello for his question. Liability is an interesting issue. Coimisiún na Meán has spoken about holding leaders of companies to account through fines with regard to the content hosted on the platforms. We are seeing a little bit of an improvement on liability. How much remains to be seen as we await the publication of the first online safety code. The issue is at what point a breach has clearly occurred. The current wording is not clear on when a platform will have crossed a line. Regarding the wording of the online safety code, during the consultation it was felt it would be too stringent to set specific targets and timeframes for companies to adhere to. This is concerning; if we do not set the bar at a certain level, how will we know when it has been crossed? The liability provision is potentially there, but there has to be a very clear bar to be crossed. I am not yet sure this will be there, but it remains to be seen.

I fully agree with the point about regulation typically being underenforced. This is something we need to change. We also need to look more broadly, and not only at regulation, in terms of how we address the issues we are speaking about. Yes, regulation and enforcement are very much key strategies to protect children, but we also need to look at education programmes and general awareness programmes to ensure we look at this very holistically. These things should work together so that society in general is more aware, parents make good and informed decisions about children's use of and access to these products, there are good opt-in and opt-out options, things are not just on by default, as Dr. Ryan mentioned earlier, and there is real choice in these matters. I hope we see more holistic solutions being recommended and put in place in future.

Ms Caoilfhionn Gallagher

I agree with the points made by Professor O'Sullivan and Dr. Ryan about the strength of the legal and regulatory framework and that the real issue is enforcement. When I came before the committee previously, I used a phrase about the gap between principle and practice. This is a clear example of that gap: we are world leading in principle, but there are real issues in practice. There is an enforcement gap and this is a key issue.

With regard to the very important question Deputy Costello asked about the mindset that Europe will do it, I agree that it is problematic. It is particularly problematic because of the issues I outlined in my opening statement about the gap at international level and children's rights often being an afterthought. We see this in documentation from the Council of Europe and, to an extent, from the EU. This is a key issue. Fundamentally we have a situation where Ireland is the European home of a large number of tech companies that have a particular interest in the development of AI. I endorse and support the phrase used by Dr. Ryan earlier when he made clear in his opening statement that Ireland has the ability to become a trailblazer and a world leader and it should do so. If we adopt the approach of waiting and seeing what happens at European level, it will always mean that we are a little behind the curve. Ireland can and should do more.

I thank the witnesses for all of the information they have given us.

I do not know where to begin. When I was five, my parents gave me a toy gun and a cowboy hat for my birthday. One of the first things I did was to take out my mother's crystal glasses and put them on the sideboard along with the decanter. She asked me what I was doing and I told her I was going to drink some whiskey. I blame Sergio Leone and watching spaghetti westerns. I had internalised so much from visual culture. That was back in the 1970s. Funnily enough, in Catholic Ireland of the early 1970s, I did not take my role models from what I experienced in the here and now or the everyday. Rather, it was the hyperreality of what I consumed in the visual culture. We inhabit a visual culture right now in ways we have not previously. As Homo sapiens, we are hardwired to respond to the visual. The print epoch seems to have been just a blip. We come from a culture that probably reifies the written word. This speaks to the power of what we consume and what children consume online. To that end, I wonder whether this is just a new technology. The dynamics appear broadly similar. The witnesses listed matters such as self-harm, suicide, sexual contact, anorexia, cybercrime and grooming. Was it ever thus? Dr. Ryan referred to the role of Meta in amplifying hate, such as in the context of Myanmar. Did the caricatures and stereotypes of Irish people as simian-like that were published in Punch magazine in the 19th century contribute to the neglect of the Irish people and what happened to them during the Famine? Is this just a new technology, with the associated moral panic, or is it an inflection point? Is it a complete game-changer with a horizon we cannot yet see or understand? That is my principal question.

I understand how compelling negative and hateful content is and how it generates traffic and keeps people occupied, along with the associated advertising and revenue. Ultimately, however, what is unethical becomes unsustainable. Meta or whoever will eventually get caught in a class action or it will become financially non-viable to do this. I agree that regulation is the way, along with intervention, and Ireland could play a leading role in that regard. If this is a game-changer or an inflection point, however, does AI have the potential to have moral agency? If so, do the witnesses believe it is ultimately on a benign trajectory or a malign one?

I was involved in the International Society for Military Ethics before I got elected. One of its main focuses was automated and autonomous weapons systems and trying to see over the horizon to the ultimate destination of autonomous weapons systems, which are AI. Most people do not believe they will ultimately be useful. Apart from being completely unethical and amoral, it does not serve a useful or sustainable purpose to have an autonomous weapons system. The argument will be made that there is more precision, accuracy and so on but really it just is not fit for purpose in the context of human beings in all their capricious and idiosyncratic ways of being. Aside from intervening where we can in terms of regulation and so on, is AI ultimately a malign, game-changing and unpredictable thing with its own moral agency? If it has its own moral agency, will it behave? Will it continue to facilitate harmful outcomes? I do not know if the witnesses have a view on that matter. It might sound like a naive question.

Finally, most of the recommender systems are programmed by people. It may be naive on my part, but I suspect the tech bros - men and women - who collaborate on them probably embed and extend a lot of deeply patriarchal ways of thinking, everything from their crunch-time concepts and so on. This relates to value alignment. Is AI deeply patriarchal and misogynistic? Does it target women and girls more than it targets men, or are the outcomes equally harmful? My apologies for the broad and sweeping questions.

Ms Clare Daly

On the Senator's initial point regarding the gift he received when he was five, a recent CyberSafeKids survey carried out with Amárach Research found that one in four six-year-olds has a mobile phone. That is an age similar to that of the Senator when he received the gifts to which he referred. He asked whether this is a new technology. The answer is "Yes"; there is no other kind of business that has such direct contact with children, often without our consent. That is novel and a new thing. That is where regulation needs to step in.

I watched spaghetti westerns with my family. It was with the consent of others, or with their dodgy oversight.

Dr. Johnny Ryan

Then he grew up and joined the Army.

With access to real weapons.

Dr. Johnny Ryan

The Senator left that bit out of the story. These technological questions can be viewed in the context of many technologies. Let us take ammonia, for example. More than 100 years ago, there was a significant advance in ammonia synthesis and it changed fertilisers. That allowed us to have a global population not of 1.5 billion people, but of 8 billion people. The same technology is what revolutionised ordnance. All the explosions in the First World War would have been much smaller if we had not had the breakthrough in that chemical process. On one hand, many more humans were able to survive on the planet, while on the other hand there was mass carnage. Technology keeps presenting these problems. It has gifts and drawbacks and it is for us to balance them.

In the context of the story of media, sometimes people look at Gutenberg's printing press as a moment before which there was an oral tradition where information was flexible and changed. There was no authority and things were not written down. Then the printing press came along. When the church, through the Council of Trent, was trying to enforce what the Bible is, it could not do so while things were written down with berry juice on animal hide. The printing press, however, allowed us to finally have the absolute truth, until Web 2.0, Wikipedia and now the new orality. Our generation is at a hinge point in history where we are dealing with these new things and information is flexible again. Luckily, we have an incredibly simple and elegant solution, to finally answer Senator Ruane's question from earlier. In the case of recommender systems, solely for the big social media video platforms, we can apply this across the market. It is not necessary to look at each person's device or app. Rather, we can say we are not going to allow a particular system to be on by default. It is similar to the worries there may have been about the printing press and its possible consequences. We have been at this game long enough to have some levers we can pull.

Professor Barry O'Sullivan

The Senator referred to lethal autonomous weapons systems. I do a significant amount of work with militaries throughout the world in a Track II capacity. I agree that militaries are very cautious about these systems. For many reasons, including those referred to by the Senator, I do not think we will ever see lethal autonomous weapons systems existing. In the context of this debate, we need to be careful with regard to the significant focus on social media. AI does not equal social media. People certainly experience it on social media, but that is not the only place it exists. We need to be careful about that. Amara's law, which is well known, states that we overestimate the impact of a technology in the short term but underestimate its impact in the long term. That holds true in this case. Despite the huge amount of hype around AI, we are at risk of underestimating its impact in the longer term.

You do not have to look any further than what happened to US manufacturing from the 1970s and the measures that were introduced, where the long-term impact was misunderstood. I would argue that the long-term impact of that was Trumpism and the fragmentation of American society. It created a new world of have-nots. They used to have but they do not have it anymore.

Regarding the questions around moral agency and these sorts of things, we have to ask ourselves what the purpose of a technology is. The purpose of any technology is to remove friction. The thing about AI is that it removes huge amounts of friction. People have access to information they never had access to. You can now scale up things that you never scaled up before. All you need is an Internet connection and a computer, and you can have a global audience. We not only have a technology that removes friction but one that can potentially scale to global levels. With that comes fantastic opportunity but enormous risk. We need to be worried about that.

The technology itself is just a lump that has a battery. If the battery runs out, it does not work anymore. However, the morality and the ethics come in when the technology meets the use, and that is us. That is the human being, and we really need to start taking responsibility and not talk about AI as if it is some sort of thing that exists on its own. It does not exist on its own. It is an amplifier and a technology that removes frictions that are sometimes useful to have. We need to be very cautious about that in terms of how children can access information, all the harmful kinds of things we mentioned today but also things like sitting at home and buying something that you really do not need, placing a bet that you did not want to place, or buying a drink that you did not want to have. These kinds of frictions are removed by technology, and AI is so commoditised now that a 14-year-old with a computer and an Internet connection can literally change the world. If we look at Mark Zuckerberg, God bless him, when he was in college, that is where Facebook came from. Who knew that what he was producing would have such an impact? That is where Amara's law comes in. We overestimate the impact in the short term but we underestimate the dramatic impact in the long term, and we really need to resolve that.

The question is where the morality and the values come from. Whether or not we believe that AI is this fantastic genie in the bottle, we need to self-reflect on our own values, what we want as a society and what we teach our children. That is where the morality, the data and the ideas come from. Unless those kids are supported, and society is supported, in having a good values system, the technology can become very dangerous.

Okay. Ms Alex Cooney is next, and then Ms Caoilfhionn Gallagher. I ask the witnesses to be really mindful of time and be brief. Ms Cooney is on mute.

Ms Alex Cooney

Apologies. I put my hand down but I forgot to turn on the microphone.

I thank Senator Clonan for the questions. I agree with many of the points that have already been made, but I think the amplification here is just extraordinary. Take something like the potential to groom someone online compared with grooming offline. Typically and historically, it would have been one perpetrator and one victim but now, through the enablement of technology, you can have one perpetrator and, in some cases, hundreds of victims. The potential for the amplification of problems is really there. This is not just technology on its own; it is human-manipulated technology. It is technology designed to hold and capture our attention, and to make us take actions that will benefit the companies, whether it is giving them more of our data or buying products that have been targeted at us. There is real manipulation behind it.

As we said in our opening statement, it is not designed with children's safety in mind. Children's safety is not a central consideration, even in the AI Act that we now have in place. It does not talk about child-centred design being an important and essential consideration. We have some way to go. There have been a few moments that have been described as "tobacco moments", for example, the coroner's inquest into the death of Molly Russell, where it was concluded that she had not only sought out content related to self-harm and suicide but had also been served up thousands of pieces of such content by the services she was using. I wonder how many tobacco moments we need for real change to actually happen.

Ms Caoilfhionn Gallagher

In response to Senator Clonan, these are very important and huge questions, such as the "was it ever thus?" question. Essentially, the Senator is asking whether this is really different from the old problems or whether it is old problems in new packaging. Is it the equivalent of the Judy Blume book passed around by teenage girls without their parents knowing about it? I agree very much with what Ms Cooney has just said about the key difference essentially being the vastly different scale. The reach and the ease of access are critical, and they carry both risks and benefits. We must remember that it also has benefits. That is one of the reasons this is such a nuanced and complicated issue, because the risks include, as Ms Cooney just said, issues around child abuse and online grooming. There is also the potential that someone in a very obscure group with very particular racist views, for example, who might never meet someone in real life who shares those views, is able, through this mechanism, to meet people worldwide who share these very twisted, difficult views, and it may potentially allow them to organise. They are huge risks.

It also has benefits, for example, with regard to isolated teenagers, in particular those in groups that do not have real-life support from their families. That is why I made the point about the UN Committee on the Rights of the Child emphasising that sometimes children are at risk from their caregivers, for example. We have to be sure we do not throw the baby out with the bathwater and that, when we are looking at ways to design a response to the very serious and grave issues we are talking about today, we do so in a way that also recognises there can be protective elements to connections through social media.

I have two more points. Regarding the Senator's point about AI, the patriarchy and misogyny, AI of course is not monolithic. The short answer is that, yes, in some ways AI is misogynistic and patriarchal. It works from historic datasets, so it will reinforce historic disadvantage. That is one of the really key issues outlined in the UNICEF guidelines, and it is important to bear in mind.

Finally, I agree with Professor O'Sullivan on not conflating AI with social media, and on recognising, in respect of child-centred design, the importance of looking at automated decision-making processes that impact children both directly and indirectly in a vast range of ways. With regard to the question in which horizon scanning was referred to, one of the real challenges we have here is that ultimately we are looking at the disruptive effects of AI, which are going to transform children's lives in ways that are difficult to predict and understand. That is why we need to get ahead of the curve, do more, and do it more quickly. The international materials we have referred to are a helpful touchstone in doing that.

I am astonished by what I have heard. What I have really learned here is the centrifugal dimension: children consuming this on their own, completely solitary, compared with the centripetal, like when we were younger, consuming such material in a wider social context. I thank the witnesses.

Professor Barry O'Sullivan

There is a fantastic short story by E. M. Forster called "The Machine Stops". It is worth a read.

I thank Professor O'Sullivan.

I thank the Chair for arranging such a very interesting meeting. It has been very educational for me. I was not in the room for the start of it, but I listened to every bit of it. I want to compliment our witnesses on the comprehensive reports they have given and how these have intertwined with and complemented each other. It has been a very useful and fulfilling meeting from my point of view anyway, because I have a lot to learn. At my age, having finally been able to conquer the Internet and the world wide web, and figure out, with the help of my grandchildren, how to work my iPhone, I am now confronted with this whole new dystopian future of AI and all the kinds of terrors implicated in it. Even though I understand that there are positive and negative elements to it, I would be like the general public. I would say most of the general public has a fear of AI and sees it as something sinister that is going to change their lives not for the better but for the worse.

I have some personal experience of dealing with that kind of "deep fake" stuff. I am an addict and in long-term recovery, thank God, and I do a lot of work with young addicts. I have seen the harmful influence of AI and these weird platforms on young people who are struggling with early recovery.

If the young people are lucky enough to join organisations such as AA or Gamblers Anonymous, they will get help there and someone to talk to, but very often they do not and instead have recourse to their laptops or their iPhones in the dead of night, on their own, and encounter all of these weird suggestions and encouragement to go a certain route. A lot of it is for a profit motive. They want to sell people a therapy package, but an awful lot of the time it goes very queer. I am aware of a number of suicides that were at least partly caused and encouraged by insidious social media AI.

I have a question but I just want to say that I was very interested in Dr. Ryan's comments. He gave me focus for the first time ever on the whole profit motive underlying an awful lot of this business. Obviously, someone is making a packet out of it and money has no morals. It strikes me that it should not be beyond the global body politic to find some way of hurting these companies. The only way we can hurt people who are in it for profit is to reduce their profits and take money from them in one way or another whether it is by sanctions or some form of indirect taxation such as levying huge costs on advertisers. Very often people will not get a conscience until they have to pay for it. That is my own view.

My question is around what we can do as public representatives, as members of this committee, as an Oireachtas and as a Government. What should we be doing that we are not doing? A meeting such as this will help raise awareness. A lot of people are watching it. I know that and I have gotten feedback on it.

Does Dr. Ryan want to come in first?

Dr. Johnny Ryan

I thank Senator O'Sullivan for his question. I remember when I was still in industry and we had this much-hyped GDPR thing descending on us, my colleagues in other organisations in Silicon Valley were asking, "Are the Europeans serious about this?" because it would change everything. It would change how you make money. It would change the entire industry. Naively, I said: "Yes, yes, I think they are, with the law as it's written. I think this means we're going to have to change." The answer is that I think the Oireachtas can do two things. One, we have made a specific request in our opening statement, which is that this committee, and the members individually, support Coimisiún na Meán in a very specific measure it proposes to bring in, and urge it to go further so that the measure is fully binding and the tech firms cannot wriggle out of it. That measure is specific to recommender systems.

The other action, which the ICCL has been asking for for quite some time, is that we see pressure on the Data Protection Commission to prove me right when I told my colleagues in industry that Europe was serious. We have the leading role to play in Europe on many of these issues, and we do not play it and have not played it. It is for us to push our enforcer to uphold its responsibility, which extends to everyone across Europe.

Professor O'Sullivan will be next and then Ms Cooney.

Professor Barry O'Sullivan

I thank the Senator. On the public's fearful perception of AI, I agree that unfortunately there is a fear out there, but it would be a pity to let that go unchallenged because everybody here uses AI every day. In fact, we have Ms Cooney on an MS Teams link right now and the reason we can see her so well is that any delays in the connection are being compensated for using AI. We can hear and see her perfectly because of AI. Our email is not overwhelmed by spam because of AI. Cars are safer because of AI. Everybody uses maybe ten or 15 AI systems every single day and it is a power for good. Of course, there are downsides and we need to focus on those, but we should not let the world think that the technology is bad.

The Senator raised a very interesting question about the business models. I said earlier that one of the big challenges we face around the misalignment between what we think these companies should be doing and what they do is that there is a value alignment problem. Our societal values and their value mechanisms are not aligned. They are basically selling access to content for profit and, unfortunately, human beings love to engage with a particular kind of content, and some people may be mischievous in the production of certain kinds of content. I know I am understating certain cases, but there is a value alignment issue that needs to be addressed. There is no regulatory structure in the world, as far as I am aware, that calls companies to task on questions of value alignment. Until that happens we cannot create this notion of a harm against society, which organisations or even individuals could perpetrate by creating fake information. We will see lots of this fake information this year as the world goes to vote.

The last thing I want to comment on is the Senator's remark about the vast sums of money these companies make. It is interesting that the raw material they use to make these vast profits comes from us and, guess what, we give it to them for free. It always strikes me as astonishing that these companies make absolutely eye-watering profits from the data you and I post, the emails we have in our inboxes or the things we interact with online, but we get no benefit from that. Maybe, to some extent, there is even a harm created by it. I will not go off on that tangent because we do not have time, but there is a question about whether there should actually be some transactional benefit associated with providing information to these companies for them to make a profit. Even raising that question, and having an international forum on whether there should be, I do not want to say a tax, but some sort of contribution to the contributor, to the producer of the raw material - you and me - is interesting. There is the question of whether we should get some of the upside here, but we do not, unfortunately.

Ms Alex Cooney

I thank the Senator for the question. I absolutely agree with Dr. Ryan's call for recommender systems to be off by default, with turning them on being optional. They should be off for children. CyberSafeKids certainly supports that call. We think there is a lot of potential for the technology the tech companies use for advertising and profiling purposes to be harnessed to safeguard children. There are things we can request around age assurance, and Coimisiún na Meán is certainly also looking at this, but we should demand more of the companies. They essentially know the ages of their users, so they should use that knowledge to better protect younger users.
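To illustrate how simple the default being described here is in engineering terms, the following is a minimal sketch of a default-off, age-gated personalisation setting. The class, field names and age threshold are hypothetical, a sketch rather than any platform's actual implementation:

    # Sketch of a default-off, age-gated personalisation flag.
    # All names and the age threshold are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class FeedSettings:
        age: int
        personalisation_opt_in: bool = False  # off unless the user acts

        def personalised_feed_allowed(self) -> bool:
            # Minors never get profiling-based feeds; adults only on explicit opt-in.
            if self.age < 18:
                return False
            return self.personalisation_opt_in

    print(FeedSettings(age=13).personalised_feed_allowed())  # False: child, always off
    print(FeedSettings(age=34).personalised_feed_allowed())  # False: off by default
    print(FeedSettings(age=34, personalisation_opt_in=True).personalised_feed_allowed())  # True

The point of the sketch is that the default itself is a one-line choice; the hard problems are age assurance and the willingness to flip it.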

Another consideration, picking up on Professor O'Sullivan's point, which has been raised before, is the idea that we can use these services for free. It is not really free, because we are giving our data and the companies are benefiting to the tune of billions of dollars in profit, so maybe we should look at paid-for models, where a subscription rather than our data and advertising is the basis on which they make money. I would certainly urge the committee members sitting in the room, who will be putting a report together on the back of this, to stress that education and public awareness campaigns are essential. There is so much unknown and, obviously, we talked before about the fact that there is a lot of fear around AI and people do not know how to regulate it or what to expect. There should be a lot more awareness around it. We need to teach children in schools how to use this technology in a safe and smart way. We need to teach parents and caregivers and raise awareness there as well. Whatever recommendations are put in place should also include educational awareness programmes.

Ms Caoilfhionn Gallagher

First, I will not add to what others have said about Coimisiún na Meán. I made clear in my opening statement that I welcome the fact that it is focusing on the issue of recommender algorithms and exploring that topic. What we hear back from Coimisiún na Meán will be critical. Overall, a key thing is to comply with the UNICEF nine principles referred to by Professor O'Sullivan and me in our opening statements.

This is key. As Ms Cooney has just said, this involves matters like education and, ultimately, putting resources into these issues. The nine principles are key. They require a child-centred focus, not only at the stage of realising that a particular product or AI system is causing a problem, but right from the outset, from the design stage. The bottom line is that many of the problems we are talking about today involve AI systems and automated decision-making based on human-defined objectives which went in at the outset without a child focus. It is a fundamental problem if the human-defined objectives in a machine-learning system have not taken account of the different nature of children. Children are not mini-adults. They are fundamentally different, evolving beings who deserve and are entitled to be treated differently. One of the problems we have is that often we have the product first, and afterwards comes the late-stage sticking-plaster approach. We actually need to have child-centred design right from the outset.

I agree entirely with the point made by Senator O'Sullivan about recognising the fear of AI. This is completely understandable because of the very serious issues we have been talking about concerning the pushy algorithm problem. It is also important to give thought to the fact that in many circumstances AI can be used in a positive way. As with the example of Milli the chatbot from Finland mentioned earlier, I draw attention to the fact that the UNICEF guide also refers to what I think is a very useful example called SomeBuddy, in Finland and Sweden. This is a crime-detection system that helps to support children and teenagers who have potentially experienced online harassment. It is a really good example of children having been involved right from the outset of the design and of there being checks and balances along the way.

When children report incidents such as cyberbullying, the system initially applies an automatic filter that analyses the case using natural language processing. It prepares a first-aid kit of advice, and there is human input later. If we were to rely on human input alone, however, we would only be able to reach a very small number of children; the automation therefore allows a far wider number of children to be reached. This is a good example, supported by UNICEF in Finland, of AI working well, with good child-centred design right from the outset and with checks and balances to ensure the system is working well and to pick up any false positives or negatives. This is in addition to the example of Milli the chatbot referred to earlier in respect of Helsinki University Hospital. We must remember the opportunities in this area as well as the risks when we are thinking about child-centred design.
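By way of illustration, the shape of such a triage flow can be sketched in a few lines. This is a toy example only: the term lists, labels and thresholds are invented for the illustration and are not SomeBuddy's actual design, which uses trained natural language processing models rather than keyword matching:

    # Illustrative sketch of an NLP-triage-plus-human-review flow.
    # All term lists and routing labels are assumptions for the example.

    from dataclasses import dataclass

    URGENT_TERMS = {"threat", "kill", "address", "photo"}   # hypothetical
    HARASSMENT_TERMS = {"stupid", "ugly", "loser", "hate"}  # hypothetical

    @dataclass
    class Report:
        child_id: str
        text: str

    def automatic_filter(report: Report) -> str:
        """First pass: classify the incident so first-aid advice can be
        sent immediately, before any human has read the report."""
        words = set(report.text.lower().split())
        if words & URGENT_TERMS:
            return "escalate_to_human_now"
        if words & HARASSMENT_TERMS:
            return "send_first_aid_kit_then_human_review"
        return "send_general_advice"

    def handle(report: Report) -> None:
        decision = automatic_filter(report)
        if decision == "escalate_to_human_now":
            print("Queued for immediate human review")
        elif decision == "send_first_aid_kit_then_human_review":
            print("First-aid advice sent; queued for later human check")
        else:
            print("General guidance sent; spot-checked for false negatives")

    handle(Report("c-123", "they said they hate me and posted my photo"))

The automated first pass gives every child an immediate response, while the human review stage catches the cases the filter gets wrong.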

I must move on. I have to ask the witnesses to be very brief or I will have to start cutting people off because we still have three members to ask their questions. I call Senator Mary Seery Kearney.

That is great. I thank the witnesses. The content this afternoon has been fascinating. When I first came into the Oireachtas, I used to feel that if I raised issues like this, people would look around and wonder where my tinfoil hat was because my thinking seemed so catastrophic. I have been on a campaign against smartphones; I do not believe children should have them. I welcomed the research from CyberSafeKids last week regarding one in four six-year-olds having a smartphone and 45% of ten-year-olds having access to their phones in a room without any supervision. I think that is quite a frightening prospect. At the same time, I know the diagnostic benefits of all the ways in which AI is deployed for the benefit and betterment of society and for the common good. However, there are things that have been neglected. Regarding the idea of value alignment, I like that terminology.

Some of the material I read, worry about and consider, in trying to figure out how we can get our heads around it, involves this idea of cognitive security. We have children growing up in a context where the horse has already bolted. We are now in a world where children have smartphones and where parents do not engage and do not consider the dangers that lurk therein for their children. We have a system that deploys AI or, rather, companies that deploy AI to deliberately capture attention. It is the attention of the individual that is being sold. This is the business model. While these companies are allowing us and children access to content, what they are actually selling and commodifying is the attention of those children, with inherent and deliberately conceived addiction and behavioural consequences, as well as isolating and dehumanising consequences.

I refer to children growing up with a complete lack of resilience and of the ability to have self-respect and self-regard. Their regard is for how many likes they get and the impulse related to that metric. My child does not have a phone. She goes on YouTube on the television in front of us when we are in the room. This is because she must be able to go into school and talk about the YouTubers with all the other girls in her class, in the same way I went in and talked about who was on the National Song Contest or Eurovision with the other girls in my class. I feel a duty as a parent to ensure my child has a certain amount of access so that she can be, or feel, relevant. At the same time, I am wary that she is commodified, that there is a potential for her to be isolated and dehumanised and that she is being stalked by algorithms and AI systems for particular clicks and likes, as we all are.

My fear in all of this is that we are not talking about the context. We are not talking about the fact that our culture has already changed. I remember dial-up Internet access and the associated sounds and all of that. Now, however, I expect to connect instantly and not to have the blue line taking ages to load when I am looking up information. Regarding this idea of friction-free access, yes, we are impatient to have everything at our fingertips. We think we have a right to it and all of that.

We have a duty concerning public awareness. Personally, I think we should have a special category of taxation that completely funds global enforcement in this context. The idea of Mark Zuckerberg standing up, looking around, probably at the prompting of his PR people, and apologising in the United States Congress to parents who had lost children is horrific, to be perfectly honest, knowing that behind it all is a for-profit model that was deliberately and consciously deployed.

We need to have a way of litigating against those who have deep pockets. The mob that puts up horrific content or amplifies misinformation and disinformation usually involves some idiots in garages who do not have deep pockets, so I would be wasting my time suing them. The best place for me to sue is Facebook. None of us who are politicians, and I am sure this is the case for many others in the room as well, sits here free of and unscathed by the horrific implications of having to contact the Garda about the things that happen to us. It is a waste of time going after those individuals in their garages, except in the criminal sphere. We must be able to hold the likes of Meta and all those types of companies liable for publishing what is there. I cannot figure out, though, how we are going to do that, because they operate on such a global scale. If Ireland alone were to bring in this type of legislation, those companies would figure out a way around it.

Meta, Google or one of these multinationals threatened Australia a few years ago when that country moved to bring in legislation to put limits on them. I feel we need a public information campaign that calls this out for what it is, namely, an appalling business model which, in itself, needs to be challenged, kicked back against and taxed into oblivion. We must, however, also be able to go after those with the deep pockets from a suing and liability perspective. I welcome ideas from the witnesses on how we can do this.

Dr. Johnny Ryan

I will give a very brief response to the Senator. It is that, as she knows, data protection law is kryptonite to that business model. It is all illegal.

Dr. Johnny Ryan

We have not even grasped the lowest-hanging fruit in data protection enforcement. One thing this committee can do right away is to write a letter to the incoming commissioners of the DPC when they are announced-----

Des Hogan and Dale Sunderland.

Dr. Johnny Ryan

-----and tell them we expect them to do their job.

Yes, they were announced this afternoon.

Dr. Johnny Ryan

Okay, we were waiting for that. They should be told we expect them to do their job. Once we see enforcement of the GDPR, an awful lot of these problems will start looking different.

Professor Barry O'Sullivan

Those were some fantastic remarks. In the context of cognitive security and what is being sold, attention is certainly being sold, but what is more concerning is that self-esteem is being outsourced. The sense that an individual has self-worth is measured by the number of interactions, followers, likes and so on, and that is really problematic. It is a sort of quantification of development, and it incentivises development in a certain way that might not necessarily be in people's best interests. I share the Senator's concerns about what I call Mark Zuckerberg's "sorry about that, folks" moment, which was extraordinary. A technology has been responsible for killing people and the company has effectively just said it was sorry about that. That such a response might be acceptable in some sense is just astonishing.

Some things can be easily done. The concept of trolling applies both to children and to public representatives, but it is not a difficult problem to get rid of trolls-----

Will Professor O'Sullivan please tell us how to do it?

Professor Barry O'Sullivan

An AI system that a 15-year-old could implement could easily address that problem, if Mr. Elon M chose to deploy it. These are not technologically difficult problems. There is just no willingness to solve them. We need to push for simple solutions, such as sorting out trolling. I think the world would want us to do that.
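To give a sense of how low that technical bar is, the following toy heuristic flags accounts whose recent messages are mostly abusive. The word list and threshold are invented for the example; a real deployment would use trained classifiers, human review and an appeals route:

    # Toy troll-detection heuristic, of the kind a beginner could write.
    # The abusive-term list and threshold are invented examples.

    ABUSIVE_TERMS = {"idiot", "scum", "traitor"}  # hypothetical list

    def troll_score(messages: list[str]) -> float:
        """Fraction of a user's recent messages containing abusive terms."""
        if not messages:
            return 0.0
        hits = sum(
            1 for m in messages
            if ABUSIVE_TERMS & set(m.lower().split())
        )
        return hits / len(messages)

    def should_limit(messages: list[str], threshold: float = 0.5) -> bool:
        # A production system would add context, appeals and human oversight.
        return troll_score(messages) >= threshold

    print(should_limit(["you idiot", "traitor scum", "nice weather today"]))  # True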

The Senator is correct to say there is a new sort of capitalism now; Shoshana Zuboff calls it surveillance capitalism, which is a great term. One step Ireland could take, which would have international reach, is to identify a small number of topics on which we want global agreement. We are not going to get a treaty or a law, but there could be a form of peer pressure. In my experience of working with the social media companies at the European Commission through the high-level expert group, the first thing that surprised me was that a lot of these technology companies really want regulation, because they do not want to be caught on the wrong side of a line they do not know they have crossed until the public tells them they have crossed it. They want politicians to tell them where the line is so they can stand behind it, and then it will be politicians' responsibility because they drew the line. They are very sensitive to peer pressure, and we could identify a small number of topics, such as the outsourcing of self-esteem, trolling and other low-hanging fruit - although I am not saying the self-esteem issue is low-hanging fruit - on which we could get international agreement. Perhaps there could be some sort of charter or code of conduct that we would expect these companies to sign up to. If they signed up to it, good old-fashioned shame and peer pressure might have a considerable impact. These companies are very sensitive to that these days.

Ms Alex Cooney

To go back to the remarks Mark Zuckerberg made at those hearings a couple of weeks ago, in his opening statement he denied that there was a causal link between the consumption of social media and the impact on mental health. As we know, he was later strongly encouraged to stand up and make that apology, as weak as it ended up being. We could demand much greater transparency from these companies about what they know. We do not have all the information on how social media is affecting the mental health of children and young people. We can see the impacts and the increases in the numbers of children presenting with eating disorders, self-harm or suicidal ideation. We see those increases globally and we know these companies know more than they are telling us.

Whistleblowers such as Frances Haugen have come out and said it, but somehow the companies are still getting away with not being transparent about what they know, and that is not acceptable. I have heard of academic institutions starting to explore these issues but, ultimately, that research cannot be conclusive because the institutions do not hold all the data. There is a vague commitment to transparency but not a sincere one. We need to demand that the companies be much more transparent about what they know of the impacts of extended use, not just of social media but also of gaming platforms, which often fly under the radar and are immensely popular with kids. Obviously, they are great fun, but there is a downside there too. We need to demand greater transparency there.

Ms Clare Daly

In respect of the availability of data protection enforcement in the context of regulation here, the UK has a designated children's code and in Ireland we have the Fundamentals for the processing of children's data, but we could look at going further in Ireland. In the US, for example, the Federal Trade Commission issued an algorithmic deletion demand in 2022 concerning a mobile phone app designed for use by children that violated the US Children's Online Privacy Protection Act, in the case of United States of America v. Kurbo, Inc. It was found that children's data had been used to train the company's algorithms but that the company had failed to notify parents. Given that the GDPR is in existence, we could seek to rely on it more strongly, as Dr. Ryan pointed out in his submissions. Perhaps the Data Protection Act in Ireland could be looked at in that regard, with a code similar to the children's code in the UK being considered.

In movies from the 1950s, the promotion of smoking was so blatant that its prevalence is shocking to us now. A journey had to happen to establish that smoking was dangerous, and that had to be proven in court and so on. I think we need regular warnings on social media to say this is an issue for mental health, but getting to that place, whether through litigation or otherwise, has to happen in order to prove the causal link Mr. Zuckerberg attempted to deny. Obviously, his PR people texted him to say it would be a good step for him to turn around and apologise to the public after so many have died, but we need a health warning on all these platforms that comes up as regularly as their adverts do, and they should be obliged to do that.

Deputy Sherlock was the person who suggested this for our work programme, and we are finally getting to it.

I thank the Chair. This is probably one of the best tutorials I have had in my 16-odd years as a TD, so it is a great privilege to be in a room with such august individuals. It has been fascinating and I have sat through it all. It will take me a couple of days to unpick what I have heard today. What prompted me to bring this up and call for this hearing was an article I read in The Guardian about AI-created child sexual abuse images threatening to overwhelm the Internet. It was written by Dan Milmo, the paper's global technology editor, and was published on 25 October 2023. I will not read out the article except for one quotation in it, which relates to the Internet Watch Foundation, IWF, warnings issued in the summer of last year. The IWF is quoted as stating:

Chillingly, we are seeing criminals deliberately training their AI on real victims' images who have already suffered abuse. Children who have been raped in the past [I am sorry to use such strong language] are now being incorporated into new scenarios because someone, somewhere, wants to see it.

The foundation went on to state it had also seen evidence of AI-generated images being sold online and that its latest findings were based on a month-long investigation into a child abuse forum on the dark web. This is the children's committee and we do everything we can to have hearings of this nature to see how we can best protect children. I am going to stay focused on children and the law. This has been a fascinating interaction with the witnesses.

Forgive me if I dispense with formal titles, but in their interactions Barry and Caoilfhionn spoke about the EU AI Act, the Finnish model and the Council of Europe draft framework. Barry referred on no fewer than three occasions to the Online Safety and Media Regulation Act. I do not want to misinterpret his words but, if I understand him correctly, there may be something in what he is saying to suggest the Act may not be fit for purpose for the times we live in. That is the first question for Barry.

The second question is for Caoilfhionn as the special rapporteur on child protection. On the issue of Coimisiún na Meán and recommender algorithms, what will be the commission's powers of enforceability in law? Has the commission the legal standing to be able to call those companies to account? What is Caoilfhionn's interpretation of the law specifically in relation to that? Those are the first two questions.

Professor Barry O'Sullivan

With regard to the article the Deputy points to, one of the real challenges now with AI is that the people who are seen might never have existed. There is actually a website, thispersondoesnotexist.com, which creates images of people who do not exist. It is easy to extend that, and it has been extended, to children. The kind of content in that article might not contain any actual human being; there might not, in some sense, be an actual victim. I am not a legal expert, but if the person depicted as being abused and as being subject to a crime does not actually exist, then what is the crime? We need to worry about that in the broad context of generative AI because what one is seeing might not actually be real. There are people far more qualified than I am to talk about the legal issues of this.

Professor O'Sullivan is getting down to the real nub of it now. If it is machine-generated content, then on whom should we place an onus? Is it, in the old language, the Internet service provider, or is it OpenAI, DeepMind, Anthropic or any of these companies?

Professor Barry O'Sullivan

Frankly, it is everybody who is involved in the chain, from the production of it, including the producer, right through to the person who presents it in front of the viewer. Within AI we have to get to the notion of provenance. If information were food, we would want to make sure there is traceability right back to where it began. We do not have this with data. Despite the fact that data is, as we have seen, impacting the minds of our children and society more generally, there is no sense of traceability. Not only does there need to be traceability, there needs to be accountability right along that data chain, bringing us right back to the person who developed the technology that produced the content. We do not have that. As we heard earlier, the AI Act has this whole labelling issue where, if something is generated by AI, it needs to be labelled as such, but unfortunately the people who create this content are already criminals, so what do they care about the AI Act? They are not going to label anything.
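To make the food-traceability analogy concrete, a tamper-evident provenance chain can be sketched in a few lines of code. The field names are invented for the illustration; real standardisation efforts in this area, such as C2PA, define much richer, cryptographically signed manifests:

    # Minimal sketch of a tamper-evident provenance chain for a piece of
    # content. Field names are illustrative only.

    import hashlib
    import json

    def record(prev_hash: str, actor: str, action: str) -> dict:
        """Append one step to the chain, hashing it together with the
        hash of the previous step."""
        entry = {"prev": prev_hash, "actor": actor, "action": action}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        return entry

    def verify(chain: list[dict]) -> bool:
        """Recompute every hash and check the links: editing any earlier
        step breaks every hash that follows it."""
        prev = "genesis"
        for e in chain:
            body = {"prev": e["prev"], "actor": e["actor"], "action": e["action"]}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

    chain = [record("genesis", "model-x", "generated image")]
    chain.append(record(chain[-1]["hash"], "platform-y", "recommended to user"))
    print(verify(chain))  # True; altering either step makes this False

Each actor in the chain adds an entry, so accountability can be traced back to whoever produced the content in the first place.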

On the question of the Online Safety and Media Regulation Act, the commission we have is fantastic and the organisation is fantastic, but my concern about the legislation itself is that we cannot expect those who are harmed to form an orderly line. As we have heard today, the scale and the nature of the harm can be enormous. Literally hundreds of thousands, millions or tens of millions of people could be harmed simultaneously. How do we deliver accountability and redress to people who are harmed by technology on that scale? My concern with the Online Safety and Media Regulation Act - and there are people here far more qualified to talk about it - is that it does not quite envisage that scale. Nor have we figured out what "harm" means, what kinds of harm there are and what harmful content is. A lot of work needs to be done on establishing what that means. We can identify some extreme forms, but there are subtle forms of harm as well, which was part of my response to Senator Mary Seery Kearney's remarks about the outsourcing of self-esteem, which is in a sense a harm. What does "harm" mean? What the people involved are doing is fantastic and I have absolutely no criticism of that whatsoever, but I am not sure the legislation is cognisant of the scale and impact of this technology and of what is needed to seek redress.

I thank Professor O'Sullivan. When answering the second question, Dr. Ryan might also have an input. We refer to the UNICEF report and the Ministry for Foreign Affairs of Finland policy guidance, but it is policy guidance; it is not law. The UNICEF recommendations are not law; they are only recommendations. As a parent whose kids are not yet at the age where they have smartphones, I know I speak for thousands. They are avid readers, thank goodness, but it is only a matter of time before they procure smartphones and God knows what kind of content. As a parent, one wants to be able to regulate that. I want a body of law behind me, protecting me as a parent and giving us as parents protection, in the knowledge that we are not in this battle alone and that the law will be on our side when we want to take these guys on. How do we shift from guidelines on protecting kids to ensuring we have robust laws? We live on an island where nine of the top ten born-on-the-Internet companies are located. They have an EMEA presence here and are IDA Ireland-backed companies. I sometimes think there is a bit of a trade-off between the number of jobs they create and successive Governments sitting on the fence a little with regulation and content, because we do not want to compromise that inward investment. We are now, however, moving to a time when we need robust laws. I am seeking comfort here on how we can legislate. Will the recommender algorithm work to be done by the commission actually be robust enough to deal with this?

Dr. Johnny Ryan

I believe there is a body of law. Let me guide the committee through it. Before we even get to the Online Safety and Media Regulation Act, there is the audiovisual media services directive, which that Act brings into domestic law. The parts relevant to the Deputy's question are Article 6a(1) and Article 28b(1)(a), which state that services must take appropriate measures to protect minors from various things which "may impair their physical, mental or moral development". It is broad language that recurs, and it should move things in the right direction from the Deputy's perspective. On the Online Safety and Media Regulation Act itself, which is domestic law, the question of enforceability arises. Section 139K(4)(a) states that the commission's safety code may provide "standards that services must meet, practices that service providers must follow, or measures that service providers must take".

The final bit I would add is section 139ZZC, a horribly drafted part of the Online Safety and Media Regulation Act, which states that Coimisiún na Meán can, if it deems it necessary, go to the High Court and apply for a blocking order, which would say, "You are out of the market because you did not do what we asked you to do." The commission has the power; it just has to use it.

Dr. Kris Shrishak

I wish to touch on the aspect of deepfakes and the transparency issues around deepfakes.

I was just thinking about certain prohibitions the Act contains. One of them targets Clearview AI-type systems, which indiscriminately scrape images to create facial recognition technologies in the background. Something similar is needed here because, even if we do not specifically target deepfake generation, if the scraping of input images from the Internet and other sources can be prohibited, that could essentially stop it at the source.

Dr. Johnny Ryan

Therefore, make the scraping illegal, as it were?

Dr. Kris Shrishak

Yes. Making the scraping of images illegal is essentially one way. Then we have the other elements I mentioned, such as circulation, so you are plugging the hole from different directions. I wanted to mention that.

Ms Caoilfhionn Gallagher

I thank Deputy Sherlock for the questions and for all the work he has done in this space, which is hugely important. He started his remarks by referring to that horrifying article about AI-created child sexual abuse images. He may know I have acted for organisations that represent child victims of sexual abuse across borders, in the Philippines, Uganda and a range of other countries, where the children have been abused online, through directed abuse, by people from Europe, the US, Australia and so on. I have some examples, from the organisations with which I have been working, of AI-created child sexual abuse images arising from some of the images they held, because some were not sufficiently extreme and there was more and more of a market on the dark web. I am happy, if it assists, to put the Deputy in contact with some of those organisations, which are very helpful. One of them is ECPAT International; another is Child Redress International, which looks at ways in which victims in countries abroad can access victim-focused remedies that are primarily designed for those abused as children in Ireland. The victim mechanism assumes you are in Ireland and you are a child abused by an individual in Ireland, but if you happen to be in the Philippines, our systems do not work very well cross-border. I realise that is straying a little outside today's topic, but I wanted to indicate that I am happy to engage further on it. It is hugely important and I thank the Deputy for starting with such a child-centred focus. The bottom line is that what we are all concerned about here is child protection and those impacted most severely by the issues we are talking about.

I obviously agree with what was said by Dr. Ryan. I agree with the concern he raises as a matter of principle, in that many of the items we are talking about in the guidance documents are essentially soft law. They are guidance documents and do not have teeth. Where those international materials do end up having teeth, it is often in ways that do not particularly assist in this space. For example, under Article 8 of the European Convention on Human Rights, the European Court of Human Rights will increasingly look at these topics, but that arises in cases brought against a state, which does not assist particularly in this sphere. In addition to the point made by Dr. Ryan, there is also the fact that non-compliance with the guidance that has been produced does not automatically mean there has been a breach of the code, but it is a relevant factor to take into account, and keeping a close eye on that will be important.

The final thing I want to say on Coimisiún na Meán relates to something on page 14 of the consultation document. It is important we look at this in light of some of the earlier discussions. The AVMS directive, when looking at content harmful to the general public, has a definition which excludes some of the issues referred to earlier, particularly by Professor O'Sullivan. That is important because it is somewhat old-fashioned. It relates to illegal content that is harmful to the general public in the sense of being provocation to commit terrorist offences, or content which constitutes a criminal offence relating to child pornography - a phrase I would not use myself - and so on. Those definitions are drawn from the AVMS directive. However, the final two paragraphs are important because a number of respondents in the call for inputs considered that Coimisiún na Meán should include a wider range of harms, looking at, for example, content that promotes misogyny, attitudes that lead to gender-based violence, and the encouragement of racist and other discriminatory attitudes. The current position is that the draft code focuses on the harms covered by the AVMS directive; on the wider issues, Coimisiún na Meán has said it will consider the potential relevance of content that promotes discriminatory attitudes in collaboration with the European Commission and its counterparts in other member states. It is very important that we and the Oireachtas ensure Coimisiún na Meán does that, works with like-minded partners and takes a lead on the issue, because many of the harms we are talking about are not solely about individual victims but about the promotion of a set of attitudes that ultimately changes the water-cooler moments the Senator spoke about earlier and the way we as a society interact. That is critical and at the moment it is not covered because it falls outside the scope of the AVMS directive. I understand why that is, and the need set out by the coimisiún to have a broader base of support, but it is critical. We need to keep an eye on it and we need to be trailblazers on the topic.

I cannot give more time to this. We still have Deputy Jennifer Murnane O'Connor left to ask her questions. Apologies, we are short on time.

I will be very quick. Everything said here today was critical and so important. As previous speakers have said, child protection is of the utmost importance and we need more robust laws. Many of the questions I wanted to ask have been asked. I speak to families and children and, on that, can Ms Cooney and Ms Daly tell me what engagement they have had with the Department? This has become a huge issue for me and I am sure they are aware of it. Many schools or parents' associations have to pay or provide funding for speakers to come and talk about cybersafety. What engagement have the witnesses had with the Department of Education about having this funded and implemented in schools? This is something a parent asked me about recently. Critical thinking is not taught as a subject in schools. Do the witnesses think it should be, given the amount of AI and misinformation online? Previous speakers have spoken about education and awareness. That is where you start. We can have all the policies in the world and all the legislation we want but, unless there is awareness and education out there, and children and families are taught how to do this, it does not matter. I am a firm believer in that.

My second question is to Ms Gallagher. This is again something we face daily and it is important. Children are more across AI than we are. Should our policies actually be dictated by children in this case? I will give two examples. The first relates to the BT Young Scientist Exhibition. The members will all be aware of it because it is excellent. It affects every child and every school in the country. I had an intern in my office recently who participated and won a highly commended award. He is from Tullow Community School and his project was about misinformation online. Another project, from Tyndall College Carlow, was on TikTok, the pandemic and the threats to children online on that platform. Many of the projects this year focused on digital media. In fact, the winning project, of which I am sure we are all aware, was on AI in authorship. The children are already seeing the challenges. Should we, as policymakers, consult the children first and then the other stakeholders in this discussion? That is something we need to look at. Are parents trusting the technology too much? Do they need to be guided further regarding AI technology, as well as the benefits in those cases where good work is being done? We know there is good work being done and all of us are here today for the right reasons. However, we need to start in the schools and with the families, provide the education and build awareness. Children are so intelligent these days. As the speakers have said, this is about child protection and robust laws, but we need to start at the early stages and schools can play a huge role in this. The teachers are excellent. Everybody wants the best here. Those are my questions. I do not know if any of them have been asked, but these are things we are seeing on a daily basis. That is how we can make an impact. I thank the witnesses again. I thought today was very important.

Ms Alex Cooney

I thank the Deputy for the questions. Yes, we have engaged with the Department of Education. We have been recommending for a number of years that education on digital media literacy and online safety be given a more central place in schools. It is currently a peripheral topic covered within the curriculum; it is absolutely not given core focus, and teachers are not being routinely and adequately trained on it either.

It is certainly a significant issue facing schools. We have raised these issues. I met the Minister for Education last summer and we spoke about the programme we are delivering in schools. There has been a lot of talk about a ban on smartphones in schools and about the move some school communities have made to reach out to parents and agree to hold off on smart devices, but it really comes down to education. Education will focus on equipping children to navigate these online environments, to be wary of the content they come across and to question the content and the people they encounter online. This is so important. These are skills we need to instil in children, and we need to work with parents and teachers. We have raised these issues and sought funding from the Department of Education because we are very underfunded as an organisation. We do not receive State funding and have, unfortunately, been told that there is no funding at the moment and no focus on making this a central part of the curriculum. I hope this will change because we feel that once this is done, and parents feel equipped through resources and awareness campaigns, there will be no need for an organisation like CyberSafeKids. We are working towards a time when, hopefully, we will not be needed and it will be well covered within the system.

I will give the last word to Ms Gallagher as the special rapporteur on child protection.

Ms Caoilfhionn Gallagher

I have two points to make. I agree with the Deputy's point about children being more across AI than we are in many ways. The UNICEF Finland document we referred to makes that clear. Its research included not just children in Europe but also children it spoke to in a wide range of other countries, including South Africa and Brazil. The overwhelming conclusion from all jurisdictions was that most parents do not have sufficient knowledge on these topics. That was the view of the children who were spoken to, so it very much reflects the Deputy's experience and is reflected in the document.

On the second point about consultation with children, the short answer is "Yes". The way in which that is done depends on the system but I very much support the second principle from UNICEF, which is ensuring inclusion of and for children from the outset and at all stages of the process in terms of AI design and also with regard to the issue of regulation and policymakers making decisions that profoundly affect children. That is critical. It is also important that it is not just consultation with children generically but is consultation with children that ensures diversity and that isolated voices are also included. I commend the work others are doing in that area. The short answer, in principle, is "Yes". Children must be involved in the design of AI and should and must be involved in the conversations we are having today.

I thank the witnesses. We have discussed a combination of very interesting but also very scary stuff, particularly as most of us are parents or have nieces and nephews and are very mindful of that. As was said, we are looking at this from the point of view of children. We are not looking at the overall picture. We do not have responsibility for the overall legislation but our aim is to have our report feed into the legislation to protect children. Coimisiún na Meán will appear before us next Tuesday to discuss this topic. Some really good points came out of today's discussion. I hope we can raise them with Coimisiún na Meán. Is it agreed to publish the opening statements on the website? Agreed.

The joint committee adjourned at 6.04 p.m. until 3 p.m. on Tuesday, 20 February 2024.