Cyber security expert gives evidence to UK inquiry
The public is being fed disinformation through feedback loops built into social media platforms, a cyber security expert at LJMU has told a group of senior MPs.
Any online interaction with a piece of disinformation, even an inadvertent one, can trigger a “domino effect” in which the user is repeatedly recommended similar misleading material, explained Dr Áine Mac Dermott, a senior lecturer in LJMU’s School of Computer Science and Mathematics, who was giving evidence to the Commons Science, Innovation and Technology Select Committee.
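To make that mechanism concrete, here is a minimal Python sketch of an engagement-driven recommendation loop. It is a toy model, not the algorithm of X or any real platform: the topic labels, the scoring rule and the boost values are all assumptions chosen purely to illustrate how a single inadvertent interaction can snowball into a feed dominated by similar content.

    import random

    # Hypothetical content topics; "disinfo" stands in for disinformation posts.
    TOPICS = ["sport", "music", "news", "disinfo"]

    def recommend(weights, k=5):
        """Sample a feed of k items, favouring topics the user engaged with before."""
        return random.choices(list(weights), weights=list(weights.values()), k=k)

    def simulate(rounds=10):
        # Start with no preference: every topic is equally weighted.
        weights = {topic: 1.0 for topic in TOPICS}
        # One inadvertent click on a disinformation post nudges its weight up.
        weights["disinfo"] += 1.0
        for r in range(rounds):
            feed = recommend(weights)
            # Engagement-maximising update (assumed rule): each disinformation
            # item shown earns further engagement, so its weight grows again,
            # making similar items even more likely in the next feed.
            weights["disinfo"] += 0.5 * feed.count("disinfo")
            share = weights["disinfo"] / sum(weights.values())
            print(f"round {r + 1}: disinfo share of feed weights = {share:.0%}")

    if __name__ == "__main__":
        simulate()

Run over a handful of rounds, the disinformation share of the feed weights climbs steadily from its starting point, which is the “domino effect” described above: the recommender is only optimising for engagement, yet the loop it creates keeps serving the user more of the same material.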
The Committee is conducting an inquiry into social media, disinformation and harmful algorithms, and their impact on the summer riots, which are believed to have been driven in part by false claims about the Southport attacks spread on social media.
The MPs took written evidence from Dr Mac Dermott and others to understand the sources of online harms.
Dr Mac Dermott said: “If you take X, it now allows any user account to become ‘verified’. This means that malicious accounts and bot accounts masquerading as legitimate users can create materials or share resources, meaning that many users would take the information as credible when in fact there are malicious means behind it.”
The Select Committee is building a picture of the approaches tech companies are taking, and the role that social media business models play in shaping their response.
According to Áine, the game is currently rigged to maximise engagement and advertising revenue.
She said: “The problem with the social media companies is that they are global companies and, while this committee is discussing riots occurring here in the UK, platform owners such as Elon Musk have argued that people are allowed freedom of speech.”
A recent study by TrustLab found that X had the highest ratio of misinformation among the six social media platforms it examined: Facebook, Instagram, LinkedIn, TikTok, YouTube and X itself.
“Elon Musk has removed the ‘election integrity team’ at X, a department that was responsible for combating the spread of misinformation online, and removed a feature that let users self-report false political statements,” said Áine.
She recommends further safeguards, arguing that UK legislation does not go far enough.
“More should be done to raise awareness of the questionable material published on social media platforms by malicious actors and/or generative AI. In particular, online safety campaigns in schools could help users identify questionable sources and question online narratives.
“Awareness also needs to be raised of the repercussions of these offences, so that young people and children feel more supported online. Social media companies should likewise be made more aware of the consequences they face, such as monetary fines, for criminal actions committed on their platforms.
“More regulation in this area, beginning with the introduction of the Online Safety Act, will help raise awareness of the role algorithms have in recommending and tailoring content,” she concluded.