Ransom attacks, fake digital evidence and IP theft: technology advances prompt new legal issues

July 11, 2024 by Rachel Rothwell

As technology constantly evolves, so too do the legal issues and questions that surround it. At a media roundtable hosted this month by the disputes team of City firm Cleary Gottlieb, partner James Norris‑Jones gave a fascinating rundown of some of the most pressing technology-related issues on which lawyers are now being called to advise.

Cyber crime

Ransomware attacks are on the rise, with statistics published by the Information Commissioner’s Office (ICO) in 2023 showing a record number of such attacks in the UK.

‘We’re talking to clients a lot about what to do in that situation,’ said Norris‑Jones. ‘Research published earlier this year showed there were 1,200 international companies that had been targeted with ransomware attacks. I’m quite surprised by the statistic that 81% actually paid the ransom. That’s not something we generally advise our clients to do for two reasons.

‘One is that there’s no guarantee, when you’re dealing with people who are hacking you, that you will ever get your data back; indeed, 33% of that 81% never got their data back.

‘But secondly, as soon as you pay a ransom, you are putting yourself in a position where you are going to be a target for future attacks.’

Norris‑Jones added that the payment of ransoms could give rise to legal difficulties of its own. He explained: ‘In most jurisdictions, including the UK, it’s not illegal to pay a ransom per se. But you have to be thoughtful about who you’re paying the ransom to, because you can get into trouble with anti-terrorism legislation, for example.

‘There are state actors, for example North Korea, who are classified as terrorists under US sanctions regimes and rules, and who are hacking organisations in the west in quite a systematic way. Arguably, making ransom payments to them would put you in breach of criminal legislation.’

AI-related issues

Norris‑Jones also discussed the current legal issues arising from the use of artificial intelligence, where he identified several different categories of risk that are creating problems for companies. He said:

‘One is straightforward fraud, where deepfake technology is being used to impersonate people in order to steal money.

‘For example, there was a case earlier this year involving British firm Arup [which had] £20 million stolen from it in a fraud where the company’s CFO was impersonated using deepfake technology.

‘That’s straightforward theft. But then you have risks associated with the use of chatbots. For example, Air Canada had a situation where a chatbot on its website was giving out incorrect information about discounts, and the airline tried to take the position that the bot was responsible for its own actions. Perhaps unsurprisingly, that didn’t succeed. Clearly there is a problem about how these bots work, where they get their data from and who is responsible for what they do.’

Norris‑Jones also pointed to the infringement of intellectual property as another area of AI-related risk. ‘[This refers to] situations where AI is, deliberately or otherwise, infringing intellectual property. The inadvertent example would be where you ask an AI model or bot to produce something, and it produces text that is effectively lifted from someone else. So it’s an infringement of copyright, but it won’t be obvious to you when you receive that information from the AI engine.

‘The other example is where AI is being used to try to generate revenue by mimicking, for example, another musical act – a situation we’ve been dealing with quite a lot, and one that has attracted considerable attention.’

In the US, legislation is being considered that would make it illegal to impersonate any individual without their consent, while the EU is bringing forward an AI Act that focuses on disclosure and transparency as its key principles.

‘There’s a scramble by regulators to try to catch up with this, and regulators are increasingly focused on how regulated businesses are using AI,’ noted Norris‑Jones. ‘Interestingly, regulators are now starting to use AI themselves [for] monitoring businesses. For example, the FCA [Financial Conduct Authority], which published its AI strategy earlier this year, is now using AI to monitor compliance with sanctions; to identify scam websites; [and] to monitor market activity. [But] the FCA hasn’t said anything about how its own use of AI is being governed and monitored. That will be an interesting area going forward.’

Finally, Norris‑Jones pointed to the use of digital evidence in litigation as another key area of risk. ‘This is a growing problem, particularly in America, where it’s becoming increasingly difficult to trust the reliability of digital evidence that is put forward, because it has become so much easier to create very convincing false evidence, whether it’s documentary evidence or audio evidence that you can’t tell from the real thing.

‘Court systems are having to start thinking about how you verify digital evidence, and how you separate manipulated evidence from real evidence. That’s going to be a real area of focus and risk going forward in the litigation sector.’

It seems clear that these multi-faceted and fast-paced developments in AI and other technologies are set to spark a whole new set of legal challenges that will keep lawyers busy for the foreseeable future.

