22 August 2017

How Artificial Intelligence Could Transform Trials in the Future

Artificial intelligence (AI) is already having a major impact across a huge array of areas, from marketing to medicine. It’s AI’s potential to touch every aspect of our lives which has convinced our founder, Tej Kohli, of the importance of encouraging development in the field through Kohli Ventures. But AI also has more nefarious applications – including in fraud, where it raises worrying questions about how our legal system can accommodate the rise of artificial intelligence. Here’s how fraudsters are able to misuse AI, and why we need to look at our standards of evidence to meet that challenge.

Can AI be Used to Commit Fraud?

To understand AI’s applications to fraud, it’s important to bear in mind the safeguards that already protect us from the practice. Requiring a signature for a contract to be valid is a simple but effective way of preventing fraud – no one can falsely claim that you have signed assets over to them unless they forge your signature. But the requirement is only as strong as a forgery is difficult to produce – and AI is making such forgeries less and less difficult.

Researchers at UCL recently created an algorithm that can imitate your handwriting with startling accuracy from only small samples of writing. The researchers claim that forensic experts can still distinguish the algorithm’s output from genuine handwriting, but doing so will become increasingly difficult as the technology matures. Both signatures and handwritten notes can now be forged far more easily with the help of AI – and the imitations will only become more convincing over time.

Even your own voice can now be imitated with the help of an algorithm. Google’s WaveNet can create a natural, authentic-sounding reconstruction of your voice, meaning similar technology could be used to fake whole phone calls or recorded conversations. At the moment, WaveNet requires large samples to produce an accurate simulation, but we can expect the technology to improve rapidly, to the point where only a handful of conversations may one day be enough for AI to fake your voice.

These technologies have powerful, exciting legitimate applications, but they also pose serious challenges for our legal system, as they give fraudsters the ability to forge an enormous amount of material with previously impossible accuracy. So how can we best cope with the difficulties these new developments in AI create?

How Can the Legal System Respond?

In fraud trials, written records and voice recordings are usually taken to constitute a high standard of evidence – but as AI becomes more capable of forging such material, the weight placed on it will have to be reduced. Existing practices like requiring a witness, as is already the case with legal documents like wills, may have to become more widespread to compensate. Experts are already called on to distinguish fake signatures from genuine ones, and while their work may become more difficult, that doesn’t mean it will be impossible. Standard legal practices may be enough to cope with the challenges posed by AI, as long as jurists are aware that less emphasis can be placed on evidence which is more easily forged.

AI may also offer solutions of its own for combating fraud. Specialists are already developing machine learning techniques to detect fraud, trawling through vast quantities of data to spot discrepancies which would be missed by human investigators. It’s important to remember that AI has the potential to improve our lives immensely – while it can be used for illegitimate purposes in fraud, by adapting existing practices we can accommodate and embrace the changes artificial intelligence will bring.
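To make that idea concrete, here is a minimal, illustrative sketch of how anomaly-based fraud screening can work, using an Isolation Forest from scikit-learn. It is not any specific vendor’s system: the transaction features, the synthetic data and the assumed fraud rate are all invented for the example.

```python
# Illustrative sketch only: flag unusual transactions with an Isolation Forest.
# The feature set and data are invented for this example; real fraud systems
# use far richer features (device, merchant, behavioural history) plus human review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" transactions: [amount in GBP, hour of day, payee familiarity score]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),   # typical amounts around tens of pounds
    rng.normal(loc=14, scale=4, size=1000) % 24,     # mostly daytime activity
    rng.uniform(0.5, 1.0, size=1000),                # mostly well-known payees
])

# A few suspicious transactions: large amounts, odd hours, unfamiliar payees
suspicious = np.array([
    [5200.0, 3.0, 0.02],
    [8750.0, 2.5, 0.01],
])

X = np.vstack([normal, suspicious])

# Train an unsupervised detector; 'contamination' is a guess at the fraud rate.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(X)

flags = detector.predict(X)            # -1 = anomaly, 1 = normal
flagged = np.where(flags == -1)[0]
print(f"Flagged {len(flagged)} of {len(X)} transactions for review")
```

The design point is the one the paragraph above makes: systems like this don’t decide guilt, they surface outliers for human investigators to examine, complementing rather than replacing existing legal safeguards.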