The 1983 film “WarGames” exposed many people to the concept of tangible Artificial Intelligence (AI) for the first time. As microcomputers became commonplace in teenagers’ bedrooms around the world, Matthew Broderick’s character “David Lightman” became an inspirational figure for a new era of wannabe bedroom hackers. When Lightman hacked (albeit accidentally) into “WOPR”, the US Department of Defense computer system that controlled the nation’s nuclear missiles, he was met not with resistance but with interaction, from an unlikely source known as “Joshua”. Spoiler alert – Joshua turned out to be the computer system itself, and even in the Hollywood world of futuristic Artificial Intelligence, Joshua made mistakes, almost causing the humans to start a nuclear war.
The year 2018 can be seen as an amplified version of WarGames. We still have hackers – yet instead of the teenage arcade gamers dialing into local systems after school, we now have organised governmental units launching attacks on other nations.
We still have nuclear facilities that are the targets of digital attacks – just look at the Stuxnet worm, designed to attack Iranian nuclear facilities. In fact, the worm specifically checks whether the infected device controls particular PLC components manufactured by Siemens (to ensure the host device was indeed the targeted nuclear system), and if not, it simply lies dormant and does nothing.
We also still have Artificial Intelligence – though these days it is rather more advanced than Joshua as depicted in 1983. Just as Joshua was in place at the defense facility, organisations in 2018 are using AI systems in their own lines of defense, helping tackle problems ranging from malicious botnet attacks to financial fraudsters.
In addition to defensive AI, the rapid advancements in Artificial Intelligence are allowing hackers to develop systems that can create their own methods of attack. A keynote presentation by Hyrum Anderson at DEF CON 2017 demonstrated how an AI system can teach itself to bypass anti-virus software by repeatedly making small changes to malicious binaries.
This post will focus on the defensive side of cyber security, specifically online credit card fraud, one of the more prevalent problems faced in E-Commerce today. What advancements are emerging in the world of fraud prevention, and how is Artificial Intelligence changing the traditional face of the “manual review” team?
For as long as E-Commerce has accepted credit cards, online credit card fraud has existed. Traditionally, and to this day for 79% of North American businesses, manual review teams have been the key factor in preventing fraudulent transactions.
Review teams examine, on average, 25% of all transactions made on their platform, and 89% of those reviewed transactions are ultimately approved. With each transaction taking 5 to 15 minutes to review, and with annual online sales exploding in popularity (2.3 trillion dollars and growing!), it is clear that manually reviewing transactions to detect fraud is an extremely costly business expense. Retail giants such as Amazon strive to ship orders as quickly as possible, and a delay in manual review is no excuse for a delay in shipping, especially if a customer has paid for next-day delivery.
The big buzzword in fraud prevention, echoing through the halls of every security conference in recent history, is “machine learning”. Like most tech buzzwords, it will eventually become part of our everyday vocabulary. Already, dedicated companies are investing millions of dollars into anti-fraud platforms built almost entirely on a machine learning backbone. The idea is simple – can costs be cut, and accuracy increased, by replacing a manual review team with an automated platform? As fraudsters become ever more sophisticated, with an ever-increasing number of tools at their disposal, it is vital that fraud detection systems learn and adapt in real time to keep ahead of the attackers. Traditional “rules-based” systems, where a transaction is flagged for review if one of its attributes matches a rule in the system (for example, if the customer’s billing zip code and shipping zip code do not match), are expensive to maintain (unless they are allowed to become outdated), and difficult to modify quickly when urgently needed.
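To make the contrast concrete, here is a minimal, hypothetical sketch of such a rules-based filter. The rule set, field names, and thresholds are all invented for illustration; the point is that every rule is hand-written, so keeping the system current means constant manual code changes.

```python
# Hypothetical sketch of a traditional rules-based fraud filter.
# Every rule below is hand-written; adapting to new fraud patterns
# means a human must add or edit rules.

RULES = [
    # Flag if billing and shipping zip codes differ (the example above).
    lambda t: t["billing_zip"] != t["shipping_zip"],
    # Flag unusually large orders (threshold chosen arbitrarily here).
    lambda t: t["amount"] > 2000,
]

def flag_for_review(transaction):
    """Return True if any hand-written rule matches the transaction."""
    return any(rule(transaction) for rule in RULES)

order = {"billing_zip": "90210", "shipping_zip": "10001", "amount": 49.99}
print(flag_for_review(order))  # True: the zip codes do not match
```

A fraudster only has to learn the rules once to sidestep them, which is exactly the brittleness machine learning platforms aim to remove.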
Data, with all the recent controversy surrounding it, is one of the most powerful assets available to an anti-fraud platform. By using the historical purchasing data of every “good” transaction in the database, and looking at the real-time transactions being received, an Artificial Intelligence system can use regression analysis to “learn” which transactions are good and bad, and therefore approve or reject them in real time. This then adds more data to the system, improving its future results. Where an ordinary manual review employee may draw only on the recent data they have personally seen, a machine learning system can process millions of pieces of data in a matter of minutes, and it will never forget.
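As a toy illustration of that learning loop, the sketch below fits a hand-rolled logistic regression to a handful of invented historical transactions, then scores new ones in real time. The features (order amount, zip mismatch flag), the numbers, and the training scheme are all assumptions made for the example; a real platform would use far richer data and an established library.

```python
import math

# Each row: ((order_amount_in_hundreds, zip_mismatch), label), where
# label 1 = confirmed fraud and 0 = known-good transaction. Invented data.
history = [
    ((0.5, 0), 0), ((1.0, 0), 0), ((0.8, 0), 0), ((1.2, 0), 0),
    ((9.0, 1), 1), ((7.5, 1), 1), ((8.2, 1), 1), ((6.9, 1), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit the weights with plain stochastic gradient descent.
w0, w1, b = 0.0, 0.0, 0.0
for _ in range(2000):
    for (amount, mismatch), label in history:
        p = sigmoid(w0 * amount + w1 * mismatch + b)
        err = p - label
        w0 -= 0.1 * err * amount
        w1 -= 0.1 * err * mismatch
        b -= 0.1 * err

def fraud_probability(amount_hundreds, zip_mismatch):
    """Score a new transaction in real time using the learned weights."""
    return sigmoid(w0 * amount_hundreds + w1 * zip_mismatch + b)

print(fraud_probability(0.9, 0))  # low: small order, matching zip codes
print(fraud_probability(8.0, 1))  # high: large order, mismatched zips
```

Every newly confirmed transaction can be appended to `history` and the weights refit, which is the “adds more data to the system, improving its future results” loop described above.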
We can combine the human-defined rules most commonly used in E-Commerce today with an Artificial Intelligence system, creating a “random forest” algorithm from which the model learns. In this setup, we provide a number of rules that the AI system uses to detect fraudulent transactions. The system runs transactions through many variations of these rules, and eventually predicts which transactions are likely to be fraudulent, based on the traits similar transactions have shared. You can think of it as a collection of different “decision trees”. Using just one decision tree, you may come to one logical conclusion; by aggregating a large number of varied decision trees, the conclusion will be far more accurate.
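The “many trees voting” idea can be sketched very simply. In the hypothetical example below, each “tree” is collapsed to a single hand-written decision rule, and the forest takes a majority vote; in a real random forest, hundreds of full trees would be learned automatically from random subsets of the data and features rather than written by hand.

```python
# Each function stands in for one tiny decision tree (a real forest
# would learn these from data). Field names and thresholds are invented.

def tree_zip(t):  # tree 1: mismatched zip codes look risky
    return 1 if t["billing_zip"] != t["shipping_zip"] else 0

def tree_amount(t):  # tree 2: very large orders look risky
    return 1 if t["amount"] > 500 else 0

def tree_velocity(t):  # tree 3: many orders in one hour look risky
    return 1 if t["orders_last_hour"] > 3 else 0

def forest_predict(t, trees=(tree_zip, tree_amount, tree_velocity)):
    """Majority vote across all trees: 1 = predicted fraudulent."""
    votes = sum(tree(t) for tree in trees)
    return 1 if votes > len(trees) / 2 else 0

order = {"billing_zip": "90210", "shipping_zip": "10001",
         "amount": 950, "orders_last_hour": 1}
print(forest_predict(order))  # 1: two of the three trees vote "fraud"
```

Notice that no single tree is trusted on its own: the zip-code tree alone would flag every customer shipping a gift, but it is outvoted unless other risk signals agree.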
Theories such as this are not just theories; they are currently being applied in the “real world” by a variety of companies. Ravelin, for example, with their product “Ravelin Enterprise”, feature a machine learning model that scores every customer in real time, immediately stopping detected fraud. Sift Science, with their self-titled “Sift Science” anti-fraud platform, have a machine learning model that learns and updates itself in real time; from the features they advertise, it is clear the “random forest” model is at work, with data drawn from the other companies using the software in a large “knowledge sharing” style platform. Contrary to traditional knowledge sharing, this knowledge is shared between computers rather than human participants. PayPal use an in-house machine learning platform that can detect potential scammers before they abuse the service, based on the characteristics of confirmed scammers already in their database. If you have ever been “auto banned” from a website, it may be due to a poor decision by a machine learning system.
Though many may dismiss this as simple marketing of a new technology, all indicators show that machine learning-based anti-fraud platforms work, and they work well. Gett, an on-demand mobility company, recently shared internal data showing that introducing a machine learning model reduced their fraudulent transactions by 80%.
While manual review teams cannot (currently) be replaced outright, they should at least consider making use of an Artificial Intelligence system, even if only to run alongside their current processes as a facilitating service to help them fight fraud.
I predict an era of cyber-crime where the aim is to “infect” the AI systems themselves: send confusing, contradictory signals to the machine learning software and teach it incorrect information, with the ultimate aim of committing fraud with little or no human presence to prevent it. The very unpredictability of fraud means that the expertise of human professionals, networking with others in the industry, will likely always be needed to supervise these systems and help tweak them as fraud evolves in ways too drastic for an adaptive AI system to learn on its own (where a system trained to spot small changes in fraudulent attempts, for example, may fail to account for an entirely new method of attack discovered by security researchers).
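This kind of “infection” is known in the research literature as data poisoning, and a toy version is easy to demonstrate. The sketch below uses an invented nearest-centroid classifier on made-up two-feature transactions (amount in hundreds, zip-mismatch flag): once an attacker slips enough fraud-shaped points into the “good” training pool, the model starts waving obvious fraud through.

```python
# Illustrative sketch of data poisoning with invented data: the attacker
# floods the "good" training pool with fraud-like examples until the
# model learns that fraudulent-looking transactions are normal.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, good, fraud):
    """Nearest-centroid: label by whichever class centre is closer."""
    g, f = centroid(good), centroid(fraud)
    d_good = sum((x[i] - g[i]) ** 2 for i in range(len(x)))
    d_fraud = sum((x[i] - f[i]) ** 2 for i in range(len(x)))
    return "fraud" if d_fraud < d_good else "good"

good = [[1.0, 0.0], [1.5, 0.1], [0.8, 0.0]]      # small orders, zips match
fraud = [[9.5, 1.0], [10.0, 1.0], [9.8, 0.9]]    # large orders, zips differ

suspicious = [8.8, 1.0]
print(classify(suspicious, good, fraud))  # "fraud": far from the good cluster

# Poisoning: 50 fraud-shaped transactions sneak in labeled as "good",
# dragging the good-class centre toward the attacker's territory.
poisoned_good = good + [[8.8, 1.0]] * 50
print(classify(suspicious, poisoned_good, fraud))  # now "good"
```

The defence, as argued above, is human supervision: an analyst reviewing the training pool would spot fifty identical large-value, zip-mismatched “good” transactions long before the model quietly absorbed them.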
At the current rate of development, it is likely only a small number of years before today’s Artificial Intelligence anti-fraud systems look like Joshua from WarGames: outdated, and outmatched by the criminals. Current technology, especially that of the early-adopter pioneers, will become redundant as servers are switched off to make way for new technology. It may therefore be wise for a company to begin experimenting with AI to prevent fraud now, while reserving a large investment for when the technology becomes more mature and future-proof.
One thing is definitely clear – the days of an office full of people reviewing transactions manually based on a handful of variables are over. Artificial Intelligence is here, and the best thing we can do is welcome it with open arms.