Artificial Intelligence Will Be Assisting Cybercriminals

To effectively manage the risk that cybercriminals pose to your business, it is important to anticipate the attacks your business may soon have to deal with. Due to the increased accessibility of artificial intelligence and related processes, we predict that cybercriminals will likely use AI to their advantage in the very near future.

We aren’t alone in believing so, either. A recent study examined twenty ways that AI could be integrated into cybercrime to see where the biggest threats would lie.

Here, we’re looking at the results of this study to see what predictions can be made about the next 15 years where AI-enhanced crime is concerned. Here’s a sneak preview: Deepfakes (fake videos of celebrities and political figures) will become very believable, and that’s very bad news.

The Process

To compile their study, researchers identified 20 threat categories from academic papers, current events, pop culture, and other media to establish how AI could be harnessed for crime. These categories were then reviewed and ranked during a conference attended by subject matter experts from academia, law enforcement, government and defense, and the private sector. These deliberations resulted in a catalogue of potential AI-based threats, evaluated based on four considerations:

  • Expected harm to the victim, whether in terms of financial loss or loss of trust.
  • Profit that could be generated by the perpetrator, whether in terms of capital or some other motivation. This can often overlap with harm.
  • An attack’s achievability, as in how feasible it would be to commit the crime in terms of required expense, technical difficulty, and other assorted obstacles.
  • The attack’s defeatability, or how challenging it would be to overcome, prevent, or neuter.

Split into smaller groups, the participants then ranked the collection of threats through a Q-sorting exercise, which forces the rankings into a bell-curve distribution: less severe threats and attacks fell to the left, while the biggest dangers were placed to the right.

When the groups came back together, their individual distributions were compiled into a single, conclusive diagram.
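
To make the ranking exercise a bit more concrete, here is a minimal Python sketch of how threats scored against those four considerations could be sorted into a forced, bell-curve-style grouping. The threat names echo the ones discussed below, but the additive scoring scheme and the individual numbers are assumptions for illustration only; the study does not publish its raw scores or the exact Q-sort grid it used.

```python
# Minimal sketch of a Q-sort-style ranking of AI-enabled threats, assuming a
# simple additive scoring scheme. The threat names mirror the article, but the
# individual scores are purely illustrative.

from dataclasses import dataclass


@dataclass
class Threat:
    name: str
    harm: int           # expected harm to victims (1 = low, 5 = high)
    profit: int         # potential gain for the perpetrator
    achievability: int  # how feasible the attack is to carry out
    defeatability: int  # how hard the attack is to prevent or neuter

    @property
    def severity(self) -> int:
        # Assumed additive score: a higher total means a more dangerous threat.
        return self.harm + self.profit + self.achievability + self.defeatability


threats = [
    Threat("Forgery", 1, 1, 2, 1),
    Threat("AI-authored fake reviews", 2, 1, 2, 2),
    Threat("AI snake oil", 2, 3, 3, 2),
    Threat("Market bombing", 3, 3, 2, 3),
    Threat("Hijacked military robots", 4, 2, 2, 3),
    Threat("Data poisoning", 3, 3, 3, 3),
    Threat("Tailored phishing", 4, 4, 5, 4),
    Threat("Deepfakes", 5, 4, 4, 5),
]

# Q-sorting forces the items into a fixed, quasi-normal shape: only a few
# slots at either extreme, with most items landing in the middle of the curve.
ranked = sorted(threats, key=lambda t: t.severity)
low, medium, high = ranked[:2], ranked[2:-2], ranked[-2:]

print("Low threats:   ", [t.name for t in low])
print("Medium threats:", [t.name for t in medium])
print("High threats:  ", [t.name for t in high])
```

Running this toy version groups the least attractive crimes at one end and the most dangerous at the other, which is roughly the shape of the diagram the experts produced.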

How Artificial Intelligence Cooperates with Criminality

In and of itself, the concept of crime is a very diverse one. A crime could potentially be committed against assorted targets, for several different motivating reasons, and the impact that the crime has upon its victims could be just as assorted. Bringing AI to the party—either in practice or even as an idea—only introduces an additional variable.

Having said that, some crimes are much better suited to AI than others are. Sure, we have pretty advanced robotics at this point, but that doesn’t mean that using AI to create assault-and-battery-bots is a better option for a cybercriminal than a simple phishing attack would be. Not only is phishing considerably simpler to pull off, but there are also far more opportunities to profit from it. Unless there is a very specific purpose to a crime, AI seems most effective in the criminal sense when used repeatedly and at a wide scale.

This has also made cybercrime an all-but-legitimate industry. When data is just as valuable as any physical good, AI becomes a powerful tool for criminals, and a significant threat to the rest of us.

One of the authors of the study we are discussing, Professor Lewis Griffin of UCL Computer Science, put the importance of such endeavors as follows: “As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

The Results of the Study

When the conference had concluded, the assembled experts had generated a bell curve ranking the 20 threats, breaking each down by how strongly the four considerations listed above worked in a criminal’s favor. Because threats of similar severity were grouped together in the curve, the results split neatly into three categories:

Low Threats

As you might imagine, the crimes ranked as low threats offered little value to the cybercriminal, causing little harm and bringing little to no profit while being difficult to pull off and easy to overcome. In ascending order, the conference ranked the low threats as follows:

  • 1. Forgery
  • 2. AI-assisted stalking and AI-authored fake reviews
  • 3. Bias exploitation to manipulate online algorithms, burglar bots, and evading AI detection

(In case you were wondering, “burglar bots” referred to the practice of using small remote drones to assist with a physical break-in by stealing keys and the like.)

Medium Threats

For these threats, the four considerations largely balanced each other out, offering the cybercriminal no clear advantage or disadvantage overall. The threats included here were as follows:

  • 4. Market bombing (manipulating financial markets through targeted patterns of trades), tricking face recognition software, blocking essential online services through online eviction, and utilizing autonomous drones for smuggling and interfering with transport.
  • 5. Learning-based cyberattacks (such as an artificially intelligent distributed denial-of-service attack), fake AI sold as part of a misrepresented “snake oil” service, data poisoning by injecting false data into training sets, and hijacked military robots.

High Threats

Finally, we come to those AI-based attacks that the experts felt were most concerning as sources of real damage. These categories broke down as follows:

  • 6. AI being used to author fake news, conduct blackmail on a wide scale, and disrupt systems normally controlled by AI.
  • 7. Tailored phishing attacks (what we call spear phishing) and weaponized driverless vehicles.
  • 8. Audio/visual impersonation, also referred to as Deepfakes.

A Deepfake is a digital recreation of someone’s appearance, used to make it appear as though they said or did something that they didn’t, or were present somewhere that they never were. You can find plenty of examples of Deepfakes of varying quality on YouTube. Viewing them, it is easy to see how inflammatory and damaging to someone’s reputation a well-made Deepfake could prove to be.

Don’t Underestimate Any Cyberattack

Of course, now that we’ve gone over these threats and how practical each one really is, it is important to remind ourselves that all of them could damage a business in some way, shape, or form. We also can’t fool ourselves into thinking that these attacks must involve AI at all. Human beings could carry out most of them on their own, which makes them no less of a threat to businesses.

It is crucial that we keep this in mind as we work to secure our businesses while continuing to operate them.

As more and more business opportunities move online, more and more threats have followed them. Keeping your business protected from them—whether AI is involved or not—is crucial to its success.

ExcalTech can help you keep your business safe from all manner of threats. To find out more about the solutions we can offer to benefit your operations and their security, give us a call at (833) 392-2583.
