Artificial Intelligence (AI) is a technology that simulates human intelligence in its ability to learn, solve problems, and make decisions, among other complex processes. As the field of artificial intelligence continues to advance, organizations and individuals must anticipate and prevent AI-assisted fraud. The Association of Certified Fraud Examiners (ACFE) recently reported on the dangers of AI fraud in the ACFE Insights article, “AI Fraud: The Hidden Dangers of Machine Learning-Based Scams,” by Laura Harris. In the following, we summarize key takeaways from the article.
Why People Use AI to Commit Fraud
Harris identifies a variety of reasons that may compel people to use AI for fraudulent purposes. People typically commit fraud for personal gain. It’s important to remember that AI, in itself, is not problematic. Instead, problems arise when people use AI as a tool to obtain personal gain through dishonesty or deception. Fraudsters turn to AI for its speed, efficiency, and automation. AI can process large amounts of data quickly and can perform a variety of tasks instantly and automatically. In other words, fraudsters turn to AI for the same reasons as those using it for legitimate purposes. AI also allows for anonymity and evasion of detection. AI-assisted fraud is difficult to trace back to a human perpetrator, and it is often difficult for humans to detect fraudulent information created by AI.
Common Fraud Schemes That May Incorporate AI
Falsification of Documents: AI can be used to generate fake documents, such as contracts, invoices, checks, and sales records. These documents can then be used to cover up asset misappropriation.
Scam Calls and Phishing: AI can be used to send out mass emails, calls, and text messages requesting money or personal information. AI is able to generate human-like text and speech, leading victims to believe that they are communicating with a legitimate person.
Impersonation: In addition to generating human-like text and speech, artificial intelligence can be trained to mimic a specific person by matching that individual’s style, tone, and language patterns. Some AI systems can even generate fake images and videos of a target individual. Fraudsters can impersonate trusted individuals to request money or access confidential information.
How to Prevent AI-Assisted Fraud
Harris shares the following recommendations for organizations and individuals:
- Use strong security measures, such as two-factor authentication and unique passwords for different accounts.
- Watch out for writing that lacks a distinctive style or tone, is repetitive or formulaic, lacks cohesion, includes contextually irrelevant content, or contains grammar and syntax errors. These are all warning signs that the writing may have been generated by artificial intelligence.
- Verify the authenticity of information and communications, especially before sharing confidential information.
- Be wary of unsolicited requests for personal information or offers that seem too good to be true.
- Report known or suspected fraudulent activity to the appropriate authorities or organizations.
Organizations must protect themselves from all types of fraud, including fraud that incorporates the use of artificial intelligence. All members of an organization must remain vigilant when it comes to detecting and reporting fraud. Employees should be trained to identify and report suspected fraud. Implementing a third-party hotline, such as Red Flag Reporting, provides employees with a clear pathway for voicing their concerns. When everyone is equipped to detect and report fraudulent activity, losses are minimized.
To learn more about cyber-safety, see our article here.