Video: Detect AI Bot Swarms Using COGYNT

In a world where artificial intelligence is more accessible than ever and individuals can do the work of entire tech teams, technological risks and opportunities are equally amplified. We are on the cusp of AI bot swarms overwhelming social platforms. It’s already common to visit the comments section of a YouTube video and encounter nefarious comments promoting some form of fraud or financial scam.

In 2022, according to the Federal Trade Commission, older Americans reported losing $1.6 billion to fraud. Based on Cogility’s research, extrapolating from the small dataset we analyzed, these AI bot swarms number in the hundreds of thousands. Google, one of the most advanced tech companies with some of the best talent in the industry, is nonetheless having trouble detecting these AI bot swarms. The situation costs platform owners money, degrades the user experience, damages their reputation, and inflates their storage and data-processing costs.

How are threat actors achieving this? Well, they started using AI to generate comments that are indistinguishable from those written by humans, spreading them across varied posting dates and multiple accounts that mimic the hallmarks of ordinary human commenters.

What can we do about it? Are we entering an era where this rampant exploitation of society’s most vulnerable is the new norm? Not if Cogility has anything to say about it.

At Cogility, we are developing a no-code complex event processing model builder that can detect behavior patterns typically associated with AI bot swarms, and much more. This approach can be far more effective and efficient than the alternatives. Our Cogynt hierarchical complex event processing (HCEP) models let users create micro-patterns without writing code; these feed into larger macro-patterns that are overlaid on streaming data. A model assigns a risk weight to every condition met, and because those weights are aggregated, analysts have far more flexibility in deciding what crosses a given risk threshold and constitutes a legitimate finding.
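To make the idea concrete, here is a minimal sketch of how micro-patterns with risk weights might aggregate into a macro-pattern. This is illustrative Python, not Cogynt itself; the event fields, pattern names, weights, and threshold are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# A comment event as it might arrive on a stream; these field names are
# illustrative, not Cogynt's actual schema.
@dataclass
class CommentEvent:
    account_age_days: int
    comments_last_hour: int
    contains_contact_lure: bool  # e.g., "message my advisor on WhatsApp" bait

# A micro-pattern: a single no-code condition paired with a risk weight.
@dataclass
class MicroPattern:
    name: str
    condition: Callable[[CommentEvent], bool]
    risk_weight: float

MICRO_PATTERNS = [
    MicroPattern("new_account", lambda e: e.account_age_days < 7, 0.3),
    MicroPattern("burst_posting", lambda e: e.comments_last_hour > 20, 0.4),
    MicroPattern("contact_lure", lambda e: e.contains_contact_lure, 0.5),
]

RISK_THRESHOLD = 0.8  # illustrative macro-pattern threshold

def score_event(event: CommentEvent) -> tuple[float, list[str]]:
    """Aggregate the risk weights of every micro-pattern the event satisfies."""
    matched = [p for p in MICRO_PATTERNS if p.condition(event)]
    return sum(p.risk_weight for p in matched), [p.name for p in matched]

def macro_pattern(event: CommentEvent) -> bool:
    """A finding is raised only when the aggregated risk crosses the threshold."""
    score, _ = score_event(event)
    return score >= RISK_THRESHOLD

event = CommentEvent(account_age_days=2, comments_last_hour=35,
                     contains_contact_lure=True)
print(macro_pattern(event))  # True: 0.3 + 0.4 + 0.5 = 1.2 >= 0.8
```

Because each condition contributes a weight rather than a hard pass/fail, a single suspicious trait does not trigger a finding on its own, while several weak signals together can.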

This approach doesn’t require storing terabytes of Google comment data and then running condition queries against it. Instead, it shines multiple narrow spotlights on incoming streaming data, quickly and efficiently extracting insights and feeding them into more complex patterns that perform more intricate analyses. These models can be built to identify the weakest links in the data and then search for other data that shares those relationships.
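As a rough illustration of the streaming idea, the sketch below evaluates one condition over a rolling per-account window, keeping only bounded state instead of the raw comment history. Again, this is hypothetical Python rather than Cogynt’s implementation; the event fields, window size, and burst limit are assumptions.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 3600  # illustrative one-hour rolling window
BURST_LIMIT = 20       # illustrative posting-rate threshold

recent_posts = defaultdict(deque)  # account_id -> timestamps in the window

def process(event):
    """Evaluate one streaming event; state stays bounded by the window size."""
    ts, account = event["timestamp"], event["account_id"]
    window = recent_posts[account]
    window.append(ts)
    # Evict timestamps that fell out of the window; nothing else is stored.
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) > BURST_LIMIT:
        # Emit a lightweight insight for a downstream macro-pattern to consume.
        return {"pattern": "burst_posting", "account_id": account,
                "count": len(window)}
    return None

# Feed a stream of events; only the matches flow downstream, not the raw data.
stream = [{"timestamp": time.time(), "account_id": "acct_42"} for _ in range(25)]
findings = [f for f in map(process, stream) if f]
print(findings[-1])  # burst flagged once the 21st comment lands in the window
```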

In our research, we detected highly inconspicuous bot comments that human analysts would otherwise miss, and the bot signature details are irrefutable. AI swarms produce thousands of comments, exhibit identifiable patterns, and make mistakes. We can detect those patterns and mistakes, uncover the accounts behind them, and identify similar accounts that share the same traits. In our research project, we ran thousands of real-world comments through these models and identified AI bot comments and accounts that had evaded Google’s sophisticated filters. Thankfully, Cogynt can help counterbalance these aggressive AI bot swarm fraud schemes, which are growing more common and more sophisticated.
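One such mistake swarms make is reusing near-identical text across accounts. The following sketch, a hedged example rather than our production logic, shows how normalizing and fingerprinting comment text can cluster accounts that share a template; the normalization rules and sample data are invented for illustration.

```python
import hashlib
import re
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Collapse case, digits, and whitespace so light paraphrases collide."""
    normalized = re.sub(r"\d+", "#", text.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.sha1(normalized.encode()).hexdigest()

comments = [
    ("acct_1", "Thanks to Mr. Smith I earned $4,500 last week!"),
    ("acct_2", "thanks to mr. smith I earned $7,200 last week!"),
    ("acct_3", "Great video, very informative."),
]

accounts_by_print = defaultdict(set)
for account, text in comments:
    accounts_by_print[fingerprint(text)].add(account)

# Any fingerprint shared by multiple accounts flags a candidate swarm cluster.
clusters = {fp: accts for fp, accts in accounts_by_print.items() if len(accts) > 1}
print(clusters)  # one cluster: {'acct_1', 'acct_2'} share the scam template
```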

If you’re interested in how Cogynt and HCEP models can solve real-world problems, check out our demo presentation, or visit cogility.com for more information.