
What is continuous intelligence, and why should you care?

What is continuous intelligence (CI)? The technology research and consulting firm Gartner defines CI as “a design pattern in which real-time analytics are integrated into business operations, processing current and historical data to prescribe actions in response to business moments and other events.” In other words, CI is a way of making sense of past and present data to inform your organization’s most important decisions. It lets you process data even as it arrives (also known as data streaming) to draw clearer inferences and make more informed choices, whether applied to decision automation or decision support.

CI represents a new frontier of predictive analysis, in which high volumes of information lead to more reliable decision-making. As Sharmila Mulligan writes in Forbes, CI:

is not another phrase to describe real time, speed or throughput. It’s about frictionless cycle time to derive continuous business value from all data. It’s a modern machine-driven approach to analytics that allows you to quickly get to all of your data and accelerate the analysis you need, no matter how off the beaten track it is, no matter how many data sources there are or how vast the volumes.

There are numerous potential applications for CI in virtually any sector. For example, financial institutions could tap CI to flag suspicious transactions for fraud detection; shipping operations could use it in logistics analytics to identify bottlenecks and other impediments; manufacturers could use it to detect shortages in their supply chains; technicians could use its findings to guide machine learning models; and more.

The computational challenge

Although continuous intelligence carries many exciting possibilities, it also presents a computational challenge. From a computing perspective, CI is difficult – and far too time-consuming – to implement using conventional automation methods.

For a slightly oversimplified picture, consider the cron job, a common automation tool. A cron job periodically executes a task on a fixed schedule – for instance, backing up a server every two weeks. This works well for routine tasks involving data or databases that are more or less static.

But when it comes to dealing with large and continuously updating volumes of data, such as all the transactions a national bank completes in a day, something like cron cannot keep up. Each run would have to redo an overwhelming number of calculations, and between the storage costs and the inefficiency of repeatedly reprocessing everything, the approach would be far too slow to be practical. The intelligence would arrive too late to be optimally useful.
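To make the problem concrete, here is a hedged sketch in Python of what a cron-style batch job looks like. The "transactions.db" file, its schema, and the threshold are assumptions made purely for illustration; the point is that every run re-scans the entire history and recomputes its results from scratch, so the work grows with the total data rather than with what has changed.

```python
# Hypothetical cron-style batch job. The database file, schema, and threshold
# are illustrative assumptions, not a real system's design.
import sqlite3

def nightly_fraud_scan(db_path="transactions.db"):
    conn = sqlite3.connect(db_path)
    # Every run scans the full table, so cost grows with total history,
    # not with the number of new transactions since the last run.
    rows = conn.execute(
        "SELECT account, SUM(amount) AS total FROM transactions GROUP BY account"
    ).fetchall()
    conn.close()
    # Flag accounts whose cumulative volume crosses an arbitrary threshold.
    return {account: total for account, total in rows if total > 10_000}
```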

Solving the challenge with Apache Kafka

The solution to the computational problem is to take advantage of data streaming. Rather than recording data in a database and then running operations over the stored set, data streaming processes data incrementally as it's generated. This event stream processing saves significantly on storage and computing costs.
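As a contrast to the batch sketch above, here is an equally simplified incremental version: each event updates a small piece of running state the moment it arrives, so there is no growing table to re-scan. The event shape and threshold are assumptions for illustration, not any particular product's API.

```python
# Minimal incremental (streaming) sketch: per-event work is constant,
# regardless of how much history has already been processed.
from collections import defaultdict

running_totals = defaultdict(float)

def on_transaction(event):
    """Handle one event as it arrives, e.g. {"account": "1234", "amount": 250.0}."""
    running_totals[event["account"]] += event["amount"]
    if running_totals[event["account"]] > 10_000:
        print("flag for review:", event["account"])
```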

As of this writing, one of the most widely used streaming infrastructures available is Apache Kafka. This open-source distributed event streaming platform enjoys a high adoption rate and provides the backbone of many different projects.

Apache Kafka’s method involves writing, reading, storing, and processing streams of events, both as they occur and retrospectively. It’s an efficient and reliable way to make streaming possible – along with related applications like CI.
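For a feel of how that looks in practice, here is a minimal sketch using the third-party kafka-python client. The broker address, the "transactions" topic, and the event fields are all assumptions for illustration, not a prescription for any particular deployment.

```python
# Minimal Kafka produce/consume sketch using kafka-python (pip install kafka-python).
# Assumes a broker at localhost:9092 and a hypothetical "transactions" topic.
import json
from kafka import KafkaProducer, KafkaConsumer

# Write (produce) an event to the stream.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("transactions", {"account": "1234", "amount": 250.0})
producer.flush()

# Read (consume) events from the stream, as they occur or retrospectively
# (auto_offset_reset="earliest" replays the topic from the beginning).
consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    event = message.value
    if event["amount"] > 10_000:
        print("flag for review:", event)
```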

Platforms like Cogynt take advantage of Apache Kafka’s capabilities to provide powerful decisioning solutions. Cogynt combines Apache Kafka’s data streaming technology with its own complex event processing logic to empower decision making, enabling predictive analysis and other decision intelligence techniques. To learn more about Cogynt’s event-driven architecture and uses, consult this datasheet.
