Our previous blog post, Designing and Deploying Cisco AI Spoofing Detection, Part 1: From Device to Behavioral Model, introduced a hybrid cloud/on-premises service that detects impersonation attacks using behavioral traffic models of endpoints. In that post, we discussed the motivation for this service and its scope of operation, then provided an overview of our Machine Learning development and maintenance process. This post details the global architecture of Cisco AISD, its mode of operation, and how IT teams can integrate its results into their security workflows.
Because Cisco AISD is a security product, minimizing detection delay is critical. With that in mind, several infrastructure options have been designed into the service. Most Cisco AI Analytics services use Spark as a processing engine. However, for Cisco AISD, we use AWS Lambda functions instead of Spark because a Lambda function's warmup time is usually shorter, which allows faster generation of results and, therefore, a shorter detection delay. Although this design choice reduces the computing capacity available to the process, that is not a problem thanks to a custom-made caching strategy that limits each Lambda execution to processing only new data.
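As a rough sketch of what such a caching strategy can look like, the following Lambda handler keeps a watermark in S3 and processes only objects that arrived since the previous run. The bucket name, key layout, and helper names here are hypothetical illustrations, not Cisco's actual implementation:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "aisd-telemetry"                    # hypothetical bucket name
WATERMARK_KEY = "state/last_processed.json"  # hypothetical watermark location

def load_watermark():
    """Return the timestamp of the newest object already processed, or None on first run."""
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=WATERMARK_KEY)
        return json.load(obj["Body"])["last_modified"]
    except s3.exceptions.NoSuchKey:
        return None

def process(key):
    """Placeholder for the ETL + inference work done on one new object."""
    ...

def handler(event, context):
    """Lambda entry point: touch only data that arrived since the last run."""
    watermark = load_watermark()
    new_keys, newest = [], watermark
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix="raw/"):
        for obj in page.get("Contents", []):
            ts = obj["LastModified"].isoformat()
            if watermark is None or ts > watermark:
                new_keys.append(obj["Key"])
                newest = ts if newest is None or ts > newest else newest
    for key in new_keys:
        process(key)
    if new_keys:  # advance the watermark only after the new slice is handled
        s3.put_object(Bucket=BUCKET, Key=WATERMARK_KEY,
                      Body=json.dumps({"last_modified": newest}))
```

Skipping already-processed data this way keeps each short-lived execution small, which is what makes the reduced per-invocation compute capacity acceptable.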
Global AI Spoofing Detection Architecture Overview
Cisco AISD is deployed on a Cisco DNA Center network controller in a hybrid architecture: an on-premises controller tethered to a cloud service. The service therefore consists of on-premises processes as well as cloud-based components.
The components on the Cisco DNA Center controller perform several important functions. On the outbound data path, the service continuously receives and processes raw data obtained from network devices, anonymizes customer PII, and exports it to the cloud processes over a secure channel. On the inbound data path, it receives any new endpoint impersonation alerts generated by the Machine Learning algorithms in the cloud, de-anonymizes the associated customer PII, and triggers a Change of Authorization (CoA) through the Cisco Identity Services Engine (ISE) on the affected endpoints.
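To make the anonymization idea concrete, here is a minimal sketch of keyed pseudonymization of endpoint identifiers, assuming a per-deployment secret that never leaves the controller. This illustrates the general technique, not the product's actual scheme:

```python
import hmac
import hashlib

# Assumption: a secret kept on the on-prem controller; it is never exported.
SITE_SECRET = b"per-deployment-secret"

def anonymize_mac(mac: str) -> str:
    """Deterministically pseudonymize a MAC address with a keyed hash.

    The same endpoint always maps to the same token, so cloud-side models can
    track its behavior over time without ever seeing the raw identifier.
    """
    return hmac.new(SITE_SECRET, mac.lower().encode(), hashlib.sha256).hexdigest()[:16]

# The controller keeps a local token -> MAC mapping so that alerts coming
# back from the cloud can be de-anonymized on-prem before triggering CoA.
token_to_mac = {}

def export_record(mac: str, features: dict) -> dict:
    token = anonymize_mac(mac)
    token_to_mac[token] = mac  # local mapping used on the inbound path
    return {"endpoint": token, **features}
```

Because only the on-prem side holds the secret and the mapping, the cloud service operates entirely on pseudonymous tokens.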
Cloud components perform several key functions, primarily focused on processing the high volume of data flowing from all on-premises deployments and on running Machine Learning inference. Specifically, the inference and detection pipeline has three steps:
- Apache Airflow is the underlying orchestrator and scheduler that initiates computing functions. An Airflow DAG periodically puts compute requests for each active customer into a service queue.
- As each compute request is dequeued, a corresponding serverless compute function is invoked. Using serverless functions lets us control compute costs at scale. This highly efficient, multi-step, compute-intensive, short-running function performs an ETL step: it reads raw anonymized customer data from data buckets and transforms it into a set of input feature vectors to be used for inference by our spoof detection Machine Learning models. The compute function follows the common Function-as-a-Service (FaaS) architecture offered by cloud providers (see the sketch after this list).
- The same function also performs a model inference step on the feature vectors created in the previous step, detecting spoofing attempts if any are present. When a spoofing attempt is detected, the detection details are pushed into a database that is queried by Cisco DNA Center components and ultimately presented to administrators for action.
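As a rough illustration of the second and third steps, here is a minimal sketch of such a handler. The bucket names, feature fields, detection threshold, and scikit-learn-style model interface are all assumptions for illustration, not the service's actual code:

```python
import pickle
import boto3
import numpy as np

s3 = boto3.client("s3")
MODEL_BUCKET = "aisd-models"    # hypothetical locations
DATA_BUCKET = "aisd-telemetry"

def load_model(key="spoof-detector/latest.pkl"):
    """Fetch the current model artifact; replacing this object rolls out a new model."""
    body = s3.get_object(Bucket=MODEL_BUCKET, Key=key)["Body"].read()
    return pickle.loads(body)

def fetch_new_telemetry(request):
    """Placeholder: read only the not-yet-processed anonymized records for this batch."""
    return []

def store_alert(endpoint, score):
    """Placeholder: write to the alert database queried by Cisco DNA Center."""
    ...

def extract_features(records):
    """ETL step: turn anonymized telemetry records into fixed-width feature vectors."""
    return np.array([[r["pkt_rate"], r["flow_count"], r["port_entropy"]]
                     for r in records])

def handler(event, context):
    """Dequeued compute request -> ETL -> model inference -> persist alerts."""
    model = load_model()
    for request in event["Records"]:  # one message per customer batch
        records = fetch_new_telemetry(request)
        if not records:
            continue
        features = extract_features(records)
        scores = model.predict_proba(features)[:, 1]  # probability of spoofing
        for record, score in zip(records, scores):
            if score > 0.9:  # illustrative threshold
                store_alert(record["endpoint"], score)
```

Loading the model artifact at invocation time is what makes it possible to swap models in and out without redeploying the function itself.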
Figure 1 captures a high-level view of Cisco AISD components. Two components, in particular, are central to cloud inferencing functionality: the Scheduler and the serverless functions.
The Scheduler is an Airflow Directed Acyclic Graph (DAG) responsible for triggering serverless function executions on active Cisco AISD customer data. The DAG runs at high-frequency intervals, pushing events into a queue and triggering inference function executions. Each DAG run prepares all the metadata for the compute function: identifying customers with active flows, grouping compute batches based on telemetry volume, optimizing the compute process, and so on. The inferencing function then performs the ETL operations, model inference, detection, and storage of any spoofing alerts. This compute-intensive process implements most of the intelligence for spoof detection. Because our ML models are retrained regularly, this architecture enables the rapid rollout (or rollback, if necessary) of updated models without any service change or impact.
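A hypothetical skeleton of such a scheduler DAG is shown below; the DAG id, queue URL, five-minute interval, and batching helpers are illustrative stand-ins, not the production configuration:

```python
import json
from datetime import datetime, timedelta

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/aisd-compute"  # hypothetical

def find_active_customers():
    """Placeholder: look up customers with active telemetry flows."""
    return []

def plan_batches(customers):
    """Placeholder: group customers into compute batches by telemetry volume."""
    return []

def enqueue_compute_requests():
    """Queue one compute request per batch; each message later triggers one
    serverless inference execution when it is dequeued."""
    for batch in plan_batches(find_active_customers()):
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(batch))

with DAG(
    dag_id="aisd_inference_scheduler",       # illustrative DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval=timedelta(minutes=5),  # high-frequency trigger
    catchup=False,
) as dag:
    PythonOperator(task_id="enqueue_compute_requests",
                   python_callable=enqueue_compute_requests)
```

Decoupling the scheduler from the compute functions through a queue is also what allows the batching and the model inference to evolve independently.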
The inference function executions have a stable average runtime of about 9 seconds, as shown in Figure 2, which, in line with the design goals, does not introduce any significant delay in detecting impersonation attempts.

Cisco AI Spoofing Detection in Action
In this blog post series, we have described the principles and internal design of the Cisco AI Spoofing Detection service. From the perspective of a network operator, however, all of these internals are completely transparent. To start using the hybrid on-premises/cloud-based spoofing detection system, Cisco DNA Center admins need only enable the corresponding service and cloud data export in the Cisco DNA Center System Settings for AI Analytics, as shown in Figure 3.
Once enabled, the on-prem component begins exporting relevant data to the cloud environment hosting the spoof detection service. The cloud components automatically start scheduling model inference runs, evaluating the ML spoofing detection models against incoming traffic, and raising alerts whenever spoofing attempts against the customer's endpoints are detected. When the system detects spoofing, the Cisco DNA Center in the customer's network receives an alert with the relevant details. An example of such a detection is shown in Figure 4. In the Cisco DNA Center console, the network operator can set options to perform predefined prevention actions on endpoints marked as spoofed: close the port, flap the port, or re-authenticate the port.