The SDK allows you to easily build custom integrations for third-party tools that may not exist in the marketplace. Connectors are used to ingest data from data sources that usually, but not always, have some sort of alert queue. Cases, Alerts, and Events are created in the platform via the Connector’s interaction with the Chronicle SOAR application. A Connector will ingest base Event data, assign that data to an Alert, and then send the Alert to the Chronicle SOAR application and its Data Processing Pipeline.

This article assumes you have a good understanding of Python and object oriented programming.

Instructions

To build a connector, it’s necessary to:

  1. Build a Manager for the integration that contains the actual API logic for the third party tool.
  2. Build the Connector utilizing the IDE.

Creating the Manager

The first step in creating the Connector is to actually build the Manager that will contain all of the API logic for the technology you are trying to integrate with. In this tutorial we will build a Connector for Netskope. The Netskope integration already exists in Chronicle SOAR and the Manager is part of that integration. In this case, the Manager already has all of the logic we need to interact with the Netskope API.

Creating the Connector

In the IDE, create a new connector by following the instructions in Building a Custom Integration. The IDE will populate with a generic template that explains the basic requirements for a Connector. This is a great starting point and provides some useful details in the code comments.

Connector logic varies, but the basic steps can be broken down as follows:

  1. Retrieve a list of alerts/detections/alarms/offenses from the third-party tool. In this case, the Netskope Manager provides this capability with the get_alerts() method. Most tools with a queue of alerts have some way of querying alerts by time. Sending the query with the proper time fields ensures that alerts are not retrieved more than once and allows the Connector to operate in sequential order.
  2. Build an Alert by instantiating a variable to the AlertInfo (formerly called CaseInfo) class and ensuring that the mandatory properties are assigned valid values.
  3. Retrieve the Event data for each alert, flatten it to prevent issues with nested lists and dictionaries, and append it to the Alert as a Python list.
  4. Sort the alerts by time and retrieve the latest timestamp, saving it so it can be sent in the next query for alerts. If no alerts are found in this iteration of the Connector run, reuse the last saved timestamp.

Keep in mind that an Alert can have more than one Event. This is especially true for data sources such as SIEMs, which use correlation logic to bundle events into an Alarm (such as McAfee ESM) or an Offense (e.g. QRadar). However, other data sources such as EDR solutions don’t typically perform this type of correlation, so an Alert will only have one Event. The key takeaway here is to understand the data source that the Connector will be interacting with and allow for the possibility that further logic will be needed to retrieve the base Events of a retrieved Alert.

Imports and the SDK

Every connector will import the SiemplifyConnectorExecution class from SiemplifyConnectors. An object of this class will be instantiated, usually in the main() function of the Connector script. The Connector script will end when this object passes a list of Alerts to the Chronicle SOAR application using the object’s return_package() method.

Every connector will import the AlertInfo class from SiemplifyConnectorsDataModel. Instantiating an object of this class will actually create the Alert. In this case it’s renamed to SiemplifyAlertInfo to avoid confusion with Netskope alerts; this is completely optional.

SiemplifyUtils is a very useful module that contains some frequently used methods for handling logging, data formats, time, and a few other things. Always import output_handler and dict_to_flat. We’ll also import unix_now because it’s necessary for this Connector’s time logic.

Connectors will almost always import the integration Manager. Keep in mind that some integrations may have more than one Manager. A couple of standard libraries are also imported for this Connector.

In older Connector code, you may see CaseInfo imported from SiemplifyConnectors instead of AlertInfo. This is the same class with a deprecated naming convention.
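Putting that together, the import section of a Connector like this one might look like the following sketch (the module name NetskopeManager and the standard-library imports are assumptions for this tutorial):

```python
# Typical Connector imports; uuid and sys are standard-library modules
# used later in this tutorial.
import sys
import uuid

from SiemplifyConnectors import SiemplifyConnectorExecution
from SiemplifyConnectorsDataModel import AlertInfo as SiemplifyAlertInfo
from SiemplifyUtils import output_handler, dict_to_flat, unix_now

from NetskopeManager import NetskopeManager  # the integration Manager
```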

Constant Variables

It’s a good idea, although not mandatory, to declare a few constant variables for later use. More on these later.
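As a sketch, the constants used later in this tutorial might look something like this (the constant names and the Netskope severity keys are assumptions; the numeric values follow the priority scale described later in this article):

```python
# Hypothetical constants for this tutorial; adjust to your data source.
DEVICE_VENDOR = "Netskope"
DEVICE_PRODUCT = "Netskope"

# Maps the severity reported by Netskope to the numeric priority that
# Chronicle SOAR expects (see the Priority discussion below).
SEVERITY_MAP = {
    "informational": -1,
    "low": 40,
    "medium": 60,
    "high": 80,
    "critical": 100,
}
```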

Creating the Chronicle SOAR Alert

The Alert is created by instantiating an object of the AlertInfo class (as discussed previously, it’s renamed to SiemplifyAlertInfo() in this Connector). The alert_info object has several properties that must be set in order for the application to process the Alert correctly. Here the build_alert_info function receives a Netskope alert (the raw JSON received from the API) and the Siemplify object (to be discussed below) as inputs, parses the Netskope alert, and sets the alert_info properties to the relevant values. This function also utilizes the constants set earlier. All of these object properties will become part of the Alert properties within the Chronicle SOAR application.

Line 35 of the below screenshot is perhaps the most important line of code in the entire Connector. Utilizing the dict_to_flat method that was imported earlier, the alert is flattened and then appended to the events property, which is actually a list of the base events per alert. 

[Screenshot: the build_alert_info function]

Remember that an alert can have more than one event, so additional logic must be added if that is the case. Here there is only one event per alert, so this is sufficient. dict_to_flat is used to flatten the JSON because of the nested lists and dictionaries it contains. The raw keys and values are transformed into a flattened version with slightly modified key names.
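As a rough illustration of what flattening does (the exact key names that dict_to_flat produces may differ slightly between SDK versions), consider a nested alert like this:

```python
# Illustration only; the alert fields and the flattened key naming
# shown here are assumptions.
raw_alert = {
    "app": "Dropbox",
    "user": {"name": "jdoe", "department": "Finance"},
    "dlp_rules": ["SSN", "PCI"],
}

flat_alert = dict_to_flat(raw_alert)
# Conceptually, the result is a single-level dictionary, e.g.:
# {"app": "Dropbox", "user_name": "jdoe", "user_department": "Finance",
#  "dlp_rules_1": "SSN", "dlp_rules_2": "PCI"}
```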

Going over the logic below:

display_id is set to a random value generated by the uuid library that was imported. Netskope alerts do not have a UUID field that can be used for this purpose, but if one existed, it could be used.

ticket_id is simply set to display_id. ticket_id and display_id MUST BE UNIQUE in the system per alert; that’s why a random value is generated with uuid.

name is the Alert Name and will be displayed in the GUI.

rule_generator is a field for the Rule that creates the alert in the original system. This field is not always present in the raw data and can be set to anything, but it must be set to something.

start_time and end_time hold the timestamps from the alert. In this case, the Netskope timestamp is in epoch time and is multiplied by 1000 to convert it to millis time, which is what Chronicle SOAR expects. If the timestamp is in another format, conversion is necessary. See the SiemplifyUtils.py module for some helpful time conversion methods.

priority: the Chronicle SOAR application assigns a Priority to every Case based on the Alert priority in the case. The Chronicle SOAR API will map a numerical value to the displayed priority based on the following: {"Informative": -1, "Low": 40, "Medium": 60, "High": 80, "Critical": 100}, where the integer value is what’s passed in the Connector. So, for example, passing a value of 100 to the application will result in the Alert being prioritized as Critical in the GUI. In this Connector, there is an additional helper function that utilizes the SEVERITY_MAP constant to map the severity field in the original alert to a Chronicle SOAR priority. Unfortunately, the severity field is not consistent in the Netskope alerts and requires some additional logic to check multiple fields.
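A sketch of such a helper is shown below; the Netskope field names checked here are assumptions, since the exact fields vary between alert types:

```python
def get_priority(netskope_alert):
    # The severity field is not consistent across Netskope alert types,
    # so several candidate fields are checked. The field names below are
    # illustrative assumptions.
    severity = (
        netskope_alert.get("severity")
        or netskope_alert.get("severity_level")
        or netskope_alert.get("dlp_severity")
        or "medium"
    )
    # Fall back to Medium (60) if the value is not in the map.
    return SEVERITY_MAP.get(str(severity).lower(), 60)
```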

device_vendor and device_product are set to the constants defined earlier.

environment is extremely important if environments are defined in Chronicle SOAR. Here the property is set to a property of the siemplify object, which supplies the environment configured for the Connector in the Chronicle SOAR application.

Finally, in line 37 we are modifying the base event for this alert by adding an additional key to the event (which is really a Python dictionary at this point). product_name is set to “Netskope” because there is not a consistent field in the raw data to set a Product that can be utilized for mapping and modeling in the Ontology.
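Putting the walkthrough above together, a sketch of the build_alert_info function might look like the following. The Netskope field names and the environment attribute path are assumptions and may not match the original code line for line:

```python
def build_alert_info(netskope_alert, siemplify):
    # Build a Chronicle SOAR Alert from one raw Netskope alert (a dict).
    alert_info = SiemplifyAlertInfo()

    # ticket_id and display_id must be unique per alert; Netskope alerts
    # have no UUID field, so a random one is generated.
    alert_info.display_id = str(uuid.uuid4())
    alert_info.ticket_id = alert_info.display_id

    # Shown in the GUI as the Alert name and the triggering rule.
    alert_info.name = netskope_alert.get("alert_name", "Netskope Alert")
    alert_info.rule_generator = netskope_alert.get("alert_type", "Netskope Alert")

    # Netskope timestamps are epoch seconds; Chronicle SOAR expects millis.
    alert_info.start_time = netskope_alert.get("timestamp", 0) * 1000
    alert_info.end_time = alert_info.start_time

    alert_info.priority = get_priority(netskope_alert)
    alert_info.device_vendor = DEVICE_VENDOR
    alert_info.device_product = DEVICE_PRODUCT
    alert_info.environment = siemplify.context.connector_info.environment

    # Flatten the raw alert and attach it as the single base event, then
    # add a product_name key for mapping and modeling in the Ontology.
    event = dict_to_flat(netskope_alert)
    event["product_name"] = "Netskope"
    alert_info.events = [event]

    return alert_info
```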

Running the Connector

The main function is defined for the actual execution of the Connector logic. The output_handler decorator is used for debugging and won’t be covered in detail. The main function itself has an optional parameter, is_test_run, which is set to False by default. As the name suggests, it determines whether the Connector runs in production and actually ingests alerts or runs from the Connector Testing tab in the application. Two empty lists are created in lines 56 and 57; more on them later. In line 58, the siemplify object is instantiated from the SiemplifyConnectorExecution class. This object is used for the majority of the Connector execution. Line 61 retrieves the Connector whitelist, which isn’t used in this Connector and won’t be covered in detail.

In lines 67-69, variables are defined for the parameters in the Connector. In line 71, an object is instantiated from the NetskopeManager, passing two of the parameter variables to ensure successful authentication (the code doing this in the Manager is not shown here). In line 73, the timestamp is fetched from a file that is created on the filesystem when the Connector executes. Lines 74 and 75 perform some basic error handling for the first time the Connector runs, since Netskope will not accept a timestamp of 0. Because Netskope expects epoch time, not millis time (remember the conversion that was done earlier in the other direction), unix_now retrieves the current time in millis format and this value must be divided by 1000 for Netskope to recognize it. After the start time and end time are defined, they are passed to the get_alerts method from the Manager. Usually the end time is not a critical parameter to pass, but the Netskope API requires an end time if a start time is used for querying.

Dealing with time and timestamps is one of the hardest parts of writing a Connector. Different third-party systems return timestamps in various formats, and some do not support querying by the format they return. Understanding the underlying API is critical.

A list of Netskope alerts is retrieved in line 77 and only the last one is selected in lines 79-80 if the Connector is executing a test run.
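A sketch of this first part of main() is shown below. The parameter names and the fetch_timestamp helper reflect common SDK usage and are assumptions here; they may not match the original code line for line:

```python
@output_handler
def main(is_test_run=False):
    alerts = []      # Alerts that will be passed to Chronicle SOAR.
    all_alerts = []  # Every built Alert, including overflowed ones.
    siemplify = SiemplifyConnectorExecution()

    whitelist = siemplify.whitelist  # Not used by this Connector.

    # Connector parameters (the names here are assumptions).
    api_root = siemplify.extract_connector_param("Api Root")
    api_token = siemplify.extract_connector_param("Api Token")
    netskope_manager = NetskopeManager(api_root, api_token)

    # The last run timestamp is persisted to a file on the filesystem.
    last_timestamp = siemplify.fetch_timestamp()
    if last_timestamp == 0:
        # First run: Netskope rejects a start time of 0, so use "now".
        last_timestamp = unix_now()

    # unix_now() and the saved timestamp are in millis; Netskope expects
    # epoch seconds, so divide by 1000. An end time is required whenever
    # a start time is supplied.
    start_time = last_timestamp // 1000
    end_time = unix_now() // 1000
    netskope_alerts = netskope_manager.get_alerts(start_time, end_time)

    if is_test_run:
        # Only process the most recent alert during a test run.
        netskope_alerts = netskope_alerts[-1:]
```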

Overflow Logic

The Connector then iterates through the retrieved alerts and builds an Alert out of each one utilizing the build_alert_info function created earlier, appending each Alert to the all_alerts list defined earlier. The next piece of the Connector logic deals with Overflow; a very brief explanation of Overflow can be found here.
Essentially, Overflow is a threshold on the number of alerts allowed in a certain amount of time when alerts share the same environment, product, and rule generator. This is a built-in mechanism to avoid system performance degradation. It’s not mandatory, but it is a good practice. In lines 104-105, if the alert is not determined to be overflow, it’s appended to the empty alerts list defined earlier.
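Continuing the main() sketch, the iteration and overflow check might look like this; is_overflowed_alert and its keyword arguments follow common SDK usage and may differ in your version:

```python
    # Continuation of main() from the previous sketch.
    for netskope_alert in netskope_alerts:
        alert_info = build_alert_info(netskope_alert, siemplify)
        all_alerts.append(alert_info)

        # Overflow: too many alerts sharing the same environment, product
        # and rule generator within a time window are dropped to protect
        # system performance.
        is_overflow = siemplify.is_overflowed_alert(
            environment=alert_info.environment,
            alert_identifier=alert_info.ticket_id,
            alert_name=alert_info.rule_generator,
            product=alert_info.device_product,
        )
        if not is_overflow:
            alerts.append(alert_info)
```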

Ending the Connector Execution

If the Connector is not a test run, the timestamp must be updated to the last timestamp of retrieved alerts so that the Connector does not retrieve that data in its next iteration. Notice that the all_alerts list is sorted so that even overflowed alerts will contribute to sorting by timestamp. In line 126, the list of non-overflowed alerts is submitted to the application which will create the Alerts in the GUI. Lines 128-131 define whether or not the Connector is a test run. Don’t worry about the system arguments; these come from the application when the Run Connector Once button is pressed in the Testing tab.
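A sketch of how the execution might end is shown below; save_timestamp and the command-line handling follow common SDK conventions and are assumptions here:

```python
    # Continuation of main() from the previous sketches.
    if not is_test_run and all_alerts:
        # Sort every built Alert (including overflowed ones) by time and
        # persist the newest timestamp for the next run.
        all_alerts.sort(key=lambda alert: alert.end_time)
        siemplify.save_timestamp(new_timestamp=all_alerts[-1].end_time)

    # Hand the non-overflowed Alerts to Chronicle SOAR for case creation.
    siemplify.return_package(alerts)


if __name__ == "__main__":
    # The application passes an argument when "Run Connector Once" is
    # pressed in the Testing tab; otherwise this is a production run.
    is_test = not (len(sys.argv) < 2 or sys.argv[1] == "False")
    main(is_test_run=is_test)
```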