Installation¶
This guide covers the installation and configuration of the Chronicle App for DomainTools, including Google Cloud Platform (GCP) resource setup, cloud function deployment, and Chronicle rule configuration.
Prerequisites¶
- Chronicle console and Chronicle service account
- DomainTools credentials (API username, API key, DNSDB API key)
- GCP project with the below required permissions:
  - The GCP user and the project service account should have Owner permissions
- GCP services:
  - Memorystore for Redis
  - Cloud Functions (4-core CPU or higher is recommended for the cloud function configuration)
  - Google Cloud Storage (GCS) bucket
  - Secret Manager
  - Cloud Scheduler
  - Serverless Virtual Private Cloud (VPC) access
- Looker instance
Creating a zip file of the cloud function¶
The cloud function requires files from the Chronicle ingestion scripts repository.
- Access the Chronicle ingestion scripts repository: https://github.com/chronicle/ingestion-scripts/tree/main
- In this repository, locate the following directories:
  - `domaintools` directory (contains the ingestion script)
  - `common` directory (contains shared utilities)
- Download or clone the repository to obtain these directories
- Create a zip file containing both the `domaintools` and `common` directories
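The steps above can be performed from a terminal; a minimal sketch (the output filename `function.zip` is an arbitrary choice, not a requirement):

```shell
# Clone the Chronicle ingestion scripts repository.
git clone https://github.com/chronicle/ingestion-scripts.git
cd ingestion-scripts

# Zip the domaintools and common directories together into one archive.
zip -r function.zip domaintools common
```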
Cloud function deployment¶
There are two ways to create the required GCP resources and deploy the cloud function:
- Manual deployment
- Command-based (automated) deployment
Manual deployment of the required resources¶
1. Add the secret in secret manager¶
- Log in to the Google Cloud Console
- Select the project created for DomainTools from the upper left dropdown.
- Navigate to `Secret Manager` and select `Create Secret`.
- Provide a name for the secret in the `Name` field.
- Upload a file if the secret value is stored in a file, or provide the secret value directly in the `Secret value` field.
- Click the `Create Secret` button.
Add secret values for the Chronicle service account JSON, DomainTools API username, and DomainTools API key. A separate secret is required for each secret value. If you want to fetch subdomains, also create a secret for the DNSDB API key in Secret Manager.
After you create the secrets, provide the resource name of each secret as the value of the corresponding environment variable, e.g., projects/PROJECT_ID/secrets/SECRET_NAME/versions/VERSION.
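The secrets can also be created from Cloud Shell. A sketch follows; the secret names and filenames here are illustrative examples, not required values:

```shell
# Create a secret from a file (e.g., the Chronicle service account JSON).
gcloud secrets create chronicle-service-account \
  --data-file=service_account.json

# Create a secret from a literal value (e.g., the DomainTools API key).
printf 'YOUR_API_KEY' | gcloud secrets create domaintools-api-key --data-file=-

# Print the full resource name of the latest secret version, for use as
# the value of the corresponding cloud function environment variable.
gcloud secrets versions describe latest \
  --secret=domaintools-api-key --format='value(name)'
```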
2. Create a GCP bucket¶
- Navigate to `Buckets` in GCP, select the `Create` button, and enter the bucket name.
- Select the region, modify the optional parameters if required, and then click the `Create` button.
- Open the created GCP bucket and select the `Upload files` button.
- (Optional) Upload a `txt` file containing comma-separated values of the Chronicle log types. The script will fetch events from the specified log types only. If you don't provide this file, the script considers all log types.
- (Optional) Upload a `json` file for the checkpoint. The script will use the timestamp specified in the checkpoint file as the start time for fetching logs. The structure of the checkpoint file should be:
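The original checkpoint example is not reproduced here; the exact key name is defined by the ingestion script, so treat the following as a hypothetical illustration of a checkpoint file holding a single start timestamp:

```json
{
  "time": "2024-01-01T00:00:00Z"
}
```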
If not provided, the script will create a checkpoint file named "checkpoint.json" in the bucket when the script is first executed.
3. Create serverless VPC access¶
- Navigate to `Serverless VPC access` and select `Create Connector`.
- Enter a connector name, select the region, select `default` as the `Network`, and select `Custom IP range` as the `Subnet`.
- Enter any unique IP in the IP range box (for example, `10.0.0.0`) and select `Create`.
4. Create Redis instance¶
- Navigate to Redis and select `Create instance`.
- Enter a unique instance ID and display name.
- Select `Standard` in the `Tier Selection` and `4 GB` as the capacity (recommended; adjust to your requirements).
- Select your region in the `Region` field.
- Select `No read replicas` for `Read Replicas`.
- Select `default` in the `Network` dropdown of the Set up connection section.
- Optionally select a maintenance schedule.
- Select `Create instance`.
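The same instance can likely be created from Cloud Shell instead of the console; a sketch under the settings recommended above (`INSTANCE_ID` and `REGION` are placeholders):

```shell
# Create a 4 GB Standard-tier Redis instance on the default network.
gcloud redis instances create INSTANCE_ID \
  --size=4 \
  --region=REGION \
  --tier=standard \
  --network=default
```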
5. Create cloud function¶
- Navigate to the Cloud Functions page and select `Create function`.
- Select `2nd gen` in the `Environment` dropdown.
- Enter a unique function name and select your region from the region dropdown.
- Keep `Require authentication` selected in the `Trigger` section.
- Expand the `Runtime, build, connections and security settings` section.
- Select the below options in the `RUNTIME` tab:
  - `Memory allocated`: 8 GiB (recommended)
  - `CPU (preview)`: 4 (recommended)
  - `Timeout`: 3600 seconds
  - `Concurrency`: 1
  - `Service account`: select your DomainTools GCP project service account
- Select `Add variable` and add the environment variables listed in the next subsection.
- Select `CONNECTIONS` and select the below options:
  - Keep `Allow all traffic` in the Ingress settings.
  - Select the created `Serverless VPC Access` connector in the Network dropdown.
  - Select `Route only requests to private IPs through the VPC connector`.
- Select `Next`.
- Select `Python 3.11` in the `Runtime` dropdown.
- Select `ZIP Upload` from the Source code dropdown.
- Keep the entry point as `main`.
- Select the created bucket in the `Destination bucket` dropdown.
- Browse and select the downloaded application zip file.
- Select `Deploy`. After a few minutes, the cloud function will be deployed successfully.
Environment variables¶
| Environment variable | Description | Default value | Required | Secret |
|---|---|---|---|---|
| GCP_BUCKET_NAME | Name of the created GCP bucket. | - | Yes | No |
| REDIS_HOST | IP of the created Redis memory store. | - | Yes | No |
| REDIS_PORT | Port of the created Redis memory store. | - | Yes | No |
| CHRONICLE_CUSTOMER_ID | Chronicle customer id. Navigate to settings in the Chronicle console for the customer id. | - | Yes | No |
| CHRONICLE_SERVICE_ACCOUNT | Copied resource name value of service account secret from the secret manager. | - | Yes | Yes |
| CHRONICLE_REGION | A region where the Chronicle instance is located. | us | No | No |
| DOMAINTOOLS_API_USERNAME | Copied resource name value of DomainTools API username secret from the secret manager. | - | Yes | Yes |
| DOMAINTOOLS_API_KEY | Copied resource name value of DomainTools API key secret from the secret manager. | - | Yes | Yes |
| DNSDB_API_KEY | Copied resource name value of DNSDB API key secret from the secret manager. | - | No | Yes |
| FETCH_SUBDOMAINS_FOR_MAX_DOMAINS | Fetch subdomains for the maximum number of domains. | 2000 (max) | No | No |
| LOG_FETCH_DURATION | Time duration, in seconds, for which to fetch events from Chronicle. Provide an integer value. For example, to fetch logs every 5 minutes, specify 300. | - | Yes | No |
| CHECKPOINT_FILE_PATH | Path of the checkpoint file, if provided in the bucket. If provided, events from the specified time will be fetched from Chronicle. If the file is directly in the bucket, provide only the filename; if it is inside a folder, provide the path as folderName/fileName. | - | No | No |
| FETCH_URL_EVENTS | Flag to fetch URL-aware events from Chronicle. Accepted values: [true, false] | false | No | No |
| LOG_TYPE_FILE_PATH | Path of the log type file, if provided in the bucket. If provided, events from those log types will be fetched from Chronicle; otherwise, all log types will be considered. Provide comma-separated ingestion label values in the file. If the file is directly in the bucket, provide only the filename; if it is inside a folder, provide the path as folderName/fileName. Refer to this page for the supported log types. | - | No | No |
| PROVISIONAL_TTL | Time To Live (TTL) value used when the domain has an Evidence key with the value provisional in the API response. Provide an integer value. If provided, that value will be used; otherwise the default of 1 day applies. | 1 day | No | No |
| NON_PROVISIONAL_TTL | TTL (Time To Live) value for all other domains. Provide an integer value. If provided, that value will be used; otherwise the default of 30 days applies. | 30 days | No | No |
| ALLOW_LIST | Name of the allow list reference list created in the Chronicle. | - | No | No |
| MONITORING_LIST | Name of the monitoring list reference list created in the Chronicle. | - | No | No |
| MONITORING_TAGS | Name of the monitoring tags reference list created in the Chronicle. | - | No | No |
| BULK_ENRICHMENT | Name of the bulk enrichment reference list created in the Chronicle. | - | No | No |
6. Create cloud scheduler¶
- Navigate to the Cloud Scheduler page.
- Select `Create job`.
- Enter a unique name for the scheduler and select your region in `Region`.
- Enter the unix-cron format for `Frequency` and select the timezone.
- Select `Continue`.
- Select `HTTP` as the `Target type`.
- Paste the URL of the cloud function in the `URL` field.
- Keep `POST` as the `HTTP method`.
- In the `Auth header`, select `Add OIDC token`.
- In the `Service account` field, select your DomainTools GCP project service account.
- Select `Continue`.
- Enter `30m` in the `Attempt deadline` config.
- Select `Create`.
The Cloud function will be executed as per the frequency provided in the Cloud Scheduler.
Command-based (automated) deployment of the required resources¶
1. Create Redis and bucket¶
- Log in to the Google Cloud Console and select the project created for DomainTools from the upper left dropdown.
- Select `Activate Cloud Shell`.
- Select the `Open Editor` button after Cloud Shell opens successfully.
- Create a new file and add the below code to the file. The file type should be `jinja` (for example, `resource.jinja`).
resources:
- name: {{ properties["name"] }}
type: gcp-types/redis-v1:projects.locations.instances
properties:
parent: projects/{{ env["project"] }}/locations/{{ properties["region"] }}
instanceId: {{ properties["name"] }}
authorizedNetwork: projects/{{ env["project"] }}/global/networks/default
memorySizeGb: {{ properties["memory"] }}
tier: STANDARD_HA
{% if properties["displayName"] %}
displayName: {{ properties["displayName"] }}
{% endif %}
- Create another file and add the below code to the file. The file type should be `yaml` (for example, `config.yaml`).
imports:
- path: RESOURCE_FILE_NAME
resources:
- name: BUCKET_NAME
type: storage.v1.bucket
properties:
location: LOCATION
- name: REDIS_INSTANCE_NAME
type: RESOURCE_FILE_NAME
properties:
name: REDIS_INSTANCE_NAME
region: REGION
memory: 4
displayName: redis_display_name
- `RESOURCE_FILE_NAME`: Name of the created resource file (for example, resource.jinja).
- `REDIS_INSTANCE_NAME`: Unique name of the Redis instance.
- `BUCKET_NAME`: Unique name of the bucket.
- `LOCATION`: A region for your bucket. For multi-region in the United States, specify US. Refer to this page for bucket locations.
- `REGION`: A region for your Redis instance. Values can be us-central1, us-west1, etc.
- Select `Open Terminal` and enter the below commands:
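The original command is not reproduced here; a Deployment Manager create command of the standard form should apply (placeholders as described below):

```shell
# Create a Deployment Manager deployment from the config file,
# which provisions the bucket and Redis instance defined above.
gcloud deployment-manager deployments create NAME_OF_DEPLOY \
  --config NAME_OF_CONFIG_FILE
```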
- `NAME_OF_DEPLOY`: Unique name of the deployment manager.
- `NAME_OF_CONFIG_FILE`: Name of the created config file (for example, config.yaml).
If deployment is unsuccessful, delete the deployment manager instance and create it again. To delete the deployment manager, use the following:
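A delete command of the standard form should work here (`NAME_OF_DEPLOY` as above):

```shell
# Delete the failed deployment so it can be recreated.
gcloud deployment-manager deployments delete NAME_OF_DEPLOY
```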
2. Create a serverless VPC access¶
Enter the below command in the terminal after the deployment manager is created successfully.
gcloud compute networks vpc-access connectors create VPC_NAME --network default --region REGION --range IP_RANGE
- `VPC_NAME`: Unique name of the VPC connector.
- `REGION`: A region for your connector. Values can be us-central1, us-west1, etc.
- `IP_RANGE`: An unreserved internal IP network; a /28 of unallocated space is required. The value supplied is the network in Classless Inter-Domain Routing (CIDR) notation (10.0.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network.
3. Create a cloud function¶
- Navigate to the bucket and open the bucket created for the DomainTools. Upload the cloud function zip file in the bucket.
- Enter the below terminal commands after the VPC network is created successfully.
gcloud functions deploy CLOUD_FUNCTION_NAME --set-env-vars ENV_NAME1=ENV_VALUE1,ENV_NAME2=ENV_VALUE2,ENV_NAME3= --gen2 --runtime=python311 --region=REGION --source=SOURCE_OF_FUNCTION --entry-point=main --service-account=SERVICE_ACCOUNT_EMAIL --trigger-http --no-allow-unauthenticated --memory=8GiB --vpc-connector=VPC_NAME --egress-settings=private-ranges-only --timeout=3600s
- `CLOUD_FUNCTION_NAME`: Unique name of the cloud function.
- `REGION`: A region for your cloud function. Values can be us-central1, us-west1, etc.
- `SOURCE_OF_FUNCTION`: gsutil Uniform Resource Identifier (URI) of the cloud function zip in cloud storage (for example, gs://domaintools/function.zip, where domaintools is the name of the created bucket and function.zip is the cloud function zip file).
- `SERVICE_ACCOUNT_EMAIL`: Email of the created service account of the project.
- `VPC_NAME`: Name of the created VPC connector.
- `ENV_NAME1=ENV_VALUE1`: Name and value of an environment variable to be created. For optional environment variables, provide `ENV_NAME=`. See the Environment variables table above.
Provide all required environment variables while creating the cloud function. Optional environment variables can also be added after deployment by editing the cloud function.
4. Create a cloud scheduler¶
- Enter the below terminal command after the cloud function is created successfully.
gcloud scheduler jobs create http SCHEDULER_NAME --schedule="CRON_TIME" --uri="CLOUD_FUNCTION_URL" --attempt-deadline=30m --oidc-service-account-email=SERVICE_ACCOUNT_EMAIL --location=LOCATION --time-zone=TIME_ZONE
- `SCHEDULER_NAME`: Unique name of the cloud scheduler.
- `TIME_ZONE`: The time zone of your region.
- `CRON_TIME`: Cron time format for the scheduler interval (for example, */10 * * * *).
- `CLOUD_FUNCTION_URL`: URL of the created cloud function. To find the URL, navigate to the Cloud Functions page and open the cloud function created for DomainTools.
- `SERVICE_ACCOUNT_EMAIL`: Email of the created service account of the project.
- `LOCATION`: A region for your scheduler. Values can be us-central1, us-west1, etc.
The overall flow of the cloud function¶
- After successful deployment of the required GCP resources, the cloud function will fetch domain/URL-aware (domain/URL present) events from Chronicle at the frequency set in the Cloud Scheduler.
- Domains will be extracted from the Chronicle events and enriched from DomainTools.
- The first 10 unique subdomains of the enriched domains will be fetched from DNSDB if the DNSDB API key is provided in the environment variable.
- If the allow list is provided in the environment variable, the domains in the allow list will be excluded from enrichment.
- The enriched domains will be stored in the Redis memory store with a TTL (Time To Live). When the TTL expires, the domain is removed from the Redis memory store.
- An enriched domain event will be ingested and parsed in the Chronicle.
Create lists in Chronicle¶
- Open the Chronicle Console and select `Search` in the sidebar panel.
- Select the `Lists` option; the List Manager section will open. Select `Create`.
- Specify the list name (`TITLE`), description, and content (`ROWS`). Specify the content with one item on each line.
- Create reference lists for the allow list, monitoring list, monitoring tags, and bulk enrichment ad-hoc script execution. The name of each list must be specified in the environment variable corresponding to its list type.
Create rules in Chronicle to generate detections¶
- Open the Chronicle console.
- Select the `Rules & Detections` option in the sidebar panel.
- Select the `Rules Editor` in the navigation bar.
- Select the `NEW` button to create a rule. Create rules for the High Risk Domain, Medium Risk Domain, Young Domain, Monitoring Domain, and Monitoring Tag Domain with the names shown in the following subsections.
- Add the below code to the corresponding rules. Consult the Google Chronicle documentation for information on YARA-L parameters.
high_risk_domain_observed¶
rule high_risk_domain_observed {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High".
author = "" // enter your author name
description = "Generate alert when a high risk domain is observed in network"
severity = "" // enter severity, e.g. High
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.security_result[0].risk_score > 90 // replace 90 with your threshold risk score
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
medium_risk_domain_observed¶
rule medium_risk_domain_observed {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
author = "" // enter your author name
description = "Generate alert when a Medium risk domain is observed in network"
severity = "" // enter severity e.g Medium
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.security_result[0].risk_score > 80 and $e.security_result[0].risk_score < 89 // specify your range for medium risk score
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
young_domain¶
rule young_domain {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
author = "" // add author name
description = "Alert when a young domain is detected"
severity = "" // add severity e.g Medium
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.principal.domain.first_seen_time.seconds > (timestamp.current_seconds() - 86400) // replace 86400 with your time in seconds
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
monitoring_list_domain¶
rule monitoring_list_domain {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High".
author = "" // enter author name
description = "Detect domain from the monitoring list"
severity = "" // enter severity e.g Medium
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.principal.hostname IN %domain_ref // replace domain_ref with your domain reference list
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
monitoring_tags_domain_observed¶
rule monitoring_tags_domain_observed {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
author = "" // add author name
description = "Domain with specified DomainTools tags"
severity = "" // add severity
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.about.file.tags IN %domaintool_tags_list // replace domaintool_tags_list with your reference list name
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
Ad-hoc script execution¶
Allowlist Management¶
- The domains provided in the allow list will be excluded from enrichment in the scheduled cloud function.
- Create and manage the list in Chronicle.
- Provide the name of the list in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the {"allow_list": "true"} parameter in the Configure triggering event, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
- The dummy events of the allow list domains will be ingested in Chronicle.
- When the list is updated, execute the ad-hoc script again.
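As an alternative to the console Testing tab, the ad-hoc trigger can likely be invoked from the terminal; a sketch using the payload from this section (`CLOUD_FUNCTION_NAME` and `REGION` are placeholders for your deployment):

```shell
# Invoke the deployed 2nd-gen cloud function with the allow-list trigger payload.
gcloud functions call CLOUD_FUNCTION_NAME \
  --gen2 \
  --region=REGION \
  --data='{"allow_list": "true"}'
```

The other ad-hoc scripts below accept the same pattern with their respective payload keys.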
Monitoring List Management¶
- The domains provided in the monitoring list will be enriched from DomainTools, and a detection will be created in Chronicle if a monitoring list domain is observed in the user's network.
- Create and manage the list in Chronicle.
- Provide the name of the list in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the {"monitoring_list": "true"} parameter in the Configure triggering event, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
- The enriched domain event with additional monitoring fields for the monitoring list domains will be ingested in Chronicle.
- When the list is updated, execute the ad-hoc script again.
Monitoring Tags¶
- A detection will be created in Chronicle if tags provided in the monitoring tags list are present in the enriched domain event.
- Create and manage the list in Chronicle.
- Provide the name of the list in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the {"monitoring_tags": "true"} parameter in the Configure triggering event, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
- The dummy events of the monitoring tags will be ingested in Chronicle.
- When the list is updated, execute the ad-hoc script again.
Single or Bulk Enrichment¶
- The domains provided in the bulk enrichment list will be enriched from DomainTools with on-demand requests.
- Create and manage the list in Chronicle.
- Provide the name of the list in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the {"bulk_enrichment": "true"} parameter in the Configure triggering event, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
- The enriched domain event for the bulk enrichment domains will be ingested in Chronicle.
- When the list is updated, execute the ad-hoc script again.
View Parsed Logs in Chronicle Console¶
- Navigate to the Chronicle console.
- Type `.*` in the search field and click search.
- Click raw log search.
- Select `Run Query as Regex`.
- Set the time interval in which the logs were ingested.
- Select the log source as "DomainTools Threat Intelligence" and click search.
- Open any particular log to see the raw log and mapped fields.