
Installation

This guide covers the installation and configuration of the Chronicle App for DomainTools, including Google Cloud Platform (GCP) resource setup, cloud function deployment, and Chronicle rule configuration.

Prerequisites

  • Chronicle console and Chronicle service account
  • DomainTools credentials (API username, API key, DNSDB API key)
  • GCP project with the required permissions:
      • The GCP user and the project's service account should have Owner permissions.
  • GCP services:
      • Memorystore for Redis
      • Cloud Functions (a 4-core CPU or higher is recommended for the cloud function configuration)
      • Google Cloud Storage (GCS) bucket
      • Secret Manager
      • Cloud Scheduler
      • Serverless Virtual Private Cloud (VPC) access
  • Looker instance

Creating the zip file for the cloud function

The cloud function requires files from the Chronicle ingestion scripts repository.

  1. Access the Chronicle ingestion scripts repository: https://github.com/chronicle/ingestion-scripts/tree/main
  2. In this repository, locate the following directories:
      • domaintools directory (contains the ingestion script)
      • common directory (contains shared utilities)
  3. Download or clone the repository to obtain these directories.
  4. Create a zip file containing both the domaintools and common directories.
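The steps above can also be scripted. The following is a minimal Python sketch that bundles the two directories into one zip while preserving their layout; the `zip_function_dirs` helper and the `function.zip` name are illustrative, not part of the repository:

```python
import zipfile
from pathlib import Path

def zip_function_dirs(dirs, out_path="function.zip"):
    """Bundle the given directories into one zip, preserving their layout."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for d in dirs:
            for f in sorted(Path(d).rglob("*")):
                if f.is_file():
                    zf.write(f, f.as_posix())  # store the relative path inside the zip
    return out_path

# Example: zip_function_dirs(["domaintools", "common"])
```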

Cloud function deployment

There are two ways to create the required GCP resources and deploy the cloud function:

  1. Manual deployment
  2. Command-based (automated) deployment

Manual deployment of the required resources

1. Add secrets in Secret Manager

  1. Log in to the Google Cloud Console
  2. Select the project created for DomainTools from the upper left dropdown.
  3. Navigate to Secret Manager and select Create Secret.
  4. Provide the name for the secret in the Name field.
  5. Upload a file if the secret value is stored in a file, or provide the secret value directly in the Secret value field.
  6. Click the Create Secret button.

Add secret values for the Chronicle service account JSON, DomainTools API username, and DomainTools API key. A separate secret is required for each value. If you want to fetch subdomains, a secret for the DNSDB API key must also be created in Secret Manager.

After you create the secrets, provide the resource name of the secret as the value in the environment variable, e.g.,

CHRONICLE_SERVICE_ACCOUNT: projects/{project_id}/secrets/{secret_id}/versions/{version_id}
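A small helper (hypothetical, shown only for illustration) can build these resource names consistently for all four secrets:

```python
def secret_resource_name(project_id: str, secret_id: str,
                         version: str = "latest") -> str:
    """Build the Secret Manager resource name used in the environment variables."""
    return f"projects/{project_id}/secrets/{secret_id}/versions/{version}"

# Example: secret_resource_name("my-project", "chronicle-sa", "1")
```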

2. Create a GCP bucket

  1. Navigate to Buckets in GCP, select the Create button, and enter the bucket name.
  2. Select the region and modify the optional parameters if required and then click the Create button.
  3. Open the created GCP bucket and select the upload files button.
  4. (Optional) Upload a txt file containing comma-separated values of the Chronicle log types. The script will fetch events only from the specified log types. If this file is not provided, the script considers all log types.
  5. (Optional) Upload a json file for the checkpoint. The script will use the timestamp specified in the checkpoint file as the start time for fetching logs. The structure of the checkpoint file should be:
   {
    "time": "2023-09-05 13:38:00"
   }

If not provided, the script will create a checkpoint file named "checkpoint.json" in the bucket when it is first executed.
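As a sketch, a checkpoint file with the structure above can be written and read with the standard library. The helper names are illustrative; the `%Y-%m-%d %H:%M:%S` timestamp format is assumed to match the example shown above:

```python
import json
from datetime import datetime

# Timestamp format matching the checkpoint example above.
CHECKPOINT_TIME_FORMAT = "%Y-%m-%d %H:%M:%S"

def write_checkpoint(path: str, ts: datetime) -> None:
    """Write a checkpoint file with the structure {"time": "..."}."""
    with open(path, "w") as fh:
        json.dump({"time": ts.strftime(CHECKPOINT_TIME_FORMAT)}, fh)

def read_checkpoint(path: str) -> datetime:
    """Read the start time back from a checkpoint file."""
    with open(path) as fh:
        return datetime.strptime(json.load(fh)["time"], CHECKPOINT_TIME_FORMAT)
```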

3. Create serverless VPC access

  1. Navigate to Serverless VPC Access and select Create Connector.
  2. Enter a connector name, select the region, select default for Network, and select Custom IP range for Subnet.
  3. Enter an unused internal IP in the IP range box (for example, 10.0.0.0) and select Create.

4. Create Redis instance

  1. Navigate to Memorystore for Redis and select Create instance.
  2. Enter a unique instance ID and display name.
  3. Select Standard in Tier Selection and set the capacity to 4 GB (recommended); it can be adjusted per user requirements.
  4. Select your region in the Region field.
  5. Select No read replicas for Read Replicas.
  6. Select default in the Network dropdown of the Set up connection section.
  7. Optionally select a maintenance schedule.
  8. Select Create instance.

5. Create cloud function

  1. Navigate to the Cloud Functions page and select Create function.
  2. Select 2nd gen in the Environment dropdown.
  3. Enter a unique function name and select your region from the region dropdown.
  4. Keep Require authentication selected in the Trigger section.
  5. Expand Runtime, build, connections and security settings.
  6. Select the below options under RUNTIME:
      • Memory allocated - 8 GiB (recommended)
      • CPU (preview) - 4 (recommended)
      • Timeout - 3600 seconds
      • Concurrency - 1
      • Service account - select your DomainTools GCP project service account
      • Select Add variable and add the environment variables listed below.
  7. Select CONNECTIONS and select the below options:
      • Keep Allow all traffic in the Ingress settings.
      • Select the created Serverless VPC Access connector in the Network dropdown.
      • Select Route only requests to private IPs through the VPC connector.
  8. Select Next.
  9. Select Python 3.11 in the Runtime dropdown.
  10. Select ZIP Upload from the Source code dropdown.
  11. Keep the entry point as main.
  12. Select the created bucket in the Destination bucket dropdown.
  13. Browse and select the cloud function zip file.
  14. Select Deploy. After a few minutes, the cloud function will be deployed successfully.
Environment variables

| Environment variable | Description | Default value | Required | Secret |
|---|---|---|---|---|
| GCP_BUCKET_NAME | Name of the created GCP bucket. | - | Yes | No |
| REDIS_HOST | IP of the created Redis Memorystore instance. | - | Yes | No |
| REDIS_PORT | Port of the created Redis Memorystore instance. | - | Yes | No |
| CHRONICLE_CUSTOMER_ID | Chronicle customer ID. Navigate to Settings in the Chronicle console for the customer ID. | - | Yes | No |
| CHRONICLE_SERVICE_ACCOUNT | Resource name of the service account secret copied from Secret Manager. | - | Yes | Yes |
| CHRONICLE_REGION | Region where the Chronicle instance is located. | us | No | No |
| DOMAINTOOLS_API_USERNAME | Resource name of the DomainTools API username secret copied from Secret Manager. | - | Yes | Yes |
| DOMAINTOOLS_API_KEY | Resource name of the DomainTools API key secret copied from Secret Manager. | - | Yes | Yes |
| DNSDB_API_KEY | Resource name of the DNSDB API key secret copied from Secret Manager. | - | No | Yes |
| FETCH_SUBDOMAINS_FOR_MAX_DOMAINS | Maximum number of domains for which to fetch subdomains. | 2000 (max) | No | No |
| LOG_FETCH_DURATION | Time duration in seconds for fetching events from Chronicle. Provide an integer value. For example, to fetch the logs of every 5 minutes, specify 300. | - | Yes | No |
| CHECKPOINT_FILE_PATH | Path of the checkpoint file in the bucket, if provided. If provided, events from the specified time will be fetched from Chronicle. If the file is directly in the bucket, provide only the filename; if it is inside a folder, provide the path as folderName/fileName. | - | No | No |
| FETCH_URL_EVENTS | Flag to fetch URL-aware events from Chronicle. Accepted values: [true, false]. | false | No | No |
| LOG_TYPE_FILE_PATH | Path of the log type file in the bucket, if provided. If provided, events from those log types will be fetched from Chronicle; otherwise, all log types are considered. Provide comma-separated ingestion label values in the file. If the file is directly in the bucket, provide only the filename; if it is inside a folder, provide the path as folderName/fileName. Refer to the Supported log types page. | - | No | No |
| PROVISIONAL_TTL | Time To Live (TTL) value used when the domain's API response contains an Evidence key with the value provisional. Provide an integer value. If not provided, the default of 1 day is used. | 1 day | No | No |
| NON_PROVISIONAL_TTL | TTL (time to live) value for all other domains. Provide an integer value. If not provided, the default of 30 days is used. | 30 days | No | No |
| ALLOW_LIST | Name of the allow list reference list created in Chronicle. | - | No | No |
| MONITORING_LIST | Name of the monitoring list reference list created in Chronicle. | - | No | No |
| MONITORING_TAGS | Name of the monitoring tags reference list created in Chronicle. | - | No | No |
| BULK_ENRICHMENT | Name of the bulk enrichment reference list created in Chronicle. | - | No | No |
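Before deploying, you can sanity-check that every required variable from the table is set. This standalone sketch (the `missing_required` helper is illustrative, not part of the ingestion script) treats empty values as missing:

```python
import os

# Required variables per the Environment variables table.
REQUIRED_VARS = [
    "GCP_BUCKET_NAME", "REDIS_HOST", "REDIS_PORT",
    "CHRONICLE_CUSTOMER_ID", "CHRONICLE_SERVICE_ACCOUNT",
    "DOMAINTOOLS_API_USERNAME", "DOMAINTOOLS_API_KEY",
    "LOG_FETCH_DURATION",
]

def missing_required(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```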

6. Create cloud scheduler

  1. Navigate to the Cloud Scheduler page.
  2. Select Create job.
  3. Enter a unique name for the scheduler and select your region in Region.
  4. Enter the frequency in unix-cron format and select the timezone.
  5. Select Continue.
  6. Select HTTP as the Target type.
  7. Paste the URL of the cloud function in the URL field.
  8. Keep POST as the HTTP method.
  9. In the Auth header, select Add OIDC token.
  10. In the Service account field, select your DomainTools GCP project service account.
  11. Select Continue.
  12. Enter 30m in the Attempt deadline config.
  13. Select Create.

The Cloud function will be executed as per the frequency provided in the Cloud Scheduler.

Command-based (automated) deployment of the required resources

1. Create Redis and bucket

  1. Log in to the Google Cloud Console and select the project created for DomainTools from the upper left dropdown.
  2. Select Activate Cloud Shell
  3. Select the Open Editor button after Cloud Shell opens successfully.
  4. Create a new file and add the below code to the file. The file type should be jinja (for example, resource.jinja).
   resources:
   - name: {{ properties["name"] }}
     type: gcp-types/redis-v1:projects.locations.instances
     properties:
       parent: projects/{{ env["project"] }}/locations/{{ properties["region"] }}
       instanceId: {{ properties["name"] }}
       authorizedNetwork: projects/{{ env["project"] }}/global/networks/default
       memorySizeGb: {{ properties["memory"] }}
       tier: STANDARD_HA
       {% if properties["displayName"] %}
       displayName: {{ properties["displayName"] }}
       {% endif %}
  5. Create another file and add the below code to the file. The file type should be yaml (for example, config.yaml).
   imports:
   - path: RESOURCE_FILE_NAME
   resources:
     - name: BUCKET_NAME
       type: storage.v1.bucket
       properties:
         location: LOCATION
     - name: REDIS_INSTANCE_NAME
       type: RESOURCE_FILE_NAME
       properties:
         name: REDIS_INSTANCE_NAME
         region: REGION
         memory: 4
         displayName: redis_display_name
  • RESOURCE_FILE_NAME: Name of the created resource file (for example, resource.jinja).
  • REDIS_INSTANCE_NAME: Unique name of the Redis instance.
  • BUCKET_NAME: Unique name of the bucket.
  • LOCATION: A region for your bucket. For a multi-region in the United States, specify US. Refer to this page for bucket locations.
  • REGION: A region for your redis. Values can be us-central1, us-west1, etc.

  6. Select Open Terminal and enter the below command:

gcloud deployment-manager deployments create NAME_OF_DEPLOY --config NAME_OF_CONFIG_FILE
  • NAME_OF_DEPLOY: Unique name of the deployment manager.
  • NAME_OF_CONFIG_FILE: Name of the created config file (for example, config.yaml).

If deployment is unsuccessful, delete the deployment manager instance and create it again. To delete the deployment manager, use the following:

gcloud deployment-manager deployments delete NAME_OF_DEPLOY

2. Create a serverless VPC access

Enter the below command in the terminal after the deployment manager is created successfully.

gcloud compute networks vpc-access connectors create VPC_NAME --network default  --region REGION --range IP_RANGE
  • VPC_NAME: Unique name of the VPC.
  • REGION: A region for your connector. Values can be us-central1, us-west1, etc.
  • IP_RANGE: An unreserved internal IP network and a /28 of unallocated space is required. The value supplied is the network in Classless Inter-Domain Routing (CIDR) notation (10.0.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network.
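The /28 requirement can be sanity-checked locally with Python's ipaddress module before running the command. This is an illustrative sketch (it checks the size and privacy of the block, not whether it overlaps existing reservations in your VPC):

```python
import ipaddress

def valid_connector_range(cidr: str) -> bool:
    """Check that a CIDR block is a private /28, as the connector requires."""
    net = ipaddress.ip_network(cidr, strict=True)
    return net.prefixlen == 28 and net.is_private

# Example: valid_connector_range("10.0.0.0/28")
```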

3. Create a cloud function

  1. Navigate to the bucket and open the bucket created for the DomainTools. Upload the cloud function zip file in the bucket.
  2. Enter the below terminal commands after the VPC network is created successfully.
gcloud functions deploy CLOUD_FUNCTION_NAME --set-env-vars ENV_NAME1=ENV_VALUE1,ENV_NAME2=ENV_VALUE2,ENV_NAME3=  --gen2 --runtime=python311 --region=REGION --source=SOURCE_OF_FUNCTION  --entry-point=main --service-account=SERVICE_ACCOUNT_EMAIL --trigger-http --no-allow-unauthenticated --memory=8GiB --vpc-connector=VPC_NAME --egress-settings=private-ranges-only --timeout=3600s
  • CLOUD_FUNCTION_NAME: Unique name of the cloud function.
  • REGION: A region for your cloud function. Values can be us-central1, us-west1, etc.
  • SOURCE_OF_FUNCTION: gsutil Uniform Resource Identifier (URI) of the cloud function zip in cloud storage. (for example, gs://domaintools/function.zip) where the domaintools is the name of the created bucket and function.zip is the cloud function zip file.
  • SERVICE_ACCOUNT_EMAIL: email of the created service account of the project.
  • VPC_NAME: Name of the created VPC Network.
  • ENV_NAME1=ENV_VALUE1: Name and value of an environment variable to be created. For an optional environment variable without a value, provide ENV_NAME= (empty value). See the Environment variables table above for the full list.

Provide all the required environment variables while creating the cloud function. The optional environment variables can also be provided after the cloud function is deployed by editing the cloud function.
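Assembling the --set-env-vars value by hand is error-prone. A small illustrative helper (not part of the ingestion script) can render it from a dict, leaving optional variables empty as described above:

```python
def set_env_vars_flag(env: dict) -> str:
    """Render a dict as the value for gcloud's --set-env-vars flag.

    Optional variables set to None become NAME= entries (empty value).
    """
    return ",".join(f"{key}={value if value is not None else ''}"
                    for key, value in env.items())

# Example:
# set_env_vars_flag({"GCP_BUCKET_NAME": "bucket", "DNSDB_API_KEY": None})
```

Note that if a value itself contains a comma, gcloud also supports an alternative delimiter syntax for list-valued flags (for example, --set-env-vars=^:^A=1,2:B=3), which this simple sketch does not handle.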

4. Create a cloud scheduler

  1. Enter the below terminal command after the cloud function is created successfully.
   gcloud scheduler jobs create http SCHEDULER_NAME --schedule="CRON_TIME" --uri="CLOUD_FUNCTION_URL" --attempt-deadline=30m --oidc-service-account-email=SERVICE_ACCOUNT_EMAIL --location=LOCATION --time-zone=TIME_ZONE
  • SCHEDULER_NAME: Unique name of the cloud scheduler.
  • TIME_ZONE: The time zone of your region.
  • CRON_TIME: Cron time format for the scheduler to run in every interval (for example, */10 * * * *).
  • CLOUD_FUNCTION_URL: URL of the created cloud function. For the URL navigate to the Cloud Functions page and open the created cloud function for the DomainTools.
  • SERVICE_ACCOUNT_EMAIL: email of the created service account of the project.
  • LOCATION: A region for your scheduler. Values can be us-central1, us-west1, etc.

The overall flow of the cloud function

  • After successful deployment of the required GCP resources, the cloud function will fetch domain/URL-aware (domain/URL present) events from Chronicle as per the Cloud Scheduler frequency.
  • Domains will be extracted from the Chronicle events and enriched from DomainTools.
  • The first 10 unique subdomains of each enriched domain will be fetched from DNSDB if the DNSDB API key is provided in the environment variable.
  • If the allow list is provided in the environment variable, the domains in the allow list will be excluded from enrichment.
  • The enriched domains will be stored in the Redis memory store with a TTL (time to live). When the TTL expires, the domain is removed from the Redis memory store.
  • An enriched domain event will be ingested and parsed in Chronicle.
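The Redis caching step above can be sketched with an in-memory stand-in. A real deployment would use a Redis client's SETEX/GET; the `TtlCache` class is illustrative, with TTL constants derived from the 1-day and 30-day defaults in the environment variable table:

```python
import time

class TtlCache:
    """Minimal in-memory stand-in for Redis SETEX/GET semantics."""
    def __init__(self):
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value together with its expiry time.
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if time.monotonic() >= expires:
            del self._store[key]  # TTL expired: drop the domain
            return None
        return value

PROVISIONAL_TTL = 86400        # 1 day in seconds (default for provisional evidence)
NON_PROVISIONAL_TTL = 2592000  # 30 days in seconds (default for all other domains)

def cache_enrichment(cache, domain, enrichment, provisional):
    """Cache an enriched domain with the TTL matching its evidence type."""
    ttl = PROVISIONAL_TTL if provisional else NON_PROVISIONAL_TTL
    cache.setex(domain, ttl, enrichment)
```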

Create lists in Chronicle

  1. Open the Chronicle Console and select Search in the sidebar panel.
  2. Select the Lists option and the List Manager section will open; select Create.
  3. Specify the list name (TITLE), description and content (ROWS). Specify the content with one item on each line.
  4. Create reference lists for allow list, monitoring list, monitoring tags and bulk enrichment ad-hoc script execution. The name of each list must be specified within the environment variable corresponding to its list type.

Create rules in Chronicle to generate detections

  1. Open the Chronicle console.
  2. Select the Rules & Detections option in the sidebar panel.
  3. Select the Rules Editor in the navigation bar.
  4. Select the NEW button to create a rule. Create rules for the High Risk Domain, Medium Risk Domain, Young Domain, Monitoring Domain, and Monitoring Tag Domain with the names listed below.
  5. Add the below code to the corresponding rules. Consult the Google Chronicle documentation for information on YARA-L parameters.

high_risk_domain_observed

rule high_risk_domain_observed {
 // This rule matches single events. Rules can also match multiple events within some time window.

 meta:
   // Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High".

   author = "" // enter your author name
   description = "Generate alert when a high risk domain is observed in network"
   severity = "" // enter severity, e.g. High

 events:
   $e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
   $e.security_result[0].risk_score > 90 // replace 90 with your threshold risk score
   // For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).

 condition:
   $e
}

medium_risk_domain_observed

rule medium_risk_domain_observed {
 // This rule matches single events. Rules can also match multiple events within some time window.

 meta:
   // Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
   author = "" // enter your author name
   description = "Generate alert when a Medium risk domain is observed in network"
   severity = "" // enter severity e.g Medium

 events:
   $e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
   $e.security_result[0].risk_score > 80 and $e.security_result[0].risk_score < 89 // specify your range for medium risk score
     // For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).

 condition:
   $e
}

young_domain

rule young_domain {
 // This rule matches single events. Rules can also match multiple events within some time window.

 meta:
   // Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
   author = "" // add author name
   description = "Alert when young domain detects"
   severity = "" // add severity e.g Medium

 events:
   $e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
   $e.principal.domain.first_seen_time.seconds > (timestamp.current_seconds() - 86400) // replace 86400 with your time in seconds

 outcome:
   // For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
 condition:
   $e
}

monitoring_list_domain

rule monitoring_list_domain {
 // This rule matches single events. Rules can also match multiple events within some time window.

 meta:
   // Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High".
   author = "" // enter author name
   description = "Detect domain from the monitoring list"
   severity = "" // enter severity e.g Medium

 events:
   $e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
   $e.principal.hostname IN %domain_ref // replace domain_ref with your domain reference list
 outcome:
   // For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
 condition:
   $e
}

monitoring_tags_domain_observed

rule monitoring_tags_domain_observed {
 // This rule matches single events. Rules can also match multiple events within some time window.
 meta:
   // Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
   author = "" // add author name
   description = "Domain with specified DomainTools tags"
   severity = "" // add severity
 events:
   $e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
   $e.about.file.tags IN %domaintool_tags_list // replace domaintool_tags_list with your reference list name
 outcome:
   // For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
 condition:
   $e
}

Ad-hoc script execution

Allowlist Management

  • The domains provided in the allow list will be excluded from the enrichment in the scheduled cloud function.
  • A user has to create and manage a list from the Chronicle.
  • A list created in the Chronicle needs to be provided in the environment variable of the created cloud function.
  • Go to the Testing tab of the cloud function, enter the {"allow_list": "true"} parameter in the Configure triggering event field, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
  • The dummy events of the allow list domains will be ingested in Chronicle.
  • When the user updates the list, the ad-hoc script needs to be executed again.

Monitoring List Management

  • The domains provided in the monitoring list will be enriched from the DomainTools and create the detection in the Chronicle if a monitoring list domain is observed in the user network.
  • A user has to create and manage a list from the Chronicle.
  • A list created in the Chronicle needs to be provided in the environment variable of the created cloud function.
  • Go to the Testing tab of the cloud function, enter the {"monitoring_list": "true"} parameter in the Configure triggering event field, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
  • The enriched domain event with additional monitoring fields for the monitoring list domains will be ingested in Chronicle.
  • When the user updates the list, the ad-hoc script needs to be executed again.

Monitoring Tags

  • The detection will be created in the Chronicle if tags provided in the monitoring list are present in the enriched domain event.
  • A user has to create and manage a list from the Chronicle.
  • A list created in the Chronicle needs to be provided in the environment variable of the created cloud function.
  • Go to the Testing tab of the cloud function, enter the {"monitoring_tags": "true"} parameter in the Configure triggering event field, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
  • The dummy events of the monitoring tags will be ingested in Chronicle.
  • When the user updates the list, the ad-hoc script needs to be executed again.

Single or Bulk Enrichment

  • The domains provided in the bulk enrichment list will be enriched from DomainTools with on-demand requests.
  • A user has to create and manage a list from the Chronicle.
  • A list created in the Chronicle needs to be provided in the environment variable of the created cloud function.
  • Go to the Testing tab of the cloud function, enter the {"bulk_enrichment": "true"} parameter in the Configure triggering event field, and click the TEST THE FUNCTION button. If the TEST THE FUNCTION button is not present, click RUN IN CLOUD SHELL and press Enter.
  • The enriched domain event for the bulk enrichment domains will be ingested in Chronicle.
  • When the user updates the list, the ad-hoc script needs to be executed again.

View Parsed Logs in Chronicle Console

  1. Navigate to Chronicle console.
  2. Type .* in the search field and click search.
  3. Click raw log search.
  4. Select Run Query as Regex.
  5. Set the time interval in which the logs are ingested.
  6. Select the log source as "DomainTools Threat Intelligence" and click search.
  7. Open any particular log to see the raw log and mapped fields.