DomainTools App for Google Chronicle SIEM¶
Overview¶
DomainTools Platform¶
DomainTools is the global leader for internet intelligence and the first place security practitioners go when they need to know. The world's most advanced security teams use our solutions to identify external risks, investigate threats, and proactively protect their organizations in a constantly evolving threat landscape.
The DomainTools Iris platform is a suite of security SaaS applications that help incident responders, investigators, and security analysts understand the risk of Internet domain names and the infrastructure that supports them.
Google Chronicle Platform¶
Chronicle is a modern, cloud-native SecOps platform that empowers security teams to better defend against today’s and tomorrow’s threats.
By combining Google’s hyper-scale infrastructure, unparalleled visibility, and understanding of cyber adversaries, Chronicle provides curated outcomes that proactively uncover the latest threats in near real-time, and enable security teams to detect, investigate and respond with speed and precision.
Chronicle App for DomainTools¶
The Chronicle App for DomainTools fetches real-time events from Chronicle and extracts domains for enrichment via the DomainTools APIs. The app also supports ad-hoc enrichment of domains on demand, and includes Looker dashboards where users can visualize different metrics.
App Installation & Configuration¶
Prerequisites¶
- Chronicle console and a Chronicle service account.
- DomainTools credentials (API username, API key, DNSDB API key).
- GCP project with the below required permissions:
  - The GCP user and the project service account should have Owner permissions.
- GCP services:
  - Memorystore for Redis
  - Cloud Functions (a 4-core CPU or higher is recommended for the cloud function configuration)
  - GCS bucket
  - Secret Manager
  - Cloud Scheduler
  - Serverless VPC Access
- Looker instance
Creating a zip file of the cloud function¶
Create a zip file with the contents of the following:
- The ingestion script (i.e. the `domaintools` directory)
- The `common` directory
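As a sketch, assuming both directories sit side by side in the working directory, the archive can be built with a command along these lines (include alongside them any other top-level files that ship with the app, such as the entry-point module):

```sh
# Bundle the ingestion script and shared code into the upload archive.
zip -r function.zip domaintools common
```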
Cloud Function Deployment¶
There are two ways to create the required GCP resources and deploy the Cloud function:
- Manual deployment
- Command based (automated) deployment
Manual deployment of the required resources¶
1. Add the secret in Secret Manager¶
- Log in to the Google Cloud Console.
- Select the project created for DomainTools from the upper-left dropdown.
- Navigate to `Secret Manager` and select `Create Secret`.
- Provide the name for the secret in the `Name` field.
- Upload the file if there is a file for the secret, or provide the secret value directly in the `Secret Value` field.
- Click on the `Create Secret` button.
Add secret values for the Chronicle service account JSON, the DomainTools API username, and the DomainTools API key. A separate secret is required for each secret value. If you want to fetch subdomains, a secret for the DNSDB API key also needs to be created in Secret Manager.

Once the secrets are created, provide the resource name of each secret as the value of the corresponding environment variable, in the format shown below.
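Secret version resource names follow the standard GCP pattern (the project and secret names are placeholders):

```
projects/<project-id>/secrets/<secret-name>/versions/latest
```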
2. Create a GCP Bucket¶
- Navigate to `Buckets` in GCP, select the `Create` button, and enter the bucket name.
- Select the region, modify the optional parameters if required, and then click on the `Create` button.
- Open the created GCP bucket and select the `Upload files` button.
- (Optional) Upload a `txt` file containing comma-separated values of the Chronicle log types. The script will fetch events from the specified log types only; if the file is not provided, all log types will be considered.
- (Optional) Upload a `json` file for the checkpoint. The script will use the timestamp specified in the checkpoint file as the start time for fetching logs. The structure of the checkpoint file should be as follows.
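The exact schema did not survive in this copy of the guide; as an illustration only, it is a small JSON object holding the start timestamp, along these lines (the `time` key name is an assumption):

```json
{
  "time": "2023-09-13T00:00:00Z"
}
```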
If not provided, the script will create a checkpoint file with the name `checkpoint.json` in the bucket when the script is initially executed.
3. Create Serverless VPC Access¶
- Navigate to `Serverless VPC access` and select `Create Connector`.
- Enter a connector name, select the region, set `Network` to `default`, and set `Subnet` to `Custom IP range`.
- Enter any unique IP in the IP range box (e.g. `10.0.0.0`) and select `Create`.
4. Create Redis Instance¶
- Navigate to Redis and select `Create instance`.
- Enter a unique instance ID and display name.
- Select `Standard` in the `Tier Selection` and set the capacity to `4 GB` (recommended); it can be adjusted per user requirements.
- Select your region in the `Region` field.
- Select `No read replicas` for `Read Replicas`.
- Select `default` in the `Network` dropdown of the "Set up connection" section.
- Optionally select a maintenance schedule.
- Select `Create instance`.
5. Create Cloud Function¶
- Navigate to the Cloud Functions page and select `Create function`.
- Select `2nd gen` in the `Environment` dropdown.
- Enter a unique function name and select your region from the region dropdown.
- Keep `Require authentication` selected in the `Trigger` section.
- Expand the `Runtime, build, connections and security settings` section.
- Select the below options under `RUNTIME`:
  - `Memory allocated` - 8 GiB (recommended)
  - `CPU (preview)` - 4 (recommended)
  - `Timeout` - 3600 seconds
  - `Concurrency` - 1
  - `Service account` - select your DomainTools GCP project service account
- Select `Add variable` and add the environment variables listed in the next subsection.
- Select `CONNECTIONS` and select the below options:
  - Keep `Allow all traffic` in the Ingress settings.
  - Select the created Serverless VPC Access connector in the Network dropdown.
  - Select `Route only requests to private IPs through the VPC connector`.
- Select `Next`.
- Select `Python 3.11` in the `Runtime` dropdown.
- Select `ZIP Upload` as the source code option and keep the entry point as `main`.
- Select the created bucket in the `Destination bucket` dropdown.
- Browse and select the application zip file.
- Select `Deploy`. After a few minutes, the cloud function will be deployed successfully.
Environment Variables¶
Environment variable | Description | Default value | Required | Secret |
---|---|---|---|---|
GCP_BUCKET_NAME | Name of the created GCP bucket. | - | Yes | No |
REDIS_HOST | IP of the created Redis memory store. | - | Yes | No |
REDIS_PORT | Port of the created Redis memory store. | - | Yes | No |
CHRONICLE_CUSTOMER_ID | Chronicle customer ID. Navigate to Settings in the Chronicle console to find the customer ID. | - | Yes | No |
CHRONICLE_SERVICE_ACCOUNT | Resource name of the Chronicle service account secret, copied from Secret Manager. | - | Yes | Yes |
CHRONICLE_REGION | Region where the Chronicle instance is located. | us | No | No |
DOMAINTOOLS_API_USERNAME | Resource name of the DomainTools API username secret, copied from Secret Manager. | - | Yes | Yes |
DOMAINTOOLS_API_KEY | Resource name of the DomainTools API key secret, copied from Secret Manager. | - | Yes | Yes |
DNSDB_API_KEY | Resource name of the DNSDB API key secret, copied from Secret Manager. | - | No | Yes |
FETCH_SUBDOMAINS_FOR_MAX_DOMAINS | Maximum number of domains for which subdomains are fetched. | 2000 (max) | No | No |
LOG_FETCH_DURATION | Duration in seconds for which events are fetched from Chronicle. Provide an integer value; e.g., to fetch the logs of every 5 minutes, specify 300. | - | Yes | No |
CHECKPOINT_FILE_PATH | Path of the checkpoint file in the bucket. If provided, events from the specified time will be fetched from Chronicle. If the file is directly in the bucket, provide only the file name; if it is inside a folder, provide the folder path along with the file name (e.g. folderName/fileName). | - | No | No |
FETCH_URL_EVENTS | Flag to fetch URL-aware events from Chronicle. Accepted values: [true, false]. | false | No | No |
LOG_TYPE_FILE_PATH | Path of the log type file in the bucket. If provided, events from those log types only will be fetched from Chronicle; otherwise, all log types will be considered. Provide comma-separated ingestion label values in the file. If the file is directly in the bucket, provide only the file name; if it is inside a folder, provide the folder path along with the file name (e.g. folderName/fileName). Refer to this page for the supported log types. | - | No | No |
PROVISIONAL_TTL | TTL (time to live) applied when the domain's API response contains an Evidence key with the value provisional. Provide an integer value; if not provided, the default of 1 day is used. | 1 day | No | No |
NON_PROVISIONAL_TTL | TTL (time to live) for all other domains. Provide an integer value; if not provided, the default of 30 days is used. | 30 days | No | No |
ALLOW_LIST | Name of the allow list reference list created in Chronicle. | - | No | No |
MONITORING_LIST | Name of the monitoring list reference list created in Chronicle. | - | No | No |
MONITORING_TAGS | Name of the monitoring tags reference list created in Chronicle. | - | No | No |
BULK_ENRICHMENT | Name of the bulk enrichment reference list created in Chronicle. | - | No | No |
6. Create Cloud Scheduler¶
- Navigate to the Cloud Scheduler page.
- Select `Create job`.
- Enter a unique name for the scheduler and select your region in `Region`.
- Enter the schedule in unix-cron format in `Frequency` and select the timezone.
- Select `Continue`.
- Select `HTTP` as the `Target type`.
- Paste the URL of the cloud function in the `URL` field.
- Keep `POST` as the `HTTP method`.
- In the `Auth header`, select `Add OIDC token`.
- In the `Service account` field, select your DomainTools GCP project service account.
- Select `Continue`.
- Enter `30m` in the `Attempt deadline` config.
- Select `Create`.

The cloud function will be executed at the frequency provided in the Cloud Scheduler.
Command based (automated) deployment of the required resources¶
1. Create Redis and Bucket¶
- Log in to the Google Cloud Console and select the project created for DomainTools from the upper-left dropdown.
- Select `Activate Cloud Shell`.
- Select the `Open Editor` button after Cloud Shell opens successfully.
- Create a new file and add the below code to it. The file type should be `jinja` (e.g. `resource.jinja`).
resources:
- name: {{ properties["name"] }}
type: gcp-types/redis-v1:projects.locations.instances
properties:
parent: projects/{{ env["project"] }}/locations/{{ properties["region"] }}
instanceId: {{ properties["name"] }}
authorizedNetwork: projects/{{ env["project"] }}/global/networks/default
memorySizeGb: {{ properties["memory"] }}
tier: STANDARD_HA
{% if properties["displayName"] %}
displayName: {{ properties["displayName"] }}
{% endif %}
- Create another file and add the below code to it. The file type should be `yaml` (e.g. `config.yaml`).
imports:
- path: RESOURCE_FILE_NAME
resources:
- name: BUCKET_NAME
type: storage.v1.bucket
properties:
location: LOCATION
- name: REDIS_INSTANCE_NAME
type: RESOURCE_FILE_NAME
properties:
name: REDIS_INSTANCE_NAME
region: REGION
memory: 4
displayName: redis_display_name
- `RESOURCE_FILE_NAME`: Name of the created resource file (e.g. resource.jinja).
- `REDIS_INSTANCE_NAME`: Unique name of the Redis instance.
- `BUCKET_NAME`: Unique name of the bucket.
- `LOCATION`: A region for your bucket. For multi-region in the United States, specify US. Refer to this page for bucket locations.
- `REGION`: A region for your Redis instance. Values can be us-central1, us-west1, etc.
- Select `Open Terminal` and enter the below commands:
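The command block itself did not survive in this copy of the guide; the standard Deployment Manager invocation matching these placeholders is:

```sh
gcloud deployment-manager deployments create NAME_OF_DEPLOY --config NAME_OF_CONFIG_FILE
```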
- `NAME_OF_DEPLOY`: Unique name of the deployment.
- `NAME_OF_CONFIG_FILE`: Name of the created config file (e.g. config.yaml).
If deployment is unsuccessful, delete the deployment manager deployment and create it again. To delete it, use the following command:
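This is the standard Deployment Manager delete command matching the creation step above:

```sh
gcloud deployment-manager deployments delete NAME_OF_DEPLOY
```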
2. Create a Serverless VPC Access¶
Enter the below command in the terminal after the deployment manager is created successfully.
gcloud compute networks vpc-access connectors create VPC_NAME --network default --region REGION --range IP_RANGE
- `VPC_NAME`: Unique name of the VPC connector.
- `REGION`: A region for your connector. Values can be us-central1, us-west1, etc.
- `IP_RANGE`: An unreserved internal IP network; a /28 of unallocated space is required. The value supplied is the network in CIDR notation (e.g. 10.0.0.0/28). This IP range must not overlap with any existing IP address reservations in your VPC network.
3. Create a Cloud function¶
- Navigate to the bucket created for DomainTools and upload the cloud function zip file to it.
- Enter the below terminal command after the VPC network is created successfully.
gcloud functions deploy CLOUD_FUNCTION_NAME --set-env-vars ENV_NAME1=ENV_VALUE1,ENV_NAME2=ENV_VALUE2,ENV_NAME3= --gen2 --runtime=python311 --region=REGION --source=SOURCE_OF_FUNCTION --entry-point=main --service-account=SERVICE_ACCOUNT_EMAIL --trigger-http --no-allow-unauthenticated --memory=8GiB --vpc-connector=VPC_NAME --egress-settings=private-ranges-only --timeout=3600s
- `CLOUD_FUNCTION_NAME`: Unique name of the cloud function.
- `REGION`: A region for your cloud function. Values can be us-central1, us-west1, etc.
- `SOURCE_OF_FUNCTION`: gsutil URI of the cloud function zip in Cloud Storage (e.g. `gs://domaintools/function.zip`, where domaintools is the name of the created bucket and function.zip is the cloud function zip file).
- `SERVICE_ACCOUNT_EMAIL`: Email of the created service account of the project.
- `VPC_NAME`: Name of the created VPC connector.
- `ENV_NAME1=ENV_VALUE1`: Name and value of an environment variable to be created. For optional environment variables that should stay unset, provide `ENV_NAME=` with an empty value.

Provide all the required environment variables while creating the cloud function. Optional environment variables can also be provided after deployment by editing the cloud function.
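For illustration, a filled-in invocation might look like the following; every concrete name here (function, project, bucket, service account, connector) is a hypothetical placeholder:

```sh
gcloud functions deploy domaintools-enrichment \
  --set-env-vars GCP_BUCKET_NAME=domaintools,REDIS_HOST=10.0.0.3,REDIS_PORT=6379,CHRONICLE_CUSTOMER_ID=my-customer-id,CHRONICLE_SERVICE_ACCOUNT=projects/my-project/secrets/chronicle-sa/versions/latest,DOMAINTOOLS_API_USERNAME=projects/my-project/secrets/dt-username/versions/latest,DOMAINTOOLS_API_KEY=projects/my-project/secrets/dt-key/versions/latest,LOG_FETCH_DURATION=300 \
  --gen2 --runtime=python311 --region=us-central1 \
  --source=gs://domaintools/function.zip --entry-point=main \
  --service-account=domaintools-sa@my-project.iam.gserviceaccount.com \
  --trigger-http --no-allow-unauthenticated --memory=8GiB \
  --vpc-connector=domaintools-vpc --egress-settings=private-ranges-only \
  --timeout=3600s
```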
4. Create a Cloud Scheduler¶
- Enter the below terminal command after the cloud function is created successfully.
gcloud scheduler jobs create http SCHEDULER_NAME --schedule="CRON_TIME" --uri="CLOUD_FUNCTION_URL" --attempt-deadline=30m --oidc-service-account-email=SERVICE_ACCOUNT_EMAIL --location=LOCATION --time-zone=TIME_ZONE
- `SCHEDULER_NAME`: Unique name of the cloud scheduler job.
- `TIME_ZONE`: The time zone of your region.
- `CRON_TIME`: Cron format for the interval at which the scheduler runs (e.g. `*/10 * * * *`).
- `CLOUD_FUNCTION_URL`: URL of the created cloud function. To find the URL, navigate to the Cloud Functions page and open the cloud function created for DomainTools.
- `SERVICE_ACCOUNT_EMAIL`: Email of the created service account of the project.
- `LOCATION`: A region for your scheduler. Values can be us-central1, us-west1, etc.
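A filled-in example that triggers the function every 10 minutes (all names are hypothetical placeholders):

```sh
gcloud scheduler jobs create http domaintools-scheduler \
  --schedule="*/10 * * * *" \
  --uri="https://us-central1-my-project.cloudfunctions.net/domaintools-enrichment" \
  --attempt-deadline=30m \
  --oidc-service-account-email=domaintools-sa@my-project.iam.gserviceaccount.com \
  --location=us-central1 --time-zone="Etc/UTC"
```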
The Overall flow of the Cloud function¶
- After the required GCP resources are deployed successfully, the cloud function will fetch domain/URL-aware (domain/URL present) events from Chronicle at the interval set in the Cloud Scheduler.
- Domains will be extracted from the Chronicle events and enriched via DomainTools.
- The first 10 unique subdomains of each enriched domain will be fetched from DNSDB if the DNSDB API key is provided in the environment variables.
- If an allow list is provided in the environment variables, the domains in the allow list will be excluded from enrichment.
- Enriched domains will be stored in the Redis memory store with a TTL (time to live). When the TTL expires, the domain is removed from the Redis memory store.
- An enriched domain event will be ingested and parsed in Chronicle.
Create Lists in Chronicle¶
- Open the Chronicle console and select `Search` in the sidebar panel.
- Select the `Lists` option; the List Manager section will open. Select `Create`.
- Specify the list name (`TITLE`), description, and content (`ROWS`). Specify the content with one item on each line, as in the example after this list.
- Create reference lists for the allow list, monitoring list, monitoring tags, and bulk enrichment ad-hoc script execution. The name of each list must be specified in the environment variable corresponding to its list type.
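For example, the content (`ROWS`) of an allow list is simply one domain per line; these example domains are placeholders:

```
example.com
internal-corp.net
trusted-partner.org
```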
Create Rules in Chronicle to generate detections¶
- Open the Chronicle console.
- Select the `Rules & Detections` option in the sidebar panel.
- Select the `Rules Editor` in the navigation bar.
- Select the `NEW` button to create a rule. Create rules for the High Risk Domain, Medium Risk Domain, Young Domain, Monitoring Domain, and Monitoring Tag Domain with the names listed in the subsections below.
- Add the below code to the corresponding rules. Consult the Google Chronicle documentation for information on YARA-L parameters.
high_risk_domain_observed¶
rule high_risk_domain_observed {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High".
author = "" // enter your author name
description = "Generate alert when a high risk domain is observed in network"
severity = "" // enter severity, e.g. High
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.security_result[0].risk_score > 90 // replace 90 with your threshold risk score
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
medium_risk_domain_observed¶
rule medium_risk_domain_observed {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
author = "" // enter your author name
description = "Generate alert when a Medium risk domain is observed in network"
severity = "" // enter severity e.g Medium
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.security_result[0].risk_score > 80 and $e.security_result[0].risk_score < 89 // specify your range for medium risk score
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
young_domain¶
rule young_domain {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High"
author = "" // add author name
description = "Alert when a young domain is detected"
severity = "" // add severity e.g Medium
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.principal.domain.first_seen_time.seconds > (timestamp.current_seconds() - 86400) // replace 86400 with your time in seconds
outcome:
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
monitoring_list_domain¶
rule monitoring_list_domain {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High".
author = "" // enter author name
description = "Detect domain from the monitoring list"
severity = "" // enter severity e.g Medium
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.principal.hostname IN %domain_ref // replace domain_ref with your domain reference List
outcome:
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
monitoring_tags_domain_observed¶
rule monitoring_tags_domain_observed {
// This rule matches single events. Rules can also match multiple events within some time window.
meta:
// Allows for storage of arbitrary key-value pairs of rule details - who wrote it, what it detects on, version control, etc. The "author" and "severity" fields are special, as they are used as columns on the rules dashboard. If you'd like to be able to sort based on these fields on the dashboard, make sure to add them here. Severity value, by convention, should be "Low", "Medium" or "High".
author = "" // add author name
description = "Domain with specified DomainTools tags"
severity = "" // add severity
events:
$e.metadata.log_type = "DOMAINTOOLS_THREATINTEL"
$e.about.file.tags IN %domaintool_tags_list // replace domaintool_tags_list with your reference list name
outcome:
// For a multi-event rule an aggregation function is required, e.g., risk_score = max(0).
condition:
$e
}
Ad-Hoc Script Execution¶
Allowlist Management¶
- The domains provided in the allow list will be excluded from enrichment in the scheduled cloud function.
- Create and manage the list from Chronicle.
- The name of the list created in Chronicle needs to be provided in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the `{"allow_list": "true"}` parameter in the "Configure triggering event" field, and click on the `TEST THE FUNCTION` button. If the `TEST THE FUNCTION` button is not present, click on `RUN IN CLOUD SHELL` and press Enter. (The same payload can also be sent from a terminal; see the sketch after this list.)
- Dummy events for the allow list domains will be ingested in Chronicle.
- When the list is updated, the ad-hoc script needs to be executed again.
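A sketch of the terminal equivalent, assuming a gen2 function named `domaintools-enrichment` in `us-central1` (both hypothetical); the same pattern applies to the other ad-hoc payloads below:

```sh
# Invoke the deployed function with the ad-hoc payload.
gcloud functions call domaintools-enrichment --region=us-central1 --gen2 \
  --data '{"allow_list": "true"}'
```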
Monitoring List Management¶
- The domains provided in the monitoring list will be enriched via DomainTools, and a detection will be created in Chronicle if a monitoring list domain is observed in the user's network.
- Create and manage the list from Chronicle.
- The name of the list created in Chronicle needs to be provided in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the `{"monitoring_list": "true"}` parameter in the "Configure triggering event" field, and click on the `TEST THE FUNCTION` button. If the `TEST THE FUNCTION` button is not present, click on `RUN IN CLOUD SHELL` and press Enter.
- The enriched domain events, with additional monitoring fields for the monitoring list domains, will be ingested in Chronicle.
- When the list is updated, the ad-hoc script needs to be executed again.
Monitoring Tags¶
- A detection will be created in Chronicle if tags provided in the monitoring tags list are present in an enriched domain event.
- Create and manage the list from Chronicle.
- The name of the list created in Chronicle needs to be provided in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the `{"monitoring_tags": "true"}` parameter in the "Configure triggering event" field, and click on the `TEST THE FUNCTION` button. If the `TEST THE FUNCTION` button is not present, click on `RUN IN CLOUD SHELL` and press Enter.
- Dummy events for the monitoring tags will be ingested in Chronicle.
- When the list is updated, the ad-hoc script needs to be executed again.
Single or Bulk Enrichment¶
- The domains provided in the bulk enrichment list will be enriched via DomainTools with on-demand requests.
- Create and manage the list from Chronicle.
- The name of the list created in Chronicle needs to be provided in the environment variable of the created cloud function.
- Go to the Testing tab of the cloud function, enter the `{"bulk_enrichment": "true"}` parameter in the "Configure triggering event" field, and click on the `TEST THE FUNCTION` button. If the `TEST THE FUNCTION` button is not present, click on `RUN IN CLOUD SHELL` and press Enter.
- The enriched domain events for the bulk enrichment domains will be ingested in Chronicle.
- When the list is updated, the ad-hoc script needs to be executed again.
Looker Installation & Configuration¶
Prerequisites¶
- Billing Project ID, Dataset name, and Service account file of BigQuery that stores Chronicle data for database connection in Looker.
- BigQuery Export feature needs to be enabled for your Chronicle tenant. (Reach out to your Chronicle representative to set this up.)
- A user with the Admin role, to create a database connection and install the block from the Marketplace.
Create a connection to Google Chronicle in Looker¶
- To create a connection to Google Chronicle, open the Looker instance and navigate to the Home page.
- Open the main menu, select Admin, and then go to the Connections page.
- Click on "Add connection" to create a new connection.
- Enter a name for the connection and select "Google BigQuery Standard SQL" as the Dialect. Several new fields will appear.
- Enter the Billing Project ID (e.g. "chronicle-crds") where the Chronicle data is present.
- Enter `datalake` in the Dataset name field.
- To configure authentication, select the service account method and upload your Chronicle service account file.
- In the optional settings, set both timestamps (Database timestamp and Query timestamp) to UTC; the time fields shown in dashboards will be populated accordingly.
- Click on the Connect button to complete the connection setup. Looker is now connected to the Google Chronicle database.
Get the Block from GitHub Repository¶
- Go to the DomainTools Looker dashboards GitHub repository and fork it. Make sure to uncheck the option to fork only the main branch.
- Go to Looker and turn on "Development Mode" from the sidebar panel.
- Select Projects from the Develop menu.
- From the LookML Projects page, select "New LookML Project" to open the New Project page.
- On the New Project page, configure these options for your new project: Project Name: `domaintools_dashboards`; Starting Point: select "Blank Project". Click on "Create Project". The project will be created and opened in the Looker IDE.
- Click on the Settings icon in the navigation bar, and open the Configure Git page by selecting the "Configure Git" button.
- In Looker's Configure Git section, paste the URL of the forked DomainTools Looker Dashboards Git repository (e.g. https://github.com/<your_username>/looker-dashboards.git) in the Repository URL field, then select Continue.
- Enter the GitHub username and Personal Access Token, then click "Test and Finalize Setup".
- If you get an error like "Ensure credential allow write access failed", enter the username and token again and click "Skip Tests and Finalize Setup".
- Click on the Git Actions tab and select the 'develop-dashboards-production' branch as the current branch.
- Now you should be able to see the code. If it is not visible:
  - In the 'Git Actions' tab on the left side, click on the "Pull from…" option.
  - Select the "Pull From Remote (develop-dashboards-production)" option and click on the Confirm button.
- Click on the ‘File Browser’ tab from the left side, click on ‘manifest.lkml’, enter the value of the following constants and then click “Save Changes”.
- CONNECTION_NAME: Name of the database connection for the Chronicle dataset in BigQuery.
- CHRONICLE_URL: Enter the base URL of your Chronicle console tenant, e.g. https://tenant.backstory.chronicle.security
- GOOGLE_CLOUD_FUNCTION_NAME: Enter the name of the cloud function.
- GOOGLE_CLOUD_FUNCTION_REGION: Enter the name of the cloud function region. List of regions can be found at https://cloud.google.com/functions/docs/locations
- GOOGLE_CLOUD_PROJECT_ID: Enter the name of the cloud function project id. Find Project ID https://support.google.com/googleapi/answer/7014113?hl=en
- In Git Actions, click on "Commit" to push changes to the repository, and then click "Deploy to Production". Note: "Deploy to Production" will push code to the production branch set in the project settings; by default, this is the 'main' branch. If you don't want to push code to the 'main' branch, create your own branch and set it as the 'Git Production Branch Name' in the project settings, then click "Deploy to Production".
- On the Homepage of your Looker instance, navigate to the “LookML dashboards” tab under the “Folders” tab to access and view all the dashboards.
Get the Block from Marketplace¶
- After a successful connection, click on the "Marketplace" button in the top-right corner.
- Click on "Discover" to open the Looker Marketplace.
- Search for "DomainTools" to open the installation page.
- Click on "Install+".
- Select "Install" and accept the terms and conditions.
- Click on "Agree and Continue".
- Select the Connection Name from the dropdown and enter the other values.
- After successful installation, the DomainTools block will be visible under Home => Blocks.
- After clicking on it, the user will be able to see the dashboards listed below, which populate DomainTools data from your configured Chronicle instance.
Dashboards¶
Threat Intelligence¶
Young Domains¶
- This panel will display counts of the Young Domains observed in Chronicle based on the selected Young Domain Threshold. The default value of the Young Domain Threshold is 7 days.
- A domain will be displayed in the dashboard only if (ingested timestamp of the enriched domain event - first_seen of the domain) <= Young Domain Threshold.
Suspicious Domains¶
- This panel will display counts of the Suspicious Domains based on the selected Suspicious Domain Range. The default value of the Suspicious Range is 75-100.
- A Suspicious Domain's details will be displayed in the drill-down table panel when the user clicks on the Suspicious Domains count. The following details will be displayed:
- Domain
- Risk Score
- View in Chronicle
- Last Observed (UTC)
- When the user clicks on the Domain, a redirection link will appear; clicking on the link redirects the user to DomainTools.
- When the user clicks on View in Chronicle, a redirection link will appear; clicking on the link redirects the user to Chronicle.
High Risk Domains¶
- This panel will display a bar chart of the domain name vs. the number of observations during the selected time period.
- The High Risk Domains will be populated based on the selected High Risk Range. The default High Risk Range is 90-100.
- When a user clicks on any bar of the chart, events associated with that domain will be displayed in the drill-down table panel. The latest 500 domain details will be displayed; to view all domain details, a user can download the file by clicking the Download option. The following details will be displayed:
- Domain
- Risk Score
- Event Timestamp (UTC)
- View in Chronicle
- When the user clicks on the Domain, a redirection link will appear; clicking on the link redirects the user to DomainTools.
- When the user clicks on View in Chronicle, a redirection link will appear; clicking on the link redirects the user to Chronicle.
Medium Risk Domains¶
- This panel will display a bar chart of the domain name vs. the number of observations during the selected time period.
- The Medium Risk Domains will be populated based on the selected Medium Risk Range. The default Medium Risk Range is 75-89.
- When a user clicks on any bar of the chart, events associated with that domain will be displayed in the drill-down table panel. The latest 500 domain details will be displayed; to view all domain details, a user can download the file by clicking the Download option. The following details will be displayed:
- Domain
- Risk Score
- Event Timestamp (UTC)
- View in Chronicle
- When the user clicks on the Domain, a redirection link will appear; clicking on the link redirects the user to DomainTools.
- When the user clicks on View in Chronicle, a redirection link will appear; clicking on the link redirects the user to Chronicle.
Young Domains¶
- This panel will display the details of the Young Domains in a table. The following details of the young domains will be displayed:
- Domain
- Age (in days)
- Risk Score
- First Observed (UTC)
- Last Observed (UTC)
- Events
- When the user clicks on the Domain, a redirection link will appear; clicking on the link redirects the user to DomainTools.
- When the user clicks on Events, a redirection link will appear; clicking on the link redirects the user to Chronicle.
Enrichment Explorer¶
- This dashboard will display the details of the enriched domain events. The following details of the domains will be displayed:
- Domain
- Age (in days)
- Active Status
- Overall Risk Score
- Last Enriched DateTime (UTC)
- Proximity Score
- Threat Type
- Threat Evidence
- Threat Profile Malware
- Threat Profile Phishing
- Threat Profile Spam
- Domain Registered From
- Domain Registered Company
- Domain Registered Region
- View in Iris
- The Domain filter will be populated based on the selected values of the Last Enriched, Age, and Risk Score filters.
- This dashboard will be populated based on the selected values of the Last Enriched, Domain, Threat Type, Age, and Risk Score filters.
- The Enrichment Explorer table panel will be populated with the latest 1000 domain details.
- When the user clicks on the Domain, a redirection link will appear; clicking on the link redirects the user to Chronicle.
- When the user clicks on View in Iris, a redirection link will appear; clicking on the link redirects the user to DomainTools.
Domain Profiling¶
- This dashboard will display a pie chart based on the selected Enrichment Filter value.
- The pie chart will be populated with the top 19 values of the Enrichment Filter field, with all remaining values grouped into a single "other" slice.
- The details of domains will be displayed in the drill-down table panel when clicking on any slice of the chart. The first 500 domain details will be displayed; to view all domain details, a user can download the file by clicking the Download option. The following details will be displayed:
- Domain
- View in Iris
- When the user clicks on View in Chronicle, a redirection link will appear; clicking on the link redirects the user to Chronicle.
- When the user clicks on View in Iris, a redirection link will appear; clicking on the link redirects the user to DomainTools.
Monitoring Dashboard¶
This dashboard will be populated based on the selected time range filter value, using detections from the latest version of each rule. Details of the panels present in the dashboard are as follows:
Monitored Domain Detections over Time¶
- This panel will display counts of the monitored domain detections based on the detection timestamp.
Monitored Domain Detections¶
- This panel will display counts of the monitored domain detections.
- When clicking on the count of the monitored domain detections, a redirection link will appear; clicking on the link redirects the user to the monitored domain detections in Chronicle.
Monitored Tags Detections over Time¶
- This panel will display counts of the monitored tag detections based on the detection timestamp.
Tagged Domain Detections¶
- This panel will display counts of the tagged domain detections.
- When clicking on the count of the tagged domain detections, a redirection link will appear; clicking on the link redirects the user to the tagged domain detections in Chronicle.
Monitoring Domain List Management¶
- This panel will display the link of the monitoring domain list. When clicking on the link, a user will be redirected to the Monitoring Domain List in the Chronicle.
Monitoring Tag List Management¶
- This panel will display the link of the monitoring tags list. When clicking on the link, a user will be redirected to the Monitoring Tags List in the Chronicle.
Application Diagnostics¶
This Dashboard contains the Time Range filter. This dashboard will be populated based on the selected time range filter value. Details of the panels present in the dashboard are as follows:
View Logs of Cloud function¶
- This link will appear in the application logs panel. When clicking on the link, a user will be redirected to the cloud function logs in GCP.
Domain Enrichment Log¶
- This panel will display details of the latest 1000 enriched domains. The following details will be displayed:
- Domain
- First Ingested (UTC)
- Most Recent Enrichment (UTC)
- View in Iris
Number of Enriched Domains based on timestamp¶
- This panel will display counts of the enriched domains based on the timestamp.
View Parsed Logs in Chronicle Console¶
- Navigate to the Chronicle console.
- Type `.*` in the search field and click on Search.
- Click on Raw Log Search.
- Select `Run Query as Regex`.
- Set the time interval in which the logs were ingested.
- Select the log source as "DomainTools Threat Intelligence" and click on Search.
- Open any particular log to see the raw log and the mapped fields.
Limitations¶
- The CBN parser will only be able to parse DomainTools events.
- We suggest using 2nd gen Cloud Functions. 1st gen Cloud Functions have a maximum execution time of 9 minutes, while 2nd gen allows 60 minutes. If the execution time of the cloud function exceeds the timeout, the complete data (Alerts, Activities, Devices, Vulnerabilities) may not be ingested into Chronicle.
- There is a rate limit for Chronicle search API calls of 120 queries per hour, with each query yielding a maximum of 10,000 results. Hence, in total, up to 1,200,000 UDM events can be fetched from Chronicle in an hour. More details can be found at https://cloud.google.com/chronicle/docs/reference/search-api#search_api_query_limits
- The rate limit for the DomainTools enrich API is 60 calls per minute. Hence, in total, around 360k domains can be fetched in an hour.
- The Cloud Scheduler attempt deadline is a maximum of 30 minutes. If the cloud function execution takes more than 30 minutes, execution of the cloud function will be stopped.
Known Issues¶
- The Looker dashboard does not display data in the drill-down table when there are too many records to be displayed.
- In the Enrichment Explorer dashboard, the Threat Type filter will always populate with the "Not a Threat" value if there is a single record present in the applied time range filter.
- Looker will only show data from the past 180 days, though this can vary per the retention policy configured in BigQuery.
- The Looker dashboards reflect the query time zone selected by the user in the Chronicle database connection.
- While redirecting to Google Chronicle, an error like "ERROR: Search has encountered an error and could not load data. Please try again, and contact support if this error continues." will be shown if the searched date range is more than 3 months.
Troubleshooting¶
This section describes the common issues that might happen during the deployment or the running of the app and the steps to resolve the issues.
- GCloud logs can be used for troubleshooting. To get GCloud logs:
- Log in to https://console.cloud.google.com/ using valid credentials.
- Navigate to Cloud Functions and click on the deployed function, where you can find the logs module.
- Logs can be filtered using severity. (Logs can also be read from a terminal; see the sketch after this list.)
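A sketch of reading the logs from Cloud Shell; the function name and region are placeholders:

```sh
# Read the most recent log entries for the deployed function.
gcloud functions logs read domaintools-enrichment --region=us-central1 --limit=50
```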
- Sometimes GCloud default logs are not visible in the logs module of the cloud function after testing the function manually or when the scheduler job invokes the function. To resolve this issue, a quick page refresh is required in the GCloud console.
- If you test the cloud function immediately after deploying it on GCloud, it might not work as expected. To resolve this, wait for a few seconds and then test it.
- If the cloud function stops its execution because memory exceeds the limit, reconfigure the cloud function's memory configuration and increase the memory limit.
- The Looker dashboard takes data from the cache and does not display the latest events. Solution:
- Click on the three dots on the rightmost side of the dashboard.
- Click on "Clear cache and refresh".
- In the Enrichment Explorer dashboard, the data present in the table and the domains populated in the Domain filter may not match. This happens because the Domain filter is populated in ascending alphabetical order with the domains present in the applied Last Enriched filter, while the table is sorted in descending order of the Last Enriched DateTime column.
- To display other domain details that are not present in the table but are present in the applied Last Enriched time range, search for that domain in the Domain filter and populate the dashboard by selecting it.
- The data is not displayed on the dashboard: This could be a problem with the data source as the database connection might be wrongly configured.
- If desired events are not showing in the visualization: Ensure that the filters in the dashboard are configured correctly. If the filters are too restrictive, they may be preventing the dashboard from displaying any data.
- The dashboard may be slow to load or unresponsive: This could be due to a problem with the data source being unavailable or having too much data, the query that is being used, or the way that the dashboard is being rendered.
- If you encounter an error while redirecting to Google Chronicle: check the searched date range, as Chronicle supports a maximum search range of 3 months.
- If you are unable to see any data populated in the dashboards, here are the steps you can follow to verify if data is present in the Chronicle:
- Open your Chronicle instance.
- Click the Application Menu in the upper right corner, then select UDM Search.
- In the UDM Search Bar, type in this query to verify the data. (Note: make sure the date range is the same as the dashboard and all other filters are unselected.)
- metadata.log_type = "DOMAINTOOLS_THREATINTEL"
- This query returns the possible events for the dashboards. If no data is seen in Chronicle, the Looker dashboards are not expected to be populated. If data is seen, then tile-specific data needs to be verified in Chronicle for the same time range.
GCP Resources/Services Approximate Cost Details¶
Service | Standard Configurations | Purpose | Reference |
---|---|---|---|
Memorystore for Redis | Service tier: Standard; Instance size: 4 GiB | Caching domains based on the TTL provided. | Approx. cost ~$185/month https://cloud.google.com/memorystore/docs/redis/pricing |
Cloud Functions | Memory: 8192 MB; CPU: 4.8 GHz; Execution time per function (ms): 3600; Invocations per month: 1500; Minimum number of instances: 1 | Function/script which pulls data from Chronicle and enriches it via the DomainTools API. | Approx. cost ~$66/month https://cloud.google.com/functions/pricing |
Cloud Storage | Total amount of storage: 1 GiB | Storage bucket used to manage API checkpoints. | Approx. cost ~$0.02/month https://cloud.google.com/storage/pricing |
Secret Manager | Active secret versions per replica location: 4; Access operations: 1500 | Used to maintain credentials. | Approx. cost ~$0/month https://cloud.google.com/secret-manager/pricing |
Cloud Scheduler | Total amount of jobs: 1 | Scheduler which executes the above cloud function at a specific time interval. | Approx. cost ~$0/month https://cloud.google.com/scheduler/pricing |
Serverless VPC access | - | Serverless VPC Access makes it possible to connect directly to your Virtual Private Cloud (VPC) network from serverless environments such as Cloud Run, App Engine, or Cloud Functions. | https://cloud.google.com/vpc/pricing |
Looker | https://cloud.google.com/looker/pricing#platform_editions | Visualization tool to visualize events from Chronicle. | https://cloud.google.com/looker/pricing (cost depends on the selected edition) |
Note: Users can also estimate the price of the Google Cloud services used with the Google Cloud pricing calculator.
References¶
- Install the gcloud CLI
- Deploying cloud functions from local machine
- Creating a resource file
- Creating a config file
- Looker
- Looker Marketplace
Changelog¶
Version | Release Date | Summary |
---|---|---|
1.0.0 | 2023-09-13 | Provided the cloud function to enrich domain data from DomainTools and ingest it into Chronicle. Provided the ad-hoc script for the allow list, monitoring list, bulk enrichment, and monitoring tags execution. Provided the below Chronicle rules to generate detections: high_risk_domain_observed, medium_risk_domain_observed, young_domain, monitoring_list_domain, monitoring_tags_domain_observed. Provided the below dashboards for visualization: Threat Intelligence, Enrichment Explorer, Domain Profiling, Monitoring Dashboard, Application Diagnostics. |