SGNL can stream events to leading SIEM and storage providers while still making logs available within the SGNL Console and APIs. SGNL logs are formatted as individual JSON entries with a well-defined schema. An example access decision log entry takes the form of:
{
  "accessDecision": "Allow",
  "action": "access",
  "assetId": "aws::arn:1111",
  "clientId": "a5c5f108-1111-4b9a-2222-ed9787e3ce6b",
  "eventType": "sgnl.accessSvc.decision",
  "integrationDisplayName": "AWS",
  "integrationId": "a5c5f108-3333-4b9a-4444-ed9787e3ce6b",
  "level": "info",
  "msg": "Access search service decision",
  "principalId": "[email protected]",
  "requestId": "a5c5f108-5555-4b9a-6666-ed9787e3ce6b",
  "tenantId": "a5c5f108-7777-4b9a-8888-ed9787e3ce6b",
  "timeAtEvaluation": "2024-06-28T20:05:03Z",
  "time_now": "2024-06-28T20:05:03.289737017Z",
  "ts": "2024-06-28T20:05:03Z"
}
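Because each entry is a single self-contained JSON object with a stable schema, logs are straightforward to filter programmatically. A minimal sketch in Python, using a trimmed version of the example entry above:

```python
import json

# A trimmed version of the example access decision entry shown above.
entry = json.loads("""
{
  "accessDecision": "Allow",
  "action": "access",
  "assetId": "aws::arn:1111",
  "eventType": "sgnl.accessSvc.decision",
  "principalId": "[email protected]",
  "ts": "2024-06-28T20:05:03Z"
}
""")

# Filtering on well-defined fields is a simple dictionary lookup.
if entry["eventType"] == "sgnl.accessSvc.decision" and entry["accessDecision"] == "Allow":
    print(f'{entry["principalId"]} was allowed to {entry["action"]} {entry["assetId"]}')
```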
To get started with Log Streaming, open the Admin section of the SGNL Console and add an integration.
SGNL streams logs to Splunk using the Splunk HTTP Event Collector (HEC). To get started, log in to both the SGNL Console and your Splunk console.
In Splunk:
Choose Add Data from the Splunk Launcher
Choose Monitor to add log data from an HTTP endpoint
Choose the HTTP Event Collector method for receiving data, and give the collector a descriptive name, such as your SGNL clientName
Choose Automatic source type and select which indices you’d like SGNL log data to flow into
In SGNL:
https://sgnl-log-stream.splunkcloud.com:8088
The next set of events that are generated will begin streaming to Splunk, and you should see them appear in Splunk Search. You can trigger logs by making access evaluation requests, configuring and synchronizing a System of Record, or creating triggers, rules, and actions inside the CAEP Hub.
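Splunk's HEC receives events as JSON posted to the collector's /services/collector/event path, authorized with a `Splunk <token>` header. As a sketch of what a forwarded event looks like on the wire, here is a request built (but not sent) against the endpoint above; the token value is a placeholder, so substitute the one your collector generated:

```python
import json
import urllib.request

# Endpoint from the example above; the token below is a placeholder, not a real credential.
HEC_URL = "https://sgnl-log-stream.splunkcloud.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

# HEC wraps each log entry in an envelope whose "event" field carries the payload.
payload = json.dumps({
    "event": {"eventType": "sgnl.accessSvc.decision", "accessDecision": "Allow"},
    "sourcetype": "_json",
})

req = urllib.request.Request(
    HEC_URL,
    data=payload.encode("utf-8"),
    headers={"Authorization": f"Splunk {HEC_TOKEN}", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here since it
# requires a live collector and a valid token.
```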
SGNL can also stream logs to AWS S3 Buckets. Setup is straightforward, but may depend on your chosen authentication method:
Log in to the Console and choose Admin -> Add Log Stream -> Choose AWS S3
Give the Log Stream a name and optionally a description
Enter your Bucket Name, e.g. ‘sgnl-logs’
Enter the Region where your S3 Bucket is instantiated
Choose the Auth Method, either Access Key or Assume Role:
If using the ‘Access Key’ method:
If using the ‘Assume Role’ method:
With the ‘Assume Role’ method, you will then need to configure a Trust Policy in AWS that allows SGNL to assume the role in your AWS account. A Trust Policy may look something like the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::059615723535:role/nqr-1-log-forwarder-role",
          "arn:aws:iam::059615723535:role/nqr-2-log-forwarder-role"
        ]
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
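Before applying the policy to your role (for example with `aws iam update-assume-role-policy`), it can help to sanity-check the document locally. A small sketch that validates the trust policy above:

```python
import json

# The trust policy from above; loading it catches malformed JSON before it
# ever reaches AWS.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::059615723535:role/nqr-1-log-forwarder-role",
          "arn:aws:iam::059615723535:role/nqr-2-log-forwarder-role"
        ]
      },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
""")

# Check the statements grant exactly what SGNL needs to assume the role.
for stmt in policy["Statement"]:
    assert stmt["Effect"] == "Allow"
    assert "sts:AssumeRole" in stmt["Action"]
print("trust policy OK")
```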
In particular, note the AWS Principals listed here. They are composed of SGNL’s Account ID and the identifier of the shard you are using, nqr in the example above.
If you don’t know your shard ID, you can ask SGNL Support, or perform a DNS lookup on your client name:
% dig myclient.sgnl.cloud
; <<>> DiG 9.10.6 <<>> myclient.sgnl.cloud
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 845
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;myclient.sgnl.cloud. IN A
;; ANSWER SECTION:
myclient.sgnl.cloud. 300 IN CNAME nqr.sgnl.cloud.
nqr.sgnl.cloud. 60 IN CNAME nqr-1.sgnl.cloud.
nqr-1.sgnl.cloud. 300 IN CNAME k8s-istioing-istioigw-f95a003938-f4568858c6540cfb.elb.us-east-1.amazonaws.com.
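The shard code can also be pulled out of the dig output mechanically; a sketch using the ANSWER SECTION above:

```python
import re

# The ANSWER SECTION from the dig output above.
answer = """
myclient.sgnl.cloud. 300 IN CNAME nqr.sgnl.cloud.
nqr.sgnl.cloud. 60 IN CNAME nqr-1.sgnl.cloud.
"""

# The leftmost label of the first CNAME target is the three-letter shard code.
match = re.search(r"IN\s+CNAME\s+([a-z0-9]+)\.sgnl\.cloud\.", answer)
shard = match.group(1)
print(shard)  # -> nqr
```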
Depending on your deployment, the AWS Account ID may be different for your SGNL Client. Please contact support if you need further information.
Note the ANSWER SECTION: the first result gives the three-letter code for your shard. In this case, nqr.sgnl.cloud. results in SGNL AWS Roles of arn:aws:iam::059615723535:role/nqr-1-log-forwarder-role and arn:aws:iam::059615723535:role/nqr-2-log-forwarder-role, as shown in the Trust Policy above.
Before configuring Datadog log streaming in SGNL, ensure you have:
Your Datadog site (e.g. datadoghq.com, datadoghq.eu, us3.datadoghq.com, us5.datadoghq.com)

The Datadog log stream will begin forwarding SGNL events to your Datadog instance. You can view and query these logs in the Datadog Logs Explorer.
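If you want to verify connectivity independently of SGNL, Datadog exposes an HTTP log-intake endpoint derived from your site value. The http-intake host pattern below follows Datadog's documented convention, but confirm it for your specific site:

```python
# Sketch: derive the Datadog log-intake endpoint from a site value.
# Requests to it need a DD-API-KEY header with a valid API key.
def intake_url(site: str) -> str:
    return f"https://http-intake.logs.{site}/api/v2/logs"

print(intake_url("datadoghq.eu"))  # -> https://http-intake.logs.datadoghq.eu/api/v2/logs
```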
Before configuring Loki log streaming in SGNL, ensure you have:
Your Loki endpoint URL (e.g. http://loki.example.com:3100)

Loki supports multi-tenancy through the use of tenant IDs. For more information about Loki’s multi-tenancy features, see the Loki Multi-Tenancy Documentation.
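Loki's push API accepts JSON streams of timestamped log lines, and in multi-tenant deployments the tenant ID travels in the X-Scope-OrgID request header. A sketch of a push request, built but not sent, with a hypothetical endpoint and tenant ID:

```python
import json
import time
import urllib.request

# Hypothetical endpoint and tenant ID; substitute your own values.
LOKI_PUSH = "http://loki.example.com:3100/loki/api/v1/push"
TENANT_ID = "sgnl-tenant"

# Loki's push API takes streams of [nanosecond-timestamp, line] pairs.
payload = json.dumps({
    "streams": [{
        "stream": {"source": "sgnl"},
        "values": [[str(time.time_ns()), '{"eventType":"sgnl.accessSvc.decision"}']],
    }]
})

req = urllib.request.Request(
    LOKI_PUSH,
    data=payload.encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Scope-OrgID": TENANT_ID},
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here since it needs a
# reachable Loki instance.
```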
The full push endpoint is your Loki endpoint URL (e.g. http://loki.example.com:3100) followed by /loki/api/v1/push.

Before configuring Azure Blob Storage log streaming in SGNL, ensure you have:
Your connection string can be found in the Azure Portal:
The connection string will look similar to:
DefaultEndpointsProtocol=https;AccountName=mylogstorage;AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net
Enter the Container Name, e.g. ‘sgnl-logs’

Logs will be stored as objects within the specified container, organized by date and time to facilitate easy retrieval and analysis.
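The connection string is a semicolon-separated list of key=value fields, which makes it straightforward to inspect before pasting it into the Console; a sketch:

```python
# Split the example Azure Storage connection string from above into its fields.
conn = ("DefaultEndpointsProtocol=https;AccountName=mylogstorage;"
        "AccountKey=storageaccountkeybase64encoded;EndpointSuffix=core.windows.net")

# Split on ";" for fields, then on the first "=" for each key/value pair.
parts = dict(field.split("=", 1) for field in conn.split(";") if field)
print(parts["AccountName"])  # -> mylogstorage
```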