Access Gate forwards its audit and alert streams over syslog (RFC 5424). This guide covers the Elastic-side configuration needed to ingest those events through Logstash, store them in Elasticsearch, and surface them in Kibana.
## Prerequisites
- Access Gate configured to forward logs over TCP syslog to your Logstash host. See Log Forwarding and SIEM Export.
- Elastic Stack 8.x — Elasticsearch, Logstash, and Kibana — reachable from the Access Gate appliance.
- A Logstash listener port. The examples below use `5514/TCP` to avoid colliding with privileged port 514.
## Configure the Logstash Pipeline
Access Gate events are emitted by the `vigil` process in RFC 5424 structured-data format. The default Logstash `syslog` input only parses RFC 3164, so use the `tcp` input with a `grok` filter to extract Access Gate's structured-data fields.
Create `/etc/logstash/conf.d/access-gate.conf`:
```
input {
  tcp {
    port => 5514
    type => "access-gate"
  }
}

filter {
  if [type] == "access-gate" {
    grok {
      match => {
        "message" => "<%{POSINT:priority}>%{INT:version} %{TIMESTAMP_ISO8601:event_time} %{HOSTNAME:host_name} %{WORD:program} %{POSINT:procid} %{WORD:severity} \[%{DATA:sd_id} Log=\"%{DATA:event_log}\" Mitre=\"%{DATA:mitre}\" PrincipalIp=\"%{IP:src_ip}\" Rule=\"%{DATA:rule}\"\]"
      }
    }
    date {
      match => ["event_time", "ISO8601"]
      target => "@timestamp"
    }
    mutate {
      remove_field => ["message", "priority", "version", "procid", "sd_id"]
    }
  }
}

output {
  if [type] == "access-gate" {
    elasticsearch {
      hosts => ["https://elasticsearch:9200"]
      index => "access-gate-%{+YYYY.MM.dd}"
      user => "${ELASTIC_USER}"
      password => "${ELASTIC_PASSWORD}"
    }
  }
}
```
Reload Logstash after any pipeline change:

```
sudo systemctl restart logstash
```
This filter extracts five fields from every Access Gate event:
| Field | Source | Example |
|---|---|---|
| `event_log` | `Log` | user Alice Salmon logged in using screen CUI Access |
| `mitre` | `Mitre` | `-----` |
| `src_ip` | `PrincipalIp` | 192.168.100.59 |
| `rule` | `Rule` | Access Screen Login Attempt |
| `severity` | RFC 5424 MSGID | ALERT |
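To sanity-check the extraction offline, the grok pattern above can be mirrored with a plain regular expression. This is only a sketch for local verification, not part of the pipeline; the named groups simply shadow the grok field names:

```python
import re

# Mirrors the pipeline's grok pattern with Python named groups.
PATTERN = re.compile(
    r'<(?P<priority>\d+)>(?P<version>\d+) '
    r'(?P<event_time>\S+) (?P<host_name>\S+) (?P<program>\w+) '
    r'(?P<procid>\d+) (?P<severity>\w+) '
    r'\[(?P<sd_id>\S+) Log="(?P<event_log>[^"]*)" '
    r'Mitre="(?P<mitre>[^"]*)" PrincipalIp="(?P<src_ip>[^"]*)" '
    r'Rule="(?P<rule>[^"]*)"\]'
)

# Sample event in the same shape as the validation example below.
sample = (
    '<130>1 2026-04-24T20:53:10.313Z access-gate vigil 334 ALERT '
    '[context@60446 Log="user Alice Salmon logged in using screen CUI Access" '
    'Mitre="-----" PrincipalIp="192.168.100.59" Rule="Access Screen Login Attempt"]'
)

fields = PATTERN.match(sample).groupdict()
print(fields["severity"], fields["src_ip"], fields["rule"])
# → ALERT 192.168.100.59 Access Screen Login Attempt
```

If a field comes back empty or the match fails, the forwarded message shape differs from the grok pattern and the pipeline will need the same adjustment.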
## Apply an Index Template
Without an index template, Elasticsearch infers field types from the first document and may pick the wrong type for src_ip (string instead of ip). Apply this template before the first document arrives:
```
curl -u "$ELASTIC_USER:$ELASTIC_PASSWORD" -X PUT \
  "https://elasticsearch:9200/_index_template/access-gate" \
  -H "Content-Type: application/json" -d '{
  "index_patterns": ["access-gate-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "event_log": { "type": "text" },
        "mitre": { "type": "keyword" },
        "src_ip": { "type": "ip" },
        "rule": { "type": "keyword" },
        "severity": { "type": "keyword" },
        "host_name": { "type": "keyword" }
      }
    }
  }
}'
```
## Validate the Pipeline
Send a sample event to Logstash with `nc` to confirm parsing before pointing the Access Gate at it:
```
echo '<130>1 2026-04-24T20:53:10.313Z access-gate vigil 334 ALERT [context@60446 Log="user Alice Salmon logged in using screen CUI Access" Mitre="-----" PrincipalIp="192.168.100.59" Rule="Access Screen Login Attempt"]' \
  | nc -q1 logstash 5514
```
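To generate further test events without hand-editing that sample line, a small helper can assemble messages of the same shape. This is a hypothetical sketch, assuming the `context@60446` SD-ID and field names shown in the example; pipe its output to `nc` as above:

```python
from datetime import datetime, timezone

def make_event(log, mitre, src_ip, rule, host="access-gate", procid=334):
    """Build one Access Gate-style RFC 5424 line (hypothetical helper;
    priority, SD-ID, and field names copied from the sample event)."""
    # Millisecond-precision UTC timestamp, e.g. 2026-04-24T20:53:10.313Z
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    return (
        f'<130>1 {ts} {host} vigil {procid} ALERT '
        f'[context@60446 Log="{log}" Mitre="{mitre}" '
        f'PrincipalIp="{src_ip}" Rule="{rule}"]'
    )

print(make_event(
    log="user Alice Salmon logged in using screen CUI Access",
    mitre="-----",
    src_ip="192.168.100.59",
    rule="Access Screen Login Attempt",
))
```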
Confirm the document landed:
```
curl -u "$ELASTIC_USER:$ELASTIC_PASSWORD" \
  "https://elasticsearch:9200/access-gate-*/_search?q=rule:%22Access+Screen+Login+Attempt%22&pretty"
```
The hit should show `src_ip`, `rule`, and `event_log` populated.
## Create the Kibana Data View
In Kibana, open Stack Management → Data Views → Create data view:
- Name: `access-gate`
- Index pattern: `access-gate-*`
- Timestamp field: `@timestamp`
Open Discover, select the access-gate data view, and set the time range to the last 15 minutes. A login event triggered from the Access Gate UI should appear with all extracted fields.
## Add a Detection Rule
In Kibana's Security → Rules → Create new rule, use a Custom query rule with:
- Index pattern: `access-gate-*`
- Query: `severity:"ALERT" AND rule:"Access Screen Login Attempt"`
- Severity: Medium
This fires whenever the Access Gate logs an authenticated login on a privileged screen — adjust the query for the rules you care about.
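For testing a match outside Kibana, that query translates roughly into Elasticsearch query DSL. A sketch of the equivalent request body you could POST to the `_search` endpoint of `access-gate-*`:

```python
import json

# Rough DSL equivalent of: severity:"ALERT" AND rule:"Access Screen Login Attempt".
# Exact-match term queries work here because both fields are mapped as keyword
# by the index template above.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"severity": "ALERT"}},
                {"term": {"rule": "Access Screen Login Attempt"}},
            ]
        }
    }
}
print(json.dumps(query, indent=2))
```

Swapping the `rule` term (or adding more `filter` clauses) mirrors how you would broaden the Kibana rule's query.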
## Related
- Log forwarding and SIEM export — configure the syslog destination on the Access Gate side
- Detection and alerts — what populates the alert stream
- System logs and diagnostics information — on-box logs for troubleshooting