Processing Syslog messages and sending to Splunk

Hi all,
I want to do a drop-in replacement for an existing syslog-ng + Splunk HF setup. Currently it receives a variety of logs via syslog, syslog-ng writes them to disk (using different paths per sourcetype), and then the Splunk HF reads those files, applies index-time transforms based on sourcetype, and forwards to the indexing tier. I can see several ways to approach this, so I'm looking for a best-practice recommendation. How would you set this up so that some event sources are processed with Cribl, while others are left to the Splunk HF/indexers? In other words, I'd like to make use of things like the PAN pack, but also have the outlier sources keep working exactly as before, which means passing them to the HF so its props/transforms can process the data.
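For context, the existing pieces look roughly like this (the paths, filter, and sourcetype/index names below are illustrative placeholders, not our actual config):

```
# syslog-ng: one file destination per device class
source s_udp514     { network(transport("udp") port(514)); };
filter f_pan        { host("panfw"); };
destination d_pan   { file("/var/log/remote/pan/${HOST}.log"); };
log { source(s_udp514); filter(f_pan); destination(d_pan); };

# Splunk HF inputs.conf: monitor each path with its own sourcetype,
# then props/transforms on the HF key off that sourcetype
[monitor:///var/log/remote/pan/*.log]
sourcetype = pan:log
index = firewall
```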

1 UpGoat

My take would be to leave syslog-ng in place, listening on a specific port (or ports) for the outlier scenarios; the syslog-ng → HF path would remain for those. Install Stream alongside it to process the other sources. With unique ports for Stream and syslog-ng, there shouldn't be any conflicts. The only question then is scale. How much data you talkin?
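A minimal sketch of the split, assuming you can keep the outlier devices pointed at syslog-ng and repoint the rest (port numbers are arbitrary examples):

```
# syslog-ng stays as-is for the outlier devices, e.g. still on 514/udp
source s_outliers { network(transport("udp") port(514)); };

# Cribl Stream: add a Syslog Source listening on a different port,
# e.g. 9514/udp+tcp, and repoint the sources you want Stream to
# handle (PAN pack, etc.) at that port instead.
```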

1 UpGoat

Ah, therein lies the rub. Many of these devices send only on 514, and I can't easily change that. So I need some way other than the port to separate the data; otherwise I'll have to just keep the HF in place. Volume is light at the moment because the firewall logs haven't been cut over yet, but I anticipate less than 500 MB/day.

1 UpGoat

Put Stream in charge of all syslog data, and use Route filters to send the outlier data to a Syslog Destination (pointed at syslog-ng → HF). You could minimally filter that data, or simply pass it through untouched. The Route filters can match on raw content, host IPs, hostnames, lookup tables, or any combination of those.
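Route filters are just JavaScript expressions over the event, so the outlier route could look something like the sketch below (the IPs, hostname pattern, and lookup file are placeholders; double-check the C.Lookup signature against the Cribl docs for your version):

```
// Route "outliers" -> Output: Syslog Destination (syslog-ng / HF)
['10.1.2.3', '10.1.2.4'].includes(host) || /legacy-app/.test(_raw)

// Or drive it from a lookup table of outlier hosts
C.Lookup('outlier_hosts.csv', 'host').match(host) != null

// Final catch-all Route -> PAN pack pipeline / Splunk destination
true
```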

2 UpGoats