As the title says, I’m looking to create and update a lookup table based on incoming events, somewhat similar to the outputlookup command in Splunk. Is there any way to update or create a table based on fields from events passing through a pipeline?
You can use the Redis function to update Redis-backed lookup data from within a pipeline using live events.
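Outside of Cribl itself, the underlying pattern is straightforward: each event upserts a key/value pair derived from its fields, and later events (or other pipelines) read the value back by key. A minimal Python sketch of that pattern, using redis-py-style `hset`/`hget` calls against a dict-backed stand-in so the demo runs without a Redis server (with a real server you would construct `redis.Redis(...)` instead; the table name and fields are hypothetical):

```python
# Sketch of the Redis-lookup pattern: each event upserts a row in a hash,
# and any later lookup reads it back by key. FakeRedis is a dict-backed
# stand-in exposing redis-py's hset/hget signatures so this runs without
# a Redis server; swap in redis.Redis(host=...) for real use.

class FakeRedis:
    def __init__(self):
        self._hashes = {}

    def hset(self, name, key, value):
        self._hashes.setdefault(name, {})[key] = value

    def hget(self, name, key):
        return self._hashes.get(name, {}).get(key)

def update_lookup(client, event, table="hosts_lookup"):
    # Upsert: key the row on the event's host, store its latest status.
    client.hset(table, event["host"], event["status"])

def enrich(client, event, table="hosts_lookup"):
    # Later events (or another pipeline) read the table back by key.
    event["last_status"] = client.hget(table, event["host"])
    return event

if __name__ == "__main__":
    r = FakeRedis()
    update_lookup(r, {"host": "web01", "status": "up"})
    update_lookup(r, {"host": "web01", "status": "down"})  # overwrite
    print(enrich(r, {"host": "web01"})["last_status"])  # prints "down"
```

Because the table lives in Redis rather than a CSV on one node, every worker that can reach the same Redis instance sees the updates.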
This isn’t currently possible with CSV-based lookups.
We’ve created a product with custom functions that will read/write tables in data stores, which can be MongoDB, Oracle, MySQL, MSSQL, Postgres, DB2, and others, making the data available across workers. Ping me if this aligns with what you’re looking for.
You can optionally configure a script in “Sources” to run, collect the data you need from the source, and output the results to /opt/cribl/data/lookups. This is handy when you have a fileserver hosting reference data, or an API endpoint with the file or data ready to be consumed.
Example command that fetches data from an endpoint and writes it to the lookup directory:
wget URL-to-Data -O /opt/cribl/data/lookups/filename.csv
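For the API-endpoint case, here is a hedged Python equivalent of that fetch (stdlib only; the lookup directory matches the path above, but the URL and filename are whatever your source provides). It downloads to a temp file in the same directory and then renames it into place, so a pipeline never reads a half-written lookup:

```python
import os
import tempfile
import urllib.request

LOOKUP_DIR = "/opt/cribl/data/lookups"  # Cribl's lookup directory

def fetch_lookup(url, dest_name, lookup_dir=LOOKUP_DIR):
    # Download to a temp file in the target directory, then atomically
    # rename it over the destination so readers never see partial data.
    fd, tmp_path = tempfile.mkstemp(dir=lookup_dir, suffix=".tmp")
    try:
        with urllib.request.urlopen(url) as resp, os.fdopen(fd, "wb") as out:
            out.write(resp.read())
        os.replace(tmp_path, os.path.join(lookup_dir, dest_name))
    except Exception:
        os.unlink(tmp_path)  # leave no temp debris behind on failure
        raise
```

Usage would be something like `fetch_lookup("https://example.com/ref.csv", "filename.csv")`, run on whatever schedule keeps the lookup fresh.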