How do you capture, work on, and live preview a massive event without the browser JS engine timing out?

One of the neatest features of the product is the live preview. This works great for relatively small events.

If I have a massive event, with a pretty intensive pipeline to boot, things time out and it's a real PITA to debug my code. How do people work through such a workflow?

I usually increase the timeout from 10 seconds to 60 seconds and "play" the pipeline, but I wanted to know if people had some other nifty hacks.

Thank you!
b1scu1t

Best Answer

  • Jon Rust (Posts: 435, mod)
    Answer ✓

    I have a pipeline here that might help you out. The idea is to use aggregation functions before and after your pipeline fires to get volume and count metrics.
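
    For example, a before/after pair of Aggregations functions could use expressions along these lines. This is just a rough sketch with made-up output field names; the linked pipeline's actual functions may differ:

        # First Aggregations function, placed before the Chain function
        count().as(origEvents)
        sum(_raw.length).as(origBytes)

        # Second Aggregations function, placed after the Chain function
        count().as(processedEvents)
        sum(_raw.length).as(processedBytes)

    Comparing the two pairs of fields shows how much the pipeline reduced your event count and total volume.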

    From the README linked above:

    1. Create a Collector source pointing to an NFS share or object store with your sample file
    • Alternatively, you can use a raw TCP source and feed the file to the port using netcat (see the sketch after this list)
    2. Be sure you have an Event Breaker assigned to the source that works for your test data
    3. Add this Pipeline to your Worker Group
    4. Change the Chain function to point to the Pipeline or Pack you want to test
    5. Under Routing > Quick Connect, connect your source to the devnull destination, and select the cribl_inline_redux_report Pipeline
    • Optionally, deliver to your analytics tool instead
    6. Navigate back to the source page and prepare to make a Full Run on your collector
    • Or prepare to fire off netcat with your file
    7. In a new window or tab, start a capture with your source as the filter
    • To help with the capture filter, you might add a field to the Collector definition, e.g. _TEST_FIELD => 1, and filter on that
    8. With the capture running, go back to the tab with the Collector source and run it (or fire off your netcat)
    9. The capture should show you 2 events: 1 for the original stats, and 1 for the processed stats
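
    If you take the netcat route mentioned above, the feed itself is a one-liner; the hostname, port, and file name here are placeholders for your own setup:

        # Stream the sample file to the raw TCP source (placeholder host/port/file)
        nc my-worker.example.com 10060 < sample.log

    And if you added _TEST_FIELD to the Collector definition, a capture filter expression like _TEST_FIELD == 1 will match only your test events.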
