Error "write EPIPE" in Splunk destination connector

Cribl suddenly started having issues sending logs to Splunk, and the errors below appeared in the destination's logs. Logs are still getting through to Splunk, but slowly, and it's no longer keeping up.

Recreating the connector with the same settings fixes the problem for a few hours, and then it reappears.

{time:"2023-02-14T15:41:44.835Z",cid:"w0",channel:"output:splunk3",level:"error",message:"connection error",+endpoint:{3 items...},error:"write EPIPE"}
{time:"2023-02-14T15:41:44.641Z",cid:"w0",channel:"output:splunk3",level:"info",message:"flushing buffer backlog",count:2,totalSize:106765607}
{time:"2023-02-14T15:41:44.111Z",cid:"w0",channel:"output:splunk3",host:"splunk.local",level:"info",message:"attempting to connect",port:9997,tls:false}
{time:"2023-02-14T15:41:44.111Z",cid:"w0",channel:"output:splunk3",level:"warn",message:"sending is blocked",elapsed:2,+endpoint:{3 items...},since:1676389301}
{time:"2023-02-14T15:41:41.629Z",cid:"w0",channel:"output:splunk3",level:"error",message:"connection error",+endpoint:{3 items...},error:"write EPIPE"}

This is usually indicative of an issue with your Splunk indexers. Have you checked the processing queues on your indexers? You can use Splunk's Monitoring Console to assess the health of your indexing tier. Look at the queue fill ratios for all of the queues; I prefer the median and perc95 functions, as they paint a more realistic picture. If the queues are consistently full, then your indexers are more than likely at their data processing limit.

Also check metrics.log for blocked=true messages. A high volume of those messages is also indicative of indexing-tier issues:

index=_internal source=*metrics.log host=<indexers> blocked=true
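If it helps, here is one way to chart queue fill ratios with median/perc95, a sketch assuming the standard group=queue events in metrics.log (the current_size_kb and max_size_kb fields and the alias names are from that convention, adjust to taste):

```
index=_internal source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
| stats median(fill_pct) AS median_fill perc95(fill_pct) AS p95_fill BY host, name
```

A queue whose p95_fill sits near 100 while downstream queues are empty is usually the bottleneck stage on that indexer.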
