LPC "Unable to get RIPC client buffer"

Hi,

We are consistently getting the error below from LPC (Legacy Protocol Converter) for one of our users. After the error occurs, no live prices are available to that user; other users are unaffected:

Disconnecting secure channel secureChannel.146.  Disconnecting user socket 18 from host 10.8.193.234.  Reason: Unable to get RIPC client buffer to send message. ErrorId: -4 Error: </local/jenkins/workspace/RRTLPC/OS/OL7-64/esdk/source/esdk/Cpp-C/Eta/Impl/Transport/rsslSocketTransportImpl.c:1429> Error: 1009 ipcDataBuffer() failed, out of output buffers. The output buffer may need to be flushed.

Is anyone able to shine some light on what this might be, and how we might go about fixing it, please?

Thanks in advance,

David.

Best Answer

  • Jirapongse

    @david.hills

    Thanks for reaching out to us.

    The API used by LPC encodes data into an output buffer before sending it to the network. The API reserves a fixed number of output buffers when it starts.

    The error indicates that the API was unable to send data to the network and had used up all of its reserved output buffers.

    Typically, this points to a network issue or to the client application being a slow consumer.
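
    As a rough illustration of the mechanism (a generic sketch against the public ETA C transport API, with illustrative names; this is not LPC's internal code), a sender obtains one of the reserved buffers with rsslGetBuffer(), queues it with rsslWrite(), and drains pending data with rsslFlush(). The "out of output buffers" condition in the log above is what rsslGetBuffer() reports when none of the reserved buffers have been flushed back to the pool:

    #include <string.h>
    #include "rtr/rsslTransport.h"

    /* Sketch: get an output buffer, write a message, flush pending data. */
    static RsslRet sendMessage(RsslChannel *chnl, const char *data, RsslUInt32 len)
    {
        RsslError error;
        RsslUInt32 bytesWritten = 0, uncompBytesWritten = 0;
        RsslRet ret;

        /* Ask the transport for one of its pre-allocated output buffers. */
        RsslBuffer *buf = rsslGetBuffer(chnl, len, RSSL_FALSE, &error);
        if (!buf)
        {
            if (error.rsslErrorId == RSSL_RET_BUFFER_NO_BUFFERS)
            {
                /* All reserved buffers are queued but not yet on the wire:
                 * push pending data to the socket, then retry once. */
                if (rsslFlush(chnl, &error) < RSSL_RET_SUCCESS)
                    return RSSL_RET_FAILURE;
                buf = rsslGetBuffer(chnl, len, RSSL_FALSE, &error);
            }
            if (!buf)
                return RSSL_RET_FAILURE; /* still none free: slow consumer or network problem */
        }

        memcpy(buf->data, data, len);
        buf->length = len;

        /* Queue the buffer; a positive return value means bytes are still pending. */
        ret = rsslWrite(chnl, buf, RSSL_HIGH_PRIORITY, 0,
                        &bytesWritten, &uncompBytesWritten, &error);
        while (ret > RSSL_RET_SUCCESS)
            ret = rsslFlush(chnl, &error); /* drain until nothing is pending */

        return (ret >= RSSL_RET_SUCCESS) ? RSSL_RET_SUCCESS : ret;
    }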

    To investigate this issue further, you will need to contact the LPC support team directly via MyRefinitiv.

    I hope that this information is of help.


Answers

  • An LPC restart fixed things, for now. Would be good to understand what the problem was though, if anyone has any thoughts.

  • Thanks for getting back to me, @Jirapongse. I've entered a ticket with MyRefinitiv and am awaiting a response. Interestingly, we restarted the client PC more than once, but the same problem kept occurring until we restarted LPC itself.

  • Thanks @Jirapongse. The LPC support team pointed me to section 2.3.1 of the LPC manual, which suggests tweaking the TCP buffer settings as follows:

    net.core.rmem_max = 8388608
    net.core.wmem_max = 8388608
    net.core.rmem_default = 4194304
    net.core.wmem_default = 4194304
    net.ipv4.tcp_rmem = 4096 4194304 8388608
    net.ipv4.tcp_wmem = 4096 4194304 8388608
    net.ipv4.tcp_mem = 4096 4194304 8388608

    I've made those changes and have not yet seen a recurrence of the problem.
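
    In case it helps anyone else: these are standard Linux kernel parameters rather than anything LPC-specific, so they can be applied at runtime with sysctl -w and made persistent by adding the lines above to /etc/sysctl.conf (the exact commands below are my own notes, adjust for your environment):

    # apply at runtime, without a reboot (repeat for each parameter above)
    sudo sysctl -w net.core.wmem_max=8388608
    sudo sysctl -w net.ipv4.tcp_wmem="4096 4194304 8388608"

    # after adding the lines to /etc/sysctl.conf, reload and verify
    sudo sysctl -p
    sysctl net.core.wmem_max net.ipv4.tcp_wmem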