Data reception synchronization from 2 different consumers
Hi, we are designing a system that has to forward to Kafka real-time data received via the EMA API.
For resiliency reasons we need two instances of this gateway running, one primary and one secondary that should be activated in case of a fault of the primary, both connected to the Refinitiv stream.
The question is: in the event of a primary fault, if we want the secondary to resume forwarding data EXACTLY where the primary stopped, without loss of messages and without duplication, what is the best way to do it (if possible)? Are there any mechanisms in the API that can help us?
Best Answer
-
Hi @cdefusco,
You are referring to the hot-standby feature as described in this article. Unfortunately, the API stack itself does not provide any cues to synchronize data, and you will have to find and use data markers in the asset class of interest. For example, you can use sequence numbers, which are sent out with most L1 streams, to synchronize. This is handled completely in the user's application code.
If you want to use the failover feature in the SDK, then take a look at the Warm-Standby option.
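To make the "sequence numbers in application code" idea concrete, here is a minimal sketch, not based on any EMA type: it assumes each update carries an item name and a SeqNum, and tracks the last sequence number the primary forwarded per item so a forwarder can skip anything already published. The `ItemUpdate` and `ForwardCheckpoint` names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ItemUpdate:
    item_name: str   # e.g. a RIC
    seq_num: int     # SeqNum carried on the update message
    payload: bytes

class ForwardCheckpoint:
    """Tracks the last forwarded SeqNum per item (hypothetical helper)."""

    def __init__(self):
        self.last_seq = {}

    def record(self, update: ItemUpdate) -> None:
        # Called after the update has been successfully forwarded to Kafka.
        self.last_seq[update.item_name] = update.seq_num

    def should_forward(self, update: ItemUpdate) -> bool:
        # Skip anything at or before the checkpoint (already forwarded).
        last = self.last_seq.get(update.item_name)
        return last is None or update.seq_num > last
```

If the checkpoint is shared between the two instances (for example via a compacted Kafka topic), the standby can load it on takeover and resume from the first unforwarded sequence number per item.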
Answers
-
Hi @Gurpreet, this article seems to refer to the resiliency implemented by the API in case of a fault of the Refinitiv servers. I'm talking about a fault in our application, where we have two clients connected to Refinitiv.
I need to know what I can use to ensure that, in the event of a failure of our primary (which forwards the subscribed data to Kafka), our secondary can resume forwarding data from where the primary left off. But they are two different clients, on different machines, and with the assumption that there was no fault on the Refinitiv side.
-
The API doesn't provide any capabilities to synchronize data among instances.
The application itself needs to handle this. As mentioned by my colleague, you may use sequence numbers in the retrieved messages to synchronize data.
For example, the secondary needs to cache all messages retrieved from the server. The secondary also needs to retrieve all messages published by the primary. Then, the secondary removes messages from its cache after retrieving the messages with the same sequence numbers from the primary. When the secondary detects that the primary is down, it can continue to publish data from its cache.
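The steps above can be sketched as follows. This is an illustrative outline only, assuming the standby consumes both the data feed and the Kafka topic the primary publishes to; the class and method names are made up for the example.

```python
from collections import OrderedDict

class StandbyBuffer:
    """Cache on the secondary: feed messages not yet confirmed as
    published by the primary, kept in arrival order."""

    def __init__(self):
        self.pending = OrderedDict()  # (item, seq_num) -> payload

    def on_feed_message(self, item, seq_num, payload):
        # Every message from the feed is cached until the primary
        # is seen publishing it.
        self.pending[(item, seq_num)] = payload

    def on_primary_published(self, item, seq_num):
        # The primary already forwarded this message to Kafka,
        # so drop it from the cache.
        self.pending.pop((item, seq_num), None)

    def take_over(self):
        # On primary failure, everything still pending is exactly the
        # set of messages the primary never forwarded: publishing it
        # (in arrival order) avoids both loss and duplication.
        backlog = list(self.pending.items())
        self.pending.clear()
        return backlog
```

The correctness of the handoff hinges on reliable failure detection: if the primary is still alive when `take_over()` runs, both instances publish and duplicates appear, so a fencing mechanism (e.g. a lease or leader election) is still needed around this buffer.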
However, I suggest you contact our Solution team via your Refinitiv account team to discuss the design and implementation.
-
Hi @Jirapongse, thanks for the reply. We were already evaluating the use of sequence numbers for this purpose. Is there an exhaustive guide on the SeqNum field logic? Something that clarifies details such as whether it is a global or independent value (per item?), whether it is strictly incremental, etc.
-
This is the definition of SeqNum in the API document.
Specifies a user-defined sequence number, which can range in value from 0 to 4,294,967,295. To help with temporal ordering, SeqNum should increase across messages, but can have gaps depending on the sequencing algorithm in use. Details about sequence number use should be defined within the domain model specification or any documentation for products which require the use of SeqNum
This SeqNum is generated by the data feed, so please contact the data feed support team directly via MyRefinitiv to verify how the data feed generates this SeqNum field.
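One practical consequence of the 0 to 4,294,967,295 range quoted above: if the feed wraps the counter rather than resetting streams, a plain `>` comparison breaks at the wrap point. A hedged sketch of wrap-aware comparison (RFC 1982-style serial number arithmetic; whether your feed actually wraps is exactly what the support team should confirm):

```python
SEQ_MODULUS = 2**32  # SeqNum ranges from 0 to 4,294,967,295

def seq_newer(a: int, b: int) -> bool:
    """True if sequence number a comes 'after' b, treating the 32-bit
    space as circular so that a wrap from 4294967295 back to 0 still
    compares correctly (assumes the two numbers are less than half the
    space apart)."""
    return a != b and ((a - b) % SEQ_MODULUS) < SEQ_MODULUS // 2
```

For example, `seq_newer(1, 4294967295)` is true, because 1 is the value immediately after the wrap.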
-
Hi, thanks for your reply. I have one more question related to having two consumers on different machines with EMA. Is it possible to have the same user connected (registered to the same service) from different processes at the same time? Or would I get errors while building the consumer or registering the client in the second process?
-
Hi @cdefusco,
You can use the same DACS user ID to connect from two different processes. Your market data administrator can configure the system to allow this.
However, if you are connecting to the RTO service in the cloud, then it is recommended that each application have its own unique machine ID.
PS: It's better to ask a new question in a new post. It helps us keep track of the answers.
-
Hi @Gurpreet, thanks, and sorry if I continue here, but we have already started the topic.
Would it therefore also be possible to configure the user to prevent connections from two different processes, so that at any given moment the user can only be logged in from one of the two machines?
-
Yes, your market data administrator can limit the DACS maximum mounts per user in the RTDS configuration.