TRTH Python API - Queuing Time
Hi,
I received a question: "I see that generating an intraday summary report with a 5-second interval, for 1 RIC over 1 month, takes a long time (more than 30 minutes, excluding queuing time). Is this normal? Is there any way to speed up the process?"
I could not attach the .py file, so the code is copied below. I have removed the username and password from the Python script.
Best regards,
Gareth
-----------------------------------------------------------------------------------------------------------------------------------
# coding: utf-8

# In[4]:
# Step 1: token request
import requests
import json
import time

requestUrl = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Authentication/RequestToken"
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json"
}
requestBody = {
    "Credentials": {
        "Username": "",  # removed by the poster
        "Password": ""   # removed by the poster
    }
}
proxies = {'http': 'http://webproxy.ssmb.com:8080', 'https': 'http://webproxy.ssmb.com:8080'}

r1 = requests.post(requestUrl, json=requestBody, headers=requestHeaders, proxies=proxies)
if r1.status_code == 200:
    jsonResponse = json.loads(r1.text.encode('ascii', 'ignore'))
    token = jsonResponse["value"]
    print('Authentication token (valid 24 hours):')
    print(token)
else:
    print('Please replace myUserName and myPassword with valid credentials, then repeat the request')

# In[5]:
# Step 2: send an on demand extraction request using the received token
requestUrl = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw'
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryIntradaySummariesExtractionRequest",
        "ContentFieldNames": [
            # "Close Ask",
            # "Close Bid",
            # "High",
            # "High Ask",
            # "High Bid",
            "Last",
            # "Low",
            # "Low Ask",
            # "Low Bid",
            # "No. Asks",
            # "No. Bids",
            "No. Trades",
            "Open",
            # "Open Ask",
            # "Open Bid",
            "Volume"
        ],
        # "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest",
        # "ContentFieldNames": [
        #     "Trade - Price",
        #     "Trade - Volume"
        # ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {"Identifier": "ESU7", "IdentifierType": "Ric"},
            ],
            "UseUserPreferencesForValidationOptions": "false"
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2017-06-28T00:00:00.000Z",
            "QueryEndDate": "2017-06-29T00:00:00.000Z",
            "SummaryInterval": "FiveSeconds",
            "TimebarPersistence": "false",
            "DisplaySourceRIC": "true"
        }
    }
}

r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders, proxies=proxies)
# display the response status, and the location URL to use to poll the status of the extraction request
# the initial response status (after approximately 30 seconds wait) will be 202
print(r2.status_code)
print(r2.headers["location"])

# In[6]:
# Step 3: poll the status of the request using the received location URL, then get the jobId and extraction notes
requestUrl = r2.headers["location"]
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}
while True:
    r3 = requests.get(requestUrl, headers=requestHeaders, proxies=proxies)
    if r3.status_code == 200:
        break
    else:
        print('Not ready yet... re-requesting in 30 secs...')
        time.sleep(30)

# when the status of the request is 200 the extraction is complete; display the jobId and the extraction notes
print('response status = ' + str(r3.status_code))
if r3.status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print('jobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print('Extraction notes:\n' + notes[0])
else:
    print('execute the cell again, until it returns a response status of 200')

# In[7]:
# Step 4: get the extraction results, using the received jobId
requestUrl = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults" + "('" + jobId + "')" + "/$value"
requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "text/plain",
    "Accept-Encoding": "gzip",
    "Authorization": "token " + token
}
r4 = requests.get(requestUrl, headers=requestHeaders, proxies=proxies)
# print(r4.text)

# In[8]:
# Step 5 (cosmetic): format the response using a pandas DataFrame
from io import StringIO
import pandas as pd

timeSeries = pd.read_csv(StringIO(r4.text))
timeSeries
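As a side note on the "is there any way to quicken the process" part of the question: one workaround (my suggestion, not something from the original script) is to split a long query window into smaller sub-ranges, so each on-demand extraction has fewer ticks to summarize and several requests can be submitted in parallel. A minimal, hypothetical sketch of chunking the date range:

```python
# Hypothetical helper, not part of the poster's script: split a long
# QueryStartDate/QueryEndDate window into smaller sub-ranges.
from datetime import datetime, timedelta

def split_range(start, end, days=7):
    """Yield (QueryStartDate, QueryEndDate) pairs covering [start, end)."""
    fmt = "%Y-%m-%dT%H:%M:%S.000Z"
    s = datetime.strptime(start, fmt)
    e = datetime.strptime(end, fmt)
    step = timedelta(days=days)
    while s < e:
        nxt = min(s + step, e)
        yield s.strftime(fmt), nxt.strftime(fmt)
        s = nxt

# e.g. split a one-month window into weekly chunks, each of which could
# be submitted as its own extraction request
chunks = list(split_range("2017-06-01T00:00:00.000Z",
                          "2017-07-01T00:00:00.000Z"))
```

Each pair from `split_range` can be dropped into the `Condition` of a separate extraction request; whether parallel requests actually reduce total wall-clock time depends on server-side queuing, so treat this as an experiment rather than a guarantee.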
Best Answer
-
Here is useful information from the investigation of case 05649911:
This is expected behavior of TRTH, because it can take longer to process a request when the input RIC is extremely liquid.
Tick History has to parse a very large number of ticks to create intraday summaries on the fly, so intraday extractions are expected to take longer. If you run the same extraction as a Time and Sales report, you will receive a faster response, but with a huge number of messages.
One other point of interest: the AWS Direct Download feature enhances only the download speed of the extracted data, not the processing speed. The time to process the data remains the same as before.
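To illustrate the Time and Sales alternative mentioned above, here is a sketch of the request body based on the commented-out variant already present in the poster's script (same RIC, date range, and ExtractRaw endpoint assumed); it trades processing time for a much larger result set:

```python
import json

# Time and Sales request body, reconstructed from the commented-out
# variant in the poster's script; POST it to the same ExtractRaw endpoint.
requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryTimeAndSalesExtractionRequest",
        "ContentFieldNames": [
            "Trade - Price",
            "Trade - Volume"
        ],
        "IdentifierList": {
            "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {"Identifier": "ESU7", "IdentifierType": "Ric"}
            ],
            "UseUserPreferencesForValidationOptions": "false"
        },
        "Condition": {
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2017-06-28T00:00:00.000Z",
            "QueryEndDate": "2017-06-29T00:00:00.000Z",
            "DisplaySourceRIC": "true"
        }
    }
}
payload = json.dumps(requestBody)
```

The polling and download steps are unchanged; only the `@odata.type` and `ContentFieldNames` differ from the intraday summary request.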
Answers
-
Hi team, could someone look into this and provide an update? It has been quite a few days since this query was posted.
-
This issue needs to be investigated. This doesn't appear to be a "How to" question, so we need more information in order to investigate. From an email I received, there is a case number associated with this issue, 05649911. Please provide the notes, or at least the user ID, via case 05649911.
-
You will have to refer to the specific qualifiers that were used for the specific time period. Unfortunately, there is no single consistent trade qualifier for the entire life of the instrument, so you will have to use different sets of trade qualifiers.
Apologies for the inconvenience caused to you.