TickHistoryMarketDepthExtractionRequest
I requested market depth data with the code below. Although there is no error message, the resulting file is empty. I tried different time ranges, but no luck.
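For context, the token used in step 2 below comes from an authentication step similar to this minimal sketch (the DSS_USERNAME and DSS_PASSWORD values are placeholders, not from the original post):

#Step 1 (sketch, for context): request an authentication token from the DSS REST API.
#DSS_USERNAME and DSS_PASSWORD are placeholders for your own credentials.
import requests
authUrl = 'https://selectapi.datascope.refinitiv.com/RestApi/v1/Authentication/RequestToken'
authHeaders = {"Prefer": "respond-async", "Content-Type": "application/json"}
authBody = {"Credentials": {"Username": "DSS_USERNAME", "Password": "DSS_PASSWORD"}}
r1 = requests.post(authUrl, json=authBody, headers=authHeaders)
token = r1.json()["value"]  #the token is returned in the "value" field of the JSON response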
#Imports needed by the code below; in the original notebook these, together with
#the variables token, useAws, filePath and fileNameRoot, are defined in earlier cells:
import requests, time, json, gzip, shutil

#Step 2: send an on demand extraction request using the received token
requestUrl = 'https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/ExtractRaw'

requestHeaders = {
    "Prefer": "respond-async",
    "Content-Type": "application/json",
    "Authorization": "token " + token
}

requestBody = {
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        "ContentFieldNames": [
            "Ask Price",
            "Ask Size",
            "Bid Price",
            "Bid Size",
            "Domain",
            "History End",
            "History Start",
            "Instrument ID",
            "Instrument ID Type",
            "Number of Buyers",
            "Number of Sellers",
            "Sample Data"
        ],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {
                    "Identifier": "6501.T",
                    "IdentifierType": "Ric"
                }
            ]
        },
        "Condition": {
            "View": "NormalizedLL2",
            "NumberOfLevels": 10,
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2022-06-28T05:00:00.000Z",
            "QueryEndDate": "2022-06-28T05:35:00.000Z",
            "DisplaySourceRIC": True
        }
    }
}

r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders)

#Display the HTTP status of the response.
#The initial response status (after a wait of approximately 30 seconds) is usually 202:
status_code = r2.status_code
print ("HTTP status of the response: " + str(status_code))
#Step 3: if required, poll the status of the request using the received location URL.
#Once the request has completed, retrieve the jobId and extraction notes.

#If the status is 202, display the location URL we received, which we will use to poll the status of the extraction request:
if status_code == 202:
    requestUrl = r2.headers["location"]
    print ('Extraction is not complete, we shall poll the location URL:')
    print (str(requestUrl))
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "application/json",
        "Authorization": "token " + token
    }

#In case the extraction completed immediately (initial status 200), the result is already in r2:
r3 = r2

#As long as the status of the request is 202, the extraction is not finished;
#we must wait, and poll the status until it is no longer 202:
while status_code == 202:
    print ('As we received a 202, we wait 30 seconds, then poll again (until we receive a 200)')
    time.sleep(30)
    r3 = requests.get(requestUrl, headers=requestHeaders)
    status_code = r3.status_code
    print ('HTTP status of the response: ' + str(status_code))

#When the status of the request is 200 the extraction is complete;
#we retrieve and display the jobId and the extraction notes (it is recommended to analyse their content):
if status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print ('\njobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print ('Extraction notes:\n' + notes[0])

#If instead of a status 200 we receive a different status, there was an error:
if status_code != 200:
    print ('An error occurred. Try to run this cell again. If it fails, re-run the previous cell.\n')
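As a side note (not part of the original post), the polling loop above has no upper bound; a minimal sketch of the same loop with a retry cap, where maxPolls is an assumed value:

#A minimal sketch (illustration only): cap the number of polls so a stuck
#extraction cannot make the loop run forever. maxPolls is an assumption.
maxPolls = 60  #at 30 seconds per poll, this waits up to 30 minutes
polls = 0
while status_code == 202 and polls < maxPolls:
    time.sleep(30)
    r3 = requests.get(requestUrl, headers=requestHeaders)
    status_code = r3.status_code
    polls += 1
if status_code == 202:
    print ('Extraction still running after ' + str(maxPolls) + ' polls, giving up.')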
#Step 4: get the extraction results, using the received jobId.
#Decompress the data and display it on screen.
#Skip this step if you asked for a large data set, and go directly to step 5!
#We also save the data to disk; but note that if you use AWS it will be saved as a GZIP,
#otherwise it will be saved as a CSV!
#This discrepancy occurs because we allow automatic decompression to happen when retrieving
#from RTH, so we end up saving the decompressed contents.
#IMPORTANT NOTE:
#The code in this step is only a demo, to display some data on screen.
#Avoid using this code in production, it will fail for large data sets!
#See step 5 for production code.
requestUrl = "https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/RawExtractionResults" + "('" + jobId + "')" + "/$value"

#AWS requires an additional header: X-Direct-Download
if useAws:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "X-Direct-Download": "true",
        "Authorization": "token " + token
    }
else:
    requestHeaders = {
        "Prefer": "respond-async",
        "Content-Type": "text/plain",
        "Accept-Encoding": "gzip",
        "Authorization": "token " + token
    }

r4 = requests.get(requestUrl, headers=requestHeaders)

if useAws:
    print ('Content response headers (AWS server): type: ' + r4.headers["Content-Type"] + '\n')
    #AWS does not set the header Content-Encoding="gzip", so the requests call does not decompress.
    #We therefore decompress using a separate call (to the gzip library).
    uncompressedData = gzip.decompress(r4.content).decode("utf-8")
    #We save the original compressed data (to save space):
    fileName = filePath + fileNameRoot + ".step4.csv.gz"
    print ('Saving compressed data to file: ' + fileName + ' ... please be patient')
else:
    print ('Content response headers (TRTH server): type: ' + r4.headers["Content-Type"] + ' - encoding: ' + r4.headers["Content-Encoding"] + '\n')
    #The requests call automatically decompresses the data if the header Content-Encoding="gzip" is set.
    uncompressedData = r4.text
    #We save the uncompressed data (because it was automatically decompressed):
    fileName = filePath + fileNameRoot + ".step4.csv"
    print ('Saving uncompressed data to file: ' + fileName + ' ... please be patient')

#Save to file (the with statement closes the file automatically):
with open(fileName, 'wb') as fd:
    for chunk in r4.iter_content(chunk_size=1024):
        fd.write(chunk)
print ('Finished saving data to file: ' + fileName + '\n')

#Display data:
print ('Decompressed data:\n' + uncompressedData)

#Note: the variable uncompressedData stores all the data.
#This is not good practice, as it can lead to issues with large data sets.
#We only use it here as a convenience for the demo, to keep the code very simple.
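As a side note (not part of the original post), once uncompressedData is in memory it can be loaded into a DataFrame for inspection; a minimal sketch, assuming pandas is installed:

#A minimal sketch (illustration only): parse the decompressed CSV into a pandas
#DataFrame to eyeball the returned market depth fields. Assumes pandas is installed.
import io
import pandas as pd

df = pd.read_csv(io.StringIO(uncompressedData))
print (df.head())  #first few rows: Ask Price, Ask Size, Bid Price, Bid Size, etc.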
#Step 5: get the extraction results, using the received jobId.
#We save the compressed data to disk, as a GZIP, and only display a few lines of it.
#IMPORTANT NOTE:
#This code is much more robust than that of step 4; it should not fail even with large data sets.
#If you need to manipulate the data, read and decompress the file, instead of decompressing
#the data from the server on the fly.
#This is the recommended way to proceed, to avoid data loss issues.
#For more information, see the related document:
#  Advisory: avoid incomplete output - decompress then download
requestUrl = "https://selectapi.datascope.refinitiv.com/RestApi/v1/Extractions/RawExtractionResults" + "('" + jobId + "')" + "/$value"
#AWS requires an additional header: X-Direct-Download
if useAws:
requestHeaders={
"Prefer":"respond-async",
"Content-Type":"text/plain",
"Accept-Encoding":"gzip",
"X-Direct-Download":"true",
"Authorization": "token " + token
}
else:
requestHeaders={
"Prefer":"respond-async",
"Content-Type":"text/plain",
"Accept-Encoding":"gzip",
"Authorization": "token " + token
}
r5 = requests.get(requestUrl, headers=requestHeaders, stream=True)
#Ensure we do not automatically decompress the data on the fly:
r5.raw.decode_content = False

if useAws:
    print ('Content response headers (AWS server): type: ' + r5.headers["Content-Type"] + '\n')
    #AWS does not set the header Content-Encoding="gzip".
else:
    print ('Content response headers (TRTH server): type: ' + r5.headers["Content-Type"] + ' - encoding: ' + r5.headers["Content-Encoding"] + '\n')

#The next 2 lines would display some of the compressed data, but if you uncomment them the save to file fails:
#print ('20 bytes of compressed data:')
#print (r5.raw.read(20))

fileName = filePath + fileNameRoot + ".step5.csv.gz"
print ('Saving compressed data to file: ' + fileName + ' ... please be patient')
chunk_size = 1024
rr = r5.raw
#The with statement closes the file automatically:
with open(fileName, 'wb') as fd:
    shutil.copyfileobj(rr, fd, chunk_size)
print ('Finished saving compressed data to file: ' + fileName + '\n')
#Now let us read and decompress the file we just created.
#For the demo we limit the treatment to a few lines:
maxLines = 10
print ('Read data from file, and decompress at most ' + str(maxLines) + ' lines of it:')
uncompressedData = ""
count = 0
with gzip.open(fileName, 'rb') as fd:
    for line in fd:
        dataLine = line.decode("utf-8")
        #Do something with the data:
        print (dataLine)
        uncompressedData = uncompressedData + dataLine
        count += 1
        if count >= maxLines:
            break
#Note: the variable uncompressedData stores all the data.
#This is not good practice, as it can lead to issues with large data sets.
#We only use it here as a convenience for the next step of the demo, to keep the code very simple.
#In production one would handle the data line by line, as we do with the screen display;
#a minimal sketch of that approach follows below.
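A minimal sketch (not part of the original post) of such line-by-line handling, reading the gzipped CSV created in step 5 with the csv module; the "Bid Price" column name mirrors the requested content fields, but verify it against the actual header row:

#A minimal sketch (illustration only): process the extraction line by line,
#without accumulating the whole data set in memory.
import csv
import gzip

with gzip.open(fileName, 'rt', encoding='utf-8') as fd:
    reader = csv.DictReader(fd)
    for row in reader:
        #Do something with each record, e.g. look at the bid price;
        #the column name is an assumption, check the file's header row:
        bidPrice = row.get("Bid Price")
        if bidPrice:
            pass  #process the tick here instead of storing it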
Best Answer
Are you using this request?
{
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        "ContentFieldNames": [
            "Ask Price",
            "Ask Size",
            "Bid Price",
            "Bid Size",
            "Domain",
            "History End",
            "History Start",
            "Instrument ID",
            "Instrument ID Type",
            "Number of Buyers",
            "Number of Sellers",
            "Sample Data"
        ],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {
                    "Identifier": "6501.T",
                    "IdentifierType": "Ric"
                }
            ]
        },
        "Condition": {
            "View": "NormalizedLL2",
            "NumberOfLevels": 10,
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2022-06-28T05:00:00.000Z",
            "QueryEndDate": "2022-06-28T05:35:00.000Z",
            "DisplaySourceRIC": true
        }
    }
}

If yes, please contact the Refinitiv Tick History support team directly via MyRefinitiv to verify the problem. Please also share the request message and the content of the Notes with the support team.
Answers
I used the same request message in Postman and am able to get the data properly.
{
    "ExtractionRequest": {
        "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        "ContentFieldNames": [
            "Ask Price",
            "Ask Size",
            "Bid Price",
            "Bid Size",
            "Domain",
            "History End",
            "History Start",
            "Instrument ID",
            "Instrument ID Type",
            "Number of Buyers",
            "Number of Sellers",
            "Sample Data"
        ],
        "IdentifierList": {
            "@odata.type": "#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers": [
                {
                    "Identifier": "6501.T",
                    "IdentifierType": "Ric"
                }
            ]
        },
        "Condition": {
            "View": "NormalizedLL2",
            "NumberOfLevels": 10,
            "MessageTimeStampIn": "GmtUtc",
            "ReportDateRangeType": "Range",
            "QueryStartDate": "2022-06-28T05:00:00.000Z",
            "QueryEndDate": "2022-06-28T05:35:00.000Z",
            "DisplaySourceRIC": true
        }
    }
}

The output is: (screenshot of the returned data not reproduced here)
You can check the Notes to verify the status of the extraction.
if status_code == 200:
    r3Json = json.loads(r3.text.encode('ascii', 'ignore'))
    jobId = r3Json["JobId"]
    print ('\njobId: ' + jobId + '\n')
    notes = r3Json["Notes"]
    print ('Extraction notes:\n' + notes[0])

If the data can be extracted properly, you will see this kind of information in the Notes:
"Notes": [
"Extraction Services Version 16.0.43633 (806c08a4ae8f), Built May 9 2022 17:14:12\nUser ID: 9008895\nExtraction ID: 2000000419751957\nCorrelation ID: CiD/9008895/0x0000000000000000/REST API/EXT.2000000419751957\nSchedule: 0x0815a19d64ce08b7 (ID = 0x0000000000000000)\nInput List (1 items): (ID = 0x0815a19d64ce08b7) Created: 07/11/2022 08:21:38 Last Modified: 07/11/2022 08:21:38\nReport Template (12 fields): _OnD_0x0815a19d64ce08b7 (ID = 0x0815a19d64ee08b7) Created: 07/11/2022 08:20:34 Last Modified: 07/11/2022 08:20:34\nSchedule dispatched via message queue (0x0815a19d64ce08b7), Data source identifier (7F546751BCDC4C189DF2BB249641EB13)\nSchedule Time: 07/11/2022 08:20:35\nProcessing started at 07/11/2022 08:20:35\nProcessing completed successfully at 07/11/2022 08:21:39\nExtraction finished at 07/11/2022 07:21:39 UTC, with servers: tm01n03, TRTH (54.285 secs)\nInstrument <RIC,6501.T> expanded to 1 RIC: 6501.T.\nTotal instruments after instrument expansion = 1\n\nQuota Message: INFO: Tick History Cash Quota Count Before Extraction: 49199; Instruments Approved for Extraction: 1; Tick History Cash Quota Count After Extraction: 49199, 9839.8% of Limit; Tick History Cash Quota Limit: 500\nManifest: #RIC,Domain,Start,End,Status,Count\nManifest: 6501.T,Market Price,2022-06-28T04:00:00.081024428Z,2022-06-28T04:34:59.624091019Z,Active,6352\n"
]
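The last two fields of each Manifest data line are the instrument status and the tick count: in the sample above the status is Active with 6352 ticks, while a status of Inactive with a count of 0 means no data was returned for the requested range. A minimal sketch (an illustration, not part of the original answer) that scans the notes for empty results:

#A minimal sketch (illustration only): scan the extraction notes for Manifest
#data lines and flag instruments that returned no ticks.
for line in notes[0].split('\n'):
    #Skip the "Manifest: #RIC,Domain,Start,End,Status,Count" header line:
    if line.startswith('Manifest:') and not line.startswith('Manifest: #'):
        fields = line[len('Manifest:'):].strip().split(',')
        ric, status, count = fields[0], fields[-2], fields[-1]
        if status != 'Active' or count == '0':
            print ('No data returned for ' + ric + ' (status: ' + status + ', count: ' + count + ')')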
@Jirapongse Thank you very much. Below is my notes file. It seems the last two lines indicate an error?
Extraction notes:
Extraction Services Version 16.0.43633 (806c08a4ae8f), Built May 9 2022 17:14:12
User ID: 9006461
Extraction ID: 2000000421158857
Correlation ID: CiD/9006461/0x0000000000000000/REST API/EXT.2000000421158857
Schedule: 0x0816d62fa1ee0af7 (ID = 0x0000000000000000)
Input List (1 items): (ID = 0x0816d62fa1ee0af7) Created: 2022/07/14 13:47:31 Last Modified: 2022/07/14 13:47:31
Report Template (12 fields): _OnD_0x0816d62fa1ee0af7 (ID = 0x0816d62fa20e0af7) Created: 2022/07/14 13:41:28 Last Modified: 2022/07/14 13:41:28
Schedule dispatched via message queue (0x0816d62fa1ee0af7), Data source identifier (92D9339D81E54FD2B524C4B6EEC5416F)
Schedule Time: 2022/07/14 13:41:29
Processing started at 2022/07/14 13:41:30
Processing completed successfully at 2022/07/14 13:47:32
Extraction finished at 2022/07/14 04:47:32 UTC, with servers: tm03n02, TRTH (55.122 secs)
Instrument <RIC,6501.T> expanded to 1 RIC: 6501.T.
Total instruments after instrument expansion = 1
Manifest: #RIC,Domain,Start,End,Status,Count
Manifest: 6501.T,Market Price,,,Inactive,0
Thank you very much. Will check.