How to pull exchange by day in Tick History v2.0
Can someone please post a description of the JSON request to pull back exchange-by-day files under the new Tick History API? In the old version we simply downloaded the exchange-by-day files from their respective folders (e.g. corporate actions, market data, etc.) at a single FTP location. In the new system it appears you actually have to make API calls to pull back these files. Can you please post an example of what the JSON request should look like to get back different files such as corporate actions, market data, etc.? Also, can you refer me to where in the documentation this service is located?
Best Answer
I think you are looking to retrieve Venue By Day (VBD) files?
If this is correct, please refer to REST API Tutorial 2: Retrieving VBD files.
If it is different, please update the question with more detail.
Thanks
Answers
noah.kauffman, to add to Zoya's reply, on the topic of documentation:
- VBD extractions are also called "Standard extractions".
- VBD relevant calls are documented in the REST API User Guide chapter 11.
- The REST API Reference Tree is the API call reference describing all calls with their input parameters and outputs.
To experiment, you can also run VBD calls and display the corresponding HTTP requests and responses using the C# example application, which has example calls under its "Standard Extractions" section (refer to the screenshot below). The C# example application's installation and usage are described in the Quick Start. Although it is called the C# example application, it displays the raw HTTP requests and responses, so it is useful whatever programming language you use.
Thanks for the response. Any chance you could post a Python file with an example call? That would be super helpful. Otherwise, can you simply explain what the JSON request should look like? I'm running the following request and getting a 400 response:
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/Packages'
requestHeaders={
"Prefer":"respond-async",
"Content-Type":"application/json",
"charset":"utf-8",
"Authorization": "token " + token
}
requestBody= {
"@odata.context": "https://hosted.datascopeapi.reuters.com/RestApi/v1/$metadata#UserPackages",
"value": [
{
"UserPackageId": "0x0460dc1d24a62cb1",
"PackageId": "0x0460dc1d24a62cb1",
"PackageName": "US Insider Trading Model v3",
"SubscriptionId": "0x0400dc1d24a00cb3",
"SubscriptionName": "Insider"
},
{
"UserPackageId": "0x04f21a8d1a559cb1",
"PackageId": "0x04f21a8d1a559cb1",
"PackageName": "CSI - CHINA SECURITIES INDEX COMPANY",
"SubscriptionId": "0x0400dc1d24a00cb4",
"SubscriptionName": "TRTH Venue by Day"
},
{
"UserPackageId": "0x04f9cf0080e59cb1",
"PackageId": "0x04f9cf0080e59cb1",
"PackageName": "ZHC - China Zhengzhou Commodity Exchange",
"SubscriptionId": "0x0400dc1d24a00cb4",
"SubscriptionName": "TRTH Venue by Day"
},
{
"UserPackageId": "0x04f9cf0080a59cb1",
"PackageId": "0x04f9cf0080a59cb1",
"PackageName": "TOJ - Asia Composite",
"SubscriptionId": "0x0400dc1d24a00cb4",
"SubscriptionName": "TRTH Venue by Day"
},
{
"UserPackageId": "0x04f21a8d23f59cb1",
"PackageId": "0x04f21a8d23f59cb1",
"PackageName": "MCE - BME SPANISH EXCHANGE EQUITIES LEVEL 2",
"SubscriptionId": "0x0400dc1d24a00cb4",
"SubscriptionName": "TRTH Venue by Day"
},
{
"UserPackageId": "0x04f21a8d2a359cb1",
"PackageId": "0x04f21a8d2a359cb1",
"PackageName": "SAP - Sapporo Stock Exchange",
"SubscriptionId": "0x0400dc1d24a00cb4",
"SubscriptionName": "TRTH Venue by Day"
},
{
"UserPackageId": "0x04f21a8d20859cb1",
"PackageId": "0x04f21a8d20859cb1",
"PackageName": "JNX - SBI JAPANNEXT PTS Level 2",
"SubscriptionId": "0x0400dc1d24a00cb4",
"SubscriptionName": "TRTH Venue by Day"
},
{
"UserPackageId": "0x04f21a8d2c059cb1",
"PackageId": "0x04f21a8d2c059cb1",
"PackageName": "SHF - Shanghai Futures Exchange",
"SubscriptionId": "0x0400dc1d24a00cb4",
"SubscriptionName": "TRTH Venue by Day"
}
]
}
r2 = requests.post(requestUrl, json=requestBody, headers=requestHeaders)
I do not have a VBD Python script to share, but the tutorial REST API Tutorial 2: Retrieving VBD files contains the complete details, requests, and sample responses for the steps necessary to retrieve VBD files.
In the most basic form, they are:
Get available packages, with PackageIds:
GET https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/Packages
Get specific VBD file list by PackageId:
GET https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackageDeliveryGetUserPackageDeliveriesByPackageId(PackageId='0x04f21a8d28f59cb1')
Get specific VBD file:
GET https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackageDeliveries('0x05a61154de8b3016')/$value
Does this help?
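The three GET steps above can be chained in Python with the requests library. This is only a sketch against the URLs shown above; the `token` argument is assumed to come from a prior authentication call, and the response field names follow the tutorial's samples:

```python
import requests

BASE = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions'

def auth_headers(token):
    # All three calls are plain GETs: no request body, only these headers
    return {"Prefer": "respond-async",
            "Content-Type": "application/json",
            "Authorization": "token " + token}

def deliveries_url(package_id):
    # Step 2: the files (deliveries) available under one PackageId
    return BASE + "/UserPackageDeliveryGetUserPackageDeliveriesByPackageId(PackageId='{}')".format(package_id)

def list_vbd_files(token, package_id):
    # Step 1 would be: requests.get(BASE + '/Packages', headers=auth_headers(token))
    # to discover the PackageIds in the first place.
    r = requests.get(deliveries_url(package_id), headers=auth_headers(token))
    r.raise_for_status()
    # Each entry carries the PackageDeliveryId needed for step 3's
    # .../UserPackageDeliveries('<id>')/$value download call.
    return r.json()['value']
```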
So I took a look at the tutorial, which is how I came to form that JSON request in the first place (it's the example they give in the documentation). However, I receive a 400 response from the request string mentioned in the tutorial. If you cannot provide an example in python would you at least be able to provide a sample JSON request string that we should be using? Thanks much!
So the question is ... if there is no requestBody, how does one make the API call? Is it like so? Because this returns a 401 error.
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/Packages'
r2 = requests.post(requestUrl)
Hi @noah.kauffman,
Just to see how to request and use a token in Python, please refer to the Python example in
I'm able to connect and download reports for tick data on demand. However, that example has three components to a call (a) requestUrl (b) requestHeader and (c) requestBody.
What you have told me is that for this type of call no request body should be sent. However, what I am seeing is that calling as follows:
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/Packages'
requestHeaders={
"Prefer":"respond-async",
"Content-Type":"application/json",
"charset":"utf-8",
"Authorization": "token " + token
}
r2 = requests.post(requestUrl, headers=requestHeaders)
... results in a response code of 400.
And more simply calling with the following ...
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/Packages'
r2 = requests.post(requestUrl)
Leads to a response code of 401.
Could you please specify how this call is supposed to be made without requestBody, because I'm not able to get it to work.
Please see attached code, which generates a 400 response. Can you please let me know what the request call is supposed to look like if it is not supposed to look as I currently have it. Thanks!
Hello @noah.kauffman,
Please note, the request should be a GET, not a POST, for VBD. Please try the following:
requestUrl='https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/Packages'
requestHeaders={
"Prefer":"respond-async",
"Content-Type":"application/json",
"Authorization": "token " + token
}
r2 = requests.get(requestUrl, headers=requestHeaders)
Please let us know if you get back a 200/202 status plus data.
Hello - I'm a little confused as I parse through the JSON result. I want to fetch exchange-by-day files for a subset of exchanges. The tutorial says that, to do so, I need to collect the PackageIds and then make the following call for each one:
https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackageDeliveryGetUserPackageDeliveriesByPackageId(PackageId='p_id')
However, when I look at the package ids for the following exchanges:
exchanges = ['ASQ','NAQ','NMQ','NMS','NSM','NSQ','NYQ','PCQ']
I notice that the packageid is always the same in the JSON. Is this expected?
nevermind - deleting last comment about packageids returning the same value.
I'm receiving response code 400 when pulling for the following exchanges ...
Error 1:
NMS - NASDAQ Stock Market Exchange Large Cap (formerly known as NASDAQ NATIONAL MARKET SYSTEM)
package_id: 0x04f21a8d26859cb1
GET: https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackageDeliveryGetUserPackageDeliveriesByPackageId(PackageId='0x04f21a8d26859cb1')
Error 2:
NSQ - Consolidated Issue Listed on Nasdaq Global Select Market
package_id: 0x04f21a8d27059cb1
GET: https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackageDeliveryGetUserPackageDeliveriesByPackageId(PackageId='0x04f21a8d27059cb1')
Other exchanges are working fine. Any idea why the error occurs with the above exchanges?
Instead of /Packages
try
/UserPackages
to get the list; this command will yield only the packages your user is entitled to. Check whether you are entitled to NASDAQ VBD.
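A quick way to check, sketched in Python (the `token` is a placeholder, and the payload shape follows the sample responses earlier in the thread):

```python
import requests

def package_names(payload):
    # Pull the PackageNames out of a parsed /UserPackages JSON payload
    return [p['PackageName'] for p in payload.get('value', [])]

def entitled_package_names(token):
    # /UserPackages lists only the packages this user is entitled to
    url = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackages'
    headers = {"Prefer": "respond-async",
               "Content-Type": "application/json",
               "Authorization": "token " + token}
    r = requests.get(url, headers=headers)
    r.raise_for_status()
    return package_names(r.json())

# e.g. [n for n in entitled_package_names(token) if 'NASDAQ' in n.upper()]
```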
Hi @noah.kauffman ,
I have just tested with these two ids, and they do not come up for me either. This looks like a missing-content issue, so the best way to investigate it is to open a content-related inquiry via My Account or to call the Thomson Reuters Helpdesk to report what is happening and ask them to investigate.
So I am able to loop through the daily files and see the package ids and filenames. Now I would like to download the daily files themselves. I am using the code below to try to download a selected REF-Data file; however, the zip archive I get back is empty. Can you let me know how I am supposed to download the by-day files:
for r in data:
    date = r['ReleaseDateTime'][0:10]
    n_days_prior = datetime.now().date() - timedelta(days=3)
    dt = datetime.strptime(date, '%Y-%m-%d')
    if dt.date() > n_days_prior:
        ReleaseDateTime = r['ReleaseDateTime']
        UserPackageId = r['UserPackageId']
        Name = r['Name']
        ContentMd5 = r['ContentMd5']
        Frequency = r['Frequency']
        PackageDeliveryId = r['PackageDeliveryId']
        SubscriptionId = r['SubscriptionId']
        FileSizeBytes = r['FileSizeBytes']
        #print('UserPackageId:', UserPackageId)
        #print('PackageDeliveryId:', PackageDeliveryId)
        #print(date)
        if 'REF-Report' in Name:
            print(Name)
            print(FileSizeBytes)
            get_url = "https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackageDeliveries('" + PackageDeliveryId + "')"
            r5 = requests.get(get_url, headers=requestHeaders, stream=True)
            r5.raw.decode_content = False
            filePath = "C:/"
            fileNameRoot = "Python_Test_DailyDownload"
            fileName = filePath + fileNameRoot + ".csv.gz"
            chunk_size = 1024
            rr = r5.raw
            with open(fileName, 'wb') as fd:
                shutil.copyfileobj(rr, fd, chunk_size)
Through the following code I am able to access the text response of a daily file and save it as a CSV. However, is there some method that is more efficient for doing this?
... get_url = "https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions/UserPackageDeliveries('" + PackageDeliveryId + "')/$value"
r5 = requests.get(get_url, headers=requestHeaders, stream=True)
# Ensure we do not automatically decompress the data on the fly:
r5.raw.decode_content = False
f = open('test.csv', 'w')
f.write(r5.text)
f.flush()
f.close()
I notice that the method I previously used for other files returns no data in the .zip file:
i.e. using the following shutil copyfileobj method:
fileName = filePath + fileNameRoot + ".csv.gz"
chunk_size = 1024
rr = r5.raw
with open(fileName, 'wb') as fd:
    shutil.copyfileobj(rr, fd, chunk_size)
Any idea why r5.raw is empty?
noah.kauffman, have you looked at the latest version of our Python sample in the downloads? It was updated last Friday, and (in step 5) it saves some data as a zip file. It is not for VBD, but it might help.
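For the VBD case specifically, here is a minimal streaming-download sketch (the delivery id and `token` are placeholders). Two points matter: the trailing `/$value`, without which the endpoint returns JSON metadata rather than the file, and copying from `r.raw` before anything reads `r.text` or `r.content`, since with `stream=True` that consumes the stream and leaves `r.raw` empty:

```python
import shutil
import requests

BASE = 'https://hosted.datascopeapi.reuters.com/RestApi/v1/StandardExtractions'

def delivery_value_url(delivery_id):
    # The trailing /$value returns the file content rather than JSON metadata
    return BASE + "/UserPackageDeliveries('{}')/$value".format(delivery_id)

def download_delivery(token, delivery_id, out_path):
    headers = {"Prefer": "respond-async",
               "Authorization": "token " + token}
    r = requests.get(delivery_value_url(delivery_id), headers=headers, stream=True)
    r.raise_for_status()
    r.raw.decode_content = False  # keep the server's gzip bytes compressed on disk
    with open(out_path, 'wb') as fd:
        # Copy straight from the raw stream; do not touch r.text or r.content
        # first, or the stream is consumed and r.raw yields nothing.
        shutil.copyfileobj(r.raw, fd, 64 * 1024)
```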
ok - all good now. thanks!
So where do I submit the ticket regarding missing exchanges that I should be credentialed for? Is that via https://tickhistory.thomsonreuters.com/TickHistory/login.jsp or through another site?
The best way to get a suspected content issue looked at and investigated by the appropriate content group is to raise a case via My Account or to call the Thomson Reuters Helpdesk.