Re: Tick History: more than one query start/end date
Hello,
Apologies if this has been asked already. I'm using the REST API via Python to pull 1-minute intraday summary data. The goal of my analysis is to study a set of (~400) historical events going back to 1996. Specifically, I want to construct intraday surprises in a narrow window around these events, for a variety of RICs.
Instead of one big query of intraday data going back to 1996, I thought it would be more efficient to download only data for the dates I'm interested in.
As a beginner, I see two ways of doing so:
1) I use Python to loop over each date where I have an event, effectively sending 400 different on-demand extraction requests.
2) There is some clever way to provide multiple QueryStartDate/QueryEndDate pairs in the JSON body of the on-demand extraction request:
"Condition": {
"MessageTimeStampIn": "GmtUtc",
"ReportDateRangeType": "Range",
"QueryStartDate": QueryStartDate ,
"QueryEndDate": QueryEndDate ,
"SummaryInterval": "OneMinute",
"DisplaySourceRIC":"true"
}
Could anyone advise me whether 2) is possible? If not, would 1) go against best practice? Yet another alternative might be to use the Google Cloud integration of Tick History, although I'm not sure I need it.
Thanks for any help,
Robin
Best Answer
Hello @robin.braun,
Any time you are designing multiple Tick History requests, I would first suggest verifying the Best Practices & Fair Usage Policy for DataScope Select and Tick History, in particular the concurrent request limit and concurrent request processing limits for your request template type.
You may also wish to review the article Tick History Request - Parametrize and Parallelize for helpful tips and its downloadable companion code.
Hope that this information is of help.
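To illustrate the parametrize-and-parallelize pattern, here is a rough sketch that submits one extraction per event window while capping in-flight requests. The `run_extraction` function is a hypothetical stand-in (a real implementation would POST to the DSS REST endpoint and poll for completion), and the concurrency cap should be set to the limit for your template type:

```python
from concurrent.futures import ThreadPoolExecutor

def run_extraction(condition):
    # Hypothetical stand-in for one on-demand extraction request;
    # a real implementation would POST the request to the DSS REST
    # endpoint and poll until the extraction completes.
    return {"condition": condition, "rows": []}

def run_all(conditions, max_concurrent=2):
    """Submit one extraction per event window, never exceeding
    max_concurrent in-flight requests (set this to the concurrent
    request limit documented for your request template type)."""
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        return list(pool.map(run_extraction, conditions))

# Example: two event windows processed with at most two in flight.
results = run_all([{"QueryStartDate": "1996-03-07T23:30:00.000Z"},
                   {"QueryStartDate": "2008-09-15T13:00:00.000Z"}])
```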
Answers
Hello @robin.braun ,
Additionally, please see the Google Big Query Tutorials to gain a better understanding of this approach.
The Google BigQuery approach allows you to process very large quantities of data very fast and to download only the result of the query. However, you use the GCP console to drive the workflow rather than HTTP REST requests, so you should check whether this approach aligns with your organization's integration requirements.