Manage Data With Pandas Dataframe
Hello, would it be possible for you to create an example in Python that populates a pandas DataFrame with the following data (4 columns, in the same order as below) for a custom list of tickers (say, 10 tickers of your choice) over a given time frame (e.g. the last 2 years), using Datastream (DS) data:
ticker, date in yyyymmdd format, last price, total return (percent change vs the previous period) lagged backward one period.
The script should work with whatever frequency the user selects (daily, weekly, monthly). The DataFrame should be indexed on the tickers, so that the data for all tickers sits in a single DataFrame.
The output (example with daily data) should look like the attached screenshot (Capture.PNG), which shows what I mean by "lagged" one period, and should then continue below with the data for the second ticker, the third, and so on.
I need this as a starting point to understand how to download the data and how to handle the column order, and then to carry out further studies.
Many thanks!
Best Answer
I have created an example that gets the Total Return Index (RI) and Price - Trade (P) data types for the following items:
'@AAPL,@MSFT,@AMZN,@TSLA,@GOOGL'
You can use the Datastream Navigator to search for items and data types.
I used the DatastreamDSWS Python package and its get_data method to retrieve historical data.
import DatastreamDSWS as DSWS
username = "username"
password = "password"
ds = DSWS.Datastream(username = username, password = password)
data = ds.get_data('@AAPL,@MSFT,@AMZN,@TSLA,@GOOGL', ['P','RI'], start='-2Y', end='0D', kind=1, freq='D')
df = data.iloc[: , data.columns.get_level_values(1) == 'P'].melt(col_level=0, ignore_index=False, value_name='P')
tmp_df = data.iloc[: , data.columns.get_level_values(1) == 'RI'].melt(col_level=0, ignore_index=False, value_name='RI')
df["RI"] = tmp_df["RI"]
df.reset_index(inplace=True)
df.Dates = df.Dates.apply(lambda x: x.replace('-',''))
df[["Instrument","Dates","P","RI"]]
The output is:
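Since running the snippet above needs live DSWS credentials, here is a self-contained sketch of the same melt-and-join reshaping on a synthetic frame (the ticker names, dates, and prices are made up for illustration):

```python
import pandas as pd

# Synthetic stand-in for the DSWS result: dates as the index,
# (Instrument, Field) MultiIndex columns.
dates = pd.to_datetime(["2023-01-02", "2023-01-03"])
cols = pd.MultiIndex.from_product(
    [["@AAPL", "@MSFT"], ["P", "RI"]], names=["Instrument", "Field"]
)
data = pd.DataFrame(
    [[130.0, 200.0, 240.0, 300.0],
     [131.0, 201.0, 241.0, 301.0]],
    index=pd.Index(dates, name="Dates"), columns=cols
)

# Keep only the 'P' columns and melt to long format, one row per
# (Instrument, Date); ignore_index=False preserves the date index.
df = data.iloc[:, data.columns.get_level_values(1) == "P"].melt(
    col_level=0, ignore_index=False, value_name="P"
)
tmp = data.iloc[:, data.columns.get_level_values(1) == "RI"].melt(
    col_level=0, ignore_index=False, value_name="RI"
)
# The two melted frames share the same row order, so the RI column
# can be attached directly.
df["RI"] = tmp["RI"]
df.reset_index(inplace=True)
print(df)
```

Each ticker's rows are stacked one after another, which is exactly the "all tickers in one dataframe" layout asked for in the question.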
Answers
Fantastic!
I just amended the code a little so as to also have the shift I need in the return column (last column).
Many thanks!
data = ds.get_data('@AAPL,@MSFT,@AMZN,@TSLA,@GOOGL', ['P','RI'], start='-2Y', end='0D', kind=1, freq='D')
df = data.iloc[: , data.columns.get_level_values(1) == 'P'].melt(col_level=0, ignore_index=False, value_name='P')
tmp_df = data.iloc[: , data.columns.get_level_values(1) == 'RI'].melt(col_level=0, ignore_index=False, value_name='RI')
df["RI"] = tmp_df["RI"]
# per-period total return (%) shifted backward one period; grouping by
# instrument keeps the shift from crossing ticker boundaries in the stacked frame
df["CHG"] = df.groupby("Instrument")["RI"].transform(lambda s: (100 * (s / s.shift(1) - 1)).shift(-1))
df.reset_index(inplace=True)
df.Dates = df.Dates.apply(lambda x: x.replace('-',''))
df = df[["Instrument","Dates","P","RI","CHG"]]
# set instrument as an index
df.set_index('Instrument',inplace=True)
df.head(10)
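One caveat with the stacked layout: a plain `shift` on the whole `RI` column would leak values across the boundary between one ticker's rows and the next, so the lag has to be computed per ticker. A small self-contained sketch (ticker names and RI values are made up):

```python
import pandas as pd

# Toy long-format frame with two tickers stacked, mirroring the
# melted shape used in the answer above.
df = pd.DataFrame({
    "Instrument": ["@AAPL"] * 3 + ["@MSFT"] * 3,
    "RI": [100.0, 110.0, 121.0, 50.0, 55.0, 66.0],
})

# Percent change vs the previous period, then shifted backward one
# period; groupby keeps the calculation inside each ticker's rows.
df["CHG"] = df.groupby("Instrument")["RI"].transform(
    lambda s: (100 * (s / s.shift(1) - 1)).shift(-1)
)
print(df)
```

The last row of each ticker ends up NaN (there is no next period to lag backward from), instead of picking up the first return of the following ticker.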
One last question: how do I adjust the code for tickers like these ones?
LHUSFRN is the price index
LHUSFRN(IN)+100 is the total return index
or
MSUSAML(MSPI) is the price index
MSUSAML(MSRI) is the total return index
I mean, how do I rewrite the part below for those types of tickers?
data = ds.get_data('LHUSFRN,@MSFT,@AMZN,@TSLA,@GOOGL', ['P','RI'], start='-2Y', end='0D', kind=1, freq='D')
These instruments use different data types for the Price Index and Total Return Index values, so I use a dictionary to map the fields.
fields_map = {'P':"PI", "IN+100":"RI","MSPI":"PI","MSRI":"RI"}
The code looks like this:
fields_map = {'P':"PI", "IN+100":"RI","MSPI":"PI","MSRI":"RI"}
data = ds.get_data('LHUSFRN(P),LHUSFRN(IN)+100,MSUSAML(MSPI),MSUSAML(MSRI),@AAPL(P),@AAPL(RI)',
                   start='-2Y', end='0D', kind=1, freq='D')
#Rename the multi-index columns
column_index0 = [x.split('(')[0] for x in data.columns.get_level_values(0)]
column_index1 = [x.split('(')[1].replace(')','') for x in data.columns.get_level_values(0)]
#use the dictionary to rename data types
column_index1_rename = []
for dt in column_index1:
    if dt in fields_map:
        column_index1_rename.append(fields_map[dt])
    else:
        column_index1_rename.append(dt)
data.columns = data.columns.from_tuples(list(zip(column_index0,column_index1_rename)),names=['Instrument', 'Field'])
df = data.iloc[: , data.columns.get_level_values(1) == 'PI'].melt(col_level=0, ignore_index=False, value_name='PI')
tmp_df = data.iloc[: , data.columns.get_level_values(1) == 'RI'].melt(col_level=0, ignore_index=False, value_name='RI')
df["RI"] = tmp_df["RI"]
df.reset_index(inplace=True)
df.Dates = df.Dates.apply(lambda x: x.replace('-',''))
df[["Instrument","Dates","PI","RI"]]
The output is:
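The split-and-map renaming step can be tried offline on a synthetic frame; the column names below mimic the expression-style items, while the values and the second column level are made up:

```python
import pandas as pd

# Synthetic stand-in: DSWS returns expression-style names such as
# 'LHUSFRN(IN)+100' in level 0 of the column MultiIndex.
raw_cols = pd.MultiIndex.from_tuples(
    [("LHUSFRN(P)", "Value"), ("LHUSFRN(IN)+100", "Value"),
     ("MSUSAML(MSPI)", "Value"), ("MSUSAML(MSRI)", "Value")]
)
data = pd.DataFrame([[1.0, 2.0, 3.0, 4.0]], columns=raw_cols)

# Map each instrument-specific data type onto the common PI/RI names.
fields_map = {"P": "PI", "IN+100": "RI", "MSPI": "PI", "MSRI": "RI"}

# Split 'ITEM(DTYPE)...' into instrument and data type: everything
# before '(' is the instrument, the rest (with ')' dropped) the type.
instruments = [c.split("(")[0] for c in data.columns.get_level_values(0)]
dtypes = [c.split("(")[1].replace(")", "") for c in data.columns.get_level_values(0)]
dtypes = [fields_map.get(dt, dt) for dt in dtypes]
data.columns = pd.MultiIndex.from_tuples(
    list(zip(instruments, dtypes)), names=["Instrument", "Field"]
)
print(data.columns.tolist())
```

After the rename, every instrument exposes the same `PI`/`RI` field names, so the melt code from the earlier answer works unchanged.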
Very kind of you!
One more question: what if I need to retrieve the same kind of output for a couple of formulas instead? Like the following (even in a separate script, if it cannot be integrated into the previous one):
(0.6*REBE#(S&PCOMP(RI)))+(0.4*(REBE#(LHAGGBD(IN)+100)))
(REBE#(CPRD#(1+((PCH#(MSACWF$(RI),1M)*0.006)+((PCH#(LHAGGBD(IN)+100,1M))*0.004)))))-100
I mean, do you think it is possible to pull data from a formula as well, or do I need to convert it into Python code?
Thanks!
Those formulas return the following data.
ds.get_data('(0.6*REBE#(S&PCOMP(RI)))+(0.4*(REBE#(LHAGGBD(IN)+100))),(REBE#(CPRD#(1+((PCH#(MSACWF$(RI),1M)*0.006)+((PCH#(LHAGGBD(IN)+100,1M))*0.004)))))-100', start='-2Y', end='0D', kind=1, freq='D')
What output would you like to see?