News headline and story to CSV file
Hi,
I would like to make a CSV file of news headlines and stories.
For the headlines I'm using:
headlines = ek.get_news_headlines('JPY=')
For the stories I'm using:
for index, headline_row in headlines.iterrows():
    story = ek.get_news_story(headline_row['StoryId'])
    print(story)
Then I request df.to_csv('news.csv').
Does anyone know what I have to fix?
Regards
Best Answer
Do you mean adding the Story column in the headlines data frame? If yes, the code is:
headlines = ek.get_news_headlines("R:JPY= IN JAPANESE", count=100, date_from='2018-01-10T13:00:00', date_to='2018-01-10T15:00:00')
stories = pd.DataFrame(columns=['DATE', 'STORY'])
for index, headline_row in headlines.iterrows():
    story = ek.get_news_story(headline_row['storyId'])
    stories = stories.append({'DATE': index, 'STORY': story}, ignore_index=True)
stories = stories.set_index('DATE')
result = pd.concat([headlines, stories], axis=1)
result.to_csv("news.csv")
The result looks like:
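Assuming `ek.get_news_story` returns the story body as a string, the loop above can be sketched with a stand-in for the Eikon client so the CSV step is reproducible offline. The `fake_get_news_story` helper and the toy headlines frame are hypothetical, shaped like typical `get_news_headlines` output:

```python
import pandas as pd

# Hypothetical stand-in for ek.get_news_story: returns the story body.
def fake_get_news_story(story_id):
    return f"<p>Body of {story_id}</p>"

# Toy headlines frame shaped like ek.get_news_headlines output:
# a datetime index plus versionCreated, text, storyId, sourceCode columns.
dates = pd.to_datetime(["2018-01-10 13:05", "2018-01-10 13:20"])
headlines = pd.DataFrame(
    {
        "versionCreated": dates,
        "text": ["JPY slips", "BOJ comments"],
        "storyId": ["urn:newsml:1", "urn:newsml:2"],
        "sourceCode": ["NS:RTRS", "NS:RTRS"],
    },
    index=dates,
)

# Fetch one story per headline, keyed by the headline's date index.
rows = [
    {"DATE": index, "STORY": fake_get_news_story(row["storyId"])}
    for index, row in headlines.iterrows()
]
stories = pd.DataFrame(rows).set_index("DATE")

# Merge on the shared date index, then write everything to CSV.
result = pd.concat([headlines, stories], axis=1)
result.to_csv("news.csv")
print(result.columns.tolist())
```

Building `stories` from a list of dicts (rather than repeated `append` calls) also keeps the sketch working on recent pandas versions, where `DataFrame.append` has been removed.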
Answers
First, change StoryId to lower case storyId in your code when requesting a story:
story = ek.get_news_story(headline_row['storyId'])
Then, I understand that you want to save the stories with their storyId in a CSV file.
If I'm correct, the to_csv function you're using comes from the DataFrame class.
You have to create the DataFrame from a list of stories.
Example:
headlines = ek.get_news_headlines('JPY=')
stories = [(storyId, ek.get_news_story(storyId)) for storyId in headlines['storyId'].tolist()]
df = pd.DataFrame(stories, columns=['storyId', 'story'])
df.to_csv('news.csv', sep=',', index=False)
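Under the same assumption (a hypothetical `fake_get_news_story` standing in for `ek.get_news_story`), the list-comprehension approach builds a two-column frame directly:

```python
import pandas as pd

# Hypothetical stand-in for ek.get_news_story.
def fake_get_news_story(story_id):
    return f"Body of {story_id}"

# In the real code these would come from headlines['storyId'].tolist().
story_ids = ["urn:newsml:1", "urn:newsml:2"]

# One (storyId, story) tuple per headline.
stories = [(story_id, fake_get_news_story(story_id)) for story_id in story_ids]
df = pd.DataFrame(stories, columns=["storyId", "story"])
df.to_csv("news.csv", sep=",", index=False)
print(df.shape)  # two rows, two columns
```

With index=False, the CSV contains only the storyId and story columns, with no row index.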
Thank you for your support.
I have one more question: the number of news items differs between df and result.
My understanding is that result includes df, so I can get a wider range of news using result than with df. Is this correct?
Sorry, I am very new to the Eikon APIs.
Thank you for your kind support.
Regards,
Koji
Could you please explain more about the question or share the code?
If you're comparing results from the following requests:
headlines = ek.get_news_headlines("R:JPY= IN JAPANESE", ...
and
headlines = ek.get_news_headlines('JPY=')
the news parameters are different, so the number of headlines/stories can differ.
I meant that the former answer uses:
result = pd.concat([headlines, stories], axis=1)
result.to_csv("news.csv")
while the latter answer uses:
df = pd.DataFrame(stories, columns=['storyId', 'story'])
df.to_csv('news.csv', sep=',', index=False)
What is the difference between result= and df=?
As mentioned by pierre.faurel, the news parameters are different, so the number of headlines/stories can differ.
result uses headlines from ek.get_news_headlines("R:JPY= IN JAPANESE", count=100, date_from='2018-01-10T13:00:00', date_to='2018-01-10T15:00:00'), while df uses headlines from ek.get_news_headlines('JPY=').
Sorry for the lack of information.
I meant the definitions of result= and df=.
My understanding is that if I want to include more than 2 columns, I should use result=,
and if I want just 2 columns, I should use df=.
Is this correct?
Regards,
Koji
Yes, you are correct.
result in the first sample uses concat to merge two data frames (headlines and stories) on the date, which is the index. The headlines data frame has the following 5 columns: DATE, versionCreated, text, storyId, and sourceCode, while the stories data frame has the following 2 columns: DATE and STORY. After merging, the result data frame has 6 columns, with DATE as the index.
df in the second sample creates a new data frame with two columns: storyId and story.
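The column arithmetic above can be checked with toy frames (the values are illustrative only): concatenating a headlines frame and a one-column stories frame on a shared DATE index yields five data columns plus the index (six fields in total), while the second pattern stays at two columns:

```python
import pandas as pd

dates = pd.to_datetime(["2018-01-10 13:05", "2018-01-10 13:20"])

# Toy headlines frame: DATE index plus the four data columns described above.
headlines = pd.DataFrame(
    {"versionCreated": dates, "text": ["a", "b"],
     "storyId": ["s1", "s2"], "sourceCode": ["NS:RTRS", "NS:RTRS"]},
    index=dates,
)

# Toy stories frame: DATE index plus a single STORY column.
stories = pd.DataFrame({"STORY": ["body a", "body b"]}, index=dates)

# concat with axis=1 aligns the two frames on their shared index,
# so result carries all five data columns with DATE as the index.
result = pd.concat([headlines, stories], axis=1)
print(len(result.columns))

# The second sample's frame has just the two columns it was given.
df = pd.DataFrame([("s1", "body a")], columns=["storyId", "story"])
print(len(df.columns))
```

So the difference is not a rule about column counts: result merges an existing frame with new data on a shared index, while df builds a frame from scratch with exactly the columns you list.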
Thank you very much!
Your answer is very helpful.
Kind regards,
Koji