How to identify "truncated" responses?

Hello everybody,

I am trying to retrieve data via the Eikon Data API and Python. Taking a closer look at my results, I noticed the following: from time to time, after a varying number of complete responses, I only receive results for some of my requested fields. I have attached a screenshot to show what I mean. Does this mean that this specific request has been "truncated" at that point (as described in the API documentation)?

In the documentation it says:

"When the limits for datapoints per request are reached, responses are simply truncated and only the available cells/headlines/results are returned ..."

screenshot-2023-08-23-141611.jpg


Kind regards

Best Answer

  • @s2782245 The Eikon Data API get_data call has a limit of 10K datapoints per call - if your request is larger than this, it attempts to deliver the maximum possible and fills the rest with NA. You need to break up your request into smaller chunks and iterate those calls to build larger dataframes (a minimal sketch follows below). Alternatively - and I would suggest you do this - use our Refinitiv Data Libraries to retrieve data; depending on the function, for example get_history, these can do the iteration for you. You can play around with the API in the Codebook app (type CODEBK into the Eikon search bar). Go to the examples folder and look at the Access layer samples for both get_data and get_history. I hope this can help.

    1692965753621.png
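
    For illustration, here is a minimal sketch of the "smaller chunks" approach in R, using the eikonapir wrapper that appears later in this thread. The instrument list, chunk size, and fields are placeholder assumptions, not a definitive recipe:

    #### Minimal chunking sketch (assumes eikonapir's get_data accepts a list of instruments) ####
    library(eikonapir)
    library(dplyr)

    set_proxy_port(9000L)
    set_app_id("YOUR_APP_KEY")    # placeholder - replace with your own app key

    instruments <- c("VOD.L", "IBM.N", "MSFT.O")             # illustrative instrument list
    fields      <- list("TR.CompanyName", "TR.FreeFloatPct")  # illustrative fields

    chunk_size <- 2   # keep each call well below the per-request datapoint limit
    chunks     <- split(instruments, ceiling(seq_along(instruments) / chunk_size))

    # One small request per chunk, then stack the partial dataframes
    result <- bind_rows(lapply(chunks, function(ric_chunk) {
      get_data(as.list(ric_chunk), fields)
    }))

    The same idea applies to iterating over a parameter such as SDate, as in the code posted further down in this thread.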

Answers

  • Hello @s2782245

    Thank you for contacting us. Can you share the code that you are using? Additionally, can you replicate the issue on demand?

  • Hi,

    thank you very much for your responses.

    @wasin.w - yes I can replicate the issue on demand.

    In the code below I reproduce the issue for the stock with the ISIN "CNE100004116". What the code essentially does is take the stock, a list of TR.* fields, and a list of parameters (here an indicator for years, e.g. 0CY, -1CY, -2CY, etc.) and pass them into the get_data function. While the stock and the TR.* fields are held constant, a loop works through the years defined in the parameter list. Please note that I use an R wrapper for the Python library (please let me know if you want to see it in plain Python; the logic is the same).

    Example Code for "CNE100004116":



    #### Required libraries ####
    library(eikonapir)
    library(dplyr)
    library(readr)

    #### Setting up the connection ####
    eikonapir::set_proxy_port(9000L)
    eikonapir::set_app_id('XXXXXX')

    #### Creating the year parameter (0CY, -1CY, ..., -25CY) ####
    year_df <- as.data.frame(0:-25)
    year_df <- year_df %>% rename("yearindikator" = "0:-25")
    year_df <- year_df %>%
      mutate("yearindikator" = paste(year_df$yearindikator, "CY", sep = ""))

    #### Reuters fields to be pulled ####
    investors_reuters_function <- list(
      "TR.InstrumentType",
      "TR.CompanyName",
      "TR.FreeFloatPct",
      "TR.InvestorFullName.investorpermid",
      "TR.InvestorFullName",
      "TR.HoldingsDate",
      "TR.EarliestHoldingsDate",
      "TR.SharesHeld",
      "TR.PctOfSharesOutHeld",
      "TR.SharesHeldValue",
      "TR.InvestorType",
      "TR.InvParentType",
      "TR.InvInvmtOrientation",
      "TR.FilingType",
      "TR.ConsHoldFilingDate",
      "TR.NbrOfInstrHeldByInv",
      "TR.InvAddrCountry",
      "TR.NbrOfInstrBoughtByInv",
      "TR.NbrOfInstrSoldByInv")

    #### Example ISIN for truncated response ####
    example_truncated_stock <- "CNE100004116"

    #### Main loop to get the needed data ####
    for (q in 1:nrow(year_df)) {

      # This loop takes one parameter item from the year_df created earlier
      # and attaches it to the get_data call

      ### Random pause to avoid overloading the API
      tmsleep <- sample(1:6, 1)
      Sys.sleep(tmsleep)

      print(q)

      next_df <-
        get_data(example_truncated_stock,
                 investors_reuters_function,
                 parameters = list("SDate" = year_df[q, 1]))

      ### As empty responses are sometimes received for no obvious reason,
      ### a while loop retries the request up to five times

      counter <- 0
      while (dim(next_df)[1] == 0 & counter < 5) {
        counter <- sum(counter, 1)

        tmsleep <- sample(1:5, 1)
        Sys.sleep(tmsleep)

        print(counter)
        next_df <-
          get_data(example_truncated_stock,
                   investors_reuters_function,
                   parameters = list("SDate" = year_df[q, 1]))
      }

      ### Appending the response for this year to a CSV file

      savingsname <- paste("CNE100004116", ".csv", sep = "")
      savingspfad <- paste("P:/17 ...", savingsname, sep = "")
      write_delim(next_df,
                  savingspfad,
                  delim = ";",
                  append = TRUE,
                  col_names = !file.exists(savingspfad))
    }


    @jason.ramchandani01 Thank you very much for your answer. Is there any possibility to know beforehand whether the response will exceed the 10,000 datapoint limit? Having checked the obtained files, I noticed that the truncated responses start well before the 10,000 datapoint limit is even approached...

    I have also attached an image of the output file for the stock mentioned. As marked in yellow, the responses already start to be truncated after 2536 datapoints (a rough way to count the cells per response is sketched below)...


    screenshot-2023-08-28-111326-2.jpg
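
    For reference, one way to quantify how many cells a response actually contained, using the objects from the loop above. This is a post-hoc diagnostic sketch only; it cannot predict the size of a request in advance:

    # Rough datapoint count for one response: returned rows times requested fields.
    datapoints <- nrow(next_df) * length(investors_reuters_function)
    message("Approximate datapoints in this response: ", datapoints)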

    Kind regards

  • Hello @s2782245

    Currently there's no way for you to check the data usage or be alerted when you approach the data retrieval throttle.
    Daily requests and volume limits are reset at midnight according to the operating system clock on your machine.

    I hope this information helps.

  • @wasin.w Thank you very much for your response. Do you perhaps have an idea why I get truncated responses well before the 10,000 datapoint limit (as indicated in the screenshot above)?


    Kind regards

  • Hello @s2782245

    I am not sure why you get a truncated response. Have you tried breaking your request into smaller chunks yet, as @jason.ramchandani01 suggested?

    I noticed that you are using the eikonapir library. Please be informed that eikonapir is not a Refinitiv product; you may need to contact the library developer directly via its GitHub issues page.