MBO file download
We are looking to download MBO files for different exchanges. Each exchange has its own start and end time, and we need to send requests to the TRTH REST API accordingly. A few questions regarding this:
1. Is there an API for requesting MBO data, as well as other types such as MM and MBP?
2. If we want to download the data for the current system date, what date filter should we provide?
3. Every exchange has its own time at which its data becomes available; how can we account for this in the request?
Best Answer
-
Did you look at the headers of the response when you submitted the request for all the instruments in flname.txt? They should contain a header field called Location, holding a URL. A GET to that URL delivers a response containing a JobId and Notes. Finally, a GET to the endpoint https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('0x05d0c955452b3036')/$value (after replacing the string between single quotes with your own JobId) delivers the data. This workflow is described in detail in REST API Tutorial 3.
I did it for your request and received data. Steps to do this:
Step 1: POST data extraction request.
Body:
{
"ExtractionRequest": {
"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
"ContentFieldNames": [
"Ask Price", "Ask Size", "Bid Price", "Bid Size",
"Domain", "History End", "History Start", "Instrument ID", "Instrument ID Type",
"Number of Buyers", "Number of Sellers", "Sample Data"
],
"IdentifierList": {
"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
"InstrumentIdentifiers": [
{"Identifier": "37H0.SI","IdentifierType": "Ric"},
... etc (using the entire contents of your flname.txt file)
{"Identifier": "ZOBEta.SI","IdentifierType": "Ric"}
]
},
"Condition": {
"View": "RawMarketByOrder",
"MessageTimeStampIn": "GmtUtc",
"ReportDateRangeType": "Range",
"QueryStartDate": "2017-07-25T09:00:00.000Z",
"QueryEndDate": "2017-07-25T12:00:00.000Z",
"DisplaySourceRIC": true
}
}
}
Response has HTTP status 202.
Step 2: Retrieve your Location URL from response header.
Step 3: Do a GET on your location URL
(note: the location URL is different for each request you make, you cannot use mine):
https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRawResult(ExtractionId='0x05d0c955452b3036')
The response should have HTTP status 200. If it is 202, then wait at least 30 seconds, then repeat step 3 until a 200 is received.
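The wait-and-retry loop in step 3 can be scripted. A minimal sketch, assuming bash and curl; poll_location, the token argument, and response.json are illustrative names, not part of the API:

```shell
# poll_location: GET the Location URL until HTTP 200 is returned.
# A 202 means the extraction is still running; anything else is an error.
poll_location() {
  local url="$1" token="$2" status
  while true; do
    status=$(curl -ks -o response.json -w '%{http_code}' \
      -H "Authorization: Token $token" "$url")
    if [ "$status" = "200" ]; then
      return 0            # response.json now holds the JobId and Notes
    elif [ "$status" = "202" ]; then
      sleep 30            # extraction still in progress, wait and retry
    else
      echo "unexpected HTTP status $status" >&2
      return 1
    fi
  done
}
```

curl's -w '%{http_code}' writes only the HTTP status to stdout, so the loop can branch on it while the response body is saved to response.json.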
HTTP status 200 response body:
{
"@odata.context": "https://hosted.datascopeapi.reuters.com/RestApi/v1/$metadata#RawExtractionResults/$entity",
"JobId": "0x05d0c955452b3036",
"Notes": [
"Extraction Services Version 11.1.37014 (36b953b5a32e),
... etc.
{
"Identifier": {
"@odata.type": "#ThomsonReuters.Dss.Api.Content.InstrumentIdentifier",
"Identifier": "ZOBEta.SI",
"IdentifierType": "Ric",
"Source": ""
},
"Message": "Not found"
}
]
}
Note: ~25% of the instruments in your list were not found.
Step 4: Retrieve your JobId from the response
Step 5: Do a GET to the following URL, containing your JobId
GET https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('0x05d0c955452b3036')/$value
(after replacing the string between single quotes with the JobId retrieved in step 3).
Data is delivered (I got more than 600'000 lines):
#RIC,Domain,Date-Time,Type,MsgClass/FID number,UpdateType/Action,FID Name,FID Value,FID Enum String,PE Code,Template Number,Key/Msg Sequence Number,Number of FIDs
AAGH.SI,Market By Order,2017-07-25T09:00:02.974515298Z,Raw,UPDATE,UNSPECIFIED,,,,3051,,9104,
,,,Summary,,,,,,,,,2
,,,FID,4148,,TIMACT_MS,32402834,
,,,FID,6516,,BOOK_STATE,1,N
,,,MapEntry,,ADD,,,,,,6876109329407376546B,6
,,,FID,3426,,ORDER_ID,6876109329407376546,
,,,FID,3427,,ORDER_PRC,0.039,
,,,FID,3428,,ORDER_SIDE,1,BID
,,,FID,3429,,ORDER_SIZE,500000,
,,,FID,3886,,ORDER_TONE,0,
,,,FID,11692,,OR_UDT_RSN,6,
AAGH.SI,Market By Order,2017-07-25T09:00:04.558582508Z,Raw,UPDATE,UNSPECIFIED,,,,3051,,9120, ... etc.
Answers
-
Do you mean the "RawMarketByOrder" view via the TickHistoryMarketDepthExtractionRequest report type?
The result looks like:
0 -
Yes, you are correct. Now, how can we add time criteria to such a request?
0 -
To get RawMarketByOrder, you need to use a POST request with Extractions/ExtractRaw endpoint. Its body contains:
{
"ExtractionRequest": {
"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
"IdentifierList": {
"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
"InstrumentIdentifiers": [{
"Identifier": "CARR.PA",
"IdentifierType": "Ric"
},{
"Identifier": "TSCO.L",
"IdentifierType": "Ric"
}]
},
"Condition": {
"View": "RawMarketByOrder",
"MessageTimeStampIn": "GmtUtc",
"ReportDateRangeType": "Range",
"QueryStartDate": "2017-07-25T09:00:00.000Z",
"QueryEndDate": "2017-07-25T12:00:00.000Z",
"DisplaySourceRIC": true
}
}
}
"InstrumentIdentifiers" is an array, so it can hold multiple items. You can specify the date via the "QueryStartDate" and "QueryEndDate" fields.
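Regarding question 2 in the original post (data for the system date): the date window can be generated at run time and substituted into the Condition block. A sketch, assuming a Unix shell with the standard date utility:

```shell
# Build a Condition date window covering the current system date, in UTC,
# in the ISO-8601 format the QueryStartDate/QueryEndDate fields expect.
query_start="$(date -u +%Y-%m-%dT00:00:00.000Z)"
query_end="$(date -u +%Y-%m-%dT23:59:59.000Z)"
printf '"QueryStartDate": "%s",\n' "$query_start"
printf '"QueryEndDate": "%s",\n' "$query_end"
```

The two variables can then be interpolated into the JSON body before POSTing it.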
0 -
Tried below command
curl -k -X POST -H "Content-Type: application/json" -H "Prefer: respond-async" -H "Authorization: Token xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" https://10.192.6.221/RestApi/v1/Extractions/ExtractRaw -d '{ "ExtractionRequest": {"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest","IdentifierList": {"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList","InstrumentIdentifiers": [{"Identifier": "37H0.SI","IdentifierType": "Ric"},{"Identifier": "6UA0.SI","IdentifierType": "Ric"}]},"Condition":{"View":"RawMarketByOrder","MessageTimeStampIn": "GmtUtc","ReportDateRangeType": "Range","QueryStartDate": "2017-07-25T00:00:00.000Z","QueryEndDate": "2017-07-25T23:59:59.000Z","DisplaySourceRIC": true}}}' -o new.csv
And got below output:
@{"@odata.context":"https://10.192.6.221/RestApi/v1/$metadata#RawExtractionResults/$entity","JobId":"0x05cfbb5e93db2f86","Notes":["All identifiers were invalid. No extraction performed."],"IdentifierValidationErrors":[{"Identifier":{"@odata.type":"#ThomsonReuters.Dss.Api.Content.InstrumentIdentifier","Identifier":"37H0.SI","IdentifierType":"Ric","Source":""},"Message":"Not found"},{"Identifier":{"@odata.type":"#ThomsonReuters.Dss.Api.Content.InstrumentIdentifier","Identifier":"6UA0.SI","IdentifierType":"Ric","Source":""},"Message":"Not found"}]}
Am I missing anything?
0 -
Ayan, your request syntax is ok. It works fine with a different RIC (CARR.PA).
A historical search shows your 2 RICs are valid from 5.5.2015 till now. I also get "Not found" for those 2 RICs, even when extending the date range to start on 5.5.2015.
So I tried requesting a chart for the same RICs in Eikon, and I get "Insufficient Data". It looks like those 2 instruments have not been quoted ...
0 -
curl -k -X POST -H "Content-Type: application/json" -H "Prefer: respond-async" -H "Authorization: Token xxxxxxxxxxxxxxxxxxxxxxxxx" https://10.192.6.221/RestApi/v1/Extractions/ExtractRaw -d '{ "ExtractionRequest": {"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest","IdentifierList": {"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList","InstrumentIdentifiers": [{"Identifier": "37H0.SI","IdentifierType": "Ric"}, {"Identifier": "6UA0.SI","IdentifierType": "Ric"}, {"Identifier": "AAGHbi.SI","IdentifierType": "Ric"}, {"Identifier": "AAGHol.SI","IdentifierType": "Ric"}]},"Condition":{"View":"RawMarketByOrder","MessageTimeStampIn": "GmtUtc","ReportDateRangeType": "Range","QueryStartDate": "2017-07-25T00:00:00.000Z","QueryEndDate": "2017-07-25T23:59:59.000Z","DisplaySourceRIC": true}}}'
Output:
@{"@odata.context":"https://10.192.6.221/RestApi/v1/$metadata#RawExtractionResults/$entity","JobId":"0x05cfd3577beb3036","Notes":["Extraction Services Version 11.1.37014 (36b953b5a32e), Built Jul 5 2017 20:14:02\nUser ID: 9012819\nExtraction ID: 2000000001350293\nSchedule: 0x05cfd3577beb3036 (ID = 0x0000000000000000)\nInput List (2 items): (ID = 0x05cfd3577beb3036) Created: 07/28/2017 12:18:54 Last Modified: 07/28/2017 12:18:54\nReport Template: _OnD_0x05cfd3577beb3036 (ID = 0x05cfd35783bb3036) Created: 07/28/2017 12:18:51 Last Modified: 07/28/2017 12:18:51\nSchedule dispatched via message queue (0x05cfd3577beb3036)\nSchedule Time: 07/28/2017 12:18:54\nProcessing started at 07/28/2017 12:18:54\nProcessing completed successfully at 07/28/2017 12:18:54\nExtraction finished at 07/28/2017 12:18:54 UTC, with servers: x07q13\nInstrument <RIC,AAGHbi.SI> expanded to 0 RICS.\nInstrument <RIC,AAGHol.SI> expanded to 0 RICS.\nReport suppressed because there are no instruments\n"],"IdentifierValidationErrors":[{"Identifier":{"@odata.type":"#ThomsonReuters.Dss.Api.Content.InstrumentIdentifier","Identifier":"37H0.SI","IdentifierType":"Ric","Source":""},"Message":"Not found"},{"Identifier":{"@odata.type":"#ThomsonReuters.Dss.Api.Content.InstrumentIdentifier","Identifier":"6UA0.SI","IdentifierType":"Ric","Source":""},"Message":"Not found"}]}
0 -
Ayan, again your request syntax is ok.
37H0.SI and 6UA0.SI: a historical search now shows these 2 RICs are valid from 5.5.2015 till 29.7.2017. Eikon does not find them any more.
AAGHbi.SI and AAGHol.SI: I can find them in Eikon, but the last data was on 21.7.2017 for AAGHbi.SI, and 22.11.2016 for AAGHol.SI.
There is no market depth because these instruments have extremely low volatility. This is the only EoD data for AAGHbi.SI in July 2017:
AAGHbi.SI,.042,.042,.042,.042,271300,.042,,2017/07/21
And this is the last EoD data for AAGHol.SI:
AAGHol.SI,.01,.01,.01,.01,60,.01,,2016/11/22
0 -
I have tried with all the RICs from the instruments file of the SES exchange for the date 25-Jul-2017. After invoking the command it took around 5 minutes, but it just didn't show any result; it comes back to the command prompt. flname.txt contains all the RICs. Please have a look and suggest.
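As a side note, the InstrumentIdentifiers array can be generated from a plain RIC list such as flname.txt instead of being edited by hand. A sketch, assuming one RIC per line in the file; rics_to_identifiers is an illustrative helper, not part of the API:

```shell
# rics_to_identifiers FILE: turn a one-RIC-per-line file into the
# InstrumentIdentifiers JSON array used in the extraction request body.
rics_to_identifiers() {
  awk 'BEGIN { printf "[" }
       NF { printf "%s{\"Identifier\": \"%s\", \"IdentifierType\": \"Ric\"}", sep, $1; sep = "," }
       END { print "]" }' "$1"
}
```

The output can then be spliced into the request body before POSTing.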
0 -
Hey, I want to capture the output in a .csv.gz file. Will -o filename work?
If yes, is there anything I need to take care of?
For listing the RICs in the command, I have 2 options, below. Please suggest the best way.
1. Create an instrument list, add the RICs to it, and then use that list.
2. Use the RICs directly in the request body?
0 -
After getting the JobId from the 4th step mentioned in Christiaan's answer, you can use the curl command to get the file. For example, if the JobId is 0x05d14ebf30bb2f96, the curl command is:
curl -k -X GET -H "Authorization: Token <token>" -o output.csv.gz https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('0x05d14ebf30bb2f96')/$value
Both methods work.
If the list of RICs and the report template don't change, you can use the first method: create instrument lists and report templates, then use an immediate extraction to retrieve the data for the selected instrument list and report template. You can mix and match instrument lists and report templates for an immediate extraction. For more information, please refer to REST API Tutorial 12: GUI control calls: immediate extract.
However, if the list of RICs, content fields, and conditions (QueryStartDate and QueryEndDate) can be changed dynamically in every request, it is more appropriate to use the second method which is an on-demand extraction.
0 -
curl -k -X POST -H 'Content-Type: application/json' -H 'Prefer: respond-async' -H 'Authorization: Token xxxxxxxxxxxxx' https://10.192.6.221/RestApi/v1/Extractions/ExtractRaw -d ' { ExtractionRequest: { @odata.type: "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest", IdentifierList: { @odata.type: "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",InstrumentIdentifiers: [{"@odata.type": "#ThomsonReuters.Dss.Api.Search.HistoricalSearchResult","Identifier": "AAGHbi.SI","IdentifierType": "Ric"}]},Condition:{View: RawMarketByOrder,MessageTimeStampIn: GmtUtc,ReportDateRangeType: Range,QueryStartDate: 2017-07-30T00:00:00.000Z,QueryEndDate: 2017-07-30T23:59:59.000Z,DisplaySourceRIC: true}}}'
I am getting below error for the above command. Please help.
{"error":{"message":"Malformed request payload: Syntax error at Line 1, Char 5: expected '}' {"}}
Is there any way to find out the error easily?
0 -
Ayan, it is easier to find the issues if you paste the body of your request into Postman.
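Alternatively, the body can be checked locally before sending. A sketch, assuming python3 is available on the machine running curl; validate_json is an illustrative helper:

```shell
# validate_json FILE: print OK if FILE parses as JSON; on failure,
# json.tool prints the line and column of the first syntax error.
validate_json() {
  python3 -m json.tool "$1" > /dev/null && echo "OK: $1 is valid JSON"
}
```

Running this on the body above would have flagged the missing quotes immediately, before the API returned its "Malformed request payload" error.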
There are missing "" around ExtractionRequest, @odata.type, IdentifierList, InstrumentIdentifiers, etc. The following will work:
{
"ExtractionRequest": {
"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
"IdentifierList": {
"@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
"InstrumentIdentifiers": [{
"Identifier": "AAGHbi.SI", "IdentifierType": "Ric"
}]
},
"Condition": {
"View": "RawMarketByOrder",
"MessageTimeStampIn": "GmtUtc",
"ReportDateRangeType": "Range",
"QueryStartDate": "2017-07-30T09:00:00.000Z",
"QueryEndDate": "2017-07-30T23:59:59.000Z",
"DisplaySourceRIC": true
}
}
}
0 -
The above comments helped me a lot, a very big thank you for that.
Now I'm facing a new problem. While downloading the On-Demand data through a shell script, it downloads less data than a manual download from the GUI. Attaching my query; I was downloading the data for 30-Jul-2017 for the SES exchange.
0 -
Could you please share the "AllRICSFile.txt" file you are using? Do you mean that you use the curl command to download the On-Demand data from the following endpoint but receive less data? If not, could you also share the download command?
RestApi/v1/Extractions/RawExtractionResults%28%27<job ID>%27%29/%24value
0 -
I have tried the request message you provided and then created a similar Schedule extraction in the GUI. The InstrumentList was created using a bulk search in the Historical Search. The file sizes are slightly different: 960869 bytes for the GUI and 960589 bytes for On-Demand. This is because the On-Demand extraction request doesn't have "AllowHistoricalInstruments": true, so some historical RICs are not included in the extraction. With that option, the extraction receives the same amount of data. The message is request-message.zip.
If this is not the behavior you found, please elaborate.
0 -
There is a difference of 5KB after extraction of the gz files.
From DataScope:
Zipped size: 942 KB
Unzipped size: 17535 KB
Using the curl command:
Zipped size: 938 KB
Unzipped size: 17530 KB
Also, as you mentioned, the AllowHistoricalInstruments option is not true for which one: the one through the curl command, or the one through DataScope?
0 -
@Ayan
I mean the one through the curl command. You can see my request message, modified to include the AllowHistoricalInstruments option, in the request-message.zip file attached to my previous comment.
0 -
My query:
My output:
* About to connect() to 10.192.6.221 port 443 (#0)
* Trying 10.192.6.221... connected
* Connected to 10.192.6.221 (10.192.6.221) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_RSA_WITH_AES_128_CBC_SHA256
* Server certificate:
* subject: CN=hosted.datascopeapi.reuters.com,O=Thomson Reuters,L=New York,ST=New York,C=US
* start date: Nov 14 00:00:00 2015 GMT
* expire date: Nov 14 23:59:59 2017 GMT
* common name: hosted.datascopeapi.reuters.com
* issuer: CN=Symantec Class 3 Secure Server CA - G4,OU=Symantec Trust Network,O=Symantec Corporation,C=US
> POST /RestApi/v1/Extractions/ExtractRaw HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 10.192.6.221
> Accept: */*
> Content-Type: application/json
> Prefer: respond-async
> Authorization: Token xxxxxxxxxxxxxx
> Content-Length: 3017
> Expect: 100-continue
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Set-Cookie: DSSAPI-COOKIE=R2817038826; path=/
< Cache-Control: no-cache
< Pragma: no-cache
< Content-Type: application/json; charset=utf-8
< Expires: -1
< Server: Microsoft-IIS/7.5
< X-Request-Execution-Correlation-Id: 4b7e2aa5-96b8-43d6-b090-271031c2c114
< X-App-Id: Custom.RestApi
< X-App-Version: 11.1.534.64
< Date: Wed, 09 Aug 2017 10:22:27 GMT
< Content-Length: 1893
* Connection #0 to host 10.192.6.221 left intact
* Closing connection #0
{"@odata.context":"https://10.192.6.221/RestApi/v1/$metadata#RawExtractionResults/$entity","JobId":"0x05d3a9716d3b3026","Notes":["Extraction Services Version 11.1.37014 (36b953b5a32e), Built Jul 5 2017 20:14:02\nUser ID: 9012819\nExtraction ID: 2000000001456448\nSchedule: 0x05d3a9716d3b3026 (ID = 0x0000000000000000)\nInput List (20 items): (ID = 0x05d3a9716d3b3026) Created: 08/09/2017 10:22:26 Last Modified: 08/09/2017 10:22:26\nReport Template: _OnD_0x05d3a9716d3b3026 (ID = 0x05d3a971cacb3026) Created: 08/09/2017 10:22:21 Last Modified: 08/09/2017 10:22:21\nSchedule dispatched via message queue (0x05d3a9716d3b3026)\nSchedule Time: 08/09/2017 10:22:25\nProcessing started at 08/09/2017 10:22:25\nProcessing completed successfully at 08/09/2017 10:22:26\nExtraction finished at 08/09/2017 10:22:26 UTC, with servers: x07q13\nHistorical Instrument <RIC,1CC1200I7> expanded to 0 RICS.\nInstrument <RIC,1CC1200J7> expanded to 0 RICS.\nInstrument <RIC,1CC1200K7> expanded to 0 RICS.\nInstrument <RIC,1CC1200L7> expanded to 0 RICS.\nHistorical Instrument <RIC,1CC1200U7> expanded to 0 RICS.\nInstrument <RIC,1CC1200V7> expanded to 0 RICS.\nInstrument <RIC,1CC1200W7> expanded to 0 RICS.\nInstrument <RIC,1CC1200X7> expanded to 0 RICS.\nInstrument <RIC,1CC1250C8> expanded to 0 RICS.\nInstrument <RIC,1CC1250E8> expanded to 0 RICS.\nInstrument <RIC,1CC1250G8> expanded to 0 RICS.\nHistorical Instrument <RIC,1CC1250I7> expanded to 0 RICS.\nInstrument <RIC,1CC1250J7> expanded to 0 RICS.\nInstrument <RIC,1CC1250K7> expanded to 0 RICS.\nInstrument <RIC,1CC1250L7> expanded to 0 RICS.\nInstrument <RIC,1CC1250O8> expanded to 0 RICS.\nInstrument <RIC,1CC1250Q8> expanded to 0 RICS.\nInstrument <RIC,1CC1250S8> expanded to 0 RICS.\nHistorical Instrument <RIC,1CC1250U7> expanded to 0 RICS.\nInstrument <RIC,1CC1250V7> expanded to 0 RICS.\nReport suppressed because there are no instruments\n"]}
How can I get this output in a .csv.gz file? This is for date 03-08-2017
0 -
Note: the end of this thread is under another query: https://community.developers.refinitiv.com/questions/17163/mbo-file-download-continue.html
0 -
How can I download the MBO file using AWS? I am following the same approach as mentioned above.
0 -
Yes, you can download the MBO file from AWS. Add -H "X-Direct-Download:true" when sending the GET request in step 5. The script should look like:
- Get redirect URL, and then store it in an environment variable
redirect_url="$(curl -Ik -L -H 'Authorization: Token xxxxxxxx' -H 'X-Direct-Download:true' -X GET https://10.192.6.221/RestApi/v1/Extractions/RawExtractionResults('0x05d87c9b9c9b3036')/$value -o /dev/null -w %{url_effective})"
- Directly download a file from the redirect URL
curl -k -X GET $redirect_url -o output.gz
0 -
We are trying to get the data from AWS using the mentioned process, but are getting a connection timeout message, 'HTTP/1.1 408 Request Timeout'. We are downloading in 5K-RIC chunks. Please suggest.
0 -
Tried with 500 RICs and got the below output.
command:
redirect_url="$(curl -Ik -L -H 'Authorization: Token xxxxxxxx' -H 'X-Direct-Download:true' -X GET https://10.192.6.221/RestApi/v1/Extractions/ExtractRawResult(ExtractionId='0x05d87c9b9c9b3036') -o /dev/null -w %{url_effective})"
echo $redirect_url
https://10.192.6.221/RestApi/v1/Extractions/ExtractRawResult(ExtractionId='0x05d87c9b9c9b3036')
This URL looks like a normal URL, not an AWS URL. Please suggest.
0 -
Sorry, you need to use a URL of this form: https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('0x05d0c955452b3036')/$value
redirect_url="$(curl -Ik -L -H 'Authorization: Token xxxxxxxx' -H 'X-Direct-Download:true' -X GET https://10.192.6.221/RestApi/v1/Extractions/RawExtractionResults('0x05d87c9b9c9b3036')/$value -o /dev/null -w %{url_effective})"
0 -
Tried with $value, still the same output.
redirect_url="$(curl -Ik -L -H 'Authorization: Token xxxxxxxxxx' -H 'X-Direct-Download:true' -X GET https://10.192.6.221/RestApi/v1/Extractions/ExtractRawResult(ExtractionId='0x05d88163e15b3016')/$value -o /dev/null -w %{url_effective})"
echo $redirect_url
https://10.192.6.221/RestApi/v1/Extractions/ExtractRawResult(ExtractionId='0x05d88163e15b3016')/$value
0 -
It is RawExtractionResults, not ExtractRawResult.
/RawExtractionResults%28%270x05d865a51a7b2f96%27%29/%24value
0 -
We are able to get the data from AWS, but while getting the AWS link using the above code, we can see that curl also tries to connect to the AWS link on its own. Can you please help us suppress that attempt?
curl -Ik -L -H 'Authorization: Token xxx' -H X-Direct-Download:true -X GET https://10.192.6.221/RestApi/v1/Extractions/RawExtractionResults('0x05d9bb4a822b3016')/$value -o /dev/null -w '%{url_effective}'
curl: (7) couldn't connect to host
0 -
Please remove -L -o -w from the curl command.
curl -Ik -H 'Authorization: Token xxx' -H X-Direct-Download:true -X GET https://10.192.6.221/RestApi/v1/Extractions/RawExtractionResults('0x05d9bb4a822b3016')/$value
The output from the above command is:
HTTP/1.1 302 Found
Set-Cookie: DSSAPI-COOKIE=R3148268809; path=/
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Location: https://s3.amazonaws.com/...
Server: Microsoft-IIS/7.5
X-Request-Execution-Correlation-Id: de4c9cfe-18a9-4abf-a769-c9be547f6028
X-App-Id: Custom.RestApi
X-App-Version: 11.1.558.64
Date: Mon, 28 Aug 2017 08:38:26 GMT
Content-Length: 0
Then, you need to get the AWS URL from the Location header.
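Pulling the Location value out of those headers can be scripted. A sketch; extract_location is an illustrative helper, and the commented curl lines show where it would slot into the step-5 download (TOKEN and JOB_ID are placeholders):

```shell
# extract_location: read raw HTTP response headers on stdin and print
# the value of the Location header, stripping the trailing CR.
extract_location() {
  awk -F': ' 'tolower($1) == "location" { print $2 }' | tr -d '\r'
}

# Example use:
# aws_url="$(curl -ksI -H "Authorization: Token $TOKEN" -H "X-Direct-Download: true" \
#   "https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('$JOB_ID')/\$value" \
#   | extract_location)"
# curl -o output.csv.gz "$aws_url"
```

curl's -I fetches headers only, so the 302 body is never downloaded; the second curl then talks straight to the pre-signed S3 URL, which needs no Authorization header.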
0