How to download depth data from TRTH/DataScope using the Python REST API?


























I am trying to download depth data from Tick History Market Depth (Legacy Market Depth) using the REST API, following the code from this post.



As you may know, depth data is quite large, so I would like to save the output by date: instead of one giant file, one file per day that contains the depth data for all stocks in the list. I would also love to split by stock and then by date, but splitting by date is fine for now.
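To make it concrete, here is roughly the per-date split I am after as a post-processing step, sketched under some assumptions: the extracted file is a gzipped CSV named TestOutput.csv.gz with a Date-Time column (which is what the TRTH normalized output appears to have), and the chunk size is arbitrary:

import gzip

import pandas as pd

# Split one large TRTH extraction into one gzipped CSV per calendar date.
# "TestOutput.csv.gz" and the "Date-Time" column are assumptions; adjust
# them to the actual output file and header.
reader = pd.read_csv("TestOutput.csv.gz", compression="gzip", chunksize=500_000)

seen = set()  # dates whose per-day file already has a header row
for chunk in reader:
    # Keep only the date part of the ISO timestamp, e.g. "2018-06-06"
    chunk["date"] = chunk["Date-Time"].astype(str).str[:10]
    for day, rows in chunk.groupby("date"):
        # Mode "at" appends a new gzip member; gzip readers handle
        # concatenated members transparently
        with gzip.open(f"depth_{day}.csv.gz", "at") as f:
            rows.drop(columns="date").to_csv(f, header=day not in seen, index=False)
        seen.add(day)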



How would I go about this? My code is below. I use Python 3.6 with PyCharm, by the way, and I am not very good with Python; I normally use SAS. While the code is long, it has four big parts:



1. A JSON file that specifies the fields to download
2. A RequestNewToken function that gets a token each time
3. A function that extracts the data
4. A main function that runs the two functions above



  1. The JSON file that specifies the fields to download



{
  "ExtractionRequest": {
    "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
    "ContentFieldNames": [
      "Ask Price",
      "Ask Size",
      "Bid Price",
      "Bid Size"
    ],
    "IdentifierList": {
      "@odata.type": "#ThomsonReuters.Dss.Api.Extractions.ExtractionRequests.InstrumentListIdentifierList",
      "InstrumentListId": "0x06698c5d00301db4"
    },
    "Condition": {
      "View": "NormalizedLL2",
      "NumberOfLevels": 5,
      "MessageTimeStampIn": "GmtUtc",
      "ReportDateRangeType": "Range",
      "QueryStartDate": "1996-01-01T00:00:00.000Z",
      "QueryEndDate": "2018-06-06T23:59:59.999Z",
      "DisplaySourceRIC": true
    }
  }
}
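Alternatively, I guess I could get one file per day straight from the server by submitting one extraction per day, rewriting the Condition dates in a loop. A minimal sketch of generating the per-day payloads (the date range and file names here are only illustrative):

from copy import deepcopy
from datetime import date, timedelta
from json import dump, load

# Generate one request payload per day by rewriting the Condition date
# range in the base JSON above. Each payload could then be passed to the
# ExtractRaw function below, producing one result file per day.
with open("TickHistoricalRequest.json") as fh:
    base = load(fh)

day = date(2018, 6, 4)   # illustrative start date
last = date(2018, 6, 6)  # illustrative end date
while day <= last:
    payload = deepcopy(base)
    cond = payload["ExtractionRequest"]["Condition"]
    cond["QueryStartDate"] = day.isoformat() + "T00:00:00.000Z"
    cond["QueryEndDate"] = day.isoformat() + "T23:59:59.999Z"
    with open(f"request_{day.isoformat()}.json", "w") as fh:
        dump(payload, fh, indent=2)
    day += timedelta(days=1)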



2. Here is the code to run and get the data:




import os
from collections import OrderedDict
from getpass import getpass, GetPassWarning
from json import dumps, load, loads
from time import sleep

import pandas as pd
from requests import get, post

_outputFilePath = "./"
_outputFileName = "TestOutput"
_retryInterval = 30  # seconds between status checks in the polling loop
_jsonFileName = "TickHistoricalRequest.json"


def RequestNewToken(username="", password=""):
    _AuthenURL = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Authentication/RequestToken"
    _header = {}
    _header['Prefer'] = 'respond-async'
    _header['Content-Type'] = 'application/json; odata.metadata=minimal'
    _data = {'Credentials': {
        'Password': password,
        'Username': username
    }}

    print("Send login request")
    resp = post(_AuthenURL, json=_data, headers=_header)

    if resp.status_code != 200:
        message = "Authentication Error Status Code: " + str(resp.status_code) \
                  + " Message: " + dumps(loads(resp.text), indent=4)
        raise Exception(message)

    return loads(resp.text)['value']


def ExtractRaw(token, json_payload):
    try:
        _extractRawURL = "https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/ExtractRaw"
        # Set up the request header
        _header = {}
        _header['Prefer'] = 'respond-async'
        _header['Content-Type'] = 'application/json; odata.metadata=minimal'
        _header['Accept-Charset'] = 'UTF-8'
        _header['Authorization'] = 'Token ' + token  # note the space after "Token"

        # Post the HTTP request to the DSS server using the ExtractRaw URL
        resp = post(_extractRawURL, data=None, json=json_payload, headers=_header)

        # Print the status code returned in the HTTP response
        print("Status Code=" + str(resp.status_code))

        # Raise an exception if the returned status is neither 202 (Accepted) nor 200 (OK)
        if resp.status_code != 200 and resp.status_code != 202:
            message = "Error: Status Code:" + str(resp.status_code) + " Message:" + resp.text
            raise Exception(message)

        # Get the location from the header; the URL must be https, so replace the scheme
        _location = str.replace(resp.headers['Location'], "http://", "https://")

        print("Get status from " + str(_location))
        _jobID = ""

        # Polling loop: check the request status every _retryInterval seconds
        while True:
            resp = get(_location, headers=_header)
            _pollstatus = int(resp.status_code)

            if _pollstatus == 200:
                break
            else:
                print("Status:" + str(resp.headers['Status']))
                sleep(_retryInterval)  # wait, then re-request the status until the job completes

        # Get the JobId from the HTTP response
        json_resp = loads(resp.text)
        _jobID = json_resp.get('JobId')
        print("Status is completed, the JobID is " + str(_jobID) + "\n")

        # If the response contains Notes, print them to the console
        if len(json_resp.get('Notes')) > 0:
            print("Notes:\n======================================")
            for var in json_resp.get('Notes'):
                print(var)
            print("======================================\n")

        # The request is complete; get the result by passing the JobId to the RawExtractionResults URL
        _getResultURL = str("https://hosted.datascopeapi.reuters.com/RestApi/v1/Extractions/RawExtractionResults('"
                            + _jobID + "')/$value")
        print("Retrieve result from " + _getResultURL)
        resp = get(_getResultURL, headers=_header, stream=True)

        # Write the output to a file
        outputfilepath = str(_outputFilePath + _outputFileName + str(os.getpid()) + '.csv.gz')
        if resp.status_code == 200:
            with open(outputfilepath, 'wb') as f:
                f.write(resp.raw.read())

        print("Write output to " + outputfilepath + " completed\n\n")
        print("Below is sample data from " + outputfilepath)
        # Read data from the csv.gz and show output from the dataframe's head() and tail()
        df = pd.read_csv(outputfilepath, compression='gzip')
        print(df.head())
        print("....")
        print(df.tail())

    except Exception as ex:
        print("Exception occurs:", ex)

    return


def main():
    try:
        # Request a new token
        print("Login to DSS Server")
        _DSSUsername = input('Enter DSS Username:')
        try:
            _DSSPassword = getpass(prompt='Enter DSS Password:')
            _token = RequestNewToken(_DSSUsername, _DSSPassword)
        except GetPassWarning as e:
            print(e)
            return  # no token was obtained, so there is nothing more to do
        print("Token=" + _token + "\n")

        # Read the HTTP request body from the JSON file, so the request can be changed there instead
        queryString = {}
        with open(_jsonFileName, "r") as filehandle:
            queryString = load(filehandle, object_pairs_hook=OrderedDict)

        # print(queryString)
        ExtractRaw(_token, queryString)

    except Exception as e:
        print(e)


print(__name__)

if __name__ == "__main__":
    main()
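One caveat with the write step above: resp.raw.read() buffers the entire result in memory before writing it, which can be a problem for multi-gigabyte depth files. A streamed variant of just that step, using shutil (this keeps the bytes undecoded, exactly as resp.raw.read() does):

import shutil

# Stream the result to disk in chunks instead of buffering it all in
# memory; resp is the streamed response from the RawExtractionResults GET.
if resp.status_code == 200:
    with open(outputfilepath, 'wb') as f:
        shutil.copyfileobj(resp.raw, f)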









Tags: python, json, python-3.x, api





