Using Tick History V2 REST API with Go Programming Language
Last Update: July 2021
Introduction
LSEG Tick History is an Internet-hosted product on the DataScope Select platform that provides a REST API for unparalleled access to historical high-frequency data across global asset classes dating back to 1996. The legacy SOAP-based API is still available but is scheduled to be sunset, so clients who still use it may need to migrate their applications to the REST API.
This article demonstrates problems and solutions that developers should be aware of when using Tick History V2 On Demand data extraction with the Go programming language. It uses Tick History Market Depth On Demand data extraction as an example to demonstrate the usage and solutions. However, the methods mentioned in this article can be applied to other types of data extraction and other programming languages.
Prerequisite
The following knowledge is required before reading this article.
- How to use On Demand extraction in Tick History. This article doesn't explain the LSEG Tick History V2 REST API On Demand data extraction request in detail. Fortunately, the REST API Tutorial 3: On Demand Data extraction workflow tutorial, available in the Developer Community, thoroughly explains On Demand data extraction.
- Basic knowledge of the Go programming language. This article doesn't cover the installation, settings, or usage of the Go programming language. You can refer to the official Go Programming Language website for this basic knowledge.
Overview
Go is an open source project under a BSD-style license, started at Google in 2007 and developed by a team at Google together with many contributors from the open source community. Its binary distributions are available for Linux, macOS, Windows, and more. Go is a statically typed, compiled language with a simple syntax. It features garbage collection, concurrency, type safety, and a large standard library.
Developers can use the Go programming language to consume Tick History data via the LSEG Tick History V2 REST API. This article lists several problems and solutions which developers may encounter during development. The problems mentioned in this article include:
- Encode and decode JSON objects
- Encode enumeration
- Concurrently download a gzip file
- Download a gzip file from Amazon Web Services
Encode and Decode JSON Objects
The LSEG Tick History V2 REST API requires JSON (JavaScript Object Notation) in request and response messages. JSON is a lightweight data-interchange format. It is easy for humans to read and write and easy for machines to parse and generate. In the Go programming language, there are several ways to encode and decode JSON objects.
Using a String to Encode and Decode JSON Object
JSON is a text format, so the application can directly construct a JSON string for the HTTP request by using string manipulation and process a JSON string in the HTTP response by using a string parser or regular expression. This method is quick and easy to implement but it is inefficient and error-prone. Therefore, it is suitable only for proving programming concepts or verifying the correctness of HTTP request and response messages.
Using a Map to Encode and Decode JSON Object
JSON is also key-value pair data, so map[string]interface{} can be used with the json.Marshal and json.Unmarshal functions. These functions are available in the encoding/json package to encode and decode JSON objects.
jsonMap := map[string]interface{}{
    "field1": "value1",
    "field2": 2,
    "a":      "1",
    "b":      2,
}
jsonByte, _ := json.Marshal(jsonMap)
fmt.Println(string(jsonByte))
The above code uses map[string]interface{} to store key-value pair data. Then, it uses the json.Marshal function to encode the map to a JSON byte array. After that, it prints the encoded JSON object.
{"a":"1","b":2,"field1":"value1","field2":2}
To decode a JSON object, json.Unmarshal function can be used.
var jsonMap map[string]interface{}
jsonStr := `{"field1":"value1","field2":2,"a":"1","b":2}`
json.Unmarshal([]byte(jsonStr), &jsonMap)
for k, v := range jsonMap {
    fmt.Printf("%s: %v\n", k, v)
}
The above code defines a JSON object in a string variable. Then, it calls the json.Unmarshal function to decode the JSON object to a map. After that, it prints keys and values in the map.
b: 2
field1: value1
field2: 2
a: 1
The drawback of this method is that the order of fields is not preserved when encoding and decoding, as shown in the previous examples: json.Marshal sorts map keys alphabetically, and map iteration order is undefined. This can be a problem when using an API that requires the order of fields in the HTTP request to be preserved.
As reported in a Developer Community question, the order of fields in a JSON object matters to the LSEG Tick History V2 REST API, especially for the @odata.type field. Therefore, this method may not be suitable for encoding JSON objects for the LSEG Tick History V2 REST API.
Using a Type to Encode and Decode JSON Object
In addition to a map, json.Marshal and json.Unmarshal functions can also be used with user-defined types. Therefore, JSON objects can be defined as types in Go programming language. Then, the types can be used with those functions to encode and decode JSON objects. This method is used by the example in this article.
In the example, the types for JSON objects in the request and response messages are defined as:
type TickHistoryMarketDepthExtractionRequest struct {
    Metadata          string                          `json:"@odata.type" odata:"#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest"`
    ContentFieldNames []string                        `json:",omitempty"`
    IdentifierList    InstrumentIdentifierList        `json:",omitempty"`
    Condition         TickHistoryMarketDepthCondition `json:",omitempty"`
}

type RawExtractionResult struct {
    Metadata                   string `json:"@odata.context,omitempty"`
    JobID                      string `json:"JobId"`
    Notes                      []string
    IdentifierValidationErrors []IdentifierValidationError
}
The first type is used in the HTTP request message to extract Tick History Market Depth data. The second type is for the JSON object in the HTTP response message when the extraction is completed. These types will be encoded and decoded as JSON objects. Each field represents a member of the JSON object by using a type's field name as a JSON object key.
The JSON objects in the HTTP requests and responses of the LSEG Tick History V2 REST API contain an @odata.type field which defines the OData type name.
{
    "ExtractionRequest":{
        "@odata.type":"#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        ...
    }
}
However, @odata.type is an invalid field name in the Go programming language. To solve this issue, the json field tag is used on the Metadata field to customize the field name in the JSON object.
Metadata string `json:"@odata.type,omitempty"`
@odata.type is set as the value of the json field tag. The omitempty option specifies that the field should be omitted from the encoding if it has an empty value, defined as false, 0, a nil pointer, a nil interface value, or an empty array, slice, map, or string.
The value of @odata.type is unique and constant for each request type: it contains the OData type name. For example, the value of the @odata.type field for TickHistoryMarketDepthExtractionRequest is #DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest. It is inconvenient and error-prone if this value must be set by users. Therefore, a custom field tag (odata) is defined for the Metadata field so the user doesn't need to specify its value when using the TickHistoryMarketDepthExtractionRequest type.
type TickHistoryMarketDepthExtractionRequest struct {
    Metadata string `json:"@odata.type" odata:"#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest"`
    ...
}
The value of this odata tag is the OData type name. To use this tag, a custom JSON marshaler is defined for the type. The custom marshaler is used when this type is passed as an argument to the json.Marshal function.
func (r TickHistoryMarketDepthExtractionRequest) MarshalJSON() ([]byte, error) {
    type _TickHistoryMarketDepthExtractionRequest TickHistoryMarketDepthExtractionRequest
    if r.Metadata == "" {
        st := reflect.TypeOf(r)
        field, _ := st.FieldByName("Metadata")
        r.Metadata = field.Tag.Get("odata")
    }
    return json.Marshal(_TickHistoryMarketDepthExtractionRequest(r))
}
This marshaler uses reflection to get the value from the odata tag and sets it back to the Metadata field. It also defines a new type (_TickHistoryMarketDepthExtractionRequest) as an alias for the marshalled type. After setting the value in the Metadata field, it converts the value to the new type and passes it to the json.Marshal function. Thus, the default marshaler of the new type is used to marshal the data. This prevents a recursive call to the same custom marshaler of the TickHistoryMarketDepthExtractionRequest type.
The following code shows how to use this user-defined type and marshaler to encode JSON object.
request := new(rthrest.TickHistoryMarketDepthExtractionRequest)
request.Condition.View = rthrest.ViewOptionsNormalizedLL2Enum
request.Condition.SortBy = rthrest.SortSingleByRicEnum
request.Condition.NumberOfLevels = 10
request.Condition.MessageTimeStampIn = rthrest.TimeOptionsGmtUtcEnum
request.Condition.DisplaySourceRIC = true
request.Condition.ReportDateRangeType = rthrest.ReportDateRangeTypeRangeEnum
startdate := time.Date(2017, 7, 1, 0, 0, 0, 0, time.UTC)
request.Condition.QueryStartDate = &startdate
enddate := time.Date(2017, 8, 23, 0, 0, 0, 0, time.UTC)
request.Condition.QueryEndDate = &enddate
request.ContentFieldNames = []string{
    "Ask Price",
    "Ask Size",
    "Bid Price",
    "Bid Size",
    "Domain",
    "History End",
    "History Start",
    "Instrument ID",
    "Instrument ID Type",
    "Number of Buyers",
    "Number of Sellers",
    "Sample Data",
}
request.IdentifierList.InstrumentIdentifiers = append(request.IdentifierList.InstrumentIdentifiers, rthrest.InstrumentIdentifier{Identifier: "IBM.N", IdentifierType: "Ric"})
request.IdentifierList.ValidationOptions = &rthrest.InstrumentValidationOptions{AllowHistoricalInstruments: true}
req1, _ := json.Marshal(struct {
    ExtractionRequest *rthrest.TickHistoryMarketDepthExtractionRequest
}{
    ExtractionRequest: request,
})
The above code is from the example which shows how to use TickHistoryMarketDepthExtractionRequest type and its marshaler to encode JSON object. The encoded JSON object looks like:
{
    "ExtractionRequest":{
        "@odata.type":"#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
        "ContentFieldNames":[
            "Ask Price",
            "Ask Size",
            "Bid Price",
            "Bid Size",
            "Domain",
            "History End",
            "History Start",
            "Instrument ID",
            "Instrument ID Type",
            "Number of Buyers",
            "Number of Sellers",
            "Sample Data"
        ],
        "IdentifierList":{
            "@odata.type":"#DataScope.Select.Api.Extractions.ExtractionRequests.InstrumentIdentifierList",
            "InstrumentIdentifiers":[
                {
                    "Identifier":"CARR.PA",
                    "IdentifierType":"Ric"
                }
            ],
            "ValidationOptions":{
                "AllowHistoricalInstruments":true
            }
        },
        "Condition":{
            "View":"NormalizedLL2",
            "NumberOfLevels":10,
            "SortBy":"SingleByRic",
            "MessageTimeStampIn":"GmtUtc",
            "ReportDateRangeType":"Range",
            "QueryStartDate":"2017-07-01T00:00:00Z",
            "QueryEndDate":"2017-08-23T00:00:00Z",
            "Preview":"None",
            "ExtractBy":"Ric",
            "DisplaySourceRIC":true
        }
    }
}
The JSON object shows that the Metadata field in TickHistoryMarketDepthExtractionRequest type is encoded as a @odata.type field with the value specified in the odata tag.
"@odata.type":"#DataScope.Select.Api.Extractions.ExtractionRequests.TickHistoryMarketDepthExtractionRequest",
After the extraction request is sent, the HTTP response will return with the JSON object when the extraction is completed. To decode the returned JSON object, json.Unmarshal function is called with RawExtractionResult type as an argument.
extractRawResult := &rthrest.RawExtractionResult{}
err = json.Unmarshal(body, extractRawResult)
The above code decodes the following JSON object to the RawExtractionResult type.
{
    "@odata.context":"https://selectapi.datascope.refinitiv.com/RestApi/v1/$metadata#RawExtractionResults/$entity",
    "JobId":"0x079d449da56cde57",
    "Notes":[
        "Extraction Services Version 15.0.42358 (01a7f7ea050d), Built May 20 2021 18:20:45
        User ID: 9008895
        Extraction ID: 2000000276866722
        Schedule: 0x079d449da56cde57 (ID = 0x0000000000000000)
        Input List (1 items): (ID = 0x079d449da56cde57) Created: 07/01/2021 06:06:31 Last Modified: 07/01/2021 06:06:31
        Report Template (12 fields): _OnD_0x079d449da56cde57 (ID = 0x079d449da58cde57) Created: 07/01/2021 06:03:44 Last Modified: 07/01/2021 06:03:44
        Schedule dispatched via message queue (0x079d449da56cde57), Data source identifier (6EAC532B395249D386B46C3A6B9BA797)
        Schedule Time: 07/01/2021 06:03:46
        Processing started at 07/01/2021 06:03:46
        Processing completed successfully at 07/01/2021 06:06:32
        Extraction finished at 07/01/2021 05:06:32 UTC, with servers: tm03n03, TRTH (136.646 secs)
        Instrument <RIC,CARR.PA> expanded to 1 RIC: CARR.PA.
        Total instruments after instrument expansion = 1
        Quota Message: INFO: Tick History Cash Quota Count Before Extraction: 3190; Instruments Approved for Extraction: 0; Tick History Cash Quota Count After Extraction: 3190, 638% of Limit; Tick History Cash Quota Limit: 500
        Quota Message: ERROR: The RIC 'CARR.PA' in the request would exceed your quota limits. Adjust your input list to continue.
        Quota Message: WARNING: Tick History Cash Quota has been reached or exceeded
        Quota Message: Note: Quota has exceeded, however, it is not being enforced at this time but you can still make your extractions and instruments are still being counted. Please contact your Account Manager for questions.
        Manifest: #RIC,Domain,Start,End,Status,Count
        Manifest: CARR.PA,Market Price,2017-07-03T01:25:00.958951484Z,2017-08-22T15:39:30.257431494Z,Active,8209396"
    ]
}
The value of the @odata.context field in the JSON object is decoded to the Metadata field according to the json field tag defined in the RawExtractionResult type.
type RawExtractionResult struct {
    Metadata string `json:"@odata.context,omitempty"`
In conclusion, using types to encode and decode JSON objects is effective and flexible. Because the extraction request is a static type in the Go programming language, incorrect field names will be caught at compile time. It is also useful with an IDE that supports IntelliSense, such as Visual Studio Code. Moreover, the user-defined types can be reused by other Go LSEG Tick History V2 applications.
Encode enumeration
LSEG Tick History V2 REST API defines enumerations used in JSON objects, such as TickHistoryExtractByMode, TickHistoryMarketDepthViewOptions, and ReportDateRangeType. These enumerations can also be defined in Go programming language and can be used to construct the request message.
type TickHistoryMarketDepthViewOptions int

const (
    ViewOptionsRawMarketByPriceEnum TickHistoryMarketDepthViewOptions = iota
    ViewOptionsRawMarketByOrderEnum
    ViewOptionsRawMarketMakerEnum
    ViewOptionsLegacyLevel2Enum
    ViewOptionsNormalizedLL2Enum
)
The above code defines an enumeration type called TickHistoryMarketDepthViewOptions and all enumeration values of this type are defined as constants.
The following shows how to use this enumeration.
request.Condition.View = rthrest.ViewOptionsNormalizedLL2Enum
Condition.View is TickHistoryMarketDepthViewOptions type and its value is set to ViewOptionsNormalizedLL2Enum.
However, in the JSON object, these enumeration fields are encoded as strings, not integers. To encode an enumeration as a string, an array of strings and custom text marshalers are defined.
var tickHistoryMarketDepthViewOptions = [...]string{
    "RawMarketByPrice",
    "RawMarketByOrder",
    "RawMarketMaker",
    "LegacyLevel2",
    "NormalizedLL2",
}

func (d TickHistoryMarketDepthViewOptions) MarshalText() ([]byte, error) {
    return []byte(tickHistoryMarketDepthViewOptions[d]), nil
}
The above code defines an array of strings called tickHistoryMarketDepthViewOptions which contains a string for each enumeration value. This array is used by the custom text marshaler of the TickHistoryMarketDepthViewOptions type to convert an integer to a string while marshalling. For example, if the application sets a value of TickHistoryMarketDepthViewOptions type to ViewOptionsNormalizedLL2Enum (4), the custom text marshaler will return the "NormalizedLL2" string, which is the string at index 4 in the array, and this string will be used by the JSON marshaler, as shown below.
"Condition":{
    "View":"NormalizedLL2",
    ...
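The reverse direction works the same way: a custom UnmarshalText method converts an enumeration string in a response back to its integer value. The sketch below is self-contained and repeats the array from the example above; the error message wording is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type TickHistoryMarketDepthViewOptions int

var tickHistoryMarketDepthViewOptions = [...]string{
	"RawMarketByPrice",
	"RawMarketByOrder",
	"RawMarketMaker",
	"LegacyLevel2",
	"NormalizedLL2",
}

// UnmarshalText converts an enumeration string back to its integer value.
// The JSON unmarshaler calls it for string values because the pointer
// receiver implements encoding.TextUnmarshaler.
func (d *TickHistoryMarketDepthViewOptions) UnmarshalText(text []byte) error {
	for i, s := range tickHistoryMarketDepthViewOptions {
		if s == string(text) {
			*d = TickHistoryMarketDepthViewOptions(i)
			return nil
		}
	}
	return fmt.Errorf("unknown view option: %s", text)
}

func main() {
	var cond struct {
		View TickHistoryMarketDepthViewOptions
	}
	// Decode a fragment of the Condition object from a response.
	_ = json.Unmarshal([]byte(`{"View":"NormalizedLL2"}`), &cond)
	fmt.Println(cond.View) // 4
}
```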
Concurrently download a gzip file
When the extraction is completed, the file is available on the DSS server for downloading. The result file of an ExtractRaw extraction is in .csv.gz format, and the HTTP response when downloading the result file typically contains Content-Encoding: gzip in the header. With this header, the net/http package in Go decompresses the gzip file and returns the CSV data to the application. However, the returned CSV data from the Go package may be incomplete. Therefore, the application should disable the decompression logic by using the following code.
tr := &http.Transport{
    DisableCompression: true,
}
Depending on the number of instruments or the range of the period specified in the extraction request, the size of the gzip file can be huge. According to the LSEG Tick History V2 REST API User Guide, the download speed is limited to 1 MB/s for each connection. Therefore, downloading a huge gzip file with a single connection can take several hours or more.
To speed up the download, the file can be downloaded concurrently over multiple connections. Each connection downloads a specific part of the file, defined by a byte range (offset) in the HTTP request header.
Range: bytes=0-22168294
The above header requests the first 22168295 bytes of the file. The DSS server supports the Range header, so the status code of the HTTP response from DSS will be 206 Partial Content and its content will contain only the first 22168295 bytes.
HTTP/1.1 206 Partial Content
Content-Length: 22168295
Accept-Ranges: bytes
Content-Disposition: attachment; filename=_OnD_0x079d449da56cde57.csv.gz
Content-Range: bytes 0-22168294/88673183
Content-Type: application/gzip
Date: Thu, 01 Jul 2021 05:06:50 GMT
The response also indicates the content size, starting, and ending offset.
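Issuing a ranged request in Go only requires setting the Range header before sending. The sketch below uses a hypothetical URL and omits the authentication headers that a real DSS request would need:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// rangeHeader builds the value of the Range header for an inclusive byte span.
func rangeHeader(start, end int64) string {
	return fmt.Sprintf("bytes=%d-%d", start, end)
}

// downloadRange downloads the byte span [start, end] of url into w.
func downloadRange(client *http.Client, url string, start, end int64, w io.Writer) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Range", rangeHeader(start, end))
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// A server that honors the Range header answers 206 Partial Content.
	if resp.StatusCode != http.StatusPartialContent {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	_, err = io.Copy(w, resp.Body)
	return err
}

func main() {
	// Hypothetical URL; a real application would use the extracted file's
	// endpoint and attach the required authorization token.
	err := downloadRange(http.DefaultClient, "https://example.com/file.csv.gz", 0, 22168294, io.Discard)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```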
However, in order to download the file concurrently, the starting and ending offsets of each download connection must be calculated from the total size of the result file. There are several ways to get the size of the extracted file. The example in this article uses the Extraction ID, which appears in the Notes field when the job is completed, to get the size of the extracted file.
{
    "@odata.context":"https://selectapi.datascope.refinitiv.com/RestApi/v1/$metadata#RawExtractionResults/$entity",
    "JobId":"0x079d449da56cde57",
    "Notes":[
        "Extraction Services Version 15.0.42358 (01a7f7ea050d), Built May 20 2021 18:20:45
        User ID: 9008895
        Extraction ID: 2000000276866722
        Schedule: 0x079d449da56cde57 (ID = 0x0000000000000000)
        ..."
    ]
}
From the above response, the Extraction ID in the Notes field is 2000000276866722. To get the file description, including the size of the file, the following HTTP GET request is used.
GET /RestApi/v1/Extractions/ReportExtractions('2000000276866722')/FullFile
The response for this request contains the description of the extracted file.
{
    "@odata.context":"https://selectapi.datascope.refinitiv.com/RestApi/v1/$metadata#ExtractedFiles/$entity",
    "ExtractedFileId":"VjF8MHgwNzlkNDA1YWI0NmNkZTI3fA",
    "ReportExtractionId":"2000000276866722",
    "ScheduleId":"0x079d449da56cde57",
    "FileType":"Full",
    "ExtractedFileName":"_OnD_0x079d449da56cde57.csv.gz",
    "LastWriteTimeUtc":"2021-07-01T05:06:32.000Z",
    "ContentsExists":true,
    "Size":88673183
}
The Size field in the response contains the size of the file. Then, the download byte offsets can be calculated for each connection by dividing the file size by the number of connections. For example, if the above file is downloaded concurrently with four connections, the download size for each connection will be 22168295 bytes (88673183 / 4) and the download offsets for the four connections will be:
Connection 1: Range: bytes=0-22168294
Connection 2: Range: bytes=22168295-44336589
Connection 3: Range: bytes=44336590-66504884
Connection 4: Range: bytes=66504885-
The fourth connection starts downloading at offset 66504885 and continues until the end of the file. After all connections finish downloading, the parts must be merged in offset order to reconstruct the complete file.
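The offset arithmetic above can be sketched as a small helper; the byteRange type and splitRanges function are illustrative names, not part of the example's API:

```go
package main

import "fmt"

// byteRange holds the inclusive start and end offsets of one connection's
// download.
type byteRange struct {
	Start, End int64
}

// splitRanges divides a file of the given size into n contiguous ranges,
// one per download connection; the last range absorbs any remainder.
func splitRanges(size int64, n int) []byteRange {
	chunk := size / int64(n)
	ranges := make([]byteRange, n)
	for i := 0; i < n; i++ {
		start := int64(i) * chunk
		end := start + chunk - 1
		if i == n-1 {
			end = size - 1 // last connection downloads until the end of file
		}
		ranges[i] = byteRange{Start: start, End: end}
	}
	return ranges
}

func main() {
	// The 88673183-byte file from the example, split for four connections.
	for i, r := range splitRanges(88673183, 4) {
		fmt.Printf("Connection %d: Range: bytes=%d-%d\n", i+1, r.Start, r.End)
	}
}
```

Each range can then be downloaded in its own goroutine into a temporary part file, and the parts concatenated in offset order with io.Copy once all goroutines finish.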
The following test results compare the download time between a single connection and four connections.
No. | Total download time (seconds) with a single connection | Total download time (seconds) with four concurrent connections
1 | 43.832 | 24.675
2 | 111.683 | 26.654
3 | 63.658 | 29.655
4 | 46.807 | 33.009
5 | 89.659 | 25.013
6 | 66.757 | 20.037
7 | 54.846 | 25.15
8 | 106.874 | 18.664
9 | 56.865 | 19.841
10 | 55.628 | 45.135
Across ten test runs, downloading a file with four concurrent connections was consistently faster than downloading it with a single connection. The test results may vary according to machine and network performance.
Download a gzip file from Amazon Web Services
In addition to downloading extracted files directly from the DSS server, the application can download the files faster by retrieving them directly from the Amazon Web Services (AWS) cloud in which they are hosted. This feature is available for VBD (Venue by Day) data, Tick History Time and Sales, Tick History Market Depth, Tick History Intraday Summaries, and Tick History Raw reports.
To use this feature, the application must include the HTTP header X-Direct-Download: true in the request. If the file is available on AWS, the status code of HTTP response will be 302 Found with the new AWS URL in the Location HTTP header field. The new URL is the pre-signed URL to get data directly from AWS.
HTTP/1.1 302 Found
Beginrequestdate: 2021-07-01
Beginrequesttime: 05:06:48.6651455
Cache-Control: no-cache
Cpuutilization: 9.809805
Date: Thu, 01 Jul 2021 05:06:48 GMT
Expires: -1
Location: https://s3.amazonaws.com/tickhistory.query.production.hdc-results/6xxxx/data/merged/merged-Data.csv.gz?AWSAccessKeyId=xxxx&Expires=1625137609&response-content-disposition=attachment%3B%20filename%3D_OnD_0x079d449da56cde57.csv.gz&Signature=zKNkgZTnv4nWxorriDNNqlIpEYE%3D
Then, the application can use this new AWS URL to download the file.
However, when receiving the HTTP status code 302, the net/http package in Go automatically redirects to the new URL with the same HTTP headers as the previous request, which contain fields specific to the LSEG Tick History V2 REST API. This causes AWS to return a 403 Forbidden status code.
To avoid this issue, the application should disable this redirect by using the following code.
client := &http.Client{
    Transport: tr,
    CheckRedirect: func(req *http.Request, via []*http.Request) error {
        return http.ErrUseLastResponse
    },
}
Then, the application can remove LSEG Tick History V2 headers and optionally add its own HTTP headers in the request. Concurrent downloads mentioned in the previous section can also be used with AWS by specifying Range header in the request.
Go Get and Run the Example
TickHistoryMarketDepthEx.go is implemented to demonstrate the solutions mentioned in this article. It uses the ExtractRaw endpoint to send a TickHistoryMarketDepthExtractionRequest to extract normalized legacy level 2 data of IBM.N from 1 Jul 2017 to 23 Aug 2017. All settings are hard-coded. This example supports the following features:
- Concurrent Downloads
- Download a file from AWS
- Request and response tracing
- Proxy setting
This example depends on the github.com/howeyc/gopass package in order to retrieve the DSS password from the console.
The optional arguments for this example are:
Argument Name | Description | Argument Type (Default Value)
--help | List all valid arguments |
-u | Specify the DSS user name | String ("")
-p | Specify the DSS password | String ("")
-n | Specify the number of concurrent downloads | Integer (1)
-aws | Flag to download from AWS | Boolean (false)
-X | Flag to trace HTTP request and response | Boolean (false)
-proxy | |
To download the example, please run the following command.
go get github.com/LSEG-API-Samples/Article.RTH.Go.REST.rthrest/main
The example can be run with the following command.
go run github.com/LSEG-API-Samples/Article.RTH.Go.REST.rthrest/main/TickHistoryMarketDepthEx.go -aws -n 4
The above command runs the example to download the result file from AWS with four concurrent connections. The output is shown below.
2021/07/01 12:52:27 X-Direct-Download: true
2021/07/01 12:52:27 Number of concurrent download: 4
Enter DSS Username: 9008895
Enter DSS Password: **************
2021/07/01 12:52:39 Step 1: RequestToken
2021/07/01 12:52:41 Step 2: ExtractRaw for TickHistoryMarketDepthExtractionRequest
2021/07/01 12:53:14 Step 3: Checking Status (202) of Extraction (1)
2021/07/01 12:53:48 Step 3: Checking Status (202) of Extraction (2)
2021/07/01 12:54:21 Step 3: Checking Status (202) of Extraction (3)
2021/07/01 12:54:54 Step 3: Checking Status (202) of Extraction (4)
2021/07/01 12:55:22 ExtractionID: "2000000276885314"
2021/07/01 12:55:22 Step 4: Get File information
2021/07/01 12:55:23 File: _OnD_0x079d44a2b11cde57.csv.gz, Size: 88673183
2021/07/01 12:55:23 Step 5: Get AWS URL
2021/07/01 12:55:23 AWS: https://s3.amazonaws.com/tickhistory.query.production.hdc-results/B5056A2B172A4CCBAC2AA5xxxx/data/merged/merged-Data.csv.gz?AWSAccessKeyId=xxxJNJ6M4OJR7xxx&Expires=1625140520&response-content-disposition=attachment%3B%20filename%3D_OnD_0x079d44a2b11cde57.csv.gz&Signature=1vKdzeRxxxxAxVDmEb6L9Io%3D
2021/07/01 12:55:23 Step 6: Concurrent Download: _OnD_0x079d44a2b11cde57.csv.gz, Size: 88673183, Connection: 4
2021/07/01 12:55:23 ConcurrentDownload: _OnD_0x079d44a2b11cde57.csv.gz, conn=4
2021/07/01 12:55:23 Part 1: 0 - 22168294
2021/07/01 12:55:23 Part 2: 22168295 - 44336589
2021/07/01 12:55:23 Part 3: 44336590 - 66504884
2021/07/01 12:55:23 Part 4: 66504885-
2021/07/01 12:55:23 Download File: part4, 66504885, -1
2021/07/01 12:55:23 Download File: part1, 0, 22168294
2021/07/01 12:55:23 Download File: part2, 22168295, 44336589
2021/07/01 12:55:23 Download File: part3, 44336590, 66504884
2021/07/01 12:55:28 part1, Bytes: 10844983/Total: 22168295 (49%)
2021/07/01 12:55:28 part2, Bytes: 9905592/Total: 22168295 (45%)
2021/07/01 12:55:28 part3, Bytes: 6405504/Total: 22168295 (29%)
2021/07/01 12:55:28 part4, Bytes: 7141976/Total: 22168298 (32%)
2021/07/01 12:55:30 part2: Download Completed, Speed: Avg 3608.00 KB/s, Max 8736.00 KB/s
2021/07/01 12:55:31 part1: Download Completed, Speed: Avg 3092.57 KB/s, Max 6128.00 KB/s
2021/07/01 12:55:32 part3: Download Completed, Speed: Avg 2706.00 KB/s, Max 5152.00 KB/s
2021/07/01 12:55:33 Merging Files: _OnD_0x079d44a2b11cde57.csv.gz
2021/07/01 12:55:33 part4: Download Completed, Speed: Avg 2405.33 KB/s, Max 4416.00 KB/s
2021/07/01 12:55:34 Download Time: 10.9761244s