hi kevin,
The API only returns a maximum of 1000 data points at a time, so you'll have to "paginate" through the data to get the whole feed.
When you do a data query, the results are always sorted newest-to-oldest and include X-Pagination-* headers. For example:
X-Pagination-Limit: 1000
X-Pagination-Total: 84548
X-Pagination-Start: 2019-02-11T22:52:18.103+0000
X-Pagination-End: 2019-02-12T16:03:00.694+0000
X-Pagination-Count: 1000
Limit is either the requested limit or 1000, whichever is less;
Total is the total number of data points in the feed (note: this value may be up to 5 minutes behind real time);
Start is the timestamp on the oldest value;
End is the timestamp on the newest value; and
Count is the number of data points in the current response. Whenever Limit and Count are both 1000 and Total is more than 1000, more data is available. You can get the next 1000 data points by passing either the X-Pagination-Start value or the created_at value of the oldest data point in the response as the end_time parameter of your next request to the data API (see the Python sketch below).
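Here's a minimal Python sketch of that loop using the requests library. Treat it as a starting point, not a polished client: the username, key, and feed key are placeholders, and I'm assuming the v2 data endpoint with X-AIO-Key header auth. It also dedupes by data point id in case the boundary point shows up twice:

import time
import requests

AIO_USERNAME = "YOUR_USERNAME"   # placeholder
AIO_KEY = "YOUR_AIO_KEY"         # placeholder
FEED_KEY = "your-feed-key"       # placeholder

URL = f"https://io.adafruit.com/api/v2/{AIO_USERNAME}/feeds/{FEED_KEY}/data"

def fetch_entire_feed():
    points = []
    seen_ids = set()    # guards against a duplicate at the page boundary
    end_time = None     # no end_time on the first request -> newest page
    while True:
        params = {"limit": 1000}
        if end_time is not None:
            params["end_time"] = end_time
        resp = requests.get(URL, params=params, headers={"X-AIO-Key": AIO_KEY})
        resp.raise_for_status()
        for point in resp.json():
            if point["id"] not in seen_ids:
                seen_ids.add(point["id"])
                points.append(point)
        limit = int(resp.headers["X-Pagination-Limit"])
        count = int(resp.headers["X-Pagination-Count"])
        if count < limit:
            break       # short page: we've reached the oldest data
        # oldest timestamp in this page = end_time for the next request
        end_time = resp.headers["X-Pagination-Start"]
        time.sleep(1)   # space out requests; see the rate limit note below
    return points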
I always end up thinking about it visually. On a timeline, the idea looks something like this (the first request has no end_time and returns page 1, the newest data):

oldest <---[ page 3 ][ page 2 ][ page 1 ]---> newest
           ^         ^         ^
           |         |         '-- page 1's X-Pagination-Start = end_time for request 2
           |         '-- page 2's X-Pagination-Start = end_time for request 3
           '-- ...and so on, until Count < Limit
NOTE: long-running, frequently updated feeds could have more than a hundred "pages" of data. If you make requests without a delay in between, you could hit the rate limit. Watch for HTTP 429 error responses.
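If you want to be defensive about that, a small helper along these lines retries on 429 with a doubling delay. This is just a sketch (the retry count and delays are arbitrary, and get_with_backoff is my name for it, not anything official); you could drop it in wherever the sketch above calls requests.get:

import time
import requests

def get_with_backoff(url, retries=5, **kwargs):
    # Retry on HTTP 429 (Too Many Requests), doubling the delay each time.
    delay = 1.0
    for _ in range(retries):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 429:
            return resp
        time.sleep(delay)
        delay *= 2
    resp.raise_for_status()  # still rate limited after all retries; raise
    return resp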
Regarding data storage and feed history: the storage size in this case refers to the per-data-point value size limit. With history on (we preserve every data point), each data point value can be at most 1KB. With history off (we only preserve the most recent data point), each value can be at most 100KB.
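If you're generating values programmatically, a quick pre-flight check can save you a failed write. Two assumptions here that I haven't verified against the docs: the limit counts bytes of the UTF-8 encoded value, and 1KB means 1024 bytes:

def value_fits(value: str, history_on: bool = True) -> bool:
    # 1KB per value with history on, 100KB with history off
    # (assumes 1KB == 1024 bytes and the limit applies to the encoded value)
    limit = 1024 if history_on else 100 * 1024
    return len(value.encode("utf-8")) <= limit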
- adam b.