Listings question

in API Development

Posted by: Tharin.6358

Q:

Hello! I’m new to development, and I want to make a trading post website similar to gw2bltc.com.
I was looking through the commerce/ API and found that there’s no actual way to tell whether a listing was cancelled by the buying user rather than bought.
What I mean is, there is a JSON array for buys, with element 0 being the top buy order.
I want to calculate how much of an item is being bought per day. One idea was to pull the data every 2-5 minutes and watch if and how the top buy (element 0) changed: if it decreased in price, or the price stayed the same but the quantity decreased, then that difference was bought; vice versa, if it increased in price (or in quantity), it’s a new bid. But what if there is actually an ID for every listing users make, and ANet is just not exposing that functionality to us? With that it would be so much easier to tell whether it was a normal buy or just a cancellation of an order. If people here have any good ideas, feel free to help me, a noobie, progress in this matter.
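
That diffing heuristic can be sketched like this (a rough illustration only; it assumes each snapshot of the top buy order is a (unit_price, quantity) pair, and the names are mine, not from the API):

```python
def classify_change(prev, curr):
    """Compare two snapshots of the top buy order.

    Each snapshot is a (unit_price, quantity) tuple. Returns a
    (label, amount) pair. Note this is only a heuristic: with the
    public data alone, a cancellation is indistinguishable from a
    purchase, hence the combined label below.
    """
    prev_price, prev_qty = prev
    curr_price, curr_qty = curr
    if curr_price > prev_price:
        # A higher-priced order now sits on top: a new bid.
        return ("new_bid", curr_qty)
    if curr_price == prev_price:
        if curr_qty < prev_qty:
            # Same price, less quantity: part of the order is gone.
            return ("filled_or_cancelled", prev_qty - curr_qty)
        if curr_qty > prev_qty:
            # Same price, more quantity: someone matched the top bid.
            return ("new_bid", curr_qty - prev_qty)
        return ("no_change", 0)
    # Price dropped: the old top order disappeared entirely.
    return ("filled_or_cancelled", prev_qty)
```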

Oh! And is there a limit to how fast I can make another GET request?

Posted by: Lawton Campbell

Web Programmer

A:

But what if there is actually an ID for every listing users make, and ANet is just not exposing that functionality to us? With that it would be so much easier to tell whether it was a normal buy or just a cancellation of an order.

There is an id associated with every listing (the ids are returned from /v2/commerce/transactions), but we’re never going to return that from /v2/commerce/listings. The basic rundown is that those transaction ids are only stored in the actual database — which for performance reasons the API doesn’t have access to. Instead, the API talks to some servers which have cached aggregates for the listings (basically, what’s displayed on the UI in-game).

Adding in the ids would require piping them through a small handful of backend services and would probably have some serious performance implications, so it’s never going to happen.

Oh! And is there a limit to how fast I can make another GET request?

Not currently, but there’s some caching going on. Check the Expires/Cache-Control response headers to see how often the API’s data is updated. My rough rule of thumb is to keep below 100k requests/day; we haven’t had issues so far but reserve the right to add in a rate limit if there’s ever a problem.
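
A poller can derive its interval from those standard HTTP caching headers instead of hard-coding one. Here is a small sketch; the 300-second fallback is my own conservative guess, not an official figure:

```python
import re

def polling_interval(headers, default=300):
    """Pick a polling interval in seconds from HTTP caching headers.

    Prefers Cache-Control's max-age directive; falls back to a
    default when it is absent. `headers` is any dict-like mapping of
    response header names to values.
    """
    cache_control = headers.get("Cache-Control", "")
    match = re.search(r"max-age=(\d+)", cache_control)
    if match:
        # Never poll faster than once per second, whatever the header says.
        return max(int(match.group(1)), 1)
    return default
```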

Posted by: Tharin.6358

But what if there is actually an ID for every listing users make, and ANet is just not exposing that functionality to us? With that it would be so much easier to tell whether it was a normal buy or just a cancellation of an order.

There is an id associated with every listing (the ids are returned from /v2/commerce/transactions), but we’re never going to return that from /v2/commerce/listings. The basic rundown is that those transaction ids are only stored in the actual database — which for performance reasons the API doesn’t have access to. Instead, the API talks to some servers which have cached aggregates for the listings (basically, what’s displayed on the UI in-game).

Adding in the ids would require piping them through a small handful of backend services and would probably have some serious performance implications, so it’s never going to happen.

Oh! And is there a limit to how fast I can make another GET request?

Not currently, but there’s some caching going on. Check the Expires/Cache-Control response headers to see how often the API’s data is updated. My rough rule of thumb is to keep below 100k requests/day; we haven’t had issues so far but reserve the right to add in a rate limit if there’s ever a problem.

Okay, so if I want to request a lot of ids, I believe I can’t do it with a GET request by simply joining tons of ids with commas? Do you have an option to make a POST request?

Posted by: Lawton Campbell

You totally can make it a GET request with ids separated by commas. Also, the results can be paginated if you don’t want to deal with ids.
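
A sketch of building such comma-separated GET URLs, chunked to the 200-id cap mentioned below (the endpoint URL is the public one; the helper name is mine):

```python
BASE = "https://api.guildwars2.com/v2/commerce/listings"

def chunked_urls(ids, chunk_size=200):
    """Yield request URLs of at most chunk_size comma-separated ids
    each, since the API caps a single bulk request at 200 ids."""
    for i in range(0, len(ids), chunk_size):
        chunk = ids[i:i + chunk_size]
        yield BASE + "?ids=" + ",".join(map(str, chunk))
```

For 450 ids this yields three URLs (200 + 200 + 50), each fetchable with a plain GET.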

Posted by: Demosthenex.7368

You totally can make it a GET request with ids separated by commas. Also, the results can be paginated if you don’t want to deal with ids.

To confirm, that’s limited to 200 ids per result?

I also read somewhere that ids=all was valid, but it doesn’t appear to work for items, recipes, commerce/prices, or commerce/listings?

Is there a better way to bulk download price/listing data (i.e., a prepared cache) somewhere?

Posted by: Lawton Campbell

You totally can make it a GET request with ids separated by commas. Also, the results can be paginated if you don’t want to deal with ids.

To confirm, that’s limited to 200 ids per result?

Correct.

I also read somewhere that ids=all was valid, but it doesn’t appear to work for items, recipes, commerce/prices, or commerce/listings?

Is there a better way to bulk download price/listing data (i.e., a prepared cache) somewhere?

Typically, ids=all is only available on endpoints that have <500 resources.

IMO, the best way to fetch them all is to paginate through the listings using ?page_size=200&page=N, with N starting at 0 and ending at the value of the X-Page-Total response header. This will require 121 requests/scrape.
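
That pagination loop might look like this (a sketch with the HTTP layer abstracted away: `fetch(page, page_size)` is assumed to GET `?page_size=...&page=N` and return the decoded body together with the X-Page-Total response header; error handling and retries are left out):

```python
def fetch_all_pages(fetch, page_size=200):
    """Walk every page of a paginated endpoint.

    `fetch(page, page_size)` must return (items, page_total), where
    page_total comes from the X-Page-Total response header. Pages are
    numbered from 0 up to page_total - 1.
    """
    items = []
    page = 0
    while True:
        batch, page_total = fetch(page, page_size)
        items.extend(batch)
        page += 1
        if page >= page_total:
            break
    return items
```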

Posted by: Demosthenex.7368

IMO, the best way to fetch them all is to paginate through the listings using ?page_size=200&page=N, with N starting at 0 and ending at the value of the X-Page-Total response header. This will require 121 requests/scrape.

I’ll have to look into that. I’ve been using the list of ids returned by the root of each endpoint and building URLs of 200 ids each.

I was asking about bulk download because it seems like most (of my) API usage has nothing to do with requesting single items or 200 at a time. The most common use seems to be downloading everything to a local DB to query, so a bulk option would make sense.

Ever considered making a compressed bulk file available hourly for the most common endpoints?

Posted by: Lawton Campbell

Ever considered making a compressed bulk file available hourly for the most common endpoints?

Our stack makes it pretty difficult to do that; currently the APIs don’t have any sane way to trigger periodic events.

Posted by: Demosthenex.7368

Our stack makes it pretty difficult to do that; currently the APIs don’t have any sane way to trigger periodic events.

I’ve transitioned over to using pagination instead of long lists of IDs. I bet that’s friendlier to the cache layer, and it largely eliminates the need for bulk files.

I saw someone else mention there should be a best practice guide. I totally agree!

Posted by: SlippyCheeze.5483

Ever considered making a compressed bulk file available hourly for the most common endpoints?

Our stack makes it pretty difficult to do that; currently the APIs don’t have any sane way to trigger periodic events.

Don’t tell anyone, because it’s kind of an ugly hack, but this one time I implemented that sort of endpoint in a system with the same limitation. I just let the lucky first request after the hour do all the work of fetch-and-compress, and then saved the result to a cache to serve for the rest of the hour.
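
That lazy-rebuild trick can be sketched as follows (a single-threaded illustration; a real version would need a lock so only one “lucky” request does the work, plus the actual fetch-and-compress step in place of `build`):

```python
import time

def make_hourly_cache(build, clock=time.time):
    """Lazily rebuilt hourly cache: the first request in each
    clock-hour pays the cost of build(); every later request in the
    same hour gets the cached copy."""
    state = {"hour": None, "value": None}

    def get():
        hour = int(clock()) // 3600  # which hour bucket we're in
        if state["hour"] != hour:
            # First request of this hour: do the expensive work once.
            state["value"] = build()
            state["hour"] = hour
        return state["value"]

    return get
```

Injecting `clock` makes the hour-bucket logic easy to test without waiting for real time to pass.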