Enable keep-alive and pipelining?
On a side note: an even better solution would be if you could enable ?ids=all for that endpoint.
I guess the servers would then explode…
I’d recommend something like RollingCurl, which speeds up the whole process (a few minutes on a fast connection).
https://github.com/codemasher/gw2-database/blob/master/classes/rollingcurl.class.php
https://github.com/codemasher/gw2-database/blob/master/classes/gw2items.class.php#L186
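The linked RollingCurl class does this in PHP via curl_multi; a rough sketch of the same idea in Python, assuming the /v2/items endpoint and its 200-ids-per-request cap discussed in this thread (the function and variable names here are illustrative, not part of any library):

```python
# Sketch: fetch all items by splitting the id list into batches of up to
# 200 ids and issuing the batch requests in parallel with a bounded pool.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

API = "https://api.guildwars2.com/v2/items"  # endpoint from the thread

def chunk(ids, size=200):
    """Split an id list into request-sized batches (?ids= takes up to 200)."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def fetch_batch(batch):
    """One HTTP request per batch; returns the decoded JSON item list."""
    url = API + "?ids=" + ",".join(map(str, batch))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def fetch_all(ids, workers=10):
    """Run the batches concurrently, RollingCurl-style, then flatten."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(fetch_batch, chunk(ids))
    return [item for batch in results for item in batch]
```

With ~38k ids that is roughly 190 requests, so wall-clock time is dominated by how many run in flight at once rather than by total byte count.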
A few minutes? Maybe I’m misreading something there, but I currently fetch all items in 15 seconds^^
So where’s the problem then? (We’re talking about ~38k items and ~150MB DB size)
I have a very slow connection, so I can only estimate – there have been reports ranging from a couple of seconds to a few minutes. However, it also depends on how many languages you pull from the DB.
Well, it’s not a problem, just a question, since I care about optimization. 150MB? Are we talking about the same thing?
Web Programming Lead
With our current setup keep-alive/pipelining isn’t possible, sorry.
/v2/items?ids=all would be a massive response and probably make our servers very, very unhappy.
OK, I thought so – thanks for the response nonetheless!
I always thought that the problem with ids=all is that it runs synchronously. I don’t experience any server hiccups when I send hundreds of smaller requests (200 ids) in parallel. Shouldn’t synchronous requests for the same data actually be less demanding?
Unless the problem is that you buffer the entire JSON response in memory before you start writing to the underlying socket. I don’t know how difficult it would be to stream the response instead, but that would give you an immediate, noticeable improvement in memory use and time-to-first-byte.