(edited by smiley.1438)
You may also want to take a look at this: https://github.com/codemasher/gw2-database
Seriously, because the forum sometimes acts up, I've taken to copying the post to my clipboard before submitting (when I remember to), so I can just slap it back into the thread.
This is what i ALWAYS do before i post on any forum – or even better, sometimes i compose the post in my text editor just to avoid all those inconveniences.
no!
i mean: no?
maybe something’s wrong with your setup. Everything ok here.
broken pagination is broken.
https://forum-en.gw2archive.eu/forum/support/forum/Thread-loaded-as-Empty/first#post2149426
https://forum-en.gw2archive.eu/forum/support/forum/No-posts-in-a-thread/first#post2693455
(thread unreadable)
Thank you Pat! <3
And that’s on the slow end – Smiley’s takes less than 30 minutes (IIRC).
It does, in fact. My dl is 384 kbit/s btw.
ohhhh, eff… awesome, thanks!
The forum URLs are currently SEO-optimized, meaning they were made to look friendlier to search engines and in bookmarks (you probably know that already). A rewriting method like the one you described is used to achieve this.
Let me give an example to make it clear for those who don't know it. Take the URL of this thread: https:// forum-en.guildwars2.com /forum/support/forum/Search-feature-not-working/ (spaces inserted purposely to break the link)
That's how we see the URL, but the real path may not be it. Behind the scenes it could be something like https:// forum-en.guildwars2.com /forum?category=support&subcat=forum&thread=Search%20feature%20not%20working
This type of URL is unfriendly to search engines – they frown upon it – which is one of the reasons for the rewriting. No issues with that at all so far.
Seems like you still didn't get the point about URL rewriting – you should really look into it to understand how URLs like this work and why they aren't the problem at all:
http://en.wikipedia.org/wiki/Rewrite_engine
http://httpd.apache.org/docs/2.0/misc/rewriteguide.html
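To illustrate what such a rewrite does, here's a hypothetical sketch in Python (the internal parameter names are the ones from the example above; the real backend is of course not this function):

```python
from urllib.parse import urlencode

def rewrite(pretty_url):
    # Map a "pretty" forum URL back to the internal query-string form
    # the server actually handles. Purely illustrative.
    parts = pretty_url.strip("/").split("/")
    # e.g. ["forum", "support", "forum", "Search-feature-not-working"]
    _, category, subcat, thread = parts
    params = {"category": category, "subcat": subcat,
              "thread": thread.replace("-", " ")}
    return "/forum?" + urlencode(params)

print(rewrite("/forum/support/forum/Search-feature-not-working/"))
# /forum?category=support&subcat=forum&thread=Search+feature+not+working
```

(urlencode encodes spaces as "+" rather than "%20" – same thing as far as the server is concerned.)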
So yeah, someone finally admits they aren't able to fix it. For a start, there's a simple thing that could be done: stop using AJAX and make a simpler search engine. The way the search works now was built around SEO optimizations and such, but it's clearly broken: the file the search points to returns an error and an empty JSON file. Why use an SEO-friendly URL as the form's destination? Use a normal URL without the fancy SEO stuff that's breaking the thing. Make a simple form that sends data to a plain URL like forum-en.guildwars2.com/search?q=SearchTerm instead of a complex SEO-"optimized" one like forum-en.guildwars2.com/forum/search/SearchTerm. That's roughly what's happening: turning off Javascript and forcing a search by "normal" means (typing and hitting Enter) sends the data to a page that returns something like "Cannot find category named search", which looks like a broken SEO "optimization". Did I give some insight at least?
There’s so much wrong in this post, lemme get this a bit straight:
There’s nothing wrong with an AJAX powered search – as long as it’s well implemented (see Google). The Fangamer Forum, however, isn’t – in so many ways (note the “proudly powered by wordpress” on the bottom of that page – i guess there’s a reason…).
A URL like domain.tld/search/searchterm doesn't indicate SEO at all (the term "SEO optimization" is marketing babble anyway – there's no such thing, there's only doing it right using web standards). What you mean is URL rewriting, which is used not only to create search-engine-friendly URLs but also to allow backend changes without breaking links – so, for example, it'd be possible to completely switch the forum software without breaking bookmarks or Google's (or any other search engine's) results.
The thing you mentioned that happens with Javascript disabled is actually the same page (or JSON result) which would be received via the AJAX search – there's just a parameter missing, so you run into this error (also a clear sign that this forum's software is… bad – a good one would also work flawlessly without Javascript).
Anyway, i've posted a GitHub link to a little greasemonkey script along with a stylesheet in a reply to this thread last week as a temporary fix for the search, using Google's custom search. However, my post got deleted and i got infracted for it because the moderator thought i'd posted bad stuff, which clearly wasn't the case. I hope that's sorted out now (thanks to Gaile!), so i'll post it again:
https://gist.github.com/codemasher/654f05a7cdf1c268d404
In order to use this script, you'll need a google account where you have to create a custom search engine and an API key (links included in the script's comments). For now i assume you know how to use greasemonkey etc. and know a little Javascript, so you can see there's no malicious stuff at all – i'll write up some instructions when i've got some time on my hands. Enjoy.
No, these icons are not yet part of the API. Let's hope for the /v2/guilds API which is to be released soon™ along with OAuth2 support.
However, it’s possible to extract the files from the GW2.dat. Have a look at https://github.com/rhoot/Gw2Browser (grey zone, you have been warned).
Don't confuse "storing user data in the cloud" with "content delivery networks". Content delivery networks make sure that a service is provided worldwide in the same quality (in terms of speed, accessibility and load balancing).
There’s currently one cloudfront.net domain (Amazon AWS) which you have to allow in order to display the forums properly, which is:
https://d1r2pgr9caw5gy.cloudfront.net/
There is another one which is used by the API services (fonts to be specific):
https://d1h9a8s8eodvjz.cloudfront.net/
Btw, I agree with you that storing personal data in the "cloud" is highly problematic, but sadly that's the way the web is going – live with it, or unplug your pc from the internet.
Even if it only delivered the current skill descriptions along with professions, ids and icons, it'd be great!
You’re still requesting the data for each individual item, try using the “ids” parameter and receive the data in chunks of up to 200 items per request. Over here: https://github.com/codemasher/gw2-database/blob/master/classes/gw2items.class.php#L178
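A minimal sketch of that batching (in Python here for brevity – the linked PHP class does the same thing; the endpoint is the v2 items endpoint discussed in this thread):

```python
def chunk(ids, size=200):
    # The endpoint accepts up to 200 ids per request, so split the
    # full id list into batches of that size.
    return [ids[i:i + size] for i in range(0, len(ids), size)]

item_ids = list(range(1, 501))  # stand-in for the real item id list
urls = ["https://api.guildwars2.com/v2/items?ids=" + ",".join(map(str, batch))
        for batch in chunk(item_ids)]
print(len(urls))  # 3 requests instead of 500
```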
Okay, thanks for the infraction, but this thread should be pinned since it seems no one at anet cares about this issue.
Please just remove the search box (protip: style="display:none;") and add a hint that we should rather use the google site search instead. You can also add a google custom search box (https://developers.google.com/custom-search/) to save us the hassle of typing site:http://forum-en.guildwars2.com/ into google's search.
Thanks.
Just use the proposed “fix”, google:
site:https://forum-en.gw2archive.eu/forum/[DESIRED_SUBFORUM] search term (the site: is important here)
I use it all the time when i search for stuff, and it works better than this forum's search has ever worked (and probably ever will, even if it gets fixed). Live with it
(edited by smiley.1438)
I’m sure you’ll get below 30 min, RollingCurl is da bomb!
Yes, 9h to complete seems highly inefficient – it should actually take no longer than a couple of minutes when using the bulk feature. Your script initiates a new (single) cURL instance for each request and doesn't queue them up to process them asynchronously, which means it has to wait for the whole lifecycle of one request before it fires the next one. You might want to take a look at rolling (multi) cURL and maybe also at my database project on GitHub which uses it (it takes about half an hour to download the whole item database in 4 languages on my very slow connection).
https://github.com/codemasher/gw2-database/blob/master/classes/rollingcurl.class.php
https://github.com/codemasher/gw2-database/blob/master/classes/gw2items.class.php#L156
€: also you need to connect through https since the http URL redirects you to the https one each time (which probably costs some time) – to make cURL work with https, you need to provide a CA root certificate: http://curl.haxx.se/ca/cacert.pem by passing the parameters:
CURLOPT_SSL_VERIFYPEER = true
CURLOPT_SSL_VERIFYHOST = 2
CURLOPT_CAINFO = 'path/to/certificates.pem'
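For comparison, the same idea sketched with Python's stdlib (not part of the PHP updater – just to show that the point is enabling verification, not disabling it):

```python
import ssl

# Verify the server certificate instead of switching the checks off.
# Pass cafile="path/to/cacert.pem" to create_default_context() to use a
# downloaded CA bundle; without it the system trust store is used.
ctx = ssl.create_default_context()           # CURLOPT_SSL_VERIFYPEER = true
assert ctx.check_hostname                    # analogue of CURLOPT_SSL_VERIFYHOST = 2
assert ctx.verify_mode == ssl.CERT_REQUIRED
# urllib.request.urlopen("https://api.guildwars2.com/...", context=ctx)
```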
(edited by smiley.1438)
00000000000000000000011111010101, Twilight Arbor
11111010101 → 2005 decimal which is RoF’s world_id (coincidence?)
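For the skeptics, the conversion checks out:

```python
# The guild flag bits quoted above, interpreted as a binary number.
bits = "00000000000000000000011111010101"
world_id = int(bits, 2)
print(world_id)  # 2005
```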
Guys, please stop derailing this thread. It has been pinned for a reason and was not meant to be a discussion thread.
(can someone please merge the discussion stuff into new topics?)
(edited by smiley.1438)
@OP:
just create a “repair post” next time, pointing out that the pagination is still broken and link to the existing threads:
https://forum-en.gw2archive.eu/forum/support/forum/Thread-loaded-as-Empty/first#post2149426
https://forum-en.gw2archive.eu/forum/support/forum/No-posts-in-a-thread/first#post2693455
Short answer: no.
Long answer: nope.
You need to download and store them in a local database to achieve this.
Thanks for the quick fixes!
How about the requests returning a 400 on a single invalid item id? Would be cool if the API returned a HTTP/206 Partial Content instead and just sent the available data – would make our lives easier
I've tested with 50 and 200, but 200 took way too long on the initial download. I can live with a full database refresh in half an hour – that's nothing compared to the old version
Darthmaim’s server has at least a 1Gbit connection (if not 10Gbit+) – not much of a problem there.
(WIP already on github: https://github.com/codemasher/gw2-database/blob/master/classes/gw2items.class.php#L156)
(edited by smiley.1438)
I don’t even…
36 minutes on my connection for a full update
I’m down to 2 minutes for a single language now… ;D
You can have my connection – 3 times dual channel ISDN… you know how “fast” that is, right?
IBM invented something for those who prefer XML: https://twitter.com/DanHarper7/status/514822464673951744
(no comment)
Is there still no way to search for items by name?
No, unfortunately not. That’s what we were discussing over here: https://forum-en.gw2archive.eu/forum/community/api/v2-item-details-and-recipe-details/4434782
36 minutes on my connection for a full update (4 languages) – while downloading chunks of just 50. I guess it'll be done in a few minutes on darthmaim's server then (gw2treasures.de). I ran a v1 update before just for testing, and it took ~3 hours, like i predicted.
Ok, one bug (or feature?):
When you send a chunk of ids and one of them is invalid, the API returns a 400 for the whole request. I'd prefer it to return the available ones instead.
The invalid ones (which i still had in my DB):
39925
40615
40898
40914
40922
40930
40978
43948
43949
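Since the API rejects the whole chunk, one way to isolate the offending ids (a hypothetical sketch, not what my updater does – `is_valid` stands in for an API call that succeeds only if every id in the chunk exists) is to bisect the failing chunk:

```python
def find_invalid(ids, is_valid):
    # Split a rejected chunk in halves and recurse until the
    # offending ids are isolated.
    if is_valid(ids):
        return []
    if len(ids) == 1:
        return list(ids)
    mid = len(ids) // 2
    return find_invalid(ids[:mid], is_valid) + find_invalid(ids[mid:], is_valid)

bad = {39925, 40615}  # pretend these are the unknown-to-the-API ids
ids = [39920, 39925, 40600, 40615, 40700]
print(find_invalid(ids, lambda chunk: not bad.intersection(chunk)))
# [39925, 40615]
```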
(edited by smiley.1438)
Whoa, finally! Thanks!
(so you did this because you noticed me hammering the v1/items endpoint right now?)
I've tested it and it'd be able to do so. But there are 2 problems:
first: it'll blow up your cpu and memory
second: there may be a limit of concurrent connections to the same host, or you might get blocked if you're hammering too hard – that's why there's a limit you can specify
The first example i posted works basically like that – no limits, just hammering. The one Pat posted has those limitations built in; one might use that for production.
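The "limit" idea looks roughly like this (a Python sketch, not the actual PHP code – `fetch()` is a placeholder for the real HTTP call): cap the number of in-flight requests with a fixed-size worker pool.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for the actual download of one URL.
    return len(url)

urls = ["https://api.guildwars2.com/v1/item_details.json?item_id=%d" % i
        for i in range(100)]

# Never more than 10 requests in flight at once – the same windowing
# that rolling cURL implements.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, urls))

print(len(results))  # 100
```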
Y U EDIT WHILE I ANSWER?
The total duration is just an approximation, mileage may vary. I'm currently rewriting my database updater, so i can maybe give you real results tomorrow.
(edited by smiley.1438)
I’ve used https://code.google.com/p/rolling-curl/ in the past with success, looks like it might be a similar implementation.
It’s in fact based on the same implementation by Josh Fraser: http://www.onlineaspect.com/2009/01/26/how-to-use-curl_multi-without-blocking/
€:
Just did a couple of test runs with 1000 requests (250 items), which completed in ~72 seconds on a very slow connection – in total that would be about 3 hours, assuming 38k items (152k requests).
(edited by smiley.1438)
I was tempted to say that's hard to do in PHP due to the lack of multi-threading, but hey, it isn't (there is an extension, but it's nothing to rely on: http://php.net/pthreads). I built the updater around this example: https://gist.github.com/Xeoncross/2362936 and it looks promising – ok, it isn't multi-threaded, but at least it's asynchronous.
(edited by smiley.1438)
Sometimes the API seems to get overloaded and returns a short error message instead of the expected JSON.
Thing is, that is not the case. Have a look at the snippet above – the JSON was returned correctly, but the state of the objectives was neutral. That's clearly an error on the API side, not a hiccup or overload or whatever.
Also, any decent javascript library nowadays checks if a request was successful (e.g. http://api.prototypejs.org/ajax/Ajax/Request/) – you can rely on that pretty much and just check for the data you expect.
Would be nice to know the match_id or at least the world you’re playing on, interesting for me (and probably others) to investigate.
https://api.guildwars2.com/v1/wvw/match_details.json?match_id=2-4 <— camps on red BL flipping to neutral
Might turn this one in over here: https://forum-en.gw2archive.eu/forum/community/api/Match-details-giving-incorrect-results/4436632
That's in fact interesting, because it seems it's just the camps on the red map.
"maps": [
    {
        "type": "RedHome",
        "scores": [32257, 2950, 2950],
        "objectives": [
            {"id": 32, "owner": "Red"},
            {"id": 33, "owner": "Red"},
            {"id": 34, "owner": "Neutral"},
            {"id": 35, "owner": "Green"},
            {"id": 36, "owner": "Blue"},
            {"id": 37, "owner": "Red"},
            {"id": 38, "owner": "Red"},
            {"id": 39, "owner": "Red"},
            {"id": 40, "owner": "Red"},
            {"id": 50, "owner": "Neutral"},
            {"id": 51, "owner": "Neutral"},
            {"id": 52, "owner": "Neutral"},
            {"id": 53, "owner": "Neutral"},
            {"id": 62, "owner": "Neutral"},
            {"id": 63, "owner": "Neutral"},
            {"id": 64, "owner": "Neutral"},
            {"id": 65, "owner": "Neutral"},
            {"id": 66, "owner": "Neutral"}
        ],
        "bonuses": []
    },
IDs 50-53 are the camps, 62-66 the ruins, which are usually neutral.
Depends on how the application is written – the objectives don't change to neutral during a capture. You may check out my overwolf app, which does basically the same as millenium.org plus some cool stuff.
The timers are only approximate since there's currently no reliable way to get the exact capture time from the API (i hope for this feature in v2). Therefore it may happen that several objectives change between 2 consecutive API polls, which was most likely the case when you took that screenshot.
The only way this information would make sense with each request would be for the item details – something like build_introduced and build_changed, so that we could keep track of these. But i guess that's over the top anyway…
If i remember right, there was a command /supply for that.
any chance of an endpoint which lists the ids of changed items after a patch has been pushed?
It’s not something we have currently, and I don’t think it’s anything we’re planning to build. Hopefully once /v2/items is out it’ll be easier for people to compile those sorts of things themselves.
Thanks for the heads up!
I wonder how that process would be easier with v2, since it basically delivers the same data. Currently the only way to find and update changed items is to retrieve the info for each and every item over and over again. So the only advantage of v2 will be the bulk request, which may speed up this process (it currently takes about a day).
€: see also: https://forum-en.gw2archive.eu/forum/community/api/API-Suggestion-Items-Recipes-and-Crafting/2863618
(edited by smiley.1438)
There's for sure an advantage over JSON: you just need the separator in CSV, while you need all the braces and quotes in JSON, so CSV will save you a couple of bytes per record.
Also: if your app isn’t a web app anyway, loading times won’t matter that much i guess.
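A rough illustration of that byte overhead (the item names here are made up, it's just the same three records serialized both ways):

```python
import csv, io, json

rows = [{"id": 1, "name": "Mighty Sword"},
        {"id": 2, "name": "Berserker's Axe"},
        {"id": 3, "name": "Rampager's Dagger"}]

# The JSON form repeats the keys, braces and quotes for every record...
as_json = json.dumps(rows)

# ...while CSV only needs the separators.
buf = io.StringIO()
csv.writer(buf).writerows([r["id"], r["name"]] for r in rows)
as_csv = buf.getvalue()

print(len(as_json), len(as_csv))  # CSV comes out a good deal smaller
```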
Don’t worry about how long it takes to populate a database. You only have to do it once for the lifetime of the installation.
Wrong. Do you know how often even the names of items change? And there are more languages than just english. Also, if you want to offer an extended search, you need to run an updater after every new build. There's a reason why so many people are asking for a list of changed item ids for each patch.
Thank you guys. Probably going to implement a local database then. Any ideas what to use for C#? I don't want the end user to have to install MySQL or something, so it should be lightweight.
What about CSV? This way you could also easily offer updates.
We could expose the commerce search endpoint, but it only knows about items that can be put on the Trading Post. We don’t have any other search backends wired up to items that I’m aware of.
This would be helpful too – at least for people who want to build their own TP listings.
Anyway, having your own database makes it easier to play around and keep track of changes. That being said: any chance of an endpoint which lists the ids of changed items after a patch has been pushed?