How frequently is the data updating?
The short answer is “5 minutes” for anything reading account/character data.
Thanks for the reply. Is there a resource that immediately updates the deaths count?
Nope.
As a short explanation, the data path looks roughly like this:
[map instance server] → [database] → [API account/character cache] → [API frontend]
When you’re logged in and on a map, the map instance server you’re connected to has authoritative control over your character data. It only writes it back to the database when certain events occur. The database is read by an API backend component that can parse the account/character data; it’s cached by this component for ~5 minutes so that API requests can’t negatively affect game systems. Finally, the API frontend makes a request to the API backend whenever someone wants the data.
To get the data immediately, the API components would have to talk directly to the map instance servers. We don’t do this because the API is intentionally decoupled from all the game systems, so that the API can independently catch fire and burn down without affecting anything else. (Additionally, this is one of the main reasons why the API will, for the most part, only ever be read-only.)
I’m making some sweeping generalizations here, but that’s the rough explanation.
Thanks for the explanation. One final question: Is the value of 5 minutes a fixed value? Is it always exactly 5 minutes?
(I am wondering if I should set my death counter to request the death API every 00:05:10 or shorter.)
There are effectively two levels of caching when you’re logged in: the map instance server has to persist its changes to the database, and the API backend component has to fetch the updated data from there. The API backend component has a fixed 5-minute expiry time, but it can end up re-reading the same stale data if the map server’s changes haven’t been persisted to the database yet.
In practice, the database copy is almost always updated by the time the 5 minutes has expired, so assuming the value is updated exactly 5 minutes after requesting is pretty safe.
Distributed systems are hard.
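For a client like the death counter mentioned earlier, the answer above suggests a simple polling schedule: wait the fixed cache expiry plus a small margin for the database write. A sketch, where the 10-second margin, the function name, and the exact values are my own assumptions, not anything guaranteed by the API:

```python
CACHE_TTL = 300      # fixed API backend cache expiry, in seconds
SAFETY_MARGIN = 10   # assumed slack for the map server -> database write

def next_poll_time(last_change_seen):
    """Earliest moment a poll is likely to observe a newer value,
    measured from when the last change was observed."""
    return last_change_seen + CACHE_TTL + SAFETY_MARGIN
```

In other words, polling noticeably faster than every ~5 minutes just burns requests against the same cached value.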
Could something like this work? push/pull sync
It would allow for push notifications to the API backend when the server backend has fresh data ready to be pulled. This would allow for synchronized timing while maintaining the decoupling. It would also reduce unnecessary calls, since the API would only fetch data when the server signals that fresh data is available.
It sounds like you have 4 layers here: the server, server cache, API cache, and API frontend. The server cache would send a notification when updated; the API cache would receive it and trigger the sync request when new data is available. This also eliminates the need for constant polling — you’re talking through an expected channel, and when a message is received, action is triggered. For security you can use an encrypted application/database key that exists in the server DB; without that key, a request to either cache would be denied. You can also set up acceptable domain keys, so anything outside of your local domain would be rejected for such a request (security sucks…), much like how OAuth2 works with .NET WebAPI.
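The push/pull idea above can be sketched as a tiny in-process pub/sub pair. This is a hypothetical illustration only — the class names and single-process setup are mine; the real components are separate services communicating over the network:

```python
class ServerCache:
    """Authoritative store; publishes a notification on every write."""
    def __init__(self):
        self._data = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def write(self, key, value):
        self._data[key] = value
        for notify in self._subscribers:
            notify(key)  # push: signal listeners that fresh data exists

    def read(self, key):
        return self._data.get(key)


class ApiCache:
    """Caches reads; flushes an entry when the server cache signals a change."""
    def __init__(self, server):
        self._server = server
        self._cache = {}
        server.subscribe(self._invalidate)

    def _invalidate(self, key):
        self._cache.pop(key, None)  # next read pulls fresh data on demand

    def get(self, key):
        if key not in self._cache:
            self._cache[key] = self._server.read(key)  # pull only when stale
        return self._cache[key]
```

The push carries only the invalidation signal, not the data itself, so the API still pulls on demand and the layers stay decoupled.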
Yeah, adding notifications to the server cache would allow us to flush the api cache and have better coherency.
I’m a little bit afraid to make changes to that system: the server cache is basically where account/character persistence is handled (it’s effectively backed by MSSQL), and fiddling with that terrifies me. Our backend systems don’t really support pub/sub-style events, and the DB component is a generic piece that converts from our internal message protocol (“STS”) into stored procedure invocations, so there’s not really a good place to hook in functionality that notifies other components on update.
I think this need is better met by having a local in-client websocket API (since the client always has up-to-date data). It’s still pie in the sky, but I have a feeling that by next year I’ll be out of lower-hanging fruit to implement.
I did a little more research on this issue, and I found this…
https://msdn.microsoft.com/en-us/library/t9x04ed2(v=vs.110).aspx
I thought I’d share, since this sounds like a much better solution than my initial suggestion.