Help creating distributed server cache


Help creating distributed server cache

David Fox
I'm currently developing a gaming server which stores player information
that can be accessed from any of our games via a REST API.

So far I've thought of two ways to structure and cache player data:

1. When a client requests data on a player, spawn a dedicated player
process. This process handles all subsequent client requests for this
player, retrieves the player's data from the DB when it is created, and
periodically updates the DB with any new data from clients. If the
player is not requested by another client within... say 30 minutes, the
player process will terminate. (A rough sketch follows after this list.)

2. Just keep previously requested data in a distributed LRU cache (e.g.,
memcached, redis, mnesia)
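
To make option 1 concrete, here's a rough sketch of what I picture the
player process looking like -- a gen_server with an idle timeout.
db:load_player/1 and db:save_player/1 are stand-ins for whatever
persistence layer we end up with:

-module(player_proc).
-behaviour(gen_server).

-export([start_link/1, get_data/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

-define(IDLE_TIMEOUT, 30 * 60 * 1000).  %% 30 minutes, in milliseconds

start_link(PlayerId) ->
    gen_server:start_link(?MODULE, PlayerId, []).

get_data(Pid) ->
    gen_server:call(Pid, get_data).

init(PlayerId) ->
    Data = db:load_player(PlayerId),     %% hypothetical persistence call
    {ok, Data, ?IDLE_TIMEOUT}.

handle_call(get_data, _From, Data) ->
    %% Returning a timeout value resets the idle timer on every request.
    {reply, Data, Data, ?IDLE_TIMEOUT}.

handle_cast(_Msg, Data) ->
    {noreply, Data, ?IDLE_TIMEOUT}.

handle_info(timeout, Data) ->
    %% No requests within 30 minutes: write back and stop.
    {stop, normal, Data}.

terminate(_Reason, Data) ->
    db:save_player(Data),                %% hypothetical persistence call
    ok.

code_change(_OldVsn, Data, _Extra) ->
    {ok, Data}.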

Out of the two, I prefer #1 since it would allow me to separate the
functionality of different "data types" (e.g., player data, game data).

There are just 2 problems with doing it this way that I'd like your
thoughts and help with:
I. I would have to implement some sort of "LRU process cache" so I
could terminate processes to free memory for new ones.
II. If a load balancer connects a client to node #1, but the process for
the requested player is on node #2, how can the player process on node
#2 send the data to the socket opened for the client on node #1? Is it
possible to somehow send a socket across nodes? I ask because I'd like
to avoid sending big messages across nodes.
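
For reference, the only approach I know of today is a global registry
lookup plus a cross-node gen_server call -- which is exactly the
big-message traffic I'd like to avoid. A sketch, assuming the player
processes register themselves as {player, PlayerId}:

get_player(PlayerId) ->
    case global:whereis_name({player, PlayerId}) of
        undefined -> {error, not_found};
        Pid       -> gen_server:call(Pid, get_data)  %% reply crosses nodes
    end.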

Thanks for the help!



Help creating distributed server cache

Anthony Molinaro
You might consider just using riak with the memory backend.  It already
has a REST API:

http://docs.basho.com/riak/latest/tutorials/querying/Basic-Operations/

and with the memory backend you can set a TTL:

http://docs.basho.com/riak/latest/tutorials/choosing-a-backend/Memory/

There's no need to keep processes running or do much of anything
other than install it on a few machines, configure it and go.
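
Configuration is just a couple of app.config entries, something along
these lines (values are illustrative -- check the docs above for the
exact knobs):

{riak_kv, [
    {storage_backend, riak_kv_memory_backend}
]},
{memory_backend, [
    {max_memory, 4096},  %% max MB per vnode before LRU eviction kicks in
    {ttl, 1800}          %% object time-to-live, in seconds
]}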

-Anthony


--
------------------------------------------------------------------------
Anthony Molinaro                           <anthonym>



Help creating distributed server cache

David Fox
So you'd suggest going with #2 instead of #1? Could you tell me why?

Thanks for pointing out riak's memory backend. Had forgotten about that :)

-David





Help creating distributed server cache

Garrett Smith
In reply to this post by David Fox
Hi David,


It's tough to answer high-level "approach" style questions, especially
without some hands-on work (tinkering) to help you understand the
problem.

Limiting yourself to either/or options at this stage might also be premature.

Do you have a first pass at the public API for this service?

If you have an idea of the functions that could define the interface,
you can ask, for each unimplemented function:

- Can I make this side-effect free -- i.e. calling the function
doesn't change state or otherwise tamper with the universe?

- Does the function read from or write to long running state?

Side-effect-free functions are easy, which is why you should try to
solve problems with them wherever possible.

For long-running state, you can use a simple gen_server to implement
state initialization and mutation. If you have questions about what I
mean here, you'll need to bone up on gen_server, or alternatively look
at e2 services (see http://e2project.org) as they're simpler to write.
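
To make that split concrete, here's a toy sketch -- module and function
names are made up for illustration, not a prescription:

%% Pure: same input, same output, nothing else touched.
-module(score).
-export([rank/2]).

rank(Wins, Losses) -> Wins - Losses.

%% Stateful: long-running state behind a gen_server.
-module(score_store).
-behaviour(gen_server).
-export([start_link/0, store/2, lookup/1]).
-export([init/1, handle_call/3, handle_cast/2]).
%% (remaining callbacks omitted for brevity)

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

store(Key, Val) -> gen_server:cast(?MODULE, {store, Key, Val}).
lookup(Key)     -> gen_server:call(?MODULE, {lookup, Key}).

init([]) -> {ok, dict:new()}.

handle_call({lookup, Key}, _From, State) ->
    Reply = case dict:find(Key, State) of
                {ok, Val} -> Val;
                error     -> undefined
            end,
    {reply, Reply, State}.

handle_cast({store, Key, Val}, State) ->
    {noreply, dict:store(Key, Val, State)}.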

Once you have something very basic working, see if you're done! It
might just work for you as is, at least for the short term. If it
doesn't work, address the specific problem. E.g. if your problem is "I
lose my state when the VM crashes," you'll need to implement
persistence in some fashion.

Questions at that level are much easier to answer :)

Garrett



Help creating distributed server cache

David Fox
Hi Garrett, thanks for the response.

I have not yet finished implementing the API; I'm still in the design
phase figuring out how everything should be hooked up.

Totally agree on not limiting yourself to options this early on; I'm
just asking for some help and opinions on some problems I saw in
potential implementations :)

David Fox
m: 630 930 9219
Chicago





Help creating distributed server cache

Garrett Smith
On Thu, Dec 6, 2012 at 5:01 PM, David Fox <david> wrote:
> Hi Garret, thanks for the response.
>
> I have not yet finished implementing the API; I'm still in the design phase
> figuring out how everything should be hooked up.

Right, though I think you can start to build actual pieces (i.e.
compiling/running code) based on what's most obvious to you, even
if it's just super stupidly simple. It will give you a basis for
iteration, which will further help your understanding. You may
find yourself spending almost no time designing :)

> Totally agree on not limiting yourself to options this early on, I'm just
> asking for some help and opinions on some problems I saw in potential
> implementations :)
>
> David Fox
> m: 630 930 9219
> Chicago

Ah, I'm also in Chicago. Do you know about the Chicago Erlang User Group:

http://www.meetup.com/ErlangChicago/

We haven't met the last few months, but I'd like to do an informal
meetup before year's end!




Help creating distributed server cache

Steve Davis-2
In reply to this post by David Fox
From what you've said, I would guess that the correct answer is
(2), the memcached protocol, since your solution (1) starts with "When
a client requests data on a player, spawn 1 player process." If that
had been "when a client requests data on themselves from another game"
then it could have been in the running...

A memcached implementation will sort out LRU without you having to reinvent
(stabilize, test) a wheel.
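
And the wire protocol is simple enough to speak straight from Erlang if
you ever want to skip a client library. A bare-bones sketch of a "get"
over the memcached text protocol (no error handling, assumes memcached
on localhost:11211 and values without embedded newlines):

mc_get(Key) ->
    {ok, Sock} = gen_tcp:connect("localhost", 11211,
                                 [binary, {packet, line}, {active, false}]),
    ok = gen_tcp:send(Sock, ["get ", Key, "\r\n"]),
    Result = case gen_tcp:recv(Sock, 0) of
                 {ok, <<"END\r\n">>} ->
                     not_found;
                 {ok, _Header} ->  %% "VALUE <key> <flags> <bytes>\r\n"
                     {ok, Data} = gen_tcp:recv(Sock, 0),
                     {ok, <<"END\r\n">>} = gen_tcp:recv(Sock, 0),
                     {ok, Data}    %% Data keeps its trailing \r\n
             end,
    gen_tcp:close(Sock),
    Result.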

Not sure why there's a REST requirement. If this MUST be HTTP then I see
it, otherwise what does it do for you?

My 2c,
/s



Help creating distributed server cache

Anthony Molinaro
In reply to this post by David Fox
I mention riak because it gets you furthest quickest. It solves the
problems you outlined (a REST API and LRU-style caching) while avoiding
the ones you anticipated: assuming you use a player id as your key,
riak will route requests to the appropriate node, and the memory
backend handles LRU-style eviction. In addition, you get distributed
caching that keeps working even when nodes go down.

So it just seemed to fit your problem as described.

-Anthony




Help creating distributed server cache

David Fox
In reply to this post by Steve Davis-2
There is no hard requirement for a RESTful API, but since this API will
be used in a wide variety of places (e.g., web/HTML5 games, mobile,
Flash, etc.) and not just internally, we decided a RESTful API would be
a good idea and would make using the API during development quicker and
easier.



Help creating distributed server cache

David Fox
In reply to this post by Garrett Smith
I did not know about the Chicago group. Thanks for the heads up, I'll
make sure to look into it :)




Help creating distributed server cache

David Fox
In reply to this post by David Fox
I've never heard of this tech before. Thanks for the heads up, looks
quite interesting :)

On 12/7/2012 13:34, Arthur Ingram wrote:

> Take a look at the following
>
>
> https://github.com/ztmr/egtm
>
> http://robtweed.wordpress.com/2012/10/22/natively-stateless/



Help creating distributed server cache

Steve Davis-2
In reply to this post by David Fox
Then it does sound like either one of:
 - Couchbase (http://www.couchbase.com), which is what I had in mind as
a memcached-compatible cache/db
 - Riak (as Anthony had suggested)
...would suit your needs.

Riak is mature and massively scalable by design. Couchbase is also
highly scalable, and offers document-oriented storage and a JavaScript
query interface.

It's hard to predict which would suit you best, as that would depend on
your data and current infrastructure.

Both are definitely worth doing the "due diligence" on.

/s
