I am working with a busy database-driven web site that has a performance
problem:
- several Linux hosts running Apache and PHP and issuing SQL queries to
- one hefty four-CPU SMP Linux host running Mysql
When the site is busy, the apache instances open more and more connections
and start new queries to the Mysql system, which leads to the Mysql
server trying to handle several hundred queries at one time. Unhappily,
Mysql bogs down very badly when there are lots of active threads.
For example, with only a few (ca. 10) connections, the Mysql system can
handle several thousand queries per second. But with many (eg 1000)
connections, Mysql can only do a few hundred queries per second. That is,
with large numbers of active queries, overall throughput declines by an
order of magnitude.
The slower the database goes, the more queries the webservers want to
start, so there is a nasty feedback effect.
Mysql uses pthreads internally, and since this is on Linux, they map to
Linux clone() threads, which are scheduled by the kernel and are quite a
lot heavier than an Erlang process.
To avoid the site meltdowns, I am proposing to limit the number of
Mysql connections by placing a proxy process between the webservers
(Mysql clients) and the Mysql server. This proxy will multiplex multiple
sql client sessions onto each sql server connection.
Bad ascii diagram ...
Client ----\
Client ----\ \
Client ------- Proxy <====> Mysql
Client ----/ /
Client ----/
When a client sends a query, it must be queued until it can be sent on to
the server. When a query is sent to the server, there may be some delay
before the response. We need to continue accepting and queueing client
queries during this time. When a response comes from the server it
must be sent back to the client that originated the query, and the next
queued query must be sent to the server.
Client A: - to Proxy -> here is query qA
Proxy:    server is idle, mark it busy with qA and
          - to Server -> here is a query
Client B: - to Proxy -> here is qB
Proxy:    hmm, server is busy, put qB on queue
Client C: - to Proxy -> here is qC
Proxy:    put qC on queue
Server:   - to Proxy -> here is a partial result
Proxy:    - to Client A -> here is part of response A
Server:   - to Proxy -> here is the rest of the result
Proxy:    - to Client A -> here is the rest of rA
Proxy:    dequeue qB, - to Server -> here is a query
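The dispatch logic in that walkthrough can be sketched as pure Erlang
functions (hypothetical module and names of my own invention; one server
connection assumed, socket I/O and partial packets left out):

```erlang
-module(proxy_dispatch).
-export([handle_query/3, handle_done/1]).

%% Proxy state for one server connection:
%%   {idle, Queue} | {{busy, Client}, Queue}
%% Queue holds {Client, Query} pairs waiting for the server.

%% A query arrives from a client.
handle_query(Client, Query, {idle, Queue}) ->
    %% Server is free: forward the query and mark the server busy.
    {send_to_server, Query, {{busy, Client}, Queue}};
handle_query(Client, Query, {{busy, Current}, Queue}) ->
    %% Server is occupied: park the query on the queue.
    {queued, {{busy, Current}, queue:in({Client, Query}, Queue)}}.

%% The server finished the current query; start the next one, if any.
handle_done({{busy, _Done}, Queue}) ->
    case queue:out(Queue) of
        {{value, {Client, Query}}, Rest} ->
            {send_to_server, Client, Query, {{busy, Client}, Rest}};
        {empty, Empty} ->
            {now_idle, {idle, Empty}}
    end.
```

Keeping the decision logic pure like this makes it easy to test apart from
the processes that own the actual sockets.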
The query and especially the response may be any size (although here they
are probably a few KB), so it is necessary to handle partial queries
and partial results. Also, the client does not have to read the whole
response at once, so there may be some buffering needed. This all means that
the proxy needs to understand enough of the mysql "on the wire" protocol to
identify whole queries and whole sets of results, presumably by watching for
whatever "end of message" markers there are in the packets.
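For reference, each Mysql wire packet starts with a 4-byte header: a 3-byte
little-endian payload length and a 1-byte sequence number; a result set
spans several such packets, terminated by an EOF/OK packet. Splitting a TCP
stream into whole packets is then a simple binary match (a sketch only;
accumulating bytes until a packet is complete is left to the caller):

```erlang
-module(mysql_frame).
-export([split/1]).

%% Split a buffer into complete Mysql packets and a remainder.
%% Each packet looks like <<Len:24/little, Seq:8, Payload:Len/binary>>.
split(Buffer) -> split(Buffer, []).

split(<<Len:24/little, Seq:8, Payload:Len/binary, Rest/binary>>, Acc) ->
    split(Rest, [{Seq, Payload} | Acc]);
split(Rest, Acc) ->
    %% Not enough bytes for another whole packet: return what we have.
    {lists:reverse(Acc), Rest}.
```

The binary pattern match does all the header parsing; the leftover bytes
are simply carried over and prepended to the next chunk read from the socket.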
I am considering Erlang for writing this proxy; it seems like it should be
quite a bit simpler using sockets and binaries than doing it in C. Although,
there are some C libraries for this that do a bit of the heavy lifting.
The performance constraints are roughly:
- queries are a few hundred bytes.
- I am guessing that responses are about one K bytes, but could sometimes
be quite large (ie megs).
- packets are handled as binaries, only a few bytes of header need looking at.
- roughly 1000 client sessions.
- roughly 10 to 40 server sessions.
- it would be nice to handle thousands of queries per second.
- the multiplexing proxy(s) can run on the webserver machines, one per
apache, or on dedicated machines between the webservers and the sql server.
- network bandwidth is not a problem, hardware can be purchased if needed.
Current machines are two- and four-cpu Xeons with 1GB to 4GB of memory.
So, has anyone got any advice for me?
Is it reasonable to expect Erlang to handle this?
- How many tcp sockets can the emulator handle well? Hundreds? Thousands?
- is it able to poll them efficiently?
- what sort of traffic level can it cope with in terms of messages/bytes
per second? (I realize this is an impossible question, but if the answer
is "many KB per second", I have to give up, if it is "many MB per second",
I have a hope.)
Is there a preferred way to lay this sort of thing out into processes? I have
a vague idea of a process per socket and some sort of queue manager process
spawning a process for each query as it is activated.
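That vague idea can be sketched with plain processes and message passing
(hypothetical names of my own; real sockets and the Mysql side are omitted,
and the manager just echoes the query back to show the shape):

```erlang
-module(proxy_layout).
-export([start/0, ask/2]).

%% One queue-manager process; in the real proxy there would be one
%% lightweight process per client socket, each calling ask/2 on behalf
%% of its client and blocking until the reply arrives.
start() ->
    spawn(fun() -> manager_loop() end).

%% Called from a per-socket process: submit a query, wait for the reply.
ask(Manager, Query) ->
    Manager ! {query, self(), Query},
    receive
        {reply, Answer} -> Answer
    after 5000 ->
        timeout
    end.

manager_loop() ->
    receive
        {query, From, Query} ->
            %% The real proxy would queue the query here and forward it
            %% to a Mysql connection; we just echo it back.
            From ! {reply, {done, Query}},
            manager_loop();
        stop ->
            ok
    end.
```

Since each client socket gets its own process that blocks in receive, the
per-socket code stays sequential while the manager serializes access to the
server connections.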
David Gould dg
SuSE, Inc., 580 2cd St. #210, Oakland, CA 94607 510.628.3380
You left them alone in a room with a penguin?! Mr Gates, your men are
David Gould <dg> writes:
> To avoid the site meltdowns, I am proposing to limit the number of
> Mysql connections by placing a proxy process between the webservers
> (Mysql clients) and the Mysql server. This proxy will multiplex
> multiple sql client sessions onto each sql server connection.
> Bad ascii diagram ...
> Client ----\
> Client ----\ \
> Client ------- Proxy <====> Mysql
> Client ----/ /
> Client ----/
> When a client sends a query, it must be queued until it can be sent
> on to the server. When a query is sent to the server, there may be
> some delay before the response. We need to continue accepting and
> queueing client queries during this time. When a response comes from
> the server it must be sent back to the client that originated the
> query, and the next queue query must be sent to the server.
I can't answer the important questions about throughput and load
limits, as our app is low traffic. But, our pilot Erlang project, put
into production last summer, does something quite similar to your
diagram, except that we are not only proxying and queuing, but also
providing failover on redundant custom databases over WAN links. It
has been extremely reliable, allowing support staff to sleep through
near-weekly outages in any one of the remote servers.
Warning: The principal programmer on the project turned into an FP
zealot, and doesn't like C++, Java, or Perl any more. ;-)