Logging to one process from thousands: How does it work?

Logging to one process from thousands: How does it work?

Joel Reymont-2
Folks,

I have just rewritten some code from Haskell to Erlang and there's one
thing that baffles me.

I tried to approach Haskell the way I would code in Erlang and set up  
a logger thread with an unbounded message queue. I then launched a  
few hundred/thousand threads that traced to the logger periodically.  
This is no different than using disk_log in Erlang, I think.

The Haskell logger thread got quickly overwhelmed with messages I  
think. The queue build-up was huge. How does it work with Erlang? Is  
the scheduler specially tuned somehow to give different priorities to  
different threads? The Haskell scheduler, I believe, is just round-
robin.

        Thanks, Joel

--
http://wagerlabs.com/
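
(For reference, here is roughly what the naive Erlang shape of that
experiment looks like; the rest of the thread discusses why and when
this shape gets into trouble. Module, message and file names below are
made up for illustration:)

%% Naive, unbounded logger: every message simply piles into its mailbox.
logger(Dev) ->
    receive
        {trace, Pid, What} ->
            io:format(Dev, "~p: ~p~n", [Pid, What]),
            logger(Dev)
    end.

%% Spawn a few thousand producers that trace to the logger periodically.
start(N) ->
    {ok, Dev} = file:open("trace.log", [append]),
    Logger = spawn(fun() -> logger(Dev) end),
    [spawn(fun() -> producer(Logger) end) || _ <- lists:seq(1, N)],
    Logger.

producer(Logger) ->
    Logger ! {trace, self(), erlang:now()},
    timer:sleep(100),
    producer(Logger).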








Logging to one process from thousands: How does it work?

Raimo Niskanen-3
There is a small fix in the scheduler for the standard
producer/consumer problem: A process that sends to a
receiver having a large receive queue gets punished
with a large reduction (number of function calls)
count for the send operation, and will therefore
get smaller scheduling slots.
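
(A quick way to see the mailbox growth, and the sender's own cost, is a
small test module; sink/0 and flood_demo/1 are invented for the
illustration, but process_info/2 is the real BIF. The exact size of the
send penalty is an implementation detail, so treat the numbers as a
rough indication only:)

%% A consumer that never matches anything, so its mailbox only grows.
sink() ->
    receive never_sent -> ok end.

%% Flood it with N messages, then report its queue length and our own
%% reduction count.
flood_demo(N) ->
    Pid = spawn(fun sink/0),
    [Pid ! {log, I} || I <- lists:seq(1, N)],
    {process_info(Pid, message_queue_len),
     process_info(self(), reductions)}.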

joelr1 (Joel Reymont) writes:

> Folks,
>
> I have just rewritten some code from Haskell to Erlang and there's one
> thing that baffles me.
>
> I tried to approach Haskell the way I would code in Erlang and set up
> a logger thread with an unbounded message queue. I then launched a
> few hundred/thousand threads that traced to the logger periodically.
> This is no different than using disk_log in Erlang, I think.
>
> The Haskell logger thread got quickly overwhelmed with messages I
> think. The queue build-up was huge. How does it work with Erlang? Is
> the scheduler specially tuned somehow to give different priorities to
> different threads? The Haskell scheduler, I believe, is just round-
> robin.
>
> Thanks, Joel
>
> --
> http://wagerlabs.com/
>
>
>
>
>

--

/ Raimo Niskanen, Erlang/OTP, Ericsson AB



Logging to one process from thousands: How does it work?

Joel Reymont-2
In reply to this post by Joel Reymont-2
To add to my list of questions...

Are there any limits on the size of the receive queue in Erlang? Is  
there any blocking to prevent other processes from posting to the
message queue once it reaches a certain size?

Is the receive queue a bottomless pit without any blocking?

On Jan 3, 2006, at 12:36 PM, Joel Reymont wrote:

> Folks,
>
> I have just rewritten some code from Haskell to Erlang and there's
> one thing that baffles me.
>
> I tried to approach Haskell the way I would code in Erlang and set  
> up a logger thread with an unbounded message queue. I then launched  
> a few hundred/thousand threads that traced to the logger  
> periodically. This is no different than using disk_log in Erlang, I  
> think.
>
> The Haskell logger thread got quickly overwhelmed with messages I  
> think. The queue build-up was huge. How does it work with Erlang?  
> Is the scheduler specially tuned somehow to give different  
> priorities to different threads? The Haskell scheduler, I believe,  
> is just round-robin.

--
http://wagerlabs.com/








How does the scheduler assign priorities?

Joel Reymont-2
In reply to this post by Raimo Niskanen-3
How does the scheduler assign priorities then? Is there a particular  
source code file that I should look at? I'm very interested in the  
logic behind it.

So far it looks like a priority-based approach where priorities are  
assigned depending on the number of reductions. How many reductions  
does a process get before it is rescheduled? Is there a way to  
control that?

        Thanks, Joel

On Jan 3, 2006, at 1:03 PM, Raimo Niskanen wrote:

> There is a small fix in the scheduler for the standard
> producer/consumer problem: A process that sends to a
> receiver having a large receive queue gets punished
> with a large reduction (number of function calls)
> count for the send operation, and will therefore
> get smaller scheduling slots.

--
http://wagerlabs.com/








How does the scheduler assign priorities?

Joel Reymont-2
Google is my friend:

http://www.erlang.org/ml-archive/erlang-questions/200104/msg00072.html

On Jan 3, 2006, at 1:41 PM, Joel Reymont wrote:

> How does the scheduler assign priorities then? Is there a  
> particular source code file that I should look at? I'm very  
> interested in the logic behind it.
>
> So far it looks like a priority-based approach where priorities are  
> assigned depending on the number of reductions. How many reductions  
> does a process get before it is rescheduled? Is there a way to  
> control that?

--
http://wagerlabs.com/








Logging to one process from thousands: How does it work?

David Hopwood-2
In reply to this post by Raimo Niskanen-3
> joelr1 (Joel Reymont) writes:
>
>>I have just rewritten some code from Haskell to Erlang and there's one
>>thing that baffles me.
>>
>>I tried to approach Haskell the way I would code in Erlang and set up
>>a logger thread with an unbounded message queue. I then launched a
>>few hundred/thousand threads that traced to the logger periodically.
>>This is no different than using disk_log in Erlang, I think.
>>
>>The Haskell logger thread got quickly overwhelmed with messages I
>>think. The queue build-up was huge. How does it work with Erlang? Is
>>the scheduler specially tuned somehow to give different priorities to
>>different threads? The Haskell scheduler, I believe, is just round-
>>robin.

Raimo Niskanen wrote:
> There is a small fix in the scheduler for the standard
> producer/consumer problem: A process that sends to a
> receiver having a large receive queue gets punished
> with a large reduction (number of function calls)
> count for the send operation, and will therefore
> get smaller scheduling slots.

This makes the problem less likely to occur, but it isn't necessarily enough
to prevent a message backlog in all cases. In principle, the right way to
handle this is to provide back-pressure to the message sources, i.e. to allow
the logging operation to block in each source. The simplest way to do that is
to use bounded queues.

--
David Hopwood <david.nospam.hopwood>




Logging to one process from thousands: How does it work?

Joel Reymont-2
How do you implement a bounded queue in Erlang without busy-waiting?

On Jan 3, 2006, at 8:02 PM, David Hopwood wrote:

> This makes the problem less likely to occur, but it isn't  
> necessarily enough
> to prevent a message backlog in all cases. In principle, the right  
> way to
> handle this is to provide back-pressure to the message sources,  
> i.e. to allow
> the logging operation to block in each source. The simplest way to  
> do that is
> to use bounded queues.

--
http://wagerlabs.com/








Logging to one process from thousands: How does it work?

Richard Cameron-2
In reply to this post by David Hopwood-2

On 3 Jan 2006, at 20:02, David Hopwood wrote:

> Raimo Niskanen wrote:
>> A process that sends to a
>> receiver having a large receive queue gets punished
>> with a large reduction (number of function calls)
>> count for the send operation
>
> This makes the problem less likely to occur, but it isn't  
> necessarily enough
> to prevent a message backlog in all cases.

If the amount of punishment a sender received increased with the  
queue length, would the maximum backlog asymptotically tend to a  
finite limit in the worst case?

That's probably only true if you assume that the per-timeslice  
production and service rates are constant, which might very well be  
complete rubbish. I'm guessing the cost of pattern matching to do the  
"receive" on the message queue is going to increase with the queue  
length too. So, maybe it comes down to making sure that the  
punishments grow faster than the cost of performing the receives?

Any idea what that cost is? Is the worst case linear, or can the  
virtual machine optimise for all cases?

> In principle, the right way to
> handle this is to provide back-pressure to the message sources,  
> i.e. to allow
> the logging operation to block in each source. The simplest way to  
> do that is
> to use bounded queues.

Sounds like the reduction count punishment is a good way of providing  
that back-pressure, but still having "soft realtime" properties which  
more gradually degrade under load.

I think with bounded queues it might also be possible to dream up  
scenarios where deadlocks can occur. What about a simple client/
server arrangement where the client sends a message to the server,
then happens to receive some other external (and unrelated) message
which fills up its queue before the server responds? The server can't
reply (its ! now blocks?) and the client can't process its message  
queue as it's sitting waiting for a reply from the server.

Richard.



Logging to one process from thousands: How does it work?

Ulf Wiger-5
In reply to this post by Joel Reymont-2
Den 2006-01-03 22:03:43 skrev Joel Reymont <joelr1>:

> How do you implement a bounded queue in Erlang without busy-waiting?

Difficult, but you can achieve back-pressure by
making the communication with the logger synchronous.

/Uffe
--
Ulf Wiger
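
(For concreteness, a minimal sketch of that suggestion as a gen_server;
the module name, file handling and log format are only illustrative.
Every client blocks in gen_server:call/3 until its entry has been
written, so the logger's mailbox holds at most one pending request per
client:)

-module(sync_logger).
-behaviour(gen_server).
-export([start_link/1, log/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link(File) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, File, []).

%% The caller waits here until the logger replies, which is the
%% back-pressure.
log(Term) ->
    gen_server:call(?MODULE, {log, Term}, infinity).

init(File) ->
    {ok, Dev} = file:open(File, [append]),
    {ok, Dev}.

handle_call({log, Term}, _From, Dev) ->
    ok = io:format(Dev, "~p.~n", [Term]),
    {reply, ok, Dev}.

handle_cast(_Msg, Dev) ->
    {noreply, Dev}.

(If the disk slows down, every producer slows down with it, which is
the back-pressure David Hopwood argued for.)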



Logging to one process from thousands: How does it work?

Raimo Niskanen-3
In reply to this post by Richard Cameron-2
Well, the existing implementation is just a small fix to prevent
the beginner from getting stuck on a simple producer/consumer
problem. Unfortunately the fix works so well that users are
lured into thinking it is the complete solution, then realize it is
not flawless, and start making intricate improvement
suggestions.

The problem is not easily solved in the general case. So
I guess it would be futile to try to implement the
final solution in the runtime system.

The existing implementation might be enough in many cases.
To reduce the cost of pattern matching at the receive end,
do no pattern matching - just always take the
first message. The pattern matching cost at
the receive end does not have to increase
with queue length: if a message has been scanned
and found not to match, the scan continues with
new messages even after a schedule-out. It is
only when entering a new receive statement that the message
queue is rescanned from the beginning.

Each application will have to solve the problem using
some kind of flow control. For a logger this is
unpleasant but not impossible. E.g. the client might
have a counter in the process dictionary and do
a synchronous log entry every 17 log entries. And
if the logger always takes the first message, the
existing implementation might be enough. Beware that
the existing flow control only works for local sends,
so I guess a logger in a node cluster would have to
use explicit flow control.
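
(A sketch of that client-side flow control, purely for illustration:
the message shapes, the sync-every-17 constant and the logger loop are
invented for the example, but the process dictionary bookkeeping and
the take-the-first-message receive are the points made above. Logger
is assumed to be a pid here:)

-define(SYNC_EVERY, 17).

%% Client side: mostly asynchronous sends, but every 17th entry waits
%% for an acknowledgement, so a slow logger eventually stalls its
%% producers instead of accumulating an ever-growing mailbox.
log(Logger, Entry) ->
    N = case get(log_count) of undefined -> 0; C -> C end,
    put(log_count, N + 1),
    case N rem ?SYNC_EVERY of
        0 ->
            Logger ! {log, self(), Entry},
            receive {logged, Logger} -> ok end;
        _ ->
            Logger ! {log, nobody, Entry},
            ok
    end.

%% Logger side: both clauses match whatever message is first in the
%% queue, so no costly rescanning of a long mailbox takes place.
logger_loop(Dev) ->
    receive
        {log, nobody, Entry} ->
            io:format(Dev, "~p.~n", [Entry]),
            logger_loop(Dev);
        {log, From, Entry} ->
            io:format(Dev, "~p.~n", [Entry]),
            From ! {logged, self()},
            logger_loop(Dev)
    end.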



camster (Richard Cameron) writes:

> On 3 Jan 2006, at 20:02, David Hopwood wrote:
>
> > Raimo Niskanen wrote:
> >> A process that sends to a
> >> receiver having a large receive queue gets punished
> >> with a large reduction (number of function calls)
> >> count for the send operation
> >
> > This makes the problem less likely to occur, but it isn't
> > necessarily enough
> > to prevent a message backlog in all cases.
>
> If the amount of punishment a sender received increased with the
> queue length, would the maximum backlog asymptotically tend to a
> finite limit in the worst case?
>
> That's probably only true if you assume that the per-timeslice
> production and service rates are constant, which might very well be
> complete rubbish. I'm guessing the cost of pattern matching to do the
> "receive" on the message queue is going to increase with the queue
> length too. So, maybe it comes down to making sure that the
> punishments grow faster than the cost of performing the receives?
>
> Any idea what that cost is? Is the worst case linear, or can the
> virtual machine optimise for all cases?
>
> > In principle, the right way to
> > handle this is to provide back-pressure to the message sources,
> > i.e. to allow
> > the logging operation to block in each source. The simplest way to
> > do that is
> > to use bounded queues.
>
> Sounds like the reduction count punishment is a good way of providing
> that back-pressure, but still having "soft realtime" properties which
> more gradually degrade under load.
>
> I think with bounded queues it might also be possible to dream up
> scenarios where deadlocks can occur. What about a simple client/
> server arrangement where the client sends a message to the server
> then happens to receive some other external (and unrelated) message
> which fills up its queue before the server responds. The server can't
> reply (its ! now blocks?) and the client can't process its message
> queue as it's sitting waiting for a reply from the server.
>
> Richard.

--

/ Raimo Niskanen, Erlang/OTP, Ericsson AB



How does the scheduler assign priorities?

Matthias Lang-2
In reply to this post by Joel Reymont-2
Joel Reymont writes:
 > Google is my friend:
 >
 > http://www.erlang.org/ml-archive/erlang-questions/200104/msg00072.html

To get full points, you have to use it _before_ posting. For extra
credit, make sure the question isn't in the FAQ and isn't covered by
the Erlang reference manual.

Matthias

 > On Jan 3, 2006, at 1:41 PM, Joel Reymont wrote:
 >
 > > How does the scheduler assign priorities then? Is there a  
 > > particular source code file that I should look at? I'm very  
 > > interested in the logic behind it.
 > >
 > > So far it looks like a priority-based approach where priorities are  
 > > assigned depending on the number of reductions. How many reductions  
 > > does a process get before it is rescheduled? Is there a way to  
 > > control that?
 >
 > --
 > http://wagerlabs.com/
 >
 >
 >
 >



Logging to one process from thousands: How does it work?

Rick Pettit-2
In reply to this post by Joel Reymont-2
On Tue, Jan 03, 2006 at 01:21:32PM +0000, Joel Reymont wrote:
> To add to my list of questions...
>
> Are there any limits on the size of the receive queue in Erlang?

I entered your exact question into google and the first link that popped up
was the Erlang FAQ--try reading that.

You might even want to skip to the following section:

  10.8.3. What limits does Erlang have?

There you will find a link to even more information, and so on.

-Rick

> Is  
> there any blocking to prevent other processes from posting to the
> message queue once it reaches a certain size?
>
> Is the receive queue a bottomless pit without any blocking?
>
> On Jan 3, 2006, at 12:36 PM, Joel Reymont wrote:
>
> >Folks,
> >
> >I have just rewritten some code from Haskell to Erlang and there's
> >one thing that baffles me.
> >
> >I tried to approach Haskell the way I would code in Erlang and set  
> >up a logger thread with an unbounded message queue. I then launched  
> >a few hundred/thousand threads that traced to the logger  
> >periodically. This is no different than using disk_log in Erlang, I  
> >think.
> >
> >The Haskell logger thread got quickly overwhelmed with messages I  
> >think. The queue build-up was huge. How does it work with Erlang?  
> >Is the scheduler specially tuned somehow to give different  
> >priorities to different threads? The Haskell scheduler, I believe,  
> >is just round-robin.
>
> --
> http://wagerlabs.com/
>
>
>
>
>



Logging to one process from thousands: How does it work?

David Hopwood-2
Rick Pettit wrote:

> On Tue, Jan 03, 2006 at 01:21:32PM +0000, Joel Reymont wrote:
>
>>To add to my list of questions...
>>
>>Are there any limits on the size of the receive queue in Erlang?
>
> I entered your exact question into google and the first link that popped up
> was the Erlang FAQ--try reading that.
>
> You might even want to skip to the following section:
>
>   10.8.3. What limits does Erlang have?
>
> There you will find a link to even more information, and so on.

Neither the FAQ, nor the "Efficiency Guide" that it links to, actually answers
Joel's question. The lack of any explicit statement about a receive queue limit
might be inferred to mean that there is no limit other than heap sizes, but I
wouldn't be sure of that without looking at the implementation.

--
David Hopwood <david.nospam.hopwood>




Logging to one process from thousands: How does it work?

Rick Pettit-2
On Wed, Jan 04, 2006 at 09:39:13PM +0000, David Hopwood wrote:

> Rick Pettit wrote:
> > On Tue, Jan 03, 2006 at 01:21:32PM +0000, Joel Reymont wrote:
> >
> >>To add to my list of questions...
> >>
> >>Are there any limits on the size of the receive queue in Erlang?
> >
> > I entered your exact question into google and the first link that popped up
> > was the Erlang FAQ--try reading that.
> >
> > You might even want to skip to the following section:
> >
> >   10.8.3. What limits does Erlang have?
> >
> > There you will find a link to even more information, and so on.
>
> Neither the FAQ, nor the "Efficiency Guide" that it links to, actually answers
> Joel's question. The lack of any explicit statement about a receive queue limit
> might be inferred to mean that there is no limit other than heap sizes, but I
> wouldn't be sure of that without looking at the implementation.

My apologies.

-Rick



Logging to one process from thousands: How does it work?

Sean Hinde-2
In reply to this post by David Hopwood-2

On 4 Jan 2006, at 21:39, David Hopwood wrote:

> Rick Pettit wrote:
>> On Tue, Jan 03, 2006 at 01:21:32PM +0000, Joel Reymont wrote:
>>
>>> To add to my list of questions...
>>>
>>> Are there any limits on the size of the receive queue in Erlang?
>>
>> I entered your exact question into google and the first link that  
>> popped up
>> was the Erlang FAQ--try reading that.
>>
>> You might even want to skip to the following section:
>>
>>   10.8.3. What limits does Erlang have?
>>
>> There you will find a link to even more information, and so on.
>
> Neither the FAQ, nor the "Efficiency Guide" that it links to,  
> actually answers
> Joel's question. The lack of any explicit statement about a receive  
> queue limit
> might be inferred to mean that there is no limit other than heap  
> sizes, but I
> wouldn't be sure of that without looking at the implementation.

Maybe this is seen as rather basic for the FAQ (FAQs are normally  
written by people who have forgotten the process of learning..).

It is explained very nicely in the chapter on Inter Process  
Communication in the Erlang book (p69)

Sean





Logging to one process from thousands: How does it work?

Scott Lystig Fritchie-3
In reply to this post by Ulf Wiger-5
>>>>> "uw" == Ulf Wiger <ulf> writes:

>> How do you implement a bounded queue in Erlang without
>> busy-waiting?

uw> Difficult, but you can achieve back-pressure by making the
uw> communication with the logger synchronous.

How about an unbounded queue that can raise its own priority?  Then we
take advantage of the 'high' vs. 'normal' scheduling behavior.  I
haven't actually tried *compiling* this code, but hopefully it gets
the idea across.

One potential problem is that my_recursive_func() could stop calling
itself before the number of messages in the queue falls below the low
water mark, leaving the process stuck at high priority.

-define(WATERMARK_HIGH, 100).
-define(WATERMARK_LOW,    5).

my_recursive_func(MyState) ->
    {message_queue_len, Msgs} = process_info(self(), message_queue_len),
    if Msgs > ?WATERMARK_HIGH -> process_flag(priority, high);
       Msgs < ?WATERMARK_LOW  -> process_flag(priority, normal);
       true                   -> ok
    end,
    %% Do real stuff here....
    receive
        _Msg ->
            %% (handle the message, update MyState, etc.)
            my_recursive_func(MyState)
    end.

-Scott



Can there be limits on message queue length?

David Hopwood-2
In reply to this post by Sean Hinde-2
Sean Hinde wrote:

> On 4 Jan 2006, at 21:39, David Hopwood wrote:
>> Rick Pettit wrote:
>>> On Tue, Jan 03, 2006 at 01:21:32PM +0000, Joel Reymont wrote:
>>>
>>>> To add to my list of questions...
>>>>
>>>> Are there any limits on the size of the receive queue in Erlang?
>>>
>>> I entered your exact question into google and the first link that
>>> popped up was the Erlang FAQ--try reading that.
>>>
>>> You might even want to skip to the following section:
>>>
>>>   10.8.3. What limits does Erlang have?
>>>
>>> There you will find a link to even more information, and so on.
>>
>> Neither the FAQ, nor the "Efficiency Guide" that it links to,
>> actually answers Joel's question. The lack of any explicit statement
>> about a receive queue limit might be inferred to mean that there is
>> no limit other than heap sizes, but I wouldn't be sure of that
>> without looking at the implementation.
>
> Maybe this is seen as rather basic for the FAQ (FAQs are normally
> written by people who have forgotten the process of learning..).

More specifically, by people who are already too familiar with the
system that the FAQ is about. That's understandable, but it means that
a conscious effort should be made not to simply dismiss questions as
"too basic" or easily answered by Google, if they are not. (This is
intended as constructive criticism.)

> It is explained very nicely in the chapter on Inter Process
> Communication in the Erlang book (p69)

Maybe I'm being dense, but this does not seem to me to be either basic,
or explained by p69 of the Erlang book. The most relevant paragraph of
the latter says:

# Erlang has a selective receive mechanism, thus no message arriving
# unexpectedly at a process can block other messages to that process.
# However, as any messages not matched by receive are left in the
# mailbox, it is the programmer's responsibility to make sure that the
# system does not fill up with such messages.

This says that the programmer should ensure that a message queue does
not "fill up". It doesn't say whether "filling up" would occur only as
a result of heap limits, or whether it could occur at some smaller,
implementation-dependent limit. (It also doesn't say whether it is
only the recipient's heap that can act as the limit, or what happens
if the recipient's heap is exhausted asynchronously.)

There are other asynchronous message passing systems representing each
of these possibilities, so it's not obvious.


Hmm, the "Erlang specification" pointed to by the FAQ seems to be
the "Erlang 4.7.3 Reference Manual, DRAFT (0.7)" dated February 1999.
Is this really the most up-to-date written specification of Erlang?
Anyway, this specification doesn't appear (by skimming section 10) to
say whether there are, or are required not to be, any limits (besides
heap size) on message queue length.

--
David Hopwood <david.nospam.hopwood>




Logging to one process from thousands: How does it work?

chandru
In reply to this post by Ulf Wiger-5
On 03/01/06, Ulf Wiger <ulf> wrote:
> Den 2006-01-03 22:03:43 skrev Joel Reymont <joelr1>:
>
> > How do you implement a bounded queue in Erlang without busy-waiting?
>
> Difficult, but you can achieve back-pressure by
> making the communication with the logger synchronous.

We've had the same problem of multiple processes sending log messages
to one process. As other people have written, you can do caching
before writing, use synchronous calls etc. But none of these will
guarantee any protection. You can still overwhelm a process employing
all these techniques.

What would be nice, though, is support for bounded queues where a process
can specify the maximum size for its message queue. New messages
which arrive once this max size is reached can then be thrown away by
the runtime system. At the moment, the result of sending a message to
a process is the message itself (*) - in the case of the recipient
having a bounded queue, maybe the result could be an error (**)?

Chandru

(*) Unless you are sending a message using a registered process name
that is no longer registered, in which case the caller will exit with badarg.

(**) I don't think anyone uses the return value of the ! operator so
old code should be ok?



Logging to one process from thousands: How does it work?

Sean Hinde-2
Hi Chandru,

On 5 Jan 2006, at 16:11, chandru wrote:

> On 03/01/06, Ulf Wiger <ulf> wrote:
>> Den 2006-01-03 22:03:43 skrev Joel Reymont <joelr1>:
>>
>>> How do you implement a bounded queue in Erlang without busy-waiting?
>>
>> Difficult, but you can achieve back-pressure by
>> making the communication with the logger synchronous.
>
> We've had the same problem of multiple processes sending log messages
> to one process. As other people have written, you can do caching
> before writing, use synchronous calls etc. But none of these will
> guarantee any protection. You can still overwhelm a process employing
> all these techniques.

I'm not sure I see this.

It is possible to overwhelm a whole system by sending it too much  
traffic (unless you have some overload protection which can push back  
against incoming traffic - {active, once} ?).

It is also possible to overwhelm the disk subsystem, in which case  
you just need to buy more hardware. Both of these cases are normal  
out of capacity situations which Erlang has shown itself to be more  
than capable of handling.

If the overall system is synchronous you should not see message  
queues fill up out of control.

>
> What will be nice though is support for bounded queues where a process
> can specify the maximum size for it's message queue. New messages
> which arrive once this max size id reached can then be thrown away by
> the runtime system. At the moment, the result of sending a message to
> a process is the message itself(*) - in the case of the recipient
> having a bounded queue, maybe the result can be an error (**) ?

Or make the message consumer drop events which it cannot handle. Just  
receiving a message takes very little time, and then you can  
guarantee that important messages do get handled.

Sean
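
(A sketch of that consumer-side shedding; the watermark and the
handle/2 helper are made up, and the real machinery is just
process_info/2 plus a guard on the receive clause:)

-define(MAX_BACKLOG, 1000).

loop(State) ->
    {message_queue_len, Len} = process_info(self(), message_queue_len),
    receive
        {important, Msg} ->
            loop(handle(Msg, State));
        {log, Msg} when Len < ?MAX_BACKLOG ->
            loop(handle(Msg, State));
        {log, _Dropped} ->
            %% Over the watermark: receiving and discarding is cheap,
            %% and the important messages above still get through.
            loop(State)
    end.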




Logging to one process from thousands: How does it work?

chandru
Hi Sean,

On 05/01/06, Sean Hinde <sean.hinde> wrote:

> Hi Chandru,
>
> On 5 Jan 2006, at 16:11, chandru wrote:
>
> > On 03/01/06, Ulf Wiger <ulf> wrote:
> >> Den 2006-01-03 22:03:43 skrev Joel Reymont <joelr1>:
> >>
> >>> How do you implement a bounded queue in Erlang without busy-waiting?
> >>
> >> Difficult, but you can achieve back-pressure by
> >> making the communication with the logger synchronous.
> >
> > We've had the same problem of multiple processes sending log messages
> > to one process. As other people have written, you can do caching
> > before writing, use synchronous calls etc. But none of these will
> > guarantee any protection. You can still overwhelm a process employing
> > all these techniques.
>
> I'm not sure I see this.
>
> It is possible to overwhelm a whole system by sending it too much
> traffic (unless you have some overload protection which can push back
> against incoming traffic - {active, once} ?).
>
> It is also possible to overwhelm the disk subsystem, in which case
> you just need to buy more hardware. Both of these cases are normal
> out of capacity situations which Erlang has shown itself to be more
> than capable of handling.
>
> If the overall system is synchronous you should not see message
> queues fill up out of control.

Not necessarily. If I decide to allow 100 msgs/sec into the system,
and each of those messages generates a number of log entries, the
message queue in the logger process can quickly build up if the hard
disk becomes unresponsive at some point (even with synchronous logging).
The problem I have with this is that I can't always guarantee that I
will safely handle 100 msgs/sec.

> Or make the message consumer drop events which it cannot handle. Just
> receiving a message takes very little time, and then you can
> guarantee that important messages do get handled.

I suppose I could do this - but that'll mean invoking
process_info(self(), message_queue_len) every time I handle a new
message. Is the message queue length actually incremented every time a
message is placed in a process's queue? Or is the length computed
every time I call process_info/2?

cheers
Chandru

