sends don't block, right?


sends don't block, right?

Shawn Pearce
Ok, colo(u)r me stupid now (borrowed phrase, sorry!):

! doesn't block, right?

If I do something foolish like this:

        never_end() ->
                Pid = spawn(fun() -> receive foo -> ok end end),
                do_loop(Pid).

        do_loop(Pid) ->
                Pid ! bar,
                io:format("still going, just like the bunny~n", []),
                do_loop(Pid).

will the parent ever stop because the child's message buffer is full?

Basically, I'm asking if Erlang will let the parent in this case run
the VM out of memory before making the parent freeze.  Clearly one should
never write this code, but I'm trying to set up an async-send for my
serial driver that will "never" block the caller, as opposed to the
blocking send which makes sure the data was delivered to the endpoint
before returning.

The only reason I'm concerned here is the caller could lock up if it
gets blocked and hardware or software flow control breaks down due to
link failure.

With most serial protocols I wouldn't see the need to buffer more than
a few hundred KB of data, and the port driver already has buffers deep
enough to handle that, so in theory, port_command/2 should never block.
I just want to keep from blocking up the application, if the application
so desires.
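
To make the distinction concrete, here is a minimal sketch of the two
send flavours; the module name and message formats are invented for
illustration and are not gen_serial's actual API:

-module(send_sketch).
-export([async_send/2, sync_send/3]).

%% Fire and forget: `!' never blocks, so this returns ok immediately,
%% however backed up the port owner's message queue is.
async_send(Owner, Data) ->
    Owner ! {send, Data},
    ok.

%% Blocking flavour: wait for an acknowledgement, with a timeout so a
%% wedged serial link cannot freeze the caller forever.
sync_send(Owner, Data, Timeout) ->
    Ref = make_ref(),
    Owner ! {send, self(), Ref, Data},
    receive
        {Ref, ok} -> ok
    after Timeout ->
        {error, timeout}
    end.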

--
Shawn.

  C Code.
  C Code Run.
  Run, Code, RUN!
  PLEASE!!!!



sends don't block, right?

Chris Pressey
On Tue, 24 Feb 2004 22:48:21 -0500
Shawn Pearce <spearce> wrote:

> Ok, colo(u)r me stupid now (borrowed phrase, sorry!):
>
> ! doesn't block, right?

Right.

> If I do something foolish like this:
>
> never_end() ->
> Pid = spawn(fun() -> receive foo -> ok end end),
> do_loop(Pid).
>
> do_loop(Pid) ->
> Pid ! bar,
> io:format("still going, just like the bunny~n", []),
> do_loop(Pid).
>
> will the parent ever stop because the child's message buffer is full?

No.  It'll crash.
By which I mean the Erlang *node* will crash, not just the process.

> Basically, I'm asking if Erlang will let the parent in this case run
> the VM out of memory before making the parent freeze.

Yes, exactly that.

At least, that was my experience last time I tried anything like this.
Perhaps try it yourself and see?
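
A cheap way to try it without waiting for the node to die is to send a
burst and inspect the victim's queue; process_info/2 and
erlang:memory/1 are standard BIFs, and the module name here is made up:

-module(queue_probe).
-export([run/1]).

%% Send N messages to a process that never consumes them, then report
%% the queue length and the node's total memory use.
run(N) ->
    Pid = spawn(fun() -> receive foo -> ok end end),
    [Pid ! bar || _ <- lists:seq(1, N)],
    {message_queue_len, Len} = process_info(Pid, message_queue_len),
    io:format("queue: ~p messages, node total: ~p bytes~n",
              [Len, erlang:memory(total)]),
    exit(Pid, kill).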

-Chris



sends don't block, right?

Shawn Pearce
Chris Pressey <cpressey> wrote:
> On Tue, 24 Feb 2004 22:48:21 -0500
> Shawn Pearce <spearce> wrote:
>
> > Basically, I'm asking if Erlang will let the parent in this case run
> > the VM out of memory before making the parent freeze.
>
> Yes, exactly that.

Excellent, that's what I had thought, but it's been a while since I
had last read that fact and/or proved it to myself by reading that
section of the emulator source code.

Not that I'd ever condone taking a node down like this.  But the fact
that Erlang will grow the buffers as needed is what I'd expect.  In
truth, my serial port 'user' processes will notice something is amiss
long before they ever put over a MB or two into a message queue.

> At least, that was my experience last time I tried anything like this.
> Perhaps try it yourself and see?

Nah.  I'm not that worried about it.  It was easier to email the list
and get a response from someone like yourself who knows Erlang better
than I, than to run a node for hours trying to fill up main memory until
the node crashes.  When you have a full 1 GB of RAM available to the node
it's gonna take a while to run that test case I posted.

Thanks for the quick reply Chris.

--
Shawn.

  I'm continually AMAZED at th'breathtaking effects of WIND EROSION!!



sends don't block, right?

Taj Khattra
On Tue, Feb 24, 2004 at 11:45:02PM -0500, Shawn Pearce wrote:
> than I, than to run a node for hours trying to fill up main memory until
> the node crashes.  When you have a full 1 GB of RAM available to the node
> its gonna take a while to run that test case I posted.

you can start the node with a virtual memory limit.

e.g. on my linux box

% sh -c 'ulimit -v 8000; exec erl'
Erlang (BEAM) emulator version 5.3 [source] [hipe]

Eshell V5.3  (abort with ^G)
1> lists:duplicate(100000, abc).

Crash dump was written to: erl_crash.dump
eheap_alloc: Cannot allocate 785672 bytes of memory (of type "old_heap").
Aborted (core dumped)

-taj



sends don't block, right?

Francesco Cesarini
In reply to this post by Shawn Pearce
There are design rules
(http://www.erlang.se/doc/programming_rules.shtml) that say to always
flush unknown messages, logging the error. I usually go a step further
and suggest people make their gen_server code crash through a badmatch
if they receive unknown calls and casts, as they are bugs and should
not be sent. For handle_info, always add a debug printout, as the
story is slightly different: messages about ports closing, nodes going
down, possibly expired time-outs, or, as I recently discovered,
results from asynchronous RPCs, and other junk might appear.

Shawn - If you are worried about a blocking parent, you could solve your
problem with asynchronous call-backs and time-outs.
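
A minimal sketch of that style, with invented request names: only the
requests the server knows about get clauses, so anything else dies with
a function_clause error (the same early crash as a deliberate
badmatch), while stray info messages are merely logged:

-module(strict_server).
-behaviour(gen_server).
-export([start_link/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) -> {ok, []}.

%% Unknown calls and casts are bugs: no catch-all clause, so they crash.
handle_call(ping, _From, State) ->
    {reply, pong, State}.

handle_cast(stop, State) ->
    {stop, normal, State}.

%% handle_info is looser: log the junk and carry on.
handle_info(Info, State) ->
    error_logger:info_msg("unexpected message: ~p~n", [Info]),
    {noreply, State}.

terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.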

Cheers,
Francesco
--
http://www.erlang-consulting.com

Shawn Pearce wrote:

>Ok, colo(u)r me stupid now (borrowed phrase, sorry!):
>
>! doesn't block, right?
>
>If I do something foolish like this:
>
> never_end() ->
> Pid = spawn(fun() -> receive foo -> ok end end),
> do_loop(Pid).
>
> do_loop(Pid) ->
> Pid ! bar,
> io:format("still going, just like the bunny~n", []),
> do_loop(Pid).
>
>will the parent ever stop because the child's message buffer is full?
>
>Basically, I'm asking if Erlang will let the parent in this case run
>the VM out of memory before making the parent freeze.  Clearly one should
>never write this code, but I'm trying to set up an async-send for my
>serial driver that will "never" block the caller, as opposed to the
>blocking send which makes sure the data was delivered to the endpoint
>before returning.
>
>The only reason I'm concerned here is the caller could lock up if it
>gets blocked and hardware or software flow control breaks down due to
>link failure.
>
>With most serial protocols I wouldn't see the need to buffer more than
>a few hundred KB of data, and the port driver already has buffers deep
>enough to handle that, so in theory, port_command/2 should never block.
>I just want to keep from blocking up the application, if the application
>so desires.
>
>  
>




sends don't block, right?

Joe Armstrong
In reply to this post by Shawn Pearce
On Tue, 24 Feb 2004, Shawn Pearce wrote:

> Ok, colo(u)r me stupid now (borrowed phrase, sorry!):
>
> ! doesn't block, right?
>
> If I do something foolish like this:
>
> never_end() ->
> Pid = spawn(fun() -> receive foo -> ok end end),
> do_loop(Pid).
>
> do_loop(Pid) ->
> Pid ! bar,
> io:format("still going, just like the bunny~n", []),
> do_loop(Pid).
>
> will the parent ever stop because the child's message buffer is full?
>

  That depends on the relative rates that the producer and consumer
run at.

  If you consume messages faster than you produce them then you will
be ok - otherwise you'll be screwed.

  In the early days of Erlang I used to write a mixture of RPCs and
casts pretty much as the problem dictated - occasionally I'd run into
buffer overrun problems (such as the kind that would occur in your
example) and so I went over to the synced RPC style of programming
where buffer overflow can't occur.

  More recently I have returned to my original style of programming -
asynchronous sends are fine - but you need to take a little care to
make sure you can't go into infinite send loops (like your program) and
that if you are doing a load of asynchronous sends then you interleave
the odd RPC to synchronize things again.

  If you do a whole bundle of asynchronous sends and then an RPC
things will get synchronized at the RPC and the buffers should not
saturate - this is of course only true between pairs of processes.
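
A sketch of that pattern, with invented message formats.  Message order
is preserved between a pair of processes, so when the synchronous reply
comes back, every cast sent before it has already been consumed:

-module(sync_after_casts).
-export([start/0, send_bundle/2]).

start() ->
    spawn(fun() -> loop([]) end).

%% A bundle of asynchronous sends, then one RPC to drain the queue.
send_bundle(Server, Items) ->
    [Server ! {store, I} || I <- Items],
    Server ! {sync, self()},
    receive
        {Server, synced} -> ok
    end.

loop(Stored) ->
    receive
        {store, I}   -> loop([I | Stored]);
        {sync, From} -> From ! {self(), synced}, loop(Stored)
    end.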

  I now love my !! operator - and program clients with

        A ! B         and
        Val = A !! B

  operators

  I program the servers like this:

        receive
            {replyTo, Pid, ReplyAs, Q} ->
                ...
                Pid ! {ReplyAs, Val}
                ...
            Msg ->
                ...

  And don't use any stub routines (horrors).

  Robert (Virding) first programmed the I/O routines with this
replyTo-replyAs style - it's very nice ...
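
Since !! is not in the language today, here is the replyTo-replyAs
pattern spelled out as plain Erlang - a sketch, with a toy doubling
server standing in for real work:

-module(reply_as).
-export([start/0, rpc/2]).

start() ->
    spawn(fun loop/0).

rpc(Pid, Q) ->
    Ref = make_ref(),                  % ReplyAs tag, unique per call
    Pid ! {replyTo, self(), Ref, Q},
    receive
        {Ref, Val} -> Val
    end.

loop() ->
    receive
        {replyTo, Pid, ReplyAs, N} ->
            Pid ! {ReplyAs, 2 * N},    % tagged reply, matched unambiguously
            loop();
        _Junk ->
            loop()                     % flush unknown messages
    end.

So P = reply_as:start(), 10 = reply_as:rpc(P, 5) - and a per-module
rpc/2 like this is exactly what the proposed expansion of !! would call.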

  Now I don't write interface functions that abstract out the message
passing interface. That way I can clearly "see" the message passing.

  If you use !! at the top level of your code then you can "see" the
RPC - my brain goes (RPC: this might be slow - take care; and RPC: this
will synchronize any outstanding asynchronous message) - so I can see
what I'm doing.

  Burying this *inside* a stub function which hides the RPC makes me
lose sight of the important fact that we are doing an RPC. If the RPC
is "off site" (ie we do an RPC on a remote node) then we have
abstracted away from the single most important detail that the
programmer should know about.

  Non-local RPCs cause great performance hits.

  When I see this in my code:

        Pid @ Node !! X

  I think <<danger very slow>>

  with

        Pid !! X

  I think <<possibly slow take care>>

  but

        X ! Y

  I think <<ok but I'll need to synchronize later>>

  Hiding this essential detail in a stub routine seems to be
abstracting away from the one essential detail that we need to know
about when writing distributed programs.

  /Joe



> Basically, I'm asking if Erlang will let the parent in this case run
> the VM out of memory before making the parent freeze.  Clearly one should
> never write this code, but I'm trying to setup an async-send for my
> serial driver that will "never" block the caller, as apposed to the
> blocking send which makes sure the data was delivered to the endpoint
> before returning.
>
> The only reason I'm concerned here is the caller could lock up if it
> gets blocked and hardware or software flow control breaks down due to
> link failure.
>
> With most serial protocols I wouldn't see the need to buffer more than
> a few hundred KB of data, and the port driver already has buffers deep
> enough to handle that, so in theory, port_command/2 should never block.
> I just want to keep from blocking up the application, if the application
> so desires.
>
>




sends don't block, right?

Vlad Dumitrescu
From: "Joe Armstrong" <joe>
>   If you use !!  at the top  level of your code then you can "see" the
> RPC - my brain goes (RPC this  might be slow - take care, and RPC this
> will synchronize any outstanding asynchronous  message) - so I can see
> what I'm doing.

You make a very nice argument for the !! operator, while explaining some basic
Erlang philosophy. If only it was easy to specify timeouts, I'd be sold!

/Vlad



sends don't block, right?

Joe Armstrong

On Wed, 25 Feb 2004, Vlad Dumitrescu wrote:

> From: "Joe Armstrong" <joe>
> >   If you use !!  at the top  level of your code then you can "see" the
> > RPC - my brain goes (RPC this  might be slow - take care, and RPC this
> > will synchronize any outstanding asynchronous  message) - so I can see
> > what I'm doing.
>
> You make a very nice argument for the !! operator, while explaining some basic
> Erlang philosophy. If only it was easy to specify timeouts, I'd be sold!
>
> /Vlad
>

  I wondered about the following:

  If we make the following one line addition to erl_parse.yrl:

  expr_100 -> expr_150 '!' '!' expr_100:
     {call, line('$1'),{atom, line('$1'), rpc},['$1', '$4']}.

  Then A !! B just gets expanded into the *local* function rpc(A, B)
and the user is free to add their own definition of rpc/2 to the
module concerned.

So in one module I might say:

   rpc(Pid, Q) ->
       Pid ! {replyTo, self(), Pid, Q},
       receive
           {Pid, Reply} ->
               Reply
       end.

In another

   rpc(Pid, Q) ->
       Pid ! {self(), Q},
       receive
           {Pid, Reply} ->
               Reply
       after 1000 ->
           exit(oops)
       end.

In another

   -include("rpc1.hrl").

In another:

   rpc(Pid, Q) ->
    gen_server:call(....)

  This is more or less how I program - I want all the !!'s to work the
same way in a given scope, ie all of them have timeouts or none, etc.

/Joe




sends don't block, right?

Shawn Pearce
Or really alter the language to allow:

        Pid !! Message after Timeout -> n end.

Yuck.  I can't believe I just wrote that pile of garbage.

Never mind, VERY bad idea.

I'm sold on the local RPC function.  Have at it,
they-who-maintain-the-compiler.

Joe Armstrong <joe> wrote:

>
> On Wed, 25 Feb 2004, Vlad Dumitrescu wrote:
>
> > From: "Joe Armstrong" <joe>
> > >   If you use !!  at the top  level of your code then you can "see" the
> > > RPC - my brain goes (RPC this  might be slow - take care, and RPC this
> > > will synchronize any outstanding asynchronous  message) - so I can see
> > > what I'm doing.
> >
> > You make a very nice argument for the !! operator, while explaining some basic
> > Erlang philosophy. If only it was easy to specify timeouts, I'd be sold!
> >
> > /Vlad
> >
>
>   I wondered about the following:
>
>   If we make the following one line addition to erl_parse.yrl:
>
>   expr_100 -> expr_150 '!' '!' expr_100:
>      {call, line('$1'),{atom, line('$1'), rpc},['$1', '$4']}.
>
>   Then A !! B just gets expanded into the *local* function rpc(A, B)
> and the user is free to add their own definition of rpc/2 to the
> module concerned.
>
> So in one module I might say:
>
>    rpc(Pid, Q) ->
>        Pid ! {replyTo, self(), Pid, Q},
>        receive
>            {Pid, Reply} ->
>                Reply
>        end.
>
> In another
>
>    rpc(Pid, Q) ->
>        Pid ! {self(), Q},
>        receive
>            {Pid, Reply} ->
>                Reply
>        after 1000 ->
>            exit(oops)
>        end.
>
> In another
>
>    -include("rpc1.hrl").
>
> In another:
>
>    rpc(Pid, Q) ->
>     gen_server:call(....)
>
>   This is more or less how I program - I want all the !!'s to work the
> same way in a given scope, ie all of them have timeouts or none, etc.
>
> /Joe
>

--
Shawn.

  Fortune's Real-Life Courtroom Quote #3:
 
  Q:  When he went, had you gone and had she, if she wanted to and were
      able, for the time being excluding all the restraints on her not to
      go, gone also, would he have brought you, meaning you and she, with
      him to the station?
  MR. BROOKS:  Objection.  That question should be taken out and shot.



sends don't block, right?

Vlad Dumitrescu
In reply to this post by Joe Armstrong
From: "Joe Armstrong" <joe>
>   If we make the following one line addition to erl_parse.yrl:
>
>   expr_100 -> expr_150 '!' '!' expr_100:
>      {call, line('$1'),{atom, line('$1'), rpc},['$1', '$4']}.
>
>   Then A !! B just gets expanded into the *local* function rpc(A, B)
> and the user is free to add their own definition of rpc/2 to the
> module concerned.

Mmm, yes, but IMHO this makes the semantics of !! not so easy to grasp. Having a
local definition helps (because of the locality) but one still has to check what
it means.

/Vlad




sends don't block, right?

Shawn Pearce
In reply to this post by Joe Armstrong
Joe, thanks for a nicely written reply.

So long as the developer is programming using send/recv (like with
gen_tcp), they will most likely synchronize with the other process,
and consequently ensure the message queues are emptied at frequent
intervals.

Really I was just a little concerned about the caller blocking up
due to buffers being jammed everywhere except on the target process'
message queue, so that the caller could do:

        gen_serial:send(Port, << ... >>),
        case gen_serial:recv(Port, 8, 3000) of
        {ok, Data} ->
                ...
        {error, timeout} ->
                error_logger:error_report(...)
        end.

and be sure that the recv call is where they will block.  From what
everyone has told me, this is basically what will happen.

You make a good argument for !!, but do you really mean to suggest that
the above should be written as:

        Port ! {send, << ... >>},
        case Port !! {recv, 8, 3000} of
        {ok, Data} ->
                ...
        {error, timeout} ->
                error_logger:error_report(...)
        end.

?

I'm quite sure I find the syntax ugly, at least in this case.  :)
But then again, this is like a file IO process, it shouldn't be
seen by the user; or the user shouldn't know it's there.  If my driver
was a linked-in driver I wouldn't even need the process at all.


Joe Armstrong <joe> wrote:

> Asynchronous sends are fine - but you need to take a little care to
> make sure you can't go into infinite send loops (like your program) and
> that if you are doing a load of asynchronous sends then you interleave
> the odd RPC to synchronize things again.
>
>   If  you do  a whole  bundle of  asynchronous sends  and then  an RPC
> things will  get synchronized  at the RPC  and the buffers  should not
> saturate - this is of course only true between pairs of processes.
>
>   I now love my !! operator - and program clients with
>
> A ! B         and
> Val = A !! B
>
>   operators
>
>   I program the servers like this:
>
> receive
>    {replyTo, Pid, ReplyAs, Q} ->
> ...
> Pid ! {ReplyAs, Val}
> ...
>    Msg ->
> ...
>
>   And don't use any stub routines (horrors).
>
>   Robert  (Virding)  first  programmed  the  I/O  routines  with  this
> replyTo-replyAs style - it's very nice ...
>
>   Now I don't write interface  functions that abstract out the message
> passing interface. That way I can clearly "see" the message passing.
>
>   If you use !!  at the top  level of your code then you can "see" the
> RPC - my brain goes (RPC this  might be slow - take care, and RPC this
> will synchronize any outstanding asynchronous  message) - so I can see
> what I'm doing.
>
>   Burying this *inside* a stub function which hides the RPC makes me
> lose sight of the important fact that we are doing an RPC. If the RPC
> is "off site" (ie we do an RPC on a remote node) then we have
> abstracted away from the single most important detail that the
> programmer should know about.
>
>   Non-local RPCs cause great performance hits.
>
>   When I see this in my code:
>
> Pid @ Node !! X
>
>   I think <<danger very slow>>
>
>   with
>
> Pid !! X
>
>   I think <<possibly slow take care>>
>
>   but
>
> X ! Y
>
>   I think <<ok but I'll need to synchronize later>>
>
>   Hiding  this  essential  detail  in  a  stub  routine  seems  to  be
> abstracting away  from the one essential  detail that we  need to know
> about when writing distributed programs.

--
Shawn.

  I had no shoes and I pitied myself.  Then I met a man who had no feet,
  so I took his shoes.
  -- Dave Barry



sends don't block, right?

Shawn Pearce
In reply to this post by Francesco Cesarini
And all of these are good rules, and my poorly written example to
find out worst-case behavior really breaks them.  :-)

My real code of course follows these.  Only I usually reply back to
the caller in a gen_server when the call is invalid/unsupported.  This
way the caller gets {error, {badcall, Call}} rather than a server
crashing.  I mean, what if a programmer does something stupid in a client
like:

        Server = ...,
        OtherServer = ...,

        case server_api:lock(OtherServer) of
        ok ->
                case other_server_api:lock(Server) of
                ok ->
        ...

?  Do we really want both servers to crash because a client application
mixed up the pids?  Usually not.  But usually the caller will crash due
to a badmatch:

        ok = server_api:lock(OtherServer),
        ok = other_server_api:lock(Server)

and wham!  The client dies, as it has the error, and the server stays up,
as it is otherwise healthy.


I was worried about the parent blocking because let's say that the serial
port has stopped being able to transmit data out the serial line.  The
OS buffer will fill up, and then the port driver buffer will fill, and
then the OS pipe buffer (between the node and the port driver) will
fill, and then the node's output buffer will fill.. and any process
using port_command/2 will block (port busy).

But I can't use a timeout with port_command/2.

But I can by doing this:

        user:
                Server ! {send, Data},
                Server ! {recv, self(), 8},
                receive
                {reply, Reply} ->
                        Reply
                after 30 * 1000 ->
                        exit(timeout)
                end

        server:
                loop(P) ->
                        receive
                        {send, D} ->
                                port_command(P, D),
                                loop(P);
                        {recv, From, L} ->
                                port_command(P, ...),
                                receive
                                {P, ..., Data} ->
                                        From ! {reply, Data}
                                end,
                                loop(P)
                        end.

and be certain that the user will not block until the receive call,
where it can handle the timeout.  Oh sure, the server will be frozen
in the port_command when handling {send, D}, but the client will just
buffer up {recv, F, L} in the server message queue, block in the receive,
wake up from the timeout, crash (as the port is all screwed up now)
and because I always link the user and the server (spawn_link is my
friend) the port will get killed.

I was just worried that if the client decided to send say 10 chunks
of data, and then block in recv, the buffering was going to be an
issue.  From what I've been told, I'm perfectly safe in that.  This
is a serial port after all, it can barely do 256,000 bits per second.
:-)

Francesco Cesarini <francesco> wrote:

> There are design rules
> (http://www.erlang.se/doc/programming_rules.shtml) to always flush
> unknown messages, logging the error. I usually go a step further and
> suggest people make their gen_server code crash through a badmatch if
> they receive unknown calls and casts, as they are bugs and should not be
> sent. For handle_info, always add a debug printout, as the story is
> slightly different. Messages for ports closing, nodes going down,
> possibly expired time-outs, or as I recently discovered, results from
> asynchronous RPCs, and other junk might appear.
>
> Shawn - If you are worried about a blocking parent, you could solve your
> problem with asynchronous call-backs and time-outs.

--
Shawn.

  Death.  Destruction.  Disease.  Horror.  That's what war is all about.
  That's what makes it a thing to be avoided.
  -- Kirk, "A Taste of Armageddon", stardate 3193.0



sends don't block, right?

Vlad Dumitrescu
> This is a serial port after all, it can barely do 256,000 bits per second. :-)

Oh, you mean it doesn't work with USB serial ports? ;-)

/Vlad



sends don't block, right?

Vance Shipley
In reply to this post by Shawn Pearce
On Wed, Feb 25, 2004 at 09:25:20AM -0500, Shawn Pearce wrote:
}  
}  ... I usually reply back to the caller in a gen_server when the
}  call is invalid/unsupported.  This way the caller gets
}  {error, {badcall, Call}} rather than a server crashing.  I
}  mean, what if a programmer does something stupid in a client


I think this is right.  The philosophy of crashing on programming
errors is a good one, however one needs to think about who the
programmer is when building APIs.  The intent of providing an
API is that the client programmer doesn't have to understand
anything other than the API.  He doesn't want to have to debug
the service he is using, therefore a crash report isn't useful
to him.

I consider API interfaces to be slightly unreliable so do some
amount of error checking on them.  I like to do things like:

foo(Integer) when is_integer(Integer) ->
        1 + Integer.

This follows the "let it crash" and "crash early" philosophy.
It causes the right sort of report:

1> t:foo(a).

=ERROR REPORT==== 25-Feb-2004::13:27:24 ===
Error in process <0.26.0> with exit value: {function_clause,[{t,foo,[a]},{erl_eval,do_apply,5},{shell,eval_loop,2}]}

** exited: {function_clause,[{t,foo,[a]},
                             {erl_eval,do_apply,5},
                             {shell,eval_loop,2}]} **


This tells the client programmer what he needs to know: that there
is no matching function clause for foo(a).  Without the guard we get:

=ERROR REPORT==== 25-Feb-2004::13:26:54 ===
Error in process <0.24.0> with exit value: {badarith,[{t,foo,1},{shell,eval_loop,2}]}

** exited: {badarith,[{t,foo,1},{shell,eval_loop,2}]} **


The client programmer scratches his head wondering why he's doing
arithmetic.  This exposes the internals of the service unnecessarily.


        -Vance



sends don't block, right?

Chris Pressey
In reply to this post by Shawn Pearce
On Tue, 24 Feb 2004 23:45:02 -0500
Shawn Pearce <spearce> wrote:

> Chris Pressey <cpressey> wrote:
> > On Tue, 24 Feb 2004 22:48:21 -0500
> > Shawn Pearce <spearce> wrote:
> >
> > > Basically, I'm asking if Erlang will let the parent in this case
> > > run the VM out of memory before making the parent freeze.
> >
> > Yes, exactly that.
>
> Excellent, that's what I had thought, but it's been a while since I
> had last read that fact and/or proved it to myself by reading that
> section of the emulator source code.
>
> Not that I'd ever condone taking a node down like this.  But the fact
> that Erlang will grow the buffers as needed is what I'd expect.

I'd much, much, MUCH rather it just kill the receiving process or
(better) discard the message after a receive buffer limit is reached,
than take down the entire node, though.

> Nah.  I'm not that worried about it.  It was easier to email the list
> and get a response from someone like yourself who knows Erlang better
> than I

Used it more in terms of sheer hours, perhaps; know it better, probably
not -- I'm still essentially mystified by OTP and erl_interface...

btw, I agree 100% (maybe 1000%) with Joe on the idea that hiding IPC
stuff inside wrapper functions is just plain *wrongheaded*.  Yes,
abstraction is good, but no, a function makes a *horrible* abstraction
for IPC -- especially in a "functional" language where functions aren't
supposed to have side-effects :)

-Chris



sends don't block, right?

Shawn Pearce
In reply to this post by Vlad Dumitrescu
Vlad Dumitrescu <vlad_dumitrescu> wrote:
> > This is a serial port after all, it can barely do 256,000 bits per second. :-)
>
> Oh, you mean it doesn't work with USB serial ports? ;-)


Err, Uhm, well, it's never been tested with them.  It should work with
those USB to RS-232 converter boxes if that's what you mean.  But it
might not, as that's a different serial port driver to Windows, and
that driver might not behave as nicely as the standard UART one does.

Perhaps it's also possible to use this code to talk to other USB
devices, but I doubt it.  But I'd like to get that supported in perhaps
a gen_usb some day.  :)

--
Shawn.

  Anyone who goes to a psychiatrist ought to have his head examined.
  -- Samuel Goldwyn



sends don't block, right?

Shawn Pearce
In reply to this post by Chris Pressey
Chris Pressey <cpressey> wrote:
> I'd much, much, MUCH rather it just kill the receiving process or
> (better) discard the message after a receive buffer limit is reached,
> than take down the entire node, though.

I agree.  If the node cannot give memory to a process for its heap or
its message queue, the process should be immediately killed with the
reason 'enomem'.  This kill should not be trappable by any catch
expressions either.

If the developer/designer/whatever doesn't want the process to just
vanish in low memory conditions they should set up a proper supervision
tree.  A good supervisor won't allocate memory, and thus should be able
to have enough free temporary space to get the 'EXIT' message of the
child, realize it's an enomem error, and handle it somehow.  It may just
be enough to restart the process, as the process may have just been
leaking a ton of memory (by stuffing things into its dictionary for
example).
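
A minimal supervisor of that shape, in standard OTP style; the child
module leaky_worker is hypothetical:

-module(mem_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Allow at most 5 restarts in 10 seconds, then give up and die.
    {ok, {{one_for_one, 5, 10},
          [{leaky_worker, {leaky_worker, start_link, []},
            permanent, 2000, worker, [leaky_worker]}]}}.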

> Used it more in terms of sheer hours, perhaps; know it better, probably
> not -- I'm still essentially mystified by OTP and erl_interface...

Heh.  I don't think there's many out there who really do know OTP.  One
thing I pick up well is low-level guts of almost anything.. so I feel
like I have a decent grasp on how much of OTP works, but certainly cannot
claim to know it as well as many of the folks here.  erl_interface isn't
bad either.. pretty simple actually.  Which is a pleasure compared to
some of the other higher-level languages and their interfaces.

> btw, I agree 100% (maybe 1000%) with Joe on the idea that hiding IPC
> stuff inside wrapper functions is just plain *wrongheaded*.  Yes,
> abstraction is good, but no, a function makes a *horrible* abstraction
> for IPC -- especially in a "functional" language where functions aren't
> supposed to have side-effects :)

Yea, I've become convinced of this now too.

So I'll just keep asking, when will !! become part of the language?  :)

I'd have to suggest that if !! is added the way Joe is suggesting
(use a local rpc/2 function to do the send/receive operations) that the
compiler either automatically generate an rpc/2 function if one is not
supplied, or that it give a very good error message indicating that
the rpc/2 function is missing, what it needs to do, and why the
compiler is requesting it.

--
Shawn.



sends don't block, right?

Ulf Wiger (AL/EAB)
On Wed, 25 Feb 2004 21:07:08 -0500, Shawn Pearce <spearce>
wrote:

> Chris Pressey <cpressey> wrote:

>> btw, I agree 100% (maybe 1000%) with Joe on the idea that hiding IPC
>> stuff inside wrapper functions is just plain *wrongheaded*.  Yes,
>> abstraction is good, but no, a function makes a *horrible* abstraction
>> for IPC -- especially in a "functional" language where functions aren't
>> supposed to have side-effects :)
>
> Yea, I've become convinced of this now too.

I agree. I'd like to be able to see message passing more clearly.
It will still be possible to hide it behind function calls, just
like today, since !! is only syntactic sugar.

One problem is that it's more difficult to trace IPC dependencies.
We have xref for finding function call dependencies between modules,
and at AXD 301, we have CCviewer which provides hypertext linking
with forward and backward references (basically, Emacs tags and Distel
do this too.) But what do you do to follow the trail of a message while
reading source code? It can be arbitrarily difficult. I've spent some
time pondering the problem trying to add something to CCviewer, but
it's not easy. And if you can't tell a tool where to jump, humans
will have more difficulty as well.
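
At runtime, at least, the trail can be made visible: the dbg module can
trace every message a given process sends and receives (a small sketch;
the wrapper module and function names here are invented):

-module(msg_trail).
-export([follow/1]).

%% Print all messages sent and received by Pid ('m' = message events).
follow(Pid) ->
    {ok, _} = dbg:tracer(),
    {ok, _} = dbg:p(Pid, [m]),
    ok.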

I'd like to see some invention here as well. This would strengthen Joe's
case that we could focus more on message passing and RPCs.


> So I'll just keep asking, when will !! become part of the langauge?  :)
>
> I'd have to suggest that if !! is added the way Joe is suggesting
> (use a local rpc/2 function to do the send/receive operations) that the
> compiler either automatically generate an rpc/2 function if one is not
> supplied, or that it give a very good error message indicating that
> the rpc/2 function is missing, what it needs to do, and why the
> compiler is requesting it.

Or add an

-import(gen, [rpc/2]).

Don't know if this should be done automatically - but probably.
gen:rpc/2 should use basically the same semantics as gen:call/2
(that would mean a default 5 second timeout), and there should
also be a gen:rpc/3, which one could easily use by adding

rpc(Server, Request) ->
    gen:rpc(Server, Request, 17000).

e.g. for a 17 second timeout.

/Uffe
--
Ulf Wiger




sends don't block, right?

Thomas Lindgren
In reply to this post by Chris Pressey

--- Chris Pressey <cpressey> wrote:
> I'd much, much, MUCH rather it just kill the receiving process or
> (better) discard the message after a receive buffer limit is reached,
> than take down the entire node, though.

Yes ... I think Safe Erlang had some support for this
(but did it ever get implemented?). Though when I
fretted about similar issues, Per Bergqvist told me I
could also just use several nodes and limit the size
of each to get roughly the same effect. (At some extra
cost, of course.) This seemed sensible enough.

> btw, I agree 100% (maybe 1000%) with Joe on the idea that hiding IPC
> stuff inside wrapper functions is just plain *wrongheaded*.  Yes,
> abstraction is good, but no, a function makes a *horrible* abstraction
> for IPC

No more rpc:call, gen_server:call or gen_tcp:send
then? I agree that encapsulation and performance make
uneasy bedfellows, but I would dearly like to have
some way to abstract certain things.

> -- especially in a "functional" language where functions aren't
> supposed to have side-effects :)

Very true. The cure seems worse than the disease,
though :-) Is there a better way than either?

Best,
Thomas

