enif_send backpressure

enif_send backpressure

Roger Lipscombe-2
It's my understanding that if a normal Erlang process does Pid ! Msg, and Pid has a particularly full message queue, then the sending process is penalised (e.g. it gives up the remainder of its timeslice).

Is there any way to implement something similar for enif_send from a NIF?

I'm calling it from a background thread in my NIF, and it would be nice if it returned (for example) the size of the recipient's message queue, so that I could implement backpressure in my NIF.

Any ideas?
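For context, this is roughly what the send looks like today from a created (non-scheduler) thread; a minimal sketch, with the helper name and pid handling just illustrative:

    #include "erl_nif.h"

    /* Sketch: sending to an Erlang process from a created (non-scheduler)
     * thread. enif_send only reports success or failure; it says nothing
     * about how full the recipient's message queue is. */
    static void send_from_worker(ErlNifPid *to_pid)
    {
        ErlNifEnv *msg_env = enif_alloc_env();
        ERL_NIF_TERM msg = enif_make_atom(msg_env, "tick");

        /* caller_env is NULL because we are not on a scheduler thread. */
        if (!enif_send(NULL, to_pid, msg_env, msg)) {
            /* recipient no longer exists */
        }
        enif_free_env(msg_env);
    }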



Re: enif_send backpressure

Roger Lipscombe-2
On 25 May 2017 at 13:20, Roger Lipscombe <[hidden email]> wrote:
> It's my understanding that if a normal Erlang process does Pid ! Msg, and Pid has a particularly full message queue, then the sending process is penalised (e.g. it gives up the remainder of its timeslice).

I found the relevant code. Search for erts_use_sender_punish in the OTP source.

> Is there any way to implement something similar for enif_send from a NIF?

Note that enif_send calls erts_queue_message, which *does* return the recipient's queue length (afaict). It's just that enif_send discards the result. Would a PR which implemented (e.g.) enif_send_return_len -- naming is hard, suggestions appreciated -- be considered for OTP 20.x?
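Roughly what I have in mind, as a sketch only (the name and the exact return convention are both up for grabs):

    /* Hypothetical variant of enif_send, for discussion only. Behaves like
     * enif_send, but on success also writes the recipient's message-queue
     * length (after enqueueing) to *queue_len. */
    int enif_send_return_len(ErlNifEnv *caller_env, ErlNifPid *to_pid,
                             ErlNifEnv *msg_env, ERL_NIF_TERM msg,
                             unsigned int *queue_len);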



Re: enif_send backpressure

Alex S.
Re the name: when in doubt, just add an “_ex” suffix :)

Re: enif_send backpressure

Felix Gallo-2
In reply to this post by Roger Lipscombe-2
Rather than dive into the details of another process, wouldn't it be more Erlang-like to use enif_consume_timeslice (http://erlang.org/doc/man/erl_nif.html#enif_consume_timeslice) and/or dirty NIFs and just let the scheduler do its thing?
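A minimal sketch of that pattern, assuming the work runs inside an ordinary NIF call on a scheduler thread (the chunk loop is illustrative, and a real implementation would carry its progress in argv):

    #include "erl_nif.h"

    static ERL_NIF_TERM do_work(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
    {
        int i;
        for (i = 0; i < 100; i++) {
            /* ... process one chunk of work here ... */

            /* Report roughly how much of the 1 ms timeslice the chunk used
             * (here: 5%). A non-zero return means the slice is exhausted
             * and we should yield, e.g. by rescheduling ourselves. */
            if (enif_consume_timeslice(env, 5)) {
                return enif_schedule_nif(env, "do_work", 0, do_work, argc, argv);
            }
        }
        return enif_make_atom(env, "ok");
    }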

F.


Re: enif_send backpressure

Roger Lipscombe-2
On 25 May 2017 at 18:07, Felix Gallo <[hidden email]> wrote:
> Rather than dive into the details of another process, wouldn't it be more Erlang-like to use enif_consume_timeslice (http://erlang.org/doc/man/erl_nif.html#enif_consume_timeslice) and/or dirty NIFs and just let the scheduler do its thing?

This stuff's running on a background thread. Specifically, it's running a Squirrel VM (squirrel-lang.org) on a background thread. Some of the calls in the Squirrel code result in an enif_send to the accompanying (1:1) Erlang process. I'd like the Squirrel VM to (briefly) pause[1] if the Erlang process has too many messages in its queue.

[1] Not actually pause; we're running tens of thousands of these on a pool of background threads, and we want the one that's calling enif_send too frequently to yield, *exactly* as an Erlang process would.


Re: enif_send backpressure

Felix Gallo-2
If your NIF is running in a process managed by the scheduler, then as long as it uses enif_consume_timeslice appropriately, when it sends a message to a loaded queue it will be penalized by a large number of reductions, get deprioritized, yield, and drop to the bottom of the run queue, unless I'm misunderstanding. If one of the other tens of thousands of squirrels is runnable with a higher priority, it will take over the run slot.

That way you wouldn't have to reach into the internals of another process to make your own determination, which seems like it'd be a better idea.  But again, perhaps I'm confused.

F.


Re: enif_send backpressure

Roger Lipscombe-2
On 25 May 2017 at 20:53, Felix Gallo <[hidden email]> wrote:
> If your NIF is running in a process managed by the scheduler,

It's not. It's running on a background thread that's unrelated to Erlang. We call enif_send from that background thread.

At some point we'll look at moving the whole thing to use dirty schedulers, but that's a couple of months of work vs. what looks like about a 10 line patch.
 
> That way you wouldn't have to reach into the internals of another process to make your own determination, which seems like it'd be a better idea.

Given that erts_queue_message *already* returns the message queue length of the recipient, it's not exactly "reaching into the internals".


Re: enif_send backpressure

Daniel Goertzen-3
In reply to this post by Roger Lipscombe-2
Could the existing implementation of enif_send() do a thread yield when sending to a congested process?

The first env parameter will tell you whether the caller is a created thread or a process thread, so either a thread yield or a sender punish could be performed.
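Sketched as a wrapper around the existing call rather than as a patch to ERTS, and with the congestion check being exactly the piece that does not exist today, the idea would be something like:

    #include "erl_nif.h"

    /* Hypothetical: returns non-zero if to_pid's queue is "too long".
     * No such check is available through the NIF API today. */
    int recipient_queue_is_congested(ErlNifPid *to_pid);

    static int send_with_backpressure(ErlNifEnv *caller_env, ErlNifPid *to_pid,
                                      ErlNifEnv *msg_env, ERL_NIF_TERM msg)
    {
        int ok = enif_send(caller_env, to_pid, msg_env, msg);

        if (ok && recipient_queue_is_congested(to_pid)) {
            if (caller_env == NULL) {
                /* caller is a created thread: yield the OS thread */
            } else {
                /* caller is a process thread: punish the sender, e.g.
                 * charge it enough of its timeslice that it yields */
            }
        }
        return ok;
    }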



Re: enif_send backpressure

Roger Lipscombe-2
My call to enif_send is from a non-Erlang background thread, so thread yield and sender punish don't *mean* anything there. I'll be implementing the sender-punish / thread-yield in the Squirrel bindings, so *I* need to know whether to do that or not -- hence returning the queue length of the recipient.
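To make it concrete, this is roughly what I'd do on the Squirrel-binding side if the queue length came back; everything here is hypothetical (the _return_len variant, the threshold, and yield_this_worker()):

    #include "erl_nif.h"

    /* Hypothetical API from earlier in the thread, plus an illustrative
     * worker-pool hook; neither exists today. */
    int enif_send_return_len(ErlNifEnv *caller_env, ErlNifPid *to_pid,
                             ErlNifEnv *msg_env, ERL_NIF_TERM msg,
                             unsigned int *queue_len);
    void yield_this_worker(void);

    #define QUEUE_HIGH_WATERMARK 1000   /* illustrative threshold */

    static void send_from_squirrel(ErlNifPid *to_pid, ErlNifEnv *msg_env,
                                   ERL_NIF_TERM msg)
    {
        unsigned int queue_len = 0;

        if (enif_send_return_len(NULL, to_pid, msg_env, msg, &queue_len)
            && queue_len > QUEUE_HIGH_WATERMARK) {
            /* Recipient is backed up: park this Squirrel VM and let the
             * worker pool run a different one for a while. */
            yield_this_worker();
        }
    }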


Re: enif_send backpressure

Lukas Larsson-8
In reply to this post by Roger Lipscombe-2
Hello,

On Thu, May 25, 2017 at 2:20 PM, Roger Lipscombe <[hidden email]> wrote:
> Is there any way to implement something similar for enif_send from a NIF?

It is possible to get that information in the current implementation, but it may very well not be possible in future implementations. We don't want to create an API that exposes an implementation detail that may very well change in the future. If you need that type of back pressure, you should implement your own synchronization mechanism.

Lukas



Re: enif_send backpressure

Roger Lipscombe-2
On 29 May 2017 at 13:13, Lukas Larsson <[hidden email]> wrote:
> It is possible to get that information in the current implementation, but it may very well not be possible in future implementations. We don't want to create an API that exposes an implementation detail that may very well change in the future.

That's fair.
 
> If you need that type of back pressure, you should implement your own synchronization mechanism.

Any suggestions for things I should take a look at?

Thanks,
Roger.



Re: enif_send backpressure

Daniel Goertzen-3
When you send a message to the process you could increment a counter, and when the process finishes handling the message it can call a NIF to decrement the counter. NIF resources might be handy if you have many counters to keep track of.
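A minimal sketch of that scheme, assuming one counter per worker/process pair and illustrative names throughout (in a setup like Roger's, the wait would instead tell the worker pool to run a different Squirrel VM for a while):

    #include "erl_nif.h"

    /* A plain global for brevity; in practice the counter, mutex and cond
     * would live in a NIF resource, one per worker/process pair, and the
     * mutex/cond would be created in the NIF load callback. */
    static ErlNifMutex *lock;
    static ErlNifCond  *drained;
    static long         in_flight;      /* messages sent but not yet acked */
    #define HIGH_WATERMARK 1000          /* illustrative threshold */

    /* Called on the background thread just before enif_send: wait while too
     * many unacknowledged messages are outstanding, then count this one. */
    static void before_send(void)
    {
        enif_mutex_lock(lock);
        while (in_flight >= HIGH_WATERMARK)
            enif_cond_wait(drained, lock);
        in_flight++;
        enif_mutex_unlock(lock);
    }

    /* NIF called by the Erlang process after it has handled one message. */
    static ERL_NIF_TERM ack_nif(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
    {
        enif_mutex_lock(lock);
        in_flight--;
        enif_cond_signal(drained);
        enif_mutex_unlock(lock);
        return enif_make_atom(env, "ok");
    }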
