gproc_dist finicky about latecomers

gproc_dist finicky about latecomers

Oliver Korpilla
Hello.

I use gproc/gproc_dist with gen_leader_revival. I have the gproc application's gproc_dist option set to all and use global names.

It works fine if and only if, for each additional node joining the cluster, I connect the node first and start gproc afterwards.
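
For illustration, a minimal sketch of the startup order that works for me; the module name and the sys.config snippet are placeholders, not my actual code:

%% sys.config (placeholder), enabling distributed gproc on all nodes:
%%   [{gproc, [{gproc_dist, all}]}].

-module(cluster_boot_sketch).
-export([connect_then_start/1]).

%% Connect to the other cluster nodes first, then start gproc,
%% so the leader election sees the full candidate set from the start.
connect_then_start(Nodes) ->
    lists:foreach(fun net_kernel:connect_node/1, Nodes),
    {ok, _Started} = application:ensure_all_started(gproc).

e.g. cluster_boot_sketch:connect_then_start(['a@host', 'b@host']).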

If I start gproc before connecting the nodes, every node insists on being the leader (I queried each node through the gproc API) and sticks with that opinion. Global aggregated counters then do not work, which breaks my application's simple load balancing.
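
The check was along these lines, run from the shell on one of the nodes (assuming gproc_dist:get_leader/0, which may not be the exact call I used):

%% Ask each connected node which node it believes is the gproc leader.
%% In the broken case, every node reports itself.
[{N, rpc:call(N, gproc_dist, get_leader, [])} || N <- [node() | nodes()]].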

I ran into this problem twice:

* When originally writing my application startup.
* When redoing the startup and forgetting why I had started gproc at that particular point.

I wanted to document this behavior somewhere.

Do people observe the same with locks_leader?

Regards,
Oliver

Re: gproc_dist finicky about latecomers

Ulf Wiger
Just as an update: I'm (very slowly) finishing a rewrite of gproc to add some extension capability. After that, I thought I'd take a look at making locks_leader the default. However, there is a reported issue with locks_leader that I'd have to look into first (https://github.com/uwiger/locks/issues/30). I apologize for having paid so little attention to this lately.

BR,
Ulf W

Re: gproc_dist finicky about latecomers

Oliver Korpilla
Hello, Ulf.

First of all: Thanks for gproc! It is very central to what I'm doing, so I'm only documenting a current limitation; I'm _not_ complaining. :)

Looking forward to future improvements. :)

Cheers,
Oliver 
 

_______________________________________________
erlang-questions mailing list
[hidden email]
http://erlang.org/mailman/listinfo/erlang-questions