How to deploy an upgrade (build with rebar3)

Hello,

I use rebar3 to build my Erlang applications.

For application deployment, I build a compressed archive and copy it to the target. The first release works, but how can I upgrade my application on the running system?

Say I have a running release (1.0.2); after some code changes I build a new release (1.0.3) with $ rebar3 as prod tar. Is myrel-1.0.3.tar.gz sufficient for deploying the upgrade? I copied myrel-1.0.3.tar.gz to the root directory of my application.

But I get an error when I try to unpack/install the new release:

Unpack failed: release_package_not_found
Installed versions:
* 1.0.2 permanent

I think the problem is how to get my new release/version listed by the command: $ bin/myrel versions
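Concretely, this is roughly what I am doing (the install path /opt/myrel is just an example; unpack/install are the subcommands provided by the relx-generated start script, so the exact syntax may differ between versions):

    $ rebar3 as prod tar
    $ scp _build/prod/rel/myrel/myrel-1.0.3.tar.gz target:/opt/myrel/
    $ cd /opt/myrel
    $ bin/myrel unpack 1.0.3     # this is where the error above appears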

Kind Regards,
Yao

Re: How to deploy an upgrade (build with rebar3)

Guilherme Andrade
Hello Yao,

`rebar3_appup_plugin` may help you:

https://github.com/lrascao/rebar3_appup_plugin

(I know this doesn't answer your actual question - I'm not knowledgeable enough - but it works very well for me and perhaps it will solve your problem, too.)
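In case it is useful, the workflow I follow with the plugin is roughly this (a sketch from memory, so double-check it against the plugin's README; myrel, 1.0.2 and 1.0.3 are taken from your example, and the plugin first needs to be added to {plugins, [rebar3_appup_plugin]} in rebar.config):

    # build the old version first, then the new one
    $ git checkout 1.0.2 && rebar3 as prod release
    $ git checkout 1.0.3 && rebar3 as prod release

    # let the plugin generate .appup files by diffing the two builds
    $ rebar3 as prod appup generate

    # generate the relup (you may need to pass the release name and
    # up-from version explicitly) and pack everything into a tarball
    $ rebar3 as prod relup
    $ rebar3 as prod tar

    # on the target: copy the tarball to wherever the start script expects
    # release packages (typically under releases/), then
    $ bin/myrel upgrade 1.0.3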

--
Guilherme

Re: How to deploy an upgrade (build with rebar3)

Yao
Hello Guilherme,

Thanks for the tip!

After some experimenting (following the instructions for rebar3_appup_plugin), I can now deploy my application upgrade to the running system!

Although there are still some problems (on my running system, the established WebSocket connection (its process) was killed after the upgrade), the whole thing works fine.
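One thing I plan to check is which processes are still holding on to old code of the handler module (a rough sketch; my_ws_handler is just a stand-in for my actual WebSocket handler module):

    %% Processes for which check_process_code/2 returns true still execute or
    %% reference the old version of the module, and will be killed when that
    %% old version is purged.
    [P || P <- erlang:processes(),
          erlang:check_process_code(P, my_ws_handler)].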

I will investigate further why the running process was killed after the upgrade; I think the link below might be helpful:

Kind Regards,
Yao


Re: How to deploy an upgrade (build with rebar3)

Fred Hebert

On Sun, Feb 23, 2020 at 1:37 AM Yao <[hidden email]> wrote:
> Although there are still some problems (on my running system, the
> established WebSocket connection (its process) was killed after the
> upgrade), the whole thing works fine.
>
> I will investigate further why the running process was killed after the
> upgrade; I think the link below might be helpful:


Yeah, it's likely the soft vs. brutal purge thing will be significant. A long-lived process that is just sitting there doing nothing (waiting for a message) can have more than one upgrade take place before it handles its next message, and it then fails because the process is killed for holding on to an old version. Either all these processes need to be sent a message that forces them through a fully-qualified call (which moves them onto the newest loaded version of the module), or they need to drop old references (local funs like fun f/2, or closures like fun(X) -> X + 1 end, which are technically references to the module version in which they were declared).
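As a minimal sketch of the difference (the module name and handle/2 are placeholders), a hand-rolled receive loop only survives repeated upgrades if its self-call is fully qualified:

    -module(my_worker).   %% placeholder name
    -export([loop/1]).

    loop(State) ->
        receive
            Msg ->
                %% ?MODULE:loop/1 is a fully-qualified (external) call, so on
                %% the next message the process jumps to the newest loaded
                %% version of the module. A plain local call, loop(...), would
                %% pin the process to the version it started with, and after a
                %% second upgrade the old code is purged and the process killed.
                ?MODULE:loop(handle(Msg, State))
        end.

    %% Placeholder for whatever the process actually does with a message.
    handle(_Msg, State) ->
        State.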

Another common pattern is seeing some servers die when their acceptor pool crashes for the same reasons -- the acceptors maintain a reference to an older module, and following a couple of reloads they get killed all at once in a kind of storm. Ranch, which underpins Cowboy, is the one most frequently seen doing this, as its acceptor pool isn't safe (all accept calls wait for infinity <https://github.com/ninenines/ranch/blob/ae84436f7ceed06a09e3fe1afb30e675579b7621/src/ranch_acceptor.erl#L35>, which as far as I can tell is a performance-based decision); a workaround for this is to shrink your acceptor pool to be small enough that you can reasonably expect all acceptors to be used between two upgrades.
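For a Cowboy listener, that workaround might look roughly like this (option names per Ranch 1.6+/2.x; the listener name, port, pool size and routes are placeholders):

    %% Placeholder routes; the interesting part is the transport options.
    Dispatch = cowboy_router:compile([{'_', []}]),
    {ok, _} = cowboy:start_clear(my_http_listener,
        #{num_acceptors => 2,                 %% small pool (default is 10)
          socket_opts => [{port, 8080},
                          {backlog, 1024}]},  %% absorb the accept pause
        #{env => #{dispatch => Dispatch}}).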

A server that would otherwise be safe for reloads in that case could be Elli, which has a timeout-then-reload pattern that specifically guards for upgrades (as long as they happen less frequently than the accept timeout duration), or YAWS, which will start a new acceptor process on a timeout for a similar result.

Do note that this latter point on acceptor pools is only tangentially related to websockets, since having the server handle its acceptor pool timing out (and allowing upgrades) will not prevent websocket connections from dying during upgrades; they're just another thing to test and worry about.

Re: How to deploy an upgrade (build with rebar3)

Loïc Hoguin
On 24/02/2020 15:15, Fred Hebert wrote:
> Another common pattern is seeing some servers die when their acceptor
> pool crashes for the same reasons -- the acceptors maintain a reference
> to an older module, and following a couple of reloads they get killed
> all at once in a kind of storm. Ranch, which underpins Cowboy, is the
> one most frequently seen doing this, as its acceptor pool isn't safe
> (all accept calls wait for infinity
> <https://github.com/ninenines/ranch/blob/ae84436f7ceed06a09e3fe1afb30e675579b7621/src/ranch_acceptor.erl#L35>,
> which as far as I can tell is a performance-based decision); a
> workaround for this is to shrink your acceptor pool to be small enough
> that you can reasonably expect all acceptors to be used between two
> upgrades.

Ranch 2.0 onward fully supports upgrades and comes with an appup you can
use directly: https://github.com/ninenines/ranch/blob/master/src/ranch.appup

We've recently added a test suite:
https://github.com/ninenines/ranch/blob/master/test/upgrade_SUITE.erl

The workaround described is more or less what Ranch 2.0 does during the
upgrade: stop all the acceptor processes, perform the upgrade, resume
the acceptor processes. The listening socket(s) stays open, so no new
connections should be rejected as long as the backlog is large enough,
and existing connections stay alive as well.

Cheers,

--
Loïc Hoguin
https://ninenines.eu

Re: How to deploy an upgrade (build with rebar3)

Fred Hebert


On Mon, Feb 24, 2020 at 9:28 AM Loïc Hoguin <[hidden email]> wrote:
> Ranch 2.0 onward fully supports upgrades and comes with an appup you can
> use directly: https://github.com/ninenines/ranch/blob/master/src/ranch.appup
>
> We've recently added a test suite:
> https://github.com/ninenines/ranch/blob/master/test/upgrade_SUITE.erl
>
> The workaround described is more or less what Ranch 2.0 does during the
> upgrade: stop all the acceptor processes, perform the upgrade, resume
> the acceptor processes. The listening socket(s) stays open, so no new
> connections should be rejected as long as the backlog is large enough,
> and existing connections stay alive as well.


Good to hear, hadn't seen that one go through. That can cause a bit of a pause and still may be a bit surprising in the shell, but that at least makes things production safe for relup, which is definitely an improvement.

Re: How to deploy an upgrade (build with rebar3)

Loïc Hoguin
On 24/02/2020 15:49, Fred Hebert wrote:
> Good to hear, hadn't seen that one go through. That can cause a bit of a
> pause and still may be a bit surprising in the shell, but that at least
> makes things production safe for relup, which is definitely an improvement.

In the distant future the idea is to be able to support both synchronous
and asynchronous accept functions, the latter of course not needing to
stop acceptors.

Then if the gen_tcp+socket work gives us an asynchronous accept method,
or if people want to use the underlying socket module directly, they
will not suffer from these issues.

Cheers,

--
Loïc Hoguin
https://ninenines.eu