VM leaking memory

VM leaking memory

Frank Muller
Hello guys

After adding a new feature to my app (running non-stop for 5 years), it started leaking memory in staging.

Obviously, I’m suspecting this new feature. The top command shows RES going from 410m (during startup) to 6.2g in less than 12h.

For stupid security reasons, it will take me weeks to be allowed to share collected statistics (from recon, entop) here, but I can share them in private if someone is willing to help.


My config: Erlang 21.2.4 on CentOS 7

Thank you.
/Frank

_______________________________________________
erlang-questions mailing list
[hidden email]
http://erlang.org/mailman/listinfo/erlang-questions

Re: VM leaking memory

Benoit Chesneau-2
Is there any NIF or similar used by your new feature?


Re: VM leaking memory

John Krukoff-2
In reply to this post by Frank Muller

[attachment: smime.p7m (16K)]

Re: VM leaking memory

Frank Muller
In reply to this post by Benoit Chesneau-2
Hi Benoit

No NIFs, just pure Erlang.

/Frank


Re: VM leaking memory

Frank Muller
In reply to this post by John Krukoff-2
Hi John

I have a process which does just that. Every 5 minutes it checks whether the total memory went above a threshold and, if so, triggers a garbage collection. But this time it didn’t help.
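A watchdog like that can be sketched roughly as follows (hypothetical module name and threshold; not the actual code):

```erlang
%% Hypothetical memory watchdog: every 5 minutes, compare total VM
%% memory against a threshold and, if exceeded, force a major GC on
%% every process. (As noted, this does not help when the memory is
%% held by live refc-binary references rather than by garbage.)
-module(mem_watchdog).
-export([start/1]).

start(ThresholdBytes) ->
    spawn(fun() -> loop(ThresholdBytes) end).

loop(ThresholdBytes) ->
    timer:sleep(timer:minutes(5)),
    case erlang:memory(total) > ThresholdBytes of
        true  -> [erlang:garbage_collect(P) || P <- erlang:processes()];
        false -> ok
    end,
    loop(ThresholdBytes).
```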

My app is a proxy server which forwards packets from left to right, applying a transformation to them on the fly.

/Frank

John Krukoff wrote:

> That’s not a lot to go on. FWIW, most of my memory issues have been caused by failing to understand this section of the manual: http://erlang.org/doc/efficiency_guide/binaryhandling.html
>
> http://erlang.org/doc/man/erlang.html#garbage_collect-1 run manually can be an interesting experiment to see what kind of memory problem you’ve got.


Re: VM leaking memory

Fred Hebert-2
In reply to this post by Frank Muller
On 01/31, Frank Muller wrote:

>After adding a new feature to my app (running non-stop for 5 years), it
>started leaking memory in staging.
>
>Obviously, I’m suspecting this new feature. Command top shows RES going
>from 410m (during startup) to 6.2g in less than 12h.
>
>For stupid security reasons, it will take me weeks to be allowed to share
>collected statistics (from recon, entop) here, but I can share them in
>private if someone is willing to help.
>

I'd recommend checking things like:

- recon_alloc:memory(usage) and see if the ratio is high or very low;
  this can point towards memory fragmentation if the ratio is low.
- in case there is fragmentation (or something that looks like it)
  recon_alloc:fragmentation(current) will return lists of all the
  various allocators and types, which should help point towards which
  type of memory is causing issues
- if usage seems high, see recon_alloc:memory(allocated_types) to see if
  there's any allocator that's higher than others; ETS, binary, or eheap
  will tend to point towards an ETS table, a refc binary leak, or some
  process gathering lots of memory

Based on this it might be possible to then orient towards other avenues
without you having to share any numbers.

A quick check, if it is binary memory, is to call recon:bin_leak(10),
which will probe all processes for their binary memory usage, run a GC
on all of them, probe again, and give you those with the largest gap.
This can point to the processes that held the most dead memory.

There's an undocumented 'binary_memory' option that recon:info,
recon:proc_count, and recon:proc_window all support -- it's undocumented
because it might be expensive and not always safe to run -- that you can
use to find which processes are holding the most binary memory; after a
call to bin_leak, this can let you know about biggest users.

You can also use proc_count with:
- message_queue_len for large mailboxes
- memory for eheap usage

You can use the same values with proc_window to see who is currently
allocating the most.

If ETS is taking up a lot of space, calling ets:i() can show a bunch of
tables with their contents; you might have a runaway cache table or
something like that.
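Put together, those checks can be run from a shell attached to the node, roughly like this (recon must be in the code path; output elided):

```erlang
recon_alloc:memory(usage).               %% used/allocated ratio; low => fragmentation
recon_alloc:fragmentation(current).      %% per-allocator fragmentation details
recon_alloc:memory(allocated_types).     %% which allocator dominates (binary_alloc, eheap_alloc, ...)
recon:bin_leak(10).                      %% top 10 processes shedding binary refs after a GC
recon:proc_count(binary_memory, 10).     %% undocumented option; may be expensive
recon:proc_count(message_queue_len, 10). %% large mailboxes
recon:proc_count(memory, 10).            %% eheap usage
ets:i().                                 %% list ETS tables and their sizes
```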

Regards,
Fred.

Re: VM leaking memory

Frank Muller
Thanks Fred. Your recon library helped me a lot, along with additional tools like observer_cli (which wraps most recon calls in a nice console GUI).
I have most of this information in my possession which, as I said, I can share in private. I will try the other ideas and report back.

/Frank


Re: VM leaking memory

Frank Muller
Refc binaries are what is using most of the VM’s memory (several GB) compared to the other allocators (a few MB).

/Frank


Re: VM leaking memory

scott ribe
> On Jan 31, 2019, at 3:26 PM, Frank Muller <[hidden email]> wrote:
>
> The refc binary is the one using most of VM’s memory (in GB) compared to other allocators (few MB).

So, this could be the classic leak where you match on a binary and pass a sub-binary to another process. That creates a ref to the original binary, and if the first process is long-lived and doesn't GC often, the original binary hangs around even after the second one finishes...
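The pattern, and the usual fix, look roughly like this (illustrative sketch; names are made up):

```erlang
handle(Packet) when is_binary(Packet) ->
    %% Header is a sub-binary: it keeps a reference to ALL of Packet.
    <<Header:16/binary, _Rest/binary>> = Packet,
    %% If 'worker' is a long-lived process, the full Packet stays
    %% alive as long as it holds Header:
    worker ! {header, Header}.

handle_fixed(Packet) when is_binary(Packet) ->
    <<Header:16/binary, _Rest/binary>> = Packet,
    %% binary:copy/1 materialises just the 16 bytes, releasing the
    %% reference to Packet as early as possible:
    worker ! {header, binary:copy(Header)}.
```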

Re: VM leaking memory

Fred Hebert-2
On 01/31, Scott Ribe wrote:
>> On Jan 31, 2019, at 3:26 PM, Frank Muller <[hidden email]> wrote:
>>
>> The refc binary is the one using most of VM’s memory (in GB) compared to other allocators (few MB).
>
>So, could be the classic leak where you match on a binary and pass a sub-binary to another process? That creates a ref to the original binary, and if the first process is a long-lived one that doesn't gc often, even though the second one finishes, the original binary hangs around...

Yeah, that's a good bet. I'd check in the new feature where a sub-binary
of a large binary is taken, and would call `NewBin = binary:copy(Bin)`
on it to free the ref as early as possible.

Re: VM leaking memory

Frank Muller
The original proxy design was like this:

External TCP Source -> Proxy Process (binary transformation and analysis) -> External TCP Sink

The current design added a new stage to the proxy by having 2 processes instead of 1:

External TCP Source -> Proxy Process1 -> Proxy Process2 -> External TCP Sink

Process1 forwards the binaries (>64 B) to Process2, which also inspects them (logging suspicious packets).

I will perform more checks today and report back.



Re: VM leaking memory

Frank Muller
Thanks Gerhard. I will definitely give it a go.

I was able to identify the process which is leaking memory, thanks to recon. It was the process added to handle the new feature.

The binary_alloc seems to be the one misbehaving (not 100% sure, see attached screenshot).

Is there a way to tweak this allocator strategy SBC/MBC? Lukas Larsson maybe?

/Frank




Re: VM leaking memory

Gerhard Lazu
In reply to this post by Frank Muller
Hi Frank,

> The binary_alloc seems to be the one misbehaving (not 100% sure, see attached screenshot).

I agree, it's your binary_alloc. How many single block carriers in binary_alloc do you have? What about multi-block carriers? Can you share a recon_alloc snapshot?
> Is there a way to tweak this allocator strategy SBC/MBC? Lukas Larsson maybe?

For us, RabbitMQ, defaulting to a lower multi-block carrier size (+MBlmbcs) as well as changing the allocation strategy (+MBas) made a positive difference. From the screenshot that you've shared, I am guessing that this won't work for you. This is the process that we went through at RabbitMQ: https://groups.google.com/forum/#!msg/rabbitmq-users/LSYaac9frYw/LNZDZUlrBAAJ

I found Lukas' Erlang Memory Management Battle Stories from 2014 to be a great erts_alloc companion: http://www.erlang-factory.com/static/upload/media/139454517145429lukaslarsson.pdf and https://www.youtube.com/watch?v=nuCYL0X-8f4


[attachment: IMG_6339.jpg (183K)]

Re: VM leaking memory

Frank Muller
Here is the beginning of recon_alloc:snapshot/1.
Yes, Lukas can help on this. The VM’s memory management looks like a mystery.




Re: VM leaking memory

Gerhard Lazu
A text file would work a lot better; the important information is missing from your last screenshot.

This should fit binary_alloc stats for the first scheduler in a screenshot (your system has 48 schedulers):

[_|[SCHED0|_]] = erlang:system_info({allocator, binary_alloc}), io:format("~p~n", [SCHED0]).




Re: VM leaking memory

Frank Muller
I tried two solutions to reduce the memory usage of the problematic process:

1. Calling erlang:garbage_collect/0 after processing N packets (varying N=10..128).
Nothing changed at all and the bin_alloc memory stayed fragmented as you can see:
http://147.135.36.188:3400/observer_cli_BEFORE.jpg

The call to instrument:carriers/0:
http://147.135.36.188:3400/instrument_carriers.jpg

The call to instrument:allocations/0:
http://147.135.36.188:3400/instrument_allocations.jpg


2. Hibernating the process after processing N packets (varying N=10..128).
The HIT rate went above 90% immediately.
http://147.135.36.188:3400/observer_cli_AFTER.jpg

What is the effect of frequent hibernation in the long term? This process receives ~1200 packets/sec under normal load and can reach ~3000 packets/sec under heavy load.

Is there a better way of solving this memory issue by tweaking the binary allocator’s SBC/MBC settings?

/Frank


Re: VM leaking memory

Fred Hebert-2


On Fri, Feb 1, 2019 at 11:11 AM Frank Muller <[hidden email]> wrote:

> 2. Hibernating the process after processing N packets (varying N=10..128).
> The HIT rates went above 90% immediately.
> http://147.135.36.188:3400/observer_cli_AFTER.jpg
>
> What is the effect of hibernating this process on the long term?
> This process receives ~1200 packets/sec under normal load and can reach ~3000 packets/sec under heavy load.
>
> Is there a better way of solving the problem by tweaking the bin allocator SBC/MBC?

So hibernation will do a few things:

- a full-sweep garbage collection
- drop the stack
- memory compaction on the process.

Unless specified otherwise, a call to `erlang:garbage_collect(Pid)` forces a major collection, so it may be that what you have is a process spiking with a lot of memory, then becoming mostly idle while still holding enough references to refc binaries to keep old data from being collected. Rinse and repeat, and you accumulate a lot of old stuff.

Fragmentation of this kind is often resolved with settings such as `+MBas aobf +MBlmbcs 512` being passed to the emulator, which changes the allocation strategy for one that favors lower addresses, and reduces the size of a multiblock carrier to use more of them. The objective of this being to reuse existing blocks and make it easier to deallocate some by reducing the chance some binary keeps it active.
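In a release, those flags would go into vm.args (a sketch with the values suggested above; they need tuning for the actual load):

```
## binary_alloc tuning (sketch)
## address-order best fit: favour low addresses so carriers empty out
+MBas aobf
## largest multiblock carrier size, in KB (default 5120)
+MBlmbcs 512
```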

If what you have really is one process, though, you may get better results by hibernating it from time to time, but only experimentation will confirm that. If the allocator strategies don't cut it (or can't be used because you want to keep the 5-year live-upgrade streak going), count the packets you receive and, every N of them, force a hibernation to shed some memory; pick N large enough that a compaction runs only every 10 minutes or so, based on the leak rate.
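In a gen_server-based proxy process, that every-N-packets hibernation can be sketched like this (hypothetical state record and helper):

```erlang
%% Returning 'hibernate' from a callback does a full-sweep GC and
%% compacts the process heap, dropping dead refc-binary references.
handle_info({packet, Bin}, State = #state{count = C, every = N}) ->
    NewState = process_packet(Bin, State#state{count = C + 1}),
    case (C + 1) rem N of
        0 -> {noreply, NewState, hibernate};
        _ -> {noreply, NewState}
    end.
```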



Re: VM leaking memory

Michael Truog
In reply to this post by Frank Muller
If you move the creation of temporary binaries out of your long-lived Erlang processes and into short-lived ones, you would no longer have this problem.  The tuning discussions, allocator options, hibernate use, etc. do not address the cause of the problem.  Source code should not need to call erlang:garbage_collect/0; using temporary Erlang processes makes garbage collection occur naturally, at a pace that shouldn't require special tuning.
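That approach might look roughly like this (illustrative sketch; transform/1 and the sink are stand-ins):

```erlang
%% Hand each packet to a short-lived worker; its references to the
%% packet's sub-binaries die with it, so no long-lived heap ever pins
%% the original buffer and no explicit GC call is needed.
handle_packet(Packet, Sink) when is_binary(Packet) ->
    spawn(fun() ->
        Transformed = transform(Packet),  %% hypothetical transformation
        Sink ! {packet, Transformed}
    end).
```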

Best Regards,
Michael


Re: VM leaking memory

Fred Hebert-2
On 02/01, Michael Truog wrote:

>
>If you move the creation of temporary binaries out of any Erlang
>processes you have that are long-lived, into short-lived Erlang
>processes, you would no longer have this problem.  The tuning
>discussions, allocator options, hibernate use, etc. is not solving the
>cause of the problem.  Source code should not need to call
>erlang:garbage_collect/0 and using temporary Erlang processes makes the
>garbage collection occur naturally, at a pace that shouldn't require
>special tuning.
>
>Best Regards,
>Michael

While I agree in principle, calling the garbage collector, tuning the
allocators, and moving the creation of a sub-binary into a temporary
process are all, in the current context, workarounds for the same
limitation: the most natural pattern results in a memory leak.

You're suggesting another pattern that complicates the code compared to
its ideal form, which just leaks right now. The problem is not the
workaround; it's needing one in the first place.