Ets read concurrency tweak

Ets read concurrency tweak

Viacheslav V. Kovalev
Hi, List!

I'm playing with ets tweaks, specifically with read_concurrency.
I've written a simple test to measure how this tweak affects read
performance. The test implementation is here:
https://gist.github.com/kovyl2404/826a51b27ba869527910

Briefly, the test sequentially creates three [public, set] ets tables
with different read_concurrency options (no tweaks at all, with
{read_concurrency, true}, and with {read_concurrency, false}). After
each table is created, the test populates it with some random data and
runs N readers (N is a power of 2 from 4 to 1024). Each reader performs
K random reads and exits.

The result is quite surprising to me: there is absolutely no difference
between these three tests. I have run the test on Erlang 17 on 2-, 8-,
and 64-core machines and could not find any significant performance
impact from this tweak.

Could anybody explain the use case for this tweak? What should I do to
see any difference and to understand when to use this option?
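
For reference, a minimal sketch of the kind of benchmark described above (not the gist itself; the module name, table size, key range, read counts, and process counts are illustrative assumptions):

-module(ets_read_bench).
-export([run/0]).

%% Compare an untweaked table against {read_concurrency, true} and
%% {read_concurrency, false}. All sizes below are arbitrary.
run() ->
    [run_one(Opts) || Opts <- [[],
                               [{read_concurrency, true}],
                               [{read_concurrency, false}]]].

run_one(ExtraOpts) ->
    Tab = ets:new(bench, [public, set | ExtraOpts]),
    [ets:insert(Tab, {K, K * K}) || K <- lists:seq(1, 10000)],
    Times = [time_readers(Tab, N, 100000) || N <- [4, 16, 64, 256, 1024]],
    ets:delete(Tab),
    {ExtraOpts, Times}.

%% Spawn NProcs readers, each doing NReads random lookups, and time the batch.
time_readers(Tab, NProcs, NReads) ->
    Self = self(),
    {Micros, ok} =
        timer:tc(fun() ->
                     Pids = [spawn_link(fun() ->
                                            reader(Tab, NReads),
                                            Self ! {done, self()}
                                        end) || _ <- lists:seq(1, NProcs)],
                     [receive {done, Pid} -> ok end || Pid <- Pids],
                     ok
                 end),
    {NProcs, Micros}.

reader(_Tab, 0) -> ok;
reader(Tab, N) ->
    %% rand is OTP 18+; on Erlang 17 use random:uniform/1 instead.
    _ = ets:lookup(Tab, rand:uniform(10000)),
    reader(Tab, N - 1).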

Re: Ets read concurrency tweak

Chandru-4
Hi,

A few comments.

- Your two tests that use the read_concurrency option are both setting it to 'true' (lines 31 and 50 in the gist)

- I wouldn't run profiling when trying to compare two different options. It could be that the overhead of profiling is masking any differences between the two use cases.

- Setting {read_concurrency, false} is the same as not specifying it at all (so in effect two of the tests are identical)
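
In ets:new/2 terms, the three configurations being compared would then be (a sketch; the table names are arbitrary):

%% Sketch: read_concurrency defaults to false, so the first two tables
%% are configured identically; only the third one differs.
make_tables() ->
    TDefault = ets:new(t_default, [public, set]),
    TFalse   = ets:new(t_false,   [public, set, {read_concurrency, false}]),
    TTrue    = ets:new(t_true,    [public, set, {read_concurrency, true}]),
    {TDefault, TFalse, TTrue}.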

cheers
Chandru



Re: Ets read concurrency tweak

Viacheslav V. Kovalev
>> - Your two tests which use the read_concurrency option are both setting it to 'true' (lines 31 and 50 in the gist)
Ah, yep, my fault. A shameful copy-paste. However, as you noted,
`{read_concurrency, false}` should behave the same as the first test
with default parameters. After I fixed this, nothing changed.

As for eprof: it was the last thing I added to this test. With or
without the profiler, the results are the same (apart from absolute
values). The main purpose of profiling was to check whether
`ets:lookup` really is the most expensive operation in this test (and
it is).

I've also experimented with different table sizes and different key
sizes. None of this seems to matter, so I'm really confused about this
option and out of ideas.
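
For example, the read loop could be timed directly with timer:tc/1 rather than under eprof, which avoids any profiler overhead (a sketch; Tab and NReads are placeholders):

%% Sketch: measure the read loop directly instead of profiling it.
time_reads(Tab, NReads) ->
    {Micros, ok} = timer:tc(fun() -> read_loop(Tab, NReads) end),
    Micros.

read_loop(_Tab, 0) -> ok;
read_loop(Tab, N) ->
    %% rand is OTP 18+; use random:uniform/1 on Erlang 17.
    _ = ets:lookup(Tab, rand:uniform(10000)),
    read_loop(Tab, N - 1).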


Re: Ets read concurrency tweak

Sergej Jurečko
I think read_concurrency/write_concurrency are about mutexes between schedulers, so if you have 1000 processes on a single scheduler it makes no difference. How many logical CPUs did the machines you were running the tests on have?


Sergej
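
For example, the scheduler and logical processor counts the emulator sees can be checked with erlang:system_info/1 (output varies per machine):

erlang:system_info(schedulers_online).
erlang:system_info(logical_processors_available).
erlang:system_info(cpu_topology).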


Re: Ets read concurrency tweak

Viacheslav V. Kovalev
I have run this test on the following installations (with no dramatic
changes, other than absolute values):

2 logical, 1 physical CPU, Erlang/OTP 17 [erts-6.1] [64-bit] [smp:2:2]
[async-threads:10] [hipe]
8 logical, 1 physical CPU, Erlang/OTP 17 [erts-6.4] [64-bit] [smp:8:8]
[async-threads:10] [hipe]
64 logical, 4 physical CPU, Erlang/OTP 17 [erts-6.3] [64-bit]
[smp:64:64] [async-threads:10] [hipe]

I've also experimented with the kernel-poll option. I believe it
concerns other things, but I was curious anyway.


Re: Ets read concurrency tweak

Hynek Vychodil
I think you have to add at least a few writes to see the difference. In your case, every scheduler thread soon holds the read lock (a read-only copy in its CPU cache), so the more granular locks have no effect.
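
A minimal sketch of such a writer, assuming a 100 ms interval and the same key range as the readers:

%% Sketch: one writer doing a random insert every 100 ms, to force
%% occasional write-locking while the readers run.
start_writer(Tab) ->
    spawn_link(fun() -> writer(Tab) end).

writer(Tab) ->
    %% rand is OTP 18+; use random:uniform/1 on Erlang 17.
    true = ets:insert(Tab, {rand:uniform(10000), os:timestamp()}),
    timer:sleep(100),
    writer(Tab).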


Re: Ets read concurrency tweak

Viacheslav V. Kovalev
I've added a writer process to my test. It performs a random write
every 100 ms. The implementation is here:
https://gist.github.com/kovyl2404/622a9908c4e8a2abc214

Here is an example run on an 8-core machine.

29> ets_read_concurrency:analyze(WithoutWriter).
[{{procs,1},{percent,-8.998057178933019}},
 {{procs,2},{percent,1.474232611754453}},
 {{procs,4},{percent,-0.4318193657099243}},
 {{procs,8},{percent,4.796912026714365}},
 {{procs,16},{percent,4.926194326111598}},
 {{procs,32},{percent,7.4091491628647805}},
 {{procs,64},{percent,6.7226404897426315}},
 {{procs,128},{percent,7.129140726319386}},
 {{procs,256},{percent,-28.200373148451757}},
 {{procs,512},{percent,10.229583687247757}},
 {{procs,1024},{percent,25.824989572270635}}]
30> ets_read_concurrency:analyze(WithWriter).
[{{procs,1},{percent,-9.233383915316411}},
 {{procs,2},{percent,1.3554058972355476}},
 {{procs,4},{percent,-1.3437122387165232}},
 {{procs,8},{percent,0.3944371727018411}},
 {{procs,16},{percent,21.719493229803017}},
 {{procs,32},{percent,-26.32711009412866}},
 {{procs,64},{percent,11.577461884371825}},
 {{procs,128},{percent,19.608517106505893}},
 {{procs,256},{percent,11.362311552960543}},
 {{procs,512},{percent,20.935963863004808}},
 {{procs,1024},{percent,22.512472513575506}}]

where the percentage is calculated as `(NonTweakedTime - TweakedTime) / NonTweakedTime * 100`.

The tweaked table shows a stable performance advantage, both with and
without the writer process. I think I was wrong when interpreting my
earlier results, but frankly I expected to see more drastic changes.
Then again, I don't know how to interpret these artifacts:

{{procs,256},{percent,-28.200373148451757}}. %% WithoutWriter
{{procs,32},{percent,-26.32711009412866}}. %% WithWriter

Is this just statistical noise, or something else?
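
One way to tell noise from a real effect is to repeat each measurement several times and compare the spread against the mean difference; a sketch, assuming each configuration's run times are collected in a list of microseconds:

%% Sketch: the speed-up formula used above, plus mean and spread helpers
%% for repeated runs.
speedup_percent(NonTweakedTime, TweakedTime) ->
    (NonTweakedTime - TweakedTime) / NonTweakedTime * 100.

mean(Xs) -> lists:sum(Xs) / length(Xs).

spread(Xs) ->
    M = mean(Xs),
    math:sqrt(lists:sum([(X - M) * (X - M) || X <- Xs]) / length(Xs)).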

Re: Ets read concurrency tweak

Rickard Green-2
I haven't had the time to look at your code, so I cannot tell you why
you are not getting the results you expected. Here is some information
about the option, though.

When the {read_concurrency, true} option is passed, reader optimized
rwlocks are used instead of ordinary rwlocks. When reader optimized
rwlocks are used, threads performing read-locking announce their
presence in separate cache lines, and thereby avoid ping-ponging of a
common cache line between caches.

Write-locking a reader optimized rwlock is more expensive than
write-locking an ordinary rwlock, so if you have a large number of
write operations you don't want to use the read_concurrency option. The
largest performance improvement is seen when there is no write-locking
at all.

In order to determine whether it is beneficial to use the option in
your use case, you need to observe your system while it is executing
under the expected load, without affecting it too much while observing
it. In your case it might be that eprof is affecting the execution too
much, but that is just a guess.

The improvement varies a lot depending on hardware. The more expensive
it is to keep a common cache line up to date in all involved caches,
the larger the performance improvement will be. It is typically more
expensive the further apart the cores are and the more cores that are
involved.

I've attached a small benchmark that illustrates the effect. When run on:

Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz

Eshell V6.3  (abort with ^G)
1> erlang:system_info(cpu_topology).
[{processor,[{core,[{thread,{logical,0}},{thread,{logical,2}}]},
             {core,[{thread,{logical,1}},{thread,{logical,3}}]}]}]

Without read_concurrency an execution time of about 0.85-1.0 seconds.
With read_concurrency 0.75-0.8 seconds.


When run on:

AMD Opteron(tm) Processor 4376 HE

Eshell V6.4.1  (abort with ^G)
1> erlang:system_info(cpu_topology).
[{node,[{processor,[{core,{logical,0}},
                    {core,{logical,1}},
                    {core,{logical,2}},
                    {core,{logical,3}},
                    {core,{logical,4}},
                    {core,{logical,5}},
                    {core,{logical,6}},
                    {core,{logical,7}}]}]},
 {node,[{processor,[{core,{logical,8}},
                    {core,{logical,9}},
                    {core,{logical,10}},
                    {core,{logical,11}},
                    {core,{logical,12}},
                    {core,{logical,13}},
                    {core,{logical,14}},
                    {core,{logical,15}}]}]}]

Without read_concurrency an execution time of about 39-41 seconds.
With read_concurrency 1.1-1.2 seconds.
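
(The attached ets_rc_test.erl is not reproduced in this archive. A benchmark in the same spirit, with one reader per scheduler hammering a small table, might look roughly like the following sketch; all sizes and counts here are assumptions.)

%% Sketch only -- not the attached ets_rc_test.erl.
bench(ReadConcurrency) ->
    Tab = ets:new(t, [public, set, {read_concurrency, ReadConcurrency}]),
    [ets:insert(Tab, {K, K}) || K <- lists:seq(1, 100)],
    NProcs = erlang:system_info(schedulers_online),
    Self = self(),
    {Micros, _} =
        timer:tc(fun() ->
                     Pids = [spawn_link(fun() ->
                                            loop(Tab, 1000000),
                                            Self ! {done, self()}
                                        end) || _ <- lists:seq(1, NProcs)],
                     [receive {done, P} -> ok end || P <- Pids]
                 end),
    ets:delete(Tab),
    Micros.

loop(_Tab, 0) -> ok;
loop(Tab, N) ->
    _ = ets:lookup(Tab, (N rem 100) + 1),
    loop(Tab, N - 1).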


Regards,
Rickard
--
Rickard Green, Erlang/OTP, Ericsson AB


Attachment: ets_rc_test.erl (968 bytes)