> > Support for mutable arrays has almost got to the point of a FAQ.
> HIPE used a third solution for a long time. However, it
> requires support
> from a generational garbage collector, since it relies on destructive
> updates 'under the hood'.
Yes, my understanding (from reading the papers about it) is that the current
GC is well liked by the OTP guys because it is extremely fast - much faster
than the equivalent generational version. The trade-off of not having "under
the hood" destructive updates appears to be a pretty good one. This is
partly what lay behind my suggestion to simply use a heap-based destructive
update mechanism like ets, which involves no GC.
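A minimal sketch of the ets usage pattern referred to here (the table name
and element layout are my own, chosen for illustration):

```erlang
-module(ets_array_demo).
-export([demo/0]).

%% Use an ets set table as a destructively-updated "array":
%% keys are indices, values are the array elements.
demo() ->
    T = ets:new(my_array, [set]),
    [ets:insert(T, {I, 0}) || I <- lists:seq(0, 9)],  % initialise A[0..9] = 0
    ets:insert(T, {3, 42}),        % destructive write: A[3] := 42, in place
    [{3, V}] = ets:lookup(T, 3),   % read back A[3]
    ets:delete(T),
    V.
```

Reads and writes are O(1)-ish on the hash table, but - as with any
destructive structure - the old version of the array is gone after a write.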
Also, once one introduces another way of doing imperative programming into
the language, it has the potential to lead to more imperative-style side
effect bugs ;) I'm thinking particularly of exception handling, where the
result of the function call depends on whether the function crashed before
or after the array operation.
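A hypothetical illustration of that hazard (module and names are my own),
assuming the array lives in an ets table: if the function crashes *after*
the write, the mutation is still visible to a caller that catches the
crash, whereas crashing *before* the write would leave the table untouched.

```erlang
-module(crash_demo).
-export([demo/0]).

%% Crashes AFTER the destructive write: the write is not rolled back.
update_then_crash(T) ->
    ets:insert(T, {0, 99}),     % destructive write to "A[0]"
    error(something_failed).    % crash happens after the side effect

demo() ->
    T = ets:new(a, [set]),
    ets:insert(T, {0, 1}),          % A[0] = 1
    catch update_then_crash(T),     % the crash is caught...
    [{0, V}] = ets:lookup(T, 0),    % ...but V is now 99, not 1
    ets:delete(T),
    V.
```

A purely functional array would not exhibit this: the caller's version of
the array is unchanged no matter where the callee crashed.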
Erlang programmers have (presumably) got used to dealing with this situation
using ets, so having something with the same type of library API and general
usage pattern might help somewhat to overcome the weaknesses in the approach.
Your version would overcome this (i.e. you still provide access to the old
version), but at the expense of more GC complexity.
> What were the advantages:
> - Very useful, in short :-) For HIPE, graph algorithms became much
> more pleasant. I also implemented hash tables on top of the arrays,
> which rapidly spread through the compiler. Many common algorithms
> are much easier to write when your array ADT has the right
> complexity.
Would you include the declarative bit in the "right complexity"?
> I know there has been discussions about implementing something like
> this in OTP, but I don't know the precise status. Björn G. probably
> knows more :-)
I hope so.. but not at the expense of overall system performance.
> Would you include the declarative bit in the "right complexity"?
Well, the point was just that you can translate an existing imperative
algorithm fairly easily (if verbosely) into Erlang _and_ get good complexity.
You can do the translation part today with an appropriate ADT:
/* A[I] = A[M]+A[N] */
V1 = read(Array0, M),
V2 = read(Array0, N),
Array1 = write(Array0, I, V1+V2), %% now use Array1 from here on
But space/time complexity will of course depend on how read/2 and
write/3 are implemented. Getting the right complexity for both reads
and writes is the problem solved by declarative vectors.
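For what it's worth, OTP's stdlib now ships one such ADT: the array module,
which gives functional (non-destructive) reads and writes in O(log n). A
sketch of the same fragment using it (function name is my own):

```erlang
-module(fun_array_demo).
-export([example/4]).

%% A[I] = A[M] + A[N], functionally: returns a NEW array version,
%% leaving A0 untouched and available for further reads.
example(A0, I, M, N) ->
    V1 = array:get(M, A0),
    V2 = array:get(N, A0),
    array:set(I, V1 + V2, A0).
```

The logarithmic factor is the price of keeping old versions alive;
declarative vectors aim to get closer to O(1) for the common
single-threaded use of the newest version.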
The complexity of implementing the algorithm stays about the same,
though debugging may be easier since old versions can be kept around
for inspection. (Having a verbose notation for array access may make
implementation harder, due to syntactic clutter; erl_lint could
perhaps be extended to warn about bad use of old versions.)
Thomas Lindgren thomas+junk