[Python-il] [pyweb-il:1072] Python coding question

Shai Berger shai at platonix.com
Mon Jul 12 02:22:04 IDT 2010

On Friday 09 July 2010, Amit Aronovitch wrote:
> On Fri, Jul 9, 2010 at 2:49 AM, Shai Berger <shai at platonix.com> wrote:
> > On Thursday 08 July 2010, you wrote:
> > > > I wouldn't like the trade-off where you "win" a couple of
> > > > questions on mailing-lists and pay for it with the DRY in
> > > > 
> > > > complex_or_just_long_reference = complex_or_just_long_reference + 1
> > > 
> > > But wouldn't you prefer something like
> > > 
> > >   complex_or_just_long_reference = same + 1
> > > 
> > > which turns out to be more powerful than the standard pack of syntactic
> > > 
> > > sugar constructs:
> > >   complex_or_just_long_reference = max(same, new_thing)
> > > 
> > > or
> > > 
> > >   complex_or_just_long_reference = "(" + same + ")"
> > > 
> > > IMHO, this is more readable and saves most of the mess with long names.
> > 
> > This is an interesting idea -- I agree that it has merits, but I also see
> > why
> > it wouldn't be included in a language where 'self' is a convention and
> > not a
> > keyword.
> > 
> > While it is definitely more general, I don't see it as more readable in
> > the common cases. Between C and its descendents, += has become quite
> > entrenched.
> But that's exactly the problem! It looks like something they know, but does
> not do what they expect it to do. Well, sometimes it does, depending on the
> type of the object, but this only makes it even more confusing.
> >>> a = 1; b = a
> >>> a += 1
> >>> b
> 1
> >>> from numpy import array
> >>> a = array(1); b = a
> >>> a += 1
> >>> b
> array(2)
>  This type-dependent semantics blurs the distinction between rebinding
> names in the namespace and modifying the objects themselves, and adds
> complexity and special cases to the mental model.
>  Suppose you are documenting a function that gets a parameter x (which
> might be an instance of a user-defined class), and does x /= 2; x += 3.
>  What would you say? "The parameter x will be modified only if it has both
> __iadd__ and __idiv__ defined?" and you'll have to keep track in case
>  you later add a -= as well...
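For what it's worth, the same aliasing surprise shows up with plain lists, no numpy needed (a throwaway sketch; the names are just for illustration):

```python
# int: immutable, so += rebinds the name and the alias is untouched
a = 1
b = a
a += 1
print(b)        # 1

# list: mutable, so += mutates in place and the alias sees the change
a = [1]
b = a
a += [2]
print(b)        # [1, 2]
print(a is b)   # True -- still the very same object
```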

You're completely right. And yet, you're wrong. That is exactly the 
"practicality beats purity" argument. Any definition which doesn't have this 
sort of wart is "pure"; like smalltalk, where it's really objects all the way 
down. That tends to be very powerful when you get the hang of it, but getting 
the hang of it tends to be rather long. And before you go there -- the 
proponents always argue that the purity, the simplicity of the mental model, 
is practical and allows them to do magic easily. And they're right, too. And 
yet, none of the pure languages makes it into the mainstream. Not Smalltalk, 
not Haskell, not J nor K (pure vector languages), not Prolog. The pure models, 
in spite of their internal consistency, always push most people out of their 
comfort zone.

>  Normally you will just not do that. Either completely avoid augmented
> assignments, or use them explicitly:
>     x.__idiv__(3)
>  which will raise an exception if the object does not support the protocol.
> This brings us to my own, much simpler, hypothetical alternative:
>   x.* y (equivalent to x.__imul__(y) ), x.+ y, etc.
The problem with these is that they require __iadd__ etc. to be defined; that 
makes them less convenient in the common case.
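To make that concrete: plain ints define no __imul__ at all, so the explicit-call spelling would raise, while the augmented assignment quietly falls back to __mul__ plus rebinding (a minimal check):

```python
x = 5

# ints have no in-place multiply method...
print(hasattr(x, '__imul__'))   # False

# ...yet augmented assignment still works, via __mul__ and rebinding
x *= 2
print(x)                        # 10
```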

BTW, __iadd__ is not required to be in-place. The assignment always happens, 
it just may be vacuous. On the other hand, nothing stops you from writing a 
modifying __add__ either.

The correct thing to look at should be whether the object is immutable or not. 
Python does not enforce that mutable objects with __add__ have an in-place 
__iadd__; that's part of its "consenting adults" philosophy.
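A sketch of what I mean (the class name is made up): __iadd__ may legally return a brand-new object, and the rebinding happens either way, so the alias keeps the old one.

```python
class Tally:
    # Hypothetical class whose __iadd__ is deliberately NOT in-place.
    def __init__(self, n):
        self.n = n

    def __iadd__(self, other):
        # Returning a new object is legal; Python rebinds the name to it.
        return Tally(self.n + other)

a = Tally(1)
b = a
a += 1
print(a.n, b.n)   # 2 1 -- the alias b still holds the old object
print(a is b)     # False
```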

(now from your other mail):

> ---> [Quoting Rani Hod]
> > > But wouldn't you prefer something like
> > > 
> > >   complex_or_just_long_reference = same + 1
> Maybe a graphical character ('$', '@', ...) could be used instead of a
> keyword?
> The idea seems like an inline version of the interactive interpreter's "_"

Yes, that's a very interesting way to look at it, though I think Python tends 
to be conservative on its use of new symbols. My own attempt for something 
that probably isn't used would be to follow the shell's history and use "!!". 
I wonder if we can be committed enough for a PEP.

(and finally, before I follow your lead and leave this be, at least publicly: 
from Dov's mail):

> The choices of 〈〉 for function calls and ∘ for method specification were
> used to illustrate that with enough symbols there is no longer a need to
> overload them. Parentheses used for changing the order of operator
> precedence are something different from parentheses used to enclose
> arguments in a function/method call.

The point I was trying to make is that even people writing on paper, with no 
character-set limitation, use this overloading. Then they go ahead and 
overload juxtaposition to mean both multiplication and function application 
(as in "sin 2x"). People sometimes find overloading easier than remembering 
which kind of parentheses to put in which context. We only avoid overloading 
where there is a risk of ambiguity, and sometimes not even then.

Hope I haven't bored you all to tears,

