Thursday, August 28, 2008

Scala Implicits: a dose of Magic | Part 2

In my previous post, I introduced implicits, mentioned some of the ways in which they can be used, and talked about one of their primary uses: view conversion.

Maybe you came away from that post thinking: 'hey, this looks pretty interesting, but this is too close to magic for my liking. I want to be able to write code that is understandable and maintainable, and all this under-the-covers stuff done by the compiler makes me uncomfortable'.

Well - fear not. Scala makes use of a bunch of safeguards to tame the magic of implicits. These are listed in the next section.

Rules/Safeguards for the application of Implicits

I will just mention the safeguards here; a detailed description of these rules is provided in the Programming in Scala book. A small sketch illustrating a few of them follows the list.
  • Marking Rule: only definitions marked implicit are available for implicit conversion and parameter passing.
  • Scope Rule: an implicit conversion/parameter must be in scope as a single identifier, or it must be available within the companion object of the source or target type of the conversion.
  • Non-Ambiguity Rule: an implicit conversion is inserted only if there is no other possible conversion that could be inserted.
  • One-at-a-Time Rule: only one implicit is ever tried in an expression.
  • Explicits-First Rule: implicits are tried only if there is a type error with the existing code.
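To make a few of these concrete, here is a minimal sketch of my own (the Meters example is not from the book): the conversion is marked implicit, and because it lives in the companion object of the target type, it is found without an import.

class Meters(val value: Double)

object Meters {
  // Marking rule: only this implicit-marked definition is eligible for insertion.
  // Scope rule: it lives in the companion object of the target type (Meters),
  // so it is in implicit scope without being imported.
  implicit def fromDouble(d: Double): Meters = new Meters(d)
}

object Distances {
  def describe(m: Meters): String = m.value.toString + " m"

  // Explicits-first rule: describe(3.5) does not type-check as written,
  // so the compiler quietly inserts Meters.fromDouble(3.5).
  val example: String = describe(3.5)
}
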
Debugging Implicits

So - you have written your code with a tasteful smattering of implicits; but it does not quite do what you think it should do. Is there a way for you to figure out what's going on under the covers?

Yes. If you run scalac -Xprint:typer on your code, you will get to see exactly what's going on with implicit conversions. Let me demonstrate this with an example from my previous post. Back there, we looked at a method to find the max element in a collection:
/*
 * A module that provides aggregate operations on collections
 */
object AggregateIterableOps {

  def max[A <% Ordered[A]](iterable: RichIterableOps[A]): A = {
    iterable.reduceLeft((x, y) => if (x > y) x else y)
  }
}
And the code to test it:
  @Test
  def testMaxForSet = {
    plainSet.add(18); plainSet.add(17); plainSet.add(16)
    assertEquals(18, AggregateIterableOps.max(plainSet))
  }
I spent a lot of words in the previous post trying to explain exactly what was happening under the covers with implicits in this example. This time around, let me just show you the output from scalac -Xprint:typer for the two code fragments:
final object AggregateIterableOps extends java.lang.Object with ScalaObject {
  def this(): object implicits.AggregateIterableOps = {
    AggregateIterableOps.super.this();
    ()
  };
  def max[A >: Nothing <: Any](iterable: RichIterableOps[A])(implicit view$1: (A) => Ordered[A]): A =
    iterable.reduceLeft[A](((x: A, y: A) => if (view$1.apply(x).>(y)) x else y))
};
And:
  @org.junit.Test def testMaxForSet: Unit = {
    TestImplicitHashSet.this.plainSet.add(18);
    TestImplicitHashSet.this.plainSet.add(17);
    TestImplicitHashSet.this.plainSet.add(16);
    org.junit.Assert.assertEquals(18, AggregateIterableOps.max[Int](this.RichCollections.toRichHashSet[Int](TestImplicitHashSet.this.plainSet))({
      ((eta$0$3: Int) => scala.this.Predef.intWrapper(eta$0$3))
    }))
  }
}

As an aside: it's pretty simple to run scalac -Xprint:typer from within Eclipse by using an 'External Tool'. This works well if you do not need to specify a classpath for scalac. If you do need to specify a classpath, things get a little tricky. I have not found anything obvious that will allow me to run an external tool with the classpath of the current Eclipse project. To do this, I make use of a little utility class that I have written; this class is available in the scrisca code repo. A screenshot of how this is used is shown below:

[Screenshot: the utility configured as an Eclipse External Tool]

Let's talk next about the origins of implicits...

Origins


Implicits have their origin in Haskell type classes. I have only recently started learning Haskell, so I'm going to tread into relatively unknown territory here. But I think a comparison of implicits to type classes is instructive - so I am going to just forge on ahead and try to talk about this as well as I can.

Type classes were added to Haskell to make ad-hoc polymorphism less ad hoc. Take operator overloading, for example. With operator overloading, there are multiple versions of a function for different types, and the compiler inserts a call to the correct one based on the types of the arguments at the call site. As opposed to this, a generic (or parametrically polymorphic) function has just one implementation that works across a family of types.

Type classes within Haskell are a mechanism to tame ad-hoc polymorphism, bring it into the type system of the language, and to enable the writing of generic code based on the additional information provided by type classes in situations which would otherwise have required ad-hoc overloading. Let us look at an example:
class MyEq a where
  eql, neql :: a -> a -> Bool
  neql x y = not (eql x y)

instance MyEq Int where
  eql x y = x == y -- cheat and just use the (==) from the Eq type class

instance MyEq Char where
  eql x y = x == y -- cheat and just use the (==) from the Eq type class

member :: MyEq a => a -> [a] -> Bool
member x [] = False
member x (y:ys) | x `eql` y = True
                | otherwise = member x ys

In the code fragment above, MyEq is a type class that defines the eql and neql operations. Int and Char are made instances of this class. Finally, member is a generic function that can work with lists of both Ints and Chars.

The same thing in Scala looks like this:
trait MyEq[a] {
  def eql(other: a): Boolean
  def neql(other: a): Boolean = !eql(other)
}

object MyEq {
  implicit def fromInt(n: Int): MyEq[Int] = new MyEq[Int] {
    def eql(other: Int): Boolean = n == other
  }

  implicit def fromChar(c: Char): MyEq[Char] = new MyEq[Char] {
    def eql(other: Char): Boolean = c == other
  }
}

object MyEqFuncs {
  def member[A <% MyEq[A]](a: A, as: List[A]): Boolean = as match {
    case Nil => false
    case (x::xs) => if (a.eql(x)) true else member(a, xs)
  }
}

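Here is a quick usage sketch of my own (assuming the definitions above are in scope); the view bound on member lets the compiler pull the conversions out of MyEq's companion object:

object MyEqDemo {
  def main(args: Array[String]): Unit = {
    // A <% MyEq[A] is satisfied by MyEq.fromInt / MyEq.fromChar,
    // found in the companion object of the target type MyEq.
    println(MyEqFuncs.member(3, List(1, 2, 3)))    // true
    println(MyEqFuncs.member('z', List('a', 'b'))) // false
  }
}
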
So how does Haskell implement type classes? Haskell translates the following code:
class MyEq a where
  eql, neql :: a -> a -> Bool
  neql x y = not (eql x y)

instance MyEq Int where
  eql x y = x == y

member :: MyEq a => a -> [a] -> Bool
member x [] = False
member x (y:ys) | x `eql` y = True
                | otherwise = member x ys
Into:
data MyEqDict a = MkMyEqDict (a->a->Bool) (a->a->Bool)

eql (MkMyEqDict e _) = e
neql (MkMyEqDict _ n) = n

dEqInt :: MyEqDict Int
dEqInt = MkMyEqDict (==) (\x y -> not (x == y))

member :: MyEqDict a -> a -> [a] -> Bool
member d x [] = False
member d x (y:ys) | eql d x y = True
                  | otherwise = member d x ys
So basically, a type class is implemented via a dictionary of methods that is passed as an extra, compiler-supplied parameter to the generic functions that work with the type class; in Scala terms, that extra parameter is exactly an implicit parameter. The generic functions use the dictionary to work with their arguments as needed.

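To make the correspondence concrete, here is a sketch of my own (all the names are mine) of the same dictionary-passing style written directly in Scala, with the dictionary as an implicit parameter rather than a view bound:

trait MyEqDict[A] {
  def eql(x: A, y: A): Boolean
  def neql(x: A, y: A): Boolean = !eql(x, y)
}

object MyEqDict {
  // Plays the role of dEqInt: the Int dictionary, placed in the companion object
  // so that it is in implicit scope wherever a MyEqDict[Int] is needed.
  implicit val intDict: MyEqDict[Int] = new MyEqDict[Int] {
    def eql(x: Int, y: Int): Boolean = x == y
  }
}

object MemberFuncs {
  // The compiler fills in d at each call site, just as GHC passes the MyEqDict a dictionary.
  def member[A](a: A, as: List[A])(implicit d: MyEqDict[A]): Boolean = as match {
    case Nil     => false
    case x :: xs => if (d.eql(a, x)) true else member(a, xs)
  }
}

With this in place, MemberFuncs.member(3, List(1, 2, 3)) compiles with no dictionary spelled out at the call site.
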
In Poor Man's Type Classes, Martin Odersky outlines the mapping of Haskell type classes to Scala:

type class = (trait/)*class
instance declaration = implicit definition
context in a class = (trait) inheritance
context in a type = implicit parameter (or view bound)
dictionary = object (the target of a view conversion)
default method in (trait/)class = concrete member
method signature in (trait/)class = abstract member

*: stuff in brackets added by me to provide additional information

Applicability to Scala

I think it is valid to ask: Scala already has a concept of traits, and an implicit this parameter/dictionary is already passed around for every (non-static) method invocation. Do we really need implicit parameters to mimic the type class context in the type of a Haskell method? Can't generic methods in Scala just make use of an upper bound on a type parameter to achieve similar functionality?

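Before answering, here is a small sketch of my own, reusing the max example from the previous post, of what goes wrong with a plain upper bound: Int does not extend Ordered[Int], so an upper bound alone rules it out, whereas a view bound admits it via the standard Int => RichInt conversion (the intWrapper call visible in the typer output above).

object Bounds {
  // Upper bound: A must itself be an Ordered[A].
  def maxUB[A <: Ordered[A]](xs: List[A]): A =
    xs.reduceLeft((x, y) => if (x > y) x else y)

  // View bound: A must merely be viewable as an Ordered[A].
  def maxVB[A <% Ordered[A]](xs: List[A]): A =
    xs.reduceLeft((x, y) => if (x > y) x else y)

  // maxUB(List(1, 2, 3))       // does not compile: Int is not an Ordered[Int]
  val ok = maxVB(List(1, 2, 3)) // fine: Int is viewable as an Ordered[Int]
}
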
In my mind, the answer is very definitely: yes, we need implicits. Implicits provide substantial additional value, over and above the OO facilities already present in Scala. My previous post talked about how implicits bring the Is-viewable-as-a relationship between objects to the table, and why this is important. This time around, let me just list some additional scenarios/problems for which the type-class/implicits approach provides good solutions:
  • Object Adapters (see the sketch below)
  • The problem of the tyranny of the dominant decomposition
  • The expression problem
  • Multiple dispatch
  • Family polymorphism
  • Framework integration
More information on each of these items is available in this excellent paper on type classes.
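
To give a flavour of the first item in that list, here is a sketch of my own (all the types and names are made up) of the object-adapter pattern done with an implicit view: an existing class that we cannot modify is adapted to a new interface, with the compiler inserting the wrapper at each call site.

trait Json {
  def toJson: String
}

object Json {
  // The scope rule at work: the view lives in the companion of the target type,
  // so callers need no import for the adaptation to kick in.
  implicit def legacyUserAsJson(u: LegacyUser): Json = new Json {
    def toJson: String = "{\"name\": \"" + u.name + "\", \"age\": " + u.age + "}"
  }
}

// A third-party class that we cannot modify.
class LegacyUser(val name: String, val age: Int)

object JsonDemo {
  def dump(j: Json): Unit = println(j.toJson)

  def main(args: Array[String]): Unit =
    dump(new LegacyUser("ada", 36)) // LegacyUser is adapted to Json on the fly
}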

Other uses of implicits

These are based on sightings in the wild:

Conclusion

Over the course of this (and the previous) post, I hope that I have been able to show you that:
  • Implicits are an extremely powerful feature of Scala.
  • Scala gives you good tools to control and diagnose the use of implicits.

Have fun with Implicits! And use them (wisely) to write better code...

Relevant Reads

  • Programming in Scala; the chapter on Implicits
  • Poor Man's Type Classes
  • How to make ad-hoc polymorphism less ad hoc
  • Software Extension and Integration with Type Classes
