While working at a Scala company a few years ago, I documented a few pitfalls regarding Scala / Java interoperability. When I started looking into Kotlin, I was curious to see whether Kotlin had better interoperability, as claimed, or whether it had the same issues as Scala.
Most of the issues mentioned below stem from badly defined Java code, but they are all issues I have actually experienced in the wild. Interoperability should work for those ill-defined cases too, not only for perfect Java code.
Note: my Scala knowledge may be outdated, as I haven't developed with it for more than four years now. This blog post is based on articles I wrote for the company blog back then, while working with Scala 2.10 or earlier.
Challenge 1: Java methods with invariant parameters
Take a Java method with the following signature, with an invariant List as parameter:
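A minimal sketch of what such a method could look like (the name foo and its body are illustrative):

```java
// A method with an invariant List<Object> parameter
public void foo(List<Object> list) {
    // reads the list, and may even insert arbitrary elements into it
}
```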
The foo() method is not well defined: ideally the list should have been declared covariant by using the type List<?>, to allow passing lists of different generic types while protecting them from unsafe modification. For those unfamiliar with variance concepts, you can read about the Basics of Java generics.
Still, in Java, you can call the method like this:
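A plausible one-liner (the element values are illustrative):

```java
// The inline list expression is inferred as List<Object> by the compiler
foo(Arrays.asList("a", "b"));
```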
The argument expression is inferred to be of type List<Object> by the Java compiler, even though it's actually a List<String>. This upcast is safe because the list, being created inline, cannot be referenced later, so even if foo() modifies it unsafely there can be no consequences.
Can we write such a one-liner in Scala and Kotlin?
Part 1: Collections of objects
Let's first try to call the method above with a list of objects.
Scala
From Scala you'll want to call this method by writing something like this:
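A sketch of the naive call, assuming the JavaConverters of that Scala era:

```scala
import scala.collection.JavaConverters._

foo(Seq("a", "b").asJava)  // error: type mismatch
```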
However, the compiler will complain about a type mismatch, because the List<String> we pass to foo() is not assignable to its parameter type List<Object>.

In Scala you must help the compiler by using a type annotation to force the type expected by the foo() method:
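For instance, annotating the element type as AnyRef (Scala's counterpart of Object) before converting:

```scala
val list: Seq[AnyRef] = Seq("a", "b")  // upcast forced by the type annotation
foo(list.asJava)                       // now a java.util.List[AnyRef]
```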
We can also inline it by using a type ascription:
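A sketch of the inline version:

```scala
foo((Seq("a", "b"): Seq[AnyRef]).asJava)
```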
Type ascription is described in more detail here, but it is basically an upcast validated at compile time (whereas asInstanceOf performs a downcast or an upcast at runtime).
Kotlin
From Kotlin you'll call this method by writing something like this:
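Something as simple as:

```kotlin
foo(listOf("a", "b"))  // compiles: the read-only List is covariant
```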
This works without any issues! The Kotlin compiler is able to infer the correct type without any help, thanks to the fact that the read-only kotlin.collections.List is covariant, which implies a List<String> can be safely assigned to a List<Object>. Moreover, kotlin.collections.List is in fact a read-only type mapped directly to java.util.List at the bytecode level, meaning no conversion is necessary.
Note however that if you try to pass a MutableList to foo(), the compiler will complain about a type mismatch.
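For example:

```kotlin
foo(mutableListOf("a", "b"))  // error: type mismatch
```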
Why? Because MutableList, being mutable, is declared invariant in Kotlin. The only way to pass it to foo() is to convert it to a read-only covariant List:
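One way to do the conversion (toList() makes a read-only copy):

```kotlin
val list = mutableListOf("a", "b")
foo(list.toList())  // the copy is a covariant, read-only List
```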
Part 2: Collections of primitive values
Now let's do the same exercise with a list of primitive values.
Scala
As in the case above, the following will not work because of a type mismatch:
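A sketch:

```scala
foo(Seq(1, 2, 3).asJava)  // error: type mismatch
```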
However, the solution described above will not work here:
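The same type-ascription trick, now rejected:

```scala
foo((Seq(1, 2, 3): Seq[AnyRef]).asJava)  // error: Int is not an AnyRef
```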
This line doesn't compile because Int is an AnyVal (the supertype of all Scala types that represent primitive types, which cannot be null) and not an AnyRef (the supertype of all Scala reference types, which can be null). Not even forcing the type parameter to Any (the supertype of both AnyVal and AnyRef) will work. We have to resort to either a runtime cast:
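A sketch of the runtime cast:

```scala
foo(Seq(1, 2, 3).asJava.asInstanceOf[java.util.List[Object]])
```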
Or explicitly wrap the numeric values into their Java boxed type and only then use type ascription:
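For example, using Int.box, which returns a java.lang.Integer:

```scala
foo((Seq(Int.box(1), Int.box(2), Int.box(3)): Seq[AnyRef]).asJava)
```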
Note that the runtime cast above works because an Int value is automatically boxed at runtime into a java.lang.Integer, which is a subclass of AnyRef.
Kotlin
What about a List with numeric types?
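A sketch:

```kotlin
foo(listOf(1, 2, 3))  // compiles: the values are boxed, and List is covariant
```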
Again, Kotlin is able to figure it out nicely.
Score
Kotlin wins this two-point challenge hands down.
Scala forces you to explicitly upcast the argument because it considers a java.util.List invariant (since it is mutable). Dealing with a collection of primitive types is much worse. The way Scala handles primitive types is very confusing: understanding how AnyVal values map to either primitive types or boxed types is hard, given that the mapping is different at compile time and at runtime… More on this in the last challenge.
Kotlin, in contrast, is as easy to use as Java. It achieves this feat first by providing a read-only covariant view of java.util.List, and second by supporting primitive types more naturally.
Why does neither Scala nor Kotlin support safe upcasting of inline invariant expressions like the Java compiler does? For example, even Kotlin refuses to upcast a MutableList. I suppose the Java compiler supports that feature as a workaround for its own limitations. Scala and Kotlin both support declaration-site variance, which removes the need for that kind of compiler trick.
Kotlin 2 - Scala 0
Challenge 2: Subclassing a Java class with raw types
Suppose you have the following Java interface:
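A minimal sketch (the method name is illustrative):

```java
public interface Foo<K, V extends K> {
    void accept(K key, V value);
}
```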
The Foo interface has two type parameters, K and V, with V constrained to be a subclass of K.

Now suppose you have the following interface, which uses Foo with a raw type, i.e. omitting the type arguments:
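A sketch:

```java
public interface Bar {
    void bar(Foo foo);  // raw type: the type arguments of Foo are omitted
}
```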
The code above is legal in Java for backward compatibility reasons.
Scala
Implementing the Bar interface in Scala will get you in trouble:
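A naive attempt could look like this:

```scala
class BarImpl extends Bar {
  override def bar(foo: Foo): Unit = ???  // error: type Foo takes type parameters
}
```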
Scala doesn't allow raw types and forces you to declare the type arguments of Foo. But what do we need to do to correctly override the bar() method?
If Foo was declared simply as Foo<K, V>, the solution would be easy:
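A simple wildcard would do:

```scala
class BarImpl extends Bar {
  override def bar(foo: Foo[_, _]): Unit = ???
}
```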
However, this solution doesn't work for Foo<K, V extends K>, because the same K type is referenced twice in the Foo declaration, and the Scala compiler is adamant that we need to express that relationship in the type of the parameter foo, even though the Java code doesn't care.

In order to solve this issue in Scala you need to invoke the dark powers of existential types!
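A sketch of the existential-type signature:

```scala
class BarImpl extends Bar {
  override def bar(foo: Foo[K, V] forSome { type K; type V <: K }): Unit = ???
}
```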
This special notation makes it possible to express, inside the type signature, the fact that the type K is referenced twice, as in the Foo interface declaration.
Kotlin
In Kotlin, we just need to use a star-projection (a kind of type projection) to emulate the Java raw type.
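A sketch:

```kotlin
class BarImpl : Bar {
    override fun bar(foo: Foo<*, *>) {
        // ...
    }
}
```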
No need to introduce nasty concepts like existential types.
Score
You could argue that the Scala solution is more type-safe, because you will not be able to call bar() with an argument whose type doesn't conform to the Foo<K, V extends K> constraints. And it's true that the Kotlin solution is less type-safe.
However, in practice I have encountered this kind of issue only while integrating with a Java framework: the bar() method was called by no one other than the framework itself, which didn't care about the type-safe Scala signature and called my implementation with an argument of the correct type anyway. The time I spent figuring out the solution with existential types would have been better spent implementing actual features.
Kotlin chose to be pragmatic here instead of insisting on a purist take on safety, and I think it was the better choice.
Note: the Scala IntelliJ plugin has apparently improved a lot over the years, and it can now generate the signature with existential types described above.
Kotlin 3 - Scala 0
Challenge 3: Primitive types
Suppose we have the following Java method, returning a java.lang.Integer value which can be null:
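A sketch of such a method (the body is a stand-in for some computation that may yield null):

```java
public Integer foo(int value) {
    // ... some lookup or computation that may yield null
    return null;
}
```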
Scala
Let's call this method in Scala and assign its value to a variable:
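A sketch:

```scala
val i = foo(1)  // inferred type: java.lang.Integer
```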
The inferred type of i is java.lang.Integer, but we would like to use a Scala Int:
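Declaring the desired type:

```scala
val i: Int = foo(1)  // implicitly converted
```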
Thanks to the implicit conversion Predef.Integer2int, the java.lang.Integer is automatically converted to a Scala Int. Everything looks fine.
But see what happens here:
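The same line, for an input that makes foo() return null (the input value is illustrative):

```scala
val i: Int = foo(1)  // foo(1) returns null, yet i == 0
```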
When foo() returns null, the value of i will be 0, because this is what the implicit conversion Predef.Integer2int returns for a null Integer! (In Scala 2.10 the same line blew up with a NullPointerException…) This automatic conversion to 0 is a bad idea in my opinion: it can hide a programmer error and result in hard-to-find bugs.
In Scala, the proper way to protect against possibly-null values is to wrap them in an Option:
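A sketch:

```scala
val i = Option(foo(1))  // None when foo(1) returns null
```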
The value of i is None, as expected. However, again, the inferred type Option[java.lang.Integer] is not what we want. Let's force it to Option[Int]:
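Naively:

```scala
val i: Option[Int] = Option(foo(1))  // Some(0) when foo(1) returns null!
```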
Now the value of i is… Some(0). WTF? The infamous Predef.Integer2int implicit conversion strikes again and converts null to 0 before Option.apply() has a chance to do its work. The only way I found to work around this issue was to convert the Option[java.lang.Integer] to an Option[Int] in a separate step:
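A sketch of the two-step conversion:

```scala
val boxed: Option[java.lang.Integer] = Option(foo(1))  // None
val i: Option[Int] = boxed.map(_.intValue)             // None, typed as Option[Int]
```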
Now the value of i is None and its type is Option[Int]. Finally!
Kotlin
Let's call the foo() method in Kotlin and assign its value to a variable:
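A sketch:

```kotlin
val i = foo(1)  // inferred type: Int!
```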
The inferred type of i is Int!, a Kotlin platform type indicating unknown nullability. What happens if the developer explicitly declares the type of i to be a non-nullable Int?
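For example:

```kotlin
val i: Int = foo(1)  // IllegalStateException when foo(1) returns null
```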
When foo() returns null, this line will blow up with an IllegalStateException containing the message foo(1) must not be null. When a platform type is explicitly declared non-nullable, Kotlin inserts a runtime assertion with a message detailing the exact expression that was expected to be non-null.
The proper way to deal with a possible null value from Java is to explicitly declare it nullable:
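A sketch:

```kotlin
val i: Int? = foo(1)  // holds null safely when foo(1) returns null
```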
The value of i is null and its type is Int?. No surprises here.
Score
Kotlin's support for Java primitive types is clearly better.
First, Scala doesn't automatically map the Java primitive types to its own types, which forces the developer to explicitly declare the Scala type. But then the implicit conversions break in uninvited, with unexpected results. In practice, in order to support a nullable primitive type one should stick with the Java wrapper type (java.lang.Integer) instead of trying to use a Scala Int.
Even using an Option is not the best solution, because it's error-prone as we saw above and has poor performance: you end up at runtime with a wrapper (Option) wrapping a wrapper (java.lang.Integer)…
Moreover, the Scala type hierarchy is confusing: a Scala Int is an AnyVal, which is non-nullable, but it can map to both a Java int and a java.lang.Integer; however, java.lang.Integer is an AnyRef, therefore sitting in another branch of the hierarchy. We saw in challenge 1.2 (collections of primitive values) how this can be problematic.
On the other hand, in Kotlin the mapping from Java primitive types to Kotlin types is automatic, with no unexpected implicit conversions getting in the way. Throwing an exception with a detailed message when a null value is unexpected is better than hiding programming errors. Nullable primitive types are supported natively and without extra wrapping, by simply using Int?.
The Kotlin type hierarchy looks simpler and sounder to me: a Kotlin Int can map to either a Java int or a non-nullable java.lang.Integer, while an Int? maps to a nullable java.lang.Integer.
Kotlin 4 - Scala 0
Conclusion
Kotlin wins the Java interoperability showdown hands down. Kotlin lives up to its promises, at least against the issues that I have encountered with Scala.
This comparison may not be entirely fair to Scala, because I chose examples where I already knew Scala had issues. Kotlin could have interoperability issues that Scala does not. If I find some in the future, I will update this article.
However, I'm not surprised that Scala has more interoperability issues than Kotlin. Scala was not designed with Java interoperability as a primary goal: interoperability is simply a byproduct of running on the JVM. Even though it is used as a marketing argument to attract developers, in practice, when moving to Scala, teams have to leave the Java ecosystem behind and adopt the Scala ecosystem in order to be productive.
Kotlin on the other hand doesn't try to replace the Java ecosystem but to play well with it. I think that in the long run this strategy will pay off and Kotlin will become more widespread than Scala because it requires a less radical change to adopt it.