Everything Bad in Java is Good for You

Nulls and checked exceptions are often branded "bad things" in Java, but this isn't the case. Both carry significant advantages over the alternatives.


Entrepreneur, author, blogger, open source hacker, speaker, Java rockstar, developer advocate and more. Ex-Sun/Oracle guy with 30 years of professional development experience. Shai built virtual machines, development tools, mobile phone environments, banking systems, startup/enterprise backends, user interfaces, development frameworks and much more. Shai is an award-winning, highly rated speaker with a knack for engaging the audience and deep technical chops.

Everything Bad is Good for You is a pop culture book that points out that some things we assume are bad (like TV) have tremendous benefits to our well-being. I love the premise of disrupting the conventional narrative and was reminded of that constantly when debating some of the more controversial features and problems in Java. It’s a feature, not a bug…

One of my favorite things about Java is its tendency to move slowly and deliberately. It doesn't give us what we want right away. The Java team studies the requirements, looks at the other implementations, and learns from them.

I’d say Java’s driving philosophy is that the early bird is swallowed by a snake.

Checked Exceptions

One of the most universally hated features in Java is checked exceptions. They are, as far as I recall, the only innovative feature Java introduced. Most of the other concepts in Java existed in other languages; checked exceptions were a brand-new idea that other languages rejected. They aren't a "fun" feature, and I get why people don't like them. But they are an amazing tool.

The biggest problem with checked exceptions is that they don't fit nicely into functional syntax. This is true for nullability as well (which I will discuss shortly). That's a fair complaint. Functional programming support was tacked onto Java, and in terms of exception handling it was poorly done. The Java compiler could have detected checked exceptions and required an error callback. This was a mistake made when these capabilities were introduced in Java 8. If these APIs had been introduced into Java more carefully, we could have written code like this:

api.call1()
    .call2(() -> codeThatThrowsACheckedException())
    .errorHandler(ex -> handleError(ex))
    .finalCall();

The compiler could force us to write the errorHandler callback if it was missing, which would satisfy the spirit of checked exceptions perfectly. This is possible because checked exceptions are a feature of the compiler, not the JVM. A compiler could detect a checked exception in the lambda and require a specially annotated exception-handling callback.
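
As a sketch of how such an API could be approximated in today's Java. All the names here (CheckedPipeline, ThrowingRunnable, the method names) are hypothetical, and a real compiler feature would enforce the handler at compile time rather than at runtime as this sketch does:

```java
import java.util.function.Consumer;

// Hypothetical sketch: a functional interface whose single method declares a
// checked exception lets lambdas throw checked exceptions, and the fluent API
// refuses to run without an error callback.
public class CheckedPipeline {
    @FunctionalInterface
    interface ThrowingRunnable {
        void run() throws Exception;
    }

    private final ThrowingRunnable body;
    private final Consumer<Exception> errorHandler;

    private CheckedPipeline(ThrowingRunnable body, Consumer<Exception> errorHandler) {
        this.body = body;
        this.errorHandler = errorHandler;
    }

    static CheckedPipeline of(ThrowingRunnable body) {
        return new CheckedPipeline(body, null);
    }

    CheckedPipeline errorHandler(Consumer<Exception> handler) {
        return new CheckedPipeline(body, handler);
    }

    void finalCall() {
        // A compiler could enforce this statically; the runtime check stands in for it.
        if (errorHandler == null) {
            throw new IllegalStateException("error handler required");
        }
        try {
            body.run();
        } catch (Exception ex) {
            errorHandler.accept(ex);
        }
    }
}
```

Usage would mirror the snippet above: `CheckedPipeline.of(() -> { throw new java.io.IOException("boom"); }).errorHandler(ex -> handleError(ex)).finalCall();` runs the body and routes the checked exception into the callback.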

Why wasn’t something like this added?

This is probably because of the general dislike of checked exceptions; no one attempted to come up with an alternative. No one likes them because no one likes the annoying feature that forces you to tidy up after yourself. We just want to code; checked exceptions force us to be responsible even when we just want to write a simple hello world…

This is, to a great extent, a mistake… We can declare that main throws an exception and create a simple hello world without handling checked exceptions. In large application frameworks like Spring, the checked SQLException is wrapped in a RuntimeException equivalent. You might think I'm against that, but I'm not. It's a perfect example of how we can use checked exceptions and clean up after the fact. The cleanup is performed internally by Spring; at that point the exception-handling logic is no longer crucial and the exception can be converted to a runtime exception.
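
A minimal sketch of that wrapping pattern, with hypothetical names (Spring's real hierarchy is rooted at DataAccessException): the checked exception is handled where cleanup is still possible, then rethrown unchecked for callers who can no longer do anything useful with it.

```java
import java.sql.SQLException;

// Hypothetical unchecked wrapper, in the style of Spring's DataAccessException.
public class UncheckedSqlException extends RuntimeException {
    public UncheckedSqlException(SQLException cause) {
        super(cause);
    }

    static int runQuery() {
        try {
            return queryDatabase(); // declares: throws SQLException
        } catch (SQLException ex) {
            // Cleanup (close resources, roll back, etc.) happens here, where it
            // still matters; after that the exception becomes unchecked.
            throw new UncheckedSqlException(ex);
        }
    }

    // Stand-in for real JDBC code, just for the sketch.
    private static int queryDatabase() throws SQLException {
        throw new SQLException("connection refused");
    }
}
```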

I think a lot of the hate toward checked exceptions comes from bad uses of them, such as MalformedURLException or the encoding exceptions. These exceptions are often thrown for constant input that should never fail. That's just redundant and a bad use of language capabilities. Checked exceptions should only be thrown when there's cleanup we can do. That's an API problem, not a problem with the language feature.
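
The MalformedURLException complaint in miniature (the helper class below is illustrative): even a compile-time constant URL forces checked-exception ceremony, while URI.create shows the alternative design with an unchecked exception.

```java
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;

public class ConstantUrl {
    static URL homepage() {
        try {
            // Constant input, yet the checked MalformedURLException must be handled.
            return new URL("https://example.com");
        } catch (MalformedURLException ex) {
            throw new AssertionError("unreachable for constant input", ex);
        }
    }

    // Better API design for the same job: URI.create throws an unchecked
    // IllegalArgumentException, so constant or pre-validated input needs no ceremony.
    static URI homepageUri() {
        return URI.create("https://example.com");
    }
}
```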

Null

Pouring hate on null has been trending for the past 15+ years. Yes, I know Tony Hoare's "billion-dollar mistake" quote. I think people misuse it.

Null is a fact of life today, whether you like it or not. It’s inherent in everything: databases, protocols, formats, etc. Null is a deep part of programming and will not go away in the foreseeable future.

The debate over null is pointless. The debate that matters is whether the cure is better than the disease, and I'm as yet unconvinced. What matters isn't whether null was a mistake; what matters is what we do now.

To be fair, this directly correlates with your love of functional programming paradigms. Null doesn't play nicely in FP, which is why it became a punching bag for the FP crowd. But are we stepping back or stepping forward?

Let’s break this down into three separate debates:

  • Performance

  • Failures

  • Ease of programming

Performance

Null is fast. Super fast. Literally free. The CPU effectively performs the null check for us: dereferencing null traps to the OS, and the JVM turns the trap into an exception. We don't need to write code to handle null. The alternatives can be very low overhead and can sometimes be compiled down to null for CPU performance benefits. But this is harder to tune.

Abstractions leak and null is the way our hardware works. For most intents and purposes, it is better.

There is a caveat. We need the ability to mark some objects as non-null (as Valhalla plans to do). This allows for a better memory layout and can help speed up code. Notice that we can accomplish this while maintaining object semantics; a marker would be enough.

I would argue that null takes this round.

Failures

People hate NullPointerException. This baffles me.

NullPointerException is one of the best errors to get. It's the fail-fast principle in action. The error is usually simple to understand, and even when it isn't, it's usually not far off. It's an easy bug to fix. The alternative might be initializing an empty object that we then need to verify, or setting a dummy object to represent null.
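
Modern JVMs reinforce the fail-fast point. Since JDK 14 (JEP 358, enabled by default since JDK 15), NullPointerException messages name the exact null reference. A small sketch with hypothetical record names:

```java
// With helpful NPE messages enabled, the exception text alone usually
// pinpoints the bug without even opening a debugger.
public class HelpfulNpe {
    record Address(String city) {}
    record User(Address address) {}

    static String failingLookup() {
        User u = new User(null); // address is null
        try {
            return u.address().city(); // NPE: the return value of address() is null
        } catch (NullPointerException ex) {
            // The message reads roughly: Cannot invoke "HelpfulNpe$Address.city()"
            // because the return value of "HelpfulNpe$User.address()" is null
            return ex.getMessage();
        }
    }
}
```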

Open a database that has been around long enough and search for “undefined”. I bet it has quite a few entries… That’s the problem with non-null values. You might not get a failure immediately. You will get something far worse. A stealth bug that crawls through the system and pollutes your data.

Since null is so simple and easy to detect there’s a vast number of tools that can deal with it both in runtime and during development. When people mention getting a null pointer exception in production I usually ask: what would have been the alternative?

If you could have initialized the value to begin with then why didn’t you do it?

Java has the final keyword; you can use it to keep non-null stateful values. Mutable values are the main reason for uninitialized or null values. It's very possible that a non-null language wouldn't fail. But would its result be worse?
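
A minimal sketch of that pattern, assuming a hypothetical Account class: final guarantees the field is assigned exactly once before the constructor returns, and Objects.requireNonNull turns a late, mysterious NPE into an immediate one at the only write site.

```java
import java.util.Objects;

public class Account {
    // The compiler rejects any constructor path that leaves this unassigned,
    // so the field can never be observed uninitialized.
    private final String owner;

    public Account(String owner) {
        this.owner = Objects.requireNonNull(owner, "owner"); // fail fast here, not later
    }

    public String owner() {
        return owner;
    }
}
```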

In my experience, corrupt data in storage is far worse. The problem is insidious and hides under the surface. There’s no clue as to the origin of the problem and we need to set “traps” to track it down. Give me a fail-fast any day.

In my opinion, null has this one hands down…

Ease of Programming

An important point to understand is that null is a requirement of modern computing. Our entire ecosystem is built on top of null. Languages like Kotlin demonstrate this perfectly; they have both nullable and non-null types.

This means we have duplication. Every concept related to objects is expressed twice, and we need to maintain semantics between null and non-null. This raises the bar of complexity for developers new to such languages and makes for some odd syntax.

This in itself would be fine if the complexity paid off. Unfortunately, such features only resolve the most trivial cases of null. The complex cases aren't covered, since objects can still contain nulls retrieved from external sources. We're increasing language complexity for limited benefit.

Boilerplate

This used to be a bigger issue in the past, but looking at a typical Java file vs. TypeScript or JavaScript the difference isn't as big. Still, people nitpick. A smart engineer I know online called the use of semicolons in languages "laziness".

I don't get that. I love the semicolon requirement and am always baffled by people who have a problem with it. As an author, it lets me format my code while ignoring line length: I can break a line wherever I want, because the semicolon is the part that matters. If anything, I would have loved to remove the ability to write conditional statements without curly braces, e.g.:

if(..) x();
else y();

That’s terrible. I block these in my style requirements; they are a recipe for disaster with an unclear beginning or end.

Java forces organization, and this is a remarkable thing. Classes must be in a specific file, and packages map to directories. This might not matter when your project is tiny, but when you handle a huge code base it becomes a godsend: you instantly know where to look for clues. That is a powerful tool. Yes, it leads to some verbosity and some deep directory structures. But Java was designed by people who build 1M LoC projects, and it scales nicely thanks to the boilerplate. We can't say the same for some other languages.

Moving Fast

Many things aren’t great in Java, especially when building more adventurous startup projects. That’s why I’m so excited about Manifold. I think it’s a way to patch Java with all the “cool stuff” we want while keeping the performance, compatibility and stability we love.

This can let the community move forward faster and experiment, while Java as a platform can take the slow and steady route.

Final Word

Conventional wisdom is problematic, especially when it is so one-sided and presents a one-dimensional argument in which a particular language feature is inferior. There are tradeoffs to be made, and my bias probably shines through my words.

However, the cookie-cutter counterpoints don't cut it. The facts don't present a clear picture in their favor. There's always a tradeoff, and Java has walked a unique tightrope. Even a slight move in the wrong direction can produce a fast tumbling-down effect. Yet it maintains its traction despite the efforts of multiple groups to portray it as antiquated. This has led to the ridiculous perception among developers that Python and JavaScript are "newer" languages.

I think the solution for that is two-fold. We need to educate about the benefits of Java's approach to these problems, and we need solutions like Manifold to explore potential directions freely, without the encumbrance of the JCP. Having a working proof of concept will make integrating new ideas into Java much easier.

Comments

J: Helpful 👌

J: Very useful

J: Very useful 👈

Oscar Ablinger:

I agree that it's good to look at commonly bad things again and critically evaluate if, why and how they are bad. And since I disagree with you on a few fronts, I figured, I should argue my thoughts.

(Note that I've not watched the video, but only read the article so I apologize if you've addressed some of it in the video.)

Checked Exceptions

I agree here. I actually think that even unchecked exceptions are a problem.

Exceptions are a form of return value, and since you don't have to specify them in the method definition, every Java method essentially has infinite possibilities for return values by default. However, it gets hate because of how clumsily it was implemented in Java, and so no one wants to write the throws line by hand. I'm not sure how to best solve it, though.

Null

I don't think the common problem is that null exists, but rather that Java doesn't support you with it at all.

Functional languages like Haskell have nulls in the form of the Maybe/Option monad. In Java every object may be null until proven otherwise.

When people complain about null-safety in Java, they complain about the lack of compile-time checks. The compiler could always find out whether a variable might be null at a certain point, but it doesn't tell the programmer. It would make sense to have it explicitly in the syntax, too.

Yes, there are annotations and external tools, but they will never be able to approach a built-in solution.

You could even add null safety later like C# did.

So let's go through the points again:

Performance

...is actually worse because of the null unsafety. For one, when the compiler knows that a variable is never null it can skip the check and you have one less branch. Of course the Java compiler could still do it, but it has to be done additionally instead of being part of the language syntax.

More importantly, however, throwing an exception is slow. In high-performance Java code, you can almost never afford to create and throw an exception.

Failures

When people mention getting a null pointer exception in production I usually ask: what would have been the alternative?

A compile-time error.

And if you don't want to use a dummy object, then throw an exception yourself that better describes why it happened. Maybe even something like UnexpectedNullInDatabase.

Ease of Programming

I guess this is subjective. Personally I prefer having options like ?. and similar, because I will have to do these checks anyways if I want to program defensively. But of course it's more syntax to learn.

The complex objects aren’t supported since they contain null retrieved from external sources.

Why not? Data from external sources may always be corrupt and you should check for it. So in the case of null the language should always assume that they can be null and the programmer should handle it however they see fit.

Boilerplate

I think it got a lot better in Java 17. My main issue with the boilerplate code in Java were simple DTOs, but with records they become pretty small now, too.
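
As a concrete sketch of that point, with a hypothetical DTO: the single record declaration below replaces the constructor, accessors, equals, hashCode and toString of a classic DTO class.

```java
// One line replaces dozens of lines of classic DTO boilerplate; the compiler
// generates the canonical constructor, accessors, equals, hashCode and toString.
public record UserDto(long id, String name, String email) {}
```

For example, `new UserDto(1, "Ada", "ada@example.com")` prints as `UserDto[id=1, name=Ada, email=ada@example.com]` and compares by value out of the box.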

As for semicolons: I prefer the look of no semicolons ^^ It's purely aesthetic though, and I see the value of them.

End

I guess I mostly disagree with you on the null front. I do think that "fail-fast" starts at compile time and errors with null are some of the easiest to bake into a language.

Shai Almog:

Thanks for your feedback. Despite the lengthy comments I don't think we're that far off in our opinions.

  • Optional sucks. I agree. It tries to make Java into a functional language which it isn't. I'm not a fan of the whole "chaining" process. It creates unreadable flows and error stacks just so code can "look good". To be fair, I do use it in streams and there are some positive aspects.

  • Null checks in compiler - Notice I mentioned this. This is checked for the most trivial cases of null. Since it doesn't eliminate the core need for null the problematic cases remain. Any IDE or linter usually finds these things just as well as any compiler feature. It's not worth the extra code or thought.

  • Notice I specifically mentioned the ability to declare nullability (which is important in Valhalla) as a reason for better memory layout in the performance section.

  • Notice that a compiler typically knows when a variable is never null; it doesn't really need that extra hint. Since nullability is effectively free at the CPU level this has no direct performance impact; the importance is only in memory layout (which does impact performance, but it's a double-edged sword).

  • Throwing an exception does require allocation and stack unwinding. So yes, you shouldn't throw an exception in normal code execution. But having the exception support is free, so in terms of performance this is still a win for null.

  • That's my point; you wouldn't get a compile error for most of the "problematic things". If you have code that throws "NullInDatabase" then you need to write that code. With NPE we get the same result, and the stack leads directly to the database. This is code we don't need to write, and even the compiler doesn't really need to generate it since the CPU/OS handle it seamlessly. The best code is the code that no one writes...

  • I like ?. I hope Java adds this. It isn't about non-null though.

  • "Data from external sources may always be corrupt and you should check for it." - Sure. But the language can't check for it and the checking isn't seamless... That's the point.

Oscar Ablinger:

Shai Almog I don't think we're far off, either ^^

Optional sucks. I agree.

I never said Optional sucks? I think it's useful in some cases, like what you mentioned with streams. But honestly, I haven't had the option to use it enough to form a proper opinion. I write highly performance-sensitive code most of the time and the object churn from using Optional means it is almost never even an option.

I prefer the implementation of the Option monad in other languages over null, however.

Null checks in compiler - Notice I mentioned this. This is checked for the most trivial cases of null. Since it doesn't eliminate the core need for null the problematic cases remain.

Which problematic cases?

Since nullability is effectively free at the CPU level this has no direct performance impact

Yeah, I mostly just wrote the "performance is actually better" to be a bit controversial (kinda like clickbait I guess ;) ). The performance part of my comment I actually care the least about.

I'd assume most compilers for languages with null alternatives are going to compile it down to use null at CPU level anyways so I don't think it makes a difference.

For the Java compiler I'm not entirely sure how smart it can be about this, but at least in some cases (hot-loading) it's impossible to do a proper null check at compile time; it can at most be done during runtime (via the JIT), and I'm not sure whether that analysis is too heavy to do at runtime.


What I care more about with null is that it leads to having to write code like this:

@Nonnull
public String foobar(@Nonnull String foo, @Nonnull String bar) {
    requireNonNull(foo, "foo");
    requireNonNull(bar, "bar");

    return doFoobar(foo, bar); // still no actual guarantee since you can compile it ignoring the annotations.
}

In e.g. C# you could write the same code like this:

public string Foobar(string foo, string bar) {
    return DoFoobar(foo, bar);
}

and with the NRT feature flag turned on you have the same functionality in the good-case scenario (minus some unnecessary null checks) and completely prevent the bad-case scenario during compilation instead of relying on external tools or getting NPEs during runtime.

But having the exception support is free, so in terms of performance this is till a win for null.

I mean having the support is free, but actually using it isn't. I don't get how that's a win for null?

That's my point you wouldn't get a compile error for most of the "problematic things".

Again, which problematic things? There are languages with nonnull by default that don't have any problems with it.

If you have code that throws "NullInDatabase" then you need to write that code. With NPE we get the same result and the stack leads directly to the database.

We don't get the same result, because

  1. I can catch NullInDatabase nicely. Catching a NPE is risky since at any time more code could be added that throws this exception with entirely different semantics.
  2. With nonnullable-as-default the programmer is forced to deal with it. The one implementing the database connection, for instance, could still choose to simply return a nullable value to the user instead of throwing. But it's a deliberate choice that the language forces you to make. With nullable-by-default the programmer may overlook it.
  3. Most of the time I actually want to deal with it differently (e.g. using some default value or rejecting only that record instead of the entire batch). So I'd have to think about it either way. But sometimes (often) I make mistakes and forget to properly check stuff and so bugs happen.

"Data from external sources may always be corrupt and you should check for it." - Sure. But the language can't check for it and the checking isn't seamless... That's the point.

Yes, the language cannot check it, but it can force you to check it and in the process prevent errors that would otherwise be overlooked.

Shai Almog:

Oscar Ablinger

Which problematic cases?

Even a simple Map with a null value. You will fail in runtime regardless of a compiler feature.

About the code you posted. Notice that some null validations are enforced by the frameworks too. E.g. bean validation, etc.

However, this is a mistake. Why do you need to check if a value is null???

Just let it propagate and fail when you try to use the variable.

Or in the case of String as you did here:

return doFoobar(foo.toString(), bar.toString());

I mean having the support is free, but actually using it isn't. I don't get how that's a win for null?

Exceptions are rare. We optimize for the common case. If something is fast except for the 1-in-1000 case, then it's fast.

I can catch NullInDatabase nicely. Catching a NPE is risky since at any time more code could be added that throws this exception with entirely different semantics.

And what would you do for that catch? Do you have a fallback plan for that?

I get what you're saying but if you have a way to handle NullInDatabase then you would do it where the failure happened not in the catch.

Typically in a Spring application you just let the exception bubble up and have a generic exception processor return a proper error response to the rest request.

With nonnullable-as-default the programmer is forced to deal with it. ... e.g. using some default value or rejecting only that record instead of the entire batch

I get that claim but I think it seems good on paper only. You don't really have a recovery mode for nulls in most cases.

The problem is exactly this. The non-null compiler feature shows you something that "might" be null. So you spend time building logic to deal with this failure. This goes against the fail-fast principle. You spent time writing logic that doesn't fail when there's a problem.

But if your logic throws a custom exception then you invented yet another way to fail. That NullInDatabase doesn't provide any real world benefit over a generic NPE with a stack trace.

Yes, the language cannot check it, but it can force you to check it and in the process prevent errors that would otherwise be overlooked.

This is the source of our disagreement. I think it forces us to check it and as a result we write more code and add more places where we can fail. I think failing simply in a generic way is usually the best approach.

Oscar Ablinger:

Shai Almog

Even a simple Map with a null value. You will fail in runtime regardless of a compiler feature.

Oh you meant if you tack it onto Java via its annotations. Then yes, that has a lot of limitations, but that's also why I think it's such a shame that Java wasn't designed with it from the ground up. In e.g. F# you can have a map like Map<string, string option> which means that "null" values are allowed as a value but not as a key.

It's still compiled to the same as a Map<string, string> in C#, but it provides compile-time safety.

However, this is a mistake. Why do you need to check if a value is null???

Just let it propagate and fail when you try to use the variable.

Because of the fail-early principle? We want to fail as early as possible. Otherwise I'll have to move up the stacktrace to find out why the value could be null. In this case I know it was the caller of that method. That saves a lot of time not just if I have to correct it, but also if e.g. the calling code is owned by a different team.

It also helps prevent unexpected states when methods suddenly abort in the middle of their execution due to an NPE. Yes, you should be able to handle it (e.g. transactional logic if necessary), but by checking earlier I can rely on the value afterwards, resulting in cleaner and faster code. It's also easy to overlook places where transactional logic is required.

Exceptions are rare. We optimize for the common case. If something is fast except for the 1 in 1000 case then its fast.

Yeah, it's not a big issue, but I don't see how that's a win for null.

Also, a 1 in 1000 case in Java actually has a performance impact on the other 999 as the runtime realizes that the fast path is not reliable and cannot use the optimistic nullness assertions. Most programs don't have to care about such minimal performance gains, though. I just wanted to mention it since most people don't know about it.

And what would you do for that catch? Do you have a fallback plan for that?

I get what you're saying but if you have a way to handle NullInDatabase then you would do it where the failure happened not in the catch.

Sometimes, I want to handle it where the failure happens. Then I wouldn't throw an exception. But if I cannot handle it, then I'll have to throw an exception and I believe that sometimes a specific exception is preferable to a generic one.

Let's say I call a method that should send some data to the database. If I get an NPE it could be because I gave it invalid data, the method is implemented wrong, a connection cannot be established, something in the database is null, a configuration value is null, etc. I'm mostly interested in whether or not it's my fault. With a specific exception I immediately know, while with an NPE I'd assume it's the fault of the code where it happens. But maybe it's the caller's fault, or another component that it depends on returns null even though it shouldn't?

You don't really have a recovery mode for nulls in most cases.

Yes, because most of the time it's a programming error. Most of the time the "recovery" is "tell the programmer to fix it". And that happens by looking at the code, figuring out why it can be null and what to do in that case. Most of the time that means finding out who returns or provides a null value in a place where they shouldn't.

With nonnull-by-default that happens before an error happens. Oh the method doesn't accept null? Then I have to handle it. Sometimes that means using a default value, sometimes that means failing. But I notice it when writing the offending code not when going through my log lines.

The problem is exactly this. The non-null compiler feature shows you something that "might" be null. So you spend time building logic to deal with this failure. This goes against the fail-fast principle. You spent time writing logic that doesn't fail when there's a problem.

No one says you're not allowed to fail when you see a null. But it becomes explicit. And when reading the code I immediately know that it's an option. More importantly, when writing code in a language that has non-nullable types, I don't have to ask all the time "can this be null?" and make sure it doesn't fuck anything up if it is, or "can I use null in there" and look through the entire method to see if it handles it correctly (which might even change in the future without me knowing).


The annotations and requireNonNull are bandages for a lot of these problems and still allow Java to scale to enterprise size. But I don't see the advantage over having it baked into the language itself.

Shai Almog:

Oscar Ablinger

It's still compiled to the same as a Map<string, string> in C#, but it provides compile-time safety.

Yes, but you can't use that for an external data source while going over all the entries.

However, this is a mistake. Why do you need to check if a value is null???

Because of the fail-early principle?

Yes. I'll need to move up the stack. But it also saves me from replicating and keeping stuff that's no longer relevant.

Case in point: I have code that currently doesn't deal with variable X being null. If I add an explicit check at some point that fails immediately, then even if I fix the code, the failure (which is no longer needed) will still happen.

I want to see the "real failure" when it's possible. A validation should have a reason.

Yeah, it's not a big issue, but I don't see how that's a win for null.

I consider the native performance on the CPU to be a win. Some things in non-null languages can be compiled down to null. But some can't, and this isn't clear from the language. That's the win.

Also, a 1 in 1000 case in Java actually has a performance impact on the other 999 as the runtime realizes that the fast path is not reliable and cannot use the optimistic nullness assertions.

Not so much. Since the exception is an interrupt there's really no change for most cases. The exceptions are rare, if this actually does happen the JIT just removes that optimization.

Let's say I call a method that should send some data to the database.

I understand what you're saying. But did you ever write code like that???

I have code that retries, that's a common case. But writing an entry to the database is VERY specific. I can't just write generic code that catches the case and does something differently. I generally would write if(x == null) ... within the same method.

But it becomes explicit

This is a fair point. But I don't think it's a necessary one. It pushes us to overthink null situations in some cases and write more code instead of just failing at runtime.

I get that failing at runtime is "the worst". But for a vast majority of the cases it's the right and simplest thing to do. It's then easy to fix.

Anyway, I think we're getting to the point where we keep repeating the same thing. This is an interesting discussion though, so thank you. I'll let you have the final word if you think more needs to be said.