@codenameone
25 years ago I was lucky. I faced my debugging ignorance. It's a skill we don't learn at school. Hopefully I can help you reach a similar epiphany.
Entrepreneur, author, blogger, open source hacker, speaker, Java rockstar, developer advocate and more. Ex-Sun/Oracle guy with 30 years of professional development experience. Shai built virtual machines, development tools, mobile phone environments, banking systems, startup/enterprise backends, user interfaces, development frameworks and much more. Shai is an award-winning, highly rated speaker with a knack for engaging the audience and deep technical chops.
I'm open to suggestions; my DMs are open.
Oscar Ablinger

> It's still compiled to the same as a Map<string, string> in C#, but it provides compile-time safety.

Yes, but you can't use that for an external data source and going over all the entries.

> However, this is a mistake. Why do you need to check if a value is null??? Because of the fail-early principle?

Yes. I'll need to move up the stack. But it also saves me from replicating and keeping stuff that's no longer relevant. Case in point: I have code that currently doesn't deal with variable X being null. If I add an explicit check that fails immediately, then even after I fix the code, the failure (which is no longer needed) will still happen. I want to see the "real failure" when possible. A validation should have a reason.

> Yeah, it's not a big issue, but I don't see how that's a win for null.

I consider the performance natively on the CPU to be a win.

> Some things in non-null cases can be compiled to null. But some can't, and this isn't clear from the language. That's the win. Also, a 1-in-1000 case in Java actually has a performance impact on the other 999, as the runtime realizes that the fast path is not reliable and cannot use the optimistic nullness assertions.

Not so much. Since the exception is an interrupt, there's really no change for most cases. The exceptions are rare; if this actually does happen, the JIT just removes that optimization.

> Let's say I call a method that should send some data to the database.

I understand what you're saying, but did you ever write code like that??? I have code that retries; that's a common case. But writing an entry to the database is VERY specific. I can't just write generic code that catches the case and does something differently. I generally would write if(x == null) ... within the same method.

> But it becomes explicit.

This is a fair point, but I don't think it's a necessary one. It pushes us to overthink null situations in some cases and write more code instead of just failing at runtime.
I get that failing at runtime is "the worst". But for a vast majority of the cases it's the right and simplest thing to do. It's then easy to fix. Anyway, I think we're getting to the point where we keep repeating the same thing. This is an interesting discussion though, so thank you. I'll let you have the final word if you think more needs to be said.
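To make the trade-off from this exchange concrete, here's a minimal sketch of the two styles (the names `USERS`, `roleDefensive` and `roleFailFast` are invented for illustration):

```java
import java.util.Map;

public class FailFastDemo {
    static final Map<String, String> USERS = Map.of("shai", "admin");

    // Defensive style: an explicit if (x == null) check in the same method,
    // failing early with a custom message.
    static String roleDefensive(String name) {
        String role = USERS.get(name);
        if (role == null) {
            throw new IllegalStateException("no role for " + name);
        }
        return role.toUpperCase();
    }

    // Fail-at-runtime style: just use the value. A missing entry throws an
    // NPE whose stack trace points at the exact line that dereferenced null.
    static String roleFailFast(String name) {
        return USERS.get(name).toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(roleDefensive("shai")); // prints ADMIN
        System.out.println(roleFailFast("shai"));  // prints ADMIN
    }
}
```

Both versions fail for a missing user; the difference is only whether the failure line was hand-written or came for free from the dereference.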
Oscar Ablinger

> Which problematic cases?

Even a simple Map with a null value. You will fail at runtime regardless of a compiler feature. About the code you posted: notice that some null validations are enforced by the frameworks too, e.g. bean validation. However, this is a mistake. Why do you need to check if a value is null??? Just let it propagate and fail when you try to use the variable. Or, in the case of String as you did here: return doFoobar(foo.toString(), bar.toString());

> I mean having the support is free, but actually using it isn't. I don't get how that's a win for null?

Exceptions are rare. We optimize for the common case. If something is fast except for the 1-in-1000 case, then it's fast.

> I can catch NullInDatabase nicely. Catching an NPE is risky, since at any time more code could be added that throws this exception with entirely different semantics.

And what would you do in that catch? Do you have a fallback plan for that? I get what you're saying, but if you had a way to handle NullInDatabase, you would do it where the failure happened, not in the catch. Typically in a Spring application you just let the exception bubble up and have a generic exception processor return a proper error response to the REST request.

> With nonnullable-as-default the programmer is forced to deal with it ... e.g. using some default value or rejecting only that record instead of the entire batch.

I get that claim, but I think it seems good on paper only. You don't really have a recovery mode for nulls in most cases. The problem is exactly this: the non-null compiler feature shows you something that "might" be null, so you spend time building logic to deal with this failure. This goes against the fail-fast principle: you spent time writing logic that doesn't fail when there's a problem. And if your logic throws a custom exception, then you've invented yet another way to fail. That NullInDatabase doesn't provide any real-world benefit over a generic NPE with a stack trace.
> Yes, the language cannot check it, but it can force you to check it and in the process prevent errors that would otherwise be overlooked.

This is the source of our disagreement. I think it forces us to check it, and as a result we write more code and add more places where we can fail. I think failing simply, in a generic way, is usually the best approach.
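Here's a rough sketch of the "let it bubble up to a generic processor" approach. This is plain Java as a stand-in (in an actual Spring app this role is played by a @ControllerAdvice/@ExceptionHandler); all the names here are made up:

```java
public class BubbleUpDemo {
    // Stand-in for a framework's generic exception processor: every
    // uncaught exception becomes one uniform error response.
    static String handleRequest(Runnable endpoint) {
        try {
            endpoint.run();
            return "200 OK";
        } catch (RuntimeException e) {
            // A real app would log the stack trace here; it already points
            // at the failing line, so no custom NullInDatabase type is needed.
            return "500 " + e.getClass().getSimpleName();
        }
    }

    // Uses the value directly: a null fails right here, at the real failure.
    static void saveRecord(StringBuilder db, String value) {
        db.append(value.trim()); // stand-in for the database write
    }

    public static void main(String[] args) {
        StringBuilder db = new StringBuilder();
        System.out.println(handleRequest(() -> saveRecord(db, "row1"))); // 200 OK
        System.out.println(handleRequest(() -> saveRecord(db, null)));   // 500 NullPointerException
    }
}
```

No per-call-site null handling is written, yet the REST caller still gets a proper error response and the log still shows exactly where the null was used.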
Thanks for your feedback. Despite the lengthy comments, I don't think we're that far off in our opinions.

Optional sucks. I agree. It tries to make Java into a functional language, which it isn't. I'm not a fan of the whole "chaining" process. It creates unreadable flows and error stacks just so code can "look good". To be fair, I do use it in streams, and there are some positive aspects.

Null checks in the compiler - notice I mentioned this. This is checked for the most trivial cases of null. Since it doesn't eliminate the core need for null, the problematic cases remain. Any IDE or linter usually finds these things just as well as any compiler feature. It's not worth the extra code or thought. Notice I specifically mentioned the ability to declare nullability (which is important in Valhalla) as a reason for better memory layout in the performance section.

Notice that a compiler typically knows when a variable is never null; it doesn't really need that extra hint. Since nullability is effectively free at the CPU level, this has no direct performance impact; the importance is only in memory layout (which does impact performance, but it's a double-edged sword). Throwing an exception does require allocation and stack unwinding, so yes, you shouldn't throw an exception in normal code execution. But having the exception support is free, so in terms of performance this is still a win for null.

That's my point: you wouldn't get a compile error for most of the "problematic things". If you have code that throws "NullInDatabase" then you need to write that code. With NPE we get the same result and the stack leads directly to the database. This is code we don't need to write, and that even the compiler doesn't really need to generate, since the CPU/OS handle it seamlessly. The best code is the code that no one writes...

I like ?. and I hope Java adds it. It isn't about non-null though.

"Data from external sources may always be corrupt and you should check for it." - Sure.
But the language can't check for it and the checking isn't seamless... That's the point.
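To illustrate the chaining complaint, here's a small comparison sketch (the `User` record and all names are invented), including what a hypothetical `?.` safe-navigation operator would look like:

```java
import java.util.Optional;

public class OptionalDemo {
    record User(String nickname) {}

    // Chained Optional style: reads "functional", but buries the null
    // handling and yields harder-to-follow stack traces when it breaks.
    static String displayChained(User user) {
        return Optional.ofNullable(user)
                .map(User::nickname)
                .map(String::trim)
                .orElse("anonymous");
    }

    // Plain style: the same logic as ordinary statements. A hypothetical
    // safe-navigation operator would shorten this to: user?.nickname()?.trim()
    static String displayPlain(User user) {
        if (user == null || user.nickname() == null) {
            return "anonymous";
        }
        return user.nickname().trim();
    }

    public static void main(String[] args) {
        System.out.println(displayChained(null));              // anonymous
        System.out.println(displayPlain(new User(" Shai "))); // Shai
    }
}
```

Both versions behave identically; the disagreement is purely about which one is more readable and easier to debug when a link in the chain is null.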
Jeannot Muller Unfortunately, this is going in the opposite direction. Oracle removed JavaFX from the JDK after I wrote that. JetBrains removed the small amount of JavaFX usage from their IDEs and replaced it with JCEF. We followed suit for Codename One. The trend is pretty bleak and there are no signs of it changing. When I wrote that, Flutter didn't exist. I don't like Flutter at all, but its growth proved that there's a desire for a native mobile/desktop solution. That makes the failure of JavaFX even more pronounced. I'm more positive about solutions like Compose for Desktop. It is led by a company that knows how to build user interfaces and developer-friendly tooling. It's time to let go of JavaFX as a failed experiment.