A Week With the Fairphone 2
I have spent about a week with the Fairphone 2 - a phone that aims to bring fair trade ideas to smartphone production. Besides the ethical reasons for buying the Fairphone 2 there were actually a few technical ones, too:
- Dual SIM
- A quite up-to-date Android (5.0)
- microSD slot
- Modular design, i.e. every part of the phone - display, microphone etc. - can be replaced.
The phone is really easy to open and disassemble. I like its basic design a lot and wish there were more phones like it.
Now that I have spent a week with the Fairphone 2, I regret to say the experience was less than pleasant:
- The Android is not plain vanilla; it has some additional features. For example, the OS shows the privacy impact of each app. However, if you disable this feature, some apps don't work: For me, Threema did not restore my backup - no error message, nothing. Luckily the Threema support was aware of the issue. If you enable the feature again, there is no problem.
- Battery life is not too great. I can hardly get through a full day without recharging - which should be the minimum in my opinion.
- The display flickers if the brightness is set to a low value.
- I have trouble making calls. Yesterday I could place a call - but the person I called could not hear me, even after redialing several times. Today I had trouble understanding the person I called.
- The phone reboots - at least once a day, sometimes more often. Today it rebooted during a call. I am not the only one, and I don't know any workaround. An unreliable phone is a real issue in my opinion. But maybe it is a hardware issue and not all phones have this problem.
- There are many minor issues. The phone shows an exclamation mark at the wireless symbol - even though everything works. Only the red LED works - all others don't. The location icon stays on - even though no app requests the location. There is a quite long list of such issues in the forum and in the official report. Those are minor issues and I don't care too much about them - but some are so obvious that I wonder whether the phone and its software were actually tested.
Today I submitted a support ticket concerning the reboots. It seems the answer can take up to eight working days. I am considering returning the phone.
So I really think the Fairphone 2 has a lot of potential and I really want the product to succeed. But I am afraid that in its current state it is hardly usable for me. I wasn't sure whether I should write this blog post - I really love the technology and the idea behind the Fairphone. But the information concerning the issues is public anyway, so I figure this blog post might not be such an issue after all.
I hope I can update the blog post soon with better news...
Update 2016-02-17: The support ticket was actually handled quite quickly - in less than a day. I tried to tweak the phone to at least solve the reboot issue - but I wasn't successful. So I have applied to return the phone.
Update 2016-02-21: The return is not yet approved. I am considering a Shiftphone instead.
Update 2016-02-26: https://forum.fairphone.com/t/fairphone-2-random-reboots-15-times-a-day/11553/93 is Fairphone's first official statement about the reboots. Quite honestly, it doesn't look like there will be a fix soon or like the problem has been fully analyzed. The reboots are neither on the list of known issues in the forum nor on Fairphone's official list of known issues. IIRC they were on the forum list at one point. :-(
Update 2016-08-12:
- The reboot problem is fixed.
- However, my Fairphone destroyed my SIM card - a known problem that is now fixed.
- My microphone stopped working at one point. I got a replacement for free and now that problem is solved, too.
- Also, there is now a problem that requires you to reenter the PIN. However, you can deactivate the PIN for the SIM card.
Still, the battery life is not that great and the proximity sensor doesn't work too well - i.e. if you look at the screen during a call, the screen is often blank.
So right now the phone is basically usable and most of the big problems are solved. I use it every day. However, the latest problem around the SIM PIN shows that the quality of the software is still not great.
Update 2016-09-05:
- Microphone dead again
- Cover doesn't look too nice any more
So I bought a Moto G4+ to replace the Fairphone 2. I just need a reliable mobile phone.
Time To Move On...
I have started a new blog called Continuous Architecture at heise developer. It's in German - sorry to the English audience. I will therefore probably not spend a lot of time on this blog in the future. Thanks a lot for reading through all the material, for commenting and for the discussions. Hope to see you at the new blog! :-)
Approaches for Über / Fat JAR Deployment in Java EE
I believe the deployment and monitoring model of Java EE application servers is outdated. If you want to learn more about my opinion, refer to my article at JAXenter, the slides or my interview at InfoQ. Instead I advocate a model that deploys an application including all the needed infrastructure as one large JAR file. This model is supported e.g. by Vert.x, Dropwizard, the Play Framework and of course Spring Boot. However, what about Java EE applications? While in theory there is no reason why a Java EE application should not be deployed as one huge JAR file, there are not that many solutions to actually do that.
So here are some approaches that might be worth looking at:
- Spring Boot actually supports some Java EE APIs. A blog post describes this in more detail: It shows JAX-RS with Jersey, transactions with JTA and the Java EE annotation @Transactional, and also JSR 330 annotations for dependency injection. However, the example still uses some Spring classes, and some technologies like EJB are not supported. But Spring Boot has a quite advanced model for metrics and monitoring - see Spring Boot Actuator.
- TomEE supports this model to some extent. See this older blog post. Also, David Blevins of the TomEE team has a quick demo and another demo.
- There is Backset, which supports CDI, JSF and JPA - with EJB, JTA and JAX-RS on the roadmap.
- Finally, this repo shows how to use Undertow with JAX-RS in a main method - a sketch of this approach follows below.
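To give an idea of what such a main-method deployment looks like, here is a minimal sketch using RESTEasy's embedded Undertow server. I took UndertowJaxrsServer and its deploy method from my reading of the RESTEasy documentation - treat the exact bootstrap API as an assumption and check it against the version you use:

    import java.util.Collections;
    import java.util.Set;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.core.Application;
    import org.jboss.resteasy.plugins.server.undertow.UndertowJaxrsServer;

    @Path("hello")
    public class HelloResource {
        @GET
        public String hello() {
            return "Hello from a fat JAR!";
        }
    }

    // JAX-RS Application that lists the resources to deploy
    class HelloApplication extends Application {
        @Override
        public Set<Class<?>> getClasses() {
            return Collections.<Class<?>>singleton(HelloResource.class);
        }
    }

    class Main {
        public static void main(String[] args) {
            // starts an embedded Undertow and deploys the JAX-RS
            // application - no application server required
            UndertowJaxrsServer server = new UndertowJaxrsServer().start();
            server.deploy(HelloApplication.class);
        }
    }

Packaged with the Maven Shade plugin or a similar tool, this becomes one self-contained JAR that you start with java -jar.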
Thanks for the discussion over at Twitter and all the help I got there. If you know any other options - please leave a comment. Thanks!
MongoDB and the CAP Theorem
The CAP theorem by Brewer basically says that a distributed system can only have two of the following three properties:
- Consistency, i.e. each node has the same data
- Availability, i.e. a node will always answer queries if possible
- Partition tolerance, i.e. the system keeps working even if a network failure prevents nodes from communicating with one another
In real life things are a little different: You cannot really sacrifice partition tolerance. The network will eventually fail. Or nodes might fail.
So here is a different approach to understanding the CAP theorem: Imagine a cluster of nodes, each holding a replicated set of data. When the network or a node fails, there are two options to answer a query: First, a node can answer the query based on the data it has. This information might be outdated - the other nodes might have received updates that have not been propagated yet. The node might therefore give an incorrect answer - so the system sacrifices consistency.
The other option in such a situation is to give no answer: The system would rather not answer queries at all than give a potentially incorrect answer. This sacrifices availability - the node does not answer the query even though it is still up.
So let's use the CAP theorem to better understand MongoDB. MongoDB uses a master / slave replication scheme: Data is written to a master node and then replicated to the slaves. If a network failure occurs or the master is down, a slave takes over as the new master. So how does MongoDB deal with the CAP trade-offs? There are two settings that influence MongoDB's behavior:
- Write concerns let you choose when a write attempt is considered successful. The settings vary from "errors ignored" up to settings that define how many nodes must have acknowledged the write operation.
- Read Preferences allow you to choose whether you want to read from the master or also from slaves.
So concerning CAP, this leaves you with different options:
- Using the write concerns you can enforce different levels of consistency in the cluster - you choose how many nodes the data must be stored on. There is a trade-off with availability: The write will fail if the number of nodes the data should be stored on is higher than the number of currently available nodes.
- The read preference can be used to choose which node data should be read from. If you decide to read from the master only, you will get the latest data - even if it has not been propagated to all nodes yet. If you also read from slaves, you might get stale data.
Besides the trade-off between availability and consistency, these settings obviously also influence performance.
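As an illustration, here is a small sketch using the MongoDB Java driver (3.x API). WriteConcern.MAJORITY and ReadPreference.secondaryPreferred() are real driver constants; the host, database and collection names are made up:

    import com.mongodb.MongoClient;
    import com.mongodb.ReadPreference;
    import com.mongodb.WriteConcern;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class CapTuning {
        public static void main(String[] args) {
            MongoClient client = new MongoClient("localhost");
            MongoCollection<Document> orders = client.getDatabase("shop")
                    .getCollection("orders")
                    // leaning towards consistency: the write only succeeds once
                    // a majority of the replica set has acknowledged it
                    .withWriteConcern(WriteConcern.MAJORITY)
                    // leaning towards availability: reads may be served by a
                    // slave (secondary) and can therefore return stale data
                    .withReadPreference(ReadPreference.secondaryPreferred());
            orders.insertOne(new Document("item", "book"));
            System.out.println(orders.find().first());
            client.close();
        }
    }

With WriteConcern.MAJORITY a write fails if a majority of nodes is unreachable - sacrificing availability. With a weaker write concern and reads from secondaries the cluster keeps answering queries but might return stale data - sacrificing consistency.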
If you like, you can read details about what happens when a MongoDB cluster partitions in Jepsen.
So, bottom line: MongoDB allows you to fine-tune the trade-off between consistency and availability using write concerns and read preferences. Concerning partition tolerance there is really no choice - partitions will eventually happen. So MongoDB can be tuned to be AP or CP or something in between - depending on how you configure it. Final note: This is my take on CAP and where MongoDB stands. If you browse around the web you might find different takes on it. I am happy to discuss the details - leave a comment!
Labels: CAP, CAP Theorem, MongoDB
Why Java's Checked Exceptions Are an Issue
Exceptions were originally introduced to make programs safer. Back in the day, languages like C used return codes to signal whether a call was actually successful. Those return codes could easily be ignored - so you could end up with a program that just continues executing even though an error occurred. Exceptions are meant to solve this: To signal an error, an exception is thrown, and this exception has to be handled if the error is to be resolved. Handling the exception is separated from the rest of the control flow, e.g. in a catch block. So handling an error is separated from the rest of the program.
Java introduced three types of exceptions:
- Errors like OutOfMemoryError are unrecoverable problems. So your code cannot really handle them.
- Unchecked exceptions are subclasses of RuntimeException. While they can be handled, it is not mandatory to do so. An example is the NullPointerException, which can be thrown any time an object is accessed - so it makes no sense to force developers to handle it.
- All other exceptions are checked exceptions, i.e. the compiler ensures that they are either handled using catch or declared in the method's throws clause.
Originally I thought of an exception as an extension of a method's return type: Instead of a value of the "normal" return type, the method throws or "returns" an exception. So if the program is to be sound concerning types, every type of exception needs to be handled - just as you would handle a normal return value. Obviously it makes sense to use the type checker to ensure this happens. Seen this way, checked exceptions make a lot of sense.
Checked exceptions are also used throughout the Java APIs. The developers of these APIs obviously know what they are doing - so checked exceptions again seemed like a good idea.
However, later on I did quite a few code reviews. Some typical ways to handle checked exceptions appeared:
- Exceptions were just ignored, i.e. the catch block was empty.
- Exceptions were merely logged. This is also what the default Eclipse code snippet does - the code just swallows the exception. However, developers are supposed to actually handle exceptions, not just log them. More often than not a closer examination revealed that the exception signaled a serious problem and the program should have reported a failure to the user - not just continue and log the problem somewhere. This is pretty much the same situation as ignoring return codes in C - the very problem exceptions were supposed to solve.
- Sometimes the exception was wrapped and rethrown. This is more or less identical to an unchecked exception: The exception is just propagated. Only the type changes, of course - which might not justify the code that had to be written.
Only seldom was the exception really handled, e.g. by trying an alternative algorithm or returning some kind of default value.
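To make the patterns above concrete, here is a sketch in plain Java - readConfig is a made-up method standing in for any API that throws a checked exception:

    import java.io.IOException;
    import java.util.logging.Logger;

    public class CheckedExceptionPatterns {
        private static final Logger LOG =
                Logger.getLogger(CheckedExceptionPatterns.class.getName());

        void ignored() {
            try {
                readConfig();
            } catch (IOException e) {
                // swallowed - the program continues as if nothing happened
            }
        }

        void loggedOnly() {
            try {
                readConfig();
            } catch (IOException e) {
                // logged, but not actually handled - execution just continues
                LOG.severe(e.getMessage());
            }
        }

        void wrappedAndRethrown() {
            try {
                readConfig();
            } catch (IOException e) {
                // effectively turns the checked exception into an unchecked one
                throw new RuntimeException(e);
            }
        }

        // stands in for any API that throws a checked exception
        private void readConfig() throws IOException {
            throw new IOException("config not found");
        }
    }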
You could argue that this is just the fault of the developers who wrote the code. Why did they not implement proper exception handling?
However, some facts made me wonder whether this is actually the correct explanation:
- The Spring project only uses unchecked exceptions.
- Even EJB introduced the EJBException, which can be used to wrap other exceptions in an unchecked exception.
- There is hardly any other language using checked exceptions - Wikipedia lists OCaml, CLU and Modula-3. That makes Java the only mainstream language using checked exceptions. This should really make you wonder: Why didn't C#, for example, implement this feature, too?
So apparently checked exceptions might not be such a smart idea after all - otherwise everybody would be using them. And the reason why so much exception handling is implemented poorly might be that developers are forced to write code to handle exceptions - even if there is no sensible way to do so. This is the primary difference between checked and unchecked exceptions: Unchecked exceptions have a sensible default, i.e. the exception is just propagated to the next level in the call hierarchy. Checked exceptions lack such a default and must therefore be handled in one way or another.
So essentially checked exceptions force the developer of the calling code to handle the exception. This only makes sense if the exception can be handled sensibly - and in a lot of cases that is not possible. Therefore I believe checked exceptions should hardly be used. In almost any case it is better to rely on an unchecked exception: Then the default is that the exception is propagated if not handled - which is usually the better alternative. Note that it is still possible to handle the exception if needed.
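A small sketch of that default (all names are made up): the caller of load needs no exception handling code at all, yet can still catch the exception where a fallback actually makes sense:

    public class UncheckedDefault {
        // throws an unchecked exception - callers may simply let it propagate
        static String load(String key) {
            throw new IllegalStateException("no value for " + key);
        }

        // handling is still possible where it actually makes sense
        static String loadOrDefault(String key, String fallback) {
            try {
                return load(key);
            } catch (IllegalStateException e) {
                return fallback;
            }
        }

        public static void main(String[] args) {
            System.out.println(loadOrDefault("greeting", "hello"));
        }
    }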
I think relying more on unchecked exceptions is primarily a matter of courage. A lot of projects and libraries in the Java space use checked exceptions. They are without a doubt overused - see JDBC's SQLException for example: It is a checked exception but can hardly ever be handled sensibly. Not following these bad examples truly takes courage. Maybe introducing checked exceptions is even the greatest mistake in Java's language design.
Labels: Exceptions, Java
JAX Preview
This year's JAX will see some interesting sessions that I would like to highlight:
- The New School Enterprise IT Day will show how new technologies and business challenges will change enterprise IT. I am quite happy that this has been added to the JAX schedule - because I believe there will be a huge shift in this area in the next few years.
- The Cloud Computing Day will show the latest and greatest in the cloud. Several topics - such as the different Java PaaS alternatives, but also IaaS - will be explained in detail.
- I have done several Code Retreats at adesso AG and always found them to be a great experience for all involved. Therefore I am pleased that I can do a Code Retreat at JAX this year.
- And of course the Advanced Spring Powerworkshop will take place - it is a unique opportunity to dive deeper into the framework.
Some of my colleagues are also presenting. For example, Alexander Frommelt will talk about IT landscapes and also about Portals and whether Portlets are really a good fit for them. Halil-Cem Gürsoy will talk about Google App Engine.
So I am really looking forward to the event - and would be glad to meet you there!
Common Misconceptions: The Waterfall Model
I think the Waterfall Model is the result of a big misunderstanding - probably one of the worst in our industry.
Look at Royce's original paper (a PDF can be found here). You will notice that the paper starts with the separation of different activities such as analysis and coding. To me that sounds like an attempt to actually define basic software engineering activities instead of just unstructured hacking. The paper then discusses the different phases a project might go through and shows a figure pretty much like the Waterfall Model we are used to. No surprises so far.
But then the fun starts: The third figure already shows that the steps are not necessarily performed in order. The text says:
... as each step progresses and the design is further detailed, there is an iteration with the preceding and succeeding steps but rarely with the more remote steps in the sequence.
Let me repeat: The original Waterfall paper says that you might need to go back to previous steps, even remote ones. It even uses the term "iteration".
It goes on to discuss that once the system runs in production, you might learn that it does not perform well enough. That leads to major problems - and you will probably go back to the analysis. You might call it an iterative approach - even though it is probably not voluntary.
Even better: The paper suggests:
If the computer program in question is being developed for the first time, arrange matters so that the version finally delivered to the customer for operational deployment is actually the second version in so far as critical design / operations areas are concerned.
So essentially you should do at least two iterations - the first version will not get it right. Another hint at an iterative process.
And the paper even suggests involving the customer - probably one of the most important points in Agile practices.
Of course the paper includes sections that are quite different from the Agile school - such as the focus on documentation. But it is from 1970 - and the author specialized in systems for spacecraft. Those still rely a lot on documentation even today because of the extreme risks involved.
However, the bottom line is that the original Waterfall paper does not advocate what is now considered the Waterfall Model. It does not require going through the steps in strict order, and it even mentions that the first release will not be a good solution. Quite the contrary: It talks about iterations - very limited ones, of course, but they hint at the direction Agile and iterative processes took later on.
I am still confused by how the industry was able to misunderstand this paper, and I wonder how much damage that did. Even today people still talk about Waterfall. I think everyone working with software processes should read this paper. I also suggest using the term "Misunderstood Waterfall Model" when discussing a model that prescribes going through the steps in strict order. Because that model is just a misunderstanding - it is not what Royce described.
Oh, and next time someone talks about the Waterfall Model - don't forget to ask them whether they have actually read the original paper about it...