Open sourcing Remixthem, my first Android app

10 years ago, the iPhone redefined what a mobile phone could be. But because I was an open source enthusiast, invested in the Google ecosystem and knew Java, I bought an HTC Magic, the second Android phone on the market. At the time I was an intern in Paris working on 3D software, and I had some free time in the evenings. That is when I decided to dive into Android development.

I downloaded the Android SDK, ran some samples and read the docs, which, from what I remember, were quite good. I learnt a few fundamental concepts of the Android OS: Intents, Activities, resources (and alternative resources), UI layouts, Drawables…

At the same time, Google launched the second Android Developer Challenge, promising large cash prizes to the winners of each category. This was, in my opinion, a great way to bootstrap the Android app ecosystem, and it was for me an ideal target to get started on a real app.

The app

I previously used Photoshop (or more likely GIMP) to blend two faces together for fun. This was quite tedious to do by hand, and I always believed the process could be automated. “Remixthem” was born. The purpose was simple: the user would snap pictures of two faces and the app would blend them into one. I later realized that it was also fun to edit the features of a single face, so I added this mode.

At the time, there were not a lot of resources online and GitHub had barely launched; I remember using Google Code Search to look for relevant examples inside the source code of Android itself. The app uses the built-in face detection API to get the location of the eyes in both pictures. It then uses alpha masks to extract these features and blend them. The user can also edit each part manually. I had fun drawing the graphics; in particular, I remember how awful the guidelines for Android icons were at the time (a very strange 3D perspective).
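The mask-based blending step can be sketched in a few lines of plain Java. This is only a simplified illustration, not the app's actual code: the app works on Android Bitmaps, while this sketch uses `BufferedImage` and treats the mask's blue channel as the blend weight.

```java
import java.awt.image.BufferedImage;

public class FaceBlend {

    // Blend two same-sized images using a grayscale alpha mask:
    // where the mask is white, pixels come from 'b'; where black, from 'a'.
    static BufferedImage blend(BufferedImage a, BufferedImage b, BufferedImage mask) {
        int w = a.getWidth(), h = a.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int weight = mask.getRGB(x, y) & 0xFF; // blue channel as blend weight
                out.setRGB(x, y, mix(a.getRGB(x, y), b.getRGB(x, y), weight));
            }
        }
        return out;
    }

    // Per-channel linear interpolation: weight 0 keeps 'p', weight 255 keeps 'q'.
    static int mix(int p, int q, int weight) {
        int r = ((p >> 16 & 0xFF) * (255 - weight) + (q >> 16 & 0xFF) * weight) / 255;
        int g = ((p >> 8 & 0xFF) * (255 - weight) + (q >> 8 & 0xFF) * weight) / 255;
        int b = ((p & 0xFF) * (255 - weight) + (q & 0xFF) * weight) / 255;
        return r << 16 | g << 8 | b;
    }
}
```

With a soft-edged mask drawn around the eyes or mouth, this interpolation is what makes a feature from one face fade smoothly into the other.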

Technically, I learnt a lot. Of course, I learnt about Android application development, but also about the Java programming language and generic image manipulation techniques.


I also learnt a lot about what it takes to make a “product” from start to finish. In retrospect, here are the mistakes I made with Remixthem. I noticed these quite early after launching the app, but never found the time (or maybe the motivation) to fix them.

  • Bad UX:
    • Overall, I think I simply did not do enough user testing: I just gave the app to 2 or 3 friends and asked for feedback.
    • Forcing users to press the device’s physical “menu” button to access actions hurt the discoverability of these features. I remember that somebody explicitly told me the actions were not discoverable, but I discarded this feedback because it was, at the time, an “officially recommended pattern for Android apps”.
    • The flow of screens also needed improvement: instead of landing the user on a main view, I could have opted for a more direct experience, which brings me to the next point.
    • The value statement was not clear: using iconography to convey the purpose of the app did not help users understand what it was really doing. Users had to go through multiple steps to get a result; instead, I could have conveyed the value statement with an example using familiar faces.
  • No marketing or growth strategy: I built the app, published it and… waited :). I realized that just “publishing an app” does not mean users will discover it. To raise awareness of the app, I could have shared more about the creation process on developer forums, presented it at local meetups, or sought to be featured on blogs and websites (at the time, there were very few Android apps). To make it grow, I could have pushed users to share results on social media (with a “Remixthem” watermark) or integrated better with the Facebook API.
  • Branding and verticals: Instead of “allowing users to remix faces”, I could have launched apps based on the same engine but addressing particular verticals, for example: “baby face”, “doll face”, “ugly face”, or simply “adding hats and mustaches”…
  • iOS version: At the time, Android had low penetration; if my goal was adoption among mobile users, an iOS version would have had a larger potential user base.
  • Not solving a big problem: Overall, the app was fun, but it did not really solve a user problem. Everything still had to be invented on mobile at the time; I could have picked an idea that people actually needed 🙂

10 years later, many apps implement this feature, and in very impressive ways: Snapchat, for example, does it in real time.

Get it

The code is very old, but there is no reason to keep it private. Find the source code on GitHub.

The app is published (without any guarantee) on Google Play.

Setting up Stackdriver Error Reporting on Play Framework 1.4

Here is how I set up Stackdriver Error Reporting for my application running on Play Framework 1.4.

My goal was to capture any Java exception in my production application and report it to Stackdriver Error Reporting for automatic exception monitoring and alerting.

I use the very simple Stackdriver Error Reporting report API: just send error stack traces using an HTTP POST request and an API key.

After creating a project and getting an API key in the Google Cloud Console, I instrumented my Play Framework application to catch all exceptions, format them in the expected structure and POST them to Stackdriver (make sure you are using at least JDK 1.7).

Here is the code I added to my main application controller:
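In simplified form, it looks like this (a sketch, not the verbatim controller code: the project ID, API key and service name below are placeholders). The helper prints the stack trace into the JSON payload expected by the `events:report` endpoint and POSTs it; in Play 1.x, a `@Catch(Exception.class)` interceptor in the controller can route every uncaught exception to `StackdriverReporter.report(t)`.

```java
import java.io.OutputStream;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class StackdriverReporter {

    private static final String PROJECT_ID = "my-gcp-project"; // placeholder
    private static final String API_KEY = "MY_API_KEY";        // placeholder

    // Build the JSON body expected by the report API: a serviceContext
    // plus the full stack trace as the message.
    static String buildPayload(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return "{\"serviceContext\": {\"service\": \"my-play-app\"},"
                + " \"message\": \"" + escapeJson(sw.toString()) + "\"}";
    }

    static String escapeJson(String s) {
        return s.replace("\\", "\\\\")
                .replace("\"", "\\\"")
                .replace("\r", "\\r")
                .replace("\n", "\\n")
                .replace("\t", "\\t");
    }

    // POST the payload to the Error Reporting "events:report" endpoint.
    static void report(Throwable t) {
        try {
            URL url = new URL("https://clouderrorreporting.googleapis.com/v1beta1/projects/"
                    + PROJECT_ID + "/events:report?key=" + API_KEY);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write(buildPayload(t).getBytes(StandardCharsets.UTF_8));
            out.close();
            conn.getResponseCode(); // force the request; ignore the response
        } catch (Exception e) {
            // Never let error reporting itself crash the app.
        }
    }
}
```

The reporting call is deliberately wrapped in its own try/catch: a failure to reach Stackdriver should never take the application down with it.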

Shortly after deploying this code, I started receiving alerts that new errors were occurring in my production application:

Screenshot of Stackdriver Error Reporting: 2 different errors occurred in my Play Framework app in the last 6 hours. The first one happened 42 times.

I was not aware of these application errors; now I have better visibility into their impact and will be able to prioritize what to fix.

My website (Cadeaux entre nous, to organize Secret Santas) has been running on Heroku for years and has huge usage spikes around Christmas. This will help me make it more stable.

Disclaimer: I am a Product Manager at Google, working on Stackdriver.

You just know the time

Are you always able to tell the time after having checked your smartphone?
Very often, I’m not. Yet the time is displayed very large on the lock screen and is always visible at the top.

When I’m using my smartphone and don’t explicitly want to check the time, I pay no attention to the clock. It is similar to the “banner blindness” phenomenon.

What if the time could be imprinted on my mind without any effort?

I wanted to experiment around this idea. In a short time, I created a live wallpaper for Android whose color changes over the hour.

The wallpaper color changes during the hour.

I’m sure that after having checked the smartphone, we are able to tell what color the background was. And because a given color corresponds to a given number of minutes, we can approximately guess the time. The tint of the background depends on the minutes: 0 min is red (0°), 30 min is turquoise (180°)… The hour itself is not encoded: most of the time, we already know roughly what hour it is.
Of course, it requires some learning time: time for our brain to learn the bijection between colors and minutes. This experiment will test whether that is easily possible.
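The mapping itself is trivial: the minute of the hour is scaled onto the 360° hue wheel at full saturation and brightness. Here is a sketch in plain Java (the actual wallpaper uses Android’s color APIs; `java.awt.Color` stands in here for illustration, and seconds are folded in so the color shifts smoothly):

```java
import java.awt.Color;
import java.util.Calendar;

public class ColorClock {

    // Map the minute of the hour onto the HSV hue wheel:
    // 0 min -> 0° (red), 15 min -> 90°, 30 min -> 180° (turquoise), 45 min -> 270°.
    static float hueDegrees(int minute, int second) {
        float minuteOfHour = minute + second / 60f; // seconds smooth the transition
        return minuteOfHour / 60f * 360f;
    }

    // Full saturation and brightness; only the hue carries information.
    static int colorForTime(int minute, int second) {
        return Color.HSBtoRGB(hueDegrees(minute, second) / 360f, 1f, 1f);
    }

    public static void main(String[] args) {
        Calendar now = Calendar.getInstance();
        int rgb = colorForTime(now.get(Calendar.MINUTE), now.get(Calendar.SECOND));
        System.out.printf("Wallpaper color: #%06X%n", rgb & 0xFFFFFF);
    }
}
```

Because the hue is a linear function of the minute, reading the time back is just the inverse mapping: turquoise means roughly half past.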

The HSV color wheel is mapped on a clock. Every color has its minute.

Give it a try by downloading “Color Clock Wallpaper” from the Android Market.
And it’s open source, bitch! You can grab the code on my GitHub and come up with your own additions.

I will start using it every day and see if this experiment works. It may need some tweaking; we will see.
I also have ideas for additional features:

  • Add a texture to the wallpaper, to make it prettier.
  • I notice I like tapping the screen when I have the “Nexus” wallpaper, because I know it will do something entertaining. What if that something could tell me the time?
  • Let the user choose the color gradient?
And you, what do you think? Did you try it? Leave your comments here.