Open sourcing Remixthem, my first Android app
10 years ago, the iPhone redefined what a mobile phone could be. But because I was an open source enthusiast, invested in the Google ecosystem, and knew Java, I bought an HTC Magic, the second Android phone on the market. At the time, I was an intern in Paris working on 3D software, and I had some free time in the evenings. That is when I decided to dive into Android development.
I downloaded the Android SDK, tried some samples and read the docs, which, from what I remember, were quite good. I learnt about a few fundamental concepts of the Android OS: Intents, Activities, resources (and alternative resources), UI layouts, Drawables…
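For readers who never touched these APIs, here is a minimal sketch of how those pieces fit together. It is illustrative only, not code from Remixthem; the class names and the layout resource are placeholders.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;

// A minimal Activity: one screen of an app.
public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // R.layout.main is a placeholder XML layout resource; "alternative
        // resources" let the system pick a variant automatically
        // (per language, screen orientation, density…).
        setContentView(R.layout.main);
    }

    void openDocs() {
        // An implicit Intent: the system finds an Activity (here, a browser)
        // able to handle the request.
        startActivity(new Intent(Intent.ACTION_VIEW,
                Uri.parse("https://developer.android.com")));
    }
}
```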
At the same time, Google launched the second Android Developer Challenge, promising a large amount of prize money to the winners of each category. This was, in my opinion, a great way to bootstrap the Android app ecosystem, and it was an ideal target for me to get started on a real app.
The app
I had previously used Photoshop (or, more likely, GIMP) to blend two faces together for fun. Doing this by hand was quite tedious, and I always believed the process could be automated. “Remixthem” was born. The purpose was simple: the user would snap pictures of two faces, and the app would blend them into one. I later realized that it was also fun to edit the features of a single face, so I added that mode.
At the time, there were not a lot of resources online and GitHub had barely launched; I remember using Google Code Search to look for relevant examples inside the source code of Android itself. The app uses the built-in face detection API to get the location of the eyes in both pictures. It then uses alpha masks to extract these features and blend them. The user can also edit each part manually. I had fun drawing the graphics; in particular, I remember how awful the guidelines for Android icons were at the time (a very strange 3D perspective).
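To give an idea of the technique, here is a rough sketch (not the actual Remixthem code) of how the platform’s legacy FaceDetector API and a PorterDuff alpha mask can be combined. The mask bitmap, the method names and the simple paste at the end are simplifications.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PointF;
import android.graphics.PorterDuff;
import android.graphics.PorterDuffXfermode;
import android.media.FaceDetector;

public class FaceBlendSketch {

    /** Locates the point between the eyes of the first face found in the bitmap. */
    static PointF findEyesMidPoint(Bitmap source) {
        // The legacy android.media.FaceDetector API requires an RGB_565 bitmap.
        Bitmap rgb565 = source.copy(Bitmap.Config.RGB_565, false);
        FaceDetector detector =
                new FaceDetector(rgb565.getWidth(), rgb565.getHeight(), 1);
        FaceDetector.Face[] faces = new FaceDetector.Face[1];
        if (detector.findFaces(rgb565, faces) == 0) {
            return null; // no face found
        }
        PointF midPoint = new PointF();
        faces[0].getMidPoint(midPoint); // point between the eyes
        return midPoint;
    }

    /** Cuts a feature out of one face with an alpha mask, then pastes it onto another. */
    static Bitmap blend(Bitmap baseFace, Bitmap otherFace, Bitmap alphaMask) {
        // Keep only the pixels of the second face where the mask is opaque:
        // drawing the mask with DST_IN multiplies the destination by the mask's alpha.
        Bitmap feature = Bitmap.createBitmap(
                otherFace.getWidth(), otherFace.getHeight(), Bitmap.Config.ARGB_8888);
        Canvas featureCanvas = new Canvas(feature);
        featureCanvas.drawBitmap(otherFace, 0, 0, null);
        Paint maskPaint = new Paint();
        maskPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.DST_IN));
        featureCanvas.drawBitmap(alphaMask, 0, 0, maskPaint);

        // Composite the extracted feature over the first face (no alignment or
        // scaling here; a real implementation would use the eye positions).
        Bitmap result = baseFace.copy(Bitmap.Config.ARGB_8888, true);
        new Canvas(result).drawBitmap(feature, 0, 0, null);
        return result;
    }
}
```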
Technically, I learnt a lot. Of course, I learnt about Android application development, but also about the Java programming language and generic image manipulation techniques.
Mistakes
I also learnt a lot about what it takes to make a “product” from start to finish. In retrospect, here are the mistakes I made with Remixthem. I realized these quite early after launching the app, but never found the time (or maybe the motivation) to fix them.
- Bad UX:
  - Overall, I think I simply did not do enough user testing. I just gave the app to 2 or 3 friends and asked for feedback.
  - Forcing users to press the device’s physical “menu button” to access actions hurt the discoverability of these features. I remember that somebody explicitly told me that the actions were not discoverable, but I dismissed this feedback because it was, at the time, an “officially recommended pattern for Android apps”.
  - The flow of screens also needed improvement: instead of landing the user on a main view, I could have opted for a more direct experience, which brings me to the next point.
- The value statement was not clear: the fact that I used iconography to convey the purpose of the app did not help users understand what it was really doing. Users had to go through multiple steps to get a result; instead, I could have conveyed the value statement with an example using familiar faces.
- No marketing or growth strategy: I built the app, published it and … waited :). I realized that just “publishing an app” does not mean users will discover it. To raise awareness of the app, I could have shared more about the creation process on developer forums, presented it at local meetups, or sought to be featured on blogs or websites (at the time, there were very few Android apps). To make it grow, I could have pushed users to share results on social media (with a “Remixthem” watermark) or integrated better with the Facebook API.
- Branding and verticals: Instead of “allowing users to remix faces”, I could have launched apps based on the same engine but addressing particular verticals, for example: “baby’s face”, “doll face”, “ugly face”, or simply “adding hats and mustaches”…
- iOS version: At the time, Android had low market penetration; if my goal was adoption among mobile users, an iOS version would have had a larger potential user base.
- Not solving a big problem: Overall, the app was fun, but it was not really solving a user problem. Everything still had to be invented on mobile at the time; I could have picked an idea that people actually needed 🙂
10 years later, many apps implement this feature, and in a very impressive way: Snapchat, for example, does it in real time.
Get it
The code is very old, but there is no reason to keep it private. Find the source code on GitHub.
The app is published (without any guarantee) on Google Play.