Open sourcing Remixthem, my first Android app

10 years ago, the iPhone redefined what a mobile phone could be. But because I was an open source enthusiast, invested in the Google ecosystem and knew Java, I bought an HTC Magic, the second Android phone on the market. At the time, I was an intern in Paris working on 3D software, and I had some free time in the evenings. This is when I decided to dive into Android development.

I downloaded the Android SDK, worked through some samples and read the docs, which, from what I remember, were quite good. I learnt about a few fundamental concepts of the Android OS: Intents, Activities, resources (and alternative resources), UI layouts, and Drawables.

At the same time, Google launched the second Android Developer Challenge, promising large cash prizes to the winners of each category. This was, in my opinion, a great way to bootstrap the Android app ecosystem, and it was for me an ideal target to get started on a real app.

The app

I had previously used Photoshop (or, more likely, GIMP) to blend two faces together for fun. This was quite tedious to do by hand, and I always believed the process could be automated. “Remixthem” was born. The purpose was simple: the user would snap pictures of two faces and the app would blend them into one. I later realized that it was also fun to edit the features of a single face, so I added this mode.

At the time, there were not a lot of resources online and GitHub had barely launched; I remember using Google Code Search to look for relevant examples inside the source code of Android itself. The app uses the built-in face detection API to get the location of the eyes in both pictures. It then uses alpha masks to extract these features and blend them together. The user can also edit each part manually. I had fun drawing the graphics; in particular, I remember how awful the Android icon guidelines were at the time (a very strange 3D perspective).
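
For the curious, here is a minimal sketch of what locating the eyes looks like with the built-in face detection of that era (android.media.FaceDetector); the class name and file path are just illustrative, not the actual Remixthem code:

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.PointF;
    import android.media.FaceDetector;

    public class EyeLocator {
        // Returns the point between the two eyes, or null if no face is found.
        public static PointF findEyesMidPoint(String path) {
            // FaceDetector only works on RGB_565 bitmaps.
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.RGB_565;
            Bitmap bitmap = BitmapFactory.decodeFile(path, options);

            FaceDetector detector =
                    new FaceDetector(bitmap.getWidth(), bitmap.getHeight(), 1);
            FaceDetector.Face[] faces = new FaceDetector.Face[1];
            if (detector.findFaces(bitmap, faces) == 0) {
                return null;
            }

            PointF midPoint = new PointF();
            faces[0].getMidPoint(midPoint);       // center point between the eyes
            float eyesDistance = faces[0].eyesDistance(); // useful to scale the alpha masks
            return midPoint;
        }
    }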

Technically, I learnt a lot. Of course, I learnt about Android application development, but also about the Java programming language and generic image manipulation techniques.

Mistakes

I also learnt a lot about what it takes to make a “product” from start to finish. In retrospect, here are the mistakes I made with Remixthem. I realized these quite early after launching the app, but never found the time (or maybe the motivation) to fix them.

  • Bad UX:
    • Overall, I think I simply did not do enough user testing. I just gave the app to two or three friends and asked for feedback.
    • Forcing users to press the device’s physical “menu button” to access actions hurt the discoverability of these features. I remember that somebody explicitly told me that the actions were not discoverable, but I discarded this feedback because the menu button was, at the time, an “officially recommended pattern for Android apps”.
    • The flow of screens also needed improvement: instead of landing the user on a main view, I could have opted for a more direct experience, which brings me to the next point.
    • The value statement was not clear: using iconography to convey the purpose of the app did not help users understand what it was really doing. Users had to go through multiple steps to get a result; instead, I could have conveyed the value statement with an example using familiar faces.
  • No marketing or growth strategy: I built the app, published it and … waited :). I realized that just “publishing an app” does not mean users will discover it. To raise awareness of the app, I could have shared more about the creation process on developer forums, presented it at local meetups, or sought to be featured on blogs or websites (at the time, there were very few Android apps). To make it grow, I could have pushed users to share their results on social media (with a “Remixthem” watermark) or integrated better with the Facebook API.
  • Branding and verticals: Instead of “allowing users to remix faces”, I could have launched apps based on the same engine but addressing particular verticals, for example: “baby’s face”, “doll face”, “ugly face”, or simply “adding hats and mustaches”…
  • iOS version: At the time, Android had low market penetration; if my goal was adoption among mobile users, an iOS version would have had a larger potential user base.
  • Not solving a big problem: Overall, the app was fun, but not really solving a user problem. Everything still had to be invented on mobile at the time; I could have picked an idea that people actually needed 🙂

10 years later, many apps implement this feature, and in a very impressive way: Snapchat does it in real time, for example.

Get it

The code is very old, but there is no reason to keep it private. Find the source code on GitHub.

The app is published (without any guarantee) on Google Play.

Climb Tracker for Android and Android Wear

Here is how I used Android Wear and Firebase to build an indoor climb tracker from scratch.

Install it on Google Play, read the code on GitHub.

I learnt a few things from a first prototype I did:

  • I do not want to use my phone in a climbing gym,
  • I should not expect to have connectivity in a climbing gym,
  • I care more about the grades of the routes I complete than the routes themselves,
  • The gym’s routes change every few months, too quickly to maintain a database.


So I decided to simplify the app to its bare minimum by focusing on this scenario:

From my watch, I log the grade of the routes I complete.

After the session, I can see a list of my climbs on my phone.

Yes, the main scenario uses a wearable device: climbing with a smartwatch is not a problem; it’s always available and does not require manipulating a phone.


Also, it’s designed offline-first: the watch does not require a phone to be nearby, or any internet connection. Once it gets close to its phone, data is transferred from the watch to the phone (once again, no internet required). The data is then stored locally on the phone and, when connectivity is available, synced to a server.

I then added the ability to add a climb from a phone, as I suppose that the intersection between climbers and Android phone owners is bigger than between climbers and Android Wear owners 🙂

The grading system can be selected from the Settings menu and the user’s position is recorded when saving the climb, for potential future use.

The Climb Tracker technical architecture

Technically, it is a native Android app, mostly because I needed to use the Wear SDK, but also because I wanted a reliable app experience and did not want to struggle to imitate a native look and feel using web technologies. Data is transmitted from the wearable to the phone using the Android Wear DataApi.
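
As a rough sketch, pushing a climb from the watch with the DataApi looks roughly like this (the data path and payload fields are made up for illustration, not necessarily the app’s real schema); Google Play services then delivers the data item to the phone whenever the two devices are connected:

    import android.content.Context;
    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.wearable.DataMap;
    import com.google.android.gms.wearable.PutDataMapRequest;
    import com.google.android.gms.wearable.Wearable;

    public class ClimbSender {
        // Call from a background thread on the watch.
        public static void sendClimb(Context context, String grade) {
            GoogleApiClient client = new GoogleApiClient.Builder(context)
                    .addApi(Wearable.API)
                    .build();
            client.blockingConnect();

            // One data item per climb; the phone side receives it in
            // onDataChanged() of a WearableListenerService once in range.
            PutDataMapRequest request =
                    PutDataMapRequest.create("/climbs/" + System.currentTimeMillis());
            DataMap map = request.getDataMap();
            map.putString("grade", grade);
            map.putLong("timestamp", System.currentTimeMillis());

            Wearable.DataApi.putDataItem(client, request.asPutDataRequest());
            client.disconnect();
        }
    }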

The rest of the app uses the Firebase SDK with offline mode enabled: Firebase decides by itself when to sync the data with its server. I did not write a single line of data sync or server-side code. And I loved it.
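
With the Firebase SDK of that era, enabling this behavior is essentially a one-liner; here is a minimal sketch (the URL and data layout are placeholders, not the app’s real structure):

    import java.util.HashMap;
    import java.util.Map;
    import com.firebase.client.Firebase;

    public class ClimbStore {
        public static void init(android.content.Context context) {
            // Must run once, before any other Firebase usage
            // (typically in the Application subclass).
            Firebase.setAndroidContext(context);
            Firebase.getDefaultConfig().setPersistenceEnabled(true);
        }

        public static void saveClimb(String grade) {
            // The write lands in the local cache immediately and is synced
            // to the server whenever connectivity is available.
            Firebase climbsRef = new Firebase("https://<your-app>.firebaseio.com/climbs");
            Map<String, Object> climb = new HashMap<>();
            climb.put("grade", grade);
            climb.put("timestamp", System.currentTimeMillis());
            climbsRef.push().setValue(climb);
        }
    }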

The app follows the latest permissions guidelines: it asks for them when the application opens and is flexible regarding the location permission. If it is not granted, the location is not recorded but the app keeps working.
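
A minimal sketch of that pattern with the support library of the time (the request code and activity name are illustrative):

    import android.Manifest;
    import android.content.pm.PackageManager;
    import android.support.v4.app.ActivityCompat;
    import android.support.v4.content.ContextCompat;
    import android.support.v7.app.AppCompatActivity;

    public class MainActivity extends AppCompatActivity {
        private static final int REQUEST_LOCATION = 1;

        // Called from onCreate(): ask for the location permission up front.
        private void askForLocationIfNeeded() {
            if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
                    != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(this,
                        new String[]{Manifest.permission.ACCESS_FINE_LOCATION}, REQUEST_LOCATION);
            }
        }

        @Override
        public void onRequestPermissionsResult(int requestCode, String[] permissions,
                                               int[] grantResults) {
            super.onRequestPermissionsResult(requestCode, permissions, grantResults);
            // If the permission is denied, simply skip recording the location:
            // the rest of the app keeps working.
        }
    }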

Get it on Google Play
The app is available on Google Play. It is open source and published on GitHub. Of course, I accept suggestions, bugfixes and pull requests.

For example, would you like to translate it into your language?

Cloud cup: a multiplayer set of mini games for web and Android

I’m glad to present the work of a week-long hackathon I did last November with three other Googlers.

For a long time, I had been toying with the idea of a real-time game using phones as controllers and a big screen as the main game screen. You may recall a previous blog post about a first prototype. A few years later, I was able to pitch the idea at an internal hackathon and gather a team around the concept. While I initially wanted to build a dancing game, I realized that this had already been done, so we decided to go towards mini games, an idea very compatible with a hackathon timeframe.

Realizing after a day that we would not be able to build both a reliable, scalable real-time architecture and a fun game in just five days using regular backend technologies, we decided to focus on the game itself and to use Firebase to handle the real-time and backend parts of our system. It also allowed us to get familiar with this very interesting technology. I had already used Parse in the past; while they both fall into the mobile Backend as a Service (MBaaS) category, I noticed with pleasure that Firebase performed as well as Parse for the regular MBaaS features (and even better to my taste, by providing an Angular SDK), but that it also provides impressive real-time capabilities.

After a single day, we had a functional prototype of our game: Android phones were interacting in real time with an Angular application in a “Shake” game. The next three days were spent creating other mini games and streamlining the game mechanics.
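
To give an idea of how little code the real-time part can require on the phone side, here is a hedged sketch of one way a controller phone could push shake events into Firebase for the Angular big screen to observe live; the paths and field names are made up for illustration, not the actual game’s schema:

    import com.firebase.client.DataSnapshot;
    import com.firebase.client.Firebase;
    import com.firebase.client.FirebaseError;
    import com.firebase.client.MutableData;
    import com.firebase.client.Transaction;

    public class ShakeReporter {
        private final Firebase scoreRef;

        public ShakeReporter(String gameId, String playerId) {
            scoreRef = new Firebase("https://<your-app>.firebaseio.com/games/"
                    + gameId + "/players/" + playerId + "/score");
        }

        // Called every time the accelerometer detects a shake:
        // atomically increment the player's score.
        public void onShakeDetected() {
            scoreRef.runTransaction(new Transaction.Handler() {
                @Override
                public Transaction.Result doTransaction(MutableData currentData) {
                    Long current = currentData.getValue(Long.class);
                    currentData.setValue(current == null ? 1 : current + 1);
                    return Transaction.success(currentData);
                }

                @Override
                public void onComplete(FirebaseError error, boolean committed, DataSnapshot snapshot) {
                    // Nothing to do: the web client picks up the new value automatically.
                }
            });
        }
    }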

And we won the Fun category!

While the hackathon was internal, it was not a problem at all for Google to let us open source and release our work to the public. You can find the sources on GitHub: Android and Web. Firebase showcased the project on its official blog.

Try it now: Install the Android app, and visit cloudcup.firebaseapp.com to start a game.

I hope you like it.

Beansight is now open source

Four years ago, I started working with Cyril, Guillaume and Jean-Baptiste on Beansight, a project that became a long adventure.


The website (which we host at www.beansight.com) and associated mobile apps allow users to create predictions and vote on others’ predictions. A computation then extracts from all the votes a probability percentage for a given event.
The website features all the mechanisms of a social website: registration, login, user profiles, followers/following, content creation, comments, moderation tools, an administration dashboard, an API, and i18n.

Main page when logged into Beansight

All our code and design are now available under the open source Apache licence. That means anyone is free to use it to run a similar website or to build upon it. Get it from GitHub.

The technological architecture is quite standard: a server generates web pages using an MVC framework, data is stored in a relational SQL database, and user-generated images are stored on the file system. Some simple client-side JavaScript enriches these generated pages.

We built Beansight using the great Play! Framework. It turned out that Play! was a really good choice for our architecture and project. At that time, Play! was a Java web framework that got rid of the traditional Java web stack to focus on a simple MVC architecture, inspired by Rails and other modern web frameworks, and favored convention over configuration.
It was a real pleasure working with this language in a framework so well designed for websites like ours.
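
To give a flavor of that style (with illustrative names, not the actual Beansight classes), a Play! 1.x controller is just a set of static methods, and render() picks the matching template by convention:

    package controllers;

    import java.util.List;
    import models.Insight;            // a hypothetical JPA entity extending play.db.jpa.Model
    import play.mvc.Controller;

    public class Insights extends Controller {

        // GET /insights -> renders app/views/Insights/list.html
        public static void list() {
            List<Insight> insights = Insight.find("order by creationDate desc").fetch(20);
            render(insights);
        }

        // Used by both the website and the JSON API
        public static void agree(Long id) {
            Insight insight = Insight.findById(id);
            notFoundIfNull(insight);
            // ... record the vote on the model ...
            renderJSON(insight);
        }
    }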

Beansight Android application

The mobile applications are native for iOS and Android, and we used jQuery Mobile for the mobile web version.
We decided to keep a very simple UI in our native mobile apps. We were one of the first apps to use ViewPager on Android, for example.

We realised quite soon that we needed to build an API, mostly for these mobile apps. Our MVC architecture allowed us to create one easily. Ideally, I think the main website should have used it too (either client or server side). In any case, our API code and website code shared a lot, thanks to our rich object-oriented models. You can find the API documentation in a GitHub wiki.

We used different hosts. We first started with PlayApps.net, the Platform as a Service offering from the builders of the Play! Framework. We never encountered any issue with it and were very happy not to have to bother with system administration. However, we had to move when the service closed. Beansight then ran on Gandi hosting, where we had to take care of administering our server ourselves, which added some pain to the maintenance of Beansight.
Finally, in order to reduce costs and make it easier to set up as part of the open source process, we made sure it is compatible with the Heroku PaaS.
Today, Beansight can easily be run on any Linux server or pushed to Heroku with any MySQL database (beansight.com is now using ClearDB, for example).

I hope this code will be useful to somebody. I would be pleased to see you starting a new community from it, building something on top of it, or using it as part of another project.
While a few technical improvements could be made, I think it is still quite reliable, with pretty well documented source code and a good architecture.

Get all the code on GitHub.

Bringing video support to Phonegap Android

Phonegap for Android had serious issues with inline videos: the HTML <video> tag was not supported at all on Android versions below 4. (On Android 4.x, videos require the View to be hardware accelerated.)

On behalf of Joshfire, I worked on the main Cordova Android source code, adding elements from the original Android browser. In the end, clicking on a video in a Phonegap application starts a fullscreen video player view. Hitting the “back” button returns to the app. This is far from perfect, but better than nothing.

After submitting my pull request, I received warm and polite feedback from Simon MacDonald of the Phonegap team; he helped test my work and added the final touches before accepting the code into his branch.

Today, the feature has shipped in Phonegap 2.2.0. I received great feedback from both Phonegap creators and users. That’s very motivating.

See the final commit in Phonegap’s source code.

You just know the time

Are you always able to tell the time after having checked your smartphone?
Very often, I’m not. Yet the time is displayed very large on the lock screen and is always visible at the top.

When I’m using my smartphone and I don’t explicitly want to check the time, I don’t pay any attention to the clock. It’s similar to the “banner blindness” phenomenon.

What if the time could be imprinted in my mind without any effort?

I wanted to experiment with this idea. In a short time, I created a live wallpaper for Android whose color changes over the course of the hour.

The wallpaper color changes during the hour.

I’m sure that after having checked the smartphone, we are able to tell what color the background was. And because a given color corresponds to a given number of minutes, we can approximately guess the time. The hue of the background color depends on the minutes: 0 min is red (0°), 30 min is turquoise (180°)… I think the hour itself is not important; most of the time, we are able to guess the hour anyway.
Of course, it requires some learning time, a time for our brain to learn the bijection between colors and minutes. This experiment will test whether that is easily possible.
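
Concretely, the mapping is a single formula; here is a minimal sketch of it in Java (the actual wallpaper may pick saturation and brightness differently):

    import android.graphics.Color;
    import java.util.Calendar;

    public class MinuteColor {
        // Maps the current minute (0-59) onto the HSV hue wheel (0-360 degrees):
        // 0 min -> red (0 degrees), 30 min -> turquoise (180 degrees), and so on.
        public static int colorForNow() {
            Calendar now = Calendar.getInstance();
            float minutes = now.get(Calendar.MINUTE) + now.get(Calendar.SECOND) / 60f;
            float hue = (minutes / 60f) * 360f;
            return Color.HSVToColor(new float[]{hue, 1f, 1f}); // full saturation and brightness
        }
    }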

The HSV color wheel is mapped on a clock. Every color has its minute.

Give it a try by downloading “Color Clock Wallpaper” from the Android Market.
And it’s open-source, bitch! You can grab the code on my GitHub and come up with your own additions.

I will start using it every day and see if this experiment works. It may need to be tweaked; we will see.
I also have some ideas for additional features:

  • Add a texture to the wallpaper to make it prettier.
  • I noticed that I like to tap on the screen when I have the “Nexus” wallpaper, because I know it will do something entertaining. What if that something could also tell me the time?
  • Let the user choose the color gradient?
And you, what do you think? Did you try it? Leave your comments here.