Rendering Blender 3D scenes in the cloud

I created a simple app that renders a Blender 3D scene in the cloud: users can customize the displayed message by changing a URL parameter, and the app returns the 3D-rendered image. Give it a try here.

The code is a very simple Python function that invokes the open source Blender software. It uses the Blender API to dynamically change the value of a Text object. The function is then simply wrapped in a basic Flask application to respond to HTTP requests. Because Blender needs to be installed, I am using a Dockerfile that runs “apt-get install blender”.

Find the code on GitHub.

This app was created to showcase the upcoming “serverless containers” product of the Google Cloud Platform. It lets you run any container “in a serverless way”: developers are able to deploy and run any language or library they want, and only pay for the actual resources used when the container receives requests. I demoed the feature during the Cloud Next 2018 keynote and other sessions. Sign up for early access at g.co/serverlesscontainers.

Attractors

The idea

For our Save the Date, we had some visual ideas of what we wanted to achieve: something relaxing and calm, inspired by the sea. We started by picking a color palette and then decided to explore streams of particles, reminiscent of ocean currents.

See the final result here.

Using code for visual design

Anne and I developed our project using JavaScript and browser technologies (see below).

It was interesting to use code to achieve something that is visual design. Part of the end result comes from the randomness of the algorithm, like the initial positions of the attractors and of the particles, but another part comes from hand-picked parameter values (colors, sizes, number, speed…). The final result is, in my opinion, an interesting mix of pre-defined visual aesthetics and randomness.

Like any artistic tool, code can be used to achieve a certain artistic result. The drawback with most development environments is the long feedback loop: contrary to a drawing tool, where the feedback is almost instant, code requires finishing the modification before being able to visualize it.

The technology

We draw many particles on a web canvas. For each frame, we move the particles and draw their movement on top of the previous state. We do that at 60 frames per second.

The paths are never stored in memory; only the current positions of the particles and the attractors are. What you see is just the result of overlaying many, many frames. The “shadow” is faked by using a transparent sprite that we paint under each particle at each frame. This gives a nice depth effect to the end result, despite everything being in 2D.

The particles move by following the gradient of various “attractors”, like particles in a magnetic field. The position, number and impact of these attractors are generated randomly, within pre-defined boundaries.
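To give a rough idea of the technique, here is a minimal sketch of the core loop (not the actual project code; the counts, colors and constants are placeholders):

```javascript
// Minimal sketch of the technique, not the actual project code.
// Assumes a <canvas> element is present in the page.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// Attractors: random position and strength, within pre-defined boundaries.
const attractors = Array.from({ length: 5 }, () => ({
  x: Math.random() * canvas.width,
  y: Math.random() * canvas.height,
  strength: 20 + Math.random() * 50,
}));

// Particles only keep their current position and velocity; paths are never stored.
const particles = Array.from({ length: 500 }, () => ({
  x: Math.random() * canvas.width,
  y: Math.random() * canvas.height,
  vx: 0,
  vy: 0,
}));

function frame() {
  // The canvas is never cleared: each frame is painted on top of the previous state.
  for (const p of particles) {
    // Follow the field of the attractors, like particles in a magnetic field.
    for (const a of attractors) {
      const dx = a.x - p.x;
      const dy = a.y - p.y;
      const d2 = dx * dx + dy * dy + 100; // +100 avoids a division by ~zero
      p.vx += (a.strength * dx) / d2;
      p.vy += (a.strength * dy) / d2;
    }
    p.vx *= 0.99; // slight damping keeps the motion calm
    p.vy *= 0.99;
    p.x += p.vx;
    p.y += p.vy;

    // A small translucent dot stands in for the transparent "shadow" sprite.
    ctx.fillStyle = 'rgba(20, 60, 90, 0.05)';
    ctx.fillRect(p.x, p.y, 2, 2);
  }
  requestAnimationFrame(frame); // roughly 60 frames per second
}
frame();
```

Because the canvas is never cleared, the semi-transparent dots accumulate into the trails and the fake shadow.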

In addition to particles and attractors, we added the ability to draw text and to avoid certain zones. This made the codebase more complex and, in my opinion, does not lead to great results. But we wanted to use it a little bit in our Save the Date.

 

A reusable library

Later, I moved the code responsible for painting particles into a reusable JS library. Find it on GitHub and on npm. The code could be cleaned up; in particular, it assumes a global ‘config’ object, which should instead be passed as an argument when initializing the library.

Everything can be tweaked: the text to display, the colors, the number of particles, size, speed… Here are some videos of various settings:

A ‘new tab’ Chrome extension

Very recently, I used the library to create a Chrome extension that replaces your ‘new tab’ page with this particle animation (see the sources on GitHub).

Install it from the Chrome Web Store.

Here is a video of the new tab page once the extension is installed:

Scuba Diving dashboard using Google Data Studio

As a scuba diver, I log all my dives in a dive log. For most divers, this is just a simple notebook, but some years ago, I started to also enter my dives in a simple Google Sheet.

After playing a bit with charts and the “Explore” feature of Google Sheets to answer questions like “max depth” or “number of dives per region”, I decided to create a dashboard to better showcase some dive stats and visualizations. While you can put some charts and numbers on a new sheet in Google Sheets, I think it is not really designed for this. I preferred to keep the Sheet as a database and to pull the data from somewhere else.

Using Google Data Studio, I could add my dive log sheet as a data source and then put interesting visualizations and stats on a new canvas. Here is what I think makes Data Studio a great tool:

  1. It’s free. I would not have paid to build a hobby dashboard.
  2. It supports custom themes and colors.
  3. Filters: I added interactivity by allowing visitors to filter by date, location, dive center…
  4. The tool makes it very easy to position things on a 2D canvas.

See my dive dashboard and log built on Google Data Studio

The tool has its limits and will not replace a real dive log. Building a better dive log and dashboard would definitely require some code (a long time ago I started, but never finished).

Extracting all Go regular expressions found on GitHub

Sylvain Zimmer sent a proposal to the Go programming language project for an optimization of Go’s regular expression processing. One of the first comments from the community was to ask for actual proof that this optimization would be useful in real life, by analyzing regular expressions in a large corpus of real Go code.

Reading this, I knew this was the perfect job for Google BigQuery and the huge GitHub public dataset.

BigQuery is my favorite data analysis tool. Almost every day at work, I query terabytes of data using SQL queries, in seconds. The GitHub public dataset contains a snapshot of all GitHub files. That’s right, all lines of all of GitHub’s files are available for direct querying. You only pay for the amount of processed data, and BigQuery has a generous free tier (the first 1 TB of data processed per month is free of charge).

Using a BigQuery query over this dataset, I extracted a list of all constant Go regular expressions on GitHub. The query took less than a minute to process ~2.2 TB of data.
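The idea: join the dataset’s file listing with the file contents, keep only `.go` files, and extract the literal argument of regexp.Compile / regexp.MustCompile calls. Here is a sketch along those lines (the table names come from the public `bigquery-public-data.github_repos` dataset; the extraction pattern and the Node.js wrapper are simplified approximations to keep the example self-contained, not necessarily exactly what I ran):

```javascript
// Sketch only: list constant regular expressions found in Go files on GitHub.
// Table names come from the public `bigquery-public-data.github_repos` dataset;
// the extraction pattern is a simplified approximation (real Go code often uses
// backquoted raw strings, which a fuller pattern would also match).
// Note: scanning the `contents` table processes on the order of 2 TB of data.
const { BigQuery } = require('@google-cloud/bigquery');

const query = `
  SELECT REGEXP_EXTRACT_ALL(
    c.content,
    r'regexp\\.(?:Must)?Compile\\("([^"]+)"'
  ) AS regexes
  FROM \`bigquery-public-data.github_repos.contents\` AS c
  JOIN \`bigquery-public-data.github_repos.files\` AS f
    ON c.id = f.id
  WHERE f.path LIKE '%.go'
    AND c.content LIKE '%regexp.%Compile(%'
`;

async function main() {
  const bigquery = new BigQuery();
  const [rows] = await bigquery.query({ query, useLegacySql: false });
  for (const row of rows) {
    for (const regex of row.regexes) {
      console.log(regex);
    }
  }
}

main();
```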

Sylvain then ran his optimization on these and concluded that his change would benefit more than 30% of the regular expressions in this list, which hopefully should make the case for his change. So now, I am waiting for his pull request to make it into the main Go repository 🙂

Generating a name tag sheet from a list of names

I needed to generate a sheet of images featuring names from a list.

So I wrote a short script that takes strings from a JavaScript array and creates a sheet, ready to be printed. I relied on svg.js to programmatically generate the SVG file.

See and get the code in this GitHub gist
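To give an idea of the approach, here is a minimal sketch (the real code is in the gist; the names, tag size and background path below are just placeholders):

```javascript
// Minimal sketch using svg.js (v3); names, tag size and background path are placeholders.
// Load svg.js via a <script> tag (global SVG) or import it from npm as below.
import { SVG } from '@svgdotjs/svg.js';

const names = ['Alice', 'Bob', 'Charlie']; // your list of names
const tagWidth = 250;
const tagHeight = 120;
const columns = 3;
const rows = Math.ceil(names.length / columns);

// One SVG canvas holding the whole printable sheet.
const draw = SVG().addTo('body').size(columns * tagWidth, rows * tagHeight);

names.forEach((name, i) => {
  const x = (i % columns) * tagWidth;
  const y = Math.floor(i / columns) * tagHeight;

  // Background image behind each name tag (from the 'background' folder).
  draw.image('background/tag.png').size(tagWidth, tagHeight).move(x, y);

  // The name, roughly centered on the tag.
  draw.text(name)
    .font({ size: 24, family: 'Helvetica', anchor: 'middle' })
    .move(x + tagWidth / 2, y + tagHeight / 2);
});
```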

Feel free to use it:

  • Add your background images into a ‘background’ folder.
  • Copy the code into an .html file next to this folder, and start a web server there.
  • Convert your list of names into a JavaScript array. There are many ways to do this; I copied my list from a sheet into the Atom text editor and then used multi-line editing to add the necessary characters.
  • Change parameters to make it yours.

Trying to confuse Google’s Vision algorithms with dogs and muffins

When I saw this set of very similar pictures of dogs and muffins (which comes from @teenybiscuit’s tweets), I had only one question: how would Google’s Cloud Vision API perform on them?

At a quick glance, it’s not obvious even for a human, so how does the machine perform? It turns out it does pretty well; check the results in this gallery:

(also find the album on imgur)

For almost every set, there is one tile that is completely wrong, but the rest are at least in the right category. Overall, I am really surprised by how well it performs.

You can try it yourself online with your own images here, and of course find the code on GitHub.

Technically, it is built entirely in the browser; there is no server-side component, except what’s behind the API of course:

  • Images are loaded from presets or via the browser’s File API.
  • Each tile is extracted into its own image and converted to base64.
  • All of these are sent at once to the Google Cloud Vision API, asking for label detection results (this is what matters to us here, even if the API can do much more: face detection, OCR, landmark detection…).
  • Only the label with the highest score is kept from the results and printed back onto the main canvas.
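As a rough sketch, the call to the Vision API looks something like this (assuming an API key; the tile extraction and the drawing back onto the canvas are omitted):

```javascript
// Sketch of the Vision API call: send base64-encoded tiles, keep the top label per tile.
// API_KEY is a placeholder; tile extraction and canvas drawing are omitted here.
const API_KEY = 'YOUR_API_KEY';

async function labelTiles(base64Tiles) {
  const body = {
    requests: base64Tiles.map((content) => ({
      image: { content }, // raw base64 data, without the "data:image/..." prefix
      features: [{ type: 'LABEL_DETECTION', maxResults: 1 }],
    })),
  };

  const response = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    }
  );

  const { responses } = await response.json();
  // Keep only the highest-scoring label for each tile.
  return responses.map((r) => r.labelAnnotations?.[0]?.description ?? 'unknown');
}
```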