Producing data-driven holograms with Python/Blender/FBX

Here’s the situation I found myself in: Annalect Labs acquired a HoloLens for producing holographic data visualizations, and here I am, a lowly JavaScript developer wishing he could just make holograms with CSS!

Reviewing the options for game development with Unity and Visual Studio was intimidating, to say the least. It looked like I would need to learn C# while adjusting to the Unity → export to Visual Studio → compile to HoloLens toolchain. Worse, while I could follow along with the Unity introduction to make a ball roll around a plane, I wasn’t able to export even this basic demo to HoloLens after hours of re-installing various versions of software.

I started wondering if Blender might let me produce animations for HoloLens. Blender allows Python scripting, which could let my Python-fluent data scientist coworkers jump into producing holograms much faster than writing our own video game with no game engine experience.

Now, there are many guides on how to design things in Blender, import the meshes and actions into Unity, and then export the project to Visual Studio for compilation. I hate the idea of that toolchain, with its incompatibilities and export quirks at each step of the way. I want to write a Python script that produces an animation for HoloLens.

With a little googling, I discovered that HoloLens can indeed display animations in the FBX file format, which Blender is happy to export, so I started playing around with a script to make some cubes dance around.
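A minimal sketch of that kind of script (placeholder sizes and paths, assuming Blender 2.8+ and its bundled Python) looks something like this:

import bpy
import os
import random

# Start from an empty scene
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Add a 5x5 grid of cubes
cubes = []
for x in range(-2, 3):
    for y in range(-2, 3):
        bpy.ops.mesh.primitive_cube_add(location=(x * 3, y * 3, 0))
        cubes.append(bpy.context.object)

# Keyframe each cube jumping to a random height and back down
for frame in (1, 30, 60):
    for cube in cubes:
        cube.location.z = random.uniform(2, 6) if frame == 30 else 0
        cube.keyframe_insert(data_path="location", frame=frame)

bpy.context.scene.frame_end = 60

# Export an FBX that the HoloLens 3D Viewer can open
bpy.ops.export_scene.fbx(filepath=os.path.abspath("dancing_cubes.fbx"), bake_anim=True)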

The FBX file this produces can be opened directly in the HoloLens 3D Viewer app (I just move my FBX files into a folder synced to OneDrive to make it easy to access them on HoloLens).

https://skfb.ly/6qPED

How about something a little more interesting: arranging a subset of the collection in a line. From here, I hope you can use your imagination as to how you could tie this in with data retrieval and visualization.

For Annalect Labs’ minimum viable hologram, we’re interested in visualizing populations and talking about subsets. So I’ll adjust the previous code to use meshes representing people. There are a million ways to make meshes, but there’s a tool perfectly suited for my task:

http://www.makehuman.org/

I won’t say much about MakeHuman because I think it’s pretty intuitive. You can customize everything about your character, use some pre-loaded outfits and poses, and then export the mesh as a .obj. To make it easy on myself, I saved these .obj files (man.obj, woman.obj) in the same directory where I’ll output my animations. Now, these .objs are made up of multiple meshes and are fairly high resolution, so there’s a fair amount of code for modifying them and reducing the poly count to an acceptable file size for the default FBX viewer on HoloLens, but the result is a lot of fun:

The main flow of the program goes like this (a rough sketch of the script follows the list):

1. Import an object, which selects all 4 meshes of that object, and join them.
2. Decimate the mesh so it isn’t so high resolution.
3. Create copies of each mesh.
4. Create an array of x,y coordinates drawn from a normal distribution.
5. Loop through an array of all the objects and set each one’s location to the next random coordinate.
6. Select the first 5 objects in the shuffled array and set their keyframes to animate them as they move into a line elevated from the group.
7. Do the same with the next 5 objects, but on the other side of the plane.
8. Save a .blend file and a .fbx animation to move to HoloLens.
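Here’s a rough sketch of that flow, not the full script; the decimate ratio, copy counts, coordinates, and file paths are placeholders, and the bpy calls assume Blender 2.8x/2.9x:

import bpy
import os
import random

COPIES_PER_MESH = 9   # placeholder count: 10 men + 10 women in total

def import_person(obj_path):
    # Import a MakeHuman .obj, join its sub-meshes, and decimate it
    bpy.ops.object.select_all(action='DESELECT')
    bpy.ops.import_scene.obj(filepath=obj_path)   # leaves the new meshes selected
    bpy.context.view_layer.objects.active = bpy.context.selected_objects[0]
    bpy.ops.object.join()                         # body, clothes, hair, eyes -> one mesh
    person = bpy.context.object
    decimate = person.modifiers.new("Decimate", 'DECIMATE')
    decimate.ratio = 0.1                          # drop the poly count for the FBX viewer
    bpy.ops.object.modifier_apply(modifier="Decimate")
    return person

people = []
for path in ("man.obj", "woman.obj"):
    base = import_person(path)
    people.append(base)
    for _ in range(COPIES_PER_MESH):
        copy = base.copy()
        copy.data = base.data.copy()
        bpy.context.collection.objects.link(copy)
        people.append(copy)

# Scatter the crowd with normally distributed x,y coordinates
random.shuffle(people)
for person in people:
    person.location = (random.gauss(0, 4), random.gauss(0, 4), 0)
    person.keyframe_insert(data_path="location", frame=1)

# Lift the first 5 into a line on one side of the plane, the next 5 on the other
for i, person in enumerate(people[:10]):
    side = 1 if i < 5 else -1
    person.location = ((i % 5) * 2.0, side * 8.0, 3.0)
    person.keyframe_insert(data_path="location", frame=60)

bpy.context.scene.frame_end = 60
bpy.ops.wm.save_as_mainfile(filepath=os.path.abspath("population.blend"))
bpy.ops.export_scene.fbx(filepath=os.path.abspath("population.fbx"), bake_anim=True)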

This is from my first few days using the Python API to Blender, so there’s surely a better way to make all the selections and de-selections, and if you know how, please let me in on it! In any case, I hope this gives you a glimpse of the kind of scripted animations Blender can help you with, and there are a million other things in the docs I haven’t touched yet. In the future I’ll have some examples of how to hook these animations up to pandas dataframes for dealing with data retrieved from SQL queries 😀
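As a rough preview of where that’s headed, here’s a hedged sketch; the dataframe, its columns, and the "man"/"woman" object names are hypothetical, and it assumes pandas has been installed into Blender’s bundled Python:

import bpy
import pandas as pd

# Hypothetical data -- in practice this would come back from a SQL query
df = pd.DataFrame({
    "gender": ["man", "woman", "man", "woman"],
    "in_segment": [True, False, True, False],
})

for i, row in df.iterrows():
    base = bpy.data.objects[row["gender"]]   # assumes "man"/"woman" meshes already exist
    person = base.copy()
    person.data = base.data.copy()
    bpy.context.collection.objects.link(person)

    # Everyone starts on the ground plane
    person.location = (i * 2.0, 0.0, 0.0)
    person.keyframe_insert(data_path="location", frame=1)

    # Rows in our segment rise into an elevated line
    if row["in_segment"]:
        person.location = (i * 2.0, 0.0, 3.0)
    person.keyframe_insert(data_path="location", frame=60)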

Ready to get started?

Here’s a guide to setting up your Blender and Python environment:

http://www.makehuman.org/

High Quality GIFs with FFmpeg

After getting FFmpeg installed, let’s try it out on a MOV downloaded from my Google Photos account:

ffmpeg -i MVI_6654.MOV firsttry.gif

We’re calling the ffmpeg program and telling it that MVI_6654.MOV is our input file with the -i flag. The filename at the end tells ffmpeg what format to convert to and creates the new file, resulting in:


Pretty cool. But I bet I can make it loop nicely by just using the first few seconds, so we use the duration flag, -t, and specify the duration in seconds.

ffmpeg -t 2 -i MVI_6654.MOV secondtry.gif


So it loops! Kinda slow though; maybe we can drop every other frame? A-ha. Thanks to the -r flag, we can choose a frame rate for the output. (Note that these options do different things based on their order: -t is used before the -i input, while -r is used after the input, so it affects the output.)

ffmpeg -t 2 -i MVI_6654.MOV -r "15" thirdtry.gif


That’s more like it. As an added benefit, the file is now 2 megabytes instead of 8 megabytes. I’ve got another one to convert with a fairly long input video, so I’m going to specify a time to start the GIF as well as the duration. The -ss flag defines the starting point. Oh yeah, and your seconds can be decimals. If you have a longer video and want to define the starting position in hours, minutes, and seconds, you can use "hh:mm:ss" format, like "00:00:03" instead of "3".

ffmpeg -t 3 -ss 0.5 -i MVI_6663.MOV -r "15" fourthtry.gif


This has been pretty simple, but I know I’ve seen better looking gifs. Let’s find out how to make it look more like a video…

Learned a lot from this blog about the color palette algorithms.

Found a relatively simple example on stackoverflow.

We have to generate a custom color palette so we don’t waste space storing colors we don’t use (the GIF file format is limited to 256 colors):

ffmpeg -ss 2.6 -t 1.3 -i MVI_7035.MOV -vf "fps=15,scale=320:-1:flags=lanczos,palettegen" palette.png

You should recognize what -ss, -t, and -i do here; -vf is a way to invoke filters on our video, so we can describe the fps and scale. The -1 in scale=320:-1 tells ffmpeg to pick the height automatically so the aspect ratio is preserved, and flags describes which scaling algorithm to use; there’s more info in that blog. So that generates palette.png, which we can use in our second command in the terminal. (By the way, the backslash at the end of the first line is for when you run out of space and need to keep typing: hit backslash and return, and you can keep going on a new line before hitting return to complete the command.)

ffmpeg -ss 2.6 -t 1.3 -i MVI_7035.MOV -i palette.png \
-filter_complex "fps=15,scale=400:-1:flags=lanczos[x];[x][1:v]paletteuse" sixthtry.gif

This uses our .MOV as an input, but has a second -i flag to use palette.png as a second input. The -filter_complex part scales the video and sets the frame rate, labels that result [x], then feeds [x] and the palette (the second input, [1:v]) into paletteuse. OK, I copy-pasted most of that, but I’m happy enough with what it produces that I’ll stop asking questions 😀
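If you end up making a lot of these, the two commands are easy to wrap in a small script. Here’s a rough Python sketch, assuming ffmpeg is on your PATH (make_gif and its default values are just placeholders):

import subprocess

def make_gif(src, dest, start=0, duration=2, fps=15, width=400, palette="palette.png"):
    # Pass 1: build a custom 256-color palette from the clip
    subprocess.run([
        "ffmpeg", "-y", "-ss", str(start), "-t", str(duration), "-i", src,
        "-vf", "fps={0},scale={1}:-1:flags=lanczos,palettegen".format(fps, width),
        palette,
    ], check=True)
    # Pass 2: convert the same clip using that palette
    subprocess.run([
        "ffmpeg", "-y", "-ss", str(start), "-t", str(duration), "-i", src, "-i", palette,
        "-filter_complex",
        "fps={0},scale={1}:-1:flags=lanczos[x];[x][1:v]paletteuse".format(fps, width),
        dest,
    ], check=True)

make_gif("MVI_7035.MOV", "sixthtry.gif", start=2.6, duration=1.3)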

Let me know if there’s anything I can explain in more detail, but don’t ask me for help installing things, just google it!

Side-by-side comparison of easy, default color palettes and head-scratching custom color palettes

I’d like to revisit this with more general advice and show off other aspects of ffmpeg, but in the meantime, here are some articles that others found helpful.

https://rigor.com/blog/2015/12/optimizing-animated-gifs-with-html5-video