I always wanted to make a globe. This is the “AirOcean World Map”, a design by Buckminster Fuller and cartographer Shoji Sadao, which aims to depict the Earth as “one island in one ocean” and distributes geographic distortion so that no country appears much larger than it really is. It can also be folded into an approximate sphere, which I like.
As time goes by, I find myself interacting more often with giant numbers encoded as hexadecimal strings that are hard to tell apart. There are git commit hashes, API keys, magnet links, public/private key pairs, transaction IDs, and with IPv6 even IP addresses are 128-bit hexadecimal strings.
Random number generators and hash functions share two useful qualities when used for unique IDs:
- With a large enough bitspace, the resulting hash can be assumed to be universally unique (infinitesimal chance of collisions)
- The output is uniformly distributed: if the input changes by even one bit, the output will be drastically different.
However, while it’s easy enough to look at a list of git hashes and remember the first few letters while you’re checking out commits, you’ll forget them as soon as you have to remember the next one. Hexadecimal numbers are unique, but they all look the same.
In the interest of making the distinction between large numbers more visual, I started using geometric tessellations, where I saw a similarly infinite range of possibilities. In this case, with 5 × 24-bit colors (between #000000 and #ffffff) and 8 bits to encode edge thickness, you get a large array of visually distinct patterns you could use as header or background images, and this is just with the same 12-pointed star. There are a very large number of further possibilities with 4-, 5-, and 6-fold symmetries and different choices of when to use space and when to alternate stars…
You can try out randomly generated mosaics here by clicking on the “flip 128 coins” link. That’s one coin for each bit of information encoded by the pattern! The URL is updated each time so you can share a pattern, and you can download the SVG ready to use as a background image.
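To illustrate the encoding (this is my own sketch; the actual mosaic generator may pack the bits differently), 128 bits can be split into five 24-bit colors plus an 8-bit edge thickness:

```python
import secrets

def decode_pattern(bits: int):
    """Split a 128-bit integer into five 24-bit hex colors and an 8-bit thickness.
    5 * 24 + 8 = 128 bits: one coin flip per bit."""
    thickness = bits & 0xFF            # low 8 bits encode edge thickness
    bits >>= 8
    colors = []
    for _ in range(5):
        colors.append(f"#{bits & 0xFFFFFF:06x}")   # next 24 bits encode one color
        bits >>= 24
    return colors, thickness

# flip 128 coins and decode them into a palette
colors, thickness = decode_pattern(secrets.randbits(128))
```

Any 128-bit value, like a truncated git hash, could be decoded the same way to give it a stable visual identity.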
On Twitter, Medium, and Instagram, everyone’s feed looks the same and I can never remember where I saw some piece of information. Maybe by allowing a geometric motif generator to customize a page, or just hashing the contents of a page into a mosaic, we can provide a more visual signal that something on a page changed since you last looked at it, and a way to jog your memory when you do see the same pattern.
This project came out of writing my own framework for creating components that each open some file on disk, whether it was Markdown, source code, or media files. The priority was to make components easy to extend by adding “actions” and “reactions” to a dropdown menu on each component.
In the video, I open a ‘library’ component that knows how to render icons and metadata from the filesystem. Then I open a file in a textarea component, which is the prototype for a component that renders Markdown (via the showdown library) and another that becomes a code editor (via CodeMirror).
In the dropdown, each option shows the code it runs: “remove from window” becomes a method call on the component, and “become” turns into this.become(codemirror) and offers a drop-down menu. This design feature is inspired by my favorite part of the 3D modeling program Blender, which allows you to mouse over every button and menu option in the entire interface, showing you the Python call you could make instead. For instance, File → Save corresponds to bpy.ops.wm.save_mainfile().
Exposing the underlying API this way allowed me to learn how to automate my Blender workflows as seen in Producing data-driven holograms with Python/Blender/FBX and I hope any application I write would allow others to automate it just as easily.
Source code for the proto component at the top of the inheritance tree can be found here and implements all the lifecycle callbacks provided by custom elements. For CodeMirror, the actions and reactions just allow updating the key binding to vim and changing the syntax highlighting, but options to download, overwrite, delete, and toggle word wrap are provided by the TextArea component.
AKA the longest shell script I ever wrote
Conda is an environment manager similar to virtualenv, but it features automatic dependency resolution that has made it a joy for me to use. To get a Python environment set up with conda, it’s only a matter of running something like:
conda create --name whateveryouwant python=2.7 <list of modules you need>
You can then enter this environment at any time by calling source activate whateveryouwant and run scripts from there. You can also print a “requirements.txt” file that locks in the dependencies and versions so that the next person running your script can reproduce the exact environment on their system, which would ease a lot of the pain of trying to run random Python examples off the internet.
So what do I need CondaVision for?
But what if I don’t want to run scripts interactively? I have Python scripts that I want to execute as a web API, but they all require different versions of Python and Python packages, so I needed a way to automate creating environments at runtime. Conda Execute aims to solve the same problem, but it introduces a new syntax for declaring dependencies as a comment inside Python files, and I couldn’t get it to recognize that I needed a Python version less than 3, so I went ahead and wrote a bash script that does the following:
- Scan through files in PYTHONPATH to get the names of modules that might be defined locally
- Combine that with the list of modules Python includes by default, to get a list of modules I don’t want to ask conda to install (conda create will throw an error if you say you need the built-in sys module, for example)
- Perform regex on the Python file I want to execute (and all Python scripts in PYTHONPATH) to extract all the modules required
- Compare the results of the regex with the list from step 2 to get a new list of modules I need to ask conda to take care of for me
- Create a hash representing the combination of modules so I can compare it with environments that were created earlier
- If there is no existing conda environment that matches that hash, create one
- Then activate the necessary environment and, in that environment, execute the script
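The heart of that flow can be sketched in Python rather than bash (my own simplification, not the actual CondaVision script; in particular, sys.builtin_module_names only covers compiled-in modules, not the whole standard library):

```python
import hashlib
import re
import sys

# matches "import foo" and "from foo import bar" at the start of a line
IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_][A-Za-z0-9_]*)", re.M)

def required_modules(source: str) -> set:
    """Step 3: extract the names of all modules a script imports."""
    return set(IMPORT_RE.findall(source))

def env_name(modules: set) -> str:
    """Step 5: hash the sorted module list so the same set of requirements
    always maps to the same conda environment name."""
    digest = hashlib.sha1(",".join(sorted(modules)).encode()).hexdigest()
    return "condavision-" + digest[:12]

source = "import numpy\nfrom pandas import DataFrame\nimport sys\n"
# Step 4: drop modules conda shouldn't be asked to install
needed = required_modules(source) - set(sys.builtin_module_names)
# Step 6 would then be: conda create --name <env_name(needed)> <needed...>
```

Because the environment name is derived from the hash, step 6 is just a lookup: if an environment with that name already exists, reuse it, otherwise create it.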
I really like using the fetch API packaged in evergreen browsers, but I was getting annoyed with setting the credentials, redirect, and method options all the time. Plus, it always takes a bit of code to format the querystring correctly, and it’s annoying to set headers and stringify JSON objects that I want to PUT to my server. So I wrote Kvetch. The first argument is the URL, which gets passed directly to fetch. The second argument is an optional query object: it can have as many key-value pairs as you want, and it just gets URIComponent encoded, joined with &s and =s, and appended to the URL after a ‘?’ (so don’t put the ? in yourself). The third argument is the Body, a.k.a. the Request Payload. It can be an object, a string, an ArrayBuffer (i.e. binary data), or FormData.
A lot of people use axios, request, or other libraries on npm, but I didn’t want to add extra features and a bunch of dependencies, I just wanted to prevent repeating myself using the native API.
kvetch.get/post/put/delete/options(URL::string[, QueryObject::Object[, Body::*]])
You can leave the QueryObject and the Body blank if you don’t need them.
You can pass a falsey argument (undefined, etc.) as the QueryObject if you only need a Body. If you give an Object as the Body, it will be JSON stringified and sent with an application/json Content-Type. If you send FormData (including files), the body is handed directly to fetch, which figures out what to do. If you pass a string, it will be sent untouched with a Content-Type of text/plain. ArrayBuffers get sent as application/octet-stream, but I haven’t actually tested this and don’t know if it’s appropriate.
Here’s the source code, you can even copy paste this into your browser console to try it out, or fork the repo here: https://github.com/jazzyjackson/kvetch.js
The purpose of this script is to provide an interface for data scientists to interact with parameters submitted via an HTML form. By defining a schema at the top of a Python script, the named parameters will either arrive as the correct type, validated against a regex, or the program will fail at the validation step. The schema can also be returned as JSON so a consumer of an API can understand what is required.
It also provides methods to read and write from mysql or psql, and write results to AWS buckets via Boto 3.
A TODO for this project is to create an endpoint that builds the web form based on the schema of any requested script, since the programmer will have already defined the input type and name attributes as part of the schema.
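As a sketch of the idea (the field names and the validate helper here are hypothetical illustrations, not the project’s actual API):

```python
import json
import re

# hypothetical schema: each named form parameter gets a type and a regex
SCHEMA = {
    "email": {"type": str, "regex": r"^[^@\s]+@[^@\s]+\.[^@\s]+$"},
    "age":   {"type": int, "regex": r"^\d{1,3}$"},
}

def validate(params: dict) -> dict:
    """Check every named parameter against the schema, or fail loudly."""
    clean = {}
    for name, rules in SCHEMA.items():
        raw = params[name]                      # missing key -> KeyError
        if not re.match(rules["regex"], str(raw)):
            raise ValueError(f"{name} failed validation: {raw!r}")
        clean[name] = rules["type"](raw)        # coerce to the declared type
    return clean

def schema_as_json() -> str:
    """Expose the schema so an API consumer knows what is required."""
    return json.dumps({name: rules["regex"] for name, rules in SCHEMA.items()})
```

The same schema dictionary is what the TODO endpoint could walk to emit the matching HTML form inputs.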
This was written back when Python 2.7 was cool, so I’ll have to update it to replace ConfigParser and StringIO with the modern equivalents.
And for today’s episode of “I don’t know why it’s not working for you, it works on my system!”: I was just testing this for the first time in two years and the files it created were empty. It looks like when running Python on Windows using the io module, I have to explicitly call valid.file.close() for the file to finish writing. I’ll have to test and update the rest of my example files.
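A general Python idiom that sidesteps the problem (not the post’s original code) is a with-statement, which flushes and closes the file even if an exception is raised:

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "results.csv")

# Without an explicit close(), buffered contents may never reach the disk.
# The with-statement guarantees flush and close when the block exits.
with io.open(path, "w") as valid_file:
    valid_file.write("id,score\n1,0.99\n")

with io.open(path) as f:
    contents = f.read()
```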
Pasting source code into a blog post tends to leave it in <code> blocks without any syntax highlighting. Worse than that is having to log into WordPress and edit a blog post to make adjustments to the code: once code has been pasted into a blog, when will you notice that it has a typo?
(One workaround would be an iframe inside a noscript tag, but GitHub doesn’t allow gists to appear in iframes, sending an X-Frame-Options: deny header, so it would have to be done server-side.)
I was happy to find the WordPress 5 compatible plug-in called Gist Github Shortcode. I’m surprised it only has a few hundred active installations, because it works great except for one thing: the styling on line numbers was broken! The line number is a :before pseudo-element, and the gutter that is supposed to contain it is table data (<td>), but the pseudo-element got painted outside of the layout “flow” and gets clobbered by the source code, which is painted right on top.
After fiddling with the display rules for a while, I found that applying display: flex to the table row expanded the gutter to almost fit the number; I then had to apply position: relative; right: 0.5em; to shift the number back to the center.
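Reconstructed from that description, the snippet looks something like this (the selectors are my guesses at the plug-in’s markup, so adjust them to match the actual gist HTML):

```css
/* hypothetical selectors: adjust to the gist shortcode's actual markup */
.gist tr {
    display: flex; /* expands the gutter so the :before line number fits */
}
.gist td.js-line-number {
    position: relative;
    right: 0.5em; /* shift the line number back toward the center of the gutter */
}
```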
I added this CSS snippet to the “Additional CSS” form on Dashboard → Customize Your Site and was good to go. Here’s a live embedding of the gist shortcode with the style applied:
Thankfully, since the plug-in itself has a page on GitHub, I was able to report issue #5 to the maintainer. Looking at the changelog, I just now realize this library hasn’t required an update in 5 years and still works out of the box! As a WordPress plug-in that loads content through the GitHub API, both sides of this dependency have remained stable enough over the last 5 years that old, useful software still works. Cheers to that!
I’ll be keeping more geometric artwork at mosaic.coltenj.com
I’ve always loved holograms, those virtual objects intermingled with the real world in so many of my favorite films, from Star Wars to Zenon: Girl of the 21st Century. But I wasn’t looking forward to learning Unity game development and Windows system APIs before I could make my first animation for the hololens platform.
So while I still have to dig into Windows guts and Direct3D to make interactive applications, I was excited to find a straightforward way to generate Hololens-compatible animations with just a few lines of Python.
Using the python embedded in the open source Blender project, we can read files, make database queries, do whatever data science we want to do, and then import meshes or generate cubes, apply colors and materials, define keyframes, and export an FBX file that can be displayed in Hololens or on web pages. Hololens lets you open multiple animations and place them around the room, so while I can’t define any gaze-and-click behaviors, I can still move and scale and rotate my animation in the mixed-real world.
The first step of course, is to head to blender.org and download the latest. I’m going to be really verbose and try to help out people on both Windows and Mac, because I had to figure it out on both myself. Follow all the default options, and on Mac move blender.app to /Applications like you normally do. From here, you can check out a million youtube videos on what Blender is for (That’s how I learned it!) but next we’re going to add ‘blender’ to our path so we can execute it from the command line anywhere in our system and not even have to learn how to use the GUI.
For MacOS, you can run this in your terminal, or add it to your .bashrc:
For Windows, open up powershell (right click, ‘run as administrator’, which you’ll need to do for any commands that save to disk) and run:
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Blender Foundation\Blender", [EnvironmentVariableTarget]::Machine)
Now anytime you open your terminal/powershell, you can type ‘blender’ to launch Blender. (On MacOS, you may need to restart your shell for it to take effect, or run ‘source ~/.bash_profile’ to force bash to reload your saved preferences, including your path.) (Another option on Windows is to search for ‘edit system variables’ from the start menu, click ‘Environment Variables’, and use the GUI to add C:\Program Files\Blender Foundation\Blender to your Path.)
But going through all that trouble just to open Blender isn’t the point. The point is now we can save python files and pass them to Blender as a command line argument. Before we do tho, there’s one last step: we have to re-save the start-up scene to be a blank slate, otherwise our animations will all have a cube sitting in the middle of the scene (Blender’s default start up scene).
So open blender, and hit Select -> (De)select All, then Object → Delete, confirm. (or just ‘a’, ‘x’, ‘Enter’ if you want to use keyboard shortcuts). Next click File → Save Startup File. We’re finally ready to generate some animations with python!
Save the following code as helloworld.py, navigate in the terminal to the folder you saved it in, and type:
blender -b -P helloworld.py
And you should see Blender printing out a load of information about what’s happening, then it should create an fbx file and exit.
Check out the inline comments for a little bit of understanding of what the code is doing, and I’ll be writing more example code in future blogs.
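For reference, a minimal script along those lines might look like this. It is my own sketch rather than the post’s actual helloworld.py; the bpy calls are real Blender API, and the import is guarded so the pure-Python part can run outside Blender too:

```python
# run with: blender -b -P helloworld.py
try:
    import bpy                      # only available inside Blender's Python
except ImportError:
    bpy = None

def slide_positions(n_frames=30, distance=2.0):
    """x positions for a cube sliding along the x axis, one per frame."""
    return [distance * frame / n_frames for frame in range(n_frames + 1)]

if bpy is not None:
    bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
    cube = bpy.context.object
    for frame, x in enumerate(slide_positions(), start=1):
        cube.location.x = x
        cube.keyframe_insert(data_path="location", frame=frame)
    bpy.ops.export_scene.fbx(filepath="helloworld.fbx")
```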
And of course, the answers to all your questions are in the docs!
To make it easy to deploy these to Hololens, I just drop them in the onedrive belonging to the account I created with the hololens. There’s a python API for uploading direct to onedrive, too, so I’ll probably explore that in the future.
Optional: import interesting python modules like numpy and pandas and all the rest.
Blender comes packaged with its own python executable hooked into all of Blender’s guts, so to install any modules that aren’t built in to Python 3 we need to run get-pip.py with Blender’s python. First, download get-pip.py; I’ll put it on my desktop. Then, navigate to Blender’s python executable.
command on windows:
cd "C:\Program Files\Blender Foundation\Blender\2.78\python\bin"
command on MacOS (assuming Blender was moved to Applications):
(Note that 2.78 is the Blender version, not the Python version.) Run ‘ls’ and take note of the python executable’s name. On Windows it was just python.exe, on Mac it was python3.5m. Once you’ve cd’d into python/bin:
Install Pip on MacOS:
For Windows I had to include to the full path of get-pip.py. Also this requires administrator privileges. So run powershell “As Administrator”
./python "C:\Users\Colten Jackson\Desktop\get-pip.py"
If that exits successfully, you can run ‘pip install pandas’ and whatever Python modules you want to use from within Blender. On Mac, pip added itself to my path and I could use it right away, but on Windows I had to reference pip.exe and run it from the python directory, so it ends up looking like this (again as Admin so pip can save files to disk; your permissions may vary):
Let me know what questions you have, tho of course I would appreciate it if you googled it first !
More examples and explanations here:
PS all my gifs are in black and white because I don’t know how to do color correction, so I just get rid of the color 🙂
Also, shout out to my primary sources that taught me all this:
Reviewing the options for game development with Unity and Visual Studio was intimidating to say the least. It looked like I would need to learn C# while adjusting to the Unity → export to Visual Studio → compile to Hololens toolchain. Worse, while I could follow along the Unity introduction to make a ball roll around a plane, I wasn’t able to export this basic demo to Hololens after hours of re-installing various versions of software.
I started wondering if Blender might let me produce animations for Hololens, since it allows Python scripting, which might let my pythonic-data-scientist coworkers jump into producing holograms much faster than writing our own video game without any game engine experience.
Now, there are many guides on how to design things in Blender, import the meshes and actions into Unity, and then export the project to Visual Studio for compilation. I hate the idea of that toolchain, dealing with incompatibilities and export quirks at each step of the way. I want to write a Python script that produces an animation for Hololens.
With a little googling, I discovered that Hololens can indeed display animations in the FBX file format which Blender is happy to export, so I started playing around with the following code to make some cubes dance around:
The FBX file produced by this code can be opened directly in the Hololens app 3D Viewer (I just move my fbx files into a folder sync’d to onedrive to make it easy to access them on Hololens).
How about something a little more interesting: arranging a subset of the collection in a line. From here I hope you can use some imagination on how you could tie this in with data retrieval and visualization.
For Annalect Lab’s minimum viable hologram, we’re interested in visualizations of populations, and talking about subsets. So I’ll adjust the previous code to use meshes representing people. There’s a million ways to make meshes, but there’s a tool perfectly suited for my task:
I won’t say much about MakeHuman cause I think it’s pretty intuitive. You can customize everything about your character and use some pre-loaded outfits and poses, and then export the mesh as a .obj. To make it easy on myself I saved these obj files (man.obj, woman.obj) in the same directory I’ll output my animations. Now, these objs are made up of multiple meshes, and are fairly high resolution so there’s a lot of code to work through modifying them and reducing the poly-count to get to an acceptable file size for the default FBX file viewer on hololens, but the result is a lot of fun:
The main flow of the program is like this:
- Import an object, which will select all 4 meshes of that object, and join them.
- Decimate the mesh so it isn’t so high resolution.
- Create copies of each mesh.
- Create an array of x, y coordinates along a normal distribution.
- Loop through an array of all the objects and set each one’s location to the next random coordinate.
- Select the first 5 objects in the shuffled array and set their keyframes to animate them as they move into a line elevated from the group.
- Do the same with the next 5 objects, but to the other side of the plane.
- Save a .blend file and a .fbx animation to move to Hololens.
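The coordinate and placement steps could be sketched like this (a simplification of mine, not the post’s full script; the bpy import is guarded so the math runs outside Blender too):

```python
import random

try:
    import bpy                      # only present inside Blender
except ImportError:
    bpy = None

def scatter(n, sigma=1.5, seed=42):
    """x, y coordinates along a normal distribution, reproducible by seed."""
    rng = random.Random(seed)
    return [(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)) for _ in range(n)]

coords = scatter(40)

if bpy is not None:
    meshes = [obj for obj in bpy.data.objects if obj.type == "MESH"]
    # drop every mesh object at the next random coordinate
    for obj, (x, y) in zip(meshes, coords):
        obj.location = (x, y, 0.0)
        obj.keyframe_insert(data_path="location", frame=1)
    # animate the first 5 into an evenly spaced, elevated line
    for i, obj in enumerate(meshes[:5]):
        obj.location = (i * 1.2, 0.0, 2.0)
        obj.keyframe_insert(data_path="location", frame=60)
```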
This is my first few days using Blender’s Python API, so there’s surely a better way to make all the selections and de-selections, and if you know how, please let me in on it! In any case, I hope this gives you a glimpse of the kind of scripted animations Blender can help you with, and there’s a million other things in the docs I haven’t touched yet. In the future I’ll have some examples of how to hook these animations up to pandas dataframes for dealing with data retrieved from SQL queries 😀
Ready to get started?
Here’s a guide to setting up your blender and python environment: