SVG Sonification

I recently completed a project that took original vector art made in Adobe Illustrator and turned it into MIDI notes to generate music. When I began the project I knew there were a lot of ways to go about this, but I don’t know a ton of programming, so I wanted to keep it as simple as possible without learning a whole new framework.

One approach I ran across is the no-prize solution of printing your drawing on a transparency and tracing it into a MIDI grid, as demonstrated by this brilliant composition based on a drawing of a unicorn. But SVGs exist as a series of data points, right? So there must be a way to translate those points into MIDI signals… The following is my method. There are probably ways to improve this workflow, but I figured it might help someone else, or at least provide some new tools to anyone interested in data processing.

Step 1. Make some vector art

Star Wars - illustrator vector art

I used Adobe Illustrator, but anything that can export an SVG file should work. You can open an SVG with a text editor to view the code. This method only works with the types of “roller-coaster” line drawings seen in my Story Scores project. For example, a straight line might only generate a single data point (e.g. “make a line at these coordinates and it goes straight for 200px”). I also ran into trouble applying this method to closed polygons, because the resulting MIDI notes would trace the circumference of the shape in a linear progression over time. Instead of drawing a full circle in the MIDI grid, I got an unwrapped circle that looked more like a parabola.

Turn off the strokes for your artwork (I duplicated my artboard to keep the original version intact).

Switch on Outline mode (⌘+Y) to see the raw paths, and drag each segment of your drawing into the Quick Export window. For my project I wanted a different line for each character in a movie (in the final audio composition, each will be represented by a different instrument), so I exported each character’s line as a unique SVG.

Step 2. Convert SVG to text coordinates

SVGs are commonly converted to text for use in web development and the like, but the code isn’t quite what we need for this sonification experiment. Open an SVG in a text editor and copy the entire code.

svg code

Paste the code into this online SVG-to-WKT converter by David McClure. This extrapolates the SVG data onto a two-dimensional grid, giving you XY coordinates in what’s called “Well-known text.” Handy.

Delete the opening and closing parentheses and the “GEOMETRY” text at the beginning, so that all you’re left with is the coordinates. Copy all the coordinates…

…and paste them into this online tool from Text Mechanic to add a line break after every comma, turning the coordinates into a list.

line break tool
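If you’d rather script these last two cleanup steps, here’s a rough Python sketch of the same idea (the wrapper text and coordinate values are made up for illustration; check what the converter actually outputs):

```python
import re

# Hypothetical WKT output from the SVG-to-WKT converter.
wkt = "GEOMETRY (12.5 340.1, 50.0 310.7, 87.5 355.2)"

# Strip the leading keyword/parenthesis and the trailing parenthesis,
# leaving only the comma-separated coordinate pairs.
coords = re.sub(r"^[A-Z]+\s*\(+|\)+$", "", wkt.strip())

# Put each pair on its own line, like the Text Mechanic step.
pairs = [p.strip() for p in coords.split(",")]
print("\n".join(pairs))
```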

Copy and paste the list of coordinates into a spreadsheet app. I used Google Sheets. Split the list of coordinates into two separate columns using “Data” >>> “Split text into columns” and set the separator to comma. The first column is just a linear progression from left to right (the X values); we’re interested in the second column—the Y values. You may be able to just use that second column for the rest of the process, but I found that most of my datasets included too many coordinates. So here’s a quick way to remove some of those numbers and retain the essential shape of your drawing.
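In script form, pulling out the Y values is a one-liner. This sketch assumes each line holds an “x y” pair; adjust the split character if your pairs are comma-separated:

```python
# Hypothetical coordinate pairs, one per line from the previous step.
pairs = ["12.5 340.1", "50.0 310.7", "87.5 355.2"]

# Keep only the second number of each pair -- the Y coordinate.
y_values = [float(p.split()[1]) for p in pairs]
print(y_values)   # [340.1, 310.7, 355.2]
```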

google sheets
  1. In a new column, enter “=mod(row(),2)” (without the quotation marks).
  2. Drag that value down to paste it into the following cells, for the full length of your dataset.
  3. Scroll up to the top, and select the entire column.
  4. Select Data >>> Filter and uncheck the cells with a value of “1” so only the “0” cells remain.
google sheets list

This is a non-destructive way to cut your dataset in half (you can re-activate the “1” cells if desired). You can also enter another divisor in the initial equation to reduce the number of cells further, e.g. =mod(row(),8).
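If you’re scripting instead of spreadsheeting, Python’s slice notation does the same thinning; the step value plays the role of the mod divisor (the numbers here are placeholders):

```python
# Hypothetical Y values from the spreadsheet column.
y_values = [340, 338, 331, 320, 305, 287, 266, 243]

# Keep every 2nd value, like filtering on =mod(row(),2)...
halved = y_values[::2]

# ...or every 8th, like =mod(row(),8).
every_8th = y_values[::8]

print(halved)      # [340, 331, 305, 266]
print(every_8th)   # [340]
```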

Copy the second column of coordinates (Y values) and paste it into TextWrangler (or a similar plain-text editor).

text wrangler replace all

Use “Replace All” to change all the commas to semicolons, and remove all the hyphens (-); we don’t want any negative numbers. Save each list of coordinates as a separate text file.
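The same find-and-replace can be scripted. This sketch assumes the pasted values carry trailing commas, and the filename is made up:

```python
from pathlib import Path

# Hypothetical pasted Y values, one per line with trailing commas.
raw = "340.1,\n-310.7,\n355.2,"

# Swap commas for semicolons and strip minus signs, since we don't
# want any negative numbers going to the patch.
cleaned = raw.replace(",", ";").replace("-", "")

# Save each character's list as its own text file.
Path("character1.txt").write_text(cleaned)
print(cleaned)
```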

Step 3. Convert text coordinates into MIDI

I’m sure there are other options for this, but I’ve used Pure Data in the past, and it’s open source, so it’s free to use. Pure Data (install PD-Extended) is kind of tricky to learn, because there’s no real GUI and it’s totally open-ended. If you want to learn more about PD, I recommend PD Tutorial—but you can also just download my patch and run it without too much trouble.

You’ll need to turn on your computer’s virtual MIDI bus to allow PD to send MIDI signals to Ableton (or another DAW) as if it were an external MIDI controller. Here are some instructions for doing this in OS X, where it’s called the “IAC Driver” — and if you’re on Windows, just do some Googling ¯\_(ツ)_/¯

pure data iac bus
 
ableton iac bus

Once you’ve activated a virtual MIDI Bus, make sure PD is set to use it (Media >>> MIDI Settings), and Ableton is set to receive it (Preferences >>> MIDI). You can test the connection by opening up PD’s “Test Audio & MIDI” window under “Media.”

pure data midi test

Place the PD file in the same folder as your text files (the patch looks for text files in the folder it’s located in).

In the PD patch, change the name of the file referenced in the “read” object to match one of your files. In order to change text, you’ll need to enter Edit Mode (⌘+E).

Pure Data MIDI send bpm

Set a BPM using the labeled number object, and set the “Max value for each file” to whatever the highest number is in your list of coordinates (it’s fine if you’re off by a little). The patch will perform some math to reverse the number range so that higher numbers result in higher MIDI notes. By default, SVGs measure distance from the upper left, so a high value equates to the bottom of the screen, and we want those troughs to be low notes (at least I did). Set a low/high range for your coordinate list.

The “maxlib/scale” object will shift the range of MIDI notes to a new low/high value, letting you customize the range of the MIDI notes coming out of PD. For example, you could limit the notes to a low register (e.g. 30-50) or use the full range from 0-127. The first two numbers are the “input” for this object, and the second two are the new range that is passed on to the “makenote” object. The other two number values going into “makenote” are the velocity and sustain; customize them if you wish.
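For reference, the flip-and-rescale the patch performs boils down to one line of arithmetic. This Python version mirrors my understanding of the maxlib/scale step; the numbers in the example are placeholders:

```python
def y_to_midi(y, max_value, note_low, note_high):
    """Map an SVG Y coordinate to a MIDI note number.

    SVGs measure Y from the top of the screen, so the value is
    flipped first (max_value - y) to turn visual peaks into high
    notes, then rescaled into the target note range, like the
    maxlib/scale object does.
    """
    flipped = max_value - y
    note = note_low + flipped * (note_high - note_low) / max_value
    return round(note)

# The top of the drawing (y=0) maps to the highest note...
print(y_to_midi(0, 400, 30, 90))     # -> 90
# ...and the bottom (y=max) to the lowest.
print(y_to_midi(400, 400, 30, 90))   # -> 30
```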

Click the big green square (toggle) to switch on the patch and begin sending notes. You may need to exit Edit Mode and click on the “read” object to read the new text file, and then click the “rewind” object to make sure it starts at the beginning. You can press “rewind” anytime.

That’s it! Your DAW (Ableton, Logic, etc.) should be receiving MIDI notes generated from the text file. If it’s not working, try restarting and relaunching both applications. Sometimes the virtual MIDI bus needs a kick to get going.

Step 4. Choose your instruments

Now that you’ve got a banging MIDI machine, choose some MIDI instruments or samples to be triggered by your data-generated notes. You can create synths and noises in PD, but it’s a lot easier to do this composition part in a polished DAW.

Ableton MIDI tracks

Once you’ve chosen an instrument, hit record in your DAW — and then rewind the text file in PD to start from the beginning.

Interdisciplinary Anatomy Course

Students in library
Students visit the rare books collection at Penn State to see Renaissance anatomy texts.

For the past year, I’ve been teaching a new course at the Pennsylvania State University called The Visual Body. The course is co-taught by Dr. Nicole Squyres, and takes a holistic approach to exploring the field of scientific illustration. Students look at anatomical illustrations from the twin perspectives of art and medicine, using the field as a lens to study the histories of science, technology, and publishing. Alongside the history of the field, the course includes lab exercises with cadavers and design workshops that span traditional media to digital platforms. In an effort to put these skills to use, and step into the role of artists and authors themselves, students spend the final weeks of the course making zines and alternative publications about human anatomy.

The course is open to students from a variety of disciplines, and has proven to be a novel mix of technical knowledge and freewheeling discussion. As in some of my own projects, I get to work in creative and analytical modes, and help students discover the possibilities of stepping outside traditional academic boundaries.

Here are a few of my favorite images from the vaults of anatomical illustration:

Charles Estienne, 16th cen, French
Charles Estienne, 16th cen, French
Guido-da-Vigevano
Guido da Vigevano, 14th cen, Italian
John-Arderne
John Arderne, 1430, British
Frederick Ruysch, 17th cen, Dutch
Frederick Ruysch, 17th cen, Dutch

Recent Writing

I’m continuing to write weekly blog posts for my virtual residency with SciArt Center, along with collaborator Paz Tornero. We’ve been sharing research about water pollution, creative activism, and data visualization; it’s a mix of important topics, so go check it out.

Along with that, I wrote an article about the ongoing public discourse around “post-truth”—given my own practice of inventing narratives and mixing fact and fiction, I felt compelled to weigh in. There have been so many op-eds struggling to make sense of U.S. politics recently that I had plenty of information to go on when I began comparing Trump’s gaslighting to Fictive Art. The article was originally published on Medium but has since appeared on Bmore Art.

“Under the Scope” at Goucher College

Art by Benjamin Andrew in an exhibition at Goucher College

I’m excited to be participating in another art-science project (still can’t quite bring myself to use the word “sciart” in passing) at Goucher College. Under the Scope is a group exhibition featuring seven artists from the mid-Atlantic whose work engages science – both as subject and process. I’m showing three different projects from the past four years, including Sounds of Discovery, Chronoecology Corps, and the recently concluded Foggy Bottom Microobservatory.

Alongside objects and installations from those projects, the show features two of my short films, including a newly re-edited special edition of Chronoecology Field Report 2013064. 

The reception on Thursday, Oct. 27 will also include a hands-on workshop inspired by the Microobservatory and focusing on wild fermentation. Since this workshop will be indoors, it will also feature a microscope for examining yeast and bacteria under the scope (sorry).

Visit Goucher’s website for more information. 

SciArt Virtual Residency

Benjamin Andrew and Paz Tornero Sciart collaborators

I’m excited to be participating in a “virtual residency” hosted by the SciArt Center called The Bridge. The program pairs together artists, scientists, and hybrid practitioners to collaborate on new projects across geographic barriers. I’m partnered with biologist/artist Paz Tornero, currently working in Spain. We’ll be blogging regularly over at SciArt, so check back there for updates.

Paz and I share an interest in microbiology, ecosystems, and environmental activism, though she’s worked in actual laboratories at universities around the world (compared to my studio-kitchen experiments).

Micro-Ecosystem Workshops

Benjamin Andrew - Foggy Bottom Microobservatory Workshop

As part of the Foggy Bottom Microobservatory in Washington, D.C., I recently led a workshop about natural fermentation and wild microorganisms. After meeting at the Microobservatory itself, we discussed the near-universal presence of wild yeasts and microbes and their legacy within a given environment. Yeasts replicate identical copies of themselves, leading us to wonder whether wild yeasts from Foggy Bottom’s historical breweries could still be present in the area. By cultivating wild microbes for food and drink, we can literally ingest a living environment and transform our own micro-ecosystems.

Participants in the workshops learned about my project of cultivating wild yeasts for homebrewing beer, and made naturally carbonated sodas and sourdough starters that they took home at the end of the day. We also used sanitized cotton swabs to sample local plant life in hopes of discovering wild yeast for brewing.

A second workshop will be offered on Saturday September 17 from 12:30-1:30pm.

Click here for more info on all Turf and Terrain events. 

 

Foggy Bottom Microobservatory

Foggy Bottom Microobservatory

I’m well into an ongoing project that is part of the 2016 Foggy Bottom Sculpture Biennial, Turf and Terrain (curated by Danielle O’Steen). The Foggy Bottom Microobservatory is an effort to explore natural fermentation and wild microorganisms through the lens of a working observatory. A physical structure is located at 915 26th Street NW, Washington, D.C., and shows the ongoing work of a team of tiny researchers—or “micronauts”—as they study the tiny beasts responsible for fermented beers, sodas, and more.

Micronaut adventuring in a world of wild yeast

These microbes can actually be collected from a given environment and observed through a microscope. Over the next several months I’ll be culturing and isolating specific strains of yeasts and bacteria for use in homebrewing and other DIY fermentation experiments. Inspired by the United States Naval Observatory (once supervised by Matthew Fontaine Maury!), which sent out a telegraph each day to mark the national time, the Microobservatory will be posting updates on experiments and adventures each day at 12pm EST. Follow @microobservatory on Instagram or visit the website to subscribe to a weekly email or just view the Broadcasts online. Also see a short feature on the project in A Creative DC.


Art + Science at JHU

Last year I took my first step into the world of curating, and while it was a sort of ragtag experiment of a show, it did highlight a number of issues I’ve explored in my own work. I’m organizing a second iteration of Research Remix this year, so I wanted to mention it here.

The program is organized by the Digital Media Center at Johns Hopkins University and is designed to foster collaborations between JHU researchers and local artists and designers. As with last year project, participating researchers will submit summaries of their work and artists will choose a project to remix or interpret into a work of their own. This year we have substantial funding through an Arts Innovation Grant from JHU and are hoping to create even more collaborative art/science projects. Artists, designers, and researchers can sign up at the project website. See the video below for excerpts from last year’s program.