Yesterday we continued testing our aerial video setup, which consists of a GoPro Hero3+ mounted on a Zenmuse H3-3D gimbal attached to a Phantom 2 quadcopter drone.
After our conservative first flight, which stayed within roughly 40 feet of the ground, we delved into learning more about our rig’s abilities. In the latest flight, we tested the Phantom 2’s GPS home-point feature: once the unit locks onto multiple GPS satellites, it can return to its launch position and land itself, which is especially useful because it does so automatically if the signal from the controller is lost. We also launched this time with the compass fully calibrated, which allowed for a greater degree of control and course correction.
Another feature we tested this time around is the ability to adjust the camera tilt in flight, which is especially useful because the amount of fisheye distortion from the GoPro changes depending on its angle relative to the horizon line or any other linear plane. The GoPro can shoot in “SuperView,” wide, medium, and narrow fields of view, each tier showing progressively less fisheye distortion but at the cost of resolution and breadth of view. The current model can shoot 4K at a maximum of 15 fps, 2.7K at a maximum of 30 fps, and higher frame rates at lower resolutions. This time around we tested at 2.7K, knowing we would be able to crop out landing gear or propellers that might drift into the shot while still retaining at least 1080p resolution after cropping.
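As a back-of-the-envelope check of that cropping headroom (the frame sizes here are our assumptions; the GoPro’s 2.7K mode is commonly listed as 2704×1520):

```python
# Rough crop-headroom check: how many pixels of a 2.7K frame can be
# trimmed away (e.g. to remove landing gear or propellers) while still
# delivering a full 1080p image. Frame sizes are assumptions.
SRC_W, SRC_H = 2704, 1520   # assumed 2.7K frame
DST_W, DST_H = 1920, 1080   # 1080p target

crop_w = SRC_W - DST_W  # total horizontal pixels available to crop
crop_h = SRC_H - DST_H  # total vertical pixels available to crop
print(crop_w, crop_h)
```

That leaves several hundred pixels of margin on each axis, which is why intruding propellers can simply be cropped out in post.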
This time we also used the ProTune feature on the GoPro, which shoots with a flat color profile, preserving additional dynamic range at the cost of color saturation. Color grading in post-production can then restore the lost saturation, which we did in Final Cut Pro X, using basic luma waveform scopes for reference in conjunction with the automated color-balance correction feature.
Armed with additional failsafes tested at lower heights and a nuanced set of GoPro settings, we went much higher, as evidenced by the latest video and stills. The stills were captured using the GoPro’s interval-shooting setting, and no color grading was applied to them.
Our intent is to use the quadcopter to capture video to fill out our shot list for our Cold Storage documentary project, but we can foresee applications for other metaLAB projects, especially those dealing with outdoor tree and plant life.
In preparation for getting some aerial footage to complete our Cold Storage documentary, we took our new Phantom 2 drone out for a spin, sporting a GoPro Hero3+.
Our test flight consisted of several increasing altitude pushes, hovering to test the unit’s auto-stabilization, and steering around several basic obstacles. To film with consistent speed and control will require some practice, but by the end of our flight we were gaining increasing levels of comfort with the operation.
The auto-stabilization works well in conjunction with the gimbal, but in reviewing the footage, it becomes apparent that on occasion the body of the drone would come into view before it settled. Using a less wide-angle option on the GoPro will hopefully address this the next time out, as well as minimize the degree of fisheye distortion.
The position of the sun will also pose a challenge during our shoot for the film, since we will need to keep the drone’s cast shadow out of the shot.
We also intend to test the GPS functionality and more advanced features in the next flight, along with greater heights in a more open space.
Curarium — what is it? Even if you pegged it as an aquarium for curating, what exactly does that mean? To point out some of the features and functions that Curarium enables with various types of collections, we put together this animation. We are steadily marching towards our beta launch. In the meantime, follow the Curarium blog for updates.
Like visualizations, individual records stored on Curarium can be embedded within other platforms such as WordPress. This gives access to the Curarium interface for the record, plus any annotations associated with it.
The code that generates this particular embedding is:
<div class='curarium' style='width:800px;height:600px;'></div>
Stay tuned at Curarium.com!
Thumbnail visualization of a subset of the Harvard Art Museums collection, Japanese objects from the 17th Century
Out at the Harvard Depository in Southborough, Massachusetts, there are many stories to tell. How do the books come to and from campus, nearly an hour away? What is the best way to store a library collection whose offsite holdings alone are approaching ten million? What does it take to keep books at cold, preserving temperatures and film reels at even colder ones?
Our upcoming documentary, Cold Storage, uncovers an ecosystem of laser scanners, UV fly zappers, cherry pickers and a mezzanine of machinery. It shows a place where books are sorted not by the methods of Dewey or those of the Library of Congress but by size.
In this trailer, take a peek inside the expansive interiors where our story begins, and stay tuned for the debut of our experimental and interactive documentary this summer, which will let you use the Harvard Depository as a lens through which to examine the cultural and technical dimensions of libraries.
As the Curarium platform finally enters its beta stage, we are fleshing out some cool functionality, like the ability to embed Curarium-generated visualizations in other platforms such as WordPress. With a simple snippet of HTML like this:
<div class='curarium' style='width:800px;height:600px;'></div>
we can embed a treemap visualization displaying information about a particular collection.
For instance, a diagram of all topics in the Homeless Paintings Collection:
A diagram of topics once we filter the collection to include the topic ‘beards’:
A diagram of dates once we filter the collection to include the topic ‘beards’:
A diagram of topics once we filter the collection to include the topic ‘saints’:
This past Wednesday, metaLAB hosted its Spring openLab at the multi-tiered Arts@29 Garden space to demonstrate the progress and interconnection of projects underway by the core metaLAB team, students, and affiliates.
The upcoming metaLABprojects publications were out for preview. Core provocations from one, The Library Beyond the Book, were remixed in a derivative playable card deck while Library Test Kitchen’s inflatable mylar reading room, receipt printer spewing forth the U.S. Constitution, and custom 3D View-Masters encouraged discourse about library space and content interaction. All the while reels of Cold Storage, an interactive documentary about offsite storage through examination of the Harvard Depository, played out not just the library beyond the book, but the library beyond the library.
Cold Storage is complemented by a humanities studio course, one of two to debut this semester (along with Homeless Paintings of the Italian Renaissance). Existing outside a traditional departmental structure, these interdisciplinary courses have stressed a team-teaching dynamic and learning through experimentation in order to grapple with new materials, problems, and developing approaches to solving those problems with constant and critical evaluation of the process. In Cold Storage, a student may wireframe a web interface to host a staggering body of multimedia content or produce a tightly-focused video or audio piece that is part of the featured content itself. In Homeless Paintings, a student may investigate painterly representation of religious themes or design an algorithm to help identify a lost painting’s present whereabouts.
The lost paintings come from Bernard Berenson’s monochrome photo archive of Renaissance art, which is concurrently the pilot collection of Curarium, a web platform that ingests collection metadata and media to enable both item-level annotation and macro-visualizations that showcase and tell stories about the relationships among objects. Looking forward, it seeks to enable and enrich the kinds of stories that can be told about the relationships among multiple collections, most immediately by adding content from the Harvard Art Museums and the Arnold Arboretum.
Approaches to visualizing data from the Arnold Arboretum are the focus of The Life and Death of Data, which brought a series of projections and a topographic foam cut to openLab in order to map spatial and temporal acquisition patterns of plants and shrubs. The project is also slated to develop an online interactive documentary experience.
All of this merely scratches the surface of openLab, which also featured student work in data narrative, digital ethnography, and adversarial design from Mixed-Reality City and Connections, projects from the History Design Studio, work from Palladio at Stanford, and the bioluminescent Luminosities. Check out the video for a feel of the event.
In class during visualizations week, we focused to a great degree on the content of the viz: the value added, the real reason for its rendering. But in section, we worked mostly on getting familiar with some data-viz platforms. The students will carry these skills into their final projects and create visualizations that nicely straddle the gap between form and content, but for now they are strictly eye candy.
Some Lev Manovich Imageplotting:
776 Van Goghs plotted Hue-Median (x) v. Saturation-Median (y)
Students noticed some potential trends, do you?
Van Gogh Again, 776 Again
Brightness-Median (x) v. Brightness-Median (y)
Manyeyes Tag Cloud of the screenplay of 12 Years a Slave
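For the curious, the ImagePlot-style coordinates in the Van Gogh plots above can be sketched in a few lines of Python. This is our own illustration using Pillow and NumPy, not Manovich’s actual ImageJ macros: each painting is reduced to the medians of its hue, saturation, and brightness channels, which then serve as scatter-plot coordinates.

```python
from PIL import Image
import numpy as np

def hsv_medians(image):
    """Return (hue, saturation, brightness) medians of an image,
    each in 0-255, usable as ImagePlot-style plot coordinates."""
    hsv = np.asarray(Image.open(image).convert("HSV"))
    return tuple(float(np.median(hsv[..., c])) for c in range(3))
```

Plotting the hue median on x against the saturation median on y for each file, with the painting’s thumbnail drawn at that point, reproduces the kind of layout shown above.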
For their midterm project, students were asked to develop a set of criteria for building a collection of Homeless Paintings and to create a spotlight in Curarium to share the collection they derived from those criteria. The criteria could take the form of an ontological scheme, focus on iconography, place, or creator, or take an entirely different form. Regardless, the spotlight needed to explain the criteria, curate individual works by its terms, and strive for high comprehensiveness.
You can see some of the students’ work on the Curarium platform, which has an ever-changing and improving face:
In “Sign of Three,” undergrad Shuya Gong studies paintings that feature the Madonna, the baby Jesus, and a third figure. She found 700 such paintings and chose a sample group of 75 for her study. She categorized the paintings by how the three figures were arranged, finding that certain faces and poses came up again and again. She created a visually striking map and key charting these figures’ relationships in space:
Shuya went on to do a fascinating close reading of some particularly similar sets of paintings. You should certainly check out her spotlight on Curarium.
Ben Zauzmer’s “Auto-Reverse Google Image Search” was another stellar project, which also gives a sense of the diversity of skill sets and interests in the class. Ben explains in his spotlight: “I wrote a program that loops through all 11,233 and marks which ones have a Best Guess on Google Reverse Image Search. Since Google prevents scraping, I used Selenium WebDriver to repeatedly open and close a browser that would automatically search Google Images, and then I used Python to record the results and write them to a CSV. The program took about 31 hours to run, or 10 seconds per painting.”
He found that about 771, or 7%, were matches, and that, true to that proportion, 7 of the first 100 had matches. He then investigated those records manually as a representative sample. Take a look at what he found in his spotlight.
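As a rough illustration of the bookkeeping half of a pipeline like Ben’s, here is a sketch. The browser-automation step is stubbed out as a caller-supplied function, since the real Selenium scraping depends on Google’s page layout; the function name and CSV columns are our own inventions, not his code.

```python
import csv

def record_matches(image_urls, has_best_guess, out_path):
    """Loop over painting image URLs, asking has_best_guess(url) whether
    a reverse image search returns a "Best guess" label (in Ben's version,
    a Selenium WebDriver session answered this), then write one row per
    painting to a CSV. Returns the number of matches found."""
    rows = [(url, has_best_guess(url)) for url in image_urls]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_url", "best_guess_found"])
        writer.writerows(rows)
    return sum(1 for _, found in rows if found)
```

Decoupling the slow, fragile scraping step from the CSV bookkeeping also makes it easy to resume or re-run the tally without another 31-hour crawl.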