Today, The Atlantic Cities published their favorite maps of the year, and our work with Climate Central on Surging Seas tops the list: “the most frightening, important maps of the year come from Climate Central's Surging Seas project, which offers an interactive map of all coastal areas of the Lower 48. In the discussion of potential sea level rise, these maps are the most alarming images out there.” The Atlantic has covered this project before.
While we could not have predicted the impact of Hurricane Sandy in October, our work with Ben Strauss and Remik Ziemlinski at Climate Central opened our eyes to the emerging behaviors of the world’s oceans on a warming planet and the risk to low-lying areas like New Jersey and New York. The ocean does not merely rise, it surges and bulges due to weather, seafloor topography and tidal forces. “The surface of the ocean bulges outward and inward mimicking the topography of the ocean floor. The bumps, too small to be seen, can be measured by a radar altimeter aboard a satellite.”
One way we communicated this impact was to refocus the map on the land that’s going to be underwater, and try to make it clear that this is the land that we're going to lose. We wanted to make it responsive and reactive, so we developed a map tiling method based on image sprites, a technique currently making its way from game development to web design.
Each 256-pixel map tile on the site is a tiny map sandwich, a stack of background and foreground images that combine public domain aerial photography from the US Government NAIP program and a custom rendering of data from OpenStreetMap. Using data calculated by Climate Central, we create a background “high tide” image that focuses attention on low-lying areas, and cover that with an image that’s ten tiles in one, an animated film strip of sea level rise from zero on up.
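The film-strip idea can be sketched in a few lines. This is an illustrative Pillow script, not our production pipeline: it stacks ten 256-pixel flood tiles (the file names and level count are assumptions) into one vertical sprite that the browser can animate by shifting its background position.

```python
# Sketch: stack ten 256px "sea level" tiles into one vertical
# sprite strip. The browser then animates sea level rise by
# shifting the strip's background-position one tile at a time.
# File names and the number of levels are hypothetical.
from PIL import Image

TILE = 256
LEVELS = 10  # sea level rise from zero on up

def build_sprite(tiles):
    """Combine a list of 256x256 tile images into one tall strip."""
    strip = Image.new("RGBA", (TILE, TILE * len(tiles)))
    for level, tile in enumerate(tiles):
        strip.paste(tile, (0, level * TILE))
    return strip

# e.g. tiles = [Image.open("flood-%d.png" % level) for level in range(LEVELS)]
```

One HTTP request then serves all ten water levels for a tile, which is what makes the slider feel instantaneous.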
The resulting interactive map lets you quickly investigate the effects of different levels of water rise, something we might have described as “playful” during the development process, but merely terrifying and accurate now. The comparison of Red Hook and Gowanus in Brooklyn above shows one of New York’s hardest-hit neighborhoods.
This is a follow up post to yesterday's post about watercolor textures, Tuesday's about watercolor process, and Monday's announcing the launch of maps.stamen.com.
Terrain Layer has been on my mind since 2008 when I first started to experiment with digital elevation data, but it’s only really come together in the past year when Nelson Minar and I started kicking ideas around for making an open source answer to Google’s terrain layer. As an amateur pilot and an iPad owner, Nelson was interested in something that would make sense seen from high above. I was interested in something that would make sense at medium zooms, with all the crazy data-munging that implies.
An image as simple as this, for example, requires so much more than plain OpenStreetMap data can provide:
Obviously, there are the hills. The streets present a problem, too: many large roads that you might see at this scale are modeled as “dual carriageways” in OSM, which means that they’re actually two one-way roads as far as the database is concerned, so you can end up with a lot of doubled-up street names. The route numbers are often hidden away in weird tags with extra junk attached, and even picking the right color for the ground is a challenge.
Happily, the U.S. Geological Survey has our back. They publish absolute mountains (heh) of fascinating and useful data, including high resolution elevation models of the entire country that can be transformed into shaded hills, and types of land cover that can be colored according to vegetation.
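Turning those elevation models into shaded hills is the standard hillshading calculation: take the gradient of the elevation surface, derive slope and aspect, and light it from the northwest. A minimal NumPy sketch, with illustrative parameter defaults rather than the values used on the live map:

```python
# Standard hillshade: illumination of a surface from the
# elevation gradient, lit from an azimuth/altitude sun position.
# Defaults (NW sun, 45 degrees up, 30m cells) are illustrative.
import numpy as np

def hillshade(elevation, azimuth=315.0, altitude=45.0, cellsize=30.0):
    """Return illumination in [0, 1] for a 2D elevation array."""
    az = np.radians(360.0 - azimuth + 90.0)
    alt = np.radians(altitude)
    dy, dx = np.gradient(elevation, cellsize)
    slope = np.pi / 2.0 - np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.sin(slope)
              + np.cos(alt) * np.cos(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

Slopes facing the light come out bright, slopes facing away come out dark, and flat ground lands in between.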
Gem Spear helped me develop a custom color palette for the landcover, explaining the meaning of different plant classes, five kinds of forest, combinations of shrubs, grasses and crops, and a few tundras and wetlands and how they should appear together on a map.
The ground and hill renderings are about imitating the work of Eduard Imhof, whose use of color derived from grayscale relief simulated the appearance of hills in sunlight. My version is drastically toned-down from his, but hints of warm and cool are there:
Back in the foreground, we’re making three adjustments to the base OSM data to improve the look of these maps.
First, High Road is a framework for normalizing the rendering of highways from OSM data, a critical piece of every OSM-based road map we’ve ever designed at Stamen. Deciding exactly which kinds of roads appear at each zoom level really only needs to be done once, and ideally shouldn’t be part of a lengthy database query in your stylesheet. High Road sorts it all out, giving you good-looking road layering at every zoom level.
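The core idea can be reduced to a toy: encode the zoom-to-road-class decision once, in one place, and have everything else ask that table. This is a simplification of the concept, not High Road’s actual API, and the zoom cutoffs here are invented for illustration:

```python
# Toy version of the idea behind High Road: decide once which
# OSM highway classes are visible at each zoom level, instead
# of repeating that logic in every stylesheet query.
# These cutoffs are invented, not High Road's own rules.

ZOOM_CLASSES = {
    8:  {"motorway", "trunk"},
    10: {"motorway", "trunk", "primary"},
    12: {"motorway", "trunk", "primary", "secondary"},
    14: {"motorway", "trunk", "primary", "secondary",
         "tertiary", "residential"},
}

def classes_for_zoom(zoom):
    """Return the set of highway classes visible at a zoom level."""
    visible = set()
    for cutoff, classes in sorted(ZOOM_CLASSES.items()):
        if zoom >= cutoff:
            visible = classes
    return visible
```

A stylesheet (or the query feeding it) then only ever filters on the result, so the cutoffs can change in one place without touching every style rule.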
The shields and labels are both driven by some work I’ve been doing with Schuyler Erle on Skeletron. It’s an attempt to generalize complex linework in code using a range of techniques from the straight skeleton to Voronoi tessellation. I’m pre-processing every major road in the U.S. at a variety of zoom levels, so that big, complicated, doubled-up and messy roads are grouped together into neat lines that can be labeled using big, legible type:
The highways are a special beast. I’m using a combination of Skeletron and route relations to add useful-looking highway shields for numbered freeways. One particular OpenStreetMap contributor, Nathan Edgars II, deserves special mention here. I feel as though every time I did any amount of research on correct representation or data for U.S. highways, NE2’s name would come up both in OSM and Wikipedia. He appears to be responsible for the majority of painstakingly organized highways on the map, which means that maps like this of the East L.A. freeway system can look more legible:
The final assembly takes place in TileStache’s Composite provider, inspired by Lars Ahlzen’s TopOSM, which does exactly the kind of raster post-processing and compositing that makes this terrain map possible. Everything we’re using is 100% open and available on GitHub.
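The spirit of that compositing step, stripped of all configuration, is simple raster math: multiply the hillshade into the landcover so the shading darkens the ground colors, then lay the road-and-label foreground on top. A standalone Pillow sketch of that operation, not TileStache’s own code:

```python
# Illustrative raster compositing in the spirit of TileStache's
# Composite provider: a grayscale hillshade is multiplied over a
# landcover tile, then the road/label foreground goes on top.
# This is a standalone Pillow sketch, not TileStache itself.
from PIL import Image, ImageChops

def composite_tile(landcover, hillshade, foreground):
    """landcover/foreground: 256x256 RGBA; hillshade: 256x256 grayscale."""
    shaded = ImageChops.multiply(landcover, hillshade.convert("RGBA"))
    return Image.alpha_composite(shaded, foreground)
```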
I’ll close with some images of my favorite spots:
Shawn and I are back from an epic five days in Los Angeles, our second run at the MTV Video Music Awards and our fourth live event collaboration with our good friends at MTV. The first time we visualized live Twitter traffic for the VMAs, we were tightly focused on the pre-show broadcast.
This time, MTV pulled us right into the main show!
Thanks to an invitation from Executive Producer Dave Sirulnick, our now year-long amazing working relationship with MTV's Michael Scogin, and the energetic participation of Chloe Sladden and Robin Sloan from Twitter Media HQ, it was possible to drive a massive, 95-foot-wide LED screen of up-to-the-minute tweets right inside the venue, with on-air updates and voice-overs from Sway. Check out the videos for all three updates, and more from MTV Twitter “TJ” Gabi.
The visualization itself is a response to MTV's stark, black and white art direction for this year's show. Shawn and Geraldine pulled together a new take on our particle-based visualizer for the 2010 Movie Awards, cranking up the size and animated activity of the numbers and representing tweet volume with a snowy flurry of moving blips. The piece came in three versions, one for the web-based online audience that allowed visitors to tweet right in the interface, a second for the red carpet touch screen pre-show, and a third that was piped directly to the stage at key moments in the show.
What's amazing about working this particular show is the far-reaching pop-stravaganza of it all, and the new potential for Twitter's user base to feed back into the show itself. This time, the participation of the audience expressed itself as a detailed accounting of over 2.3 million tweets for almost a hundred different artists and stars: over 9,000 tweets per minute for Lady Gaga, 7,000 per minute for Cher, and almost 10,000 combined for Eminem and Rihanna.
What if next time the messages themselves work their way into the show, blasting the enthusiasm of a worldwide live audience all over the LED-and-scrim walls of the stage set? What if we expand the participation of the viewers from responding to hashtags and tweeting 190,000 times from the online visualization interface, to direct interaction with the artists on and back-stage?
Maybe this is the way television grows into a two-way medium? Robin says:
It’s got the familiar thrill of live TV, but it’s not just one-way anymore. This kind of integration pipes the conversation around a live event back into the event itself, and there’s a wonderful juxtaposition happening behind the scenes to make that happen. It’s old tools and new technology side-by-side—NTSC and HTTP co-mingling … what’s way more interesting to me is the way that live TV and real-time information actually reinforce one another. Every time something big happened in the VMAs, we saw massive, immediate spikes in related tweets.
For now, I'm happy with last decade's tired, old “beautiful but useless” being replaced with the fresh, new “helpful and flashy”, “gorgeous visualizations”.
Meanwhile, I leave you with this photo Shawn took of 3% of Twitter's hardware load and me:
We paid a visit to the MTV Movie Awards in Los Angeles this past weekend, where Eric, Sha and I produced and supported an on-line/on-air visualization of live Twitter traffic about the stars and movies featured in the show. This was the most recent high-profile use of our recently-launched Eddy platform, and we were thrilled to see it all perform like a champ!
Over the course of the event, we saw approximately 528,000 tweets during the East Coast broadcast, and almost 1 million covering both broadcasts and the resulting conversation through Monday morning. Traffic peaked at almost 5,500 tweets per minute at 9:30pm EDT during the Tom Cruise and Jennifer Lopez dance performance. Sandra Bullock alone saw 2,800 tweets per minute at 10:06pm EDT. 11,100 tweets were sent directly from the application itself. Read more about this project at media.twitter.com, TechCrunch, Flowing Data, Mashable, and Social Nerdia.
View the project live at tweettracker.mtv.com, or check out this dynamic summary streamgraph of the Twitter traffic for the East Coast broadcast:
Stamen eats together.
Every day, the studio gets lunch and shares it as a group. Our Mission neighborhood is ground zero for a crazy variety of amazing food, and most of it's available to-go. As we've grown over the years, we've started to generate progressively larger volumes of packaging waste every day, and finally decided that there must be a better way. Inspired by London's Tiffinbites, we bought a set of excellent aluminum boxes, and started bringing them to the local restaurants where we get our lunches.
The boxes look great, the leak-proof lids snap shut, they're durable, and they're perfect for taking leftovers home. Whenever we order food that doesn't come in self-reinforcing log form, we try to get it in our fancy metal boxes instead. Most of our favorite local spots have enthusiastically taken to using them:
Here's the current lunch leaderboard, from Sha's Daytum account:
As more creative companies make their home in the Mission, we're hoping to see more of these amazing boxes in use around the neighborhood.
Digg Arc is the latest addition to our continuing work for Digg Labs. The piece has seen several weeks of development and experimentation, across three phases punctuated by two successive public releases. This is a visual diary of its creation, shared by Shawn Allen, Tom Carden, and me, Michal Migurski.
Arc began in Shawn's hands. We started with a few simple experiments in circular layout and basic arc geometry. At first, these took the form of simple interactive wireframes to prove that our math was right. We quickly attached these initial sketches to the Digg Flash Kit, and connected them to a source of real data.
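The math those first wireframes were proving out is small: place items evenly around a circle, then bow each connecting arc toward the center through a control point. A minimal sketch of that layout, with an assumed radius and bow factor (the originals were Flash; this is just the geometry):

```python
# Minimal version of the layout math behind the first Arc
# sketches: items spaced evenly on a circle, and a control
# point that bows a connecting arc toward the center.
# Radius and bow factor are illustrative, not Arc's values.
import math

def circle_position(index, count, radius=200.0):
    """Evenly space `count` items on a circle, starting at 12 o'clock."""
    angle = 2 * math.pi * index / count - math.pi / 2
    return radius * math.cos(angle), radius * math.sin(angle)

def arc_control(p1, p2, bow=0.5):
    """Control point between p1 and p2, pulled toward the origin."""
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    return mx * (1 - bow), my * (1 - bow)
```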
Early interactive arc geometry experiments