It's time to start talking in public about the major revision to Bing Maps that Stamen designed for Microsoft.
It's not every day you get asked to re-imagine the state of online mapping for a company that has the resources to actually take a shot at it, and in looking back over the project archive in preparation for this post, I'm basically overwhelmed by the size of the undertaking that Blaise Aguera y Arcas and his group asked us to participate in. It's too big, and has become impossible for me to sum it up in any kind of cogent way—so I'm going to start by posting a few sample artifacts from along the way and see if that opens any floodgates.
Luckily, Justin O'Beirne has done an extremely detailed and comprehensive review of the design, right down to examining the similarities to old Rand McNally maps and hard-core analysis of micropolitan area placement. Another reason why it's been so difficult to get this post out the door is because Justin's done such an incredible job looking at all of the moving parts. But more on that later.
We designed a tileset for the maps that was intended to just...calm things down a bit. One of the things that gets hard not to notice after a while is that 99% of the map tiles out there are hyper-colorful, with bright popping hues for freeways and on-ramps and all the rest of it. While this arguably makes for a clear map when you're trying to drive somewhere, it makes designing a map of things on the map a bit of a challenge, since the map itself is competing visually for attention. So the first thing to do was to come up with a way of thinking about the map as a place to put things on, not a map that already had everything on it. We called it "mylar" because it reminds me of how old architectural drawings look when you draw on the matte side of a sheet of plastic mylar paper: muted, subtle, and leaving lots of room for a foregrounding effect above it. There are a few examples below, and a more complete set of them here.
There's no type in this set, except at the street level. The idea is that these are for use when the type is part of the presentation layer, as it is in this demo that we put together using this tileset underneath the California Stimulus Map that we did last year:
And here's how it looks in the context of Crimespotting:
It's about taking a deep breath, calming things down a bit, being considerate about the typography, and giving the map the room to serve as a canvas for things to play out on top of it.
We thought a lot about zooms and text
One of the major themes to think about was the migration of a lot of the content out of the tiles themselves and into an interactive framework. The idea was that you could, for example, click on a city and be zoomed in to it, something that I think the delivered project does pretty well. In order to make this happen we had to think a lot about the relative sizing of type and roads at various zoom levels—and to rethink the whole notion of there being discrete zoom levels at all. We wound up with lots of diagrams like this to illustrate things like the drop-off rates of the sizes of the different road types as you move in and out of the map:
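The drop-off curves in those diagrams boil down to a tiny function: pick a width for each road class at a reference zoom, and a rate at which it grows or shrinks per zoom step. Because the curve is continuous, fractional zoom levels fall out for free. A toy sketch of the idea in Python (the class widths and the rate here are made up for illustration, not the values we actually designed):

```python
# Sketch of a drop-off curve: each road class has a width at some
# reference zoom and scales by a constant factor per zoom step.
# All numbers below are illustrative, not the shipped values.

def road_width(base_width, zoom, ref_zoom=12, rate=1.6):
    """Rendered width in pixels at a (possibly fractional) zoom level."""
    return base_width * rate ** (zoom - ref_zoom)

classes = {"freeway": 6.0, "arterial": 3.0, "residential": 1.0}

for z in (10, 11.5, 12, 14):  # 11.5 works the same as the integer zooms
    print(z, {name: round(road_width(w, z), 2) for name, w in classes.items()})
```

Once the width is a function of a continuous zoom rather than a lookup per discrete level, the "are there really zoom levels at all?" question answers itself.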
We had the chance to really think about how much information you might want to see on the map at any time. Justin has detailed thoughts on ideal densities for cities, and I think there's a lot that we and Microsoft can do moving forward to improve the specific densities of areas like the Interstate 44 corridor in Missouri. The exciting part, for me, was to be able to think about the kinds of opportunities that open up if those kinds of decisions become dynamic. Like, what if there are always 50 labels on the map, regardless of what zoom level you're at? What if there's only one?
To this end we put together some demos that let you play with these numbers a bit. Be gentle on these if you would; they're pretty much raw files straight out of a client work process without a lot of thought given to the niceties of interface and user experience. But the thing to try is to adjust the number of labels and the scale factor, and see if you can find an arrangement you like. Pretty soon I wind up with maps that look like this:
Which I think is just lovely. Another fun thing to do is to set the max # of labels to 1 and drag around. You wind up with a map that only shows the largest city in the viewing area, which is a nice way of thinking about a place: what's most important here?
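The mechanics behind that max-labels knob are easy to sketch: rank everything in the current viewport by some importance score and keep only the top N, whatever the zoom. A toy version in Python, with made-up places and populations standing in for whatever score you'd actually use:

```python
# Sketch of the "always N labels" idea: rank places in the viewport
# by importance and label only the top N. Data below is illustrative.

def visible_labels(places, viewport, max_labels):
    """places: list of (name, lon, lat, population).
    viewport: (min_lon, min_lat, max_lon, max_lat)."""
    min_lon, min_lat, max_lon, max_lat = viewport
    in_view = [p for p in places
               if min_lon <= p[1] <= max_lon and min_lat <= p[2] <= max_lat]
    in_view.sort(key=lambda p: p[3], reverse=True)  # biggest first
    return [p[0] for p in in_view[:max_labels]]

places = [
    ("San Francisco", -122.42, 37.77, 805_000),
    ("Oakland",       -122.27, 37.80, 390_000),
    ("Berkeley",      -122.27, 37.87, 112_000),
    ("Sausalito",     -122.49, 37.86,   7_000),
]
bay = (-122.6, 37.7, -122.2, 37.9)
print(visible_labels(places, bay, 2))  # the two biggest cities in view
print(visible_labels(places, bay, 1))  # max_labels=1: what's most important here?
```

Dragging the map just re-runs the ranking with a new viewport, which is why the one-label map always shows you the largest city in view.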
There's a lot more to say. I haven't touched on breadcrumbs, or clicking on cities, or dynamic labeling. Next time. In any event: Bing maps!
This is my first post on our new Citytracking project, which is being supported by a grant from the Knight News Challenge. We've been working on it for a good part of August. Thus far it's been primarily talk and thinking and writing, working on the overall structure of the project, trying to get a handle on what a two-year project intended to change the way people talk about cities feels like. I think that it's OK to talk about the very early stages without having much other than thinking to show—my friend Teru Kuwayama (more on him later), who also won a grant, is writing about getting ready to go to Afghanistan, and while there's certainly no comparison between Citytracking and what he's getting himself into, I feel good knowing that other grantees are talking about their projects in the very early stages of them.
I should also say that it's important to note that it's really the whole Stamen team that won the award, and not just me: while the award announcement lists "Eric Rodenbeck, Stamen Design" as the winner, I'm mainly there to represent the group whose collaborative work has made the grant possible.
In any event, Stamen projects tend to be somewhat more bounded than this, both in time and ambition, having to do with a specific and bounded dataset or problem domain. Perhaps more importantly, for projects of this scale, there's almost always a specific client. This time our client is us—and we can be pretty demanding :)
So one thing that's happening is that Geraldine's starting to think about logos:
And the studio's been meeting regularly to talk over some options, plan this fall's conference, and so on:
There's quite a bit we don't know. What we do know is that Citytracking is intended to be a public, open source project that takes data about cities and makes it more legible—more beautiful, interesting and accessible. We know that we're intending this project for three audiences: cities, journalists and the public (including businesses). And we know we want to make the project 1) so simple that regular citizens can get involved, 2) robust enough that real analysis can happen, and 3) interesting enough that children will play with it. Along the way we're planning to release server-side codebases, mapping algorithms, managed datasets, APIs and API specs, and new views on data. And finally we want to do this work in public, in close dialogue with our audiences, so that we know we're not going down rabbit holes and doing work that's not valuable.
It's this last bit, the public bit, that helps turn the problem into something that can be tackled and keeps it from becoming overwhelming. The question is not: What do we do for the next two years? but rather: What do we do next? The important thing is to do it in public, and listen to what people have to say. One important thing that has come out in our meetings about it is the idea that this project is not "the map that ate the world"—the end all be all project that is going to satisfy the need of every non-profit that wants to map their spending over time. The project is: Here's some work, grab the code, the license is cool, don't worry about it, use it, go ahead and publish your stuff. So in that vein we're hoping to be as transparent as possible—and there's the little matter of the grant requirement that we blog about it every couple of weeks to keep me motivated.
Our ideas so far have fallen into four categories, written in outline on the whiteboard sketch from our latest planning meeting above: Walking Papers v2, Crimespotting v2, Tile Farm, and Dotspotting.
Walking Papers v2
Mike has been extending Walking Papers for some time now, making it work with aerial imagery in 2009 and under stressful conditions in 2010. There have been a few other interesting uses of the project: we're currently working with the Art Institute of Chicago on bringing the project to the Institute for a show in early 2011, potentially involving kids from the Institute's Education department, and Sarah Van Wart at the I School in Berkeley has been using it with kids in Richmond, California, imagining a different future for their neighborhood. Both of these instances suggest that there's value in a version of the project that doesn't require the information to be imported back into OpenStreetMap, or demand the technical expertise of knowing what to do with a github install. What if the service allowed you to keep the information you add to it on the Walking Papers site?
Crimespotting v2

This one seems like a natural for the grant; the notion here is that we take what's already been done on the Crimespotting project, both from an interface and a technical perspective, abstract out the feature set, and make it open source. Basically:
- Port it to HTML (probably using Polymaps, since there's already an example of crime in Oakland on the site).
- Abstract crime out of it: pull out references to crime and make it a framework for mapping trees, fire engines, whatever. This is a tricky one, mainly because it's not simply a question of replacing 'assault' with 'acacias.' Even the FBI's Uniform Crime Reports, intended to give municipalities a simple and structured way of talking about crime, starts to get unwieldy once you include things like Property Stolen by Type, and one of the real strengths of Crimespotting is that it avoids a long list of different types of things and gets you right down to the exploration part. So it's a decision about editorial constraints more than it is a technical problem. But still tricky.
- Abstract cities out of it: have it work with any area. Again, tricky. The project is pretty much tuned to Oakland's specific crime profile and physical aspect ratio; San Francisco is more of a square shape and doesn't work quite as well with the way the site is designed. The amounts of crime (or whatever) per day vary greatly from city to city, so that needs to be factored in as well. Pretty soon you get right back into expert GIS-land if you're not careful.
- Provide a general brain dump of how it works—this would go right along with open sourcing the project, we'd need to document it and explain all the methods.
- Improve and extend the current project. It's been over a year since Tom came up with the Pie of Time. In particular I'd like to look at the way the date slider works; often I want to be able to compare overall crime volumes for the city to the area I'm currently looking at, and the interface only shows me city-wide statistics. Stuff like that.
Tile Farm

This is designed to solve a problem that we've run into a couple of times on projects like CNN's Home and Away, where we need to use a slippy-map setup but for creative or business reasons tilesets like Google's or OSM's aren't appropriate. In cases like these we'd like to be able to quickly and easily make and use our own tiles of the world, and it seems that others might want this as well. You should be able to style and download your own tiles for some level of detail, probably streets. In most cases you don't need the whole world, and you don't need every level of detail—but various and easily accessible knobs to make this less of a chore would be useful. We discussed the idea that we'd partner with Development Seed, as they're already doing some of this work on Mapbox. What I'd like to see would be the ability to do this kind of work without having to mail anyone, and have it be fairly simple. They're also a Knight recipient this year, so we'll see how that goes.
This is one of the themes that we keep coming back to—all these pieces are out there, the stack is getting cleaner and the technology is great and there are plenty of ways to do this work, but it could be made much more straightforward and accessible to the people who want to tell stories about cities. It needs experts to do it now; the project is to make it easier for cities and journalists and the public.
Dotspotting

If the point is to make tools that let people tell stories, and do it in a way that's free and open source and lovely, then maybe let's start at the beginning. Let's start from scratch, in a 'clean room' environment with this stuff. We could start from a baseline that's really straightforward, tackling the part that's about getting dots on maps, without legacy code or any baggage. Just that, to start. Dots on maps.
Our experience with different city agencies so far is making me realize that if this stuff is really going to work out in the long run, it's going to need to be able to consume the formats cities actually use right now, and not have to rely on fancy APIs. It's great that San Francisco and New York are releasing structured XML data, but Oakland is still uploading Excel spreadsheets (it's actually awesome that they do), and the Tenderloin police station is printing out paper maps and hand-placing colored stickers on them. At some point, if this really is the way things are going, we're going to need to meet the needs of actual functioning city agencies—and while APIs are great and necessary, for now that means Excel spreadsheets and Word docs. It also means being able to easily read in data that people have uploaded to Google Maps, and to interface with SMS systems like the ones Ushahidi uses.
There's baseline work to be done here to make this stuff internet-native. Every dot should have an HTML page of its own, for example, like they do on Crimespotting. You should be able to easily download the location of the dot, download the maps, download collections of dots. Maybe there's an 'export to PowerPoint' function, since that seems to be the lingua franca of most city departments. There should be a hosted version as well as one where you can download the software and install it yourself.
Shawn keeps asking: Where is the tool that lets you hoover up a city's shapefiles and look at them? Mostly they're sitting in folders, accessible via pulldowns and long lists. Especially as a developer, you want to be able to do quick visual comparisons and not have to download a bunch of .zip files just to get to work. The project is all about taking the stuff that people sort of can do, with lots of effort and fancy tools, and making it so regular people can engage in this work.
Currently, Dotspotting is looking like the best candidate. What I like most about it is that it's something that could genuinely open up some of this work to people who don't already know how to use these tools. It's a little risky—I'm nervous about getting bogged down in the technical details of extracting lat & long positions from Word files—but the Knight Challenge feels to me like it's at least partially about trying things that are new and risky. In my next post I'll talk more about the design & coding work that we're starting on now.
I posted some early examples last week, and SimpleGeo announced this morning the result of our collaboration with them over the past few months: Polymaps.org.
We've been working with Stamen to provide visual analysis of the huge datasets that we're working with, and to explore how people can communicate this data in sophisticated ways. A first step toward that goal is the release of a free and open-source set of tools and map engines allowing people to perform relatively sophisticated operations on their data in the browser. The project has been online for a while at http://github.com/simplegeo/polymaps, and you can download the source code there; what's new is the addition of a series of example maps that demonstrate what's going on, and human-readable documentation so you can use them for your own projects. Some of the examples are straightforward, letting you do things like group points into clusters and drop scaled gradients onto map locations. Others are more robust, letting you do things like change which direction is north by rotating the map and visualize the quality of street surfaces in San Francisco.
We've been lucky enough to work with Mike Bostock on these this summer, and he's really outdone himself. If I had doubts about the ability of non-Flash, browser-native tech like Canvas and SVG to do expressive and richly interactive mapping (and I did), consider me a convert.
Now about that easter egg...
Close readers of Aaron's blog and flickr stream know that he's been thinking for some time about making maps out of things that people do, whether it's geotagging photos or tracing streets or deriving urban areas from analysis of satellite photos, and posting his experiments with them as he goes. I'm zazzed (thanks AG) to announce that he's ratcheted this effort up, combining—wait for it—four different datasets into a cracklingly lovely pastiche, available for your perusal at http://prettymaps.stamen.com.
The thing to watch here—besides the lovely, and also that this experiment is likely to make your browser cry for a while until we work out the kinks—is that, at certain zoom levels and in certain places, the things on the map are addressable on rollover. Which is to say: they're not pre-rendered graphics that are cooked down into tiles for easy delivery, but actual shapes, actual streets, actual areas that can be individually targeted and messed with. So when it says "Columbus Circle" in the map above, and "Embarcadero" in the map below, that's because things called Columbus Circle and Embarcadero both are actually on the map, and not pictures of them, if you get my meaning.
Everything you see in the image above - every dot, every shape, every blue road, all of it—is its own special snowflake, with its own unique properties, right in the browser—and we're only just beginning to understand what that might mean.
More on this as we go—and hopefully the servers will stay up overnight, but in the meantime, enjoy!
Did you know that the Cabspotting project we designed with Scott Snibbe for the Exploratorium is still going strong, providing a live view into the minute-by-minute realtime positions and status of the Yellow Cab taxi service in San Francisco? And that the project has an API that you can use to access this data in close to real time, provided you agree to keep us in the loop, not use it for commercial purposes, and not put too much strain on the server?
Eric Fischer and Alex Bayen (whose link is down at the moment) do:
Eric Fischer, whose Locals and Tourists photoset on flickr tore up the charts a few weeks ago, has been taking similar techniques (red is for photos taken by tourists, blue is for locals) and applying them to taxi pickups and dropoffs in SF. The Cabspotting set on flickr has more examples, including origins vs destinations and more. In the image above you can see a clear concentration of red (empty cabs) in the lower right where the depot is, and a hotly contested downtown where an archipelago of red empties nestles in a sea of blue fulls.
And Alex Bayen sent us a white paper called "Estimating arterial traffic conditions using sparse probe data" (link is down at the moment, but you can get to it here in the meantime), which was presented at the 13th International IEEE Conference on Intelligent Transportation Systems. It's about the extraction of overall traffic flows in cities from smallish amounts of data, and has images like this:
Which I understand about half of, but all of which I like.
The Cabspotting site has contact info if you'd like to get in touch about using this data for your own projects.
(part 2 in a series)
One of these days I'll get around to updating the maps section of our site, 'cause things've changed since the California Stimulus Map (although I still think the Robert Louis Stevenson quote is pretty good, it's basically 1/2 of our whole business plan). In the meantime I wanted to share a few things from Mike Bostock, who's taken a break from his studies at Stanford and from contributing to the beautiful Protovis project (check out these examples) to work with us at Stamen on some browser-based mapping projects. We'll be releasing these in a more formal way in the next week or so, as part of our ongoing work with SimpleGeo, but I was enamored enough of these examples to want to post them ahead of time.
The first lets you manage tiles that are rotated off of the north-is-up tyranny—I say again, tyranny—that most browser-based map engines lock you into, by making it possible to manage gridded tiles that are rotated at arbitrary angles. If the red rectangle below is the outline of the browser, this demonstrates the engine's ability to keep the tiles in order no matter what angle the tiles are at. Pressing 'a' and 'd' will rotate things sideways, should you choose to:
Rotation and zoom:
What this gets us is the ability to arbitrarily change the rotation of maps that we're working with, independent of zoom level, so we can start to do things like easily look at the more eastern stretches of the Bay Area in a horizontal format, and get views on the relationships between cities and towns and oceans and things that north-is-up leaves on the table. And it can zoom, baby, take you all the way into Discovery Bay and back out again if that's your thing (click & drag around to see this in action):
And while you're setting longstanding map conventions on their ear (the madness! dogs and cats living together!), you might as well take things one step further and change the zoom level on the tiles you're requesting, so that a perfectly normal-looking and well-behaved slippy map, much like what you'd see here:
is suddenly loading twice as many map tiles, at twice the resolution, and displaying them at half the size that they're normally shown at:
To which you might say: so what, map nerd? On the second one, I can't read the text.
To which I'd probably say yes, ok, not-map nerd, but—which one gives you a better overview of the parks system in the Bay Area, the green stuff? Particularly around Half Moon Bay & the south Bay, there's detail in the second view that's simply not available in the first. And yes, you could maybe get to this view more directly, by firing up your desktop GIS system and running a query on the statistically relevant open space bounding boxes between a certain lat & long position south of San Francisco. But from where I sit that looks an awful lot like knowing the kind of answer you want to get, from the kind of question you already know how to ask. Somebody—in this case Stamen, since we designed this tileset—decided that you needed to be able to see green stuff at zoom level [n] and not at [n-1]. Did they (ok, we) make the right decision? How would you know?
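For the curious, the double-resolution trick is plain tile-scheme arithmetic: every tile at zoom z covers the same ground as four tiles at z+1, so drawing each child at half size keeps the map's apparent scale unchanged while doubling the detail. A sketch of the request logic (tile math only, no network code; the coordinates are arbitrary examples):

```python
# Over-zooming: swap one tile for its four children at the next zoom,
# drawn at half size, so the map scale stays the same but detail doubles.
TILE_SIZE = 256

def overzoom(z, x, y, extra=1):
    """For the tile (z, x, y), return the tiles to fetch `extra` zoom
    levels deeper, each with the pixel size to draw it at."""
    n = 2 ** extra
    size = TILE_SIZE // n
    return [(z + extra, x * n + dx, y * n + dy, size)
            for dy in range(n) for dx in range(n)]

# One zoom-10 tile becomes four zoom-11 tiles drawn at 128px each:
for tile in overzoom(10, 163, 395):
    print(tile)
```

The same function with `extra=2` gives you sixteen tiles at quarter size, which is where the "make your browser cry" warnings start to apply.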
The exciting part, for me, about this work is that it lets these kinds of questions emerge in a way that's opportunistic and dynamic, and lets the medium speak for itself in a voice that comes directly from the actual material. All kinds of decisions are made in the crafting of a mapping tileset about what cities to show at what zoom levels, what kinds of features to display and how, where borders are between countries, and so on.
The idea that these kinds of decisions, and the ramifications that might fall out from them, are in our hands, up for dispute and dialogue, available for messy debate and talk pages and inadvertent discovery...well that's the whole point, isn't it?
This is the kind of work that makes me want to code again, to fill directories with numbered iterations on the same idea, make the kind of thing that friends who code would step through and think "I wonder whether you could...oh right!" We'll see.
I'm excited to announce the start of a new kind of project for Stamen, working with the geolocation experts at Quova to explore the opportunities for visualization of their truly ginormous dataset—geographic information on all of the IP addresses on the internet.
What's new here, for us, is that we've agreed to publicly blog about the project; not once it's finished, but as we move forward and develop new ideas. Mike has done this kind of thing before, blogging about Crimespotting as it was being developed, but it's a harder thing to do with paying clients who generally need to be the first to announce to the world that a project is going live. So I'm really pleased that Quova's agreed to make this process public. Let's see how it goes!
Generally we divide investigative data visualization projects into three distinct phases—explore, build and refine. The idea is that you start off by wrapping your hands around the data, getting a feel for the flow and rhythm of it, and you base a project around what you find there. This approach, we hope, leads to projects that feel natural and deliberate and appropriate to the data. The alternative—coming up with an idea ahead of time and then shoehorning the data into it—runs the risk of the project feeling like a square peg being hammered into a round hole, and nobody wants that.
In this case we've separated out the initial investigation ("explore") into its own phase, with some hopefully interesting results to show. The idea of this initial investigation is not so much to figure out what the answers are, but to get a good sense of what the questions might be, and we do that by building some initial representations of the data and seeing what kinds of results come out. Mike's said more about this in a talk at User Research Friday, notes here, talk here. What we're trying to do is to come up with a basic metaphor for the project, and we do that by making stuff.
And so: back to Quova, and ginormous datasets. What Quova can tell you, basically, is where all of the computers are. The dataset is somewhere in the neighborhood of 4 billion separate addresses, changes all the time, and is of intense interest to companies like banks, online stores, newspapers, and—wait for it—internet gambling companies. For this last group, it's important for them to understand where people are logging in from, since different places have different laws about it, both in the US and in the UK (where this dataset comes from).
We talked about maps, of course, but that seemed too...pedestrian a thing to lead with, too obvious. You need to have some style with this work, y'know? We wanted to get a sense for the overall flow of things before making blinking maps (although I'm sure we'll get to those eventually). So as a first stab at understanding the data, Tom & Aaron've been building on Lee Byron & Martin Wattenberg's work on streamgraphs and continuing their investigation of Solr as the backend, and pointed that at gambling data from the UK in early June 2010.
Here's a slice through what we found. All of the graphs are over the same time period; the actual values've been removed, so each of the graphs shows a different total volume of lookups, to protect the innocent. It's important to note that, since these are streamgraphs, volume is represented by the total vertical size of each section along the graph—some people have told me that they read these as positive and negative values along a central zero-axis. But more on that later.
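For anyone who wants to check their reading of these: the layers in a streamgraph are stacked on a shifting baseline, not on zero, which is why the positive-and-negative-values reading is a trap. The simplest baseline just centers the total (the "silhouette" layout from Byron & Wattenberg's streamgraph work), which a few lines of Python can sketch:

```python
def silhouette_stack(layers):
    """layers: list of equal-length series of non-negative volumes.
    Returns (bottom, top) y-coordinates per layer, stacked on a baseline
    that centers the total -- so a band's *thickness* is its value, and
    y-positions on their own carry no meaning."""
    n = len(layers[0])
    totals = [sum(layer[t] for layer in layers) for t in range(n)]
    y = [-total / 2 for total in totals]  # centered baseline
    bands = []
    for layer in layers:
        bottom = y[:]
        y = [y[t] + layer[t] for t in range(n)]
        bands.append((bottom, y[:]))
    return bands

# Two made-up "cities", three time steps:
for bottom, top in silhouette_stack([[4, 10, 6], [2, 2, 2]]):
    print(bottom, top)
```

Note how the bottom band dips below zero when the total grows; that dip is layout, not a negative value, which is exactly the misreading described above.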
We start out with all countries, and while Ireland, Denmark, the United States and other countries are all represented here, by far the largest country (red) is the UK, which makes sense as these are lookups for British gambling sites.
Stripping out all countries besides the UK lets us facet the search on the different cities represented, and again we find one clear winner; in this case London (light blue). The other larger cities in this graph, Manchester (green), Ilford (red) and Birmingham (purple), all show the same kind of day-to-day similarity, where there's a low point some time after midnight (where the lines are), and a peak in the early evening, presumably when people are at a pub or at home.
And dialing all the way down into London, where we can facet on postal code, shows us that a single postal code—EC1, basically the center of London, the City, where the financial heart of the Capital lies—is responsible for a huge portion of the daily spike in gambling lookups. The data doesn't get any more granular than postal code, so this gives us a pretty good indication that the consistent daily spike in worldwide gambling lookups (by the companies in question) is coming from gambling operations concentrated in the center of town.
London vs. the World:
So armed with that insight we can pull all the way back out to all of the countries in the world, slice the world by postal code, and pick out pretty easily (in orange this time) EC1's role in making that early world-wide graph twist and shout:
There are lots of things to say about these—first among them that these are early, early sketches and prototypes, and have intentionally had lots of rough edges (like color choices) left unpolished so that they're not regarded as finished designs—but the main point I think is important is that these are investigations, steps in a process of discovery. Exploratory data visualization isn't so much about finding the answer to a question as it is discovering what the interesting questions to ask are. What is there to find in a dataset as rich and varied as the one owned by the people who know where all the computers are? And it's not so much a question of picking which visualization to use to get the maximum insight into a pile of data, but of developing a literacy in using visualization as a language to have nuanced conversations about the world. There are a number of other interesting conclusions to be drawn from Quova's dataset that are worth pulling out, but this is getting long so I'll save those for another post.
Tom gave a talk at ETech a few years ago where he outlined some common assumptions that the studio has about data visualization and recommendations for how to do this kind of work. One of these days we'll get him to publish them all (they're really good), but in the meantime this one feels appropriate:
(19) Start and End With Questions
"Traditional statistical charts can be a good first step to generate questions, especially for getting an idea about the scope of a data set. Good questions to start with include “how many things do we have”, “what do we know about each thing”, “how do the things change over time”, “how many of each category of thing do we have”, “how many things are unique” and “how many things is each thing connected to”. I don't believe that any visualization can answer all of these questions. The best visualization will answer some questions and prompt many more."
I'm in Boston today with Deborah, Tom & Ben, where the John S. and James L. Knight Foundation, as part of its Knight News Challenge, has just announced that Stamen is one of the winners of this year's Knight News Challenge grant. The grant is funding the development of City Tracking, which will present digital data about cities that journalists and the public can easily grasp and use, and provide tools to let them distribute their own conclusions.
You probably know that Stamen maintains an active independent research process, where we regularly commit time to developing self-funded and self-initiated projects like Crimespotting and Walking Papers and experiments at city.stamen.com, often involving the visualization of digital civic data. We take it seriously enough that I like to say at presentations that I'm going to show some commercial projects and some research projects, and I hope that people won't be able to tell the difference.
The thing is that while this work is vital to the studio, it by necessity has to take a back seat sometimes to the more commercially-oriented work that we do, if only in the sense that it sometimes gets delayed until we have some "free time." The grant will allow us to allocate the same kind of time and effort to city-related visualization work as we do to our client work, and we're hoping that it will lead to a substantial increase in the quality and quantity of this kind of work from the studio. We want to raise the bar on the role digital civic infrastructure plays in public dialogue around cities.
<hugs self and spins around in circles with happiness>
About the Knight News Challenge
The John S. and James L. Knight Foundation’s Knight News Challenge is a 5-year, $25 million international contest to fund digital news experiments that use technology to inform specific geographic communities.
About the John S. and James L. Knight Foundation
The John S. and James L. Knight Foundation advances journalism in the digital age and invests in the vitality of communities where the Knight brothers owned newspapers. Knight Foundation focuses on projects that promote informed and engaged communities and lead to transformational change. For more, visit www.knightfoundation.org.
We paid a visit to the MTV Movie Awards in Los Angeles this past weekend, where Eric, Sha and I spent the weekend producing and supporting an on-line/on-air visualization of live Twitter traffic about the stars and movies featured in the show. This was the most recent high-profile use of our recently-launched Eddy platform, and we were thrilled to see it all perform like a champ!
Over the course of the event, we saw approximately 528,000 tweets during the East Coast broadcast, and almost 1 million covering both broadcasts and the resulting conversation through Monday morning. Traffic peaked at almost 5,500 tweets per minute at 9:30pm EDT during the Tom Cruise and Jennifer Lopez dance performance. Sandra Bullock alone saw 2,800 tweets per minute at 10:06pm EDT. 11,100 tweets were sent directly from the application itself. Read more about this project at media.twitter.com, TechCrunch, Flowing Data, Mashable, and Social Nerdia.
View the project live at tweettracker.mtv.com, or check out this dynamic summary streamgraph of the Twitter traffic for the East Coast broadcast:
We've been working with our friends at CNN for almost a year on a project that went live today: Home and Away, a mapping of coalition casualties in Iraq and Afghanistan, live on CNN.com.
The project is a sobering look at the human cost of two wars in the Middle East, and as such we've worked within a restrained and sober palette of blacks, whites and greys. CNN has hooked the maps up to CNN iReport and we're hearing stories of people using the map to post memories and share stories about their lost loved ones. It's not been the easiest subject material to work on, but we've come away with a keen sense of the human face of these conflicts and hope you'll take the time to look around a bit at the stories that these kinds of maps can tell.