As a follow-up to the first visualizations we made of user activity on Digg (posted to the digg blog in 2006), we've widened the scope of our visualizations to show an entire day's worth of digging activity on the site in greater detail. The resulting images, made by Tom Carden, illustrate some general patterns, and one controversial story immediately becomes visible; more about this below.
I was first introduced to IBM's new Many Eyes project when Fernanda Viegas spoke about it at Adaptive Path's excellent IDEA conference in Seattle back in October. We presented too; it was a "morning of visualization" :) .
Mike Migurski's Digg friends
Since it launched, the site has deservedly gotten a ton of attention and seems to be growing every day. The focus on "democratization of visualization" is absolutely right on. What clinches the site's utility for me is that it allows you to basically screencap the particular way you're looking at the data, so what you make can be shared and referenced.
Visualization (and Flash and Java generally) has historically been terrible at this aspect of things: data flows through, the framework responds, you get it looking just great, and then... you're done, unless you want to take a screengrab, post it to flickr, yadda yadda yadda... and then you lose the ability to interact with the data and draw your own conclusions. The chain of reasoning gets broken any time you try to do anything with the material. Many Eyes solves this problem by generating thumbnails of whatever view of the data you're looking at, and provides links back to the original data so you can make your own graphs; it's just great. This ability to handle a specific slice of visualized data is becoming more and more of an interest to us here at Stamen; look for more on time and visualization here in the next few months.
The 2007 Amgen Tour of California started this weekend. It looks like some of my former colleagues at Quokka, who are now at Adobe (formerly Macromedia), have been working on the realtime race tracker.
Live Race Coverage
Francis Potter from Adobe sent over the announcement from Yottapixel:
An engineering team at Adobe has been working for a few months on the Tracker. Many of the team members worked in 1998-2001 at Quokka Sports, a dot-com-era startup which pushed the envelope at the time with live sports coverage. In many ways, the Amgen Tour Tracker represents the culmination of what that team was trying to accomplish years ago.
Some of the stylistic elements in the Tour Tracker pick up where Quokka left off. The live video feed resizes to fit the user’s browser — even if that means the video is pixelated. Data, commentary, and image thumbnails are overlaid on the edges of the video, sometimes obscuring the action. There are lots of shades of grey in the interface. The end result delivers a rough, cluttered look which emphasizes the liveness, urgency, and experimental nature of the medium.
The world's moving onto the web, all right:
Ottawa: Scientists will soon start attaching microchips to fish and other marine animals to track their movements around the world's oceans and learn how they are being affected by phenomena such as climate change and overfishing, experts said on Monday.
Canada's Ocean Tracking Network has gotten a hefty grant to build an ocean-based network of life-based mobile data gatherers. It's an ambitious project: tag a whole slew of marine beasties, drop sensors in the ocean, see where they go, watch this change over time, and use the data to make arguments for policy and as a basis for further scientific research.
The Tagging of Pacific Pelagics project has had something like this going for a while now; their Near Real-Time Animal Tracks covers several animal species over a wide range of Pacific Ocean locations. What seems noteworthy about the OTN is their desire (and now mandate) to extend this technology worldwide, with an expressly political aim. Exciting.
When the Exploratorium and Scott Snibbe approached us in 2005 to visualize realtime GPS positions of San Francisco taxi cabs in the Cabspotting project, we leapt at the chance. Taxi data is highly dynamic, its mappings change noticeably from viewing to viewing, it has an easy reference to the real world, and it lends itself to visually inspiring and meaningful work with the lightest of touches from us; all things we like.
It also suggests a wide range of potential new questions to answer and new things to make. Adam Millard-Ball has suggested applications for just about everyone: taxi drivers (how long is the average wait for a pickup in this neighborhood?), taxi passengers (obviously, where's the nearest taxi, but also: do they tend to be here at this time?), TLC planners (do we really need more taxis in North Beach? the data'd tell you), and city planners (looks like a lot of people are taking cabs at 8:50am from point A to point B; maybe we should put a taxi stand or ride share there?).
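Questions like these mostly come down to bucketing pickup events by time and place. The actual Cabspotting feed format isn't described here, so the record layout, neighborhood names, and timestamps below are invented for illustration; a minimal sketch of the kind of counting a planner might do:

```python
from collections import Counter
from datetime import datetime

# Hypothetical pickup records: (timestamp, neighborhood).
# Invented for illustration -- not the real Cabspotting feed.
pickups = [
    ("2007-02-20 08:45", "North Beach"),
    ("2007-02-20 08:50", "North Beach"),
    ("2007-02-20 08:55", "North Beach"),
    ("2007-02-20 09:10", "Mission"),
]

def pickups_by_hour_and_neighborhood(records):
    """Count pickups in each (hour-of-day, neighborhood) bucket."""
    counts = Counter()
    for stamp, hood in records:
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        counts[(hour, hood)] += 1
    return counts

counts = pickups_by_hour_and_neighborhood(pickups)
# counts[(8, "North Beach")] -> 3: three 8am-hour pickups in North Beach
```

With real data, a consistently hot bucket is exactly the "maybe we should put a taxi stand there" signal.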
Updates once a minute or so
From the beginning, we made a deliberate decision to map only the data—to let the material we were getting from YellowCab tell the story, and not to rely on any kind of overlay or specific relation to an underlying, pre-generated map. We wanted to see what story the data itself would tell. It resulted in some pretty interesting artifacts; in particular the activity around the Bay Bridge between San Francisco and Oakland.
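Drawing only the data means nothing more than projecting each GPS fix onto a blank canvas and letting density do the cartography; plot enough faint dots and the street grid (and the Bay Bridge) draws itself. This is a sketch of that idea, not our actual code; the bounding box and trace points are made up, and a simple equirectangular mapping is fine at city scale:

```python
def project(lat, lon, bounds, width, height):
    """Map a lat/lon fix to pixel coordinates on a blank canvas.

    bounds = (min_lat, min_lon, max_lat, max_lon). Equirectangular:
    distortion is negligible over an area the size of a city.
    """
    min_lat, min_lon, max_lat, max_lon = bounds
    x = (lon - min_lon) / (max_lon - min_lon) * (width - 1)
    # Screen y grows downward, so north (max_lat) maps to y = 0.
    y = (max_lat - lat) / (max_lat - min_lat) * (height - 1)
    return int(round(x)), int(round(y))

# Rough San Francisco bounding box (illustrative values).
SF = (37.70, -122.52, 37.84, -122.35)

# A few fake GPS fixes standing in for a cab's trace.
trace = [(37.79, -122.40), (37.77, -122.42), (37.81, -122.37)]
pixels = [project(lat, lon, SF, 800, 600) for lat, lon in trace]
```

Rendering is then just stamping a translucent dot at each pixel; the map emerges from accumulation rather than from any pre-drawn base layer.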