Stamen is a design and technology studio in San Francisco.

You are at Stamen's first blog, mostly written by Eric Rodenbeck. Visit our new blog, or come visit us at our newly designed stamen.com.

Recent blog posts

Oct 16, 2015

Bringing the best of National Geographic’s classic map design into the digital world

Since 1915, the National Geographic Society has been making some of the best maps the world has ever seen. So we were honored, humbled, and maybe even a little nervous when we got the opportunity to work closely with the Nat Geo team designing and building an interactive feature all about the Amazon, launched yesterday.

Just how much history did we need to live up to? As it happens, Nat Geo marked the centennial of its map-making in January 2015 with this fine article, which gives us a cumulative snapshot:

At this writing (the count is obsolete as soon as it is tallied), National Geographic cartographers have produced 438 supplement maps, ten world atlases, dozens of globes, about 3,000 maps for the magazine, and many maps in digital form.

At Stamen, we’ve been making digital maps for a decade and a half, pushing especially on the idea that digital cartography can have every bit as much craft and delight as the best of print maps.

We were most definitely not starting from scratch. Indeed, we approached the project with a shared commitment to drawing as much as possible from Nat Geo’s print assets and tools and from Stamen’s history using open source web mapping tools to design with data.

As soon as we dug into the work, we discovered we were making a lot more than a map. Yes, the foldout print piece featured a detailed set of maps on one side, but it also showcased beautifully detailed illustrations of the three main types of forest in the Amazon, along with a dozen stand-alone renderings of species ranging from a harpy eagle to a jaguar (elements which some map nerds call the “epimap”). And it drew on a wide range of data that could be visualized in new ways on the web.

With such rich and diverse visual assets, Stamen designer Nicolette Hayes worked closely with National Geographic’s design team (especially Fernando Baptista and Vitomir Zarkovic, along with project lead Ryan Morris) to design an interface that could give both detailed context and a simple, clean overview.

Maps from Print to Web

We knew that the Nat Geo cartography team uses MAPublisher with Adobe Illustrator to make its print maps, painstakingly fine-tuning individual labels and line placements as one can do with print tools. That kind of control is possible with web tools, but it’s not easy.

We had a choice: Try to recreate the print map from scratch using pure web tools or find a middle way to reuse the work Nat Geo had done for print.

From the start, reuse made a lot more sense. We had two key challenges: Introducing zoom-dependent styling (especially for labels) and rendering a map in a projection other than Web Mercator. (Map nerd explanation: Almost all maps you see on the web are in Web Mercator, which fits well into the square grid of the web but distorts reality significantly. Almost all good print maps use some other projection that reduces distortion for the focus area and purpose of the map.)

Using techniques we first tried on this simple Ocean Planning map, we developed a deceptively simple method to solve the projection problem: Create all tiles in the same desired projection but then simply let the Leaflet mapping library we use treat them as if they were in Web Mercator. We call it “lying to Leaflet.” A little deception is a good thing in this case. (You can get a bit more detail in this slide deck.)
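
To make that concrete, here is a minimal sketch of the pattern in TypeScript with Leaflet; the tile URL, zoom range, and element id are placeholders rather than the production configuration.

```typescript
import * as L from "leaflet";

// A pyramid of z/x/y tiles that were rendered in a non-Mercator projection.
// Leaflet is never told this: it loads and arranges them exactly as it would
// Web Mercator tiles, so panning and zooming just work. The trade-off is that
// on-screen lat/lng readouts are meaningless, which is fine for a fixed-extent map.
const map = L.map("map", {
  center: [0, 0],   // tile-pyramid coordinates, not real geographic coordinates
  zoom: 4,
});

L.tileLayer("tiles/amazon/{z}/{x}/{y}.png", {
  minZoom: 3,
  maxZoom: 8,
  noWrap: true,     // the pyramid covers only the area of interest
}).addTo(map);
```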

For zoom-dependent styling, the solution was essentially hand work between Stamen’s Alan McConchie and Nat Geo’s Virginia Mason and Debbie Gibbons, carefully increasing label size and decreasing density and making other tweaks until each zoom looked just right.

Artworks as Maps

If the maps give you the lay of the land, the illustrations bring you into the heart of the forest, taking the flat map plane and turning it vertical, where so much of rainforest diversity comes to life.

We briefly debated how to make sure the illustrations got equal footing with the map. At first, there didn’t seem to be an obvious pattern to follow. But it turned out the answer was simple: treat the drawings like maps.

So we took the high-resolution source files and turned them into zoomable and pannable images. Just like a map, you can zoom out for an overview or zoom in to see intricate details and reveal labels on the various animals in the scene.
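
A minimal sketch of that approach using Leaflet's pixel-based coordinate system; the image dimensions, tile path, and element id below are hypothetical.

```typescript
import * as L from "leaflet";

// Treat the illustration as a tiled image in pixel coordinates rather than a geographic map.
const width = 8192;          // source illustration size in pixels (hypothetical)
const height = 6144;
const maxZoom = 5;           // zoom level at which one image pixel equals one screen pixel

const map = L.map("illustration", {
  crs: L.CRS.Simple,         // no projection: the coordinate system is just pixels
  minZoom: 0,
  maxZoom,
});

// Convert the pixel extent into Leaflet's coordinate space so we can fit and constrain the view.
const southWest = map.unproject([0, height], maxZoom);
const northEast = map.unproject([width, 0], maxZoom);
const bounds = L.latLngBounds(southWest, northEast);

L.tileLayer("tiles/forest-illustration/{z}/{x}/{y}.png", { bounds, noWrap: true }).addTo(map);
map.fitBounds(bounds);
```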

Context and Change Over Time

The maps and artwork give us a full spatial picture of the Amazon, but we also wanted to show change over time.

For the cyclical flooding that occurs each year, Nat Geo’s Fernando Baptista animated the waters rising to attract a whole different set of inhabitants. For the more sobering changes of fire and deforestation, the Stamen team created two animations from data provided by the Nat Geo editorial team, rendering individual frames in TileMill and pulling them together in Adobe After Effects. These show regional fires on the left and a zoomed-in view of deforestation on the right:


Oh, and en tres idiomas

The project is being published in English, Portuguese, and Spanish (the latter two launching soon). Stamen’s director of technology Seth Fitzsimmons and project director Dan Rademacher worked with Nat Geo’s translators and our technology partners at Presence PG to build the site in a way that made it simple to swap out not just text but also images and map layers that included either words or units (to switch between imperial and metric).
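
As a rough sketch of what that swapping can look like (the keys, tile paths, and helper below are illustrative, not the production setup):

```typescript
import * as L from "leaflet";

type Locale = "en" | "pt" | "es";

// Per-language assets: not just UI strings, but also label tiles and units.
interface LocaleAssets {
  labelTiles: string;                 // tile template with labels in that language
  units: "imperial" | "metric";
}

const locales: Record<Locale, LocaleAssets> = {
  en: { labelTiles: "tiles/labels-en/{z}/{x}/{y}.png", units: "imperial" },
  pt: { labelTiles: "tiles/labels-pt/{z}/{x}/{y}.png", units: "metric" },
  es: { labelTiles: "tiles/labels-es/{z}/{x}/{y}.png", units: "metric" },
};

// Switching languages is then just re-pointing the label layer (and any unit-dependent UI).
function applyLocale(labelLayer: L.TileLayer, locale: Locale): LocaleAssets {
  const assets = locales[locale];
  labelLayer.setUrl(assets.labelTiles);   // Leaflet swaps the tile template in place
  return assets;
}
```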

Ready for more!

Grafting together workflows and expertise from print and digital worlds was incredibly rewarding, both in the process and in the results. We’re looking forward to more opportunities to help evolve the tools and methods we’ve started using in this project.

Oct 13, 2015

Sea Level Rise: Mapping the future so we can make better choices today

Yesterday, we launched critical new work with our longtime partners at Climate Central: Mapping Choices lays out starkly the stakes of carbon emissions over the next several decades.

It also paints a much more sobering picture than near-term climate change forecasts typically do. That's partly because this map isn't about what will happen to us between now and 2050 or 2100.

Rather, it's about how the choices we make between now and 2050 or 2100 will lock in very different futures for our heirs, maybe several generations from now. Will New York be underwater? Will the Sacramento Valley once again be an inland sea? Or will we take action now and mitigate the worst possible outcomes?

It's not easy to think about how choices we make now could mean the world to people living centuries from now.

But that was our challenge for this project. And we needed to make that complex idea clear and compelling in an updated interface that works well on screens large and small. With deep content guidance from Ben Strauss and his team at Climate Central, we were able to focus on getting the designs and interactions just right.

Along with the ss6m.climatecentral.org map that launched earlier this fall, Mapping Choices is the second in a series of new maps we've made for Climate Central. Stay tuned for more this fall.

Already, the work has gotten great notice in Wired. You can read the peer-reviewed science behind the map in the Proceedings of the National Academy of Sciences or get the lay person’s version from ClimateCentral.org.

Try out the map and share what you find! There's a handy "download screenshot" button (the camera) and a full-featured embed as well.

A few sobering comparisons:

Sacramento: Once again an inland sea?


Goodbye Boston Hahbah!


Florida: Should Boca stay or go?

May 5, 2015

Diving into ecosystem data with Berkeley's Ecoengine and interfaces from Stamen

New Tools for Research with UC Berkeley

Explore, Compare, Inspire!

Most people know that the University of California at Berkeley is a world-class research university. Some folks have heard of the Hearst Museum of Anthropology. But not so many people know that the university houses seven natural history museums which together hold 12 million specimens that form the most complete representation of our state’s living and extinct plants and animals. Our new work with Cal is designed to help change that. The Ecoengine is a powerful resource for understanding changing ecosystems, more than ever a crucial challenge for our times.

We’re thrilled to have built the main interfaces for searching and analyzing the data about those specimens, along with a whole lot more information that’s been brought together in a single database and API called the Berkeley Ecoinformatics Engine (also known as the Ecoengine).

The Ecoengine API is already a remarkable resource for the most tech-savvy academics. Our job was to make it searchable by researchers and students who would rather use a web browser to discover data and test hypotheses than jump straight into the statistics package R or proprietary desktop GIS systems.

The challenge, then, was to create interfaces that hide none of the complexity of the data — researchers want to see it all, no dumbing down! — but also that are intuitive to use and that produce findings that are easy to share.

The Ecoengine has data from specimens collected around the world, though the collections are concentrated in California.

Explore

With millions of data points across thousands of categories, Explore presented the challenge of designing a search interface that’s highly flexible but also gets results quickly and is easy to learn.

The facets along the left side, which are a key part of the underlying database, also give an immediate sense of the scope of the Ecoengine data and allow you to quickly drill into the data even if you’re not sure what to search for at the outset. In the screenshot above, in just a few clicks, we narrowed the scope down to just birds in California with known locations and physical specimens in the collection.

At right, the default facets immediately tell you the scope of the available data: mostly in the United States, mostly in California, and predominantly animals.

The search box at the top allows for very specific queries (like for a species name) and the timeline next to it lets you narrow your search to a slice of a few years.
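
For a sense of what such a search amounts to under the hood, here is a hedged sketch of a faceted request; the endpoint path and parameter names are illustrative rather than the documented Ecoengine API.

```typescript
// Roughly what the Explore interface assembles from facet clicks, the search box,
// and the timeline. Parameter names here are illustrative, not the documented API.
async function searchEcoengine(): Promise<void> {
  const base = "https://ecoengine.berkeley.edu/api/observations/";
  const params = new URLSearchParams({
    q: "Aves",                     // free-text or taxon search
    state_province: "California",  // a facet selection
    georeferenced: "true",         // only records with known locations
    min_date: "1950-01-01",        // timeline filter
    max_date: "2000-12-31",
    page_size: "100",
  });

  const response = await fetch(`${base}?${params}`);
  const results = await response.json();
  console.log(`${results.count} matching records`);
}
```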

Compare

After we developed Explore, Charles Marshall (director of the Museum of Paleontology and one of the principal investigators on the Holos/Ecoengine project) challenged us to push further in our second phase.

A common limitation of web-based biodiversity databases is that you’re limited to seeing one query at a time. Our charge from Charles was to “break the lock of single-taxa views of change.”

So we designed an interface purpose-built for comparing diverse spatial datasets. It starts with either simple term queries of the Ecoengine entered directly in the Compare tool, or with more complex queries brought over from the Explore tool. These can include facet selections, time ranges and bounding box filters as well as search terms.

Here’s an example comparing observations of western fence lizards and dusky-footed woodrats:

That might seem like an obscure thing to search for! But by different indirect mechanisms, fence lizards help keep Lyme disease occurrence low. Woodrats increase occurrence. So this is a map with some direct interest for anyone concerned about Lyme disease.

The interface is highly configurable, with drag-and-drop layers, custom labels, editable colors, and multiple basemap options. It includes boundaries (like state, county, ecoregion), our own Terrain layer, and the light and dark maps we designed for CartoDB:

And every map configuration can be easily bookmarked and shared, since we write the queries and configurations into the stateful URL.
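
A minimal sketch of that pattern; the state fields below are hypothetical:

```typescript
// Serialize the current layers and settings into the URL hash so any view can be
// bookmarked or shared; on load, the hash is parsed back into the same state.
interface CompareState {
  queries: string[];   // one Ecoengine query per layer
  basemap: string;
  colors: string[];
}

function writeState(state: CompareState): void {
  const encoded = encodeURIComponent(JSON.stringify(state));
  history.replaceState(null, "", `#${encoded}`);
}

function readState(): CompareState | null {
  if (!location.hash) return null;
  try {
    return JSON.parse(decodeURIComponent(location.hash.slice(1)));
  } catch {
    return null;       // malformed hash: fall back to the default view
  }
}
```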

Inspire

The Explore and Compare interfaces needed to look at home on the holos.berkeley.edu website (and work within a larger production environment). Just as important, our work needed to lay the foundation for other developers to use the freely available API to build their own tools for use cases none of us at Stamen or at the University had previously considered.

Making code open source is one thing (and we’ve done a lot of that). Making it easy to understand and reuse is another. And that’s even harder if the code then needs to work within a production CMS (in this case, Mezzanine).

The solution here came about rather naturally: divide the load. So we have two production interfaces that can be built and deployed on the main Holos site, and then we have other versions and prototypes of many more interfaces that are right at home on GitHub Pages:

Steal this code!

The Explore and Compare interfaces also run just fine on GitHub Pages (see Explore and Compare). So fork and modify! But those are pretty complex, and we made many other prototypes along the way. We hope the examples below will inspire others to grab the code and try their hands at making their own ecovisualizations.

Antarctic Chordata

  • A stress-test of loading all Chordata in a non-Mercator projection centered on Antarctica

Arctic Chordata

  • The same as the previous, but centered on the North Pole

Lizards and Woodrats

  • Spot spatially co-occurring observations by toggling layers

Taxa Sampling Distributions

  • Example of small multiples to compare sampling distributions.

  • ColorBrewer palettes

Woodrats over Decades

  • Example of small multiples to compare temporal distributions

Quercus

  • Small multiples with search functionality (edit "quercus")

  • Split by search facet

  • Displays top 24 facets for a search

Photos

  • Simple photo-viewing app, accepts URLs in the same format as Explore

Bulk Download

  • A tool for generating CSV text from a query

  • Downloads multiple pages of data. A limitation of the API is that results are always paginated, so loading all data for a query requires some work.
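
A minimal sketch of that pagination loop, assuming a Django-style response with next and results fields (the real field names may differ):

```typescript
// Follow the `next` links until the last page, accumulating every row for CSV export.
async function fetchAllPages(firstPageUrl: string): Promise<object[]> {
  const rows: object[] = [];
  let url: string | null = firstPageUrl;
  while (url) {
    const page = await (await fetch(url)).json();
    rows.push(...page.results);
    url = page.next;   // null on the last page
  }
  return rows;
}
```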

Observations

  • An early version of Explore with search box, time filter, pagination and export options

  • Could be a good starting place for new EcoEngine applications, since the app is only about 250 lines long and uses only D3.js

Early version of Explore

  • An early version of Explore with a preview of available photos and a "Detail" pane that lists information about the observations you hover over.

Sensors

  • A simple "hello world" of accessing and printing EcoEngine data with D3.js

  • Lists an index of available sensors

Scatterplot

  • A simple D3.js scatterplot showing observations by country over time.

Parallel Coordinates

  • A D3.js parallel coordinates plot showing a sample of 2000 observations.

The Berkeley Ecoinformatics Engine is funded by the W. M. Keck Foundation.

Apr 21, 2015

Hacking for Parks in DC: Agency Data
+
Citizen Data

On April 11 and 12, the Interior Department held its first-ever hackathon. The topic was one that's close to our hearts: Parks! The myAmerica Dev Summit attracted people from all over the country interested in parks and code, and especially the new Recreation Information Database API (RIDB).

Two Stamen folks went to the hackathon and worked on two things, both aimed at better telling the stories of parks through open data and social media.

First, we brought along a dataset of links to about 77,000 Flickr photos harvested from within parks (using the tools we built for CaliParks.org). Those images came in handy for another team at the event: The folks from Booz Allen Hamilton built an awesome WanderList site to search the RIDB parks and show Flickr photos in those parks (here's their code).

And second, we worked with our friends at Trailhead Labs to figure out efficient ways to both improve trails data in OpenStreetMap and increase agency engagement with the OSM community. Here's more on that front.

Trail Assist: Agency Data + Citizen Data

Slides on SpeakerDeck

Our goal over the weekend of the #dev4outdoors hackathon at the Department of the Interior was to make substantial progress on adding detail and texture inside the 3,000+ parks in the Recreation Information Database (RIDB).

The RIDB gets you point locations and more than 40 text attributes like “camping” and “paddling.”

But where exactly are those amenities? And what do those parks actually look like? People visit parks not for the point at the other end of a driving route but rather for the rich experience they’ll have when they get there.

They’re going to want to know what that experience is before they get there. And once they’re there, they want a good map.

When faced with thousands of parks, we think social media and open data are key resources to not only fill in the gaps but also show the diversity of experiences you can have in parks.

So, how to fill in a map of thousands of parks?

Ask all the people: Agency staff, park volunteers, OpenStreetMappers, social media users. Fill in the details of the park with help from social media and crowd-sourced map data!

On Caliparks.org, we harvest Instagram photos to show how people experience thousands of parks:

Trailhead Labs built the open source traileditor.org so people can email in photos of trailheads, to build a crowd-sourced trailhead database with images and detailed information:

But crowdsourced data is not without its challenges. It can work amazingly well at scale, but it can also be wrong.

That’s where Trail Assist comes in. We propose to make it easier than ever for the parks community, including park visitors, to report and flag bootleg trails (also called social trails or even desire lines). The system can also help agencies improve their own data.

First, email a photo of the suspected bootleg trail to social@traileditor.org, and that will add it to the TrailEditor site.

Then we provide some information about what bootleg trails are and why we should be concerned about them.

But more important, we push that point location to OpenStreetMap (OSM), the largest and most detailed publicly accessible map of the world ever made.

That information then appears on OSM and alerts agencies or volunteer mappers that they should inspect the area more closely.

For that, we’ve begun developing tools that make it much simpler to see where open data and official data differ.

Working within the open source Java OSM editor (JOSM), we used some tools built by Stamen (the backend of Map Stack) and API integration with CartoDB to make a view like this:

The yellow is agency data. The magenta is OSM data, and the variable blue lines are the Strava Global Heatmap. We use layer compositing blend modes to make yellow and magenta cancel to white, and then layer on the Strava map, making it instantly obvious which trails are most likely to be social trails and which of those are most heavily used.
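
Here is a sketch of the compositing idea using the Canvas API; the actual work happened inside JOSM, and the blend choice for the Strava layer below is illustrative.

```typescript
// Where the yellow agency trails and the magenta OSM trails overlap, "screen"
// blending pushes the pixels to white, so only the disagreements stay saturated.
// The Strava heatmap is then drawn on top so usage shows through.
function drawComposite(
  ctx: CanvasRenderingContext2D,
  agency: HTMLImageElement,   // yellow agency trail lines
  osm: HTMLImageElement,      // magenta OSM trail lines
  strava: HTMLImageElement,   // Strava Global Heatmap
): void {
  ctx.globalCompositeOperation = "source-over";
  ctx.drawImage(agency, 0, 0);
  ctx.globalCompositeOperation = "screen";    // yellow + magenta -> white where both agree
  ctx.drawImage(osm, 0, 0);
  ctx.globalCompositeOperation = "multiply";  // darken with the heatmap (illustrative choice)
  ctx.drawImage(strava, 0, 0);
}
```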

Bringing richer trails data into the OSM community

One of our key goals is to improve data within OSM in a way that is true to the community that makes the map, and therefore gets owned by volunteer mappers, vastly increasing the likelihood that data quality will be maintained and improved over time.

While we worked on-site during the hackathon, we engaged in a larger conversation (via Slack and social media) with members of the OSM community about trails and trailhead data. Brandon Knight (@geobrando) published a detailed proposal for a new standard for defining trailheads within OSM.

That inspired community discussion (so far, 20 responses in the community thread), and a clear, succinct definition (in our opinion) emerged: A transition point between a trail network and the developed transportation network of roads, mass transit, etc. (See the start of the discussion here.)

In our work at the hackathon, we also began circulating a proposal for a new social_path tag to clearly mark areas where trails exist that aren’t part of the official trail network.

Our goals for this proposal are twofold: to enrich the data already present within OSM and to facilitate exclusion of such trails. Deleting unsanctioned trails is problematic, as they clearly exist in the real world.

Like roads, most trails fall under the responsibility of a government agency, which designates the permitted uses of a trail (whether horses or bikes are allowed, for example) and also determines when a trail should not be used, particularly if there are safety or environmental considerations. The trail still exists, and (rather than deleting it) we can further specify its characteristics within OSM.

Beyond preventing edit wars, we attempted to come up with a scheme that would facilitate easy exclusion of the data from most maps (to discourage use in the real world) while remaining available for cases when such data is legitimately useful (e.g. for evacuation or firefighting). Within the OSM community, such tagging could be considered “tagging for the renderer” (since there’s a clear intent to influence how they’re displayed, and this is generally discouraged), but we prefer to look at it as being akin to describing the precedence and use of roads using highway=tertiary, highway=service, etc.

As with roads, not all trails are equal.

For developers of websites and apps for parks, there’s a huge benefit in using OSM as a (free!) one-stop-shop for data regardless of jurisdiction. In fact, the benefit is so huge that developers will keep doing it. Even if the data is wrong.

Better to make the data right AND provide easy tools for hiding or marking prohibited trails. Let map designers query for trails where highway != social_path: they get to serve better data, agencies get a clear path to better maps across products they might not even know about, and the open data community gets more accurate data.
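
On the consumer side, that query is tiny. A sketch assuming GeoJSON-style features that carry OSM tags:

```typescript
// Trails as features with their OSM tags; geometry omitted for brevity.
interface TrailFeature {
  tags: Record<string, string>;
}

// Most park and trail maps would render only the sanctioned network...
function officialTrails(features: TrailFeature[]): TrailFeature[] {
  return features.filter(f => f.tags.highway !== "social_path");
}

// ...while the social paths remain queryable for legitimate uses such as
// evacuation or firefighting maps.
function socialTrails(features: TrailFeature[]): TrailFeature[] {
  return features.filter(f => f.tags.highway === "social_path");
}
```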

What’s next?

  • Enhance the proof of concept to more easily incorporate data from any source.
  • Investigate potential integration with the NPMap team’s Places Editor modifications to iD.
  • Work with the OSM community and local agencies to propose and drive adoption of a social trail tagging norm.
  • Organize a grassroots Trail Blitz in the Bay Area to field test the tools.
  • Field test the tools with data from local agencies.
  • Package up the tools for use by others.

Mar 4, 2015

Parks are social

There’s a story in the American nature-loving tradition that people go to the outdoors in part to get away from other people. Some people might even say that’s the main reason to go outside.

We’re ever more certain, however, that in truth parks are social, people live their lives in them, and bringing together support for parks means embracing that with open arms.

That’s what we’ve been trying to do with the concept, design, and launch of CaliParks.org, our web application that brings together curated park location and activity data with completely uncurated image feeds from Instagram and Flickr, capturing images posted by anyone within the boundaries of California’s nearly 12,000 public parks.

We’re thrilled with stories like this one from New America Media—soon to be translated into Spanish, Chinese, Korean, and Vietnamese! There you’ll meet young, diverse urban dwellers ready to visit and support parks, and also leaders like José Gonzalez of Latino Outdoors and Rue Mapp of Outdoor Afro, who are leading the way in showing the parks world that diverse communities already use and support parks and are ready to do more, given the right context.

And that context is all about the conversations we collectively have about, in, and for parks. What stories do we tell? Who do we ask to share their experiences?

As many people as possible!

Right after we launched CaliParks.org last month, we worked with our friends at Upwell to hold a “Parkinar” about social media in parks — a conversation about conversations in parks.

The response was amazing: More than 80 people participated directly in the Parkinar, including a range of influencers from large agencies like the National Park Service, community groups like Riverside’s Save Our Chinatown Committee, and blogs like Outdoorsy Mama and Modernhiker.com.

Upwell’s research on the conversational impact showed that a single virtual event actually increased the parks conversation more than the LA Times did:

That’s nice for CaliParks.org, but it’s even better for parks as a whole: When we open up the conversation, the stories start flowing.

Let’s keep that going!

We’re working on improving CaliParks.org, so send us your feedback. There’s a feedback link on the bottom of every page on the site — and we do read those submissions!

And let us know how we can help you expand the conversation. Here are some ideas:

  • CaliParks.org has what we call featured “story blocks” prominently on the home page. Right now we’re featuring “Explore State Parks.” Yeah! Do you have a favorite set of parks that speak to you and the community you’re part of? Send us a list of ten or twenty parks and a link to a Creative Commons photo on Flickr, and we’ll feature your favorite parks (and your group in the story block!), and then you can share the list with your community. (And if you need help finding an image, we can do that too.)
  • Embed maps from CaliParks.org along with activities and social media from your favorite parks in your blog or website. Just find your favorite parks, hit the “Embed” button, and copy the code into your site (like this)!
  • Take control of the way your favorite parks appear on CaliParks.org, and on social media. Post photos from your favorite parks to Instagram and Flickr and we’ll harvest them within a day to show on CaliParks.org. Just be sure you turn on location services so that your geolocation gets picked up.
  • Keep talking about parks on social media! Use the hashtag #caliparks so we can all share the conversation.

And keep in touch! Thanks!

Jul 17, 2014

How leaky is your city? Methane Maps for the Environmental Defense Fund

by Dan

Outside our own kitchens, gas leaks aren’t something we’d ever thought much about here at Stamen. But being a studio filled with progressive San Francisco do-gooders and all, we do think about climate change and sustainability. We also think an inordinate amount about all the sensors out in the world gathering data – satellites, airplanes, cellphones, Jawbones, and, of course, those Google Street View cars.

Last spring, we learned from the Environmental Defense Fund (EDF) that those cars are doing a bit more than taking photos of businesses and tigers wandering through parking lots. Some of them are also measuring natural gas leaks and tracking the impact those leaks might have on climate change.

Stamen worked with EDF and Google Earth Outreach to design and build a system that EDF staff could use to crunch complex data files used by scientists into easily digestible maps that give an immediate sense of the scale of leaks in various cities around the country. For now, the maps cover only Boston, Indianapolis, and Staten Island, but Google will keep gathering data in more cities and the system we built for EDF allows them to easily add more maps as more cities come online.

Here’s New York, which is totally covered in methane leak occurrences:

As is Boston:

You might ask, is this normal? Well, take a look at Indianapolis:

There’s a lot less going on.

It was important to EDF and Google that we show the paths the cars drove, as well as the leaks detected, since the absence of leaks could mean sound pipes, or just an area that hadn’t yet been sampled. We took our cue from the familiar blue lines of Google Street View maps to show the drive paths, and then desaturated the base map to make sure the focus stayed on the leaks and sample areas.

Among our design goals for the project was to balance the visual presentation against the nature of the data. The leaks detected are really something called “verified peaks” — elevated methane levels detected frequently enough and at high enough levels to be significant. But, given winds and the potential for leaks to be contained in buildings and a thousand other variables, a measured peak doesn’t necessarily mean there’s a leak at an exact location out to n-decimals of latitude and longitude. So we opted for larger dots and diffuse edges.

And all these leaks are much lower than anything that would cause your neighborhood to erupt in flames. So the overall palette is a bit muted — nothing’s going to explode!

But our vast natural gas system — from wells getting drilled across the country to the networks of pipes in our cities to the knob on your kitchen stove — does make a huge difference for the future of our climate. We’re delighted to have helped EDF and Google begin mapping the part of that infrastructure that most closely touches tens of millions of people across the country.

View the project live.

Jun 18, 2014

ACA 411: Visualizing Obamacare

by Dan and Heather

This year, people across California began getting health insurance coverage in new ways, thanks to President Barack Obama’s signature Affordable Care Act, often called Obamacare.

As the new marketplaces, subsidies, and penalties got discussed and debated in the media this spring, we at Stamen were hard at work with our frequent client the California Healthcare Foundation, creating a system to visualize and communicate a range of key measures to assess the impact and effectiveness of Obamacare.

ACA 411 is a deceptively simple interface into a rich dataset that covers everything from how many people are uninsured in the San Joaquin Valley (18.1%) to how much a small firm pays for health insurance per person ($7,379). ACA 411 covers more than 40 different indicators at launch, and CHCF will be adding data over several years, tracking long-term trends and adding new indicators in the future.

It was initially a bit daunting to develop a systematic, dynamic way to display all this raw data, but we were determined to help make the ACA understandable by applying some delight to the figures.

Within each indicator, the data is sliced in several ways: types of care, providers, demographics, enrollment, quality of care, income levels, even geographic averages. With so many categories, the data can give you a general overview or a very specific peek into health coverage. After relentless explorations, we’ve developed a system we think works pretty darn well. And it’s a solution flexible enough to accommodate new data in years to come.

The data is broken down into three categories: Domain, Topic, and Indicator. Once the user makes their selection, they are taken to the indicator’s page, which then has a variety of data points to look at.

By default, the page opens to the Total datapoint and the most recent year of data collected, with trending data displayed on the left. Indicators with data from more than one year also let you browse through the different years. Once you click on another tab, you have the ability to compare those specific data points to the trended totals, as well as the ability to export, share, or embed the data you’re looking at.

The combination of data points results in thousands of potential exports. Here are some examples of other charts you’ll find in the project:

The main technical challenge in the project was making the whole system run off a simple set of spreadsheets that anyone could edit in Excel. We worked hard to make sure that CHCF can manage the project into the future, adding new indicators and assigning chart types to them, all via simple text files they can publish through their existing content management system.
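
As a rough sketch of that spreadsheet-driven approach (the column names are hypothetical, not the production schema):

```typescript
// Each row of a CSV that CHCF can edit in Excel describes one indicator and how to chart it.
interface IndicatorRow {
  domain: string;
  topic: string;
  indicator: string;
  chartType: "line" | "bar" | "map";
  dataFile: string;   // path to the CSV holding the indicator's values
}

// A naive CSV parse (no quoted commas) is enough to illustrate the idea.
function parseIndicators(csv: string): IndicatorRow[] {
  const [header, ...lines] = csv.trim().split("\n");
  const cols = header.split(",");
  return lines.map(line => {
    const cells = line.split(",");
    const row: Record<string, string> = {};
    cols.forEach((c, i) => { row[c.trim()] = cells[i] ?? ""; });
    return row as unknown as IndicatorRow;
  });
}
```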

We’re looking forward to seeing those trend lines fill out for years to come — and, we hope, measure ever better health coverage and health care for California.

View the project live!

Jun 10, 2014

RFPs, Parks, and the Internet

Here at Stamen, we’ve been doing more and more work for and about parks: first our map for the Golden Gate National Parks Conservancy and then our own parks and social media experiment at parks.stamen.com.

We love parks!

I especially love parks. I spent almost a decade telling stories of local parks and wildlife for Bay Nature Institute, and toward the end of my time there I was constantly trying to push the envelope of what a small nonprofit could do with technology.

Now that I’m at Stamen I’ve got a slightly different view of how park agencies and nonprofits can and should engage with technology.

Back at Bay Nature, I thought, “I’ll figure out precisely what we want and do all the research, and then I’ll just hire someone to make that exact thing!”

I’ve now seen this same thought process at work in a few RFPs for mobile websites and interactive maps for parks and trails. Figure out every possible feature you and your colleagues might want, put them in a giant list, throw in some house-made wireframes, and then send out the RFP for precisely what you know you need.

Makes perfect sense. Except:

It doesn’t work.

What you think you know at the beginning of the process may well turn out to be dead wrong, and, more importantly, you lose so much of the big win of hiring professional designers when you try to solve all the design problems ahead of time.

When Stamen took on the effort to redesign the maps of the Golden Gate National Parks Conservancy, we were fortunate to have flexible, and all-around awesome, project leads at the Conservancy. Michael Norelli and Mason Cummings were ready and willing to hear that the enormous feature set they’d assembled was too big to do well in one scope of work.

Instead, we focused on the key goal: Get people to the parks.

Being able to focus that way was incredibly important to making sure the effort was successful. It meant the Trip Planner was the most important thing to build out fully, complete with bespoke directions to obscure locations and special warnings if parking at Muir Woods was impossible.

We made some awesome Trail Views (and those weren’t in the RFP), but the key to success here was understanding that getting people to the parks was the top priority and the main measure of success.

What about the original RFP’s several other pages of requirements, from hawk tracking to editable management zones? We had to let those go for future phases of work. If we’d tried to do them all, we wouldn’t have finished anything well. Check out the Golden Gate National Parks Conservancy's open source code repository if you want to do a deep dive into the code that made that project possible.

Better Scoping Principles

  • Prioritize:  define must-haves, then cut your list in half
  • Start simple, move quickly
  • Release early, release often
  • Be ready to change your priorities
  • Favor open data, standard formats, and existing tools
  • Prioritize?  YES!

Avoid letting data management dictate design

A common thread in parks and mapping RFPs is that data pipelines, GIS workflows, and general data management sometimes receive a lot more attention than the ultimate goal, which is usually something like getting people to parks or helping people discover new trails or find programs in parks.

That's natural, since the people writing the RFPs are the ones who most often wrangle all that data. But unless you’re really doing a pure infrastructure project, resist the temptation to let data management and your internal needs dictate the thing you’re getting built.

Which leads to:

Use the kinds of software you aim to hire someone to build!

This might seem obvious, but it’s surprisingly rare, especially if you’re a longtime parks pro. Hey, you already know tons of trails and parks and birds and nature lore, why do you need to use an app or website?

Well, first, you might discover some things you don’t know or new ways of looking at things you do know. But more importantly: Just trying out existing tools and really using them is a huge source of insight. Heck, you might discover something that works so well that you want to just hire the people who built it to make a special version for your park or agency.

Keep it simple! (That is hard.)

You might notice that the best existing services and applications are often the simplest. Whether through dumb luck or, more likely, previous mistakes combined with persistence, they’ve figured out what people want most and they’ve given it to them. And they’ve made it easy.

You can do the same in your search for a technology firm to build your stuff: Keep it simple. Favor qualifications and past work over ability to navigate complex documents and matrices. Find a firm with which you can have a relationship based on collaboration and trust. Hire them, take some risks, be ready for small failures in the service of big wins. And have fun.
