Why Collaborative Development Works in a Proprietary World

    by Jeffrey Warren
    February 15, 2012

    Public Laboratory is made up of a diverse group of contributors, some working from their homes or garages, some from their workplaces or even university labs. What brings us together is the idea that open-source, collaborative development can result in inexpensive and accessible environmental sensing.

    But to many, the way our community operates can be disorienting, and we’ve approached its unique challenges in several ways.

    Most people are familiar with collaborative development of textual works, such as co-authorship, or even mass co-authorship in projects such as Wikipedia. Software development is textual as well, and such communities are made possible by carefully tailored open-source licenses, which effectively stop any individual or organization from controlling the whole project.


    By contributing to these works — say, an open-source web browser or an article on gumdrops — authors are assured attribution but cannot stop others from building upon their work, improving or adapting it for new uses. This works in part because each time programmers or Wikipedians contribute, their name is explicitly entered in a registry of sorts. By publishing their contributions, they give up a certain amount of control — though of course they have almost certainly built upon the prior contributions of others who made the same choice.


    Balloon mapping has spawned dozens of variations and improvements as it has spread across the globe.

    Now imagine applying that system to non-textual works, such as a new kind of camera or a tool for detecting air pollution. The way Public Laboratory works, these designs are developed, tested and improved slowly through dozens of meet-ups, workshops, field events, and brainstorming sessions. At each meeting, participants agree to share their contributions in an open-source manner — but there is typically no explicit record of every contribution.


    To compound this, journalists (not to mention partners and even funders) prefer hierarchical organizations so they can say things like “developed at MIT,” and they really love citing individuals, not nebulous groups of “contributors.” We’ve often had to insist on group attribution in the media, and developing a so-called “attribution infrastructure” is a major focus of our website.

    Design for attribution

    We recently launched a small set of new features on our website, PublicLaboratory.org, to address these challenges. While many people make use of our tools, as a community we’d like to highlight those who contribute improvements and share their knowledge with others. With that in mind, we’ve come up with some ways to track when Public Laboratory contributors actually post about their work on the PLOTS website.


    Taking a cue from GitHub, the socially oriented open-source code hosting site, we’ve posted small graphs of the amount of activity on a given project over the past year. A quick look at these graphs shows how much activity a project has seen in recent weeks, and gives visitors a sense of how dynamic the research community involved in a particular project is.
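
    Under the hood, such a graph only needs posts bucketed by week. Here is a minimal sketch of that idea — the post dates and function below are illustrative assumptions, not the actual PLOTS implementation:

    ```python
    from datetime import date, timedelta

    # Hypothetical post dates for one tool; the real site would pull
    # these from its database.
    post_dates = [
        date(2012, 1, 3), date(2012, 1, 5),
        date(2012, 1, 31), date(2012, 2, 1), date(2012, 2, 2),
    ]

    def weekly_activity(post_dates, end, weeks=52):
        """Bucket posts into the trailing `weeks` weeks ending at `end`
        (exclusive), oldest bucket first -- the data series behind a
        small activity graph."""
        start = end - timedelta(weeks=weeks)
        buckets = [0] * weeks
        for d in post_dates:
            if start <= d < end:
                buckets[(d - start).days // 7] += 1
        return buckets

    counts = weekly_activity(post_dates, end=date(2012, 2, 15), weeks=8)
    print(counts)  # one integer per week, ready to plot as a sparkline
    ```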


    This box is shown on every Public Laboratory tool page or place page.

    Above that graph, we’ve listed contributors and the number of posts they’ve made (which are tagged with the tool, e.g. “thermal-photography”). The intent here is not to make things competitive (though that wouldn’t necessarily be a bad thing) but to give participants a sense of satisfaction that they’ve been part of a communal effort, and to give outsiders a glimpse of the number of people who have made the project happen.
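
    The contributor list itself boils down to a simple tag-based tally. As an illustrative sketch — the field names and sample records here are assumptions, not the actual PLOTS schema:

    ```python
    from collections import Counter

    # Hypothetical post records; the real site stores these in its CMS.
    posts = [
        {"author": "alice", "tags": ["thermal-photography", "diy"]},
        {"author": "bob",   "tags": ["balloon-mapping"]},
        {"author": "alice", "tags": ["thermal-photography"]},
        {"author": "carol", "tags": ["thermal-photography"]},
    ]

    def contributors_for_tool(posts, tool_tag):
        """Count posts per author carrying the given tool tag,
        most active contributors first."""
        tally = Counter(
            post["author"] for post in posts if tool_tag in post["tags"]
        )
        return tally.most_common()

    print(contributors_for_tool(posts, "thermal-photography"))
    # → [('alice', 2), ('carol', 1)]
    ```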

    By placing emphasis on the posting of content, we hope to highlight attribution for those who do good documentation and share it in a public venue — though anyone is welcome to use, adapt, repurpose, and improve upon Public Laboratory projects.

    In order to be an active participant in our grassroots research efforts, you’ve got to reach out to others and share your work. This may not come naturally to everyone: some contributors are accustomed to sole authorship credit, while others wonder who will care whether they publish at all. In a collaborative effort such as ours, however, success is gauged by how many others are able to leverage your work and reproduce or improve upon a set of tools you have contributed to. In an open-source context, seeing someone else replicate or adapt your work is not an instance of plagiarism or infringement but a gratifying affirmation that your documentation and development work have made the project legible and accessible to a potential collaborator.


    A network graph for the OpenStreetMap project shows the complex web of distributed contributions to a typical open-source project.

    ShareAlike and Free Hardware

    “Open source” means different things to different people, and with the above challenges in mind, it’s important to make some distinctions. Strictly speaking, open source just means that you publish the source files of your work — and in the case of hardware, the associated design files. A good open-source project will provide legible documentation and support for others who wish to read and understand those files. If you’ve heard of “free software” (we’ll invoke the refrain “free as in freedom, not as in beer” here), you might be familiar with its more stringent requirement that users have the right to “run, copy, distribute, study, change and improve” the software. This is the basis of our approach to open source, public, civic science — and it underlies our community’s aversion to proprietary non-free (in both senses of the word) software such as Photoshop or Google Earth.

    The noted lack of such freedoms in the area of scientific equipment and instrumentation — and the barriers that lack creates for a more legible and participatory approach to science — is a major motivation for our work.

    Finally (for now), there is the idea of requiring anyone who takes advantage of these freedoms (by downloading, adapting, modifying and improving) to share their work in turn, under the same license. This requirement, known variously as a “sharealike” or “copyleft” clause, can be controversial, as it explicitly requires people (and companies) to become producers, and not just users, of open-source works. With some exceptions for datasets and privacy considerations, we have adopted sharealike licenses across all Public Laboratory content, and are in the process of releasing even our hardware designs under a sharealike license, the CERN Open Hardware License.

    While these ideas may be unfamiliar to many, they make it possible for diverse communities such as ours to develop complex technical systems in a way that attributes and protects contributors’ work, and ensures that these shared efforts remain public, accountable, and open to newcomers. They allow anyone to use PLOTS tools and techniques without needing to seek permission, while encouraging newcomers to contribute just as they benefit. They offer a public and grassroots alternative to closed, expensive, and proprietary systems of technology production, which have resulted in a science that serves powerful and wealthy corporations above local communities and the underprivileged.

    Such considerations are an important part of the PLOTS approach to building participatory environmental science collaborations. Ideally, our community’s works will inspire readers or viewers to apply civic science ideas to their own lives — adapting tools to local issues — and with luck, they will become active participants in our research community by sharing their work publicly. In time, some may go on to organize local civic science groups, further the development of PLOTS’ open-source tools, innovate new technologies or approaches to environmental monitoring, and challenge and refigure the very structure of participation.

