7 Comments

  1. Thanks for mentioning ParticipateDB!

    Our catalogue focuses on public participation (the process of involving people in decision making: http://participatedb.com/faq#participation), not the broader field of civic engagement or activism. One of the challenges we try to address is the fact that many forward-thinking, useful or otherwise promising approaches to online participation don’t get the visibility they deserve.

    We plan to add tagging and a proper search function shortly. If you have any suggestions for how we can improve the site further, just let us know. Thanks!

    As for the quality of the content, it’s a collaborative endeavor. We currently have 30+ contributors and hope to sign up a lot more over the coming months.

    Posted June 29, 2010 at 6:33 pm | Permalink
  2. As an urban planning and public engagement practitioner, I’ve been noodling around a new model for understanding public engagement processes and techniques. Arnstein’s model is the classic in planning literature, but it comes from a particular advocacy position that is not always helpful: it presumes that more direct decision-making by citizens is always better. Instead, I suggest that different kinds and levels of involvement and control are appropriate for different kinds of decisions and at different stages of the decision-making process. Most citizens don’t actually want to be involved in every decision (like exactly which route the trash trucks should drive), but many do want to have a strong say in decisions that involve value judgments and weighing priorities (is it more important to have more frequent trash pickup or more frequent recycling?).

    Rather than a “ladder” of citizen participation, where the top rungs are always “better”, I think there is a more nuanced matrix of citizen participation, structured around the type of communication that needs to take place. Sometimes, the project won’t succeed unless many stakeholders engage in dialogue with many other stakeholders (many-to-many dialogue), and in this case, broad, large group participatory processes are needed (like open space technology, world cafe, future search, study circles, etc.). Other times, an agency needs to have back-and-forth discussions with varied stakeholders (one-to-many: two-way dialogue), and forums like structured workshops, websites with comment ability, surveys, etc. are needed. And sometimes, agencies just need to clearly communicate about a decision that has already been made, like when a traffic lane will be closed (one-to-many announcement: one-way communication), and methods like open houses, static websites, radio announcements, etc. are needed.

    Posted June 30, 2010 at 8:45 am | Permalink
  3. Jennifer,

    Thanks for posting your experience and thoughts, I appreciate it.

    I agree that Arnstein’s model, though classic, is in need of re-thinking to allow for the right amount of involvement and detail in the things people actually want to get involved in.

    But this is the part I find interesting:

    Sometimes, the project won’t succeed unless many stakeholders engage in dialogue with many other stakeholders (many-to-many dialogue), and in this case, broad, large group participatory processes are needed[…]. Other times, an agency needs to have back-and-forth discussions with varied stakeholders (one-to-many: two-way dialogue), and forums like structured workshops, websites with comment ability, surveys, etc. are needed. […] And sometimes, agencies just need to clearly communicate about a decision that has already been made, like when a traffic lane will be closed.

    From where I am standing, you are describing the engage, consult and inform model. It’s a helpful breakdown for professionals doing task management for themselves, but I think it is still fundamentally agency-centric: as a framework, it does not recognize the co-creation potential, across all three purposes of communication, of individuals who command influence within their own social networks. It still frames the agency as the entity steering, or determining the purpose of, the conversation — with a bit of an iron fist or as a benevolent facilitator (to riff off the way open source communities are often managed). What’s been challenging for me, approaching public engagement not from a practitioner perspective but from a community-oriented one, is that in the framework you describe it is still the agency deciding what’s needed. That framing does not acknowledge that members of the community are already talking to each other about relevant things all the time, and that meaningful things are happening in those conversations even when they don’t slot neatly into the work as it’s been laid out in a consultation plan with specific milestones and deliverables.

    Even the impact of social media and open data for coordination on the “inform” task can be significant. Let’s use your traffic lane example. When the “information” about a traffic closure is made available in a slightly different format — say, as a package of geographic files that can be plotted on a map, like the Olympic transportation road closures were in the City of Vancouver’s data catalogue, in addition to the ad in the newspaper and the radio announcement — you open the door for others to present the information in ways that are more meaningful and useful by mashing it up with other information. The app Vanpark2010 took the Olympic road closure and venue information and helped drivers locate places where they could still park their cars. No single organizational entity could do all of that on its own. (And organized efforts to do this centrally have so far proven unusable and unsustainable: see i-Move.ca.)
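
    (A purely illustrative aside: once the closure information is machine-readable, the kind of mash-up that becomes possible can be as small as the sketch below. The file names and the “street” property are assumptions made up for this illustration, not the actual format of the Vancouver catalogue.)

        # Minimal mash-up sketch (hypothetical files and fields, not the real catalogue).
        import json

        def load_features(path):
            """Return the feature list from a GeoJSON FeatureCollection."""
            with open(path) as f:
                return json.load(f)["features"]

        # Assumed inputs: a closures file published by the city, and a parking file
        # maintained by someone else entirely.
        closures = load_features("road_closures.geojson")
        parking = load_features("parking_lots.geojson")

        closed_streets = {c["properties"].get("street") for c in closures}

        # Keep parking locations that are not on a closed street: the sort of
        # recombination a third-party app (like Vanpark2010) can do, and a single
        # agency placing a newspaper ad cannot.
        reachable = [p for p in parking
                     if p["properties"].get("street") not in closed_streets]

        print(len(reachable), "of", len(parking), "parking locations remain unaffected")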

    Furthermore, “engage, consult, inform” does not feel to me like the model you would adopt to cooperate, coordinate or collaborate with actors of potentially equal standing. Instead, it frames the sponsoring organization as the only actor whose actions are worth talking about, which can be kind of insular and even insulting.

    I hope I don’t come across as if I’m pouncing on you, Jennifer, because I don’t intend to; rather, I’m offering things I see from my own experience that point to a different conception of public engagement, which (definitely!) feels ridiculous for me to do given that I haven’t actually tackled the work itself. That doesn’t make it ring any less true, though, and I hope to further develop my ability to explain what I see and what it means. I’m very appreciative of your comments in helping me talk it through.

    Karen

    Posted June 30, 2010 at 10:22 am | Permalink
  4. Karen, check out the IAP2 Spectrum of Public Participation (PDF). Their model lists five increasing levels of public impact: not just inform, consult and involve, but also collaborate and empower (though the latter may occur less frequently and may, as Jennifer pointed out, not always be feasible).

    Posted July 2, 2010 at 8:55 pm | Permalink
  5. Thanks so much for the link, Tim! I’ve been meaning to delve more into IAP2 and had no clue where to start, so I’m grateful for the pointer and interested to see what role they describe for professionals in collaborating and empowering members of the public.

    Posted July 7, 2010 at 10:49 am | Permalink
    My colleagues and I are the developers of the Structured Public Involvement protocol mentioned in the blog post. There are many unresolved questions about how to improve the quality of public participation in planning and, more broadly, in public goods management. By this we mean the expenditure of taxpayer money and/or the social allocation of risks, benefits and disamenities, which covers almost everything that local, state and national government does.

    There are so many problems with the field of public involvement that it’s sometimes hard to know where to start. Here are a few observations. They are not representative of mainstream thought; in some cases, we’ve discovered that they are considered almost revolutionary, although they’re supported by a large volume of hard data. Here’s one example.

    What are we trying to achieve?

    There’s no need for professionals to guess at this.

    Our data, gathered from thousands of citizens during real projects, tell us that citizens do not want “citizen control”, or Level 8 on the Arnstein Ladder. They want Level 6, or “partnership.” So do professionals such as engineers and planners.

    The current gap between where citizens believe they are now and where they would like to be depends on their experiences and on the agencies with whom they have dealt (we have data from projects ranging from transit-oriented development to nuclear plant remediation). But in all cases there is a significant gap.

    So, this means:
    1. Citizens don’t want “citizen control.” So why do some professionals claim that they do? If so, show us the data, obtained from large numbers of real citizens. Otherwise, let’s base our conclusions on a real data set.
    2. There’s a gap of 2-4 points between where citizens believe they are now and where they want to be in terms of quality of participation (a small illustrative calculation follows this list).
    3. Professionals think they’re doing better than the public thinks they are.
    4. These data also suggest the problem is methodological. If professionals and the public want the same quality of participation, then we can work on methods to accomplish this. There’s no need to diminish people’s rights, or capacities, to participate.
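
    (To make point 2 concrete, here is a minimal, purely illustrative sketch, using invented numbers rather than our dataset, of how such a gap can be computed from paired ladder ratings collected by electronic polling.)

        # Purely illustrative: invented responses, not the project data described above.
        # Each respondent rates, on the 8-rung Arnstein ladder, the level of
        # participation they feel they get now and the level they would like.
        responses = [
            {"perceived": 3, "desired": 6},
            {"perceived": 2, "desired": 6},
            {"perceived": 4, "desired": 7},
            {"perceived": 3, "desired": 5},
        ]

        gaps = [r["desired"] - r["perceived"] for r in responses]

        mean_perceived = sum(r["perceived"] for r in responses) / len(responses)
        mean_desired = sum(r["desired"] for r in responses) / len(responses)
        mean_gap = sum(gaps) / len(gaps)

        print("mean perceived rung:", mean_perceived)
        print("mean desired rung:  ", mean_desired)
        print("mean gap:           ", mean_gap)  # the 2-4 point gap described in point 2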

    Next, what are the public processes trying to accomplish? Why don’t we apply the same analytic treatment to large-scale processes that we do for smaller ones, and evaluate the process performance against these criteria?

    So let’s set out a set of performance indicators for public processes, such as quality, inclusion, clarity of decision support and efficiency.

    Many believe these indicators to be mutually exclusive, or subject to tradeoffs (e.g. it is widely believed that quality is inversely related to inclusion, to scale, or even to process efficiency).

    This need not be so. The reason for this thinking is that current theoretical frameworks for public involvement are totally inadequate. The biggest issue is that the customary, normative and even implicit ideology of “consensus” is unworkable when dealing with large, diverse groups, with varying values, from whom valuations must be elicited in reasonably short timeframes and translated into effective, meaningful, and fair outcomes. Logically:

    Consensus does not equal high performance.
    Consensus does not equal justice. Or equity.

    Conversely…

    Lack of consensus does not equal lack of equity.
    Lack of consensus does not automatically undermine justice.

    Yet, in the fields of planning, urban design, transportation infrastructure, energy futures and resource management, very few researchers have published thoughtfully on how to design effective, high-performance processes that do not rely on, demand, or expect “consensus.” Nor have many collected data on what stakeholders really think of these processes.

    Here are just a few interesting facts about the data above and some other findings.

    Fact: the planning profession doesn’t relish these data being published in its journals – search for “Arnstein Gap” and wonder why it’s documented in many other forums but not in what would seem its most suitable home, the planning literature. This analysis can’t be silenced, however.

    Fact: the spectacle of transportation designers, planners and other consultants spending – literally, in some cases – hundreds of millions of dollars of public money in ways that affect thousands of citizens, yet without being held accountable for delivering a fair, equitable, inclusive and efficient public involvement process, is almost ubiquitous.

    Fact: many agencies and contractors, including engineering, planning, and design, find reasons why quality evaluation shouldn’t be undertaken. The idea of citizens openly evaluating the planning/design process quality, at large public meetings, using electronic polling to ensure simultaneity, independence and equity in stakeholder valuations, seems to cause problems. We’ve done this for years and don’t see why we shouldn’t always evaluate our process. If it works, let citizens tell us. If not, also, let them tell us. Let this become a normal practice. Let processes that DO NOT contain evaluations be defended against normative expectations.

    In the course of our research and professional work, we’ve amassed data on many aspects of public processes, much of which contradicts not what we hear but what is often being done: e.g. supposedly “consensual” processes resorting to exclusion to ensure that “difficult”, “ill-informed” or “problematic” people or views don’t “derail” or “subvert” the process. How ironic! But how unfortunately typical.

    The most interesting finding, though, is how often even logical, well-educated professionals want to retreat from, or flatly deny, hard data, including public process performance data such as the quality evaluations described above. Example: we hear that processes that don’t aim for consensus don’t – or can’t – “work”. But there’s no definition of “work” and no “data” other than informal evaluation by agents of the design authority, sponsoring agency or other consultants. This situation is, to put it mildly, not consistent with the democratic goals espoused by many of these agencies in their mission statements.

    Let’s have factual process criteria, let’s measure them, and publish the results openly.

    Ultimately, an Executive Order that mandates process evaluation will be needed. It could be similar to the 1994 Executive Order on environmental justice.

    Why not? We know this evaluation can be done. We’ve done it with thousands of citizens, in real and controversial projects. This clearly should be done. With all the effort being directed into public processes, why are we apparently the only ones in the world who possess such process quality data?

    But there are numerous vested interests who do not want this to happen. In spite of a strengthening rhetoric on participation, inclusion and public process quality in many professional and academic fields, when confronted with an actual metric, the fear is palpable. This is precisely why political movements such as the Tea Party are making accountability a centerpiece of their platforms. It is important to citizens, and they’re going to take action.

    Reactionary thinking is out of date, out of touch, and increasingly indefensible. The system-wide rhetoric on public involvement needs to be replaced by higher-performance processes, and those processes need to be measured against hard quality criteria. It’s not just the right thing to do; it’s soon going to be the only thing to do. This can’t come soon enough to improve governance in democratic societies.

    Posted August 17, 2010 at 7:51 pm | Permalink
  7. Keiron,

    Thanks very much for your thorough comment. Please take this comment as a commitment to give it a careful reading and formulate a reply – just not right at the moment.

    Posted August 26, 2010 at 12:42 pm | Permalink

One Trackback

  1. By Conversations Elsewhere on July 2, 2010 at 10:26 pm

    […] post about social media and public engagement that mentions ParticipateDB (which is how it showed up in […]
