[Updated] Defense dollars aren’t a better predictor of the Amash vote

July 27th, 2013

This was updated twice since first posting, as indicated below.

In a Wired article yesterday, “Lawmakers Who Upheld NSA Phone Spying Received Double the Defense Industry Cash,” the author said that based on an analysis by MAPLight “defense cash was a better predictor of a member’s vote on the Amash amendment than party affiliation.” That suggests there’s evidence defense cash had something to do with the vote. There isn’t. [Update: There isn’t much.]

Everyone who’s been following the Amash vote already knows that the vote was not along party lines in the least. Take a look at the seating chart diagram on the GovTrack vote page.

Liberal Democrats and conservative Republicans happened to form a coalition in opposition to NSA data collection (an “Aye” vote), while moderates in both parties voted to reject the amendment. (The seating chart arranges representatives by their GovTrack ideology score.) So, first, the fact that defense cash was a better predictor than party is not very interesting.

A better question is whether defense cash is a better predictor than a legislator’s pre-existing personal convictions, as measured by our ideology score.

It isn’t.

Defense cash’s prediction

To make this quantitative, let’s make the prediction like this. Since we know the vote was 205 Aye to 217 No, let’s put the 217 legislators who received the most defense cash into one group (predicted to vote No) and the 205 who received the least into another (predicted to vote Aye). How well do those groups match the vote outcome? Here’s the breakdown by counts:

       Less$  More$
Aye      123     82
No        82    135

In other words, this prediction is right for 123+135 = 258 legislators, or just 61% of the time.
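
Here’s a minimal sketch of that accuracy computation in R, using the vote and defense_dollars vectors built in the script under “Analysis details” at the end of this post (prediction is just a name introduced here):

# Predict 'Aye' for the Less$ group and 'No' for the More$ group,
# then compute the fraction of correct predictions.
prediction = ifelse(defense_dollars == 'Less$', 'Aye', 'No')
mean(prediction == vote)  # (123 + 135) / 422, about 0.61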

Ideology’s prediction

We can do a similar analysis based on the ideology score. The idea is that the further from the center a legislator is, the more likely he or she was to vote for the amendment. So let’s make groups for the 205 legislators with scores furthest from the median ideology score (“extreme”) and the 217 closest (“moderate”). Does that match the vote?

A little better.

       Extreme  Moderate
Aye        131        74
No          74       143

This prediction is right for 131+143 = 274 legislators, or 65% of the time. That’s a little better than defense cash, but let’s call it a draw.
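
The same check for the ideology grouping, again leaning on the script under “Analysis details” (is_extreme is defined there):

prediction = ifelse(is_extreme == 'Extreme', 'Aye', 'No')
mean(prediction == vote)  # (131 + 143) / 422, about 0.65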

[update: added 7/29/2013]

We have two predictors for the vote — personal conviction and campaign contributions — that are about equally accurate, and both explanations are plausible. In the absence of other data, there’s no reason to prefer one explanation of the vote over the other.

Better together?

Votes are often mostly along party lines. That is, vote and party are often extremely highly correlated. That also means that to the extent money is highly correlated with votes, it’s then necessarily highly correlated with party affiliation too. That makes it very difficult, if not impossible, to separate the influences of party and money.

But the Amash vote presents a uniquely interesting case because ideology (distance from the center) and defense dollars are not really correlated at all (r = -.05). That means ideology is good at predicting 60ish% of the votes and defense dollars are good at predicting a slightly different 60ish%. Maybe we can put them together to predict more than either can predict alone?

Let’s start with the predictions from the ideology score. We know we got 35%, or 148, of the votes wrong. So let’s swap the 74 congressmen in the ‘extreme’ group who received the most defense cash (call them the A group) with the 74 representatives in the ‘moderate’ group who received the least defense cash (call them the B group). (Each group has 74 members because ideology’s prediction was wrong for 74 legislators in each direction.) If money has any effect, we’d predict these to be the representatives most likely to be affected. Here’s how those representatives voted:

         A    B
Aye     35   38
No      39   36

Note that by ideology alone, we predicted the As to be Aye voters and the Bs to be No voters, which was right 35+36 = 71 times. After the swap, we make the reverse predictions, which are right 39+38 = 77 times. The swap improves our predictions for 6 votes, or 1.4% (6 out of 422 Aye and No votes).

The predictors are better together. That means there is room for an influence of defense dollars on the vote, even for a skeptic like me who prefers an explanation in terms of ideology first. But it’s a small effect in absolute terms. And the effect goes both ways: the 6 extra votes split into 4 additional No votes attributable to defense dollars and 2 additional Aye votes attributable to the lack of them.

So let’s boil this down to one number. Out of the 422 votes, maybe about 4 No votes were due to the influence of defense contractor campaign contributions. Even in a tight vote like this, that wouldn’t have affected the outcome. And it’s still a big maybe. This is a minuscule correlation that is probably due more to random chance than to any actual influence of money.

(In a linear regression model, the adjusted r-squared roughly doubles when we put the factors together.)
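
That comparison can be sketched like this (a reconstruction on my part, not necessarily the exact model, using the variables from the script under “Analysis details”):

# Linear probability models: each predictor alone, then together.
aye = as.numeric(vote == 'Aye')
summary(lm(aye ~ contribs))$adj.r.squared
summary(lm(aye ~ distance_from_center))$adj.r.squared
summary(lm(aye ~ contribs + distance_from_center))$adj.r.squared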

[end of update]

What does it mean?

Since we have two predictors that are about equally good, and one has nothing to do with either defense or money, there’s no reason to think that defense cash had anything directly to do with the outcome of this vote.

There’s obviously a role of campaign cash in our political system. In particular, only candidates who can raise cash can run for office. I’ve written about that in my book if you want to know what I think in more detail.

But if you want to relate industry cash to a particular vote, you’re going to have to at least beat other explanations that aren’t based on that industry’s cash.

So, here’s the thing: it’s important that we actually tell truthful stories, not just ones that we can spin to match our beliefs.

[update: added 8/19/2013] Ben Klemens, a statistician, has turned this data into an interesting logit model and quantifies in a better way the effect of money on the vote: post 1, post 2. [end of update]
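
For the curious, a logit model along those lines (my own minimal sketch, not Klemens’s actual specification) would look like this, again using the variables from the script below:

aye = as.numeric(vote == 'Aye')
summary(glm(aye ~ contribs + distance_from_center, family = binomial))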

Analysis details

After merging the vote and ideology data from GovTrack with the campaign contributions aggregated by MAPLight into a single table (download), I ran the following script in R:

data = read.table("table.csv", header=T, sep=",")
attach(data)
# There were 205 Aye-votes.
num_ayes = sum(vote=='Aye')
# Group legislators by how much defense contractor money they received.
# Call the bottom 205 legislators the 'Less$' group, and the other half
# the 'More$' group.
defense_dollars = ifelse(rank(contribs) <= num_ayes, 'Less$', 'More$')
# Group legislators by how far their GovTrack ideology score is from
# the House median. Call the most extreme 205 legislators the 'Extreme'
# group, and the other half the 'Moderate' group.
distance_from_center = abs(ideology - median(ideology))
is_extreme = ifelse(rank(-distance_from_center) <= num_ayes, 'Extreme', 'Moderate')
table(vote, defense_dollars)
table(vote, is_extreme)
cat("cor(contribs, distance_from_center) =", cor(contribs, distance_from_center),"\n")
# Swap analysis: flag the 74 'Extreme' legislators with the most
# defense cash as group A and the 74 'Moderate' legislators with the
# least defense cash as group B. The rest keep placeholder labels
# ('0' for the other extremes, 'Z' for the other moderates).
swap_size = 74
group = ifelse(is_extreme=='Extreme', '0', 'Z')
group[is_extreme=='Extreme'][rank(-contribs[is_extreme=='Extreme']) <= swap_size] = 'A'
group[is_extreme!='Extreme'][rank(contribs[is_extreme!='Extreme']) <= swap_size] = 'B'
print(table(vote, group))

The legislative data dance is a song that never ends

July 17th, 2013

The House Appropriations committee passed up another chance to advance core transparency practices in Congress. In a draft report published this morning for FY2014 appropriations, the committee makes no mention of legislative data. And in the Bulk Data Task Force’s finally-released recommendations, the Library of Congress gets all worked up over something no one has been asking for.

Here’s the short of it. Can we get a spreadsheet simply listing all bills in Congress? Is that so hard? I guess so.

After last year’s legislative branch appropriations bill report said the committee was “concerned” that the public would misuse any bulk data downloads, The Washington Post covered how the public uses this sort of data for good, and House leadership formed a Bulk Data Task Force to consider if and how to make bulk legislative data available. That task force submitted recommendations to the House Appropriations committee last December, but they were only made available to the public last week (see this, page 679).

In the recommendations, the task force noted that it had begun several new transparency projects. One is the Bill Summaries project, in which the Library of Congress will begin to publish the summaries of House bills written by the Congressional Research Service (CRS) in some structured way. The Library of Congress’s report to the task force has some choice quotes:

“some groups may try to leverage this action to drive demand for public dissemination of CRS reports”  (Note that “CRS reports” are different from “CRS summaries.” That’s a whole other can of worms.)

“CRS could find itself . . . needing to clarify misrepresentations made by non-congressional actors”

“if there is an obligation to inform the general public to the risks of non-authoritative versions of the information, it has not been included in the estimates”

These CRS summaries have already been widely distributed… on GovTrack… for nearly a decade. (And, I’m sorry, but what risks am I causing?) And while I wouldn’t mind having the summaries easier to get from the Library, I certainly am not gunning for them. I want data like the list of cosponsors, what activities bills have gone through, or just a simple list of bills. If the Library thought this wasn’t a great place to start with bulk data, well, I couldn’t agree more!

Some of the other projects mentioned in the recommendations are indeed very useful (some of which I wrote about here). Others, however, touted bulk data success without making any new data available. In the meeting minutes in the recommendations’ appendix, the task force wrote that it discussed “what data is available on GovTrack compared to what would be available through the proposed GPO project.” Quite a bit! That proposed GPO project turned into the one that made no new data available. In their next meeting they met with me and folks from other groups (Sunlight, Cornell LII, and so on), but I don’t recall them asking me the question they posed the week before, oddly.

The other projects mentioned in the bulk data task force recommendations are:

  • Congress.gov, THOMAS’s upgrade, which is explicitly not providing any bulk data (except perhaps through the new Bill Summaries Project)
  • Member Data Update: The Clerk’s list of Members of the House now includes Bioguide IDs, which is fantastic and very helpful.
  • A new House History website launched or will launch. See, I don’t even know. Again, not bulk data.
  • Docs.House.Gov: Committee schedules and documents have been added. (Great! I’m using that data on GovTrack already.)
  • New XML data for House floor activity. (This is pretty interesting but a little disorganized. I would rather scrape THOMAS than use this XML data.)
  • The Clerk is launching a Twitter account. (No data here.)
  • HouseLive speaker search. (Searching videos. Data? Who knows.)
  • Stock Act public data disclosure.
  • Legislative Data Dashboard (not quite sure what this is).
  • Converting the United States Code to XML. (This is a big and commendable project.)
  • A contest to get the public to convert bills to the Akoma Ntoso XML data format. (Does not count as open government data if the public has to do the work.)
  • Replacing MicroComp (an old bill/report text drafting tool?).
  • Positive Law Codification (when did that become in scope for this task force?).
  • Editorial Updating System (no idea what this is).

So while the recommendations support the use of legislative data generally, they set no long-term goals for broad access to the legislative data on THOMAS. And as for the only data in motion now, the Library of Congress appears not to be happy about making it widely available.

The committee report for the annual legislative branch appropriations bill, which kicked off the task force last year, has been an important document for legislative transparency in the past. Besides last year’s step backwards, in 2009 the report indicated the House supported “bulk data downloads” for the bill status information on THOMAS.gov, though nothing came of it. This year the committee said nothing, so, well, I guess nothing will come of it, either.

New Open Data Memorandum almost defines open data, misses mark with open licenses

May 9th, 2013

TL;DR: The new E.O. and memorandum are good for transparency and lock in almost all of the generally accepted notions of open government data. But the memorandum misses the mark on the requirement of “open licenses.”

With an executive order and a new Memorandum on Open Data Policy today, the focus on entrepreneurship remained at the forefront of federal data policy. This focus began with last year’s Digital Government Strategy, and these days weather data and GPS signals are the examples of choice. That said, the policies set in the new memorandum are quite good for the classic use of this data (transparency, accountability, and civic education) even if “transparency” is only barely mentioned in passing.

Defining Open Data: How well does it do?

This new Open Data Memorandum presents the most detailed definition to date of “open data” by the federal government. It includes many of the principles that our community has reached consensus on, but it gets one severely wrong.

As I wrote many years ago, the 2009 Open Government Directive itself already adopted some of the principles of open government data including: online, primary, timely, public input, and public review. It also added two principles of its own: being pro-active about data release and creating accountability by designating an official responsible for data quality.

Comparing to my list of open government data principles in my book, the new memorandum’s definition of open data covers:

  • Principle 1: Information should be online (to quote the Memorandum: “retrieved, downloaded”)
  • Principle 2: Primary (the Memorandum even uses language from the 8 Principles; interestingly the memorandum places this under the heading of “Complete,” which was a different principle from the original 8 Principles).
  • Principle 3: Timely.
  • Principle 4: Accessible (the Memorandum repeats the language from the 8 Principles, “available to the widest range of users for the widest range of purposes” and the use of “multiple formats” where necessary, and for documentation says the data should be “described”).
  • Principles 5 and 10: Analyzable (“machine readable”).
  • Principle 6: Non-discriminatory
  • Principle 7: Non-proprietary (open) data formats
  • Principle 14: Public review (“A point of contact must be designated to assist with data use and to respond to complaints about adherence to these open data requirements.”)

Its definition also states that open data has a presumption of openness. (Principles 2-7 and 14 are from the 8 Principles of Open Government Data. Principle 1 is from the Sunlight Foundation.)

Elsewhere in the memorandum it addresses:

  • Principle 13: Public input (“engage with customers” for prioritizing what data should be made available and how to make it available)
  • Principle 15. Interagency coordination (“interoperability”)

It also asks agencies to create data catalogs to include datasets “that can be made publicly available but have not yet been released” at agency.gov/data URLs. And it says agencies must consider the needs of open data at all stages of the information collection lifecycle. In other words, data should be collected in such a way as to promote public dissemination of open data later on.

The Memorandum misses the principle that data should be license-free, which is a core principle and a grave mistake. It also misses the peripheral principles of permanence, the use of safe file formats, and practices of provenance and trust (e.g. digital signatures). (These last two are ACM principles.)

“Open licenses” presume access is closed by default!

Rather than requiring open data to be license-free, which was a core part of the 8 Principles of Open Government Data, it instead promotes the use of “open licenses.” This is a subtle but important distinction. Licenses presume data rights. Open licenses, including open source licenses and Creative Commons licenses, create limited privileges in a world where the default is closed. These licenses create possibilities of use that do not exist in the absence of the license because copyright law, or other law, creates an initial state of closedness.

Most open licenses grant some privileges but not others, and some privileges come along with new requirements. The GPL and Creative Commons Attribution License, for instance, rely on copyright law so that restrictions on data use intended by the open license (GPL’s virality clause, or the restriction that users must attribute the work to the author) are enforceable in court.

Federal government data is not typically subject to copyright law, and in this case a license is not needed for the data to be open. Thus the application of a license suggests a change from the open-by-default state of this data to a closed-by-default state where a license is required to open it up. While the memorandum requires “an open license that places no restrictions on their [the dataset's] use,” the term “open license” is typically understood to presume a default closed state. This policy opens the door (so to speak) to agencies applying licenses (i.e. new contractual agreements) to data that serve only to restrict use.

Federal government data not subject to copyright cannot be free if a license is applied. The license-free principle of the original 8 Principles says open government data cannot be limited in this way.

When data may be subject to copyright protection (copyright law is murky and there are many gray areas), or when copyright law definitely applies (such as to documents produced originally by federal government contractors), then a public domain dedication such as the Creative Commons CC0 statement or the Open Data Commons Public Domain Dedication and License (PDDL) (both of which combine a waiver and a license) is appropriate. A public domain dedication differs from an open license in that it disclaims copyright and other protections, whereas, again, an open license implies that such a limitation on use is already present. The CC0 statement was successfully used by the Council of the District of Columbia to disclaim copyright over data files containing the DC Code.

What’s the definition used for?

While the definition of open data is otherwise quite strong, the definition is used just once in the whole memorandum. The memorandum does not mandate that government data be open data under its definition, at least as far as I could see. The only use of the open data definition is in its request for agencies to create roles for staff to ensure data released to the public are open. That is, staff should promote open data, but open data itself is not required.

Although the definition itself is not used much, there are independent provisions that repeat some of the same principles. Agencies must use “machine-readable and open formats,” existing standards, and metadata. And information collection should be done in a way to support information dissemination: “[A]gencies must design new information collection and creation efforts so that the information collected or created supports downstream interoperability between information systems and dissemination of information to the public.”

It also requires the use of open licenses:

“Agencies must apply open licenses, in consultation with the best practices found in Project Open Data, to information as it is collected or created so that if data are made public there are no restrictions on copying, publishing, distributing, transmitting, adapting, or otherwise using the information for non-commercial or for commercial purposes.”

As I mentioned, federal-government-created data needs no license to be open, although the memorandum implies that all agency data should have an open license. (That’s either legally impossible or it means something unusual.) For other data, it appears that the memorandum intends to create a public-domain-like state. But it is qualified, for contracts may only use “existing clauses” (i.e. standard contract terms already approved by OMB) to implement terms of open licensing. Looking over those terms, I don’t see the necessary legal framework to do it. And a nearby footnote confusingly says that a data user who modifies the data “is responsible for” describing the change. Does that mean an “open license” can require users to describe modifications? The qualifications make it very difficult to know what an acceptable implementation of open licensing looks like.

Conclusion

While the goals of the Memorandum in defining open data and using open licenses are laudable, the implementation does not meet the 8 Principles’ requirements for open government data, at least under the usual understanding of “open license,” and the use of the definition to promote open data is very limited.

PS. As Derek Willis points out over Twitter, the “mosaic effect” paragraphs in the memorandum are also somewhat concerning. The mosaic effect is hard to quantify and therefore difficult to limit, and this creates a big hole for keeping government data out of public reach.

UPDATE 5/10/2013 #1:

Rufus Pollock points out that the Open Data Commons Public Domain Dedication and License (PDDL) is similar to CC0 and would also be appropriate. I agree.

Eric Mill notes that for data already in the public domain, the Creative Commons Public Domain Mark, which is basically an icon/badge, would be appropriate. Agencies should definitely mark public domain data as such.

UPDATE 5/10/2013 #2:

I added a few paragraphs to the section now called “What’s the definition used for?”.

DC opens its “code”, embracing principles of open laws

April 4th, 2013

This morning DC’s legal code went online as open data. I’ve worked with government before on open data, but never have I worked with a government body that moved so deftly through the technical, policy, and legal issues as the DC Council’s Office of the General Counsel. So, before anything else, thanks to the general counsel V. David Zvenyach and his staff for their time and expertise on this.

The TL;DR version goes like this:

Tom MacWright wanted to build his own version of the DC Code website. The DC Council couldn’t share its electronic copy of the Code because it contained intellectual property owned by West. This became a small and very geeky controversy (spurred by Carl Malamud). But Zvenyach — the general counsel — recognized the value of making the law open and did it. He removed the West IP from their electronic copy of the Code (I helped), posted the file on the Council’s website, and even included a CC0 public domain dedication.

The last bit all happened within a matter of days, and it was one of the easiest open data success stories I’ve been a part of. Tom recapped the events here and began hacking the code immediately. He held a hackathon on April 14 which he wrote about here (and Eric Mill wrote about here).

DC is setting an example for other jurisdictions. In terms of the 10 Principles of Law.Gov, DC’s bulk law download — achieved within only a few days of work — satisfies principles of no-charge to access (1), no copyright or terms of use (2), data in bulk (3), and, to some extent, machine processability (8).

Here’s the longer version:

This all began a few months ago when DC-based civic hacker Tom MacWright took an interest in making local law more accessible. Intending to import the DC Code into Waldo Jaquith’s State Decoded project, he ran into a small problem: he couldn’t get a complete copy of the law. Intellectual property issues prevented the DC Council from simply emailing over their copy of the Code.

Many states, like the District, contract out the codification and code-publishing work to a third party like West (owned by the Canadian-owned Thomson Reuters) or Lexis (owned by the Amsterdam-based Reed Elsevier). DC had previously contracted out to West, and last year switched to Lexis. Neither likes to share. DC’s official website to read the Code — which has been run by West — is free to the public, but copying any part of the Code off of that website might violate West’s copyright or terms of service, or both. Sharing the law might have been illegal.

In the case here in DC, the DC Council had Word documents containing the Code, given to them by their contractor West, but the documents contained West’s logo. The DC Council could not share the documents with West’s logo intact. And it wasn’t easy to take those logos out (more on that later). Informally speaking, West owned the DC Code.

I had met Zvenyach, the general counsel, before. He is very technologically savvy and has been trying to modernize the office he took over only a few years ago. We had even talked about holding a hackathon to help him do it. (As a DC resident, I’m also interested in DC law.)  But his office, like all of government, is bound by limited resources and much work to do. When Tom brought the issue onto Zvenyach’s radar, I don’t believe there was any point at which Zvenyach didn’t want to make the files available. It was, as far as I’ve observed, merely a matter of time and resources.

Tom wrote more about the intellectual property issues here and here. Coincidentally, on Monday Ed Walters of Fast Case gave a great talk on the issue of who owns the law at Reinvent the Law — I highly recommend watching it. He’s also written extensively about it.

Tom asked Carl Malamud to get involved. Carl has been working on this issue in other states, like in Oregon, where the State of Oregon itself claimed copyright over their laws. Carl bought (for quite some money) a physical copy of the DC Code, digitized it, and mailed thumb drives in the shape of famous presidents containing the digitized code to various important people. This was a spin on a tactic that Carl began in the 1990s when he opened the SEC’s corporate filings data: get the data online, pressure the government to put the data online themselves, and then help the government take over that responsibility.

The media and bloggers caught on, beginning I think with Cory Doctorow on March 27, followed by DCist on March 28, The Washington Times on March 31, Steve Schultze on April 1, and Think Progress on April 3. The files themselves went up on April 4, a little more than a week after the first media blog post about it, and the decision to put the files up with a CC0 license was made in any case some days earlier. It really did not take much pressure at all. (Tom also wrote a post on Greater Greater Washington on March 19.)

Carl had noticed early on that the DC Council asserted copyright over the Code. Some of the media reports focused on that. As Zvenyach explained in The Washington Times article, the rationale was to protect DC from West, by making sure West could not claim copyright over the same Code, not to limit access to the law. Whether or not state codes can be copyrighted was mostly beside the point, and the focus on this issue turned out to be a red herring. It was resolved quickly with the choice of the Creative Commons CC0, a public domain dedication.

I went in to Zvenyach’s office on April 3 to help them take West’s logo out of the Word documents. There was one document per title of the Code, or about 50 documents, many in the 50-megabyte size range. The West logo was in the header, but the header was specified independently for each section of the code, so in reality there were thousands of logos to take out. We also took out a DC copyright line from the documents, which was also repeated in each section.  It took about 4 hours for Microsoft Word to process all of the files, and 1 hour for us to figure out how to do it so “quickly.”

When I left Zvenyach’s office that evening, Zvenyach pointed out the presidential thumb drive still sitting on his desk that he received from Carl — unfortunately I forget if it was a little George Washington or a little Abraham Lincoln. I have a feeling that thumb drive will be around for a while.

Now, there is a bigger issue here. There’s no plan for updating the public files. DC’s contract with Lexis going forward doesn’t require Lexis to provide DC with an electronic copy of the code. Perhaps after this they’ll refuse to do so. But we’ll tackle this another time.

Public Comment to the House Appropriations Legislative Branch Subcommittee for FY2014

March 18th, 2013

I will be submitting the following public comment to the House Committee on Appropriations Subcommittee on the Legislative Branch regarding Public Access to Legislative Information.

—–

I write to urge the subcommittee to expand funding for legislative transparency.

I am the president of Civic Impulse LLC, which operates the free legislative tracking service GovTrack.us. Our website has become an authoritative source for legislative information:

  • More citizens turn to GovTrack.us for information about the status of legislation than to the Library of Congress (LOC)‘s THOMAS and Congress.gov websites. [See compete.com.]
  • Hundreds of House and Senate staff use GovTrack.us each day.
  • More than 70 congressmen use GovTrack services to display congressional district maps and their voting records on their official websites.

Why is this? GovTrack.us has become the de facto authoritative source for legislative information because the Congress does not publish enough “bulk legislative data.” In 2004 we stepped in to fill the vacuum created by the lack of information coming from the Congress. It is long past due for the House to correct this problem.

When the Committee released a draft report last year indicating it intended to have legislative branch agencies publish less bulk data, The Washington Post picked up on the story and wrote:

“At Congress’s ’90s-vintage archive site, there’s no way to compare bills side by side. No tool to measure the success rate of a bill’s sponsor. And there’s certainly no way to leave a comment. Congress makes it hard for outside sites to do any of this, either, by refusing to give out bulk data on its bills in a user-friendly form.” (“Congressional data may soon be easier to use online,” The Washington Post, June 8, 2012.)

Soon after, the Speaker and Majority Leader formed the “Bulk Data Task Force.” Since the formation of the task force, new bulk data projects have been completed at the Government Printing Office (GPO) including bulk bill text and at the House Clerk (committee schedules and documents and bulk floor action data).

“Bulk data” is a core component of any government information dissemination program. The House Clerk publishes roll call vote results as bulk XML data. In 2009, the Government Printing Office began offering bulk data for bill text, the Federal Register, and other publications. The Office of Law Revision Counsel publishes the United States Code in multiple bulk data formats. Bulk data can be produced at a fraction of the cost of other information dissemination methods, such as colorful websites.

Yet much information about the Congress remains out of public view. There is no public bulk data for the status of legislation (the LOC “BSS” database), amendments, or committee votes. I believe that eventually all official artifacts of the legislative process should be available online, free, in real time, and as structured bulk data. [See Recommendations to the Bulk Data Task Force.]

And, sadly, proposals for cost-reduction threaten the public’s access to the law itself. A 2013 congressionally-funded report by the National Academy of Public Administration (NAPA) called for the Congress to consider charging the public fees to read the law online at GPO’s website. NAPA’s report is severely out of touch. There is no dispute that it is a moral imperative for Congress to fund programs that provide broad access to the law and other parts of the public record.

GovTrack.us is a demonstration that bulk data creates broad public access and that bulk data is also the most cost-effective way to create access. Since 2004, GovTrack.us has reached tens of millions of individuals at a cost of less than $1 million.

The Committee can advance broad public access to legislative information by providing adequate funding for:

  • Publishing the LOC legislative status (“BSS”) database as bulk data. [See Recommendations to the Bulk Data Task Force.]
  • Enhancing GPO’s highly successful FDSys system.
  • Creating bulk data program officers at GPO, LOC, and the House Clerk.
  • Evaluating the cost and impact of legislative transparency by an organization that believes in the public’s right to primary legal documents (i.e. not NAPA).

Thank you for the opportunity to submit comments on legislative branch appropriations for FY 2014.

Joshua Tauberer

President, Civic Impulse LLC

Open Data Day 2013 Hackathon Recap

March 2nd, 2013

Last weekend, in perhaps as many as 100 cities around the world, open data enthusiasts held hackathons. Here in DC we too were celebrating February 23 as International Open Data Day. And it was, dare I say, a great success.

Over 150 developers, data scientists, social entrepreneurs, government employees, and other open data enthusiasts participated in our event, first at a kickoff Friday night at Google’s DC headquarters and then at the Saturday session at The World Bank. Participants worked on local DC issues, global open source mapping, world poverty, and open government. Here are some quick links:

Videos: One | Two – Photos: One | Two

Eric’s Recap | Sam’s Recap | Tumblr | Storified Tweets

Press coverage is listed at the end.

Our approach to the hackathon was a little different than many others. Our goals were to strengthen the open data community, to foster connections between people and between projects, and to emphasize problem statements over prototypes and solutions. There was no beer or pizza at our hackathon, no competitions, and no pressure to produce outputs. Participants came motivated and stayed focused without needing to be treated like brogrammers. This created a positive, welcoming, and highly productive environment.

In the morning Eric Mill (Sunlight Foundation/@konklone) ran a several-hours-long tutorial on open data for about 40 participants. Some were new to coding. Others were project managers (inside and outside of government) who wanted to learn more about what open data is all about from the ground up. Eric walked the participants through exploring APIs through the web browser and using command-line tools to process CSV files — a very concrete way to explain the benefits of adding structure to data.

Several projects focused on local DC issues: mapping zoning restrictions (more), graphing public and charter school enrollment (and other education data), mapping trees by species, and building a database of social service providers.

A large team of map hackers worked on mapping Kathmandu in Open Street Map to aid disaster response, and with their collaborators around the world mapped over 7,000 building footprints.

Global poverty and international development was the focus of several other projects, from building APIs for international development project performance data to measuring poverty in real time using Twitter.

The open government projects worked on adding semantic information to legislative documents, comparing legislative documents for similarity, extracting legal citations, cataloging our government representatives at the local level, and building “devops” tools for rapid deployment of VMs that might be useful in government or for open data researchers.

And there were other projects that don’t fit into any of those categories, like building Python tools for creating education curricula.

The event was organized by me (Josh Tauberer/GovTrack/@JoshData), Eric Mill (Sunlight Foundation/@konklone), Katherine Townsend (USAID/@DiploKat), Dmitry Kachaev (Presidential Innovation Fellow/Millennium Challenge Corporation/@kachok), Sam Lee (The World Bank/@OpenNotion), and Julia Bezgacheva (@ulkins/The World Bank).

Thanks to The World Bank especially, and to Google, to the participants who helped out with registration in the morning, and to everyone who came!

This was DC’s second open data day. Our first was on Dec. 3, 2011 and was co-hosted by POPVOX (Josh Tauberer) and Wikimedia DC (Katie Filbert). See what we did on the post-event recap at https://www.popvox.com/features/opendataday2011. Participants then worked on improving access to U.S. law, scanning federal spending for anomalies following Benford’s Law, understanding farm subsidy grants, building local transit apps, and keeping Congress accountable. Only about half of the participants were programmers, but everyone found a way to be involved.

It was also DC’s second international development data day. The last one was held on December 9, 2012 in the lead-up to the Development DataJam hosted by the White House’s Office of Science & Technology Policy. Those events primarily served as ideation jams to bring together issue area experts and data experts to develop new ideas and partner for new solutions. Experts were sought out to inform the discussions, but anyone with an interest in open data in development was welcome and participated.

Press coverage

DCist: Hack D.C.: Hackers Put Open Data to Use to Help Improve Local Government

The Atlantic Cities: Is There a Link Between Walkability and Local School Performance?

Greater Greater Washington: How school tiers match up with Walk Score

Greater Greater Education: Community of civic hackers for education takes shape

Would the real hacktivist please stand up?

January 18th, 2013

Professor Peter Ludlow wrote of “lexical warfare” over the term “hacktivist” in a recent New York Times blog post. Unfortunately, the war that Ludlow observed has been over for at least 10-20 years, and what might once have been a reasonable analysis of the meaning of the word is today simply wrong.

Ludlow’s Position

Ludlow depicts the war as a tug-of-war between two ends of a spectrum. On one end is what we generally call cyber crime, the sort of “hacking” portrayed in movies. The other end, in Ludlow’s description, is a “less sinister” and more generic activity. An example he gave is putting wool sweaters on trees, whatever that is. Ludlow also indicates that he believes this form of hacktivism has no “positive effect.” Ludlow’s analysis is fundamentally incorrect. There is no spectrum on which a war is occurring. And the other sort of hacktivism most certainly has a positive effect.

The Meanings of Hack (n.)

“Hack” has at least three distinct meanings as a noun. It’s a homograph, just like “mouse” and “keyboard” are (think rodents and pianos). A lot of jargon is like this. And “gay”: “gay pride” is not an attempt to tug the definition of “gay” away from “happiness”. Maybe decades ago it was. It isn’t today. “Hack” is the same way. One meaning is more or less the same as cyber crime — that much Ludlow got right.

Another meaning is the sense of hack in “party hack” or “hack journalist.” (A hack journalist is someone who takes the side of whoever their employer is at the time.) There is no “hacktivist” in this sense, but this meaning demonstrates the plausibility of the argument that I’m making: that “hack” isn’t the object of lexical warfare but instead has multiple unrelated meanings. (Thanks to Neville Ryant for reminding me of this meaning.)

The Good Sort of Hacking

The last meaning of hack is hard to pin down, and I can’t claim to define it, but it’s roughly the perverting of something’s original purpose to solve a new problem. Rube Goldberg machines are hacks. The use of the lunar lander to bring the Apollo 13 crew home was a hack. Putting folded-up newspapers under table legs to stop tables from shaking is a hack. Hacks are often creative uses of technology. Hacks are usually applauded. They’re positive, creative, even artistic.

In my neck of the woods, “civic hacking” is a term for creative, often technological approaches to solving civic problems like how to get more people to register to vote or making beautiful city maps. It has nothing to do with crime. Sometimes it has nothing to do with computers. It’s about solving real world problems.

Hacking is by no means some sort of jargon specific to the tech-nerd culture either. There’s a website devoted to hacking IKEA products called IKEA Hackers. Its creator defined hacking too:

IkeaHackers.net is a site about modifications on and repurposing of Ikea products. Hacks, as we call it here, may be as simple as adding an embellishment, some others may require power tools and lots of ingenuity.

An example is turning a pillow into a child’s Halloween costume. (Cute, and of course not criminal!) In June 2013, The Home Depot used the hashtag #HDHacks to promote DIY projects. From IKEA furniture and Home Depot supplies to computer systems, there’s a shared hacker culture around repurposing, creativity, and solving problems.

If you’re a journalist writing about hacking or hacktivism, take a moment to think about which type of hacking you mean.

Is It Warfare?

So let’s compare now: cyber crime and solving problems. This is not a natural spectrum. Not that there can’t be overlap; that’s how the words are historically related (to the best of my knowledge), going back to the 1980s when the term was first coming into mainstream use. There’s a reason the two meanings shared a single word: using technology for unintended reasons is often illegal. But it’s not because it’s hacking (in the positive sense of hacking) but because technology can do so much that it’s easy to run up against the boundary of the law. God forbid you use a copy machine (or iPod?) to copy something without permission! Stuff like that.

Is there a case of lexical warfare here? Ludlow defines what he means:

“Lexical Warfare” is a phrase that I like to use for battles over how a term is to be understood.

If lexical warfare is a battle over the single meaning of a term, that is not the case here. Civic hackers don’t particularly care that “hack” is used to refer to cyber crime. We lost that battle a long, long time ago. And cyber criminals don’t care about what civic hackers are up to, so far as I have seen.

There’s more evidence from how the verbs are used. You can “hack a server” (i.e. break in) and “hack the weather” (solve weather-related problems), but while one “hacks into” systems, one “hacks on” problems. “Hacking into voter registration” and “hacking on voter registration” mean different things. The choice of preposition (“into”/”on”) depends on which type of hack you mean, and it is evidence that the two meanings are distinct. (As a verb, by the way, “hack” has even more meanings, some totally unrelated to any of the meanings of the noun so far. Related to the problem-solving meaning, some use “hack” to mean simply to do computer programming.)

“Hack” is a case of peaceful coexistence. Problems only arise when reporters confuse the two groups. They misunderstand how the word is being used. Journalists, not only should you be clear to yourselves about which “hack” you mean, but also be clear in your writing. Us hackers — the civic hackers and others like us — don’t want to be indicted for other people’s crimes. “Criminal hacking” and “problem-solving hacking” might be a good way to be clear in writing.

Derivations

The words “hacker,” “hacktivist,” and “hacktivism” all share the same ambiguity that derives from the meanings of “hack.”

A “criminal hacktivist” is roughly someone who does “criminal hacking,” like denial of service attacks, for political purposes.

A “problem-solving hacktivist” is roughly someone who builds websites to motivate the public toward a public policy goal. The IT guys at nonprofits are problem-solving hacktivists (among many other groups of people).

In one of the articles Ludlow cites, the one in Infosecurity Magazine, hacktivism is said to be defined by Wikipedia as “the use of legal and/or illegal digital tools in pursuit of political ends.” This conflates the two meanings into one. This definition incorrectly includes anyone who emails their representatives in government, for instance. Such an action is not hacktivism because it is neither criminal hacking nor a creative or technological solution to a problem.

A “hackathon” — a hacking marathon — for the problem-solving type is when a bunch of optimistic people gather in a room and try to solve some problems.  Often with computer code. Often open-source and for the public good. Not always.

For “civic hacking,” see the discussion here by Jake Levitas. But I stand by the definition I wrote earlier: creative, often technological approaches to solving civic problems. (I’m not going to define civic…)

If you don’t know me, I’m a civic hacker and I’ve got a degree in linguistics. The title of this post of course refers to the famous Eminem song.

On 1/19/2013 I updated the post to include a third meaning of hack, “party hack.” Thanks Neville. On 1/20/2013 I added the examples “hacking into” and “hacking on” and discredited the Wikipedia definition. On 2/28/2013 I added the section on IKEA Hackers.

On 6/5/2013 I added the paragraph containing the link to Jake Levitas’s discussion on civic hacking.

On 7/1/2013, I added a link to Home Depot’s #HDHacks promotion.

On the new bulk bill XML from GPO

January 10th, 2013

The following is my reaction to today’s announcement from the Speaker on the availability of bill XML in bulk from the Government Printing Office. It’s adapted from the email I sent to Nick Judd for his article on the data. The part about institutionalizing transparency was really Daniel Schuman’s idea — sorry I didn’t attribute that! [Update: Also see Alex Howard's article.]

What we’re seeing with the bills bulk data project is how the wave of culture change is moving through government. Over the last two years the House Republican leadership has embraced open government in many ways (my 112th Congress recap | the new House floor feed). With this bills XML project, we’re seeing more legislative support agencies being involved in how the House does open government.

This isn’t a technical feat by any means, but it is a cultural feat. The House and GPO worked together to institutionalize a new way for the House to publish bulk data.

Because of the way Data.gov is managed in the executive branch, we’ve become accustomed to big announcements. The bills bulk data project and the other recent projects show that the House is taking a different approach, an incremental approach, to open government data: publish early and often, gather feedback, then go on to bigger projects. This is something open government advocates have been asking for.

As I mentioned, the tech side itself is not much. They took files that they and the Library of Congress already make available (and in some sense already in bulk) and zipped them up into up to 16 ZIP files. (4 files now, but that will probably grow to 16 by the end of the Congress.) The files involved in this project have the text of legislation but not bill status, which is what the bulk legislative data advocates have been asking for. So there’s no new data here, and not yet the data the advocates want. But it’s on the road to that.

There is one crucial thing missing from this: there is no feedback loop with the users of this data. The incremental approach can’t work unless the users of the data have a way to tell GPO what is and is not working. There is no public point of contact for these files, and I don’t even know of a private point of contact at GPO.

But that doesn’t detract from the fact that this is a good step forward.

Transparency in the 112th House

January 4th, 2013

The House Republican Leadership over the past two years really surprised me.

When the open gov tech community coalesced at the start of the 110th Congress in 2007, Democrats had just regained control of Congress after a series of ethics scandals in 2006 brought the Republican Party’s commitment to ethics into question. But despite Speaker Pelosi’s call for transparency at the start of the Democrats’ control, honestly very little happened over the following four years (the launch of HouseLive.gov and the availability of disbursements PDFs come to mind).

In fact, when calls for transparency persisted in the House — that is, Republicans asking Democrats for more transparency — we would often chalk that up to transparency being used by the minority party as a delay tactic.

But when the Republicans took over in 2011, they kept at it. With mixed success, of course. Some promises, like 72-hour delays before votes, were not taken even remotely seriously. But that shouldn’t detract from what they got right:

  • They began a moratorium on earmarks, which was somewhat successful.
  • They launched Docs.House.gov, which gave the public a heads-up about what would be happening on the floor up to a week in advance. Prior to Docs.House.Gov, (UPDATED) there was no structured data about the House calendar. (Thanks to Eric Mill for correcting my apparent exaggeration.)
  • They held a “hackathon” in December 2011, during which transparency and technology activists in the public had a chance to talk with House staff and get to understand the complexities of the House better.
  • They held a legislative data and transparency conference in February 2012, the first conference of its kind.
  • They promised legislative data, and after public outcry they formed a task force to consider it. (On the downside, we had to have an outcry.)
  • They centralized committee video webcasting and archiving infrastructure, leading to much more of committee proceedings being available over the web.
  • At the very end of the 112th Congress they made any committee documents sent to GPO available electronically by default (update: link posted).
  • The Clerk’s official list of members got a new column of bioguide IDs.
  • They began the creation of data standards for committees, which led to significant updates on Docs.House.Gov on the first day of the 113th Congress.
  • (UPDATE) They passed the DATA Act.

That said, all I ever wanted was bulk data on the status of legislation, and I haven’t gotten that. Maybe this year?

User Experience at “Tunnel Creek”: What we can all learn from The New York Times’s Snow Fall piece

December 23rd, 2012

Just a few paragraphs into The New York Times’s six-part Snow Fall series, I was captivated equally by the story and by the innovative magazine style in which the story was presented. So I began taking notes about the user experience of reading Snow Fall, knowing there would be a lot to learn for other user interface projects.
