At every public meeting, citizens are invited to stand up and speak for their three minutes. Few of those voices are heard beyond the room. This project makes it easier for citizens to keep tabs on public meetings. It uses data science and natural language processing to create a more visible dialogue between the public, citizens wishing to effect change, the news media and local government. The project aggregates the content of public meetings and makes it more accessible, easier to track and more shareable.

Project Description

[Image: pubc_concept_overview.png]

GOAL
Empower citizen engagement by allowing users to distill the content of local public meetings into meaningful, shareable insights.

MILESTONES
Idea: October 2015
Start Date: January 2017
End Date: Ongoing
Status: In development

TEAM
Kate Stohr
Data Scientist, Journalist

Sam Ford
Strategic Partnerships, Media Innovation

Ramya Mallya
UI/UX Designer

Carl Huebner
JavaScript Front-end

VALUE PROPOSITION
Public Comment makes it easier for users to track, search and share the outcome of public meetings.

PROBLEM
Currently, most people do not attend public meetings. Open government laws compel states and local municipalities to make records of meetings public, often in the form of video or audio recordings. However, due to the length of these recordings, few people take advantage of this public service. City managers, journalists and members of the general public find these records time-consuming to access and difficult to use.

HOW CAN WE MAKE THE CONTENT OF LEGISLATIVE MEETINGS MORE ACCESSIBLE, EASIER TO TRACK AND MORE SHAREABLE?

  • The average public meeting lasts one to two hours, with some running five hours or more

  • Members of the public may be interested in a specific issue of a multi-issue agenda

  • Decision points can occur at any point in a meeting and time code tracking by agenda item is not always readily available

SOLUTION
Public Comment uses natural language processing and machine learning to provide a searchable, shareable synopsis of recorded meeting content. Each mobile-friendly audio/video snapshot includes:

  • Key topics

  • Speaker identification (as available)

  • Actions taken (motions, continuances, consent calendar readings, etc.)
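As a minimal sketch of how "actions taken" might be flagged in a transcript, here is a keyword-based approach in Python. The phrase patterns and the `find_actions` helper are illustrative assumptions, not the project's actual implementation; a production system would likely use a trained classifier or a much richer phrase list.

```python
import re

# Illustrative patterns for common parliamentary actions; the real system
# would need a far more complete set (or a learned model).
ACTION_PATTERNS = {
    "motion": re.compile(r"\bmotion (?:to|carries|passes|fails)\b", re.I),
    "continuance": re.compile(r"\bcontinued? to\b", re.I),
    "vote": re.compile(r"\ball (?:in favor|opposed)\b", re.I),
}

def find_actions(transcript: str) -> list[tuple[str, str]]:
    """Return (action_type, sentence) pairs for sentences that look like actions."""
    results = []
    for sentence in re.split(r"(?<=[.?!])\s+", transcript):
        for label, pattern in ACTION_PATTERNS.items():
            if pattern.search(sentence):
                results.append((label, sentence.strip()))
    return results

actions = find_actions(
    "Item 4 is continued to the next meeting. "
    "I make a motion to approve the consent calendar. All in favor?"
)
```

Even this naive pass surfaces the decision points buried in a long recording, which is the core of the "actions taken" snapshot.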

HOW IT WORKS

  • Video feeds from public meetings are identified by users or by our team

  • Videos are transcribed using an automated transcription system (speech-to-text API)

  • Transcriptions are then analyzed using natural language processing and machine learning to identify key topics and phrases, people, agenda items and actions

  • These results are then imported into a table and used to create a searchable summary or digest of the public discourse at each meeting

  • Email alerts, mobile notifications and automatically generated podcasts notify users of updates to the meetings they follow, allowing them to track public meetings over time

DEVELOPMENT TIME
Phase 1: Proof of Concept
(Q1-Q2 2017)

  • Feasibility Research

  • Identify Team

  • Research competition

  • Research/Identify Partners

  • Product design sprint

  • Market demo site

  • Concept validation

Phase 2: Create Prototype
(Q3-Q4 2017)

  • Identify tech stack

  • Develop branding

  • Wireframes

  • Mockups

  • Build prototype

  • User testing/feedback

  • Secure funding

TOOLS
Python, Node.js/React Native, speech-to-text APIs

Updates