Our Approach to AI

With many in political tech talking about recent advancements in AI, CTA thought it apt to share the internal perspective and guidance we’ve offered our team.

As software engineers, product managers, analysts, and political tech practitioners, we’ve long leveraged tools that use predictive analytics and automation; without them, our jobs would be exceedingly tedious. For many years, our code editors have performed text prediction and offered guidance built on self-learning algorithms. And political analysts have relied on machine learning to help write persuasive communications to voters.

While the advancements we’ve seen in the past few months in generative AI are exciting, CTA remains cautiously optimistic about the future of AI and its implications for political tech. In our work at CTA, we foremost think about the privacy, security, and reliability of the data we hold for our partners. There are many open legal and ethical questions about the ownership of data produced by generative AI, and we have significant concerns about the privacy of data input into AI systems.

As a result, we’ve asked our team to be extremely judicious in their usage of AI systems and to treat them as they would any other Internet-based service.

We would never provide private CTA or partner data to any third-party service, so we won’t provide it to Google Bard, OpenAI’s ChatGPT, Microsoft’s GitHub Copilot, or any future AI platform.

We’re excited to see the innovative work of our partners in the political tech space, and we hope to see new platforms used responsibly, balancing innovation with the supreme responsibility all of us share to protect progressive organizations’ tactics, strategies, and, most importantly, American voters’ private data.

Here’s a summary of what we’re asking of ourselves and our team:

  1. Be secure. First and foremost, protect CTA’s privacy and the privacy of our partners.

  2. Be curious, but cautious. Feel free to investigate whether generative AI can be helpful — e.g., in starting a short Python script — but be cautious about simply copying and pasting the results! Just as you wouldn’t use something from Google, StackOverflow, or Wikipedia without critical thinking, don’t mindlessly believe everything AI tells you.

  3. Be transparent. Share internally what’s been helpful and what’s not helpful so others can learn and so that we, as an organization, can smartly maintain an up-to-date understanding of the opportunities and threats AI presents.
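To make item 2 concrete, here’s a hypothetical illustration of the kind of quick check we mean. The function and the sample data are invented for this example (no real partner or voter data); the point is that a short script an AI assistant might draft deserves a sanity check before it goes anywhere near real work:

```python
# Hypothetical example: the kind of short script an AI assistant might draft.
# Before copy-and-pasting output like this, verify it with a quick test.

def dedupe_emails(emails):
    """Return the emails with duplicates removed, preserving first-seen order.

    Comparison is case-insensitive, since email addresses usually are.
    """
    seen = set()
    result = []
    for email in emails:
        key = email.strip().lower()
        if key not in seen:
            seen.add(key)
            result.append(email.strip())
    return result

# A quick sanity check catches common AI-generated mistakes, such as losing
# the original order or letting case-variant duplicates slip through:
sample = ["Alice@example.org", "bob@example.org", "alice@example.org "]
assert dedupe_emails(sample) == ["Alice@example.org", "bob@example.org"]
```

A two-line assertion like this takes seconds to write and is exactly the critical-thinking step we’re asking for before trusting generated code.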
