AI For Good

Harnessing The Power of Technology Without Compromising Your Organization’s Values

*Stay tuned for Part 2: Prompt Engineering and Creative Solutions

Generative AI is expanding rapidly into every field: 92% of Fortune 500 companies have adopted OpenAI’s technology. By becoming early adopters, campaigns and nonprofits can position themselves ahead of the curve, gaining a competitive edge through increased efficiency, innovation, and impact.

Why embrace AI now? 

More than half of business leaders want to adopt Generative AI, while only 4% believe they have all the skills they need to achieve their AI goals. This skills gap presents a unique opportunity for early adopters to shape the evolution of these tools and to define their application in meaningful ways. For campaigns and nonprofits, embracing this technology can lead to more effective communication, more personalized outreach, and the ability to scale efforts in ways that were previously unimaginable.

The risks are real, but so are the opportunities:

The concerns around AI are legitimate and especially relevant for nonprofits. Issues like deepfakes, misinformation, biased algorithms, and data privacy breaches pose significant ethical risks. Any misstep in AI adoption—such as using AI tools that inadvertently spread false information or rely on biased data—can undermine the very values that nonprofits strive to uphold. 

However, the reality is that AI is not going away. Instead of avoiding AI altogether, nonprofits and campaigns can adopt a proactive approach: understand the risks, educate your team, and implement ethical safeguards. This guide will walk you through the specific ethical considerations you need to be aware of, such as data privacy, copyright issues, and bias in AI outputs. It will also offer practical advice on how to use AI in a way that is responsible, transparent, and consistent with your organization’s mission.

What you need to know to get started:

Legal considerations 

The first step in adopting AI should be understanding the legal and ethical boundaries. Knowing where you can safely operate not only alleviates anxiety but also makes it easier to innovate confidently. It might seem daunting initially, but being aware of these considerations will empower your organization to use AI responsibly.

What’s all this I hear about copyright? What can I actually use?

When we talk about AI exploiting copyrighted content, we’re looking at the input used to train models. However, that’s not all that affects you: read on to learn how end users of AI can mitigate copyright infringement risk and ensure ownership of their content.

Input data: The content that an AI model “digests” during training. 

ChatGPT is openly trained on copyrighted data, as is its image generator, DALL·E. Claude, Anthropic’s popular ChatGPT rival, markets itself as an ethically trained model, but it is nevertheless facing a high-profile copyright lawsuit.

Adobe Firefly is a safer bet: it is designed to be safe for commercial use, though it is not immune from copyright risk. It’s trained on Adobe Stock’s own image library, which critics point out contains images generated with Midjourney, an AI tool known to be trained on copyrighted input.

Why you should care about this: 

The AI-generated content you share is not free from copyright risk, even if the platform you use claims otherwise. Copyright law is still evolving alongside AI, and the most reliable way to keep your content clear of litigation and ethically sound is to use AI as a drafting tool, never as a direct-to-client or direct-to-consumer product.

When your organization adopts AI, it should also commit to staying current on AI legislative developments. Here at Spero Studio, we track the latest policies, legislation, and best practices so that your organization can stay informed without carrying the full burden itself.

Output data: What you actually create when you use a generative AI tool.

When you use a generative AI tool, the output data is any text, images, code, or other creative content the AI produces. From a technical standpoint, that content has multiple contributors: the developers who wrote the algorithm, the material the model was trained on, and the prompt you provide.

There is significant debate surrounding whether these AI-generated works can be copyrighted and, if so, who owns them. The U.S. Copyright Office explicitly states that it will only register original works “created by a human being.”

Why you should care about this:

As noted above, the safest bet is to use AI as a drafting tool. You must add your own creative input: touch every pixel, edit every sentence. If your work involves substantial AI generation without significant human modification, you face potential copyright infringement claims and legal backlash. Contributing your own creativity through each round of edits reinforces your claim to the work and establishes it as a product of human effort.

Fair Use: Safety Guidelines

In essence, fair use permits the use of copyrighted material for a limited and “transformative” purpose, such as commenting upon, criticizing, or parodying a copyrighted work. Ongoing lawsuits are challenging the idea that training AI models on copyrighted data qualifies as fair use. Either way, fair use remains one of the most cited copyright doctrines, and complying with its guidelines is a key step toward staying legally protected and ethically sound.

Why you should care about this: If you consistently integrate your own creativity into AI-generated content, you don’t need to worry as much about potential legal pitfalls.

Here are the biggest takeaways for creators:

  • Transformation is key: Always add your own substantial creative input to the AI-generated content. Edit, modify, and build upon the AI output to make it truly your own.

  • Know your source materials: Understand whether the AI draws from highly creative works, and approach such outputs with caution.

  • Avoid over-reliance: Use smaller portions of AI-generated content and avoid taking the most substantial parts of any original work.

  • Consider market impact: Avoid uses that could substitute for or compete directly with the original work in the marketplace.

What about creating political content? 

Here’s the deal: policies around generative AI are constantly changing. OpenAI bans content creation for political campaigning, but how far that ban extends is evolving. While it currently covers all campaign content, enforcement focuses primarily on custom-built applications for political campaigning and lobbying, and on misleading content that may deter people from voting or otherwise undermine participation in elections.

That isn’t stopping organizations on both sides of the political aisle from developing and deploying content for political campaigns.  

So, what does this mean for you? 

  • Stay informed about the latest policies and guidelines. AI policy is a rapidly changing field, and the boundaries of what is permissible can shift quickly.

  • Steer clear of developing custom-built AI applications explicitly designed for political campaigning or lobbying without speaking with a legal team.

  • As always, focus on using AI for creative drafting, not for developing a final product.

  • Avoid manipulation: Steer away from using AI to generate content that could be interpreted as trying to deceive, manipulate, or mislead the public. This includes refraining from creating deepfakes, false narratives, or anything that could be seen as voter suppression.

  • Stay transparent and authentic: Always be transparent about the use of AI in content creation. Authenticity is critical in political messaging, so emphasize that AI tools are simply one part of your broader creative process.

  • Monitor changes: Keeping an eye on policy and legal developments is key for compliance.

Ethical Considerations

AI reflects and perpetuates racism. Research proves it: 

  • AI-driven targeted advertising can produce discriminatory outcomes. Researchers found that ads with content stereotypically associated with Black users (ex: hip-hop) primarily reach Black users, while ads associated with White users (ex: country music) mostly reach White users, even when the advertiser does not intend for the ad to be shown predominantly to any specific group. 

  • Large language models are often trained on data that underrepresents minority groups, leading to biased outputs that do not reflect the diversity of real-world language use.

  • Many AI language models can encode and perpetuate gender and racial stereotypes, including associating Black communities with crime and White communities with professional roles. 

  • Content moderation algorithms disproportionately flag or remove content from marginalized communities, including posts by Black and Brown creators, due to inherent biases in moderation policies and tools. 

  • Automated translation tools often fail to accurately handle slang, dialects, and culturally specific phrases, leading to mistranslations and culturally inappropriate content.

Dhanaraj Thakur, PhD, an instructor in the Department of Communication, Culture and Media Studies at Howard University and research director at the Center for Democracy and Technology, encapsulates the essence of AI’s racial problem: AI cannot empathize with human experiences or understand the nuances of race.

Without conscious effort to mitigate these biases, AI systems will continue to perpetuate discriminatory practices and outcomes. Recognizing AI’s limits in understanding human experience, particularly around race, organizations must take proactive measures to reduce bias and ensure fair outcomes. Here are targeted solutions for organizations that care about promoting equity and inclusiveness in their content generation:

  • Customize and fine-tune outputs. Adjust AI-generated content to better align with the cultural context and preferences of Black and Brown communities: write prompts that include culturally relevant terms, community-specific events, and references, and fine-tune sentiment analysis tools with additional data that captures the linguistic, cultural, and social diversity of these communities (a minimal code sketch of this approach follows this list).

  • Create opportunities for direct engagement with Black and Brown audiences to gather feedback on how they perceive AI-driven content. Use surveys, focus groups, or community forums to learn about their preferences and concerns, and adjust your use of AI tools accordingly.

  • Provide training for your team on the potential biases of AI tools and how they can manifest. Ensure that all team members understand the importance of cultural sensitivity and inclusivity in content creation. Equip them with strategies to identify and mitigate bias when using AI tools.

  • Be transparent with your audience about when and how you use AI tools. Explain how AI assists in content creation and the steps you take to ensure the content is culturally sensitive and relevant. Transparency builds trust and encourages feedback that can help refine your AI practices.

  • Always have humans, particularly those who are culturally competent, review AI-generated content before publishing. This step is crucial for catching any biased language, misinterpretations, or offensive content that AI tools might miss. Encourage team members from diverse backgrounds to provide feedback on AI-generated outputs to ensure they resonate with the intended audience.
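
To make the first bullet above concrete, here is a minimal sketch of what a culturally informed, human-reviewed drafting workflow could look like in code. It assumes Python and OpenAI’s official client library; the model name, the prompts, and the example context are illustrative placeholders rather than recommendations, and the community context itself should come from your own engagement work with the audiences you serve.

```python
# A minimal sketch, not a definitive implementation. Assumes the official
# `openai` Python package (pip install openai) and an OPENAI_API_KEY in the
# environment. Model name, prompts, and example context are placeholders:
# swap in your own tool and, crucially, context gathered from your own
# community engagement (surveys, focus groups, forums).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_with_context(topic: str, community_context: str) -> str:
    """Ask the model for a draft, explicitly supplying cultural context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are drafting outreach copy for a nonprofit. "
                    "Ground your language in the community context provided, "
                    "avoid stereotypes, and flag anything you are unsure of."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Community context:\n{community_context}\n\n"
                    f"Draft a short social post about: {topic}"
                ),
            },
        ],
    )
    return response.choices[0].message.content


def require_human_signoff(draft: str) -> bool:
    """Treat AI output as a draft, never a final product: a person approves."""
    print("--- AI DRAFT (not for publication) ---")
    print(draft)
    answer = input("Approve after culturally competent review? [y/N] ")
    return answer.strip().lower() == "y"


if __name__ == "__main__":
    # Example context, the kind you might gather from surveys or forums.
    context = (
        "Audience: longtime residents of a historically Black neighborhood; "
        "annual block party coming up; preferred tone: warm and direct."
    )
    draft = draft_with_context("a voter registration drive", context)
    if require_human_signoff(draft):
        print("Approved: hand off for final human edits before publishing.")
    else:
        print("Not approved: revise the prompt or write it yourself.")
```

The sign-off step mirrors the final bullet above: nothing the model produces should reach your audience until a culturally competent reviewer has approved it.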

As AI becomes an essential tool for organizations, nonprofits, and campaigns, it’s important to adopt it responsibly while aligning with your mission and values. Spero Studio is here to help you navigate these complexities, from understanding practical and ethical challenges to leveraging AI for good. Reach out to us for support on how to integrate AI effectively into your operations, ensuring your technology use is transparent, fair, and culturally sensitive.

Stay tuned for our next article, where we will dive into prompt engineering and creative solutions—practical strategies to use AI tools more effectively while upholding your commitment to equity and inclusion. Let’s work together to harness the power of AI without compromising the principles that define your organization.

Paden McNiff

Paden, Account Director at Spero Studio, is a creative political strategist whose standout roles include serving as Digital Director at Activate America, a national Super PAC, where she led a digital overhaul that reached over 10 million voters. Her strategic acumen also extended to roles with ZEV2030 and The Next Generation, where she spearheaded social media campaigns and developed comprehensive digital toolkits, mobilizing constituents across California toward the goal of 100% zero-emission new vehicle sales by 2030. Paden has been instrumental in amplifying the digital presence of political campaigns and nonprofits, enhancing community engagement and driving policy advocacy forward.
