
Nonprofits should engage with AI

As artificial intelligence (AI) continues to proliferate across industries, it's crucial that nonprofits are involved in steering its ethical development in these early days. The consequences of not doing so are already apparent in the failures of social media giants like Facebook and Twitter to serve the social good and to reduce the harm caused by their platforms. Nonprofits can have a significant impact on the ethics of how AI is used, and there are already examples of the mission-driven sector using its influence and reputation to do so.
Nonprofits have a unique perspective on the world that is grounded in their mission to create positive change in society. Their focus on societal good and values can help ensure that AI is developed and implemented in a way that aligns with the needs and desires of society as a whole. By engaging in the ethical design of AI, nonprofits can help prevent the negative consequences that can arise from the misuse of this powerful technology. This will require nonprofits to move from a "wait and see" posture to one of "engage and shape with intention" leadership.
Nonprofits often feel cash-strapped, beholden to funders, and unsure how to use their influence outside of their core missions. However, there are good real-world examples of the sector influencing and changing the way the largest, most powerful, and wealthiest companies operate. One of the most significant is the case of Apple and its decision to improve labor conditions in its Chinese factories. Nonprofit groups such as China Labor Watch and the Fair Labor Association played a key role in raising awareness of the poor working conditions in Apple's Chinese factories. This led to public pressure on Apple to take action, and ultimately the company agreed to improve working conditions and increase transparency in its supply chain. The actions of these nonprofit groups had a tangible impact on Apple's stock price, which fell by over 7% at the height of the controversy, no doubt putting pressure on both the board and the leadership to act.
Another example of nonprofits influencing tech giants is the case of Google and its involvement in the development of AI for military use. In 2018, Google employees and outside groups, including the International Committee for Robot Arms Control, raised concerns about Google's work with the Department of Defense on Project Maven, which involved using AI to analyze drone footage. As a result of the public pressure, and the awareness created among Google's internal staff, Google announced that it would not renew its contract with the Department of Defense, and it did so before its stock price was even affected.
The influence of nonprofits on the ethics of AI development is particularly important given the historical failures of social media giants to address the ethical challenges of their platforms. Facebook and Twitter are prime examples, with both companies facing criticism for allowing misinformation, hate speech, and other harmful content to flourish. Nonprofit groups have been instrumental in raising awareness of these issues after the fact, but the pressure put on these companies to take action has not been sufficient, as the problems persist to this day. Early action and shaping the narrative during a technology's growth and deployment phases are essential.
Nonprofits have a critical role to play in ensuring that the development and implementation of AI are guided by ethical principles that prioritize societal good over corporate profit. By engaging with the tech industry early on and advocating for ethical standards, nonprofits can prevent the kinds of ethical failures that we've seen in the past with Facebook and Twitter.
Through their influence on the stock prices of tech companies, nonprofits have a powerful tool for shaping the direction of AI development and ensuring that it aligns with our values as a society. Nonprofits can also help shape the underlying way AI works by pushing generative AI and large language model systems, such as ChatGPT and Bard, to use ethical, nonprofit-sector-created content in their training data. These technologies often are what they "eat": if they are trained on unethical or unvetted content, that will show up in how they create content and operate.
The sector has both the standing and the tools to act. It's time for nonprofits to step up and take a leading role in the ethical development of AI, for the sake of us all.
A practical first step would be a push by nonprofits, think tanks, and educational institutions to change the design of Microsoft's Bing ChatGPT and Google's Bard Chat implementations to include citation links to the articles and web content they use or reference within their answers, as well as publication dates for the data, studies, articles, and other content used to build those answers. The nuance of ideas is often lost or misstated by these systems, and it isn't clear whether the material used to build an answer has been superseded by new research or events. We need to push these solutions to adopt the best practices we as a society have already discovered and codified.

Chat with the author
If you'd like to make a connection and perhaps collaborate on something, I'd love to talk with you, whether you want to build your professional network or think there might be a great opportunity to work together or partner.