Robot Nonprofits and Algorithmic Philanthropy

By Lucy Bernholz, Ph.D.

This post is part of a series of excerpts from Philanthropy and Digital Civil Society: Blueprint 2018, the ninth annual industry forecast from Lucy Bernholz. Read the entire Blueprint series and join the conversation on social media with #blueprint2018.


Big data, open websites, and algorithms make certain kinds of research much easier. Scanning job postings to understand what an organization is doing, as we did with the Chan Zuckerberg Initiative, is an old investigative reporting technique, but today's tools greatly simplify it. We will undoubtedly see more of this in the future. Small teams, such as the three people at Transparency Toolkit, build open source code and distribute algorithmic tools that reporters can use to follow developments across entire industries. I haven't seen such a tool applied to nonprofits or philanthropy. Yet.

Anecdotes abound about nonprofits, social enterprises, impact investors, and philanthropists using more evolved digital tools than just websites and social media. Certain subsectors, such as the arts, humanitarian aid, and journalism, often seem to lead the way. Sure enough, the International Federation of Red Cross and Red Crescent Societies has examined the use of “chatbots for good,” and the World Food Programme has experimented with Facebook chatbots in delivering services. The expansion of predictive policing software, built from data sets and proprietary algorithmic analysis, and the use of Google’s Deep Dream neural network software to make art provoke us to ask, “Who owns the output?” and “Who is responsible for the decisions informed by the software?”

Advocates of artificial intelligence (large-scale data sets plus algorithms designed to learn assigned tasks and improve based on programmed analysis) abound in the business world. In the social sector, groups such as NESTA in the U.K. and the Charities Aid Foundation have published reports on the possibilities of AI for philanthropic practice. In 2017, the Miami-based Astrient Foundation, a scholarship funder supporting disadvantaged students, rebranded itself as Philanthropy.ai, claiming to use artificial intelligence to power its scholarship program. The city of Turin, Italy, launched a Data Science for Philanthropy center late in 2017 that will be important to watch.

Civil society and philanthropy, however, have bigger roles to play than merely using these tools. Efforts to understand and think about the regulatory demands of artificial intelligence are growing. The World Economic Forum Network on AI, IoT, and the Future of Trust worked with the AI Now Institute, a research effort led by humanists and engineers, to look at the immediate societal changes caused by machine learning and big data approaches to criminal justice, health care, and employment. Research on the social and political implications of AI attracted philanthropic support from a group of foundations (Knight, Omidyar, Hewlett) and individuals (Reid Hoffman, Jim Pallotta) that joined together to create a $27 million Ethics and Governance of Artificial Intelligence Fund. Elon Musk has invested in a different $10 million effort to promote public interest in AI, the Open Philanthropy Project published a review of some of the risks of AI, and Stanford University has undertaken a 100-year study of AI.

Foundations are supporting researchers to examine other technologies, including the blockchain. At this point the goal is to produce guidelines or principles that can guide the use of this and other technologies for “social good.” There are many such guidelines—including the Signal Code from the Harvard Humanitarian Initiative, Responsible Data’s principles, the principles and codes in the Digital Impact Toolkit, the Data for Development principles, and others. Consumer Reports has even added digital security into the ratings it now offers on consumer electronics. What we haven’t yet figured out is how to mesh these codes and make them default norms that are easy to integrate into software.

Another important opportunity for civil society is as a place where communities take control of their data to advance their social, political, and economic well-being. We can see this in apps such as Streetwyze and associations such as 18 Million Rising that build “digital hygiene” and tech independence directly into their advocacy. International Indigenous data charters, data principles for health equity, and even ethical principles for government experimentation are all indicators that people, and communities, recognize the value of their digital data.

Anni Rowland-Campbell, a board member of the Web Science Trust, argues that philanthropy has a bigger role to play than simply conducting or supporting the research about the governance and ethics of science and technology. In a recent provocation, she argues that philanthropy must stand up for humans in the digital age. “[Philanthropy] must work to shape the value system that will determine how government and business operates both now and as the digital world evolves.”

This is a lot to ask of philanthropy, but it does point to four critical understandings: first, that we are all affected by the rapid changes in technology; second, that the ubiquity of mobile and remote sensors means we now live in “cyber-physical systems”; third, that we can’t leave the development of these systems to governments and businesses; and fourth, that values matter—not only the values of the market (efficiency, profit, ownership) or those of governments, but those of people and civil society.

