In December 2019, the weekly news magazine The Economist published an interview with GPT-2, an artificial intelligence trained to generate text. Asked questions about the state of the world in 2020, the AI responded to the question “What is the future of AI?” by saying, “It would be good if we used the technology more responsibly. In other words, we should treat it like a utility, like a tool. We should put as much effort into developing the technology as necessary, rather than worrying that it’s going to harm us and destroy our lives.” Though the program did not truly understand what it was saying and gave generally vague answers, it was impressive enough that its developer, OpenAI, introduced a more powerful successor, GPT-3, earlier this year. Since then, the key development at the intersection of artificial intelligence and journalism has been the growing awareness and adoption of AI in newsrooms across the globe. Such developments raise questions about the potential impact of artificial intelligence on journalism in the future: What effect might the technology have on key areas such as democracy, diversity in reporting and journalistic public values? The story quotes University of Amsterdam professor Natali Helberger, who said that while artificial intelligence has transformative power, providing new ways to engage with readers and making it easier for readers to find information and be better informed, journalists must take care not to abuse that power and must protect fundamental rights and freedoms. Specifically, she noted concern over a lack of structural, independent funding in this area. “A lot of tech innovation in journalism is funded by the Google News Initiative. It’s really cool that they do it,” she said in the article. “But Google is a company, right? Media play a key role in our democracy and they should always be independent.”
The article raises a couple of ethical issues. One is general: journalists must determine the best way to implement artificial intelligence in reporting without abusing its power, while still protecting fundamental rights and freedoms. More specifically, Helberger’s concern over a lack of independent funding presents an ethical dilemma: Is it ethical for journalists to perform studies and research this technology if the funding comes from a private company, in this case Google, rather than from independent sources? Does that create a conflict of interest?
One side of this dilemma holds that implementing technology makes reporting more efficient and that the technology itself presents no ethical dilemmas; such issues arise only from how the technology is used. On the question of funding, this view argues that AI’s potential to transform how the profession engages with its audience and how readers access information outweighs concerns over the lack of independent funding for such research.
Another side of this ethical issue is that any form of technology is embedded with values and that those values can’t be minimized — for example, the value of telling the truth. On the issue of funding for research on artificial intelligence, it can be argued that independence is a core value of journalism along with truthfulness and objectivity. By accepting funding from a private company like Google rather than independently funding such research, that value is compromised.
The first part of this ethical dilemma can be analyzed by looking at what the book says about technological advancements in general. In discussing some shortcomings of Facebook’s advertising algorithm, the book notes that many activities technology makes possible require the use of a computer’s giant data-processing capabilities. This suggests two different views of technology: one in which technology is a means to increase efficiency, holds no inherent values, and all ethical dilemmas arise from how it is implemented; and a second in which any form of technology is inherently embedded with values, and those values must be understood before the technology is adopted. The book does not say which, if either, of these views is “correct,” noting that “Being a competent and ethical professional does not require you to resolve this deeply philosophical debate. But it does require you to acknowledge that it exists and to think clearly about whether, in the process of claiming efficiency, you have overlooked important questions of values” (68). According to the book, then, it’s possible to act ethically regardless of which view of technology you adopt, so long as you acknowledge the opposite view and take care to consider both questions of values and questions of efficiency. Regarding the second half of the issue, whether the lack of independent funding for AI research is an ethical problem, the book does not explicitly list independence among its key ethical news values (accuracy, confirmation, tenacity, dignity, reciprocity, sufficiency, equity, community and diversity). However, it notes both that “no list of ethical news values should be considered conclusive” (42) and that the Corporation for Public Broadcasting added transparency as an ethical news value. Given that, it seems reasonable that independence could also be added as an ethical news value.
A news organization’s credibility is massively important, and transparency, accuracy and independence all contribute to it. Readers want reporters and news organizations to be independent so they know the reporting isn’t influenced by outside people or companies. It’s important to maintain some independence in journalism, and I don’t think the book would suggest that being funded entirely by a private company is completely ethical, but accepting some funding from a private company does not seem unethical on its own.
In conclusion, after analyzing the book, it’s possible to act ethically on the first half of the issue no matter which side of the technological debate you fall on. You can view technology as a means to efficiency, devoid of inherent values, or as a form with embedded values that shouldn’t be minimized; either side can act ethically, according to the book, as long as it acknowledges the other and considers both aspects. On the second aspect of the dilemma, the lack of independent funding for research, accepting research funding from private organizations such as Google does not appear inherently unethical on its own. Independence is not among the ethical news values the book explicitly lists, but the remark that the list should not be considered conclusive leaves room for debate. Independence in journalism is important, as is transparency, a value also not listed in the book but one that has been adopted by the Corporation for Public Broadcasting. Therefore, it’s wise to maintain some form of independence, and funding some research projects independently appears advisable, but accepting some funding from private companies does not appear to be inherently unethical in and of itself.