Today the Journalism AI report was published, with the results of surveying 71 news outlets across 32 countries. How is the power of artificial intelligence affecting the news we see, and how much responsibility is the media industry taking for controlling the power that AI gives them?
Who is behind the Journalism AI report?
The report was produced in partnership between:
- The London School of Economics and Political Science (LSE)
- POLIS, the journalism think tank at LSE, and
- Google News Initiative
The report aims to analyse how AI is currently used in modern journalism. It also looks at future applications of AI across the media.
The goal? To review how news organisations can grasp the opportunities to innovate using AI technology. And, conversely, what ‘economic, ethical, and editorial threats’ this presents.
Google says they wanted to understand ‘how AI offers new powers to journalists across the reporting process, from news gathering to distribution’ and how media companies ‘must be ready to consider and carefully monitor the ethical and editorial implications of these new technologies’.
So what have we learned?
The report makes for really interesting reading. It considers how AI is already in use in applications such as identifying news trends, producing and editing content, and marketing.
Key outcomes identified are that: ‘artificial intelligence (AI) is a significant part of journalism already but it is unevenly distributed. AI is giving journalists more power, but with that comes editorial and ethical responsibilities’.
Uneven distribution refers to the size and resources of competing news organisations. Those with budgets to invest stand to significantly reduce workloads by using AI tech to carry out previously manual tasks. Smaller companies simply don’t have the resources to source and implement such technology.
How is AI impacting modern journalism?
We reported last week on the impact of spatial journalism. This allows journalists to share and explain events in a much more immersive way.
This of course has pros and cons. On the positive side, it gives viewers a heightened understanding of what is happening in the world around them. However, there are also concerns about ethics, and about the exploitation of suffering in order to gain viewers.
Of course with Facebook announcing their intentions to create a ‘dedicated place for news’ and working to repair their sour relationship with the journalism industry, we probably shouldn’t be surprised that Google are getting involved.
The way that some journalists perceive the big tech companies is summed up in this quote from the study: ‘Facebook, Google are AI-driven companies impacting the news industry by stealing eyeball time and advertising income.’
In a nutshell, AI makes reporting the news faster, smarter and more targeted. Media outlets already use a lot of this tech to filter content to be most relevant to the viewer, to create engaging content and to lead their marketing strategies.
How do the survey respondents feel about AI in journalism?
Overall, the tone seems to be positive. There is a lot of talk of saving money by automating processes. With income declining as print media is gradually overtaken by online reporting, these savings are important in helping news organisations thrive.
The news itself is never going to go away; it is how it is reported that is likely to keep evolving. With the increasing worldwide use of social media, making sure the news is reported accurately is a challenge.
There is some scepticism about whether using algorithms will enhance the quality of journalistic content. One survey response said: ‘I am concerned about journalists’ complacency and over-reliance on algorithms and how they may end up doing a disservice to our audience. Poorly trained algorithms may do more harm to journalism’.
What are the ethical issues?
The ethics of using AI are hotly debated. We looked last month at how social media platforms like Pinterest use AI technology positively, to identify and filter out potentially harmful content.
There are also much more sinister applications. The use of AI in weaponised robots is frankly terrifying. I don’t think anyone would argue that giving machines autonomy in potentially fatal circumstances has any positive aspect.
What seems clear is that legislation and guidelines are essential to ensure AI is not used in an exploitative way.
The credibility of journalists is crucial. Using AI to report and share the news creates potential exposure to error, and nobody needs to see any more ‘fake news’ arguments in the press! Another big issue is ‘deepfakes’: the use of AI to manipulate images or content.
So what are the outcomes of the report?
LSE says that the report is not intended to give a list of strategies for implementation. Rather, it is ‘an introduction to and discussion of journalism and AI’.
The aim is to identify and highlight how the news media is already using AI, and how this might change as technology develops.
Google says: ‘With AI, the news industry has an opportunity to continue to reinvent itself for the information needs and behaviour of people in our data-driven era. But with these new powers come responsibilities to maintain quality, increase editorial diversity and promote transparency of the systems they create.’
For once, I couldn’t agree more.