Workshop on the Issues of Machine Ethics in Modern Journalism Bergen, November 29-30, 2018

The aim of this workshop is to discuss various aspects of machine ethics in modern journalism and to explore the possibility of a joint grant proposal. We aim to develop a research project that produces algorithmic, legal and ethical guidelines to help journalists and the public identify and handle ethical issues in modern journalism. The workshop is made possible by a seed grant from SAMKUL.


Workshop participation is by invitation only. Get in touch with the organisers if you believe that you can contribute to the workshop goals.

Detailed motivation:  
Media services, just like other content providers, increasingly rely on software to automate the creation of news feeds. Automation of aggregated news feeds improves the efficiency of the news production process, among other things by tailoring the news agenda to individual interests. This process of pre-selecting news is necessary — there is far more content available today than a person can consider — but its ramifications for society need to be fully investigated, and adequate computational and legal measures need to be taken to ensure that automation does not undermine democracy. The following are examples of possible emerging consequences of automating news.
A pre-selection of options for a user is typically constructed from that user’s previous choices combined with the preferences of other, similar users. Offering a news selection based on what an algorithm assumes a person is interested in can lead to echo chambers and filter bubbles, isolating people from what is really happening in society.
Offering people only news stories that support their pre-existing beliefs strengthens confirmation bias, leading to polarisation in society. A polarised society is one in which compromise and agreement are difficult to reach.
A further concern is that it is becoming possible to compute a strategy for controlling public opinion in a society by identifying which reader should be given access to which type of news. The influence of mass media on public opinion and agenda setting is well established. The influence that new media have on these same societal aspects is comparatively less explored.
We are specifically concerned with the effect of automation on the journalistic social contract.
The journalistic social contract is a metaphor used to describe the democratic role that the press plays in helping the citizens maintain oversight of the government, on one hand, and helping the government maintain transparency of its decisions to the citizens, on the other hand. The impact of algorithms on journalism’s social contract and the exchange of rights and obligations in the citizen-journalism relationship in the digital era is not well understood.
With the advent of Web 2.0, the reciprocity principle in the journalist-citizen relationship has weakened. Some algorithmic affordances help citizens perform their obligations in the contract. An example is access to crowd-sourced resources that engender social mobilisation within a ‘networked’ public sphere. Other algorithmic affordances limit a citizen’s power to maintain the contract. Examples of these are surveillance, terms-of-service contracts that induce ‘digital serfdom’, and the emergence of ‘data shadows’ — a term that refers to the totality of the small traces of information that an individual leaves behind through everyday activities.
We want to bring together experts from related fields to share their concerns and insights on this intrinsically interdisciplinary phenomenon at the core of a well-functioning informed democracy. In particular, we hope that theoretical, historical, and contemporary perspectives from communication, journalism, and media studies can guide an exploration of the features that technological solutions might or ought to have. On the other hand, technical solutions — while they provide the necessary mechanisms to support this new media reality — are restricted to concerns that can be operationalised in a computer program. To what degree are these mechanisms sufficient, and for which tasks is human intervention still necessary?