This is a research project exploring the narratives of AI, in both fiction and non-fiction (i.e. media coverage), investigating how the creation of these narratives affects our perceptions of technology, public opinion and policy making. The research is carried out by Dr Stephen Cave of the Royal Society.
The link below is the PDF of the final report.
https://royalsociety.org/~/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf
I spent quite a long time reading and studying this text one weekend. It mentioned the movie Conceiving Ada by Lynn Hershman Leeson as an example of a technology/AI narrative starring a female character in the main role. I ended up watching it and enjoying it very much.
Another document mentioned in the report that gave interesting insight was the report from the Royal Society's public dialogue on machine learning. (see link below)
In this report, four typical reactions from members of the public are observed:
– “I can personally relate to this technology, because I can see where this could have an impact on my life, whether good or bad”
– “This is an important emerging technology and it carries potential risks and benefits to society”
– “I can’t see how this would work – humans are too unique for machines to really understand us”
– “I’m suspicious about the purpose of this technology”
What I find interesting here is that I can recognize myself in each of these opinions. (Of course, I am not that unique!) One often does not recognize one's own opinion within the general opinion, yet that is exactly the point of a "general" opinion. Anyway, this reassured me that I can use myself / my gut feeling as a measure of "public opinion". Some of the people in the survey ask "What is this technology for?" and "Who is benefiting from this?", questions which are rarely asked in the media, but which we, on the other side of the screen, often suspiciously ask ourselves.
The report continues with the "concerns and opportunities" discussed by the public participants.
This technology could:
– depersonalise me and my experiences
– restrict me and others
– replace me and others
– improve how I interact with services I use
– improve how society works
When they talk about depersonalisation in particular, there are two aspects to it:
– Some feared depersonalisation because they saw machine learning as altering how they enjoy experiences they value (for instance, driverless cars taking away the pleasure of driving).
– Others were worried about depersonalisation because they did not believe that an algorithm would be able to accurately predict individuals’ needs or behaviours, particularly in people-facing roles. Instead, they worried that machines would make broad generalisations about groups of people, rather than producing a tailored, individual analysis.
When I think of the use of AI and automation in work, I am also concerned about these points:
– Will it take away the fun of working/making? Work can also be an enjoyment for some people.
– What is the "human factor" in the work we do? Is the result exactly the same if anyone/anything finishes the task, or is there a value in "you" doing it?
In the references, there was a link to an essay written by a person who works in the freezer department of a distribution centre for a large supermarket chain. It is interesting to hear an individual's experience of, and reflection on, encountering AI in the workplace.