Wednesday, October 14, 2015

Analyzing My Text's Cultural Setting

Assessing the Cultural Setting of "Striking the Balance On Artificial Intelligence"

In this blog entry, I will analyze the cultural background behind Cecilia Tilli's article on artificial intelligence at slate.com and assess how that background speaks to the norms of today's culture.


Cryteria, "The camera eye of HAL9000, an artificial intelligence in 2001: A Space Odyssey" 1 October 2010 via wikipedia.com.
Attribution 3.0 Unported (CC BY 3.0) License.

1. I think that the primary cultural belief shaping this article is our society's predisposition to distrust artificial intelligence. As the author, Tilli, notes, the film industry is full of movies that play upon this distrust of advanced, futuristic technology, and she explicitly names science fiction films such as 2001: A Space Odyssey and Terminator in her writing. This belief that artificial intelligence is dangerous is the reason why, in January 2015, researchers working on the development of artificial intelligence met in Puerto Rico to create goals and guidelines for the technology. Additionally, the cultural value of use and function, or utilitarianism, plays a large part in the text because today we tend to view technology in terms of its usefulness, and the author writes to this evaluation that we perform when considering new scientific advancements.

2. The text tackles these cultural values and beliefs directly by stating their influence on the issue of artificial intelligence development. Tilli specifically cites science fiction films and how they speak to our society's fear of technology, and she uses this evidence to explain why news sources interpret scientists' caution in developing AI as fear. She also speaks directly to this cultural anxiety by validating it with examples of ways AI could negatively impact our world, such as automation of jobs or a shift in global power, and then offering an optimistic view of the technology's potential. Most importantly, Tilli appeals to our cultural value of utility in scientific advancements to justify a steady but cautious development of artificial general intelligence.

3. The text, as stated above, targets the cultural fear of artificial intelligence, but rather than criticizing that fear outright, it specifies in what ways it is valid and in what ways it is not. Tilli criticizes the fear over AI only in instances where caution in developing the technology is recklessly or uncritically interpreted as fear and an allusion to doom. Overall, Tilli supports a careful approach to AI research, but she frames this support by stating the practical benefits of having such a technology, so as to ease our cultural fears and turn them into directed caution that can accompany steady progress toward realizing AI.
