Tuesday, October 13, 2015

Cultural Analysis of "Striking the Balance On Artificial Intelligence"

Conducting a Cultural Analysis on a Text

For this blog post, I analyzed the cultural references used in Cecilia Tilli's article "Striking the Balance On Artificial Intelligence" and how those references inform the reader of the rhetorical situation of the piece. While some of these words are not used directly in the text, I identified utilitarianism, scientific advancement, and caution as culturally informative key ideas that the text speaks to. I believe Tilli appeals to utilitarianism in our culture by repeatedly insisting that artificial intelligence should serve a purpose or function for humans, which reflects our culture's need for scientific advancements to be checked, in a sense, by a demonstrated need for them. Tilli's other central point, however, is that researchers should be cautious of developing such technology for any purpose, lest the worst potential of the technology become a reality. Ultimately, I think Cecilia Tilli's argument regarding these ideas is that artificial intelligence could greatly benefit humanity and change its course for the better, but that we as creators must be cautious as we move closer and closer to the realization of such technology.

agsandrew, "Soul Geometry Two 003" January 24, 2014 via deviantart.com.
Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. 

I believe that Tilli maintains a tight, well-organized argument in her article that speaks to the topic of caution in developmental research while grounding her discussion in the recent event of an artificial intelligence research conference and in cultural history, primarily film and modern uses of technology. Tilli uses references to science fiction films, which are built on hyper-scientific advancement, to place her discussion in easy-to-understand contexts that she then contrasts with the real situation of today. Her point, presented in this understandable light, is that artificial intelligence is still far from being realized, and even further from being given a purpose and function in our utilitarian cultural approach to technology; yet despite this distance, such research and development must be treated with caution, not fear.

By referencing films that depict out-of-control technology, Tilli effectively identifies our culture's predisposition not to trust artificial intelligence, yet she also invokes figures such as Elon Musk and Stephen Hawking, regarded as some of the brightest minds of our day, to illustrate that this technology is being handled by the best scientists we have. Additionally, Tilli touches on our culture's demand that technology have a function by referencing GPS systems, the computer logic systems seen in games, and the potential for a specialized robotic surgeon, appealing to readers' understanding and appreciation of applied technology.

However, Tilli unifies her argument by weaving points of caution among these references: such technology could be ethically troubling in its final state, could automate a threatening number of jobs in our labor market, and could even disrupt the global balance of power if not developed with caution and delicacy.

I honestly believe that Tilli constructs a strong argument that would win a reader's support because it uses references to film culture to illustrate our culture's overly worrisome disposition toward artificial intelligence, while also drawing on our culture's history of utilitarian drive and economic hardship to convince the reader to identify with her stance of caution on the developmental research of AI.
