Wednesday, October 14, 2015

Analyzing Message in "Striking the Balance on Artificial Intelligence"

Determining the Message and Purpose in Cecilia Tilli's Article on Artificial Intelligence

In this blog post, I will assess the message of my selected text and analyze its purpose. Doing so will help me understand why the article was written and what it hopes to accomplish with its audience.

Srinath66, "Message Srinath66," 8 February 2009 via wikimedia.org.
Creative Commons Attribution 3.0 Unported License.
Of the bullet points listed on page 181 of Student's Guide to First Year Writing, several seemed useful in assessing the author's goals in writing her article.
I think that Cecilia Tilli is certainly responding to a specific event: the publication of news articles that fuel the misconception that caution amounts to fear in artificial intelligence research. As Tilli herself states, providing hyperlinks to specific articles as examples, the open letter that came out of a research meeting in Puerto Rico in January of this year was being mistaken for a fearful, doom-saying text, when in reality it was only a promise by researchers to remain aware of the implications of their work as it becomes more fully realized through technology.
Thus, Tilli is also seeking to inform her readers about artificial intelligence research and how it is conducted, since the issue is currently being misunderstood. She insists that the caution taken by scientists working with artificial intelligence is meant to keep the research responsible and is in no way a proclamation that the technology dooms our future. In fact, Tilli cites numerous examples from our film culture as evidence that our society is overly fearful of artificial intelligence, and while she acknowledges the technology's potential for harm, she reassures the reader that the doctrine of caution exists precisely to prevent those outcomes.
In so doing, Tilli persuades her readers to feel not fearful but hopeful, even excited, about the development of artificial intelligence, assuring them that safety guidelines govern its development so that the technology will benefit mankind to the greatest extent possible, and in a responsible manner.

Tilli is not, in her article, reflecting on or analyzing an event. As an active participant in the creation of research guidelines for AI, she is not discussing the topic in the critical way characteristic of an analysis or reflection. Rather, she is advocating for a particular approach to AI research and for a change in how our culture responds to advances in the field.

Tilli's message, that AI research should be conducted with caution but not fear so that its advancement proceeds with responsibility in mind, is nuanced in the sense that Tilli qualifies it. She concedes and even agrees with certain worries and anxieties over AI technology in her article, which adds depth to her message: she ultimately turns those qualifications around by arguing that those very anxieties are being taken into account and handled carefully in the steady development of AI technology, so there is nothing to fear as long as caution remains chief among the doctrines of research guidelines.
