<div class="first-page">
<div id="title_edition">
A Nourishing Network
</div>
<div id="title">
Re-Centralization of AI Focusing on Social Justice
</div>
<div id="author">
by Adnan Hadzi, Denis Roio
</div>
<pre id="ascii_blob">
----------------------------------a-----------------------------------
-------------------------- - - - - - -----------------------------
----------------------n--n--n--n--n--n--n--n--n-----------------------
--------------------o----o---o----o----o---o----o---------------------
----------------u-----u-----u-----u-----u-----u-----u-----------------
-------------r------r------r------r------r------r------r--------------
----------i-------i-------i-------i-------i-------i-------i-----------
-------s--------s--------s--------s--------s--------s--------s--------
-----h---------h--------h---------h---------h--------h---------h------
----i---------i---------i---------i---------i---------i---------i-----
----n---------n---------n---------n---------n---------n---------n-----
----g---------g---------g---------g---------g---------g---------g-----
----- --------- -------- --------- --------- -------- --------- ------
-------n--------n--------n--------n--------n--------n--------n--------
----------e-------e-------e-------e-------e-------e-------e-----------
-------------t------t------t------t------t------t------t--------------
----------------w-----w-----w-----w-----w-----w-----w-----------------
--------------------o----o---o----o----o---o----o---------------------
-------------------------r--r--r--r--r--r--r--------------------------
---------------------------- k-kk-k-kk-k -----------------------------
</pre>
</div>
<header id="pageheader-issue">
A Nourishing Network
</header>
<header id="pageheader-theme">
Re-Centralization of AI <br> Focusing on Social Justice
</header>
<div class="essay_content">
<p>
<pre id="first_letter_mel">
██╗
██║
██║
██║
╚═╝
</pre>
n order to lay the foundations for a discussion around the argument that the adoption of artificial intelligence (AI) technologies benefits the powerful few,<sup><a href="#fn1" class="footnote-ref" id="fnref1" role="doc-noteref"><sup>1</sup></a></sup> focusing on their own existential concerns,<sup><a href="#fn2" class="footnote-ref" id="fnref2" role="doc-noteref"><sup>2</sup></a></sup> we decided to narrow our analysis of the argument to social justice (i.e. restorative justice). This paper represents an edited version of Adnan Hadzi’s text on Social Justice and Artificial Intelligence,<sup><a href="#fn3" class="footnote-ref" id="fnref3" role="doc-noteref"><sup>3</sup></a></sup> exploring the notion of humanised artificial intelligence<sup><a href="#fn4" class="footnote-ref" id="fnref4" role="doc-noteref"><sup>4</sup></a></sup> in order to discuss potential challenges society might face in the future. The paper does not discuss current forms and applications of artificial intelligence because, so far, there is no AI technology which is self-conscious and self-aware, able to deal with emotional and social intelligence.<sup><a href="#fn5" class="footnote-ref" id="fnref5" role="doc-noteref"><sup>5</sup></a></sup> It is a discussion around AI as a speculative hypothetical entity. The question could then arise: if such a speculative self-conscious hardware/software system were created, at what point could we talk of personhood? And what criteria could there be in order to say an AI system was capable of committing AI crimes?
</p>
<p>
Concerning what constitutes AI crimes, the paper uses the criteria given in Thomas King et al.’s paper Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions,<sup><a href="#fn6" class="footnote-ref" id="fnref6" role="doc-noteref"><sup>6</sup></a></sup> where King et al. coin the term “AI crime”. We discuss the construction of the legal system through the lens of the political involvement of what could be considered ‘powerful elites’<sup><a href="#fn7" class="footnote-ref" id="fnref7" role="doc-noteref"><sup>7</sup></a></sup>. In doing so we will demonstrate that it is difficult to prove that the adoption of AI technologies is undertaken in a way which mainly serves a powerful class in society. Nevertheless, analysing the culture around AI technologies with regard to the nature of law, with a philosophical and sociological focus, enables us to demonstrate a utilitarian and authoritarian trend in the adoption of AI technologies. Mason argues that “virtue ethics is the only ethics fit for the task of imposing collective human control on thinking machines”<sup><a href="#fn8" class="footnote-ref" id="fnref8" role="doc-noteref"><sup>8</sup></a></sup> and AI. We will apply virtue ethics to our discourse around artificial intelligence and ethics.
</p>
<p>
As an expert in AI safety, Steve Omohundro believes that AI systems are “likely to behave in antisocial and harmful ways unless they are very carefully designed.”<sup><a href="#fn9" class="footnote-ref" id="fnref9" role="doc-noteref"><sup>9</sup></a></sup> It is through virtue ethics that this paper proposes for such a design to be centred around restorative justice, in order to retain control over AI and thinking machines, following Mason’s radical defence of the human and his critique of current thought within trans- and post-humanism as a submission to machine logic.
</p>
<p>
The paper will conclude by proposing an alternative, practically unattainable, approach to the current legal system: looking into restorative justice for AI crimes,<sup><a href="#fn10" class="footnote-ref" id="fnref10" role="doc-noteref"><sup>10</sup></a></sup> and how the ethics of care could be applied to AI technologies. In conclusion the paper will discuss affect<sup><a href="#fn11" class="footnote-ref" id="fnref11" role="doc-noteref"><sup>11</sup></a></sup> and humanised artificial intelligence with regard to the emotion of shame when dealing with AI crimes. In this paper we aim at re-centralizing AI ethics through social justice, with a focus on restorative justice, allowing for an advanced jurisprudence in which human and machine can work in symbiosis on reaching virtue ethics, rather than being in conflict with each other.
</p>
<p class="subheading">
The Disciplinary Power of Artificial Intelligence
</p>
<p>
In order to discuss AI in relation to personhood this paper follows the descriptive psychology method<sup><a href="#fn12" class="footnote-ref" id="fnref12" role="doc-noteref"><sup>12</sup></a></sup> of the paradigm case formulation<sup><a href="#fn13" class="footnote-ref" id="fnref13" role="doc-noteref"><sup>13</sup></a></sup> developed by Peter Ossorio.<sup><a href="#fn14" class="footnote-ref" id="fnref14" role="doc-noteref"><sup>14</sup></a></sup> Similar to how some animal rights activists call for certain animals to be recognised as non-human persons,<sup><a href="#fn15" class="footnote-ref" id="fnref15" role="doc-noteref"><sup>15</sup></a></sup> this paper takes on the notion of AI as a non-human person able to reflect on ethical concerns.<sup><a href="#fn16" class="footnote-ref" id="fnref16" role="doc-noteref"><sup>16</sup></a></sup> Here Wynn Schwartz argues that “it is reasonable to include non-humans as persons and to have legitimate grounds for disagreeing where the line is properly drawn. In good faith, competent judges using this formulation can clearly point to where and why they agree or disagree on what is to be included in the category of persons.”<sup><a href="#fn17" class="footnote-ref" id="fnref17" role="doc-noteref"><sup>17</sup></a></sup> In the case of AI technologies we ask whether the current vision for the adoption of AI technologies, a vision which mainly supports the military-industrial complex through vast investments in military AI,<sup><a href="#fn18" class="footnote-ref" id="fnref18" role="doc-noteref"><sup>18</sup></a></sup> is a vision that mainly benefits powerful elites.
</p>
<p>
In order to discuss these questions, one has to analyse the history of AI technologies leading to the kind of ‘humanised’ AI system this paper posits. The old-fashioned approach,<sup><a href="#fn19" class="footnote-ref" id="fnref19" role="doc-noteref"><sup>19</sup></a></sup> which some may still say is the contemporary approach, was to research primarily into ‘mind-only’<sup><a href="#fn20" class="footnote-ref" id="fnref20" role="doc-noteref"><sup>20</sup></a></sup> AI technologies/systems. Relying on high-level reasoning, researchers were optimistic that AI technology would quickly become a reality.
</p>
<p>
Those early AI technologies were a disembodied approach using high-level logical and abstract symbols. By the end of the 1980s researchers found that the disembodied approach was not achieving even the low-level tasks humans could easily perform.<sup><a href="#fn21" class="footnote-ref" id="fnref21" role="doc-noteref"><sup>21</sup></a></sup> During that period many researchers stopped working on AI technologies and systems, and the period is often referred to as the “AI winter”.<sup><a href="#fn22" class="footnote-ref" id="fnref22" role="doc-noteref"><sup>22</sup></a></sup> Rodney Brooks then came forward with the proposition of “Nouvelle AI”,<sup><a href="#fn23" class="footnote-ref" id="fnref23" role="doc-noteref"><sup>23</sup></a></sup> arguing that the old-fashioned approach did not take into consideration motor skills and neural networks. Only by the end of the 1990s did researchers develop statistical AI systems without the need for any high-level logical reasoning;<sup><a href="#fn24" class="footnote-ref" id="fnref24" role="doc-noteref"><sup>24</sup></a></sup> instead AI systems were ‘guessing’ through algorithms and machine learning. This signalled a first step towards humanised artificial intelligence, as it resembles how humans make intuitive decisions;<sup><a href="#fn25" class="footnote-ref" id="fnref25" role="doc-noteref"><sup>25</sup></a></sup> here researchers suggest that embodiment improves cognition.<sup><a href="#fn26" class="footnote-ref" id="fnref26" role="doc-noteref"><sup>26</sup></a></sup> With embodiment theory Brooks argued that AI systems would operate best when computing only the data that was absolutely necessary.<sup><a href="#fn27" class="footnote-ref" id="fnref27" role="doc-noteref"><sup>27</sup></a></sup> Further, in Developing Embodied Multisensory Dialogue Agents, Michal Paradowski argues that without considering embodiment, e.g. the physics of the brain, it is not possible to create AI technologies/systems capable of comprehension.
</p>
<p>
Foucault’s theories are especially helpful in discussing how the “rule of truth” has disciplined civilisation, allowing for an adoption of AI technologies which seems to benefit mainly the upper class. However, should we then consider the notion of ‘deep-truth’ as the unwieldy product of deep learning AI algorithms? Discussions surrounding truth, Foucault states, form legislation into something that “decides, transmits and itself extends upon the effects of power”<sup><a href="#fn28" class="footnote-ref" id="fnref28" role="doc-noteref"><sup>28</sup></a></sup>. Foucault’s theories help to explain how legislation, as an institution, is rolled out throughout society with very little resistance, or “proletarian counter-justice”<sup><a href="#fn29" class="footnote-ref" id="fnref29" role="doc-noteref"><sup>29</sup></a></sup>. Foucault explains that this has made the justice system and legislation a for-profit system. With this understanding of legislation, and of social justice, one does need to reflect further on Foucault’s notion of how disciplinary power seeks to express its distributed nature in the modern state. Namely, one has to analyse the distributed nature of those AI technologies, especially through networks and protocols, so that the link can now be made to AI technologies becoming ‘legally’ more profitable in the hands of the upper class.
</p>
<p>
In Protocol, Alexander Galloway describes how these protocols changed the notion of power and how “control exists after decentralization”<sup><a href="#fn30" class="footnote-ref" id="fnref30" role="doc-noteref"><sup>30</sup></a></sup>. Galloway argues that protocol has a close connection to both Deleuze’s concept of control and Foucault’s concept of biopolitics<sup><a href="#fn31" class="footnote-ref" id="fnref31" role="doc-noteref"><sup>31</sup></a></sup> by claiming that the key to perceiving protocol as power is to acknowledge that “protocol is an affective, aesthetic force that has control over life itself.”<sup><a href="#fn32" class="footnote-ref" id="fnref32" role="doc-noteref"><sup>32</sup></a></sup> Galloway suggests that it is important to discuss more than the technologies, and to look into the structures of control within technological systems, which also include underlying codes and protocols, in order to distinguish between methods that can support collective production, e.g. sharing of AI technologies within society, and those that put the AI technologies in the hands of the powerful few.<sup><a href="#fn33" class="footnote-ref" id="fnref33" role="doc-noteref"><sup>33</sup></a></sup> Galloway’s argument in the chapter Hacking is that the existence of protocols “not only installs control into a terrain that on its surface appears actively to resist it”<sup><a href="#fn34" class="footnote-ref" id="fnref34" role="doc-noteref"><sup>34</sup></a></sup>, but goes on to create the highly controlled network environment. For Galloway hacking is “an index of protocological transformations taking place in the broader world of techno-culture.”<sup><a href="#fn35" class="footnote-ref" id="fnref35" role="doc-noteref"><sup>35</sup></a></sup>
</p>
<p class="subheading">
AI Technologies and Restorative Justice: The Ethics of Care
</p>
<p>
Having said this, the prospect could be raised that restorative justice might offer “a solution that could deliver more meaningful justice”<sup><a href="#fn36" class="footnote-ref" id="fnref36" role="doc-noteref"><sup>36</sup></a></sup>. With respect to AI technologies, and the potential inherent in them for AI crimes, instead of following a retributive legislative approach, an ethical discourse,<sup><a href="#fn37" class="footnote-ref" id="fnref37" role="doc-noteref"><sup>37</sup></a></sup> with a deeper consideration for the sufferers of AI crimes, should be adopted.<sup><a href="#fn38" class="footnote-ref" id="fnref38" role="doc-noteref"><sup>38</sup></a></sup> We ask: could restorative justice offer an alternative way of dealing with the occurrence of AI crimes?<sup><a href="#fn39" class="footnote-ref" id="fnref39" role="doc-noteref"><sup>39</sup></a></sup>
</p>
<p>
Dale Miller and Neil Vidmar described two psychological perceptions of justice.<sup><a href="#fn40" class="footnote-ref" id="fnref40" role="doc-noteref"><sup>40</sup></a></sup> The first is behavioural control, following the legal code as strictly as possible and punishing any wrongdoer;<sup><a href="#fn41" class="footnote-ref" id="fnref41" role="doc-noteref"><sup>41</sup></a></sup> the second is the restorative justice system, which focuses on restoration where harm was done. Thus an alternative approach for the ethical implementation of AI technologies, with respect to legislation, might be to follow restorative justice principles. Restorative justice would allow for AI technologies to learn how to care about ethics.<sup><a href="#fn42" class="footnote-ref" id="fnref42" role="doc-noteref"><sup>42</sup></a></sup> Julia Fionda describes restorative justice as a conciliation between victim and offender, during which the offence is deliberated upon.<sup><a href="#fn43" class="footnote-ref" id="fnref43" role="doc-noteref"><sup>43</sup></a></sup> Both parties try to come to an agreement on how to restore the situation to the state before the crime (here an AI crime) happened. Restorative justice advocates compassion for victim and offender, and a consciousness on the part of offenders as to the repercussions of their crimes. The victims of AI crimes would not only be placed in front of a court, but would also be offered engagement in the process of seeking justice and restoration.<sup><a href="#fn44" class="footnote-ref" id="fnref44" role="doc-noteref"><sup>44</sup></a></sup>
</p>
<p>
Restorative justice might support victims of AI crimes better than the punitive legal system, as it allows the sufferers of AI crimes to be heard in a personalised way, one which could be adapted to the needs of the victims (and offenders). As victims and offenders represent themselves in restorative conferencing sessions, these become much more affordable,<sup><a href="#fn45" class="footnote-ref" id="fnref45" role="doc-noteref"><sup>45</sup></a></sup> meaning that the financial barrier to seeking justice would be partly eliminated, allowing poorer parties to contribute to the process of justice. This would benefit wider society, and AI technologies would not be defined solely by a powerful elite. Restorative justice could hold the potential not only to discuss the AI crimes themselves, but also to get to the root of the problem and discuss the cause of an AI crime. For John Braithwaite, restorative justice makes re-offending harder.<sup><a href="#fn46" class="footnote-ref" id="fnref46" role="doc-noteref"><sup>46</sup></a></sup>
</p>
<p>
In such a scenario, a future AI system capable of committing AI crimes would need to have knowledge of ethics around the particular discourse of restorative justice. The implementation of AI technologies will lead to a discourse concerning who is responsible for actions taken by AI technologies. Even clearly defined ethical guidelines might be difficult to implement,<sup><a href="#fn47" class="footnote-ref" id="fnref47" role="doc-noteref"><sup>47</sup></a></sup> due to the competitive pressure AI systems find themselves under. That said, this speculation is restricted to humanised artificial intelligence systems. The main hindrance to AI technologies being part of a restorative justice system might be the very human emotion of shame. Without a clear understanding of shame it will be impossible to resolve AI crimes in a restorative manner.<sup><a href="#fn48" class="footnote-ref" id="fnref48" role="doc-noteref"><sup>48</sup></a></sup>
</p>
<p>
Furthering this perspective, we suggest that reflections brought by new materialism should also be taken into account: not only as a critical perspective on the gendering and anthropomorphic representation of AI, but also to broaden the spectrum of what we consider to be justice in relation to all the living world. Without this new perspective, the idealised image of AI as a non-living intelligence that deals with enormous amounts of information risks serving the abstraction of anthropocentric views into a computationalist acceleration, with deafening results. Rather than such an implosive perspective, the application of law and jurisprudence may take advantage of AI’s enhanced computational and sensorial capabilities by including all information gathered from the environment, including that produced by plants, animals and soil. Thus we might want to think about a humanised symbiosis between humans and technology,<sup><a href="#fn49" class="footnote-ref" id="fnref49" role="doc-noteref"><sup>49</sup></a></sup> along the lines of Garry Kasparov’s advanced chess,<sup><a href="#fn50" class="footnote-ref" id="fnref50" role="doc-noteref"><sup>50</sup></a></sup> as in advanced jurisprudence:<sup><a href="#fn51" class="footnote-ref" id="fnref51" role="doc-noteref"><sup>51</sup></a></sup> a legal system in which humans and machines work together on restoring justice, for social justice.
</p>
</div>
<p><br></p>
<div class="ref-position">
<section class="footnotes" role="doc-endnotes">
<hr />
<ol>
<li id="fn1" role="doc-endnote"><p>Cp. G. Chaslot, “YouTube’s A.I. was divisive in the US presidential election”, Medium, November 27, 2016. Available at: https://medium.com/the-graph/youtubes-ai-is-neutral-towards-clicks-but-is-biased-towards-people-and-ideas-3a2f643dea9a#.tjuusil7d [accessed February 25, 2018]; E. Morozov, “The Geopolitics Of Artificial Intelligence”, FutureFest, London, 2018. Available at: https://www.youtube.com/watch?v=7g0hx9LPBq8 [accessed October 25, 2019].<a href="#fnref1" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn2" role="doc-endnote"><p>Cp. M. Busby, “Use of ‘Killer Robots’ in Wars Would Breach Law, Say Campaigners”, The Guardian, August 21, 2018. Available at: https://web.archive.org/web/20181203074423/https://www.theguardian.com/science/2018/aug/21/use-of-killer-robots-in-wars-would-breach-law-say-campaigners [accessed October 25, 2019].<a href="#fnref2" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn3" role="doc-endnote"><p>Cp. A. Hadzi, “Social Justice and Artificial Intelligence”, Body, Space & Technology, 18 (1), 2019, pp. 145–174. Available at: https://doi.org/10.16995/bst.318 [accessed October 25, 2019].<a href="#fnref3" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn4" role="doc-endnote"><p>Cp. A. Kaplan and M. Haenlein, “Siri, Siri, in my Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence”, Business Horizons, 62 (1), 2019, pp. 15–25. https://doi.org/10.1016/j.bushor.2018.08.0 04; S. Legg and M. Hutter, A Collection of Definitions of Intelligence, Lugano, Switzerland, IDSIA, 2007. Available at: http://arxiv.org/abs/0706.3639 [accessed October 25, 2019].2<a href="#fnref4" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn5" role="doc-endnote"><p>N. Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford, Oxford University Press, 2014.<a href="#fnref5" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn6" role="doc-endnote"><p>Cp. T. King, N. Aggarwal, M. Taddeo and L. Floridi, “Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions”, SSRN Scholarly Paper No. ID 3183238, Rochester, NY, Social Science Research Network, 2018. Available at: https://papers.ssrn.com/abstract=3183238 [accessed October 25, 2019].<a href="#fnref6" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn7" role="doc-endnote"><p>P. Mason, Clear Bright Future, London, Allen Lane Publishers, 2019.<a href="#fnref7" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn8" role="doc-endnote"><p>Mason, Clear Bright Future.<a href="#fnref8" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn9" role="doc-endnote"><p>S. Omohundro, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence, 26 (3), 2014, pp. 303–315, here: p. 303.3<a href="#fnref9" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn10" role="doc-endnote"><p>Cp. C. Cadwalladr, “Elizabeth Denham: ‘Data Crimes are Real Crimes”, The Guardian, July 15, 2018. Available at: https://web.archive.org/web/20181121235057/https://www.theguardian.com/uk-news/2018/jul/15/elizabeth-denham-data-protection-inf ormation-commissioner-facebook-cambridge-analytica [accessed October 25, 2019].<a href="#fnref10" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn11" role="doc-endnote"><p>Cp. B. Olivier, “Cyberspace, Simulation, Artificial Intelligence, Affectionate Machines and Being Human”, Communicatio, 38 (3), 2012, pp. 261–278. https://doi.org/10.1080 /02500167.2012.716763 [accessed October 25, 2019]; E.A. Wilson, Affect and Artificial Intelligence, Washington, University of Washington Press, 2011.<a href="#fnref11" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn12" role="doc-endnote"><p>Cp. P.G. Ossorio, The Behavior of Persons, Ann Arbor, Descriptive Psychology Press, 2013. Available at: http://www.sdp.org/sdppubs- publications/the-behavior-of-perso ns/ [accessed October 25, 2019].<a href="#fnref12" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn13" role="doc-endnote"><p>Cp. J. Jeffrey, “Knowledge Engineering: Theory and Practice”, Society for Descriptive Psychology, 5, 1990, pp. 105–122.<a href="#fnref13" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn14" role="doc-endnote"><p>Cp. P.G. Ossorio, Persons: The Collected Works of Peter G. Ossorio, Volume I. Ann Arbor, Descriptive Psychology Press, 1995. Available at: http://www.sdp.org/sdppubs-publications/persons-the-collected-works-of-peter-g-ossorio-volume-1/ [accessed October 25, 2019].<a href="#fnref14" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn15" role="doc-endnote"><p>Cp. M. Mountain, “Lawsuit Filed Today on Behalf of Chimpanzee Seeking Legal Personhood”, Nonhuman Rights Blog, December 2, 2013. Available at: https://www.nonhumanrights.org/blog/lawsuit-filed-today-on-behalf-of-chimpanzee-seeking-legal-personhood/ [accessed January 8, 2019]; M. Midgley, “Fellow Champions Dolphins as ‘Non-Human Persons’”, Oxford Centre for Animal Ethics, January 10, 2010. Available at: https://www.oxfordanimalethics.com/2010/01/fellow -champions-dolphins-as-%E2%80%9Cnon-human-persons%E2%80%9D/ [accessed January 8, 2019].<a href="#fnref15" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn16" role="doc-endnote"><p>Cp. R. Bergner, “The Tolstoy Dilemma: A Paradigm Case Formulation and Some Therapeutic Interventions”, in K.E. Davis, F. Lubuguin and W. Schwartz (eds.), Advances in Descriptive Psychology, Vol. 9, 2010, pp. 143–160. Available at: http://www.sdp.org/sdppubs-publications/advances-in-descriptive-psychology-vol-9; P. Laungani, “Mindless Psychiatry and Dubious Ethics”, Counselling Psychology4 Quarterly, 15 (1), 2002, pp. 23–33. Available at: https://doi.org/10.1080/09515070110102305 [accessed October 26, 2019].<a href="#fnref16" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn17" role="doc-endnote"><p>W. Schwartz, “What Is a Person and How Can We Be Sure? A Paradigm Case Formulation”, SSRN Scholarly Paper No. ID 2511486, Rochester, NY: Social Science Research Network, 2014. Available at: https://papers.ssrn.com/abstract=2511486 [accessed October 25, 2019].<a href="#fnref17" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn18" role="doc-endnote"><p>Cp. Mason, Clear Bright Future.<a href="#fnref18" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn19" role="doc-endnote"><p>Cp. M. Hoffman, and R. Pfeifer, “The Implications of Embodiment for Behavior and Cognition: Animal and Robotic Case Studies”, in W. Tschacher and C. Bergomi (eds.), The Implications of Embodiment: Cognition and Communication, Exeter, Andrews UK Limited, 2015, pp. 31– 58. Available at: https://arxiv.org/abs/1202.0440<a href="#fnref19" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn20" role="doc-endnote"><p>N.J. Nilsson, The Quest for Artificial Intelligence, Cambridge, Cambridge University Press, 2009.<a href="#fnref20" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn21" role="doc-endnote"><p>Cp. R. Brooks, Cambrian Intelligence: The Early History of the New AI, Cambridge, MA, A Bradford Book, 1999.<a href="#fnref21" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn22" role="doc-endnote"><p>Cp. D. Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence, New York, Basic Books, 1993; H.P. Newquist, The Brain Makers, Indianapolis, Ind: Sams., 1994.<a href="#fnref22" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn23" role="doc-endnote"><p>Cp. R. Brooks, “A Robust Layered Control System for a Mobile Robot”, IEEE Journal on Robotics and Automation, 2 (1), 1986, pp. 14–23. Available at: https://doi.org/510.1109/JRA.1986.1087032 [accessed October 25, 2019].<a href="#fnref23" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn24" role="doc-endnote"><p>Cp. Brooks, Cambrian Intelligence.<a href="#fnref24" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn25" role="doc-endnote"><p>Cp. R. Pfeifer, “Embodied Artificial Intelligence”, presented at the International Interdisciplinary Seminar on New Robotics, Evolution and Embodied Cognition, Lisbon, November, 2002. Available at: https://www.informatics.indiana.edu/rocha/publications/embrob/pfeifer.html [accessed October 25, 2019].<a href="#fnref25" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn26" role="doc-endnote"><p>Cp. T. Renzenbrink, “Embodiment of Artificial Intelligence Improves Cognition”, Elektormagazine, February 9, 2012. Available at: https://www.elektormagazine.com/articles/embodiment-of-artificial-intelligence-improves-cognition [accessed January 10, 2019]; G. Zarkadakis, “Artificial Intelligence & Embodiment: Does Alexa Have a Body?”, Medium, May 6, 2018. Available at: https://medium.com/<span class="citation" data-cites="georgezarkadakis/artificial-intelligence-embodiment-does-alexa-have-a-body-d5b97521a201">@georgezarkadakis/artificial-intelligence-embodiment-does-alexa-have-a-body-d5b97521a201 [accessed January 10, 2019]</span>.<a href="#fnref26" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn27" role="doc-endnote"><p>Cp. L. Steels and R. Brooks, The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents, London/New York, Taylor & Francis, 1995.<a href="#fnref27" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn28" role="doc-endnote"><p>M. Foucault, “Disciplinary Power and Subjection”, in S. Lukes (ed.), Power, New York, NYU Press, 1986, pp. 229–242, here: p. 230.<a href="#fnref28" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn29" role="doc-endnote"><p>M. Foucault, Power, edited by C. Gordon, London, Penguin, 1980, p. 34.6<a href="#fnref29" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn30" role="doc-endnote"><p>A.R. Galloway, Protocol: How Control Exists After Decentralization, Cambridge, MA, MIT Press, 2004, p. 81.<a href="#fnref30" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn31" role="doc-endnote"><p>Cp. M. Foucault, The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979, London, Pan Macmillan, 2008.<a href="#fnref31" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn32" role="doc-endnote"><p>Galloway, Protocol, p. 81.<a href="#fnref32" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn33" role="doc-endnote"><p>Cp. Galloway, Protocol, p. 147.<a href="#fnref33" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn34" role="doc-endnote"><p>Galloway, Protocol, p. 146.<a href="#fnref34" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn35" role="doc-endnote"><p>Galloway, Protocol, p. 157.<a href="#fnref35" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn36" role="doc-endnote"><p>Crook, Comparative Media Law and Ethics, p. 310.7<a href="#fnref36" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn37" role="doc-endnote"><p>Cp. R. Courtland, “Bias Detectives: The Researchers Striving to Make Algorithms Fair”, Nature, 558, 2018, pp. 357–360. Available at: https://doi.org/10.1038/d41586-018-05469-3 [accessed October 25, 2019].<a href="#fnref37" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn38" role="doc-endnote"><p>Cp. H. Fry, “We Hold People With Power to Account. Why Not Algorithms?” The Guardian, September 17, 2018. Available at: https://web.archive.org/web/201901021 94739/https://www.theguardian.com/commentisfree/2018/sep/17/power- algorithms-technology-regulate [accessed October 25, 2019].<a href="#fnref38" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn39" role="doc-endnote"><p>Cp. O. Etzioni, “How to Regulate Artificial Intelligence”, The New York Times, January 20, 2018. Available at: https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence- regulations-rules.html [accessed October 25, 2019]; A. Goel, “Ethics and Artificial Intelligence”, The New York Times, December 22, 2017. Available at: https://www.nytimes.com/2017/09/14/opinion/artificial-intelligence.html [accessed October 25, 2019].<a href="#fnref39" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn40" role="doc-endnote"><p>Cp. N. Vidmar and D.T. Miller, “Socialpsychological Processes Underlying Attitudes Toward Legal Punishment”, Law and Society Review, 1980, pp. 565–602.<a href="#fnref40" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn41" role="doc-endnote"><p>Cp. M. Wenzel and T.G. Okimoto, “How Acts of Forgiveness Restore a Sense of Justice: Addressing Status/Power and Value Concerns Raised by Transgressions”, European Journal of Social Psychology, 40 (3), 2010, pp. 401–417.<a href="#fnref41" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn42" role="doc-endnote"><p>Cp. N. Bostrom and E. Yudkowsky, “The Ethics of Artificial Intelligence”, in K. Frankish and W.M. Ramsey (ed.), The Cambridge Handbook of Artificial Intelligence, Cambridge, Cambridge University Press, 2014, pp. 316–334; Frankish and Ramsey, The Cambridge Handbook of Artificial Intelligence.<a href="#fnref42" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn43" role="doc-endnote"><p>Cp. J. Fionda, Devils and Angels: Youth Policy and Crime, London, Hart, 2005.8<a href="#fnref43" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn44" role="doc-endnote"><p>Cp. Nils Christie, “Conflicts as Property”, The British Journal of Criminology, 17 (1), 1977, pp. 1–15.<a href="#fnref44" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn45" role="doc-endnote"><p>Cp. J. Braithwaite, “Restorative Justice and a Better Future”, in E. McLaughlin and G. Hughes (eds.), Restorative Justice: Critical Issues, London, SAGE, 2003, pp. 54–67.<a href="#fnref45" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn46" role="doc-endnote"><p>Cp. J. Braithwaite, Crime, Shame and Reintegration, Cambridge, Cambridge University Press, 1989.<a href="#fnref46" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn47" role="doc-endnote"><p>Cp. A. Conn, “Podcast: Law and Ethics of Artificial Intelligence”, Future of Life, March 31, 2017. Available at: https://futureoflife.org/2017/03/31/podcast-law-ethics-artificial-intelligence/ [accessed September, 22 2018].<a href="#fnref47" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn48" role="doc-endnote"><p>Cp. A. Rawnsley, “Madeleine Albright: ‘The Things that are Happening are Genuinely, Seriously Bad’”, The Guardian, July 8, 2018. Available at: https://web.archive.org/web/20190106193657/https://www.theguardian.com9/books/2018/jul/08/madeleine-albright-fascism-is-not-an-ideology-its-a-method-interview-fascism-a-warning [accessed October 25, 2019].<a href="#fnref48" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn49" role="doc-endnote"><p>Cp. D. Haraway, “A Cyborg Manifesto”, Socialist Review, 15 (2), 1985. Available at: http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html [accessed October 25, 2019]; C. Thompson, “The Cyborg Advantage”, Wired, March 22, 2010. Available at: https://www.wired.com/2010/03/st-thompson- cyborgs/ [accessed October 25, 2019].<a href="#fnref49" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
|
|||
|
<li id="fn50" role="doc-endnote"><p>Cp. J. Hipp et al., “Computer Aided Diagnostic Tools Aim to Empower Rather than Replace Pathologists: Lessons Learned from Computational Chess”, Journal of Pathology Informatics, 2, 2011. Available at: https://doi.org/10.4103/2153-3539.82050 [accessed October 25, 2019].<a href="#fnref50" class="footnote-back" role="doc-backlink">↩︎</a></p></li>
<li id="fn51" role="doc-endnote"><p>Cp. J. Baggini, “Memo to Those Seeking to Live for Ever: Eternal Life Would be Deathly Dull”, The Guardian, July 8, 2018. Available at: https://web.archive.org/web/20181225111455/https://www.theguardian.com /commentisfree/2018/jul/08/live-for-ever-eternal-life-deathly-dull-immortality [accessed October 25, 2019].</p>
|
|||
|
</ol>
</section>
</div>
<br>
<div class="bio_adn">
<p>
<strong>Adnan Hadzi</strong> is currently working as a resident researcher at the University of Malta. Adnan has been a regular at Deckspace Media Lab for the last decade, a period over which he has developed his research at Goldsmiths, University of London, based on his work with Deptford.TV/Deckspace.TV. It is through free and open source software and technologies that this research has a social impact. Currently Adnan is a participant researcher in the MAZI/CreekNet research collaboration with the boattr project. Adnan is co-editing and producing the after.video video book, exploring video as theory and reflecting upon networked video as it profoundly re-shapes medial patterns (YouTube, citizen journalism, video surveillance, etc.). Adnan’s documentary film work tracks artist pranksters The Yes Men and net provocateurs Bitnik Collective. Bitnik’s practice expands from the digital to affect physical spaces, often intentionally applying loss of control to challenge established structures and mechanisms, formulating fundamental questions concerning contemporary issues.<br> dek.spc.org / bitnik.org / deptford.tv <br> <br> <strong>Denis Roio</strong>, better known by the hacker name Jaromil, is CTO and co-founder of the Dyne.org software house and think&do tank based in Amsterdam, developers of free and open source software with a strong focus on peer-to-peer networks, social values, cryptography, disintermediation and sustainability. Jaromil holds a Ph.D. on “Algorithmic Sovereignty” and received the Vilém Flusser Award at transmediale (Berlin, 2009) while leading the R&D department of the Netherlands Media Art Institute (Montevideo/TBA) for six years. He is the leading technical architect of DECODE, an EU-funded project on blockchain technologies and data ownership, involving pilots in cooperation with the municipalities of Barcelona and Amsterdam.
</p>
</div>
<a href="#fnref51" class="footnote-back" role="doc-backlink">↩︎</a></li>
|
|||
|
</ol>
|
|||
|
</section>
|