Agency questionnaire draft development
Source: Lambda
General background from the "agency" Wikipedia article: Agency is the capacity of an actor to act in a given environment. Synonyms? ... Agency is contrasted with objects reacting to natural forces through unthinking deterministic processes. See also: https://en.wikipedia.org/wiki/Action_theory_(philosophy)

SEP: In very general terms, an agent is a being with the capacity to act, and "agency" denotes the exercise or manifestation of this capacity. https://plato.stanford.edu/entries/agency/

Review of measuring agency of people: https://link.springer.com/article/10.1007/s11205-021-02791-8

Associated area: agency in social cognitive theory (SCT). SCT considers the self-as-agent to encompass four core features of human agency (Figure 1): intentionality, forethought, self-reactiveness (self-regulation), and self-reflectiveness (self-efficacy).

---- Dimensions ----

A paper https://www.researchgate.net/publication/364024468_The_Concept_of_Agency_in_the_Era_of_Artificial_Intelligence_Dimensions_and_Degrees asks: "What are the dimensions of agency that can accommodate the differences in human and non-human actors without privileging one over the other?"

* Passivity dimension: The agent can be passive (Rowe, 1991), where they are acted upon rather than acting. This is the case where the agent-involving actions are events, such as a scalpel "making" an incision.
* Automaticity dimension: Minimal agency can be witnessed in automatic actions (Aguilar & Buckareff, 2015; Schlosser, 2011). Automatic actions are based on properties or dispositions; an acid dissolving a metal is an example of an automatic action. Unconscious or habitual actions and automatic responses to stimuli that do not involve rational thinking (Schlosser, 2011) are also considered automatic actions.
* Rationality dimension: The agent could be involved in reasoning, i.e., rationality (Aguilar & Buckareff, 2015).
This level of involvement of the agent is based on the standard action theory proposed by Davidson (Schlosser, 2019). Here the agents demonstrate reasoning, or acting based on beliefs and desires (Davidson, 1963). For example, when a physician diagnoses a patient, the physician's beliefs and intentions are the reasons that cause the action.
* Endorsement dimension: However, it is now established that though an action may be rational, it may not be intentional (Schlosser, 2011). As demonstrated by Frankfurt (Velleman, 1992), an unwilling addict may administer a drug to themselves out of desire, but this may conflict with their desire to wean themselves off drugs.
* Freedom-to-choose dimension: Another condition for agential control and involvement is that the agent should have the freedom to act, i.e., to choose from among the alternatives and not be constrained by external events. For the actions to be free, the agent should be conscious of the motivating reasons (O'Connor, 2009), and should be free to choose which of the motivating reasons influence the actions (O'Connor, 2009; Clarke, 1996).
* Consciousness dimension: In the context of the conscious awareness required for free action, the notions of access and phenomenal experience of agency have to be discussed. Phenomenal consciousness is the first-person experience of agency.

------

A paper "Group Agency and Artificial Intelligence" https://link.springer.com/article/10.1007/s13347-021-00454-7

Citations: The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. To develop the parallel, let me give a more precise definition of intentional agency. An intentional agent is an entity, within some environment, that meets at least three conditions: It has representational states, which encode its "beliefs" about how things are.
It has motivational states, which encode its "desires" or "goals" as to how it would like things to be. It has a capacity to interact with its environment on the basis of these states, so as to "act" in pursuit of its desires or goals in line with its beliefs.

------

A paper "AI agency vs. human agency: understanding human-AI interactions on TikTok and their implications for user engagement" https://academic.oup.com/jcmc/article/27/5/zmac014/6670985
Through in-depth interviews with TikTok users, this study investigates how users collaborate with AI when using AI-powered social media and how such dynamics shape user engagement. We found that TikTok users are receptive to personalized experiences enabled by machine agency.

-------

Initial ideas by Tanel:

Goals:
* Does it have goals? A rock probably does not.
* Size of the goals of the agent
* Variability of goals: more different kinds of goals
* Number of different possible paths towards a goal / goals

Capabilities:
* Speed in achieving / time scale (a rock has 5 million years, a person has 100 years)
* Homeostasis (staying in the comfort zone)??? staying alive, keeping a suitable environment? offspring?
* Ability to make copies of oneself???
* Strength of will related to goals
* Intellectual / computational capabilities. Note: willingness to learn from mistakes (related to homeostasis)
* Physical or social capabilities

Notes:
* Free will at the end, so as not to create bias
* Education etc. is recorded
* Maybe the questionnaire is taken several times
* "Have you spent more than 5 minutes reading about agency?"

Example potential agents:

No intelligence, as far as we know:
* an atom
* a drop of water
* a mountain
* a hurricane
* a planet

Concrete living beings with some intelligence, perhaps:
* a virus
* a microbe
* a brain cell
* an ant
* a mouse
* a human

Systems:
* a lamp with a movement sensor
* a chess program
* a moving robot
* a memetically evolving system (like operating systems of computers, or all machine learning systems together)
* a physically evolving system (like life on Earth)
* a hypothetical superintelligent being / society able to use up stars
* a hypothetical superintelligent being / society able to change the principles of the universe

--------

From this point of view, agency could be understood as a smooth multidimensional spectrum, on which things can be considered less agentic or more agentic along several dimensions, so that in general it is not easy to compare the agency of two different systems. Say a system has an evaluation of how good the current situation is, plus options for acting so as to reach a better situation. Then one dimension could be the scope/size/duration/power, so to speak, of the situation. For example, a low-agency system would be an outdoor light with a movement sensor; somewhat higher, a chess program; somewhat higher still, a self-moving robot; higher still, a memetically reproducing system; then a physically long-term reproducing and evolving system (its goal being maximal reproduction over a very long horizon); greater still, a hypothetical superbeing that could alter star systems; and greater still, a superbeing that could modify the universe, or the like.
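The multidimensional-spectrum idea above can be sketched in code: represent each system by scores along a few agency dimensions and call one system more agentic only when it dominates on every dimension, which mirrors the point that two systems' agency is often simply incomparable. The dimension names and all scores below are illustrative assumptions, not measurements.

```python
from dataclasses import dataclass

# Hypothetical agency dimensions, loosely following the notes above:
#   goal_scope    ~ size/duration/power of the situations the system cares about
#   n_options     ~ richness of its action repertoire
#   effectiveness ~ how strongly its actions actually influence outcomes
DIMENSIONS = ("goal_scope", "n_options", "effectiveness")

@dataclass
class AgencyProfile:
    name: str
    scores: dict  # dimension -> illustrative 0-10 score (assumed, not measured)

def compare(a: AgencyProfile, b: AgencyProfile) -> str:
    """Pareto-style comparison: 'a' counts as more agentic only if it is
    at least as high as 'b' on every dimension (and not vice versa)."""
    a_ge = all(a.scores[d] >= b.scores[d] for d in DIMENSIONS)
    b_ge = all(b.scores[d] >= a.scores[d] for d in DIMENSIONS)
    if a_ge and not b_ge:
        return f"{a.name} > {b.name}"
    if b_ge and not a_ge:
        return f"{b.name} > {a.name}"
    if a_ge and b_ge:
        return f"{a.name} == {b.name}"
    return f"{a.name} and {b.name} are incomparable"

# Made-up scores for two of the example systems: the sensor lamp has few,
# super-effective options; the chess program has broader but weaker options.
lamp = AgencyProfile("sensor lamp", {"goal_scope": 1, "n_options": 1, "effectiveness": 9})
chess = AgencyProfile("chess program", {"goal_scope": 2, "n_options": 6, "effectiveness": 5})

print(compare(lamp, chess))  # → "sensor lamp and chess program are incomparable"
```

Under this ordering, a total ranking of the example agents only emerges if one collapses the dimensions into a single score, which is exactly the judgment the questionnaire asks respondents to make.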
A second, and perhaps also a third, possible dimension is the availability of action options and their real effectiveness. If there are no action options at all, i.e., only one determined action, with no ability to plan or create subtasks, the system seems less agentic. If there are many action options but none of them has any effect, it also seems less agentic. For example, an outdoor light with a movement sensor has only two action options, but both are highly effective for its goal; a chess program has more options, with less guaranteed results; a self-moving robot has even more options; a system with an evolutionary goal has even more options still (each of them even less guaranteed, i.e., less effective); and so on. If we assign a big task, such as changing the outdoor temperature in Tallinn in June (say, a higher temperature would be better), which is a fairly large/hard/rather long-term task, then no system realistically has any means to achieve it; and a program that merely monitored the temperature, had a measure of how good the situation is, and, say, made a chess move or beeped every time the temperature had not risen compared to the previous measurement, would not contribute to achieving the goal in any way: it would then intuitively seem less agentic.

-----------

Here are some of my ideas for a questionnaire about agency. Firstly, I would give a general definition of agency: the ability to control one's actions and, through this, change the external environment. Secondly, I would start by measuring the sense of agency of the person answering, both to get them into this mindset and to see how they see themselves.
I found these questions in a paper; they cover multiple aspects of the agency experience, such as a controlling self, a physical self, and interaction with the environment:

* I am in full control of what I do
* I am just an instrument in the hands of somebody or something else
* My actions just happen without my intention
* I am the author of my actions
* The consequences of my actions feel like they don't logically follow my actions
* My movements are automatic: my body simply makes them
* The outcomes of my actions generally surprise me
* Things I do are subject only to my free will
* The decision whether and when to act is within my hands
* Nothing I do is actually voluntary
* While I am in action, I feel like I am a remote controlled robot
* My behavior is planned by me from the very beginning to the very end
* I am completely responsible for everything that results from my actions

These could be answered on a 1-10 scale. Then I'd introduce different objects for the person to rate, also on a 1-10 scale, plus comparisons between objects, and so on. I think measuring oneself first would give a good base to work from.

-------

Michelangelo or David (Taavet): alive or not / still physically extant or not.
Bob Marley or nublu: alive or not / a source of inspiration for fewer or more people.
Finns or the Sámi: one group has a state, the other does not.
A prisoner or a guard: one is legally constrained, the other professionally and socially constrained.
The CIA or the Vatican: a difference in the nature of their resources / a difference in the constraints that apply to them. Divine guidance is perhaps more freely interpretable than international law.
Captain America or Loki: one has strong principles and the constraints that follow from them; the other has a lot of room to move between the "good" and the "bad" side.

------

Then I also have a few ideas for the 'agency questionnaire':
* An intelligent prisoner vs a free fool.
* A busy rich CEO vs a relaxed primitive tribe member in the jungle.
* A robot with a random number generator vs an almost identical robot without one.
* An educated person vs an uneducated person.
* A stubborn disabled person vs a docile non-disabled person.
* A disciplined monk vs a hurried, chaotic business-person.
(I tried to phrase these gender-neutrally. 'Businessman' and 'tribesman' sound more natural but are not gender-neutral.)

----

Questions probing different facets of agency:

Time & Impact
- Which has more agency: a volcano that shapes landscapes over millennia (A) vs. a beaver that builds dams over its lifetime (B)?
- Which has more agency: a century-old redwood tree gradually changing its forest ecosystem (A) vs. a scientist conducting experiments for 5 years (B)?

Choice & Goals
- Which has more agency: an ant following pheromone trails to food (A) vs. a self-driving car navigating traffic (B)?
- Which has more agency: a plant growing towards sunlight (A) vs. a child choosing which toy to play with (B)?

Consciousness & Autonomy
- Which has more agency: an AI chatbot having conversations with thousands of people (A) vs. a wild wolf hunting in its territory (B)?
- Which has more agency: a smartphone responding to user inputs (A) vs. a cat deciding when to eat (B)?

Adaptation & Learning
- Which has more agency: a river carving a new path after a landslide (A) vs. a student learning to play piano (B)?
- Which has more agency: bacteria evolving antibiotic resistance (A) vs. a person developing new habits (B)?

------

Format:
- For each question, ask for an additional explanation along the lines of "Why did you choose this rating?"
- The scale could be of the form:
-3: A has much more agency
-2: A has moderately more agency
-1: A has slightly more agency
0: Equal agency
+1: B has slightly more agency
+2: B has moderately more agency
+3: B has much more agency
----------
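A minimal sketch of how answers on the -3…+3 scale could be aggregated, assuming each response is stored as (item A, item B, rating) with negative ratings favouring A and positive ratings favouring B: sum each item's signed credits to get a rough agency ranking across respondents. The response data and item names are hypothetical; a more principled alternative would be a pairwise-comparison model such as Bradley-Terry.

```python
from collections import defaultdict

# Each response: (item_A, item_B, rating) on the -3..+3 scale above,
# where -3 means "A has much more agency" and +3 means "B has much more agency".
responses = [
    ("volcano", "beaver", +2),          # hypothetical answers, for illustration
    ("ant", "self-driving car", +1),
    ("beaver", "ant", -3),
]

def aggregate(responses):
    """Net signed score per item: a positive rating credits B, a negative one credits A."""
    scores = defaultdict(int)
    for a, b, rating in responses:
        scores[a] -= rating  # negative rating => A favoured => A gains
        scores[b] += rating  # positive rating => B favoured => B gains
    return dict(scores)

# Sort items from most to least agentic according to the pooled answers.
ranking = sorted(aggregate(responses).items(), key=lambda kv: -kv[1])
print(ranking)  # beaver ranks first: favoured over both the volcano and the ant
```

This also makes it easy to cross-tabulate ratings against the free-text "Why did you choose this rating?" explanations later, since each response row can carry the explanation alongside the numeric rating.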