Robopsychologie
JKU - MA Psychologie
File details

Flashcards | 111
---|---
Language | English
Category | Technical
Level | University
Created / Updated | 21.06.2020 / 25.10.2020
Web link | https://card2brain.ch/box/20200621_robopsychologie
Self-Determination Theory (Ryan & Deci, 2000)
Central assumptions:
All human beings have a limited set of basic psychological needs.
Their satisfaction is essential for well-being, flourishing and optimal performance.
If they don’t get satisfied -> negative consequences.
Their satisfaction leads to more autonomous forms of motivation.
There are three BPN that are discussed as particularly important:
- Competence
- Autonomy
- Relatedness
1) COMPETENCE as basic psychological need
2) AUTONOMY as basic psychological need
3) RELATEDNESS as basic psychological need
1) COMPETENCE as basic psychological need
- Competence concerns the experience of effectiveness and mastery, to feel confident in relation to whatever you are doing
- It becomes satisfied as one capably engages in activities and experiences opportunities for using and extending skills and expertise
- When frustrated, one experiences a sense of ineffectiveness or helplessness
2) AUTONOMY as basic psychological need
- refers to the experience of volition and willingness
- when satisfied one experiences a sense of integrity as when one's actions, decisions, thoughts, and feelings are self-endorsed and authentic
- When frustrated, one experiences a sense of pressure and often conflict such as feeling pushed in an unwanted direction
3) RELATEDNESS as basic psychological need
- denotes the experience of warmth, bonding, and care, to be cared for by others, to care for others, to feel like you belong in groups that are important to you
- It is satisfied by connecting to and feeling significant to others
- relatedness frustration comes with a sense of social alienation, exclusion and loneliness
Conclusion regarding competence, autonomy and relatedness?
For the acceptance of AI applications in society, utilitarian product attributes such as usefulness, ease of use and expected output quality are important - but basic psychological needs such as autonomy, competence and relatedness or hedonic needs such as stimulation and enjoyment should not be ignored
- How does Jentsch describe uncanny feelings?
- How did Freud emphasize it?
- Jentsch: as “intellectual uncertainty” and not being “at home” (un-heimlich) in the situation concerned
- Freud: In contrast to Jentsch, Freud emphasized that what is uncanny is something that seems to be “un-homely” (unheimlich) and unfamiliar, but at the same time “homely” and familiar (“the unfamiliar in the familiar”).
In Freud’s view, the uncanny might be anything we experience in adulthood that reminds us of early psychological stages or of primitive experiences.
How do we (following Mori) perceive machines with very high to perfect human-likeness in the uncanny valley?
Not perceived as uncanny, because not distinguishable from real humans anymore
Examples: Lifelike android robots, social bots, lifelike synthetic voices, AI-generated portraits of (real or fake) persons, deep fake videos
Ethical question: Do we want to live in a world where humans and machines are impossible to distinguish?
EU AI Ethics Guidelines say: Machines must be identifiable as such
Where is Sophia in the uncanny valley?
High but not perfect level of human-likeness:
- Perceived as uncanny/threatening
- Non-perfect android robots, non-perfect computer-animated faces and avatars, synthetic voices, AI-generated portraits/videos with small glitches
Relationship between animal-likeness and likeability = U-shaped function & UV effect found
When were robots preferred? Animal-like? Not animal-like?
Robots were preferred when they looked very animal-like or not animal-like at all, as compared to robots that mixed realistic and unrealistic animal-like features
Recent neuroscientific results on uncanny valley
Across two experimental tasks, the ventromedial prefrontal cortex (VMPFC) encoded an explicit representation of participants’ uncanny reactions.
Which brain areas were active?
The ventromedial prefrontal cortex
It signaled the subjective likeability of artificial agents as a nonlinear function of human-likeness, with selectively low likeability for highly humanlike agents.
The same brain areas were active when participants made decisions about whether to accept a gift from a robot. One further region - the amygdala, which is responsible for emotional responses - was particularly active when participants rejected gifts from the humanlike, but not human, artificial agents.
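This selective dip in likeability can be illustrated with a toy curve. The sketch below is not Mori's original formulation and not the function estimated in the neuroimaging study; the shape and all parameters are illustrative assumptions only.

```python
import numpy as np

def toy_uncanny_valley(human_likeness):
    """Toy likeability curve: likeability rises with human-likeness overall,
    but dips sharply for highly (yet not perfectly) humanlike agents.
    human_likeness is a value in [0, 1]; all parameters are invented."""
    rising_trend = human_likeness                                    # general positive trend
    valley = 0.9 * np.exp(-((human_likeness - 0.85) ** 2) / 0.005)   # dip centred near 0.85
    return rising_trend - valley

# Likeability is lowest for highly humanlike but imperfect agents (x ~ 0.85),
# while perfectly humanlike agents (x = 1.0) are rated positively again.
for x in (0.2, 0.5, 0.85, 1.0):
    print(f"human-likeness={x:.2f} -> likeability={toy_uncanny_valley(x):+.2f}")
```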
Why are highly but not perfectly humanlike artificial figures creepy?
Evolutionary approaches:
Categorical uncertainty / perceptual mismatches:
Expectancy violation / Prediction errors:
Evolutionary approaches:
Pathogen avoidance, mate selection, also mortality salience was mentioned earlier (e.g. Ho, MacDorman, & Pramono, 2008; MacDorman & Ishiguro, 2006)
Categorical uncertainty / perceptual mismatches:
Not clear to which category it belongs (human? machine? hybrid?) (e.g. Jentsch, 1906; Gray & Wegner, 2012)
Expectancy violation / Prediction errors:
If people evaluate the robot’s behavior according to a human schema, it might not measure up to these expectations due to its imperfections
(MacDorman, 2006; Matsui, Minato, MacDorman, & Ishiguro, 2005, 2006; Mitchell, Szerszen, Lu, Schermerhorn, Scheutz, & MacDorman, 2011; Saygin, Chaminade, Ishiguro, Driver, & Frith, 2012; Steckenfinger & Ghazanfar, 2009)
Developmental influences: No uncanny valley for kids?
Children below the age of 9 did not rate the humanlike robot as creepier in comparison to the machine-like robot. This suggests a developmental effect for the uncanny valley.
Interplay between trustor, trustee & situation.
Trustor's propensity to trust:
Propensity to trust is regarded as a stable individual trait that refers to the general tendency for someone to trust other individuals.
Propensity to trust has a global effect on trust intentions (Colquitt et al., 2007) and trustworthiness assessments (Jones & Shah, 2016).
However, the impact of trust propensity is most salient early in interpersonal interactions, when other information may not yet be available (McKnight, Cummings, & Chervany, 1998).
Once other information becomes more salient, such as the trustee’s previous behaviors, propensity to trust will have a weaker influence on the extent to which the trustor will make him/herself vulnerable to the trustee (Mayer et al., 1995).
Trustee's perceived trustworthiness
Trustworthiness is the trustor’s perception of the trustee (Mayer & Davis, 1999).
Perceptions are formed as a trustor interprets and ascribes motives to the trustees’ actions (Ferrin & Dirks, 2003). Thus, perceptions of trustworthiness, although inherently within the trustor, are a function of the interaction of trustor and trustee as the trustor is processing information about the trustee. It is important to note these are the ascribed beliefs of the trustor and are not necessarily factual.
As interactions mature, a trustor will increasingly depend on the behavior of the trustee rather than personal dispositional factors, such as propensity to trust, when making trust evaluations (Jones & Shah, 2016; Levin et al., 2006).
Cognitive Trust
Cognitive trust describes the willingness to rely on a partner's ability/competence and predictability/reliability (Moorman et al., 1992; Rempel et al., 1985; Johnson-George & Swap, 1982).
It arises from an accumulated knowledge that allows one to make predictions, with some level of confidence, regarding the likelihood that a trustee will live up to his/her/its obligations.
Cognitive trust is knowledge-driven; the need to trust presumes a state of incomplete knowledge. A state of complete certainty regarding a partner's future actions implies that risk is eliminated and trust is redundant.
Affective Trust
Affective trust is the confidence one places in a partner on the basis of feelings generated by the level of benevolence/care and integrity the trustee demonstrates (Johnson-George & Swap, 1982; Rempel et al., 1985).
It is characterized by feelings of security and perceived strength of the relationship.
Affective trust is decidedly more confined to personal experiences with the focal partner than cognitive trust. As emotional connections deepen, trust in a partner may venture beyond that which is justified by available knowledge. This emotion-driven element of trust makes the relationship less transparent to objective risk assessments.
EU Definition of Artificial Intelligence
sense - think (plan) - act
Machine Learning
- (Deep Learning, Reinforcement Learning)
- Reasoning - information processing (Search, Planning, Knowledge) - Decision making
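The sense - think (plan) - act scheme can be sketched as a simple control loop. All class and function names below are hypothetical illustrations, not part of the EU definition.

```python
# Minimal sense - think (plan) - act sketch; all names and rules are illustrative only.
class SimpleAgent:
    def sense(self, environment):
        # Perceive raw input data from the environment (e.g. a camera reading).
        return environment.get("observation")

    def think(self, observation):
        # Reason over the observation (search, planning, knowledge) and decide on an action.
        return "approach" if observation == "object_detected" else "explore"

    def act(self, action):
        # Execute the chosen action, affecting the environment.
        print(f"executing action: {action}")

agent = SimpleAgent()
agent.act(agent.think(agent.sense({"observation": "object_detected"})))
```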
Research fields at Pichler's institute:
- Object Detection
- Semantic Scene Segmentation
- Human Pose Detection
- Deep Reinforcement Learning
- Object Detection (3D needed for robotics; video with a dog and a bike)
- Semantic Scene Segmentation (autonomous driving; what is the scene about? Meaning)
- Human Pose Detection (video with athletes and dancers; here too, 2D already works relatively well, 3D not yet)
- Deep Reinforcement Learning (robots learning to move)
- Online Training (robot learns objects; robot dealing with unknown situations or objects)
- Learning Robot Grasping Policies (gripper arms, grasping objects)
- Imitation Learning (video of filling a glass)
- Deep Reinforcement Learning (sorting out a bin, making space to grasp and place things; see the sketch after this list)
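As a rough illustration of the reinforcement-learning idea behind "robots learning to move" and learning grasping policies, the toy sketch below shows a tabular Q-learning update: the agent tries actions, observes a reward, and nudges its value estimates. The states, actions, and reward are invented and do not correspond to the institute's actual systems.

```python
import random

# Toy Q-learning sketch of a grasping policy; states, actions and rewards are invented.
ACTIONS = ["move_left", "move_right", "close_gripper"]
q_table = {}  # maps (state, action) -> estimated long-term value

def choose_action(state, epsilon=0.2):
    # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Q-learning update: move the estimate toward reward plus discounted best future value.
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One illustrative interaction step: closing the gripper above the object succeeds.
state = "above_object"
action = choose_action(state)
reward = 1.0 if action == "close_gripper" else 0.0   # invented reward signal
update(state, action, reward, next_state="object_grasped" if reward else state)
print(q_table)
```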
What is trust?
What is a trustful person?
Cambridge Dictionary:
Trust is to believe that someone is good and honest and will not harm you, or that something is safe and reliable
Trustful person:
- Consistency and reliability (meet the expectations, avoid surprises and risks)
- Adequacy and adaptability
- Execution (always competent and professional)
- Honesty and openness (communicate, inform and explain; point to room for improvement)
Trustworthy Robots: Safety, Credibility and Explainability
EU Guidelines for Trustworthy AI, published April 8, 2019: What does this document do?
- It talks about requirements of trustworthy AI
- Technical methods for realising explainable AI
- Also, if you have a system, how to assess whether it is trustworthy
Framework for Trustworthy AI
Introduction (3)
Chapter 1 (2)
Chapter 2
Chapter 3
Introduction: Lawful AI, Ethical AI, Robust AI
Chapter 1: Foundations of Trustworthy AI -> 4 Ethical Principles (Respect for human autonomy, Prevention of harm, Fairness, Explicability)
Chapter 2: Realisation of Trustworthy AI -> 7 Key Requirements (Technical, non-technical methods)
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental wellbeing
7. Accountability
Chapter 3: Assessment of Trustworthy AI -> Trustworthy AI Assessment List
7 key requirements of trustworthy AI
Again: 7 key requirements of Trustworthy AI (a checklist sketch follows this list):
- Human agency and oversight
- Technical Robustness and safety
- Privacy and Data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental wellbeing
- Accountability (Auditability and accounting)
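As an illustration only, the seven requirements could be tracked as a simple self-assessment checklist in code; the structure and field names below are my own sketch, not the official Assessment List (ALTAI) format.

```python
# Illustrative checklist for the 7 key requirements; not the official assessment list format.
REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental wellbeing",
    "accountability",
]

def assess(answers):
    # Mark each requirement as addressed (True) or still open (False).
    return {req: bool(answers.get(req, False)) for req in REQUIREMENTS}

report = assess({"transparency": True, "accountability": True})
print("open requirements:", [req for req, ok in report.items() if not ok])
```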
Technical Methods for Trustworthy AI (5)
- Architectures for Trustworthy AI
- Ethics and rule of law by design
- Explanation methods (XAI)
- Testing and validating
- Quality of Service Indicators
Difference between Trust and Credibility?
Trust comes from the heart (firm belief in the reliability, truth or ability of someone or something).
Credibility needs some sort of justification or proof (head); able to be believed, justifying confidence.
A credible robot system adheres to guidelines and standards from the technical perspective while at the same time taking the effect on its users' minds into account.
We aim to provide credibility guidelines and technical architectures that, when followed, give a robot system "certified" trustworthiness (which is similar to the current robotic safety approach)
What is CredRoS?
CredRoS - Credible and Safe Robot Systems
- Sensitive manipulation and robot safety
- Dynamic detection of the environment
- Sensory perception of the human being
- Multimodal Human-Robot Interaction
- Task planning and task execution
- Demonstration
- Explore
- Develop and integrate for demonstration
- Contribute
Here you see different points of view. The horizontal axis is the timeline, starting from the present with different steps: different points of view at different times.
The first step is to sense something via sensors and input data. This is interpreted according to the environment and context, and also according to the history.
Interact or React is what the robot can immediately do.
It can also plan something for the future.
The next step is to preserve (store) the information; a sketch of this loop follows below.
________
Situation Context (immediate)
Reflex Context (nearly immediate; given the context, the robot should be able to foresee something)
Safety Context
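The sense → interpret → interact/react → plan → preserve sequence with its context layers can be sketched as a small processing loop. Everything below (function names, context handling, example data) is a hypothetical illustration of the described idea, not actual CredRoS code.

```python
# Hypothetical sketch of the described loop: sense -> interpret -> react/plan -> preserve.
history = []  # preserved (stored) interpretations, available for later planning

def sense(read_sensor):
    return read_sensor()  # raw input data from a sensor

def interpret(data, environment, context):
    # Combine the current data with environment, context and the stored history.
    return {"data": data, "environment": environment, "context": context,
            "known_steps": len(history)}

def react(interpretation):
    # Immediate reaction (situation/reflex/safety context): e.g. stop on a safety event.
    return "stop" if interpretation["context"] == "safety" else "continue"

def plan(interpretation):
    # Longer-horizon planning that can draw on everything preserved so far.
    return f"plan next step using {interpretation['known_steps']} stored observations"

def preserve(interpretation):
    history.append(interpretation)  # store for future interpretation and planning

interp = interpret(sense(lambda: "proximity: 0.2 m"), environment="lab", context="safety")
print(react(interp), "|", plan(interp))
preserve(interp)
```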
Tactile Perception
Somatosensation can be split up into 3 sensory systems.
- Hapsis or Touch
- Nociception (Temperature and Pain)
- Proprioception (Body Awareness)
What is Proprioception?
Body awareness
- Sensory information from muscles, tendons and ligaments
- Bodily position and awareness
Proprioceptive activities are:
- Jumping on a trampoline
- Climbing a rock wall
- Pulling a heavy wagon
- Monkey bars
Explain Proprioception
The brain, vestibular organs, eyes, etc.
The brain receives and interprets information from multiple inputs:
Vestibular organs in the inner ear send information about rotation, acceleration and position
Eyes send visual information
Stretch receptors in skin, muscles and joints send information about the position of body parts
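As a robotics analogy only: combining these inputs can be sketched as a simple weighted sensor fusion, similar to how a robot might merge joint encoders, an IMU, and vision. The sensors, values, and weights below are invented for illustration.

```python
# Toy weighted fusion of joint-angle estimates from several 'senses'; all values are invented.
def fuse_estimates(estimates, weights):
    # Weighted average of the per-sensor estimates (here: a joint angle in degrees).
    total_weight = sum(weights.values())
    return sum(estimates[name] * weights[name] for name in estimates) / total_weight

estimates = {"vestibular": 31.0, "visual": 29.5, "stretch_receptors": 30.2}
weights = {"vestibular": 0.3, "visual": 0.3, "stretch_receptors": 0.4}
print(f"fused joint-angle estimate: {fuse_estimates(estimates, weights):.1f} degrees")
```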