Robopsychologie
JKU - MA Psychologie
Set of flashcards Details

| Flashcards | 111 |
| --- | --- |
| Language | English |
| Category | Technology |
| Level | University |
| Created / Updated | 21.06.2020 / 25.10.2020 |
| Weblink | https://card2brain.ch/box/20200621_robopsychologie |
Pile: Tactile, Auditive, Visual Perception
Ultrarealistic voice cloning
Since voice-synthesis software can copy the rhythms and intonations of a person's voice, it can be used to produce convincing speech.
This software is becoming more accessible and was used for a large-scale theft in one specific case:
The managing director of a British energy company believed that his boss was on the phone.
◦ He followed orders on a Friday afternoon in March 2019 to wire more than $240,000 to an account in Hungary.
◦ It was not his boss on the phone but someone using ultrarealistic voice cloning.
Synthetic Speech Generated from Brain Recordings.
A research group at UCSF Weill Institute for Neuroscience found a way to translate brain recordings into speech.
A decoder transforms brain activity patterns produced during speech into movements of the virtual vocal tract.
A synthesizer converts these vocal tract movements into a synthetic approximation of the participant’s voice.
This synthetic speech could form the basis of a clinically viable device for patients with speech loss.
Selective Attention
We are very good at focusing our attention on a particular object in our environment while disregarding unimportant details.
This way we can focus our senses on what matters to us.
What happens to the less relevant information?
- it is suppressed
Are faces treated as a combination of different parts?
- Faces are not treated as a combination of different parts. Faces are processed as "wholes".
A judgment about the upper half changes depending on the lower half of the face
What are Human Perceptual Biases?
• African American (AA) and European American (EA) participants saw images of the same ethnicity faces and other ethnicity faces
• Participants showed stronger FFA (fusiform face area) activation for the same ethnicity faces
What makes human vision special?
• We can recognise familiar faces despite...
- Showing different emotional expressions
- Seeing them in different angles
Large configural changes leave recognition intact
Recognition of familiar faces is remarkably robust under a range of deformations:
Familiar individuals can be recognized despite changes to the metric distances between their facial features
Explanation – Colour Constancy
• How can the blue/black dress phenomenon be explained?
The blue/black dress problem can be explained by a phenomenon called colour constancy: the way our brains interpret colours under different illumination.
What you see in the picture depends on your individual perception and where you see it:
- Shadows are interpreted differently by our visual system
- When the shadows are removed: the colour is perceived differently
Why do we see optical illusions?
- The brain has developed a "rulebook" of how objects should look from past experiences
- When interpreting new information, our brain relies on previous knowledge
- the brain takes a "shortcut" to focus on important aspects
- optical illusions fool our brains by taking advantage of these shortcuts
Categorization starts very early in infancy
- 9 months?
- 3-4 months?
9-month-old infants can rapidly categorize human and ape faces (Peykarjou et al., 2017)
3- to 4-month-old infants can perceptually categorize cat and dog silhouettes (Quinn, Eimas & Tarr, 2001)
Problems with Computational Visual Perception
Generative Adversarial Networks (GANs)
Adversarial images
- Generative Adversarial Networks (GANs) revealed a new challenge for image classification: Adversarial Images
- Adversarial images are images whose class category looks obvious to a human but causes massive failures in a deep network
- With only a (seemingly) minor distortion, a deep network's classification of an image can change from a panda to a gibbon
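The panda-to-gibbon example comes from the Fast Gradient Sign Method (FGSM): nudge every pixel a tiny step in the direction that increases the classifier's loss. As a minimal sketch (assuming a toy linear classifier rather than a real deep network, with made-up weights):

```python
import numpy as np

def fgsm_perturb(x, loss_grad, epsilon=0.1):
    """FGSM: shift each input dimension by +/- epsilon in the
    direction that increases the classifier's loss."""
    return x + epsilon * np.sign(loss_grad)

# Toy linear "classifier": score = w . x (positive score -> class A).
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # hypothetical weights
x = rng.normal(size=64)   # hypothetical "image"

# For this toy model, the gradient that *lowers* the class-A score
# is -w, so the attack pushes the input against the weight vector.
x_adv = fgsm_perturb(x, -w, epsilon=0.1)

# The change per dimension is at most epsilon (a "minor distortion"),
# yet the classification score moves substantially.
print(w @ x, w @ x_adv)
```

The same idea scales to deep networks: the gradient is computed by backpropagation through the whole model, and a perturbation invisible to humans flips the predicted class.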
Problems with Computational Visual Perception
A biological system saves a lot of computation through selective attention and an opportunistic sampling of visual patterns
Instead of a serial image-processing pipeline, most biological vision systems involve a tight feedback loop in which orienting and tuning of the visual sensor plays an essential role
Errors can be dangerous in real-world applications, for example autonomous driving
Ethical Problems with Computational Visual Perception?
• It is necessary to highlight the fundamentally flawed ways that ImageNet classifies people in “problematic” and “offensive” ways.
• It is crucial to assess the fallibility of AI systems and prevalence of machine learning bias
Example of flawed classification:
Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 71% of cases for women
Human judges achieved much lower accuracy: 61% for men and 54% for women
The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person.
Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style).
WaveNet by Google DeepMind
Duplex‘ naturally sounding voice was developed using DeepMind's WaveNet technology
WaveNet directly models the raw waveform of the audio signal. It is a fully convolutional neural network (CNN). Input sequences are real waveforms recorded from human speakers. After training, the network is sampled to generate synthetic utterances
Able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems.
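WaveNet's core building block is the causal dilated convolution: each output sample depends only on current and past samples, spaced `dilation` steps apart, so stacked layers see an exponentially growing history. A minimal NumPy sketch of one such layer (illustrative only; the function name and kernel values are made up, and real WaveNet adds gated activations and many stacked layers):

```python
import numpy as np

def causal_dilated_conv(signal, kernel, dilation):
    """1-D causal convolution with dilation: output at time t uses
    only samples at t, t-dilation, t-2*dilation, ... (never the future)."""
    pad = (len(kernel) - 1) * dilation
    padded = np.concatenate([np.zeros(pad), signal])
    out = np.zeros(len(signal))
    for t in range(len(signal)):
        for k, w in enumerate(kernel):
            out[t] += w * padded[pad + t - k * dilation]
    return out

# Average each sample with the one two steps earlier (dilation = 2).
x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, kernel=np.array([0.5, 0.5]), dilation=2)
```

Because the convolution never looks ahead, the trained network can be sampled one audio sample at a time, which is how synthetic utterances are generated.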
How humanlike should a virtual assistant sound?
Is it important for users to be able to distinguish between human and machine (e.g. on the phone)? Would you like to know?
Are artificial voices, which sound very humanlike, sometimes creepy?
Would you prefer to talk to a realistically human sounding bot or one that clearly sounds like a machine?
Are these preferences perhaps context-dependent?
Which visual image of a virtual conversation partner does a more or less human-like voice actually evoke?
Research at the LIT Robopsychology Lab:
User Expectations of Robot Appearance Induced by Different Robot Voices
Results
Human-likeness of the drawn robots was generally high across all conditions.
Some features appeared in almost all drawings regardless of the voice (e.g., head, eyes).
Other features were significantly more prevalent in voice conditions characterized by low human-likeness (wheels) or high human-likeness (e.g., nose).
„Female“ over-representation in voice assistants
Most companies that produce automated voices hold auditions for voice actors and collect recordings of them speaking. Then they invite focus groups to rate the voices on how well they convey certain attributes: e.g., warmth, friendliness, competence
Some studies suggest that female synthetic voices are preferred (voicebot.ai, 2019) as they are perceived as warmer compared to male voices (Karl MacDorman, Indiana University).
Other studies revealed the opposite:
Results indicated that female human speech was rated as preferable to female synthetic speech,
and that male synthetic speech was rated as preferable to female synthetic speech
(Mullenix et al., 2003).
Male voices are perceived as more intelligent (Clifford Nass, Stanford University)
Q – The first genderless voice
This first gender-neutral artificial voice was created to reduce gender bias in AI assistants.
Between ??? and ??? Hz (= gender-neutral range according to research)
Between 145 and 175 Hz (= gender-neutral range according to research)
Voice was refined after surveying 4,600 people
Collaboration: Copenhagen Pride, Virtue, Equal AI, Koalition Interactive & thirtysoundsgood
Tactile Perception
Somatosensation can be split up into 3 sensory systems.
- Hapsis or Touch
- Nociception (Temperature and Pain)
- Proprioception (Body Awareness)
What is Proprioception?
Body awareness
- Sensory information from muscles, tendons and ligaments
- Bodily position and awareness
Proprioceptive activities are:
- Jumping on a trampoline
- Climbing a rock wall
- Pulling a heavy wagon
- Crossing monkey bars
Explain Proprioception
The brain, vestibular organs, eyes, etc.
The brain receives and interprets information from multiple inputs:
Vestibular organs in the inner ear send information about rotation, acceleration and position
Eyes send visual information
Stretch receptors in skin, muscles and joints send information about the position of body parts
Proprioception Task
- Activity: Find your fingertips
- Close your eyes, raise both hands above your head, and keep the fingers of your left hand totally still. With your right hand, quickly touch your index fingertip to your nose, then quickly touch the tip of your left thumb with the tip of your right index finger. Quickly repeat the entire process while attempting to touch each fingertip (always return to your nose between fingertip attempts).
Try again, but this time, wiggle the fingers of your raised hand while you're doing this.
Switch hands and try again. How successfully did you locate each fingertip? Did you improve with time? Was there a difference when you used your right versus your left hand?
Tactile Perception:
Phantom Limb Sensations
If a limb is lost through an accident or amputation – around 60% of patients experience phantom limb sensations
The patient feels the presence of their lost limbs
Why?
This happens because neurons in the somatosensory cortex that received input from the sensory receptors of the amputated limb now receive input from the neighbouring regions of the body
This leads patients to feel that the limb is still there
Phantom Limb Pain
Amputees often have to deal with phantom limb pain
Explanation by Neuroscientist Vilayanur Ramachandran:
If you pick up a glass = your brain sends signals down your arm to move towards the glass and pick it up
If your arm is not there = no movement happens
Thus your brain keeps sending "move" signals
Phantom Limb Pain
Can visual feedback help stop these never-ending signals?
Mirror Therapy is one way to help with phantom limb pain
The rubber hand illusion can help to deal with this pain.
Tactile Perception: Touch & Proprioception
Similar to the Mirror Therapy for phantom limb pain – visual feedback can also be given via Virtual Reality to treat phantom limb pain
how?
Using VR Games that require the patients to move their limbs
The amputated limb's image is filled in within the VR game, so patients receive visual feedback. Example: https://www.youtube.com/watch?v=VI621YPv9C0
Robot Proprioception
Again, what is Proprioception?
For what tasks is proprioception important for Robots?
Proprioception = bodily awareness
Robot proprioception is important for a variety of tasks:
- To avoid interference with objects / humans
- To avoid harming itself or others
- To behave autonomously
name two examples of Body aware robots:
Soter et al., 2018:
Kwiatkowski & Lipson, 2019:
Soter et al., 2018:
- Bodily aware soft robots: Integration of proprioceptive and exteroceptive sensors
- A simulated soft robot can learn to imagine its motion even when its visual sensor is not available
Kwiatkowski & Lipson, 2019:
- A robot modeled itself autonomously
- The robot had no prior knowledge about its shape
- The self-model was then used to perform tasks & detect self-damage
- Where are proprioceptive sensors needed in autonomous driving?
- Inertial Measurement Units (IMUs)
- measure values internal to the system (robot / autonomous car)
- e.g. motor speed, wheel load, direction, battery status
Inertial Measurement Units
- IMUs are employed for monitoring velocity and position changes
- Tachometers are utilized for measuring speed, and altimeters for altitude
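Monitoring velocity and position from an IMU works by dead reckoning: integrate the measured acceleration once for velocity and again for position. A minimal one-dimensional sketch (the function name and sample values are made up; real systems fuse IMU data with other sensors because integration error accumulates):

```python
def dead_reckon(accels, dt):
    """Integrate IMU accelerations (m/s^2) over fixed time steps:
    once to update velocity, again to update position.
    Starts from rest at the origin; returns position after each step."""
    v, p = 0.0, 0.0
    positions = []
    for a in accels:
        v += a * dt   # velocity update
        p += v * dt   # position update
        positions.append(p)
    return positions

# Constant 1 m/s^2 acceleration for 10 steps of 0.1 s.
path = dead_reckon([1.0] * 10, dt=0.1)
```

Note that this simple Euler integration slightly overestimates the analytic result (0.5 m after 1 s), which is one reason drift correction from wheel odometry or GPS is needed in practice.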
Human Touch
Human Somatosensory System:
Name 9
- hot
- cold
- pain
- pressure
- tickle
- itch
- vibrations
- smooth
- rough
Artificial Skin
Name one Austrian company
Airskin by Blue Danube Robotics: developed in Austria
- Industrial Robot with touch-sensitive skin
- Robot and gripper are fully covered with soft Airskin pads, including clamping and shearing areas
In the event of a collision between the robot and an employee or an object, the collision sensor responds and instantly triggers an emergency stop.
Columbia engineers have developed a ‘tactile robot finger with no blind spots’ (February 26, 2020)
What is special about it?
- Their finger can localize touch with very high precision (less than 1 mm) over a large, multicurved surface, much like its human counterpart
- Integrating the system onto a hand is easy: thanks to this new technology, the finger collects almost 1,000 signals but needs only a 14-wire cable connecting it to the hand and no complex off-board electronics.
• A capacitive pressure sensor system with 108 sensitive zones for the hands of the humanoid robot iCub was designed (Schmitz et al., 2010).
• The results show that the sensor can be used to determine where and (although to a lesser extent) how much pressure is applied to the sensor.
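A common way such zoned sensor arrays estimate *where* pressure is applied is a pressure-weighted centroid over the zone positions. A minimal sketch of that readout idea (illustrative only; the function and the zone layout are made up, not the actual iCub firmware):

```python
def touch_centroid(zone_positions, pressures):
    """Estimate the contact location as the pressure-weighted centroid
    of the sensitive zones. zone_positions: list of (x, y) coordinates;
    pressures: one reading per zone."""
    total = sum(pressures)
    x = sum(px * w for (px, _), w in zip(zone_positions, pressures)) / total
    y = sum(py * w for (_, py), w in zip(zone_positions, pressures)) / total
    return x, y

# Four hypothetical zones on a 2x2 grid; the left column is pressed harder,
# so the estimate shifts left while staying centred vertically.
zones = [(0, 0), (2, 0), (0, 2), (2, 2)]
loc = touch_centroid(zones, pressures=[3, 1, 3, 1])
```

Summing total pressure over the zones likewise gives a (coarser) estimate of *how much* force is applied, matching the paper's finding that location is resolved better than magnitude.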