Robopsychologie

JKU - MA Psychologie



Set of flashcards Details

Flashcards 111
Language English
Category Technology
Level University
Created / Updated 21.06.2020 / 25.10.2020
Weblink
https://card2brain.ch/box/20200621_robopsychologie

Pile: Tactile, Auditory, Visual Perception

Tactile Perception

Somatosensation can be split up into 3 sensory systems. 

  1. Hapsis or Touch
  2. Nociception (Temperature and Pain)
  3. Proprioception (Body Awareness)

What is Proprioception?

Body awareness

  • Sensory information from muscles, tendons (Sehnen) and ligaments (Bänder)
  • Bodily position and awareness

Proprioceptive activities are:

  • Jumping on a trampoline
  • Climbing a rock wall
  • Pulling a heavy wagon
  • Crossing monkey bars

Explain Proprioception

The brain, vestibular organs, eyes, etc.

The brain receives and interprets information from multiple inputs: 

Vestibular organs: in the inner ear send information about rotation, acceleration (Beschleunigung) and position

Eyes send visual information

Stretch receptors in skin, muscles and joints (Gelenke) send information about the position of body parts

Proprioception Task

  • Activity: Find your fingertips
  • Close your eyes, raise both hands above your head and keep the fingers of your left hand totally still. With your right hand, quickly touch your index fingertip to your nose, then quickly touch the tip of the thumb of your left hand with the tip of your right index finger. Quickly repeat the entire process while attempting to touch each fingertip (always return to your nose in between fingertip attempts).
  • Try again, but this time, wiggle the fingers of your raised hand while you're doing this.

  • Switch hands and try again. How successfully did you locate each fingertip? Did you improve with time? Was there a difference when you used your right versus your left hand?

Tactile Perception:
Phantom Limb (Gliedmassen) Sensations

  • If a limb is lost through an accident or amputation – around 60% of patients experience phantom limb sensations

  • The patient feels the presence of their lost limbs

    Why?

This happens because neurons in the somatosensory cortex that received input from the sensory receptors of the amputated limb now receive input from the neighbouring regions of the body.

This leads the patient to feel that the limb is still there.

Phantom Limb Pain

Amputees often have to deal with phantom limb pain

Explanation by Neuroscientist Vilayanur Ramachandran:

If you pick up a glass = your brain sends signals down your arm to move towards the glass and pick it up

If your arm is not there = no movement happens

Thus your brain keeps sending "move" signals

Phantom Limb Pain

  • Can visual feedback help stop these never-ending signals?

Mirror Therapy is one way to help with phantom limb pain

The rubber hand illusion can help to deal with this pain.

Tactile Perception: Touch & Proprioception

  • Similar to the Mirror Therapy for phantom limb pain – visual feedback can also be given via Virtual Reality to treat phantom limb pain

how?

  • Using VR games that require the patients to move their limbs:
    the amputated limb's image is filled in within the VR game = so they receive visual feedback

  • Example: https://www.youtube.com/watch?v=VI621YPv9C0

Robot Proprioception

 

Again, what is Proprioception?
For what tasks is proprioception important for Robots?

Proprioception = bodily awareness

 

Robot proprioception is important for a variety of tasks:

  • To avoid interference with objects / humans
  • To avoid harming itself or others
  • To behave autonomously

name two examples of Body aware robots:

  • Soter et al., 2018:

  • Kwiatkowski & Lipson, 2019:

Soter et al., 2018:

  • Bodily aware soft robots: integration of proprioceptive and exteroceptive sensors
  • A simulated soft robot can learn to imagine its motion even when its visual sensor is not available

 

Kwiatkowski & Lipson, 2019:

  • A robot modeled itself autonomously
  • The robot had no prior knowledge about its shape
  • The self-model was then used to perform tasks & detect self-damage

 

  1. Where are proprioceptive sensors needed in autonomous driving?
  2. What are Inertial Measurement Units?

 

 

  • Proprioceptive sensors measure values internal to the system (robot / autonomous car)
  • e.g. motor speed, wheel load, direction, battery status

 

Inertial Measurement Units

  • IMUs are employed for monitoring velocity and position changes
  • Tachometers are utilized for measuring speed, and altimeters for altitude (Höhenlage); a minimal sketch of combining such internal readings follows below
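To make this concrete, here is a minimal, hypothetical sketch (in Python) of how a vehicle might combine purely internal readings – wheel speed from a tachometer and yaw rate from an IMU gyroscope – into a dead-reckoning pose estimate. All class and field names are illustrative assumptions, not taken from any real autonomous-driving stack.

```python
# Hypothetical sketch: dead-reckoning from proprioceptive readings (IMU + tachometer).
from dataclasses import dataclass
import math

@dataclass
class ProprioceptiveReading:
    wheel_speed: float    # m/s, from a tachometer / wheel encoder
    yaw_rate: float       # rad/s, from the IMU gyroscope
    battery_level: float  # 0..1, another purely internal value

@dataclass
class PoseEstimate:
    x: float = 0.0
    y: float = 0.0
    heading: float = 0.0  # rad

def integrate(pose: PoseEstimate, r: ProprioceptiveReading, dt: float) -> PoseEstimate:
    """Advance the pose by one time step using only internal (proprioceptive) sensors."""
    heading = pose.heading + r.yaw_rate * dt
    x = pose.x + r.wheel_speed * math.cos(heading) * dt
    y = pose.y + r.wheel_speed * math.sin(heading) * dt
    return PoseEstimate(x, y, heading)

pose = PoseEstimate()
for _ in range(10):  # ten 0.1 s steps while driving a gentle left curve at 5 m/s
    pose = integrate(pose, ProprioceptiveReading(wheel_speed=5.0, yaw_rate=0.1, battery_level=0.9), dt=0.1)
print(pose)
```

In a real vehicle such internally integrated estimates drift over time and are fused with exteroceptive sensors (GPS, cameras, lidar).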

Human Touch 

Human Somatosensory System: 
Name 9

  1. hot
  2. cold
  3. pain
  4. pressure
  5. tickle 
  6. itch
  7. vibrations 
  8. smooth
  9. rough

Artificial Skin

Name one Austrian company

Airskin by Blue Danube Robotics: developed in Austria

  • Industrial Robot with touch-sensitive skin
  • The robot and gripper are fully covered with soft Airskin pads, including clamping and shearing areas
  • In the event of a collision between the robot and an employee or an object, the collision sensor responds and instantly triggers an emergency stop (see the sketch below).
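As an illustration only, here is a minimal sketch of a collision-triggered emergency-stop loop. The sensor interface, the threshold and the function names are hypothetical assumptions and do not reflect Airskin's actual API.

```python
# Hypothetical sketch: stop the robot as soon as any touch-sensitive pad reports contact.
import time

COLLISION_THRESHOLD = 0.5  # normalised pad pressure above which we assume contact (assumption)

def read_pad_pressures() -> list:
    """Placeholder for hardware access: one normalised pressure value per skin pad."""
    return [0.0] * 12

def emergency_stop() -> None:
    """Placeholder: cut motor power / raise the safety stop on the robot controller."""
    print("EMERGENCY STOP triggered")

def monitor(poll_interval: float = 0.005) -> None:
    """Poll every pad and trigger an emergency stop on the first detected collision."""
    while True:
        if any(p > COLLISION_THRESHOLD for p in read_pad_pressures()):
            emergency_stop()
            break
        time.sleep(poll_interval)
```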

Columbia engineers have developed a ‘tactile robot finger with no blind spots’ (February 26, 2020)

What is special about it?

  • Their finger can localize touch with very high precision – less than 1 mm – over a large, multicurved surface, much like its human counterpart
  • Integrating the system onto a hand is easy: thanks to this new technology, the finger collects almost 1,000 signals but only needs a 14-wire cable connecting it to the hand, and it requires no complex off-board electronics.

• A capacitive pressure sensor system with 108 sensitive zones for the hands of the humanoid robot iCub was designed (Schmitz et al., 2010).

• The results show that the sensor can be used to determine where and (although to a lesser extent) how much pressure is applied to the sensor.

iCub's Hand

 

What do you know?

  • the palm has 48 taxels
  • each of the five fingertips has 12 taxels
  • the fingertip has a shape similar to a human fingertip
  • the fingertip is small
  • the fingertip provides 12 pressure measurements
  • and is intrinsically compliant

Artificial Touch

  • A prosthetic hand developed by SynTouch is:

  • equipped with human-like fingernails and fingerprints
  • able to use contact detection
  • able to adjust force
  • mimicking sensation of vibrations, textures and temperatures
  • the hand's reflexes respond on their own

Users have an unprecedented ability to pick up objects without having to actively think about the amount of force they are applying - just like the reflexes of a real human hand

One of the most advanced Robot Hands..

  • ... created by the Shadow Robot Company and includes sensors from SynTouch

  • is a....

  • It is a Tactile Telerobot that transmits your own hand movements to the robot hands

  • It can be used for a variety of tasks and gives realistic touch feedback

What happens if people implant technical bodyparts? (Cyborgs = cybernetic organism)

  • A person/being with organic and biomechatronic body parts

3 criteria: 

  1. Communication between body and brain must be intact
  2. The technical component has to be merged with the body so that it becomes a body part
  3. The additional body part has to improve the human's senses / capabilities

Neil Harbisson

Who is he?

  • part of the Cyborg community
  • he has an antenna implanted which helps him to perceive visible and invisible colours via audible vibrations in his skull
  • these colours include infrareds and ultraviolets
  • he can also receive colours from space, images, videos, music or phone calls directly into his head via an internet connection

Who is Moon Ribas?

  • She developed the Seismic Sense: an online seismic sensor implanted in her feet that allowed her to perceive earthquakes taking place anywhere on the planet through vibrations in real time.
  • Ribas’ seismic sense also allowed her to feel moonquakes, the seismic activity on the Moon

  • Ribas believes that by extending our senses to perceive outside the planet, we can all become senstronauts. Adding this new sense allowed her to be physically on Earth while her feet felt the Moon.

Auditory Perception
The Ear Anatomy - Summary

 

Pinna
Auditory Canal
Eardrum
Ossicles
Cochlea

Pinna: Collects and amplifies the sound waves

Auditory Canal: The soundwaves from the pinna are deflected into the auditory canal

Eardrum: vibrates and sends the vibration onto the ossicles

Ossicles (Hammer, Anvil (Amboss), Stirrup (Steigbügel)): transmit and amplify the vibrations from the eardrum to the cochlea

Cochlea: Snail-shaped; contains lymphatic fluid. Within this fluid is the basilar membrane, which has hair cells attached to it

Auditory Illusions

What makes human hearing special? 

Cocktail Party Effect:

◦ The brain is able to focus the auditory attention on a particular stimulus

◦ Simultaneously, it is filtering out other auditory stimuli

◦ An example of this: hearing your own name in a noisy room

Historically, systems have struggled to filter out background noise to recognise the "main speaker's" voice.

How did Google Research tackle this problem?

They trained "multi-stream convolutional neural network“

  • When trained the system is now capable of focusing on a single voice and filtering out everything else. 

  • This means that a trained system might be even better at filtering out noises compared to a human...
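For intuition, here is a small conceptual sketch of mask-based source separation, the general idea behind such systems: keep the time-frequency bins dominated by the target voice and suppress the rest. The arrays and the "ideal ratio mask" below are toy stand-ins; the actual Google system predicts the mask with a multi-stream convolutional neural network conditioned on the target speaker.

```python
# Toy sketch of mask-based speech separation on made-up magnitude spectrograms.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((257, 100))   # magnitude spectrogram of the target voice (toy data)
noise = rng.random((257, 100))    # everything else: other speakers, background noise
mixture = target + noise          # what the microphone actually records

# An "ideal ratio mask" keeps time-frequency bins dominated by the target speaker.
mask = target / (target + noise + 1e-8)

# Applying the mask to the mixture recovers an estimate of the target voice.
# In a real system, a neural network predicts `mask` from the mixture alone.
estimate = mask * mixture
print(np.abs(estimate - target).mean())  # close to zero for this toy example
```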

What makes human communication special?

Understanding prosody, intonation and emotions

What is subglottal pressure?

What is prosodic modulation?

  • Physiological variations in an emotionally aroused speaker can cause an increase of the subglottal pressure (the pressure generated by the lungs beneath the larynx), which can affect voice amplitude and frequency = expression of the speaker’s emotional state

  • Expression of emotions through prosodic modulation of the voice, in combination with other communication channels, is crucial for affective and attentional regulation in social interactions in adults and infants (Sander et al., 2005; Schore and Schore, 2008).

  • For emotional communication: prosody can prime or guide the perception of the semantic meaning (Ishii et al., 2003; Pell et al., 2011; Newen et al., 2015; Filippi et al., 2016).

  • Human speakers can put words and their meaning into context.

  • Humans assess gestures and facial expressions simultaneously with spoken words

Problems related to AI voice assistants

In comparison to humans, AI assistants have difficulties.....

Difficulties in understanding prosody, intentions and humour from spoken language.

  • This can cause misinterpretations of spoken language. Missy Cummings, an associate professor at MIT, said:

“You could do all the machine learning in the world on the spoken word, but sarcasm is often in tone and not in word,” she added. “[Or] facial expressions. Sarcasm has a lot of nonverbal cues.”

New AI voice assistant can understand intonations

 

Which one?

A new AI assistant, OTO, specialises in this and can be used to analyse real-time conversational data:

“OTO is moving beyond speech-to-text, pioneering the first multi-dimension conversational system with the ability to merge both words + intonation (Acoustic Language Processing). This provides a much richer understanding of the context, sentiment and behaviors of a conversation.“

Recent research has focussed on detecting depression from a person‘s speech pattern.

How?

  • AI algorithms can now more accurately detect depressed mood using the sound of a person‘s voice, according to new research at the University of Alberta

  • An App could collect voice samples from people over longer time periods when they speak naturally

  • Over time, the App would track indicators of mood

Voice Cloning - Voice cloning is easily accessible today:

  • Montreal-based AI startup Lyrebird provides an online platform that can mimic a person's speech when trained on 30 or more recordings

  • Baidu introduced a new neural voice cloning system that synthesizes a person’s voice from only a few audio samples

  • New Github project (Sept., 2019): Users enter a short voice sample and the model — trained only during playback time — can immediately deliver text-to-speech utterances in the style of the sampled voice

Ultrarealistic voice cloning

  • Since voice-synthesis software can copy the rhythms and intonations of a person's voice, it can be used to produce convincing speech.

  • This software is becoming more accessible and was used for a large-scale theft in one specific case:

A managing director of a British energy company, believed that his boss was on the phone.

◦ He followed orders on a Friday afternoon in March 2019 to wire more than $240,000 to an account in Hungary.
◦ It was not his boss on the phone but someone using ultrarealistic voice cloning.

Synthetic Speech Generated from Brain Recordings. 

  • A research group at UCSF Weill Institute for Neuroscience found a way to translate brain recordings into speech.

  • A decoder transforms brain activity patterns produced during speech into movements of the virtual vocal tract.

  • A synthesizer converts these vocal tract movements into a synthetic approximation of the participant’s voice.

  • This synthetic speech approach could form the basis of a device that is clinically viable for patients with speech loss (a toy sketch of the two-stage pipeline follows below).
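As a rough illustration of the two-stage idea (brain activity → vocal-tract movements → speech features), here is a toy sketch. The dimensions and the simple feed-forward networks are assumptions chosen only for illustration; the UCSF system uses recurrent networks trained on real cortical recordings.

```python
# Toy two-stage decoding pipeline with random data (all sizes are assumptions).
import torch
import torch.nn as nn

N_ELECTRODES = 256    # neural features per time step (assumption)
N_ARTICULATORS = 33   # vocal-tract kinematic parameters (assumption)
N_MEL_BINS = 80       # spectrogram bins produced by the synthesizer (assumption)

# Stage 1: decoder maps brain activity to vocal-tract movements.
decoder = nn.Sequential(nn.Linear(N_ELECTRODES, 128), nn.ReLU(), nn.Linear(128, N_ARTICULATORS))

# Stage 2: synthesizer maps vocal-tract movements to speech spectrogram frames.
synthesizer = nn.Sequential(nn.Linear(N_ARTICULATORS, 128), nn.ReLU(), nn.Linear(128, N_MEL_BINS))

brain_activity = torch.randn(100, N_ELECTRODES)  # 100 time steps of (random) "recordings"
kinematics = decoder(brain_activity)             # intermediate articulatory trajectory
spectrogram = synthesizer(kinematics)            # acoustic features, ready for a vocoder
print(spectrogram.shape)                         # torch.Size([100, 80])
```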

Selective Attention

  • We are very good at focusing our attention on a particular object in our environment and disregarding unimportant details.

  • This way we can focus our senses on what matters to us

What happens to the less relevant information?
Are faces treated as a combination of different parts?

  1. it is suppressed
  2. Faces are not treated as a combination of different parts. Faces are processed as “wholes“
    A judgment about the upper half changes depending on the lower half of the face

What are Human Perceptual Biases?

• African American (AA) and European American (EA) participants saw images of same-ethnicity faces and other-ethnicity faces

• Participants showed stronger FFA (fusiform face area) activation for same-ethnicity faces

What makes human vision special?

• We can recognise familiar faces despite...

  • Showing different emotional expressions
  • Seeing them from different angles
  • Large configural changes leave recognition unharmed

  • Recognition of familiar faces is remarkably robust under a range of deformations:

  • Familiar individuals can be recognized despite changes to the metric distances between facial features

Explanation – Colour Constancy

• How can the blue/black dress phenomenon be explained?

 

The blue/black dress problem can be explained with a phenomenon called colour constancy, which is the way that our brains interpret colours.

What you see in the picture depends on your individual perception and where you see it:

  • Shadows are interpreted differently by our visual system
  • When the shadows are removed: the colour is perceived differently

Why do we see optical illusions?

  • The brain has developed a "rulebook" of how objects should look from past experiences
  • When interpreting new information, our brain relies on previous knowledge
  • the brain takes a "shortcut" to focus on important aspects
  • optical illusions fool our brains by taking advantage of these shortcuts

 

Categorization starts very early in infancy

  • 9 months:
  • 3-4 months:

  • 9-month-old infants were able to pass rapid categorization of human and ape faces (Peykarjou et al., 2017)

  • Perceptual categorization of cat and dog silhouettes by 3- to 4-month-old infants (Quinn, Eimas & Tarr, 2001)

Problems with Computational Visual Perception

  • Generative Adversarial Networks (GANs)

  • Adversarial images

  • Generative Adversarial Networks (GANs) revealed a new challenge for image classification: adversarial images
  • Adversarial images are images whose class category looks obvious to a human but causes massive failures in a deep network
  • With only a (seemingly) minor distortion, a deep network's classification of an image can go from a panda to a gibbon (see the sketch below)
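A minimal sketch of one standard way such adversarial images are produced, the Fast Gradient Sign Method (FGSM): nudge every pixel a tiny step in the direction that increases the classifier's loss. The model and data below are toy stand-ins, not the ImageNet panda/gibbon example.

```python
# Sketch of the Fast Gradient Sign Method (FGSM) on a toy classifier.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, image: torch.Tensor, label: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Return a copy of `image` perturbed in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # A tiny step along the sign of the gradient: barely visible, but can flip the predicted class.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy demonstration: a random linear classifier over 8x8 "images" with 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
image = torch.rand(1, 1, 8, 8)
label = torch.tensor([3])
adv = fgsm_attack(model, image, label, epsilon=0.05)
print((adv - image).abs().max())  # the perturbation stays tiny (at most epsilon)
```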

Problems with Computational Visual Perception

  • A biological system saves a lot of computation through selective attention and an opportunistic sampling of visual patterns

  • Instead of a serial image-processing pipeline, most biological vision systems involve a tight feedback loop in which orienting and tuning of the visual sensor plays an essential role

  • Errors can be dangerous in real-world applications, for example autonomous driving

Ethical Problems with Computational Visual Perception?

• It is necessary to highlight the fundamentally flawed ways in which ImageNet classifies people into "problematic" and "offensive" categories.

• It is crucial to assess the fallibility of AI systems and prevalence of machine learning bias

Example of flawed classification:

  • Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 71% of cases for women

  • Human judges achieved much lower accuracy: 61% for men and 54% for women

  • The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person.

  • Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style).