
What is hypernudge?

Big Data analytic nudges (‘hypernudge’):

• Networked, continuously updated, dynamic, and pervasive (Yeung, 2017)
    • Execution continuously updated and refined within a networked environment
       -> personalized choice environment
    • Feedback loop enabling dynamic adjustment of an individual’s choice architecture in real time
    • Leveraging algorithmically determined correlations between data items not observable through human cognition alone
    • Dynamically configuring users’ choice environment: subtle, unobtrusive, yet very powerful techniques to influence decisions. Organizations can develop personalized strategies for shaping individuals’ decisions at large scale.

 

ChatGPT:

A technique that uses Big Data and algorithms to subtly influence individual decisions.

  • Personalized and continuously updated

  • Real-time feedback loop

  • Correlations invisible to humans

  • Discreet but powerful control

Allows organizations to shape choices at large scale without coercion.

Example: platforms such as Amazon or Netflix that dynamically adjust their recommendations.
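
A minimal sketch (hypothetical, not from the course material) of the feedback loop described in this card: each observed interaction updates algorithmically inferred preference scores, and the choice environment shown to the user is re-ranked in real time.

from collections import defaultdict

# Hypothetical hypernudge-style loop: interactions continuously update inferred
# preferences, and the displayed choice set is dynamically re-ranked.
class PersonalizedChoiceEnvironment:
    def __init__(self, items):
        self.items = items
        self.scores = defaultdict(float)  # algorithmically inferred preferences

    def observe(self, item, engaged, weight=1.0):
        # Feedback loop: every interaction adjusts the user's choice architecture.
        self.scores[item] += weight if engaged else -weight

    def recommend(self, k=3):
        # Personalized, continuously updated ranking presented to the user.
        return sorted(self.items, key=lambda i: self.scores[i], reverse=True)[:k]

env = PersonalizedChoiceEnvironment(["item_a", "item_b", "item_c", "item_d"])
env.observe("item_c", engaged=True)
env.observe("item_a", engaged=False)
print(env.recommend())  # the ranking shifts after each observed interaction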

Give 4 critiques of nudging

• Illegitimate motives (active manipulation, as in the Cambridge Analytica scandal)
• Exploiting cognitive weaknesses to provoke desired behaviors entails a form of deception
• Curtailing employees’ ability to act voluntarily and reflect on their own choices
• Lack of transparency: opaque algorithms shielded from external scrutiny, leading to concerns of abusive use

Give some ways of mitigating adverse effects

• Rights to informational privacy
• Right to opt out
• Sharing information about data practices
• Algorithmic transparency
• Reward systems that benefit employees
• Algorithmists to monitor nudging practices
• Designing for informing and ambiguity


Give some ways of mitigating adverse effects in design

• Designing for informing and ambiguity: probabilistic context-based nudging rather than deterministic and decontextualized nudging
• Many algorithmic systems are designed to be over-confident: the probabilistic calculations underlying a data-driven insight are hidden from users and the results are presented as facts
• Provide reasoning and context for the generated insights

-> Facilitate reflection, propose alternative options, act voluntarily, develop practical wisdom

 

Ex: information about confidence levels could be included in the design 
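
A minimal, hypothetical sketch of what “designing for informing and ambiguity” could look like in code: the insight is shown together with its probability, a confidence label, and an alternative option, rather than as a bare fact.

# Hypothetical sketch: present a data-driven insight with its confidence level
# and an alternative option, instead of presenting it as a deterministic fact.
def present_insight(candidates):
    # candidates: (option, probability) pairs from some predictive model (assumed)
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best, p_best = ranked[0]
    confidence = "high" if p_best >= 0.8 else "moderate" if p_best >= 0.6 else "low"
    alternative = ranked[1][0] if len(ranked) > 1 else "none"
    return (f"Suggested option: {best} (confidence: {confidence}, p={p_best:.2f}). "
            f"Alternative worth considering: {alternative}.")

print(present_insight([("candidate A", 0.64), ("candidate B", 0.29), ("candidate C", 0.07)]))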

In traditional organizations, what does algorithmic management reshape?

1. Power dynamics between workers and managers
2. Professional identities
3. Roles and competencies
4. Knowledge and information

•  Algorithmic management is a sociotechnical process emerging from the continuous interaction of organizational members and algorithmic systems 
•  Deeply embedded in pre-existing social, technical, and organizational structures

For Power dynamics between workers and managers, what increases and what decreases?

Increased power to managers
• Overcoming cognitive limitations in dealing with information overload
• Streamlining work processes
• New opportunities to exercise control over workers

Decreased power to managers
• No managerial agency in full delegation to algorithms (managerial intervention only in design and development of algorithms)
• Lost opportunity to build tacit knowledge
• “Algorithmic leadership”: automation of leadership activities of managers such as motivating, supporting, and transforming workers

For Professional identities, what are the changes? 

• Algorithmic management shapes how managers perceive their professional identity
• Managers want to deepen their autonomy, set professional boundary against new practices and other groups, and enhance their status by increasing their specialization
• Professional identities affect the way managers integrate knowledge claims generated by algorithms

 -> Managers will support or reject algorithmically-generated decisions depending on whether they view them as enhancing or undermining their professional identities and status

For Roles and competencies, what are the changes? 

• Shifting roles: workers and managers not passive recipients of algorithmic results as they align algorithmic systems to their needs and interests
• Demand for algorithmic competencies: skills supporting workers in developing symbiotic relationships with algorithms
• Risk of upskilling only a fragment of organizational members (deskilling others)
• Emerging role of “algorithmists”: data scientists, human translators, mediators, and agents of algorithmic logic (Gal et al., 2020)
• Algorithmic competencies limited by resistance against algorithms: algorithmic aversion and cognitive complacency

For Knowledge and information, what are the changes, and what are some solutions?

• Technical opacity: “black box” character of AI systems (design principles and operational complexity)
• Organizational opacity: lack of information due to strategic interests and intellectual property (professional gatekeeping and externalization of algorithmic development to third parties)
• Solutions:
    • Explainable AI: making complex models more understandable by humans through various technical methods (labor-specific and organization-specific policies); see the sketch after this list
    • Algorithmic audit: idea that algorithms may produce a record that can be read, understood, and altered in future iterations
    • Disclosing more information on AI systems: human involvement, data, models, and algorithmic inferences
    • Stakeholder involvement in AI design: communication and deliberation 
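
As one hypothetical illustration of the “explainable AI” solution mentioned above (not from the course material): with an interpretable model, the per-feature contributions behind an algorithmic inference can be disclosed alongside the prediction. The feature names and data are synthetic.

# Hypothetical explainable-AI sketch: expose which features drove a prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # e.g. workload, tenure, error rate (synthetic)
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # synthetic "flagged by the algorithm" label

model = LogisticRegression().fit(X, y)

feature_names = ["workload", "tenure", "error_rate"]
x_new = np.array([[1.2, -0.3, 0.8]])
contributions = model.coef_[0] * x_new[0]        # per-feature contribution to the decision score
print("Prediction:", model.predict(x_new)[0])
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")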

What is innovation?

Innovation is the development and implementation of new ideas by people who over time engage in transactions with others within an institutional order

Invention is the conversion of ...  into ....

Invention is the conversion of cash into ideas.

Innovation is the conversion of ...  into ....

Innovation is the conversion of ideas into cash.

What are the 3 elements of innovation?

Novel

Useful

Implemented

What is radical innovation?

Create new knowledge and provide significant technological breakthroughs in products and processes

What is incremental innovation?

Build upon existing knowledge base and provide small improvements in current product lines and processes

In the table “Types of AI contribution in innovation”, what are the barriers to innovation (Y-axis) and the innovation process phases (X-axis)?

image

What are the 4 phases of the innovation process in which AI can contribute? Describe them with examples.

• (1) Opportunity identification and idea generation – e.g. identifying user needs, scouting promising technologies, generating ideas;

• (2) Idea evaluation and selection – e.g. idea assessment, evaluation;

• (3) Concept and solution development – e.g. prototyping, concept testing; and

• (4) Commercialization / launch phase – e.g. marketing, sales, pricing.

Generative AI as creator/assistant OR a threat?

How to use it?

• Use generative AI to free up time for innovation

• Use generative AI to create.

• An enemy for innovation and creativity?

 

Text summarization

Exploring solution spaces with AI (idea generation)

What is creativity for innovation?

Creativity is the source of innovation

What is innovation? (Linked with the previous question)

Innovation is the process of transforming creative ideas to create value

What are the three pillars of innovation?

• Application

• Value creation

• Robustness

What are the three pillars of creativity?

• Originality

• Imagination

• Flexibility

Give the impact schema of individual/team creativity and the work environment, and describe it

image

Why Do We Need To Consider AI Risks?

AI is increasingly being used for decision-making in sensitive industries, e.g. banking

Why Can AI Be Boundless?

1. Contemporary AI (e.g. LLMs) has access to the internet
2. Self-replication capability: we have taught AI how to code
3. AI knows about humans: we have taught AI about human behaviours

What is the Decomposition of Risk?

image

For the risk decomposition, give a mitigation for each component

Vulnerability -> Robustness

Hazard exposure -> Monitoring

Hazard -> Alignment 

What are the 2 different types of attacks?

Adversarial Attacks and Privacy Attacks

What are adversarial attacks? Give an example.

An adversarial attack is designed to fool an ML model into making mispredictions by injecting deceitful data meant to deceive classifiers. This type of corrupted input is called an adversarial example.

An adversarial example is a corrupted instance characterized by a perturbation of small magnitude, virtually imperceptible, which causes the ML model to make a mistake. To human eyes, adversarial examples look identical to the original. To machines, however, they work almost like an optical illusion, causing them to misclassify data and make false predictions.
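
A minimal sketch of one standard way such perturbations are crafted, the fast gradient sign method (FGSM). It assumes a PyTorch classifier called model and a correctly labelled input batch; the names are illustrative, not from the course material.

# Hypothetical FGSM sketch: add a small, nearly imperceptible perturbation
# that pushes the classifier toward a wrong prediction.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y_true, epsilon=0.03):
    # x: input batch (e.g. images scaled to [0, 1]), y_true: correct labels
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_true)
    loss.backward()
    # Untargeted step: move in the direction that increases the loss on the true label.
    perturbation = epsilon * x_adv.grad.sign()
    return (x_adv + perturbation).clamp(0, 1).detach()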

What are privacy attacks?

In privacy-related attacks, the goal of the attacker is to gain knowledge that was not intended to be shared. Such knowledge can be about the training data, about the model itself, or even about properties of the data.

What are Targeted vs. Untargeted Attacks?

Targeted Attack: aims to misclassify the input (e.g., an image) to a specific label (e.g. panda to gibbon)

Untargeted Attack: aims to misclassify the input to any wrong label (e.g. panda to any other animal)
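
Continuing the hypothetical FGSM sketch above, the two attack types differ only in the sign of the gradient step: an untargeted attack increases the loss on the true label, while a targeted attack decreases the loss on the attacker’s chosen label.

# Hypothetical sketch: targeted vs. untargeted perturbation directions (FGSM-style).
import torch
import torch.nn.functional as F

def perturb(model, x, label, epsilon=0.03, targeted=False):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    if targeted:
        step = -epsilon * x_adv.grad.sign()  # label = target class (e.g. "gibbon"): reduce its loss
    else:
        step = epsilon * x_adv.grad.sign()   # label = true class (e.g. "panda"): increase its loss
    return (x_adv + step).clamp(0, 1).detach()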

How Can We Avoid Adversarial Examples?

Many works have tried, but follow-up works showed that they all fail


The main successful defenses in practice now incorporate adversarial examples during training
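
A minimal, hypothetical sketch of that defence (adversarial training): an FGSM-style adversarial version of each batch is generated on the fly and included in the training loss. The model and optimizer are assumed PyTorch objects, not from the course material.

# Hypothetical adversarial training step: train on both clean and adversarial batches.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # Generate an FGSM-style adversarial example for the current batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    # Optimize on the clean and adversarial losses together.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()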

What are the 3 privacy matters? Describe them.

image

Give 3 Common Privacy Attacks

Model Inversion

Data Extraction 

Membership Inference

What are Model Inversion & Data Extraction?

image

Give a concrete example of model inversion

image

Large neural networks memorize some of their training data samples. Is it a problem if we don’t allow this?

Yes: preventing memorization hurts accuracy

Give an example of Data Extraction

image

What is Membership Inference?

Goal: Attacker uses the model to infer if a particular data point is present.
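
A minimal, hypothetical sketch of one simple membership-inference heuristic: because models tend to fit their training data better, a low loss on the candidate point suggests it was a training member. The threshold value is illustrative, and model is an assumed PyTorch classifier.

# Hypothetical loss-threshold membership inference: low loss suggests "seen in training".
import torch
import torch.nn.functional as F

def infer_membership(model, x, y, threshold=0.5):
    with torch.no_grad():
        loss = F.cross_entropy(model(x), y)
    return loss.item() < threshold  # True = the point is likely a training member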

How to defend against Membership Inference?

Defend using differential privacy!
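
A minimal, hypothetical sketch of the core of a DP-SGD-style defence (simplified, without privacy accounting): each example’s gradient is clipped to bound its individual influence, and Gaussian noise is added before the parameter update. The model and data are assumed PyTorch objects, not from the course material.

# Hypothetical simplified DP-SGD step: clip per-example gradients and add Gaussian noise.
import torch
import torch.nn.functional as F

def dp_sgd_step(model, x_batch, y_batch, lr=0.1, clip_norm=1.0, noise_std=1.0):
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(x_batch, y_batch):
        model.zero_grad()
        F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        # Clip each example's gradient so no single training point dominates.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            # Gaussian noise masks the contribution of any individual example.
            noisy = (s + noise_std * clip_norm * torch.randn_like(s)) / len(x_batch)
            p.add_(-lr * noisy)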