
Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition

by Wendy Hui Kyong Chun

Members: 19 | Reviews: None | Popularity: 1,152,151 | Average rating: None | Conversations: None
How big data and machine learning encode discrimination and create agitated clusters of comforting rage. In Discriminating Data, Wendy Hui Kyong Chun reveals how polarization is a goal, not an error, within big data and machine learning. These methods, she argues, encode segregation, eugenics, and identity politics through their default assumptions and conditions. Correlation, which grounds big data's predictive potential, stems from twentieth-century eugenic attempts to "breed" a better future. Recommender systems foster angry clusters of sameness through homophily. Users are "trained" to become authentically predictable via a politics and technology of recognition. Machine learning and data analytics thus seek to disrupt the future by making disruption impossible.

Chun, who has a background in systems design engineering as well as media studies and cultural theory, explains that although machine learning algorithms may not officially include race as a category, they embed whiteness as a default. Facial recognition technology, for example, relies on the faces of Hollywood celebrities and university undergraduates, groups not famous for their diversity. Homophily emerged as a concept to describe white U.S. residents' attitudes toward living in biracial yet segregated public housing. Predictive policing technology deploys models trained on studies of predominantly underserved neighborhoods. Trained on selected and often discriminatory or dirty data, these algorithms are validated only if they mirror this data.

How can we release ourselves from the vice-like grip of discriminatory data? Chun calls for alternative algorithms, defaults, and interdisciplinary coalitions in order to desegregate networks and foster a more democratic big data.
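The homophily claim in the description is easy to see in miniature. Below is a small, hypothetical Python sketch — our illustration, not anything from the book — of a toy "recommender" that repeatedly pairs each user with their most similar peer and nudges the user's tastes toward that peer. Every name and parameter here (N_USERS, BLEND, and so on) is made up for the demonstration; the point is only that similarity-driven recommendation, iterated, homogenizes tastes into clusters of sameness.

# Toy sketch (hypothetical, not from the book): homophily-driven
# recommendation homogenizes tastes. Each round, every user is matched
# to their most similar peer and pulled toward that peer's tastes.
import random

random.seed(0)

N_USERS = 40   # hypothetical population size
N_TOPICS = 5   # dimensions of each user's taste vector
ROUNDS = 30    # how many recommendation rounds to simulate
BLEND = 0.2    # how strongly a recommendation pulls a user toward a peer


def similarity(a, b):
    """Cosine similarity between two taste vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)


def avg_pairwise_similarity(tastes):
    """Mean cosine similarity over all user pairs."""
    n = len(tastes)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(similarity(tastes[i], tastes[j]) for i, j in pairs) / len(pairs)


# Random initial tastes over a handful of topics.
tastes = [[random.random() for _ in range(N_TOPICS)] for _ in range(N_USERS)]

print(f"before: avg pairwise similarity = {avg_pairwise_similarity(tastes):.3f}")

for _ in range(ROUNDS):
    for i in range(N_USERS):
        # Homophily step: find the most similar other user...
        peer = max((j for j in range(N_USERS) if j != i),
                   key=lambda j: similarity(tastes[i], tastes[j]))
        # ...and pull user i's tastes toward that peer's.
        tastes[i] = [(1 - BLEND) * mine + BLEND * theirs
                     for mine, theirs in zip(tastes[i], tastes[peer])]

print(f"after:  avg pairwise similarity = {avg_pairwise_similarity(tastes):.3f}")

# Count effectively identical taste profiles after convergence.
distinct = {tuple(round(x, 2) for x in t) for t in tastes}
print(f"distinct taste profiles after {ROUNDS} rounds: {len(distinct)} of {N_USERS}")

Running this shows average pairwise similarity rising and the forty initial taste profiles collapsing into a handful of near-identical ones, a toy version of the "clusters of sameness" the description refers to.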

No reviews

References to this work on external resources:

English Wikipedia: None


No library descriptions found.

