

The Alignment Problem: Machine Learning and Human Values (original 2020; 2020 edition)

by Brian Christian (Author)

Members: 220 · Reviews: 3 · Popularity: 124,028 · Average rating: 4.25 · Mentions: 2
"A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--
Member: Wayfaring
Title: The Alignment Problem: Machine Learning and Human Values
Authors: Brian Christian (Author)
Info: W. W. Norton & Company (2020), Edition: 1, 496 pages
Collections: Your library
Rating: ****
Tags: None

Work information

The Alignment Problem: Machine Learning and Human Values by Brian Christian (2020)


There are currently no discussions about this book.

» See also the 2 mentions

3 of 3
There was a lump in my throat when DeepMind's AlphaGo crushed Lee Sedol at Go, the oldest (3,000 years old) and arguably most complex strategic board game, because with that win AI didn't just defeat the greatest player ever but effectively wiped out any future association of Go and humans. No human will ever beat AI at Go again, period; that fortress is breached! We have essentially been relegated to a mere factoid in the timeline of this planet.

While capitalism will ensure the inevitability of humans being pushed "out of the loop" in every aspect, the question is not if but when. Brian Christian's The Alignment Problem educates the reader about the real pitfalls of depending on algorithms and the inherent drawbacks of machine learning. In my opinion, Christian dwells much deeper on the alignment problem at hand than Nick Bostrom's Superintelligence; Bostrom set the stage for AI safety and was labelled an alarmist. Well, not anymore.

From dopamine-exploiting social media algorithms to parole sentences to mortgage application approvals, these highly pervasive machine learning algorithms now control various aspects of human life, while Congress grapples with legislation and red tape.

The book gives an overarching view of how ML algorithms came about, organized around "pillars" such as curiosity, imitation, reinforcement, model bias, and bad data samples, and of why it is crucial to align AI goals with human values.

And, as is often the case, the problems are more philosophical in nature than anything else, which also highlights the importance of psychology, social anthropology, neurophysiology, and psychoanalysis playing a quintessential part in the future development of this nascent field. The latter part of the book deals with possibly the tougher questions which AI poses; happy to see the Effective Altruism movement founder Will MacAskill get a page in there too.
  Vik.Ram | Aug 12, 2022 |
An impressive, conversation-based analysis of how AI systems developed through processes of machine learning (ML) might be constrained to be both safe and ethical. I had little idea of how rich and massive the research on this has been. In nine chapters with carefully chosen one-word headings (Representation, Fairness, Transparency, Reinforcement, Shaping, Curiosity, Imitation, Inference, and Uncertainty), the author describes a sequence of diverse and increasingly sophisticated ML concepts, culminating in what is called Cooperative Inverse Reinforcement Learning (CIRL). Whether AI will ever stop being part of what I regard as the wrongness of modern technology, I don't know, but at least there are people in the field who have their hearts in the right place.
  fpagan | Mar 21, 2022 |
There is a great book trapped inside this good book, waiting for a skillful editor to carve it out. The author did vast research in multiple domains, and it seems like he could neither build a cohesive narrative connecting all of it nor leave anything out.

This book is probably the best intro to the machine learning space for a non-engineer that I've read. It presents the field's history, challenges, what can be done, and what can't be done (yet). It's both accessible and substantive, presenting complex ideas in a digestible form without dumbing them down. If you want to spark an interest in ML in anyone who hasn't been paying attention to this field, give them this book. It provides a wide background connecting ML to neuroscience, cognitive science, psychology, ethics, and behavioral economics that will blow their mind.

It's also very detailed, screaming at the reader, "I did the research, I went where no one else dared to go!" It will not only present you with an intriguing ML concept but also trace its roots to a 19th-century farming problem or biology breakthrough, present all the scientists contributing to the research, explain how they met and got along, cite the author's interviews with some of them, and describe their lives after they published their masterpieces, including completely unrelated information about their substance abuse and the dark circumstances of their premature deaths. It's written quite well, so there might be an audience who enjoys this, but sadly I'm not part of it.

If this book were structured to address the subject of the alignment problem directly, it would be at least three times shorter. That doesn't mean the other two-thirds are bad: most of it is informative, some of it is entertaining, and a lot seems like ML material the author found interesting and simply added to the book without any specific connection to its premise. I really liked the first few chapters, where machine learning algorithms are presented as the first viable benchmark for the human thinking process and the mental models we build. Spoiler alert: it very clearly exposes our flaws, biases, and the lies we tell ourselves (which are further embedded in the ML models we create and the technology that uses them).

Overall, I enjoyed most of this book. I just feel a bit cheated by its title and premise, which advertise a different kind of book. This is a Machine Learning omnibus, presenting the most interesting scientific concepts of the field and the scientists behind them. If this is what you expect and need, you won't be disappointed!
  sperzdechly | Mar 18, 2021 |
The Alignment Problem does an outstanding job of explaining insights and progress from recent technical AI/ML literature for a general audience. For risk analysts, it provides both a fascinating exploration of foundational issues about how data analysis and algorithms can best be used to serve human needs and goals and also a perceptive examination of how they can fail to do so.
Added by Edward | Risk Analysis, Louis Anthony Cox Jr. (pay site) (Mar 3, 2023)
 

References to this work on external resources: English Wikipedia.


No library descriptions found.



Rating

Average: 4.25

5 stars: 9
4.5 stars: 1
4 stars: 12
3.5 stars: 2
3 stars: 2
