No, the importance is massive, and you understate it. However much we may want to just take models on faith, the regulatory ecosystem worldwide (where it applies, especially in the financial realm) demands that we explain why models made certain decisions on certain data. Without this ability, models will not be allowed to drive innovation or decisions in many areas of life, from financial (credit and risk) to medical (recommendations for treatment) to legal (the best contract approach or the best defense strategy for a lawsuit).

The fact that humans make mistakes and cannot explain their decisions is, in fact, one of the very reasons we want better models. We hope they will do better than most people and help create a better world. And their explanations will hopefully provide insight into how we as people make good (and bad) decisions.

And sure, we can change the laws over time, but having advanced models that let humans understand their decisions, and even provide diagnostics for improving the models, will be transformative. Until then, we will see massive impact in some areas of our lives and frustrating holdbacks in others, driven either by the distraction of building for regulatory constraints or by the choice not to build in regulated areas at all.



We know why these models make decisions on data. They're optimizing for lower error rates.

The black box unveiled for a convolutional neural network is this: over the course of several thousand rounds of taking the dot product of the RGB values of a given training datum against a weight vector, this network has determined that a weight vector containing these values optimally reduces the error produced by "squashing" the output matrices of said dot products through a softmax function when the "squashed" value is compared against the predetermined true value.

If you would also like to correctly predict whether a given input belongs to the class for which this model was optimized, we suggest your weight vectors also contain these values, as this will reduce the number of false positives and false negatives your predictions produce.
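
Roughly what that optimization loop looks like, as a minimal sketch in NumPy; the data, shapes, and learning rate here are made up purely for illustration, not taken from any real model:

    import numpy as np

    # Toy illustration of the loop described above: take the dot product of
    # the input values with a weight vector, "squash" it with softmax, compare
    # against the true label, and nudge the weights toward lower error.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))             # 100 samples, 3 features (e.g. RGB)
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary labels

    W = np.zeros((3, 2))                      # one weight vector per class
    b = np.zeros(2)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    for step in range(1000):                  # "several thousand rounds" in miniature
        probs = softmax(X @ W + b)            # dot products, then squashing
        onehot = np.eye(2)[y]
        grad_W = X.T @ (probs - onehot) / len(X)  # gradient of the cross-entropy error
        W -= 0.1 * grad_W                     # move weights toward lower error
        b -= 0.1 * (probs - onehot).mean(axis=0)

    print("training error:", (softmax(X @ W + b).argmax(axis=1) != y).mean())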


No. If the machine is telling me what medicine to administer to the patient, then I want to know exactly which data points the machine thought were relevant (i.e., the relevant symptoms). Furthermore, I want to know what about those symptoms indicated a particular diagnosis to the machine.
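
One common way to get at "which inputs mattered" is permutation importance: scramble one feature at a time and see how much the model's accuracy drops. The sketch below is hypothetical; the "symptom" features and the toy model are invented stand-ins, not anything from a real medical system:

    import numpy as np

    # Hypothetical sketch of permutation importance: shuffle one feature at a
    # time and measure how much accuracy drops. Features that matter to the
    # model produce a large drop when scrambled.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))                      # 4 made-up symptom features
    y = (2 * X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

    weights = np.array([2.0, 0.0, 0.5, 0.0])           # a toy "trained" model
    predict = lambda data: (data @ weights > 0).astype(int)

    baseline = (predict(X) == y).mean()
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])           # destroy feature j's information
        drop = baseline - (predict(Xp) == y).mean()
        print(f"symptom feature {j}: accuracy drop {drop:.3f}")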


Criticism of artificial intelligence making decisions in lieu of human judgement often pits the process against an ideal of human judgement that doesn't exist and likely could not exist.

For instance, some research suggests that judges are a lot less likely to grant parole right before lunch. The theory is that denying parole is the default, low-effort decision a judge can make, which makes sense. [0]

The fact is that many decisions are made by humans with, at best, slight personal bias and, at worst, malice.

With a systematic AI approach you can be somewhat confident that decisions are based on a wide range of experience, optimizing for an agreed-upon objective (e.g., likelihood of reoffense). It's not perfect, but it's better, and it can be improved far more easily, without the need to vilify individuals.

[0] http://blogs.discovermagazine.com/notrocketscience/2011/04/1...



