The Ultimate Process/Outcome Thinking Test

I’m constantly on the lookout for good examples of the divide between those who use process thinking and those who use outcome thinking. The article below isn’t about a new concept, but it struck me as one of the purest examples of this dichotomy.

Take a look and tell me what you think of this article:

https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/

If you think it might be a legitimate concern and future AI should be designed to negate this effect, you’re a dirty outcome thinker.

If you think machine learning is pure and any bias shown is just a result of real-world outcomes, you’re still a dirty outcome thinker.

Here is why: the mechanism of bias in machine learning is never revealed. There are two basic ways a machine can learn. Either (A) the system is learning from raw data and making predictions about outcomes on its own, or (B) the system is learning from a human counterpart and copying their performance.

Let’s take granting a loan, for example.

A is troubling because there’s a possibility the system is making accurate predictions about real outcomes. It’s possible that, on average, people of a certain race, background, or even eye color have a higher default rate. The system wouldn’t know why that is; it would just know the smart money isn’t on those people.

B is just downright dumb. A system that is taught not from real-world data but from the history of human decisions will never learn anything humans don’t already know. It will just learn to be very good at playing human. Machine learning scientists usually avoid this approach for exactly that reason, so the bias is usually not derived from human decisions but rather from historical data.
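To make the distinction concrete, here is a minimal sketch of the two training targets using made-up numbers and hypothetical features (income and eye color); the NumPy and scikit-learn usage is illustrative only, not anything from the article. In (A) the label is what actually happened; in (B) the label is what a human decided in the past.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical applicant features: income (in thousands) plus an irrelevant
# attribute (eye color, encoded 0/1) -- purely illustrative.
income = rng.normal(50, 15, n)
eye_color = rng.integers(0, 2, n)
X = np.column_stack([income, eye_color])

# (A) Label = what actually happened: lower income -> higher chance of default.
p_default = 1 / (1 + np.exp((income - 45) / 10))
defaulted = (rng.random(n) < p_default).astype(int)

# (B) Label = what a human loan officer decided in the past,
#     habits and prejudices included.
human_approved = ((income > 40) & (eye_color == 0)).astype(int)

model_a = LogisticRegression().fit(X, defaulted)       # learns from real outcomes
model_b = LogisticRegression().fit(X, human_approved)  # learns to imitate the human
```

Model B can never do better than the human it imitates; model A at least looks at reality, and that is exactly the case the rest of this post worries about.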

A is still dumb, because it’s looking at historical data and hoping it will be able to infer future outcomes. It’s not concerning itself with why a trend exists or even whether the information being processed is relevant. Machine learning can come up with some crazy accurate correlations in historical data that have zero predictive power. Understanding a process, a reason why, is necessary to predict the future. That’s why machine learning alone is not the solution to better loan application processing.
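Here is a toy illustration of that point: screen enough pure-noise features against a small set of historical outcomes and one of them will look impressively correlated, yet it says nothing about the future. The data is synthetic and the numbers are arbitrary; this is a sketch of the data-dredging effect, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hist, n_future, n_candidates = 50, 50, 1_000

# Historical and future default outcomes, generated independently at random.
outcomes_hist = rng.integers(0, 2, n_hist)
outcomes_future = rng.integers(0, 2, n_future)

# 1,000 candidate features that are nothing but noise.
features_hist = rng.normal(size=(n_candidates, n_hist))
features_future = rng.normal(size=(n_candidates, n_future))

# Pick whichever feature best "explains" the historical outcomes.
corrs = [abs(np.corrcoef(f, outcomes_hist)[0, 1]) for f in features_hist]
best = int(np.argmax(corrs))

print("historical correlation of the winning feature:", round(corrs[best], 2))
print("same feature against future outcomes:",
      round(abs(np.corrcoef(features_future[best], outcomes_future)[0, 1]), 2))
```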

Black people being turned down for loans is not a symptom of a racially biased system. It’s the symptom of a system designed for the sole purpose of turning people down. The computer’s job is to find ways to discriminate. The idea that you might ask a computer to select half of a group of applicants to reject, and then become confused when the selected group shows similarities in at least some respects, is appallingly stupid.
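A tiny sketch of that last point, with an invented group attribute and made-up income figures (NumPy assumed): score applicants on income alone, reject the bottom half, and the rejected group will still be skewed on any attribute that happens to correlate with income in the data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Invented attribute that happens to correlate with income in this data
# (a stand-in for race, neighborhood, background, and so on).
group = rng.integers(0, 2, n)
income = rng.normal(45 + 10 * group, 8)   # income in thousands, made-up numbers

# The "model" scores purely on income and is told to reject half the applicants.
rejected = income < np.median(income)

print("share of group 1 among all applicants:", round(group.mean(), 2))
print("share of group 1 among the rejected:  ", round(group[rejected].mean(), 2))
```

The scoring rule never looked at the attribute, but the rejected half still shares it, because the only job the system was given was to split the group.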

Here is why the talk of bias in machine learning is a red flag for process thinkers: it’s rejecting the outcome of a system that does nothing but analyze outcomes.

Machine-learning-based selection processes like loan approvals and parole appeals are just another facet of the paper man problem I wrote about previously. A machine is only able to make selections based on digitized information. All ability to use social persuasion, character references, or intellectual debate to affect the outcome of a decision is gone. It further strengthens the selection of men and women who lack real-world skills but display exemplary resumes.