I like to toot my own horn as much as the next guy, but I have to say I really called it on this trend of AI denial. Just keep a lookout for this one as it ramps up.
Yesterday, insiders from numerous think tanks and advanced computing companies came together to announce to the world that everything they’ve heard about Artificial Intelligence (or “AI”) over the last few years has been false. “There are still no computers that can think unique thoughts on their own. It’s all techno mumbo-jumbo and marketing speak to convince investors to invest in one company or the next. In fact, you may have been part of the entire effort to make AI seem more real,” said the spokesman for the group, Nerdy McSoontobejobless. “Chances are that you’re in on the act, but just don’t know it. If you’ve ever been asked to prove that you’re ‘not a robot’ by selecting squares that include street signs, you’re basically spoon-feeding an algorithm what a sign looks like so that standard text-recognition software can figure out what the sign says.”
Okay, I understand that this user’s comments were partly satirical, but oddly enough they express a notion that is growing in popularity.
If you happen upon these and other similar headlines enough times in a short period, you might think that something has changed in our outlook on technology. “Oh, sorry guys. We were pretty sure we could build something amazing, but we just finished adding up the numbers and, turns out, it’s impossible to do anything more than what we’re currently doing with computers. Yep, that’s right. Seems we’ve hit a ceiling.”
This is of course in stark contrast to other headlines like:
It gets to a place where the average Joe might wonder what the real outcome of all this technology is going to be. On one side we’ve got people saying that we can just fix the economy with a few tariffs, and on the other you have Stephen Hawking warning us to arm up for the impending eradication of life at the hands of computers. We seem to be lacking a good way of judging the reality of AI and automation.
So, let’s break out my favorite analysis tool: process vs. outcome. We are, in a sense, concerned with the outcome of AI; more exactly, we’re concerned about the future outcome. We want to predict where it will take us. To make that prediction we have two ways of thinking.
Outcome thinking attempts to gather information about our current state in the present, maybe some previous historical states, and extrapolate from that information. It’s pretty weak at this sort of thing because it relies on accurately measuring parts of the real world we don’t really have access to. We personally don’t know how many jobs have been taken over by computers. We can try to estimate based on what articles in the news say, but then we’re back to the issue of sifting through contradictory headlines.
Process thinking requires that we simply attempt to answer a series of questions. Have computers ever proven capable of performing the same task as a human? Does the capacity of a computer to perform a task increase or decrease with the amount of computing power available to it? Does computing power per dollar increase or decrease over time? Is leveraging automation profitable? Is there something functionally non-reproducible about the human mind?
These aren’t meant to be leading questions. The answers to them do matter in how we draw our conclusions. What is important here is that the questions do not attempt to measure actual events or information, but rather the mechanism behind the trends. We don’t need to measure progress directly; we just need to know that it is inevitable through natural market forces.