Why AI is a Lie & Everyone Knows it

Peter Salinas
Towards Data Science
10 min read · Oct 29, 2019

Photo by Ben White on Unsplash

Time to lose some old friends and make some new ones!

First, take a moment to distill and define what intelligence is. OK. How many of you agree with that definition? There is probably a massive divide. In the “Academic” community I belong to, which not a lot of people like, we call this “Bias”. It's very simple: if something cannot be measured and understood “universally”, then it's “Theory” relative to whatever social universe you exist in.

Ok. What is “Artificial”? Well, this one is fairly simple. So, we at least know one part about AI is universally true: it's not real. I mean… Artificial. Zing.

We are hyped on something we can’t universally define that isn’t real. Sorta.

You're Intelligent, No Matter How You Look at It

Our brains consume patterns, and effectively, they consume math, from day one. Even “day one” is arguable today; sadly, the few groups that had some footing researching what “Day 1” looks like for babies got squashed pretty early, for a LOT of cultural reasons, and the “Big Business” of academia knows you can't sell well on culturally touchy topics of the time. What was learned, however, is that we can translate patterns effectively from very young. “Research” of a similar era demonstrated that “Classical Music makes your baby smarter”. Sorta. Contextually speaking.

What wasn't clarified is that it's less about the music and more about the consistency of patterns. And we use those patterns to trigger how we react today. There is a good chance that if you're very musically inclined, artistic, or use music in any way to trigger some process of thinking, you had very specific patterns of music exposed to you very young. In many ways, this is a topic of “AI”. It's about “Learning”.

Brains in infancy, Artificial or Organic, will grow relative to the patterns of data being digested. You have to have things train those brains, but you also have to have some kind of trajectory or target for where you want that training to go. There are lots of little details here that I can save for another post, and quite a bit of beautiful and terrifying math to it all that could make things get weird. Suffice it to say, all things we grow are in many ways an abstraction of ourselves. You can call it a generational curse, legacy, lineage, social learning, neural networks; choose your cultural bias. Math doesn't care.

We are all uniquely predictable

You ever watch enough of a certain kind of show that you sorta just know everything that is going to happen? Same music, same tones, same characters, same story arcs, but with a little twist? The show isn't “real”, but the people that created that show are. Indirectly, everyone is a people watcher, and we all adapt to the patterns we are exposed to, even if we cannot always articulate it this way.

Now the same “logic”, the same patterns that exist in watching CSI till it's not fun anymore, also exist in businesses. We tend to forget that these big companies are just massive structures of political powerhouses that make money and serve their own societies. Like our own governments and the people they serve, they are also predictable. And like some of our own politicians, they sorta don't always see or understand why people see them as “evil”.

The reality is, and I HATE saying this because I'm more “consumer” than “Entrepreneur, Developer, Engineer, AI, Creepy Data Weirdo”, that the topic of “Evil” is relative to the person acting on it. We see some of the most amazing villains in film this way too, where we understand so much that evil becomes more human, and at some point in the story everything changes. George Lucas factually had this powerful grasp of the patterns of human nature, but I think he sorta lost that connection when he started talking about the science of the Force. Woah, George, we didn't need any of that reality in our not-quite-real sci-fi flick.

Rise of the Machines

We have factually always built more and more machines over time to negate the use of physical or mental energy and make our lives easier. Even migrating into calculators and computers, it's less that someone could NOT do those things; it's that it would take a single person too much time, or require having their own brains trained in REALLY messed-up ways, or take even more time to properly break up the work so others could help them productively. This is what the “weirdos” did, those weirdos that would eventually change the world, or at least become your boss.

What a calculator does is factually what Machine Learning does in Data Science; it's simply a different abstraction and business use case. It's Math with more politics, more intuition, and things moving so fast you don't even know if Data Science is doing what it's supposed to be doing. That said, they are doing their best. The point here is that when we call something “AI”, what most of us mean is “Machine Driven Calculation”, and/or a place where decisions can be made in a digital environment.
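To make the calculator comparison concrete, here is a minimal sketch (plain Python, with made-up numbers) of a textbook “Machine Learning” task: fitting a line by least squares. Strip away the branding and it reduces to sums, products, and one division, exactly the arithmetic a calculator does.

```python
# Fitting a line y = a*x + b by least squares: "Machine Learning"
# reduced to calculator arithmetic. The data points are made up
# purely for illustration (roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope: a ratio of sums of products.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
# Intercept: one multiplication and one subtraction.
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # slope 1.96, intercept 0.14
```

No library, no “learning”, just arithmetic; the “model” is two numbers the machine calculated for us.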

In Data Science we are building “Data Science Platforms” and in AI we are building “AI platforms”. In my industry, the “Game Industry”, we are fighting it tooth and nail, insisting that AI can never replace creativity! Goodness, it is gonna hit my industry hard when people realize how close it is to leveling the playing field. Nvidia is getting close to ensuring that one. As with many things, these groups are solving the same issues, with the same pains, from different perspectives.

The fact of the matter is: if you make decisions, have access to data, and understand the patterns of various practices, you can either be empowered by AI or inevitably replaced by it. It started with factories, it's happening with logistics (both human and goods), and “creativity” and “business intuition” are the next topics to be aware of. It's well on its way now, even if you try to ignore it.

It’s Right Under Your Nose

Today AI is a topic owned primarily by “Computer Science”, and as engineers like to do, from Databases to AI, we consume ALL the things. Give us a bit of flexibility and we are more in line with Physicists than with practical developers. We can't even help it. We were programmed this way almost out of the womb. But many of us become cold weirdos for a reason, and when brilliant computer scientists start becoming philosophers, we are getting a bit too close to crazy. There should be some concern, and I'm walking that line too (send help!).

I recently had an exchange with a BRILLIANT and well-respected mind in AI leading a multimillion-dollar company in the efforts of AI. If you're reading this, it's maybe not you; I talk to a lot of you, and you already know I'm a chatterbox. Or it is you, and I'm trying to salvage our relationship (❤ U). But I asked about how they handled their Data Architecture and Database, and it was brushed off as if I were a lunatic. To be fair, that's a relatively true statement. Different article. But I was also well aware that because I don't present my “Academic Background”, my dialog will be subject to the bias of that individual's perspective.

The reason the topic is important is that training AI is about “Frequency” (the amount of data consumed) and “Context” (what information is being used to train the “AI”, or the person, toward an output). Inevitably, if you build with your own bias and rely on “PhDs”, you'll overlook the context of what you're doing and the research done decades prior, and end up doing almost literally the same things Academia has done before. The reality is that actual “AI” is in many ways quite possible today; in many ways, it was done during the first boom in the 80s. We just forgot. Academia makes big money forgetting old research and repackaging it.
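To illustrate “Frequency” and “Context” at their absolute simplest, here is a toy sketch (Python, with a made-up corpus): a system that “learns” by counting which word follows which, then predicts by context. Whatever it was fed most often is what it says back, which is the whole point about training data shaping the output.

```python
from collections import Counter, defaultdict

# Toy training corpus, made up for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": for each word (the Context), count which word
# followed it and how often (the Frequency).
following = defaultdict(Counter)
for context, nxt in zip(corpus, corpus[1:]):
    following[context][nxt] += 1

def predict(context):
    # The "AI": return the most frequently observed continuation.
    return following[context].most_common(1)[0][0]

print(predict("the"))  # "cat" -- seen twice after "the", vs once for "mat" or "fish"
```

Change the frequencies in the corpus and the prediction changes with them; the bias of the data is the behavior of the machine.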

You Are What you Eat

If you cannot define intelligence, and you have your own views and approach to building AI using machines and contextual data, then your output, the thing you create, will inevitably just be an extension of you. Just like your art is an extension of you, and your business or culture is the same. So if you go about training on massive simulation data and ignore foundational topics of how brains work, which we determined literally decades ago, you're gonna end up making something as weird as you are.

We do not train people by smashing all the things into them; there are divides that come from actually understanding people. Because our brains are quite literally pattern-consuming math machines, subject to limited capabilities (cognitive load), we will create exactly what we set out to when it comes to our machines. And this is exactly why AI today is not AI as you see it. More than likely, the more “Unicorn” an AI group is, the less likely it's doing AI. The logical irony being: Unicorns just aren't real.

AI platforms today are focused on “Centralizing Data” or “Automating Machines”, and both will call themselves AI or Data Science platforms. The amount of funding they get has to do with who they know, what success they had, and how philosophical a group can be about selling the fantasy of it. The catch is, you cannot scale this way, because for an AI platform to actually work, an existing organization needs to be willing to release control of pretty much all of its data. And if you do that, it's a breach of trust and security, people will lose jobs, and it's going to make a huge mess of the existing reporting structure.

Not Trusting AI is a Good Thing, Sorta

Today we have a lot of brilliant minds exploring AI for a variety of topics: Narrative, Business, Language, Bots, Logistics. If an Organic brain can determine a pattern and the term AI gets them overly excited, they are gonna get all the money for it. The sad fact is, it's characteristically the person who currently DOESN'T trust AI, or doesn't want to understand it, who is best suited to use it.

This happens because they have enough experience and exposure that their own minds have been trained with powerful “Intuition”. “Cognitively” speaking, their own brains have consumed enough historical data and seen enough patterns in current markets (and also dislike socializing enough) that their minds are as effective as Machines in many ways; they simply deal with their own overloads with a creativity that makes them harder for some to work with. This shows in Depression, Anxiety, Drinking, or whatever outlet an individual has to make up for validation or connection.

What those individuals do not realize is that the way AI thinks is effectively closer to the way they think. If those minds could find a way to understand AI, or if a Computer Scientist could bypass their own theories of “Compute” or “AI”, some really amazing things would happen. We aren't culturally there yet, but it should happen soon. We have big issues with separation when it comes to creativity, business, and academic research. We all effectively have similar goals; we are just REALLY bad at communicating them in our respective languages.

AI for ALL the things

AI in practice is actually quite simple, but the approach we take today is driven by “Societal Hubris”. It's very difficult to be challenged, and communication is sensitive. In practice, linking references, articles, and math on this post would spark a slew of debate and dialog to disprove decades of theories recreated and siloed. And telling someone who just raised money on a billion-dollar valuation that they aren't sneaky, that they are just repackaging the same “Machines” with different languages, and calling out how it won't work the way they sell it, also ain't the best way to make friends.

But here is the thing. If you dig deep enough and bypass your bias about which languages you use, which platforms, how to train on Data, what Data Science is, etc., etc., it's a fairly easy thing to figure out. The thing that makes it REALLY difficult is that it takes a genuine desire for “Cultural Collaboration” to actually build an AI company from the ground up. And it's for this reason that I also feel pretty strongly that the big companies today (Unicorns, Public Companies, Investment Groups, etc.) are not likely to be the ones that crack AI, at least not in the way they are selling it.

Mostly because we cracked it years ago, and the very nature of “Innovation” is that we take a few old ideas and figure out how to bypass cultural barriers to make something “new” that is actually older than we realize. I still chuckle when someone working with Neural Networks gets excited that they realized it was like a brain, then publishes a paper I know I saw a decade ago. Or when we look at an algorithm in one field that already existed in another, someone calls it “Machine Learning”, and there is a slew of high fives forgetting about the nerds that did it first.

Fun fact: Academia is more “Bro Culture” than it will admit today. And your idea of a Nerd is maybe WAY off track. Thanks, Pop Culture!

Skynet vs. Jarvis

I don't think we give people enough credit; there is something mathematically inevitable about our capacity to solve problems and have society keep things in check. No matter how much a government watches, how off-balance a product is, or how inspiring a mind can be, social intelligence will always prevail. Sometimes “bad” things happen to allow better things to occur, and failure forces change.

What I can tell you now, after exposure to a LOT of “AI”, “Data Science”, and “Data” companies, is that their leadership's objectives will inevitably result in something that is a reflection of who they are and who they surround themselves with. If you have a CEO that sells hype, Investors that fund hype, or a public market that responds to emotions, they will keep themselves in the balance of their own motivations, and this impacts EVERY person below them, systemically.

The reality of this is, in order to understand AI, you factually, Mathematically, Scientifically, and Culturally need to understand people. And if you're too busy controlling AI today, you're probably not going to be creating AI for tomorrow. Because ultimately, society wants Jarvis, and even if they don't know it, their actions will systemically ensure Skynet doesn't happen. And if you're a brilliant mind who is afraid of the end of times, you don't have enough understanding of the people you serve. You should spend more time with them.

Unless we are already in the Matrix. Be nice to hackers.
