Why This Generation Won't Solve AI

Peter Salinas
Towards Data Science
11 min read · Dec 16, 2019


Photo by Franck V. on Unsplash

Some may not be aware of it, but AI has been a thing for quite some time. There have been many minds around the field, but the one that stands out in much of my own work is John McCarthy. A bit of an oddball, well respected, and responsible for some significant contributions to science and technology that have shaped our world today. Without his work, we would not have the internet as we know it, many capabilities in today's programming languages would be severely limited, MIT probably wouldn't be what it is today, and DARPA likely wouldn't have advanced as far as it has.

To be clear, neither he nor anyone else is someone I personally idolize. For one, I have met too many heroes who have been humanized; two, I happen to come from a field that fairly regularly quantifies human behavior, and that sorta changes your view of people; and three, I did some weird stuff in human simulation in games when I was twenty-something that all but changed (ruined?) my social perspective of human beings. That said, I also fundamentally think all people are brilliant, because how could they not be? Brains are massive calculators that read and translate patterns, and nature as a whole is pretty fantastic.

Admittedly, I wish more people understood this in a meaningful way; it would change how people view one another, connect with one another, and make stuff.

Social Perception

There are a LOT of things to unpack here, but a few topics should be made clear. AI, in the sense of replicating human intelligence as we frame it today, was not the intent of the field's first abstractions, though it was recognized that perhaps one day it would be. Factually, the earliest incarnations of AI did a lot of the same stuff we do today, just with clunkier machines. It started with chess; today it's done in video games, and like chess, it's challenging the most profound minds in games and making them worry about their place in the world.

I do find it interesting how many of the most brilliant minds in technology and AI seem to be pushing boundaries that have been pushed before in various forms, but there is a principle from cognition, a predecessor of AI, which sorta calls this out. Essentially, humans align with visual languages that are universally accepted; in our society, that means words and pictures, or entertainment. The more time and faith we put in a word, the less that word or thing seems to be understood for its application. One such word would be the cloud, which is also distributed computing, which is also timesharing.

This is a topic I have seen a lot of great engineering leaders get very frustrated with in presentations. What is most important to remember here is that technology, software specifically, is simply an environment that allows data to exist in whatever form it takes, and the architecture and structure of that software allow various things to occur subject to those rules. That's really all there is to it, but even in technical worlds, philosophical debate exists, ironically.

Classifications of Intelligence

This one is maybe my favorite of the entire topic, but it's also the most culturally interesting and conflicted. It's a general assumption that AI is "a thing," and though we know it's not "a thing," we still never do a good job of classifying it. Academia often does its best to classify stuff, but even there we run into issues where we research a lot of the same stuff in different worlds, and it's really hard to bridge that language. Some of it is time, some of it is politics, and a lot of it is just how academia works to get recognition.

As a society today, we basically communicate that anything with developed facial musculature and a developed voice that triggers an emotional response in humans is deemed "intelligent." That said, it seems each year we figure out that some animal actually has a language or social structure we had not been aware of before. Even whether that gets recognized is hyper-subjective: it depends on who knows whom, who does what, and which press coverage drowns out the current evangelist for AI.

Aside from that, we do not make it a social topic to communicate the various classifications of intelligence. What is "human" intelligence, what is "superintelligence," what is "machine intelligence," how do we classify them, and what is each one's role as an application? As a result, businesses and a lot of social hype emerge from people who don't understand much of this, attract a lot of money, and trigger feelings and responses in ways that make all of this even harder to lock down. Really, this impacts even the most profound minds, because we are human and our brains infer patterns and responses from our highest areas of validation (the internet).

Machines of Hype

The reality is that today, the various flavors of AI in all forms are mostly hype. I have met a few groups doing something novel, but the most profound groups in AI today know better than to call themselves AI, and the groups with genuine AI brilliance either know it's only ever gonna be R&D, or don't grasp the problem space, and then it gets expensive. If you know tech, you know what I am talking about. Suffice to say "what's old is new," which seems to be a fairly common theme here.

AI is a simple topic because brains are actually way easier to understand than I think people will allow them to be. Sometimes I wonder if this is nature's way of balancing the scales. The universe is pretty neat that way. Even some old languages we no longer use had amazing elegance, and we just sorta philosophized them into oblivion. What is sorta funny is that a society's inability to adapt or use words inevitably led to this topic too; I think we overlook how many people, and their own motivations, have been involved in defining the world we live in and how we communicate today.

I can say with some certainty that a significant number of these AI hype groups are not gonna do the things the money hype thinks they're gonna do. The fact of the matter is that Watson, SparkCognition, AlphaGo, and a few others brilliantly hiding out there are very elegantly demonstrating that you can take an input of human patterns from an environment and replicate those patterns into environments humans need. As it happens, reading patterns and applying them is all brains do, so this covers pretty much anything you would call creative.
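
To make the "read patterns, replicate patterns" point concrete, here is a toy sketch in Python: a first-order Markov chain that learns which word tends to follow which, then parrots those patterns back. To be clear, this is nothing like what the systems named above actually do under the hood; it's just the smallest honest illustration of the loop I am describing, and the corpus is made up.

```python
import random
from collections import defaultdict

def learn_transitions(text):
    """Read patterns: count which word tends to follow which."""
    transitions = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)
    return transitions

def replicate(transitions, seed_word, length=12):
    """Replicate patterns: walk the learned transitions to make new output."""
    word = seed_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # no observed pattern continues from here
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("brains read patterns and brains apply patterns "
          "and patterns shape what brains call creative work")
model = learn_transitions(corpus)
print(replicate(model, "brains"))  # pattern-shaped, not "creative"
```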

Technology Environment

This one is the kicker, really, but it's maybe the most important of them all. It's the basis of the discussions that trigger topics like Jarvis saving the world or Skynet destroying it. As with the theme above, it's just not that easy, at least not in today's society. For this to occur, you would need a single technology architecture operating as "perfectly" connected technology, which you could refer to as a "simulation" environment. This is "nearly impossible" for any one company, not only because those companies are just run by people with lots of market hype, but because it would need an "ideal" environment to take in data and replicate it, and an environment to demonstrate that pattern.

Aside from the technology space, there is also the topic of platform organizations and their data architecture. Some of them are pushing automated machine learning or some flavor of AI, but what is important to understand is that even data science is only functional based on its capacity to use the correct data architecture, structure, and a lot of other technology pipelines. Data science is a really big guessing game, and in both AI and data science I have met very few leaders who actually understand how this works. The result is that "data and math can demonstrate data and math." Ultimately this must produce something more compelling than leadership intuition, which isn't common; factually, I've never met a data-driven organization to date, but I have met very large data-informed groups with many different ways of handling and communicating data within the same organization.
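
Here is one hedged way to see what I mean by "data and math can demonstrate data and math": fit a linear model to pure noise and it will still report a respectable-looking in-sample fit, because with enough free parameters the math accommodates whatever the data happens to be. Every name and number below is invented for illustration, not taken from any real pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 30, 20
X = rng.normal(size=(n_samples, n_features))  # meaningless inputs
y = rng.normal(size=n_samples)                # meaningless target

# "Train" a least-squares model on the noise.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
predictions = X @ coefs

# In-sample R^2 looks healthy despite there being nothing to learn.
r_squared = 1 - np.sum((y - predictions) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"in-sample R^2 on pure noise: {r_squared:.2f}")  # typically well above 0
```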

Data storage as a whole is really important here too. Sometimes using these "cloud services" can do more harm than good. If you are in games, Android, Apple, Microsoft, and PlayStation are actually very restrictive about what data can even be accessed, or how. What is not communicated or understood is that these are massive organizations that are simply too big to slow down and fix those issues. It's commonly overlooked, and I sometimes wonder how long it will take before PC becomes the dominant environment and consoles are left behind; they seem inflexible, and the restrictions cause more harm than good.

Without going too deep into technical topics, the easiest way to look at data storage is simply as an environment that stores data to be accessed. When you hear terms like lakes, or tables, or whatever else, it's simply an environment of a certain size holding data that can be accessed, usually after jumping through some hoops. The reality is that not many companies take the time to really let this talent do what it has to do, so it becomes a culture of just smashing it in and figuring it out later, or forgetting it completely. This is arguably the most important function in every organization, but it's never given enough attention or support to make it better.
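
If it helps, here is about the smallest possible illustration of that framing, using Python's built-in sqlite3 as the "environment." The table and the data are made up; the point is just that storage has structure, and access means jumping through that structure's hoops.

```python
import sqlite3

connection = sqlite3.connect(":memory:")  # the storage "environment"
cursor = connection.cursor()

# Hoop 1: the environment's structure has to exist before the data does.
cursor.execute("CREATE TABLE events (player TEXT, action TEXT)")

# Hoop 2: data goes in on the environment's terms.
cursor.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("p1", "jump"), ("p1", "shoot"), ("p2", "jump")],
)

# Hoop 3: getting it back means asking in the environment's language.
cursor.execute("SELECT action, COUNT(*) FROM events GROUP BY action")
print(cursor.fetchall())  # e.g. [('jump', 2), ('shoot', 1)]

connection.close()
```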

Cultural Environment

Another big pain I see in AI is that each group has one primary evangelist with a foot in the field from one area of work. They may be published, they may have some work in the private sector, but it's the same pattern: they were a technical mind that got attention, and it influences the entire organization. In some cases, I have seen other groups doing amazing work inside these big organizations get outright shut out, and it comes with some animosity. One thing I have noticed in the character of people who are actually capable of AI is a general appreciation for intelligence in whatever form it takes; those minds don't often debate, they just do stuff.

These days I have a bit of fun, admittedly because I get bored sometimes: I will interview with a group in these areas and dumb down my own background to see what is going on. I really am curious, even open to the role on occasion, but you can tell almost right away what is happening from the first few people you speak with. Ultimately, people hire on words and emotion more than on function or application, which is very human and absolutely acceptable, but the words they align with are what interest me most when I speak with them.

At the end of the day, I suspect the success of an organization built on "technology hype" (and I kinda play around in all areas of it these days) has a lot to do with the individuals at the top. It seems systemic and fairly logical: who they hire, how the money was acquired, and by whom can often be correlated against job descriptions and their current applications of work. That said, I am still not convinced blockchain is viable. I do wonder, though, whether there is a group out there that says it does blockchain for the hype money but bypasses blockchain entirely, replicating the result with older techniques or different words.

The Math Might Be Wrong

I have noticed an interesting cultural intersection of groups in this space: those that follow more linear math in sequence, the "Turing" approach, and those that seem to follow the more classic topics of theoretical physics, the "Church" approach. One thing I can say from my own exposure to the work is that behavior, brain function, and intelligence in machines have a lot more to do with physics than most groups realize. Many of the mathematical implications of notable AI efforts have more in common with current topics being addressed in quantum mechanics.

Nothing I have seen of behavior within properly simulated environments had anything to do with statistics. Factually, the very practice of it is seemingly driven by bias built on a limited structure of perception. In practice, the data being collected is an abstraction of a limited view, then driven toward an output that is very subjective, notwithstanding all those topics of data tech, preparation of data, and so on. The snapshot of information recovered from these practices never shows how things change, only that moment's perception. By the time that data is accessed again, many things will have changed, much like your mood without coffee. It seems silly, but minds really are that susceptible to change.
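
A quick sketch of that snapshot problem, with a drift model invented purely for illustration: a mean gets measured once and keeps being trusted while the process it described wanders off.

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(t):
    """A process whose underlying mean drifts over time (hypothetical)."""
    return rng.normal(loc=0.05 * t, scale=1.0, size=200)

# The snapshot: measured once at t=0, then trusted forever after.
snapshot_mean = observe(t=0).mean()

for t in (0, 10, 50, 100):
    live_mean = observe(t).mean()
    error = abs(live_mean - snapshot_mean)
    print(f"t={t:>3}: snapshot says {snapshot_mean:+.2f}, "
          f"reality is {live_mean:+.2f}, error {error:.2f}")
```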

Emotional Environment

The topic of emotion is very emotional, and accepting what follows as truth is actually very difficult for anyone, as it has a lot of implications for personal beliefs. However, the reality of what is communicated as emotion, in our brains, is an interpretation of a physical response, correlated with tone of voice and the current topics of language in society. We absorb patterns of communication the moment we come out, and we replicate them; by default, some muscles seem to be responsive to happy or sad, and those patterns are then validated or invalidated based on various understandings of a social or cultural environment.
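
As a deliberately crude illustration of that framing, here is a toy sketch where an "emotion" label is nothing but an interpretation fused from a few correlated signals. Every signal name, threshold, and label here is hypothetical; real affect is vastly messier than this.

```python
def interpret_emotion(heart_rate, voice_pitch, words):
    """Fuse physical response, tone of voice, and language into one label."""
    aroused = heart_rate > 100   # physical response (beats per minute)
    raised = voice_pitch > 220   # tone of voice (Hz)
    negative = any(w in words for w in ("no", "stop", "hate"))  # language

    # The "emotion" is just an interpretation of how the signals correlate.
    if aroused and raised and negative:
        return "anger"
    if aroused and raised:
        return "excitement"
    if aroused and negative:
        return "fear"
    return "calm"

print(interpret_emotion(112, 240, "no stop it".split()))  # anger
print(interpret_emotion(112, 240, "we won".split()))      # excitement
```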

The biggest area of complexity is that taking in and replicating all of those inputs we have evolved, the ones that shape our brains and responses, is not an easy thing to do with technology. Our brains are pretty amazing: they correlate sound, touch, taste, smell, words, you name it, and that creates what we see as reality. Don't get weird about the reality thing, it's still very real, but I do suspect the whole Schrödinger's cat thing was an abstraction of this topic. Some of these senses we are demonstrating with hardware; others, like taste and touch, need some time to develop before a machine can register that stimulus and let it affect its own inputs. But ultimately we have emotional humans trying to communicate technical topics, and it results in a lot of confusion.

The Turing test itself has a fallacy in that it's seemingly ever-changing. And I will admit that even I am unsure who holds the dominant or more official position for establishing how to properly measure it. I don't have high hopes that the desire to do AI will allow this to happen; what is more likely is that a younger generation with a better language of math, or a more connected social intelligence, will make the thing and not call it AI. This seems to be what societies do.

I feel strongly that satisfying a definition of intelligence today will be culturally difficult, maybe not impossible, but it's about as silly as saying someone will have a perfect anything. Nothing about math itself is static; we only take abstractions of math and language and grant them a static acceptance. More importantly, I feel we have a very skewed view of pain, intelligence, culture, and a variety of other topics.

Of all of this, there is one lesson that stands out, taught to me by an unexpected philosopher, a reformed thug from LA, and I see it still stands true: life is a constant war between language, philosophy, and theory, and the only times things seem to advance are when the three find balance in one another. Honestly, of all the brilliant minds I have met, he was the most unexpected, and I met him at a very young age in my life. This is maybe the biggest topic that makes me feel some kind of way about how we recognize intelligence, artificial or not.

Brains are in fact brilliant, all of them. Everything seems to push and pull in various ways, and much of it can be measured in ways so obvious it may make some people very uncomfortable. Still, I have a lot of faith in people to do amazing things; I'm just not sure what amazing thing comes next.
