HTTPS://Brooksnetworks.com
Understanding Artificial Intelligence and your contribution
James J. Brooks, updated August 2016
Let’s set the table and get rid of some nonsense
You and I contribute to, and use, the building blocks of AI every day. We may not realize it, but every transaction at a fast-food counter, every toll booth we pass through, every web page we visit, and everything we store or create using modern software adds to the digital footprint of our world. Some industry giants do their best to keep your specific identity anonymous, but the data created by your behavior is worth its weight in gold, and the massive collection of data about you is the foundation of Artificial Intelligence. Here is how it works:
The term “Artificial Intelligence” tends to paint an ominous portrait in our minds. Good or bad, the notion of turning mankind's decision making over to computers is a subject that can unravel our own cognitive ability to discuss the topic rationally. Superficially, a subjective view of this imposing theme we call “Artificial Intelligence” makes a fine topic for debate, but that is about it. AI is here, it is part of our lives, and it is ours. As you will see, we have been, and will always be, the facilitators of our own existence.
Understanding AI is not difficult. To do so, we must start by leaving behind the “what if” philosophical arguments that come along with the sci-fi grand illusion. If sci-fi scenarios were accurate, I believe we would all be driving hovercraft and would have mastered the space-time continuum long ago. The bad news is, we are not going to master the space-time continuum anytime soon. The good news is, the practical application of Artificial Intelligence, and our own intellectual evolution, is happening as we speak. It is here, it is real, and there is nothing artificial about it.
Abandon the theory of a future where the world's custodial rights are handed to machines, and their "intellectual” decisions eventually conclude that leaving “us” out of their plans is unfortunately unavoidable. Set aside the theoretical endgame where AI logic, void of emotion, personified as a quasi Mr. Spock, cold-hearted bastard that he is, conspires to put us out of our misery.
All the world's a development testing lab, and we are willing players
No matter who you are, or what you do for fun or work, you will be able to understand this topic more deeply and enthusiastically than you do right now. When we are introduced to a new computer application, the timeline from the “wow, that is pretty cool” phase to “I cannot get by without this” has shrunk dramatically, driven by our collective intellectual appetite, leaving almost zero tolerance for error and no patience for pre-release refinement. Or at least we like to pretend we have no tolerance for incomplete solutions. Truthfully, we have painted ourselves into a corner, and like any other addiction, we tell ourselves that next time we will allow software development to reach a manageable state of completion before we throw caution to the wind and torture ourselves with unfinished code. The mind of a software engineer may accurately describe his or her project's development phase as "early beta," but as consumers our otherwise rational thought process hears "release candidate." So with our implied blessing we push the bird out of the nest and hope it can fly well enough to survive until R2. This driving force behind all that we do is arguably irresponsible and disingenuous, as we continue to release solutions that are not ready for prime time. It is also arguably human nature itself: a force that has always been with us, and that has pushed our technological evolution as far and as fast as possible.
We have an insatiable appetite for the next best thing; we possess the collective attention span of a gnat; we label ourselves early adopters and subject ourselves to the pain of bleeding-edge technology over and over again. We have only ourselves to blame, or to thank, for this accepted product-release model. It is what it is. Since we can't beat them, we have chosen to join them, and everything discussed herein, and every example used to illustrate how we use AI, exists because of our behavior. Timelines would be impossible to meet without broad beta testing and refinement of code, perfected using input from you and me feeding the adaptive learning engines. We are not artificial, and we are really not THAT intelligent, but we install the latest version of an application, agree to deal with the shortcomings we expressly consent to tolerating, and even agree to click the smiley face when we stumble across a feature we appreciate.
So when we talk about Artificial Intelligence, we have to consider human behavior and the demand for intellectual property developed by man and delivered by man-made systems with accelerated, perpetual learning. Our demand for newer, better solutions is the bedrock on which the foundation of AI is constructed. We race toward technical superiority with every additional data query. In short, if you are waiting to see what AI will do to, or for, the world, you do not have to look into the future; you just have to think about what you did today. Chances are you were assisted by AI in some way, and simultaneously you played a part in the perpetual learning phase inherent in the AI process.
Artificial Intelligence in your everyday life right now, in March 2016
To understand AI, we must look at the practicality of using machines to do the work of man, to improve our quality of life, and to solve the problems we face, just as the invention of the cotton gin, the internal combustion engine, the personal computer, and the cell phone have done. AI is no different from any of these things when it comes to its place in OUR intellectual evolution. (The last two items on that list have already bridged the gap into AI.)
The evolution of computing, as we know it and as we use it in our everyday lives, already contains quite a bit of Artificial Intelligence. There are many subsets of AI that you probably encounter every day. Generally referred to as evolutionary programming or machine learning, things like speech recognition, data mining, industrial robotics, and search engines are all forms of AI.
Getting driving directions: a perfect example of AI in three easy steps
Let’s say you are invited to an event. A band is playing at a small establishment in a city near you. You know the name of the place, and perhaps that is about it. You get in your car (the retro-mobile where the driver is still flesh and bone) and, before you begin to drive, you push a button or voice a command that engages the attention of your navigation system. Immediately you are prompted to state your desired destination in natural language. After a brief Q&A session to make sure you and your computer are on the same page, you are given specific driving directions to the destination. Easy enough, right?
Your smartphone is in a persistent state of readiness, waiting to fulfill the request that will follow the verbal entry-point command. At this point, the voice recognition engine is only required to identify a single command. This simple on/off button (a phrase) can exist as a hard-coded element; it is not AI, and is little more than the evolution of "The Clapper." A rough sketch of that simple switch follows below. What comes after it, however, begins to help us understand AI in its infancy. The challenges inherent in today's AI applications expose just how demanding processes such as intelligent analysis and adaptation really are, with attributes uniquely built to understand YOUR voice. Most of us already depend on intelligent voice recognition and expect it to become an expert at understanding our voice. We get angry when it does not work well enough, we push for a “smarter” interpreter, and we insist that this "artificial" person listening to our voices become more “intelligent.”
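To make the "hard-coded switch" idea concrete, here is a minimal Python sketch, purely illustrative and not any vendor's actual code: an idle loop that ignores everything it hears until the wake phrase shows up. The transcribe_chunk helper is a made-up stand-in for the real speech-to-text machinery.

    # A minimal sketch of the "hard-coded switch": the device loops over
    # incoming phrases and does nothing until it hears the wake phrase.
    # transcribe_chunk() is a hypothetical stand-in for real speech-to-text.

    WAKE_PHRASE = "ok google"   # the single command the idle device listens for

    def transcribe_chunk(audio_chunk: str) -> str:
        """Stand-in for a real speech-to-text call; here we just normalize text."""
        return audio_chunk.strip().lower()

    def wait_for_wake_phrase(audio_stream):
        """Ignore everything until the wake phrase arrives, then hand off control."""
        for chunk in audio_stream:
            if transcribe_chunk(chunk) == WAKE_PHRASE:
                return True    # wake the full voice recognition engine
        return False

    # Simulated microphone input
    if wait_for_wake_phrase(["turn left", "Ok Google", "navigate to the club"]):
        print("Voice recognition engine is listening...")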
Break this process down, and the first "aha" moment is only a few paragraphs away:
1. Our first interaction with AI happens the moment you call up the device's basic voice recognition applet. For example, Google-powered Android devices listen for a single voice command, "OK Google." This phrase acts as a switch to turn on the device's voice recognition engine. The smart device responds to your nudge with an acknowledgment, informing you that it has its thinking cap on. The VR (voice recognition) engine now involves some AI features: it has learned your voice and will likely execute the start command for Google Maps, whereas someone else talking to your phone may not get the same result. So we have encountered our first AI-driven process. We tend to take this adaptation to our specific voice for granted, but today's VR capabilities are the result of several decades of development and adaptive learning at a very complex level. The variables attached to pitch, tone, cadence, dialect, and regional accent create billions of decision points within the code written to decipher a person's voice.
If you use your smartphone to dictate your outgoing texts, you may have noticed that it has become better at deciphering your mumbles and indecisive speech patterns. Well, mine has, at least. The point is, there is more going on than a straight interpretation of the sounds you make; there is an intelligent process that is rather remarkable at deciphering speech from a known source.
2. The speech recognition engine accurately assimilates our verbal gibberish and relates it to the standardized database of words it will use as input to the second AI module: the mapping application. The AI begins by taking the name of the establishment you requested and running an instant query using all the variables it has learned about you: where you live, where you have been before, your age, your interests (information you have knowingly or unknowingly provided over time). It combines them with other variables, such as your current GPS coordinates, similar requests from a larger database outside of your personal history, and the weighted value of possible matches based on the popularity of probable destinations sharing the name you requested. The program uses complex mathematical calculations derived from the values it has assigned to each contributing variable, and it continues to drill down to a finite list created through these algorithms. The application is not only providing you with information; it is also designed to refine the calculations it used based on the accuracy of the result it has presented. This perpetual learning and dynamic refinement exemplifies another early form of AI that is well beyond the conceptual phase. More than likely, the destination shows up at the top of the list of probable matches you are prompted to pick from. (A toy sketch of this weighted matching appears just after step 3.)
3. Now that you and the navigation system agree on your destination, the system looks up the establishment’s longitude and latitude and cross-references your current proximity to it. Before you are presented with directions, the AI engine has assessed traffic conditions, time of day, road closures, and the flow of traffic as reported by other users currently en route on the road segments it could choose, in its quest to present the best path from point A to point B. Ultimately, and almost instantly, the system provides a driving route to follow. (A bare-bones sketch of this traffic-aware route selection also follows below.)
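Here is the toy sketch of step 2's weighted matching, mentioned above. It is my own contrived illustration, not Google's algorithm: each candidate venue earns a score from invented weights on popularity, your personal history, and distance, and the most probable match rises to the top of the list.

    # A contrived illustration of step 2: rank candidate venues by a weighted
    # score built from overall popularity, the user's own visit history, and
    # distance from the user. The weights and venues are invented.
    from math import hypot

    def score_candidates(query, candidates, user_location, visit_history):
        ranked = []
        for place in candidates:
            if query.lower() not in place["name"].lower():
                continue   # name has to at least match the spoken request
            distance = hypot(place["x"] - user_location[0],
                             place["y"] - user_location[1])
            score = (
                2.0 * place["popularity"]                   # how often everyone picks it
                + 3.0 * visit_history.count(place["name"])  # how often YOU picked it
                - 0.5 * distance                            # closer places rank higher
            )
            ranked.append((score, place["name"]))
        return [name for _, name in sorted(ranked, reverse=True)]

    candidates = [
        {"name": "The Blue Note", "x": 2, "y": 3, "popularity": 8},
        {"name": "Blue Note Diner", "x": 40, "y": 1, "popularity": 5},
    ]
    print(score_candidates("blue note", candidates, (0, 0), ["The Blue Note"]))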
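And here is the bare-bones sketch of step 3. Again, this is a contrived example, not the real navigation engine: a tiny road graph where each segment's cost is its base travel time multiplied by a made-up live traffic factor, searched for the fastest path.

    # A toy sketch of step 3: find the fastest route through a road graph where
    # each segment's cost is base time times a live traffic multiplier reported
    # by other drivers. The map and traffic numbers are made up.
    import heapq

    roads = {   # segment: (base_minutes, current_traffic_multiplier)
        ("A", "B"): (10, 1.0),
        ("B", "D"): (10, 3.0),   # heavy congestion reported here
        ("A", "C"): (15, 1.0),
        ("C", "D"): (12, 1.1),
    }

    def fastest_route(graph, start, goal):
        queue = [(0.0, start, [start])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for (a, b), (base, traffic) in graph.items():
                if a == node and b not in seen:
                    heapq.heappush(queue, (cost + base * traffic, b, path + [b]))
        return None

    print(fastest_route(roads, "A", "D"))   # picks A -> C -> D, avoiding congestion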
This process happens in a matter of seconds, with billions of variables and calculations that have been refined over time as the navigation system has learned from its own similar processes. The navigation system has been programmed by humans to perform a perpetual comparative analysis of every element used to produce previous results, identifying and avoiding inefficiencies (a tiny sketch of this idea follows this paragraph). With every additional element added to the database, and each possible combination of variables accounted for or calculated with high probability, anomalies are diminished and classified as insignificant. As the amount of data processed increases and statistical probability is calculated, the artificially intelligent results become increasingly accurate. As these results are compiled, the navigation system produces a byproduct of the very process it has used to better itself: the resulting tables from each calculated trip, the successes and failures, and additional data that may not be directly significant to the trip at hand but is far from extraneous. This massive collection of extremely accurate data, with variables that can be applied to other, partially related issues such as weather patterns or a seasonal condition that creates traffic problems, has importance well beyond the fifteen-minute delay it is responsible for in March and April each year in Torrance (a contrived example).
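The sketch below illustrates that comparative analysis in the simplest form I can invent: after each trip, every road segment's predicted travel time is nudged toward the time drivers actually needed. The segment names, numbers, and learning rate are all made up.

    # A small sketch of "perpetual comparative analysis": after every completed
    # trip the system compares the time it predicted for each road segment with
    # the time actually observed, and nudges its estimate toward reality.
    segment_estimates = {"Main St": 8.0, "Oak Ave": 5.0}   # predicted minutes
    LEARNING_RATE = 0.2                                    # illustrative only

    def refine(estimates, observed_trip):
        """Blend each stored estimate with the newly observed travel time."""
        for segment, actual_minutes in observed_trip.items():
            old = estimates[segment]
            estimates[segment] = old + LEARNING_RATE * (actual_minutes - old)

    refine(segment_estimates, {"Main St": 12.0, "Oak Ave": 4.5})
    print(segment_estimates)   # Main St drifts up toward 12, Oak Ave down toward 4.5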
This byproduct of ancillary data, as well as the gold mine of trip data that has been collected, can be thought of as yet another building block in our quest to define AI. Using this building block and others like it, we start to move away from linear data processing and toward tangible AI development (tangible AI, an oxymoron, I suppose, but tangible it is). This is where we are. This data can and will be used for a higher level of computing as a problem-solving tool. The results of our first generation of data, processed with intelligent, adaptive decision-making algorithms, will help define the second-generation systems. These "Gen2" systems, benefiting from the processes before them and from the machine learning inherent in the Gen1 computations, can produce answers to questions far beyond the number crunching our baseline data was originally intended for. The collection of results, recycled like exhaust from a tailpipe, becomes the fuel used to propel us into deeper learning. And so it goes.
Here is where you start to “Get IT”
This is where we really begin to achieve what is thought of as Artificial Intelligence. This is where you can begin to comprehend the evolution of machine learning, which applies this process in a more accurate and adaptive way as complexities are introduced through a pyramid of modular processes, giving us the ability to answer questions by applying complex mathematical equations to data collected in massive amounts and sliced and diced every way imaginable. This is only the beginning, though. This is also where the terms AI and deep learning begin to make sense and boggle the mind simultaneously. But again, the concept and the practicality are not difficult to understand.
Think of the entirety of the navigation system we used as a basis for understanding AI as it is used in its infancy today. Now think of all the data that went into calculating a single trip. Think of that single trip as one line item in one table of a new database, collected and stored with all the other trips this navigation system has produced. The end results of all those trips become a single source feeding one table of a new database.
As you would imagine, a database with one table is not much more than a spreadsheet. This new database has many tables; the navigation result table is merely a single element. For the sake of discussion, our new database has a total of four tables. (Why four? I like the number.) All four data sources have one thing in common: they are the results of sophisticated, massive collections of records and represent the end product of their respective systems. These other systems have seemingly nothing to do with the navigation system. They are just other databases coexisting in a world full of data.
All four of the systems we will use to feed our deep learning Gen2 database are considered Gen1 engines. Each Gen1 system has done its thing and produced the result it was designed to produce from all the variables it was given, giving its end users the result they were looking for. These results, like those of the navigation system, are a wealth of information we will use as source tables for our Gen2 AI engine.
It matters not what we are trying to achieve in order to understand this concept. The important thing to think about is what can result from taking several very accurate, extremely deep, very complex pre-made data sources and using them as input for a second-generation database. This Gen2, deeper learning cycle can produce very accurate output with relative ease and with near-certain credibility because of the known integrity of the core data. This Gen2 output can be thought of as higher-level learning, or in AI speak, "deeper learning." (A schematic sketch of the idea follows.)
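Here is that schematic sketch. The table names and numbers are invented, and only two of our four imaginary tables are shown, but the idea scales: finished Gen1 result tables are joined on a shared key so a Gen2 query can ask a question neither system could answer on its own.

    # A schematic sketch of the Gen2 idea: join finished Gen1 result tables
    # (trip delays and rainfall summaries are invented examples) on a shared
    # key so a second-generation query can learn across unrelated systems.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE trips   (region TEXT, avg_delay_min REAL);
        CREATE TABLE weather (region TEXT, rainfall_mm REAL);
        INSERT INTO trips   VALUES ('Torrance', 15.0), ('Pasadena', 4.0);
        INSERT INTO weather VALUES ('Torrance', 30.0), ('Pasadena', 5.0);
    """)

    # The Gen2 "question" never appeared in either Gen1 system on its own:
    # does rainfall line up with traffic delay, region by region?
    for row in db.execute("""
            SELECT t.region, t.avg_delay_min, w.rainfall_mm
            FROM trips t JOIN weather w ON t.region = w.region
            ORDER BY t.avg_delay_min DESC"""):
        print(row)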
Why is this possible now, and why was it not created long ago?
We can look at the evolution of technology, and the invention of the single transistor, to understand why we are where we are today. The transistor as a practical component in mass-produced electronics changed the world. The impossible became not only possible; it gave us the foundation for all of the electronic gadgets we use today, in the course of about sixty years. We used, and still use, this building block as the basis for everything that contains a circuit board. The basic premise of an amplified switching mechanism used to produce an on/off state (0 or 1) has been condensed into semiconductors, where billions of these switches are necessary to process such complex data. From a single stand-alone component, the transistor has continued to evolve exponentially, and as our ability to pack more transistors into smaller integrated circuits reaches yet another threshold, we are entering a new generation of computing power. The power to create the complex systems that are now leading us into AI simply did not exist a few short years ago.
Data collection and data mining: the holy grail of AI
The significance of collecting and processing massive amounts of data has long been understood. The evolution of the transistor into the microprocessor, with its billions of transistors, along with the programmatic language to collect and store that data, had to exist before we had the amount of data needed to create a viable AI system. Which should remind us, again, that there is really nothing artificial about it. This data collection, a topic for another philosophical debate notwithstanding, has been an ongoing effort by the tech giants that are actively leading us into the world of AI. For nearly two decades the Internet has provided the first tool to seamlessly collect data. The companies who have spent the time, and have refined the methods, to collect that data probably know more about you than you know about yourself. This should not surprise anyone.
With the data sources on hand and the hardware architecture available for ultra-super computing, Gen2 databases can and do exist. Understanding this generation of data processing is important, and probably as significant as the transistor itself, because going beyond the technological evolution I represent as Gen2 AI virtually removes the need for manual input of data, reconciliation of data sets, or programming instruction handled by direct human interaction. We are still the facilitators of the systems, which run for our benefit and cannot run without our decision to run them; but like the automobiles we drive to work, the machines we created can simply do the job of transporting us faster than our legs can.
Going back to the empowerment of our Gen2 AI systems, we can easily envision flipping that data over yet again. As that evolution happens, our systems will have no direct connection to the essential data that was originally collected and compiled; that data has become a mere building block. The result is extremely accurate data stripped of any element or table that exists in its entirety in our world, yet it can still be trusted with precision because of the core data that served as building blocks just two generations beforehand. This Gen3 AI system treats an unimaginable amount of data as a single line in a single table, in a database with enormous power and a data set larger and more accurate than our minds can comprehend. Run queries against this data, and the results have been produced without a true connection to any single real-world event. (A purely conceptual sketch of this generational cascade follows.)
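The conceptual sketch below compresses the whole Gen1-to-Gen3 cascade into a few trivial stand-in functions, purely to show the shape of the data flow: each generation consumes only the summarized output of the one before it, so by Gen3 no raw real-world record is ever touched. The numbers and thresholds are invented.

    # A purely conceptual sketch of the generational cascade: each generation
    # works only on the output of the one before it, so Gen3 is two steps
    # removed from any raw real-world observation.
    raw_events = [3.1, 2.9, 3.4, 10.2, 3.0]            # Gen0: raw observations

    def gen1_summarize(events):                         # Gen1: crunch the raw data
        return {"mean": sum(events) / len(events), "peak": max(events)}

    def gen2_combine(summary):                          # Gen2: works only on Gen1 output
        return {"anomaly_ratio": summary["peak"] / summary["mean"]}

    def gen3_decide(derived):                           # Gen3: never sees a raw record
        return "investigate" if derived["anomaly_ratio"] > 2 else "normal"

    print(gen3_decide(gen2_combine(gen1_summarize(raw_events))))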
Here is where mankind sees the game changer
Imagine accurate data at the core of a database that is removed from any element or table that exists in its entirety in our world, yet can be considered extremely accurate because of the core data that served as building blocks just two generations beforehand. Add to that the programmatic advancements that are refined exponentially, in tandem with the data. We now have a structure of pure Artificial Intelligence: a system with the ability to produce information and solutions to problems we have yet to understand, problems we may not even be aware of.
By the third cycle of programs and data sets, you have the ability to produce accurate results for problems with no direct connection to any event, or to any real or hypothetical circumstance that has been defined or even thought of. This IS AI; this is deeper learning. This is life on planet Earth as we know it, a virtual reality created through Artificial Intelligence. This is hardly science fiction, and it is certainly not a pie-in-the-sky concept. It is OUR intellectual evolution, and it is happening now. So let yourself be amazed. Where are we headed? An easier question to answer would be where we aren't headed. The rate at which we process data today is staggering, but it is merely a fraction of our ability to process data tomorrow. We will watch this intellectual evolution unfold in our generation. This is why every great mind in technology today is focusing its efforts on AI. It is everything, it is anything, it "IS."
Over time this will allow humans to build and configure infrastructure, or design medical procedures, without any real historical reference to guide them. It is the genius of mankind and the evolution of our minds that created this extension of our own capabilities. Artificial as it is, it is still the result of our human intellectual evolution and our drive to build a better mousetrap.