# Zoom into Immanuel Kant vs Artificial Intelligence

*From the GregLinthicum/From-Logistic-Regression-to-Long-short-term-memory-RNN GitHub Wiki*
Kant, the father of the German Enlightenment, spent most of his life in Königsberg, then a Prussian and today a Russian city. Both nations later opted for a more Hegelian view of the world, which makes Kant’s ideas look even more groundbreaking than they do from today’s Western European perspective. For Kant, space and time are pure a priori intuitions, while mathematics and geometry provide synthetic a priori judgments. A posteriori knowledge, by contrast, rests on inductive logic and comes from observational evidence. These philosophical distinctions, rather abstract at the time, play out in a very practical way in contemporary society, where theoretical physicists and experimental physicists can be seen as implementations of the two sides of the divide.
What is knowledge, then? It is not the volume of all that has been established. It is not the sum of everything envisioned synthetically a priori and confirmed experimentally. Neither is knowledge the sum of everything observed a posteriori and then synthesized into the synthetic a priori, nor the two combined. Why not? Because most of those volumes are forgotten. We maintain only the outer layer. We no longer know how to build pyramids or how to tell a horse to go right or left. We all drive cars, yet individually we are totally clueless about where the iron or the windshield came from. Knowledge is maintained by the species; it is simply too big to fit into a single head. Knowledge is a distilled, tiny fraction of the streams above, reduced in volume to what matters within the time span we are given to live. What about consciousness? Does our species, despite being distributed into billions of instances, have a consciousness of its own? Let us make that last question a part of Chalmers’ hard problem. On the other hand, I cannot imagine that it does not.
Your consciousness is impenetrable to me, and mine to you. Yet we both search for confirmation of our synthetic a priori representations by talking, reviewing, and listening. Why confirm the a priori? Doesn’t that ruin the definition of a priori? Do we really want proof from another individual, or do we seek confirmation from the species as a whole? Is a priori moral judgment some mysterious ability inherent in consciousness, or does it come from the species’ self-preservation instinct, or from the species’ consciousness as opposed to the individual’s? That is, are we controlled, as individuals, by a higher consciousness, one hosted in a network of distributed processing units?
Artificial Intelligence, in a nutshell, helps to overcome the limitations of human speech.
What I mean is this: face recognition, for example, had limited success as long as we tried to make computers perform it from a small set of features definable in human speech, such as the ratio of the distance between the eyes to the length of the nose. AI liberates us from being limited to concepts expressible in human speech. The developer can easily change the parameter: use 25, 500, or 50,000 features to grasp the concept being chased (such as a particular face identity), without worrying about names for those features. The critical help AI has received from mathematics over the last 20 years consists of algorithms that ensure computer-generated features (or descriptors) are NOT correlated with one another, or, more precisely, that their correlation is minimal. The critical help from hardware consists of liberating computation from performance constraints born of, and perpetuated by, a single-CPU mindset. The developer does not need to understand the details of the computations; he only needs to understand the most likely impact that changing the parameters he plays with will have on the model being built. A model here is a finalized set of features, with their specific values, that allows the system to identify what needs to be identified.
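The idea of machine-generated, mutually decorrelated features that carry no human-language names can be sketched concretely. Below is a minimal illustration using principal component analysis (PCA via an eigendecomposition of the covariance matrix); PCA is my illustrative choice of decorrelation technique, not something the text prescribes, and the data is synthetic:

```python
import numpy as np

# Illustrative sketch (not the author's method): turn correlated raw
# measurements into decorrelated, nameless features via PCA.
rng = np.random.default_rng(0)

# Fake "raw measurements": 200 samples of 5 correlated variables,
# built from 2 underlying factors plus a little noise.
base = rng.normal(size=(200, 2))
raw = np.hstack([base, base @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(200, 3))])

# Center the data, then eigendecompose its covariance matrix.
centered = raw - raw.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Projecting onto the eigenvectors yields new features whose pairwise
# correlations are (numerically) zero -- none of them has a name like
# "distance between the eyes"; they are simply directions in the data.
features = centered @ eigvecs
corr = np.corrcoef(features, rowvar=False)
off_diag = corr - np.diag(np.diag(corr))
print(np.abs(off_diag).max())  # tiny: close to machine precision
```

The point is only the shape of the operation: the developer chooses how many such features to keep (25, 500, 50,000), and the mathematics guarantees their minimal mutual correlation without anyone naming them.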
Let me define intelligence as the rate at which validations of synthetic a priori representations against a posteriori (experienced) representations come up as successes.
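One possible operationalization of this definition (my framing, not the author's formalism) treats the a priori representations as predictions and the a posteriori representations as observed outcomes, so that intelligence becomes a success rate:

```python
def intelligence_rate(predictions, observations):
    """Fraction of synthetic a priori predictions confirmed a posteriori.

    Illustrative reading of the definition in the text: 'predictions' stand
    in for a priori representations, 'observations' for experience.
    """
    assert len(predictions) == len(observations)
    successes = sum(p == o for p, o in zip(predictions, observations))
    return successes / len(predictions)

# A model predicts outcomes; experience then delivers the actual outcomes.
predicted = ["rain", "sun", "rain", "sun", "rain"]
observed  = ["rain", "sun", "sun", "sun", "rain"]
print(intelligence_rate(predicted, observed))  # 0.8
```

Under this reading, a higher rate of confirmed predictions is, by definition, higher intelligence; whether that captures everything Kant would demand of the a priori is exactly the question the next paragraph raises.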
Following from all of the above, one could consider AI models as synthetic a priori, or, if you will, scientific theorems written in a language beyond human verbalization. I have trouble satisfying the Kantian requirement of necessity for an AI model to count as a priori; perhaps you could help me resolve that precondition. The high success of AI-powered applications, if there is one, proves that AI can provide true intelligence. Conversely, powering an application with AI does not automatically lead to technical success, much less guarantee commercial success. We might still be burdened by the low intelligence of designers, or by misplaced goals defined by upper management. While the former is likely to be remedied within a few years by the increasing presence of AI in the design process, that is, by allowing one AI to design another, the latter falls somewhere between free will and wisdom.
The moment we allow computers to exercise free will, they will become a new species. Free will is a necessary precondition for becoming a species, or, if you will, for becoming conscious.
The free will of individuals is given to them by the conscious, free-willed species of which they are a part. So, after all of the above, let me conclude that Artificial Intelligence is surely going to be used to an ever higher extent. The benefits of using AI are measurable. Whatever is identified as a negative impact will be resolved on a case-by-case basis. The biggest danger is that AI might penetrate the consciousness of the species that we are, our collective consciousness. The danger is amplified by the fact that one consciousness is impenetrable to another, so we would not even notice.
Within the contemporary philosophical landscape, the lines that flow most directly from Kant are the representational theories: the Higher-Order (HOT) and First-Order (FOR) models. Representational theories hold that consciousness is directly linked to “mental representations” rather than to a physical state. Rosenthal, Dretske, Lycan, Clark, Byrne, Crane, and Thau are examples of researchers who have continued to expand representational theories. A large part of contemporary research on consciousness and cognition is led by scientists with medical and neurological backgrounds. This group focuses on experimental approaches that resemble reverse engineering and might one day surprise computer science with practical suggestions. Chalmers and Tononi are frequently mentioned names, but there are many other contributors.
Finally, let me tease your curiosity and entice you to dive into the other end of the spectrum, where quantum computers and quantum theories of philosophy meet today. The list is long and diverse in its approaches. Please consider the following readings solely as a starting point:

- Di Biase, F., “Quantum-Holographic Informational Consciousness”
- Argonov, V. Y., “Neural Correlate of Consciousness in a Single Electron: Radical Answer to ‘Quantum Theories of Consciousness’”
- Greg Arkansas, “Consciousness Awaken”
- Kak, A., Gautam, A., Kak, S., “A Three-Layered Model for Consciousness States”, NeuroQuantology
- Sieb, R. A., “Human Conscious Experience is Four-Dimensional and has a Neural Correlate Modeled by Einstein’s Special Theory of Relativity”