1. The current approach to AI and NLP – and how it fails

In the years before the first flight of the Wright brothers, aviation wasn't scientific yet: the attempts were “inspired by nature”, using feathers, flapping wings, bird suits, and so on.

However, the Wright brothers understood: A machine will only be able to fly if it obeys the Laws of Physics governing flight. So, apparently, using the laws of nature is a fundamental approach, while being “inspired by nature” isn't.

This situation is illustrative of the field of AI and NLP:

  • This field lacks a unifying, fundamental (=natural) and deterministic (=implementable) definition of intelligence, as well as an understanding of how natural intelligence and natural language are related;

  • Without a natural definition, this field lacks a natural foundation;

  • Without a foundation, the techniques developed in AI and NLP are in fact baseless. And without one common (=natural) foundation, its disciplines – like automated reasoning and natural language – cannot be integrated;

  • Being baseless, AI got stuck at a simulation of behavior, and NLP got stuck at keyword level;

  • As a consequence, AI and NLP are limited to programmed and trained intelligence.

Even almost 2,400 years after Aristotle's work on logic, and almost 170 years after the publication of “The Laws of Thought” by George Boole, scientists are still unable, unwilling or forbidden to convert a sentence like “Paul is a son of John” to “John has a son, called Paul” – and vice versa – in a generic way (=through an algorithm).

 

Both sentences have the same meaning. So, it must be possible to convert one sentence to the other – and vice versa – as explained in 1.5.2. Fundamental flaw in the Turing test. However, such a conversion requires an understanding of what natural intelligence is.

Common knowledge:

  • If problems are fundamental, one needs to repair the foundation. Actually, it is better to remove the old foundation and to pour a new one;

  • If two disciplines have different foundations, they can't be integrated, because a building can only have one foundation. If another foundation were poured next to an existing one, both foundations would move relative to each other. Then the expanded building – resting on both foundations – would sag, and eventually collapse.

Using a fundamental approach – based on laws of nature – will deliver significant progress, while it will be fundamentally different from a behavioral / cognitive approach.

1.1. Fiction, engineering and science

Fact-checking is extremely rare in the field of AI and NLP.

Fact: Scientists are unable, unwilling or forbidden to define intelligence as a set of natural laws.

 

Being unable, unwilling or forbidden to define intelligence, AI is not an artificial implementation of natural intelligence. As a consequence, AI is not scientific. Instead, AI is just clever engineering. Therefore, this field is limited to delivering specific solutions to specific problems.

 

Being unable, unwilling or forbidden to define intelligence, a lot of Science Fiction stories are told about AI. This video on YouTube separates engineering from the Science Fiction stories told about AI: “How Intelligent is Artificial Intelligence? - Computerphile”.

 

Also the field of NLP is not scientific, because scientists are unable, unwilling or forbidden to derive new knowledge from sentences in natural language and to write the derived knowledge back as readable sentences in natural language. It proves that scientists don’t understand what natural language is (*).

 

Only Controlled Natural Language reasoners are able to close the loop: natural language → logic → natural language, because only CNL reasoners are able to read sentences (with an extremely limited grammar), to derive new knowledge, and to write the derived knowledge in self-constructed sentences (with an extremely limited grammar).

 

CNL reasoners are based on Predicate Logic, which describes the intelligent function of basic verb “is/are” in a generic way – in the way nature works. So, CNL reasoners work in the way nature works with regard to verb “is/are”. Therefore, they deliver a generic solution. And therefore, they are scientific.

 

However, Predicate Logic – and thus any CNL reasoner – is limited to logic expressed with basic verb “is/are”. Scientists are, for example, ignorant of the intelligent function in language of possessive verb “has/have”. Instead of implementing this intelligent function in artificial systems – which would deliver a generic solution – scientists teach us to hard-code knowledge containing this verb directly into a reasoner or a knowledge base, like: has_son(john,paul). This is again engineering – a specific solution to a specific problem – instead of fundamental science.

 

(*) The field of electromagnetism is scientific – understood – because scientists are able to close the loop for electricity, magnetism, movement and light. Scientists are able:

  • to convert electricity to light, and to convert light back to electricity;

  • to convert electricity to magnetism, and magnetism back to electricity;

  • to convert electromagnetism to movement, and movement back to electromagnetism.

However, scientists are unable, unwilling or forbidden to close the loop for natural language and logic, because they are ignorant of the logical structures of natural language.

1.2. Evolutionary intelligence

First of all, the development of any technology – including Artificial Intelligence (AI) – requires by definition (human) intelligence and a structured approach, while the theory of evolution doesn't support any intelligent influence, nor any structured approach. So, the theory of evolution doesn't apply to the development of technology (like AI).

In the same way, the theory of evolution doesn't apply to the development of Evolutionary Algorithms / Programming and Genetic Algorithms / Programming: Both techniques are obviously algorithms. Algorithms are intelligently designed by definition (*) – using a structured approach – while the theory of evolution doesn't support any intelligent influence, nor any structured approach.

 

Nevertheless, Evolutionary Algorithms are useful for finding an optimum value. They are comparable to the PID controller – found in ordinary central heating systems – which optimizes the burning time in order to avoid undershoot and overshoot.
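The optimum-finding use can be sketched as follows. This is a minimal, hypothetical example – the function names and parameters are my own illustration, not taken from any particular library – that maximizes a simple toy fitness function:

```python
import random

random.seed(42)  # seeded so the example run is reproducible

def evolve(fitness, population, generations=100, mutation=0.1):
    """Minimal evolutionary search: mutate, then keep the fittest (elitist selection)."""
    for _ in range(generations):
        # Each candidate produces one mutated offspring.
        offspring = [x + random.gauss(0, mutation) for x in population]
        # Survivors: the fittest candidates from parents and offspring combined.
        population = sorted(population + offspring, key=fitness, reverse=True)[:len(population)]
    return population[0]

# Toy fitness function with its optimum at x = 3.
best = evolve(lambda x: -(x - 3.0) ** 2, [random.uniform(-10, 10) for _ in range(20)])
```

The search converges on the optimum without “understanding” it – exactly the limitation described above.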

(*) algorithm: “any set of detailed instructions which results in a predictable end-state from a known beginning”

 

1.3. Autonomous systems

We should separate autonomous systems from autonomously intelligent systems:


Autonomous systems: Mars rovers, autonomously flying drones and self-driving cars are examples of autonomous systems. They are able to use consistent sources to navigate, like radar, cameras and GPS. These sources are consistent with their maps and with their movement: If the vehicle moves, their radar, cameras and GPS will move accordingly. And marks on the map will eventually appear on radar and cameras when it comes near the GPS position of those marks.


Such systems are autonomous – but not autonomously intelligent – because the intelligence in such systems is programmed.


Autonomously intelligent systems: Language is a naturally consistent source. It is subject to Natural Laws of Intelligence. For example, each and every (human) language has an equivalent of conjunction “or”, like in sentence “Every person is a man or a woman”. This word has an intelligent function in language: It is used by our brain to separate knowledge, in this case to separate the words “man” and “woman”.


By using language as a natural source of intelligence, it is possible to implement natural intelligence in artificial systems, by which these systems become autonomously intelligent (up to a certain level).

1.4. Artificial / Deep-learning Neural Networks

First of all, neurons are not essential to intelligence, in the same way as feathers and flapping wings are not essential to aviation. So, neurons are not the source of natural intelligence.

 

Scientists are unable, unwilling or forbidden to define intelligence as a set of natural laws. Without a natural definition of intelligence, AI is limited to engineering: specific solutions to specific problems. Artificial Neural Networks (ANN) are engineered to store an average pattern, based on a training set of patterns. As a consequence, the use of ANNs is limited to pattern recognition. And the use of Deep-learning Neural Networks (DNN) is limited to perform trained tasks, based on pattern recognition.

 

ANNs lack the logic implemented by natural intelligence. As a consequence, human intelligence (natural intelligence) is required to select the patterns of the training set. Humans are therefore the only naturally intelligent factor in pattern recognition – not the ANN. The word “learning” is therefore a misfit term with regard to an ANN. To illustrate:

 

We don't have to feed a child thousands of pictures of a cat before the child is able to recognize a cat. One example of a cat may be sufficient for a child to distinguish this type of animal from other types of animal. The moment the child sees another cat, it will point to the animal and ask “Cat?”, in order to get confirmation that it has learned to distinguish this type of animal from other types of animal correctly.

 

My father taught me: “Don't become a monkey that learns a trick”. DNNs are engineered to perform a trick, based on pattern recognition. DNNs lack natural intelligence. So, they don't understand the essence of the task. Therefore, they need to be trained. Human intelligence (natural intelligence) is required to design the algorithms that describe the essence of the task. After a lot of training runs, the DNN has mastered that trick, without understanding the essence of the task. Having designed the training algorithms, humans are the only naturally intelligent factor in performing the trained trick of a DNN – not the DNN itself. The word “learning” is therefore also a misfit term with regard to a DNN. To illustrate:

We don’t need to play a game thousands of times before a child is able to play it. Explaining the rules of the game may be sufficient for a child to play that game, while the rules of a game can't be explained to a DNN.

In our brain, pattern recognition doesn’t provide the intelligence itself. Pattern recognition only provides the input for the intelligent (=hard-coded) brain. Self-driving cars work in a similar way: Pattern recognition provides the input on which the programmed logic responds.

 

The only way to improve pattern recognition in machines is to identify the individual parts of each object, like the left ear of a cat, its right ear, its nose, its whiskers, its mouth, its tail, each eye, each leg, and so on.

 

1.4.1. Deep-learning networks applied to natural language

Deep-learning networks are able to recognize and to produce patterns of a language. But they are unable to grasp the meaning expressed by humans through natural language, because natural language is like algebra and programming languages: It has “variables” (keywords) and “functions” (structure words).

 

In natural language, keywords – mainly nouns and proper nouns – provide the knowledge, while the logical structure of sentences is provided by words like definite article “the”, conjunction “or”, basic verb “is/are”, possessive verb “has/have” and past tense verbs “was/were” and “had”.

 

However, deep-learning networks are not hard-wired to process logic. So, this technique is unable to process the logic that is naturally found in language. And therefore, this technique is unable to grasp the deeper meaning expressed by humans through natural language.

 

Deep-learning networks are based on pattern recognition. And therefore, they are limited to perform tasks based on pattern recognition.

1.5. Fundamental flaw in NLP

The quality of a system is determined by the quality of its output, divided by the quality of its input. The quality of the current approach to NLP is very bad:
• Rich and meaningful sentences in;
• Artificially linked keywords out.

 

During the NLP process, the logical structure of the sentences is lost, just as a two-dimensional movie has lost the three-dimensional spatial information. To prove this loss of the logical structure – and the poor state of the current approach to NLP: You will not find any system – other than Thinknowlogy – able to convert a sentence like “Paul is a son of John” to “John has a son, called Paul” – and vice versa – in a generic way (=through an algorithm).

 

Both sentences mentioned above have the same meaning. So, it is possible to convert one sentence to the other – and back – through an algorithm. So, why are scientists unable, unwilling or forbidden to define such an algorithm?

 

Only if the involved laws of nature are understood is one able to convert light to electricity and back, motion to electricity and back, and so on. In the same way, converting one sentence to another – while preserving the quality (=meaning) – requires an understanding of the Laws of Intelligence that are naturally found in human language. However, not a single scientific paper supports the mentioned conversion in a generic way (=through an algorithm).

 

In its infancy, Thinknowlogy only accepts a very limited grammar. However, its output has (almost) the same quality as its input, which is a quality ratio of (almost) 100%. It proves: Thinknowlogy preserves the meaning.

1.5.1. Blind spot in NLP

Natural language is like algebra and programming languages:

Natural language has “variables” (keywords) and “functions” (structure words). However, in NLP, only the keywords are used, while the natural structure of the knowledge is discarded. As a consequence, the field of NLP got stuck with “bags of keywords”, which have lost their meaning (=natural structure).

 

In natural language, keywords – mainly nouns and proper nouns – provide the knowledge, while the logical structure of sentences is provided by words like definite article “the”, conjunction “or”, basic verb “is/are”, possessive verb “has/have” and past tense verbs “was/were” and “had”. My scientific challenge describes some basic reasoning constructions, based on the logical structure of sentences.
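The split between keywords and structure words can be illustrated with a toy sketch. The tiny structure-word list below is my own illustration – not a complete inventory of the structure words of English:

```python
# Structure words carry the logic; the remaining (key)words carry the knowledge.
STRUCTURE_WORDS = {"the", "a", "an", "or", "and", "of",
                   "is", "are", "has", "have", "was", "were", "had"}

def split_sentence(sentence):
    """Separate a sentence into knowledge (keywords) and logic (structure words)."""
    words = sentence.rstrip(".").lower().split()
    keywords = [w for w in words if w not in STRUCTURE_WORDS]
    structure = [w for w in words if w in STRUCTURE_WORDS]
    return keywords, structure
```

Current NLP techniques keep the first list and discard the second, while – according to the argument above – the second list carries the logical structure of the knowledge.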

 

Scientists are ignorant of the logical structure of sentences. Instead of preserving this natural structure, they teach us to throw the natural structure away, and to link keywords through an artificial structure (semantic techniques). Hence this field's struggle to grasp the deeper meaning expressed by humans, and its inability to automatically construct readable sentences from derived knowledge (automated reasoning in natural language).

 

As a consequence, this field has a blind spot on the conjunction of logic and language.

A science integrates its involved disciplines. However, the field of AI and NLP doesn't integrate (automated) reasoning and natural language. There are roughly three categories in this field involved with natural language and/or reasoning. However, scientists are unable, unwilling or forbidden to integrate them beyond reasoning with verb “is/are” in the present tense:

  • Chatbots, Virtual Assistants and Natural Language Generation (NLG) techniques are unable to reason logically. They are only able to select human-written sentences, in which they may fill in user-written keywords;

  • Reasoners like Prolog are able to reason logically. But they only have keywords as output. So, their results can't be expressed in automatically constructed sentences. As a consequence, laymen are unable to use this kind of reasoner;

  • Controlled Natural Language (CNL) reasoners are able to reason logically in a very limited grammar. But they are able to autonomously construct sentences, word by word.

In order to uplift this field to a fundamental science, the following three steps are required to close the loop for reasoning in natural language:

  1. Conversion from a sentence in natural language to an almost language-independent knowledge structure;

  2. Logical reasoning applied to the almost language-independent knowledge structure;

  3. Conversion of the result of the reasoner – the derived knowledge – to readable sentences, autonomously constructed word by word.
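A toy sketch of these three steps for a single, fixed sentence pattern. The pattern, the field names and the helper names are illustrative assumptions of mine, not Thinknowlogy's actual internals:

```python
import re

def parse(sentence):
    """Step 1: sentence -> (almost) language-independent knowledge structure."""
    m = re.fullmatch(r"(\w+) is a (\w+) of (\w+)", sentence)
    if m is None:
        return None
    member, relation, owner = m.groups()
    return {"relation": relation, "owner": owner, "member": member}

def generate(fact):
    """Step 3: knowledge structure -> autonomously constructed sentence."""
    return f'{fact["owner"]} has a {fact["relation"]}, called {fact["member"]}'

# Step 2 (logical reasoning) would operate on the structure in between;
# here the structure is simply round-tripped back to a sentence.
fact = parse("Paul is a son of John")
```

Because the intermediate structure holds the meaning rather than the surface words, the same fact can be written back in a different form – or, with per-language grammars, in a different language.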

Only CNL reasoners tick all boxes mentioned above for reasoning in natural language. However, they are limited to sentences with verb “is/are” in the present tense. So, they don't accept, implement and use structure words like definite article “the”, conjunction “or”, possessive verb “has/have” and past tense verbs “was/were” and “had”.

 

Some people believe that meaning will evolve “by itself” (see Evolutionary Intelligence), while others believe that the meaning is preserved by parsing all words of a sentence. But they all fail to integrate reasoning and natural language beyond verb “is/are” in the present tense.

 

1.5.2. Fundamental flaw in the Turing test

The Turing test has a fundamental flaw: The quality of the jury isn't specified. So, any chatbot can pass the Turing test if a jury is selected that is easily impressed, or if the subject (chatbot) is presented to the jury as a foreign child that may have difficulty understanding the given sentences, by which the jury becomes biased through compassion for the 'child'.

 

Besides that, chatbots are unable to reason logically. So, it is extremely simple to determine whether the subject is a person or chatbot: Let the subject perform an intelligent reasoning task, as described in my scientific challenge to beat the simplest results of my automated reasoning system.

For example, provide the subject with a sentence like “Paul is a son of John” and the following algorithm:

  • Swap both proper nouns;

  • Replace basic verb “is” by possessive verb “has” (or vice versa);

  • Replace preposition “of” by adjective “called” (or vice versa).

Now ask the subject to apply the given algorithm to the given sentence, which should result in a different sentence with the same meaning. The outcome must be: “John has a son, called Paul”, as described in the first block of my scientific challenge. To be sure, ask the subject to apply the given algorithm in the opposite direction, to convert “John has a son, called Paul”. The outcome must of course be: “Paul is a son of John”.
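The three rules above can be written down as a straightforward algorithm. A minimal sketch, valid only for sentences that exactly follow these two patterns:

```python
def convert(sentence):
    """Swap the proper nouns, replace is<->has, and replace 'of'<->', called'."""
    words = sentence.split()
    if words[1] == "is":
        # "Paul is a son of John" -> "John has a son, called Paul"
        member, noun, owner = words[0], words[3], words[5]
        return f"{owner} has a {noun}, called {member}"
    # "John has a son, called Paul" -> "Paul is a son of John"
    owner, noun, member = words[0], words[3].rstrip(","), words[5]
    return f"{member} is a {noun} of {owner}"
```

Applying the function twice returns the original sentence, so the meaning is preserved in both directions.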

 

Not a single scientific paper supports the conversion of a sentence like “Paul is a son of John” to “John has a son, called Paul” – nor vice versa – in a generic way (=through an algorithm). So, it would become immediately clear if the subject is a person or a chatbot.

 

Another way of separating humans from chatbots as a jury is to present only confusing phrases that are unfinished, completely out of context and unrelated to each other. If the subject initially responds despairingly – and stops responding after a while – then the subject is human. But if the subject keeps responding cheerfully with full sentences, then the subject is a chatbot.

 

1.6. Predicate Logic

Predicate Logic (algebra) has a fundamental problem when applied to linguistics: It doesn’t naturally go beyond basic verb “to be” in the present tense.

 

Predicate Logic (algebra) describes logic expressed by present tense verb “is/are” in a natural way. But it doesn’t describe the logic of the complementary function of verb “is/are”, namely verb “has/have”. Neither does it describe the logic of their past tense functions, namely verbs “was/were” and “had”. As a consequence, automated reasoners are unable to read and write sentences with possessive verb “has/have” and with past tense verbs “was/were” and “had”. Apparently, Predicate Logic (algebra) is not yet equipped to process linguistics.

A lot of structure words (non-keywords) have a naturally intelligent function in language. However, their naturally intelligent function is not described in any scientific paper. Apparently, scientists don't understand their naturally intelligent function in language.

Being unable, unwilling or forbidden to describe possessive logic in a natural way, a workaround is created, by adding possessive logic in an artificial way:

  • Possessive logic must be programmed directly into the reasoner, like “has_son(john,paul)”;

  • Besides that, lacking a generic solution, the same logic needs to be programmed for each and every new noun. So, separate functions must be programmed for “has_daughter”, “has_father”, “has_mother”, “has_teacher”, “has_student”, and so on;

  • Moreover, in order to enable multilingual reasoning, all existing functions in one language, need to be translated for each and every new language.

This is engineering (specific solutions to specific problems) instead of fundamental science (a generic solution). Actually, it is a bad example of engineering. So, we need to uplift the field of AI and NLP from engineering towards a fundamental science.

 

1.6.1. Controlled Natural Language

Controlled Natural Language (CNL) reasoners allow users to enter Predicate Logic in natural language-like sentences. However, Predicate Logic doesn’t go naturally beyond the present tense of basic verb “to be”. So, also CNL reasoners don’t go naturally beyond verb “is/are”.

 

As a consequence, CNL reasoners are unable to convert a sentence like “Paul is a son of John” to “John has a son, called Paul” – and vice versa – in a generic way (=through an algorithm), because the latter sentence contains verb “has”. As a workaround, this conversion needs to be programmed for each and every relationship:

  • First of all, a rule must be added: “If a man(1) is-a-son-of a man(2) then the man(2) has-a-son-called the man(1)”;

  • In order to trigger this rule, the relationship between “Paul” and “John” needs to be written with hyphens between the words: “Paul is-a-son-of John”. And the outcome will also contain hyphens: “John has-a-son-called Paul”;

  • And the above must be repeated for each and every similar noun: for “daughter”, for “father”, for “mother”, for “teacher”, for “student”, and so on.

This engineered workaround is clearly not generic, and therefore not scientific.

Besides that, while predicate logic describes both the Inclusive OR and Exclusive OR (XOR) function, CNL reasoners don't implement conjunction “or”. So, CNL reasoners are unable to generate the following question:

 

Given:

  • “Every person is a man or a woman.”

  • “Addison is a person.”

Generated question:

  • “Is Addison a man or a woman?”

As a workaround for lacking an implementation of conjunction “or”, CNL reasoners need three sentences to describe sentence “Every person is a man or a woman” in a similar way:

 

Given:

  • “Every man is a person.”;

  • “Every woman is a person.”;

  • “No woman is a man and no man is a woman.”

 

Even though their workaround sentence “No woman is a man and no man is a woman” describes an Exclusive OR (XOR) function, scientists are still unable, unwilling or forbidden to implement automatically generated questions in a generic way (=through an algorithm).
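Generating the question above can be sketched through an algorithm. This toy version – my own illustration – assumes the two input sentences exactly follow the given patterns:

```python
def generate_question(rule, fact):
    """'Every X is a A or a B.' + 'N is a X.' -> 'Is N a A or a B?'"""
    r = rule.rstrip(".").split()    # e.g. Every person is a man or a woman
    f = fact.rstrip(".").split()    # e.g. Addison is a person
    category, first, second = r[1], r[4], r[7]
    name, member_of = f[0], f[3]
    if member_of != category:
        return None                 # the fact doesn't trigger the rule
    return f"Is {name} a {first} or a {second}?"
```

The conjunction “or” in the rule marks exactly which words become the alternatives of the generated question.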

Both problems mentioned above – the inability to convert a sentence through an algorithm and the inability to generate a question through an algorithm – make clear that scientists are unable – or unwilling – to integrate reasoning (=natural intelligence) and natural language in artificial systems.

Lawyers have no problem writing down logic in legal documents, using natural language. So, why are scientists unable, unwilling or forbidden to integrate logic and natural language in artificial systems?

Legal documents are of course accurate in their description: “either ... or ...” is used to describe an Exclusive OR function, and the combination “and/or” is used to describe an Inclusive OR function. In daily life, instead of the combination “and/or”, we add “or both” to the sentence. In most other cases of conjunction “or”, we mean an Exclusive OR function.

 

So, in daily life, “Coffee or tea?” – short for “Either coffee or tea?” – describes an Exclusive OR function, while “Warm milk or a sleeping pill? Or both?” describes an Inclusive OR function.

Note: In these examples, the conjunction separates a series of words of the same word type – in these cases, a series of singular nouns. But also in imperative sentences like “Do …, or you'll have to face the consequences”, conjunction “or” implements an Exclusive OR function, because the sender gives the receiver an exclusive choice: “Either do …, or you'll have to face the consequences”.
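The two OR functions differ in exactly one case – the case in which both alternatives hold:

```python
def inclusive_or(a, b):
    # "and/or" / "... or both": true when at least one alternative holds.
    return a or b

def exclusive_or(a, b):
    # "either ... or ...": true when exactly one alternative holds.
    return a != b
```

So “Warm milk or a sleeping pill? Or both?” is modeled by `inclusive_or`, while “Coffee or tea?” is modeled by `exclusive_or`.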

 

1.6.2. The function of word types in reasoning

There is another fundamental problem when Predicate Logic is applied to linguistics: It doesn't specify word types.

For example, instead of “All humans are mortal”, it is perfectly fine in Predicate Logic to write “All blue are mortal”. But this sentence construction is grammatically invalid for any adjective. It is only valid for plural nouns.

 

In order to be applicable to natural language, Predicate Logic should describe the word type of each variable. In this case, it should define that the first variable (second word) should be a plural noun, and that the second variable (last word) should be an adjective.

 

Let's consider the following equivalence: “Every car has an engine” equals “An engine is part of every car”. I state that this equivalence holds for any singular noun. However, unaware of the function of word types in language, scientists try to prove my fundamental approach wrong by using a proper noun, like: “John has a son” equals “A son is part of every John”, which is of course nonsense.
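A word-type check like the following prevents that nonsense conversion. The tiny lexicon is my own illustration; a real system would need a full lexicon with word types:

```python
# Toy lexicon mapping words to their word types (illustrative, not complete).
LEXICON = {"car": "singular noun", "engine": "singular noun", "John": "proper noun"}

def has_to_part_of(owner, part):
    """'Every OWNER has a/an PART' -> 'A/An PART is part of every OWNER',
    valid only when OWNER is a singular noun."""
    if LEXICON.get(owner) != "singular noun":
        raise ValueError(f"'{owner}' is not a singular noun: conversion is invalid")
    article = "An" if part[0] in "aeiou" else "A"
    return f"{article} {part} is part of every {owner}"
```

With the check in place, the conversion succeeds for “car” but is rejected for the proper noun “John”.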

 

So, despite using different types in common programming languages – such as booleans, integers and strings – scientists are ignorant of the function of the different word types when it comes to reasoning in natural language.

 

The notation of the definitions in the scientific challenge I launched repairs both problems: it preserves word type information, and it supports reasoning beyond the present tense of basic verb “is/are” (see Predicate Logic). Abbreviations can be used later, in order to make the notation compact.