Introduction
Science relies on the assumption that we live in an ordered universe that is subject to exact, deterministic, and consistent laws of nature. Hence, everything in nature is bound by, and proceeds according to, natural laws.
Natural laws, logic and natural phenomena are investigated using fundamental science:
- Natural reasoning requires both natural intelligence and natural language;
- Intelligence and language are natural phenomena;
- Natural phenomena obey laws of nature;
- Laws of nature and logic are investigated using fundamental science.
However, the field of Artificial Intelligence (AI) and Natural Language Processing (NLP) — in a broad sense — is investigated using cognitive science. As such, the field of AI and NLP is limited to mimicking behavior, while mimicking a hen's behavior will not produce a single egg. As a consequence, the field of AI / NLP has fundamental problems. A few simple ones are described in the problem descriptions in this document.
Problem description 1: Reasoning in the past tense
Aristotle described the first known natural reasoning construct almost 2,400 years ago:
- Given: “All philosophers are mortal.”
- Given: “Socrates is a philosopher.”
- Logical conclusion: “Socrates is mortal.”
However, at the time Aristotle described the natural reasoning example above, Socrates was already dead, which is the ultimate proof of his mortality. So Aristotle should actually have used the past tense form in his example regarding Socrates:
- Given: “All philosophers are mortal.”
- Given: “Socrates was a philosopher.”
- Logical conclusion: “Socrates was mortal.”
The tense of a verb tells us about the state of the involved statement:
- “Socrates is a philosopher” tells us that Socrates is still alive;
- “Socrates was a philosopher” tells us that Socrates is no longer among the living.
In regard to the conclusion:
- “Socrates is mortal” tells us that the death of Socrates is inevitable, but that his mortality is not yet proven by hard evidence;
- “Socrates was mortal” tells us that his mortality is proven by hard evidence.
In Block 5: Past tense reasoning, a natural reasoning solution is proposed.
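The tense-propagation idea above can be sketched in a few lines of code. This is a minimal illustration under my own assumptions, not the actual algorithm of the author's reasoner: a rule links a category to a property, and the tense of the given fact is carried over into the generated conclusion.

```python
# Toy sketch of tense-aware syllogistic reasoning (illustrative only).
# A rule pairs a category with a property; the conclusion reuses the
# tense of the fact that matched the rule.

def conclude(rule, fact):
    """rule: (category, property), e.g. ("a philosopher", "mortal");
    fact: (name, tense, category), tense is "present" or "past"."""
    name, tense, category = fact
    if category == rule[0]:
        verb = "is" if tense == "present" else "was"
        return f"{name} {verb} {rule[1]}."
    return None

rule = ("a philosopher", "mortal")
print(conclude(rule, ("Socrates", "present", "a philosopher")))  # Socrates is mortal.
print(conclude(rule, ("Socrates", "past", "a philosopher")))     # Socrates was mortal.
```

The point of the sketch is only that the tense must be part of the stored fact; a reasoner that discards it cannot distinguish the two conclusions.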
Problem description 2: Possessive reasoning (specifications)
The field of electromagnetism is a science because it closes the loop:
- We can convert motion to electromagnetism, and convert electromagnetism back to motion;
- We can convert light to electromagnetism, and electromagnetism back to light;
- We can convert magnetism to electricity, and electricity back to magnetism.
Natural reasoning also closes the loop — for natural language and natural intelligence — without any human interaction or engineered techniques:
- From readable sentences,
- through natural logic (natural intelligence),
- with the result expressed in readable — word by word constructed — sentences again.
In primary school, we all learned a similar kind of sum:
- Given: “John has 3 apples.”
- Given: “Peter has 4 apples.”
- Logical conclusion: “Together, John and Peter have 7 apples.”
The school teacher then wrote:
- 3 apples + 4 apples = 7 apples
However, the result of the sum — “7 apples” — lacks a reference to “John and Peter”. So, the result of this sum is insufficient to generate a readable sentence:
- “Together, John and Peter have 7 apples.”
Hopefully, mathematicians can come to the rescue. They would write:
- J = 3
- P = 4
- J + P = 7
Unfortunately, the mathematical result “J + P = 7” lacks a reference to “apples”. So, the result of this algebra is also insufficient to generate a readable sentence:
- “Together, John and Peter have 7 apples.”
When such problems occur in the field of AI / NLP, human influence or an engineered solution is applied rather than a generic solution, which makes AI / NLP a field of engineering rather than a science.
In Block 3: Grouping of knowledge (specifications), a natural reasoning solution is proposed.
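The loss of information in the sum can be made concrete with a small sketch. This is my own illustrative representation, not the author's actual data structure: each fact keeps the owner, the quantity, and the noun, so the combined conclusion can still be rendered as a full readable sentence.

```python
# Illustrative sketch: store (owner, quantity, noun) instead of a bare
# number, so neither the owners ("John and Peter") nor the noun
# ("apples") is lost when the quantities are summed.

def combine(facts):
    """facts: list of (owner, quantity, noun) with a shared noun."""
    noun = facts[0][2]
    assert all(fact[2] == noun for fact in facts)  # only sum like nouns
    owners = " and ".join(fact[0] for fact in facts)
    total = sum(fact[1] for fact in facts)
    return f"Together, {owners} have {total} {noun}."

facts = [("John", 3, "apples"), ("Peter", 4, "apples")]
print(combine(facts))  # Together, John and Peter have 7 apples.
```

Both “3 apples + 4 apples = 7” (which drops the owners) and “J + P = 7” (which drops the noun) discard one of the two pieces of information the sentence needs; keeping all three fields avoids both losses.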
Problem description 3: Possessive reasoning (relations)
Possessive reasoning — reasoning with the possessive verb “have” — is not naturally supported by logic / algebra:
- Given: “Paul is a son of John.”
- Logical conclusion: “John has a son, called Paul.”
Nor the other way around:
- Given: “John has a son, called Paul.”
- Logical conclusion: “Paul is a son of John.”

In Block 4: Grouping of knowledge (relations), a natural reasoning solution is proposed.
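One way to see why the two sentences are interconvertible is to treat both as renderings of a single relation triple. The sketch below is my own illustration under that assumption, not the author's implementation: a parser extracts (child, role, parent) from the “is a … of” form, and two renderers produce either readable sentence from the same triple.

```python
# Illustrative sketch: "X is a son of Y" and "Y has a son, called X"
# as two surface forms of one stored relation (child, role, parent).

def parse_is_a_of(sentence):
    # "Paul is a son of John." -> ("Paul", "son", "John")
    words = sentence.rstrip(".").split()
    return words[0], words[3], words[5]

def render_has(triple):
    child, role, parent = triple
    return f"{parent} has a {role}, called {child}."

def render_is_of(triple):
    child, role, parent = triple
    return f"{child} is a {role} of {parent}."

triple = parse_is_a_of("Paul is a son of John.")
print(render_has(triple))    # John has a son, called Paul.
print(render_is_of(triple))  # Paul is a son of John.
```

Because both directions read from and write to the same triple, each given sentence yields the other as a conclusion.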
Problem description 4: Generation of questions
Algebra describes the Exclusive OR (XOR) function, but Controlled Natural Language (CNL) reasoners do not implement its linguistic equivalent: the conjunction “or”. CNL reasoners are therefore unable to generate the following question:
- Given: “Every person is a man or a woman.”
- Given: “Addison is a person.”
- Logical question: “Is Addison a man or a woman?”
In Block 6: Detection of a conflict — and generation of a question, a natural reasoning solution is proposed.
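The question-generation step can be sketched as follows. This is a minimal illustration of the idea under my own assumptions, not the author's algorithm: a rule states that every member of a class is exactly one of several alternatives, and when a new instance's alternative is still unknown, the reasoner emits a question built from those alternatives.

```python
# Illustrative sketch: generating a question from a linguistic-XOR rule
# ("Every person is a man or a woman") for an instance whose
# alternative is not yet known.

rule = {"class": "person", "alternatives": ["a man", "a woman"]}
known = {}  # e.g. {"Addison": "a man"} once the answer is given

def maybe_ask(name, cls):
    """Return a question if the rule applies and the answer is unknown."""
    if cls == rule["class"] and name not in known:
        options = " or ".join(rule["alternatives"])
        return f"Is {name} {options}?"
    return None

print(maybe_ask("Addison", "person"))  # Is Addison a man or a woman?
```

Once the answer is stored in `known`, the same call returns `None`, so the question is asked only while the exclusive choice is genuinely open.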
Challenge
It may seem that Large Language Models (LLMs) can solve the aforementioned reasoning problems, from natural language — through natural logic (natural intelligence) — with the result expressed in natural language again. However, LLMs only have a limited, engineered reasoning capability. When reasoning problems are combined, LLMs start to lose context.
Therefore, I defy anyone to beat the simplest results of my reasoner in a generic way:
- From readable sentences (with a restricted grammar, Controlled Natural Language),
- through natural logic (natural intelligence),
- with the results expressed in readable, autonomously — word by word — constructed sentences,
- in multiple languages (*),
- without programmed or trained knowledge,
- without human-written output sentences,
- without extensive word lists,
- published as open source software, just as my software is published as open source.
(*) Logic is (almost) language-independent. My natural reasoner therefore implements an (almost) language-independent logic, which is configured for five languages: English, Spanish, French, Dutch and Chinese.
The rules of this challenge
- Below are 9 blocks. In the first 7 blocks, I describe the very simplest natural reasoning constructs of my system. Your implementation should deliver the results of at least one of these blocks. In the last 2 blocks, I only show the results of my reasoning system;
- Your implementation should not contain any knowledge after startup. Instead, the system should derive the knowledge from the input sentences of the mentioned examples: from readable sentences, via a generic algorithm, back to readable sentences;
- Preferably, the nouns and proper names used should not be known in advance. I use grammar definitions and an algorithm instead of a word list;
- Your implementation should be set up as generically as possible, so that all examples of this challenge can be integrated into a single system;
- The screenshots of my reasoning system show that various natural reasoning constructs reinforce each other. At the end of each of the first 7 blocks, a screenshot has been added to show how my system processes the mentioned examples;
- Your implementation should be published as open source software, so that the functionality is clear, just as my software is published as open source software;
- If your results are slightly different, you should explain why your system reacts differently;
- This is an ongoing challenge, which runs until all mentioned blocks have been implemented by others;
- I will be the jury of your implementation.
A small reward:
I offer a small reward for each block implemented by others: €1,000 per block for the first 7 blocks, and €1,500 per block for the last two, so €10,000 in total.
You can contact me via LinkedIn and this website.