Boosting Human Insight by Cooperative AI: Foundations of Shannon-Neumann Logic

- We present the logical foundation of an artificial intelligence (AI) capable of dealing with complex dynamic challenges that would be very hard to handle using traditional approaches (e.g. predicate logic and deep learning). The AI is based on a cooperative questioning game, designed to boost insight. Insight gains are measured by information, probability and uncertainty (Shannon), as well as utility (von Neumann). The framework is a two-person cooperative iterated Q&A game in which both players (human, AI agent) benefit (positive-sum): the human player gains insight and the AI player learns to improve its suggestions. Generally speaking, valuable insight is gained by asking 'good' questions about the 'right' topic, at the 'appropriate' time and place: by posing insightful questions. In this study, we propose a logical and mathematical framework for the meanings of 'good', 'right' and 'appropriate', within clearly-defined classes of human intentions.

• In section 1, we discuss algorithmic vs human intelligence, and the purpose of SN-Logic.
• In section 2, we present the two-person (human H, AI agent A_SN) cooperative Iterated Questioning (IQ) game's role, from both H's and A_SN's perspectives.
• In section 2.3, we discuss the dynamic drift problem: coping with the changing human understanding of a given complex challenge, using a dynamic optimization process. In complex challenges (e.g. the war on drugs), it is impossible to clearly define a single problem, which is why they can last for decades.
• In sections 3.1-3.2, we discuss SN-Logic's requirements to cope with insight (which involves causality, information, logic, probability, uncertainty and utility), and the spaces over which SN-Logic operates.
• In sections 3.3-3.4, we introduce SN-Logic's grammar (semantics + syntax). The syntax is used by question generators to build millions of possible questions.
• In section 3.5, we present SN-Logic predicates of two classes, problem difficulty-minimizing and solution quality-maximizing, used in all inferences.
• In section 3.6, we discuss the complexity and scope of SN-Logic, and in section 3.7 we highlight the distinction between knowledge acquisition (symbolic AI) and cooperative (machine) learning, both present in our AI.
• In section 3.8, we introduce the normal form for making SN-inferences about a question's insightfulness.
• In section 4, we introduce the Insight Gain Tensor µ(when, where, what, which), used to select sound inferences from the many valid normal-form inferences, and measures of the insight gains associated to these questions.
• In section 5, we illustrate the use of SN-Logic, and we perform a validation test, to show how the SN-Logic/IQ-game helps find a solution path to a component of a hard, real-world solved case (a quantum field theory research topic).

The Iterated Questioning (IQ) game is described in paper I. During a game session, the AI agent A_SN poses the human player H a question q ∈ Q that it thinks is most insightful, given H's current cognitive mindset C(t).
H then explores it, and reports whether it was insightful. These are the game's cooperative policies, which both players agree to adopt for each Q&A episode. The game serves several purposes which benefit both players (positive-sum game) [7,9].

For the human player H, the IQ-game has the following main roles:

• The IQ-game is a Q&A process that reduces uncertainty and increases information about a specific problem, via a sequence of Q&As. It provides an effective tool to gain insight into the many aspects of a complex challenge.
• The IQ-game drives a sequential (mostly left-hemispheric) conscious reasoning for solving well-defined (narrow) tasks. This process is mirrored by algorithmic AI. For complex tasks, this process alone fails to deliver full solutions. Conceptual solutions to such problems require the next process: insight-gaining.
• The IQ-game drives a parallel (mostly right-hemispheric) non-conscious process, for gaining insights leading to an 'aha' moment. Largely non-conscious processing can be used, where the first process proves too slow or impossible (task is too broad, ill-defined and complex).
• The IQ-game is driven by dual goals: minimizing obstacles and maximizing solution qualities. The minimizing questions guide H to eliminate or reduce difficulties in the problem, when possible. The maximizing questions guide H to boost specific solution qualities, when constraints allow it. It is a dynamic optimization (changes with H's understanding). We discuss this process in section 3.4.
• The IQ-game provides a non-brittle reasoning framework, which continuously adapts to the human player H's cognitive intentions C. This mindset C evolves as H's understanding of the challenge progresses. The IQ-game copes with the framework drift problem (section 2.3).
For the AI agent A_SN, the IQ-game has these roles:

• The IQ-game produces game session episodes, from which the agent A_SN can learn via cooperative learning.
• The IQ-game ensures the agent remains human-aligned [10], because of the continuous human judgments. What is useful, informative, or insightful for a human player H does not necessarily mean the same for A_SN, even if it starts that way. In the learning process, these values can drift apart, due to many factors. In the IQ-game, human valuation is the ultimate arbiter of a question's insight value (since any AI short of a full AGI superintelligence will fail miserably at this task), while SN-Logic estimates the insight values, given C(t).
• The IQ game taps into a most valuable human resource: our collective evidence-based knowledge, undeniably our greatest accomplishment (culture, science, technology).
Note that our collective belief-based human selections are often poor (e.g. who we put in power as our leader). The forces here are complex and evolutionary: desire for control, cognitive biases and herd mentality from the fear of social isolation (e.g. [11]).
These factors are absent in the IQ procedure, since decisions are individual, and based directly on one's own experience of a question's insight, within a very specific cognitive context C(t). It uses direct evidence-based judgment, where H's main incentive is to make life easier for herself. There are, of course, individual variations in the experienced insightfulness of questions, but only stable patterns (across many individuals) are retained in cooperative learning (not presented in this paper).
A complex challenge is typically time-evolving, multi-objective, multi-solution, multi-discipline, multi-level and open-ended, making it hard from the start to clearly define a single problem, even when it is urgent (e.g. a crisis) or critical (e.g. sustainability), or both (e.g. a pandemic). Instead, there is a drift in the framing of the problem and its solutions, as we accumulate new insights about a challenge: a framework drift problem. The drift cannot be handled with a static AI/ML system, focused on a given narrow problem.
The IQ-game copes with the framework drift by using an adaptive reasoning framework, and an adaptive cognitive intention C = {framework, where, when, what} (sections 3.3-3.4), which tracks the human player H's current understanding of the conceptual framework. It follows H's evolving understanding of the challenge, helping SN-Logic suggest insightful questions within each context C. The IQ-game doesn't define a problem from the start, but instead lets H describe the challenge.

Standard Logic Programming (predicate logic) is very effective when making strict deductions, but it cannot cope with the cooperative two-person IQ-game. The purpose of SN-Logic is to provide an inference engine with the following requirements: it has to be ...
• precise: ambiguity-free semantics axioms
• consistent: a contradiction-free framework within which all SN-inferences can be made (normal-form inferencing)
• transparent: natural language, no hidden layers
• explainable: no unjustifiable moves
• human-aligned: no conflicts with human cognitive intentions
• non-brittle: able to cope with fundamental concepts related to human insight: causality (causes of insight), time-dependence (evolving understanding), information, probability, uncertainty (Shannon), utility (von Neumann), and insight (paper I). Brittleness is a common cause of AI failures.
To satisfy these requirements, we need a consistent set of SN-Logic definitions, axioms and rules, to which we now turn.
To reason using a predicate logic (such as SN-Logic), the variables x need spaces X to scope the quantification: ∀x ∈ X, ∃x ∈ X. SN-Logic's concepts are partitioned into six compact concept spaces, over which we can perform inferences (see appendices A-F). Five vector spaces, {T, S_D, S_C, S_G, S_S}, are used to describe the human player H's changing cognitive mindset C(t) during the IQ-game. The AI agent A_SN needs to know C(t), because the insightfulness of a question depends on H's increasing understanding of the challenge and its possible solutions, as insight is accumulated.
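As a minimal sketch of this scoping, the quantifiers can be restricted to finite concept spaces. The space names below follow the text, but their contents are hypothetical placeholders, not the actual appendices A-F:

```python
# Hypothetical finite contents for five of the concept spaces; the
# real spaces are defined in appendices A-F of the paper.
SPACES = {
    "T":   ["explore", "focus", "verify"],             # exploration stages (when)
    "S_D": ["problem", "model", "method"],             # loci of difficulty (where)
    "S_C": ["ambiguity", "missing_info", "overload"],  # causes of difficulty (what)
    "S_G": ["accuracy", "simplicity", "generality"],   # solution goals
    "S_S": ["assumptions", "structure", "technique"],  # solution aspects
}

def forall(space, pred):
    """Scoped universal quantifier: for all x in X, pred(x)."""
    return all(pred(x) for x in SPACES[space])

def exists(space, pred):
    """Scoped existential quantifier: there exists x in X with pred(x)."""
    return any(pred(x) for x in SPACES[space])
```

Because each space is small and compact, every quantified SN-inference reduces to a finite search.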
The (tensor product) space S_A of possible conceptual actions (operation × object) provides the raw material to build conceptual solutions. The word meanings (semantics) and the sentence structure (questions: which ≡ q ∈ Q) have to be both consistent and precise. A_SN needs a basic grammar (syntax, semantics, vocabulary) to communicate effectively with the human player H, in a consistent and precise manner. SN-Logic is based on four consistent (contradiction-free) axioms, to define its semantics precisely (ambiguity-free).
Let the human player H's cognitive mindset C(framework, p) be defined by the current reasoning framework (next section), and three (intention) parameters p:

(Sem 1) Shannon-informative questions: a question (which) q(p, action) that reduces uncertainty (Shannon entropy) for H, whose mindset is C(framework, p).
(Sem 2) Neumann-useful questions: a question (which) q(p, action) that has a human-aligned (via the two-person IQ-game) utility, within a mindset C(framework, p). It helps H make progress towards a solution.

These SN axioms of semantics allow the AI to cope with core concepts of causality (causes of insight), dynamics (changing reasoning frames), information, probability, uncertainty [6], utility [7] and insight (paper I). These are necessary components of an insight-boosting AI. The axioms Sem 1 and Sem 2 restrict the form of allowed questions. This constraint is used by a Q-generator of questions q ∈ Q, to which we now turn.
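A minimal numeric sketch of the two axioms, assuming uncertainty is modeled as Shannon entropy over H's current options and utility is a human-reported score (both are hypothetical simplifications of the paper's framework):

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def sn_admissible(prior, posterior, utility):
    """Sem 1: the question is Shannon-informative if answering it
    reduces H's uncertainty (the entropy drops).
    Sem 2: the question is Neumann-useful if its human-judged
    utility is positive (it helps H progress towards a solution)."""
    informative = entropy(posterior) < entropy(prior)  # Sem 1
    useful = utility > 0.0                             # Sem 2
    return informative and useful
```

For example, a question whose answer collapses four equally likely options into two reduces entropy from 2 bits to 1 bit, so it satisfies Sem 1.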
The cooperative IQ-game is driven by dual objectives: to minimize the problem's causes of difficulty, and to maximize the solution's quality. The optimization must continuously adapt to H's understanding of the challenge, over an IQ-game session.
The SN-grammar has a simple syntax, specified for each question class Q. All questions q ∈ Q fall into two classes Q = {Q_min, Q_max}, from two complementary (dual) perspectives: (a) causes of cognitive difficulty (to minimize), (b) qualities of the solution (to maximize). Each question class generates many specific questions, aimed at making insight gains.
The purpose of SN-Logic is to incrementally boost our insight about solutions, by suggesting when/where to pose which types of questions about what topic, while adapting to a moving target: our current understanding of the obstacles in a challenge.

The question generator, or Q-gen, of difficulty-minimizing questions uses a specific syntax for an evolving cognitive mindset C_min(frame, topic, p_1, p_2, p_3). There is a lot of freedom in which questions to pose, even at a specific place and time, within a well-defined framework. We select a set of six commonly useful problem-solving questions, to illustrate the procedure.
Q-Gen Syntax: difficulty-minimizing questions q(p, action) ∈ Q_min

q_min1: at what exploration stage are we in now? (specifies when = p_1 ∈ T)
q_min2: what reasoning frame are we operating in, now? (specifies [frame])
q_min3: what topic in [frame] are we focusing on, now? (specifies [topic])
q_min4: where does the main difficulty reside? (specifies where = p_2 ∈ S_D)
q_min5: what, more specifically, causes this difficulty? (specifies what = p_3 ∈ S_C)
q_min6: can you reduce the difficulty (where) and avoid its causes (what), by using these actions? (specifies action ∈ S_A and which = q_min6 ∈ Q_min)

The [frame] variable labels the reasoning framework currently being used (e.g. a discipline, a subject, a specialty, a model, a system, a theory, a technology, etc.). This framework can change from one exploration stage to the next. It is a moving target, which mirrors our current understanding of a complex challenge.
The [topic] variable, labels a set of items we're focusing on, within [frame] (e.g. agents, assumptions, bounds, properties, qualities, relations, statements, strategies, tactics, techniques etc.). Typically, [topic] is a tool we use within [frame], to make progress. For a concrete example, see section 5.
Questions q ∈ Q min are SN-insightful, only if they are SN-informative (axiom Sem 1): they attempt to reduce a maximum possible amount of uncertainty (alternatives, ignorance, options, possibilities), within the context C min .
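The six templates above can be instantiated mechanically from a mindset C_min. A sketch (the string templates paraphrase the syntax above; the tuple encoding of questions is an assumption):

```python
def q_gen_min(frame, topic, p1, p2, p3):
    """Generate the six difficulty-minimizing questions q in Q_min
    for a mindset C_min(frame, topic, p1, p2, p3)."""
    return [
        ("q_min1", f"At what exploration stage are we in now? (when = {p1})"),
        ("q_min2", f"What reasoning frame are we operating in, now? (frame = {frame})"),
        ("q_min3", f"What topic in {frame} are we focusing on, now? (topic = {topic})"),
        ("q_min4", f"Where does the main difficulty reside? (where = {p2})"),
        ("q_min5", f"What, more specifically, causes this difficulty? (what = {p3})"),
        ("q_min6", f"Can you reduce the difficulty ({p2}) and avoid its causes ({p3})?"),
    ]
```

Each call yields a concrete question set for the current mindset; as C_min drifts, regenerating the set keeps the questions aligned with H's evolving understanding.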
The generator of quality-maximizing questions uses a specific syntax for an evolving cognitive mindset C_max(frame, topic, p_1, p_2, p_3); for example:

q_max5: what solution aspect do you want to focus on? (specifies what = p_3 ∈ S_S)
q_max6: can you boost your goal (where) and the solution's quality (what), by using these actions? (specifies action ∈ S_A and which = q_max6 ∈ Q_max)

Questions in Q_max are SN-insightful only if they are SN-informative (axiom Sem 1): they attempt to reduce a maximum amount of uncertainty (alternatives, ignorance, options, possibilities) within the context C_max. They are specificity-boosting questions which reduce uncertainty (Shannon entropy) to increase the solution's quality.
The SN concept of insight involves notions in information, logic, probability, uncertainty and utility (see paper I). To cope with these, we need a logic with quantifiers for scoping the variables x to specific spaces X. In standard predicate logic, a predicate is a function p which maps a variable x ∈ X into the truth values {T, F} [12].
In SN-Logic, an SN-predicate is a function q which maps a variable x ∈ X into the insight values {insightful I+, insightless I0}.
In SN-Logic, we define the two classes (minimizing, maximizing) of predicates q(x), the mindset parameter p ∈ P ≡ {when, where, what}, and the predicate variable 'cognitive action':

• SN-predicate questions q(p, action) ∈ Q_min, where p ∈ P, action ∈ S_A
• SN-predicate questions q(p, action) ∈ Q_max, where p ∈ P, action ∈ S_A

The parameter p ∈ P is in the space P of cognitive mindsets C_min(framework, p): the set of H's intentions during the IQ-game. The AI needs to know this intent, to make useful cooperative suggestions. The mindset parameter p encodes the type of insight H wants to boost at any given time. Thus, the number of distinct classes of challenges SN-Logic can cope with is effectively infinite (N = 10^7!), yet based on a few small, compact concept spaces (cardinality ≈ 10^2). In this sense, SN-Logic is economical (Occam's razor).
The computed complexity of SN-Logic is a theoretical upper bound, used to determine the scope of SN-Logic. In practice, the computational cost will be much lower, due to universal constraints (common to all challenge classes), because they are imposed by (mostly) challenge-independent forces:

• causality: universal root causes of cognitive difficulties (e.g. confusion due to ambiguity, indecision due to missing information) and of solution quality (e.g. accuracy, adaptability)
• logic: valid inferences with sound semantics
• planning: logically necessary chronology of solution steps
• problem-solving: universal tactics to minimize obstacles (to avoid/reduce) and maximize solution quality (to target/increase/maximize) (e.g. divide-and-conquer, minimize ambiguity, maximize order, simplify)
• information: a question is only informative if it reduces uncertainty by eliminating alternatives, options, outcomes, possibilities, within a cognitive mindset (intention) C, restricting the insightful questions to a manageable subset: q ∈ Q*(C) ⊂ Q, with Card(Q*(C)) ≪ Card(Q)
• utility: a question is only useful if it helps H overcome obstacles, given a cognitive intention C, restricting the insightful questions to a manageable subset: q ∈ Q*(C) ⊂ Q, with Card(Q*(C)) ≪ Card(Q)

These rules impose a lot of structure on the SN-agent's insight-gain tensor µ(frame, topic, when, where, what, which), which is, in its fully general form, a high-dimensional rank-6 tensor, but is in practice very sparse and decomposable into simpler tensors and convolution kernels.
The structure imposed by the universal (challenge class-independent) constraints, is sufficient to construct factored ('vanilla') tensors µ * of much lower dimensions and lower rank: knowledge acquisition. A 'flavor' is then learned to fine-tune the tensors to each class of challenge, via cooperative learning (not described in this paper). Given the complexity upper-bounds of SN-Logic, the fine-tuning possibilities are vast.
A SN 's fundamental problem, is to use the IQ-game, to guide a human player H, in when and where, to pose which types of questions about what topic, to gain a maximum amount of insight into a complex challenge.
A standard normal-form inferencing (analogous to conjunctive and disjunctive normal forms in digital and predicate logic) is necessary for the AI to cope with the computational complexity of SN-Logic. The AI can efficiently search for predicate variables action ∈ S_A, used as building blocks for conceptual solutions. Given an evolving inferencing framework (frame, topic), the SN normal forms are the following:

SN normal form for minimizing inferences. Given a minimizing mindset C_min(frame, topic, p), where p ∈ P = {when, where, what}:
if ∃ action ∈ S_A such that µ_min(frame, topic, p, action) > µ_crit,
then q(p, action) ∈ Q*_min(C_min) ⊂ Q_min, and q(p, action) is SN-insightful within C_min.

SN normal form for maximizing inferences. Given a maximizing mindset C_max(frame, topic, p), where p ∈ P = {when, where, what}:
if ∃ action ∈ S_A such that µ_max(frame, topic, p, action) > µ_crit,
then q(p, action) ∈ Q*_max(C_max) ⊂ Q_max, and q(p, action) is SN-insightful within C_max.

The sets Q*(C) are maximum-insight subsets of Q_min or Q_max, and µ(frame, topic, p, action) is an insight-gain tensor (discussed shortly) whose insight gains are above a minimum critical cutoff µ_crit. The purpose of an insight-gain cutoff scale is intuitive, but its mathematical justification is outside the scope of this paper, which focuses only on logical validity and ignores scientific soundness. The cutoff is related to a scale invariance due to a conformal symmetry, under the renormalization of probabilities (unitarity). Scale separation is used in quantum field theories [13], but justified by the conformal symmetry [14] of a renormalization group [15].
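A minimal sketch of the normal-form test, with the insight-gain tensor slice represented as a hypothetical dictionary (frame, topic, p, action) → gain; the same rule serves both the µ_min and µ_max slices:

```python
def sn_normal_form(mu, frame, topic, p, actions, mu_crit):
    """SN normal form: q(p, action) is SN-insightful within the
    current mindset iff mu(frame, topic, p, action) > mu_crit.
    Returns the restricted subset Q*(C) as (p, action) pairs.
    `mu` is a hypothetical dict-backed tensor slice."""
    return [(p, a) for a in actions
            if mu.get((frame, topic, p, a), 0.0) > mu_crit]
```

The restricted subset is empty when no action clears the cutoff, i.e. no SN-insightful question exists within the current mindset.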
To perform successful inferences autonomously, the AI agent needs to possess the means of deciding whether a predicate variable action ∈ S_A leads to insight gains above a minimum lower bound (that is, action ∈ S*_A(C) ⊂ S_A). The insight-gain tensor provides the SN-agent the ability to select sound inferences, from a vast number of merely valid ones (that is, of SN normal form).
The AI performs SN normal-form inferences, to suggest insightful questions to explore, given human-targeted insight gains C(p). These 'most insightful' questions lie in a restricted subspace Q*(C) = {Q*_min(C_min), Q*_max(C_max)}, within a large space Q of possible questions (Card(Q) = 10^7). Given a current mindset C(p), A_SN must find a subspace of questions Q*(C). This is where an insight-gain measure µ(p, action) (convolution tensors and their kernels, used to restrict searches to optimal sub-spaces) is essential to make sound inferences (real-world accurate), rather than merely valid ones (SN normal-form inferences). This will be presented elsewhere. For now, we simply discuss general constraints imposed by SN-Logic, on the tensor elements.

The AI's capacity to generate SN-insightful I+ questions, from a vast number of insightless I0 ones (with actions ∈ S_A), resides in the structure of a high-dimensional insight-gain tensor µ(when, where, what, which) ≡ µ(p, action), for each challenge class and reasoning frame. So the full rank-7 tensor is actually µ(class, frame, topic, p_1, p_2, p_3, action), where Cl = set of challenge classes and Fr = set of reasoning frameworks (frame + topic). This function outputs the value g of the insight gain associated to exploring a question which ≡ q(p, action) ∈ Q, where p ∈ P encodes H's targeted insight gains. To be useful, the tensor µ is required to satisfy the following properties:

• it is a measure of insight gain µ(class, frame, topic, p, action) = g ∈ [0, 1] (normalized)
• the probabilities of all possible actions within a mindset p must sum to one (unitarity)
• µ_crit ∈ ]0, 1[ (minimum critical insight-gain value µ > µ_crit)
• g = 0 when q(p, action) is SN-insightless I0, given the mindset p
• g = 1 when q(p, action) is maximally SN-insightful I+, given the mindset p
• µ is initialized by satisfying heuristics from causality, information, logic, planning, problem-solving and utility. These constraints provide the initial (challenge class-independent) approximation for µ
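The listed properties can be checked mechanically. A sketch over a hypothetical dict-backed tensor slice, treating the per-mindset gains as the probabilities that must renormalize to one (a simplifying assumption):

```python
def check_mu(mu, mindsets, actions, mu_crit):
    """Verify the required properties of an insight-gain tensor slice
    mu[(p, action)] = g (a hypothetical dictionary encoding)."""
    assert 0.0 < mu_crit < 1.0                      # mu_crit in ]0, 1[
    for p in mindsets:
        gains = [mu.get((p, a), 0.0) for a in actions]
        assert all(0.0 <= g <= 1.0 for g in gains)  # normalized measure
        assert abs(sum(gains) - 1.0) < 1e-9         # unitarity per mindset p
    return True
```

Such checks would run whenever the tensor is initialized from heuristics or updated by learning, so that renormalization (unitarity) is never silently violated.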

We can now illustrate how SN-Logic is used, on a real challenge. In the IQ-game, both players (human H, AI agent A_SN) agree to use simple cooperative strategies, given H's current mindset C:

(1) A_SN suggests its guess at a most insightful question (q ∈ Q*(C))
(2) H reports the questions q she actually finds insightful

The game's Q&A session cycles over each obstacle encountered within a challenge. Hundreds of such sub-problems may be encountered to solve a challenge. Usually, in real-world challenges, the number and nature of these obstacles is unknown ahead of time.
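The two strategies can be wired into a simple session loop. In this sketch, the suggest/explore/update callables are hypothetical hooks standing in for A_SN's inference, H's judgment, and the mindset update:

```python
def iq_session(suggest, explore, update, mindset, max_episodes=10):
    """One IQ-game session: A_SN suggests a question q it estimates
    most insightful given the mindset C(t); H explores it and reports
    her judgment; C(t) is then updated. The recorded episodes are the
    raw material for cooperative learning."""
    episodes = []
    for _ in range(max_episodes):
        q = suggest(mindset)      # strategy (1): A_SN's guess at q in Q*(C)
        if q is None:
            break                 # no question clears the insight cutoff
        verdict = explore(q)      # strategy (2): H reports if q was insightful
        episodes.append((q, verdict))
        mindset = update(mindset, q, verdict)
    return episodes
```

The positive-sum structure is visible here: H gains insight from each explored question, while A_SN accumulates the (q, verdict) episodes it needs to improve its suggestions.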
For clarity, we use a single, static, not so complex, yet most difficult challenge. The scenario is: a young post-doctoral researcher, H, is trying to find a good quantum field theory topic, to spend her next ten years on. The first few moves (Q&As) of the two-person IQ-game could proceed as follows:

1. Q from A_SN: Which exploration stage are we in, now? (AI is using q_min1 in the Q-generator)
A by H: I want to improve on standard quantum field theory; it's a discover class of challenge ([class] = discover).
2. Q from A_SN: What is our current reasoning framework? (AI is using q_min2 in the Q-generator) The framework is composed of a topic and a frame.

The topic can be any useful tool we select for overcoming the obstacle (select the closest match):

• actions (e.g. activities or behaviors)
• agents (e.g. catalysts or inhibitors)
• limits (e.g. lower, upper, extremes)
• computations (e.g. algorithms)
• equations (e.g. model or representation)
• laws (e.g. laws of quantum physics)
• procedures (e.g. protocols or decisions)
• processes (e.g. interactions or communications)
• properties (e.g. pattern or symmetry)
• qualities (e.g. strengths or weaknesses)
• relationships (e.g. hierarchy or priorities)
• restrictions (e.g. constraints or conditions)
• rules (e.g. allowed or forbidden)
• statements (e.g. assumptions, conditions or theorems)
• states (e.g. equilibrium or criticality)
• strategies (e.g. divide-and-conquer)
• structures (e.g. classes, partitions, sets)
• tactics (e.g. explore special cases)
• techniques (e.g. calculation or construction)
• ...

This scenario shows how suggested questions from A_SN can replicate real-world solutions to obstacles, via a cooperative Q&A dialog. Researchers do something similar among themselves, early on, to decide what to work on. But the AI's complementary strength is to cover many exploration paths, which are very often overlooked, yet may be key to quality solutions. This dynamic human-AI interaction would be even more fruitful in a group brainstorming session, where each member of the team can select directions to explore and possible answers.
We mentioned (section 3.7) that insight-gain convolution tensors and kernels form the bridge between SN normal-form inferencing (SN-validity) and measures of insight (SN-soundness): the bridge between logic (validity) and science (soundness). Initially, the tensors µ are the AI's 'vanilla' core; then learned flavors are added to it via machine learning, to optimize the core AI for distinct challenge classes.
The AI's core will be initialized by heuristics from causality, information, logic, planning, problem-solving, and utility. These apply to all types of challenges. The tensors' added flavor needs to be learned using cooperative learning via a renormalization procedure, from the IQ-game's episodes. The construction of the insight-gain tensors and cooperative learning will be described in future work.

We presented the foundations of SN-Logic, designed to boost human insight, to help overcome challenges that are hard to deal with, using traditional AI (mainly, predicate logic and deep learning neural nets). This required a logic, capable of coping with the concepts necessary to measure insight-gains: causality (causes of insight gains), dynamics (adaptive reasoning frameworks), information, probability, uncertainty (Shannon) and utility (von Neumann).
In this paper, we presented the following:

• The two-person (H, A_SN) cooperative IQ-game's role, from both H's and A_SN's perspectives
• The frame drift problem: coping with the changing understanding of a challenge, using a (non-brittle) logic and optimization process, which continuously adapt to the current human understanding and intention
• SN-Logic's requirements to compute insightfulness (which involves causality, information, logic, probability, uncertainty and utility), and the concept spaces over which SN-Logic operates (to scope the quantifiers)
• SN-Logic's grammar: semantics + syntax for posing questions q ∈ Q, from a vast space of potential questions. The syntax is used by a dual question generator (q ∈ Q_min, q ∈ Q_max), from which all questions are built (N_ques = O(10^7))
• SN-Logic predicates of two question classes, problem difficulty-minimizing and solution quality-maximizing, used in all inferences

This paper focused solely on the logic and validity of SN-inferences. It has not dealt with the equally important issue of scientific soundness and accuracy. We will present the construction of the insight-gain convolution tensors and kernels, and the learned structure (cooperative learning), in future papers.