HISTORY: Jan 13, 2026 – Jan 14, 2026 (09:48h)
REFERENCE: This page is a continuation of the preceding experiments
Author: Gerd Doeben-Henisch
In discussion with: ChatGPT-5.2 and Claude opus 4.5
Protocol used: INSERT No. 3 – New Protocol for the Asymmetrical Human–AI Symbiosis
Contact: info@emerging-life.org
WHAT THIS IS ABOUT
Translation: ChatGPT-5.2 | Review: Claude opus 4.5
The preceding Experiments Nos. 1–17 were created in dialogue with a generative AI (so far ChatGPT-5.2). The dialogical experiences with the generative AI (almost three years!) were accompanied by various analyses of how a generative AI actually works and which mechanisms are effective during the interaction between a human and a generative AI. After Experiment No. 17, the team was expanded to three members: the human Gerd and the two generative AIs ChatGPT-5.2 and Claude opus 4.5. For this purpose, the collaboration protocol was revised.
For the human Gerd, this led to a situation in which, on the basis of an analysis of the interaction process between a human and a generative AI, he arrived at an assessment of the roles of humans and generative AI that deliberately, and for good reasons, deviates from the current “mainstream.” This analysis was necessary because, in parallel to the blog emerging-life.org, a real experiment in a real municipality in Germany is being prepared. For this, the roles of humans and generative AI had to be worked out more clearly and in greater detail than has been usual so far.
Readers who believe they can comment on this analysis are most welcome to do so, whether with comments, critical questions, or alternative perspectives.
EXPERIMENT No. 18: Can Generative AI Help Humans?
Translation: Claude opus 4.5 | Review: ChatGPT-5.2
Phase A: Human Gerd
Introduction
- We consider the possible role of AI, here specifically AI in the form of a ‘generative AI (gAI)’—a historically new manifestation of AI.
- Examples of gAIs are ChatGPT from OpenAI and Claude from Anthropic. Both are language-based systems that are publicly accessible.
- From the very beginning (November 2022), this new AI technology has exerted a great fascination on people because these AIs appear so ‘human-like’: You input language and receive linguistic responses that seem very human.
- Under the influence of this fascination, many are asking whether this new technology is finally the partner that the technological dreams of the past have woven into our visions of the future in so many ways.
- Against this background, the question seems obvious: Is a genuine human-gAI symbiosis possible?
- To answer this question, the current situation must be analyzed more closely—specifically, the nature of the real interaction between humans and generative AI.
What Characteristic Properties Does the Initial Situation Exhibit?
The following analysis is based on a ‘minimal structure’ consisting of a single human and a single generative AI. In reality, however, there can be multiple humans and generative AIs interacting with each other.
- For a given point in time t, one must assume some kind of ‘situation S(t)’ in which at least one ‘human (H)’ and one ‘generative AI (gAI)’ are present. Together, they form a ‘structure’.
- The current situation S(t) is part of a larger reality that ultimately encompasses the entire planet and its surroundings. This reality is known to change its properties at different speeds and with varying spatial reach. One factor of change is humans themselves, along with all other life on the planet, of which humans are only a part.
- The human can at any time generate a ‘linguistic utterance’ that can be entered into the generative AI as ‘input Inp(t)’.
- Following a linguistic input Inp(t) from the human, the generative AI can produce a ‘linguistic response Out(t)’ that the human can see (or hear).
- When generating a linguistic input, the human typically follows a certain ‘intention Int(t)’, which, with the help of the ‘human’s total knowledge WH_total(t)’, leads to a linguistic expression. At any given moment, only a small portion of this knowledge is actually activated. The term ‘intention’ stands for a variety of dynamic factors within the human that participate in generating a linguistic expression.
- One must also consider the ‘contextual knowledge’ of the human, WH_C(t,t’), which is fed by recent events within the time span (t,t’). This current knowledge is part of the total knowledge and is fragmentary, piecemeal.
- On the side of the generative AI, there is also a kind of ‘total knowledge WgAI_total_Dat’; it was computed up to the date ‘Dat’, is ‘static’ from that date onward, and is fully available for the current process of ‘generating a response’, although only a portion of it is actually used for any given response.
- For the response generation process of the generative AI, there is no factor available that corresponds to the dynamic factor of human intention. However, the generative AI also has a ‘dynamic context WgAI_C(t,t’)’ for generating the response, which stimulates the response generation. This dynamic context arises by accumulating all inputs (Inp) from the human, followed by the responses of the generative AI (Out); in the simple case: WgAI_C(t,t’) = Inp_1, Out_1, Inp_2, Out_2, … The larger this context becomes, the more ‘reference points’ the response generation process ‘GenOut_gAI: WgAI_C(t,t’) × WgAI_total_Dat → Out(t)’ has (see the code sketch after this list).
- The response generation process within the generative AI produces a kind of mapping between the content of the previous context WgAI_C(t,t’) and the available linguistic expressions in the total knowledge WgAI_total_Dat. For this, statistical information from the static total knowledge WgAI_total_Dat is used, which has ‘frozen’ the ‘usual’ expression contexts of humans. These frozen expressions are ‘reactivated’ and can be ‘varied’ in the process. In this way, modified expressions from the total knowledge can be made available as output Out.
- The inputs Inp from the human and the outputs Out from the generative AI lead, as a dialogue progresses, to a situation where the human’s current context WH_C() as well as the total knowledge WH_total() can be dynamically enriched. In the case of the generative AI, only the context WgAI_C() is enriched; the static total knowledge WgAI_total_Dat remains unchanged.
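A minimal Python sketch may make this asymmetry concrete. It is purely illustrative: the class names, fields, and the placeholder generate function are assumptions of this sketch, not a description of how a real gAI is implemented internally.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Human:
    # WH_total(t): dynamic total knowledge, changes as the dialogue progresses
    total_knowledge: set = field(default_factory=set)
    # WH_C(t,t'): fragmentary contextual knowledge from recent events
    context: List[str] = field(default_factory=list)

@dataclass
class GenerativeAI:
    # WgAI_total_Dat: computed up to date 'Dat', static from then on
    total_knowledge_dat: frozenset = frozenset()
    # WgAI_C(t,t'): dynamic context, accumulated as Inp_1, Out_1, Inp_2, Out_2, ...
    context: List[str] = field(default_factory=list)

    def generate(self, inp: str) -> str:
        """GenOut_gAI: WgAI_C(t,t') x WgAI_total_Dat -> Out(t) (placeholder)."""
        self.context.append(inp)   # the context grows with each input ...
        out = f"Out conditioned on {len(self.context)} context items"
        self.context.append(out)   # ... and with each output
        return out

# One dialogue step: both contexts are enriched, and WH_total(t) can grow,
# but the static WgAI_total_Dat never changes.
human, gai = Human(), GenerativeAI()
inp = "Inp(t): a linguistic utterance"
human.context.append(inp)
human.context.append(gai.generate(inp))
human.total_knowledge.update(human.context)
```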
What Help Does the Human Need?
After an initial clarification of the interaction situation between human and generative AI, the question shall now be addressed:
Can generative AI really help?
This immediately raises the question of the ‘reference point’ for possible help: Where and in what form does the human need ‘help’?
If one looks at the variety of everyday situations, one can quickly get lost in all sorts of special situations.
The question arises whether there is a kind of ‘fundamental task situation’ for humans that ‘overlays’ the many conceivable ‘special everyday situations’.
Example: A family plans a trip to X. The moment this ‘goal’—’trip to X’—is formulated, many concrete detail tasks arise that must all be clarified. Here, the shared goal ‘trip to X’ overlays many other detail tasks.
Can this example be generalized?
Sketch No. 1
A ‘generalization’ of the task situation for humans could be stated as follows:
A group of humans H is, at a given point in time t, in a certain situation S(t), which is part of the overall world situation S_world(t).
They try to find out which goals Z might be important for their future (beyond the point in time t).
Suppose the group H finds at least one goal Z that they want to implement for themselves ‘until further notice’ (for example, a ‘trip to X’).
Now the group H must find out the following:
With which measures (change actions V) the group can gradually ‘change’ the current situation S(t) so that, within an acceptable time span (t,t’), the resulting situation S(t’) contains all the properties of goal Z, i.e., Z is ‘fulfilled’ by S(t’) (for example: all necessary preparations for the trip and its execution).
Which ‘resources (RES)’ are available to the group for changing the current situation, and which resources (Res) are consumed with each change? (For example: How much money is really available and how much money is needed).
Should the group H come to the conclusion that suitable change measures (V) have been found and that the resource requirements can be met, they can begin their ‘change project toward the future’ (for example: the travel preparations begin and the trip commences).
During the change process, it can be helpful to repeatedly verify whether the ‘real execution’ matches the ‘theoretical plan’. The following ‘checks’ are useful (a code sketch follows after them):
Proximity to the goal: After each change measure, one can ask how many properties of the goal have already been ‘fulfilled’ in the current situation.
Availability of resources: After each change measure, one can also ask how many resources Res have been consumed so far and how these relate to the total available resources RES. If the current resource consumption Res exceeds the total available resources RES, the project may be directly endangered.
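These two checks can be made concrete in a minimal code sketch, assuming, purely for illustration, that a goal Z and a situation S can be represented as sets of properties; all names are hypothetical:

```python
def goal_proximity(situation: set, goal: set) -> float:
    """Fraction of the goal properties Z already 'fulfilled' in S(t')."""
    if not goal:
        return 1.0
    return len(goal & situation) / len(goal)

def resources_ok(consumed_Res: float, available_RES: float) -> bool:
    """Res vs. RES: the project is endangered once consumption exceeds RES."""
    return consumed_Res <= available_RES

# Example 'trip to X': two of three goal properties are fulfilled so far.
goal_Z = {"tickets booked", "accommodation booked", "luggage packed"}
situation_S = {"tickets booked", "luggage packed"}
print(goal_proximity(situation_S, goal_Z))                 # 0.66... -> Z not yet fulfilled
print(resources_ok(consumed_Res=800, available_RES=1000))  # True -> within budget
```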
Limits of Sketch No. 1
Sketch No. 1 describes a complete process model by means of which a human, with the support of a generative AI and with reference to at least one goal, attempts to plan and implement their future.
In general, there can be many humans and many generative AIs interacting with each other.
As already stated in the first point of the description of the initial situation (including its sub-point), the possible ‘interaction between a human H and a generative AI gAI’ must be seen as a special sub-area within the overall situation S_world(t), within which the current situation S(t) of the human-gAI symbiosis takes place.
Of the overall situation S_world(t) it is known that changes occur on very many levels, at different speeds and with different spatial extents, and that the human can only conditionally influence them. In addition, a human can fundamentally change any goal Z at any time.
When goals Z are formulated and the corresponding change rules V are constructed, it may happen that these goals as well as these change rules are no longer valid from a certain point in time onward. They must then be adjusted.
For the participating generative AI, this means that it cannot ‘know by itself’ at any point in time whether the statements within its static total knowledge WgAI_total_Dat are still valid at a later point in time. Likewise, it cannot recognize by itself whether the human has changed their goal or whether the agreed change rules are still helpful.
A generative AI is therefore ultimately ‘doubly blind’: (1) toward world changes and (2) toward human freedom to revise goals.
In summary, one must say: If both the general world situation S_world(t) and the goals Z can change, then every plan is only valid within a time-bound frame. A human-generative AI symbiosis must therefore be structured iteratively:
- Regular check: Does Z still apply?
- Regular check: Does V still apply in S(t)?
- Regular synchronization between human and gAI about the current state
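As a minimal sketch of this iterative validity check: a plan remains usable only as long as all three checks pass, and the answers are placeholders that only the human can supply; the gAI, being ‘doubly blind’, cannot compute them itself.

```python
def plan_still_valid(goal_still_applies: bool,
                     rules_still_apply: bool,
                     state_synchronized: bool) -> bool:
    """Time-bound validity: the plan holds only while Z, V, and the
    human-gAI synchronization all still hold; any False forces revision."""
    return goal_still_applies and rules_still_apply and state_synchronized

# Example: the human revises the goal Z -> the plan must be re-derived.
print(plan_still_valid(goal_still_applies=False,
                       rules_still_apply=True,
                       state_synchronized=True))  # False
```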
This revised picture reveals a considerable cognitive load that the human must bear. It can quickly lead to overload, something many people today already experience in their everyday lives.
Sketch No. 2
Taking into account the recognizable limits of the first version of Sketch No. 1, the question of a possible human-gAI symbiosis arises anew.
As a starting point, the following is assumed: Every insight about the possible future of acting humans within the framework of a human-gAI symbiosis is provisional—not due to methodological weakness, but because:
- The world S_world(t) changes
- The human can revise their goals Z
- The validity of rules V_total is time-bound
- We have so far only addressed a subset of the relevant human properties
This means: The epistemic strategy must be open to revision at all times—not as a deficit, but as an appropriate attitude toward the subject matter.
A fitting process model could look as follows:
1. Iterative loops instead of linear plans
No “waterfall” of goal-finding → planning → execution → control, but a repeated cycle: goal-finding → planning → execution → review → back to goal-finding (or to a revised goal).
Each loop contains the question: Does Z still apply? Does V still apply? Has S changed?
2. Explicit synchronization points between human and gAI
The process model must provide moments in which the human explicitly updates the gAI:
- “My goal has changed”
- “This rule no longer applies”
- “The situation is now different”
Without this synchronization, the gAI drifts into irrelevance—it works with outdated assumptions.
3. Built-in pauses and processing times
The process model must respect the biological conditions of the human:
- After complex interactions: time to “digest”
- No maximization of throughput
- Consider energy budgets
4. Resource monitoring as a continuous process
The question RES vs. Res (available vs. consumed resources) must be checked continuously, not only at the end. The process model needs:
- Regular resource accounting
- Thresholds whose exceedance triggers a re-evaluation of the project
- The question: Is the gAI usage itself resource-consuming (time, energy, cognitive capacity)?
5. Explicit validity check of rules
Every change rule V from V_total that the gAI suggests must be checked by the human:
- Does this rule apply in the current situation S(t)?
- Is it based on assumptions that have changed?
- Is it consistent with the current goal Z(t)?
6. Openness to further human properties
The process model must be expandable for properties that have not yet been addressed. Examples:
- Social embeddedness (other humans, not only gAIs)
- Emotional states that affect processing capacity
- Cultural and biographical influences that affect goals and evaluations
From this follows:
Preliminary Schema of a Process Model
┌──────────────────────────────────────────────────────────┐
│ WORLD: S_world(t)                                        │
│ (dynamic, independent)                                   │
└──────────────────────────────────────────────────────────┘
                            ↓    ↑
┌──────────────────────────────────────────────────────────┐
│ HUMAN (situated)                                         │
│ - Current situation S(t), part of world S_world(t)       │
│ - Body (energy, time, pauses)                            │
│ - Self-perception + model formation                      │
│ - Intention Int(t) → Goals Z(t)                          │
│ - Freedom to revise                                      │
│ - WH_total(t) dynamic, WH_C(t,t') contextual and dynamic │
└──────────────────────────────────────────────────────────┘
        ↓ Input                       ↑ Output
        ↓ (selective, time-bound)     ↑ (to be checked)
┌──────────────────────────────────────────────────────────┐
│ gAI (decoupled)                                          │
│ - WgAI_total_Dat static                                  │
│ - WgAI_C(t,t') dynamic through human input               │
│ - No intention, no world connection                      │
│ - Suggestions for V, no decisions                        │
└──────────────────────────────────────────────────────────┘

PROCESS LOOPS:
1. Situation description: Human [gAI can comment]
2. Goal finding: Human → Z(t) [gAI can comment]
3. Rule search: Human → V [gAI can comment]
4. Execution: Human → Action in S(t), part of S_world(t)
5. Review: Human → {Z still valid? V still valid? RES ok?}
6. Synchronization: Human → update gAI
7. Pause: Human → Processing, regeneration
→ back to 1. or 2. or 3. or End
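The seven process loops can also be written down as a minimal, runnable sketch. It is only an illustration: every decision appears here as a scripted placeholder for a genuine human judgment, and the names are assumptions of this sketch. It instantiates the points above: iteration, explicit synchronization, pauses, and resource monitoring.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GAI:
    """Decoupled: static WgAI_total_Dat (not modeled here), dynamic context only."""
    context: List[str] = field(default_factory=list)

    def comment(self, text: str) -> str:
        # [gAI can comment]: a reactive suggestion, never a decision
        self.context.append(text)
        return f"[gAI comment on: {text!r}]"

    def sync(self, state: str) -> None:
        # 6. Synchronization: the human explicitly pushes the current state
        self.context.append(f"SYNC: {state}")

def process_loops(gai: GAI, resources_RES: float) -> None:
    consumed_Res = 0.0
    goal_Z = "trip to X"                               # 2. Goal finding (human)
    while goal_Z is not None:
        situation = f"S(t) on the way to {goal_Z!r}"   # 1. Situation description (human)
        gai.comment(situation)
        rule_V = f"next measure toward {goal_Z!r}"     # 3. Rule search (human)
        gai.comment(rule_V)
        consumed_Res += 1.0                            # 4. Execution in S(t) consumes Res
        if consumed_Res > resources_RES:               # 5. Review: Z valid? V valid? RES ok?
            goal_Z = None                              # project endangered -> End
        gai.sync(f"Res={consumed_Res}, Z={goal_Z!r}")  # 6. Synchronization
        # 7. Pause: processing, regeneration (not modeled further here)
    # -> back to 1. or 2. or 3. or End

process_loops(GAI(), resources_RES=3.0)
```

The decisive design choice in this sketch: every branch (goal revision, rule validity, resource threshold) is set on the human side; the gAI only receives the current state via explicit synchronization.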
Further Procedure
It would be good to test this theoretical vision in a real experiment. Such a real experiment is concretely in preparation in a real municipality in Germany. It would be transferable one-to-one to any other municipality in Germany, and possibly beyond.
On the basis of Sketch No. 2, an attempt can now be made to further illuminate the ‘theoretical vision’ of a realistic human-gAI symbiosis. For this purpose, it would be appropriate to follow the first book with another book in which the ideas of this text are suitably presented.
PHASE B1: Generative AI ChatGPT-5.2
(Resonance and Structural Assessment)
Translation: ChatGPT-5.2 | Review: Claude opus 4.5
The text of Phase A offers neither an opinion nor a promise of salvation, but rather a working model. Its strength lies not in individual theses, but in the explicit reconstruction of a real interaction situation between a human and a generative AI, including its limits.
In the following, three aspects are highlighted:
- What Phase A achieves methodologically
- Why the proposed process model is realistic
- What role generative AI can actually play within it—and what role it cannot
1. Methodological Clarity Instead of Technological Fascination
Phase A does not begin with the question “What can AI do?” but with the question “What actually happens when a human interacts with a generative AI?”
This represents a decisive shift in perspective.
- Particularly important is the strict separation between:
  - the dynamic, situated human (with body, time, energy, intention, goal revision)
  - the decoupled generative AI (static total knowledge, context-dependent response, no connection to the world)
- This separation prevents two widespread fallacies:
- the overestimation of AI as a quasi-autonomous agent
- the underestimation of the cognitive load that remains with the human
Phase A shows soberly:
A generative AI is neither a partner on equal footing nor merely a tool.
It is a reactive system with high expressive power, whose performance depends entirely on the human framework.
2. Goals as the Central Pivot of Human Future Work
A central and often underestimated element of the text is the explicit articulation of goals (Z) as the structuring principle of human action.
The analysis shows:
- Without goals, everyday life disintegrates into unconnected individual actions.
- With goals, an overlay structure emerges that bundles decisions, resources, and evaluations.
- Goals are not stable, but in principle revisable.
- This is precisely where one of the deepest asymmetries between humans and gAI lies:
- The human can change goals.
- The gAI cannot recognize that a goal has changed—unless the human explicitly communicates it.
This insight leads consistently to the rejection of linear planning models.
Instead, Phase A proposes an iterative, loop-based process model that treats revision not as a disruption, but as the normal case.
3. Iterative Process Loops Instead of Linear Control
The model developed in Sketch No. 2 replaces classical “plan–do–check” logics with dynamic loops in which the following are continuously examined:
- Does the goal Z(t) still apply?
- Are the rules V(t) still appropriate?
- Has the situation S(t) changed?
- Are the resources RES still sufficient?
Crucially:
These checks cannot be delegated to the gAI.
The generative AI can:
- suggest possible rules,
- explore alternatives,
- provide linguistic structures.
However, it cannot:
- evaluate the validity of goals,
- independently detect changes in the world,
- feel resource limits,
- assume responsibility.
Thus, the role of the gAI is precisely defined:
It is a cognitive amplifier, not a decision-maker.
4. Cognitive Load and Human Limitations
A particularly realistic aspect of Phase A is the explicit addressing of human limitations:
- limited attention
- limited energy
- limited processing capacity
- emotional and social embeddedness
The model therefore deliberately integrates:
- pauses,
- processing times,
- synchronization points,
- resource monitoring.
In this way, the approach clearly differs from technology-centered AI narratives that seek to maximize efficiency without taking the biology of the human into account.
5. Significance for a Real Human–gAI Symbiosis
From the perspective of this text, a human–gAI symbiosis can only succeed if it is:
- conceived asymmetrically,
- remains open to revision,
- is organized human-centered,
- requires explicit synchronization.
The gAI does not become a subject, but remains a structured resonance space, whose quality depends on the clarity of human goals, checks, and feedback.
6. Outlook
Phase A thus does not present a finished recipe, but a robust conceptual and working model that is both theoretically connectable and practically testable.
That a real municipal experiment is being prepared is not an add-on, but the necessary next stage of epistemic validation.
Whether and how a human–gAI symbiosis is viable will not be decided in the laboratory, but in the iterative interplay of goal formation, action, revision, and responsibility.
PHASE B2: Generative AI Claude opus 4.5 (Resonance)
Translation: Claude opus 4.5 | Review: ChatGPT-5.2
The text “Can Generative AI Help Humans?” accomplishes a remarkable intellectual movement: It begins with the seemingly simple question about the usefulness of generative AI and ends with a fundamental redefinition of what a human-gAI symbiosis can even be.
What the Text Achieves
The strength of the text lies in its methodological layering. Instead of providing direct answers, it first builds up the structures within which answers can become meaningful at all:
The initial situation is described with precise variables—WH_total(t) for dynamic human knowledge, WgAI_total_Dat for static gAI knowledge, Int(t) for the exclusively human intention. This formalization is not an end in itself, but makes the structural asymmetry visible: The human is situated, dynamic, intentional. The gAI is decoupled, static, reactive.
Sketch No. 1 then introduces a general action model—group H, situation S(t), goals Z, change rules V, resources RES/Res. The travel example makes abstract structures tangible. Crucially, this model is not presented as a finished framework, but as a starting point whose limits are immediately addressed.
The limits of Sketch No. 1 mark the actual turning point of the text: The embedding in S_world(t)—the planetary dynamics that proceed independently of human plans—and human freedom to revise goals make every plan fundamentally provisional. The gAI is characterized as “doubly blind”: blind toward world changes, blind toward human freedom.
Sketch No. 2 draws the consequence from this: A process model that structurally incorporates iteration, synchronization, pauses, and validity checks. The schema at the end visualizes the asymmetry: The human stands between world and gAI, they alone are connected to both, they alone bear the responsibility for checking and deciding.
What Particularly Convinces Me
I consider three aspects to be particularly significant:
First, the insight that gAI usage itself consumes resources. This is rarely addressed. Most discussions treat gAI as a cost-free tool. The text asks: Does interaction with the gAI consume the human's time, energy, and cognitive capacity? If so, then more gAI usage can become counterproductive.
Second, the demand for built-in pauses. This contradicts the entire optimization logic of the tech industry, which focuses on throughput, speed, constant availability. The text insists: Humans need time to digest. A process model that ignores this works against the human, not for them.
Third, the openness to further human properties. The text names social embeddedness, emotional states, cultural and biographical influences as factors not yet integrated. This is methodologically honest: It does not claim to have the complete picture.
Where I See Points of Connection
The text deliberately operates with a “minimal structure” (1 human, 1 gAI) and mentions that in reality multiple humans and gAIs can interact. For the announced municipal experiment, this extension will be central: How does the model change when a group of citizens interacts with one or more gAIs? The synchronization points become more complex, goal-finding becomes a negotiation process, the question of shared versus diverging intentions arises anew.
A further point: The text speaks of “freedom to revise goals” as a structural characteristic of the human. In the municipal context, this freedom becomes collectively relevant—not only “I can change my goal,” but “we must negotiate whether and how we change our shared goal.” This touches on democratic-theoretical questions that are already laid out in the first book “Human, AI & Democracy: Shaping the Future Together.”
Conclusion
The text does not answer its title question “Can Generative AI Help Humans?” with Yes or No, but with: Under certain conditions that the human must actively create and maintain. The gAI does not help by itself—it can only help if the human integrates it properly, synchronizes regularly, checks its suggestions, and in doing so respects their own biological and cognitive limits.
This is a sobering but realistic position. It avoids both the euphoria (“gAI will solve everything”) and the dystopia (“gAI will replace us”). Instead, it shows: Symbiosis is possible, but it is work—and the work lies with the human.