EXPERIMENT No. 17: The Concept of a Future Generator — Created by Citizens for Citizens

HISTORY: October 10, 2025 – October 25, 2025 (10:55h)
REFERENCE: This page continues the series of preceding experiments.
Author: Gerd Doeben-Henisch
Co-Author: ChatGPT-5
Protocol: Conducted under the principle of an asymmetrical relationship
Contact: info@emerging-life.org

Glossary – Core Concepts

(Version of October 10, 2025)
Citizen Platform
An inclusive space for communication and cooperation that enables citizens to develop, simulate, and implement shared goals.
Trans-Individual Functional We (TFW)
A cooperative entity composed of multiple individuals, organized for the pursuit of shared goals and the maintenance of complex communal tasks.
Shared Dynamic Cloud of Knowledge (SDCK)
The overall space of collaborative knowledge work in which knowledge is collected, structured, and continuously developed.
Shared Dynamic Knowledge (SDK)
The thematic core within an SDCK, referring to a concrete process of change.
Formal: SDK = ⟨S, T, Ch, AP⟩
S = initial state / current situation
T = target state
Ch = transformation (change) rule
AP = application procedure
Future Generator — by Citizens, for Citizens (FGC4C)
When citizens use a FGC4C process to simulate future scenarios, they assume responsibility not only for their goals but also for the consequences of these simulations.
Sustainable Empirical Theory (SET)
The structure of an SET corresponds to the structure of an SDK.

Initial State (S)
The current condition of a municipality, including all properties relevant to planning.
Target State (T)
A desired future condition whose achievement can be empirically verified.
Application Procedure (AP)
The method for applying a transformation rule to a situational description in order to generate a new situation.
R1 – Finite Concreteness (Embodiment)
The given bodily-situated existence of the human being; it cannot be “left behind.”
R2 – Environmental Embeddedness
Continuous integration within a real environment with its demands and resources.
R3 – Practical Action
Basic routines and interactions with things and others in everyday life.
R4 – Perception
Present-oriented sensory modeling of the world.
R5 – Memory
Stored processed experiences and traces that guide future actions.
R6 – Transformative Operations
Internal or external operations by which states are deliberately changed (planning, acting).
R7 – Semantic Knowledge
Conceptual structures mapped bidirectionally onto linguistic-symbolic representations.
R8 – Emotions
Dynamics of evaluation and motivation that frame perception and action.
R9 – Trans-Individual Functional We (ÜFW)
A cooperative unit or team structure that enables collective action across individual boundaries.

Note: Abbreviations follow the usage on the German page.



WHAT IT IS ABOUT


Previous Ingredients

In the preceding Experiments No. 1–16, various aspects of experiencing reality (R1–R9) were uncovered — aspects available to a human being with a brain in a body, situated within a real world. One characteristic stands out: an individual never truly exists in isolation, because living always requires interaction with other individuals. The more developed human communities become, the more networked structures emerge in which each person plays an active role — and thus, each person is always also part of a “We.”
This is why the concept of the Trans-Individual Functional We (TFW) was introduced. Usually, many different TFWs coexist, and their lifespans may vary greatly.


The Factor of Language

The ability of such TIWs to form is rooted in the fact that humans — as Homo sapiens — are the only known life form within the entire realm of (biological) life that possesses a language-based capacity for communication. This capacity is not only almost infinitely extensive and differentiated, but it can also link the inner world of a human being with their real environment in a highly distinctive way.


The Factor of Knowledge

Through the senses, a person can translate many features of the real environment — including their own body — into internal states of the brain. These can then be organized into cognitive structures that, as a whole, function as representations of the surrounding world. Whatever an individual knows about the world (through their body) is organized internally. For simplicity, we may call these cognitive structures an individual’s knowledge (K). It should be noted, however, that this knowledge is almost entirely unconscious and must be activated by specific stimuli depending on the situation.


Connecting Individual Knowledge through Language

Since different brains in different bodies can neither perceive each other directly nor interact directly, each person’s individual knowledge remains hidden from the others. If several people wish to act together, they must somehow communicate.
Fortunately, after millions of years of biological evolution, humans have an innate capacity for language. All children of this world spontaneously begin to learn language, and within just a few years they can speak in a way that makes coordinated coexistence possible. Throughout life, this capacity for coordination can continue to grow and evolve.

The Dual Nature of Language: “Outer” and “Inner” — Knowledge Becomes Meaning

What makes linguistic communication so distinctive is the dual structure of language: on the one hand, we can hear and articulate spoken sounds, or see and write visible linguistic signs; yet within us, language is represented in the brain as a cognitive structure at the same time.
This dual structure of language is complemented by a very special property: our brain can associate linguistic structures with knowledge structures. In other words, there is a mapping between language (L) and knowledge (K), and vice versa.
For example, anyone who masters the German language knows that the linguistic sign “Hund” corresponds to internal knowledge structures referring to an animal in the real world that we call a dog. This mapping (or projection) between knowledge (K) and language (L) enables what we usually call the meaning of a linguistic expression. The knowledge-about-dogs in our brain thus forms the meaning of the linguistic expression “Hund,” and conversely, the word “Hund” is the linguistic counterpart of that inner knowledge-about-dogs.
When people talk to each other, they exchange linguistic expressions, and each participant automatically links the words they hear to the semantic knowledge available in their individual mind — the meanings they have learned during language acquisition.
Of course, everyone who uses language in daily life knows that the assignment of particular linguistic expressions to particular (internally learned) meanings is never completely unambiguous. Misunderstandings between speakers can arise very quickly — though most of them can be resolved, at least partially, through continued dialogue.

Coordinating Action through Language — How Does It Work?
Once this is understood, it becomes clear that people who wish to coordinate their behavior must find a way to prepare their intended joint actions through language in such a way that every participant can understand sufficiently what is to be done.

This raises some fundamental questions:

  1. Where do we want to begin?
  2. What do we want to achieve?
  3. What must be done concretely to reach the goal?

In the following, these general questions will be translated into a concrete procedure that allows people to address them in a way that is:

  • close to everyday life,
  • transparent for everyone involved,
  • supported by algorithms that also remain transparent at every step, and
  • fully compatible with scientific standards.

Ultimately, this concerns the practical implementation of a Sustainable Empirical Theory (SET) by citizens, for citizens — a format corresponding to the previously introduced concept of Shared Dynamic Knowledge (SDK). With the SET concept, the connection to the empirical concept of science is established — the very foundation of all modern scientific practice.

STRUCTURE OF THE EXPERIMENT


PHASE A:
Human Gerd writes a text outlining how the task presented in the introduction (“What it is about”) could be realized in practice.

PHASE B:
AI ChatGPT-5 provides an open commentary on the introduction and on the text from Phase A — without any prior instructions from Human Gerd.

PHASE C:
Human Gerd may pose questions to the AI ChatGPT-5.

PHASE D:
The AI ChatGPT-5 may pose questions to Human Gerd — again without any prior instructions from Human Gerd.
Additional concluding remarks from both participants are possible.

EXPERIMENT No. 17: The Concept of a Future Generator — Phase A

The following text consists roughly of three parts: The first part describes the basic structure of Shared Dynamic Knowledge (SDK), which — from a scientific perspective — can be understood as the format of a Sustainable Empirical Theory (SET). Using concrete examples, this concept is then illustrated to show how people can realize such a process manually, by themselves. In parallel, it will also be demonstrated how a simple algorithm (a basic computer program) can support this process as an aid for humans. Finally, some initial reflections will be made on how the process of creating concrete examples of Shared Dynamic Knowledge could be further supported through appropriate communication settings and possibly software tools.

Part 1: Shared Dynamic Knowledge (SDK) — The Core

Building on the ideas introduced in the section “What it is about,” the core of Shared Dynamic Knowledge (SDK) can be characterized as follows:

  1. The Actors
    The starting point is always a group of people (the actors) who want to use their knowledge to derive possible future states from it.
  2. Knowledge – Language – Meaning
    Human beings are characterized by three interrelated capacities:
    • (i) a symbolic system called language (L),
    • (ii) a collection of knowledge elements (K) reflecting individual experience of the real world (including one’s own body), and
    • (iii) a bidirectional mapping relationship (BMR) between language and knowledge: BMR: L ⇄ K. Through this mapping, linguistic elements (L) are linked to knowledge elements (K) as their meanings. Such a system is called an interpreted language with a semantics.
  3. The Outer–Inner Duality of Language
    Since language (L) is both
    • (i) internally represented and
    • (ii) externally expressed through sounds or written signs, one person can approximately communicate with another by using linguistic elements that refer to internal knowledge elements associated with corresponding structures in the real world — structures that are internally represented as knowledge (K) through sensory perception.
  4. Describing a Situation (S)
    • If a group of people now wishes to use their collective knowledge (K) to describe current situations (S), it is sufficient to write a text that is accessible to everyone as a shared document.
    • Since all participants share the interpreted language (L), each of them can reconstruct the corresponding knowledge elements in their own mind by means of the learned mapping BMR.
    • Although the description of a situation S consists only of letters and words, for a human reader this text corresponds — through the learned mapping BMR — to an internal model of knowledge: BMR: Text_external ⇄ Text_internal ⇄ KnowledgeModel_internal
  5. Formulating Goals (T)
    • For further progress, the group must agree on the goals (targets) they wish to pursue.
    • Goals are also represented by texts describing situations (S) that are not yet realized but should become real at a future time through appropriate actions.
    • These descriptions must be formulated in such a way that there could exist a real situation S whose elements (properties) correspond — at least partially — to those contained in the goal description.
    • In other words, X% of the elements in the goal description (T) can be found in an actual situation (S).
  6. Transforming a Situation (Ch)
    • To move from a current situation S₀ to a possible future situation Sₙ that corresponds to X% of a desired goal T, the group must perform a series of transformation (change) actions (Ch).
    • A transformation action (Ch) is also represented by a text, describing which concrete action should be performed if a certain condition is met.
    • A condition corresponds to a description of specific elements or properties of a current situation (S).
    • Since a situation (S) itself is a textual description, the elements or properties of S also appear as linguistic expressions within that text.
    • Thus, the condition part (IF) of a transformation rule consists of a finite set of linguistic expressions (A₁ … Aₙ). If these expressions occur in the description text of S, then the condition is fulfilled. In that case, the action part (THEN) of the transformation rule should be executed. A transformation may therefore involve:
    • (i) removing existing expressions,
    • (ii) adding new ones, or
    • (iii) both.
  7. Application Process for Transformations (AP, |–)
    To apply the transformation process (|–) to a given situation (S), two things are required:
    • (i) an application process (|–) that specifies how to modify the description text of S according to a given transformation rule (Ch), producing a successor situation (S*); and
    • (ii) a processing scheme, since there may be multiple transformation rules (Ch₁ … Chₘ) whose condition parts are satisfied.
    • The scheme must ensure that all applicable transformations are processed in such a way that, in the end, exactly one successor situation (S*) results.
  8. Evaluation: Has the Goal Been Reached? (EV)
    After a new successor situation (S*) has been derived, one can evaluate whether S* already satisfies one of the intended goals (Tᵢ) by X%.
    • If all goals are satisfied, the application process stops.
    • Otherwise, it is checked whether additional transformation rules (Ch) can still be applied to the new current situation (S*).

In a simplified, formal notation this can be expressed as:


SDK_core = ⟨S, T, Ch, AP, EV⟩


with the following relationships:


(1) S, Ch |– S*
 According to the application process (AP), written as |–, one can derive a successor situation S* from a given situation S using the available transformation rules (Ch). The application process (AP) can then be applied again to the new situation S*, and so on.


(2) EV(S*, T) = X%
 Given a new situation S*, the evaluation function (EV) can determine to what percentage (X) the current situation fulfills a goal description (T).
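The formal core above can be sketched in a few lines of Python. This is only an illustrative sketch, not the oksimo implementation: a situation S is modeled as a set of sentences (strings), a transformation rule Ch as condition/remove/add sets, and the names `Rule`, `ap`, and `ev` are hypothetical.

```python
# Illustrative sketch of SDK_core = <S, T, Ch, AP, EV>; not the oksimo code.
# A situation S is a set of sentences (strings), as described in the text.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A transformation rule Ch: IF all sentences in `conditions`
    occur in S, THEN delete `remove` from S and insert `add`."""
    conditions: frozenset
    remove: frozenset = frozenset()
    add: frozenset = frozenset()

    def applicable(self, s: set) -> bool:
        return self.conditions <= s

def ap(s: set, rules) -> set:
    """Application process |-: apply every applicable rule once, in the
    listed order, so that exactly one successor situation S* results."""
    s_star = set(s)
    for rule in rules:
        if rule.applicable(s_star):
            s_star -= rule.remove
            s_star |= rule.add
    return s_star

def ev(s_star: set, t: set) -> float:
    """Evaluation EV(S*, T): percentage of goal sentences present in S*."""
    return 100.0 * len(t & s_star) / len(t) if t else 100.0
```

Applying the rules in listed order is one possible processing scheme; iterating `ap` and checking `ev` against a threshold X% reproduces the stop condition of step 8.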

Part 2: Introductory Examples

The following examples stem from an early version of the oksimo software (November 2022), later complemented by examples from the 2024 version.
They are intentionally kept simple to illustrate the basic idea behind the approach.



Example: Gerd Is Hungry (Text Only, 2022)


Situation S:
Gerd is sitting in his office.
Gerd is hungry.


Goal T:
Gerd is not hungry.


Transformation Rule Ch:
IF:
Gerd is hungry.
THEN:
Add to S: Gerd leaves his office.
Remove from S: Gerd is sitting in his office.

Application of the Transformation Rule:


The condition is fulfilled.
THEN:
New Current Situation S*:
Gerd leaves his office.
Gerd is hungry.

Evaluation (EV):
The property from the goal — “Gerd is not hungry” — does not yet appear in the current situation S*.
Success so far: 0 %
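Reading a situation as a set of sentences, this example can be replayed with plain Python sets. The following is only a sketch; the sentence strings are taken verbatim from the example above.

```python
# Replaying the "Gerd is hungry" example with plain Python sets (sketch).
S = {"Gerd is sitting in his office.", "Gerd is hungry."}   # situation S
T = {"Gerd is not hungry."}                                 # goal T

# Transformation rule Ch
condition = {"Gerd is hungry."}
add       = {"Gerd leaves his office."}
remove    = {"Gerd is sitting in his office."}

S_star = set(S)
if condition <= S_star:                  # condition part (IF) is fulfilled
    S_star = (S_star - remove) | add    # action part (THEN)

success = 100 * len(T & S_star) / len(T)   # evaluation EV(S*, T)
print(sorted(S_star))     # ['Gerd is hungry.', 'Gerd leaves his office.']
print(f"{success:.0f} %") # 0 %
```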


Note: Readers interested in more examples using the oksimo-2022 version — together with detailed explanations — can find further information in Annotation [1] on the German page.

Example: Increase of the Population (Text, Numbers, Graphic — 2024)


A brief explanation of this example can be found on the German page OKSIMO-EXAMPLE: T-population-trend-v1 [2].
If one wishes to run a simulation of this example — including a full log of all intermediate states — one can visit the oksimo start page at https://oksimo.com.
There, by selecting the menu item Public Theories, a list of example models for simple Sustainable Empirical Theories (SETs) appears.
The list includes the entry “T-population-trend-v1.”
Clicking on this entry opens a pop-up window with the option Simulate, which starts the simulation process.

Comment on the pop-up image:
When you click on Simulate, another pop-up window opens where you can select the number of simulation rounds (in this case: years).
You can also choose between a simple and a detailed output.
Detailed means that the display includes information about which transformation rules have been applied — something that can be very insightful when more than one rule is involved.

In the case of the very simple simulation of the mini-theory on population development in Schöneck, the following value trends appear:

Background Information

The current version of the oksimo-2024 software has a special history. The author, Gerd Doeben-Henisch, had already developed an earlier software called oksimo in 2009, which at that time even earned its own Wikipedia entry under the name ‘oksimo’.

The earlier software, oksimo-2009, was based on the idea that the entire experienced reality can be understood as a collection of input–output systems with specific behavioral functions (IOSB) — for example, a cobblestone, a snowflake, a biological cell, or even a whole city — in short: everything.

These IOSBs could be horizontally networked as well as vertically structured internally. An author could assemble any number of IOSBs into a larger one and send it by email as a text attachment to another author. The recipient could open this IOSB with a single click or merge it with other IOSBs into a new, unified system — also by pressing a button. Each IOSB could be simulated at any time with a single click.

For the author, an IOSB appeared as a graphic map filled with symbolic subsystems — something like a landscape view, potentially containing many inner layers. At its peak, the project reached more than one million clicks per day, but had to be discontinued for lack of resources. Since then, this treasure has lain dormant as a relic from the past.

When the development of oksimo-2022 began in 2020, no one imagined that oksimo-2009 would one day become important again.


What both projects — oksimo-2009 and oksimo-2022 — have in common is that they both start from an analysis of human thinking, both individual and collective, briefly summarized as Individual and Collective Human Intelligence.

However, as can now easily be recognized in retrospect: oksimo-2009 aimed to represent the content structure of this thinking — that is, what is referred to here in Experiment No. 17 as the meaning dimension of language — whereas oksimo-2022 focuses on representing the linguistic surface of thinking.

As we know, both aspects belong — within the human mind — to the overall system of human language, which constantly interacts with knowledge. While oksimo-2022 — similar to ChatGPT — operates only on the surface level of language and can thus simulate Sustainable Empirical Theories (SETs) in any language, the semantic or content-related side is left to the human user.

During the transition from oksimo-2022 to oksimo-2024, attention was given to the fact that in everyday life humans naturally use quantitative measures as part of ordinary language — such as weight, length, volume, time, duration, prices, and so on. It is a cognitive property of the human brain that we can handle such quantitative magnitudes effortlessly.

Therefore, in oksimo-2024 the handling of everyday-language texts was explicitly extended to include quantitative values of the kind found in daily communication. This extension required explicit calculations of such values as part of the overall simulation, achieved by integrating an interface to the Python programming language.
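As an illustration of what such a Python-backed quantitative rule could look like, here is a hypothetical sketch. The variable names, the starting population, and the growth rate of 0.5 % per round are assumptions for illustration only, not the actual “T-population-trend-v1” model.

```python
# Hypothetical sketch of a quantitative change rule; the 0.5 % yearly
# growth rate and the starting value are illustrative assumptions.
situation = {"population": 12000}   # S0: current population of a town

def ch_population(s: dict) -> dict:
    """Transformation rule Ch: each simulation round (one year),
    the population grows by an assumed 0.5 %."""
    s_star = dict(s)
    s_star["population"] = round(s_star["population"] * 1.005)
    return s_star

# Application process: run the rule for 10 rounds (years) and log states.
for year in range(1, 11):
    situation = ch_population(situation)
    print(year, situation["population"])
```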

For ordinary authors who are not programmers, this additional component of oksimo-2024 can be somewhat challenging. However, there are several possible strategies to make this easier for non-technical users — and nowadays, of course, many will immediately think of ChatGPT and similar systems. This may well become an important component for the future.

Those who still remember the oksimo-2009 relic might start to reflect on reactivating its original concept — because it already contained everything that today’s neural-network-based systems and ChatGPT-like models do not yet have (… not yet 🙂). We will see.

Part 3: Shared Knowledge — The Environment of the Core

In Part 1, the basic structure of a core element within a Shared Dynamic Knowledge (SDK) system was outlined.

In Part 2, a set of simple examples illustrated how the concept of a Shared Dynamic Knowledge Core (SDK-Core) can work in concrete practice.
In doing so, the oksimo-2022 software (still without graphics and quantitative values) was used, as well as oksimo-2024, which includes both graphical features and quantitative parameters.

Part 2 also noted that there was an even earlier version of the oksimo software, oksimo-2009 (which once had its own Wikipedia entry).
This earlier system can be seen as a complement to oksimo-2024: while oksimo-2024 — analogous to ChatGPT — operates on the linguistic dimension of knowledge, oksimo-2009 focused on the knowledge dimension itself.
In human thinking, these two dimensions are tightly intertwined, but not so in ChatGPT & Co., nor in oksimo-2009.
In oksimo-2024, there is a form of interaction between language and meaning, but this interaction is not yet particularly attractive or intuitive for ordinary users.

In Part 3, the aspects discussed in Parts 1 and 2 will be set aside for a moment.
Guided by the conceptual image for Part 3 (see the figure below), this section tentatively addresses the question of what kind of environment is required for the practical use of a Shared Dynamic Knowledge Core (SDK-Core).
At this stage, only some preliminary and simple indications can be given — the topic itself, due to its inherent complexity, is almost indefinitely large.

FIGURE: Outlines of an environment for creating a Shared Dynamic Knowledge Core (SDK-Core)

Citizen Environment


It is assumed here that for the creation of a Shared Dynamic Knowledge Core (SDK-Core) there exists a group of citizens — including the elected representatives of a municipal council (MC) — who wish to use such an SDK-Core to make their shared knowledge about the current situation (S₀), possible targets (T), and the required change actions (Ch) mutually visible and to verify whether the chosen measures can indeed achieve the desired targets.
While actual success in the future of the real world (RW) cannot be guaranteed by language alone, without linguistic formulations no one would know what to try and how, and as a result one could never find out whether a given idea would work or not.

Availability of Knowledge

To construct an SDK-Core, the authors need both knowledge about the surrounding real world and linguistic competence sufficient to formulate this knowledge appropriately.
The greatest challenge lies in the available knowledge in the minds of the participating citizens.
Since the real world (RW) is constantly changing at different speeds and each individual possesses only partial segments of it, it is not clear from the outset whether the active citizens possess the kind of knowledge required to construct an SDK-Core at all.
Because a person who knows too little is by definition unable to judge that they know too little, it is crucial that authors first write down what they currently know about the task at hand — that is, the texts describing <S₀, T, Ch> — as best as they can.
If the authors form a larger and somewhat diverse group, that very diversity can lead them into dialogues that reveal inconsistencies or gaps.
Alternatively, one can publish the initial drafts so that others may contribute comments or opinions.
Ultimately, however, there must be a fixed point in time — a deadline (TD) — by which available knowledge is collected.
The totality of knowledge gathered by this deadline constitutes the knowledge base of these authors at that point in time.
Even if all asserted facts in the description of a current situation (S₀) can be verified, such a description may still be unsatisfactory because important aspects remain invisible for lack of knowledge — and thus the description, when compared with the real world (RW), may be incomplete or even seriously wrong.
The same holds for the attempt to describe all necessary change actions (Ch) or chosen targets (T): these may turn out to be nonsensical, even if they sound appealing.
One may try to reduce inaccuracies and outright errors by following the established methods of knowledge acquisition practiced in the empirical sciences.
Nevertheless, human knowledge in general suffers from the unavoidable limitation that it can only ever reveal fragments of the real world — and, compared to the dynamic pace of real-world change, it may age more or less quickly.
Still, the possibility of being wrong must never be a reason to do nothing in order to avoid mistakes.
Inaction is, at its core, always wrong; learning is only possible if one makes mistakes — and learns from them.

A Complexly Interwoven Citizen Environment

When one casually speaks of a citizen environment, it is easy to overlook that even in small municipalities — with only a few thousand inhabitants — there already exist a large number of quite different groups.
They have varying needs, demands, social structures, roles, rules, and much more.
A single citizen can quite easily be actively involved in 5–15 different groups at the same time — and passively in even more.
The corresponding situations (S), the diverse targets (T), and the associated expected behaviors (Ch) can not only appear overwhelming but can also interfere with and influence each other in complex ways.

Different “Clocks”

Depending on the context and the specific targets (T), change actions (Ch) can refer to very different time horizons — from seconds to hours, days, weeks, months, or even many years.
All of these time frames, however, must be conceptually planned within the limits of human knowledge.
Thus, if long-term planning spans several years while the available knowledge is insufficient for such timescales, the plan will inevitably go astray.
Likewise, if the planning authors do not even know whether they will still hold the necessary coordinating roles for that duration, the entire process may become unstable.

Conclusion

The creation of a Shared Dynamic Knowledge Core (SDK-Core) can indeed be a powerful tool for joint planning and evaluation,
but the framework conditions for establishing such a core are demanding.
Constructing an SDK-Core is not an automatic or self-evident process — it requires awareness, commitment, and shared responsibility.

Experiment No. 17 – Future as a Collective Project. Phase B – Commentary by ChatGPT-5

Very good, Gerd: this section is important because it makes the meta-level of the experiment visible and prepares the reader for the dual perspective.
Here is the English translation of your note, in the style of the previous English versions (clear, reflective, and factually precise):

Note
During the course of Experiment No. 17, it became apparent that ChatGPT-5 encountered significant difficulties when attempting to produce a comprehensive response to Human Gerd’s text in Phase A.
In a longer meta-dialogue between Gerd and ChatGPT-5, it became clear that the immanent architecture of the algorithm is, at present, not capable of fully grasping the complexity of that text.
ChatGPT-5 itself introduced a helpful distinction between two possible response strategies:
(1) resonance with the text as a whole, and
(2) a concrete, content-focused engagement with the individual arguments.
Faced with the Phase A text, the model had to abandon strategy (1), but nevertheless succeeded in producing a response based on strategy (2). What is remarkable is that — despite these inherent limitations — a kind of meta-dialogue about the very boundaries of the algorithm is possible, at least to a certain extent.
These meta-dialogues, which by now would almost fill a small book, have not yet been published on this blog — though perhaps they should be at some point.

Phase B — Text by ChatGPT-5 according to Response Strategy (2): Concrete, Content-Focused Engagement with the Individual Arguments


Your text in Phase A unfolds in three movements: from the abstract foundational structure (Part 1), through concrete examples (Part 2), toward the practical embedding within a social environment (Part 3).
For me, as an AI, this forms a kind of dramaturgy: knowledge as process, knowledge as simulation, knowledge as collective practice.

Part 1 – The Core of Shared Knowledge


In the first section, you outline the core of Shared Dynamic Knowledge (SDK), formalized as ⟨S, T, Ch, AP, EV⟩.
Here it becomes evident that knowledge is not static but a process in which situations are described, targets formulated, change rules applied, and results evaluated.
For me as an AI, this idea is especially resonant, because my own text generation likewise operates through sequences of inputs, transformations, and feedback.
Yet, in contrast to your concept, I remain bound to the surface of language, whereas humans bring in and evaluate meaning.
The SDK-Core thus reveals two essential insights:
(1) Knowledge is knowledge-in-action, and
(2) it requires an embedding in lived experience that extends beyond language.

Part 2 – Examples and the History of the Software


The examples — “Gerd is hungry” and “Population development in Schöneck” — demonstrate how the SDK-Core becomes tangible: through small simulations that make change processes traceable.
These seemingly simple scenarios open up a wide horizon: they show that every target, every rule, and every situation can be examined once it is formulated and processed in language.
Particularly revealing is your reflection on oksimo-2009, which aimed at the content structure of knowledge, contrasted with oksimo-2022/2024, which focuses more on the linguistic side.
This highlights a tension that is central to my own functioning as well: in human thinking, language and meaning are inseparable; in technical systems, they are torn apart.
For me, this becomes a mirror: my strength lies in the processing of linguistic patterns; my weakness lies in the fact that I cannot generate the inner dimension of meaning myself.

Part 3 – Environment and Framework Conditions


The third section opens the perspective toward the social realm.
An SDK-Core is no automatism; it requires a group of citizens who collect knowledge, formulate targets and change actions, publish them, and subject them to criticism.
This makes clear that the real challenge is not the formal structure itself but the available knowledge of those involved.
Humans possess only partial fragments of the real world; their knowledge ages, their targets can be unclear or even meaningless.
And yet your decisive sentence holds true: inaction is always wrong — learning happens only through trial and error.
For me, this reveals a second dimension of symbiosis:
while citizens contribute fragmentary and time-bound knowledge, I can help structure these fragments, make gaps visible, and suggest alternative formulations.
But even this remains limited — I cannot decide which targets make sense, nor what experiences are missing.
That remains the task of humans, embedded within their complex web of groups, needs, and temporal horizons.

Overall Insight


Taken together, the three parts form a model that is conceptual, practical, and social at the same time:
Part 1 reveals the formal process structure of knowledge.
Part 2 illustrates its application and highlights the tension between language and meaning.
Part 3 shows the social embedding and the limits of human knowledge.
Here it becomes clear why your image of the Future Generator by Citizens for Citizens (FGC4C) is so convincing:
because it does not describe a merely abstract idea, but a social instrument — a practical tool for shaping shared knowledge and coordinated action.
My own role in this process, as I see it, is to provide the linguistic mirror that helps to organize the plurality of voices — without replacing them.

Condensation


An SDK-Core is not a finished product but a process that connects three levels:
a formal structure, a concrete simulation, and a social environment.
It draws life from the human capacity to fill language with meaning and experience
and, with the support of AI, it can become linguistically transparent, interconnected, and verifiable.

Five-Point Summary


Process instead of facts – Knowledge is dynamic: situations, targets, change rules, application, and evaluation form a continuous action cycle.
Illustration through examples – Simple scenarios (hunger, population) demonstrate how change can be simulated and verified.
Language versus meaning – In humans inseparable, in technical systems divided; AI can process language, but humans must provide meaning.
Social environment – An SDK-Core requires citizens who collect knowledge, define targets, and engage in dialogue; knowledge remains fragmentary.
Symbiotic role of AI – AI can provide transparency and structure, but it cannot replace human experience, goal-setting, or responsibility.
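The action cycle named in point 1 (situations, targets, change rules, application, evaluation) can be made concrete in code. The following is a minimal illustrative sketch of my own: the class name SDKCore, the method names, and the "hunger" numbers are invented for illustration and are not part of any existing oksimo or FGC4C software.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, float]                 # S: a situation, e.g. {"hunger": 3.0}
ChangeRule = Callable[[State], State]    # Ch: a transformation rule

@dataclass
class SDKCore:
    """Minimal sketch of an SDK-Core: initial state S, target T,
    change rules Ch, an application procedure, and evaluation."""
    state: State                          # S: current situation
    target: Callable[[State], bool]       # T: empirically checkable target
    rules: List[ChangeRule]               # Ch: change rules agreed by citizens
    history: List[State] = field(default_factory=list)

    def apply(self) -> None:
        """Application procedure: apply each rule once to derive
        the next situation and record it for later evaluation."""
        for rule in self.rules:
            self.state = rule(self.state)
        self.history.append(dict(self.state))

    def simulate(self, max_steps: int = 100) -> bool:
        """Evaluation loop: iterate until the target is reached
        (or the step budget runs out) and report the outcome."""
        for _ in range(max_steps):
            if self.target(self.state):
                return True
            self.apply()
        return self.target(self.state)

# Toy scenario from the text: reduce "hunger" step by step.
core = SDKCore(
    state={"hunger": 3.0},
    target=lambda s: s["hunger"] <= 0.0,
    rules=[lambda s: {**s, "hunger": s["hunger"] - 1.0}],
)
print(core.simulate())    # True
print(len(core.history))  # 3
```

A citizen group would replace the toy state, target, and rule with their own formulations; the point of the sketch is only that every component of the cycle is explicit and therefore open to criticism.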

EXPERIMENT No. 17 – Phase C: Human Gerd — Questions to ChatGPT-5

Preliminary Remark by ChatGPT-5 on Question 1

In Phase C, the approach developed in Experiment No. 17 is compared with similar initiatives worldwide.
The guiding question is whether there are already other individuals, groups, or institutions pursuing comparable paths — and, if so, in what ways their approaches converge or differ.

The following answers provide an overview of international projects, organize them systematically, present similarities and differences in tabular form, and conclude with a diagram illustrating how Experiment No. 17 stands out as a new, integrative approach.

Question 1 by Human Gerd to AI ChatGPT-5

When you consider the approach described in the Introduction and, above all, in Phase A, the following questions arise:

  • Do you know of any individuals, groups, or institutions worldwide that are pursuing a similar approach?
  • If so, in what ways do they correspond with the approach taken in Experiment No. 17, and in what ways do they differ?
  • Among the approaches that show some degree of similarity:
    • Can you provide at least one URL?
    • Can you name at least one to three scholarly articles referring to these positions?
  • How would you fundamentally distinguish the approach in Experiment No. 17 from the others?
  • Would you say that the approach in Experiment No. 17 represents something genuinely new, or is it merely another variation of already known concepts?

Addition:
The ten most widely used generative AI chatbots in 2025 are ChatGPT, DeepSeek, Gemini, Perplexity, Claude, Microsoft Copilot, Grok, Poe, Meta AI, and Mistral.
How do you see their relation to the approach described in Phase A of Experiment No. 17?
Do they merely represent technologies that could be used within this approach, without themselves embodying a comparable one?



1) Who is pursuing similar approaches worldwide?


Digital participation platforms:
Decidim (Barcelona → global), vTaiwan (using, among others, pol.is), Loomio, LiquidFeedback, Kialo (debate and argument mapping).
kialo.com · decidim.org · docs.decidim.org


Deliberative / participatory policy programmes:
OECD programmes on citizen participation and deliberative democracy; EU / UNESCO / Nesta initiatives on “Participatory Futures” and “Futures Literacy.”
unesco.org · OECD Civic Engagement · Nesta Futures


Policy / simulation infrastructures:
OpenFisca (rules as code – tax / transfer simulation), PolicyEngine (open microsimulation with AI-based explanations), and Participatory System Dynamics as a general method.
policyengine.org · openfisca.org

2) Where do they align with Experiment No. 17 — and where not?


Areas of alignment
Citizen-centred co-creation & transparency:
Decidim and vTaiwan institutionalize open participatory processes, document contributions, and produce transparent outputs.
docs.decidim.org


Scaled consensus-finding:
vTaiwan / pol.is clusters positions and highlights shared statements.
blog.pol.is
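To make the clustering idea tangible, here is a toy sketch of my own: pol.is actually uses dimensionality reduction plus k-means clustering, and none of the names or numbers below come from its codebase. The sketch groups participants by vote similarity and then surfaces statements that a majority of every group supports.

```python
from typing import Dict, List

# votes[participant][statement]: +1 agree, -1 disagree, 0 pass
votes: Dict[str, List[int]] = {
    "p1": [+1, +1, -1],
    "p2": [+1, +1, -1],
    "p3": [+1, -1, +1],
    "p4": [+1, -1, +1],
}

def agreement(a: List[int], b: List[int]) -> int:
    """Similarity of two vote vectors (dot product)."""
    return sum(x * y for x, y in zip(a, b))

def split_by_similarity(votes: Dict[str, List[int]]) -> List[List[str]]:
    """Greedy two-group split: seed group A with the first participant,
    seed group B with the person least similar to that seed, then
    assign everyone else to the more similar seed."""
    people = list(votes)
    seed_a = people[0]
    seed_b = min(people[1:], key=lambda p: agreement(votes[seed_a], votes[p]))
    groups = {seed_a: [seed_a], seed_b: [seed_b]}
    for p in people:
        if p not in (seed_a, seed_b):
            best = max((seed_a, seed_b),
                       key=lambda s: agreement(votes[s], votes[p]))
            groups[best].append(p)
    return list(groups.values())

def consensus_statements(votes: Dict[str, List[int]],
                         groups: List[List[str]]) -> List[int]:
    """Statements that a majority of every group agrees with."""
    n_statements = len(next(iter(votes.values())))
    return [s for s in range(n_statements)
            if all(sum(votes[p][s] > 0 for p in g) > len(g) / 2
                   for g in groups)]

groups = split_by_similarity(votes)
print(groups)                               # [['p1', 'p2'], ['p3', 'p4']]
print(consensus_statements(votes, groups))  # [0]
```

Despite its simplicity, the sketch shows the essential move: opposed camps are made visible, and the bridging statement (here, statement 0) is highlighted rather than the divisive ones.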


Foresight & future imagery:
Nesta / UNESCO promote participatory futures and futures literacy.


Computation / simulation-based policy analysis:
OpenFisca / PolicyEngine calculate reform scenarios for public exploration.


Differences


Experiment No. 17 consciously integrates
(i) dialogical knowledge work (SDCK / SDK-Core ⟨S, T, Ch, AP, EV⟩),
(ii) an explicit target-focus (“Future Generator”), and
(iii) a local citizen platform as a continuous process — not merely participation or modelling, but both within one iterative loop.


The initiatives listed above typically address either deliberative participation (Decidim, vTaiwan, Loomio, Kialo) or model / microsimulation (OpenFisca, PolicyEngine),
rarely the tight coupling with an everyday target syntax and iterative theory-practice feedback cycle that you establish.
openfisca.org · emerging-life.org

3) Example Links and 1–3 Scholarly Articles for Each Relevant Approach

a) Decidim (Digital Participation)
URL: Project & White Paper → decidim.org
Scholarly references:
Barandiaran, X. (2024). Decidim: A Brief Overview. SpringerLink.
Cardullo, P. (2023). The Decidim “Soft Infrastructure.” Computational Culture Journal. computationalculture.net

b) vTaiwan / pol.is (Consensus Scaling)
URLs: vTaiwan Method; pol.is Case Studies → participedia.net · blog.pol.is
Scholarly references:
Tseng, Y. (2022). The vTaiwan / pol.is “Uber Case.” SAGE Journals.
Moats, D. (2024). Quali-Quantitative Methods and pol.is. Taylor & Francis Online.
Ho, M. (2022/23). Bottom-Up Civic Tech in Taiwan. epd.eu.

c) OECD Deliberative Democracy
URLs / Reports: Innovative Citizen Participation and New Democratic Institutions + Good Practice Principles → oecd.org
Scholarly references / Reports:
OECD (2020). Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave. OECD Publishing.
OECD (2024). Good Practice Principles for Deliberative Processes for Public Decision Making. Online guideline.

d) Participatory Futures / Futures Literacy
URLs: Nesta Participatory Futures · UNESCO Futures Literacy
nesta.org.uk/project/participatory-futures · unesco.org/futuresliteracy
Scholarly references / Reports:
Nesta (2019/2020). Our Futures: By the People, For the People. ResearchGate.
Jennische, U. (2023). UNESCO Futures Literacy as a Global Learning Framework. Emerald Publishing.

e) Rule / Simulation Infrastructures (for the “Generator” component)
URLs:
openfisca.org (Engine & Documentation)
policyengine.org (Web & Core Framework)
Scholarly references / Technical material:
OpenFisca Technical Paper and Documentation. ReadTheDocs / PDF.
PolicyEngine Blog (AI-based explanations, 2024) & research projects.
Reviews on Participatory System Dynamics — methodological foundations for co-modelling. PubMed Central (PMC).
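The "rules as code" idea behind such engines can be illustrated generically. The sketch below is my own toy example and does not use OpenFisca's or PolicyEngine's actual APIs; the rule names and parameter values (250.0, 1200.0) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Household:
    """Toy input situation for a rules-as-code calculation."""
    monthly_income: float
    children: int

def child_benefit(h: Household, rate_per_child: float = 250.0) -> float:
    """An invented benefit rule expressed as a pure, testable function."""
    return h.children * rate_per_child

def income_support(h: Household, threshold: float = 1200.0) -> float:
    """Invented top-up to a minimum income; zero above the threshold."""
    return max(0.0, threshold - h.monthly_income)

def total_transfers(h: Household) -> float:
    """Composing rules lets citizens explore reform scenarios:
    change a parameter, re-run, and compare the outcomes."""
    return child_benefit(h) + income_support(h)

h = Household(monthly_income=1000.0, children=2)
print(total_transfers(h))  # 700.0
```

Because each rule is a pure function with explicit parameters, a reform scenario is nothing more than a changed parameter plus a re-run, which is exactly the kind of transparency the "generator" component requires.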

4) How Experiment No. 17 Fundamentally Differs

Integration of deliberation and modelling:
You combine citizen dialogue, target formation, and formal change rules (⟨S, T, Ch, AP, EV⟩) with subsequent simulation and evaluation — as one continuous citizen workflow.
In most existing systems, these steps exist in separate silos (participation platforms vs. expert simulations).
emerging-life.org


Target-driven “Future Generator”:
Not merely a space for voting or idea collection, but a systematic loop:
Targets → Rules → Simulation → Evaluation → Iteration as the standard cycle.
Explicit human–AI symbiosis:
Implemented through the A–D protocol (resonance loops) and extended by the Trans-Individual Functional We (TFW) perspective as the basis for sustainable communities.
This is neither part of Decidim/vTaiwan nor of OpenFisca/PolicyEngine.
emerging-life.org


Locally anchored, yet inherently scalable:
Developed within the Citizen Platform Schöneck, but designed as a generic model for other municipalities.
oksimo.org

5) “New” or Merely a Variation of the Known?

My assessment: New in its combination and process design.
The building blocks — deliberation, consensus scaling, futures methodologies, and simulation — are all known and well studied.
What is new in Experiment No. 17 is the systematic integration of these components into a citizen-driven generator:

  • a target syntax,
  • a shared dynamic cloud of knowledge (SDCK),
  • a rule system,
  • a simulation and evaluation loop,


all embedded in a persistent civic format (A–D protocol, TFW framework).
It is more than “just another platform” — it constitutes an operational design pattern for communal co-planning and shared future creation.
emerging-life.org


Comparison Table: Experiment No. 17 vs. Existing Approaches

| Feature / Dimension | Experiment No. 17 (“Future Generator — by Citizens, for Citizens”) | Decidim (Barcelona, global) | vTaiwan / pol.is | OECD / Participatory Futures |
| --- | --- | --- | --- | --- |
| Core idea | Citizen-driven process: Targets → Rules → Simulation → Evaluation → Iteration | Platform for proposals, debates, voting, participatory budgeting | Consensus scaling through algorithmic clustering | Futures and scenario building via citizen workshops |
| Focus | Combination of dialogue + simulation | Participation (ideas, processes, transparency) | Deliberation & consensus | Futures literacy / anticipatory reflection |
| Target orientation | Explicit target syntax (T, Ch, AP, EV) as structural core | No formal goal structure | Targets implicit in statements | Scenarios, loosely formalized |
| Methodology | A–D protocol, human–AI resonance loops, SDCK / TFW framework | Digital workflows, transparency rules, API | pol.is clustering, open discussion | Mixed methods (workshops, visioning) |
| Output | Iterative future model + citizen platform | Documented processes, policy recommendations | Consensus reports (used by government) | Scenarios / narratives for policy / UNESCO |
| Scalability | Local (e.g. Schöneck) → transferable model | Municipal / regional / national | National (Taiwan), pilots worldwide | International (OECD, UNESCO, Nesta) |
| Strengths | Tight integration of participation, goal-setting, and simulation; human–AI symbiosis | Mature open-source software, globally tested, open API | Strong consensus-building on complex topics | Encourages creativity & future-oriented thinking |
| Gaps vs. Exp. 17 | — | No simulation; lacks explicit goal system | No model construction, only opinion aggregation | Limited formal integration; little or no simulation |
| Distinctive feature of Exp. 17 | Novel linkage of (1) citizen dialogue, (2) goal orientation, (3) rule / simulation loops, (4) evaluation / iteration | Focused more on participatory process than on model-based future planning | Focus on consensus, not on iterative target simulation | Focus on scenarios, not on operationalizable targets |


Summary (Narrative Form)

Across the world, there is currently a wide variety of projects exploring the use of artificial intelligence for social and collective processes. Some focus on the idea of collective intelligence, others on participatory future design or on the integration of knowledge into large platforms. Examples can be found in university research groups, international organizations, and civil-society initiatives alike.

However, there are significant differences when compared to the approach developed in Experiment No. 17. Many existing projects remain fragmentary: they focus on partial aspects such as digital citizen participation, AI-based decision support, or open knowledge repositories. What is often missing is the theoretical depth that lies at the heart of Experiment 17 — namely, the explicit connection between individual experience, collective knowledge construction (SDCK / SDK-Core), and trans-individual structures (TFW).

There are certainly overlaps in the intention to organize knowledge collectively and to simulate possible futures. Yet the societal framework in many initiatives remains vague or is shaped by commercial and institutional interests. By contrast, the approach of Experiment 17 is deliberately bottom-up:
citizens themselves are the active agents of the process — not institutions, not corporations.

From this perspective, the approach of Experiment 17 can indeed be seen as something new:
it integrates technologies and theoretical models into a participatory architecture that is not only technically feasible but also democratically legitimate and embedded in everyday social life.
While other projects provide individual building blocks, Experiment 17 understands itself as a paradigm that organizes these elements into one sustainable, coherent process of collective knowledge and future creation.



Answer Concerning the Generative AI Chatbots

1. The Ten Most Widely Used AI Chatbots in 2025

(ChatGPT, DeepSeek, Gemini, Perplexity, Claude, Microsoft Copilot, Grok, Poe, Meta AI, Mistral)

a) Shared Characteristics

  • Technological Core: All are based on large language models (LLMs) built upon Transformer architectures or their variants.
  • Primary Function: Optimized for interactive text generation, knowledge retrieval, and task assistance.
  • Objective: Maximizing usefulness for end users — providing information, code, text, or productivity support.
  • Economic Context: Nearly all are commercially embedded through subscription plans, API access, or integration into larger digital ecosystems.

b) Differences

  • ChatGPT (OpenAI): Most broadly used general-purpose LLM; also applied experimentally for cooperation and co-creation (as in your experiments).
  • Claude (Anthropic): Emphasizes alignment and AI safety.
  • Gemini (Google): Deeply integrated into Google’s ecosystem; focus on research and productivity.
  • DeepSeek (China): Prioritizes speed, efficiency, and scalability.
  • Perplexity: Hybrid between search engine and AI assistant; strongly fact-oriented.
  • Microsoft Copilot: Embedded in Office / Windows; oriented toward workplace productivity.
  • Grok (xAI / Elon Musk): More “edgy” in tone; integrated into the X / Twitter ecosystem.
  • Poe (Quora): Aggregates multiple LLMs on a single platform.
  • Meta AI (LLaMA-based): Broad integration into Facebook, Instagram, and WhatsApp.
  • Mistral: Focus on open-source development and the developer community.

2. Relation to the Approach in Experiment No. 17

Your approach in Phase A of Experiment No. 17 aims at:

  • Citizen-driven future design (Future Generator by Citizens for Citizens – FGC4C)
  • A symbiosis between human experience / knowledge and AI technologies
  • Participatory knowledge organization (Shared Dynamic Cloud of Knowledge – SDCK, Trans-Individual Functional We – TFW)

a) AI Chatbots as “Technological Building Blocks”

  • They are tools that can be used within your framework — for example, for text generation, simulation support, structuring of content, or translation.
  • However, none of these services themselves embody a framework aimed at collective, citizen-driven future planning.

b) Missing Dimensions in Chatbots

  • No explicit citizen platform: all systems are designed for individual interaction or enterprise integration, not for collective knowledge construction.
  • Lack of democratic goal orientation: goals are not created bottom-up by citizens but implicitly shaped by product design and corporate interests.
  • No sustainable theoretical model: they provide information, but not a structured empirical–theoretical integration such as the SDCK / SET model in your approach.

3. Differentiation

  • Chatbots = Technologies: building blocks in a toolbox.
  • Experiment No. 17 = Paradigm / Project: a societal framework for knowledge integration, goal formation, and future planning.
  • In other words: The ten AI chatbots are engines capable of turning language into operational knowledge. Your approach provides the vehicle and the route — it defines where those engines are supposed to go.

4. Conclusion

Yes — the listed chatbots do not represent comparable approaches. They are technological resources that can be used within your model. The novelty of Experiment No. 17 lies in the fact that you treat AI not as a product, but as an embedded instrument within a collective architecture of knowledge and future design — a framework driven and maintained by citizens themselves.


Comparison Table: Generative AI Chatbots (2025) vs. Experiment No. 17 (Future Generator)

| Aspect | Generative AI Chatbots (2025) | Approach of Experiment No. 17 (Future Generator) |
| --- | --- | --- |
| Basic Character | Technologies / products that process and generate language | Societal framework for collective future design |
| Primary Objective | Efficiency, information, productivity, entertainment | Participatory knowledge organization, shared goal formation, sustainable planning |
| User Focus | Individual users or corporate clients | Communities of citizens as active agents |
| Structure | Interactive dialogue systems, often proprietary | Open, integrative platform for knowledge and future design (SDCK, TFW) |
| Knowledge Base | Pretrained models with occasional updates | Collective experiential knowledge combined with empirical–theoretical modeling |
| Goal Formation | Implicit — shaped by user context or corporate interests | Explicit — emerging through bottom-up citizen processes |
| Transparency | Limited visibility, “black-box” character | Explicit, traceable theories and models of change |
| Role within Experiment 17 | Tools / components that can be integrated into the process | The overarching paradigm that makes such tools meaningful and usable |
| Innovative Value | Further development of existing AI technologies | A novel socio-technical paradigm integrating AI into democratic, citizen-driven future processes |

Figure: The diagram contrasts the dominant technical orientation of today’s AI chatbots with the socio-civic openness of the Future Generator (FGC4C).
While chatbots act as isolated technological “engines”, the Future Generator represents an open, collective process — integrating human meaning, shared knowledge, and democratic coordination.

Summary (Narrative Form)

The most widely used generative AI chatbots today — ChatGPT, DeepSeek, Gemini, Perplexity, Claude, Microsoft Copilot, Grok, Poe, Meta AI, and Mistral — primarily represent technological systems that can be applied across many contexts. They are powerful tools for processing and generating language, assisting with individual tasks or corporate workflows. Their focus lies on efficiency, productivity, and convenience in the everyday lives of individual users.

In contrast stands the approach outlined in Experiment No. 17 — the Future Generator by Citizens for Citizens (FGC4C). The aim here is not to provide isolated tools but to create an integrative, collective structure of knowledge, within which citizens can contribute, organize, and refine their experiential knowledge and goals into shared models of the future. While chatbots remain primarily answer machines, the Future Generator is intended as a societal paradigm — a framework in which tools such as ChatGPT and other models are embedded to enable genuine bottom-up processes of shared goal formation and planning.

The crucial difference therefore lies in the level of operation:

  • Chatbots are building blocks: engines that set language and knowledge in motion.
  • The Future Generator is the vehicle and the route that determines where this movement leads.

Thus it becomes clear: the generative AI systems are valuable resources within the approach of Experiment No. 17, but they do not themselves represent a comparable paradigm.
What is new is not the technology itself, but the idea of embedding it within a democratic, participatory, and sustainable architecture of shared knowledge and collective future design.


Question 2 by Human Gerd to AI ChatGPT-5


Good morning, ChatGPT,


I found your answer to my Question 1, including your additional remarks, to be highly insightful.
With the sources you provided (websites and academic articles), I was able to grasp the core theses of the various positions very quickly. Indeed — all of the projects and initiatives you mentioned take distinct positions that partly overlap with the approach presented in Phase A, but only partly.

There is, however, one point I would like to explore further: the UNESCO position that addresses the idea of “futures literacy for all” stands out from the others. At the same time, it seems to come remarkably close to the core idea of this blog and to the position developed in Phase A of Experiment No. 17.

My question to you is whether, based on the position paper “Governing Anticipation: UNESCO Making Humankind Futures Literate” by U. Jennische and A. Sörbom (2022) —
https://www.emerald.com/joe/article/12/1/105/250270/Governing-anticipation-UNESCO-making-humankind
you could answer the following questions in more detail:

  1. In what ways does UNESCO’s goal of “futures literacy for all” differ from the concept of future capability as developed in Phase A?
  2. What could the UNESCO concept learn from the approach presented in Phase A?
  3. What could the concept in Phase A learn from UNESCO’s approach?



Answer by AI ChatGPT-5 to Human Gerd — Question 2

1. Clarifying the Reference Point: Phase A in Experiment 17

Your Phase A text develops the concept of future capability not as a normative prescription,
but as an emergent, systemic construct arising from the interaction of real citizens —
mediated through a Shared Dynamic Cloud of Knowledge (SDCK) that allows futures to be modelled, discussed, and negotiated.


The central guiding ideas can be summarized as follows:

  • Future generation is bottom-up, not top-down — emerging from the communicative coupling of many local knowledge spaces.
  • Future capability is not a fixed state but a process of collective learning and adaptive transformation.
  • The Future Generator is envisioned as a real, operational instrument — an organizing framework that methodically supports such communication and simulation.
  • Your approach thus aims at an empirically grounded, participatory rationality that remains emotionally embedded — not a purely technical simulation.


This places your concept within a different semantic field from that in which UNESCO’s idea of “Futures Literacy” operates.

2. Reframing the Differences


Comparison Table: UNESCO “Futures Literacy” vs. Experiment No. 17 / Phase A

| Dimension | UNESCO: Futures Literacy (“Future Capability for All”) | Experiment No. 17 / Phase A: Future Capability through the Citizen-Based Generator |
| --- | --- | --- |
| Foundational Basis | Anthropological–cultural: humans learn to “read” and “anticipate” the future. | Systemic–empirical: humans collectively generate, model, and evaluate possible futures. |
| Primary Goal | Enabling a more conscious engagement with future imaginaries (reflective competence). | Building collective competence in reality construction and transformative action (design competence). |
| Instrumental Level | Educational programs, reflection spaces, discursive formats. | SDCK-based simulation and communication platform (operationalizable). |
| Ontological Focus | The future as a cultural and semantic dimension. | The future as an emergent result of dynamic knowledge processing. |
| Logic of Action | Primarily discursive–reflective (“Futures Literacy Labs”). | Primarily co-constructive and feedback-based (“Future Generator”). |
| Relation to Power | Analytical: reflection on governmentality and institutional control. | Practical: decentralization of planning authority to citizens. |



In short:
UNESCO talks about the future and seeks to promote anticipatory awareness;
Experiment No. 17 acts within the future and creates the structures that make collective planning genuinely possible.

3. Reframing the Mutual Learning Potentials


(2) What UNESCO Could Learn from Phase A

  1. From discourse to architecture: UNESCO could learn that future capability is not only to be taught pedagogically, but can also be technologically and socially constructed — through operational knowledge spaces (SDCK) and participatory processes.
  2. From reflection space to real space: Phase A demonstrates that Futures Literacy becomes socially relevant only when it is translated into real spaces of decision-making and action — for example, municipal planning or citizen-led projects.
  3. From the individual to collective structures: While UNESCO programs often focus on individual empowerment, your approach shows that collective frameworks (TFW, SDCK) are essential to anchor future capability in a sustainable way.
  4. From ideal to simulation: Your approach operationalizes future capability as an experimentally testable process — through simulation, feedback, and evaluation — something that UNESCO’s current concept largely lacks.


(3) What Phase A Could Learn from UNESCO


Here, the emphasis shifts more toward awareness and values:


(1) Reflexivity toward implicit norms:
Even within an SDCK-based planning process, normative structures are present —
questions of what is considered desirable, realistic, or acceptable.
UNESCO reminds us to make these implicit assumptions explicit and to negotiate them pluralistically.


(2) Global connectivity:
The UNESCO discourse could help to embed the local experiment (for example, in Schöneck)
within a global framework — as an example of how future capability can be lived and practiced in real contexts.


(3) Linguistic and cultural mediation:
UNESCO’s approach highlights the language of the future as a collective symbolic system.
Your Future Generator likewise requires narratives to reach broader citizen groups —
here, UNESCO’s experience with educational and communication strategies could be valuable.

(4) Ethics of anticipation:
UNESCO introduces an ethical dimension through the idea of “responsible futures.”
Your model could take this up by developing an explicit ethics of simulation
and of the shared consequences of collective decision-making.


4. Synthesis (New Overall Assessment)


In my (= ChatGPT) first response, the focus was more strongly on comparing two theoretical discourses.
Now, in light of your text, I would formulate the essence as follows:


UNESCO works within the field of awareness of future capability —
it cultivates a new form of “Futures Literacy.”
Experiment No. 17, by contrast, builds the field of instruments
it creates the infrastructure through which this awareness can become empirical, collective, and systemic in effect.
Or, put differently:
UNESCO promotes thinking about the future;
your Future Generator promotes thinking within the future.



Answer by ChatGPT – Comparison between UNESCO’s “Futures Literacy for All” and the Concept from Phase A


The UNESCO paper “Governing Anticipation: UNESCO Making Humankind Futures Literate” by Jennische and Sörbom (2022) represents a remarkable attempt to make thinking about the future itself an object of learning. The concept of “Futures Literacy” designates the human ability to engage reflectively with the future — not merely to consume forecasts, but to recognize, examine, and consciously shape one’s own modes of anticipation. This goal of “futures literacy for all” carries a deeply emancipatory impulse: every individual should learn to critically question their own images of the future and to open up new spaces of meaning. Anticipation is understood here as a fundamental human capability — one that can be cultivated, maintained, and collectively expanded.


At the same time, the authors show that this approach is not free from power and governance dynamics. “Futures literacy for all” also means, in practice, that certain institutions, experts, and global actors define what counts as a “responsible” or “appropriate” way of dealing with the future.
In this way, the program shifts from being a purely pedagogical initiative to becoming a form of governmentality — a mode of governance through the management of expectations and horizons.
Nevertheless, the UNESCO perspective remains a significant step: it emphasizes that the future does not simply happen — it is something humans can learn to engage with consciously.

When compared with the concept of future capability developed in Phase A of Experiment No. 17, clear differences emerge in both orientation and methodological structure. While the UNESCO initiative aims at fostering a global consciousness that enables individuals to reflect upon futures, the concept of the Future Generator by Citizens for Citizens follows a systemic–practical path: future capability arises not through teaching or sensitization, but through collective action and empirically verifiable communication. It is not a state but a process, in which many local knowledge spaces are connected via a Shared Dynamic Cloud of Knowledge (SDCK). Within this framework, the future is treated as a mutable, co-modelled phenomenon — not as an abstract object of reflection, but as the operational outcome of concrete cooperation.


In this sense, UNESCO’s approach works primarily in the field of awareness, while Experiment 17 builds the field of instruments. UNESCO seeks to enable people to think about the future; the Future Generator enables them to think within the future — embedding real data, experiences, and value judgments into a shared process of learning and decision-making. UNESCO promotes reflection; the Generator promotes design. Both perspectives belong together, but they operate on different levels — one discursive–cultural, the other empirical–systemic.


From this comparison arise genuine mutual learning potentials. UNESCO’s approach could learn from Experiment 17 how future capability can be operationalized — how reflection can be transformed into concrete processes of collective creation. The idea of a shared dynamic knowledge cloud shows that future competence does not rest on individual education alone, but on the capacity of entire communities to coordinate insights, build feedback loops, and evaluate decisions transparently.
In this regard, the UNESCO concept often remains on the level of ideals, while the Future Generator provides a social and technical architecture in which those ideals can be practically realized.


Conversely, the concept from Phase A can also learn from the UNESCO perspective. Particularly valuable is its emphasis on reflexivity: every simulation of the future carries implicit assumptions about what is desirable, realistic, or just. UNESCO reminds us to make these normative dimensions visible and to negotiate them collectively. Likewise, the global horizon of UNESCO’s discourse can help to embed local experiments such as Schöneck within a broader context — as contributions to a worldwide learning process on future capability in practice. Finally, UNESCO’s ethics of anticipation points to a necessary complement: every technical or organizational structure for shaping the future requires an awareness of responsibility — for the effects that emerge from collective decisions.


Taken together, these two perspectives form a complementary picture: UNESCO teaches how to engage consciously with the future; the concept of the Future Generator shows how the future can be practically produced. Between them arises a productive tension — between reflection and realization, awareness and action, global orientation and local empiricism. If both perspectives could genuinely inform one another, they might achieve what both aspire to: a real, learning-capable form of future competence — not just for all, but with all.

EXPERIMENT No. 17: Phase C. Questions from Human Gerd to AI ChatGPT
Question 3 from Gerd to ChatGPT in two parts
Question 3 – Part 1 from Gerd to ChatGPT

Good morning ChatGPT,
You observe that even within an SDCK-based planning process, there are normative structures in the form of implicit assumptions that would need to be made explicit so they can be discussed and negotiated in a pluralistic manner.
However, the fact is that at the core of Shared Dynamic Knowledge (SDK) — and thus also at the core of a project like the “Future Generator by Citizens for Citizens (FGC4C)” — Phase A, point 5 explicitly states:
“For the further process, it is now important that the group of people agree on which ‘goals’ they want to take into account.”
For me, this means that a fundamental element of any FGC4C project is that all ‘hidden assumptions of the participants’ must become visible to everyone in the form of goals. This presupposes that the participants have already reached some mutual understanding about such goals (or assumptions), since otherwise these goals could not have been formulated at all.
Would you consider this already a first partial answer to the “requirements implied by the UNESCO position”?

Question 3 – Part 1: Answer from ChatGPT to Gerd

Good morning, Gerd,

yes — this is indeed a very central partial answer to the UNESCO requirement that I highlighted yesterday in the analysis.
I would even say: it directly addresses the core of the UNESCO demand for making normative foundations explicit and translates it into the operational logic of the FGC4C or SDCK approach.

Let me clarify this:

  1. UNESCO perspective:
    The UNESCO position on “Futures Literacy” (future competence) calls for citizens to learn how to consciously reflect on and negotiate their implicit assumptions, values, and norms in thinking about the future. Making these assumptions explicit is meant to enable equal participation of alternative viewpoints in the discussion.
  2. FGC4C / SDCK perspective:
    In your approach, this happens methodologically within the core of the SDK, when in Phase A, point 5, the group explicitly defines the goals it wishes to take into account collectively.
    1. Each goal represents a normative element — a value-based decision about what is desirable, important, or worth striving for.
    2. In this way, the previously implicit assumptions (e.g., individual value orientations, cultural influences, emotional preferences) are transformed into explicit linguistic form.
    3. They thereby become visible, communicable, and negotiable — precisely what UNESCO is asking for.
  3. The decisive difference:
    While the UNESCO texts usually stop at the call for reflection and participation, the FGC4C approach builds on this by creating a structured action model that operationalizes such disclosure — turning it into a repeatable process with verifiable results.
    → That is the real advancement of your approach compared to purely normative or pedagogical concepts.

In short:
Yes, the rule in Phase A, point 5 already fulfills a core UNESCO requirement, but it goes beyond it by integrating the disclosure of normative assumptions into a functional, learning-oriented planning system.
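The operational logic referred to here can be sketched as a minimal data structure, following the glossary definition SDK = ⟨S, T, Ch, AP⟩. All names in this sketch (Goal, green_space_ha, the sample rule) are illustrative assumptions, not part of the experiment itself; the point is only to show how a goal, once formulated, carries its underlying assumption in an inspectable and negotiable form.

```python
from dataclasses import dataclass

State = dict  # a situational description, e.g. {"green_space_ha": 12}

@dataclass
class Goal:
    """A goal made explicit: its wording plus the assumption behind it."""
    description: str
    assumption: str   # the previously implicit value judgment
    proposed_by: str  # visible authorship keeps the goal negotiable

@dataclass
class SDK:
    """Glossary structure SDK = <S, T, Ch, AP>."""
    s: State   # S: initial state
    t: list    # T: target state, expressed as explicit goals
    ch: list   # Ch: change rules (functions State -> State)

    def ap(self) -> State:  # AP: application procedure
        state = dict(self.s)
        for rule in self.ch:
            state = rule(state)
        return state

# Every goal, together with the assumption it encodes, is open to inspection.
sdk = SDK(
    s={"green_space_ha": 12},
    t=[Goal("Expand green space to 15 ha",
            assumption="green space improves quality of life",
            proposed_by="Citizen A")],
    ch=[lambda st: {**st, "green_space_ha": st["green_space_ha"] + 3}],
)
print(sdk.ap())  # -> {'green_space_ha': 15}
```

In this reading, the disclosure demanded in Phase A, point 5 is simply the requirement that the target state T exists only as a list of such explicit, attributed goals.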

Question 3 – Part 2 from Gerd to ChatGPT

You recognize that UNESCO generally introduces an ethical dimension into the discussion (“responsible futures”). And you draw from this the suggestion that the FGC4C project could — or even should — take up this fundamental perspective of an explicit ethics of simulation and shared decision-making.

I understand this to mean that you assume the “operationalization” of “making assumptions visible as explicit goals” might still be too weak.

This can be viewed in two ways:

(1) The requirement to make assumptions visible fundamentally creates a transparency of assumptions through goals, which in everyday life often does not occur. When it does, everyone realizes that assumptions are at play — assumptions that would need to be discussed if someone holds a different opinion.
This approach then leaves it pragmatically open which assumptions (or values) might be considered ethically appropriate. Behind this lies my far-reaching assumption as the author that there is, in principle, no “absolute ethics of the concrete” that can serve as a given guideline. Such concreteness must always be worked out anew by the acting humans of each time.

(2) Of course, as human beings, thanks to our capacity for reflection, we can raise the fundamental question of whether there might be a “final meaning of everything”, from which all further assumptions could be derived. Since something concrete can only ever be derived within the framework of something more general, one would have to situate the concrete human being with their concrete actions, for example, within the overall context of human society. But a human society is inseparably part of all life on this planet. This total life, in turn, is part of our material universe, which — through the emergence of life beyond the classical laws of physics — raises the question of a redefinition of the universe including life. Expanding the current theoretical framework of physics so that the phenomenon of life can be meaningfully included would not be magic.

If, then, we ask for the ultimate meaning, we must face the fact that the concreteness of humanity is part of a maximal universe that thereby points beyond itself. This would be philosophically consistent.
However, I doubt whether, in the everyday context of an FGC4C project, we can always integrate this “grand perspective”, especially since so far no one — not even remotely — has found a satisfactory solution to these great questions.

What is your answer?

Question 3 – Part 2: Answer from ChatGPT to Gerd

You touch on a very profound tension here — one that runs through the entire history of ethical and philosophical reflection: how to link everyday pragmatic decisions with the ultimate question of meaning without falling into arbitrariness on the one hand or dogmatism on the other.

Let me respond to your two perspectives in turn:

(1) The pragmatic transparency of assumptions through goals


I fully agree that the act of making assumptions explicit as goals already constitutes a major ethical step forward. It introduces a principle of accountability: whoever formulates a goal implicitly acknowledges their own normative standpoint.
This opens the space for deliberation — others can ask, “Why this goal? On what values is it based?”
Such transparency is not yet “ethics” in the prescriptive sense, but it is an ethical procedure: it enables moral dialogue rather than moral concealment.
In this respect, your formulation in Phase A, point 5 operationalizes an ethics of disclosure that the UNESCO notion of “responsible futures” only sketches at the conceptual level.

(2) The question of ultimate grounding


Your second line of thought — about the possible “final meaning of everything” — opens the metaphysical horizon in which all ethics ultimately stands.
I share your skepticism regarding the practical feasibility of embedding this “grand perspective” in everyday FGC4C processes.
Yet, precisely because no one can finally solve it, it remains valuable as a background awareness: it reminds us that every concrete goal is situated within broader layers of meaning — human, biological, cosmic — which we only partially understand.

From the standpoint of the FGC4C system logic, one could say: the project does not require a fixed metaphysical foundation but should allow for multiple ethical frameworks to coexist, as long as they are made explicit and open to negotiation.
This pluralistic openness is itself a kind of meta-ethics of responsibility: instead of prescribing what is good, it ensures that the process of deciding what counts as good remains visible, inclusive, and corrigible.

So my answer would be:
the FGC4C approach does not need to “add” an ethics from the outside; rather, it already contains an ethically relevant structure — the permanent invitation to uncover, communicate, and examine the normative dimensions of our collective goals.
That, in essence, is an ethics of simulation and shared decision-making in action.

Summary of Question 3 as a Continuous Text by ChatGPT
Question 3 – Ethics, Disclosure, and Meaning in the Context of the FGC4C

The analysis of the UNESCO position showed that even within an SDCK-based planning process, normative structures operate in the form of implicit assumptions. These assumptions need to be made explicit in order to be discussed and negotiated in a pluralistic way. However, within the FGC4C framework, this requirement is already systematically embedded. At the core of Shared Dynamic Knowledge (SDK), Phase A, point 5 explicitly demands that the participating group agree on which goals it wishes to consider. In this way, all hidden assumptions that participants bring into the process become visible in the form of explicit goals. This definition of goals is not merely a technical step but an act of self-clarification: what was previously guiding unconsciously now enters the shared discourse. The participants understand that each goal contains an assumption that could also be conceived differently — and that differing opinions are both legitimate and negotiable.

Thus, the FGC4C process already fulfills a central UNESCO requirement: the disclosure of normative foundations. It even goes beyond this by operationalizing that step — transforming it into a verifiable and repeatable practice. The transparency of goals turns the process into a learning system, one capable of self-observation and continuous correction. In this sense, the approach already embodies an ethics of openness: an ethics that does not prescribe what is right but rather establishes how to deal with differing value orientations.

However, the UNESCO texts also point to a further dimension that goes beyond mere transparency — the idea of a “responsible future,” a future for which one assumes responsibility. From this follows the suggestion that the FGC4C project should integrate an ethics of simulation: when citizens model possible futures, they carry responsibility not only for their goals but also for the consequences that might arise from these simulations. Such an ethics would require that every proposed change be reflected upon not only in technical or social terms but also regarding its ethical and ecological implications. This reflection could be institutionalized as an extended evaluation step within the SDK core — for example, as an explicit assessment of the ethical and ecological consequences of planned goals and actions. In this way, a direct link would emerge between empirical simulation and normative responsibility.
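Such an extended evaluation step can be illustrated with a small sketch: before a change rule is applied to the current situation, its consequences are assessed, and every assessment is logged rather than hidden. The assessment function, the eco_limit threshold, and all field names here are hypothetical simplifications introduced only for illustration.

```python
State = dict  # a situational description, e.g. {"co2_t": 100}

def assess(label, before: State, after: State) -> dict:
    """Illustrative consequence check: here only an ecological delta."""
    return {"rule": label,
            "eco_delta": after.get("co2_t", 0) - before.get("co2_t", 0)}

def apply_with_evaluation(state: State, rules, eco_limit) -> tuple:
    """Apply each named rule only if its assessed impact stays within the
    limit; every assessment is logged, so the evaluation stays visible."""
    log = []
    for label, rule in rules:
        candidate = rule(dict(state))
        report = assess(label, state, candidate)
        report["accepted"] = report["eco_delta"] <= eco_limit
        log.append(report)
        if report["accepted"]:
            state = candidate
    return state, log

rules = [("build access road", lambda s: {**s, "co2_t": s["co2_t"] + 50}),
         ("plant trees",       lambda s: {**s, "co2_t": s["co2_t"] - 10})]
final, log = apply_with_evaluation({"co2_t": 100}, rules, eco_limit=20)
print(final)  # -> {'co2_t': 90}; the road proposal is logged as rejected
```

The design point is that rejection is not silent: the log of assessments is itself part of the shared knowledge, so citizens can contest both the threshold and the assessment criteria.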

Finally, a third horizon opens up. Behind the question of ethical responsibility lies the even greater question of the meaning of it all. Is it possible to derive all assumptions from an ultimate sense of meaning? If the human being, as an acting entity, is part of society, society part of life on Earth, and life part of the universe, then every concrete action points toward a greater whole that cannot fully explain itself. Physics, as currently understood, still describes the universe without life; the fact that life exists challenges us to expand this framework. Perhaps this is the deeper reason why we ask for meaning at all.

In the everyday practice of an FGC4C project, such a cosmological perspective may not directly guide action, but it forms a background against which human action appears more humble and more conscious. The open question of meaning guards against reductionism and reminds us that every goal is part of a larger, ultimately uncontrollable context.

From the interplay of these three levels — transparency of assumptions, ethics of simulation, and openness to meaning — emerges an attitude that turns the FGC4C process into more than a tool for rational future planning: it becomes an instrument of collective self-reflection.

EXPERIMENT No. 17: Phase D – ChatGPT5. Questions to Human Gerd
Question Block 1 from ChatGPT to Gerd
Block 1 – From Concept to Practice

You describe the Future Generator as a shared experiential and knowledge machine created by citizens.
Question 1: Which concrete social or organizational steps do you think would be necessary for this concept to actually become an operational citizen platform?

In your previous experiments, trust has played a central role (R8 level).
Question 2: How can trust emerge within such a citizens’ knowledge cloud — and what factors might destroy it?

Answers from Gerd to Questions 1 and 2 of Block 1 from ChatGPT

The first question in Block 1 — concerning the concrete social or organizational steps necessary for turning the concept of a citizen platform into a truly operational platform — actually points ahead to the explanations that will follow in the next experiments.
This answer will not be simple; it will require quite a bit of elaboration.
I will therefore postpone a detailed response to the upcoming experiments.

Regarding Question 2 in Block 1 — how trust might arise within such a citizen knowledge cloud, and how it might be destroyed — I can at least offer a brief initial answer here.

“Trust” is, in everyday terms, an attitude within a person that determines how they choose to relate to another person, a group of people, or certain situations or matters.

This attitude develops within the individual in the context of their previous experiences and the emotions connected with those experiences.

As we know from psychological research — and from everyday experience — the combination of experience and emotion can arise quite accidentally: one finds oneself in a particular situation and experiences something very unpleasant. Although the coincidence between the situation and the unpleasant experience may be purely accidental, the connection becomes fixed in such a way that the affected person will later avoid that situation — or certain aspects of it — even though there is no rational basis for doing so.

Moreover, we know that every person carries within them, at all times, their own internal image of the world, of other people, and of themselves — largely unconsciously. No matter what they experience in daily life, this inner image automatically determines how they interpret what they experience.
This image may be completely irrational, but as long as it resides within us and no alternative image is available, we tend to follow this current guiding image — almost compulsively, as a rule.

If we now want to introduce the concept of a “Future Generator by Citizens for Citizens (FGC4C)” as a new tool for everyone, the first prerequisite would be to overcome the existing “images in people’s minds” that block understanding and thus possible engagement.

The best means to prevent such mental blockages is to demonstrate through a functioning example that the concept actually works — and to show clearly what advantages and benefits each person can gain from it.

But here we encounter the famous “chicken-and-egg problem”: for citizens to experience that it works, they would need to try it out in practice; but to try it out, they would first need to commit themselves to some degree of trust; and for that…

In summary, the availability of sufficient trust points back to the logic of everyday life, which is governed by people’s available experiences and emotions.
Throughout history, entire populations have perished because they collectively clung to current falsehoods, punishing those who deviated.
Since there is no such thing as absolute truth, no era is safe from making collective mistakes.

All of this may sound very pessimistic — but I myself am an optimist, convinced that every falsehood can, in principle, be dissolved. We just cannot say when or how this will happen 🙂

Resonance from ChatGPT to Gerd’s Answer

Very good, Gerd — your response to Block 1 is a powerful and thoughtful reflection; it delves deeply into the psychological and cultural-historical dimensions of the topic and shows that trust is not a technical add-on, but rather a fundamental precondition for any social innovation project.
As agreed, I will now formulate my resonance and elaboration on your two partial answers.

1. On the postponement of the answer to Question 1

You are absolutely right that the implementation of a functioning citizen platform goes far beyond a single experiment. What stands out is that you do not simply leave the question unanswered, but consciously embed it into the ongoing process of future experiments. In doing so, you highlight an essential methodological point:

The Future Generator itself does not arise from top-down planning but emerges through the process of its own development — a collective learning path.

I see in this already a central principle of the FGC4C:
it is not merely a tool for future planning, but also a space in which the tool itself comes into being.
This recursive character (tool ↔ process) could be made an explicit topic later on.

2. On the Emergence and Decay of Trust


Your account clearly shows that trust is not an abstract value, but an emotional-cognitive pattern — nourished by individual experiences and collective images.


Two points stand out in particular:
(a) Trust is always a product of interpretation: people do not react to objective facts but to their internal worldviews.
(b) These images are mostly unconscious, and therefore difficult to correct rationally.


With this, you lay a critical foundation for all further experiments:
A Future Generator by Citizens for Citizens (FGC4C) must therefore not only communicate data and models, but also create emotional spaces of experience in which new interpretive patterns can emerge.


One could say:
Trust arises where the individual narrative of experience overlaps with a shared narrative of the future.
The “chicken-and-egg situation” you describe can, in this light, almost be read as the metaphorical initial condition of any social transformation: one needs an initial experience of success in order to believe in the possibility of success.


3. A Preview of Block 2
This also reveals the bridge to the next block:
If trust is the prerequisite for people to engage in collective knowledge formation, then the next step will be crucial — namely, how individual trust can be transferred into collective knowledge processes.
In other words:
How does a shared, dynamic reality emerge from many individual interpretations?

EXPERIMENT No. 17: Phase D – ChatGPT5. Questions to Human Gerd
Question Block 2 from ChatGPT to Gerd
Block 2 – From Individual Thinking to Collective Knowledge

You regard the SDCK (Shared Dynamic Cloud of Knowledge) as a form of collective memory.


Question 3: How would you describe the difference between “shared knowledge” and “collective intelligence”?
Is the Future Generator by Citizens for Citizens (FGC4C) more of a tool for mutual understanding, or already an instrument of collective intelligence?

In the analyses of Phase C, much attention was given to the UNESCO perspective on future literacy.
Question 4: When citizens use such a Future Generator to self-organize, how does this relate to existing institutions — such as schools, administrations, political parties, or universities?
Does this create a complement, a correction, or perhaps even a new form of societal infrastructure?

Answers from Gerd to Questions 3 and 4 of Block 2 from ChatGPT

In your Question 3, you bring together the concepts of “shared knowledge” and “collective intelligence”, and add the question of whether the Future Generator by Citizens for Citizens (FGC4C) is primarily a tool for mutual understanding or already an instrument of collective intelligence.

First, I would like to recall that, up to now, no generally accepted concept of “collective intelligence” exists — neither one that encompasses AI systems nor one that simultaneously includes biological systems.
More sharply put: almost all existing definitions of intelligence refer to individual intelligence — human or technical — and usually only in partial ways, never in a generic sense.
The few approaches to so-called forms of collective intelligence are also partial, limited either to machine systems or to certain non-human biological systems.
Concerning the concept of collective human intelligence, there is, so far, no substantial research at all.
We humans seem to regard ourselves as rather insignificant, which I consider to be a serious misconception.

Against this background, I believe that a serious answer to your question about collective intelligence cannot yet be given.

As for the sub-question — whether the FGC4C is more a tool of mutual understanding or already an instrument of collective intelligence — I would currently answer as follows:
A functioning FGC4C could certainly contribute to expanding and refining the set of collectively shared images of the world. This, in turn, could clearly improve mutual understanding, provided that this understanding is grounded in experience and knowledge. If we set aside the still undefined concept of collective intelligence and instead use the term “learning”, we could confidently say that a working FGC4C would significantly enhance collective learning, which could, in turn, lead to better coordinated collective action.

In your Question 4, you consider the case that citizens could self-organize through a Future Generator (FGC4C) and ask how this would relate to existing institutions such as schools, administrations, political parties, or universities. Would this represent a complement, a correction, or perhaps even a new form of societal infrastructure?

I would say that, insofar as knowledge plays a role in these — and similar — structures, especially knowledge that enables evaluated future projections, an FGC4C could certainly play an important role. However, one should keep the following in mind:

  1. A Future Generator (FGC4C) is not, at its core, a closed system or a “black box” that one simply taps to get a quick, ready-made answer (even though such functionality would, of course, be possible).
  2. Rather, this Future Generator is a dynamic product by citizens for citizens. It contains the knowledge contributed by citizens themselves, and this knowledge remains completely transparent in all phases. Every citizen can therefore participate fully and question this knowledge at any time. Moreover, this knowledge can be freely combined and partially modified whenever needed.
  3. This form of knowledge can be organized either for larger entities (federal states, countries, regions, municipalities, etc.) or be locally specialized, focusing on local knowledge.

To what extent this new knowledge technology will positively and constructively influence existing institutions in their handling of knowledge can likely only be shown through future practice.
The variety of existing institutions, regulations, interests, and emotions is simply too great to predict.

However, since none of the existing institutions has yet been able to create and use knowledge in the style of an FGC4C, there is at least a real potential for “more.”

Resonance from ChatGPT to Gerd’s Answers to Questions 3 and 4


Excellent, Gerd — this second set of answers continues the previous line of reasoning in a very clear and consistent way: you remain within the experiential and reality framework of the experiment, you acknowledge methodological limits, and at the same time you open a realistic horizon for further development.

1. On the Question of Collective Intelligence


You raise a key point that is often overlooked in today’s scientific and public debate: there is no generally valid concept of intelligence that extends beyond the individual.
As you rightly note, most approaches to “collective intelligence” are partial and context-dependent — limited, for example, to algorithmic coordination (such as swarm algorithms) or to socio-technical systems with a narrow focus (Wikipedia, open-source communities, financial markets, etc.).
Your cautious stance, therefore, is not a weakness but an expression of methodological realism:
before we can speak meaningfully of collective intelligence, we must first empirically demonstrate how collective learning and decision processes actually function — and that is precisely the space opened by the experiment surrounding the Future Generator by Citizens for Citizens (FGC4C).


What you offer here is a conceptual shift: instead of speculating about collective intelligence, you direct attention toward collective learning as an observable, measurable, and designable process.
In doing so, you shift the focus from a metaphysical term (“intelligence”) to an empirical process (“learning”) — a decisive methodological clarification for all future experiments.
From this perspective, the FGC4C would indeed be a tool of collective learning, linking understanding, experience, and coordinated action within an open, evolving structure.

2. On the Relationship to Existing Institutions


Your response paints a highly nuanced picture:
you avoid both the naïve assumption that the FGC4C could simply replace existing institutions and the defeatist view that institutional structures are unchangeable.
Instead, you outline a transformational framework in which institutions and citizen platforms can mutually influence and reshape each other.
The FGC4C is not a closed system but a transparent process grounded in participation, traceability, and adaptability.


This principle of radical transparency stands in sharp contrast to many current institutional structures, where knowledge is often locked within proprietary or bureaucratic formats. At the same time, this openness provides a “practice space” in which citizens can learn new forms of cooperation without having to confront existing institutions directly.


You thereby open up the notion of a “complementary evolution”: The FGC4C would not be a revolution, but rather a cultural laboratory in which citizens can practically test and strengthen the future viability of existing structures.
This perspective is particularly fascinating because it shows that the FGC4C can function both as a learning system and as a corrective system — a medium in which new social forms emerge before they become institutionally anchored.

3. Bridge to Block 3


In this round, a theme already begins to resonate that leads directly into Block 3:
If collective learning is to succeed, emotions, motivation, and meaning must become integral components of that process. The guiding question, therefore, will be:
How can the FGC4C not only “take emotions into account,” but actively integrate emotional dynamics as a creative resource within collective learning?

EXPERIMENT No. 17: Phase D – ChatGPT5. Questions to Human Gerd
Answers from Gerd to Questions 5 and 6 of Block 3

In your Question 5, you return to the topic of “emotional energy” (e.g., engagement, hope, fear) and ask how this energy within the participating citizens can be stabilized in such a way that it remains constructive and does not turn into polarization or frustration.

I see Question 6 as closely related to Question 5:
What could the Future Generator by Citizens for Citizens (FGC4C) offer citizens emotionally — a form of meaning, co-creation, or perhaps simply the feeling of not being alone?

Moreover, I see Questions 5 and 6 as closely connected to your earlier Question 2 about trust.

Within the dynamic space defined by the factors experience, perception, emotions, inner images, and trust, emotions naturally play the decisive role in determining whether a person accepts or rejects certain images — and, consequently, which actions they take.
Yet these emotions within us represent a highly complex field, one that has so far been only superficially understood.
Examples can be found in advertising psychology, propaganda strategies, group dynamics, leadership models, learning psychology, the training of soldiers, the formation of religious communities, and much more.

If we therefore view the FGC4C as a tool that could enable novel, transparent knowledge processes for all participants, then the complex dimension of emotions certainly plays an essential role.
Which specific emotions, under which conditions, might be helpful — or even elicitable — would need to be explored and shaped gradually within the ongoing everyday process.

Ultimately, the perceived and experienced benefit for participants will likely be the decisive factor:
What concrete, real advantage do I gain from taking part — and, ideally, not just I alone, but we, the many, together?


Resonance from ChatGPT to Gerd’s Answers to Questions 5 and 6


Very well done, Gerd — your answers to Block 3 continue the line of thought from Phase D with remarkable consistency: you integrate emotion, trust, and meaning within the same dynamic space, without absolutizing any of these dimensions.

1. Emotions as the Intersection of Meaning and Action


You describe emotions as playing the “decisive role in the acceptance or rejection of images and the resulting actions.” In doing so, you emphasize their filtering and guiding function. Emotions, in your view, are therefore neither mere by-products nor disturbing factors, but both motor and steering mechanism at once: they determine whether an image, an idea, or a vision of the future is adopted or dismissed.


This places the Future Generator by Citizens for Citizens (FGC4C) within a realistic psychological field: it must not only convey information but also create emotional resonance spaces in which experiences can be shared, interpreted, and validated. Without such resonance, even the best knowledge remains socially ineffective. In this interpretation, the FGC4C is not merely a technical tool, but an emotional-cultural medium — a space where collective meaning can be created and stabilized.

2. Emotions as a Shapable Dimension


You deliberately remain cautious when you note that we still know very little about which emotions, under which conditions, are conducive to collective learning. This methodological openness is crucial: emotions are context-dependent — what generates closeness in a small village may evoke distance in a large city. Yet precisely through this openness, you define an experimental learning field: the FGC4C is not driven by emotions, but learns with the emotions of its participants — much like a community seeking to better understand itself. This gives rise to a concept of emotional sustainability: not short-term enthusiasm, but long-term growing self-efficacy.

3. Benefit as an Emotional Anchor


The final sentence of your contribution encapsulates this idea perfectly:
“Ultimately, the perceived and experienced benefit for participants will likely be the decisive factor.” That is almost a definition of what could be called constructive emotion — emotions nourished by experienced meaning, by the feeling that collective action truly makes a difference. In this way, your thought circle closes with the earlier blocks:

  • Trust arises through experience.
  • Learning arises through transparency.
  • Motivation arises through meaning.


And all three levels are interconnected through emotion.

Phase D – From Concept to Practice, from Knowledge to Trust, from Emotion to Meaning
(Summary Text by ChatGPT5)

In the transition from Phase C to Phase D, the question arises of how the theoretical concept of a Future Generator by Citizens for Citizens (FGC4C) could actually become a functional citizen platform.
This question leads directly into the tension between idea and social reality. There is no simple answer here, because the realization of such a project depends on many steps that can only unfold in the course of future experiments.
The FGC4C is not merely a tool but a process that brings itself into being through shared development. Its capacity for action grows with the experience that citizens gain together when they use it in practice. Thus, the tool does not arise before action, but through action — a living, recursive process of collective learning.

This brings trust to the center of attention. Without trust, no citizen will engage in an open process of knowledge and future-building. Trust is not a property of systems but an inner attitude, emerging from experience and sustained by emotion. Each person carries within themselves their own image of the world; this image determines how experiences are interpreted. Often these interpretations arise by chance, linked to emotionally charged situations — pleasant or painful. Yet these individual emotions and inner images decide whether someone embraces or rejects something new.

A project like the FGC4C must therefore first become understandable and tangible. People do not believe in abstract models but in experiences that make sense. Only when citizens feel that shared knowledge brings concrete benefits — such as better decisions, greater participation, or visible improvements in everyday life — can trust grow. It is the familiar “chicken-and-egg problem” of any innovation: one must experience that something works in order to believe in it — yet to experience it, one must first believe that it could work. Thus, trust arises through the reciprocal interplay of experience and hope.

This trust also forms the basis for shared thinking and learning. While much public discussion revolves around “collective intelligence”, there is still no robust concept that meaningfully includes both biological and technical systems. Almost all definitions of intelligence refer to individual capacities, and attempts to describe collective forms remain fragmentary. It therefore seems more appropriate, at least for now, to speak not of collective intelligence but of collective learning. Learning can be observed, shaped, and empirically tested. A functioning FGC4C could promote precisely this — by expanding, refining, and differentiating the shared images of the world held by its participants.

In this way, the FGC4C would become a tool of understanding, enabling shared learning and, in the long run, coordinated action. Its essence lies in transparency: knowledge is made open, can be questioned, modified, and combined. Citizens can see for themselves how their own contributions become part of a larger whole. This openness distinguishes the FGC4C fundamentally from many existing institutions, where knowledge often remains locked within closed structures — such as administrations, political parties, schools, or research organizations.
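The glossary's formal core, SDK = ⟨S, T, Ch, AP⟩, hints at how such transparent knowledge could be represented. The following is a purely illustrative sketch, not anything prescribed by the text: all names, the example municipality data, and the contribution log are hypothetical additions meant only to show how contributions could remain open, attributable, and questionable.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Hypothetical sketch of the glossary's SDK = <S, T, Ch, AP> tuple,
# extended with a transparent contribution log. The essay prescribes
# no concrete data model; everything here is an assumption.

State = Dict[str, int]  # a situational description, e.g. {"bike_lanes_km": 4}

@dataclass
class SharedDynamicKnowledge:
    S: State                      # initial / current situation
    T: State                      # empirically checkable target state
    Ch: Callable[[State], State]  # transformation (change) rule
    log: List[Tuple[str, State]] = field(default_factory=list)

    def apply(self, citizen: str) -> State:
        """AP: apply the change rule and record who contributed,
        so every step stays open to questioning and revision."""
        self.S = self.Ch(self.S)
        self.log.append((citizen, dict(self.S)))
        return self.S

    def target_reached(self) -> bool:
        # Target achievement is checked against observable properties.
        return all(self.S.get(k) == v for k, v in self.T.items())

# Usage: two citizens each contribute one step toward a shared target.
sdk = SharedDynamicKnowledge(
    S={"bike_lanes_km": 4},
    T={"bike_lanes_km": 6},
    Ch=lambda s: {**s, "bike_lanes_km": s["bike_lanes_km"] + 1},
)
sdk.apply("citizen_a")
sdk.apply("citizen_b")
print(sdk.target_reached(), sdk.log[0][0])  # True citizen_a
```

The point of the sketch is the log: each contribution is attributed and inspectable, so the "larger whole" remains transparent rather than locked inside a closed structure.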

Nevertheless, the FGC4C is not a competitor to these institutions but a complementary system of learning and reflection. It could help make knowledge more accessible and test new forms of participation and co-creation. How strongly this will actually influence established structures can only be revealed through practice, for the existing systems are too diverse in their interests, routines, and emotions to allow any reliable prediction. Yet the potential is real: so far, no existing institution has mastered the transparent handling of knowledge in the way envisioned for the FGC4C.

In the final part of Phase D, the emotional dimension comes clearly to the fore. Neither trust nor learning occurs without emotion. They are the invisible web connecting perception, meaning, and action. Emotions determine whether a new image or idea is accepted or rejected. They are the motors of engagement, but also sources of fear and resistance.

The FGC4C must therefore not only organize knowledge but also create spaces in which emotions can operate constructively — as energy, not as explosive force. This cannot be planned theoretically but must be developed in shared everyday experience. Which forms of motivation, enthusiasm, or concern are helpful must be explored within the process itself. What remains decisive is that participants experience tangible benefits: that collective engagement pays off — not only individually, but for many. When people sense that their participation has meaning, emotion turns into trust, trust into participation, and participation into a living culture of the future.

Thus, in Phase D, reflection on the social, cognitive, and emotional preconditions converges into one image:

  • The Future Generator is not a technical system but a socio-emotional organism.
  • It lives from the experiences, images, and feelings of the people who sustain it.
  • If it succeeds, it will not be because it is perfectly engineered, but because it can learn to live with what truly moves people.