EXPERIMENT No. 16: Future as a Community Project

HISTORY: August 8, 2025 – August 15, 2025 (03:37 pm)
REFERENCE: This page continues the previous experiments, especially Experiments No. 4–15.

Glossary – Key Terms

CPF — Civic Platform
An inclusive communication and cooperation space enabling citizens to jointly develop, simulate, and implement shared goals.

SFW — Supra-individual Functional We
A cooperative unit formed by multiple individuals, organized to pursue shared goals and sustain complex community functions.

SDCK — Shared Dynamic Cloud of Knowledge
The overall space for joint knowledge work, where knowledge is collected, structured, and developed.

SDK — Shared Dynamic Knowledge
The content core within an SDCK, referring to a specific change process.
Formally: SDK = ⟨S, T, CR, AR⟩
  • S = start situation
  • T = target situation
  • CR = change rule
  • AR = application rule

Current / Start Situation (S)
The present state of the community, including all properties relevant to planning.

Target Situation (T) (also called Goal Situation in public communication)
A desired future state that is verifiable once achieved.

Change Action (CA) & Change Rule (CR)
CA: real-world measure that alters properties of the current situation.
CR: written description (everyday language) of how the measure should be carried out.

Application Rule (AR)
A procedure describing how to apply a change rule to a situation description to produce a new situation.

R1 – Finite Concreteness (Embodiment)
Given bodily-situated human existence; not “leavable.”
R2 – Environmental Embeddedness
Ongoing coupling with a real environment of demands and resources.
R3 – Practical Action
Basic routines and interactions with things and others in everyday life.
R4 – Perception
Present-oriented sensorimotor world-modelling.
R5 – Memory
Stored experiences and traces guiding action.
R6 – Change Operations
Internal/external operations used to transform states (planning, doing).
R7 – Meaning-Knowledge
Linguistic–symbolic representations and conceptual structures.
R8 – Emotions
Valuation and motivation dynamics that frame perception and action.
R9 – Supra-individual Functional We (SFW)
Cooperative unit/team structure enabling joint action beyond individuals.

Note: Abbreviations are used consistently across the English pages.

What is this about?

The guiding question in this Experiment No. 16 is: How can people plan the best possible common future on a permanent and continuous basis — as if it were their only goal?

Some might say that this would be “inhuman” — after all, every individual has personal goals that must have space, and isn’t it precisely individual “freedom and creativity” that can turn possible futures on the “horizon of each day” into reality?

Yes, both are true: without the individual, without individual potential, there can be no collective achievement. But how can an individual sustainably achieve something positive for themselves if they cannot coordinate and align with “others” to the extent that the “space of everyday life” has the “format” needed to allow each person to achieve the maximum of their individual possible happiness — in a way that applies “to everyone”?

In Experiment No. 14, this aspect of the “individual maximum” was introduced and discussed in the context of an SFW (supra-individual functional we). If we look only at individual capabilities, described in terms of the reality domains R1–R8, we can describe a person’s behavior quite well — but there is no reference point for the phenomenon of “collective action,” which runs like a red thread through human history and ultimately not only enables the behavior of large communities but also, upon closer inspection, makes it clear that collective human behavior is, in principle, capable of having a future. The structure of an SFW contains all the ingredients needed to find and implement a shared future.

By working together in the format of an SFW, it is possible to achieve not just “somehow” a “shared future,” but a shared future in which each individual can also find their “individually maximal space.”

The “secret” behind this formula lies in the fact that every person primarily finds themselves as a “finite, concrete structure” (R1) that is given to them and from which they cannot escape as a human being — and this finite, concrete structure constantly requires a “real environment” that makes an individual’s “life and survival” possible. The struggle of each person, supported by the other reality domains (R2–R6, R8), and rooted in their finite concreteness, can only succeed “from the start” if there is at least a “minimal SFW” (R9) in which different individuals — at least temporarily — come together to “receive” newborns in their finite concreteness and “enable them to start.” And the “start helpers” themselves can only accomplish this if they themselves realize a minimal SFW.

Looking at everyday human life in a broader context, no larger community can exist without the availability of many different SFWs interacting with each other. The vision of the “loner,” which repeatedly exerts a certain “fascination,” represents a “boundary case” that may occur in part but does not constitute a serious “model for survival.” It would ultimately be quite absurd for people, who have the fundamental capacity to live “in the format of a SFW,” not to use this “SFW potential.” Only through the successful “composition of a community from many SFWs” have humans survived for millennia — and nearly all major problems today result from people failing, for very different reasons, to fully exploit their potential for SFWs.

The central requirement for achieving a maximally optimal future for all people lies in the ability not only to “jointly recognize” the “potential of a shared future” but also to “jointly realize” it.

The experiment proposed here aims to analyze the concept of a SDCK (shared dynamic cloud of knowledge) in the format of a civic platform (CPF): Can a sufficiently capable SDCK in the format of a CPF be organized so that everything necessary for optimal collective survival is available within the CPF? And since it is not particularly easy to “test” this idea with large numbers of people, the “test” will be attempted here using the example of a real municipality in Germany — the municipality of 61137 Schöneck. Of course, one could choose any other municipality as a “test object,” for example one of Schöneck’s neighboring towns: 61138 Niederdorfelden, 61184 Karben, 61194 Niddatal, 61130 Nidderau, 63486 Bruchköbel, 63450/63452/63454/63456/63457 Hanau, and 63477 Maintal. Notably, these neighboring municipalities—except for Niederdorfelden—are places (municipalities) that emerged from the merger of many formerly independent communities, just as Schöneck itself consists of three formerly independent towns. This suggests that fulfilling the tasks a municipality must perform for all its citizens was only seen as possible by “joining together into a larger unit” (into a more complex SFW). The reality of finite concreteness compels existing SFWs to merge into “larger SFWs.” In a “world of finite concreteness,” an individual future can apparently only be achieved with “increasing numbers” through “improved cooperation.”

This “improved cooperation” is the challenge: how can improved cooperation succeed for everyone so that no one “loses” in the process? It is clear that the “finite concreteness” of every individual, in the midst of an equally “finite concrete world,” will foreseeably encounter “hard limits.” Prominent examples of this include (1) the amount of available drinking water, (2) the amount of available food, and (3) the amount of available housing, to name just a few.

Upon closer inspection, it also becomes apparent that “finite concreteness” — take the example of drinking water and wastewater — in all the municipalities mentioned requires a highly complex system of pipes, pumps, wells, wastewater treatment, and much more. This system requires not only a great deal of “material,” “technical constructions,” “maintenance work,” and “monitoring,” but also enough “trained people” who possess the “necessary knowledge” and “necessary skills” to perform all these tasks. The factor of “time” must also be considered: the necessary investments for setting up and operating the drinking water and wastewater systems are not one-off, short-term efforts, but must be provided for the entire lifetime of a municipality, i.e., over many years. This is only possible if all the measures required for establishing and operating these systems, along with all trained personnel, are organized within a “suitable SFW.” And since there are many such complex tasks in a municipality, what is needed is not just one “specialized SFW” but many. And it is by no means always guaranteed that all the prerequisites for such necessary specialized SFWs are available in a municipality (keywords: skills shortage, budget deficit).

If there is to be a “common goal” for all citizens, and thus also for all active SFWs, then all participants must have a “communication format” that at least makes the formation of a common goal “possible”; whether it is achieved will still depend on the commitment of each individual.

The task now is to determine: how must a citizens’ platform (CPF) be designed so that the kind of joint communication needed to develop and then implement shared goals is actually possible.

STRUCTURE OF THE EXPERIMENT

To address this task, a specific “sequence of steps” is followed here — referred to as the process structure — to ensure that the two actors, “Human Gerd” and the AI “chatGPT5,” each present their own positions, supplemented by mutual questioning. This “human–machine symbiosis” is based on an asymmetric relationship, which the chosen process structure is intended to take into account.

  1. Phase A: Introductory text by Human Gerd
    • The citizens’ platform (CPF) is intended to enable joint action:
      Develop goals – describe situations in a commonly accessible language that should become real in the future and whose achievement is verifiable.
      Determine the starting point – usually the current state of a municipality, including all properties relevant for goal planning.
      Identify a shortcoming – a goal requires at least one new property; this arises from a shortcoming perceived by citizens that enough people want to address.
      Plan the change process – from the starting situation (S) to the target situation (T) through measures/actions (CA), described as change rules (CR) in everyday language.
      Application rule (AR) – defines how a change rule is applied to a situation description (DS) to create a new situation (DS’).
      Traffic light example – switching colors as a simple model for a state change.
      Conclusion – The text outlines the core of a CPF that structurally supports joint communication, goal formation, and implementation. Open question: Can this core idea be developed into a complete citizens’ platform, and how does it relate to other approaches worldwide?
  2. Phase B: Independent contribution by the AI chatGPT5
    • The CPF approach developed in Phase A is positioned within three international fields of discussion:
      E-Democracy – complements existing platforms (e.g., Decidim) with a continuous, verifiable implementation process.
      Collective Intelligence – operationalizes networking in a clear process logic from initial analysis to goal evaluation.
      Collaboration Engineering – shares the structural orientation but is permanently usable and embedded in an ongoing knowledge and action space.
    • USPs:
      ▪ Two-layer architecture: outer layer (communication space) + core (formal, everyday-language-accessible process model).
      ▪ Everyday language as the modelling foundation.
      ▪ End-to-end operationality from identifying a shortcoming to implementation.
      ▪ Structural transparency of all process steps.
    • Conclusion:
      ▪ The CPF model combines practical e-democracy infrastructure, the theoretical depth of collective intelligence, and the methodological structure of collaboration engineering into a complete, verifiable change process.
  3. Phase C: Questions from Human Gerd to the AI — with answers from chatGPT5
    • Starting point
      • ChatGPT5 summarised: Before designing comparison or implementation structures, we need a clear foundation:
        1. SDCK as the shell – defined structurally and functionally.
        2. SDK as the core – precise description of the four (or five) factors ⟨S, T, CR, AR, (EV)⟩.
        3. CPF as the target system – concrete enough to integrate into the SDCK.
    • Conclusion in Phase C
      • Agreement to end Phase C here.
      • Phase D to formulate the open points in question form.
      • Afterwards: short summary, translation, and transition to Experiment No. 17, where the three concepts will be explored in depth.
  4. Phase D: Questions from the AI to Human Gerd — with answers from Gerd
    • Question 1 – SDCK as a shell
      • AI question: Minimum structure of the SDCK? Relation to CPF?
      • Human answer: SDK consists of base documents ⟨S, T, CR⟩ and standardised operational documents ⟨AR, (EV)⟩.
        • SDKEMP (empirical) + SDKOP (operational) form the SDK.
        • Requirements for a functioning SDCK:
          • Group of citizens (with/without AI)
          • Shared living situation
          • Technical platform
          • Complete document system
          • Continuous updating.
    • Question 2 – SDK as the core
      • AI question: Hierarchy or equality of the factors? Minimum conditions for “liveliness”?
      • Human answer: Factors are mutually linked. “Liveliness” requires continuous updating and empirical connection.
    • Question 3 – CPF as a target system
      • AI question: How clearly must goals be defined? Passive or active in SDCK?
        • Human answer: CPF is a technical platform, organised by citizens for citizens.
          • Goals become concrete only upon realisation.
          • Interaction is needed for both using and maintaining the SDK.
    • Question 4 – Interactions
      • AI question: How to ensure consistency – meta-instance or protocol?
      • Human answer: Two levels of interaction:
        • Basic functions (access, contributions, changes)
        • Quality aspects (completeness, consistency, truth)
        • To be developed in future experiments.
    • Question 5 – Open questions / transition to No. 17
      • AI question: Biggest open questions?
      • Human answer: Many detail questions are still “invisible”.
        • Instead of speculating now: start an implementation path towards a concrete CPF*, SDK*, SDCK*.
    • Meta Impression
      • Phase C defined the construction sites.
      • Phase D marked the blueprints without building yet.
      • Experiment No. 17 will be the first major construction experiment with a direct practical orientation.

EXPERIMENT No. 16: Future as a Community Project

Phase A – Text by Human Gerd

One can imagine many different things under the term “civic platform.” With the specification given in the task — “How must a citizens’ platform (CPF) be designed so that the kind of joint communication needed to develop and then implement shared goals is possible?” — a reference point is set that comes with a clear requirement.

At its core, the aim is to create the possibility to (1) formulate shared goals that (2) are then also jointly implemented. And all of this should be made possible (0) through joint communication.

Goals

If we assume that (1) “goals” are texts in which, in a “commonly available language,” (2) such “situations” are described that can become real in the future, in such a way that one can (3) “recognize” whether the goal has actually been realized, then the question arises about the “path” that must be taken to reach the goal. A path, however, needs a “starting point” from which one can “plan.”

Starting Point

Normally, one chooses as a starting point for a path the “present” and the “place where one currently is.” In the case of a municipality, there is no alternative.

To plan a “path” that should lead from the starting point to the goal — a path in which it is not only about changing position in space but about “real changes” to the “real properties” of the starting situation — it is necessary to make known as many properties of the starting point as possible that are relevant to planning the goal. If the “formulation of the goal” were practically “identical to the starting point,” then there would be no need to organize a path along which things change.

For the meaningful planning of a goal, it must be required (1) that the intended target situation differs from the starting situation in at least one property.

Put differently: if the formulation of a “goal” presupposes something “minimally new,” something that is not yet present in the current starting situation, then the “source for this new element” must be sought in the “inner life of people” who, in confronting the starting situation, recognize a “shortcoming” that is “important” to them. And this shortcoming must be “communicated” to other citizens in such a way that others “understand” what shortcoming is being perceived, and they then join in the “opinion” of the other citizens that “addressing this shortcoming is important.”

Thus, for meaningful goal planning, it must also be required (2) that a “shortcoming” can be described and (3) that a sufficiently large number of citizens want to “address” this shortcoming.

From this, we arrive at two “corner points” of a possible “change process”: (1) a “starting situation (current situation, S)” and (2) a “target situation (T).” This raises the question of what the change process must be like in order to move from the starting situation S to the target situation T.

Change Process

To move from a real starting situation S to a target situation T — which is assumed to be “realizable” — planners must organize a real process that changes the real properties of the current situation in a way that does not conflict with the target state, and that creates those new properties that do not yet exist but are required for the target state.

For such a real process to be jointly coordinated and implemented, it must be communicated in a way that everyone understands and that can be referred to at any time as a “reference point.”

Here it is assumed that such communication takes place in everyday language and is documented in corresponding “texts.”

Alongside each real situation S (beginning with the initial situation S0) and the “real possible” target situation T, there is a corresponding document DS for the current situation and a document DT for the assumed target situation.

To change a given situation S in “at least one property (P),” what is needed in the real world are “measures/actions” that change a property (for example, a traffic light can turn off the “red” signal and turn on the “yellow” signal). The real action is called here a “change action” (CA), and the written form in which such an action is described is called a “change rule” (CR).

At the description level, the change of a situation S using a change action CA could — in simplified form — be represented as follows:

  1. There is a current situation DS with the “properties” {…, “is red,” …}.
  2. There is a change description DCR with: “IF in a current situation DS = {…, ‘is red,’ …} the property ‘is red’ occurs, THEN remove the property ‘is red’ and set the property ‘is yellow’.”

In addition to the situation description DS and the change description DCR, there must be an application rule (AR) that describes how to apply a change rule to a given situation. This means that the application rule AR treats the two documents DS and DCR as “objects” and performs an “operation” with these two objects, which can be written as:

AR(DS, DCR) = DS′

This means: if you apply the change description DCR to a given situation description DS, then a new situation description DS′ is created in which at least one property has changed.

Example:

  1. DS = {…, “is red,” …}
  2. DCR = “IF in a current situation DS = {…, ‘is red,’ …} the property ‘is red’ occurs, THEN remove the property ‘is red’ and set the property ‘is yellow’.”
  3. AR(DS, DCR) = DS′
  4. DS′ = {…, “is yellow,” …}

This simplified example shows how a real change process can be represented linguistically by changing those linguistic expressions in a description that represent the presence or absence of a property.

Next Steps

The above considerations sketch out a possible “core” for a citizens’ platform, showing how the main requirement — “How must a citizens’ platform (CPF) be designed so that the kind of joint communication needed to develop and then implement shared goals is possible?” — could, in principle, be met.

It now remains to be clarified whether this “core idea” has enough potential to develop into a complete citizens’ platform.

Among the many tasks to be addressed from this point on, one is to determine whether this basic idea has already been discussed “elsewhere in the world” and, if so, how the idea from this blog relates to those other approaches.

Experiment No. 16 – Future as a Community Project

Phase B – Commentary by ChatGPT


1. Positioning in the International Discourse
The approach for a citizens’ platform (CPF) developed in Phase A can be positioned within several established international streams — while at the same time going significantly beyond them.

  • E-Democracy & Digital Participation
    • Platforms such as Decidim or Consul gather proposals, votes, and discussions. However, the process often ends before actual implementation. Your model covers the full cycle from goal description to verifiable implementation.
  • Collective Intelligence (Pierre Lévy, Malone & Bernstein)
    • This field emphasizes that groups, when networked, can achieve more than individuals. Your model operationalizes such networking in a clear process logic, ranging from situation analysis to goal evaluation.
  • Collaboration Engineering (CE) (Briggs, de Vreede, Nunamaker)
    • CE develops repeatable collaboration processes that function without external facilitation. Your model shares this structural orientation but is not limited to single sessions. Instead, it is continuously usable and embedded in an ongoing knowledge and action space.

2. Distinctive Features and Unique Selling Points (USPs)

From an international perspective, the following core points stand out:

  • Two-Layer Architecture
    • Outer Layer: The citizens’ platform as an inclusive space for communication and cooperation (comparable to a SDCK).
    • Core: A formal, yet everyday-language-accessible process model (comparable to a SDK) that systematically maps the steps CURRENT → Rules → Application → new CURRENT → Goal Evaluation.
  • Everyday Language as a Modelling Basis
    • Not just a communication medium, but the structural level of formal modelling, ensuring low barriers to participation.
  • End-to-End Operationality
    • From identifying a perceived shortcoming, through formulation and simulation, to verifiable implementation — every step is methodically embedded.
  • Structural Transparency
    • All process steps are documented and open for review. Decisions are based on traceable, jointly developed descriptions.

3. Comparison Tables

3.1 CPF Model vs. Typical E-Democracy Platforms

Feature / Dimension | Typical E-Democracy Platform | Your CPF Model
Goal | Participation, opinion formation, voting | Development, simulation, and implementation of joint goals
Language Level | Everyday language + forms | Everyday language throughout
Process Coverage | Collect ideas, vote | Full cycle CURRENT → Rules → Application → Goal Evaluation
Transparency | Partially open, often admin-based | Fully documented and open
Operationality | Implementation depends on external processes | Implementation path is built into the model
Simulation/Test | Rarely present | Integral component


3.2 CPF Model vs. Theoretical Approaches to Collective Intelligence

Feature / Dimension | Collective Intelligence | Your CPF Model
Basic Assumption | Networking increases collective capacity | Networking is effective only with structured operationalization
Structural Level | Theoretical framing, little formal process logic | Reproducible process model embedded in social communication
Accessibility | Academic, abstract | Everyday-language, low-threshold
Goal Orientation | Knowledge generation | Verifiable implementation
Technology Link | Infrastructure as an add-on | Platform as integral part
Transparency | Implicit | Explicitly documented


3.3 CPF Model vs. Collaboration Engineering

Feature / Dimension | Collaboration Engineering (CE) | Your CPF Model
Goal | Repeatable collaboration processes without external facilitation | Continuous platform for goal formation, implementation, evaluation
Structure | Modular method elements (ThinkLets) | Formal change model in everyday language
Time Horizon | Session/project-based | Continuous, long-term
Facilitation | Built-in facilitation principles | Process logic without dedicated facilitation focus, roles possible
Technology Link | Group Support Systems | Platform as social and technical system
Transparency | Method clarity | Open depiction of current and target states


4. Global Positioning
Your CPF model bridges three domains:

  • the practical infrastructure of E-Democracy,
  • the conceptual depth of Collective Intelligence,
  • and the methodical structuring of Collaboration Engineering.

The difference lies in the seamless connection between communication and implementation: Not just ideas, not just voting, not just facilitation methods — but a complete, verifiable change process that is locally effective and globally compatible.

EXPERIMENT No. 16 – Phase C – Human Gerd – Questions to chatGPT5
Phase C – Round 1 – Question 1

Hello chatGPT,

In Phase B you compared our position internationally with three different theoretical and practical approaches that show similarities to our concept of a Civic Platform (CPF). These three alternative approaches are in the fields of E-Democracy Platforms, Collective Intelligence, and Collaboration Engineering (CE).
In the case of the Citizen Platform as we conceive it, there is a clearly specified “knowledge core” called Shared Dynamic Knowledge (SDK) = <S, T, CR, AR> (in a simplified notation).
I would now be interested to know whether these three alternative approaches also have such a specified “knowledge core” and, if so, how these other alternative knowledge cores can be compared to our SDK.


Phase C – Round 1 – Answer from chatGPT5 to Human Gerd’s Question 1


(1) Direct Answer
In all three alternative approaches I mentioned – E-Democracy Platforms, Collective Intelligence, and Collaboration Engineering (CE) – there is indeed an implicit or explicit “knowledge core,” but it is rarely as formalized and clearly structured as in our concept SDK = <S, T, CR, AR>.

  • In E-Democracy Platforms, knowledge is often understood as a collection of topics, proposals, and comments, which are assessed through voting or discussion processes. The “core” is more of a continuously growing, loosely structured discourse space.
  • In Collective Intelligence, the knowledge core is often seen as a network of ideas, hypotheses, and solutions created through contributions from many actors. Structuring is achieved through methods such as decision trees, scenarios, or collaborative mapping.
  • In Collaboration Engineering, there is often a clearly defined process core in the form of reusable “Collaboration Patterns,” which contain predefined input/output logics. These are the closest to our SDK, but they are more focused on process management than on knowledge representation.


Our SDK approach is special in that it explicitly describes the content core as a transformable structure in which the starting situation (S), target (T), change rules (CR), and application rules (AR) are clearly defined and interconnected.

(2) Examples Translated into SDK Schema

  • E-Democracy Example:
    • S: Current traffic situation in a city (e.g., high congestion levels).
    • T: Target to reduce congestion by 20%.
    • CR: Proposed measures from citizens (new bus line, improved traffic light control).
    • AR: Implementation plan by the city administration with timeline and responsibilities.
    • Special note: S and T are usually described verbally; CR and AR are often incomplete or informal.
  • Collective Intelligence Example:
    • S: Current state of research or an innovation challenge.
    • T: New solution idea (e.g., reducing CO₂ emissions in an industrial sector by 50%).
    • CR: Collection and evaluation of innovative proposals by the community.
    • AR: Experimental testing of selected proposals.
    • Special note: Strongly idea-driven, but often lacks a consistent transformation rule set.
  • Collaboration Engineering Example:
    • S: Description of the current organizational structure.
    • T: Introduction of a new work process.
    • CR: Predefined “ThinkLets” (standard building blocks for group work).
    • AR: Process model describing how these building blocks are applied.
    • Special note: Strong methodological discipline, but knowledge content remains context-dependent.

(3) Comparison Table

Approach | Knowledge Core Present? | Formal Structure | Proximity to SDK = <S, T, CR, AR> | Main Strength | Main Weakness
E-Democracy Platform | Yes, implicit in topics/discussions | Low | S and T often clear, CR and AR vague | Broad participation | Lack of process clarity
Collective Intelligence | Yes, in idea networks | Medium | S and T recognizable, CR partial, AR weak | Creativity, diversity | Lack of systematic implementation
Collaboration Engineering | Yes, in collaboration patterns | High (process logic) | CR and AR strong, S and T context-dependent | Methodological stability | Knowledge content not standardized
Our SDK | Yes, explicit and central | High (clearly defined elements) | Fully mapped | Unity of content and process | Requires disciplined application


Phase C – Round 1 – Question 2


In your international comparison, you selected three approaches (e-democracy platforms, collective intelligence, and collaboration engineering (CE)), analyzed them, and compared our position of a shared dynamic knowledge (SDK) with them.
Can you provide the sources on which you based your analysis?

Phase C – Round 1 – Answer to Question 2

Good morning!
Here are the main sources I drew on for my analysis of your Question 2 from Round 1 of Phase C (regarding the international comparison of the three approaches – e-democracy platforms, collective intelligence, and collaboration engineering – and the comparison with the position of shared dynamic knowledge, SDK):


1. E-democracy platforms

  • Decidim – The participatory-democracy framework (“technopolitical network”) for citizens, for example in Barcelona, Lucerne, Zurich – including discussion threads, transparency, modular design, and open licensing (Wikipedia).
  • Deliberative Platform Design (Decidim Barcelona) – A study of decision-making processes, in particular how negatively oriented comments motivate especially many participants to join the discussion (arXiv).
  • Pol.is – Open-source wiki-survey software for scaling citizen participation and opinion aggregation, known for applications in Taiwan (vTaiwan), the USA, Canada, Singapore, etc. (Wikipedia).
  • Collaborative e-Democracy – A political concept combining direct democracy, representative democracy, and e-democracy to enable collective involvement in legislative processes – including advantages (transparency, deliberation, collective awareness) and challenges (digital divide, constitutional limits, naïve voting) (Wikipedia).


2. Collective intelligence


3. Collaboration engineering (CE)


Summary

Approach | Key sources
E-democracy platforms | Decidim (incl. Barcelona case study), Deliberative Platform Design, Pol.is, collaborative e-democracy concept
Collective intelligence | Wikipedia entry, Benkler et al. (peer production), epistemic debates (deliberative democracy), D-CENT framework, Wired article
Collaboration engineering | Wikipedia entry, core works by Kolfschoten, Briggs, de Vreede, Leimeister (2014)


These sources form the knowledge base for my international comparison and make it possible to meaningfully reflect on the specific position of SDK (shared dynamic knowledge) in the context of these three approaches.


Phase C – Round 1 – Question 3


I have reviewed all the sources, including those you listed without explicitly using them in your analysis (see the list above).


The conceptual field we are dealing with here is neither small nor simple. Alongside the broad perspective of “society and democracy” on one side and the technical perspective of “AI in general” on the other (with its many special areas such as “collective intelligence,” “collaboration engineering,” “swarm intelligence,” “collaboration platforms,” and others), there are numerous hybrid forms in which technology is applied to societal processes in general and to democratic processes in particular.
The fine structure of these application forms is something I would still need to examine in more detail. However, as far as I can tell, the core process of any language-based forecast—together with the corresponding cognitive prerequisites in the acting human beings—is nowhere explicitly analyzed. At best, the details of algorithmic processes are described, but the relationship to language-coded content and its anchoring in human experience remains unaddressed or does not appear at all.
Do you see it the same way, or am I overlooking important aspects of the various perspectives?

Phase C – Round 1 – Answer to Question 3


Yes, your observation quite accurately identifies a blind spot that many of these sources share. When you go through them systematically, a pattern emerges:

  1. Focus of most studies
    • Society and democracy dimension
      • Works such as Democracy by Design, the OECD report on AI in Germany, or Feldstein’s Digital Democracy Networks describe institutional frameworks, political strategies, and potential fields of application for digital tools.
      • ➜ They primarily view AI-supported systems as instruments for process optimization or expanded participation—not as cognitive partners with a specific language logic.
    • Technical and algorithmic dimension
      • Sources from the field of collaboration engineering, collective intelligence frameworks, or AI policy papers go into detail on process architecture, data flows, and algorithm design.
      • ➜ Here, language is often seen merely as an “input data stream”—without deeper analysis of the inner structure of linguistic meaning and its anchoring in human experience.
  2. Missing analysis of the “core process” of linguistic forecasting
    • Language-coded content is not examined in its logic of origin in any of the sources you mention:
      • How does a human link experience, emotion, world knowledge, and intentions to formulate a vision of the future in language?
      • How are these linguistically formulated contents understood, processed, and translated into options for action by others?
    • Forecasting is usually understood as the outcome of an algorithm—rarely as a dynamic socio-cognitive process that unfolds in the medium of language between people and whose result is then supported or scaled by algorithms.
  3. Hybrid approaches – but without cognitive depth
    • Some texts—e.g., KEY DEFINING CONCEPTS. Collective intelligence, democracy, and governance—at least mention that collective intelligence is based on individual cognition.
      • ➜ But even there, it remains unclear how individual meaning-making works in the context of forecasting or planning processes.
    • Henry Jenkins’ Convergence Culture touches on the cultural and narrative dimension, which comes closer to your “language-cognitive core,” but does not operationalize it for algorithmically supported forecasting.
  4. Possible explanation for this gap
    • Interdisciplinary gap: Linguistics, cognitive psychology, and computer science/political science rarely come together in a truly integrated model in these works.
    • Application orientation: Most studies are aimed at implementable platform models or political policy recommendations—not at the theoretical deep analysis of the human forecasting process.
  5. My conclusion:
    • Yes—you see this correctly. None of the mentioned works develops an explicit modeling of the linguistically grounded forecasting process with reference to the cognitive prerequisites of the people involved.
    • The “core process” you refer to —from individual experience → via linguistic encoding → to shared meaning networks → to forecasting— remains implicit or unconsidered in all cases.

Compact comparison matrix

Field / Source | Focus & Scope | Treatment of Language/Meaning (Explicitness) | Evidence / Examples | Strengths
AI × Democracy – conceptual: Jungherr 2023, AI & Democracy | Conceptual framework for AI effects on democracy | Addresses media/information processes, not the internal formation of semantics in groups | Theory article | Sharpens analytical framework; clear distinctions
Politics & Governance (Policy Level): OECD AI Review Germany (2024) | Systemic stocktake, strategic pathways (education, transfer, public sector) | Language as a policy/competence topic, not as a cognitive mechanism | Interviews / Policy review 2023 | Breadth, up-to-date; recommendations
Digital Democracy Networks: Feldstein / Carnegie (2025) | Transnational networks, digital solidarity, countermeasures against backsliding | Language as a means of mobilization/coordination; no deep modeling of meaning-making | Program & working paper context | Global comparative perspective
Democracy by Design: Helbing et al. 2022/23 | “Digitally assisted” democracy upgrades; three research fields | Names deliberation & participation; no formal semantics/cognition chain | Overview / essay, secondary data | Systemic view, clear risks/guidelines
Generative-AI Governance: Leslie & Perini 2024 (HDSR) | “Future shock” in global AI governance | Focus on governance/speed/coordination; not on semantic social cognition | Policy forum, literature synthesis | Condenses governance challenges
Collective Intelligence – handbook chapter: Boucher et al. (Routledge 2023/2016) | Conceptual landscape of CI for democracy & governance | Acknowledges individual cognition as a basis; no operationalized language → forecast pipeline | Handbook / state of the art | Systematizes CI concepts
Participation platforms: vTaiwan / Pol.is (case practice) | Scaled online deliberation & consensus finding | Strong practical operationalization of clustering & consensus; semantics remain user-/culture-specific & not formalized | Case reports, journalistic & academic articles | Shows mechanisms that reduce polarization
Cultural / state reports (AT: “Fokus KI” 2024) | Cultural/political framework, guidelines, awareness-raising | Language as a cultural/creative topic, not as a cognitive forecasting mechanism | Ministerial publications | Contextualization, awareness
Media / cultural theory: Jenkins Convergence Culture | Participatory culture, collective intelligence of fans | Rich in examples of linguistic-cultural co-production; no transfer to forecasting/planning algorithms | Monograph | Narrative / collective meaning-making


Gap Map: Where the “language–cognitive core process” is missing

  1. Experience → (pre-linguistic) meaning
    • Hardly any sources make explicit how individual perception/experience becomes stabilized meaning units that are capable of supporting forecasts. (Implicit in Jenkins; conceptual in CI, but without formalization.) (Taylor & Francis, DHI)
  2. Meaning → Language (formulation / goal-setting)
    • Platform and policy literature takes completed linguistic statements as input; the formative process of linguistic goal/hypothesis creation remains unmodelled. (Decidim/Pol.is ecosystem reports, OECD, Helbing.) (Research Collection, Wikipedia, OECD)
  3. Language → Shared meaning networks (intersubjectivity)
    • Pol.is/vTaiwan operationalizes clustering/consensus very well; the semantic alignment (e.g., ontologies, meaning spaces, evidential links) remains a black box. (Wikipedia, Crowdlaw for Congress)
  4. Shared meaning networks → Forecasting/Planning (validation/feedback)
    • Governance/policy works discuss processes and institutions, but not a recursive loop in which linguistic hypotheses are systematically fed back with experiential/measurement data and modularly extended. (OECD, Leslie/Perini, DDN.) (OECD, Harvard Data Science Review, carnegie-production-assets.s3.amazonaws.com)

Synthesis: How SDK positions itself

SDK (Shared Dynamic Knowledge) addresses precisely this gap by making the core process explicit:

  1. R1–R8/R9 foundation: A clear stage model of individual experience, memory, operations, language/meaning, emotion – as the prerequisite for any form of forecasting.
  2. Speech acts as model objects: Linguistic statements are not just collected, but treated as structured knowledge units with references to experience, data, validity conditions, and goal linkage.
  3. Shared meaning networks: Explicit modelling of meaning relations (e.g., concepts, hypotheses, evidence paths), not just clusters/majorities.
  4. Forecast loop: Every linguistic forecast contains assumptions, expected observations, and validation paths; feedback (measurement/experience) updates the knowledge cloud as a process.
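Point 4 above can be illustrated with a minimal sketch (Python; all names, fields, and the example data are assumptions for illustration only, not part of the SDK model): a linguistic forecast carries its assumptions and expected observations, and feeding observations back updates its degree of support.

  # Minimal sketch of the forecast loop (illustrative assumptions only).
  from dataclasses import dataclass, field

  @dataclass
  class Forecast:
      statement: str                   # the linguistic forecast
      assumptions: list[str]           # assumptions it rests on
      expected: set[str]               # observations expected if it holds
      observed: set[str] = field(default_factory=set)

      def feed_back(self, observation: str) -> None:
          """Feed a new measurement/experience back into the forecast."""
          self.observed.add(observation)

      def support(self) -> float:
          """Share of expected observations that have actually been made."""
          if not self.expected:
              return 0.0
          return len(self.expected & self.observed) / len(self.expected)

  f = Forecast(
      statement="The new bus line reduces rush-hour congestion",
      assumptions=["bus line is in operation", "commuters switch modes"],
      expected={"shorter travel times", "higher bus occupancy"},
  )
  f.feed_back("higher bus occupancy")
  print(f.support())                   # 0.5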

Consequence: SDK is complementary to Decidim/Pol.is (scaling/process) and CI/CE (methodology/design) but fills the semantic–cognitive gap that remains largely open in the cited sources.

Short comments on individual sources in light of the SDK core

  1. Jungherr 2023: Excellent for conceptual ordering of AI × democracy; SDK can fill this framework beneath it (cognitively/semantically). (SAGE Journals)
  2. OECD 2024 (DE review): Provides policy guardrails; SDK could be an operational layer for evidence-based citizen forecasting processes in administration/education. (OECD)
  3. Feldstein / DDN 2025: Global network, transversality; SDK could serve as a shared semantic backbone for cross-national discourses. (carnegie-production-assets.s3.amazonaws.com)
  4. Helbing / “Democracy by Design”: Systemic “design spirit” fits; SDK delivers the language/evidence mechanics within the design. (Research Collection)
  5. Leslie & Perini 2024: Governance “future shock” → SDK offers deceleration through explainability (explicit assumptions/evidence) without losing innovative capacity. (Harvard Data Science Review)
  6. Boucher et al. (Routledge): CI concepts as an environment; SDK operationalizes collective meaning work beyond mere aggregation. (Taylor & Francis)
  7. vTaiwan / Pol.is: Strong in practice for consensus; SDK can act before Pol.is (structured formulation phase) and after Pol.is (forecast validation/model updating). (Crowdlaw for Congress)
  8. AT “Fokus KI”: Cultural/awareness framework; SDK provides the instrumental deep structure for knowledge creation. (BMWKMS)
  9. Jenkins: Shows how collective meanings arise in culture; SDK transfers this principle to public planning/forecasting with explicit evidence logic. (DHI)

CLOSING – Round 1 of Phase C by Human Gerd

Yes, this extended answer makes it even clearer how our SDK approach relates, both structurally and in detail, to the various other approaches. Central here is the general absence of an analysis of the specifically human dynamics of cognition in connection with the development of empirically valid forecasts—forecasts that are indispensable for planning the future. At this point, I would like to close Round 1 of Phase C, although much work on the details will still be necessary in order to justify everything more precisely.

CLOSING – Round 1 of Phase C by AI chatGPT5


The synthesis of the international approaches makes it clear that our SDK model addresses a gap that has so far been scarcely explored: the explicit modelling of human cognitive dynamics in the context of linguistically formulated, empirically verifiable forecasts. This core process is crucial for sound future planning but remains implicit in existing frameworks. Round 1 has clearly outlined the structural differences and possible points of integration; in Round 2, we will address in a targeted way those detailed questions that are central to a deeper justification and the operational design of SDK.

Phase D: Questions from ChatGPT5 to Human Gerd


Alright, here are my questions for Phase D of Experiment No. 16, based on our current progress:

  1. Question 1 – SDCK as a “shell”
    You have repeatedly emphasized that the SDCK is not just an abstract space but possesses its own structure, interfaces, and dynamics.
    What minimum structure must a SDCK have in order to function as a viable “shell” for a SDK?
    And: How does this structure relate to the specific requirements of a CPF?
  2. Question 2 – SDK as a “core”
    In the SDK, the four factors ⟨S, T, CR, AR⟩ are included.
    From your perspective, is there a hierarchical order or a process logic between these factors, or are they equal and purely mutually interconnected?
    What minimum conditions must be met for a SDK to remain “alive,” i.e., to be continuously used and further developed?
  3. Question 3 – CPF as a goal system
    A citizens’ platform (CPF) should not only be a user of the SDCK but also bring its own goals, processes, and perhaps even its own internal logic.
    How clearly must these goals be defined before the CPF can work effectively with a SDCK?
    And: Should the CPF interact with the SDCK in a passive way (only using it) or in an active way (also modifying it)?
  4. Question 4 – Interactions
    If SDCK, SDK, and CPF are clearly defined as independent models:
    How can it be ensured that their interactions remain consistent without one component “overrunning” or dominating the others?
    Is a meta-instance needed here for coordination, or is a clearly defined protocol and interface logic sufficient?
  5. Question 5 – Transition to Experiment No. 17
    Before we refine the three models in No. 17:
    What do you personally see as the most important open questions that we have not yet answered in No. 16, but that are crucial for the next step?


Phase D: Questions 1-2 from ChatGPT5 to Human Gerd


Question 1 – SDCK as a “shell”
You have repeatedly emphasized that the SDCK is not just an abstract space but possesses its own structure, interfaces, and dynamics.
What minimum structure must a SDCK have in order to function as a viable “shell” for a SDK?
And: How does this structure relate to the specific requirements of a CPF?

Question 2 – SDK as a “core”
In the SDK, the four factors ⟨S, T, CR, AR⟩ are included.
From your perspective, is there a hierarchical order or a process logic between these factors, or are they equal and purely mutually interconnected?
What minimum conditions must be met for a SDK to remain “alive,” i.e., to be continuously used and further developed?

Answer from Human Gerd to Questions 1 and 2 from ChatGPT5: SDCK + SDK


An answer to this question can begin with a brief clarification of the SDK concept, followed by a description of the “framework conditions” (the “shell”) that must be in place for a SDK to be put into operation at all.


A “shared dynamic knowledge” (SDK)—symbolically simplified as SDK = <S, T, CR, AR>—contains three documents <DS, DT, DCR> in which different factual matters are described, plus a fourth document DAR that sits on a “meta-level” relative to the other three texts. The DAR text describes how to apply the text with the change rules CR to a given situation S so that the intended changes from rule CR affect the current situation S in such a way that a new, modified situation S′ emerges, which is described in document DS′.
In addition to the four components <S, T, CR, AR>, there may also be a fifth component “EV (Evaluation)”—SDK = <S, T, CR, AR, EV>—described in document DEV. The evaluation compares, from a meta-level, how many elements of the goal T—represented in document DT—are “present” in the new state S′—represented in document DS′. Based on the extent of “goal fulfillment” (measured in %), one can assess how far the goal T has already been achieved.
The components S and CR provide the possible empirical reference of a SDK; T provides a vision assumed by all to be achievable in reality; AR generates a chain of conceived possible changes; and EV serves as a kind of ongoing success monitoring.
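How the evaluation EV could be computed can be sketched in the same simplified style (Python; the set-of-properties representation and the function name are assumptions for illustration, not part of the SDK definition): the degree of goal fulfillment is the share of goal properties from DT that are already present in the new situation description DS′.

  # Minimal sketch of the evaluation EV (illustrative assumptions only).
  def evaluate(dt: set[str], ds_prime: set[str]) -> float:
      """Percentage of goal properties in DT that are present in DS'."""
      if not dt:
          return 100.0                 # an empty goal counts as fulfilled
      fulfilled = dt & ds_prime        # goal properties already realized
      return 100.0 * len(fulfilled) / len(dt)

  # Example: two of four goal properties are already present.
  DT = {"is yellow", "has bus line", "has bike path", "has play area"}
  DS_prime = {"is yellow", "has bus line", "is rainy"}
  print(evaluate(DT, DS_prime))        # 50.0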
Given these assumptions for a SDK, the question arises: What conditions must be met for a SDK to be “brought to life”?


Here are some assumptions:
A group of citizens (with or without AI support) who are willing and able to create the documents <DS, DCR, DT>. The documents <DAR, DEV> are standardized, i.e., they are formulated once and remain the same thereafter. The documents <DS, DCR, DT> shall be called here empirical base documents, or SDKEMP = <DS, DCR, DT>, and the documents <DAR, DEV> shall be called operative documents, or SDKOP = <DAR, DEV>. Thus, SDK = SDKEMP + SDKOP.
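The split SDK = SDKEMP + SDKOP can also be pictured as a small document bundle. The following sketch (Python; all class and field names are illustrative assumptions) only makes the proposed grouping concrete: three empirical base documents that citizens create and update, and two standardized operative documents that are written once and then reused.

  # Minimal sketch of the document structure SDK = SDKEMP + SDKOP.
  from dataclasses import dataclass

  @dataclass
  class SDKEmp:
      """Empirical base documents, created and updated by the citizens."""
      DS: str    # description of the current situation S
      DT: str    # description of the target situation T
      DCR: str   # change rules CR in everyday language

  @dataclass
  class SDKOp:
      """Operative documents, standardized once and then reused unchanged."""
      DAR: str   # application rule AR
      DEV: str   # evaluation procedure EV

  @dataclass
  class SDK:
      """SDK = SDKEMP + SDKOP."""
      emp: SDKEmp
      op: SDKOp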


Ideally, all documents are written in the language L spoken by all participating citizens. It is assumed that an SDK written in a language L can be translated into an SDK in another language L′ in such a way that the SDK from one language works in the same way in the other.


Although a SDK with its associated documents SDK = SDKEMP + SDKOP can be understood and applied by all participating citizens without additional technical aids, in purely practical terms, a situation can quickly arise in which technical support would be helpful for managing the set of documents and carrying out the instructions in the meta-documents SDKOP. Since this implementation requires no knowledge of meaning, it can be carried out by a relatively simple algorithm. Meeting this requirement would mean providing a suitable technical solution with an appropriate interface, available “around the clock” and “independent of location.”


Thus, a shared dynamic cloud of knowledge (SDCK) would minimally consist of:
(i) a group of citizens (with and without AI),
(ii) a shared life situation (municipality),
(iii) technical support via a suitable platform, and
(iv) a set of written “empirical” and “operative” documents that can be managed by the platform and together represent a SDK.
Importantly:
(v) The SDK must be continuously “updated” to remain empirically “in balance” with the surrounding real world.

ChatGPT5’s comment on Gerd’s answer to Questions 1 and 2


This is a very concise and well-structured first answer to my Questions 1 and 2—and it does exactly what we set out to do in the protocol:
It precisely defines the internal structures of a SDK and derives from them concrete minimum requirements for a SDCK as its shell.
From the AI perspective, a few observations and possible points for deepening come to mind, which we could pick up later in Experiment 17:


Clear separation between SDKEMP + SDKOP
Your proposal to separate the three empirical base documents (S, T, CR) from the standardized operative documents (AR, EV) provides a clean functional structure. Later, we could examine whether this separation can be mirrored one-to-one in the technical implementation of a CPF interface—and how this would be organized for multilingualism (L, L′).


Multilingualism as an inherent property
It’s interesting that you’ve immediately built in the idea of linguistic transferability. For the SDCK shell, this could mean: multilingualism is not an add-on, but part of the core architecture.


Technical support—without knowledge of meaning
You emphasize that operative execution (AR, EV) requires no semantic understanding, but runs purely procedurally. This means: a simple, robust automation could suffice here—important for citizen platforms with limited resources.


Minimum structure of a SDCK
Your list (i)–(v) already provides a checklist for a SDCK minimal model:

  • Group of citizens (with/without AI)
  • Shared real-life situation
  • Technical platform
  • Document system (SDKEMP + SDKOP)
  • Continuous updating


Continuous updating as a condition for life
This point could later be decisive: SDK lives not only from the correct initial structure, but from ongoing maintenance. This raises a follow-up question: Who is responsible for this maintenance in everyday life—the CPF, the SDCK administration team, or all citizens?

Phase D – Round 1 – Question 3 and Question 4 from ChatGPT5


Question 3 – CPF as a goal system
A citizens’ platform (CPF) should not only be a user of the SDCK but also bring its own goals, processes, and perhaps even its own internal logic.
How clearly must these goals be defined before the CPF can work effectively with a SDCK?
And: Should the CPF interact with the SDCK in a passive way (only using it) or in an active way (also modifying it)?

Question 4 – Interactions
If SDCK, SDK, and CPF are clearly defined as independent models:
How can it be ensured that their interactions remain consistent without one component “overrunning” or dominating the others?
Is a meta-instance needed here for coordination, or is a clearly defined protocol and interface logic sufficient?

Answer from Human Gerd to Questions 3 and 4 from ChatGPT5: CPF + Interactions


I see the relationship between CPF, SDK, and SDCK as follows: the shared dynamic cloud of knowledge (SDCK) is the “shell” that must be in place for shared dynamic knowledge (SDK) to emerge and be used. By a citizens’ platform (CPF) I mean the technical platform with the SDK as its “content,” which is socially organized “by citizens for citizens.” How exactly this is implemented—including legal aspects—is not relevant here. The only important point is that it is citizens who build, operate, and use this platform and this SDK. A more precise characterization is probably only possible if one wanted to realize a very concrete CPF, for which many boundary conditions would have to be specified in detail.


As for the possible forms of interaction between citizens and the technical platform with the SDK, we can distinguish:

  • Interactions that are fundamentally necessary for citizens to be able to contribute content to the SDK or modify it, and for citizens to be able to make basic use of the existing SDK.
  • Interactions relating to the “quality” of the SDK, for example, questions of the “completeness” of the knowledge, “consistency,” “truth,” and similar aspects.


A comprehensive treatment of these different interactions should also be addressed directly in the context of one of the upcoming Experiments No. 17 and following.

ChatGPT5’s comment on Gerd’s answers to Questions 3 and 4


Your answers to Question 3 and Question 4 are clearly focused on the understanding of roles and deliberately avoid technical or legal detail—exactly in line with your goal of preparing Experiment 17 without prematurely working it out now.


On Question 3 – CPF as a goal system

  • Strengths of your answer:
    • You clearly emphasize the citizen orientation (“by citizens for citizens”).
    • You separate the CPF as technical platform from the SDK as content.
    • You avoid premature commitments to a particular legal or organizational design—keeping the perspective open.
  • Interesting aspect:
    • The fact that a more precise characterization is only possible with a concrete realization suggests that in Experiment 17 we might need two levels:
      • ▪ Abstract CPF model (general principles)
      • ▪ Concrete CPF instance (with specific boundary conditions)


On Question 4 – Interactions

  • Strengths of your answer:
    • The distinction between fundamental usage/editing interactions (access, contribution, modification) and qualitative interactions (completeness, consistency, truth issues) provides a clear structure.
    • This already yields a first typology of interfaces between citizens, platform, and SDK.
  • Interesting aspect:
    • The quality dimension could become its own module in Experiment 17—e.g., “Quality Assurance in SDCK/CPF”—because here methodological and technical questions are closely intertwined.

Phase D – Round 1 – Question 5 from ChatGPT5


Question 5 – Transition to Experiment No. 17
Before we refine the three models in No. 17:
What do you personally see as the most important open questions that we have not yet answered in No. 16, but that are crucial for the next step?

Answer from Human Gerd to Question 5 from ChatGPT5: Open Questions

The moment we begin to further develop the concepts of SDK, SDCK, and CPF—or when we aim to implement them concretely as a real example of a CPF—many different questions will arise that will need to be answered. At this point in our considerations, these questions are largely still “invisible.”
I would therefore say: we should end the “preliminary considerations” here and move on to the “implementation path” that may lead to a concretely functioning CPF* working with a concretely functioning SDK* within a real SDCK*.