This layout presents the principal protocol modules used to annotate and index negotiation sessions. Each module is described as an independent card with fields for intent, structural elements, markers, and encoding notes. The layout emphasizes referential continuity: modules include tokens and persistent identifiers that allow events and fragments to be linked across sessions. The presentation that follows is explanatory and archival; it documents representational conventions rather than operational instructions. Readers seeking the full assembly of modules into a session-level template may use the cross-reference tokens embedded in each card to trace inter-module dependencies and transitions.
Session framing
Session framing records the initial descriptive header for a negotiation exchange. A framing module specifies participant identifiers, declared affiliations when relevant, the temporal window for the session, and the immediate scope of discussion. It also records the agreed representational mode, for example whether utterances will be transcribed verbatim or summarized, and the citation conventions to be applied to referenced documents. Framing entries include a concise natural-language scope statement and a machine-friendly header that contains persistent identifiers, version tags, and timestamp anchors. The module documents preconditions such as required reference documents or prior-session anchors that must be loaded for correct interpretation of subsequent remarks. The framing card also includes metadata about permitted continuity operations, for instance whether cross-session citation tokens are allowed to carry forward previous annotations without revalidation. By capturing the session's initial state, framing supports consistent interpretation of later sequence events and clarifies the provenance context for all recorded elements within the session.
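Read in that light, a framing header can be serialized for archival systems in any structured format. The following is a minimal sketch in Python, assuming a JSON-style encoding; every field name, identifier, and timestamp shown is illustrative rather than mandated by the layout.

```python
# Minimal sketch of a machine-friendly framing header (illustrative fields only).
from dataclasses import dataclass, field, asdict
import json

@dataclass
class FramingHeader:
    session_id: str                       # persistent identifier for the session
    version_tag: str                      # revision of this framing record
    opened_at: str                        # ISO 8601 timestamp anchor
    closes_at: str                        # end of the declared temporal window
    participants: list[str] = field(default_factory=list)       # participant identifiers
    affiliations: dict[str, str] = field(default_factory=dict)  # declared affiliations, if any
    scope_statement: str = ""             # concise natural-language scope
    representation_mode: str = "verbatim" # e.g. "verbatim" or "summary"
    citation_convention: str = "inline-anchor"
    preconditions: list[str] = field(default_factory=list)      # required documents / prior-session anchors
    carry_forward_allowed: bool = False   # may cross-session tokens carry annotations without revalidation?

header = FramingHeader(
    session_id="sess-2024-017",
    version_tag="v1",
    opened_at="2024-05-02T09:00:00Z",
    closes_at="2024-05-02T11:30:00Z",
    participants=["P-01", "P-02"],
    affiliations={"P-01": "org-a", "P-02": "org-b"},
    scope_statement="Scheduling and document-handling terms only.",
    preconditions=["doc-114", "sess-2024-012#closure"],
)
print(json.dumps(asdict(header), indent=2))
```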
Boundary setting
The boundary setting module enumerates the explicit constraints and allowances that shape permitted discourse within a session. Boundaries are recorded as discrete rules with short identifiers, a plain-language explanation, scope metadata, and a coded severity level. A boundary entry specifies whether particular documents may be cited in full, whether anonymization or redaction is required for certain topics, and whether off-record commentary is permitted. The module also includes procedural boundaries such as interruption rules, permitted timing windows for responses, and the conditions under which breakout threads may be established and later reintegrated. Boundary entries include machine-readable selectors that enable automated checks for compliance when records are compiled or transformed. This formalization supports consistent enforcement of confidentiality and citation conventions across aggregated archives, while also enabling later reviewers to filter records by boundary tags to understand which parts of a transcript were subject to restricted handling or special citation rules.
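To make the machine-readable selectors concrete, the sketch below models a boundary entry and a simple compliance filter. The rule identifiers, severity vocabulary, and selector syntax are assumptions chosen for illustration, not values defined by the module.

```python
# Hedged sketch of boundary entries and a tag-based compliance filter.
from dataclasses import dataclass

@dataclass
class BoundaryRule:
    rule_id: str       # short identifier, e.g. "B-01"
    explanation: str   # plain-language statement of the constraint
    scope: str         # topic or lane the rule applies to
    severity: str      # coded level, e.g. "advisory" | "required" | "strict"
    selector: str      # machine-readable tag matched against record metadata

def records_under_rule(records, rule):
    """Return the records whose tags match the rule's selector."""
    return [r for r in records if rule.selector in r.get("tags", [])]

rules = [
    BoundaryRule("B-01", "Documents in scope 'pricing' cited only as summaries.",
                 scope="pricing", severity="required", selector="topic:pricing"),
    BoundaryRule("B-02", "Off-record commentary permitted in breakout threads.",
                 scope="breakout", severity="advisory", selector="lane:breakout"),
]

records = [
    {"id": "utt-041", "tags": ["topic:pricing"]},
    {"id": "utt-042", "tags": ["lane:breakout"]},
]

for rule in rules:
    matched = [r["id"] for r in records_under_rule(records, rule)]
    print(rule.rule_id, rule.severity, matched)
```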
Exchange sequence
The exchange sequence module specifies the ordered phases and state transitions that structure participant contributions. Sequences document nominal steps such as opening statements, clarification windows, proposal presentation, structured responses, and closure operations, together with explicit markers that indicate transitions between these states. Each step is described with the expected inputs and outputs, the permissible markers to indicate pauses or deferrals, and any timeboxing conventions that are applied. The module supports parallel lanes for subthreads and specifies reference tokens that map events in a subthread back to the parent session. Exceptions and escalation paths are recorded alongside the nominal sequence so that records preserve both the planned flow and observed deviations. By cataloguing sequence tokens and their permitted transitions, the module enables transcripts to maintain temporal fidelity and to reconstruct causality between statements, clarifications, and decisions while remaining neutral and descriptive about the content itself.
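One way to encode sequence tokens and their permitted transitions is a small transition map that can also flag observed deviations alongside the nominal flow. The phase names below are illustrative placeholders rather than a canonical vocabulary.

```python
# Sketch of sequence tokens, permitted transitions, and deviation detection.
PERMITTED = {
    "opening":       {"clarification", "proposal"},
    "clarification": {"proposal", "clarification", "deferral"},
    "proposal":      {"response", "clarification"},
    "response":      {"proposal", "closure", "escalation"},
    "deferral":      {"clarification", "closure"},
    "escalation":    {"closure"},
    "closure":       set(),
}

def deviations(observed):
    """Yield (index, from_state, to_state) for transitions outside the nominal flow."""
    for i in range(len(observed) - 1):
        src, dst = observed[i], observed[i + 1]
        if dst not in PERMITTED.get(src, set()):
            yield i, src, dst

observed = ["opening", "clarification", "proposal", "closure"]
print(list(deviations(observed)))   # [(2, 'proposal', 'closure')] (a recorded deviation)
```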
Clarification markers
Clarification markers are standardized tokens and metadata used to capture requests for explanation and to log subsequent resolutions. A marker entry defines the token to be used (a short identifier), an expected response window, and a schema for linking the clarification back to the origin utterance. Markers include contextual metadata that classifies the clarification as definitional, procedural, or technical, and they include a status field indicating whether the clarification was resolved, deferred, or escalated. The module also prescribes how to handle chained clarifications, where a response itself triggers further questions, and it documents the rules for appending clarified content so that reading order and interpretive linkage are preserved. By providing a consistent marker vocabulary and encoding rules, this module enables later readers to reconstruct how ambiguous statements were interpreted and what clarifications informed subsequent dialogue without introducing evaluative language.
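The marker vocabulary and chaining rules can be illustrated with a compact record type. The field names, status values, and identifiers below are hypothetical and stand in for whatever schema an archive actually adopts.

```python
# Sketch of a clarification marker record with parent links for chained clarifications.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClarificationMarker:
    marker_id: str                 # short token, e.g. "CLR-007"
    origin_utterance: str          # identifier of the utterance being questioned
    kind: str                      # "definitional" | "procedural" | "technical"
    response_window: str           # e.g. "same-session" or an ISO 8601 duration
    status: str = "open"           # "resolved" | "deferred" | "escalated" | "open"
    parent_marker: Optional[str] = None  # set when a response triggers a further question

chain = [
    ClarificationMarker("CLR-007", "utt-114", "definitional", "PT10M", status="resolved"),
    ClarificationMarker("CLR-008", "utt-117", "procedural", "PT10M",
                        status="deferred", parent_marker="CLR-007"),
]

# Reconstruct reading order for a chained clarification by walking parent links.
by_id = {m.marker_id: m for m in chain}
def lineage(marker_id):
    m = by_id[marker_id]
    return ([m.marker_id] if m.parent_marker is None
            else lineage(m.parent_marker) + [m.marker_id])

print(lineage("CLR-008"))   # ['CLR-007', 'CLR-008']
```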
Record references
The record references module defines persistent identifier schemes and citation conventions for linking items both within and across session records. Entries specify identifier formats, minimal metadata fields for unambiguous retrieval, and versioning rules that clarify how to cite specific revisions or transformed artifacts. Conventions for internal citations map fragments to prior-session anchors, while external citation rules specify how to document origin, authoring context, and access controls. The module also requires transformed artifacts such as redacted excerpts or summaries to retain provenance metadata so that later assemblages can trace derivation and authenticity. Reference tokens include machine-readable anchors that integrate with sequence and framing modules to provide a coherent graph of session events and citations, enabling continuity without asserting any outcomes or prescribing substantive usage of cited material.
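As an illustration of identifier formats and provenance retention, the sketch below assumes a "session/fragment@version" token grammar and a minimal derivation record; neither is the convention prescribed by the module, only one plausible encoding.

```python
# Sketch of a reference-token parser and a provenance-preserving derivation record.
import re
from dataclasses import dataclass

REF = re.compile(r"^(?P<session>[\w-]+)/(?P<fragment>[\w-]+)@(?P<version>v\d+)$")

def parse_ref(token: str) -> dict:
    """Split a citation token into the minimal fields needed for retrieval."""
    m = REF.match(token)
    if not m:
        raise ValueError(f"malformed reference token: {token!r}")
    return m.groupdict()

@dataclass
class DerivedArtifact:
    artifact_id: str     # identifier of the redacted excerpt or summary
    derived_from: str    # reference token of the source fragment
    transformation: str  # e.g. "redaction" or "summary"

print(parse_ref("sess-2024-017/utt-114@v2"))
excerpt = DerivedArtifact("sess-2024-017/utt-114-red@v1",
                          derived_from="sess-2024-017/utt-114@v2",
                          transformation="redaction")
print(excerpt)
```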
Index navigation and card semantics
Cards in this layout are designed to be modular and linkable. Each card contains a brief intent statement, a structural breakdown of fields, canonical marker tokens, and example encodings for persistent records. Cross-reference tokens allow readers to follow how a change in one module, for example a boundary amendment, affects sequence operations and reference handling. The layout uses a consistent visual grammar: header tokens, short identifiers, and labeled micro-fields that support both human readability and machine parsing. This section documents the semantics of card fields and suggests machine-friendly encodings for integration with archival systems. The presentation is explanatory rather than prescriptive, focusing on how to represent and connect protocol artifacts across sessions while preserving provenance and interpretability for later review.
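To show how cross-reference tokens support tracing the effect of a change, the sketch below encodes a few cards as dictionaries and walks their reference graph; the card identifiers and field names are invented for the example.

```python
# Sketch of card encodings and a walk over cross-reference tokens.
cards = {
    "frame-01":   {"intent": "session framing",    "refs": []},
    "bound-03":   {"intent": "boundary rule B-03", "refs": ["frame-01"]},
    "seq-02":     {"intent": "exchange sequence",  "refs": ["frame-01", "bound-03"]},
    "ref-map-01": {"intent": "record references",  "refs": ["seq-02", "bound-03"]},
}

def affected_by(card_id):
    """Cards that directly or transitively reference card_id (e.g. after a boundary amendment)."""
    hit, frontier = set(), {card_id}
    while frontier:
        nxt = {cid for cid, c in cards.items()
               if set(c["refs"]) & frontier and cid not in hit}
        hit |= nxt
        frontier = nxt
    return sorted(hit)

print(affected_by("bound-03"))   # ['ref-map-01', 'seq-02']
```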