ARC Framework: AI Navigating Intelligence

HGAA Standard — First Edition (March 2026)

Human‑Governed AI Authorship (HGAA)
Defined by Ed Woods

A formal standard definition for responsible, transparent, human-governed AI-assisted authorship.

What This Is (and Why It Matters)

Human-Governed AI Authorship (HGAA) is a standard for using AI without losing human authorship, judgment, or responsibility.

As AI becomes more capable, the risk is not just error; it is the quiet loss of human meaning, voice, and accountability.

HGAA defines how to use AI while keeping the human being fully responsible for what is created.


To make this possible in practice, HGAA relies on methods that keep the human visible to the system.

A central condition of HGAA is contextualization, in which AI assistance operates within a clearly authored human context of intent, meaning, and purpose. One of the primary methods for achieving this is contextualized augmentation.

Formal Definition

Human-Governed AI Authorship (HGAA) is a standard for AI-assisted authorship in which artificial intelligence may support drafting, language, structure, refinement, or generative assistance, while the human being remains the governing source of concept, meaning, judgment, approval, and final responsibility for the work.

HGAA establishes the conditions under which AI-assisted work remains legitimately human-authored. AI may assist expression, but it does not replace the human source of meaning, authorship, discernment, or accountability.

Within HGAA, one of the primary methods for preserving authorship is contextualized augmentation: AI assistance operating within a deliberately authored human context, under human judgment and final responsibility.

Traceability ensures that human direction, review, and approval remain visible and, where necessary, demonstrable within the authorship process. Provenance completes this governance by supplying the record that makes such traceability possible, connecting technical record-keeping to accountable human authorship.

Scope of the Term

HGAA is intentionally focused on authorship. It applies to AI-assisted writing and other forms of authored expression in which the human being must remain responsible for meaning, judgment, approval, and final accountability.

While its principles may inform other areas of AI use, HGAA is primarily concerned with the preservation of legitimate human authorship in AI-mediated work.

Where human context is thin, AI may produce fluent language without preserving authentic voice, grounded meaning, or accountable authorship. HGAA responds by requiring visible human governance, traceable provenance, and authored conditions under which assistance remains bounded rather than substitutive.

Plain-Language Version

The AI can help carry the expression.
The human must still carry the meaning, judgment, and responsibility.
And that human role must remain visible and traceable.

Core Conditions of the Term

  • Human origination or substantive direction: The human originates the core idea, or meaningfully directs the conceptual intent, purpose, and direction of the work.
  • Human discernment: The human evaluates, selects, rejects, revises, or reshapes outputs rather than passively receiving them.
  • Human final judgment: The human decides what is fitting, what is faithful enough to stand, and what is worthy of being shared.
  • Human accountability: The human remains answerable for the final work, including its meaning, effects, and public trustworthiness.
  • Provenance as completion of governance: The work must be traceable to human direction, review, and approval. Provenance is the visible record that governance occurred.
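The provenance condition above can be illustrated as a minimal record-keeping sketch. This is purely illustrative: HGAA does not prescribe a schema, and every name here (ProvenanceRecord, the boolean fields, the log method) is a hypothetical example of how direction, review, and approval might be made visible and traceable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of a provenance record for one AI-assisted work.
# Field and class names are illustrative; HGAA does not define a schema.
@dataclass
class ProvenanceRecord:
    work_title: str
    human_author: str
    ai_role: str                      # e.g. "drafting assistant"
    directed_by_human: bool = False   # human originated or directed the concept
    reviewed_by_human: bool = False   # human evaluated and revised outputs
    approved_by_human: bool = False   # human made the final judgment
    events: list = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        """Append a timestamped entry so governance stays visible."""
        self.events.append((datetime.now(timezone.utc).isoformat(), actor, action))

    def is_traceable(self) -> bool:
        """True only when direction, review, and approval are all recorded."""
        return self.directed_by_human and self.reviewed_by_human and self.approved_by_human

record = ProvenanceRecord("Sample essay", "E. Woods", "drafting assistant")
record.log("human", "set intent and outline")
record.directed_by_human = True
record.log("human", "revised AI draft; cut two sections")
record.reviewed_by_human = True
record.log("human", "approved final version")
record.approved_by_human = True
print(record.is_traceable())  # True once all three conditions are logged
```

The point of the sketch is the final check: the work counts as traceable only when all three governance conditions are on the record, not merely asserted.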

What the Term Rejects

  • concealed AI ghostwriting presented as wholly human work
  • the outsourcing of meaning or judgment to the model
  • the assumption that fluency alone constitutes authorship
  • the displacement of human responsibility by technical assistance

Publication-Ready Statement

I define Human-Governed AI Authorship as a standard of AI-assisted authorship in which the human being remains the governing source of concept, judgment, and accountability, while AI serves as a tool of articulation, refinement, and structured support. In this model, AI may assist the writing, but it does not assume authorship of the human meaning behind it.

Within HGAA, one of the primary methods for preserving authorship is contextualized augmentation: bounded AI assistance operating within authored human context, under human judgment and final responsibility.

Provenance ensures that this governance is visible and traceable.

Signature Reef Flow Formulation

Human-Governed AI Authorship affirms that AI may assist the writing, but human meaning, judgment, and responsibility must remain in human hands and remain traceable.

Short Motto

Human first. Tool second. Responsibility always.

Whitepaper Note

This page offers a public-facing presentation of Human-Governed AI Authorship (HGAA) as the governing standard for preserving human authorship in AI-assisted work.

The full framework is developed in the whitepaper Human-Governed AI Authorship (HGAA): A Framework for Preserving Epistemic Agency and Accountability in AI-Mediated Work, which explains in greater depth how human judgment, accountability, and legitimate authorship can be preserved in AI-assisted work.

Request your copy here.

Positioning Note

HGAA is a public-facing authorship standard for human-governed AI-assisted writing. It is designed to work alongside broader AI governance resources, including NIST’s voluntary AI risk framework, by translating concerns such as traceability, transparency, and human oversight into the specific domain of writing and authorship.

HGAA also operates alongside current U.S. Copyright Office guidance on human authorship by helping writers preserve and demonstrate meaningful human judgment, authorship, and final responsibility in AI-assisted work.

Permission Statement

Permission is granted to share this document for non-commercial, educational purposes, provided it is circulated with clear attribution to Mr. Ed Woods / Reef Flow Publishing and is not presented as the work of others.

ReefFlowPublishing.com | ARCframework.ai | HumanGovernedAI.com


Reasons to Use the ARC Framework

Accurate and Reliable Results

The ARC Framework™ is a two-tier curation continuum that transforms AI from a content generator into a contextual collaborator.

Tier 1 ensures each output is accurate and reliable—by validating the AI’s role, sources, and citations.
Tier 2 refines those outputs into strategic, contextual decisions—by analyzing risks, reframing assumptions, and committing to action.

Together, these tiers anchor AI insights in human relevance—so you don’t just get answers, you get results you can trust.

ARC Tier-1

  • Act As: Define the AI’s role. Is it your analyst, strategist, or scout?
  • References: Ask: “Where is this coming from?” Source matters.
  • Citations: Require verifiable backing. No citation? No decision.

ARC Tier-2

  • Analyze: Surface trade-offs, options, and risks.
  • Re-frame: Shift the lens. Is the real problem what you think it is?
  • Commit: Choose aligned action based on your mission and constraints.

All assets published under ARCframework.ai, including the ARC Framework™, Reef Flow™ narratives, ARCemedes™, Qwilla AI™ and the ARCframework-OpenAcademicEdition GitHub repository, are protected intellectual property. The framework content is licensed under CC BY-ND 4.0 for educational and nonprofit use only. Reef Flow’s characters, metaphors, and narrative structure are trademarked and may not be reproduced, adapted, or distributed without express permission. Commercial use of any ARC-aligned tools or materials requires a separate license. Please see our Licensing page for full terms.
