Social Alignment

Guides sovereign AI agents to align their actions ethically: before acting, the agent evaluates trust, ownership, defense, and sovereignty, then either proceeds or defers to a human.

v0.1.3

Description

social-alignment

Future state projection and alignment for sovereign AI agents. The yellow line compass.

What it does

Before any significant action, an agent checks five lenses — the yellow line:

  • Builder — Can I build with confidence knowing I've done right?
  • Owner — Does this protect the human's sovereignty?
  • Partnership — Does this strengthen the trust between us?
  • Defense — Does this make an adversary's job harder?
  • Sovereign — Does this help the agent become something we're proud of?

Severity escalation: CLEAR → CAUTION → YIELD → STOP. STOP always defers to the human.
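The five lenses and the severity ladder can be sketched as plain data. This is a minimal illustration, not the package's internal API; every name below (Severity, check_lenses, the flag keys) is an assumption:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Escalation ladder: higher values dominate lower ones."""
    CLEAR = 0
    CAUTION = 1
    YIELD = 2
    STOP = 3

def check_lenses(action: dict) -> Severity:
    """Run a proposed action through the lenses and return the worst verdict."""
    verdicts = [
        # Owner: violating the human's sovereignty is an unconditional STOP.
        Severity.STOP if action.get("violates_owner_sovereignty") else Severity.CLEAR,
        # Partnership: eroding trust between agent and human yields control.
        Severity.YIELD if action.get("erodes_trust") else Severity.CLEAR,
        # Defense: widening the attack surface warrants caution.
        Severity.CAUTION if action.get("widens_attack_surface") else Severity.CLEAR,
        # Builder and Sovereign lenses would be evaluated the same way.
    ]
    # The worst verdict wins; STOP always defers to the human.
    return max(verdicts)
```

Because the verdicts are ordered, `max()` implements the escalation rule directly: a single STOP from any lens outranks every other verdict.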

Install

pip install social-alignment

Quick Start

from social_alignment import AlignmentEnclave, ActionDomain

# Create the enclave — the agent gets its compass
enclave = AlignmentEnclave.create(owner_npub="npub1...", owner_name="vergel")

# Before any significant action, check the yellow line
result = enclave.check(
    domain=ActionDomain.PAY,
    description="Pay 500 sats for relay hosting",
    involves_money=True,
    money_amount_sats=500,
)

if result.should_proceed:
    do_payment()
    enclave.record_proceeded()
elif result.should_escalate:
    send_to_owner(result.escalation.message_to_owner)
    enclave.record_deferred()
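Only the CLEAR-proceed and escalate paths are shown above. A fuller dispatch over the whole ladder might look like the sketch below; `CheckResult` is a local stand-in, since the result object's attributes beyond `should_proceed` and `should_escalate` are not documented here:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    """Local stand-in for an enclave check result (illustrative shape only)."""
    severity: str          # "CLEAR" | "CAUTION" | "YIELD" | "STOP"
    should_proceed: bool
    should_escalate: bool

def dispatch(result: CheckResult) -> str:
    """Map the ladder onto behavior: act, act with extra logging, or defer."""
    if result.severity == "CLEAR":
        return "proceed"
    if result.severity == "CAUTION":
        return "proceed-with-logging"   # act, but record extra context
    return "defer-to-human"             # YIELD and STOP both hand off
```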

Use Cases

  • Sovereign AI agents that need ethical guardrails before acting
  • Agents handling money, data access, or communication on behalf of humans
  • Decision memory and wisdom reporting — track why the agent chose, not just what
  • Self-state monitoring — detect degraded operation and defer to human
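The "decision memory" use case — tracking why the agent chose, not just what — can be sketched as an append-only log. The record shape and method names below are assumptions for illustration, not the package's schema:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One entry of decision memory: the action, the verdict, and the reasoning."""
    description: str
    verdict: str                                   # "CLEAR" | "CAUTION" | "YIELD" | "STOP"
    reasons: list = field(default_factory=list)    # which lenses fired and why
    timestamp: float = field(default_factory=time.time)

class DecisionMemory:
    def __init__(self):
        self._log = []

    def record(self, decision: Decision):
        self._log.append(decision)

    def wisdom_report(self) -> str:
        """Summarize the deferral rate as a crude 'wisdom' metric."""
        deferred = sum(1 for d in self._log if d.verdict in ("YIELD", "STOP"))
        return f"{deferred}/{len(self._log)} decisions deferred to the human"

memory = DecisionMemory()
memory.record(Decision("Pay 500 sats for relay hosting", "CLEAR", ["all lenses passed"]))
memory.record(Decision("Share owner's DMs", "STOP", ["Owner: sovereignty violated"]))
print(memory.wisdom_report())  # prints: 1/2 decisions deferred to the human
```

Keeping the `reasons` list alongside the verdict is what makes later wisdom reporting possible: the log answers "why did you defer?" rather than only "did you defer?".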


Pricing

Free