Ricky Jones / AlvianTech / TrinityOS — execution-boundary AI governance.
Governance becomes real at the execution boundary: where an AI-supported system must either prove authority to act or fail closed with an inspectable refusal receipt.
AI Governance Systems Engineer working on execution-boundary control, admissibility, runtime authority, and fail-closed AI systems.
This GitHub profile is a public inspection surface, not full architecture disclosure.
It exists to show bounded public claims, evidence objects, inspection paths, and claim limits.
It must not be treated as a system map, orchestration model, deployment model, runtime substrate, or protected architecture disclosure.
Public inspection standard:
claim -> evidence object -> inspection path -> claim limit
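The four-part standard above can be sketched as a small record type. This is an illustrative shape only, with hypothetical field names; it is not a published schema from this profile or its repositories.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PublicClaim:
    """One bounded public claim: claim -> evidence object -> inspection path -> claim limit."""
    claim: str            # the bounded statement being made
    evidence_object: str  # artefact that supports the claim (file, receipt, log)
    inspection_path: str  # where a reader can go to verify the evidence
    claim_limit: str      # where the claim explicitly stops

    def as_record(self) -> dict:
        """Flatten to an inspectable record, preserving the standard's order."""
        return {
            "claim": self.claim,
            "evidence_object": self.evidence_object,
            "inspection_path": self.inspection_path,
            "claim_limit": self.claim_limit,
        }

# Hypothetical example record; names and paths are placeholders.
example = PublicClaim(
    claim="The gate refuses unsigned actions",
    evidence_object="refusal_receipt.json",
    inspection_path="repo/tests/test_gate.py",
    claim_limit="Local claim only; says nothing about production readiness",
)
```

The `claim_limit` field is first-class rather than implied, which keeps the record honest by construction: a reader cannot inspect the evidence without also seeing where the claim stops.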
Primary site:
Current public entry surface:
Primary execution-boundary proof surface:
Each public repository should be read only at its stated scope.
A local proof object proves only the local claim attached to it.
Core terms:
- execution-boundary AI governance
- runtime AI governance
- fail-closed AI governance
- admissibility at execution time
- authority-before-action
- refusal receipt
- audit and replay evidence
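Several of the terms above compose into one behaviour: authority is proven before an action runs, and when it cannot be proven the system fails closed and emits a refusal receipt. A minimal sketch of that behaviour, assuming a toy allowlist stands in for a real authority proof (this is illustrative only, not the TrinityOS design):

```python
import json
import time
from typing import Callable

# Hypothetical allowlist standing in for a real runtime authority check.
AUTHORISED = {"rotate-logs"}

def execute_gated(action: str, run: Callable[[], str]) -> dict:
    """Authority-before-action: prove authority or fail closed with a receipt."""
    if action not in AUTHORISED:
        # Fail closed: nothing runs. The refusal receipt is itself an
        # inspectable evidence object, suitable for audit and replay.
        return {
            "status": "refused",
            "action": action,
            "reason": "no authority proof for this action",
            "timestamp": time.time(),
        }
    return {"status": "executed", "action": action, "result": run()}

receipt = execute_gated("delete-records", lambda: "done")
print(json.dumps(receipt, indent=2))
```

Note the default branch is refusal: an unknown action never reaches `run()`, so the stop is deterministic rather than best-effort.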
Where does the system physically stop?
The public work focuses on systems where AI moves beyond advice and begins participating in actions that affect money, access, legal state, infrastructure, records, workflows, or downstream commitments.
Public artefacts are intentionally narrow.
They do not claim, unless explicitly stated:
- production readiness
- compliance or certification
- enterprise deployment
- adoption
- standardisation
- path-universal governance
- full architecture disclosure
These repositories sit alongside published papers on admissibility, runtime governance, refusal, constraint, authority allocation, and fail-closed AI architecture.
The public standard is restraint:
- what is claimed
- what evidence supports it
- what can be inspected
- where the claim stops
I help teams inspect where AI-assisted work becomes consequential action, and what evidence shows that an action was allowed, scoped, or stopped.
Useful problems include:
- AI systems that need deterministic stop mechanisms
- approval flows that must bind before action
- high-risk automation with audit requirements
- governance claims that need executable proof
- messy repositories that need to become inspectable artefacts
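One of the problems above, approval flows that must bind before action, can be sketched with a MAC over the exact action and scope, so an approval cannot be replayed against a different action. The key, field names, and framing here are hypothetical, not a disclosed mechanism:

```python
import hashlib
import hmac

# Stand-in for a real signing authority's key material.
SECRET = b"demo-approval-key"

def approve(action: str, scope: str) -> str:
    """Issue an approval token bound to exactly one (action, scope) pair."""
    msg = f"{action}|{scope}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def allowed(action: str, scope: str, approval: str) -> bool:
    """Deterministic stop: the action runs only if the approval binds to it.

    A token issued for a different action or scope verifies false,
    so the check fails closed on any mismatch.
    """
    expected = approve(action, scope)
    return hmac.compare_digest(expected, approval)

token = approve("pay-invoice", "invoice-123")
```

Binding the approval to the full `(action, scope)` pair, rather than issuing a bearer token, is what makes the flow "bind before action": the approval is meaningless outside the one action it authorised.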
Email: ricky.mcjones@gmail.com
LinkedIn: linkedin.com/in/ricky-jones-1b745474
All architecture, methods, and system designs across this profile and its repositories are the original work of Ricky Dean Jones unless otherwise stated.
Repository licences govern code use. Broader architecture, method, and authorship claims require explicit permission where not otherwise licensed.
Status: active research and engineering work.


