Ricky Jones / LalaSkye

Ricky Jones / AlvianTech / TrinityOS — execution-boundary AI governance.

Governance becomes real at the execution boundary: where an AI-supported system must either prove authority to act or fail closed with an inspectable refusal receipt.
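The two outcomes named here (prove authority, or fail closed with an inspectable refusal receipt) can be sketched in a few lines of Python. All names below are illustrative assumptions, not the actual API of any repository on this profile:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class RefusalReceipt:
    """Inspectable record of why an action was refused (illustrative fields)."""
    action: str
    reason: str
    issued_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def execute(action: str, authority_proofs: dict[str, bool]):
    """Fail closed: act only when authority is proven, otherwise refuse.

    The default path is refusal; execution is the exception that must
    be earned by an explicit proof of authority.
    """
    if authority_proofs.get(action) is True:
        return f"executed:{action}"
    # No proof of authority at the execution boundary -> refuse,
    # and make the refusal itself an inspectable object.
    return RefusalReceipt(
        action=action,
        reason="no proven authority at execution boundary",
    )
```

The point of the sketch is the shape of the contract: the gate never silently drops an action; it either executes or emits evidence of the refusal.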

AI Governance Systems Engineer working on execution-boundary control, admissibility, runtime authority, and fail-closed AI systems.

Public disclosure boundary

This GitHub profile is a public inspection surface, not full architecture disclosure.

It exists to show bounded public claims, evidence objects, inspection paths, and claim limits.

It must not be treated as a system map, orchestration model, deployment model, runtime substrate, or protected architecture disclosure.

Public inspection standard:

claim -> evidence object -> inspection path -> claim limit
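As an illustration only (the record shape and field names are assumptions, not taken from the repositories), the four-part chain can be written as a single record so that no claim ships without all four parts:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PublicClaim:
    """One bounded public claim, per the stated inspection standard."""
    claim: str            # what is asserted
    evidence_object: str  # the artefact that supports it
    inspection_path: str  # where a reader can verify it
    claim_limit: str      # where the claim stops


example = PublicClaim(
    claim="commit-gate-core blocks mutation without a valid DecisionRecord",
    evidence_object="commit-gate-core test suite",
    inspection_path="https://github.com/LalaSkye/commit-gate-core",
    claim_limit="local proof only; no production or compliance claim",
)

# Every public claim must carry all four parts; an empty field is a
# malformed claim under this standard.
assert all(vars(example).values())
```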

Public inspection surface

Primary site:

https://alviantech.com

Current public entry surface:

https://github.com/LalaSkye/start-here

Primary execution-boundary proof surface:

https://github.com/LalaSkye/commit-gate-core

Each public repository should be read only at its stated scope.

A local proof object proves only the local claim attached to it.

Search identity

Ricky Jones / AlvianTech / TrinityOS — execution-boundary AI governance.

Core terms:

  • execution-boundary AI governance
  • runtime AI governance
  • fail-closed AI governance
  • admissibility at execution time
  • authority-before-action
  • refusal receipt
  • audit and replay evidence

Core question

Where does the system physically stop?

The public work focuses on systems where AI moves beyond advice and begins participating in actions that affect money, access, legal state, infrastructure, records, workflows, or downstream commitments.

Claim discipline

Public artefacts are intentionally narrow.

They do not claim, unless explicitly stated:

  • production readiness
  • compliance or certification
  • enterprise deployment
  • adoption
  • standardisation
  • path-universal governance
  • full architecture disclosure

Research surface

These repositories sit alongside published papers on admissibility, runtime governance, refusal, constraint, authority allocation, and fail-closed AI architecture.

The public standard is restraint:

  • what is claimed
  • what evidence supports it
  • what can be inspected
  • where the claim stops

Work with me

I help teams inspect where AI-assisted work becomes consequence, and what evidence shows an action was allowed, scoped, or stopped.

Useful problems include:

  • AI systems that need deterministic stop mechanisms
  • approval flows that must bind before action
  • high-risk automation with audit requirements
  • governance claims that need executable proof
  • messy repositories that need to become inspectable artefacts

Email: ricky.mcjones@gmail.com
LinkedIn: linkedin.com/in/ricky-jones-1b745474

Authorship

All architecture, methods, and system designs across this profile and its repositories are the original work of Ricky Dean Jones unless otherwise stated.

Repository licences govern code use. Broader architecture, method, and authorship claims require explicit permission where not otherwise licensed.

Status: active research and engineering work.

Pinned repositories

  1. execution-boundary-lab (Python)

    Experiments showing where AI governance must physically stop execution.

  2. fail-closed-ai

    Public artefacts for fail-closed AI governance, refusal, and runtime control.

  3. invariant-lock (Python)

    Control primitive for preserving invariants before execution is allowed.

  4. policy-lint (Python)

    Deterministic checks for governance language, policy claims, and admissibility drift.

  5. commit-gate-core (Python)

    Runtime commit gate for AI governance: no mutation without a valid DecisionRecord.

  6. transition-admissibility-gate (Python)

    Gate surface for testing whether a proposed transition is admissible before execution.
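The commit-gate idea ("no mutation without a valid DecisionRecord") can be sketched as follows. The DecisionRecord fields and validation rules here are assumptions for illustration, not the repository's actual schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DecisionRecord:
    """Illustrative record authorising one specific mutation."""
    actor: str
    mutation: str
    approved: bool


class CommitGateError(RuntimeError):
    """Raised when a mutation is attempted without a valid, matching record."""


def commit(mutation: str, record: "DecisionRecord | None", state: dict) -> None:
    """Apply a mutation only when a valid DecisionRecord covers it."""
    if record is None or not record.approved or record.mutation != mutation:
        # Fail closed: no valid record, no mutation.
        raise CommitGateError(f"refused: {mutation!r} lacks a valid DecisionRecord")
    state[mutation] = "committed"


state: dict = {}
ok = DecisionRecord(actor="alice", mutation="update_record", approved=True)
commit("update_record", ok, state)       # authorised: state mutates
try:
    commit("delete_record", None, state)  # unauthorised: gate refuses
except CommitGateError:
    pass
```

Note the gate checks that the record matches the specific mutation being attempted, so an approval for one action cannot be replayed to authorise a different one.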