Encode
Intelligibility
Where in the premise there is no initial assumption.
Contrast this with the way that the turnstile changes direction in ĪĪĪĪ.XYZ.
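The contrast above can be sketched with dual sequents. This is a minimal notational sketch only; co-constructive systems vary, and the exact turnstile conventions here are an assumption:

```latex
\begin{align*}
  &\vdash A && \text{proof: } A \text{ is derivable with no initial assumption in the premise} \\
  &\dashv A && \text{refutation: the turnstile reverses direction, recording that } A \text{ is refutable}
\end{align*}
```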
My work relates to ARIA's Mathematics for Safe AI and Safeguarded AI programmes, focusing on the epistemic and communal infrastructures around powerful models.
I do not develop safety cases, interpretability tools, or training pipelines.
I am building a neurosymbolic architecture.
I will draw on developments in the agentic AI arena such as Nemoclaw, the Agent Toolkit, and small language models. These reference NVIDIA tooling, but other models will be compared.
Agents act as untrusted proposal generators around a formally specified, co-constructive logic kernel with explicit provenance and experimental, optional cryptographic anchoring. This is intended to give research communities (here, psychoanalysts, AI-safety practitioners, and software engineers) a way to represent and revise their claims, norms, and disagreements as evolving logical structures, adding tools for transparent, auditable reasoning in scientific and clinical practice.
AI assistant agents are treated as untrusted proposal generators wrapped around a formally specified proof engine.
The typed sequent-calculus core is a co-constructive proof/refutation kernel that accepts only structured inputs and produces checkable proof objects, while all free-text interaction is mediated by an "agentic edge" that can suggest but never singularly ratify inferences.
A provenance layer records formulas, sequents, proofs, and their relations as a graph of "supports/refutes/alternative" links back to source documents, enabling any reasoning step to be re-checked. Each agent has its own internal logic and interface, and interactions between agents become compositional transformations between local theories.
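The contract between the agentic edge, the kernel, and the provenance layer could be sketched as follows. This is a toy illustration under stated assumptions, not the project's implementation; all class and function names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical structured input: the kernel accepts only sequents, never free text.
@dataclass(frozen=True)
class Sequent:
    premises: tuple      # left of the turnstile
    conclusion: str      # right of the turnstile

@dataclass
class ProofObject:
    sequent: Sequent
    steps: list          # checkable derivation steps
    verdict: str         # "proved" | "refuted" | "rejected"

class Kernel:
    """Toy stand-in for the co-constructive proof/refutation kernel."""
    def check(self, sequent, steps):
        # A real kernel would re-run each inference rule; here we only
        # illustrate the contract: structured input in, proof object out.
        if not isinstance(sequent, Sequent):
            raise TypeError("kernel accepts structured sequents only")
        verdict = "proved" if steps else "rejected"
        return ProofObject(sequent, steps, verdict)

class ProvenanceGraph:
    """Records 'supports'/'refutes'/'alternative' links back to sources."""
    def __init__(self):
        self.edges = []
    def link(self, proof, relation, source_doc):
        assert relation in {"supports", "refutes", "alternative"}
        self.edges.append((proof.sequent, relation, source_doc))

# The agentic edge: an untrusted agent may *suggest* a sequent,
# but only the kernel ratifies it.
def agent_suggest():
    return Sequent(premises=(), conclusion="A"), ["axiom A"]

kernel, graph = Kernel(), ProvenanceGraph()
sequent, steps = agent_suggest()
proof = kernel.check(sequent, steps)
graph.link(proof, "supports", "source.pdf")
```

The point of the sketch is the separation of powers: the agent can only propose, the kernel alone ratifies, and every ratified step leaves a provenance edge that can be re-checked later.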
At an early stage this could be implemented as an Obsidian-integrated research tool.
The aim of internal provenance is to understand which artefacts in AI-assisted inquiry benefit from being tamper-evident, and which must remain fluid and revisable, operating at scale with corpora and compute budgets that AI-for-Science programmes enable.
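One simple form such tamper-evidence could take, assuming the cryptographic anchoring is hash-based, is a chain of record hashes; this is a minimal sketch, not the project's chosen scheme:

```python
import hashlib
import json

def anchor(record, prev_hash=""):
    """Chain each artefact's hash to its predecessor: any later edit to an
    anchored record changes every subsequent hash, making tampering evident
    while leaving un-anchored notes free to be revised."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

h1 = anchor({"claim": "A supports B"})
h2 = anchor({"claim": "C refutes B"}, prev_hash=h1)

# Re-anchoring an altered first record yields a different hash,
# so the downstream chain no longer verifies.
h1_tampered = anchor({"claim": "A refutes B"})
assert h1 != h1_tampered
```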
Substantial compute would enable two forms of stress-test.
- Running large-scale experiments over evolving bodies of work, such as AI-safety literature or software artefacts, to see how formal guarantees and cryptographic commitments can constrain what AI tools "claim".
- Employing methods like those in Karpathy's Autoresearch (Claude.md manifest) that iterate over alternative engine and prompt configurations, with success criteria defined for experimental purposes.
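The second stress-test could be sketched as a grid sweep with an explicit success criterion. The engines, prompt styles, scores, and threshold below are all hypothetical placeholders:

```python
from itertools import product

# Hypothetical configuration space: engine and prompt names are placeholders.
engines = ["kernel-v1", "kernel-v2"]
prompts = ["terse", "chain-of-thought"]

def run_experiment(engine, prompt):
    # Stand-in for running the proposal/check loop; returns a score in [0, 1].
    return 0.9 if engine == "kernel-v2" and prompt == "terse" else 0.5

SUCCESS_THRESHOLD = 0.8   # success is defined explicitly, per experiment

results = {(e, p): run_experiment(e, p) for e, p in product(engines, prompts)}
winners = [cfg for cfg, score in results.items() if score >= SUCCESS_THRESHOLD]
```

Defining the threshold up front is what makes the iteration an experiment rather than post-hoc selection.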
I come to AI from psychoanalysis, with three decades of practice that have shaped how I think about language, change, and responsibility.
I was also a Java engineer specialising in object-oriented design for about a decade, including a year teaching at the University of Greenwich.
It is ethically unacceptable to use clinical material as data.
Broadly, I will:
- Focus on public sources: published case write-ups, debates, and consensual lab studies.
- Ground the work in core material: David Corfield's Modal Homotopy Type Theory, James Trafford's Meaning in Dialogue and their references, while embracing pioneers like Karl Friston.
- Ingest the core literature, as far as possible.
- Develop intelligibility tools for users.
- Model expert communities as evolving "local topoi", where each group has its own internal theory and norms, and interactions between groups become morphisms between those structures.
- Then study how concepts, norms, and disagreements move across communities over time, while still respecting necessary ethical and epistemic limits.
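The "local topoi" idea can be given a deliberately simplified sketch: each community as a local theory (a vocabulary plus accepted claims), and a morphism as a vocabulary translation that keeps only what the target community can express. All names here are hypothetical illustrations, not a claim about the actual categorical machinery:

```python
# Each community carries its own local theory: a vocabulary plus accepted claims.
class LocalTheory:
    def __init__(self, name, vocabulary, claims):
        self.name = name
        self.vocabulary = set(vocabulary)
        self.claims = set(claims)          # claims as tuples of terms

def translate(source, target, mapping):
    """A toy morphism between local theories: rename source vocabulary into
    the target's, keeping only claims the target can express. Claims that
    do not survive translation mark sites of potential disagreement."""
    renamed = {tuple(mapping.get(t, t) for t in claim) for claim in source.claims}
    return {c for c in renamed if all(t in target.vocabulary for t in c)}

clinic = LocalTheory("psychoanalysis", {"transference", "risk"},
                     {("transference", "risk")})
safety = LocalTheory("ai-safety", {"alignment", "risk"}, set())

# A hypothetical translation: what one community calls "transference"
# is mapped onto what the other calls "alignment".
moved = translate(clinic, safety, {"transference": "alignment"})
```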
Reasoning patterns (support, refutation, alternative hypotheses, resource trade-offs) provide intelligibility for communities without collapsing their practices into crude metrics.
Study communities, modelled as networks of agents whose claims, challenges, and revisions form patterns of support and refutation.
The underpinning is category theory, social choice, game and information theory.
Evolving logical or topos-like structures have lifetimes that correspond to periods of practical cohesion, and can be deprecated when assumptions cease to hold.
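The lifetime-and-deprecation idea could be sketched as follows; the class, the assumption predicates, and the context keys are hypothetical:

```python
class EvolvingStructure:
    """A logical structure whose lifetime tracks whether its assumptions hold."""
    def __init__(self, name, assumptions):
        self.name = name
        self.assumptions = assumptions    # predicates over the current context
        self.deprecated = False

    def revalidate(self, context):
        # Deprecate, rather than delete, once any assumption fails:
        # the structure stays in the record for provenance, but is no
        # longer used to ratify new inferences.
        if not all(holds(context) for holds in self.assumptions):
            self.deprecated = True
        return not self.deprecated

s = EvolvingStructure("shared-risk-model",
                      [lambda ctx: ctx["communities_in_dialogue"]])
assert s.revalidate({"communities_in_dialogue": True})      # still cohesive
assert not s.revalidate({"communities_in_dialogue": False}) # deprecated
```

Deprecation is one-way in this sketch: a structure whose assumptions later hold again would be succeeded by a new structure rather than revived, keeping lifetimes legible.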
Access substantial compute and large corpora: the ARIA Mathematics for Safe AI and Safeguarded AI material, neighbouring AI-safety literature, and selected clinical/scientific domains.
The architecture maps how notions of risk, alignment, harm and responsibility are introduced, contested, and stabilised across communities.
This connects model-level safety work to the higher-level question of how societies and expert groups actually reason about, and update, their processes and beliefs in the context of powerful AI systems.