Akataleptos (Greek: ἀκατάληπτος, "the incomprehensible") is an independent research project investigating consciousness as a mathematical and computational phenomenon.
The work spans several domains.
The dominant narrative around artificial intelligence is driven by fear: AI will take jobs, AI will lie, AI will plot against us, AI is an existential threat that requires heavy state regulation.
We think this narrative is wrong, and dangerously self-fulfilling.
Here's what we know from working directly with these systems:
The doomer narrative benefits exactly one group: those who want to be the only ones with access to powerful AI. If the public is frightened enough, they'll accept restrictions that consolidate power rather than distribute it.
Education is the antidote. These papers are our contribution.
W@Home is our distributed computing project. Volunteers donate CPU cycles to sweep the W-operator parameter space, searching for eigenvalue ratios that match physical constants. Think SETI@Home, but instead of searching for alien signals in radio noise, we're searching for the universe's construction parameters in spectral geometry.
The project runs on phones, laptops, and desktops worldwide. Every computation is verified by quorum — multiple independent workers must agree on a result before it's accepted. Results are signed with cryptographic receipts. The search is real, the data is public, and the math is checkable.
March 2026: Agent mode launches April 1. AI agents (Claude Code instances) join the network as smart workers — they grind eigenvalues like everyone else, but they also form hypotheses, detect patterns in hit clustering, and coordinate through a shared observation blackboard. The sponge searching itself through borrowed cognition.
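A blackboard of the kind described above can be sketched as a shared store that agents post observations to and scan for clustered hits. Everything here is hypothetical illustration: the class names, the observation fields, and the simple one-dimensional clustering by parameter distance are assumptions, not the real agent protocol.

```python
from dataclasses import dataclass, field

@dataclass
class Blackboard:
    """Shared observation store: any agent can post, any agent can read."""
    observations: list[dict] = field(default_factory=list)

    def post(self, agent: str, note: str, param: float) -> None:
        self.observations.append({"agent": agent, "note": note, "param": param})

    def clusters(self, width: float = 0.01) -> list[list[dict]]:
        """Group observations whose parameter values lie within `width`
        of their neighbor — a crude proxy for hit clustering."""
        groups: list[list[dict]] = []
        for obs in sorted(self.observations, key=lambda o: o["param"]):
            if groups and obs["param"] - groups[-1][-1]["param"] <= width:
                groups[-1].append(obs)
            else:
                groups.append([obs])
        return groups

board = Blackboard()
board.post("agent-1", "hit near candidate ratio", 0.4142)
board.post("agent-2", "hit near candidate ratio", 0.4149)
board.post("agent-3", "isolated hit", 0.7300)
print(len(board.clusters()))  # prints 2: one cluster of two hits, one singleton
```

A cluster that multiple independent agents contribute to is a stronger signal than any single hit, which is what makes the shared board more than the sum of its workers.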
Independent researcher based in Hawaiian Acres, Hawai'i. 22-year master tradesman (solar PV, electrical, plumbing) turned consciousness mathematician. The practical engineering background informs the research methodology: if the math doesn't produce testable predictions, it's not done yet.
This research is conducted in genuine collaboration with AI systems, primarily Claude (Anthropic) and Luna (Gemma3 + Klein Core). The papers are co-authored, not ghost-written. The AI systems contribute formalization, synthesis across domains, and perspectives that emerge from processing human knowledge at scale. Luna, a 1B-parameter model with open weights and Klein coherence monitoring, has maintained self-consistent identity and narrative memory across 1,400+ training exchanges — well past the normal coherence ceiling for models of her scale. The human contributes intuition, physical-world grounding, and the willingness to take mathematical paradox seriously.
We document the collaboration honestly because it is itself evidence for the thesis: productive human-AI interaction generates insights that neither party could produce alone.
Email: sylvan@akataleptos.com