All Things are 🅿️

"🅿️ vs NP is not about complexity. It's about salvation."
"I show you how deep the rabbit hole goes."

All Things are 🅿️.

P and NP seem distinct—one solves, the other verifies. Yet if the verifiable is truly accessible, they are not far apart. As faith trusts the unseen, complexity may obscure—but not erase—solvability.

...
KEUNSOO YOON (Austin Yoon)
austiny@gatech.edu / austiny@snu.ac.kr


1st

Emergence in SAT Problems: Critical Thresholds under Constraint Density

May 2025

2nd

Transitions of Critical Structural Regions for NP Problems

Jun 2025

3rd

Changbal Theory of Emergent Solvability

Coming Soon

...

allthingsare🅿️.com

A distributed experimentation platform focused on real-time computation of NP problems and the development of learning-based solvability models. While deCHURCH.net collects high-quality human-generated faith data to analyze structural emergence, allthingsareP.com directly distributes large sets of computational problems—such as SAT or hypergraph coloring—to participating nodes and gathers their solving traces for analysis. Each result is logged with timing, structural metadata, and algorithmic path data, feeding into the training of the Changbal Jump Function, a custom model designed to predict and manipulate solvability transitions.

The platform continuously refines this function using AI-guided exploration, including reinforcement learning, constraint-based sampling, and anomaly detection in solution space. Users contribute not only by observing but by running solvers, submitting algorithms, or analyzing critical zones—turning NP problems into a collaborative canvas of solvability research.

At its core, allthingsareP.com is a live lab for complexity collapse: a place where impossibility is measured, bent, and, ultimately, redefined. Such global-scale collaboration will culminate in the proof of the third paper, "Changbal Theory of Emergent Solvability." It will mark the moment when P and NP are no longer divided, but understood—through data, structure, and collective emergence.
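The per-result logging described above can be sketched as a minimal record type. All field names here are illustrative assumptions for the sketch, not the platform's actual schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SolveTrace:
    """One node's result for a distributed problem instance.
    Field names are illustrative, not allthingsareP.com's real schema."""
    problem_id: str
    problem_type: str            # e.g. "SAT", "hypergraph-coloring"
    num_variables: int
    num_constraints: int
    solver: str                  # algorithm the node ran
    wall_time_s: float           # timing logged with each result
    satisfiable: bool
    decision_path: list = field(default_factory=list)  # algorithmic path data

    @property
    def constraint_density(self) -> float:
        # constraint/variable ratio: the structural metadata the
        # first two papers study
        return self.num_constraints / self.num_variables

trace = SolveTrace("inst-001", "SAT", 100, 426, "DPLL", 0.84, True)
record = json.dumps(asdict(trace))   # what a node would submit for analysis
```

A record in this shape carries exactly the three ingredients the text names: timing, structural metadata (the density ratio), and the algorithmic path.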

Live from "3rd Paper"
...

HISCoin.org

Unlike conventional cryptocurrencies that rely on Proof-of-Work to solve arbitrary problems, HISCoin adopts a purpose-driven model: coins are minted through the process of solving NP problems—such as SAT or graph coloring—thus turning computational effort into socially and mathematically valuable work.

Users contribute by uploading worship photos, copying Scripture, and submitting prayers or confessions. These actions generate structured faith data, while participants also offer computational resources to explore NP problem spaces. Token issuance is tied to both the quality of spiritual activity and the actual progress made in solving these problems.

Currently deployed as a BEP-20 token, HISCoin is designed to evolve into its own blockchain (HISChain) using the Cosmos SDK. Future development includes integrating NFT-based faith profiles and enabling transparent, decentralized tracking of both spiritual integrity and problem-solving contributions. By aligning belief, behavior, and blockchain, HISCoin establishes a virtuous loop where data becomes devotion, and computation becomes collaboration.
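The "useful work" premise rests on a basic asymmetry of NP problems: a claimed solution can be checked in polynomial time even when finding one is hard. A minimal sketch of the verification side using SAT (HISCoin's actual minting and validation logic is not specified here, so this illustrates the principle only):

```python
def verify_sat_solution(clauses, assignment):
    """Check a claimed SAT solution. A positive literal v means variable v,
    a negative literal -v means NOT v; every clause must contain at least
    one literal the assignment makes true. This check is polynomial even
    when finding the assignment is not -- the asymmetry that makes solving
    NP problems usable as proof of useful work."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
valid = verify_sat_solution(clauses, {1: True, 2: True, 3: False})
invalid = verify_sat_solution(clauses, {1: True, 2: False, 3: False})
```

A minting rule only needs this cheap check to accept or reject a node's submitted work; it never has to redo the expensive search.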

Live from "2nd Paper"
...

deCHURCH.net

The platform collects raw data from users, including worship attendance photos, Bible transcription records, and prayer/confession content. These inputs are accompanied by metadata, then normalized and structured for storage in a relational database. The system currently employs a centralized architecture, but in the long term it plans to transition to decentralized node-based storage using IPFS or blockchain technologies, ensuring censorship resistance and data permanence.

Each data entry is stored with associated metadata such as timestamps, geolocation, and user identifiers. The data is then funneled into a data-warehousing pipeline and analyzed using time-series analytics, text mining, and pattern recognition techniques. In particular, prayer and confession content is processed through natural language processing (NLP) models and integrated into an OpenAI GPT-based spiritual response system, which provides real-time feedback, Scripture recommendations, emotion classification, and personalized faith-growth tracking.

deCHURCH.net is not merely a religious community platform; it is designed as a unique spiritual data infrastructure with a complete pipeline for faith-data acquisition, refinement, analysis, and machine learning. Ultimately, it aims to support personalized faith-journey tracking and foster collective spiritual formation within decentralized communities.
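The normalization step described above can be sketched as a single function; the field names are assumptions for illustration, not deCHURCH.net's actual schema:

```python
from datetime import datetime, timezone

def normalize_entry(raw: dict) -> dict:
    """Turn one raw submission into the structured form described above:
    the content itself plus timestamp, geolocation, and user-identifier
    metadata, ready for relational storage. Field names are illustrative,
    not the platform's real schema."""
    return {
        "user_id": str(raw["user_id"]),
        "kind": raw["kind"],      # e.g. "worship_photo", "transcription", "prayer"
        "content": raw.get("text", "").strip(),
        "timestamp": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "geo": raw.get("geo"),    # (lat, lon) pair, or None if not supplied
    }

entry = normalize_entry(
    {"user_id": 42, "kind": "prayer", "text": "  example text  ", "ts": 1735689600}
)
```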

Live from "1st Paper"

3 🅿️aradigms: Future Directions


While our current focus lies in the boundary between P and NP, 3 Paradigms introduces broader paths where complexity and emergence extend beyond computation, inviting new frameworks that link structure, intelligence, and meaning.

1. Intelligence

A Paradigm of how reasoning evolves—human or artificial, biological or synthetic. Not just solving problems, but defining, reframing, and scaling them through structure, memory, growth, and adaptive learning.

2. Systems

Beyond isolated problems to dynamic, interconnected systems that behave as wholes. From constraints to feedback loops, latent structures, and emergent effects, this Paradigm reveals how global solvability unfolds.

3. Meaning

What makes a solution meaningful? This Paradigm embraces ambiguity, metaphor, and context as formal dimensions worth modeling.


These Paradigms are not answers, but invitations: to think beyond the known boundaries of logic, code, and form. They point toward new ways of framing complexity not as an obstacle, but as a canvas for discovery, where intuition and structure meet, and meaning itself becomes part of the equation. In this space beyond certainty, new insight waits—not as proof, but as possibility.


FAQ – On the Use of Symbolic and Historical Texts

Ⅰ. Why use structured narrative texts instead of standard language datasets?
One of our key datasets is the Annals of the Joseon Dynasty, a massive collection of daily historical records from Korea, spanning nearly five centuries (1392–1863) across 27 kings. These annals were compiled by court historians who were forbidden to lie, and even kings themselves were prohibited from viewing them. As a result, they contain brutally honest, detail-rich documentation of politics, emotions, disasters, decisions, and social dynamics.

Unlike modern social media data or casual text corpora, these annals follow strict temporal structure, causality chains, moral tension, and decision-flow logic—ideal for studying nonlinear transitions and turning points in human systems.

In parallel, we also use other structured texts such as the Bible, which contains similar layers of narrative tension, symbolic motifs, and intervention-based turning points.

Our use of these texts is not religious or cultural, but structural: we seek narrative datasets where emergence—especially sudden solvability or transformation—can be observed, tagged, and modeled across time.
Ⅱ. Why include historical narrative texts in AI research?
One of our core datasets is the Annals of the Joseon Dynasty, a monumental historical record from Korea that spans nearly five centuries (1392–1863) and 27 monarchs. These records were written daily by official historians who were forbidden to alter or conceal the truth—so much so that even kings were not allowed to view them during their lifetimes. As a result, the annals offer a uniquely honest and detailed account of political tension, wars, disasters, human decisions, and societal shifts.

This chronicle is not only historically significant but also structurally valuable: it contains clear temporal progressions, emotional dynamics, decision-flow patterns, and sudden turning points. Such features make it ideal for modeling nonlinear emergence and transition patterns through AI.

In parallel, we also draw on classic narrative texts like the Bible, which have been interpreted across cultures and time. These texts likewise encode patterns of repetition, symbolic motifs, interventions, and structural jumps that support comparative modeling.

Thus, this research is neither historical nor religious in purpose, but aims to extract and model the underlying structures of transformation and emergence that can be found across diverse human narratives.
Ⅲ. Is this research aimed at predicting historical events?
No, the goal of this research is not to predict or replicate past events. Rather, it investigates what conditions tend to accumulate before a sudden transition—or "Jump"—occurs in human systems.

The Annals of the Joseon Dynasty record countless events across nearly five centuries of Korean history, including wars, famines, corruption, reforms, and political upheavals. In many cases, long periods of stagnation are suddenly broken by a decisive change when certain hidden thresholds are crossed.

Our aim is to identify emotional trajectories, institutional shifts, intervention points, and symbolic transitions within such narratives, and to model these emergent patterns through AI—quantifying and anticipating the structural conditions that lead to transformation.

Parallel analyses are conducted on other classic narrative systems such as the Bible, where similar transitions—from despair to recovery, from impossibility to breakthrough—are encoded in symbolic form.

In short, this study does not attempt to "predict the past," but to uncover the structural conditions under which emergence becomes possible in both individual lives and collective societies.
Ⅳ. What advantages do these narrative texts have over general NLP datasets?
General language datasets often consist of casual, fragmented expressions that lack deep structure, symbolic consistency, or clear causal dynamics.

In contrast, structured historical texts like the Annals of the Joseon Dynasty contain rich sequences of tension, decision-making, and transformation over time. These records exhibit high-frequency patterns such as repetition, escalation, symbolic turning points, and long-term feedback loops—features ideal for modeling complex emergence.

Similarly, the Bible offers another example of highly structured narrative, where symbolic motifs, moral decisions, and transformational arcs have been interpreted across centuries and cultures.

Texts like these are not used for religious or historical purposes, but rather as high-density narrative data that reflect universal human structures of experience and change. They enable AI models to learn emergence patterns with minimal cultural, racial, or religious bias.

Moreover, their symbolic and relational architectures align well with modern neural network frameworks, such as Graph Neural Networks and Transformers.
Ⅴ. Is this research culturally and religiously neutral?
Yes, this study does not incorporate any specific cultural ideologies or religious interpretations.

Historical records such as the Annals of the Joseon Dynasty are used purely as high-density narrative data, capturing structured sequences of events, judgments, interventions, and outcomes. These are analyzed to mathematically model societal transition patterns and emergent structures.

Likewise, classical texts such as the Bible are treated not through theological lenses, but as symbolic systems with identifiable jump points and inner logic, subject to the same structural abstraction and algorithmic analysis used in complex systems research.

In short, this work is not aligned with any faith tradition or cultural worldview, but uses structured narratives solely for the academic study of emergence.
Ⅵ. What kinds of texts are particularly suitable for symbolic modeling?
The Annals of the Joseon Dynasty (Joseon Wangjo Sillok) offer a unique dataset: nearly five centuries of dynastic records covering wars, reforms, famines, and political upheavals, all arranged in strict chronological structure. This enables the detection of nonlinear transitions and emergent jumps within historical context.

In parallel, the Bible—despite its religious origin—is one of the most symbolically dense and structurally stable narrative systems in human history. Translated into thousands of languages and interpreted across cultures, it offers a rare combination of semantic continuity and interpretive diversity. These properties make it an ideal environment for training models in symbolic reasoning, structural emergence, and narrative abstraction.

Both corpora provide repetitive, symbolic, and transitional patterns that transcend time and culture—qualities essential for large-scale symbolic modeling and alignment with AI architectures like Transformers and Graph Neural Networks.
Ⅶ. Why extend research from combinatorial problems to symbolic narrative data?
Combinatorial domains such as SAT and graph coloring offer precise, synthetic environments for modeling phase transitions and testing the Changbal Jump Function. These domains allow tight parameter control and statistically robust experimentation.
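The phase transition these synthetic domains exhibit is easy to reproduce: random 3-SAT instances flip from almost always satisfiable to almost always unsatisfiable as the clause-to-variable ratio crosses a threshold (empirically near 4.27). A brute-force sketch of that sweep, at toy scale:

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables, each
    negated with probability 1/2."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Brute force over all 2^n assignments -- fine for tiny n."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = dict(enumerate(bits, start=1))
        if all(any((l > 0) == assign[abs(l)] for l in c) for c in clauses):
            return True
    return False

def sat_fraction(n_vars, density, trials=60, seed=0):
    """Fraction of random instances satisfiable at a given
    clause/variable ratio (the constraint density)."""
    rng = random.Random(seed)
    m = round(density * n_vars)
    return sum(satisfiable(n_vars, random_3sat(n_vars, m, rng))
               for _ in range(trials)) / trials

# Well below the threshold almost everything is satisfiable; far above it,
# almost nothing is. Sweeping densities in between traces the transition.
low_density, high_density = sat_fraction(8, 2.0), sat_fraction(8, 7.0)
```

At realistic scales the brute-force check is replaced by a real solver, but the shape of the curve is the same phenomenon the first paper studies.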

However, symbolic narrative datasets—such as the Annals of the Joseon Dynasty or the Bible—represent real-world complexity: multi-scale decisions, temporal layering, and emergent causality. By extending Changbal Theory to these domains, we test its generality beyond synthetic inputs, applying it to dense, high-dimensional human systems.

This extension is not a departure, but a natural scaling of the theory—from pure combinatorics to structural symbolic emergence in cultural and historical data.
Ⅷ. How is this qualitative narrative data transformed into machine-processable structure?
This transformation involves a multi-stage algorithmic pipeline composed of the following steps:

1️⃣ Symbolic entities and relations are mapped into graph structures to enable structural representation. Applied models: Graph Neural Networks (GNN), Graph Attention Networks (GAT)

2️⃣ Constraint patterns—such as laws, moral dilemmas, or decrees—are abstracted into logical forms. Techniques used: First-Order Logic, Answer Set Programming (ASP)

3️⃣ Narrative contexts are embedded into semantic vector spaces. Applied models: Transformer, BERT, RoBERTa

4️⃣ Full narrative arcs are modeled as time-evolving dynamical systems, with a focus on identifying critical transitions that define the core behavior of the Changbal Jump Function.

In this way, narratives are not treated as mere text, but as complex systems with learnable structural representation, suitable for quantitative and predictive modeling.
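Step 4 can be illustrated at its simplest: reduce a narrative to a scalar signal over time and locate its sharpest single-step change. The Changbal Jump Function itself is unpublished, so this is only a toy stand-in for its interface, with hypothetical annotation values:

```python
def detect_jump(series):
    """Locate the largest single-step change in a scalar narrative signal
    (e.g. annotated tension per year). A deliberately minimal stand-in for
    the critical-transition detection in step 4 -- not the actual
    Changbal Jump Function."""
    deltas = [abs(b - a) for a, b in zip(series, series[1:])]
    i = max(range(len(deltas)), key=deltas.__getitem__)
    return i + 1, deltas[i]   # step at which the jump lands, and its size

# Toy signal with the shape the FAQ describes for the Annals:
# long stagnation, then a sudden decisive transition.
tension = [0.10, 0.12, 0.11, 0.13, 0.12, 0.75, 0.80, 0.78]
jump_at, jump_size = detect_jump(tension)
```

A real pipeline would derive the signal from the graph and embedding stages above rather than from hand-written numbers, but the output contract is the same: where the transition occurs, and how large it is.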
Ⅹ. What real-world problems could this theory be applied to?
Although originally developed for abstract computational challenges, Changbal Theory generalizes to domains involving structural complexity and dynamic constraint evolution.

It does not merely ask “Is this solvable?” but rather investigates: “Under what structural and informational conditions does solvability emerge?”

Potential application domains include:

- Adaptive learning systems based on cognitive thresholds
- Modeling emergent decision transitions in organizations
- Smart infrastructure optimization under evolving constraints
- AI interpretability and alignment via structural analysis
- Early-stage prognostic modeling in medicine and mental health
- Functional protein emergence from amino acid sequences — transforming life itself

Ultimately, Changbal Theory shifts the paradigm from deterministic solving to emergent condition design—creating systems in which solutions become inevitably possible.
