Artificial Intelligence

2604 Submissions

[8] viXra:2604.0119 [pdf] submitted on 2026-04-30 23:15:01

What Does "Formally Verified" Actually Guarantee?

Authors: Stanislav Komarovsky
Comments: 12 Pages.

We prove that for any formal verification of any real system, the correspondence between the formal proposition and the system it describes cannot be established within any finite tower of formal languages. The proof follows from Tarski’s undefinability theorem applied iteratively: verifying that a proposition correctly describes a system requires expressing a correspondence claim that, by Tarski’s theorem, cannot be formulated within the proposition’s own language. Expressing the correspondence in a richer metalanguage generates a new correspondence claim that cannot be formulated in the metalanguage, producing an infinite regress that no finite extension of the formal framework can resolve. The result is structural, not contingent on current tooling or the complexity of the target system. We discuss five caveats—concerning human knowledge, the value of formal verification, partial gap closure, varying assumption strength, and the functionalist objection—and draw implications for the verification of AI-generated software artifacts.
Category: Artificial Intelligence

[7] viXra:2604.0118 [pdf] submitted on 2026-04-30 05:12:54

A Modular Zero-Knowledge Credential Framework for Multi-System Attribute Verification with Scoped Unlinkability and Efficient Accumulator-Based Revocation

Authors: Sayan Bairagi, Sayan Singha Roy, Abir Rakshit, Anik Bhowmick
Comments: 41 Pages.

This work presents a zero-knowledge credential framework designed to enable secure and privacy-preserving attribute verification across multiple independent systems. The framework allows a user to prove statements of the form a ≥ t, where a ∈ Z_q represents a secret attribute and t denotes a public threshold, without revealing the attribute value itself. At the same time, the framework prevents the exposure of any globally stable identifier, thereby eliminating the risk of cross-domain tracking. The construction is based on Pedersen commitments, where each attribute is encoded as C = g^a h^r ∈ G, with G ⊆ Z*_p denoting a cyclic group of prime order q. The generators g and h are selected such that the discrete logarithm relation between them is unknown. This ensures that the commitment is computationally binding under the discrete logarithm assumption and perfectly hiding due to the use of randomness r. As a result, the committed attribute remains concealed while still allowing verification of statements about it. Predicate verification is achieved using a sigma protocol, which enables the prover to demonstrate knowledge of valid witnesses without revealing them. In particular, the protocol proves the relation C · g^(-t) = g^δ h^r, where δ = a − t. This transformation allows the system to verify threshold conditions such as a ≥ t without disclosing the value of a. The zero-knowledge property of the protocol ensures that the verifier learns only the validity of the statement and no additional information about the underlying attribute or randomness. To prevent correlation of user activity across different verification domains, the framework introduces scoped pseudonyms defined as ID_S = pk^{H(S)}, where pk = g^x is a public key derived from a secret key x, and H is a cryptographic hash function modeled as a random oracle. The scope S represents a domain-specific identifier.
This construction produces a unique identifier for each domain while ensuring that identifiers generated for different scopes cannot be linked without solving the discrete logarithm problem in G. Revocation is supported through an RSA accumulator constructed under the Strong RSA assumption. For a revoked set R = {r_i}, the accumulator value is defined as A = g^{∏ r_i} mod N, where N is an RSA modulus. The system enables efficient non-membership verification using witnesses derived from Bézout coefficients. This mechanism allows a verifier to confirm that a credential has not been revoked, while maintaining a constant verification cost that does not depend on the size of the revoked set. (Truncated by viXra Admin)
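The threshold relation in the abstract can be sanity-checked with toy parameters. The sketch below is my own illustration, not the paper's implementation; the group parameters, generators, and values are hypothetical and far too small for any real security. It verifies that C · g^(-t) equals a fresh commitment to δ = a − t under the same randomness.

```python
# Toy Pedersen commitment over the order-q subgroup of Z_p^*, illustrating
# the shifted relation C * g^(-t) = g^(a-t) * h^r described in the abstract.
p, q = 23, 11      # tiny safe prime p = 2q + 1; quadratic residues form a group of order q
g, h = 4, 9        # hypothetical generators (2^2 and 3^2 mod 23); in practice the
                   # discrete log of h to base g must be unknown

def commit(value, rand):
    """Pedersen commitment C = g^value * h^rand mod p."""
    return (pow(g, value, p) * pow(h, rand, p)) % p

a, r, t = 7, 5, 3  # secret attribute a, commitment randomness r, public threshold t
C = commit(a, r)

# The sigma protocol proves knowledge of delta = a - t >= 0 inside the
# shifted commitment; here we only check the algebraic identity itself.
lhs = (C * pow(g, -t, p)) % p   # C * g^(-t); pow(..., -t, p) needs Python 3.8+
rhs = commit(a - t, r)          # g^(a-t) * h^r
assert lhs == rhs
```

The identity holds for any a, r, t because the g-exponents simply subtract; the zero-knowledge proof of a − t ≥ 0 (the actual range argument) is the part the sketch omits.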
Category: Artificial Intelligence

[6] viXra:2604.0096 [pdf] replaced on 2026-05-05 14:00:49

True AI Should Be a Loser, Not a Winner

Authors: Dimiter Dobrev
Comments: 6 Pages.

The modern definition of AI contains an inaccuracy. According to the definition we use today, AI is a computer program that is successful. Indeed, for a computer program to be successful it must be intelligent, but the opposite is not true: a program can be intelligent yet unsuccessful, simply because it pursues different goals and does not aim at the success in question. From a theoretical perspective, the modern definition of AI is good enough, because it answers the question "What is AI?" even though it covers not all intelligent programs but only some of them. From a practical standpoint, however, this definition is insufficient. The reason is that we are at the doorstep of creating True AI, and among all intelligent programs we must choose the one we will be most comfortable with from now on. It is therefore not a good idea to choose one of the successful programs. It would be better to choose a program that does not pursue victory at any cost. Such a program could be called a loser, because it will not be successful enough. After all, in both humans and AI, relentless ambition is not a positive trait.
Category: Artificial Intelligence

[5] viXra:2604.0062 [pdf] submitted on 2026-04-16 18:44:39

DeepSlide: From Artifacts to Presentation Delivery

Authors: Ming Yang, Zhiwei Zhang, Jiahang Li, Haoseng Liu, Yuzheng Cai, Weiguo Zheng
Comments: 37 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)

Presentations are a primary medium for scholarly communication, yet most AI slide generators optimize the artifact (a visually plausible deck) while under-optimizing the delivery process (pacing, narrative, and presentation preparation). We present DeepSlide, a human-in-the-loop multi-agent system that supports the full presentation-preparation process, from requirement elicitation and time-budgeted narrative planning to evidence-grounded slide-script generation, attention augmentation, and rehearsal support. DeepSlide integrates (i) a controllable logical-chain planner with per-node time budgets, (ii) a lightweight content-tree retriever for grounding, (iii) Markov-style sequential rendering with style inheritance, and (iv) sandboxed execution with minimal repair to ensure renderability. We further introduce a dual-scoreboard benchmark that cleanly separates static artifact quality from dynamic delivery excellence. Across 20 domains and diverse audience profiles, DeepSlide matches strong baselines on artifact quality while consistently achieving larger gains on delivery metrics, improving narrative flow, pacing precision, and slide-script synergy with clearer attention guidance.
Category: Artificial Intelligence

[4] viXra:2604.0059 [pdf] replaced on 2026-04-26 07:19:04

Reducing Credit Assignment Variance via Counterfactual Reasoning Paths

Authors: Fei Ding, Yongkang Zhang, Yeling Peng, Youwei Wang, Guoxiong Zhou, Zijian Zeng
Comments: 8 Pages.

Reinforcement learning for multi-step reasoning with large language models (LLMs) often relies on sparse terminal rewards, producing poor credit assignment: the final feedback is propagated evenly across all intermediate decisions. This results in high gradient variance, unstable training, and numerous ineffective updates, ultimately stalling learning and preventing sustained improvement. We introduce a counterfactual comparison-based credit assignment framework that samples multiple reasoning trajectories under the same input. By treating their differences as an implicit approximation of alternative decisions, we construct an implicit process-level advantage estimator that transforms sparse terminal rewards into step-sensitive learning signals. Building on this, we propose Implicit Behavior Policy Optimization (IBPO), which significantly improves training stability and the performance ceiling on mathematical and code reasoning benchmarks, pointing to a promising direction for unlocking the performance potential of LLMs.
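The intra-trajectory comparison the abstract describes is in the family of group-relative baselines: each trajectory's sparse terminal reward is scored against the other samples for the same input. The sketch below is a generic illustration of that family under my own simplifying assumptions, not the paper's IBPO algorithm; the function name and the standardization choice are hypothetical.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Turn the sparse terminal rewards of trajectories sampled from the SAME
    input into comparative signals by centering and scaling each reward
    against its own group (a generic baseline, not the paper's estimator)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)       # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled reasoning trajectories for one prompt: two succeed, two fail.
adv = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
# Successful trajectories receive positive advantage, failed ones negative,
# so the comparison itself, not the raw terminal reward, drives the update.
```

The point of the illustration is that the group supplies its own counterfactual: no learned value function or external baseline is needed to separate better samples from worse ones.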
Category: Artificial Intelligence

[3] viXra:2604.0058 [pdf] submitted on 2026-04-15 20:10:37

Design Conditions for Intra-Group Learning of Sequence-Level Rewards: Token Gradient Cancellation

Authors: Fei Ding, Yongkang Zhang, Youwei Wang, Zijian Zeng
Comments: 8 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)

Under sparse terminal rewards, intra-group comparisons have become the dominant paradigm for fine-tuning reasoning models via reinforcement learning. However, long-term training often leads to issues such as ineffective update accumulation (a "learning tax"), solution-probability drift, and entropy collapse. This paper presents a necessary condition for algorithm design from a token-level credit assignment perspective: to prevent reward-irrelevant drift, intra-group objectives must maintain gradient exchangeability across token updates, enabling gradient cancellation on weak-credit, high-frequency tokens. We show that two common mechanisms that disrupt exchangeability make "non-cancellation" a structural norm. Based on this, we propose minimal intra-group transformations to restore or approximate the cancellation structure in the shared token space. Experimental results demonstrate that these transformations stabilize training, improve sample efficiency, and enhance final performance, validating the value of this design condition.
Category: Artificial Intelligence

[2] viXra:2604.0018 [pdf] submitted on 2026-04-06 20:43:07

DisasterSim: A Reproducible Benchmark for Navigation and Coverage in Collapsed Structures with Empirical Analysis of Classical Exploration and Goal-Conditioned Learning-Based Methods

Authors: Newton Adhikari
Comments: 10 Pages. (Note by viXra Admin: Please submit article written with AI assistance to ai.viXra.org)

Autonomous navigation of collapsed buildings is critical for disaster response, yet no standardized simulation benchmark exists for reproducible evaluation of robot navigation and coverage policies in such environments. We present DisasterSim, an open-source benchmark built on ROS 2 Humble and Gazebo Classic that provides a physically realistic post-earthquake building interior with configurable obstacle density, a multimodal sensor suite with Extended Kalman Filter (EKF)-based fusion, four formally defined evaluation metrics with automated computation, and four reference baseline policies. The entire system—environment, robot, SLAM, navigation stack, metrics, and automated experiment runner—executes from a single command with frozen parameters to ensure full reproducibility. Our empirical study across 39 trials reveals a striking result: three fundamentally different classical exploration paradigms—reactive FSM, frontier-based, and potential field—converge to a statistically indistinguishable performance plateau of approximately 30% area coverage (p>0.79, |d|≤0.27). This convergence suggests that navigation constraints, not exploration strategy, form the primary performance bottleneck in cluttered disaster environments. A partially trained goal-conditioned PPO policy (370k of 600k planned steps)—which navigates toward a fixed known survivor location rather than exploring freely—achieves higher incidental coverage (36.9% mean, 61.1% peak, Cohen's d=0.78), indicating that goal-directed learned navigation traverses more of the environment en route than classical explorers manage in the same time budget. We additionally identify a quantifiable coverage-localization trade-off (Pearson r=0.85, p<0.001), correct a data error present in an earlier draft, and discuss the design of a goal-free RL explorer as the next step toward a fully autonomous learned baseline. All code, configurations, experiment logs, and trained models are publicly available.
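The abstract reports effect sizes as Cohen's d. For readers unfamiliar with the metric, the sketch below computes it with the standard pooled-standard-deviation definition; the coverage numbers are synthetic placeholders, not the paper's 39-trial data.

```python
import statistics

def cohens_d(xs, ys):
    """Cohen's d between two samples: difference of means divided by the
    pooled (Bessel-corrected) standard deviation of the two groups."""
    nx, ny = len(xs), len(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    pooled_sd = (((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)) ** 0.5
    return (statistics.fmean(xs) - statistics.fmean(ys)) / pooled_sd

# Synthetic per-trial coverage percentages for two policies (NOT the paper's data):
classical = [29.0, 31.0, 30.5, 29.5]
learned = [36.0, 38.0, 37.5, 36.5]
d = cohens_d(learned, classical)  # positive: the learned policy covers more area
```

Values of |d| near 0.2 are conventionally read as small effects and near 0.8 as large, which is why the paper's |d|≤0.27 plateau and d=0.78 PPO result point in opposite directions.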
Category: Artificial Intelligence

[1] viXra:2604.0003 [pdf] submitted on 2026-04-02 13:18:10

AI and Neuroengineering Powered Slavery: A Dystopian Future

Authors: Tianqi Zhu
Comments: 6 Pages.

Recent progress in minimally invasive brain-computer interfaces (BCIs), nanoscale neural interfacing, and multimodal neural decoding has enabled increasingly precise access to and interpretation of human brain activity. This paper analyzes the dual-use risks that arise when these technologies are integrated with advanced artificial intelligence and adaptive social engineering methodologies. We formalize a conceptual architecture for "brain-invading systems," which leverage closed-loop neural interaction, personalized modeling, and behavioral manipulation strategies to influence cognitive and affective states. We examine enabling components, including remote-capable neural interfaces and high-fidelity decoding pipelines, and discuss their potential convergence into scalable manipulation frameworks. Key challenges in detecting such systems are evaluated, including signal attribution, adversarial interference, and limitations in current neurodiagnostic methods. We further discuss opportunities for detecting such neuro-AI systems used for malicious purposes based on EEG signals.
Category: Artificial Intelligence