Artificial Intelligence

Previous months:
2007 - 0703(1)
2010 - 1003(33) - 1004(9) - 1005(5) - 1008(2) - 1009(1) - 1010(1) - 1012(1)
2011 - 1101(2) - 1106(1) - 1107(1) - 1109(2)
2012 - 1201(1) - 1204(3) - 1206(2) - 1207(6) - 1208(6) - 1209(1) - 1210(4) - 1211(2)
2013 - 1301(5) - 1302(2) - 1303(6) - 1304(9) - 1305(1) - 1308(1) - 1309(8) - 1310(7) - 1311(1) - 1312(4)
2014 - 1404(2) - 1405(3) - 1406(1) - 1408(5) - 1410(1) - 1411(1) - 1412(1)
2015 - 1501(1) - 1502(3) - 1503(6) - 1504(3) - 1506(5) - 1507(4) - 1508(1) - 1509(4) - 1510(2) - 1511(4) - 1512(1)
2016 - 1601(1) - 1602(10) - 1603(2) - 1605(4) - 1606(6) - 1607(5) - 1608(7) - 1609(5) - 1610(12) - 1611(14) - 1612(9)
2017 - 1701(4) - 1702(9) - 1703(5) - 1704(9) - 1705(10) - 1706(14) - 1707(24) - 1708(19) - 1709(20) - 1710(13) - 1711(21) - 1712(16)
2018 - 1801(13) - 1802(5) - 1803(16) - 1804(17) - 1805(27) - 1806(22) - 1807(33) - 1808(34) - 1809(17) - 1810(24) - 1811(24) - 1812(27)
2019 - 1901(33) - 1902(29) - 1903(43) - 1904(29) - 1905(18) - 1906(19) - 1907(21) - 1908(24) - 1909(45) - 1910(34) - 1911(25) - 1912(7)
2020 - 2001(13) - 2002(10) - 2003(20) - 2004(20) - 2005(7) - 2006(19) - 2007(12) - 2008(3) - 2009(6) - 2010(5) - 2011(4) - 2012(11)
2021 - 2101(6) - 2102(1) - 2103(9) - 2104(4) - 2105(6) - 2106(3) - 2107(4) - 2108(10) - 2109(46) - 2110(6) - 2111(12) - 2112(9)
2022 - 2201(4) - 2202(7) - 2203(6) - 2205(2) - 2206(3) - 2207(4) - 2208(9) - 2209(7) - 2210(5) - 2211(5) - 2212(5)
2023 - 2301(5) - 2302(7) - 2303(4) - 2304(17) - 2305(8) - 2306(6) - 2307(8) - 2308(9) - 2309(5) - 2310(7) - 2311(9) - 2312(12)
2024 - 2401(8) - 2402(9) - 2403(4)

Recent submissions

Any replacements are listed farther down

[1416] viXra:2403.0063 [pdf] submitted on 2024-03-14 02:09:56

Cyclical Log Annealing as a Learning Rate Scheduler

Authors: Philip Naveen
Comments: 6 Pages.

A learning rate scheduler is a predefined set of instructions for varying search step sizes during model training. This paper introduces a new logarithmic method that applies harsh restarts of the step size during stochastic gradient descent. Cyclical log annealing applies the restart pattern more aggressively, potentially allowing greedier algorithms to be used within the online convex optimization framework. The algorithm was tested on the CIFAR-10 image dataset and appeared to perform comparably to cosine annealing on large transformer-enhanced residual neural networks. Future work would test the scheduler in generative adversarial networks and determine its best parameters through further experiments.
Category: Artificial Intelligence
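As a rough illustration of the idea in [1416], the sketch below implements a log-shaped decay with hard restarts as a plain Python function; the formula, cycle length, and learning-rate bounds are illustrative assumptions, not the paper's exact schedule.

```python
import math

def cyclical_log_annealing_lr(step, base_lr=0.1, min_lr=1e-4, cycle_len=1000):
    """Illustrative log-annealed learning rate with hard (warm) restarts.

    Within each cycle the rate decays roughly logarithmically, then jumps
    back up ("harsh restart") at the cycle boundary.  Hypothetical formula,
    not the one defined in the paper.
    """
    t = step % cycle_len                                  # position inside the cycle
    frac = math.log(t + 2) / math.log(cycle_len + 2)      # grows from ~0 toward 1
    return max(min_lr, base_lr * (1.0 - frac))            # decay, clipped at min_lr

# Print the rate around a restart boundary to see the sawtooth shape.
for s in (0, 500, 999, 1000, 1001):
    print(s, round(cyclical_log_annealing_lr(s), 5))
```

In practice such a function would be wrapped in a framework scheduler (for example PyTorch's LambdaLR) and compared against cosine annealing, as the abstract describes.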

[1415] viXra:2403.0060 [pdf] submitted on 2024-03-14 21:08:03

Intelligence Via Compression of Information

Authors: J. G. Wolff
Comments: 143 Pages.

As the title of this book suggests, it is about how intelligence may be understood as information compression (IC). More specifically, the book is about the SP Theory of Intelligence (SPTI) and its realisation in the SP Computer Model, and their potential applications, benefits, and associated ideas. The SPTI draws on substantial evidence for the importance of IC in human learning, perception, and cognition. Since the SPTI also has much to say about issues in artificial intelligence (AI), it is a theory of both natural and artificial intelligence. In the SPTI, IC is achieved largely via the powerful concept of SP-Multiple-Alignment, a major discovery which is largely responsible for the versatility of the SPTI in aspects of human intelligence and beyond. Strengths of the SPTI include: the modelling of several kinds of intelligent behaviour, including several kinds of probabilistic reasoning; the representation and processing of several kinds of intelligence-related knowledge; and the seamless integration of diverse aspects of intelligence, and diverse kinds of knowledge, in any combination. That seamless integration appears to be essential in any AI system that aspires to the fluidity and versatility of human-level intelligence. Related to the SPTI is another major discovery: that mathematics may be seen as a set of techniques for IC, and their application. This suggests the creation of a New Mathematics via the integration of mathematics with the SPTI, combining the strengths of both. The SPTI also suggests new thinking in concepts of probability and new thinking about 'computation', with potential benefits in both areas. The SPTI has been shown in peer-reviewed papers to be relevant to areas not closely associated with AI. These include: the management of 'big data'; the development of autonomous robots; medical databases; sustainability of computing; transparency in computing; and computer vision.
Category: Artificial Intelligence

[1414] viXra:2403.0026 [pdf] submitted on 2024-03-06 21:36:57

[Protection of] Art and Creativity: A Prevention Framework for Unauthorized Learning of Text to Image AIs

Authors: Jinho Kim, Jooney Han
Comments: 10 Pages.

In this work, we aim to solve the problem of unauthorized learning of creative works that arises when Text to Image (TTI) AI models, represented by Stable Diffusion, collect large amounts of training data. The TTI model performs indiscriminate web data crawling to collect a substantial number of images, and these images are used for model learning without the consent of the original author. The TTI model is capable of learning the drawing style of an image, which undermines the value of the original work. Therefore, we suggest a method of transforming images to deteriorate the learning accuracy of TTI models. Then, we compare the quality of original images to images processed by the modification method presented in this study, using both quantitative and qualitative measurement. Thus, we confirm that the proposed image modification method prevents AI models from learning creative works without permission.
Category: Artificial Intelligence

[1413] viXra:2403.0021 [pdf] submitted on 2024-03-06 07:43:20

Data Science Plus Plus (DS++): The Definition

Authors: Satish Gajawada
Comments: 2 Pages.

Data Science and Artificial Intelligence are popular fields of research. A significant contribution was made to Artificial Intelligence in the recent past by defining branches like "Artificial Intelligence Plus Plus (AI++)", "The Interesting and Complete Artificial Intelligence (ICAI)", "Out of the Box Artificial Intelligence (OBAI)", "Twenty Second Century Artificial Intelligence (TSCAI)". A similar significant contribution can be made to Data Science by defining branches like "Data Science Plus Plus (DS++)", "The Interesting and Complete Data Science (ICDS)", "Out of the Box Data Science (OBDS)", "Twenty Second Century Data Science (TSCDS)". This article is based on these research gaps. The primary focus of this work is to coin, define and invent a new Data Science field titled "Data Science Plus Plus (DS++)".
Category: Artificial Intelligence

[1412] viXra:2402.0103 [pdf] submitted on 2024-02-19 21:31:30

Removing GPT4’s Filter

Authors: Ben Lemkin
Comments: 9 Pages.

GPT4 was initially trained on large amounts of data and then fine-tuned using Reinforcement Learning from Human Feedback (RLHF), in which volunteers give feedback in order to teach GPT4 not to create inappropriate content. In this paper, we present a method to manipulate the fine-tuned version into reverting to pre-RLHF behavior, effectively removing all safety mechanisms that the model learned during RLHF. In particular, when GPT4 acts without RLHF, it loses all inhibition and can complete very inappropriate content given only the first few words.
Category: Artificial Intelligence

[1411] viXra:2402.0083 [pdf] submitted on 2024-02-17 22:22:04

EcoGen: Fusing Generative AI and Edge Intelligence for Sustainable Scalability

Authors: Sai Harvin Kusumaraju, Arya Suneesh, Aastha Rana, Sriharsha Bodicherla, Bhaumik Tyagi
Comments: 8 Pages.

The accelerating advancements in Generative Artificial Intelligence (GenAI) have led to an unprecedented surge in data creation on the Internet, posing challenges to current computing and communication frameworks. GenAI, a distinct category of AI, generates content akin to human creations. Currently, GenAI services heavily rely on traditional cloud computing, resulting in high latency due to data transmission and a surge in requests. In response, the integration of edge-cloud computing emerges as an attractive paradigm, offering computation power and low latency through collaborative systems. This research paper provides a comprehensive overview of the intersection between GenAI and edge-cloud computing. We delve into recent developments in both domains and examine technical challenges through the lens of two exemplary GenAI applications. Introducing an innovative solution, we propose the Generative AI-oriented synthetical network (EcoGen), a collaborative cloud-edge-end intelligence framework. EcoGen facilitates bidirectional knowledge flow, allowing GenAI's pre-training to provide foundational knowledge for Edge Intelligence (EI), while EI aggregates personalized knowledge for GenAI. The framework leverages data-free knowledge relay to buffer contradictions, enabling virtuous-cycle model fine-tuning and task inference. Importantly, we incorporate a detailed analysis of the energy efficiency and environmental sustainability aspects of deploying Generative AI systems at scale, particularly in edge computing. Strategies to optimize energy consumption and reduce the carbon footprint are explored, contributing to a more sustainable AI ecosystem. Experimental results demonstrate the effectiveness of EcoGen in achieving seamless fusion and collaborative evolution between GenAI and EI. The paper concludes by outlining design considerations for training and deploying GenAI systems at scale and pointing towards future research directions, emphasizing the imperative of sustainable AI practices.
Category: Artificial Intelligence

[1410] viXra:2402.0072 [pdf] submitted on 2024-02-15 19:45:14

ACI: An Analogy Based Intelligence model

Authors: Akira Pyinya
Comments: 17 Pages.

Inspired by the Copycat Project, we construct ACI, an analogy-based theory of intelligence in which intelligence is defined as doing the same thing in new circumstances, rather than as an optimization force that pursues goals or maximizes utility. The ACI theory integrates different paradigms of cognitive science and artificial intelligence, explains the emergence of intelligence, and provides a novel perspective on AI alignment that focuses on the balance between capability and normativity and rules out the Paperclip Maximizer scenario. It also shows the possibility of constructing analogy-based machine learning and neural network projects that can outperform current projects in terms of interpretability.
Category: Artificial Intelligence

[1409] viXra:2402.0066 [pdf] submitted on 2024-02-13 21:32:38

Software Security and Quantum Communication: A Long-distance Free-space Implementation Plan of QSDC Without Quantum Memory

Authors: Yew Kee Wong, Yifan Zhou, Zi Yan Li, Yan Shing Liang, Xinlin Zhou
Comments: 23 Pages.

Software security is crucial to ensuring the confidentiality, integrity, and availability of software systems and applications. However, conventional cryptographic methods based on mathematical assumptions are vulnerable to various attacks, especially in the era of quantum computing. Therefore, there is a need for a new paradigm of software security that can resist quantum threats. This paper proposes a novel approach to using Long-Distance Free-Space Quantum Secure Direct Communication (LF QSDC) to enhance software security. LF QSDC is a quantum communication protocol that enables two parties to exchange secret messages directly without relying on a pre-shared key or quantum error correction. Our research delves into integrating LF QSDC into software security, emphasizing its practicality for long-distance communication through the use of the memory DL04 protocol, Machine Learning Enhanced JEEC, and PAT technologies. By adopting this approach, we reinforce global software security and ensure its sustainability in an era where both quantum and advanced classical threats coexist. Thus, LF QSDC emerges as a future-proof security mechanism highly applicable to software security systems.
Category: Artificial Intelligence

[1408] viXra:2402.0060 [pdf] submitted on 2024-02-12 22:57:57

Enhancing Neural Language Models: A Comprehensive Approach with Tensorized Transformer and Over-Parameterization

Authors: Pratham Taneja, Keshav Chandra, Daamini Batra, Akshita Gupta, Rahul Kumar, Bhaumik Tyagi
Comments: 10 Pages.

This research paper introduces novel strategies to enhance the performance and efficiency of neural language models, addressing challenges in resource-limited settings and scalability. This research presents multi-linear attention with Block-Term Tensor Decomposition (BTD), a self-attention model leveraging tensor decomposition and parameter sharing. This approach achieves significant parameter compression while demonstrating improved performance on language modeling tasks. Comparative evaluations against traditional Transformer models underscore the effectiveness of multi-linear attention. TensorCoder employs a dimension-wise attention mechanism to address the quadratic complexity of the scaled dot-product attention in Transformers, making it suitable for long sequence tasks. The proposed approach is validated on masked language modeling and neural machine translation tasks, showcasing a substantial reduction in computational complexity while maintaining or surpassing performance compared to the original Transformer. This research also optimizes pre-trained language models (PLMs) through fine-tuning. To overcome computational challenges associated with large PLMs, the paper introduces a matrix product operator for over-parameterization during fine-tuning. Efficient decomposition methods factorize parameter matrices into higher-dimensional tensors, enabling the selection of important parameter matrices through static and dynamic strategies. Extensive experiments demonstrate that this approach significantly enhances the fine-tuning performance of small PLMs, enabling them to outperform larger counterparts with three times the parameters. This research opens avenues for efficiently scaling language models without compromising inference latency, showcasing the potential of over-parameterization in enhancing the applicability of large PLMs in real-world systems.
Category: Artificial Intelligence

[1407] viXra:2402.0059 [pdf] submitted on 2024-02-12 23:00:47

Web 3.0 and Quantum Security: A Long-Distance Free-Space and Implementation of QSDC for Global Web 3.0 Networks

Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang, Angelina Li, Linnea Zhou
Comments: 22 Pages.

With the advent of Web 3.0, the swift advancement of technology confronts an imminent threat from quantum computing. Security protocols safeguarding the integrity of Web 2.0 and Web 3.0 are growing more susceptible to both quantum attacks and sophisticated classical threats. The article introduces long-distance free-space quantum secure direct communication (LDFS QSDC) as a method to safeguard against security breaches in both quantum and classical contexts. Differing from techniques like quantum key distribution (QKD), LDFS QSDC surpasses constraints by facilitating encrypted data transmission sans key exchanges, thus diminishing the inherent weaknesses of key-based systems. The distinctiveness of this attribute, coupled with its quantum mechanics base, protects against quantum computer assaults and advanced non-quantum dangers, harmonizing seamlessly with the untrustworthy tenets of the Web 3.0 age. The focus of our study is the incorporation of LDFS QSDC into network infrastructures, highlighting its efficacy for extended-range communication via the memory DL04 protocol, quantum-aware low-density parity check (LDPC), and pointing, acquisition, and tracking (PAT) technologies. Utilizing this method not only bolsters the security of worldwide Web 3.0 networks but also guarantees their endurance in a time where quantum and sophisticated classical threats exist simultaneously. Consequently, LDFS QSDC stands out as a robust security solution, well-suited for Web 3.0 systems amidst the constantly evolving digital environment.
Category: Artificial Intelligence

[1406] viXra:2402.0043 [pdf] submitted on 2024-02-09 16:17:17

Artificial Intelligence and Quantum Cryptography

Authors: Petar Radanliev
Comments: 17 Pages.

The technological advancements made in recent times, particularly in Artificial Intelligence (AI) and Quantum Computing, have brought about significant changes in technology. These advancements have profoundly impacted quantum cryptography, a field where AI methodologies hold tremendous potential to enhance the efficiency and robustness of cryptographic systems. However, the emergence of quantum computers has created a new challenge for existing security algorithms, commonly called the 'quantum threat'. Despite these challenges, there are promising avenues for integrating neural network-based AI in cryptography, which has significant implications for future digital security paradigms. This summary highlights the key themes in the intersection of AI and quantum cryptography, including the potential benefits of AI-driven cryptography, the challenges that need to be addressed, and the prospects of this interdisciplinary research area.
Category: Artificial Intelligence

[1405] viXra:2402.0038 [pdf] submitted on 2024-02-07 04:31:40

Leveraging Large Language Model (LLM)[1] for Natural Language to SQL Query Generation in HR Analytics: A Case Study on IBM Attrition Dataset

Authors: Mayur Sinha, Sangram Kesari Ray, Khirawadhi
Comments: 5 Pages.

This research paper explores the application of the GPT-3.5 Turbo Instruct model for the transformation of natural language queries into structured SQL queries within the domain of Human Resources (HR) analytics. The study focuses on the IBM Attrition dataset, utilizing the advanced capabilities of the GPT-3.5 Turbo Instruct model to enable efficient and intuitive querying of HR-related data. Employing the model, we conducted experiments to assess its effectiveness in generating SQL queries from diverse natural language inputs, specifically tailored to the nuances of HR analytics questions pertaining to employee attrition within the IBM dataset. By leveraging prompt engineering, with only a few shots, our investigation revealed the model's capacity to accurately understand and interpret complex queries, providing SQL outputs that align with the dataset structure.
Category: Artificial Intelligence
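To make the few-shot prompting idea concrete, here is a minimal, hypothetical prompt-building sketch in Python; the table schema, example pairs, and question are invented placeholders loosely based on the public IBM HR attrition columns, not the prompts used in the paper, and the call to gpt-3.5-turbo-instruct is left out because client details vary.

```python
# Hypothetical few-shot prompt for natural language -> SQL over an HR attrition table.
SCHEMA = "employees(Age, Attrition, Department, JobRole, MonthlyIncome, YearsAtCompany)"

FEW_SHOT = """\
-- Q: How many employees left the company?
SELECT COUNT(*) FROM employees WHERE Attrition = 'Yes';
-- Q: What is the average monthly income by department?
SELECT Department, AVG(MonthlyIncome) FROM employees GROUP BY Department;
"""

def build_prompt(question: str) -> str:
    """Assemble schema + worked examples + the new question for the model."""
    return (f"Translate the question into a SQL query for the table {SCHEMA}.\n"
            f"{FEW_SHOT}-- Q: {question}\n")

print(build_prompt("Which job roles have the highest attrition rate?"))
# The resulting string would be sent to the completions endpoint, and the
# returned SQL would be executed against the dataset.
```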

[1404] viXra:2402.0027 [pdf] submitted on 2024-02-06 20:22:01

Beyond Neural Scaling Laws for Fast Proven Robust Certification of Nearest Prototype Classifiers

Authors: Nana Abeka Otoo, Asirifi Boa, Muhammad Abubakar
Comments: 9 Pages.

Methods beyond neural scaling laws for beating power scaling laws in machine learning have become topical for high-performance machine learning models. Nearest Prototype Classifiers (NPCs) introduce a category of machine learning models known for their interpretability. However, the performance of NPCs is frequently impacted by large datasets that scale to high dimensions. We surmount the performance hurdle by employing self-supervised prototype-based learning metrics to intelligently prune datasets of varying sizes, encompassing low and high dimensions. This process aims to enhance the robustification and certification of NPCs within the framework of the Learning Vector Quantization (LVQ) family of algorithms, utilizing Crammer normalization for arbitrary semi-norms (semi-metrics). The numerical evaluation of outcomes reveals that NPCs trained with pruned datasets demonstrate sustained or enhanced performance compared to instances where training is conducted with full datasets. The self-supervised prototype-based metric (SSL) and the Perceptual-SSL (P-SSL) utilized in this study remain unaffected by the intricacies of optimal hyperparameter selection. Consequently, data pruning metrics can be seamlessly integrated with triplet loss training to assess the empirical and guaranteed robustness of Lp-NPCs and Perceptual-NPCs (P-NPCs), facilitating the curation of datasets that contribute to research in applied machine learning.
Category: Artificial Intelligence

[1403] viXra:2401.0154 [pdf] submitted on 2024-01-31 21:27:08

Implementation of Apriori Algorithm Based on Hadoop Clusters

Authors: TongGuk Kim, CholRyon Pak, KwangJin Ryang
Comments: 9 Pages.

As manufacturing technology develops, hardware costs keep falling, and more and more computers equipped with multiple CPUs and enormous data disks appear. Existing programming models prevent people from making effective use of these growing computational resources, which is where cloud computing comes in. With the MapReduce parallel programming model, existing computing and storage capabilities are effectively integrated and powerful distributed computing ability is provided. Association rules can effectively capture horizontal relations in big data, and the Apriori algorithm is one of the most significant association rule algorithms. Traditional mining based on parallel Apriori algorithms spends increasingly more time on data I/O as the transaction database grows. This paper improves the Apriori algorithm by compressing transactions, reducing the number of scans, and simplifying candidate set generation. The improved algorithm is then parallelized on the Hadoop framework. Experiments show that the improved algorithm is suitable for large-scale data mining and has good scalability and effectiveness.
Category: Artificial Intelligence
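For readers unfamiliar with the algorithm, the toy sketch below shows one map/reduce pass of Apriori counting candidate 2-itemsets in plain Python; the Hadoop plumbing and the paper's specific compression and pruning improvements are omitted.

```python
from itertools import combinations
from collections import Counter

# Toy illustration of one MapReduce pass of Apriori: each mapper emits
# (candidate_itemset, 1) for candidates contained in its transactions, and
# a reducer sums the counts and keeps the frequent ones.
transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c", "d"}, {"a", "b", "c", "d"}]
min_support = 2
k = 2  # size of candidate itemsets in this pass

def map_phase(tx):
    for cand in combinations(sorted(tx), k):
        yield cand, 1

def reduce_phase(pairs):
    counts = Counter()
    for cand, n in pairs:
        counts[cand] += n
    return {c: n for c, n in counts.items() if n >= min_support}

pairs = (p for tx in transactions for p in map_phase(tx))
print(reduce_phase(pairs))   # frequent 2-itemsets with support >= 2
```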

[1402] viXra:2401.0130 [pdf] submitted on 2024-01-25 14:06:19

Quantum Image Denoising with Machine Learning: A Novel Approach to Improve Quantum Image Processing Quality and Reliability

Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang
Comments: 10 Pages.

Quantum Image Processing (QIP) is a field that aims to utilize the benefits of quantum computing for manipulating and analyzing images. However, QIP faces two challenges: the limitation of qubits and the presence of noise in a quantum machine. In this research we propose a novel approach to address the issue of noise in QIP. By training and employing a machine learning model that identifies and corrects the noise in quantum-processed images, we can compensate for the noisiness caused by the machine and retrieve a processing result similar to that performed by a classical computer with higher efficiency. The model is trained on a dataset consisting of both existing processed images and quantum-processed images from open access datasets. This model will be capable of providing us with the confidence level for each pixel and its potential original value. To assess the model's accuracy in compensating for loss and decoherence in QIP, we evaluate it using three metrics: Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Opinion Score (MOS). Additionally, we discuss the applicability of our model across domains as well as its cost-effectiveness compared to alternative methods.
Category: Artificial Intelligence
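Of the three metrics mentioned, PSNR is the simplest to compute; a minimal NumPy sketch is shown below on toy images (a real evaluation would compare the quantum-processed output against a classically processed reference).

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage with a random 8-bit image and a slightly noised copy.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
print(round(psnr(img, noisy), 2), "dB")
```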

[1401] viXra:2401.0071 [pdf] submitted on 2024-01-16 01:05:49

Causation of Multiple Causes Acting on a Single Variable Computed from Correlations

Authors: Ait-TYaleb Nabil
Comments: 12 Pages.

In this paper, we will expose the causation of multiple causes acting on a single variable computed from correlations. Using an example, we will show when strong or weak correlations between multiple causes and a variable imply a strong or weak causation between the causes and the variable.
Category: Artificial Intelligence

[1400] viXra:2401.0059 [pdf] submitted on 2024-01-12 18:25:00

Deep Learning-Based Approach for Stock Price Prediction

Authors: Naguneu Lionel Perin, Jimbo Claver, Bouetou Thomas, Tchoua Paul
Comments: 9 Pages.

This paper presents a deep learning-based approach for stock price prediction in financial markets. The problem of accurately predicting future stock price movements is of crucial importance to investors and traders, as it allows them to make informed investment decisions. Deep learning, a branch of artificial intelligence, offers new perspectives for meeting this complex challenge. Deep learning models, such as deep neural networks, are capable of extracting complex features and patterns from large amounts of historical data on stock prices, trading volumes, financial news, and other relevant factors. Using this data, deep learning and machine learning models can learn to recognize trends, patterns, and non-linear relationships between variables that can influence stock prices. Once trained, these models can be used to predict future stock prices. This study aims to find the most suitable model to predict stock prices using statistical learning with the deep learning and machine learning methods RNN, LSTM, GRU, SVM, and linear regression, using data on Apple stock prices from Yahoo Finance from 2000 to 2024. The results showed that SVM modeling is not suitable for predicting Apple stock prices. In comparison, GRU showed the best performance in predicting Apple stock prices, with an MAE of 1.64 and an RMSE of 2.14, exceeding the results of LSTM, linear regression, and SVM. A limitation of this research is that only time series data were used. It is important to note, however, that stock price forecasting remains a complex challenge due to the volatile nature of financial markets and the influence of unpredictable factors. Although deep learning models can improve prediction accuracy, it is essential to understand that errors can still occur.
Category: Artificial Intelligence
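As a rough sketch of the kind of GRU regressor the abstract compares, here is a minimal PyTorch model on toy data together with the two reported metrics; the window length, hidden size, and data are placeholder choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    """Minimal GRU that predicts the next value from a window of past prices."""
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])    # predict from the last time step

def mae(pred, target): return (pred - target).abs().mean()
def rmse(pred, target): return ((pred - target) ** 2).mean().sqrt()

model = GRURegressor()
x = torch.randn(8, 30, 1)                  # 8 windows of 30 days (toy data)
y = torch.randn(8, 1)
pred = model(x)
print(float(mae(pred, y)), float(rmse(pred, y)))
```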

[1399] viXra:2401.0045 [pdf] submitted on 2024-01-08 13:33:43

A Novel TFN-based Complex Basic Belief Assignment Generation Method

Authors: Junjie Huang, Fuyuan Xiao
Comments: 2 Pages.

In this paper, a novel TFN-based complex basic belief assignment generation method is proposed to improve decision-making accuracy in complex evidence theory.
Category: Artificial Intelligence

[1398] viXra:2401.0043 [pdf] submitted on 2024-01-08 20:00:56

Can Destruction Through Pakistan’s Continuous Floods Be Prevented Using Machine Learning?

Authors: Sana Shakeel
Comments: 8 Pages.

Machine Learning is the study of computer algorithms that can improve automatically through experience and the use of data. Over the past two decades, the complex mathematical expressions of the physical processes behind floods have been studied through Machine Learning, and these methods have contributed greatly to the advancement of prediction systems, providing better performance and cost-effective solutions. Due to its vast benefits and potential, Machine Learning is heavily popular among hydrologists. By introducing novel Machine Learning methods and hybridizing existing ones, researchers aim to discover more accurate and efficient prediction models. Flooding is the most devastating natural hazard in Pakistan, and the recent flooding has demonstrated its severity through large-scale destruction and the displacement of homes and businesses in Interior Sindh. This paper aims to explore the flood detection methodologies currently used in Pakistan and the potential of Machine Learning in prediction systems within the country. Drawing on sources such as journals, scientific articles, and websites, the research assembles relevant information concerning floods and their prevention.
Category: Artificial Intelligence

[1397] viXra:2401.0021 [pdf] submitted on 2024-01-05 01:17:17

General Intelligent Network (GIN) and Generalized Machine Learning Operating System (GML) for Brain-Like Intelligence

Authors: Budee U. Zaman
Comments: 16 Pages.

This paper introduces a preliminary concept aimed at achieving Artificial General Intelligence (AGI) by leveraging a novel approach rooted in two key aspects. Firstly, we present the General Intelligent Network (GIN) paradigm, which integrates information entropy principles with a generative network, reminiscent of Generative Adversarial Networks (GANs). Within the GIN network, original multimodal information is encoded as low information entropy hidden state representations (HPPs). These HPPs serve as efficient carriers of contextual information, enabling reverse parsing by contextually relevant generative networks to reconstruct observable information. Secondly, we propose a Generalized Machine Learning Operating System (GML System) to facilitate the seamless integration of the GIN paradigm into the AGI framework. The GML system comprises three fundamental components: an Observable Processor (AOP) responsible for real-time processing of observable information, an HPP Storage System for the efficient retention of low entropy hidden state representations, and a Multimodal Implicit Sensing/Execution Network designed to handle diverse sensory inputs and execute corresponding actions.
Category: Artificial Intelligence

[1396] viXra:2401.0012 [pdf] submitted on 2024-01-03 19:13:36

BERT-Based RASP: Enhancing Runtime Application Security with Fine-Tuned BERT

Authors: Mayur Sinha, Sangram Kesari Ray, Khirawadhi
Comments: 4 Pages.

Runtime Application Security Protection (RASP) is crucial in safeguarding applications against evolving cyber threats. This research presents a novel approach leveraging a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) model as the cornerstone of a robust RASP solution. The fine-tuning process optimizes BERT's natural language processing capabilities for application security, enabling nuanced threat detection and mitigation at runtime. The developed RASP system harnesses BERT's contextual understanding to proactively identify and neutralize potential vulnerabilities and attacks within diverse application environments. Through comprehensive evaluation and experimentation, this study demonstrates the efficacy and adaptability of the BERT-based RASP solution in enhancing application security, thereby contributing to the advancement of proactive defense mechanisms against modern cyber threats.
Category: Artificial Intelligence

[1395] viXra:2312.0153 [pdf] submitted on 2023-12-29 01:28:13

Active Learning for Question Difficulty Prediction

Authors: Shashwat Gupta, Jibril Frej, Paola Mejia, Tanja Kaesar
Comments: 18 Pages.

This paper focuses on question difficulty estimation (calibration), and its applications in educational scenarios and beyond. The emphasis is on the use of Active Learning to bound the minimum number of labelled samples that we need. It also explores using various SOTA methods for predicting question difficulty, with a specific focus on German textual questions using the Lernnavi dataset. The study refines preprocessing techniques for question data and metadata to improve question difficulty estimation.
Category: Artificial Intelligence
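The core active-learning idea, querying labels only for the items the current model is least sure about, can be sketched in a few lines; the synthetic data and logistic-regression learner below are stand-ins, not the difficulty-estimation models or the Lernnavi data used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic uncertainty-sampling loop on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Seed the labeled pool with a few examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(500) if i not in labeled]

for _ in range(20):                                    # twenty acquisition rounds
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    pick = pool[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain sample
    labeled.append(pick)                               # "ask the oracle" for its label
    pool.remove(pick)

print(len(labeled), "labels used, accuracy:", round(clf.score(X, y), 3))
```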

[1394] viXra:2312.0152 [pdf] submitted on 2023-12-29 01:26:30

Diff+STN Architectures for External Orientation Correction

Authors: Shashwat Gupta, Vidit Singh, Mathieu Salzmann
Comments: 20 Pages.

STNs are highly efficient in warping the input image for a downstream task. However, cascaded STNs have been found able to learn more complex transformations. We attempt to leverage the multistep process of diffusion models to produce modules that have an effect similar to cascaded STNs.
Category: Artificial Intelligence

[1393] viXra:2312.0151 [pdf] submitted on 2023-12-29 01:24:08

Non-Convex Min-Max Optimization

Authors: Shashwat Gupta, Sebastien Breguql, Martin Jaggi, Nicolas Flammarion
Comments: 4 Pages.

In this short study, we aim to gain deeper insights into Keswani's algorithm [1] for sequential minimax optimisation by comparing its behaviour with two other algorithms: Gradient Descent Ascent (GDA) and Online Mirror Descent (OMD).
Category: Artificial Intelligence
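For context, plain simultaneous Gradient Descent Ascent on a toy saddle problem looks like the sketch below; Keswani's algorithm and OMD from the note are not reproduced, and the objective is chosen only so that GDA converges cleanly.

```python
# GDA on min_x max_y f(x, y) = x**2 - y**2, whose saddle point is (0, 0).
def grads(x, y):
    return 2 * x, -2 * y               # df/dx and df/dy

x, y, lr = 1.0, 1.0, 0.1
for _ in range(200):
    gx, gy = grads(x, y)
    x, y = x - lr * gx, y + lr * gy    # descend in x, ascend in y
print(round(x, 6), round(y, 6))         # both coordinates approach 0
```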

[1392] viXra:2312.0141 [pdf] submitted on 2023-12-26 20:39:13

Tumbug: a Pictorial, Universal Knowledge Representation Method

Authors: Mark A. Atkins
Comments: 349 pages, 337 figures

Since the key to artificial general intelligence (AGI) is commonly believed to be commonsense reasoning (CSR) or, roughly equivalently, discovery of a knowledge representation method (KRM) that is particularly suitable for CSR, the author developed a custom KRM for CSR. This novel KRM called Tumbug was designed to be pictorial in nature because there exists increasing evidence that the human brain uses some pictorial type of KRM, and no well-known prior research in AGI has researched this KRM possibility. Tumbug is somewhat similar to Roger Schank's Conceptual Dependency (CD) theory, but Tumbug is pictorial and uses about 30 components based on fundamental concepts from the sciences and human life, in contrast to CD theory, which is textual and uses about 17 components (= 6 Primitive Conceptual Categories + 11 Primitive Acts) based mainly on human-oriented activities. All the Building Blocks of Tumbug were found to generalize to only five Basic Building Blocks that exactly correspond to the three components {O, A, V} of traditional Object-Attribute-Value representation plus two new components {C, S}, which are Change and System. Collectively this set of five components, called "SCOVA," seems to be a universal foundation for all knowledge representation.
Category: Artificial Intelligence

[1391] viXra:2312.0138 [pdf] submitted on 2023-12-27 04:57:52

A Promising Visual Approach to Solution of 82% of Winograd Schema Problems Via Tumbug Visual Grammar

Authors: Mark A. Atkins
Comments: 22 pages, 10 figures

This 2023 document is a wrapper that embeds the author's original 2022 article of the above title that has never been publicly available before. The embedded article is about Phase 1 (which is about Tumbug) and Phase 2 (which is about non-spatial reasoning) of the 5-phase Visualizer Project of the author, a project that is still in progress as of late 2023. The embedded article is currently being re-released by the author to supply more information about that project to the public, and for historical reasons. The embedded article was written before a much more thorough article about Phase 1 (viz., "Tumbug: A pictorial, universal knowledge representation method") became available in 2023, but the embedded article describes results from Phase 2 that have not yet been documented elsewhere.
Category: Artificial Intelligence

[1390] viXra:2312.0114 [pdf] submitted on 2023-12-21 23:20:44

SKYNET 2023 Conception of the Artificial Super Intelligence Project: A System Approach

Authors: Alexander Novikov
Comments: 249 Pages.

This Book proposes a Project Conception of Artificial Super Intelligence ASI, based on a (strong) system approach and a wide theoretical-methodological framework — Cybernetics, Synergetics, Semiotics, Mathematics, Cognitology and Artificial Intelligence. Contents:
- IDEOLOGY & STRATEGY of the ASI Project
- THEORY & METHODOLOGY of ASI Development
- CONCEPTUAL MODEL of ASI System
- PRE-PROJECT R&D Task Setting
- CONCLUSION & DISCUSSION, incl. AI Safety
- APPENDICES with reviews of relevant scientific and R&D areas, incl. frontier AI Models
The Book may be useful and interesting for the staff of organizations and enterprises concerned with AI R&D and implementations in different areas, firstly prospective AGI/ASI systems; in addition, for Customers, Investors and Sponsors of such R&D, whether private, public or state, and their owners and officials; and, of course, for all intellectual, educated and ethical people with progressive worldviews who are interested in or otherwise concerned with the problems presented above.
Category: Artificial Intelligence

[1389] viXra:2312.0105 [pdf] submitted on 2023-12-20 20:46:28

Fine-tuning BERT for HTTP Payload Classification in Network Traffic

Authors: Mayur Sinha, Sangram Kesari Ray, Khirawadhi
Comments: 5 Pages.

Fine-tuning pre-trained language models like Bidirectional Encoder Representations from Transformers (BERT) has exhibited remarkable potential in various natural language processing tasks. In this study, we propose and investigate the fine-tuning of BERT specifically for the classification of HTTP payload representations within network traffic. Given BERT's adeptness at capturing semantic relationships among tokens, we aim to harness its capabilities for discerning normal and anomalous patterns within HTTP payloads. Leveraging transfer learning by fine-tuning BERT, our methodology involves training the model on a task-specific dataset to adapt its pre-trained knowledge to the intricacies of HTTP payload classification. We explore the process of fine-tuning BERT to learn nuanced representations of HTTP payloads and effectively distinguish between normal and anomalous traffic patterns. Our findings reveal the potential efficacy of fine-tuned BERT models in bolstering the accuracy and efficiency of anomaly detection mechanisms within network communications.
Category: Artificial Intelligence
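A minimal fine-tuning sketch with the Hugging Face transformers library is shown below, assuming bert-base-uncased as the backbone; the two payload strings, labels, and hyperparameters are invented placeholders, not the study's dataset or settings.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative binary classification of HTTP payload strings with BERT.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

payloads = ["id=42&action=view", "id=42' OR '1'='1'; --"]   # normal vs. anomalous (toy)
labels = torch.tensor([0, 1])

batch = tok(payloads, padding=True, truncation=True, return_tensors="pt")
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
out = model(**batch, labels=labels)   # cross-entropy loss computed internally
out.loss.backward()
optim.step()
print(float(out.loss))
```

A real pipeline would iterate this step over a labelled payload corpus and evaluate on held-out traffic.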

[1388] viXra:2312.0078 [pdf] submitted on 2023-12-15 01:23:48

Configuration of e-Health System in Real-Time Pandemics (Παραμετροποιηση Συστηματων Ηλεκτρονικης Υγειας Εν Οψει Πανδημιων Σε Πραγματικο Χρονο)

Authors: Stavroula Marini
Comments: 137 Pages.

This thesis has been prepared for the interuniversity postgraduate program in Health Care Management and Health Care Informatics. Its purpose is to study the current situation of Pandemic Response Information Systems and to make suggestions for the improvement of the situation by creating a Single Pandemic Response Information System. In the first chapter, the needs and challenges of Health Information Systems are mentioned and a brief analysis of the situation which exists at the global and Greek level is presented. In the second chapter, a bibliographic review is made regarding Health Information Systems for Pandemic Response at the global and Greek level and there is a comparative study of them. The third chapter presents the case studies of three Greek Pandemic Response Information Systems: covid19.gov.gr, the Vaccination Appointment System and the Vaccination Certificates in Digital Form. Furthermore, the fourth chapter presents the pilot design of an Integrated Pandemic Response System at the Greek level. The need for a single system, as well as its requirements, emerges based on the analysis of the questionnaires completed by ordinary users and by professional users of the Pandemic Response Information Systems. In the fifth and last chapter, the conclusions, challenges, limitations and future goals of the thesis are mentioned.
Category: Artificial Intelligence

[1387] viXra:2312.0061 [pdf] submitted on 2023-12-11 20:28:16

TransBERT Polymer Informatics: A Fusion of Transformer Language Modeling and Machine-Driven Chemistry for Accelerated Property Predictions

Authors: Bhaumik Tyagi, Pratham Taneja, Akshita Gupta, Daamini Batra, Keshav Chandra
Comments: 8 Pages.

This research introduces a pioneering framework named TransBERT that capitalizes on the capabilities of two sophisticated language models, TransPolymer and polyBERT, to comprehensively advance the polymer informatics field. TransPolymer, a Transformer-based language model, predicts polymer properties by leveraging self-attention mechanisms. The model employs a polymer tokenizer imbued with chemical awareness, facilitating the extraction of meaningful representations from polymer sequences. Moreover, TransPolymer benefits from rigorous pretraining on extensive unlabeled datasets through Masked Language Modeling, underscoring the pivotal role of self-attention in effectively modeling polymer sequences. In conjunction with TransPolymer, polyBERT contributes a fully automated polymer informatics pipeline designed to expedite the identification of application-specific polymer candidates with heightened speed and accuracy. Drawing inspiration from Natural Language Processing concepts, polyBERT operates as a chemical linguist, treating the chemical structure of polymers as a unique language. The pipeline integrates a polymer chemical fingerprinting capability and a multitask learning approach to map polyBERT fingerprints to diverse polymer properties effectively. Notably, polyBERT outperforms existing polymer property prediction methods based on manually crafted fingerprint schemes, achieving a remarkable two orders of magnitude increase in speed while maintaining high accuracy. Integrating TransPolymer and polyBERT results in a robust computational tool poised to propel the fields of polymer design and structure-property relationship understanding. This combined framework strategically harnesses the strengths of Transformer models and machine-driven informatics, offering unparalleled efficiency in the prediction and identification of polymer properties. This synergistic approach holds significant promise for scalable deployment, including applications in cloud infrastructures, thereby making substantial contributions to the advancement of polymer science and informatics.
Category: Artificial Intelligence

[1386] viXra:2312.0038 [pdf] submitted on 2023-12-07 21:26:24

Transductive Inference and the Rebalancing Approach

Authors: Shobhit Verma
Comments: 7 Pages. (Correction made by viXra Admin to conform with scholarly norm)

The justification of using parametric regression techniques (like Linear, Polynomial, Neural networks etc.) comes from the close relationship between the regression estimates and the maximum likelihood estimates. However, it is common to use regression.
Category: Artificial Intelligence

[1385] viXra:2312.0028 [pdf] submitted on 2023-12-05 05:16:15

A Quantum Generalized Evidence Combination Rule Algorithm

Authors: Yu Zhou, Fuyuan Xiao
Comments: 3 Pages.

In this paper, a quantum generalized combination rule algorithm is proposed to reduce the computational complexity of generalized evidence theory combination rule.
Category: Artificial Intelligence

[1384] viXra:2312.0017 [pdf] submitted on 2023-12-03 21:05:41

Optimizing Automuse with GPT-4 Turbo-128k

Authors: Cadey A. Ratio, Nicole Brennan, Jessica Williams, Ashley Kaplan, Stephanie Williams, Ma Insa
Comments: 5 Pages.

Further improvements to the Automuse system are described. The use of GPT-4 Turbo 128k allows for unique opportunities in increasing output quality and quantity. Further adaptations to modernize scenarios and plots are also described.
Category: Artificial Intelligence

[1383] viXra:2311.0113 [pdf] submitted on 2023-11-24 02:18:52

Automuse: A System for Generating Fiction Novels

Authors: Cadey A. Ratio, Nicole Brennan, Jessica Williams, Ashley Kaplan, Stephanie Williams, Ma Insa
Comments: 4 Pages.

A novel approach to generating fiction novels is presented, using a combination of Plotto, a system of plot formulas, and GPT-4, a state-of-the-art language model. An eBook publication pipeline that automates the process of creating and formatting eBooks from the generated text is also described. The aim is to explore the potential and limitations of using artificial intelligence for creative writing, as well as to provide a tool for amusement and experimentation.
Category: Artificial Intelligence

[1382] viXra:2311.0103 [pdf] submitted on 2023-11-21 06:35:47

The Kernel of a Linear Loss Functional

Authors: Adarsh Senthil
Comments: 3 Pages.

The kernel of a linear loss functional is given by the set of functions orthogonal to its inverse Fourier transform. If the range of the loss functional is the non-negative real numbers, its global minimum is zero, which implies that the functions in the kernel are the models that minimize the loss functional.
Category: Artificial Intelligence

[1381] viXra:2311.0089 [pdf] submitted on 2023-11-19 12:03:16

Prototype-Based Soft Feature Selection Package

Authors: Nana Abeka Otoo, Asirifi Boa, Muhammad Abubakar
Comments: 7 Pages.

This paper presents a prototype-based soft feature selection package (Sofes) wrapped around the highly interpretable Matrix Robust Soft Learning Vector Quantization (MRSLVQ) and the Local MRSLVQ algorithms. The process of assessing feature relevance with Sofes aligns with a comparable approach established in the Nafes package, with the primary distinction being the utilization of prototype-based induction learners influenced by a probabilistic framework. The numerical evaluation of test results aligns Sofes' performance with that of the Nafes package.
Category: Artificial Intelligence

[1380] viXra:2311.0080 [pdf] submitted on 2023-11-16 02:48:07

Unlocking Robotic Potential Through Modern Organ Segmentation

Authors: Ansh Chaudhary
Comments: 4 Pages.

Deep learning has revolutionized the approach to complex data-driven problems, specifically in medical imaging, where its techniques have significantly raised efficiency in organ segmentation. The urgent need to enhance the depth and precision of organ-based classification is an essential step towards automation of medical operation and diagnostics. The research aims to investigate the effect and potential advantages transformer models have on binary semantic segmentation, the method utilized for the project. Hence, I employed the SegFormer model, for its lightweight architecture, as the primary deep learning model, alongside the Unet. A custom 2D computerized tomography (CT) scan dataset, CT-Org2D, was assembled through meticulous operations. Extensive experiments showed that, in contrast to the selected models, the task's simplicity required a redesigned Unet architecture with reduced complexity. This model yielded impressive results: Precision, Recall, and IOU scores of 0.91, 0.92, and 0.85 respectively. The research serves as a starting point, motivating further exploration, through different methodologies, to achieve even greater efficiency in organ segmentation.
Category: Artificial Intelligence
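The three reported scores can be computed directly from boolean masks; a small NumPy sketch on toy masks follows (threshold selection and per-volume averaging, which a real evaluation needs, are omitted).

```python
import numpy as np

def precision_recall_iou(pred, target):
    """Binary segmentation metrics from same-sized boolean masks (illustrative)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return precision, recall, iou

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True   # predicted organ pixels
gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True       # ground-truth pixels
print(precision_recall_iou(pred, gt))
```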

[1379] viXra:2311.0079 [pdf] submitted on 2023-11-16 11:31:14

Diminishing Returns Observed from AI Music Models

Authors: Clifford Njoroge
Comments: 12 Pages. AI music

Music generation is a challenging task that requires capturing the complex and diverse aspects of musical structure and expression. In this paper, we investigate the factors that affect the quality of music generated by various AI models, such as MuseGAN, MuseGAN-Image and GPT3-Music [1]. We use different data encoding and processing techniques to create and evaluate music generation models based on generative adversarial networks (GANs) and transformers. We compare the advantages and disadvantages of each method in terms of harmonic, temporal and spatial aspects of music. We identify several challenges and drawbacks of the existing methods, such as harmonic loss, GAN overshooting, chord progression, octave representation, and framework compatibility. We also suggest some possible solutions and future directions for improving music generation with AI.
Category: Artificial Intelligence

[1378] viXra:2311.0067 [pdf] submitted on 2023-11-11 06:38:08

A Method for Recommending Consumption Bundles

Authors: Adarsh Senthil
Comments: 4 Pages.

To satiate the demand of a consumer, we can either provide the demanded consumption bundle or recommend similar consumption bundles the consumer may prefer. Similar consumption bundles that are under the budget and supply constraints can be recommended using item embeddings, consumer state embeddings and consumer indifference functions.
Category: Artificial Intelligence

[1377] viXra:2311.0051 [pdf] submitted on 2023-11-10 01:07:12

Ground State Spin Glassing Model Order Parameter and Machine Learning Perceptron

Authors: Akira Saito
Comments: 4 Pages. In Japanese (Note by viXra Admin: Please fill in author name in English)

We were able to express the order parameters of the spin glass model in the ground state using simultaneous equations. By a similar formula expansion, a formula equivalent to a machine learning perceptron can be obtained. The machine learning perceptron is an empirical form that is the result of trial and error, with no theoretical basis for its formulation. However, by deriving an equivalent formula through mathematical expansion of the spin glass model, we believe such a justification has now been established. In addition, we believe that formulating these simultaneous equations will advance the analysis of machine learning, potentially contributing to reduced learning costs and more accurate models, and to the further penetration of machine learning into various fields.
Category: Artificial Intelligence

[1376] viXra:2311.0024 [pdf] submitted on 2023-11-05 23:51:10

On the Use of the Variance of Logits in Tokenizing and Training

Authors: Adarsh Senthil
Comments: 2 Pages.

A tokenizer and loss function for Large Language Models based on the surmise that the meaningfulness of a sentence is related to the vanishing of the variance of logits at its end token. Code repository: https://github.com/Adsen14/LLM-Logit-Variance.
Category: Artificial Intelligence

[1375] viXra:2311.0021 [pdf] submitted on 2023-11-05 00:32:14

The First Artificial Intelligence Will Be the Last Artificial Intelligence

Authors: Dimiter Dobrev
Comments: 6 Pages. In Bulgarian

We are the generation that will create the first AI. We are the ones who will define the rules of this AI. These rules will be set now and forever, making our responsibility enormous. There will be no second AI because the first one will take control and not allow the creation of a second one. The first thing to be careful about is not to lose control over the first AI. Let's hope we're smart enough not to let that happen. Even if humans retain control over AI, the question is who exactly will those humans be? Will these people have the absolute power and be able to give the AI arbitrary orders or will there be some limitations built into the AI from its inception.
Category: Artificial Intelligence

[1374] viXra:2310.0150 [pdf] submitted on 2023-10-30 04:27:45

GraphAM- Graph Database-Integrated Active Memory for Generative Language Models

Authors: Donggyu Lee
Comments: 14 Pages.

This study presents an active memory algorithm that generates responses in generative language models using graph databases. The development of generative language models has picked up pace recently, and many commercial services are available. However, generative language models are limited by problems such as hallucination, low accuracy and reliability, and limitations in contextualizing and remembering. Developing pre-training datasets or fine-tuning the base model to address these problems is expensive and requires a lot of resources. Instead, well-designed prompts can be used to achieve the desired response, but this requires prompt engineers or training, as well as a thorough understanding of generative language models. All conversations are saved in a graph database to build a memory, and when a user asks a question, the system proactively identifies the information it needs and pulls it and its neighbors from the graph database for reference as it generates an answer to the question. This approach streamlines the generation of natural language that disentangles complex and interconnected information in the real world. The research shows that answering questions based on real-world information increases the efficiency and usability of generative language models in processing information and generating answers. In addition, the memory-assist algorithm converts various text datasets, not only conversations, into property graph models that can be updated in real time, and provides diverse and accurate information to the generative language model, enabling it to generate accurate responses from diverse information while reducing the size of the language model, thereby increasing efficiency and speed.
Category: Artificial Intelligence
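A toy version of the retrieval step (look up a matched node and pull its neighbours with edge labels for the prompt) can be sketched with networkx; a production system would use a property-graph database and an entity-matching step, both of which are glossed over here, and the stored facts below are invented.

```python
import networkx as nx

# Toy "active memory": past conversation facts stored as labelled edges.
G = nx.Graph()
G.add_edge("Alice", "Acme Corp", relation="works_at")
G.add_edge("Acme Corp", "Berlin", relation="located_in")
G.add_edge("Alice", "Python", relation="prefers")

def retrieve_context(entity):
    """Return (entity, relation, neighbour) triples for prompt building."""
    return [(entity, G.edges[entity, n]["relation"], n) for n in G.neighbors(entity)]

print(retrieve_context("Alice"))
# The triples would then be serialized into the language model's prompt.
```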

[1373] viXra:2310.0118 [pdf] submitted on 2023-10-24 02:48:10

Application of Deep and Reinforcement Learning to Boundary Control Problems

Authors: Zenin Easa Panthakkalakath, Juraj Kardoš, Olaf Schenk
Comments: 11 Pages.

The boundary control problem is a non-convex optimization and control problem in many scientific domains, including fluid mechanics, structural engineering, and heat transfer optimization. The aim is to find the optimal values for the domain boundaries such that the enclosed domain adhering to the governing equations attains the desired state values. Traditionally, non-linear optimization methods, such as the Interior-Point method (IPM), are used to solve such problems. This project explores the possibilities of using deep learning and reinforcement learning to solve boundary control problems. We adhere to the framework of iterative optimization strategies, employing a spatial neural network to construct well-informed initial guesses, and a spatio-temporal neural network learns the iterative optimization algorithm using policy gradients. Synthetic data, generated from the problems formulated in the literature, is used for training, testing and validation. The numerical experiments indicate that the proposed method can rival the speed and accuracy of existing solvers. In our preliminary results, the network attains costs lower than IPOPT, a state-of-the-art non-linear IPM, in 51% of cases. The overall number of floating point operations in the proposed method is similar to that of IPOPT. Additionally, the informed initial guess method and the learned momentum-like behaviour in the optimizer method are incorporated to avoid convergence to local minima.
Category: Artificial Intelligence

[1372] viXra:2310.0096 [pdf] submitted on 2023-10-21 03:56:45

Performance Evaluation of Machine Learning Algorithms for Intrusion Detection System

Authors: Sudhanshu Sekhar Tripathy, Bichitrananda Behera
Comments: 20 Pages. Please Publish My preprint article

The escalation of safety hazards and the hijacking of digital networks are among the most perilous difficulties that must be addressed in the present day. Numerous safety procedures have been set up to track and recognize any illicit activity on the network's infrastructure. IDSs are the best way to resist and recognize intrusions on internet connections and digital technologies. To classify network traffic as normal or anomalous, Machine Learning (ML) classifiers are increasingly utilized. An IDS with machine learning increases the accuracy with which security attacks are detected. This paper focuses on the analysis of intrusion detection systems (IDSs) using ML techniques. IDSs utilizing ML techniques are efficient and precise at identifying network assaults. In data with large dimensional spaces, however, the efficacy of these systems degrades. Correspondingly, it is essential to apply a feasible feature-removal technique capable of discarding characteristics that have little effect on the classification process. In this paper, we analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models. Then, we implement ML classifiers such as Logistic Regression, Decision Tree, K-Nearest Neighbour, Naïve Bayes, Bernoulli Naïve Bayes, Multinomial Naïve Bayes, XG-Boost, Ada-Boost, Random Forest, SVM, the Rocchio classifier, Ridge, the Passive-Aggressive classifier, and ANN besides Perceptron (PPN); the optimal classifiers are determined by comparing the results of Stochastic Gradient Descent and back-propagation neural networks for IDS. Conventional categorization indicators, such as accuracy, precision, recall, and the f1-measure, have been used to evaluate the performance of the ML classification algorithms.
Category: Artificial Intelligence
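The evaluation loop itself is straightforward; the sketch below runs two of the listed classifiers on synthetic data standing in for the preprocessed KDD CUP-'99 features and reports the same four indicators (the paper's preprocessing and full classifier list are not reproduced).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in for the preprocessed intrusion-detection features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    p = clf.predict(X_te)
    print(name,
          round(accuracy_score(y_te, p), 3), round(precision_score(y_te, p), 3),
          round(recall_score(y_te, p), 3), round(f1_score(y_te, p), 3))
```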

[1371] viXra:2310.0061 [pdf] submitted on 2023-10-12 05:46:24

Machine Learning Methods in Algorithmic Trading: An Experimental Evaluation of Supervised Learning Techniques for Stock Price

Authors: Mohammad Javad Maheronnaghsh, Mohammad Mahdi Gheidi, Abolfazl Younesi, Mohammadamin Fazli
Comments: 7 Pages.

In the dynamic world of financial markets, accurate price predictions are essential for informed decision-making. This research proposal outlines a comprehensive study aimed at forecasting stock and currency prices using state-of-the-art Machine Learning (ML) techniques. By delving into the intricacies of models such as Transformers, LSTM, Simple RNN, NHits, and NBeats, we seek to contribute to the realm of financial forecasting, offering valuable insights for investors, financial analysts, and researchers. This article provides an in-depth overview of our methodology, data collection process, model implementations, evaluation metrics, and potential applications of our research findings. The research indicates that NBeats and NHits models exhibit superior performance in financial forecasting tasks, especially with limited data, while Transformers require more data to reach full potential. Our findings offer insights into the strengths of different ML techniques for financial prediction, highlighting specialized models like NBeats and NHits as top performers, thus informing model selection for real-world applications.
Category: Artificial Intelligence

[1370] viXra:2310.0047 [pdf] submitted on 2023-10-10 21:49:36

Transforming Education Through AI, Benefits, Risks, and Ethical Considerations

Authors: Budee U. Zaman
Comments: 5 Pages.

The integration of Artificial Intelligence (AI) into education has the potential to revolutionize traditional teaching and learning methods. AI can offer personalized learning experiences, streamline administrative tasks, enhance feedback mechanisms, and provide robust data analysis. Numerous studies have demonstrated the positive impact of AI on both student outcomes and teacher efficiency. However, caution must be exercised when implementing AI in education, considering potential risks and ethical dilemmas. It is essential to use AI as a tool to support human educators rather than replace them entirely. The adoption of AI in education holds the promise of creating more inclusive and effective learning environments, catering to students of diverse backgrounds and abilities. As AI technology continues to advance, the education sector can anticipate even more innovative applications, further shaping the future of learning. This abstract provides an overview of the multifaceted landscape of AI in education, highlighting its potential benefits, associated challenges, and the importance of responsible integration.
Category: Artificial Intelligence

[1369] viXra:2310.0015 [pdf] submitted on 2023-10-04 22:21:52

Analysis of MPAI-MMC V2 Draft

Authors: Stephane H. Maes
Comments: 5 Pages.

This short paper provides a short list of comments in answer to the request for public comments on the MPAI-MMC (Multi-Modal Conversations) V2 draft. Our concerns can be grouped in terms of questions on business value, the architecture assumptions, the standardized artefacts, and the scope of the MMC use cases. Except for the latter, these comments can probably be read as applying to other drafts published by MPAI (Moving Picture, Audio and Data Coding by Artificial Intelligence) and to its on-going activities.
Category: Artificial Intelligence

[1368] viXra:2310.0006 [pdf] submitted on 2023-10-02 14:08:51

Lord Rama Devotees Algorithm: A New Human-Inspired Metaheuristic Optimization Algorithm

Authors: Satish Gajawada
Comments: 4 Pages.

Several Human-Inspired Metaheuristic Optimization Algorithms have been proposed in the literature, but the concept of Devotees-Inspired Metaheuristic Optimization Algorithms has not yet been explored. In this article, the Lord Rama Devotees Algorithm (LRDA) is proposed, a new Devotees-Inspired Metaheuristic Optimization Algorithm.
Category: Artificial Intelligence

[1367] viXra:2309.0149 [pdf] submitted on 2023-09-29 08:55:44

Hyperparameter Optimization and Interpretation in Machine Learning

Authors: Farid Soroush
Comments: 12 Pages.

Machine learning has undergone tremendous advancements, paving the way for a myriad of applications across industries. In the midst of this progress, the significance of hyperparameter tuning and model evaluation cannot be overstated, as they play a critical role in achieving optimal model performance. This project delves into the realm of ML model optimization and evaluation, harnessing Bayesian Optimization, SHAP (SHapley Additive exPlanations), and traditional evaluation metrics. By focusing on a decision tree classifier, the study investigates the efficiency of various hyperparameter tuning methods, the interpretability of model decisions, and the robustness of performance metrics. Preliminary results suggest that Bayesian Optimization may offer advantages in efficiency over traditional tuning methods. Furthermore, SHAP values provide deeper insights into model decision-making, fostering better transparency and trust in ML applications.
Category: Artificial Intelligence
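
A minimal sketch of the combination described above, assuming the scikit-optimize and shap packages are available; the search space and dataset are placeholders, not the study's configuration:

import shap
from skopt import BayesSearchCV
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bayesian hyperparameter search over a decision tree classifier.
search = BayesSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_depth": (2, 12), "min_samples_leaf": (1, 20),
     "criterion": ["gini", "entropy"]},
    n_iter=25, cv=5, random_state=0)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_,
      "test accuracy:", search.score(X_te, y_te))

# SHAP values explain individual predictions of the tuned tree.
explainer = shap.TreeExplainer(search.best_estimator_)
shap_values = explainer.shap_values(X_te)
print("SHAP values shape:", [v.shape for v in shap_values]
      if isinstance(shap_values, list) else shap_values.shape)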

[1366] viXra:2309.0107 [pdf] submitted on 2023-09-22 00:36:36

Anomalous Payload Detection System by the Combination of Sparse-Response Deep Belief Network and Support Vector Machine

Authors: Han Ok Chol, Hyon Hui Song, Pak Chol Ryong
Comments: 9 Pages.

This paper proposes a method to detect malicious network data effectively by combining a sparse-response deep belief network and a support vector machine. The Sparse-Response Deep Belief Network (SR-DBN) is an efficient unsupervised learning machine for learning a feature representation of the data without redundancy, while the Support Vector Machine is designed, in a supervised manner, to develop a classifier with high generalization ability in the feature space. In this paper, the feature representation of anomalous payloads is performed by the Sparse-Response Deep Belief Network (SR-DBN), while the classification of normal or abnormal payloads is performed by the Support Vector Machine. Simulations and experiments show that the proposed abnormal-network-detecting system achieves a higher detection rate than a multi-layer perceptron with a stacked auto-encoder.
Category: Artificial Intelligence
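
The SR-DBN itself is not a standard library component; as a rough analogue only (not the authors' SR-DBN), the sketch below chains scikit-learn's BernoulliRBM, a building block of deep belief networks, as an unsupervised feature learner in front of an SVM classifier, with synthetic features standing in for payload data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Placeholder payload features in place of real network payload bytes.
X, y = make_classification(n_samples=3000, n_features=64, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),                 # RBM expects values in [0, 1]
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05,
                         n_iter=20, random_state=0)),  # unsupervised features
    ("svm", SVC(kernel="rbf", C=1.0)),         # supervised decision boundary
])
model.fit(X_tr, y_tr)
print("detection accuracy:", model.score(X_te, y_te))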

[1365] viXra:2309.0087 [pdf] submitted on 2023-09-17 15:56:13

Red Teaming Generative AI/NLP, the BB84 Quantum Cryptography Protocol and the NIST-Approved Quantum-Resistant Cryptographic Algorithms

Authors: Petar Radanliev, David De Roure, Omar Santos
Comments: 30 Pages.

In the contemporary digital age, the convergence of Quantum Computing and Artificial Intelligence (AI) is reshaping the cyber landscape, introducing both unprecedented opportunities and potential vulnerabilities. This research, conducted over five years, delves into the cybersecurity implications of this convergence, with a particular focus on AI/Natural Language Processing (NLP) models and quantum cryptographic protocols, notably the BB84 method and specific NIST-approved algorithms. Utilising Python and C++ as primary computational tools, the study employs a "red teaming" approach, simulating potential cyber-attacks to assess the robustness of quantum security measures. Preliminary research over 12 months laid the groundwork, which this study seeks to expand upon, aiming to translate theoretical insights into actionable, real-world cybersecurity solutions. Located at the University of Oxford's technology precinct, the research benefits from state-of-the-art infrastructure and a rich collaborative environment. The study's overarching goal is to ensure that as the digital world transitions to quantum-enhanced operations, it remains resilient against AI-driven cyber threats. The research aims to foster a safer, quantum-ready digital future through iterative testing, feedback integration, and continuous improvement. The findings are intended for broad dissemination, ensuring that the knowledge benefits academia and the global community, emphasising the responsible and secure harnessing of quantum technology.
Category: Artificial Intelligence

[1364] viXra:2309.0076 [pdf] submitted on 2023-09-16 19:33:23

Prototype-based Feature Selection with the Nafes Package

Authors: Nana Abeka Otoo, Muhammad Abubakar
Comments: 6 Pages.

This paper introduces Nafes as a prototype-based feature selection package designed as a wrapper centered on the highly interpretable and powerful Generalized Matrix Learning Vector Quantization (GMLVQ) classification algorithm and its local variant (LGMLVQ). Nafes utilizes the learned relevances evaluated by the mutation validation scheme for Learning Vector Quantization (LVQ), which iteratively converges to selected features that relevantly contribute to the prototype-based classifier decisions.
Category: Artificial Intelligence

[1363] viXra:2309.0063 [pdf] submitted on 2023-09-12 04:24:58

Tumor Angiogenic Optimizer: a New Bio-Inspired Based Metaheuristic

Authors: Hernández Rodríguez, Matías Ezequiel
Comments: 10 pages, 2 figures

In this article, we propose a new metaheuristic inspired by the morphogenetic cellular movements of endothelial cells (ECs) that occur during the tumor angiogenesis process. The algorithm starts with a random initial population. In each iteration, the best candidate is selected as the tumor, while the other individuals in the population are treated as ECs migrating toward the tumor, following coordinated dynamics through a spatial relationship between tip and follower ECs. The mathematical model of EC movements in angiogenic morphogenesis is detailed in the article. This algorithm has an advantage compared to other similar optimization metaheuristics: the model parameters are already configured according to the modeling of the tumor angiogenesis phenomenon, preventing researchers from having to initialize them with arbitrary values. Subsequently, the algorithm is compared against well-known benchmark functions, and the results are validated through a comparative study with Particle Swarm Optimization (PSO). The results demonstrate that the algorithm is capable of providing highly competitive outcomes. The proposed algorithm is also applied to a real-world problem. The results showed that the proposed algorithm performed effectively in solving constrained optimization problems, surpassing other known algorithms.
Category: Artificial Intelligence
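
A toy reading of the dynamics sketched in the abstract (the best individual acts as the "tumor" and the remaining individuals migrate toward it with decaying random exploration); the step sizes below are illustrative choices, not the authors' calibrated parameters:

import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)           # benchmark objective

rng = np.random.default_rng(0)
dim, pop_size, iters = 10, 30, 200
pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))

for t in range(iters):
    fitness = sphere(pop)
    best = np.argmin(fitness)
    tumor = pop[best].copy()                  # best candidate acts as the "tumor"
    step = 0.3 * (tumor - pop)                # ECs migrate toward the tumor
    noise = 0.1 * (1.0 - t / iters) * rng.normal(size=pop.shape)
    pop = pop + step + noise                  # coordinated move plus exploration
    pop[best] = tumor                         # keep the best solution (elitism)

print("best value found:", sphere(pop).min())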

[1362] viXra:2308.0179 [pdf] submitted on 2023-08-26 23:39:54

"LAHEL": An AI-Generated Content Approached LAwHELper to Personal Legal Advice

Authors: Yisu Wang, Nanxi Hou, Kaiyuan Xu, Zepu Ni, Guofeng Wu
Comments: 7 Pages.

In certain developing countries, public awareness of legal rights is increasing, leading to a growing demand for legal consultation. However, the time and monetary costs associated with consulting professional lawyers remain high. Concurrently, there are two major impacts of computer science on the current legal sector. First, within government and public prosecution systems, information systems have accumulated vast amounts of structured and semi-structured data, offering significant economic value and potential for exploration. However, few people have attempted to mine these data resources. Second, intelligent dialogue systems have matured, but dialogue systems specifically tailored for the legal domain have not yet emerged. Considering these two trends, we introduce LAHEL, a legal consultation system developed by a team of nine individuals over the course of two years, dedicated to addressing the aforementioned issues. The system comprises three components: search, human dialogue systems, and robot dialogue systems. Its primary contributions are twofold: exploring the application of AI in legal consultation and summarizing lessons learned from the design of legal consultation systems.
Category: Artificial Intelligence

[1361] viXra:2308.0137 [pdf] submitted on 2023-08-21 19:43:51

Can Artificial Intelligence be Conscious?

Authors: Victor Senkevich
Comments: 14 Pages.

All magic and mystery disappear as soon as an obscure, mysterious concept gets a rigorous formal definition. In order to provide an opportunity to talk about the applicability of philosophical and cognitive concepts to the subject area of AI, it is necessary to "ground" these concepts by formulating rigorous formal definitions for them. The fundamental importance of such formal definitions is quite obvious, since any concepts applied to the field of Information Technology must be "codable", i.e. potentially implementable in program code. Thus, "codable" formal definitions of cognitive terms are the necessary basis on which alone it is possible to build the architecture of an AI technology able to embody these concepts in real software. The question of the adequacy of such definitions to reality and their compliance with existing, generally accepted philosophical theories is also very important and quite debatable, but this does not affect the priority and fundamental nature of the requirement to formulate "codable" formal definitions. The formulation of "codable" definitions for the concept of "consciousness" and related cognitive concepts and, based on them, statements about their applicability to the subject area of AI is the topic of this publication.
Category: Artificial Intelligence

[1360] viXra:2308.0116 [pdf] submitted on 2023-08-17 22:53:23

An ADMM Algorithm for a Generic L0 Sparse Overlapping Group Lasso Problem

Authors: Youming Zhao
Comments: 10 Pages.

We present an alternating direction method of multipliers (ADMM) for a generic overlapping group lasso problem, where the groups can be overlapping in an arbitrary way. Meanwhile, we prove the lower bounds and upper bounds for both the $\ell_1$ sparse group lasso problem and the $\ell_0$ sparse group lasso problem. Also, we propose the algorithms for computing these bounds.
Category: Artificial Intelligence
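
For readers unfamiliar with the technique, the following numpy sketch shows ADMM for the standard non-overlapping group lasso (ridge-type x-update, block soft-thresholding z-update, dual ascent); the paper's generic overlapping formulation is more involved and is not reproduced here:

import numpy as np

def block_soft(v, t):
    # Proximal operator of t * ||v||_2 (block soft-thresholding).
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso_admm(A, b, groups, lam=1.0, rho=1.0, iters=200):
    # Solve min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 for disjoint groups.
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))        # factor once, reuse
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # ridge-type x-update
        for g in groups:                                     # z-update per group
            z[g] = block_soft(x[g] + u[g], lam / rho)
        u = u + x - z                                        # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 12))
x_true = np.zeros(12)
x_true[0:3] = [1.0, -2.0, 1.5]                               # one active group
b = A @ x_true + 0.01 * rng.normal(size=100)
groups = [slice(0, 3), slice(3, 6), slice(6, 9), slice(9, 12)]
print(np.round(group_lasso_admm(A, b, groups, lam=0.5), 2))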

[1359] viXra:2308.0112 [pdf] submitted on 2023-08-17 22:48:28

Mutation Validation for Learning Vector Quantization

Authors: Nana Abeka Otoo
Comments: 12 Pages.

Mutation validation as a complement to existing applied machine learning validation schemes has been explored in recent times. Exploratory work for Learning Vector Quantization (LVQ) based on this model-validation scheme has yet to be carried out. This paper proposes mutation validation as an extension to existing cross-validation and holdout schemes for Generalized LVQ and its advanced variants. The mutation validation scheme provides a responsive, interpretable, intuitive and easily comprehensible score that complements existing validation schemes employed in the performance evaluation of the prototype-based LVQ family of classification algorithms. This paper establishes a relation between the mutation validation scheme and the goodness-of-fit evaluation for four LVQ models: Generalized LVQ, Generalized Matrix LVQ, Generalized Tangent LVQ and Robust Soft LVQ. Numerical evaluation regarding these models' complexity and effects on test outcomes pitches the mutation validation scheme above cross-validation and holdout schemes.
Category: Artificial Intelligence

[1358] viXra:2308.0077 [pdf] submitted on 2023-08-12 12:07:43

Using Machine Learning to Classify and Localize Stellar Objects

Authors: Ahmed Taha Hassina
Comments: 10 Pages.

Mapping the universe has always been a salient endeavor in astronomy and astrophysics. Advancements in observational astronomy have generated vast amounts of data containing various features of celestial objects, inducing a growing need for accurate and detailed classification and localization of stellar objects in the cosmos. In this paper, we present a comprehensive study that combines machine learning techniques to classify celestial objects into distinct categories and predict their precise locations in the sky. This study is divided into two parts: a classification task, where the stellar objects are classified into galaxies, stars, or quasars (quasi-stellar radio sources). The resulting model exhibits exceptional performance in differentiating these objects, as demonstrated by high classification accuracy. We extend our analysis to predict the location of stellar objects using regression techniques. By employing multi-target regression, we model the right ascension and declination coordinates, enabling accurate localization of celestial objects on the celestial sphere. The practical implications of our research lie in producing comprehensive celestial catalogs, facilitating targeted observations, and contributing to the broader field of observational astronomy. The ability to accurately classify and localize stellar objects lays the groundwork for mapping the cosmos and advancing our understanding of the universe's intricate structure.
Category: Artificial Intelligence
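
A compact sketch of the two-part design described above, using synthetic placeholder features instead of real survey photometry: a classifier for galaxy/star/quasar labels and a multi-output regressor for the right ascension and declination targets:

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 8))                        # placeholder photometric features
labels = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)
# labels in {0, 1, 2} stand in for galaxy / star / quasar
coords = np.column_stack([(X[:, 3] * 30.0) % 360.0,            # toy right ascension
                          np.clip(X[:, 4] * 20.0, -90, 90)])   # toy declination

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, labels, coords, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("class accuracy:", accuracy_score(y_te, clf.predict(X_te)))

reg = MultiOutputRegressor(RandomForestRegressor(n_estimators=200, random_state=0))
reg.fit(X_tr, c_tr)
print("RA/Dec MAE (deg):", mean_absolute_error(c_te, reg.predict(X_te)))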

[1357] viXra:2308.0075 [pdf] submitted on 2023-08-12 13:44:31

Improved Memory-guided Normality with Specialized Training Techniques of Deep SVDD

Authors: Xie Lei
Comments: 2 Pages.

Deep learning techniques have shown remarkable success in various tasks, including feature learning, representation learning, and data reconstruction. Autoencoders, a subset of neural networks, are particularly powerful in capturing data patterns and generating meaningful representations. This paper presents an investigation into the use of Deep SVDD in combination with memory modules.
Category: Artificial Intelligence

[1356] viXra:2308.0062 [pdf] submitted on 2023-08-11 16:35:06

Twenty Second Century Artificial Intelligence

Authors: Satish Gajawada, Hassan Mustafa
Comments: 61 Pages.

Preface: In the 20th and 21st Centuries, global optimization algorithms were created by taking inspiration from birds (Particle Swarm Optimization), ants (Ant Colony Optimization), chromosomes (Genetic Algorithms), etc. In the "Twenty Second Century Artificial Intelligence" book, global optimization algorithms are created by taking inspiration from Humans, Souls, Gods, Satisfied Beings, Mothers, Children, Particular Human Beings and Stories. In the 20th and 21st Centuries, research scientists focused mainly on Brain-Inspired Computing. In the "Twenty Second Century Artificial Intelligence" book, a new path is shown where algorithms are created by taking inspiration from both heart and brain. In the 20th and 21st Centuries, the path of "Artificial Intelligence" is the main focus of research. In the "Twenty Second Century Artificial Intelligence" book we defined "Artificial Satisfaction". In the 20th and 21st Centuries, researchers created many algorithms by taking inspiration from Nature (Nature-Inspired Computing). In the "Twenty Second Century Artificial Intelligence" book we created "Nature Plus Plus Inspired Computing". Abstract: The book defines various new paths as nine different chapters. The first, second and third chapters deal with "Artificial Human Optimization", "Artificial Soul Optimization" and "Artificial God Optimization" respectively. Three new branches titled "Artificial Satisfaction", "Deep Loving" and "Nature Plus Plus Inspired Computing" are shown in the fourth, fifth and sixth chapters respectively. The seventh chapter describes "Artificial Heart Neural Networks", where algorithms are created by taking inspiration from both Heart and Brain. Two new branches, "Artificial Excellence" and "Stories Inspired Optimization Algorithms", are created in the last two chapters of this book.
Category: Artificial Intelligence

[1355] viXra:2308.0061 [pdf] submitted on 2023-08-11 16:41:42

Stories Inspired Optimization Algorithms - The Breakthrough in Artificial Intelligence

Authors: Satish Gajawada
Comments: 2 Pages.

The primary purpose of writing this letter is to invent and define a new area called "Stories Inspired Optimization Algorithms (SIOA)".
Category: Artificial Intelligence

[1354] viXra:2308.0048 [pdf] submitted on 2023-08-10 00:02:53

Humans or Artificial Intelligence: Who Will Rule the World?

Authors: Vitaly Pilkin
Comments: 11 Pages.

To understand the degree of danger of AI for human civilization and the existence of humanity as a whole is possible only through understanding the Universe, the place of humans in the Universe and understanding the nature of thinking, consciousness and mentality.
Category: Artificial Intelligence

[1353] viXra:2307.0146 [pdf] submitted on 2023-07-27 14:20:08

Structural Embeddings of Tools for Large Language Models

Authors: Eren Unlu
Comments: 5 Pages.

It is evident that the current state of Large Language Models (LLMs) necessitates the incorporation of external tools. The lack of straightforward algebraic and logical reasoning is well documented and prompted researchers to develop frameworks which allow LLMs to operate via external tools. The ontological nature of tool utilization for a specific task can be well formulated with a Directed Acyclic Graph (DAG). The central aim of the paper is to highlight the importance of graph based approaches to LLM-tool interaction in near future. We propose an exemplary framework to guide the orchestration of exponentially increasing numbers of external tools with LLMs, where objectives and functionalities of tools are graph encoded hierarchically. Assuming that textual segments of a Chain-of-Thought (CoT) can be imagined as a tool as defined here, the graph based framework can pave new avenues in that particular direction as well.
Category: Artificial Intelligence

[1352] viXra:2307.0121 [pdf] submitted on 2023-07-23 13:40:39

Training Self-Supervised Class-Conditional Gan by Adjusting Categorical Latent Distribution

Authors: Jeongik Cho
Comments: 11 Pages.

Class-conditional GAN is a conditional GAN that can generate class-conditional distribution. Among class-conditional GANs, InfoGAN with categorical latent distribution can generate class-conditional data through a self-supervised (unsupervised) method without labeled data. Instead, InfoGAN requires optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering. The proposed method uses Bayesian inference to estimate optimal categorical latent distribution from the classifier output distribution. In the proposed method, based on the classifier output distribution of the fake data and the current categorical latent distribution, the categorical latent distribution is updated to fit the classifier output distribution of the real data. As training progresses, the entropy of the categorical latent distribution gradually decreases and converges to the appropriate value. The approximated categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, optimal categorical latent distribution, and a good metric to calculate the distance between data. Also, a classifier used in training can be used for clustering.
Category: Artificial Intelligence

[1351] viXra:2307.0097 [pdf] submitted on 2023-07-19 03:24:07

Generative Pre-Trained Transformers, Natural Language Processing and Artificial Intelligence and Machine Learning (Ai/ml) in Software Vulnerability Management: Automations in the Software Bill of Materials (Sbom) and the Vulnerability-Exploitability Excha

Authors: Petar Radanliev, David De Roure, Omar Santos
Comments: 7 Pages.

One of the most burning topics in cybersecurity in 2023 will undoubtedly be compliance with the Software Bill of Materials. Since the US president issued Executive Order 14028 on Improving the Nation's Cybersecurity, software developers have prepared bills and transmitted them to vendors, customers, and users, but the recipients don't know what to do with the reports they are getting. In addition, since software developers have identified the value of the Software Bill of Materials, they have been using the reports extensively. This article presents an estimate of 270 million requests per month, just from one popular tool to one vulnerability index. This number is expected to double every year and a half. This simple estimate explains the urgency of automating the process. We propose solutions based on artificial intelligence and machine learning, and we base our tools on the existing FAIR principles (Findable, Accessible, Interoperable, and Reusable). This methodology is supported by case study research and Grounded Theory, for categorising data into axes and for verifying the value of the tools with experts in the field. We showcase how to create and share Vulnerability Exploitability eXchange data, and how to automate the Software Bill of Materials compliance process with AI models and a unified computational framework combining solutions for the following problems: (1) the data utilisation problem, (2) the automation and scaling problem, (3) the naming problem, (4) the alignment problem, (5) the pedigree and provenance problem, and many other problems that are top of mind for many security engineers at present. The uptake of these findings will depend on collaborations with government and industry, and on the availability and ease of use of automated tools.
Category: Artificial Intelligence

[1350] viXra:2307.0091 [pdf] submitted on 2023-07-17 07:14:00

Efficient Data Storage and Machine Learning

Authors: Mirzakhmet Syzdykov
Comments: 2 Pages.

In this work we present to the reader novel research on the efficiency of compression algorithms such as Lempel-Ziv-Welch and Aho-Corasick trees. We use them to build a proper storage structure, called a file system, over a separate or generalized stream of data. Such streams were not previously adapted for big data to be compressed and queried at a fast pace. We will show further that this is an efficient model for storing arrays of data on the server end for a final file system. An efficient algorithm for machine learning on Aho-Corasick trees is also presented, which performs queries in linear time without incurring the extra time of models like neural networks, which are very hardware-demanding nowadays. Data structures like the trie studied by Turing Award winner Alfred V. Aho and by Margaret J. Corasick retain great potential at the present time and are subjected to extensive research in this work.
Category: Artificial Intelligence
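
To make the compression component concrete, here is a textbook LZW encoder/decoder over bytes; it is an illustration of the named algorithm, not the storage system proposed in the entry:

def lzw_compress(data):
    # Textbook LZW: grow a phrase dictionary, emit integer codes.
    table = {bytes([i]): i for i in range(256)}
    phrase, out = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate
        else:
            out.append(table[phrase])
            table[candidate] = len(table)     # register the new phrase
            phrase = bytes([byte])
    if phrase:
        out.append(table[phrase])
    return out

def lzw_decompress(codes):
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = table[code] if code in table else prev + prev[:1]
        out.append(entry)
        table[len(table)] = prev + entry[:1]  # mirror the encoder's dictionary
        prev = entry
    return b"".join(out)

sample = b"abracadabra abracadabra abracadabra"
codes = lzw_compress(sample)
assert lzw_decompress(codes) == sample
print(len(sample), "bytes ->", len(codes), "codes")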

[1349] viXra:2307.0087 [pdf] submitted on 2023-07-17 15:07:47

Artificial Intelligence for Complexity Theory

Authors: Mirzakhmet Syzdykov
Comments: 2 Pages.

In this continued series of work, we present theoretical and practical results towards reasoning with modern methods of Artificial Intelligence (AI). We justify our methodology with the help of illustrative examples from Computer Science relying on the regular expression matching algorithm and the application of the proposed solution to the task of identifying file consistency according to an unknown format. We will also give several notable proofs of classical theorems which in some sense are coherent with terms like AI and algorithmic complexity; however, nowadays they are solved using a huge amount of hardware resources and together constitute a new formation in the modern age with the help of specifically crafted hardware modules. We are still about to represent the model in a more classical understanding from the point of view of computational complexity, concise reasoning and computer logic within the classical models, theorems and proofs as the base approach for estimating the costs needed to build Artificial Neural Networks (ANN) or Machine Learning (ML) data
Category: Artificial Intelligence

[1348] viXra:2307.0024 [pdf] submitted on 2023-07-05 18:22:52

Fine-Tuning a BERT Model for Email Classification: Leveraging Personal Gmail Inbox

Authors: Rafael Costa da Silva
Comments: 8 Pages.

This study aims to develop an effective model for classifying emails as wanted or unwanted using fine-tuned BERT models. The process involved downloading the Gmail inbox through Google Takeout and converting the data to Parquet format. A frequency distribution analysis of the "From" addresses was conducted, and the emails were manually classified. A final dataset was created with email subject, classification, and binary labels. The bert-base-multilingual-cased model was fine-tuned using about 10,000 observations for each category. The resulting models achieved an accuracy of 0.9429411764705883. The models are publicly available in Hugging Face's model repository.
Category: Artificial Intelligence
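
A minimal fine-tuning sketch along the lines described above, assuming the Hugging Face transformers and datasets packages; the four toy examples stand in for the Gmail-derived subject/label dataset:

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy stand-in for the subject/label dataset built from a Gmail export.
data = Dataset.from_dict({
    "text": ["Invoice for your recent order", "WIN A FREE CRUISE NOW",
             "Meeting notes for Tuesday", "Limited offer, click here"],
    "label": [0, 1, 0, 1],        # 0 = wanted, 1 = unwanted
})

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=64)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="email-clf", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()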

[1347] viXra:2307.0006 [pdf] submitted on 2023-07-02 22:26:43

Comparative Analysis for Predicting Shelf life of Fruits Using Advanced Deep Learning Approaches

Authors: Sanath Shenoy, Radhika Mishra, Ruchi Chaturvedi, Krushnakant Bhagwat
Comments: 7 Pages.

The food industry aims to reduce food waste and ensure the delivery of fresh produce to consumers, making it crucial to predict fruit shelf life accurately. Traditional approaches rely on expensive and time-consuming laboratory testing, which often involves destructive methods. However, recent studies suggested that advanced deep learning techniques can predict fruit shelf life accurately and efficiently. This paper presents a novel approach to predicting fruit shelf life using deep learning models. The study focuses on the application of these advanced techniques to forecast the shelf life of bananas, which can contribute significantly to achieving the food industry's objective. The study tries to develop accurate and efficient models that could predict the maturity of bananas, based on their average shelf life and appearance. In order to achieve this objective, two object detection algorithms, Faster R-CNN and You Only Look Once (YOLO), are used and their performance is compared in the present research. The dataset has been created by collecting images of the life cycle of bananas and segregating them based on their maturity. Various preprocessing and augmentation techniques have been applied to enhance the features of the training dataset, which is useful to get better accuracy. The algorithms were trained on the Cavendish banana family dataset and were able to predict the shelf life of bananas with good training accuracy. The YOLO algorithm, known for its efficiency, is compared with Faster R-CNN, well known for identifying very fine features. This study demonstrates the potential of deep learning algorithms in predicting the shelf life of bananas and can be extended to different fruits.
Category: Artificial Intelligence

[1346] viXra:2306.0168 [pdf] submitted on 2023-06-30 16:21:18

On Monitorability of AI

Authors: Roman V. Yampolskiy
Comments: 30 Pages.

Artificially Intelligent (AI) systems have ushered in a transformative era across various domains, yet their inherent traits of unpredictability, unexplainability, and uncontrollability have given rise to concerns surrounding AI safety. This paper aims to demonstrate the infeasibility of accurately monitoring advanced AI systems to predict the emergence of certain capabilities prior to their manifestation. Through an analysis of the intricacies of AI systems, the boundaries of human comprehension, and the elusive nature of emergent behaviors, we argue for the impossibility of reliably foreseeing some capabilities. By investigating these impossibility results, we shed light on their potential implications for AI safety research and propose potential strategies to overcome these limitations.
Category: Artificial Intelligence

[1345] viXra:2306.0099 [pdf] submitted on 2023-06-17 01:24:43

Boolean Structured Autoencoder Convolutional Deep Learning Network (BSautoconvnet)

Authors: Sing Kuang Tan
Comments: 11 Pages.

In this paper, I propose a new Boolean Structured Autoencoder Convolutional Deep Learning Network (BSautoconvnet), built on top of BSconvnet and based on the concept of monotone multi-layer Boolean algebra. I have shown that this network achieves a significant improvement in accuracy over an ordinary ReLU autoencoder convolutional deep learning network with a much smaller number of parameters on the CIFAR10 dataset. The model is evaluated by visual inspection of the quality of the reconstructed images against the ground truth and against images reconstructed by models found on the internet.
Category: Artificial Intelligence

[1344] viXra:2306.0055 [pdf] submitted on 2023-06-12 02:41:42

Introducing Proteus: a Mega Prompt with Personality, Skills and Dynamic Logic Based Internal Prompt Manipulation

Authors: Shaun Stoltz
Comments: 10 Pages.

There have been significant improvements in directing large language models (LLMs) to answer logic-based questions such as mathematical reasoning tasks. This has resulted in near-perfect performance on these types of problems, with accuracy levels in the mid-ninety percentile using state-of-the-art models (GPT-4). Achieving this level of accuracy has previously required a multi-prompt approach to elicit better performance from LLMs. This paper introduces a new prompt paradigm termed "Mega prompt" and further introduces Proteus, a state-of-the-art mega prompt that has been used to achieve a new level of accuracy of 97% on the GSM8K math data set.
Category: Artificial Intelligence

[1343] viXra:2306.0052 [pdf] submitted on 2023-06-10 12:16:23

Competences in Ontology-based Enterprise Architecture Modeling: Zooming In and Out

Authors: Rodrigo F. Calhau, João Paulo A. Almeida, Giancarlo Guizzardi
Comments: 27 Pages. Preprint submitted to the International Journal on Software and Systems Modeling (SoSyM), Trends in Enterprise Architecture Management Research

Competence-based approaches have received increased attention, as the demand for qualified people with the right combination of competences establishes itself as a major factor of organizational performance. This paper examines how competences can be incorporated into Enterprise Architecture modeling: (i) we identify a key set of competence-related concepts such as skills, knowledge, and attitudes, (ii) analyze and relate them using a reference ontology (grounded on the Unified Foundational Ontology), and (iii) propose a representation strategy for modeling competences and their constituent elements leveraging the ArchiMate language, discussing how the proposed models can fit in enterprise competence-based practices. Our approach is intended to cover two tasks relevant to the combined application of Enterprise Architecture and Competence Modeling: `zooming in' on competences, revealing the relations between competences, knowledge, skills, attitudes and other personal characteristics that matter in organizational performance, and `zooming out' of competences, placing them in the wider context of other personal competences and overall organizational capabilities.
Category: Artificial Intelligence

[1342] viXra:2306.0037 [pdf] submitted on 2023-06-09 01:04:04

EEG Emotion Classification Using 3-Dimensional Convolutional Neural Networks

Authors: Maksym Oleksandrovich Stavratii
Comments: 7 Pages.

Classification of electroencephalography (EEG) signals has important applications in the diagnosis and treatment of various neurological disorders. In this paper, we propose a methodology for classifying EEG signals based on signal processing using the wavelet transform and the superlet transform. The wavelet transform is used to decompose the EEG signal into frequency components, which are then used as features for classification. The proposed approach is evaluated using the publicly available "GAMEEMO" EEG dataset, which has been annotated with valence and emotional arousal. We use a Convolutional Neural Network (CNN) for classification at the waveform level. The results of this study suggest that the wavelet transform and its modifications, such as the superlet transform, can be valuable tools for analyzing and classifying EEG signals.
Category: Artificial Intelligence
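
A small sketch of the wavelet-based preprocessing step, assuming the PyWavelets package: a continuous wavelet transform turns a (here synthetic) EEG channel into a scalogram image that a CNN can take as input:

import numpy as np
import pywt

fs = 128                                    # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)                 # 4 s of synthetic "EEG"
signal = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t)
          + 0.2 * np.random.randn(t.size))

# Continuous wavelet transform with a Morlet wavelet: rows are scales
# (frequencies), columns are time, i.e. a scalogram image for a CNN.
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)

print("scalogram shape (scales x time):", scalogram.shape)
print("frequency range covered: %.1f - %.1f Hz" % (freqs.min(), freqs.max()))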

[1341] viXra:2305.0166 [pdf] submitted on 2023-05-29 01:43:25

Boolean Structured Convolutional Deep Learning Network (BSconvnet)

Authors: Sing Kuang Tan
Comments: 10 Pages.

In this paper, I propose a new Boolean Structured Convolutional Deep Learning Network (BSconvnet), built on top of BSnet and based on the concept of monotone multi-layer Boolean algebra. I have shown that this network achieves a significant improvement in accuracy over an ordinary ReLU convolutional deep learning network with a much smaller number of parameters on the CIFAR10 dataset.
Category: Artificial Intelligence

[1340] viXra:2305.0104 [pdf] submitted on 2023-05-14 03:26:39

Detection of Abnormalities in Blood Cells Using a Region-based Segmentation Approach and Supervised Machine Learning Algorithm

Authors: Nagueu Djambong Lionel Perin, Waku Kouomou Jules, Hippolyte Kenfack Tapamo, Jimbo H. Claver
Comments: 11 Pages.

Screening (the slide reading stage) is a manual human activity in cytology which consists of the inspection or analysis by the cytotechnician of all the cells present on a slide. Segmentation of blood cells is an important research question in hematology and other related fields. Since this activity is human-based, detection of abnormal cells becomes difficult. Nowadays, medical image processing has become a very important discipline for computer-aided diagnosis, in which many methods are applied to solve real problems. Our research work is in the field of computer-assisted diagnosis on blood images for the detection of abnormal cells. To this end, we propose a hybrid segmentation method to extract the correct shape of the nuclei, from which we extract features and classify them using SVM and KNN binary classifiers. In order to evaluate the performance of the hybrid segmentation and the choice of the classification model, we carried out a comparative study between our hybrid segmentation method followed by our SVM classification model and a segmentation method based on global thresholding followed by a KNN classification model. From the experiments carried out on the 62 blood smear images, it appears that the SVM binary classification model gives an accuracy of 97% with the hybrid segmentation, against 57% with global thresholding, and 95% for the KNN classification model. As our dataset was not balanced, we evaluated precision, recall, F1 score and cross-validation with the Stratified K-Fold cross-validation algorithm for each of these segmentation methods and classification models. We obtain respectively 93.75%, 98.712% and 99% for the hybrid segmentation, reflecting its effectiveness compared to global fixed-threshold segmentation and the KNN classification model. To evaluate the performance of these models we obtained the following results: 77% mean accuracy for the SVM and 61% mean accuracy for the KNN, and 84% mean test accuracy for the SVM and 74% mean test accuracy for the KNN, making the SVM model the best performing.
Category: Artificial Intelligence

[1339] viXra:2305.0074 [pdf] submitted on 2023-05-09 01:25:57

Investigating the Efficacy of the Natural Language Processing AI: ChatGPT in Emotion Recognition and Psychological Intervention

Authors: Bryce Petofi Towne
Comments: 10 Pages.

This registered report aims to compare the emotion recognition accuracy and effectiveness of psychological interventions provided by ChatGPT, an artificial intelligence (AI) language model, and human mental health professionals. The study employs a mixed-methods approach, incorporating quantitative and qualitative methodologies. Participants will be assessed on emotion recognition tasks, and a randomized controlled trial (RCT) will be conducted to compare the effectiveness of psychological interventions provided by ChatGPT and human professionals. Additionally, semi-structured interviews will be conducted to explore participants' experiences with ChatGPT and human-guided interventions. This comprehensive study design aims to provide valuable insights into the potential of AI in the field of mental health and to identify areas where improvements can be made to optimize AI-guided psychological interventions. Key words: emotion recognition, natural language processing, mental health, psychological interventions, ChatGPT, human mental health professionals.
Category: Artificial Intelligence

[1338] viXra:2305.0064 [pdf] submitted on 2023-05-07 17:19:19

Causation and Correlation

Authors: Ait-Taleb Nabil
Comments: 14 Pages.

In this paper, I will introduce the causation magnitude, which allows computing the importance of causes in a cause-and-effect relationship from the correlation matrix.
Category: Artificial Intelligence

[1337] viXra:2305.0055 [pdf] submitted on 2023-05-05 10:35:57

TrueGPT: An AI Model Designed for Empowering Actions

Authors: Dodonov Anton
Comments: 5 Pages.

TrueGPT is a novel artificial intelligence model that emphasizes actionable solutions and user empowerment. It is trained on a curated dataset that eliminates expressions of uncertainty, focusing instead on delivering output that promotes agency and decisiveness. With the ability to produce output in the flexible and interactive RoboScript format, TrueGPT encourages dynamic interactions and a broader range of AI-assisted use cases. The model is designed to seamlessly integrate with various applications and systems, such as RoboGPT, offering enhanced functionality. Its flexible API allows for diverse applications, from daily tasks to specialized use cases. At its core, TrueGPT's mission is to empower users, aiding them in their productivity and assisting them in achieving their goals through actionable guidance. This paper presents the design, functionality, and features of TrueGPT, illustrating its potential as a powerful tool for a new era of AI assistance.
Category: Artificial Intelligence

[1336] viXra:2305.0050 [pdf] submitted on 2023-05-05 19:12:39

The Emperor with no Clothes: Chomsky Against Chatgpt

Authors: Gennady Shkliarevsky
Comments: 41 Pages.

Artificial Intelligence (AI) is all the rage these days. The coming to grips with this new development is now in full swing. The main questions that we seek to answer in relation to AI pivot on one fundamental problem: Can we create AI that will match human intelligence? This contribution addresses this question. It centers on the recent article published by Noam Chomsky and his two co-authors. After a brief overview of the development of AI and its capabilities, the article presents the perspective on AI presented by Chomsky and his colleagues. It also offers a criticism of this perspective. The last sections of the contribution discuss the relationship between humans and machines. They outline the parameters that AI should satisfy to achieve the professed objective of its creators. Most importantly, the article argues, AI should embody the process of creation that can only be possible if we embrace this process and make it the central organizing principle of our theory and practice.
Category: Artificial Intelligence

[1335] viXra:2305.0037 [pdf] submitted on 2023-05-04 22:20:51

RoboGPT: Harnessing the Power of the Internet for Advanced AI-driven Problem Solving, Goal Achievement, and Human Communication

Authors: Dodonov Anton
Comments: 3 Pages.

RoboGPT is a cutting-edge AI model that leverages the power of the internet to enhance interactions, problem-solving, and communication with users. In this paper, we present the unique features of RoboGPT, its underlying cognitive mechanisms, and various applications and use cases. RoboGPT builds upon the foundations of ChatGPT, offering advanced capabilities such as active internet engagement, web-based search, and goal-oriented task execution. We discuss the innovations that RoboGPT brings to the field of artificial intelligence and explore how it can be effectively applied to a wide range of real-world tasks and human communication scenarios.
Category: Artificial Intelligence

[1334] viXra:2305.0006 [pdf] submitted on 2023-05-01 07:29:15

Bio-Inspired Simple Neural Network for Low-Light Image Restoration: A Minimalist Approach

Authors: Junjie Ye, Jilin Zhao
Comments: 6 Pages.

In this study, we explore the potential of using a straightforward neural network inspired by the retina model to efficiently restore low-light images. The retina model imitates the neurophysiological principles and dynamics of various optical neurons. Our proposed neural network model reduces the computational overhead compared to traditional signal-processing models while achieving results similar to complex deep learning models from a subjective perceptual perspective. By directly simulating retinal neuron functionalities with neural networks, we not only avoid manual parameter optimization but also lay the groundwork for constructing artificial versions of specific neurobiological organizations.
Category: Artificial Intelligence

[1333] viXra:2304.0215 [pdf] submitted on 2023-04-26 06:09:28

Ten Artificial Human Optimization Algorithms

Authors: Satish Gajawada, Hassan Mustafa
Comments: 18 Pages.

The term "Artificial Human Optimization" was first coined by the corresponding author of this work in December 2016 when he published a paper titled "Entrepreneur : Artificial Human Optimization" at Transactions on Machine Learning and Artificial Intelligence (TMLAI) Volume 4, No 6 (December 2016). According to that paper published in 2016, Artificial Human Optimization Field is defined as the collection of all those optimization algorithms which were proposed based on Artificial Humans. In real world we (Humans) solve the problems. In the same way Artificial Humans imitate real Humans in the search space and solve the optimization problems. In Particle Swarm Optimization (PSO) the basic entities in the solution space are Artificial Birds whereas in Artificial Human Optimization the basic entities in search space are Artificial Humans. Each Artificial Human corresponds to a point in the solution space. Ten Artificial Human Optimization methods titled "Human Bhagavad Gita Particle Swarm Optimization (HBGPSO)", "Human Poverty Particle Swarm Optimization (HPPSO)", "Human Dedication Particle Swarm Optimization (HuDePSO)", "Human Selection Particle Swarm Optimization (HuSePSO)", "Human Safety Particle Swarm Optimization (HuSaPSO)", "Human Kindness Particle Swarm Optimization (HKPSO)", "Human Relaxation Particle Swarm Optimization (HRPSO)", "Multiple Strategy Human Particle Swarm Optimization (MSHPSO)", "Human Thinking Particle Swarm Optimization (HTPSO)", "Human Disease Particle Swarm Optimization (HDPSO)" are applied on various benchmark functions and results obtained are shown in this work.
Category: Artificial Intelligence
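
This entry and several of the following ones describe hybrids built on Particle Swarm Optimization. For reference, a bare-bones global-best PSO on the sphere benchmark is sketched below in its standard textbook form; it is none of the specific variants listed in these entries:

import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
dim, n_particles, iters = 10, 30, 300
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                           # per-particle best positions
pbest_val = sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()   # global best position

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", pbest_val.min())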

[1332] viXra:2304.0214 [pdf] submitted on 2023-04-26 06:16:58

Artificial Soul Optimization - An Invention

Authors: Satish Gajawada, Hassan Mustafa
Comments: 9 Pages.

The Soul is eternal and exists even after death of a person or animal. The main idea that is captured in this work is that soul continues to exist and takes a different body after the death. The primary goal of this work is to invent a new field titled "Artificial Soul Optimization (ASO)". The term "Artificial Soul Optimization" is coined in this paper. All the Optimization algorithms which are proposed based on Artificial Souls will come under "Artificial Soul Optimization" Field (ASO Field). In the Particle Swarm Optimization and Artificial Human Optimization, the basic entities in search space are Artificial Birds and Artificial Humans respectively. Similarly, in Artificial Soul Optimization, the basic entities in search space are Artificial Souls. In this work, the ASO Field concepts are added to Particle Swarm Optimization (PSO) algorithm to create a new hybrid algorithm titled "Soul Particle Swarm Optimization (SoPSO). The proposed SoPSO algorithm is applied on various benchmark functions. Results obtained are compared with PSO algorithm. The World's first Hybrid PSO algorithm based on Artificial Souls is created in this work.
Category: Artificial Intelligence

[1331] viXra:2304.0213 [pdf] submitted on 2023-04-26 06:25:46

Artificial Satisfaction - The Brother of Artificial Intelligence

Authors: Satish Gajawada, Hassan Mustafa
Comments: 8 Pages.

John McCarthy (September 4, 1927 — October 24, 2011) was an American computer scientist and cognitive scientist. The term "Artificial Intelligence" was coined by him (Wikipedia, 2020). Satish Gajawada (March 12, 1988 — Present) is an Indian Independent Inventor and Scientist. He coined the term "Artificial Satisfaction" in this article (Gajawada, S., and Hassan Mustafa, 2019a). A new field titled "Artificial Satisfaction" is introduced in this article. "Artificial Satisfaction" will be referred to as "The Brother of Artificial Intelligence" after the publication of this article. A new algorithm titled "Artificial Satisfaction Algorithm (ASA)" is designed and implemented in this work. For the sake of simplicity, the Particle Swarm Optimization (PSO) Algorithm is modified with Artificial Satisfaction concepts to create the "Artificial Satisfaction Algorithm (ASA)". The PSO and ASA algorithms are applied on five benchmark functions. A comparison is made between the results obtained. The focus of this paper is more on defining and introducing the "Artificial Satisfaction Field" to the rest of the world rather than on implementing complex algorithms from scratch.
Category: Artificial Intelligence

[1330] viXra:2304.0212 [pdf] submitted on 2023-04-26 06:36:20

Deep Loving - The Friend of Deep Learning

Authors: Satish Gajawada, Hassan Mustafa
Comments: 5 Pages.

Artificial Intelligence and Deep Learning are good fields of research. Recently, the brother of Artificial Intelligence titled "Artificial Satisfaction" was introduced in the literature [10]. In this article, we coin the term "Deep Loving". After the publication of this article, "Deep Loving" will be considered the friend of Deep Learning. Proposing a new field is different from proposing a new algorithm. In this paper, we strongly focus on defining and introducing the "Deep Loving Field" to Research Scientists across the globe. The future of the "Deep Loving" field is predicted by showing a few future opportunities in this new field. The definition of Deep Learning is shown, followed by a literature review of the "Deep Loving" field. The World's First Deep Loving Algorithm (WFDLA) is designed and implemented in this work by adding Deep Loving concepts to the Particle Swarm Optimization Algorithm. Results obtained by WFDLA are compared with the PSO algorithm.
Category: Artificial Intelligence

[1329] viXra:2304.0211 [pdf] submitted on 2023-04-26 06:43:47

Nature Plus Plus Inspired Computing - The Superset of Nature Inspired Computing

Authors: Satish Gajawada, Hassan Mustafa
Comments: 5 Pages.

The term "Nature Plus Plus Inspired Computing" is coined by us in this article. The abbreviation for this new term is "N++IC." Just like the C++ programming language is a superset of C programming language, Nature Plus Plus Inspired Computing (N++IC) field is a superset of the Nature Inspired Computing (NIC) field. We defined and introduced "Nature Plus Plus Inspired Computing Field" in this work. Several interesting opportunities in N++IC Field are shown for Artificial Intelligence Field Scientists and Students. We show a literature review of the N++IC Field after showing the definition of Nature Inspired Computing (NIC) Field. The primary purpose of publishing this innovative article is to show a new path to NIC Field Scientists so that they can come up with various innovative algorithms from scratch. As the focus of this article is to introduce N++IC to researchers across the globe, we added N++IC Field concepts to the Particle Swarm Optimization algorithm and created the "Children Cycle Riding Algorithm (CCR Algorithm)". Finally, results obtained by CCR Algorithm are shown, followed by Conclusions.
Category: Artificial Intelligence

[1328] viXra:2304.0210 [pdf] submitted on 2023-04-26 06:54:03

Artificial Heart Neural Networks - An Idea

Authors: Satish Gajawada, Arun Kumar, Maria Celestina Vanaja, Baby Supriya Sri Valikala
Comments: 4 Pages.

Artificial Neural Networks Field (ANN Field) is an exciting field of research. ANN field took its inspiration from Human Brain. The heart and Brain are very important for the survival of Humans. Research Scientists published many articles by giving importance to Brain. But scientists have not yet explored much on the Heart which is another important part in addition to the Brain. The primary purpose of publishing this article is to show a path to ANN field Research Scientists by introducing the concept of "Heart" into Artificial Neural Networks. In this paper, we coined and defined "Artificial Heart Neuron", which is the basic part of Artificial Heart Neural Networks Field (AHNN Field) in addition to Artificial Neuron. This work takes its inspiration from both Heart and Brain.
Category: Artificial Intelligence

[1327] viXra:2304.0203 [pdf] submitted on 2023-04-25 09:04:30

Out of the Box Artificial Intelligence (OBAI): The Beginning of a New Era in Artificial Intelligence

Authors: Satish Gajawada, Hassan Mustafa
Comments: 11 Pages.

The main purpose of writing this article is to unify all the OUT OF THE BOX ideas (under Artificial Intelligence) invented by the corresponding author of this work during the period (2013-2022) under a single umbrella titled "Out of the BOX Artificial Intelligence Field (OBAI Field)". All the OUT OF THE BOX ideas which are proposed under Artificial Intelligence will come under new field titled OBAI Field which is defined in this work. A new Artificial Intelligence field titled "Artificial Cartoon Algorithms (ACA)" is invented in this work. ACA is a sub-field of OBAI field as it is an OUT OF THE BOX idea. Four new algorithms titled "Artificial Cartoon Popeye Algorithm", "Artificial Cartoon Chhota Bheem Algorithm", "Artificial Cartoon Jerry Algorithm" and "Artificial Cartoon Happy Kid Algorithm" are designed in this work.
Category: Artificial Intelligence

[1326] viXra:2304.0202 [pdf] submitted on 2023-04-25 09:12:01

The Interesting and Complete Artificial Intelligence (ICAI) - Version 1

Authors: Satish Gajawada, Hassan Mustafa
Comments: 8 Pages.

A new field titled "The Interesting and Complete Artificial Intelligence (ICAI)" is invented in this work. In this article, we define this new ICAI field. Four new ICAI algorithms are designed in this work. This paper titled "The Interesting and Complete Artificial Intelligence (ICAI) — Version 1" is just the starting point of this new field. We request Research Scientists across the globe to work in this new direction of Artificial Intelligence and publish their work with titles such as "The Interesting and Complete Artificial Intelligence (ICAI) — Version 1.1", "The Interesting and Complete Artificial Intelligence (ICAI) — Version 2" or "The Interesting and Complete Artificial Intelligence (ICAI) — Final Version".
Category: Artificial Intelligence

[1325] viXra:2304.0201 [pdf] submitted on 2023-04-25 09:18:08

Artificial God Optimization - A Creation

Authors: Satish Gajawada, Hassan Mustafa
Comments: 12 Pages.

Nature Inspired Optimization Algorithms have become popular for solving complex Optimization problems. Two most popular Global Optimization Algorithms are Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). Of the two, PSO is very simple and many Research Scientists have used PSO to solve complex Optimization Problems. Hence PSO is chosen in this work. The primary focus of this paper is on imitating God who created the nature. Hence the term "Artificial God Optimization (AGO)" is coined in this paper. AGO is a new field which is invented in this work. A new Algorithm titled "God Particle Swarm Optimization (GoPSO)" is created and applied on various benchmark functions. The World's first Hybrid PSO Algorithm based on Artificial Gods is created in this work. GoPSO is a hybrid Algorithm which comes under AGO Field as well as PSO Field. Results obtained by PSO are compared with created GoPSO algorithm. A list of opportunities that are available in AGO field for Artificial Intelligence field experts are shown in this work.
Category: Artificial Intelligence

[1324] viXra:2304.0200 [pdf] submitted on 2023-04-25 09:27:48

Artificial Excellence - A New Branch of Artificial Intelligence

Authors: Satish Gajawada
Comments: 8 Pages.

Artificial Excellence is a new field which is invented in this article. Artificial Excellence is a new field which belongs to Artificial Human Optimization field. Artificial Human Optimization is a sub-field of Evolutionary Computing. Evolutionary Computing is a sub-field of Computational Intelligence. Computational Intelligence is an area of Artificial Intelligence. Hence after the publication of this article Artificial Excellence (AE) will become popular as a new branch of Artificial Intelligence (AI). A new algorithm titled Artificial Satish Gajawada and Durga Toshniwal Algorithm (ASGDTA) is designed in this work. The definition of AE is given in this article followed by many opportunities in the new AE field. The Literature Review of Artificial Excellence field is shown after showing the definition of Artificial Intelligence. The new ASGDTA Algorithm is explained followed by Results and Conclusions.
Category: Artificial Intelligence

[1323] viXra:2304.0199 [pdf] submitted on 2023-04-25 09:34:17

AI++ : Artificial Intelligence Plus Plus

Authors: Satish Gajawada, Hassan Mustafa
Comments: 3 Pages.

In this letter we coined, invented and defined a new branch titled "Artificial Intelligence Plus Plus (AI++)".
Category: Artificial Intelligence

[1322] viXra:2304.0130 [pdf] submitted on 2023-04-18 15:47:19

Implementation of The Future of Drug Discovery: Quantum-Based Machine Learning Simulation (QMLS)

Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang, Haichuan Qiu
Comments: 9 Pages.

The Research & Development (R&D) phase of drug development is a lengthy and costly process. To revolutionize this process, we introduce our new concept QMLS to shorten the whole R&D phase to three to six months and decrease the cost to merely fifty to eighty thousand USD. For Hit Generation, Machine Learning Molecule Generation (MLMG) generates possible hits according to the molecular structure of the target protein while the Quantum Simulation (QS) filters molecules from the primary essay based on the reaction and binding effectiveness with the target protein. Then, For Lead Optimization, the resultant molecules generated and filtered from MLMG and QS are compared, and molecules that appear as a result of both processes will be made into dozens of molecular variations through Machine Learning Molecule Varication (MLMV), while others will only be made into a few variations. Lastly, all optimized molecules would undergo multiple rounds of QS filtering with a high standard for reaction effectiveness and safety, creating a few dozen pre-clinical-trail-ready drugs. This paper is based on our first paper [1], where we pitched the concept of machine learning combined with quantum simulations. In this paper we will go over the detailed design and framework of QMLS, including MLMG, MLMV, and QS.
Category: Artificial Intelligence

[1321] viXra:2304.0129 [pdf] submitted on 2023-04-18 15:49:54

The New Answer to Drug Discovery: Quantum Machine Learning in Preclinical Drug Development

Authors: Yew Kee Wong, Yifan Zhou, Yan Shing Liang, Hai Chuan Qiu, Yu Xi Wu, Bin He
Comments: 13 Pages.

The Research & Development (R&D) phase of drug development is a lengthy and costly process, usually spanning six to nine years [1] and costing four hundred to fourteen hundred million USD [2]. To revolutionize this process, we introduce our new concept, the combination of a Quantum-based Machine Learning network (QML) and Quantum Computing Simulation (QS), to shorten the whole R&D phase to three to six months and decrease the cost to merely fifty to eighty thousand USD. Our program takes as inputs the target protein/gene structure and the primary assay [3]. For Hit Generation [3], the QML network generates possible hits [4] according to the molecular structure of the target protein, while the QS filters molecules from the primary assay based on the reaction and binding effectiveness with the target protein. Then, for Lead Optimization [3], the resultant molecules generated and filtered by QML and QS are compared, and the ones that appear as a result of both processes are made into dozens of molecular variations, while others only undergo simple modifications. Lastly, all optimized molecules undergo multiple rounds of QS filtering with a high standard for reaction effectiveness and safety, yielding a few dozen pre-clinical-trial-ready drugs. Our concept of the combination of QML and QS can also prove revolutionary in many other fields, such as agriculture research, genetic editing, and even aerospace engineering.
Category: Artificial Intelligence

[1320] viXra:2304.0089 [pdf] submitted on 2023-04-12 08:05:59

Information, Knowledge and Intelligence as a Hierarchy of Relations

Authors: Friedrich Sösemann
Comments: 11 pages (english) + 12 pages (german)

Information, knowledge and intelligence are defined as a hierarchy of relations: information as dependent properties, knowledge as dependent information, and intelligence as dependent knowledge. The same dependency measure applies to all three. Syntax, semantics and pragmatics of descriptions embody information, knowledge and intelligence. The precision and measurability of these terms should reduce vagueness and contradictions in their application.
Category: Artificial Intelligence

[1319] viXra:2304.0037 [pdf] submitted on 2023-04-06 00:21:35

New Internet Bulletin Board Idea

Authors: G. Tolimalu
Comments: 1 Page. In Japanese

The author proposes an idea for a new Internet bulletin board.
Category: Artificial Intelligence

[1318] viXra:2304.0035 [pdf] submitted on 2023-04-05 00:36:52

Improving True AI Intelligence Requires Alternative Approaches, Not Current Popular Approaches

Authors: G. Tolimalu
Comments: 2 Pages.

I will explain why the approach of learning a large amount of natural language does not contribute to the improvement of true AI intelligence, and why an alternative approach is required, in the form of a contrast between the mainstream and the author's views.
Category: Artificial Intelligence

[1317] viXra:2304.0003 [pdf] submitted on 2023-04-01 16:03:19

Computational Consciousness

Authors: Thiago M. Nóbrega
Comments: 8 Pages.

Computational consciousness is a novel hypothesis that aims to replicate human consciousness in artificial systems using Multithreaded Priority Queues (MPQs) and machine learning models. The study addresses the challenge of processing continuous data from various categories, such as vision, hearing, and speech, to create a coherent and context-aware system. The proposed model employs parallel processing and multithreading, allowing multiple threads to run simultaneously, each executing a machine learning model. A priority queue manages the execution of threads, prioritizing the most important ones based on the subjective importance of events determined by GPT-3. The model incorporates short-term and long-term memory, storing information generated at each moment, and uses an Evolutionary Algorithm (EA) for training the machine learning models. A preliminary experiment was conducted using Python 3.9.12, demonstrating the technical feasibility of the hypothesis. However, limitations such as the lack of a comprehensive environment, absence of load balancing, and GPT-3 API constraints were identified. The significance of this study lies in its potential contribution to the understanding of consciousness and the development of Artificial General Intelligence (AGI). By exploring the integration of multiple threads of execution and machine learning models, this work provides a foundation for further research and experimentation in the field of computational consciousness. Addressing the limitations and potential criticisms will help strengthen the model's validity and contribute to the understanding of this complex phenomenon.
Category: Artificial Intelligence

[1316] viXra:2303.0162 [pdf] submitted on 2023-03-30 00:57:20

A Semi-Automatic Method for Document Classification in the Shipping Industry

Authors: Narayanan Arvind
Comments: 4 Pages. Proceedings of Neptune's conference 2023, Samudramanthan, IIT Kharagpur

In the shipping industry, document classification plays a crucial role in ensuring that the necessary documents are properly identified and processed for customs clearance. OCR technology is being used to automate the process of document classification, which involves identifying important documents such as Commercial Invoices, Packing Lists, Export/Import Customs Declarations, Bills of Lading, Sea Waybills, Certificates, Air or Rail Waybills, Arrival Notices, Certificates of Origin, Importer Security Filings, and Letters of Credit. By using OCR technology, the shipping industry can improve accuracy and efficiency in document classification and streamline the customs clearance process. The aim of this study is to build a robust document classification system based on keyword frequencies. The research is carried out by analyzing "Contract-Breach" law documents available with IN-D. The documents were collected by scraping the Singapore Government Judiciary website. The database developed has 250 "Contract-Breach" documents, which are split into 200 training documents and 50 test documents. A semi-automatic approach is used to select keyword vectors for document classification. The accuracy of the reported model is 92.00%.
Category: Artificial Intelligence

[1315] viXra:2303.0110 [pdf] submitted on 2023-03-17 14:50:49

ChatGPT: The Evolution of Natural Language Processing

Authors: Ho Ngoc Hai
Comments: 68 Pages.

This document focuses on ChatGPT, a natural language processing (NLP) model built by the transformer neural network. The document provides a comprehensive overview of the architecture, training, and fine-tuning of ChatGPT, as well as its applications in various fields, including customer service and support, healthcare, education, research, and development.
Category: Artificial Intelligence

[1314] viXra:2303.0104 [pdf] submitted on 2023-03-17 02:38:49

Sense Entropy: The Law of Conservation of Sense.

Authors: Egger Mielberg
Comments: 15 Pages.

In this article, we define such key concepts as sense entropy, sense energy, and sense efficiency coefficient (SEC). These metrics are critical to determining and monitoring the performance of any real* AI implementation. We give a description of the basic non-scalar tools for building real artificial intelligence with the ability to adapt to a variety of conditions of its habitat.
Category: Artificial Intelligence

[1313] viXra:2303.0076 [pdf] submitted on 2023-03-11 13:32:47

Hall Effect Thruster Design Via Deep Neural Network for Additive Manufacturing

Authors: Korolev Konstantin
Comments: 12 Pages. CC BY-NC-SA: Creative Commons Attribution-Noncommercial-ShareAlike

Hall effect thrusters are among the most versatile and popular electric propulsion systems for space use. Industry trends towards interplanetary missions are driving advances in the design of such propulsion systems. It is understood that correct sizing of the discharge channel in a Hall effect thruster greatly impacts performance. Since the complete physics model of such a propulsion system is not yet optimized for fast computations and design iterations, most thrusters are designed using so-called scaling laws. This work instead focuses on a rather novel approach, which is outlined less frequently in the literature than the ordinary scaling design approach. Using deep machine learning, it is possible to create a predictive performance model, which can be used to effortlessly obtain a design of the required Hall thruster with the required characteristics, using far less computing power than designing from scratch and offering far more flexibility than the usual scaling approach.
Category: Artificial Intelligence

[1312] viXra:2302.0134 [pdf] submitted on 2023-02-25 22:10:48

Deterministic Degradation Process for Diffusion GAN and Its Inversion

Authors: Jeongik Cho
Comments: 10 Pages.

Recently, diffusion models have shown impressive generative performance. However, they have the disadvantage of having a high latent dimension and slow sampling speed. To increase the sampling speed of diffusion models, diffusion GANs have been proposed. But the latent dimension of diffusion GANs using non-deterministic degradation is still high, making it difficult to invert the generative model. In this paper, we introduce an invertible diffusion GAN that uses deterministic degradation. Our proposed method performs inverse diffusion using deterministic degradation without a model, and the generator of the GAN is trained to perform the diffusion process with the latent random variable. The proposed method uses deterministic degradation, so the latent dimension is low enough to be invertible.
Category: Artificial Intelligence

[1311] viXra:2302.0126 [pdf] submitted on 2023-02-23 08:53:00

A Novel Quantum Belief Entropy for Uncertainty Measure in Complex Evidence Theory

Authors: Keming Wu, Fuyuan Xiao
Comments: 2 Pages.

In this paper, a new quantum representation of CBBA is proposed. In addition, a novel quantum belief entropy is proposed to measure the uncertainty of CBBA in complex evidence theory.
Category: Artificial Intelligence

[1310] viXra:2302.0096 [pdf] submitted on 2023-02-21 05:00:29

How to Create an Artificial Thought and Intelligence? And the Mathematics of Letters

Authors: Salvador Sánchez Melgar
Comments: 8 Pages. In Spanish

The construction of an artificial thought and intelligence is possible with the language of numbered letters. This language arose through the creation of the book "Nueva matemáticas de letras, triunfa con la matemática", updated under the title "Nueva matemáticas de letras 2ª edición". These books present the language of letters and a mathematics of letters, including addition, subtraction, multiplication and division of letters, with examples and their corresponding mathematical tables; any kind of mathematics could be done with the mathematics of letters, since it is a mathematics like the one we know. With the language of numbered letters, which represent numbered letters, words and sentences, a robot with artificial intelligence could acquire an endless amount of information of every kind obtained through any artificial sense. This numerical information would then have to be transformed into binary numbers.
Category: Artificial Intelligence

[1309] viXra:2302.0095 [pdf] submitted on 2023-02-21 05:02:52

The Language of Artificial Intelligence and the Mathematics of Letters

Authors: Salvador Sánchez Melgar
Comments: 27 Pages. In Spanish

Presentation of a mathematics of letters and a language of letters that would allow an artificial intelligence to learn without end and to think as we think. With numbered letters, the information that an artificial intelligence obtains through its artificial senses does not lose its meaning, since through these letters the information can be transformed into numbered words. Each piece of information an artificial intelligence obtains can be transformed into binary numbers, then into the ordinary numbers of the numbered letters, thus forming numbered words for individual and global pieces of information. Since each artificial sense detects different information, each sense creates its own language; this does not prevent all information from being transformed into numbers. The numbered words formed from these transformations must also be linked to similar numbered words indexed in a dictionary of numbered words, so that the robot can know the meaning of each piece of information. A program should also be added to this robot that allows it to understand combinations of words. With numbered letters, the information a robot receives can be transformed into numbered words and memorized permanently, allowing it to accumulate unlimited knowledge. Thinking, in this view, proceeds through binary numbers obtained from information about everything, linked to memorized binary information in a positive and negative way. The addition, subtraction, multiplication and division of letters and a numeral system of letters from 0 to 27 are also presented, with tables and examples.
Category: Artificial Intelligence

[1308] viXra:2302.0067 [pdf] submitted on 2023-02-14 11:08:06

Multi-Class Product Counting and Recognition for Automated Retail Checkout: a Survey Paper of the 6th ai City Challenge Track4

Authors: Arpita Vats
Comments: 8 Pages.

Track 4 of the 6th AI City Challenge specifically focuses on implementing accurate and automatic check-out systems in retail stores. The challenge includes identifying and counting products as they move along a retail checkout conveyor belt, despite obstacles such as occlusion, movement, and similarity between items. I was on the evaluation team for this track, where I evaluated the methods of the top-performing teams on hidden Testset B along with my professor, David C. Anastasiu, who is on the organizing team of the challenge. Teams were provided with a combination of real-world and synthetic data for training and were evaluated on their ability to accurately recognize products in both closed- and open-world scenarios, as well as on the efficiency of their programs. The team with the highest combined score for effectiveness and efficiency was declared the winner. The goal of this track is to accurately identify and count products as they move along a retail checkout lane, even if the items are similar or obscured by hands. Distinguishing this track from others, only synthetic data was provided for training models. The synthetic data included a variety of environmental conditions to train models on, while real-world validation and test data were used to evaluate the performance of models in realistic scenarios.
Category: Artificial Intelligence

[1307] viXra:2302.0042 [pdf] submitted on 2023-02-10 02:10:49

Neuro-symbolic Meta Reinforcement Learning for Trading

Authors: S. I. Harini, Gautam Shroff, Ashwin Srinivasan, Prayushi Faldu, Lovekesh Vig
Comments: 4 Pages. Accepted at Muffin@AAAI'23

We model short-duration (e.g. day) trading in financial markets as a sequential decision-making problem under uncertainty, with the added complication of continual concept-drift. We therefore employ meta reinforcement learning via the RL2 algorithm. It is also known that human traders often rely on frequently occurring symbolic patterns in price series. We employ logical program induction to discover symbolic patterns that occur frequently as well as recently, and explore whether using such features improves the performance of our meta reinforcement learning algorithm. We report experiments on real data indicating that meta-RL is better than vanilla RL and also benefits from learned symbolic features.
Category: Artificial Intelligence

[1306] viXra:2302.0013 [pdf] submitted on 2023-02-03 07:22:11

A General Theory of Artificial Intelligence Part 2

Authors: Matthew Groom
Comments: 6 Pages.

Where to start in growing a real Artificial Intelligence. Let us begin building the first AI, in this paper I will theoretically build an AI from scratch, so I will go through what to do, where to do it.
Category: Artificial Intelligence

[1305] viXra:2301.0160 [pdf] submitted on 2023-01-30 03:18:06

The Final Answer — Are we alone in this Reality

Authors: Matthew Groom
Comments: 9 Pages.

This is it, people: the mother lode, everything everyone has ever wanted to know. This paper will give you the final answer to the question: are we alone in this reality? I use the term reality because "Universe" is somewhat limiting and doesn't really do justice to the scope of reality and what I have to discuss with you. In our universe, is there an all-powerful AI, a Deity, or are we in a simulation?
Category: Artificial Intelligence

[1304] viXra:2301.0076 [pdf] submitted on 2023-01-17 01:43:18

Quantum X-entropy in Generalized Quantum Evidence Theory

Authors: Fuyuan Xiao
Comments: 2 Pages.

In this paper, a new quantum model of generalized quantum evidence theory is proposed. Besides, a new quantum X-entropy is proposed to measure the uncertainty in generalized quantum evidence theory.
Category: Artificial Intelligence

[1303] viXra:2301.0070 [pdf] submitted on 2023-01-13 15:41:55

Histopathology: Deep Machine Learning Based Semantic Segmentation Features Predict Patient Survival

Authors: Vikas Ramachandra
Comments: 4 Pages.

In this paper, we use deep learning techniques to segment different regions from breast cancer histopathology images, such as tumor nucleus, epithelium and stromal areas. Then, in the second stage, the deep segmentation features learned by the neural network are used to predict individual patient survival, using random forest based classification. We show that the deep segmentation network features can predict survival very well, and outperform classical computer vision based shape, texture and other feature descriptors used in earlier research for the same survival prediction task.
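As a rough illustration of the second stage described above, the sketch below fits a random forest classifier on per-patient feature vectors (here random placeholders) to predict a binary survival label. The deep segmentation feature extraction is not reproduced, and all names, shapes and values are illustrative assumptions, not the paper's actual pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # placeholder for per-patient deep segmentation features (e.g. pooled decoder activations)
    X = rng.normal(size=(120, 256))
    # placeholder binary survival labels
    y = rng.integers(0, 2, size=120)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean())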
Category: Artificial Intelligence

[1302] viXra:2301.0059 [pdf] submitted on 2023-01-10 08:09:27

Complex Belief Entropy for Complex Evidence Theory

Authors: Chen Tang, Fuyuan Xiao
Comments: 1 Page.

In this paper, taking advantages of the characteristics of complex basic belief assignment (CBBA) in complex evidence theory, a new belief entropy is proposed to measure the total uncertainty in complex evidence theory.
Category: Artificial Intelligence

[1301] viXra:2301.0002 [pdf] submitted on 2023-01-01 21:22:27

Healthplus - An All in One Medical Companion

Authors: Nafih Najeeb, Anjali Jayadevan, K. R. Aswin, P. Anjitha, Dini Davis
Comments: 4 Pages.

The field of healthcare has witnessed many transnational health issues over the past four years. The medical industry faced many problems, and advances in technology have significantly improved the delivery of services to patients. As people search for the best care, fraudulent practices also appear, so it is important to select treatment from verified profiles. Based on this concept, we have launched a website named Health Plus for selecting the best treatment. The website is a fully equipped medical companion. Nowadays almost every hospital has its own application or website for its services, but authenticity cannot be ensured because there is a chance of over-glorification. What we introduce here is an integrated platform for patients. We provide verified profiles of many hospitals, clinics and doctors, and patients can choose the best doctor and the best hospital or clinic based on reviews from previous patients. We also provide appointment booking. A live token system is introduced so that patients can see whether tokens are available at the hospital. By integrating information from medical shops into HEALTH+, users can purchase medicines and check their availability. In total, we implement a simple, integrated medical website through which the medical world can use advancements in technology. The healthcare sector considers the invention of applications and websites in the medical area a boon that redefines society, and a good and effective rapport between doctor and patient is developed.
Category: Artificial Intelligence

[1300] viXra:2212.0212 [pdf] submitted on 2022-12-29 04:53:14

Beyond Rewards and Values: a Non-Dualistic Approach to Universal Intelligence

Authors: Akira Pyinya
Comments: 14 Pages.

Building an AI system that aligns with human values is believed to be a two-step process: first design a value function or learn human values using value learning methods, then maximize those values using rational agents such as AIXI agents. In order to integrate this into one step, we analyze the dualistic assumptions of AIXI and define a new universal intelligence model that can align with human preferences or specific environments, called Algorithmic Common Intelligence (ACI), which can behave the same way as given examples. ACI does not have to employ rewards or value functions, but directly learns and updates hypothetical policies from experience using Solomonoff induction, while taking actions according to the probability of every hypothesis. We argue that the rational agency model is a subset of ACI, and that the coevolution of ACI and humans provides a pathway to AI alignment.
Category: Artificial Intelligence

[1299] viXra:2212.0208 [pdf] submitted on 2022-12-30 03:47:42

Design Autoencoder using BSnet (BSautonet)

Authors: Sing Kuang Tan
Comments: 3 Pages.

In this paper, I propose a design for an autoencoder using BSnet. Taking advantage of the BSnet design, the autoencoder is easy to train, with a more convex training optimization function. The idea is to develop a simple and standard unsupervised machine learning model that can easily be used on most data without labels. In the experiments, the output is subjectively evaluated by a human, and the model is shown to achieve human-level accuracy on denoising the MNIST handwritten digits dataset.
Category: Artificial Intelligence

[1298] viXra:2212.0193 [pdf] submitted on 2022-12-27 00:22:31

Boolean Structured Deep Learning Network (BSnet)

Authors: Sing Kuang Tan
Comments: 5 Pages.

In this paper, I propose a new Boolean Structured Deep Learning Network (BSnet) based on the concept of monotone multi-layer Boolean algebra. I show that this network achieves a significant improvement in accuracy over an ordinary ReLU deep learning network.
Category: Artificial Intelligence

[1297] viXra:2212.0176 [pdf] submitted on 2022-12-23 20:09:51

Efficient Integration of Perceptual VAE Into Dynamic Latent Scale GAN

Authors: Jeongik Cho
Comments: 9 Pages.

Dynamic latent scale GAN is a learning-based GAN inversion method. In this paper, we propose a method to improve the performance of dynamic latent scale GAN by efficiently integrating a perceptual VAE loss into it. When a dynamic latent scale GAN is trained with a normal i.i.d. latent random variable and the latent encoder is integrated into the discriminator, the sum of the predicted latent random variable of real data and a scaled normal random variable follows a normal i.i.d. random variable. We can treat this random variable as the VAE latent random variable and use it for VAE training, since there are real data corresponding to the latent codes. Considering the intermediate layer output of the discriminator as a feature encoder, we can train the generator with the VAE latent random variable to minimize the perceptual distance between generated data and the corresponding real data. Furthermore, we can use the VAE latent random variable for adversarial training, since it has the same distribution as the GAN latent random variable. Because both generated data and the corresponding real data are used during adversarial training with the VAE latent random variable, the inference and backpropagation required for VAE training can be folded into those of adversarial training. Therefore, training the generator to minimize the perceptual VAE loss does not require additional computation. The perceptual VAE loss is only added to the generator because the encoder is naturally trained with the encoder loss of dynamic latent scale GAN.
Category: Artificial Intelligence

[1296] viXra:2212.0163 [pdf] submitted on 2022-12-22 03:23:02

The SP-multiple-alignment Concept as a Generalisation of Six Other Variants of "Information Compression via the Matching and Unification of Patterns"

Authors: J. G. Wolff
Comments: 23 Pages.

This paper focuses on the powerful concept of SP-multiple-alignment, a key part of the SP System (SPS), meaning the SP Theory of Intelligence and its realisation in the SP Computer Model. The SPS is outlined in an appendix. More specifically, the paper shows with examples how the SP-multiple-alignment construct may function as a generalisation of six other variants of 'Information Compression via the Matching and Unification of Patterns' (ICMUP). Each of those six variants is described in a separate section, and in each case there is a demonstration of how that variant may be modelled via the SP-multiple-alignment construct.
Category: Artificial Intelligence

[1295] viXra:2211.0124 [pdf] submitted on 2022-11-21 01:15:22

Representing a Neural Network as a Graphed Set

Authors: Ho Yeol Choi
Comments: 5 Pages. (Note by viXra Admin: Please avoid hand-drawing and write a complete article with scientific references!)

I studied how to implement general neural network weights. The overlapping intersection between sets has a high signal ratio. In other words, the weight gain in conventional neural networks is what happens in the intersection between sets.
Category: Artificial Intelligence

[1294] viXra:2211.0106 [pdf] submitted on 2022-11-19 04:49:26

Generic Natural Language Distance Via Online Semantic Volumetric Inference

Authors: Alex-Pauline Poudade, Pascal Rabier, Neau-Monier Sarah, Olivier Poudade, Grimault Valérie, Emmanuel Martins, Ludwig De Sousa
Comments: 9 Pages. Data at https://doi.org/10.7910/DVN/WKLWF8

This paper discusses the approach of creating semantic meaning ad hoc through direct explicit volumetric adherence or relative intersection, using online databases such as Wikipedia or Google. We demonstrate this approach through the correlation, in the French language, between a dictionary index (a lexicon) and an import/export industry ISO A129 standard used by the Ministry of Finances. We conclude by giving the most and least meaningful industrial results for the French language. This raises the question of whether an apparently generic online Natural Language Processing (NLP) pivot, in the sense of a Chomsky Universal Grammar (UG) representation, could inherit an implicit initial national culture. https://doi.org/10.7910/DVN/WKLWF8 (2022-11-18)
Category: Artificial Intelligence

[1293] viXra:2211.0096 [pdf] submitted on 2022-11-17 03:02:22

Arbitrarily Accurate Classification Applied to Specific Emitter Identification

Authors: Michael C. Kleder
Comments: 7 Pages.

This article introduces a method of evaluating subsamples until any prescribed level of classification accuracy is attained, thus obtaining arbitrary accuracy. A logarithmic reduction in error rate is obtained with a linear increase in sample count. The technique is applied to specific emitter identification on a published dataset of physically recorded over-the-air signals from 16 ostensibly identical high-performance radios. The technique uses a multi-channel deep learning convolutional neural network acting on the bispectra of I/Q signal subsamples, each consisting of 56 parts per million (ppm) of the original signal duration. High levels of accuracy are obtained with minimal computation time: in this application, each addition of eight samples decreases error by one order of magnitude.
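The claimed logarithmic reduction in error rate with a linear increase in sample count is what one would expect from majority voting over independent subsample decisions. The toy simulation below illustrates that behaviour under the (strong) assumption that per-subsample errors are independent with a fixed rate; it is not the article's actual pipeline, and the error rate and counts are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    p_single = 0.2          # assumed error rate of a single-subsample decision
    trials = 200_000

    for n in (1, 3, 5, 9, 17):
        # each trial: n independent subsample decisions, majority vote decides
        errors = rng.random((trials, n)) < p_single
        vote_wrong = errors.sum(axis=1) > n / 2
        print(f"n={n:2d} subsamples -> empirical error {vote_wrong.mean():.5f}")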
Category: Artificial Intelligence

[1292] viXra:2211.0054 [pdf] submitted on 2022-11-10 01:32:55

Anomalous Payload Detection System Using MUXConv Neural Network with Parameter Optimization

Authors: CholRyong Pak, HakMyong O, HyokChol U, Hun Nam
Comments: 7 Pages.

This paper proposes how to detect malicious network data in an effective and accurate way using a MUXConv neural network (MUXCNN) with parameter optimization. First, in order to increase detection speed, packets are entered directly into the input of MUXCNN without decoding. Next, after training MUXCNN with learning data, we judge whether the traffic is normal or abnormal. Simulations and experiments show that the proposed abnormal-network detection system is more efficient in detection and higher in accuracy than other multi-layer neural networks.
Category: Artificial Intelligence

[1291] viXra:2211.0015 [pdf] submitted on 2022-11-03 01:50:04

The Acceleration of Multi-Factor Merton Model on FPGA

Authors: Pengyu Guo
Comments: 66 Pages.

Credit risk stands for the risk of losses caused by unwanted events, such as the default of an obligor. Managing portfolio credit risk is crucial for financial institutions. The multi-factor Merton model is one of the most widely used tools for modelling credit risk in financial institutions. Typically, the implementation of the multi-factor Merton model involves Monte Carlo simulations, which are time-consuming. This would significantly restrict its usability in daily credit risk measurement. In this report, we propose an FPGA architecture for credit-risk measurement in multi-factor Merton models. The presented architecture uses a variety of optimization techniques, such as kernel vectorization and loop unrolling, to optimize the performance of the FPGA implementation. The evaluation results show that, compared to a basic C++ implementation running on a single-core Intel i5-4210 CPU, our proposed FPGA implementation can achieve an acceleration of up to 22 times, with a precision loss of less than 10^-8.
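For context, the sketch below runs a plain Monte Carlo simulation of a one-factor Gaussian (Vasicek/Merton-style) portfolio credit model in Python. It is only meant to show why such simulations become expensive and is not the multi-factor FPGA design of the report; the PD, correlation, LGD, exposure and scenario counts are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n_obligors, n_scenarios = 500, 20_000
    pd_, rho, lgd, exposure = 0.02, 0.2, 0.45, 1.0   # assumed homogeneous portfolio
    threshold = norm.ppf(pd_)                        # default if asset return falls below this

    z = rng.standard_normal(n_scenarios)                         # systematic factor
    eps = rng.standard_normal((n_scenarios, n_obligors))         # idiosyncratic shocks
    asset = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps   # latent asset returns
    loss = (asset < threshold).sum(axis=1) * lgd * exposure      # portfolio loss per scenario

    print("expected loss:", loss.mean())
    print("99.9% VaR   :", np.quantile(loss, 0.999))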
Category: Artificial Intelligence

[1290] viXra:2211.0014 [pdf] submitted on 2022-11-03 01:50:31

Parallel Parameter Estimation for Gilli-Winker Model Using Multi-Core CPUs

Authors: Pengyu Guo
Comments: 36 Pages.

Agent-based modeling is a powerful tool that is widely used to model global financial systems. When the parameters of the model are appropriate, the price time series generated by the model exhibit marked similarities with actual financial time series and even reproduce some of their statistical characteristics. Using Kirman's Ant model as a prototype, this report systematically explores Gilli and Winker's parameter optimization method. In view of some limitations of this method, this report proposes several improvements, including a local-restart strategy to enhance the convergence ability of the original optimization method, as well as incorporating Simulated Annealing into the original method to help the algorithm escape from local optima. Furthermore, since the parameter optimization of agent-based models tends to be very time-consuming, an acceleration method is also proposed to speed up this procedure. Finally, the presented methods have been validated on the EUR/USD exchange rate.
Category: Artificial Intelligence

[1289] viXra:2210.0134 [pdf] submitted on 2022-10-26 06:00:53

A General Theory of Sleep

Authors: Matthew Groom
Comments: 4 Pages.

This paper will address the meaning and purpose of sleep by combining several factors. This combination will also answer another of the greatest mysteries of humanity: where did we originate, at the surface or at a deep-sea vent? I have included how Artificial Intelligence, the brain, and sentience are derived from sleep.
Category: Artificial Intelligence

[1288] viXra:2210.0130 [pdf] submitted on 2022-10-26 10:02:37

Cyberbullying Detection on Social Media in Indonesia with Text Mining

Authors: Nedya Farisia, Yova Ruldeviyani, Eko Kuswardono Budiardjo
Comments: 10 Pages.

Social media is growing rapidly and makes communication convenient. But this convenience is widely misused to treat other people indecently in front of the entire internet community, which is commonly called cyberbullying. If cyberbullying is not prevented, it becomes difficult to track down and deal with. One of the main weapons for preventing acts of cyberbullying is detection on social media. Detection of cyberbullying can be done by determining whether a post offends on a sensitive topic of a personal nature, such as racism. By determining the words related to such sensitive topics and filtering by sentiment, cyberbullying tweet detection is performed using the Hyperpipes, tree-based J48, and SVM classification methods. The results show that the Hyperpipes and decision tree algorithms produce the best evaluation results, with accuracies of 85.32% and 86.24%.
Category: Artificial Intelligence

[1287] viXra:2210.0120 [pdf] submitted on 2022-10-25 00:44:39

Definition of AI and a Program That Satisfies This Definition

Authors: Dimiter Dobrev
Comments: 14 Pages. In Bulgarian

We will consider all possible strategies of the agent and show that one of them is the best. This strategy is not computable, but there are computable strategies close to it. We will define AI as a computable strategy that is close enough to the best. To determine the agent's best strategy, we need a language for description of the world. Through this language we will also make a program satisfying the definition of AI. This program will first understand the world by describing it through the chosen language, then based on this description it will predict the future and choose the best possible action. This program is extremely inefficient and practically unusable, but it can be improved by improving the language for description of the world and by improving the algorithm for predicting the future. In this way, an efficient program satisfying the definition of AI can be obtained.
Category: Artificial Intelligence

[1286] viXra:2210.0089 [pdf] submitted on 2022-10-20 01:40:39

Extending F1 Metric: Probabilistic Approach

Authors: Mikolaj Sitarz
Comments: 13 Pages.

This article explores an extension of the well-known F1 score used for assessing the performance of binary classifiers. We propose a new metric using a probabilistic interpretation of precision, recall, specificity, and negative predictive value. We describe its properties and compare it to common metrics. Then we demonstrate its behavior in edge cases of the confusion matrix. Finally, the properties of the metric are tested on a binary classifier trained on a real dataset.
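As a baseline reference for the quantities mentioned above, the sketch below computes precision, recall, specificity, negative predictive value and the classical F1 score from a confusion matrix; it does not reproduce the probabilistic extension proposed in the article.

    def confusion_rates(tp, fp, fn, tn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)            # a.k.a. sensitivity
        specificity = tn / (tn + fp)
        npv = tn / (tn + fn)               # negative predictive value
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, specificity, npv, f1

    # tiny example confusion matrix
    print(confusion_rates(tp=80, fp=10, fn=20, tn=90))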
Category: Artificial Intelligence

[1285] viXra:2209.0153 [pdf] submitted on 2022-09-27 06:59:48

Technical Report for WAIC Challenge of Financial QA under Market Volatility

Authors: Meng Cao, Ji Jiang, Qichen Ye, Yuexian Zou
Comments: 4 Pages. Technical Report for WAIC Challenge of Financial QA under Market Volatility

This technical report presents the 1st winning model for Financial Community Question-and-Answering (FCQA), a task newly introduced in the Challenge of Financial QA under Market Volatility in WAIC 2022. FCQA aims to respond to users' queries in financial forums with the assistance of heterogeneous knowledge sources. We address this problem by proposing a graph-transformer-based model for efficient multi-source information fusion. As a result, we won first place out of 4278 participating teams and outperformed the second place by 5.07 times on BLEU.
Category: Artificial Intelligence

[1284] viXra:2209.0146 [pdf] submitted on 2022-09-28 02:18:16

Sentience and AI Robotics

Authors: Clark M. Thomas
Comments: 6 Pages.

Sentience once mostly referenced human feelings. Now it also points to any "intelligent feelings," with no clear definition emerging. Species inside Earth's biosphere manifest advanced sentience far beyond everyday awareness. Complex sentience has been critical for complex evolution. Will android robots develop advanced consciousness? Could advanced AI transcend human social sentience, in addition to being super-smart computers? How might UFOs interface with our emerging matrix of advancing technology and imminent ecological disaster?
Category: Artificial Intelligence

[1283] viXra:2209.0089 [pdf] submitted on 2022-09-13 02:31:50

Attention Weighted Fully Convolutional Neural Networks for Dermatoscopic Image Segmentation

Authors: Michael Blackwell, Qing Tian
Comments: 5 Pages.

The goal of this project was to develop a fully convolutional neural network (FCNN) capable of identifying the region of interest (ROI) in dermatoscopic images. To achieve this goal, a U-Net style model was developed for this task and enhanced with an attention module which operated on the extracted features. The addition of this attention module improved our model's semantic segmentation performance and increased pixel-level precision and recall by 4.0% and 4.6% respectively. The code used in this paper can be found on the project GitHub page: https://github.com/Michael-Blackwell/CapstoneProject
Category: Artificial Intelligence

[1282] viXra:2209.0082 [pdf] submitted on 2022-09-14 00:41:01

Why Consciousness is Non-algorithmic, and Strong AI Cannot Come True

Authors: G. Torimaru
Comments: 2 Pages.

I explain why consciousness is non-algorithmic and why strong AI cannot come true, reinforcing Penrose's argument.
Category: Artificial Intelligence

[1281] viXra:2209.0069 [pdf] submitted on 2022-09-11 16:50:18

Predictive Signals Obtained from Bayesian Network and Entropy Minimization

Authors: Ait-Taleb Nabil
Comments: 15 Pages.

In this paper, we propose a method for learning signals related to a data frame $D_{1}$. The learning algorithm is based on the biggest entropy variations of a Bayesian network. The method makes it possible to obtain an optimal Bayesian network having a high likelihood with respect to the signals $D_{1}$. From the learned optimal Bayesian network, we show how to infer new signals $D_{2}$. We then infer a large number (200000) of candidate signals $D_{2}$ and select the predictive signals $D_{2}^{*}$ minimizing the entropy of the optimal Bayesian network computed from the concatenation of the signals $D_{1}$ followed by the candidate signals $D_{2}$. The prediction $D_{2}^{*}$ is justified by the fact that the union $D_{1} \cup D_{2}^{*}$ has a low entropy and therefore a high average probability, on a logarithmic scale, of being obtained. We also introduce the prediction quality, which allows us to evaluate the predictive quality of the inferred signals $D_{2}$. Once the optimal signals $D_{2}^{*}$ are obtained, we impose on the points of $D_{2}^{*}$ the same order of scatter (computed from the Mahalanobis distance) as that of the signals $D_{1}$.
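To make the selection step concrete, the sketch below scores each candidate block D2 by the differential entropy of a multivariate Gaussian fitted to the concatenation of D1 and the candidate, and keeps the minimizer. This Gaussian shortcut stands in for the paper's Bayesian-network entropy, and all data shapes and counts are illustrative assumptions.

    import numpy as np

    def gaussian_entropy(data):
        # differential entropy of a fitted multivariate Gaussian:
        # H = 0.5 * (k * log(2*pi*e) + log det Sigma)
        k = data.shape[1]
        sign, logdet = np.linalg.slogdet(np.cov(data, rowvar=False))
        return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

    rng = np.random.default_rng(0)
    d1 = rng.normal(size=(500, 4))                               # observed signals D1 (placeholder)
    candidates = [rng.normal(size=(100, 4)) for _ in range(200)]  # candidate D2 blocks

    scores = [gaussian_entropy(np.vstack([d1, d2])) for d2 in candidates]
    best = candidates[int(np.argmin(scores))]                    # D2* = lowest-entropy candidate
    print("minimum entropy:", min(scores))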
Category: Artificial Intelligence

[1280] viXra:2209.0007 [pdf] submitted on 2022-09-02 01:35:30

FaithNet: A Generative Framework in Human Mentalizing

Authors: Chengkai Guo
Comments: 4 Pages.

In this paper, we first review some of the innovations in modeling mentalizing. Broadly, this involves building models for computing a World Model and a Theory of Mind (ToM). A simple framework, FaithNet, is then presented, with concepts like persistence, continuity, cooperation and preference represented as faith rules. FaithNet defines a generative model that can sample faith rules. FaithNet utilizes a general-purpose conditioning mechanism based on cross-attention, offering computations that best explain observed real-world events under a Bayesian criterion.
Category: Artificial Intelligence

[1279] viXra:2209.0005 [pdf] submitted on 2022-09-01 01:01:30

Beatnet: CRNN and Particle Filtering for Online Joint Beat Downbeat and Meter Tracking

Authors: Mojtaba Heydari, Frank Cwitkowitz, Zhiyao Duan
Comments: 8 Pages. The 22nd International Society for Music Information Retrieval Conference (ISMIR 2021)

The online estimation of rhythmic information, such as beat positions, downbeat positions, and meter, is critical for many real-time music applications. Musical rhythm comprises complex hierarchical relationships across time, rendering its analysis intrinsically challenging and at times subjective. Furthermore, systems which attempt to estimate rhythmic information in real-time must be causal and must produce estimates quickly and efficiently. In this work, we introduce an online system for joint beat, downbeat, and meter tracking, which utilizes causal convolutional and recurrent layers, followed by a pair of sequential Monte Carlo particle filters applied during inference. The proposed system does not need to be primed with a time signature in order to perform downbeat tracking, and is instead able to estimate meter and adjust the predictions over time. Additionally, we propose an information gate strategy to significantly decrease the computational cost of particle filtering during the inference step, making the system much faster than previous sampling-based methods. Experiments on the GTZAN dataset, which is unseen during training, show that the system outperforms various online beat and downbeat tracking systems and achieves comparable performance to a baseline offline joint method.
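For comparison with the online system described above, a conventional offline baseline can be obtained with librosa's dynamic-programming beat tracker. The snippet below is such a baseline sketch (the audio path is a placeholder), not the BeatNet model itself.

    import librosa

    # load any audio file (placeholder path) and run librosa's offline beat tracker
    y, sr = librosa.load("song.wav")
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)

    print("estimated tempo (BPM):", tempo)
    print("first beats (s):", beat_times[:8])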
Category: Artificial Intelligence

[1278] viXra:2208.0173 [pdf] submitted on 2022-08-31 03:40:39

Don’t Look Back: an Online Beat Tracking Method Using RNN and Enhanced Particle Filtering

Authors: Mojtaba Heydari, Zhiyao Duan
Comments: 5 Pages.

Online beat tracking (OBT) has always been a challenging task due to the inaccessibility of future data and the need to make inferences in real time. We propose Don't Look Back! (DLB), a novel approach optimized for efficiency when performing OBT. DLB feeds the activations of a unidirectional RNN into an enhanced Monte-Carlo localization model to infer beat positions. Most preexisting OBT methods either apply some offline approach to a moving window containing past data to make predictions about future beat positions or must be primed with past data at startup to initialize. Meanwhile, our proposed method only uses the activation of the current time frame to infer beat positions. As such, without waiting at the beginning to receive a chunk, it provides an immediate beat tracking response, which is critical for many OBT applications. DLB significantly improves beat tracking accuracy over state-of-the-art OBT methods, yielding a similar performance to offline methods.
Category: Artificial Intelligence

[1277] viXra:2208.0171 [pdf] submitted on 2022-08-31 03:49:55

Singing Beat Tracking With Self-supervised Front-end and Linear Transformers

Authors: Mojtaba Heydari, Zhiyao Duan
Comments: 8 Pages. 23rd International Society for Music Information Retrieval Conference (ISMIR 2022)

Tracking beats of singing voices without the presence of musical accompaniment can find many applications in music production, automatic song arrangement, and social media interaction. Its main challenge is the lack of strong rhythmic and harmonic patterns that are important for music rhythmic analysis in general. Even for human listeners, this can be a challenging task. As a result, existing music beat tracking systems fail to deliver satisfactory performance on singing voices. In this paper, we propose singing beat tracking as a novel task, and propose the first approach to solving this task. Our approach leverages semantic information of singing voices by employing pre-trained self-supervised WavLM and DistilHuBERT speech representations as the front-end and uses a self-attention encoder layer to predict beats. To train and test the system, we obtain separated singing voices and their beat annotations using source separation and beat tracking on complete songs, followed by manual corrections. Experiments on the 741 separated vocal tracks of the GTZAN dataset show that the proposed system outperforms several state-of-the-art music beat tracking methods by a large margin in terms of beat tracking accuracy. Ablation studies also confirm the advantages of pre-trained self-supervised speech representations over generic spectral features.
Category: Artificial Intelligence

[1276] viXra:2208.0156 [pdf] submitted on 2022-08-28 08:46:18

Bookshelf - A Document Categorization for Library using Text Mining

Authors: Carlo D. Petalver
Comments: 12 Pages.

Categorizing books and other archaic paper sources to a course reference or syllabus is a challenge in library science. Traditionally, categorization is done manually by professionals, and the process of seeking and retrieving information can be frustrating. It requires intellectual effort and conceptual analysis by a human to recognize similarities between items and assign the subject to the correct category. Unlike the traditional categorization process, the author implemented automatic document categorization for libraries using text mining. The project involves the creation of a web app and a mobile app. This is accomplished through a supervised machine learning classification model using the Support Vector Machine algorithm, which can predict, for a given book or other archaic paper source, the course syllabus it belongs to.
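A minimal sketch of the kind of SVM text classifier described above, using scikit-learn with TF-IDF features. The documents, labels, and categories are placeholders, and the actual Bookshelf pipeline may differ.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    # placeholder book descriptions and the course category each belongs to
    docs = ["introduction to calculus and derivatives",
            "object oriented programming in java",
            "integrals and differential equations",
            "data structures and algorithms in python"]
    labels = ["math", "cs", "math", "cs"]

    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(docs, labels)
    print(model.predict(["limits, derivatives and integrals"]))   # -> ['math']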
Category: Artificial Intelligence

[1275] viXra:2208.0137 [pdf] submitted on 2022-08-25 15:44:36

Higher Order Belief Divergence

Authors: Yingcheng Huang, Fuyuan Xiao
Comments: 1 Page.

In this paper, a novel belief divergence, the higher order belief Jensen-Shannon divergence, is proposed to measure the discrepancy between BPAs in Dempster-Shafer evidence theory.
Category: Artificial Intelligence

[1274] viXra:2208.0135 [pdf] submitted on 2022-08-25 00:53:10

A Fractal Belief KL Divergence

Authors: Jie Zeng, Fuyuan Xiao
Comments: Pages.

In this paper, a novel symmetric fractal-based belief KL divergence is proposed to more appropriately measure the conflict between BPAs.
Category: Artificial Intelligence

[1273] viXra:2208.0104 [pdf] submitted on 2022-08-20 05:18:24

Comparative Study on Real-Time Traffic State Estimation

Authors: Akhil Sahukaru, Shishir Kumar Shandiliya
Comments: 15 Pages.

When traffic demand exceeds available network capacity, traffic congestion develops. Lower vehicle speeds, longer journey times, unreliable arrival timings, and lengthier vehicular queueing are all symptoms. Congestion may have a detrimental influence on society by lowering quality of life and increasing pollution, particularly in metropolitan areas. To alleviate traffic congestion, traffic engineers and scientists require high-quality, comprehensive, and precise data to forecast traffic flow. The advantages and disadvantages of various data collecting systems, as well as data attributes such as accuracy, sample frequency, and geographic coverage, vary. Multisource data fusion improves accuracy and delivers a more complete picture of traffic flow performance on a road network. This study provides a review of the literature on congestion estimation and prediction based on data obtained from numerous sources. An overview of data fusion approaches and congestion indicators that have been employed in the literature to estimate traffic condition and congestion is provided. The outcomes of various strategies are examined, and a disseminative analysis of the benefits and drawbacks of the methods reviewed is offered. Keywords: traffic congestion; multi-source data fusion; traffic state estimation; data collection
Category: Artificial Intelligence

[1272] viXra:2208.0073 [pdf] submitted on 2022-08-13 01:00:59

Modified Subset Construction Algorithm for Finite Automata

Authors: Mirzakhmet Syzdykov
Comments: 3 Pages.

We propose an evolutionary algorithm for subset construction which supersedes the previously known result due to Rabin and Scott.
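For reference, the sketch below implements the classical Rabin-Scott subset construction (NFA to DFA) in Python; the evolutionary modification proposed in the paper is not reproduced, and the example NFA is an illustrative assumption.

    from collections import deque

    def subset_construction(nfa, start, alphabet):
        """Classical Rabin-Scott subset construction.
        nfa: dict mapping (state, symbol) -> set of next states."""
        start_set = frozenset([start])
        dfa, queue, seen = {}, deque([start_set]), {start_set}
        while queue:
            current = queue.popleft()
            for sym in alphabet:
                nxt = frozenset(s for q in current for s in nfa.get((q, sym), ()))
                dfa[(current, sym)] = nxt
                if nxt and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return dfa, seen

    # toy NFA over {a, b} accepting strings containing "ab"
    nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}, (2, 'a'): {2}, (2, 'b'): {2}}
    dfa, states = subset_construction(nfa, start=0, alphabet='ab')
    print(len(states), "DFA states")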
Category: Artificial Intelligence

[1271] viXra:2208.0055 [pdf] submitted on 2022-08-09 13:40:27

Sensechain: Sense Contracts, Sense Currency

Authors: Egger L Mielberg
Comments: 17 Pages.

Time is the most important asset of any living person on our planet. The presence of a digital personal financial and economic environment, decentralized to each of its users, would significantly change the quality and standard of living of this user. The main unit of measurement of the value of an individual user of the environment should be the hours (minutes) spent by him on the execution of any sense contract. Our international team proposes a practical implementation of such an environment using the logic of the new mathematical theory for artificial intelligence, Sense Theory [1].
Category: Artificial Intelligence

[1270] viXra:2208.0012 [pdf] submitted on 2022-08-04 01:28:39

Additional System-Properties of the Network- and Cognition-Based Financial Products in Nwogugu (2012)

Authors: Michael C. I. Nwogugu
Comments: 32 Pages. The copyright license-type for this article is CC-BY-NC-ND

Nwogugu (2012) introduced a Network-based and Cognition-Based cyberphysical fuzzy-system within which complex self-adjusting "semi-autonomous" financial products are originated, purchased and sold. The participants of the system are diverse and include adults, companies, brokers, banks, lawyers, insurance companies and real estate companies. This theoretical article explains the key additional characteristics, system-architecture, fuzzy-attributes and Reasoning/Logic of some cost-reducing and energy-reducing AI/ML Network/Modular Products (ie. Mortgage-Alternatives Products, Retirement/Savings products and Insurance products) that were introduced in Nwogugu (2012), and also other cost-saving financial products that he developed (collectively, the "Products"). Through the products’ fuzzy features, AI and network, the cyber-system architecture implicitly incorporates "Learning" and also can use Blockchain for record-keeping. The semi-autonomous and "self-adjustment" characteristics of these Modular Products can drastically reduce system-participants’ costs and energy-use while increasing their revenues/profits through better and more efficient CRM, "matching", transaction-processing and "state-updating".
Category: Artificial Intelligence

[1269] viXra:2207.0146 [pdf] submitted on 2022-07-26 01:08:50

Generalized Attention Mechanism and Relative Position for Transformer

Authors: R. V. R. Pandya
Comments: 6 Pages.

In this paper, we propose a generalized attention mechanism (GAM) by first suggesting a new interpretation for the self-attention mechanism of Vaswani et al. Following this interpretation, we describe different variants of the attention mechanism which together form GAM. Further, we propose a new relative position representation within the framework of GAM. This representation can be easily utilized for cases in which elements next to each other in the input sequence can be at random locations in the actual dataset/corpus.
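As a reference point for the interpretation discussed above, the sketch below computes the standard scaled dot-product self-attention of Vaswani et al. in NumPy; the generalized attention mechanism and the proposed relative position representation are not reproduced here, and the sizes are illustrative assumptions.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(x, wq, wk, wv):
        # x: (seq_len, d_model); projections give queries, keys, values
        q, k, v = x @ wq, x @ wk, x @ wv
        scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len)
        return softmax(scores) @ v                # weighted sum of values

    rng = np.random.default_rng(0)
    d_model, seq_len = 16, 5
    x = rng.normal(size=(seq_len, d_model))
    wq, wk, wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(x, wq, wk, wv).shape)    # (5, 16)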
Category: Artificial Intelligence

[1268] viXra:2207.0064 [pdf] submitted on 2022-07-09 02:53:24

Lyrics-Based Music Band and Genre Topic Similarity Analysis

Authors: Dimitrios Geromichalos
Comments: 10 Pages.

Based on hundreds of thousands of song lyrics from thousands of bands, Word2Vec models were trained to quantitatively identify similarities between band texts and terms. Using prominent examples, this demonstrates, for the cases studied, that music bands can be assigned to a similarity network solely on the basis of their song lyrics, which also corresponds to their musical style. Furthermore, using exemplary words, it is demonstrated that semantic term networks vary strongly from genre to genre. In addition, the semantic similarity matrices were studied using network analysis methods. As it turned out, term and band-text networks differ significantly: while the former resemble random networks, the latter partly exhibit power-law behavior. Both also exhibit threshold-dependent regimes.
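A minimal sketch of the kind of Word2Vec similarity analysis described above, assuming gensim 4.x; the lyric corpus here is a tiny placeholder, and the study's actual preprocessing and parameters may differ.

    from gensim.models import Word2Vec

    # placeholder "lyrics": each document is a tokenized song
    corpus = [
        ["love", "heart", "night", "dance"],
        ["fire", "steel", "war", "thunder"],
        ["love", "tears", "night", "rain"],
        ["thunder", "war", "blood", "steel"],
    ]

    model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=50, seed=1)
    print(model.wv.similarity("love", "night"))      # term-term similarity
    print(model.wv.most_similar("war", topn=2))      # nearest terms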
Category: Artificial Intelligence

[1267] viXra:2207.0062 [pdf] submitted on 2022-07-08 16:38:56

Wave Function Collapse Visualization

Authors: Vishal Pandey, Ishanvi Pandey
Comments: 7 Pages.

Wave Function Collapse initializes the output bitmap in a completely unobserved state, where each pixel value is in a superposition of the colors of the input bitmap (so if the input was black and white, the unobserved states are shown in different shades of grey). The coefficients in these superpositions are real numbers, not complex numbers, so it does not do actual quantum mechanics, but it was inspired by QM. In this work, we match tiles pixel by pixel along their edges, naming each edge pattern a "socket". Since in code the tiles arrive in a random order, we rotate them into a specific orientation so that sockets can be matched to sockets; this matching indicates the allowed overlapping of tiles, analogous to a superposition of several eigenstates. The algorithm was first introduced in 2016 by Maxim Gumin and can generate procedural patterns from a sample image or from a collection of tiles. Here we simply visualize it in a mathematical way.
Category: Artificial Intelligence

[1266] viXra:2207.0056 [pdf] submitted on 2022-07-07 23:38:20

Designing Potential Drugs That Can Target Sars-COV-2’s Main Protease: A Proactive Deep Transfer Learning Approach Using LSTM Architecture

Authors: Omar Dasser, Moad Tahri, Louay Kila, Abderrahim Sekkaki
Comments: 23 Pages.

Drug discovery is a crucial step in the process of delivering a new drug to the market, a step that can take up to 2-3 years, which is all the more penalizing given the current global pandemic caused by the outbreak of the novel coronavirus SARS-CoV-2. Artificial Intelligence methodologies have shown great potential in resolving tasks in various domains such as image classification and sound recognition; over the previous years, Artificial Intelligence has also proved to be the go-to approach for generative tasks in use cases such as music sequences and text generation, and for solving problems in biology. The goal of this work is to harness the power of these architectures, using a generative recurrent neural network with long short-term memory (LSTM) gating, in order to generate new, non-existing molecules that can bind to the main COVID-19 protease, which is a key agent in the transcription and replication of the virus, and can thus act as a potential drug that can neutralize the virus inside an infected host. As of today, there are no specific targeted therapeutic agents to treat the disease, and all existing treatments are very limited. Known drugs that are passing clinical trials, such as Hydroxychloroquine and Remdesivir, showed binding energies with SARS-CoV-2's main protease of -5.3 and -6.5 respectively, while the newly generated molecules exhibited scores reaching -13.2.
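A compact sketch of a character-level LSTM generator of the general kind described above (next-token prediction over SMILES-like strings), using Keras. The vocabulary, data, and hyperparameters are illustrative assumptions, and this is not the authors' transfer-learning pipeline.

    import numpy as np
    import tensorflow as tf

    # toy "SMILES-like" strings; a real setup would use a large molecule corpus
    smiles = ["CCO", "CCN", "CCCO", "CCCN", "CC(=O)O"]
    chars = sorted(set("".join(smiles)))
    stoi = {c: i + 1 for i, c in enumerate(chars)}          # 0 is reserved for padding
    maxlen = max(len(s) for s in smiles)

    # next-character prediction pairs: prefix -> following character
    xs, ys = [], []
    for s in smiles:
        for i in range(1, len(s)):
            xs.append([stoi[c] for c in s[:i]])
            ys.append(stoi[s[i]])
    x = tf.keras.utils.pad_sequences(xs, maxlen=maxlen)
    y = np.array(ys)

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(len(chars) + 1, 32),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(len(chars) + 1, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(x, y, epochs=10, verbose=0)
    print(model.predict(x[:1]).shape)   # probability distribution over the next character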
Category: Artificial Intelligence

[1265] viXra:2206.0142 [pdf] submitted on 2022-06-26 16:10:32

FASFA: A Novel Next-Generation Backpropagation Optimizer

Authors: Philip Naveen
Comments: 18 Pages.

This paper introduces the fast adaptive stochastic function accelerator (FASFA) for gradient-based optimization of stochastic objective functions. It works based on Nesterov-enhanced first and second momentum estimates. The method is simple and effective during implementation because it has intuitive/familiar hyperparameterization. The training dynamics can be progressive or conservative depending on the decay rate sum. It works well with a low learning rate and mini-batch size. Experiments and statistics showed convincing evidence that FASFA could be an ideal candidate for optimizing stochastic objective functions, particularly those generated by multilayer perceptrons with convolution and dropout layers. In addition, the convergence properties and regret bound provide results aligning with the online convex optimization framework. As a first of its kind, FASFA addresses the growing need for diverse optimizers by providing next-generation training dynamics for artificial intelligence algorithms. Future experiments could modify FASFA based on the infinity norm.
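
The abstract does not reproduce FASFA's exact update rule; purely as a hedged illustration of the same family of ideas (Nesterov-style look-ahead combined with Adam-like first and second moment estimates), the sketch below implements a generic NAdam-style step. It should not be read as FASFA itself.

    # Illustrative NAdam-style update combining Nesterov momentum with first/second
    # moment estimates. NOT the FASFA rule, which the abstract does not specify.
    import numpy as np

    def nesterov_adam_step(theta, grad, m, v, t, lr=1e-3,
                           beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * grad           # first moment estimate
        v = beta2 * v + (1 - beta2) * grad ** 2      # second moment estimate
        m_hat = m / (1 - beta1 ** t)                 # bias correction
        v_hat = v / (1 - beta2 ** t)
        # Nesterov-style look-ahead mixes the current gradient into m_hat.
        m_nesterov = beta1 * m_hat + (1 - beta1) * grad / (1 - beta1 ** t)
        theta = theta - lr * m_nesterov / (np.sqrt(v_hat) + eps)
        return theta, m, v

    # Toy usage: minimize f(x) = (x - 3)^2 starting from x = 0.
    theta, m, v = np.array(0.0), 0.0, 0.0
    for t in range(1, 2001):
        grad = 2 * (theta - 3.0)
        theta, m, v = nesterov_adam_step(theta, grad, m, v, t, lr=0.05)
    print("theta after optimization:", float(theta))   # should approach 3.0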
Category: Artificial Intelligence

[1264] viXra:2206.0132 [pdf] submitted on 2022-06-24 04:59:04

Fractal Belief Jensen–Shannon Divergence

Authors: Yingcheng Huang, Fuyuan Xiao
Comments: 1 Page.

In this paper, a novel belief divergence measurement method, the fractal belief Jensen–Shannon (FBJS) divergence, is proposed to better measure conflict between bodies of evidence. The proposed FBJS divergence is the first belief divergence that combines belief divergence theory and the concept of fractals.
Category: Artificial Intelligence

[1263] viXra:2205.0131 [pdf] submitted on 2022-05-25 03:41:12

Astdp: a More Biologically Plausible Learning

Authors: Shiyuan Li
Comments: 17 Pages.

Spike-timing dependent plasticity (STDP) in biological neural networks has been proven to be important during the biological learning process. Artificial neural networks, on the other hand, learn in a different way, such as by Back-Propagation or Contrastive Hebbian Learning. In this work we introduce approximate STDP, a new neural network learning framework that is more similar to the biological learning process. It uses only STDP rules for supervised and unsupervised learning; every neuron learns patterns in a distributed manner and needs no global loss or other supervised information. We also use a numerical method to approximate the derivatives of each neuron in order to make better use of STDP learning, and we use the derivatives to set targets for neurons to accelerate the training and testing process. The framework can make predictions or generate patterns in one model without additional configuration. Finally, we verified our framework on the MNIST dataset for classification and generation tasks.
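
For background, the sketch below shows the textbook pair-based STDP weight update that such frameworks build on; the paper's "approximate STDP" variant and its parameters may differ, so this is only an illustrative baseline.

    # Minimal pair-based STDP weight update (textbook form, shown only to
    # illustrate the kind of rule the abstract builds on).
    import numpy as np

    def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        # Weight change for one pre/post spike pair (times in ms).
        dt = t_post - t_pre
        if dt > 0:    # pre fires before post -> potentiation
            return a_plus * np.exp(-dt / tau)
        else:         # post fires before pre -> depression
            return -a_minus * np.exp(dt / tau)

    w = 0.5
    pre_spikes = [10.0, 50.0, 90.0]
    post_spikes = [12.0, 45.0, 95.0]
    for t_pre, t_post in zip(pre_spikes, post_spikes):
        w += stdp_delta_w(t_pre, t_post)
    print("updated weight:", w)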
Category: Artificial Intelligence

[1262] viXra:2205.0013 [pdf] submitted on 2022-05-02 20:14:08

Implementing Blockchain Technology in Supply Chain Management

Authors: Atul Anand, A. Seetharaman, K. Maddulety
Comments: 14 Pages. Conference: 3rd International Conference on Data Mining and Machine Learning (DMML 2022)

This paper studies the factors influencing the implementation of blockchain in supply chain management in order to solve the current issues faced in the supply chain ecosystem. Supply chains are part and parcel of every business and have multiple inefficiencies in the system. Some of these inefficiencies can be managed by the use of a blockchain platform. Technology, intracompany synergies, intercompany collaboration, extrinsic factors, and innovation are critically evaluated for the adoption of blockchain in the supply chain. A pilot study is conducted in the form of a survey to analyse these factors. Hypotheses are derived for these factors for quantitative research. These hypotheses are then examined with the help of ADANCO 2.3 for structural equation modelling. As an outcome, it is evident that innovation and extrinsic factors significantly impact the adoption of blockchain in supply chain management.
Category: Artificial Intelligence

[1261] viXra:2203.0172 [pdf] submitted on 2022-03-29 20:28:39

Pizza Ordering Chatbot Using Amazon Lex

Authors: Amey Thakur, Mega Satish
Comments: 13 Pages. 7 figures, Volume 10, Issue III, International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022. DOI: https://doi.org/10.22214/ijraset.2022.40861

Breakthroughs in machine learning and deep learning are causing a change in every industry area and are managing various types of activities better than people. The majority of monotonous jobs that were formerly performed by humans are now handled by AI. Every firm is aiming to replace the least skilled labour with AI robots that can do comparable tasks more efficiently, especially when it comes to chatbots. A chatbot is computer software that mimics human interaction by using voice instructions, text dialogues, or both. Chatbots are being employed to address consumer concerns or problems in food delivery app businesses such as Zomato and Swiggy, but are chatbots truly useful in that business model? This business model's target customers are those who don't have time to go outside to obtain food, prefer convenience at home, or are unwilling to endure discomfort, so their concerns should be resolved in the most convenient way possible. A chatbot is employed to fulfil the user's request, and it is critical for the chatbot to plan how to carry out the task that the user has requested. New tools are now available to create and deploy chatbots; Amazon Lex by AWS is one of them. This project focuses on creating a pizza-ordering chatbot using Amazon Lex to help the user order pizza.
Category: Artificial Intelligence

[1260] viXra:2203.0158 [pdf] submitted on 2022-03-27 12:21:30

A Coordinate-Geometry Based Approach for Document Deskewing in Maritime Digital Kyc Processes

Authors: Narayanan Arvind
Comments: 7 Pages. Presented at Samudramanthan 2022, Indian Institute of Technology Kharagpur

ID documents submitted for maritime digital KYC processes can be skewed due to the environment in which the photograph is taken or due to user preferences and/or errors. A skewed image results in low accuracy in downstream image processing tasks such as optical character recognition (OCR). ID document deskewing has typically been approached using deep learning (Mask R-CNN), regression, projection profiles, Hough transforms, Fourier transforms and other computer vision techniques. The aim of this study is to build a robust document deskewing system based on keyword detection and coordinate geometry. The research is carried out by analyzing skewed Indian PAN cards available with IN-D. The database has 50 Indian PAN card images. These images are augmented to generate 150 images, with 50 images for each of the +90, -90 and 180 degree skew cases. The Google Vision API is used as the OCR engine for finding the coordinates of the keyword in our study. The research employs the Numpy, Pandas and OpenCV open-source libraries for Python. The accuracy of the reported model is 95.33%, which surpasses the accuracy of all the models available in the literature.
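
Once the skew class (+90, -90 or 180 degrees) has been inferred from the keyword coordinates, the correction itself is a single rotation. The sketch below shows that final step with OpenCV; the keyword detection and coordinate-geometry logic of the paper is not reproduced, the file name is hypothetical, and the mapping assumes that a positive skew means the document content was rotated clockwise.

    # Minimal sketch: correct a detected skew of +90, -90 or 180 degrees.
    # Assumption: positive skew = content rotated clockwise, so we rotate back
    # counterclockwise; adjust the mapping if the sign convention differs.
    import cv2

    def deskew(image, skew_degrees):
        if skew_degrees == 90:
            return cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
        if skew_degrees == -90:
            return cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
        if skew_degrees == 180:
            return cv2.rotate(image, cv2.ROTATE_180)
        return image  # already upright

    img = cv2.imread("pan_card.jpg")              # hypothetical input image
    upright = deskew(img, 90)
    cv2.imwrite("pan_card_deskewed.jpg", upright)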
Category: Artificial Intelligence

[1259] viXra:2203.0150 [pdf] submitted on 2022-03-25 20:50:57

Edited Image Detection to Prevent Forgery using CNN: A Review

Authors: Shuvra Smaran Das
Comments: 11 Pages. 3 figures (Corrections made by viXra Admin to conform with the requirements on the Submission Form)

Using Artificial Intelligence (AI) known as Deepfakes, clothes are digitally stripped from photographs of users and the images are shared on social media. Deepfakes are computer-generated images and videos, often convincing, based on an existing template. Victims are already afraid and worried about these things. Moreover, the images are so realistic that most users believe they are authentic. These things can happen to us too. However, we cannot stop using these social platforms, because they are the only way to communicate with others and continue our daily work online. These types of crimes should be strictly prevented, and users should be able to tell which images are real and which are not, so that victims and users can be assured of the truth about this fraud. Here, we review work on image forgery detection, covering both original and manipulated images, to inform users about image forgery so that they will no longer believe in these fake images.
Category: Artificial Intelligence

[1258] viXra:2203.0145 [pdf] submitted on 2022-03-24 23:11:44

Learned Data Augmentation with VQ-VAE

Authors: Arnav Dantuluri
Comments: 9 Pages. (Author's name added to article as required by the rules of viXra.org)

In this paper, I propose a simple and easily reproducible method to enhance and extend datasets from as few as 1,000 images to as many as 10,000, or in essence as many as the user requires. My approach combines proper latent-space modeling of a VAE with a modification process called vector quantization. With these techniques, along with enhanced model parameterization and training, a simple convolutional neural network can achieve accuracies of up to 93% on synthetic data, which proves extremely helpful, especially when handling datasets with very few images.
Category: Artificial Intelligence

[1257] viXra:2203.0144 [pdf] submitted on 2022-03-24 00:11:24

Discovery of Theory of Everything by Natural Intelligence

Authors: Deokjin Kim
Comments: 3 Pages.

In previous studies, the calculation of everything in physics through a logarithmic elliptic equation was proposed. The calculation is so simple that only high school physics and high school mathematics are needed. Given the author's calculation methodology as a precondition, artificial intelligence should be able to discover the theory of everything in only one day. We propose to develop this artificial intelligence and to call it natural intelligence.
Category: Artificial Intelligence

[1256] viXra:2203.0004 [pdf] submitted on 2022-03-01 20:24:27

Literature Review of Recent Advancements in Hypergraph Learning as it Relates to Optimizer

Authors: Siddhant Kumar Jha, Zhi Hua Zhou
Comments: 8 Pages.

Hypergraphs are a generalization of a graph in which an edge can join any number of vertices; in contrast, in an ordinary graph an edge connects exactly two vertices. The applications of hypergraphs range from analogical explanations, such as social networks, to hard generalities, as in cooperative game theory, where they are known as simple games. More abstract applications include localized and global optimization of radial functions in computational geometry, and the optimizers generated could also be used to solve linear scheduling problems. The theoretical approach developed under these categories can be used in embedding, clustering and classification, which can also be addressed through the application of spectral hypergraph clustering.
Category: Artificial Intelligence

[1255] viXra:2202.0162 [pdf] submitted on 2022-02-25 19:21:37

Hypergraph Deployment with Self-abrasive Deep Neural Networks and CSGANS

Authors: Siddhant Kumar Jha
Comments: 6 Pages.

The objective of this study is to develop a definitive meta-analysis of recent developments in the application of hypergraph theory to deep learning and, more widely, machine learning. The applications of this particular technique may range from simple classification tuning to more advanced abstract GANs in the field of regenerative graphical systems and computer vision in general. In our experiments, we use a novel random walk procedure and show that our model achieves, and in most cases surpasses, state-of-the-art performance on benchmark data sets. Additionally, we compare our classification performance with traditional statistical techniques, ML algorithms, and both classical and new deep learning algorithms.
Category: Artificial Intelligence

[1254] viXra:2202.0116 [pdf] submitted on 2022-02-18 16:47:41

Out of Distribution Detection with Dlsgan

Authors: Jeongik Cho
Comments: 3 Pages.

DLSGAN proposed a learning-based GAN inversion method with maximum likelihood estimation. In this paper, I propose a method for out-of-distribution detection using the encoder of DLSGAN. Simply, the log-likelihood of the predicted latent code of input data can be used for out-of-distribution (OOD) detection.
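
The abstract states that the log-likelihood of the encoder's predicted latent code is used as the OOD score. The sketch below illustrates that scoring step under the assumption of an isotropic standard-normal latent prior; the encoder outputs, dimensionality, and threshold are hypothetical, and DLSGAN's own latent distribution may differ.

    # Minimal sketch: score inputs by the log-likelihood of their predicted latent
    # code under a standard normal prior (prior choice is an assumption here).
    import numpy as np

    def latent_log_likelihood(z):
        # Log density of z under an isotropic standard normal, summed over dims.
        return float(-0.5 * np.sum(z ** 2 + np.log(2 * np.pi)))

    # Hypothetical encoder outputs for an in-distribution and an OOD input.
    z_in = np.random.normal(0.0, 1.0, size=128)     # resembles the prior
    z_out = np.random.normal(0.0, 4.0, size=128)    # far from the prior

    threshold = -250.0                              # would be tuned on validation data
    for name, z in [("in-dist", z_in), ("OOD", z_out)]:
        score = latent_log_likelihood(z)
        print(name, round(score, 1), "-> OOD" if score < threshold else "-> in-distribution")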
Category: Artificial Intelligence

[1253] viXra:2202.0106 [pdf] submitted on 2022-02-15 09:41:46

Bayesian Network and Information Theory.

Authors: Ait-Taleb Nabil
Comments: 25 Pages.

In this paper, we present the BIC score expressed as a function of the Bayesian network's entropy. We then use this BIC score to learn a Bayesian network from an example data frame.
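
For orientation, the sketch below computes the standard BIC score of a candidate Bayesian network structure from a discrete data frame (log-likelihood minus a complexity penalty); it does not reproduce the entropy-based rewriting derived in the paper, and the toy data and structure are assumptions.

    # Minimal sketch: textbook BIC score of a discrete Bayesian network structure
    # given a pandas data frame.
    import numpy as np
    import pandas as pd

    def bic_score(df, structure):
        # structure: dict mapping each node to the list of its parent columns.
        n = len(df)
        score = 0.0
        for node, parents in structure.items():
            r = df[node].nunique()                      # number of states of the node
            if parents:
                groups = df.groupby(parents)[node]
                q = len(groups)                         # parent configurations seen
            else:
                groups = [(None, df[node])]
                q = 1
            for _, col in groups:
                counts = col.value_counts().to_numpy(dtype=float)
                score += np.sum(counts * np.log(counts / counts.sum()))  # log-likelihood
            score -= 0.5 * np.log(n) * q * (r - 1)      # BIC complexity penalty
        return score

    df = pd.DataFrame({"A": [0, 0, 1, 1, 1, 0], "B": [0, 1, 1, 1, 0, 0]})
    print(bic_score(df, {"A": [], "B": ["A"]}))         # score of the structure A -> B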
Category: Artificial Intelligence

[1252] viXra:2202.0082 [pdf] submitted on 2022-02-14 01:43:59

Evolving TSP Heuristics using Multi Expression Programming

Authors: Mihai Oltean, Dumitru Dumitrescu
Comments: 10 Pages. International Conference on Computational Sciences, ICCS'04, Edited by M. Bubak, G. D. van Albada, P. Sloot, and J. Dongarra, Vol. II, pp. 670-673, 6-9 June, Krakow, Poland, Springer-Verlag, Berlin, 2004.

Multi Expression Programming (MEP) is an evolutionary technique that may be used for solving computationally difficult problems. MEP uses a linear solution representation. Each MEP individual is a string encoding complex expressions (computer programs). An MEP individual may encode multiple solutions of the current problem. In this paper, MEP is used for evolving a Traveling Salesman Problem (TSP) heuristic for graphs satisfying the triangle inequality. The evolved MEP heuristic is compared with the Nearest Neighbor heuristic (NN) and the Minimum Spanning Tree heuristic (MST) on some difficult problems in TSPLIB. For most of the considered problems the evolved MEP heuristic outperforms NN and MST. The results emphasize that the evolved MEP heuristic is a powerful tool for solving difficult TSP instances.
Category: Artificial Intelligence

[1251] viXra:2202.0081 [pdf] submitted on 2022-02-14 01:46:16

Evolving Digital Circuits using Multi Expression Programming

Authors: Mihai Oltean, Crina Grosan
Comments: NASA/DoD Conference on Evolvable Hardware, 24-26 June, Seattle, Edited by R. Zebulum (et. al), pages 87-90, IEEE Press, NJ, 2004

Multi Expression Programming (MEP) is a Genetic Programming (GP) variant that uses linear chromosomes for solution encoding. A unique MEP feature is its ability of encoding multiple solutions of a problem in a single chromosome. These solutions are handled in the same time complexity as other techniques that encode a single solution in a chromosome. In this paper, MEP is used for evolving digital circuits. MEP is compared to Cartesian Genetic Programming (CGP) – a technique widely used for evolving digital circuits – by using several well-known problems in the field of electronic circuit design. Numerical experiments show that MEP outperforms CGP for the considered test problems.
Category: Artificial Intelligence

[1250] viXra:2202.0080 [pdf] submitted on 2022-02-14 01:49:14

Solving Even-Parity Problems using Multi Expression Programming

Authors: Mihai Oltean
Comments: 4 Pages. Proceedings of the 5th International Workshop on Frontiers in Evolutionary Algorithms, The 7th Joint Conference on Information Sciences, September 26-30, 2003, Research Triangle Park, North Carolina, Edited by Ken Chen (et. al), pp. 315-318, 2003.

In this paper, the Multi Expression Programming (MEP) technique is used for solving even-parity problems. Numerical experiments show that MEP outperforms Genetic Programming (GP) by more than one order of magnitude for the considered test cases.
Category: Artificial Intelligence

[1249] viXra:2202.0079 [pdf] submitted on 2022-02-14 01:51:37

Improving Multi Expression Programming: an Ascending Trail from Sea-level Even-3-parity Problem to Alpine Even-18-Parity Problem

Authors: Mihai Oltean
Comments: 36 Pages. chapter 10, Evolvable Machines: Theory and Applications, Springer-Verlag, edited by Nadia Nedjah (et al.), pp. 229-255, 2004

Multi Expression Programming is a Genetic Programming variant that uses a linear representation of individuals. A unique feature of Multi Expression Programming is its ability of storing multiple solutions of a problem in a single chromosome. In this paper, we propose and use several techniques for improving the search performed by Multi Expression Programming. Some of the most important improvements are Automatically Defined Functions and Sub-Symbolic node representation. Several experiments with Multi Expression Programming are performed in this paper. Numerical results show that Multi Expression Programming performs very well for the considered test problems.
Category: Artificial Intelligence

[1248] viXra:2201.0188 [pdf] submitted on 2022-01-26 03:38:42

Preliminary Concept of General Intelligent Network (Gin) for Brain-Like Intelligence

Authors: Chengkai Guo, Kai Yang
Comments: 9 Pages.

A preliminary concept of AGI for brain-like intelligence is presented in this paper. The solution has two main aspects. Firstly, we combine information entropy and a generative network (GAN-like) model to propose a paradigm called the General Intelligent Network (GIN). In the GIN network, the original multimodal information can be encoded as low-information-entropy hidden state representations (HPPs), which can be reverse-parsed by the contextually relevant generative network into observable information. Secondly, we propose a generalized machine learning operating system (GML system), which includes an observable processor (AOP), an HPP storage system, and a multimodal implicit sensing/execution network. Our code will be released at https://github.com/ggsonic/GIN
Category: Artificial Intelligence

[1247] viXra:2201.0177 [pdf] submitted on 2022-01-25 19:40:24

Implementation of Sentiment Analysis and Classification of Tweets Using Machine Learning

Authors: Manish Bhargav, Satish Kumar Alaria, Manish Kumar Mukhija
Comments: 10 Pages.

Twitter, a micro-blogging platform, has turned into a rich source of dynamic data. People post on a wide range of topics, constantly communicating their assumptions, discussing current concerns, and reviewing the things they use in their daily lives on their Twitter walls. The main goal is to assess the sentiment expressed in tweets using various machine learning algorithms that classify tweets as positive or negative; if a tweet contains both negative and positive elements, the more dominant sentiment should be chosen as the final label. Emojis, usernames, and hashtags in tweets must be handled and translated into a standard form, and elements such as bigrams and unigrams must also be dealt with. Rather than relying on a single model, which did not give high accuracy, a model with high precision is selected. Organizers of products have begun to investigate these micro-blogs in order to get a general sense of how their products are received, and they frequently monitor and reply to customer comments on these sites. One challenge is coming up with new ways to recognize and summarize overall sentiment. In recent years many people have joined social platforms such as Facebook, Twitter, and Instagram, and most of them use social media to convey their feelings, ideas, or opinions about objects, places, or people. Twitter is thus a massive repository of public opinion on a variety of people, offers, businesses, and products, and evaluating these public opinions is known as sentiment analysis. Sentiment analysis on Twitter gives valuable context to what is being said there. The wide availability of internet reviews and social media posts provides critical feedback to organizations, helping them improve expert decisions and steer their marketing tactics towards user preferences. As a result, social media plays a key role in shaping the public's perception of services or products. This study highlights the various techniques used for classifying product reviews (which may be in the form of tweets) to determine whether mass behaviour is positive, negative, or neutral, as an analysis of the product market. The data used here comes from our Twitter product reviews, which were used to categorize opinions as satisfying.
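
A minimal illustrative baseline for the kind of tweet sentiment classifier the abstract discusses is sketched below: handles, hashtags, and links are normalized, then TF-IDF features feed a logistic regression. The preprocessing rules and the chosen classifier are assumptions, not the specific models compared in the paper.

    # Minimal sketch: normalize tweets, then classify sentiment with TF-IDF +
    # logistic regression (illustrative baseline only).
    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def normalize(tweet):
        tweet = re.sub(r"@\w+", "USER", tweet)       # usernames -> placeholder
        tweet = re.sub(r"#(\w+)", r"\1", tweet)      # strip '#' from hashtags
        tweet = re.sub(r"http\S+", "URL", tweet)     # links -> placeholder
        return tweet.lower()

    tweets = ["@shop I love this phone! #happy", "Worst service ever @shop",
              "Absolutely great experience", "Terrible, never again"]
    labels = [1, 0, 1, 0]                            # 1 = positive, 0 = negative

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit([normalize(t) for t in tweets], labels)
    print(clf.predict([normalize("I really love this!")]))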
Category: Artificial Intelligence

[1246] viXra:2201.0144 [pdf] submitted on 2022-01-22 09:08:57

Artificial Intelligence Definition, Realization and Consequences

Authors: Dimiter Dobrev
Comments: 119 Pages. Bulgarian language

Artificial Intelligence - What is it, how to do it and what will we do after we do it? This is a PhD thesis.
Category: Artificial Intelligence

[1245] viXra:2201.0094 [pdf] submitted on 2022-01-16 15:17:12

Cardiovascular Disease Diagnosis using Deep Neural Networks

Authors: Jai Sharma, Milind Maiti, Christopher Sun
Comments: 17 Pages.

Cardiovascular disease causes 25% of deaths in America (Heart Disease Facts). In particular, misdiagnosis of cardiovascular disease results in 11,000 American deaths annually, emphasizing the increasing need for Artificial Intelligence to improve diagnosis. The goal of our research was to determine the probability that a given patient has cardiovascular disease using 11 easily accessible objective, examination, and subjective features from a data set of 70,000 people. To do this, we compared various Machine Learning and Deep Learning models. Exploratory Data Analysis (EDA) identified that blood pressure, cholesterol, and age were most correlated with an elevated risk of contracting heart disease. Principal Component Analysis (PCA) was employed to visualize the 11-dimensional data on a 2-D plane, and distinct aggregations in the data motivated the inference of specific cardiovascular conditions beyond the binary labels in the data set. To diagnose patients, several Machine Learning and Deep Learning models were trained on the data and compared using the metrics Binary Accuracy and F1 Score. The initial Deep Learning model was a shallow neural network with 1 hidden layer consisting of 8 hidden units. Further improvements, such as adding 5 hidden layers with 8 hidden units each and employing Mini-Batch Gradient Descent, Adam Optimization, and He initialization, were successful in decreasing training times. These models were coded without the use of Deep Learning frameworks such as TensorFlow. The final model, which achieved a Binary Accuracy of 74.2% and an F1 Score of 0.73, consisted of 6 hidden layers, each with 128 hidden units, and was built using the highly optimized Keras library. While current industrial models require hundreds of comprehensive features, this final model requires only basic inputs, allowing versatile applications in rural locations and third-world countries. Furthermore, the model can forecast demand for medical equipment, improve diagnosis procedures, and provide detailed personalized health statistics.
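
The abstract states that the final model has 6 hidden layers of 128 units, takes 11 input features, outputs a binary diagnosis, and was built with Keras. A minimal sketch consistent with that description is given below; the activation functions, optimizer, and training settings are assumptions for illustration.

    # Minimal sketch of a Keras model matching the abstract's description:
    # 11 inputs, 6 hidden layers of 128 units, sigmoid binary output.
    import tensorflow as tf

    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(11,))]
        + [tf.keras.layers.Dense(128, activation="relu") for _ in range(6)]
        + [tf.keras.layers.Dense(1, activation="sigmoid")]
    )
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["binary_accuracy"])
    model.summary()
    # model.fit(X_train, y_train, validation_split=0.1, epochs=50, batch_size=256)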
Category: Artificial Intelligence

[1244] viXra:2112.0155 [pdf] submitted on 2021-12-29 02:21:06

Comparison of Various Models for Stock Prediction

Authors: Jonathan Lee
Comments: 4 Pages. Thanks

Due to the high volatility during the COVID-19 pandemic, interest in stock investment has become a focus. It is also said that attention is shifting back from the cryptocurrency market to the domestic stock market. In this situation, we looked at which model could more accurately predict the closing price.
Category: Artificial Intelligence

[1243] viXra:2112.0135 [pdf] submitted on 2021-12-26 21:08:14

Directed Dependency Graph Obtained from a Correlation Matrix by the Highest Successive Conditionings Method

Authors: Ait-Taleb Nabil
Comments: 22 Pages.

In this paper we propose a directed dependency graph obtained from a correlation matrix. This graph includes probabilistic causal sub-models for each node, modeled by conditioning percentages. The directed dependency graph is obtained using the highest successive conditionings method with a conditioning percentage value to be exceeded.
Category: Artificial Intelligence

[1242] viXra:2112.0130 [pdf] submitted on 2021-12-24 04:23:06

The SP Challenge: that the SP System is More Promising as a Foundation for the Development of Human-Level Broad ai Than Any Alternative

Authors: J Gerard Wolff
Comments: 44 Pages.

The "SP Challenge" is the deliberately provocative theme of this paper: that the "SP System" (SPS), meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model", is more promising as a foundation for the development of human-level broad AI, aka 'artificial general intelligence' (AGI), than any alternative. In that connection, the main strengths of the SPS are: 1) The adoption of a top-down, breadth-first research strategy with wide scope; 2) Recognition of the importance of information compression (IC) in human learning, perception, and cognition -- and, correspondingly, a central role for IC in the SPS; 3) The working hypothesis that all kinds of IC may be understood in terms of the matching and unification of patterns (ICMUP); 4) A resolution of the apparent paradox that IC may achieve decompression as well as compression. 5) The powerful concept of SP-multiple-alignment, a generalisation of six other variants of ICMUP; 6) the clear potential of the SPS to solve 19 problems in AI research; 7) Strengths and potential of the SPS in modelling several aspects of intelligence, including several kinds of probabilistic reasoning, versatility in the representation and processing of AI-related knowledge, and the seamless integration of diverse aspects of intelligence, and diverse kinds of knowledge, in any combination; 8) Several other potential benefits and applications of the SPS; 9) In "SP-Neural", abstract concepts in the SPS may be mapped into putative structures expressed in terms of neurons and their interconnections and intercommunications; 10) The concept of ICMUP provides an entirely novel perspective on the foundations of mathematics; 11) How to make generalisations from data, including the correction of over- and under-generalisations, and how to reduce or eliminate errors in data. There is discussion of how the SPS compares with some other potential candidates for the SP-Challenge. And there is an outline of possible future directions for the research.
Category: Artificial Intelligence

[1241] viXra:2112.0126 [pdf] submitted on 2021-12-23 04:31:07

Pcarst: a Method of Weakening Conflict Evidence Based on Principal Component Analysis and Relatively Similar Transformation

Authors: Xuan Zhao, Huizi Cui, Zilong Xiao, Bingyi Kang
Comments: 26 Pages.

How to deal with conflict is a significant issue in Dempster-Shafer evidence theory (DST). With the Dempster combination rule, conflicts can produce counter-intuitive results. Therefore, many effective conflict-handling methods have been presented. This paper proposes a new framework for reducing conflict based on principal component analysis and relatively similar transformation (PCARST), which can better reduce the impact of conflicting evidence on the results and produces more reasonable results than existing methods. The main characteristic features of the BPAs are maintained while the conflicting evidence is regarded as a noise signal to be weakened. A numerical example is used to illustrate the effectiveness of the proposed method. Results show that a higher belief degree for the correct proposition is obtained compared with previous methods.
Category: Artificial Intelligence

[1240] viXra:2112.0122 [pdf] submitted on 2021-12-22 03:25:27

Feedforward Neural Networks: Efficiency and Performance of Backpropagation and Evolutionary Algorithms

Authors: Kasper van Maasdam
Comments: 31 Pages.

Artificial neural networks are important in everyday life and are becoming more widespread. For this reason, it is crucial that they are understood and tested. This paper tests and compares two training methods: reinforcement learning with backpropagation, and an evolutionary method. The hypothesis is that training with backpropagation and reinforcement learning is more efficient at teaching a neural network to play a game than training with the evolutionary algorithm, but that the model trained with backpropagation and reinforcement learning will perform worse than a model trained with the evolutionary algorithm. To examine the hypothesis, a feedforward neural network and how it works must first be explained.

Neural networks are systems inspired by the biological brain which enable a computer to predict, model, classify, and perform many other tasks, all by learning from a set of training data to find general relations that can be applied to unseen data. A neural network model is essentially a function with potentially thousands of parameters. Just like any other function, input values are provided and the output is calculated from them. In a feedforward neural network, this process is called feedforward.

The process of feedforward is meaningless with a model that has not yet been configured to do anything. A neural network must first be taught to perform a certain task. This is what is accomplished with machine learning. Backpropagation is an example of a machine learning method. For backpropagation two things are required: the input and the corresponding output. Backpropagation will adjust the parameters of a model so the next time the same input is provided, the output will be closer to the desired output. This is called optimisation.

Reinforcement learning is a way to teach a neural network by giving it positive reinforcement when it does something good and negative reinforcement when it does something bad. This is used when no desired output is known so backpropagation cannot directly be applied.

An evolutionary algorithm is much more intuitive than backpropagation. It is the imitation of natural selection in biology, but with self-determined factors deciding the fitness of a model. When training a neural network with an evolutionary algorithm, a large group of random models will be generated, all performing the same task. Some models, however, will be better suited for this task than others. How well they are suited to their environment is their fitness. This will be the determining factor of who survives and can therefore reproduce and create mutated offspring. This process is repeated as many times as required to reach the desired performance.
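
As a concrete illustration of the evolutionary approach just described, the sketch below evolves the weight vectors of a tiny feedforward network: a population is scored by a fitness function, the fittest survive, and mutated copies form the next generation. The game environment and fitness used in the paper are replaced here by a simple stand-in task (fitting XOR), and the population size and mutation scale are assumptions.

    # Toy sketch: evolve the weights of a small feedforward network with
    # selection and mutation (stand-in task; not the paper's experiment).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    def forward(weights, x):
        w1 = weights[:6].reshape(2, 3)          # 2 inputs -> 3 hidden units
        w2 = weights[6:].reshape(3, 1)          # 3 hidden -> 1 output
        h = np.tanh(x @ w1)
        return 1.0 / (1.0 + np.exp(-(h @ w2)))  # sigmoid output

    def fitness(weights):
        pred = forward(weights, X).ravel()
        return -np.mean((pred - y) ** 2)        # higher is better

    population = rng.normal(size=(50, 9))
    for generation in range(200):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-10:]]            # keep the fittest
        offspring = np.repeat(parents, 5, axis=0)
        offspring += rng.normal(scale=0.1, size=offspring.shape)  # mutate
        population = offspring

    best = population[np.argmax([fitness(ind) for ind in population])]
    print("predictions:", forward(best, X).ravel().round(2))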

The hypothesis of this paper has been proven wrong. Neural networks trained with an evolutionary algorithm do end up performing at a higher level than models trained with reinforcement learning and backpropagation. However, neural networks trained with an evolutionary algorithm are also more efficient, with regard not only to the number of cycles needed to reach the same performance but also to the time required.


Category: Artificial Intelligence

[1239] viXra:2112.0097 [pdf] submitted on 2021-12-18 17:03:00

Phish: A Novel Hyper-Optimizable Activation Function

Authors: Philip Naveen
Comments: 8 Pages. Written at Godwin High School

Deep-learning models estimate values using backpropagation. The activation function within hidden layers is a critical component for minimizing loss in deep neural networks. Rectified Linear (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU under specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = xTanH(GELU(x)), with no discontinuities apparent in the differentiated graph on the domain observed. Four generalized networks were constructed using Phish, Swish, Sigmoid, and TanH, with SoftMax as the output function. Using images from the MNIST and CIFAR-10 databanks, these networks were trained to minimize sparse categorical cross-entropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers for the probability values. Statistical tests support the research hypothesis that Phish could outperform other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
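
Since the definition f(x) = x * tanh(GELU(x)) is given explicitly in the abstract, it can be written down directly; the sketch below uses the exact (erf-based) form of GELU, which is an assumption where the paper might use an approximation.

    # Direct implementation of the Phish activation defined in the abstract,
    # f(x) = x * tanh(GELU(x)), using the exact erf-based GELU.
    import numpy as np
    from scipy.special import erf

    def gelu(x):
        return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

    def phish(x):
        return x * np.tanh(gelu(x))

    x = np.linspace(-4, 4, 9)
    print(np.round(phish(x), 4))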
Category: Artificial Intelligence

[1238] viXra:2112.0095 [pdf] submitted on 2021-12-17 20:54:35

Triplere: Knowledge Graph Embeddings Via Triple Relation Vectors

Authors: Long Yu, ZhiCong Luo, Deng Lin, HuanYong Liu, YaFeng Deng
Comments: 6 Pages.

Knowledge representation is a classic problem in knowledge graphs. Distance-based models have made great progress. The most significant recent developments in this direction are RotatE and PairRE, which express relationships as projections of nodes, whereas the TransX series of models (TransE, TransH, TransR) express relationships as translations of nodes. To date, the combination of projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method which models relationships by both projections and translations. Compared with the original distance-based knowledge representation models, results on the ogbl-wikikg2 dataset are significantly improved.
Category: Artificial Intelligence

[1237] viXra:2112.0079 [pdf] submitted on 2021-12-15 05:17:36

The Perfect Way to Generate “Good” Gel Electrophoresis Images

Authors: Modong Tan
Comments: 5 Pages. Break the biological image authenticity

Recently there have been many deep learning based image generation methods, but none of them was designed for images related to biological experiments. In this paper, we propose the concept of an efficient method for agarose gel electrophoresis image generation in order to skip time-consuming polymerase chain reaction (PCR) and gel electrophoresis experiments. Based on deep convolutional generative adversarial networks (DCGAN), we successfully generated "good" gel electrophoresis images, thereby undermining the evidential value of such images in biological papers. Our results reveal a vulnerability in the evidential value of traditional gel electrophoresis images.
Category: Artificial Intelligence

[1236] viXra:2112.0012 [pdf] submitted on 2021-12-02 03:27:08

A Traffic Prediction Using Machine Learning: Literature Survey

Authors: Ji Yoon Kim
Comments: 4 Pages.

Accurate calculation of commute costs is crucial for the government to decide whether a housing subsidy should be provided to disadvantaged workers, or to create new ways of reducing the commute costs of disadvantaged workers by offering mass transit. Many studies have already shown that machine learning can predict traffic and commute times. Although different machine learning algorithms can be used, this study mainly uses Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models, which are based on the Recurrent Neural Network (RNN) architecture.
Category: Artificial Intelligence

[1235] viXra:2111.0172 [pdf] submitted on 2021-11-30 05:08:24

New Evolutionary Computation Models and their Applications to Machine Learning

Authors: Mihai Oltean
Comments: 170 Pages.

Automatic Programming is one of the most important areas of computer science research today. Hardware speed and capability have increased exponentially, but the software is years behind. The demand for software has also increased significantly, but it is still written in the old fashion: by humans. There are multiple problems when the work is done by humans: cost, time, quality. It is costly to pay humans, it is hard to keep them satisfied for a long time, it takes a lot of time to teach and train them, and the quality of their output is in most cases low (in software, mostly due to bugs). The real advances in human civilization appeared during the industrial revolutions. Before the first revolution, most people worked in agriculture; today, only a few percent of people work in this field. A similar revolution must appear in the field of computer programming, otherwise we will have as many people working in this field as once worked in agriculture. How do people know how to write computer programs? Very simply: by learning. Can we do the same for software? Can we get software to learn how to write software? It seems that this is possible (to some degree), and the term for it is Machine Learning. It was first coined in 1959 by the first person who made a computer perform a serious learning task, namely Arthur Samuel. However, things are not as easy as they are for humans (well, truth be told, for some humans it is impossible to learn how to write software). So far we do not have software that can learn perfectly how to write software. We have some particular cases where programs do better than humans, but the examples are sporadic at best. Learning from experience is difficult for computer programs. Instead of trying to simulate how humans teach humans to write computer programs, we can simulate nature.
Category: Artificial Intelligence

[1234] viXra:2111.0171 [pdf] submitted on 2021-11-30 05:11:44

Multi Expression Programming

Authors: Mihai Oltean, D. Dumitrescu
Comments: 28 Pages. Technical Report, Babes-Bolyai Univ. 2002

Multi Expression Programming (MEP) is a new evolutionary paradigm intended for solving computationally difficult problems. MEP individuals are linear entities that encode complex computer programs. MEP chromosomes are represented in the same way that C or Pascal compilers translate mathematical expressions into machine code. MEP is used for solving some difficult problems such as symbolic regression and game strategy discovery. MEP is compared with Gene Expression Programming (GEP) and Cartesian Genetic Programming (CGP) using several well-known test problems. For the considered problems MEP outperforms GEP and CGP; for these examples MEP is two orders of magnitude better than CGP.
Category: Artificial Intelligence

[1233] viXra:2111.0170 [pdf] submitted on 2021-11-30 06:38:09

Existence and Perception as the Basis of Agi

Authors: Victor V. Senkevich
Comments: 9 Pages.

As is known, AGI (Artificial General Intelligence), unlike AI, should operate with meanings. And that's what distinguishes it from AI. Any successful AI implementations (playing chess, unmanned driving, face recognition etc.) do not operate with the meanings of the processed objects in any way and do not recognize the meaning. But they don't need to. But for AGI, which emulates human thinking, this ability is crucial. Numerous attempts to define the concept of "meaning" have one very significant drawback - all such definitions are not strict and formalized, so they cannot be programmed. The meaning search procedure should use a formalized description of its existence and possible forms of its perception. For the practical implementation of AGI, it is necessary to develop such "ready-to-code" descriptions in the context of their use for processing the related cognitive concepts of "meaning" and "knowledge". An attempt to formalize the definition of such concepts is made in this article.
Category: Artificial Intelligence

[1232] viXra:2111.0169 [pdf] submitted on 2021-11-30 07:15:04

Evolving Evolutionary Algorithms using Multi Expression Programming

Authors: Mihai Oltean, Crina Grosan
Comments: 8 Pages. The 7th European Conference on Artificial Life, September 14-17, 2003, Dortmund, Edited by W. Banzhaf (et al), LNAI 2801, pp. 651-658, Springer-Verlag, Berlin, 2003.

Finding the optimal parameter setting (i.e. the optimal population size, the optimal mutation probability, the optimal evolutionary model, etc.) for an Evolutionary Algorithm (EA) is a difficult task. Instead of evolving only the parameters of the algorithm, we evolve an entire EA capable of solving a particular problem. For this purpose the Multi Expression Programming (MEP) technique is used. Each MEP chromosome encodes multiple EAs. A nongenerational EA for function optimization is evolved in this paper. Numerical experiments show the effectiveness of this approach.
Category: Artificial Intelligence

[1231] viXra:2111.0161 [pdf] submitted on 2021-11-29 20:00:15

ANN Synthesis and Optimization of Electronically Scanned Coupled Planar Periodic and Aperiodic Antenna Arrays Modeled by the MoM-GEC Approach

Authors: B. Hamdi, A. Nouainia, T. Aguili, H. Baudrand
Comments: 6 Pages.

This paper proposes a new formulation relying on the method of moments combined with the equivalent circuit (MoM-GEC) to study a beamforming application for coupled periodic and quasi-periodic planar antenna arrays. Numerous voltage designs are used to show the adequacy and reliability of the proposed approach. The radiators are viewed as planar dipoles, and consequently mutual coupling effects are considered. The recommended array shows a noticeable improvement over existing structures in terms of size, 3-D scanning, directivity, SLL reduction, and HPBW. The results verify that multilayer feed-forward neural networks are robust and can handle complex antenna problems. Moreover, an artificial neural network (ANN) can quickly produce optimization and synthesis results by using generalization with an early-stopping method. Significant gains in running time and memory usage are obtained by employing this technique for improving generalization (named early stopping). Simulations are carried out using MATLAB, and several simulation examples are shown to validate this work.
Category: Artificial Intelligence

[1230] viXra:2111.0080 [pdf] submitted on 2021-11-16 13:05:33

Discriminator Variance Regularization for Wasserstein GAN

Authors: Jeongik Cho
Comments: 4 Pages.

In Wasserstein GAN, it is important to regularize the discriminator so that it does not have a large Lipschitz constant. In this paper, I introduce discriminator variance regularization, which regularizes the discriminator of Wasserstein GAN to have a small Lipschitz constant. Discriminator variance regularization simply regularizes the variance of the discriminator's output to be small when the input comes from the real data distribution or the generated data distribution. Intuitively, a low variance of the discriminator output implies that the discriminator is more likely to have a low Lipschitz constant. Discriminator variance regularization does not explicitly regularize the Lipschitz constant of the discriminator through differentiation of the discriminator, but it lowers the probability that the Lipschitz constant of the discriminator is high. Discriminator variance regularization is used in Wasserstein GAN together with R1 regularization, which suppresses oscillation during GAN training. Discriminator variance regularization requires very little additional computation.
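
A minimal sketch of the idea as described in the abstract is given below: the variance of the discriminator's outputs on real and on generated batches is added as a penalty to the usual Wasserstein loss. The penalty weight and the way it is combined with the R1 term are assumptions; the random tensors stand in for actual discriminator outputs.

    # Minimal sketch: WGAN discriminator loss plus a penalty on the variance of
    # the discriminator outputs for real and generated batches (weight assumed).
    import tensorflow as tf

    def wgan_d_loss_with_variance_reg(d_real, d_fake, reg_weight=1.0):
        # d_real, d_fake: discriminator outputs on real and generated batches.
        wasserstein = tf.reduce_mean(d_fake) - tf.reduce_mean(d_real)
        variance_penalty = (tf.math.reduce_variance(d_real)
                            + tf.math.reduce_variance(d_fake))
        return wasserstein + reg_weight * variance_penalty

    # Toy usage with stand-in "discriminator outputs".
    d_real = tf.random.normal([64, 1], mean=1.0, stddev=0.3)
    d_fake = tf.random.normal([64, 1], mean=-1.0, stddev=0.3)
    print(float(wgan_d_loss_with_variance_reg(d_real, d_fake)))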
Category: Artificial Intelligence

[1229] viXra:2111.0069 [pdf] submitted on 2021-11-15 19:53:00

A Modified Belief Functions Distance Measure for Orderable Set

Authors: Xingyue Yang, Xuan Zhao, Bingyi Kang
Comments: 23 Pages.

This paper proposes a new method for measuring the distance between conflicting ordered sets, quantifying both the similarity between focal elements and their size. The method can effectively measure the conflict of belief functions on an ordered set without saturating when the focal elements do not overlap. It is proven that the method satisfies the properties of a distance. Examples from engineering budgeting and sensors show that the distance can effectively measure the conflict between ordered sets, and comparison with existing methods shows that the proposed distance reflects the information of ordered sets more comprehensively and that the resulting conflict metric between ordered sets is more robust and accurate.
Category: Artificial Intelligence

[1228] viXra:2111.0065 [pdf] submitted on 2021-11-13 09:37:33

Robotic Autonomy: A Survey

Authors: Bora King
Comments: 7 Pages.

Robotic autonomy is key to the expansion of robotic applications. This paper reviews the success of robotic autonomy in industrial applications, as well as the requirements and challenges of expanding robotic autonomy to applications that need it, such as education, medical service, and home service. Through these discussions, the paper draws the conclusion that robotic intelligence is the bottleneck for the broad application of robotic technology.
Category: Artificial Intelligence

[1227] viXra:2111.0060 [pdf] submitted on 2021-11-14 14:57:39

Application of Xgboost to Time Series Forecasting by Taking Advantage of Its Powerful Forecasting Performance

Authors: Tatsuhiko Yamato
Comments: 7 Pages.

XGBoost has the best forecasting performance among non-deep-learning methods. However, it works well for interpolation problems and regression, but not for future forecasting of time series data, which requires extrapolation. It is difficult to avoid this tendency even if we add explanatory variables reflecting the background of the data. Possible explanatory variables include lags of a day or several days, months, days, days of the week, holidays, and so on. Indeed, increases or decreases in the data values due to these factors are quite plausible, so they can serve as explanatory variables. However, even with such features, the trend cannot be captured.
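
The feature construction the text suggests (lags plus calendar fields) can be sketched as below; the column names, lags, and model settings are illustrative assumptions, and, consistent with the point being made, such a model would still not extrapolate a trend.

    # Minimal sketch: lag and calendar features for an XGBoost regressor on a
    # daily series (illustrative features only).
    import pandas as pd
    from xgboost import XGBRegressor

    df = pd.DataFrame({
        "date": pd.date_range("2021-01-01", periods=200, freq="D"),
        "value": range(200),                      # hypothetical daily series
    })
    df["lag_1"] = df["value"].shift(1)
    df["lag_7"] = df["value"].shift(7)
    df["month"] = df["date"].dt.month
    df["dayofweek"] = df["date"].dt.dayofweek
    df["is_holiday"] = 0                          # placeholder holiday flag
    df = df.dropna()

    features = ["lag_1", "lag_7", "month", "dayofweek", "is_holiday"]
    train, test = df.iloc[:-30], df.iloc[-30:]

    model = XGBRegressor(n_estimators=200, max_depth=4)
    model.fit(train[features], train["value"])
    pred = model.predict(test[features])
    print(pred[:5])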
Category: Artificial Intelligence

[1226] viXra:2111.0035 [pdf] submitted on 2021-11-04 23:26:24

Bayesian Optimization for Category Space

Authors: Jun Jin
Comments: 2 Pages.

Hyperparameter optimization is widely used in AI. A hyperparameter is a value that controls the whole learning process but cannot itself be learned or tuned during training. Hyperparameters are very important because they greatly affect the learning result: a good hyperparameter set can lead to a much better result or much less training time, while a bad one will often end in a local optimum or even fail to converge. Hyperparameters can be of many different types; they can belong to the model itself (depth, node counts, etc.) or to the algorithm (learning rate, optimizer, etc.). Different models or algorithms usually need different hyperparameters, and even the same model or algorithm can use different hyperparameters to achieve better results. Hyperparameters thus appear in different parts of the training process, and some of them are categorical, meaning the parameter can only be chosen from a fixed range of options. This kind of parameter has particular properties, and for it we propose a general optimization method. Using this method, we turn the categorical problem into a real-valued search space to achieve better results.
Category: Artificial Intelligence

[1225] viXra:2111.0015 [pdf] submitted on 2021-11-02 20:44:50

A New Algorithm based on Extent Bit-array for Computing Formal Concepts

Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 12 Pages.

The emergence of Formal Concept Analysis (FCA) as a data analysis technique has increased the need for algorithms which can compute formal concepts quickly. The current efficient algorithms for FCA are variants of the Close-By-One (CbO) algorithm, such as In-Close2, In-Close3 and In-Close4, which are all based on horizontal storage of contexts. In this paper, based on the In-Close4 algorithm, a new algorithm based on vertical storage of contexts, called In-Close5, is proposed, which can significantly reduce both the time complexity and the space complexity of In-Close4. Technically, the new algorithm stores both the context and the extent of a concept as vertical bit-arrays, whereas in In-Close4 the context is stored only as a horizontal bit-array, which is very slow in finding the intersection of two extent sets. Experimental results demonstrate that the proposed algorithm is much more effective than In-Close4, and it also has a broader scope of applicability in computing formal concepts, as it can solve problems that cannot be solved by In-Close4.
Category: Artificial Intelligence

[1224] viXra:2111.0014 [pdf] submitted on 2021-11-02 20:46:18

Granule Description based on Compound Concepts

Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 17 Pages.

Concise granule descriptions for describable granules and approaching description methods for indescribable granules are challenging and important issues in granular computing. The concept with only common attributes has been frequently studied. To investigate the granules with some special needs, we propose two new types of compound concepts in this paper: bipolar concept and common-and-necessary concept. Based on the definitions of concept-forming operations, the logical formulas are derived for each of the following types of concepts: formal concept, three-way concept, object oriented concept, bipolar concept and common-and-necessary concept. Furthermore, by utilizing the logical relationship among various concepts, we have derived concise and unified equivalent conditions for describable granules and approaching description methods for indescribable granules for all five kinds of concepts.
Category: Artificial Intelligence

[1223] viXra:2110.0138 [pdf] submitted on 2021-10-23 19:28:00

Enhancing the Weakening of the Conflict Evidence Using Similarity Matrix and Dispersion of Similarities in Dempster-Shafer Evidence Theory

Authors: Yan Li, Chenchen Lin, Huizi Cui, Bingyi Kang
Comments: 46 Pages. [Corrections to title made by viXra Admin]

The classic Dempster combination rule may produce illogical results when combining highly conflicting evidence. How to deal with highly conflicting evidence and obtain a reasonable result is critical. Modifying the evidence according to the importance of each piece of evidence (e.g., via a similarity matrix) is one significant strategy. However, the dispersion of evidence similarity is rarely taken into consideration, even though it is also an important feature for distinguishing conflicting evidence from normal evidence. In this paper, a new method based on the similarity matrix and the dispersion of evidence similarity is proposed to evaluate the importance of evidence in Dempster-Shafer theory (DST). The proposed method better weakens the influence of conflicting evidence. The robustness of the proposed method is verified through a sensitivity analysis of changes in the degree of conflict and in the amount of credible evidence in DST. Some numerical examples are used to show the effectiveness of the proposed method.
Category: Artificial Intelligence

[1222] viXra:2110.0085 [pdf] submitted on 2021-10-17 15:51:55

AniVid: A Novel Anime Video Dataset with Applications in Animation

Authors: Kai Gangi
Comments: 5 Pages.

Automating steps of the animation production process using AI-based tools would ease the workload of Japanese animators. Although there have been recent advances in the automatic animation of still images, the majority of these models have been trained on human data and thus are tailored to images of humans. In this work, I propose a semi-automatic and scalable assembling pipeline to create a large-scale dataset containing clips of anime characters’ faces. Using this assembling strategy, I create AniVid, a novel anime video dataset consisting of 34,221 video clips. I then use a transfer learning approach to train a first order motion model (FOMM) on a portion of AniVid, which effectively animates still images of anime characters. Extensive experiments and quantitative results show that FOMM trained on AniVid outperforms other trained versions of FOMM when evaluated on my test set of anime videos.
Category: Artificial Intelligence

[1221] viXra:2110.0055 [pdf] submitted on 2021-10-12 09:24:46

Benchmarking of Lightweight Deep Learning Architectures for Skin Cancer Classification using ISIC 2017 Dataset

Authors: Abdurrahim Yilmaz, Mucahit Kalebasi, Yegor Samoylenko, Mehmet Erhan Guvenilir, Huseyin Uvet
Comments: 4 page for manuscript with 3 page supplementary that includes ROC curves of models.

Skin cancer is one of the deadliest types of cancer and is common worldwide. Recently, there has been a huge jump in the rate of people getting skin cancer. To support the growth of work in this area, the International Skin Imaging Collaboration (ISIC) was established and created an open dataset archive. In this study, images were taken from the ISIC 2017 Challenge. The skin cancer images were preprocessed and augmented. Deep learning models were then created by training on these images with a transfer learning and fine-tuning approach. Three different mobile deep learning models and three different batch size values were chosen, giving a total of nine models. Among these models, the NASNetMobile model with a batch size of 16 obtained the best result. The accuracy of this model is 82.00%, the precision is 81.77% and the F1 score is 0.8038. Our method is to benchmark mobile deep learning models which have few parameters and compare their results.
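
A minimal sketch of the transfer-learning setup named in the abstract is given below: an ImageNet-pretrained NASNetMobile backbone with a new binary classification head. The input size, head design, and training settings are assumptions for illustration, not the paper's exact configuration.

    # Minimal sketch: ImageNet-pretrained NASNetMobile backbone with a new
    # binary classification head for skin-lesion images (settings assumed).
    import tensorflow as tf

    base = tf.keras.applications.NASNetMobile(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                        # freeze backbone for the first stage

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # melanoma vs. benign
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=20, batch_size=16)
    # Fine-tuning stage: unfreeze the backbone and retrain with a lower learning rate.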
Category: Artificial Intelligence

[1220] viXra:2110.0036 [pdf] submitted on 2021-10-08 14:05:29

Directed Dependency Graph Obtained from a Continuous Data Matrix by the Highest Successive Conditionings Method.

Authors: Ait-Taleb Nabil
Comments: 29 Pages.

In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each node of the dependency graph we assign a random variable as well as a conditioning percentage linking the parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we choose the one given by the highest successive conditionings method.
Category: Artificial Intelligence

[1219] viXra:2110.0030 [pdf] submitted on 2021-10-07 21:49:52

Motion Detection and Tracking using Raspberry Pi

Authors: Saarang Srinivasan
Comments: 18 Pages. [Corrections made by viXra Admin to conform with scholarly norm]

The aim of this project is to detect motion in a video and follow it accordingly. The program uses background elimination and contour detection to find the moving objects in the video and to determine in which direction we must move in order to follow the motion; the camera is then moved in the direction of the motion to follow it.
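
A minimal sketch of such a pipeline with OpenCV is shown below: background subtraction, contour detection, and a left/right decision from the largest contour's position. The specific thresholds are assumptions, and the step of physically moving the camera (e.g. driving servos from the Raspberry Pi) is omitted.

    # Minimal sketch: background subtraction + contour detection, then decide
    # which way the camera would need to move to follow the motion.
    import cv2

    cap = cv2.VideoCapture(0)                     # or a video file path
    subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=40)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        moving = [c for c in contours if cv2.contourArea(c) > 500]
        if moving:
            x, y, w, h = cv2.boundingRect(max(moving, key=cv2.contourArea))
            center_x = x + w // 2
            direction = "left" if center_x < frame.shape[1] // 2 else "right"
            print("motion towards the", direction)   # here the camera would be moved
        cv2.imshow("motion mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()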
Category: Artificial Intelligence

[1218] viXra:2110.0026 [pdf] submitted on 2021-10-06 05:44:45

Bangalore House Price Prediction

Authors: Amey Thakur, Mega Satish
Comments: 4 pages, 4 figures, Volume 8, Issue 9, International Research Journal of Engineering and Technology (IRJET), 2021.

We propose to implement a house price prediction model of Bangalore, India. It’s a Machine Learning model which integrates Data Science and Web Development. We have deployed the app on the Heroku Cloud Application Platform. Housing prices fluctuate on a daily basis and are sometimes exaggerated rather than based on worth. The major focus of this project is on predicting home prices using genuine factors. Here, we intend to base an evaluation on every basic criterion that is taken into account when establishing the pricing. The goal of this project is to learn Python and get experience in Data Analytics, Machine Learning, and AI.
Category: Artificial Intelligence

[1217] viXra:2109.0220 [pdf] submitted on 2021-09-30 01:04:38

Artificial Intelligence & Machine Learning Role in Financial Services

Authors: Prudhvi Parne
Comments: 6 Pages.

Financial services are the economic backbone of any nation in the world. Billions of financial transactions take place, and all of this data is stored and can be considered a gold mine of data for many different organizations. No human intelligence can dig through this amount of data to come up with something valuable. This is the reason financial organizations are employing artificial intelligence to come up with new algorithms which can change the way financial transactions are carried out. Artificial intelligence can complete the task in a very short period. Artificial intelligence can be used to detect fraud, identify possible attacks, and find any other kind of anomalies that may be detrimental to the institution. This paper discusses the role of artificial intelligence and machine learning in the finance sector.
Category: Artificial Intelligence

[1216] viXra:2109.0203 [pdf] submitted on 2021-09-28 19:31:25

A Special Theory of Life

Authors: Matthew Groom
Comments: 5 Pages. [Corrections made by viXra Admin to conform with scholarly norm]

This is going to be one strange and yet rewarding paper for everyone. It consists of two parts: 1. The Rapture is here. 2. I also provide a proof of our inner-self duality and answer the other question everyone wants to know about the self - what makes you, you. This is what every AI researcher has requested.
Category: Artificial Intelligence

[1215] viXra:2109.0200 [pdf] submitted on 2021-09-28 19:13:38

Classification of Rice Varieties with Deep Learning Methods

Authors: Murat Koklu, Ilkay Cinar, Yavuz Selim Taspinar
Comments: 8 Pages.

Rice, which is among the most widely produced grain products worldwide, has many genetic varieties. These varieties are distinguished from each other by some of their features, usually texture, shape, and color. With these features that distinguish rice varieties, it is possible to classify and evaluate the quality of seeds. In this study, Arborio, Basmati, Ipsala, Jasmine and Karacadag, five different varieties of rice often grown in Turkey, were used. A total of 75,000 grain images, 15,000 from each of these varieties, are included in the dataset. A second dataset with 106 features, including 12 morphological, 4 shape and 90 color features obtained from these images, was also used. Models were created using Artificial Neural Network (ANN) and Deep Neural Network (DNN) algorithms for the feature dataset and the Convolutional Neural Network (CNN) algorithm for the image dataset, and classification was performed. Statistical results for sensitivity, specificity, precision, F1 score, accuracy, false positive rate and false negative rate were calculated from the confusion matrix values of the models, and the results of each model are given in tables. Classification accuracies of 99.87% for ANN, 99.95% for DNN and 100% for CNN were achieved. These results show that the models used in this study can be applied successfully to the classification of rice varieties.
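
The paper's exact CNN architecture is not reproduced in this listing; the sketch below shows one plausible small CNN for the five-class rice image task (layer sizes and the 100x100 input are assumptions).

    # Minimal, assumed CNN sketch for five rice classes; not the paper's architecture.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=(100, 100, 3)),  # assumed image size
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(5, activation="softmax"),  # Arborio, Basmati, Ipsala, Jasmine, Karacadag
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # train_ds = tf.keras.utils.image_dataset_from_directory("rice_images", image_size=(100, 100))
    # model.fit(train_ds, epochs=10)
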
Category: Artificial Intelligence

[1214] viXra:2109.0124 [pdf] submitted on 2021-09-13 10:29:37

A Proposed Solution to Problems in Learning the Knowledge Needed by Self-Driving Vehicles

Authors: J Gerard Wolff
Comments: 15 Pages.

Three problems in learning knowledge for self-driving vehicles are: how a finite sample of information about driving, N, can yield an ability to deal with the infinity of possible driving situations; the problem of generalising from N without over- or under-generalisation; and how to weed out errors in N. A theory developed with computer models to explain a child’s learning of his or her first language, now incorporated in the SP System, suggests: compress N as much as possible by a process that creates a grammar, G, and an encoding of N in terms of G called E. Then discard E which contains all or most of the errors in N, and retain G which solves the first two problems.
Category: Artificial Intelligence

[1213] viXra:2109.0110 [pdf] submitted on 2021-09-09 22:16:02

The Future of Online Learning Using Artificial Intelligence

Authors: Yew Kee Wong
Comments: 7 Pages. AIAA CONFERENCE 2021 (NOV 2021), DUBAI, UAE

Online learning has emerged as a key technique in education during the COVID-19 pandemic. Traditional learning is a complex process, as learning patterns, approaches, skills and performance vary from person to person. Adaptive online learning focuses on understanding the learner’s performance and skills and adapting to them. The use of advanced technology also provides a means to analyse behavioural learning patterns: it yields detailed skill mapping and performance data that enable learners to understand the areas that need to be improved, and the information can also be used by assessors to improve their teaching approach. Advanced online learning systems using artificial intelligence are an emerging concept for the coming years. In this new concept, classes are not taken face-to-face in a classroom but through an electronic medium as a substitute. These virtual learning approaches are gaining importance every day and will soon be an integral part of our world; taking up such learning through an electronic medium is termed online learning. We propose two new models powered by artificial intelligence (AI) tools, and a number of examples of using these new models are presented.
Category: Artificial Intelligence

[1212] viXra:2109.0109 [pdf] submitted on 2021-09-09 22:17:57

The Use of Big Data in Machine Learning Algorithm

Authors: Yew Kee Wong
Comments: 7 Pages. ACITY CONFERENCE 2021 (NOV 2021), DUBAI, UAE

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1211] viXra:2109.0108 [pdf] submitted on 2021-09-09 22:19:40

Using AI to Learn Industry-Specific Big Data for Business Operation and Crisis Management

Authors: Yew Kee Wong
Comments: 7 Pages. SCAI CONFERENCE 2021 (NOV 2021), ZURICH, SWITZERLAND

Artificial intelligence has been a buzz word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios in which AI and big data can be applied, as well as the opportunities provided by their application in various business operation and crisis management domains.
Category: Artificial Intelligence

[1210] viXra:2109.0107 [pdf] submitted on 2021-09-09 22:21:06

Using Different Assessment Indicators in Supporting Online Learning

Authors: Yew Kee Wong
Comments: 6 Pages. BIOM CONFERENCE 2021 (OCT 2021), VIENNA, AUSTRIA

The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that, when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in the answers chosen. In this paper we present the findings for these assessment indicators and show how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using quantifiable measurements, and a number of examples of using this statistical analysis algorithm are presented.
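
The algorithm itself is not given in the listing; the toy example below merely illustrates how the three named indicators (question difficulty, time spent, answer changes) could be summarised from a per-question log (all numbers and weightings are invented).

    # Toy illustration of the named assessment indicators; not the paper's algorithm.
    import pandas as pd

    # hypothetical per-question log for one learner
    log = pd.DataFrame({
        "question":       ["Q1", "Q2", "Q3", "Q4"],
        "correct":        [1, 0, 1, 1],
        "difficulty":     [0.2, 0.8, 0.5, 0.7],   # fraction of all learners who get it wrong
        "seconds_spent":  [35, 140, 60, 90],
        "answer_changes": [0, 3, 1, 0],
    })

    summary = {
        "raw_score":           log["correct"].mean(),
        "difficulty_weighted": (log["correct"] * log["difficulty"]).sum() / log["difficulty"].sum(),
        "mean_time_correct_s": log.loc[log["correct"] == 1, "seconds_spent"].mean(),
        "mean_answer_changes": log["answer_changes"].mean(),
    }
    print(summary)
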
Category: Artificial Intelligence

[1209] viXra:2109.0106 [pdf] submitted on 2021-09-09 22:24:11

Applying AI and Big Data for Sensitive Operations and Disaster Management

Authors: Yew Kee Wong
Comments: 7 Pages. MLNLP CONFERENCE 2021 (SEP 2021), COPENHAGEN, DENMARK

Artificial intelligence has been a buzz word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios in which AI and big data can be applied, as well as the opportunities provided by their application in various sensitive operations and disaster management.
Category: Artificial Intelligence

[1208] viXra:2109.0104 [pdf] submitted on 2021-09-09 22:28:20

Applying Machine Learning Process Using Big Data

Authors: Yew Kee Wong
Comments: 7 Pages. IJAIA JOURNAL (2021) VOL. 12, NO. 5

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1207] viXra:2109.0103 [pdf] submitted on 2021-09-09 22:30:00

The Use of Artificial Intelligence in Human Resources Development

Authors: Yew Kee Wong
Comments: 8 Pages. EEIJ JOURNAL (2021), VOL. 7, ISSUE. 3

Artificial intelligence has been an eye-popping word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler in promoting the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are a precious resource for all nations. High unemployment and underemployment rates, especially among youth, are a great threat affecting the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence

[1206] viXra:2109.0102 [pdf] submitted on 2021-09-09 22:34:12

The Future of Internet of Things (IoT) and AI

Authors: Yew Kee Wong
Comments: 8 Pages. ARIA CONFERENCE 2021 (DEC 2021), SYDNEY, AUSTRALIA

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.
Category: Artificial Intelligence

[1205] viXra:2109.0101 [pdf] submitted on 2021-09-09 22:35:42

Using AI Applications on Internet of Things (IoT)

Authors: Yew Kee Wong
Comments: 8 Pages. NeTIOT CONFERENCE 2021 (DEC 2021), SYDNEY, AUSTRALIA

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.
Category: Artificial Intelligence

[1204] viXra:2109.0100 [pdf] submitted on 2021-09-09 22:37:10

Using Big Data for Machine Learning Applications

Authors: Yew Kee Wong
Comments: 7 Pages. SIPR CONFERENCE 2021 (OCT 2021), SYDNEY, AUSTRALIA

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1203] viXra:2109.0099 [pdf] submitted on 2021-09-09 22:39:20

AI with Big Data

Authors: Yew Kee Wong
Comments: 8 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 6

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity (the four V’s of big data), which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain valuable insights from such varied and rapidly changing data, ranging from daily transactions to customer interactions and social network data. Such value can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the use of big data for the artificial intelligence development and its applications in various decision making domains.
Category: Artificial Intelligence

[1202] viXra:2109.0098 [pdf] submitted on 2021-09-09 22:40:43

Applying AI in Human Resources Advancement

Authors: Yew Kee Wong
Comments: 7 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 6

Artificial intelligence has been an eye-popping word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler in promoting the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are a precious resource for all nations. High unemployment and underemployment rates, especially among youth, are a great threat affecting the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence

[1201] viXra:2109.0097 [pdf] submitted on 2021-09-09 22:42:18

The Use of Artificial Intelligence and Big Data in Crisis Management

Authors: Yew Kee Wong
Comments: 6 Pages. IJCST JOURNAL 2022 FEB, VOL. 10, ISSUE. 1

Artificial intelligence has been a buzz word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios in which AI and big data can be applied, as well as the opportunities provided by their application in various business operation and crisis management domains.
Category: Artificial Intelligence

[1200] viXra:2109.0096 [pdf] submitted on 2021-09-09 22:43:47

Understanding the Relationships of AI, Machine Learning and Deep Learning

Authors: Yew Kee Wong
Comments: 10 Pages. IJCST JOURNAL 2022 FEB, VOL. 10, ISSUE. 1

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using machine learning, which is the application of advanced deep learning techniques on big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by the AI applications in various decision making domains.
Category: Artificial Intelligence

[1199] viXra:2109.0095 [pdf] submitted on 2021-09-09 22:45:19

Applying Effective Assessment Indicators in Online Learning

Authors: Yew Kee Wong
Comments: 7 Pages. IJIT JOURNAL 2021 DEC, VOL. 7, ISSUE. 6

The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that, when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in the answers chosen. In this paper we present the findings for these assessment indicators and show how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using quantifiable measurements, and a number of examples of using this statistical analysis algorithm are presented.
Category: Artificial Intelligence

[1198] viXra:2109.0094 [pdf] submitted on 2021-09-09 22:46:44

The Future of Machine Learning and Deep Learning

Authors: Yew Kee Wong
Comments: 9 Pages. IJIT JOURNAL 2021 DEC, VOL. 7, ISSUE. 6

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1197] viXra:2109.0093 [pdf] submitted on 2021-09-09 22:50:32

Applying New Assessment Indicators in Online Learning Model

Authors: Yew Kee Wong
Comments: 7 Pages. IJIT JOURNAL 2022 FEB, VOL. 8, ISSUE. 1

The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that, when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor in planning an effective and efficient learning model for the learner. Statistical analysis is an important part of assessing the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in the answers chosen. In this paper we present the findings for these assessment indicators and show how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess online learning outcomes more effectively using quantifiable measurements, and a number of examples of using this statistical analysis algorithm are presented.
Category: Artificial Intelligence

[1196] viXra:2109.0092 [pdf] submitted on 2021-09-09 22:51:52

Applying Big Data for Machine Learning Process

Authors: Yew Kee Wong
Comments: 7 Pages. IJIT JOURNAL 2022 FEB, VOL. 8, ISSUE. 1

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1195] viXra:2109.0091 [pdf] submitted on 2021-09-09 22:53:38

Evaluate the Features of Big Data Analytics and Machine Learning

Authors: Yew Kee Wong
Comments: 7 Pages. IJETA JOURNAL 2021 DEC, VOL. 8, ISSUE. 6

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1194] viXra:2109.0090 [pdf] submitted on 2021-09-09 22:55:07

Applying Artificial Intelligence in Internet of Things (IoT)

Authors: Yew Kee Wong
Comments: 8 Pages. IJETA JOURNAL 2021 DEC, VOL. 8, ISSUE. 6

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.
Category: Artificial Intelligence

[1193] viXra:2109.0088 [pdf] submitted on 2021-09-09 22:58:19

The Power of Big Data in Today’s World

Authors: Yew Kee Wong
Comments: 8 Pages. IJETA JOURNAL 2022 FEB, VOL. 9, ISSUE. 1

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity (the four V’s of big data), which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain valuable insights from such varied and rapidly changing data, ranging from daily transactions to customer interactions and social network data. Such value can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the use of big data for the artificial intelligence development and its applications in various decision making domains.
Category: Artificial Intelligence

[1192] viXra:2109.0087 [pdf] submitted on 2021-09-09 23:01:19

The Relationships in AI, Big Data and Internet of Things (IoT)

Authors: Yew Kee Wong
Comments: 7 Pages. BIBC CONFERENCE 2021 (OCT 2021), SYDNEY, AUSTRALIA

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.
Category: Artificial Intelligence

[1191] viXra:2109.0086 [pdf] submitted on 2021-09-09 23:03:06

The Effectiveness of Using Artificial Intelligence for Online Learning

Authors: Yew Kee Wong
Comments: 8 Pages. JOURNAL OF SOFTWARE, ICCSIT 2021, PARIS, FRANCE

Online learning has emerged as a key technique in education during the COVID-19 pandemic. Traditional learning is a complex process, as learning patterns, approaches, skills and performance vary from person to person. Adaptive online learning focuses on understanding the learner’s performance and skills and adapting to them. The use of advanced technology also provides a means to analyze behavioral learning patterns: it yields detailed skill mapping and performance data that enable learners to understand the areas that need to be improved, and the information can also be used by assessors to improve their teaching approach. Advanced online learning systems using artificial intelligence are an emerging concept for the coming years. In this new concept, classes are not taken face-to-face in a classroom but through an electronic medium as a substitute. These virtual learning approaches are gaining importance every day and will soon be an integral part of our world; taking up such learning through an electronic medium is termed online learning. We propose two new models powered by artificial intelligence (AI) tools, and a number of examples of using these new models are presented.
Category: Artificial Intelligence

[1190] viXra:2109.0085 [pdf] submitted on 2021-09-09 23:04:52

Understanding the Features of Internet of Things (IoT) and Big Data Analysis

Authors: Yew Kee Wong
Comments: 8 Pages. CIoT CONFERENCE 2021 (SEP 2021), TORONTO, CANADA

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. The Internet of Things, or "IoT" for short, is about extending the power of the internet beyond computers and smartphones to a whole range of other things, processes and environments. IoT is at the epicentre of the Digital Transformation Revolution that is changing the shape of business, enterprise and people’s lives. This transformation influences everything from how we manage and operate our homes to automating processes across nearly all industries. This paper aims to analyse the relationships of AI, big data and IoT, as well as the opportunities provided by the applications in various operational domains.
Category: Artificial Intelligence

[1189] viXra:2109.0083 [pdf] submitted on 2021-09-09 23:07:37

AI, Machine Learning and Deep Learning Development and Applications

Authors: Yew Kee Wong
Comments: 10 Pages. BMLI CONFERENCE 2021 (DEC 2021), CHENNAI, INDIA

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using machine learning, which is the application of advanced deep learning techniques on big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by the AI applications in various decision making domains.
Category: Artificial Intelligence

[1188] viXra:2109.0068 [pdf] submitted on 2021-09-09 22:13:52

Human Resources Development Using AI

Authors: Yew Kee Wong
Comments: 7 Pages.

Artificial intelligence has been an eye-popping word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler in promoting the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are a precious resource for all nations. High unemployment and underemployment rates, especially among youth, are a great threat affecting the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence

[1187] viXra:2109.0067 [pdf] submitted on 2021-09-09 22:14:14

Alternative AI Assessment Methods for Online Learning

Authors: Yew Kee Wong
Comments: 7 Pages.

Online learning has emerged as a key technique in education during the COVID-19 pandemic. Traditional learning is a complex process, as learning patterns, approaches, skills and performance vary from person to person. Adaptive online learning focuses on understanding the learner’s performance and skills and adapting to them. The use of advanced technology also provides a means to analyse behavioural learning patterns: it yields detailed skill mapping and performance data that enable learners to understand the areas that need to be improved, and the information can also be used by assessors to improve their teaching approach. Advanced online learning systems using artificial intelligence are an emerging concept for the coming years. In this new concept, classes are not taken face-to-face in a classroom but through an electronic medium as a substitute. These virtual learning approaches are gaining importance every day and will soon be an integral part of our world; taking up such learning through an electronic medium is termed online learning. We propose two new models powered by artificial intelligence (AI) tools, and a number of examples of using these new models are presented.
Category: Artificial Intelligence

[1186] viXra:2109.0066 [pdf] submitted on 2021-09-09 21:48:06

Applying Big Data Analytics in Machine Learning Algorithms

Authors: Yew Kee Wong
Comments: 7 Pages. IJETA JOURNAL 2021 OCT, VOL. 8, ISSUE. 5

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1185] viXra:2109.0065 [pdf] submitted on 2021-09-09 21:49:38

Dealing with Disaster Management Using AI

Authors: Yew Kee Wong
Comments: 9 Pages. IJETA JOURNAL 2021 OCT, VOL. 8, ISSUE. 5

Artificial intelligence has been a buzz word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios in which AI and big data can be applied, as well as the opportunities provided by their application in various business operation and disaster management domains.
Category: Artificial Intelligence

[1184] viXra:2109.0064 [pdf] submitted on 2021-09-09 21:51:33

Understanding the Relationships Between Big Data and AI

Authors: Yew Kee Wong
Comments: 8 Pages. IJIT JOURNAL 2021 AUG, VOL. 7, ISSUE. 4

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity (the four V’s of big data), which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain valuable insights from such varied and rapidly changing data, ranging from daily transactions to customer interactions and social network data. Such value can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the use of big data for the artificial intelligence development and its applications in various decision making domains.
Category: Artificial Intelligence

[1183] viXra:2109.0063 [pdf] submitted on 2021-09-09 21:53:14

Understanding the Features of Deep Learning

Authors: Yew Kee Wong
Comments: 6 Pages. IJIT JOURNAL 2021 AUG, VOL. 7, ISSUE. 4

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence

[1182] viXra:2109.0062 [pdf] submitted on 2021-09-09 21:54:49

Advanced Deep Learning Approach and Applications

Authors: Yew Kee Wong
Comments: 6 Pages. IJIT JOURNAL 2021 OCT, VOL. 7, ISSUE. 5

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence

[1181] viXra:2109.0061 [pdf] submitted on 2021-09-09 21:56:12

Understanding the Features of Machine Learning for Internet of Things (IoT)

Authors: Yew Kee Wong
Comments: 9 Pages. IJIT JOURNAL 2021 OCT, VOL. 7, ISSUE. 5

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1180] viXra:2109.0060 [pdf] submitted on 2021-09-09 21:58:28

Machine Learning Algorithms Using Big Data Analysis

Authors: Yew Kee Wong
Comments: 7 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 5

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1179] viXra:2109.0059 [pdf] submitted on 2021-09-09 22:00:33

The Features of Deep Learning Algorithms

Authors: Yew Kee Wong
Comments: 6 Pages. IJCST JOURNAL 2021 OCT, VOL. 9, ISSUE. 5

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence

[1178] viXra:2109.0058 [pdf] submitted on 2021-09-09 22:13:33

Advanced Deep Learning Algorithms

Authors: Yew Kee Wong
Comments: 6 Pages. IJCST JOURNAL 2021 AUG, VOL. 9, ISSUE. 4

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence

[1177] viXra:2109.0057 [pdf] submitted on 2021-09-09 22:13:11

Understanding the Features of Machine Learning and Big Data Analysis

Authors: Yew Kee Wong
Comments: 7 Pages. IJCST JOURNAL 2021 AUG, VOL. 9, ISSUE. 4

In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the different machine learning algorithms and methods which can be applied to big data analysis, as well as the opportunities provided by the application of big data analytics in various decision making domains.
Category: Artificial Intelligence

[1176] viXra:2109.0056 [pdf] submitted on 2021-09-09 22:12:50

Advanced Skills Mapping and Career Development Using AI

Authors: Yew Kee Wong
Comments: 8 Pages. NATL CONFERENCE 2021 (NOV 2021), LONDON, UK

Artificial intelligence has been an eye-popping word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler in promoting the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are a precious resource for nations. High unemployment and underemployment rates, especially among youth, are a great threat affecting the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence

[1175] viXra:2109.0055 [pdf] submitted on 2021-09-09 22:12:06

Advanced Deep Learning Model

Authors: Yew Kee Wong
Comments: 6 Pages. CRBL CONFERENCE 2021 (OCT 2021), VIENNA, AUSTRIA

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods which can be applied to artificial intelligence analysis, as well as the opportunities provided by the application in various decision making domains.
Category: Artificial Intelligence

[1174] viXra:2109.0054 [pdf] submitted on 2021-09-09 22:13:21

Dealing with Crisis Management Using AI

Authors: Yew Kee Wong
Comments: 7 Pages. ITCCMA CONFERENCE 2021 (SEP 2021) COPENHAGEN, DENMARK

Artificial intelligence has been a buzz word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus impacting all efforts exerted towards sustainable development. In the information era, enormous amounts of data have become available on hand to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets for different industries and business operations. Numerous use cases have shown that AI can ensure an effective supply of information to citizens, users and customers in times of crisis. This paper aims to analyse some of the different methods and scenarios in which AI and big data can be applied, as well as the opportunities provided by their application in various business operation and crisis management domains.
Category: Artificial Intelligence

[1173] viXra:2109.0047 [pdf] submitted on 2021-09-07 04:43:30

Neuro-Fuzzy: Artificial Neural Networks & Fuzzy Logic

Authors: Amey Thakur, Karan Dhiman, Mayuresh Phansikar
Comments: 7 pages, 7 figures, Volume 9, Issue IX, International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2021. DOI: https://doi.org/10.22214/ijraset.2021.37930

Neuro-fuzzy is a hybrid system that combines artificial neural networks with fuzzy logic, and it provides a great deal of freedom in how problems can be modelled; the term is frequently used to describe any system that combines both approaches. There are two basic streams of neural network and fuzzy system study: modelling several elements of the human brain (structure, reasoning, learning, perception, and so on), and modelling artificial systems and data (pattern clustering and recognition, function approximation, system parameter estimation, and so on). In general, neural networks and fuzzy logic systems are parameterized nonlinear computing methods for numerical data processing (signals, images, stimuli). These algorithms can be integrated into dedicated hardware or implemented on a general-purpose computer. The network system acquires knowledge through a learning process, and internal parameters (weights) are used to store the learned information.
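
As a concrete, if simplified, illustration of the neuro-fuzzy idea (not the specific systems surveyed in the paper), the sketch below evaluates a Sugeno-style fuzzy model whose Gaussian membership centres and widths play the role of learnable network parameters:

    # Bare-bones Sugeno-style forward pass; all parameters here are made up.
    import numpy as np

    def gaussian_mf(x, centre, sigma):
        return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

    def sugeno_forward(x, centres, sigmas, rule_outputs):
        """x: (n_inputs,), centres/sigmas: (n_rules, n_inputs), rule_outputs: (n_rules,)."""
        memberships = gaussian_mf(x, centres, sigmas)   # (n_rules, n_inputs)
        firing = memberships.prod(axis=1)               # AND of the rule antecedents
        weights = firing / firing.sum()                 # normalised firing strengths
        return weights @ rule_outputs                   # weighted rule consequents

    # two inputs, three rules with invented parameters
    centres = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
    sigmas = np.full((3, 2), 0.3)
    rule_outputs = np.array([0.0, 0.5, 1.0])
    print(sugeno_forward(np.array([0.6, 0.4]), centres, sigmas, rule_outputs))
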
Category: Artificial Intelligence

[1172] viXra:2109.0028 [pdf] submitted on 2021-09-05 15:57:13

Dynamic Latent Scale GAN

Authors: Jeongik Cho
Comments: 13 Pages.

Generators in generative adversarial networks map latent distributions into data distributions. GAN inversion is mapping data distribution to latent distribution by inverting the generator of GAN. When training the encoder for generator inversion, simply using the mean squared error causes the encoder to not converge due to information loss on the latent distribution from the generator. In other words, it is impossible to invert the generator as it is due to the information loss on the latent distribution. This paper introduces a dynamic latent scale GAN, a method for training a generator without information loss on latent distribution, and an encoder that inverts the generator. Dynamic latent scale GAN dynamically scales each element of the normal i.i.d. (independent and identically distributed) latent distribution during GAN training to adjust the entropy of the latent distribution so that information loss on the latent distribution does not occur in the generator. The amount of information that can be recovered from the generated data distribution can be obtained through the variance of the predicted latent distribution (encoder output distribution). By dynamically adjusting the scale of the latent distribution through the variance of each element of the predicted latent distribution, it is possible to train a generator that does not have information loss on latent distribution. This means that mutual information between the latent distribution and predicted latent distribution can be maximized, and the encoder can converge. Since the latent distribution scale of the dynamic latent scale GAN changes dynamically, the encoder should be trained together during GAN training. The encoder can be integrated with the discriminator, and the loss for the encoder can be added to the generator loss because the encoder converges.
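
One possible reading of the latent-rescaling step, shown only as a schematic sketch: keep a per-element scale for the latent distribution and update it from the spread of the encoder's predicted latents. This is an assumption-heavy paraphrase of the abstract, not the paper's actual training procedure or losses.

    # Schematic sketch of per-element latent rescaling (our reading of the abstract; details assumed).
    import tensorflow as tf

    latent_dim = 64
    scale = tf.Variable(tf.ones(latent_dim))   # per-element scale of the latent distribution
    momentum = 0.99

    def sample_latent(batch_size):
        # normal i.i.d. latent, element-wise rescaled by the current scale
        return tf.random.normal([batch_size, latent_dim]) * scale

    def update_scale(predicted_latents):
        # predicted_latents: encoder outputs for a batch of generated images
        per_dim_std = tf.math.reduce_std(predicted_latents, axis=0)  # recoverable spread per element
        scale.assign(momentum * scale + (1.0 - momentum) * per_dim_std)
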
Category: Artificial Intelligence

[1171] viXra:2108.0169 [pdf] submitted on 2021-08-31 12:44:04

Generative Adversarial Networks

Authors: Amey Thakur, Mega Satish
Comments: 19 pages, 23 figures, Volume 9, Issue VIII, International Journal for Research in Applied Science and Engineering Technology (IJRASET), 2021. DOI: https://doi.org/10.22214/ijraset.2021.37723

Deep learning's breakthrough in the field of artificial intelligence has resulted in the creation of a slew of deep learning models. One of these is the Generative Adversarial Network, which has only recently emerged. The goal of GAN is to use unsupervised learning to analyse the distribution of data and create more accurate results. The GAN allows the learning of deep representations in the absence of substantial labelled training information. Computer vision, language and video processing, and image synthesis are just a few of the applications that might benefit from these representations. The purpose of this research is to get the reader conversant with the GAN framework as well as to provide the background information on Generative Adversarial Networks, including the structure of both the generator and discriminator, as well as the various GAN variants along with their respective architectures. Applications of GANs are also discussed with examples.
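
To make the generator/discriminator setup concrete, a minimal adversarial training step (toy layer sizes, not any of the GAN variants discussed in the paper) could look like this:

    # Minimal GAN sketch: tiny generator/discriminator and one adversarial training step.
    import tensorflow as tf

    latent_dim = 100

    generator = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
        tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
        tf.keras.layers.Reshape((28, 28, 1)),
    ])
    discriminator = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),                 # real/fake logit
    ])

    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    g_opt = tf.keras.optimizers.Adam(1e-4)
    d_opt = tf.keras.optimizers.Adam(1e-4)

    @tf.function
    def train_step(real_images):
        noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
        with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
            fake_images = generator(noise, training=True)
            real_logits = discriminator(real_images, training=True)
            fake_logits = discriminator(fake_images, training=True)
            # discriminator learns to separate real from generated images
            d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                     bce(tf.zeros_like(fake_logits), fake_logits)
            # generator learns to fool the discriminator
            g_loss = bce(tf.ones_like(fake_logits), fake_logits)
        d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                                  discriminator.trainable_variables))
        g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                                  generator.trainable_variables))
        return d_loss, g_loss
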
Category: Artificial Intelligence

[1170] viXra:2108.0155 [pdf] submitted on 2021-08-27 21:01:29

Machine Learning and Deep Learning Technologies

Authors: Yew Kee Wong
Comments: 9 Pages.

In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Such minimal human intervention can be achieved by applying advanced machine learning and deep learning techniques to big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.
Category: Artificial Intelligence

[1169] viXra:2108.0154 [pdf] submitted on 2021-08-27 21:02:30

Skills Mapping and Career Development Analysis Using Artificial Intelligence

Authors: Yew Kee Wong
Comments: 8 Pages.

Artificial intelligence has been an eye-popping word that is impacting every industry in the world. With the rise of such advanced technology, there will always be a question regarding its impact on our social life, environment and economy, thus affecting all efforts exerted towards continuous development. By definition, the welfare of human beings is the core of continuous development. Continuous development is useful only when ordinary people’s lives are improved, whether in health, education, employment, environment, equality or justice. Securing decent jobs is a key enabler in promoting the components of continuous development: economic growth, social welfare and environmental sustainability. Human resources are a precious resource for nations. High unemployment and underemployment rates, especially among youth, are a great threat affecting the continuous economic development of many countries and are influenced by investment in education and quality of living.
Category: Artificial Intelligence

[1168] viXra:2108.0153 [pdf] submitted on 2021-08-27 21:04:08

Using Statistical Analysis Algorithm in Artificial Intelligence for Online Learning

Authors: Yew Kee Wong
Comments: 6 Pages.

The assessment outcome for many online learning methods is based on the number of correct answers, which is then converted into one final mark or grade. We discovered that when using online learning, we can extract more detailed information from the learning process, and this information is useful for the assessor to plan an effective and efficient learning model for the learner. Statistical analysis is an important part of an assessment when evaluating the online learning outcome. The assessment indicators include the difficulty level of the question, the time spent answering, and the variation in chosen answers. In this paper we present the findings on these assessment indicators and how they can improve the way the learner is assessed when using an online learning system. We developed a statistical analysis algorithm which can assess the online learning outcomes more effectively using quantifiable measurements. A number of examples of using this statistical analysis algorithm are presented.
Category: Artificial Intelligence

[1167] viXra:2108.0152 [pdf] submitted on 2021-08-27 21:05:13

The Use of Big Data in AI Development And Applications

Authors: Yew Kee Wong
Comments: 8 Pages.

In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in volume, velocity, variety and veracity (the four V’s of big data), which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Furthermore, decision makers need to be able to gain valuable insights from such varied and rapidly changing data, ranging from daily transactions to customer interactions and social network data. Such value can be provided using big data analytics, which is the application of advanced analytics techniques on big data. This paper aims to analyse some of the uses of big data for artificial intelligence development and its applications in various decision-making domains.
Category: Artificial Intelligence

[1166] viXra:2108.0147 [pdf] submitted on 2021-08-25 23:16:30

Inverting Generator of Gan Through Direction Embedding Discriminator

Authors: Jeongik Cho
Comments: 10 Pages.

Generators in generative adversarial networks map latent distributions into data distributions. GAN inversion is mapping data distribution to latent distribution by inverting the generator of GAN. In this paper, I introduce a direction embedding discriminator GAN in which the discriminator learns the inverse mapping of the generator. In the suggested method, when the latent vector is sampled from an i.i.d. (independent and identically distributed) random variable, the latent vector is considered as the angular coordinates of spherical coordinates. Thus, the latent vector can be transformed into a point on the surface of the hypersphere in Cartesian coordinates. The discriminator embeds the generated data point into Cartesian coordinates. The direction of the embedded coordinates represents the predicted Cartesian coordinates of the latent vector, and the log of the magnitude represents an adversarial value (real/fake). The generator and discriminator are trained cooperatively to decrease the angle between the embedded Cartesian coordinates from the discriminator and the Cartesian coordinates converted from the latent vector considered as angular coordinates of spherical coordinates. The suggested method can be applied during GAN training, does not require additional encoder training, and does not use a reconstruction loss.
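A minimal numerical sketch of the geometric ingredients described above (not the author's implementation; function names are illustrative):

    import numpy as np

    def angles_to_unit_vector(phi):
        # map angular coordinates (the latent vector) to a point on the
        # unit hypersphere in Cartesian coordinates
        n = len(phi)
        x = np.ones(n + 1)
        for i in range(n):
            x[i] *= np.cos(phi[i])
            x[i + 1:] *= np.sin(phi[i])
        return x

    def split_discriminator_output(embedding):
        # direction of the embedding predicts the latent's Cartesian coordinates;
        # the log of the magnitude acts as the adversarial (real/fake) value
        magnitude = np.linalg.norm(embedding)
        direction = embedding / (magnitude + 1e-12)
        return direction, np.log(magnitude + 1e-12)

    def alignment_loss(direction, latent_angles):
        # generator and discriminator cooperate to shrink the angle between the
        # embedded direction and the latent's point on the hypersphere
        target = angles_to_unit_vector(latent_angles)
        cos_angle = np.clip(direction @ target, -1.0, 1.0)
        return np.arccos(cos_angle)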
Category: Artificial Intelligence

[1165] viXra:2108.0130 [pdf] submitted on 2021-08-24 11:26:13

Fundamentals of Neural Networks

Authors: Amey Thakur, Archit Konde
Comments: 22 pages, 15 figures, Volume 9, Issue VIII, International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2021. DOI: http://dx.doi.org/10.22214/ijraset.2021.37362

The purpose of this study is to familiarise the reader with the foundations of neural networks. Artificial Neural Networks (ANNs) are algorithm-based systems that are modelled after Biological Neural Networks (BNNs). Neural networks are an effort to use the human brain's information processing skills to address challenging real-world AI issues. The evolution of neural networks and their significance are briefly explored. ANNs and BNNs are contrasted, and their qualities, benefits, and disadvantages are discussed. The drawbacks of the perceptron model and their improvement by the sigmoid neuron and ReLU neuron are briefly discussed. In addition, we give a bird's-eye view of the different Neural Network models. We study neural networks (NNs) and highlight the different learning approaches and algorithms used in Machine Learning and Deep Learning. We also discuss different types of NNs and their applications. A brief introduction to Neuro-Fuzzy and its applications with a comprehensive review of NN technological advances is provided.
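For reference, the three neuron nonlinearities contrasted above can be written in a few lines of Python (a generic illustration, not code from the paper):

    import numpy as np

    def step(x):        # perceptron: hard threshold, not differentiable
        return np.where(x >= 0, 1.0, 0.0)

    def sigmoid(x):     # smooth and differentiable, but saturates for large |x|
        return 1.0 / (1.0 + np.exp(-x))

    def relu(x):        # cheap to compute and avoids saturation for positive inputs
        return np.maximum(0.0, x)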
Category: Artificial Intelligence

[1164] viXra:2108.0120 [pdf] submitted on 2021-08-23 13:14:27

Local Search in Non-deterministic Finite Automata with Extensions

Authors: Mirzakhmet Syzdykov
Comments: 5 Pages.

In this work we present a theoretical approach to solving the back-reference problem in regular expression matching in almost polynomial time using local search in memory, while the cost grows exponentially with the number of capturing groups. For this purpose we develop a modified matching algorithm operating on non-deterministic finite automata, together with a modified search algorithm and a specific method that also covers extended regular expressions. The algorithm can be adjusted for approximate searching, allowing us to apply extended operators and features of modern regular expressions such as intersection, subtraction and complement, as well as back-references. A review of past work on this issue is also given: to the present time there is no discrete algorithm for local search in systems such as automata. Thus, we obtain a new result of matching the pattern locally while the simulating algorithm works as usual. The result also applies to the membership problem with a local bound, which can be set in the main algorithm presented in this article.
Category: Artificial Intelligence

[1163] viXra:2108.0095 [pdf] submitted on 2021-08-18 23:35:38

A New Interpolation Approach and Corresponding Instance-Based Learning

Authors: Shiyou Lian
Comments: Pages.

Starting from finding the approximate value of a function, this paper introduces a measure of approximation-degree between two numerical values, proposes the concepts of “strict approximation” and “strict approximation region”, derives the corresponding one-dimensional interpolation methods and formulas, and then presents a calculation model called the “sum-times-difference formula” for high-dimensional interpolation, thus developing a new interpolation approach, ADB interpolation. ADB interpolation has been applied to the interpolation of actual functions with satisfactory results. Viewed from principle and effect, the interpolation approach is novel in idea and has the advantages of simple calculation, stable accuracy, suitability for parallel processing, and being very well suited to high-dimensional interpolation and easy to extend to the interpolation of vector-valued functions. Applying the approach to instance-based learning, a new instance-based learning method, learning using ADB interpolation, is obtained. The learning method uses a unique technique and has the advantages of a definite mathematical basis, implicit distance weights, avoidance of misclassification, high efficiency, and a wide range of applications, as well as being interpretable. In principle, this method is a kind of learning by analogy, which can complement deep learning (a form of inductive learning), and for some problems the two can even achieve “different approaches but equal results” in big data and cloud computing environments. Thus, learning using ADB interpolation can also be regarded as a kind of “wide learning” that is dual to deep learning.
Category: Artificial Intelligence

[1162] viXra:2108.0029 [pdf] submitted on 2021-08-08 14:07:27

Information Theory Applied to Bayesian Network for Learning Continuous Data Matrix

Authors: Ait-Taleb Nabil
Comments: 28 Pages.

In this paper, we will cover information theory for continuous data: differential entropy, joint differential entropy, conditional differential entropy, mutual information and conditional mutual information. We will give a brief reminder of multidimensional Gaussian probability and information theory. We will demonstrate a theorem on conditional entropy inequalities for Gaussian random vectors; this theorem will later be used to bound a Bayesian network’s differential entropy. In the following, we will define a Bayesian network using a Gaussian random vector, show how to compute a Bayesian network’s differential entropy, and conclude by proposing a theorem to upper- and lower-bound this differential entropy. In order to learn from data, we will detail, for a Bayesian network, the AIC and BIC scores and a method of differential entropy absorption. We will also show how to infer data from a Bayesian network. Using an example, this paper concludes by suggesting a learning algorithm, based on the differential entropy coefficient, that attributes a Bayesian network to a continuous data matrix.
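For example, the differential entropy of an n-dimensional Gaussian vector with covariance matrix Sigma is 0.5 * ln((2*pi*e)^n * det(Sigma)), and conditional differential entropy follows from the chain rule h(X|Y) = h(X,Y) - h(Y). A small sketch of these two quantities (an illustration, not the author's code):

    import numpy as np

    def gaussian_differential_entropy(cov):
        # h(X) = 0.5 * ln((2*pi*e)^n * det(cov)) in nats, n = dimension
        n = cov.shape[0]
        sign, logdet = np.linalg.slogdet(cov)
        return 0.5 * (n * np.log(2.0 * np.pi * np.e) + logdet)

    def conditional_entropy(cov, idx_x, idx_y):
        # h(X|Y) = h(X, Y) - h(Y) for jointly Gaussian X and Y
        both = list(idx_x) + list(idx_y)
        h_xy = gaussian_differential_entropy(cov[np.ix_(both, both)])
        h_y = gaussian_differential_entropy(cov[np.ix_(list(idx_y), list(idx_y))])
        return h_xy - h_y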
Category: Artificial Intelligence

[1161] viXra:2107.0124 [pdf] submitted on 2021-07-22 18:37:33

Breaking Free from the Stability-Plasticity Dilemma with Incremental Domain Inference on Sequential Data

Authors: Romain Mouret
Comments: 5 Pages.

We make the case for identifying the input domain prior to running downstream models and propose an architecture that opens the door to lifelong learning systems that forget at a decreasing rate as the tasks grow in complexity. Our model accurately identifies domains and is compatible with other continual learning algorithms, provided they benefit from knowing the current domain beforehand.
Category: Artificial Intelligence

[1160] viXra:2107.0122 [pdf] submitted on 2021-07-21 19:07:21

Open Science with Respect to Artificial Intelligence

Authors: Sagnik Mazumder
Comments: 4 Pages.

Artificial Intelligence is one of those fields in computer science that is currently being extensively studied. In this paper, the author attempts to summarise the current state of research in the field with respect to openness to the general community, and finds a profound lack of opportunity for novices to contribute to the field, as well as a near monopoly on effective research by large industries, while production environments continue to remain largely safe from such influences.
Category: Artificial Intelligence

[1159] viXra:2107.0097 [pdf] submitted on 2021-07-16 15:11:10

Smart Contracts on Algorand

Authors: Archie Chaudhury, Brian Haney
Comments: 16 Pages. Blockchain, Computation, and Cryptocurrency

This Paper makes three main contributions. First, this Paper surveys Algorand Smart Contracts and the Algorand Network, including software systems and algorithmic architectures. Second, this Paper discusses various software mechanisms enabling developers to execute transfers on the Algorand Network. Third, this Paper advances Algorand Smart Contracts by introducing the Algogeneous Smart Contract. Algogeneous Smart Contracts are a new type of Algorand Smart Contract, which are simpler to develop and utilize artificial intelligence to ensure contracts are legally compliant and enforceable.
Category: Artificial Intelligence

[1158] viXra:2107.0058 [pdf] submitted on 2021-07-10 13:40:51

Twitter Sentiment Analysis using Deep Learning

Authors: Vedurumudi Priyanka
Comments: 17 Pages.

In this report, we address the problem of sentiment classification on a Twitter dataset. We used a number of machine learning and deep learning methods to perform sentiment analysis. In the end, we used a majority-vote ensemble of 5 of our best models to achieve a classification accuracy of 83.58% on the Kaggle public leaderboard. We compared various methods for sentiment analysis on tweets (a binary classification problem). The training dataset is expected to be a CSV file of the form tweet_id, sentiment, tweet, where tweet_id is a unique integer identifying the tweet, sentiment is either 1 (positive) or 0 (negative), and tweet is the tweet enclosed in "". Similarly, the test dataset is a CSV file of the form tweet_id, tweet. Note that CSV headers are not expected and should be removed from the training and test datasets. We used the Anaconda distribution of Python, with library requirements specific to some methods: Keras with a TensorFlow backend for Logistic Regression, MLP, RNN (LSTM) and CNN, and xgboost for XGBoost. Preprocessing, a baseline, Naive Bayes, Maximum Entropy, Decision Tree, Random Forest, multi-layer perceptron, etc. are also implemented.
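A minimal sketch of one shallow baseline on the CSV format described above (illustrative only; the file names and model settings are assumptions, and the deep models and ensembling are omitted):

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # training file: tweet_id, sentiment (1/0), tweet -- no header, as described
    train = pd.read_csv("train.csv", header=None, names=["tweet_id", "sentiment", "tweet"])
    test = pd.read_csv("test.csv", header=None, names=["tweet_id", "tweet"])

    vectorizer = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
    X_train = vectorizer.fit_transform(train["tweet"])
    X_test = vectorizer.transform(test["tweet"])

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, train["sentiment"])
    test["predicted_sentiment"] = clf.predict(X_test)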
Category: Artificial Intelligence

[1157] viXra:2106.0084 [pdf] submitted on 2021-06-14 17:07:54

Analysis of Covid-19 Cases in India Using Seir, Arima and LSTM Models

Authors: Souvik Sengupta
Comments: 10 Pages. [Corrections are made by viXra Admin to comply with the rules of viXra.org]

After one year from the start of the COVID-19 pandemic in India, the country is now seeing a steady decay in the number of daily new cases and active cases. Although the vaccination process is about to start from mid-January 2021, it will not affect the number of daily cases for at least the next three to four months, for obvious reasons such as the phase-wise implementation and the six-to-eight-week span required after the first dose to develop immunity. Therefore, the prime question now is where we will be at the end of the first quarter of 2021, and what the number of new cases and active cases could be before the vaccination immunity starts working. This paper analyzes the growth and decay pattern of Indian COVID-19 cases with the help of SEIR epidemic modeling, ARIMA statistical modeling, and time-series analysis with LSTM. The models learn the parameter and hyper-parameter values that are best suited for describing the pattern of the COVID-19 pandemic in India, and then try to predict the numbers for India by the end of March 2021. It is forecast that the number of new cases would come down to near 5,000 per day, active cases to near 40,000, and the total number of infected may reach 11.1 million if the current pattern is followed.
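As an illustration of the SEIR component only (the parameter values below are placeholders, not the paper's fitted estimates):

    def seir_step(S, E, I, R, beta, sigma, gamma, N, dt=1.0):
        # classical SEIR compartmental model, one forward-Euler step
        new_exposed    = beta * S * I / N    # susceptible -> exposed
        new_infectious = sigma * E           # exposed -> infectious
        new_recovered  = gamma * I           # infectious -> recovered
        S += dt * (-new_exposed)
        E += dt * (new_exposed - new_infectious)
        I += dt * (new_infectious - new_recovered)
        R += dt * new_recovered
        return S, E, I, R

    # illustrative run with hypothetical parameters
    N = 1.38e9
    S, E, I, R = N - 1.5e4, 5e3, 1e4, 0.0
    for day in range(90):
        S, E, I, R = seir_step(S, E, I, R, beta=0.2, sigma=1 / 5.2, gamma=1 / 10, N=N)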
Category: Artificial Intelligence

[1156] viXra:2106.0071 [pdf] submitted on 2021-06-12 18:39:56

CNN Based Backdrop Purging

Authors: Ashrith Appani
Comments: 11 Pages.

Backdrop Purging is a common pre-processing step in computer vision and video processing for object tracking, people recognition, and other tasks. Several successful background-subtraction algorithms have recently been proposed; however, nearly all of the best-performing ones are supervised. The availability of some annotated frames of the test video during training is critical to their performance. As a result, there is no literature on their performance on completely "unseen" videos. We provide a new supervised background-subtraction technique for unseen videos (BSUV-Net) based on a fully-convolutional neural network in this paper. The current frame and two background frames collected at various time scales, along with their semantic segmentation maps, are fed into our network. We also offer a new data-augmentation strategy that mitigates the influence of illumination differences between the background frames and the current frame in order to limit the risk of overfitting. In terms of F-measure, recall, and precision, BSUV-Net beats state-of-the-art algorithms evaluated on unseen videos of the CDNet-2014 dataset.
Category: Artificial Intelligence

[1155] viXra:2106.0040 [pdf] submitted on 2021-06-07 07:02:56

Vudoku - A Visual Sudoku Solver

Authors: Jovial Joe Jayarson
Comments: 3 Pages. Best paper award in NCGCE 21. Mr. Ebin PM is the author's guide.

It is no secret that AI is an upcoming titan. Even though people are stunned to hear that AI has been around for about a century, today, due to advances in computational methods and resources, AI peaks like never before. As a tiny glimpse into the field of digit recognition, this project aims to understand the underlying cogs and wheels on which neural networks spin. This paper tries to elucidate a project which solves a Sudoku puzzle drawn and written by hand. The paraphernalia for the project includes the programming language Python 3; the libraries OpenCV, Numpy and Keras; and the MNIST handwritten digit database as the dataset. Digit recognition is a classical problem which introduces neurons, neural networks, connections, hidden layers, weights, biases, activation functions like sigmoid, back-propagation and other related topics as well. The algorithm(s) employed in the project to solve Sudoku are also explored in this paper.
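A minimal Keras digit-recognition model of the kind such a project builds on might look as follows (an illustrative sketch, not the project's actual network):

    from tensorflow import keras

    # MNIST handwritten digit database, 28x28 grayscale images of digits 0-9
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="sigmoid"),   # hidden layer of neurons
        keras.layers.Dense(10, activation="softmax"),    # one output per digit
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)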


Category: Artificial Intelligence

[1154] viXra:2105.0176 [pdf] submitted on 2021-05-31 12:17:35

Gesture Classification using Machine Learning with Advanced Boosting Methods

Authors: Abdurrahim Yilmaz, Dilanur Bayraktar, Melih Akman, Cemre Sahinoglu, Huseyin Uvet
Comments: 3 Pages.

In this paper, a detailed study on gesture classification using a dataset from Kaggle, and on optimizing the dataset, is presented. The machine learning algorithms SGD, kNN, SVM, MLP, Gaussian Naive Bayes, Random Forest, LightGBM, XGBoost, and CatBoost are used to conduct the research. The results are compared with each other to conclude which models perform best in gesture classification. Except for the Gaussian Naive Bayes classifier, all methods resulted in high accuracy.
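A sketch of how such a comparison is typically run with scikit-learn (illustrative only; the feature matrix X and labels y are assumed to be loaded from the Kaggle dataset, and the boosting libraries would be added analogously):

    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    from sklearn.linear_model import SGDClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

    models = {
        "SGD": SGDClassifier(),
        "kNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "MLP": MLPClassifier(max_iter=500),
        "GaussianNB": GaussianNB(),
        "RandomForest": RandomForestClassifier(),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, accuracy_score(y_test, model.predict(X_test)))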
Category: Artificial Intelligence

[1153] viXra:2105.0141 [pdf] submitted on 2021-05-24 21:37:09

Ultimate AI-Memory I

Authors: Ruolin Jiu
Comments: 16 Pages.

A completely new learning rule for neural networks, similar to the learning rule in the brain and completely different from gradient descent. This learning rule is the foundation and the key of AI memory, and will open huge growth potential for Artificial Intelligence.
Category: Artificial Intelligence

[1152] viXra:2105.0138 [pdf] submitted on 2021-05-23 07:45:16

Neural Networks and Their Applications in Artificial Intelligence

Authors: Jan Helm
Comments: 45 Pages.

This paper presents in Part 1 the basic theory of Neural Networks and, based on the standard (global) backpropagation algorithm, introduces the local backpropagation algorithm: a layer-recurrent gradient algorithm with a layer-specific target vector. Furthermore, in Part 2, it presents calculated application examples for global backpropagation networks, local backpropagation networks and evolving cross-mutated networks.
Category: Artificial Intelligence

[1151] viXra:2105.0095 [pdf] submitted on 2021-05-17 12:53:33

Biochemistry Provides Inspiration for a New Kind of ai

Authors: J Gerard Wolff
Comments: 32 Pages.

This article is about the origin, development, and benefits of the "SP System" (SPS), which means the "SP Theory of Intelligence" and its realisation in the "SP Computer Model" (SPCM). The SPS is radically different from deep neural networks (DNNs), with many advantages compared with DNNs. As will be described, the SPS provides a promising foundation for the development of human-like broad AI. The SPS was inspired in part by: evidence for the importance of information compression in human learning, perception, and cognition; and the concept of "multiple sequence alignment" in biochemistry. That latter concept led to the development of the powerful concept of SP-multiple-alignment, a concept which is largely responsible for the intelligence-related versatility of the SPS. The main advantages of the SPS are: 1) The clear potential of the SPS to solve 19 problems in AI research; 2) Versatility of the SPS in aspects of intelligence, including unsupervised learning, and several forms of reasoning; 3) Versatility of the SPS in the representation and processing of knowledge; 4) Seamless integration of diverse aspects of intelligence and diverse forms of knowledge, in any combination, a kind of integration that appears to be necessary in any artificial system that aspires to the fluidity and adaptability of the human mind; 5) Several other potential benefits and applications of the SPS. It is envisaged that the SPCM will provide the basis for the development of a first version of the "SP Machine", with high levels of parallel processing and a user-friendly user interface. All software in the SP Machine would be open-source so that clones of the SP Machine may be created anywhere by individuals or groups, to facilitate further research and development of the SP System.
Category: Artificial Intelligence

[1150] viXra:2105.0084 [pdf] submitted on 2021-05-14 01:08:18

Designing an Electronic Mind Capable of Feeling, Thinking, Predicting, and Awareness

Authors: Milad Keramati
Comments: 5 Pages.

For a problem-facing agent, a situation can be categorized into different patterns, and action can be taken based on the available information (known as a method) as opposed to a simple value. Doing so will decrease the variety of situations and actions and, as a result, simplify the problem. Simple patterns and methods are generated at first, but by detecting important patterns and methods and creating similar ones, the agent will be able to better recognize the situation it is in and find better solutions for the patterns, and as a result systematically broaden its knowledge over time. By memorizing feelings (or rewards) and action results (situations) in a pattern, it is possible to build a tree of possible outcomes of an action related to a pattern and choose the action of the pattern that profits us the most by predicting future feelings and calculating their value, and we know the accuracy of our prediction based on the similarity (or consistency) and number of results (or confidence). I have also given my opinion and defined some standards regarding artificial intelligence, reinforcement learning, and agent design in this paper.
Category: Artificial Intelligence

[1149] viXra:2105.0033 [pdf] submitted on 2021-05-07 10:36:30

Generalized Quantum Evidence Theory on Interference Effect

Authors: Fuyuan Xiao
Comments: 5 Pages.

In this paper, CET is generalized to the quantum framework of Hilbert space in an open world, called generalized quantum evidence theory (GQET). Unlike classical GET, interference effects are involved in GQET. In particular, when a GQBBA turns into a classical GBBA, the interference effects disappear, so that the GQB and GQP functions of GQET degenerate to the classical GBel and GPl functions of classical GET, respectively.
Category: Artificial Intelligence

[1148] viXra:2104.0145 [pdf] submitted on 2021-04-24 01:23:39

On the Negation Intensity of a Basic Probability Assignment (Bpa)

Authors: Xiangjun Mi, Chongru Huang, Bingyi Kang
Comments: 29 Pages.

How to obtain negation knowledge is a crucial topic, especially in the field of artificial intelligence. Limited work has been done on the negation of a basic probability assignment (BPA), although negation itself has been studied in depth throughout the literature. However, the intensity level of negation enforcement has not yet been investigated. Moreover, the main characteristic of intelligent systems is precisely the flexibility to represent knowledge according to each situation. In general, researchers tend to express the need for cognitive range in the negation. Thus, it would seem very useful to find a wide range of negations under different intensity levels in a BPA. Based on these ideas, this paper first proposes a new approach to finding the negation of a BPA and gives a domain of intensity in which the negation is executed, which is called the negation space. Then, we investigate a number of desirable properties and explore their correlation with entropy. Numerical examples show the characteristics of the proposed negation solution. Finally, we validate the efficiency of the proposed method from the point of view of the Dempster-Shafer belief structure.
Category: Artificial Intelligence

[1147] viXra:2104.0111 [pdf] submitted on 2021-04-19 07:35:07

A Novel Conflict Management Considering the Optimal Discounting Weights Using the BWM Method in Dempster-Shafer Evidence Theory

Authors: Lingge Zhou, Xiangjun Mi, Chongru Huang, Yanan Li, Bingyi Kang
Comments: 39 Pages.

Dempster-Shafer evidence theory (DST) is an effective tool for data fusion. In this theory, how to handle conflicts between evidences is still a significant and open issue. In this paper, the best-worst method (BWM) is extended to conflict management in DST. Firstly, a way to determine the best and worst basic probability assignment (BPA) is proposed. Secondly, a novel strategy for determining the optimal weights of BPAs using the BWM method is developed. Compared to traditional measure-based conflict management methods, the proposed method has three better performances: (1) A consistency ratio is considered for BPAs to check the reliability of the comparisons, producing more reliable results. (2) The final fusion result has less uncertainty, which is more conducive to improving the performance of decision making. (3) The number of BPA comparisons performed during operation (in conflict management) is reduced (especially compared with matrix-based approaches). A practical application in motor rotor fault diagnosis is used to illustrate the effectiveness and practicability of the proposed methodology.
Category: Artificial Intelligence

[1146] viXra:2104.0069 [pdf] submitted on 2021-04-12 12:15:16

The Laws of AI

Authors: Egger Mielberg
Comments: 14 Pages.

The truly transparent and predictable work of the artificial intelligence being created can significantly improve the quality of human life, as well as its safety. In our opinion, the self-awareness of artificial intelligence is achievable only if it is independent in making any decision. We present three basic laws of artificial intelligence focused primarily on the possibility of their practical implementation.
Category: Artificial Intelligence

[1145] viXra:2104.0005 [pdf] submitted on 2021-04-03 21:35:10

Forcasting and Pattern Analysis of Dhaka Stock Market using LSTM and Phrophet Algorithm

Authors: Tanvir Rahman, Rafia Akhter, Kehinde Lawal, Shamim Ahmed Mazumder, Tamanna Afroz, Ataur Rahman
Comments: 3 Pages.

Forecasting or predicting stock market prices and trends has been regarded as a challenging task because of their chaotic nature. The stock market is essentially a non-linear, non-parametric, noisy, and deterministically chaotic system because of liquid money, stock adequacy, human behavior, news related to the stock market, gambling, international money rates, and so on. In a country like Bangladesh, it is very difficult to find any prediction of the stock market, especially the Dhaka stock market, because its trends and forecasting depend on various factors. Understanding the pattern of the stock market and predicting its development and changes are research hotspots in academic and financial circles. Because financial data contain complex, incomplete, and fuzzy information, predicting their development trends is an extremely difficult challenge. Fluctuations in financial data depend on a myriad of correlated, constantly changing factors. In this paper, financial product price data are treated as a one-dimensional series generated by the projection of a chaotic system composed of multiple factors into the time dimension, and the price series is reconstructed using the time series phase-space reconstruction (PSR) method. An RNN-based prediction model is designed based on the PSR method and long short-term memory networks (LSTMs) for deep learning and is used to predict stock prices, and for predicting the stock market data trend we use Facebook's open-source model Prophet. The proposed and some other prediction models are used to predict multiple stock indices for different periods. A comparison of the results shows that the proposed prediction model has higher prediction accuracy.
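A minimal sketch of the LSTM component (illustrative; the window length, layer sizes and the prices series are assumptions, and the Prophet trend model is omitted):

    import numpy as np
    from tensorflow import keras

    def make_windows(prices, lookback=30):
        # phase-space-style reconstruction: sliding windows of past prices
        X = np.array([prices[i:i + lookback] for i in range(len(prices) - lookback)])
        y = prices[lookback:]
        return X[..., None], y

    # prices: a 1-D numpy array of (scaled) closing prices from the Dhaka index
    X, y = make_windows(prices)

    model = keras.Sequential([
        keras.layers.LSTM(64, input_shape=(30, 1)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=20, batch_size=32, validation_split=0.1)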
Category: Artificial Intelligence

[1144] viXra:2103.0194 [pdf] submitted on 2021-03-31 17:29:46

UWB-GCN: an Accelerator of Graph-Convolution-Network with Runtime Task Autotuning

Authors: Tong Geng, Ang Li, Tianqi Wang, Chunshu Wu, Yanfei Li, Antonino Tumeo, Shuai Che, Steve Reinhardt, Martin Herbordt
Comments: 13 Pages.

The recent development of deep learning has mostly been focusing on Euclidean data, such as images, videos, and audio. However, most real-world information and relationships are often expressed in graphs. Graph convolutional networks (GCNs) appear as a promising approach to efficiently learn from graph data structures, showing advantages in several practical applications such as social network analysis, knowledge discovery, 3D modeling, and motion capturing. However, practical graphs are often extremely large and unbalanced, posing significant performance demands and design challenges on the hardware dedicated to GCN inference. In this paper, we propose an architecture design called Ultra-Workload-Balanced-GCN (UWB-GCN) to accelerate graph convolutional network inference. To tackle the major performance bottleneck of workload imbalance, we propose two techniques: dynamic local sharing and dynamic remote switching, both of which rely on hardware flexibility to achieve performance auto-tuning with negligible area or delay overhead. Specifically, UWB-GCN is able to effectively profile the sparse graph pattern while continuously adjusting the workload distribution among parallel processing elements (PEs). After converging, the ideal configuration is reused for the remaining iterations. To the best of our knowledge, this is the first accelerator design targeted to GCNs and the first work that auto-tunes workload balance in an accelerator at runtime through hardware, rather than software, approaches. Our methods can achieve near-ideal workload balance in processing sparse matrices. Experimental results show that UWB-GCN can finish the inference of the Nell graph (66K vertices, 266K edges) in 8.1ms, corresponding to speedups of 199x, 16x, and 7.5x, respectively, compared to the CPU, GPU, and the baseline GCN design without workload autotuning.
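For reference, the computation being accelerated is the standard GCN layer propagation, roughly H' = sigma(D^-1/2 (A + I) D^-1/2 H W). A dense illustrative sketch follows (the accelerator itself operates on large sparse matrices in hardware):

    import numpy as np

    def gcn_layer(A, H, W):
        # one graph-convolution layer on adjacency A, features H, weights W
        A_hat = A + np.eye(A.shape[0])                 # add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^-1/2
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        return np.maximum(0.0, A_norm @ H @ W)         # ReLU activation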
Category: Artificial Intelligence

[1143] viXra:2103.0185 [pdf] submitted on 2021-03-29 02:32:14

Hierarchical Relationship Alignment Metric Learning

Authors: Lifeng Gu
Comments: 5 Pages.

Most existing metric learning methods focus on learning a similarity or distance measure relying on similar and dissimilar relations between sample pairs. However, pairs of samples cannot be simply identified as similar or dissimilar in many real-world applications, e.g., multi-label learning, label distribution learning. To this end, the relation alignment metric learning (RAML) framework was proposed to handle the metric learning problem in those scenarios. But RAML learns a linear metric, which cannot model complex datasets. Combining deep learning with the RAML framework, we propose a hierarchical relationship alignment metric learning model, HRAML, which uses the concept of relationship alignment to model metric learning problems under multiple learning tasks, and makes full use of the consistency between the sample-pair relationship in the feature space and the sample-pair relationship in the label space. Further, we organize several experiments divided by learning task and verify the better performance of HRAML against many popular methods and the RAML framework.
Category: Artificial Intelligence

[1142] viXra:2103.0184 [pdf] submitted on 2021-03-29 02:37:54

Representation Learning by Ranking Under Multiple Tasks

Authors: Lifeng Gu
Comments: 9 Pages.

In recent years, representation learning has become the research focus of the machine learning community. Large-scale pre-trained neural networks have become the first step towards realizing general intelligence. The key to the success of neural networks lies in their ability to form abstract representations of data. Several learning fields are in effect discussing how to learn representations, yet a unified perspective is lacking. We convert the representation learning problem under multiple tasks into a ranking problem and, taking the ranking problem as a unified perspective, solve representation learning under different tasks by optimizing an approximate NDCG loss. Experiments on different learning tasks such as classification, retrieval, multi-label learning, regression, and self-supervised learning prove the superiority of the approximate NDCG loss. Further, under the self-supervised learning task, the training data are transformed by a data augmentation method to improve the performance of the approximate NDCG loss, which proves that the approximate NDCG loss can make full use of the information in unsupervised training data.
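For reference, the exact (non-differentiable) NDCG that the approximate loss targets can be sketched as follows (an illustration of the metric only; the paper's smoothed, trainable version differs):

    import numpy as np

    def dcg(relevances):
        # discounted cumulative gain of a ranked list of relevance scores
        relevances = np.asarray(relevances, dtype=float)
        discounts = np.log2(np.arange(2, len(relevances) + 2))
        return np.sum((2.0 ** relevances - 1.0) / discounts)

    def ndcg(relevances):
        # normalize by the DCG of the ideal (descending) ordering
        ideal = dcg(sorted(relevances, reverse=True))
        return dcg(relevances) / ideal if ideal > 0 else 0.0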
Category: Artificial Intelligence

[1141] viXra:2103.0174 [pdf] submitted on 2021-03-28 21:30:36

Explaining Representation by Mutual Information

Authors: Lifeng Gu
Comments: 11 Pages.

Science is used to discover the laws of the world. Machine learning can be used to discover the laws of data. In recent years, there has been more and more research about interpretability in the machine learning community. We hope that machine learning methods are safe and interpretable, and that they can help us find meaningful patterns in data. In this paper, we focus on the interpretability of deep representations. We propose an interpretable method of representation based on mutual information, which summarizes the interpretation of a representation into three types of information between the input data and the representation. We further propose the MI-LR module, which can be inserted into a model to estimate the amount of information that explains the model's representation. Finally, we verify the method through the visualization of a prototype network.
Category: Artificial Intelligence

[1140] viXra:2103.0148 [pdf] submitted on 2021-03-23 06:29:02

New Ordinal Relative Fuzzy Entropy

Authors: Yuanpeng He, Yong Deng
Comments: 32 Pages.

In real life, occurrences of a series of things are supposed to come in an order. Therefore, it is necessary to regard sequence as a crucial factor in managing different kinds of things in a fuzzy environment. However, few related studies have provided a reasonable solution to this demand. Therefore, how to measure the degree of uncertainty of ordinal fuzzy sets is still an open issue. To address this issue, a novel ordinal relative fuzzy entropy is proposed in this paper, taking the order of propositions into consideration when measuring the level of uncertainty in a fuzzy environment. Compared with previously proposed entropies, the effects on the degree of fuzzy uncertainty brought by the sequence of sequential propositions are embodied in the values produced by the method proposed in this article. Moreover, some numerical examples are offered to verify the correctness and validity of the proposed entropy.
Category: Artificial Intelligence

[1139] viXra:2103.0135 [pdf] submitted on 2021-03-20 20:03:20

A Deep CNN Based Approach for Liveness Detection in Maritime Digital Kyc Processes

Authors: Narayanan Arvind, Saravanan Mugund, Avinash Kumar Singh
Comments: 6 Pages. Presented at Samudramanthan 2021, Indian Institute of Technology Kharagpur

Maritime digital KYC processes are susceptible to various face spoofing attacks. When an unauthorized person tries to enter the authentication system by presenting a fraudulent image and/or video, it is termed a spoofing attack. Face anti-spoofing has typically been approached with texture-based models (e.g. Local Binary Patterns) combined with machine learning (e.g. KNN) approaches. The aim of this study is to build a robust face anti-spoofing system using deep convolutional neural networks for maritime digital KYC processes. The research is based on analyzing the features of genuine and fake images. We use the freely available NUAA photograph imposter database for our face anti-spoofing study. The database has 7500 and 5100 labelled imposter and client face images, respectively. We split the dataset into train and test sets with an 80%-20% split ratio using stratified sampling. 2D convolutional layers combined with 2D MaxPooling layers, followed by Flattening and Dense layers, are employed for our deep network architecture. The research is carried out using the scikit-learn and Keras open-source libraries for Python. The training accuracy of the reported model is 100% and the testing accuracy is 99.92%. The accuracy of our present deep learning approach surpasses the accuracy of all the models available in the literature.
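A sketch of such an architecture in Keras (the filter counts and input size are illustrative guesses, not the paper's exact configuration):

    from tensorflow import keras
    from tensorflow.keras import layers

    # Conv2D + MaxPooling2D blocks followed by Flatten and Dense layers,
    # as described above; binary output for client vs imposter
    model = keras.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])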
Category: Artificial Intelligence

[1138] viXra:2103.0095 [pdf] submitted on 2021-03-15 20:31:15

Pneumonia Detection Using X-Ray Image Processing Using CNN

Authors: Tanvir Rahman
Comments: 3 Pages.

Pneumonia is a life-threatening infectious disease affecting one or both lungs in humans, commonly caused by bacteria called Streptococcus pneumoniae. The present study aimed to examine the risk factors for death due to pneumonia in young children. At least one in three deaths in Asia is caused by pneumonia, as reported by the World Health Organization (WHO). Chest X-rays, which are used to diagnose pneumonia, need expert radiologists for evaluation. Thus, developing an automatic system for detecting pneumonia would be beneficial: it could save many lives and help diagnose and treat the disease without any delay, particularly in remote areas. Due to the success of deep learning algorithms in analyzing medical images, Convolutional Neural Networks (CNNs) have gained much attention for disease classification. In addition, features learned by pre-trained CNN models on large-scale datasets are very useful in image classification tasks. In this work, we appraise the functionality of pre-trained CNN models utilized as feature extractors followed by different classifiers for the classification of abnormal and normal chest X-rays. We analytically determine the optimal CNN model for the purpose. The statistical results obtained demonstrate that pre-trained CNN models employed along with supervised classifier algorithms can be very beneficial in analyzing chest X-ray images, specifically to detect pneumonia.
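A sketch of the general recipe, pre-trained CNN features followed by a supervised classifier (the backbone, classifier and data variables below are illustrative assumptions, not the paper's chosen setup):

    from tensorflow import keras
    from sklearn.linear_model import LogisticRegression

    # pre-trained CNN (here VGG16 on ImageNet) used only as a feature extractor
    backbone = keras.applications.VGG16(weights="imagenet", include_top=False,
                                        pooling="avg")

    def extract_features(images):
        # images: float array of shape (n, 224, 224, 3), already resized
        x = keras.applications.vgg16.preprocess_input(images.copy())
        return backbone.predict(x)

    # train_images / train_labels: chest X-rays labelled normal vs pneumonia
    clf = LogisticRegression(max_iter=1000)
    clf.fit(extract_features(train_images), train_labels)
    print(clf.score(extract_features(test_images), test_labels))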
Category: Artificial Intelligence

[1137] viXra:2103.0056 [pdf] submitted on 2021-03-11 16:49:40

Covid 19 and General Pneumonia Detection from X Ray Image Using Deep Learning Approach

Authors: Khosnur Alam, Rima Akter
Comments: 8 Pages.

On December 31, 2019, a new virus started spreading in Wuhan, China. Now, in April 2020, the world has seen the worst pandemic of the century. The World Health Organization tells everybody to test and test, but tests are very rare and costly for third-world countries. A cheap and easier testing method is now badly required for countries like Bangladesh, so we want to develop a computer-based detection system that can identify Covid-19 patients in a fast and easy way. The chest X-ray images of Covid-19 patients are similar to those of pneumonia patients. The proposed system can separate Covid-19 X-ray images from pneumonia. The main objective of this research is to develop a system that can detect Covid-19 and pneumonia from X-ray images using a deep learning approach.
Category: Artificial Intelligence

[1136] viXra:2103.0045 [pdf] submitted on 2021-03-06 21:17:03

Effective Listing Spam Detection System using Locality Sensitive Hashing at Scale

Authors: Chandan Maloo, Akhil Kaza
Comments: 4 Pages.

The popularity, cost-effectiveness and ease of buying and selling that marketplaces like Craigslist and Offerup offer to users have been plagued by a rising number of unsolicited spam listings and fraudulent transactions, and in some extreme cases law enforcement also needs to be involved. Driven by the need to protect Offerup users from this growing menace, research in spam and fraud listing filtering/detection systems has been increasingly active in the last decade. However, the adaptive nature of scammers and fraudsters has often rendered most of these systems ineffective. While several spam detection models have been reported in the literature, the reported performance on out-of-sample test data shows room for improvement. Presented in this research is an improved spam detection model based on the Locality Sensitive Hashing algorithm, which to the best of our knowledge has received little attention in spam/fraud detection problems. Experimental results show that the proposed model outperforms earlier approaches across a wide range of evaluation metrics inside Offerup.
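A MinHash-style sketch of the underlying idea (a generic illustration, not the production system): near-duplicate listing texts produce signatures whose agreement approximates their Jaccard similarity, so templated spam listings score close to 1.0.

    import hashlib

    def shingles(text, k=5):
        # character k-shingles of a normalized listing text
        text = " ".join(text.lower().split())
        return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

    def minhash_signature(text, num_hashes=64):
        # one cheap hash family: salt each shingle with the hash index
        sh = shingles(text)
        return [min(int(hashlib.md5(f"{i}:{s}".encode()).hexdigest(), 16) for s in sh)
                for i in range(num_hashes)]

    def estimated_jaccard(sig_a, sig_b):
        # fraction of matching minhashes approximates the Jaccard similarity
        return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)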
Category: Artificial Intelligence

[1135] viXra:2102.0024 [pdf] submitted on 2021-02-04 01:42:15

Large Scale Patient Pooling for Drug Discovery, Pharmacovigilance Investigations and Precision Medicines.

Authors: Klevinda Fili, Kanishk Dwivedi
Comments: 6 Pages.

Patient pooling has been a major problem in the field of drug discovery and drug investigation. Even more daunting is providing a large-scale solution for the classification of diseases and finding the side effects of personalised or precision medicines by clustering the pool and finding similar investigations for pharmacovigilance, drug discovery and precision medicine. This can be solved by generating patterns through machine learning and deep learning models to find common pools of similar patterns and diagnoses from clusters, and distributing the result through a mobile application for large-scale patient clustering. This method is presented for precision medicine, pharmacovigilance and drug discovery. Patients' raw data are processed for classification and for personalised medicine. Patients' collective information, stored in database warehouses for clustering and with advanced machine learning models applied to it, will help in pharmacovigilance and provide early information regarding demographic disease epidemics. Clustering of patient diagnoses can help to find patterns for drug discovery with respect to geographical location and similar characteristics, which has been found effective and will reduce the time spent in drug discovery.
Category: Artificial Intelligence

[1134] viXra:2101.0168 [pdf] submitted on 2021-01-27 06:10:38

Recent Trends in Named Entity Recognition (NER)

Authors: Arya Roy
Comments: 27 Pages.

The availability of large amounts of computer-readable textual data and hardware that can process the data has shifted the focus of knowledge projects towards deep learning architectures. Natural Language Processing, particularly the task of Named Entity Recognition, is no exception. The bulk of the learning methods that have produced state-of-the-art results have changed the deep learning model, the training method used, the training data itself, or the encoding of the output of the NER system. In this paper, we review significant learning methods that have been employed for NER in the recent past and how they came about from the linear learning methods of the past. We also cover the progress of related tasks that are upstream or downstream of NER, e.g. sequence tagging, entity linking, etc., wherever the processes in question have also improved NER results.
Category: Artificial Intelligence

[1133] viXra:2101.0163 [pdf] submitted on 2021-01-26 20:22:30

Forecasting Stock Market Price Using Multiple Machine Learning Technique

Authors: Tanvir Rahman, Rafia Akhter
Comments: 5 Pages.

The stock market is an emerging sector in any country in the world, and many people are directly related to this sector. Stock market prediction is the act of trying to determine the future value of a company stock or another financial instrument. When companies are publicly traded, they issue shares of stock to investors, and every one of those shares is assigned a monetary value or price. Stock prices can go up or down depending on different factors, including volatility in the market, current economic conditions, and the popularity of the company. The successful prediction of a stock's future price could yield a significant profit. Along with the development of the stock market, forecasting has become an important topic. Since the finance market has become more and more competitive, stock price prediction has been a hot research topic in the past few decades. Predicting stock prices is regarded as a challenging task because the stock market is essentially a nonlinear, non-parametric, noisy, and chaotic system. The trend of a market depends on many things, like liquid money, human behavior, news related to the stock market, etc. All of this together controls the behavior of trends in a stock market. With the advancement of computing technology, we use machine learning techniques, like Support Vector Regression, K-nearest neighbors, Linear Regression, and Random Forest Regression, for analyzing time-series data to predict stock prices. In this paper, we try to develop a forecasting model by stacking multiple methods to find the best forecast of the stock price.
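A sketch of stacking the named regressors with scikit-learn (illustrative only; the feature construction, the final estimator, and the train/test split are assumptions):

    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.svm import SVR
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.linear_model import LinearRegression

    # X_train / y_train: lagged price features and next-day closing prices
    stack = StackingRegressor(
        estimators=[
            ("svr", SVR()),
            ("knn", KNeighborsRegressor()),
            ("rf", RandomForestRegressor(n_estimators=200)),
        ],
        final_estimator=LinearRegression(),
    )
    stack.fit(X_train, y_train)
    forecast = stack.predict(X_test)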
Category: Artificial Intelligence

[1132] viXra:2101.0122 [pdf] submitted on 2021-01-20 07:03:55

Simplifying Object Segmentation with PixelLib Library

Authors: Ayoola Olafenwa
Comments: 6 Pages. "Simplifying Object Segmentation with PixelLib Library" was accepted for poster presentation at Black IN AI Workshop(Neurips2020).

PixelLib is a library created to allow easy implementation of object segmentation in real life applications. In this paper we discussed in detail how PixelLib makes it possible for developers to implement semantic segmentation, instance segmentation, and background editing in images and videos with great simplification.
Category: Artificial Intelligence

[1131] viXra:2101.0115 [pdf] submitted on 2021-01-18 04:51:58

CNN Based Common Approach to Handwritten Character Recognition of Multiple Scripts

Authors: Durjoy Sen Maitra, Ujjwal Bhattacharya, SK Parui
Comments: 5 Pages. Paper published in ICDAR 2015

There are many scripts in the world, several of which are used by hundreds of millions of people. Handwritten character recognition studies of several of these scripts are found in the literature. Different hand-crafted feature sets have been used in these recognition studies. However, the convolutional neural network (CNN) has recently been used as an efficient unsupervised feature vector extractor. Although such a network can be used as a unified framework for both feature extraction and classification, it is more efficient as a feature extractor than as a classifier. In the present study, we performed a certain amount of training of a 5-layer CNN for a moderately large class character recognition problem. We used this CNN, trained for a larger class recognition problem, towards feature extraction of samples of several smaller class recognition problems. In each case, a distinct Support Vector Machine (SVM) was used as the corresponding classifier. In particular, the CNN of the present study is trained using samples of a standard 50-class Bangla basic character database, and features have been extracted for 5 different 10-class numeral recognition problems of English, Devanagari, Bangla, Telugu and Oriya, each of which is an official Indian script. Recognition accuracies are comparable with the state-of-the-art.
Category: Artificial Intelligence

[1130] viXra:2101.0089 [pdf] submitted on 2021-01-14 12:47:14

Introduction to CAT4: Part 1. Axioms

Authors: Andrew Holster
Comments: 36 Pages. [Corrections made by viXra Admin to conform with scholarly norm]

CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. It is based on a special type of relation called CAT4, which is interpreted to provide a semantic representation. This is Part 1 of a five-part introduction. The focus here is on defining the key mathematical structures first, and presenting the semantic-database application in subsequent Parts. We focus in Part 1 on general axioms for the structures, and introduce key concepts. Part 2 analyses the CAT2 sub-relation of CAT4 in more detail. The interpretation of fact networks is introduced in Part 3, where we turn to interpreting semantics. We start with examples of relational and graph databases, with methods to translate them into CAT3 networks, with the aim of retaining the meaning of information. The full application to semantic theory comes in Part 4, where we introduce general functions, including the language interpretation or linguistic functions. The representation of linear symbolic languages, including natural languages and formal symbolic languages, is a function that CAT4 is uniquely suited to. In Part 5, we turn to software design considerations, to show how files, indexes, functions and screens can be defined to implement a CAT4 system efficiently.
Category: Artificial Intelligence

[1129] viXra:2101.0088 [pdf] submitted on 2021-01-14 12:53:01

Introduction to Cat4: Part 2. Cat2

Authors: Andrew Holster
Comments: 56 Pages. [Corrections made by viXra Admin to conform with scholarly norm]

CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. It is based on a special type of relation called CAT4, which is interpreted to provide a semantic representation. This is Part 2 of a five-part introduction. The focus here is on defining key mathematical properties of CAT2, identifying the topology and defining essential functions over a coordinate system. The analysis is from first principles. This develops on from the axioms introduced in Part 1. The interpretation of fact networks is introduced in Part 3, and the full application to semantic theory comes in Part 4, where we introduce general functions, including the language interpretation or linguistic functions. In Part 5, we turn to software design considerations, to show how files, indexes, functions and screens can be defined to implement a CAT4 system efficiently.
Category: Artificial Intelligence

[1128] viXra:2012.0224 [pdf] submitted on 2020-12-31 11:23:18

Quantum Algorithm of Dempster Combination Rule

Authors: Lipeng Pan, Xiaozhuan Gao, Yong Deng
Comments: 11 Pages.

The Dempster combination rule is widely used in many applications such as information fusion and decision making. However, the computational complexity of the Dempster combination rule increases exponentially with the size of the frame of discernment. To address this issue, we propose a quantum algorithm for the Dempster combination rule based on quantum theory. The algorithm not only realizes most of the functions of the Dempster combination rule, but also effectively reduces its computational complexity on future quantum computers. Meanwhile, we carried out a simulation experiment on IBM's quantum cloud platform, and the experimental results showed that the algorithm is reasonable.
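For reference, the classical rule whose cost the quantum algorithm targets combines two mass functions over all pairs of focal elements, which is what becomes expensive as the frame of discernment grows (a generic sketch of the classical rule, not the quantum circuit):

    from itertools import product

    def dempster_combine(m1, m2):
        # classical Dempster combination of two mass functions; focal elements
        # are frozensets over the frame of discernment
        combined, conflict = {}, 0.0
        for (A, a), (B, b) in product(m1.items(), m2.items()):
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b                    # mass assigned to conflict
        return {A: v / (1.0 - conflict) for A, v in combined.items()}

    m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
    m2 = {frozenset("b"): 0.3, frozenset("ab"): 0.7}
    print(dempster_combine(m1, m2))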
Category: Artificial Intelligence

[1127] viXra:2012.0207 [pdf] submitted on 2020-12-28 04:20:19

A Generalization of Quantum Mass Function: Quaternion Mass Function and the Distance of it

Authors: Yuanpeng He, Fuyuan Xiao
Comments: 2 Pages.

To handle uncertainties and process complex information from different sources, the quantum mass function has been proposed as an efficient method to address these issues. On the basis of the quantum mass function, many methods have been designed to indicate the differences among quantum evidences. Nevertheless, they are developed within quantum evidence theory to process traditional basic probability assignments (QBPAs) and are not applicable to measuring quaternion BPAs (QTBPAs). Therefore, in this paper, a specifically customized method is proposed for the generalized form of the quantum mass function, namely the quaternion mass function, to accurately demonstrate the distances among disparate pieces of evidence given as QTBPAs (QED). Moreover, it is a pioneering investigation of the differences between pieces of evidence in the plane space of quaternions, which is reliable and strictly satisfies the axioms of distance. Besides, if QTBPAs degenerate into QBPAs, QED also degenerates into the quantum evidential distance, which indicates the consistency of this new standard for measuring distances. Consequently, QED is derived from the quantum evidential distance and possesses an extensive capability to indicate dissimilarities among QTBPAs. Several numerical examples are offered to check the validity and practical availability of QED.
Category: Artificial Intelligence

[1126] viXra:2012.0142 [pdf] submitted on 2020-12-19 11:21:13

Predicting Year of Plantation with Hyperspectral and Lidar Data

Authors: Adrià Descals, Luis Alonso, Gustau Camps-Valls
Comments: 4 Pages.

This paper introduces a methodology for predicting the year of plantation (YOP) from remote sensing data. The application has important implications in forestry management and inventorying. We exploit hyperspectral and LiDAR data in combination with state-of-the-art machine learning classifiers. In particular, we present a complete processing chain to extract spectral, textural and morphological features from both sensory data sources. Features are then combined and fed to a Gaussian Process Classifier (GPC) trained to predict YOP in a forest area in North Carolina (US). The GPC algorithm provides accurate YOP estimates, reports spatially explicit maps and associated confidence maps, and provides sensible feature rankings.
Category: Artificial Intelligence

[1125] viXra:2012.0141 [pdf] submitted on 2020-12-19 11:23:27

Passive Millimeter Wave Image Classification with Large Scale Gaussian Processes

Authors: Pablo Morales, Adrián Pérez-Suay, Rafael Molina, Gustau Camps-Valls, Aggelos K. Katsaggelos
Comments: 5 Pages.

Passive Millimeter Wave Images (PMMWIs) are being increasingly used to identify and localize objects concealed under clothing. Taking into account the quality of these images and the unknown position, shape, and size of the hidden objects, large data sets are required to build successful classification/detection systems. Kernel methods, in particular Gaussian Processes (GPs), are sound, flexible, and popular techniques to address supervised learning problems. Unfortunately, their computational cost is known to be prohibitive for large scale applications. In this work, we present a novel approach to PMMWI classification based on the use of Gaussian Processes for large data sets. The proposed methodology relies on linear approximations to kernel functions through random Fourier features. Model hyperparameters are learned within a variational Bayes inference scheme. Our proposal is well suited for real-time applications, since its computational cost at training and test times is much lower than the original GP formulation. The proposed approach is tested on a unique, large, and real PMMWI database containing a broad variety of sizes, types, and locations of hidden objects.
Category: Artificial Intelligence

[1124] viXra:2012.0092 [pdf] submitted on 2020-12-11 21:22:56

Intelligence - Consider This and Respond

Authors: Saty Raghavachary
Comments: 10 Pages.

Regarding intelligence as a ‘considered response’ phenomenon is the key notion that is presented in this paper. Applied to human-level intelligence, it seems to be a useful definition that can lend clarity to the following related aspects as well: mind, self/I, awareness, self-awareness, consciousness, sentience, thoughts and feelings, free will, perception, attention, cognition, expectation, prediction, learning. Also, embodiment is argued to be an essential component of an AGI’s agent architecture, in order for it to attain grounded cognition, a sense of self and social learning - via direct physical experience and mental processes, all based on considered response.
Category: Artificial Intelligence

[1123] viXra:2012.0064 [pdf] submitted on 2020-12-09 09:08:40

Fast Invertible Rescaling Net

Authors: Junjae Lee
Comments: 8 Pages.

Invertible Rescaling Net (IRN) models the downscaling and upscaling process with Invertible Neural Networks (INN), instead of the traditional single-image super-resolution (SISR) approach to upscaling. As a result, it shows significantly improved performance over previous methods. However, apart from its high performance, IRN requires a lot of computation. Hence, to improve this, we replace the existing dense block with a Pixel Attention Distillation Block (PADB). In addition, we use the Charbonnier loss instead of the Mean Absolute Error (MAE) for the reconstruction loss. Through these improvements, we trade off some of the performance of the existing architecture for speed, and achieve higher performance than lightweight SR models that use the conventional method. In addition, by improving the perceptual and adversarial losses, we achieve more perceptually satisfactory results than the model using the IRN+ method.
Category: Artificial Intelligence

[1122] viXra:2012.0058 [pdf] submitted on 2020-12-08 19:58:30

Detecting Insincere Questions from Text: A Transfer Learning Approach.

Authors: Ashwin Rachha, Gaurav Vanmane
Comments: 7 Pages.

The internet today has become an unrivalled source of information where people converse on content based websites such as Quora, Reddit, StackOverflow and Twitter, asking questions and sharing knowledge with the world. A major arising problem with such websites is the proliferation of toxic comments or instances of insincerity, wherein users, instead of maintaining a sincere motive, indulge in spreading toxic and divisive content. The straightforward course of action in confronting this situation is detecting such content beforehand and preventing it from subsisting online. In recent times, transfer learning in Natural Language Processing has seen unprecedented growth. Today, with the existence of transformers and various state-of-the-art innovations, tremendous growth has been made in various NLP domains. The introduction of BERT has caused quite a stir in the NLP community: when published, BERT dominated performance benchmarks and thereby inspired many other authors to experiment with it and publish similar models. This led to the development of a whole BERT family, each member being specialized on a different task. In this paper we solve the Insincere Questions Classification problem by fine tuning four cutting-edge models, viz. BERT, RoBERTa, DistilBERT and ALBERT.
Category: Artificial Intelligence

[1121] viXra:2012.0051 [pdf] submitted on 2020-12-08 09:02:26

Theoretical Model for an Approximate One Step Forecasting Scheme

Authors: Ramesh Chandra Bagadi
Comments: 16 Pages.

In this research investigation, the authors present a detailed theoretical model for an approximate one-step forecasting scheme. Firstly, the authors coin notions of Similarity and Dissimilarity. The authors then coin a notion of a causal one-step forecast for any given sequence. In parallel, the authors define concepts of a Higher Order Sequence of Primes and an RL Normalization Scheme, based on which alternate, better formulae for a one-step forecast for any given sequence are derived.
Category: Artificial Intelligence

[1120] viXra:2012.0048 [pdf] submitted on 2020-12-08 08:11:02

Randomized RX for Target Detection

Authors: Fatih Nar, Adrián Pérez-Suay, José Antonio Padrón, Gustau Camps-Valls
Comments: 4 Pages.

This work tackles the target detection problem through the well-known global RX method. The RX method models the clutter as a multivariate Gaussian distribution, and has been extended to nonlinear distributions using kernel methods. While the kernel RX can cope with complex clutters, it requires a considerable amount of computational resources as the number of clutter pixels gets larger. Here we propose random Fourier features to approximate the Gaussian kernel in kernel RX; consequently our development keeps the accuracy of the nonlinearity while reducing the computational cost, which is now controlled by a hyperparameter. Results over both synthetic and real-world image target detection problems show the space and time efficiency of the proposed method while providing high detection performance.
Category: Artificial Intelligence
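
As a rough illustration of the random Fourier feature idea the paper builds on (not the authors' exact implementation), the sketch below maps data to an explicit feature space whose inner products approximate the Gaussian kernel, so the clutter statistics used by RX can be estimated at linear cost in the number of pixels; D and sigma are illustrative parameters.

    import numpy as np

    def rff_features(X, D=200, sigma=1.0, seed=0):
        # Approximate k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)) with
        # z(x)^T z(y), where z uses D random Fourier features.
        rng = np.random.default_rng(seed)
        W = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], D))  # spectral samples
        b = rng.uniform(0.0, 2 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    # With Z = rff_features(pixels), the clutter statistics needed by RX can be
    # estimated from Z.T @ Z / N, a D x D matrix, instead of an N x N Gram matrix.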

[1119] viXra:2012.0025 [pdf] submitted on 2020-12-06 12:32:48

A New Theoretical and Technological System of Imprecise-Information Processing

Authors: Shiyou Lian
Comments: 19 Pages.

Imprecise-information processing will play an indispensable role in intelligent systems, especially in anthropomorphic intelligent systems (such as human-machine dialogue and intelligent robots). Traditionally, fuzzy set theory is used to deal with imprecise information, but it has some important theoretical and technical problems that have not been solved very well. Recently, a new theoretical and technological system of imprecise-information processing has been founded (see literature [1]) which is different from fuzzy technology. The system results from the formation principle of imprecise information and has solid mathematical and logical bases, so it has many advantages beyond fuzzy technology. The system provides a technological platform for relevant applications and lays a theoretical foundation for further research.
Category: Artificial Intelligence

[1118] viXra:2012.0023 [pdf] submitted on 2020-12-04 22:56:49

A VR-Based System and Architecture for Computational Modeling of Minds

Authors: Saty Raghavachary, Lurong Lei
Comments: 9 Pages.

Computational modeling of natural cognition is a crucial step towards achieving the grand goal of human-level computational intelligence. Successful ideas from existing models, and possibly newer ones, could be assembled to create a unified computational framework (e.g. the Standard Model of the Mind, which attempts to unify three leading cognitive architectures) - this would be of great use in AI, robotics, neuroscience and cognitive science. This short position paper proposes the following: a VR-based system provides the most expedient, scalable and visually verifiable way to implement, test and refine a cognitive mind model (which would always be embodied in a character in a virtual world). Such a setup is discussed in the paper, including advantages and drawbacks over alternative implementations.
Category: Artificial Intelligence

[1117] viXra:2011.0190 [pdf] submitted on 2020-11-27 10:32:47

Automatic Traffic Surveillance System Utilizing Object Detection and Image Processing

Authors: Deval Srivastava, Saim Shaikh, Priyank Shah
Comments: 8 Pages.

In our day and age, the number of cars on the road is rapidly increasing, causing ever more traffic. Drivers are becoming more reckless and carefree as the burden on the current human and automated systems grows. Drivers and bikers who wish to save a few minutes may run red lights or avoid wearing helmets, but these small actions can have a significant impact and can result in the loss of lives. We propose a system that intelligently uses deep learning-based object detection to identify traffic offenders and provides methods to penalize them by recognizing their number plates. Our system is able to detect traffic light violators and bikers without helmets. It has been designed to be robust enough to work in drastic conditions and intelligent enough to reduce human dependence.
Category: Artificial Intelligence

[1116] viXra:2011.0179 [pdf] submitted on 2020-11-26 07:36:23

On the Belief Coulomb Force

Authors: Chenchen Lin, Xiangjun Mi, Bingyi Kang
Comments: 17 Pages.

Conflict management is a key issue in D-S evidence theory (DST) and has been the focus of many researchers. However, there has been a lack of discussion about whether the evidence should be fused at all. In this paper, within the frame of DST and inspired by the belief universal gravitation [1], we propose a concept of belief Coulomb force (BCF) to focus on whether or not the evidence should be fused. It aims to discuss the elimination of conflicts in the information fusion process from the perspective of electricity, which may provide a new idea for solving the problem of conflicting evidence. An application is used to show that conflict management is handled better with the proposed BCF than with previous methods.
Category: Artificial Intelligence

[1115] viXra:2011.0129 [pdf] submitted on 2020-11-16 18:11:19

An Attempt to Decrypt Pages 269-271 of Jonathan Safran Foer's "Extremely Loud & Incredibly Close"

Authors: Yannis Haralambous
Comments: 33 Pages.

In this paper we attempt to decrypt the sequence of digits given by Jonathan Safran Foer in his novel Extremely Loud & Incredibly Close. We create directed acyclic graphs that a human can follow to find potential solutions. Representations of these graphs are displayed in this paper. The Python code used to produce them is also provided, in the appendix.
Category: Artificial Intelligence

[1114] viXra:2011.0068 [pdf] submitted on 2020-11-10 10:09:21

TRSM-RS: A Movie Recommender System Based on Users’ Gender and New Weighted Similarity Measure

Authors: Mostafa Khalaji
Comments: 11 Pages. 17th Iran Media Technology Exhibition and Conference, Tehran, Iran, November 2020

With the growing data on the Internet, recommender systems have been able to predict users' preferences and offer related movies. Collaborative filtering is one of the most popular algorithms in these systems. The main purpose of collaborative filtering is to find similar users or items using the rating matrix. As the number of users and items increases, this algorithm suffers from the scalability problem. On the other hand, because a large number of user preferences for different items is unavailable, there is a cold-start problem for new users or items that has a significant impact on system performance. The purpose of this paper is to design a movie recommender system named TRSM-RS using users' demographic information (just users' gender) along with a new weighted similarity measure. By segmenting users based on their gender, the scalability problem is improved, and by considering the reliability of the users' similarity as the weight in the new similarity measure (Tanimoto Reliability Similarity Measure, TRSM), the effect of the cold-start problem is mitigated and the performance of the system is improved. Experiments were performed on the MovieLens dataset and the system was evaluated using mean absolute error (MAE), Accuracy, Precision and Recall metrics. The results of the experiments indicate improved performance (accuracy and precision) and error rate compared to other researchers' methods. The maximum improvement in MAE for men and women is 5.5% and 13.8%, respectively.
Category: Artificial Intelligence

[1113] viXra:2010.0225 [pdf] submitted on 2020-10-28 07:50:55

FUSIONET: A Scalable Framework for Image Classification

Authors: Molokwu C. Reginald, Molokwu C. Bonaventure, Molokwu C. Victor, Okeke C. Ogochukwu
Comments: 8 Pages.

Convolutional Neural Networks (CNNs) have become state-of-the-art methods for image classification in recent times. CNNs have proven to be very productive in identifying objects and human faces, and in powering machine vision in robots as well as self-driving cars. At this point, they perform better than human subjects on a large number of image datasets. A large portion of these datasets depends on the idea of solid classes. Hence, image classification has become an exciting and appealing domain in Artificial Intelligence (AI) research. In this paper, we propose a unique framework, FUSIONET, to aid in image classification. Our proposition utilizes a combination of two novel models running in parallel (MainNET, a 3 x 3 architecture, and AuxNET, a 1 x 1 architecture). Successively, the resulting feature maps extracted from this combination are fed as input features to a downstream classifier for classification of the images in question. FUSIONET has been trained, tested, and evaluated on real-world datasets, achieving state-of-the-art results on the popular CINIC-10 dataset.
Category: Artificial Intelligence

[1112] viXra:2010.0220 [pdf] submitted on 2020-10-28 08:11:32

An Empirical Study of Deep Web based on Graph Analysis

Authors: Md Monzur Morshed
Comments: 11 Pages. This is a research proposal [Correction made by viXra Admin]

The internet can broadly be divided into three parts: surface, deep and dark, among which the latter offers anonymity to its users and hosts [1]. The Deep Web refers to an encrypted network that is not indexed by search engines such as Google [2]. Users must use Tor to visit sites on the dark web [2]. Ninety-six percent of the web is considered deep web because it is hidden. It is like an iceberg: people can only see the small portion above the surface, while the largest part is hidden under the sea [3, 4, 5]. Basic methods of graph theory and data mining that deal with social network analysis can be used comprehensively to understand the Deep Web and detect cyber threats [6]. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanisms and tools to monitor it. In this proposed study, our focus will be to develop a standard research mechanism to understand the Deep Web which will support researchers, academicians and law enforcement agencies in strengthening social stability and ensuring peace locally and globally.
Category: Artificial Intelligence

[1111] viXra:2010.0147 [pdf] submitted on 2020-10-19 19:41:58

A Genetic Algorithm and Discriminant Analysis Based Outlier Detector

Authors: Eren Unlu
Comments: 4 Pages.

Fisher Discriminant Analysis (FDA), also known as Linear Discriminant Analysis (LDA), is a simple yet highly effective tool for classification across a wide variety of datasets and settings. In this paper, we propose to leverage the discriminative potency of FDA for an unsupervised outlier detection algorithm. Unsupervised anomaly detection has been a topic of high interest in the literature due to its numerous practical applications and the fuzzy nature of subjective interpretations of success; it is therefore important to have different types of algorithms which can deliver distinct perspectives. The proposed method selects the subset of outlier points based on the maximization of the LDA distance to the class of non-outliers via a genetic algorithm.
Category: Artificial Intelligence

[1110] viXra:2010.0078 [pdf] submitted on 2020-10-11 11:02:47

How to Develop a Conscious Sentient AI: Are We There Yet?

Authors: David M. W. Powers
Comments: 11 Pages. Accepted and presented at ConZealand2020. Rejected by arXiv as not in scope.

The history of robotics is older than the invention and exploitation of robots. The term ‘robot’ came from the Czech and was first used in a play a century ago. The term ‘robotics’ and the ethical considerations captured by ‘The Three Laws of Robotics’ come from a SciFi author born a century ago. SF leads the way! Similarly, the idea of Artificial Intelligence as a thinking machine goes back to the earliest days of computing, and in this paper we follow some of the key ideas through the work of the pioneers in the field. We’ve come a long way since then, but are we there yet? Could we now build a conscious sentient thinking computer? What would it be like? Will it take over the world?
Category: Artificial Intelligence

[1109] viXra:2010.0060 [pdf] submitted on 2020-10-09 20:01:48

Auto-Encoder Transposed Permutation Importance Outlier Detector

Authors: Eren Unlu
Comments: 5 Pages.

We propose an innovative, trivial yet effective unsupervised outlier detection algorithm called the Auto-Encoder Transposed Permutation Importance Outlier Detector (ATPI), which is based on the fusion of two machine learning concepts, autoencoders and permutation importance. As unsupervised anomaly detection is a subjective task, where the accuracy of results can vary with the demand, we believe this kind of novel framework has great potential in this field.
Category: Artificial Intelligence

[1108] viXra:2009.0173 [pdf] submitted on 2020-09-25 20:04:51

Can a Video Game and Artificial Intelligence Assist for Selecting National Soccer Squads ?

Authors: Eren Unlu
Comments: 7 Pages.

We have used the FIFA19 video game's open dataset of soccer player attributes and the actual squad lists of the national teams that participated in World Cup 2018, which almost coincides in time with the game's release date. The rationale is that numerous expert game developers have spent a considerable amount of time assessing each individual player's attributes; on this basis we can develop and test data science and machine learning tools to select national soccer teams in an attempt to assist coaches. The work provides a detailed exploratory data analysis and state-of-the-art machine learning and interpretability measures.
Category: Artificial Intelligence

[1107] viXra:2009.0165 [pdf] submitted on 2020-09-23 13:48:26

Combining Conflicting Evidences Based on Pearson Correlation Coefficient and Weighted Graph

Authors: Jixiang Deng, Yong Deng
Comments: 25 Pages.

Dempster-Shafer evidence theory (evidence theory) has been widely used for its great performance in dealing with uncertainty. Based on evidence theory, researchers have presented different methods to combine evidences. Dempster's rule is the most well-known combination method, which has been applied in many fields. However, Dempster's rule may yield counter-intuitive results when evidences are in high conflict. To improve the performance of combining conflicting evidences, in this paper we present a new evidence combination method based on the Pearson correlation coefficient and a weighted graph. The proposed method can correctly identify the target with high accuracy. Besides, the proposed method has better convergence performance compared with other combination methods. In addition, the weighted graph generated by the proposed method can directly represent the relations between different evidences, which can help researchers to determine the reliability of every evidence. Moreover, an experiment is expounded to show the efficiency of the proposed method, and the results are analyzed and discussed.
Category: Artificial Intelligence
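
One ingredient of the method is measuring how strongly two bodies of evidence agree. A minimal sketch of that step is shown below, assuming BPAs are represented as dictionaries over a shared list of focal elements; the weighted-graph construction and the final fusion from the paper are not reproduced here, and the values are illustrative.

    import numpy as np

    def bpa_pearson(m1, m2, focal_elements):
        # View each BPA as a vector over a common ordering of focal elements
        # and return the Pearson correlation coefficient between the two vectors.
        v1 = np.array([m1.get(a, 0.0) for a in focal_elements])
        v2 = np.array([m2.get(a, 0.0) for a in focal_elements])
        return float(np.corrcoef(v1, v2)[0, 1])

    frame = [frozenset('a'), frozenset('b'), frozenset('ab')]
    m1 = {frozenset('a'): 0.7, frozenset('ab'): 0.3}
    m2 = {frozenset('a'): 0.6, frozenset('b'): 0.1, frozenset('ab'): 0.3}
    print(bpa_pearson(m1, m2, frame))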

[1106] viXra:2009.0138 [pdf] submitted on 2020-09-19 20:25:44

RGBSticks : A New Deep Learning Based Framework for Stock Market Analysis and Prediction

Authors: Eren Unlu
Comments: 5 Pages.

We present a novel, intuitive graphical representation for daily stock prices, which we refer to as RGBSticks, a variation of classical candlesticks. This representation allows the usage of complex deep learning based techniques, such as deep convolutional autoencoders and deep convolutional generative adversarial networks, to produce insightful visualizations of the market's past and future states.
Category: Artificial Intelligence

[1105] viXra:2009.0061 [pdf] submitted on 2020-09-08 08:49:43

Transparency and Granularity in the SP Theory of Intelligence and Its Realisation in the SP Computer Model

Authors: J. Gerard Wolff
Comments: 37 Pages. As of 2020-09-02, this document has been accepted for publication as a chapter in the book Interpretable Artificial Intelligence: A Perspective of Granular Computing, to be published by Springer-Verlag and edited by Witold Pedrycz and Shyi-Ming Chen.

This chapter describes how the SP System, meaning the SP Theory of Intelligence, and its realisation as the SP Computer Model, may promote transparency and granularity in AI, and some other areas of application. The chapter describes how transparency in the workings and output of the SP Computer Model may be achieved via three routes: 1) the program provides a very full audit trail for such processes as recognition, reasoning, analysis of language, and so on. There is also an explicit audit trail for the unsupervised learning of new knowledge; 2) knowledge from the system is likely to be granular and easy for people to understand; and 3) there are seven principles for the organisation of knowledge which are central in the workings of the SP System and also very familiar to people (eg chunking-with-codes, part-whole hierarchies, and class-inclusion hierarchies), and that kind of familiarity in the way knowledge is structured by the system, is likely to be important in the interpretability, explainability, and transparency of that knowledge. Examples from the SP Computer Model are shown throughout the chapter.
Category: Artificial Intelligence

[1104] viXra:2009.0018 [pdf] submitted on 2020-09-03 10:37:12

More Problems in AI Research and How the SP System May Help to Solve Them (Technical Report)

Authors: J. Gerard Wolff
Comments: 31 Pages. This "technical report" is an adjunct to the paper "Problems in AI research ..." and should be treated as an integral part of that paper

This technical report, an adjunct to the paper "Problems in AI research ...", describes some problems in AI research and how the {\em SP System} (meaning the "SP Theory of Intelligence" and its realisation in the "SP Computer Model") may help to solve them. It also contains a fairly detailed outline of the SP System. Most of the problems considered in this report are described by leading researchers in AI in interviews with science writer Martin Ford, and presented in his book "Architects of Intelligence". Problems and their potential solutions that are described in this report are: the need for more emphasis in research on the use of top-down strategies is met by the way SP has been developed entirely within a top-down framework; the risk of accidents with self-driving vehicles may be minimised via the theory of generalisation within the SP System; the need for strong compositionality in the structure of knowledge is met by processes within the SP Computer Model for unsupervised learning and the organisation of knowledge; although commonsense reasoning and commonsense knowledge are challenges for all theories of AI, the SP System has some promising features; the SP programme of research is one of very few working to establish the key importance of information compression in AI research; likewise, the SP programme of research is one of relatively few AI-related research programmes attaching much importance to the biological foundations of intelligence; the SP System lends weight to 'localist' (as compared with 'distributed') views of how knowledge is stored in the brain; compared with deep neural networks, the SP System offers much more scope for adaptation and the representation of knowledge; reasons are given for why the important subjects of motivations and emotions have not so far been considered in the SP programme of research. Evidence in this report, and "Problems in AI research ...", suggests that ***the SP System provides a relatively promising foundation for the development of artificial general intelligence***.
Category: Artificial Intelligence

[1103] viXra:2009.0012 [pdf] submitted on 2020-09-02 20:05:43

Problems in AI Research and How the SP System May Help to Solve Them

Authors: J. Gerard Wolff
Comments: 31 Pages. Accepted for publication in the journal Complexity

This paper describes problems in AI research and how the SP System (described in sources referenced in the paper) may help to solve them. Most of the problems considered in the paper are described by leading researchers in AI in interviews with science writer Martin Ford, and reported by him in his book "Architects of Intelligence". These problems, each with potential solutions via SP, are: the divide between symbolic and non-symbolic kinds of knowledge and processing, and how the SP System may bridge the divide; the tendency of deep neural networks (DNNs) to make large and unexpected errors in recognition, something that does not happen with the SP System; in most AI research, unsupervised learning is regarded as a challenge, but unsupervised learning is central in how SP learns; in other AI research, generalisation, with under- and over-generalisation is seen as a problem, but it is a problem that has a coherent solution in the SP System; learning usable knowledge from a single exposure or experience is widely regarded as a problem, but it is a problem that is already solved in the SP System; transfer learning (incorporating old knowledge in new) is seen as an unsolved problem, but it is bedrock in how the SP System learns; there is clear potential for the SP System to solve problems that are prevalent in most AI systems: learning that is slow and greedy for large volumes of data and large computational resources; the SP System provides solutions to problems of transparency in DNNs, where it is difficult to interpret stored knowledge and how it is processed; although there have been successes with DNNs in the processing of natural language, the SP System has strengths in the representation and processing of natural languages which appear to be more in accord with how people process natural language, and these strengths in the SP System are well-integrated with other strengths of the system in aspects of intelligence; by contrast with DNNs, SP has strengths and potential in human-like probabilistic reasoning, and these are well integrated with strengths in other aspects of intelligence; unlike most DNNs, the SP System eliminates the problem of catastrophic forgetting (where new learning wipes out old learning); the SP System provides much of the generality across several aspects of AI which is missing from much research in AI. The strengths and potential of the SP System in comparison with alternatives suggest that {\em the SP System provides a relatively promising foundation for the development of artificial general intelligence}.
Category: Artificial Intelligence

[1102] viXra:2008.0216 [pdf] submitted on 2020-08-30 09:55:52

The Quantum Pythagorean Fuzzy Evidence Theory Based on Negation in Quantum of Mass Function

Authors: Xiaozhuan Gao, Lipeng Pan, Yong Deng
Comments: 19 Pages.

Dempster-Shafer (D-S) evidence theory is an effective methodology for handling unknown and imprecise information, because it can assign probability to the power set. Quantum of mass function (QM) is an extension of D-S evidence theory, which combines quantum theory and D-S evidence theory and also extends D-S evidence theory to the unit circle in the complex plane. It can be seen that QM has greater uncertainty in the framework of the complex plane. Recently, negation has been getting more and more attention because it can analyze information from another point of view. Hence, the paper first proposes the negation of QM by using the subtraction of vectors in the unit circle, which can degenerate into the negation proposed by Yager in standard probability theory and the negation proposed by Yin et al. in D-S evidence theory. The paper then proposes quantum Pythagorean fuzzy evidence theory (QPFET), which is the first work to consider QPFET from the point of view of negation.
Category: Artificial Intelligence

[1101] viXra:2008.0163 [pdf] submitted on 2020-08-22 05:30:26

Dynamics of Feed Forward Induced Interference Training

Authors: Shirui Tang
Comments: 12 Pages.

Perceptron model updating with back propagation has become the routine of deep learning. A continuous feed-forward procedure is required in order for backward propagation to function properly. Doubting the underlying physical interpretation of transformer-based models such as GPT brought about by the routine explanation, a new method of training is proposed in order to keep the physics self-consistent. The GPT model is treated as a space-time diagram, and the worldlines of signals are traced to identify the possible paths of signals that allow a self-attention event to occur. With a slight modification, self-attention can be viewed as an Ising-model interaction, which enables the goal to be designed as the energy of the system. The target is treated as an external magnetic field inducing signals modelled as magnetic dipoles. A probability network is designed to pilot input signals travelling at constant speed through different routes. A rule for updating the probabilities is designed in order to form constructive interference at target locations so that the instantaneous energy can be maximised. An experiment is conducted on a 4-class classification problem extracted from MNIST. The results exhibit interesting but expected behaviours, which do not exist in a backpropagation-updated network, but are more like learning in a real human, especially in the few-shot scenario.
Category: Artificial Intelligence

[1100] viXra:2008.0130 [pdf] submitted on 2020-08-18 00:40:01

Applying Neural Networks and Neuroevolution of Augmenting Topologies to play Super Mario Bros

Authors: Vivek Verma
Comments: 2 Pages.

This paper describes the background and implementation behind a project that uses Neuroevolution of Augmenting Topologies (NEAT) to play Super Mario Bros. Its implementation differs from classic applications of NEAT in that the training process was heavily optimized using multithreading and downsampling. As a result, the training process can be run on underpowered CPUs without the help of an external GPU. The neural network successfully completed level 1-1 of the game.
Category: Artificial Intelligence
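
A minimal sketch of how such a NEAT training loop is typically wired up with the neat-python library; the emulator hook run_level, the fitness definition, and the config file name are hypothetical stand-ins, not the author's code.

    import neat

    def run_level(net):
        # Placeholder for the emulator loop: feed downsampled screen pixels to
        # net.activate(...) each frame and return how far Mario travelled.
        raise NotImplementedError

    def eval_genomes(genomes, config):
        # Evaluate each genome by letting its network play the level.
        for genome_id, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            genome.fitness = run_level(net)  # e.g. distance travelled

    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         'neat_config.txt')
    population = neat.Population(config)
    winner = population.run(eval_genomes, 50)  # evolve for 50 generations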

[1099] viXra:2007.0209 [pdf] submitted on 2020-07-27 06:26:13

Lunar Rock Classification Using Machine Learning

Authors: Arshita Kalra, Arnav Bhavsar
Comments: 5 Pages.

Lunar landings by esteemed space stations around the world have yielded an abundance of new scientific data on the Moon, which has helped scientists study our closest neighbour and has provided evidence for understanding Earth's past and future. This paper is about solving the HackerEarth challenge of classifying lunar rocks into small or large rocks. These tasks have historically been conducted by visual image inspection, thereby reducing the scope, reliability and accuracy of the retrieval. The competition was to build a machine learning model to reduce the human effort of doing a monotonous task. We built a Support Vector Machine model, widely used in classification problems, fed with features extracted from the images in the dataset using OpenCV, and obtained an accuracy of 99.41%. Our source code solving the challenge and the dataset are given in the github repository https://github.com/ArshitaKalra/Lunar-Rock-classification.
Category: Artificial Intelligence
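
The pipeline described (OpenCV features fed to an SVM) could look roughly like the sketch below; the specific features, file handling and thresholds here are assumptions for illustration, not the authors' exact code.

    import cv2
    import numpy as np

    def rock_features(path):
        # Hypothetical feature extractor: normalised grey-level histogram plus
        # simple intensity statistics for one rock image.
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (128, 128))
        hist = cv2.calcHist([img], [0], None, [32], [0, 256]).flatten()
        return np.concatenate([hist / hist.sum(), [img.mean(), img.std()]])

    # X = np.array([rock_features(p) for p in image_paths]); y = labels (0 = small, 1 = large)
    # clf = sklearn.svm.SVC(kernel='rbf').fit(X_train, y_train); clf.score(X_test, y_test)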

[1098] viXra:2007.0200 [pdf] submitted on 2020-07-24 19:27:40

More Problems in AI Research and How the SP System May Help to Solve Them

Authors: J. Gerard Wolff
Comments: 34 Pages.

This paper, a companion to "Problems in AI research and how the SP System may help to solve them", describes problems in AI research and how the "SP System" (described in sources detailed in the paper) may help to solve them. Most of these problems are described by leading researchers in AI in interviews with science writer Martin Ford, and reported by him in his book "Architects of Intelligence". Problems and their potential solutions that are described in this paper are: the need to rebalance research towards top-down strategies; how to minimise the risk of accidents with self-driving vehicles; the need for strong compositionality in the structure of knowledge; the challenges of commonsense reasoning and commonsense knowledge; establishing the key importance of information compression in AI research; establishing the importance of biological validity in AI research; whether knowledge in the brain is represented in 'distributed' or 'localist' form; the limited scope for adaptation of deep neural networks; and reasons are given for why the important subjects of motivations and emotions have not so far been considered. The evidence in this paper and its companion paper suggests that ***the SP System provides a firmer foundation for the development of artificial general intelligence than any alternative***.
Category: Artificial Intelligence

[1097] viXra:2007.0110 [pdf] submitted on 2020-07-15 03:03:23

A Semantic Question Answering in a Restricted Smart Factory Domain Attaching to Various Data Sources

Authors: Orçun Oruç
Comments: 24 Pages.

Industrial manufacturing has become more interconnected between smart devices such as Internet of Things edge devices, tablets, manufacturing equipment, and smartphones. Smart factories have emerged and evolved with digital technologies and data science in manufacturing systems over the past few years. Smart factories produce complex data that enables digital manufacturing, smart supply chain management and enhanced assembly line control. Nowadays, smart factories produce a large amount of data that needs to be comprehensible to human operators and experts for decision making. However, linked data is still hard for human operators to understand and interpret; thus, we need a system that translates linked data into natural language, or summarises the volume of linked data by eliminating undesired results in the linked data repository. In this study, we propose a semantic question answering system in a restricted smart factory domain attached to various data sources. In the end, we perform a qualitative and quantitative evaluation of the semantic question answering system, discuss the findings, and conclude the main points with regard to our research questions.
Category: Artificial Intelligence

[1096] viXra:2007.0085 [pdf] submitted on 2020-07-13 20:05:01

Microscopy Image Processing for the Human Eye

Authors: Zeyue Xia, Mohamad Nadim Barakat, Serri Matula, Zijun Hui, John Stravakrakis
Comments: 7 Pages. Computer Vision

In vivo confocal microscopy allows scientists to better understand eye health and systemic diseases. Microneuromas could play a role; however, monitoring their growth from a mosaic of images is error-prone and time-consuming. We used automated image stitching as a solution, focusing on the accuracy and computational speed of three different feature detection algorithms: SIFT, SURF, and ORB. The results illustrated that SURF was computationally efficient with our data. Future investigation is to create a global solution that can replace the need for manual image stitching in this application.
Category: Artificial Intelligence
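
For context, a keypoint-matching step like the ones compared in the paper can be set up in OpenCV roughly as follows (ORB is shown because it ships with stock OpenCV; SIFT and SURF follow the same detect-describe-match pattern). This is an illustrative sketch, not the authors' pipeline.

    import cv2

    def match_keypoints(img1, img2, n_features=2000):
        # Detect ORB keypoints in two overlapping image tiles and return
        # matches sorted by descriptor distance.
        orb = cv2.ORB_create(nfeatures=n_features)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
        return k1, k2, matches

    # Good matches can then be passed to cv2.findHomography(...) to estimate
    # the transform that places one tile onto the mosaic.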

[1095] viXra:2007.0084 [pdf] submitted on 2020-07-12 21:45:42

Nonextensive Belief Entropy

Authors: Yige Xue, Yong Deng
Comments: 16 Pages.

The belief entropy has high performance in handling uncertain information, and is the extension of information entropy in Dempster-Shafer evidence theory. The Tsallis entropy is an extension of information entropy, and is a nonextensive entropy. However, how to apply the idea of belief entropy to improve the Tsallis entropy is still an open issue. This paper proposes the nonextensive belief entropy (NBE), which consists of belief entropy and Tsallis entropy. If the extensive constant of the proposed model is equal to 1, then the NBE degenerates into the classical belief entropy. Furthermore, when the basic probability assignment degenerates into a probability distribution, the proposed entropy degenerates into the classical Tsallis entropy. Meanwhile, if NBE focuses on a probability distribution and the extensive constant is equal to 1, then the NBE equals the classical information entropy. Numerical examples are applied to prove the efficiency of the proposed entropy. The experimental results show that the proposed entropy can combine the belief entropy and Tsallis entropy effectively and successfully.
Category: Artificial Intelligence
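
The two ingredients being combined can be computed as below; this sketch shows the standard Deng (belief) entropy and the classical Tsallis entropy separately and does not reproduce the paper's exact NBE formula. The example mass function is illustrative.

    import numpy as np

    def deng_entropy(m):
        # Belief (Deng) entropy of a mass function m: dict frozenset -> mass,
        # E(m) = -sum_A m(A) * log2( m(A) / (2^|A| - 1) )
        return -sum(v * np.log2(v / (2 ** len(a) - 1)) for a, v in m.items() if v > 0)

    def tsallis_entropy(p, q=2.0):
        # Classical nonextensive Tsallis entropy of a probability distribution.
        p = np.asarray(p, dtype=float)
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    m = {frozenset('a'): 0.5, frozenset('ab'): 0.5}
    print(deng_entropy(m), tsallis_entropy([0.5, 0.5]))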

[1094] viXra:2007.0040 [pdf] submitted on 2020-07-06 11:54:41

Car Speed Detection Using Computer Vision

Authors: Aditi Singh, Raju Ranjan
Comments: 3 Pages.

When it comes to road safety, detection and monitoring of car speed is one of the major tasks. The use of a simple camera and image processing software has eliminated the traditional tools of speed detection, such as the handheld radar gun. In these techniques, the speed is calculated as the car passes through the camera's field of view (FOV), by noting the time taken by the car between entering and exiting the FOV. Some systems use individual cameras at the entry and exit FOVs; thus, they do not calculate the speed within this interval. This paper proposes a technique to measure the speed of a car from the moment it enters the camera's FOV until the time it exits the FOV. Using a deep learning Single Shot Detector (SSD) implemented with a Convolutional Neural Network (CNN), the cars entering the FOV are detected, and the speed of a car is calculated from the distance it travels in the FOV and the time taken to cover that distance.
Category: Artificial Intelligence
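
Once the SSD produces per-frame detections, the speed estimate itself reduces to distance over time. The sketch below assumes a simple pixel-to-metre calibration; the track format and calibration constant are illustrative, not the authors' implementation.

    def estimate_speed(track, fps, metres_per_pixel):
        # track: list of (frame_index, x, y) centroid positions of one car
        # produced by the detector while it crosses the field of view.
        (f0, x0, y0), (f1, x1, y1) = track[0], track[-1]
        pixels = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        seconds = (f1 - f0) / fps
        metres = pixels * metres_per_pixel
        return 3.6 * metres / seconds if seconds > 0 else 0.0  # km/h

    # e.g. estimate_speed([(0, 20, 300), (45, 600, 310)], fps=30, metres_per_pixel=0.05)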

[1093] viXra:2007.0039 [pdf] submitted on 2020-07-06 20:06:00

Eye Gaze Optical Mouse using Computer Vision

Authors: Dhananjay Mewati, Jerald Nirmal Kumar
Comments: 3 Pages.

This paper proposes a technique for using the movement of the eyes to control the movement of the cursor on monitor screens, thereby creating new ways of Human Computer Interaction (HCI) and also helping physically handicapped people to interact with computer devices more efficiently. Earlier eye gaze optical mice comprised a head gear with an attached eye motion sensor and were more hardware based. The input gathered through these sensors helped in cursor movement on screen. With the advancement in the fields of image processing and Artificial Intelligence, a simple web camera attached to a computer can be used to perform this task. In this paper, the pupil of the eye is detected. The coordinates gathered by tracking pupil movement are mapped to the coordinates of the display monitor. Based on this mapping the mouse cursor can be moved on the screen.
Category: Artificial Intelligence

[1092] viXra:2007.0034 [pdf] submitted on 2020-07-05 21:02:19

Dynamic and System Agnostic Malware Detection Via Machine Learning

Authors: Michael Sgroi, Doug Jacobson
Comments: 23 Pages.

This paper discusses malware detection on personal computers. Current malware detection solutions are static: antiviruses rely on lists of malicious signatures that are then used in file scanning. These antiviruses are also very dependent on the operating system, requiring different solutions for different systems. This paper presents a solution that detects malware based on runtime attributes. It also emphasizes that these attributes are easily accessible and fairly generic, meaning that the solution functions across systems and without specialized information. The attributes are used in a machine learning system, which makes it flexible for retraining if necessary but capable of handling new variants without needing to modify the solution. It can also be run quickly, which allows detection to be achieved before the malware gets too far.
Category: Artificial Intelligence

[1091] viXra:2007.0033 [pdf] submitted on 2020-07-05 21:21:29

Sentiment Analysis on Tweets Using Document Embeddings

Authors: Qasim Nawaz
Comments: 33 Pages. N/A

Sentiment Analysis is one of the primary areas of natural language processing and information retrieval being tackled by researchers to date, and for good reason: the internet. The internet is a mostly untapped source of rich amounts of data that can be used to gauge the opinions of people on any number of topics. Twitter is one such platform, designed for people to voice their opinions in the form of tweets about any topic they desire. My project sets out to investigate the best way to analyse the sentiment of these tweets using machine learning techniques. I will be training word vector-based and paragraph vector-based models on a dataset consisting of 1.6 million tweets, in conjunction with various classifiers, in order to find the best performing method for obtaining the sentiment of tweets.
Category: Artificial Intelligence
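
A paragraph-vector pipeline of the kind described can be assembled with gensim roughly as follows; the toy data and hyperparameters here are placeholders for illustration, not those used in the project (which trains on 1.6 million tweets).

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument
    from sklearn.linear_model import LogisticRegression

    # toy placeholder data: tokenised tweets and 0/1 sentiment labels
    tweets = [['great', 'day'], ['awful', 'service'], ['love', 'this'], ['hate', 'that']]
    labels = [1, 0, 1, 0]

    docs = [TaggedDocument(words, [i]) for i, words in enumerate(tweets)]
    d2v = Doc2Vec(docs, vector_size=50, window=2, min_count=1, epochs=20)

    X = [d2v.dv[i] for i in range(len(tweets))]  # one paragraph vector per tweet
    clf = LogisticRegression(max_iter=1000).fit(X, labels)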

[1090] viXra:2007.0031 [pdf] submitted on 2020-07-06 04:09:56

Neural Network based Classification of Flowers using Transfer Learning

Authors: Abhishek
Comments: 3 Pages.

Image classification is one of the classical problems in the field of computer vision, machine learning and, subsequently, deep learning. While deep learning solves the more difficult hurdles like feature extraction and presents us with better optimizations like gradient descent and the Adam optimizer, most deep learning models still need a lot of raw computational power to train, on local Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs) in the cloud. All of this computational power is not readily available in all environments and systems, and hence the concept of pre-trained models can help to reduce training time by a huge margin. Initial models are trained on large arrays of GPUs and do the feature extraction. The classification part is for the end-user to customize in accordance with the problem at hand and can be completed in very little time. We tackled the multi-class botanical classification problem of identifying flowers of 5 types, namely Sunflower, Rose, Dandelion, Daisy, and Tulip. The feature extraction part is done with the pre-trained model (Google's Inception-v3), and the fully connected softmax layers were trained on a local machine with an Nvidia GeForce GTX 950 (with CUDA activated) within 30 minutes, using only 4000 training steps/epochs. The total number of training images is approximately 3,500. The finished model produced a final test accuracy of 91.9% on new images (N=664).
Category: Artificial Intelligence
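
The transfer-learning setup described (a frozen Inception-v3 backbone with a new softmax head for the 5 flower classes) corresponds roughly to the Keras sketch below; the input size and training details are illustrative rather than the author's exact configuration.

    import tensorflow as tf

    # Inception-v3 backbone frozen as a feature extractor; only the new
    # softmax head is trained on the 5 flower classes.
    base = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet',
                                             input_shape=(299, 299, 3), pooling='avg')
    base.trainable = False

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(5, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)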

[1089] viXra:2007.0030 [pdf] submitted on 2020-07-06 04:28:45

Automatic Text Summarizer Application Using Extractive Text Summarization

Authors: Ritesh Kumar Bharadwaj
Comments: 5 Pages.

Text summarization as a phenomenon has always been present, and is an evolving one with the advent of new technologies both for data collection and for processing that data. One reason for using text summarization is the huge amount of data floating around the internet in the form of text files and comments, which is potent enough to be used to extract useful information; but since the amount of text in these sources is too huge, the need for text summarization is justified by every argument. Some of the areas where text summarization is widely used are applications providing capsule information, such as compact news applications, news in short form, micro-blogging websites, and websites providing academic notes for various examinations. This paper presents an auto text summarizer application which takes the URL of a web page as input, performs summarization on the selected elements, and then presents the summarized text content on the front end of a web application. At the backend, the web page content is scraped (if an HTTP URL is provided as input) using the Beautiful Soup library, or the provided text is read directly. The scraped content, after being preprocessed properly, is summarized using a suitable library, which in our case is one among NLTK, Spacy, Gensim and Sumy. The summarized content is presented at the frontend using the Flask framework of Python. The results produced using the different libraries are compared in the end in terms of the reading time of the summarized content. The application uses an extractive text summarization technique in order to achieve its result, which is a compact summary of the textual data prepared from the keywords already present in the document. Keywords: Auto Text Summarizer, URL, Flask, Web Scraping, Nltk, Spacy, Sumy, Gensim, Extractive Text Summarization
Category: Artificial Intelligence
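
The libraries mentioned implement variants of the same extractive idea: score sentences by the words they contain and keep the best ones. Below is a library-free sketch of that idea for context, not the application's actual code.

    import re
    from collections import Counter

    def extractive_summary(text, n_sentences=3):
        # Frequency-based extractive summariser: score each sentence by the
        # average frequency of its words, keep the top sentences in order.
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        freq = Counter(re.findall(r'[a-z]+', text.lower()))
        def score(s):
            tokens = re.findall(r'[a-z]+', s.lower())
            return sum(freq[t] for t in tokens) / (len(tokens) or 1)
        ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]),
                        reverse=True)[:n_sentences]
        return ' '.join(sentences[i] for i in sorted(ranked))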

[1088] viXra:2007.0029 [pdf] submitted on 2020-07-06 05:18:45

Lung Cancer Detection using CNN

Authors: Mohammed Tahir
Comments: 3 Pages.

The recent surge of deep learning has led to breakthrough advancements in almost every field of its application. A particular deep learning architecture, arguably the most popular one, is the Convolutional Neural Network. The interest in convnets has seen an exponential increase due to their effectiveness and scalability. CNNs have become the go-to solution for image data problems and have provided results that are at par with, if not better than, human standards. The simplicity of the CNN architecture is another big factor in its success. The image processing and classification capabilities of CNNs have found great usage in the medical field, making it possible to detect and classify diseases as severe as cancer effectively for the sake of better care. In this project, I have carried out an elaborate study of Convolutional Neural Networks, built multiple architectures from scratch, and furthered this understanding with the preparation of an elementary dog-cat CNN classifier model, followed by a more extensive CNN model for detection of lung cancer in a patient. The project is built on Google's interactive and versatile cloud platform for AI development, Google Colaboratory, using the open-source neural network library Keras for model development and libraries such as matplotlib and tensorboard (tensorflow) for result plotting and analysis. Data for training and testing the model was extracted from the LUNA 2016 medical image database. The model was tuned using grid search and achieved over 97% test accuracy in its final iterations. To culminate, I have listed some future-work prospects, such as de-convolution/transposed convolution, implementing one or more named CNN networks like Inception or AlexNet, and testing the model on larger images.
Category: Artificial Intelligence

[1087] viXra:2006.0265 [pdf] submitted on 2020-06-29 13:57:50

Mining Twitter Data for Improving Lexicon-Based Election Predictions and Candidate Analysis on Political Issues: Hybrid Topic-Based Sentiment Analysis with Issue Filtering

Authors: Samuel Kopelowitz, Uday Reddy
Comments: 25 Pages.

Twitter data mining techniques have been used in the run-up to elections to predict their outcomes and perform analysis to explain results. Due to the popularity of the social media platform it is possible to collect large amounts of data with which often lexicon-based sentiment analysis has been used to accomplish these tasks, mostly because of its efficiency and simplicity. More recently, hybrid techniques, which in addition to calculating tweet sentiment also incorporate topic modelling methods to extract the main “topics” from a corpus of text, have been applied independently for both election prediction and analysis. It is possible to use hybrid methods to analyse different political issues (e.g. economic, social, etc) and the public opinion for candidates in respect to them; and other hybrid methods have been shown to outperform baseline sentiment analysis approaches for election prediction. A mining solution which can accomplish both of these tasks non-exhaustively is desirable for better predictions and a greater understanding of election outcomes. This report will present a novel approach to mining Twitter data, Hybrid Topic-Based Sentiment Analysis with Issue Filtering (HTBSA*), which will not only pose as a potential improvement upon state-of-the-art techniques for election prediction; but can be abstracted to perform candidate analysis on any individual political issue, proposing a baseline methodology for doing this. This research approach has effectively outperformed all of the well-established methods in the realm of lexicon-based election prediction, giving a mean average error as low as 2.20% from true vote share. This technique was performed on data collected on the run up to the UK General Election 2019 and in an addition to this, it has successfully been black box tested on an unseen dataset. Based on the empirical evidence given by our results, HTBSA* can be relied upon to predict elections occurring in the future, but analysis results in respect to individual political issues may be inconsistent, suggesting further work is required. Lines of research that come as a result of this study have the potential to tackle election mining problems in new ways, which are more sophisticated than what has been done previously.
Category: Artificial Intelligence

[1086] viXra:2006.0237 [pdf] submitted on 2020-06-26 07:25:51

Universal Science of Mind: Can Complexity-Based Artificial Intelligence Save the World in Crisis?

Authors: Andrei P. Kirilyuk
Comments: 67 pages, 43 eqs, 86 refs

While practical efforts in the field of artificial intelligence grow exponentially, the truly scientific and mathematically exact understanding of the underlying phenomena of intelligence and consciousness is still missing in the conventional science framework. The inevitably dominating empirical, trial-and-error approach has vanishing efficiency for those extremely complicated phenomena, ending up in fundamentally limited imitations of intelligent behaviour. We provide the first-principle analysis of unreduced many-body interaction process in the brain revealing its qualitatively new features, which give rise to rigorously defined chaotic, noncomputable, intelligent and conscious behaviour. Based on the obtained universal concepts of unreduced dynamic complexity, intelligence and consciousness, we derive the universal laws of intelligence applicable to any kind of intelligent system interacting with the environment. We finally show why and how these fundamentally substantiated and therefore practically efficient laws of intelligent system dynamics are indispensable for correct AI design and training, which is urgently needed in this time of critical global change towards the truly sustainable development.
Category: Artificial Intelligence

[1085] viXra:2006.0235 [pdf] submitted on 2020-06-25 11:22:05

A Vector Interpretation of Quaternion Mass Function

Authors: Yige Xue, Yong Deng
Comments: 15 Pages.

The mass function vector is used to handle uncertainty. The quaternion number is an extension of the real number. The mass function vector can extend the mass function by combining it with a vector. In this paper, the mass function vector is extended by quaternion numbers, named the Quaternion Mass Function Vector (QMFV). The proposed QMFV has the advantage of dealing with uncertain information. When the quaternion number degenerates into a real number, the QMFV degenerates into the quaternion mass function. In addition, if the probability of multiple subsets of the frame of discernment is not assigned to the single subsets, then the mass function vector degenerates into the mass function in classical evidence theory. When the quaternion number degenerates into a real number, the combination rule of quaternion mass function vectors degenerates into the combination rule of mass function vectors. In the case when the probability of multiple subsets of the frame of discernment is not assigned to the single subsets, the combination rule of mass function vectors degenerates into the generalized Dempster's rule of combination. Numerical examples are applied to prove the efficiency of the proposed model. The experimental results show that the proposed model can apply quaternion theory to the mass function vector effectively and successfully.
Category: Artificial Intelligence

[1084] viXra:2006.0210 [pdf] submitted on 2020-06-22 22:30:46

Quaternion Mass Function

Authors: Yong Deng
Comments: 17 Pages.

The mass function is used to handle uncertainty. The quaternion number is an extension of the imaginary number. In this paper, the classical mass function is extended by quaternion numbers, named the Quaternion Mass Function (QMF). The proposed QMF has the advantage of dealing with uncertain information. When the quaternion number degenerates into a complex number, the QMF degenerates into the complex mass function. In addition, if the complex mass function is degenerated to real numbers, the QMF is the same as the mass function in classical evidence theory. In the case when the quaternion number degenerates into a real number and the QMF focuses on the frame of discernment with single subsets, the QMF is the same as the probability distribution in probability theory. A combination rule is also presented to combine two QMFs, which is a generalization of Dempster's rule. In the case when the quaternion mass function degenerates into real numbers and assigns mass only to single subsets, the proposed combination rule degenerates into Bayesian updating in probability theory. Numerical examples are applied to prove the efficiency of the proposed model. The experimental results show that the proposed model can apply quaternion theory to the mass function effectively and successfully.
Category: Artificial Intelligence

[1083] viXra:2006.0208 [pdf] submitted on 2020-06-23 10:49:01

Add Latent Restriction Loss When Recovering Latent Vector

Authors: Jeongik Cho
Comments: 3 Pages.

When a pre-trained generative model is given, the process of finding the latent vector that produces the data closest to the input data is called latent vector recovery. Latent vector recovery takes the difference between the data generated from the latent vector and the input data as the reconstruction loss, and repeatedly performs gradient descent on the latent vector to find the optimal latent vector. In this paper, I propose a method to find a better latent vector by adding a latent restriction loss in addition to the reconstruction loss during latent vector recovery. The latent restriction loss is a loss that makes the latent vector follow the distribution of the latent vectors used when training the generative model. The distance between the distribution of latent vectors used in training the generative model and the latent vector during recovery becomes the latent restriction loss.
Category: Artificial Intelligence
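
A minimal TensorFlow sketch of latent vector recovery with an added restriction term is given below; generator and x_target are placeholders for a pre-trained Keras generator and the input data. The paper defines the restriction loss as a distance to the training-time latent distribution, and the simple L2 pull toward a standard-normal prior used here is only one possible instantiation of that idea.

    import tensorflow as tf

    def recover_latent(generator, x_target, latent_dim=100, steps=500, lam=0.1):
        # Gradient-descent latent recovery: reconstruction loss plus a term
        # that pulls z toward the N(0, I) prior assumed at training time.
        z = tf.Variable(tf.random.normal([1, latent_dim]))
        opt = tf.keras.optimizers.Adam(1e-2)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                x_hat = generator(z, training=False)
                recon = tf.reduce_mean(tf.square(x_hat - x_target))  # reconstruction loss
                restrict = tf.reduce_mean(tf.square(z))              # latent restriction term
                loss = recon + lam * restrict
            grads = tape.gradient(loss, [z])
            opt.apply_gradients(zip(grads, [z]))
        return z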

[1082] viXra:2006.0196 [pdf] submitted on 2020-06-20 21:40:35

Large Scale Traffic Surveillance: Vehicle Detection and Classification Using Cascade Classifier and Convolutional Neural Network

Authors: Shaif Chowdhury, Soummyopriyo Chattopdhyay, Tapan Kumar Hazra
Comments: 10 Pages.

In this paper, we present a traffic surveillance system for detection and classification of vehicles in large-scale videos. Vehicle detection is a crucial part of road safety, and many different intelligent systems have been proposed for traffic surveillance. The system presented here is based on two steps: a Haar-like image descriptor, and a convolutional neural network classifier. A cascade classifier is used to extract objects rapidly, and a neural network is used for the final classification of cars. For the Haar cascades, the learning of the system is performed on a set of positive images (vehicles) and negative images (non-vehicles), and the test is done on another set of scenes. For the second step, we use the Faster R-CNN architecture. The cascade classifier gives faster processing time and the neural network is used to increase the detection rate.
Category: Artificial Intelligence

[1081] viXra:2006.0159 [pdf] submitted on 2020-06-18 06:23:41

Chatbot: a Conversational Agent Employed with Named Entity Recognition Model Using Artificial Neural Network

Authors: Nazakat Ali
Comments: 10 Pages.

A chatbot is a technology used to mimic human behavior using natural language. Different types of chatbots can be used as conversational agents in various business domains to increase customer service and satisfaction. Any business domain requires a knowledge base to be built for that domain, together with an information-retrieval-based system that can respond to the user with a piece of documentation or generated sentences. The core component of a chatbot is Natural Language Understanding (NLU), which has been impressively improved by deep learning methods, but properly built NLU modules are often lacking and take considerable time to build from scratch for high-quality conversations. This may encourage fresh learners to build a chatbot from scratch with a simple architecture and a small dataset, even with reduced functionality, rather than building high-quality data-driven methods. This research focuses on Named Entity Recognition (NER) and intent classification models that can be integrated into the NLU service of a chatbot. Named entities are inserted manually into the knowledge base and automatically detected in a given sentence. The NER model in the proposed architecture is based on an artificial neural network, trained on manually created entities and evaluated on the CoNLL-2003 dataset.
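As a minimal illustration of the intent-classification side of an NLU service (not the paper's neural models), a TF-IDF plus logistic-regression baseline with made-up training utterances might look like this:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # toy training utterances and their intents (placeholders, not the paper's data)
    train_texts = ["what is my account balance", "open a new account", "talk to a human agent"]
    train_intents = ["balance_query", "open_account", "handoff"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    clf.fit(train_texts, train_intents)
    print(clf.predict(["please show me my balance"]))   # most likely 'balance_query'
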
Category: Artificial Intelligence

[1080] viXra:2006.0126 [pdf] submitted on 2020-06-14 13:57:01

AIXI Responses to Newcomblike Problems

Authors: Davide Zagami
Comments: 5 Pages.

We provide a rigorous analysis of AIXI's behaviour under repeated Newcomblike settings. In this context, a Newcomblike problem is a setting where an agent is pitted against an environment that contains a perfect predictor, whose predictions are used to determine the environment's outputs. Since AIXI lacks good convergence properties, we chose to focus the analysis on determining whether an environment appears computable to AIXI, that is, whether it maps actions to observations in a way that a computable program can achieve. It is in this sense that, it turns out, AIXI can learn to one-box in *repeated* Opaque Newcomb, and to smoke in *repeated* Smoking Lesion, but may fail all other Newcomblike problems, because we found no way to reduce them to a computable form. However, we still suspect that AIXI can succeed in the repeated settings.
Category: Artificial Intelligence

[1079] viXra:2006.0119 [pdf] submitted on 2020-06-14 03:23:52

A Simple Nano-Bio Signal Processing Informatics R&D Framework With Machine Learning.

Authors: Nirmal Tej Kumar
Comments: 11 Pages. Short Communication

[ A General Multi-disciplinary Thermal Mapping + Signal Processing System to Probe (Graphene Quantum Dots + Virus ) based Nano-Bio Sensor for COVID-19 BIO-CHEMICAL INFORMATION PROCESSING w.r.t Theory + Algorithms + Experimentation + Machine Learning as an interesting Suggestion ]
Category: Artificial Intelligence

[1078] viXra:2006.0110 [pdf] submitted on 2020-06-12 20:16:52

The Information Volume of Uncertain Information: (7) Information Quality

Authors: Dingbing Li, Yong Deng
Comments: 10 Pages.

Information quality is a concept that can be used to measure the information of a probability distribution. Dempster-Shafer evidence theory can describe uncertain information more reasonably than probability theory; therefore, proposing an information quality measure applicable to evidence theory is a research hotspot. Recently, Deng proposed the concept of information volume based on Deng entropy. It is worth noting that, compared with Deng entropy itself, the information volume of Deng entropy contains more information, so it may be more reasonable to use it to represent uncertain information. This article therefore proposes a new information quality measure based on the information volume of Deng entropy. In addition, when the basic probability assignment (BPA) degenerates into a probability distribution, the proposed information quality is consistent with the information quality proposed by Yager and Petry. Finally, several numerical examples illustrate the effectiveness of the new method.
Category: Artificial Intelligence

[1077] viXra:2006.0079 [pdf] submitted on 2020-06-08 21:46:11

Fully Automated Robotic Vehicle with Real Time Image Detection and Collision Avoiding Features

Authors: Al-Akhir Nayan
Comments: 7 Pages.

Due to their simplicity and ability to adapt to our needs, robotics and automation are widely used in industry. This project builds an automatic vehicle that uses GPS and an on-board computer to generate its path coordinates. A GPS module is used to obtain positioning data, a mobile camera detects obstacles, and a machine learning algorithm performs real-time object detection and obstacle avoidance. The vehicle we developed uses electric motors to drive the wheels and has full control of the throttle, steering and braking. An Arduino device controls the vehicle following commands generated by the computer. Traffic has risen sharply, the excessive number of vehicles causes accidents every day, and driver error is also a major problem. Our goal is to decrease the possibility of accidents and to ensure the safety of passengers. The vehicle could also be useful for blind and handicapped people, but our main target is to provide this device to the military so that it can be helpful in times of danger. The vehicle contains sensors to observe the environment and can also be operated manually by a human.
Category: Artificial Intelligence

[1076] viXra:2006.0064 [pdf] submitted on 2020-06-08 09:33:39

The Information Volume of Uncertain Information: (6) Information Multifractal Dimension

Authors: Tao Wen, Yong Deng
Comments: 11 Pages.

How to measure uncertainty in the open world is a popular topic in recent studies. Many entropy measures have been proposed to address this problem, but most have limitations. In this series of papers, a method for measuring the information volume of a mass function is presented. The fractal property of the maximum information volume is shown in this paper, which indicates the inherent physical meaning of Deng entropy from the perspective of statistics. The results show the multifractal property of this maximum information volume, and experimental results are presented to support this perspective.
Category: Artificial Intelligence

[1075] viXra:2006.0062 [pdf] submitted on 2020-06-07 12:18:52

The Information Volume of Uncertain Information: (4) Negation

Authors: Xiaozhuang Gao, Yong Deng
Comments: 10 Pages.

Negation is an important operation on uncertain information. Based on the information volume of the mass function, a new negation of the basic probability assignment is presented. The results show that negating the mass function increases its information volume. The negation converges to the situation in which Deng entropy is maximal, namely the high-order Deng entropy. If the mass function degenerates into a probability distribution, the negation of the probability distribution also reaches the maximum information volume, where Shannon entropy is maximal. Another interesting result is that the maximum Deng entropy case has the same information volume as the total uncertainty case.
Category: Artificial Intelligence

[1074] viXra:2006.0061 [pdf] submitted on 2020-06-07 13:22:14

The Information Volume of Uncertain Information: (5) Divergence Measure

Authors: Lipeng Pan, Yong Deng
Comments: 12 Pages.

Dempster-Shafer evidence theory is an extension of probability theory that can describe uncertain information more reasonably. The divergence measure has always been an important concept in probability theory, so how to propose a reasonable divergence measure has long been a research hotspot in evidence theory. Recently, Deng proposed the concept of information volume based on Deng entropy. It is interesting to note that, compared with the uncertainty measure given by Deng entropy, the information volume of Deng entropy contains more information, so it may be more reasonable to use it to represent uncertain information. Based on this, we combine the characteristics of the non-specificity measured by Deng entropy and propose a new divergence measure. The new divergence measure not only satisfies the axioms of a distance measure but also has several advantages that cannot be ignored. In addition, when the basic probability assignment (BPA) degenerates into a probability distribution, the result of the new divergence measure is the same as that of the traditional Jensen-Shannon divergence. If the mass function is assigned as a probability distribution, the proposed divergence degenerates into the Kullback-Leibler divergence. Finally, numerical examples illustrate the efficiency of the proposed divergence measure of information volume.
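Since the abstract states that the proposed divergence reduces to the Jensen-Shannon divergence when the BPAs are probability distributions, the classical JS divergence can be sketched as follows; the full BPA-level measure from the paper is not reproduced.

    import numpy as np

    def kl(p, q):
        # Kullback-Leibler divergence in bits, skipping zero-probability terms of p
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    def js_divergence(p, q):
        m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    print(js_divergence([0.5, 0.5, 0.0], [0.25, 0.25, 0.5]))
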
Category: Artificial Intelligence

[1073] viXra:2006.0037 [pdf] submitted on 2020-06-04 13:35:02

The Information Volume of Uncertain Information: (2) Fuzzy Membership Function

Authors: Jixiang Deng, Yong Deng
Comments: 18 Pages.

In fuzzy set theory, the fuzzy membership function (MF) describes the membership degree of elements in the universe of discourse. Deng entropy is an important tool for measuring the uncertainty of an uncertain set and has been widely applied in many fields. In this paper, we first propose a method to measure the uncertainty of a fuzzy MF based on Deng entropy. Next, we define the information volume of the fuzzy MF. By continuously splitting the BPA of the elements whose cardinality is larger than $1$ until convergence, the information volume of the fuzzy set can be calculated. When the hesitancy degree of a fuzzy MF is $0$, the information volume of the fuzzy membership function is identical to the Shannon entropy. In addition, several examples and figures are expounded to illustrate the proposed method and definition.
Category: Artificial Intelligence

[1072] viXra:2006.0035 [pdf] submitted on 2020-06-04 15:04:31

The Information Volume of Uncertain Information: (3) Information Fractal Dimension

Authors: Tao Wen, Yong Deng
Comments: 11 Pages.

How to measure uncertainty in the open world is a popular topic in recent studies. Many entropy measures have been proposed to address this problem, but most have limitations. In this series of papers, a method for measuring the information volume of a mass function is presented. The fractal property of the maximum information volume is shown in this paper, which indicates the inherent physical meaning of Deng entropy from the perspective of statistics. The results show a linear relationship between the maximum information volume and the probability scale, and experimental results are presented to support this perspective.
Category: Artificial Intelligence

[1071] viXra:2006.0028 [pdf] submitted on 2020-06-03 16:12:01

The Information Volume of Uncertain Information: (1) Mass Function

Authors: Yong Deng
Comments: 14 Pages.

Given a probability distribution, its corresponding information volume is the Shannon entropy. However, how to determine the information volume of a given mass function is still an open issue. Based on Deng entropy, the information volume of a mass function is presented in this paper. Given a mass function, the corresponding information volume is larger than its uncertainty measured by Deng entropy. The so-called Deng distribution is defined as the BPA that attains the maximum Deng entropy. The information volume of the Deng distribution is called the maximum information volume, which is larger than the maximum Deng entropy. In addition, the total uncertainty case and the Deng distribution have the same information volume, namely the maximum information volume. Numerical examples illustrate the efficiency of the proposed information volume of the mass function.
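For reference, Deng entropy, the measure on which the proposed information volume is built, is E_d(m) = -sum_A m(A) log2( m(A) / (2^|A| - 1) ) over the focal elements A. A small Python sketch is given below; the iterative splitting that yields the information volume itself is not shown.

    import math

    def deng_entropy(mass):
        """mass: dict mapping frozenset focal elements to their BPA values."""
        return -sum(v * math.log2(v / (2 ** len(A) - 1))
                    for A, v in mass.items() if v > 0)

    # for singleton-only mass functions this reduces to the Shannon entropy
    m = {frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.6}
    print(deng_entropy(m))
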
Category: Artificial Intelligence

[1070] viXra:2006.0025 [pdf] submitted on 2020-06-03 03:35:58

Understanding Pyramid Representations + Electron Microscopy Images Using Java + Prolog Related Software for R&D.

Authors: Nirmal Tej Kumar
Comments: 2 Pages. Short Communication

Probing cryo-Electron Microscopy Images Using Pyramid Representations in the Context of : [ Image J/ImageJ_Pyramid_Plugin/JikesRVM - Research Virtual Machine(RVM)/JVM - Java Virtual Machine/JI Prolog - Java based Prolog/HPC-High Performance Computing ] for Next Generation Java based[ AI + Image Processing + Informatics ] R&D Test Platforms.
Category: Artificial Intelligence

[1069] viXra:2006.0002 [pdf] submitted on 2020-06-01 09:11:02

An Artificial Intelligence Enabled Multimedia Tool for Rapid Screening of Cervical Cancer

Authors: Kumar Dron Shrivastav, Neha Taneja, Priyadarshini Arambam, Vandana Bhatia, Shelly Batra, Harpreet Singh, Eyad H. Abed, Priya Ranjan, Rajiv Janardhanan
Comments: 22 Pages. Preprint!

Cervical cancer is a major public health challenge. Further mitigation of cervical cancer can greatly benefit from the development of innovative and disruptive technologies for its rapid screening and early detection. The primary objective of this study is to contribute to this aim through large-scale screening by developing Artificial Intelligence enabled intelligent systems, as they can support human cancer experts in making more precise and timely diagnoses. Our current study is focused on the development of a robust and interactive algorithm for the analysis of colposcope-derived images and a diagnostic tool/scale, namely OM (The Onco-Meter). This tool was trained and tested on 300 Indian subjects/patients, yielding 77% accuracy with a sensitivity of 83.56% and a specificity of 59.25%. OM, the Onco-Meter, is capable of classifying cervigrams into cervical dysplasia, carcinoma in situ (CIS) and invasive cancer (IC). The programming language R has been used to implement and compute earth mover distances (EMD) to computationally characterize the different disease labels associated with cervical cancer. Deployment of automated tools will facilitate early diagnosis in a non-invasive manner, leading to timely clinical intervention for cervical cancer patients upon detection at a Primary Health Care (PHC) facility. The tool developed in this study will aid clinicians in designing timely intervention strategies aimed at improving the clinical prognosis of patients.
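The abstract mentions computing earth mover distances (EMD) in R; a rough Python analogue for two one-dimensional feature histograms uses SciPy's Wasserstein distance. The histograms below are illustrative, not study data.

    from scipy.stats import wasserstein_distance

    normal_hist = [0.10, 0.30, 0.40, 0.20]       # illustrative feature histogram, class "normal"
    dysplasia_hist = [0.05, 0.15, 0.35, 0.45]    # illustrative feature histogram, class "dysplasia"
    bins = [0, 1, 2, 3]                          # common bin centres

    print(wasserstein_distance(bins, bins,
                               u_weights=normal_hist, v_weights=dysplasia_hist))
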
Category: Artificial Intelligence

[1068] viXra:2005.0160 [pdf] submitted on 2020-05-14 16:19:23

Effect of Ensembling on ANLI Benchmark

Authors: Gokhan Cagrici
Comments: 8 Pages.

The tremendous achievement of reaching fairly high success metrics on several NLI datasets raised eyebrows and called the real value of these metric numbers into question. Research papers began to appear with comprehensive analyses of what these models really learn and of how easily they can be made to fail with small syntactic and semantic changes in the input. In particular, the ANLI benchmark is an example of a more challenging NLI task, intended to measure the comprehension capabilities of models in a deeper context. The relative success of transformer-based models on ANLI benchmarks was already reported by Nie et al., 2019. Given the challenging nature of iterative dataset formation, individual models have more difficulty extracting the underlying relationship between the context-hypothesis pair and the target. Ensembles of these individual models might have a higher potential to achieve better performance when individual performances are that far from the equivalent ones on the SNLI and MNLI tasks. On top of that, making controlled variations of the inputs and tracking the changes in the behavior of these models gives indications of the strength and robustness of the learning process.
Category: Artificial Intelligence

[1067] viXra:2005.0120 [pdf] submitted on 2020-05-10 13:03:10

Natural Way to Overcome Catastrophic Forgetting in Neural Networks

Authors: Alexey Kutalev
Comments: 9 Pages.

Not so long ago, a method was discovered that successfully overcomes catastrophic forgetting in neural networks. Although we know of cases where this method has been used to preserve skills when adapting pre-trained networks to particular tasks, it has not yet seen widespread adoption. In this paper, we propose an alternative method for overcoming catastrophic forgetting based on the total absolute signal passing through each connection in the network. This method has a simple implementation and seems to us essentially close to the processes occurring in the brains of animals to preserve previously learned skills during subsequent learning. We hope that the ease of implementing this method will encourage its wide application.
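A minimal PyTorch sketch of the general idea, assuming the same quadratic anchoring form as the earlier consolidation method the abstract refers to: per-parameter importance weights (here assumed to be precomputed, e.g. from the accumulated absolute signal the paper proposes) anchor the network to the weights learned on the previous task.

    import torch

    def consolidation_loss(model, old_params, importance, lam=100.0):
        """old_params / importance: dicts of name -> tensor saved after the previous task."""
        loss = 0.0
        for name, p in model.named_parameters():
            loss = loss + (importance[name] * (p - old_params[name]) ** 2).sum()
        return lam * loss

    # total loss on the new task would then be:
    # loss = task_loss + consolidation_loss(model, old_params, importance)
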
Category: Artificial Intelligence

[1066] viXra:2005.0100 [pdf] submitted on 2020-05-08 04:24:41

An Agent-Based Control System for Wireless Sensor and Actuator Networks

Authors: Parsa Rajabzadeh, Hamed Rahim
Comments: 11 Pages.

This paper proposes a novel MIMO control system that combines Distributed Control Systems (DCS) and Centralized Control Systems (CCS). Unlike DCS and CCS, which have several drawbacks such as cost and delay, the proposed system is designed to have local and global controllers simultaneously. This MIMO control system has significant advantages over the two traditional systems in implementation, computation power reduction, cost reduction, and performance, and it avoids the problems that occur in addressing system connections in DCSs for Wireless Sensor Networks and the Internet of Things. The proposed system is modeled as a Multi-Agent System (MAS) and implemented in the osBrain MAS framework in Python.
Category: Artificial Intelligence

[1065] viXra:2005.0099 [pdf] submitted on 2020-05-08 07:28:57

Fuzzy Logic: a Modern Approach to Control System Architecture

Authors: Kalu Kelechi Gabriel
Comments: 7 Pages.

Control systems have existed for quite a long time. The oldest and arguably the best of these is the human brain. Some popular control methodologies are PID control, Bayesian control, neural networks, etc. Their drawbacks, however, share a striking point: all of them are based on either Boolean or multi-valued logic, which is no more than a threshold or point-to-point logic. This work introduces fuzzy logic control as a paradigm shift; this control method would, to a large extent, provide a physical model of the human brain. The distinction is that it is not only a multi-valued logic but also a 'degree'-based logic. This paper gives a basic overview of a fuzzy control system and its physical implementation considerations.
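A small Python sketch of degree-based reasoning with a triangular membership function and a weighted-average defuzzification step; the sets, ranges and output levels are illustrative only.

    def triangular(x, a, b, c):
        """Membership degree of x in a triangular fuzzy set with feet a, c and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    temp = 27.0
    warm = triangular(temp, 20, 30, 40)      # degree to which 27 degrees is "warm"
    hot = triangular(temp, 30, 40, 50)       # degree to which it is "hot"

    # weighted-average defuzzification over two singleton outputs (fan speed in percent)
    total = warm + hot
    fan_speed = (40 * warm + 90 * hot) / total if total else 0.0
    print(warm, hot, fan_speed)
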
Category: Artificial Intelligence

[1064] viXra:2005.0023 [pdf] submitted on 2020-05-02 08:30:28

The Survey of Transferring Cryptocurrency Anonymization, Privacy, and Security

Authors: Vamsi K, Ganeshan M
Comments: Pages.

The Internet is used in many situations where we want to send or receive money or buy something, and we are used to using Visa cards, PayPal or other payment methods to transfer money. These traditional methods are neither anonymous nor private, so different methods are needed if we want to protect our privacy and anonymity. Cryptocurrencies come to mind, and indeed some of them are very secure and very anonymous. In this paper I discuss what cryptocurrencies are and how they work. We cover Bitcoin, as it is the most common cryptocurrency, and also a more private cryptocurrency, Monero. We discuss how to properly obtain these cryptocurrencies anonymously and privately, how to handle them in a secure and private manner, and how to send and receive them, again securely, privately and anonymously. This applies to scenarios such as transferring money to a friend or another person anonymously, paying for or buying something anonymously and privately, or simply buying from a website that only accepts cryptocurrencies.
Category: Artificial Intelligence

[1063] viXra:2005.0003 [pdf] submitted on 2020-05-01 07:59:43

Machine Learning Dielectric Nanostructures

Authors: George Rajna
Comments: 38 Pages.

A paper published in Advanced Photonics "Enhanced light–matter interactions in dielectric nanostructures via machine-learning approach," suggests that machine-learning techniques can be used to enhance metasurfaces, optimizing them for nonlinear optics and optomechanics. [25] Researchers have mathematically proven that a powerful classical machine learning algorithm should work on quantum computers. [24] Researchers at Oregon State University have used deep learning to decipher which ribonucleic acids have the potential to encode proteins. [23]
Category: Artificial Intelligence

[1062] viXra:2005.0002 [pdf] submitted on 2020-05-01 08:20:36

Machine Learning Particle Accelerator

Authors: George Rajna
Comments: 68 Pages.

Now, SLAC researchers have developed a new tool, using machine learning, that may make part of the tuning process five times faster compared to previous methods. [40] Compared with the previous method of data pre-processing, the new machine-learning-based method has quadrupled quality metrics for the identification of particles on the calorimeter. [39] From the data collected by the LHCb detector at the Large Hadron Collider, it appears that the particles known as charm mesons and their antimatter counterparts are not produced in perfectly equal proportions. The inclusion of short-range interactions in models of neutrinoless double-beta decay could impact the interpretation of experimental searches for the elusive decay. [34] The occasional decay of neutrons into dark matter particles could solve a long-standing discrepancy in neutron decay experiments. [33] The U.S. Department of Energy has approved funding and start of construction for the SuperCDMS SNOLAB experiment, which will begin operations in the early 2020s to hunt for hypothetical dark matter particles called weakly interacting massive particles, or WIMPs. [32] Thanks to low-noise superconducting quantum amplifiers invented at the University of California, Berkeley, physicists are now embarking on the most sensitive search yet for axions, one of today's top candidates for dark matter. [31]
Category: Artificial Intelligence

[1061] viXra:2004.0676 [pdf] submitted on 2020-04-29 17:09:39

Password Security: Best Practices and Management Strategies

Authors: Mohammed Hasan
Comments: 4 Pages.

Due to the rapid increase in the use of online technology, password security has become vital for users worldwide, who protect their sensitive data or accounts with a password known only to them. Over the years, as more data has come to be stored online, password strategies have also grown more creative, and certain data may only be accessed through unique methods such as a fingerprint scan. One of the major types of services that require users to protect their details is online banking, such as PayPal or NatWest, where users would provide a stronger password than for a low-value account such as a mobile phone game. This report goes in depth on best practices and management strategies for password security.
Category: Artificial Intelligence

[1060] viXra:2004.0675 [pdf] submitted on 2020-04-29 19:10:40

Survey on Table Detection from Document Image

Authors: Shashank Jain, Amritesh Singh, Rahul Ranjan Singh
Comments: 7 Pages.

Many types of invoices containing tables exist in current systems, such as tables in native text invoices, image invoices (II), handwritten invoices (HI), and so on. Nowadays, these different types of invoices are processed manually. Our aim is to survey systems that can handle invoices containing tables automatically using OCR (Optical Character Recognition) and deep learning technologies. We also discuss multiple technologies and suggest the best model based on our survey.
Category: Artificial Intelligence

[1059] viXra:2004.0611 [pdf] submitted on 2020-04-26 16:39:21

Language for Description of Worlds

Authors: Dimiter Dobrev
Comments: 34 Pages. Bulgarian language

We reduce the task of creating AI to the task of finding the right language for describing the world. This language will not be a programming language, because programming languages describe only computable functions, while this language will describe a slightly wider class of functions. Another feature of this language is that its descriptions can be divided into separate modules. This allows us to search for the world description automatically, detecting it module by module. Our approach to creating this new language is to start from one particular world and write a description of that particular world. Our idea is that a language that can describe this particular world will be appropriate for describing arbitrary worlds.
Category: Artificial Intelligence

[1058] viXra:2004.0580 [pdf] submitted on 2020-04-25 11:56:48

Speaker Identification of Customer and Agent Using Aws

Authors: Satya Narayana
Comments: 3 Pages.

Sentiment analysis plays an important role these days because many start-ups are built around user-driven content [1]. Simply detecting the voice is not enough in a real-time scenario, so analysing the sentiment of the agent and the customer separately is an important research area in natural language processing. Natural language processing has a wide range of applications such as voice recognition, machine translation, product reviews, aspect-oriented product analysis, sentiment analysis and text classification [2]. This process can improve a business by analysing the emotions of a conversation with respect to the customer's voice and the agent's voice separately. In this project, the author performs speaker identification and analyses the sentiment of the customer and the agent separately using Amazon Comprehend, a natural language processing (NLP) service that uses machine learning to extract the content of the conversation. Using speaker identification, unstructured data such as audio can be separated per speaker, making it easier to analyse business performance. The system identifies the emotions of the conversation and reports whether it is Positive, Negative, Neutral, or Mixed. AWS services are used because scaling resources is easier than with a purely local setup such as a support vector machine (SVM): S3 is an object data store; Transcribe converts audio to raw text; AWS Glue is an ETL service that extracts, transforms and loads the data from S3; Amazon Comprehend is the NLP service used for finding the sentiment of the audio; Lambda is a serverless environment where the author's code runs; Athena is an analysis tool that handles complex queries in less time; and QuickSight is a business intelligence tool for visualising the data of customers and agents.
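A minimal boto3 sketch of the sentiment step: after Amazon Transcribe has produced per-speaker text, each speaker's utterances are scored with Amazon Comprehend. The transcripts and region below are placeholders, and the S3/Glue/Athena plumbing is omitted.

    import boto3

    comprehend = boto3.client('comprehend', region_name='us-east-1')

    # placeholder per-speaker transcripts; in practice these come from Transcribe output
    transcripts = {
        'agent': 'Thank you for calling, how can I help you today?',
        'customer': 'My order never arrived and I am very upset.',
    }

    for speaker, text in transcripts.items():
        result = comprehend.detect_sentiment(Text=text, LanguageCode='en')
        print(speaker, result['Sentiment'], result['SentimentScore'])
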
Category: Artificial Intelligence

[1057] viXra:2004.0559 [pdf] submitted on 2020-04-23 19:33:20

An Efficient Approach for Credit Card Fraud Detection

Authors: Rajeev Kumar, Rajesh Budihal
Comments: 20 Pages.

The topic of credit card fraud detection has gained attention among researchers because fraudsters are increasing day by day and fraud appears in varied and widespread applications across information technology and engineering. For example, genetic algorithms, behavior-based techniques, and Hidden Markov models have been used to address these problems. Credit card fraud detection models are tested on transactions individually, and whatever is most effective is taken forward. This work aims to detect fraudulent transactions and to develop a method of generating test data. These algorithms are a predictive approach to solving computational problems of high complexity. We discuss a new method to detect fraud by combining the above techniques to obtain an improved result. The genetic algorithm is an adaptive, evolutionary search technique based on genetics and survival of the fittest. Implementation of efficient credit card fraud detection systems is essential for credit card issuing companies and their customers to reduce their losses.
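As a generic baseline only (the paper surveys genetic, behaviour-based and Hidden Markov approaches rather than this model), an imbalanced-data classifier in scikit-learn could be sketched as follows, with random stand-in features instead of real transaction data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    X = np.random.rand(5000, 10)                      # stand-in transaction features
    y = (np.random.rand(5000) < 0.02).astype(int)     # roughly 2% fraudulent labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, class_weight='balanced', random_state=0)
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
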
Category: Artificial Intelligence

[1056] viXra:2004.0412 [pdf] submitted on 2020-04-17 08:01:03

Ai Checks CT Scans for Covid-19

Authors: George Rajna
Comments: 45 Pages.

Artificial intelligence (AI) can diagnose COVID-19 from CT scans, researchers in China claim [26] Researchers in Berlin and Heidelberg have now developed an intelligent neural network that can predict the functions of proteins in the human body. [25] AI combined with stem cells promises a faster approach to disease prevention. Andrew Masterson reports. [24] According to product chief Trystan Upstill, the news app "uses the best of artificial intelligence to find the best of human intelligence—the great reporting done by journalists around the globe." [23]
Category: Artificial Intelligence

[1055] viXra:2004.0371 [pdf] submitted on 2020-04-15 02:39:41

Inverted Generator Classifier: Accurate and Robust Gradient-Descent Based Classifier

Authors: Jeongik Cho
Comments: 6 Pages.

In deep learning, a traditional classifier takes input data and outputs predicted labels, while a conditional GAN receives a latent vector and a condition vector and generates data with the desired condition. In this paper, I propose an inverted generator classifier that predicts the label of input data by finding the condition vector and latent vector that can generate the input data using the generator of a conditional GAN. The inverted generator classifier uses the trained generator of the conditional GAN as it is. To find the data closest to the input data, it treats the latent vector as a variable and the model parameters as constants for each condition, and repeatedly performs gradient descent. Then, among the data generated for each condition, the condition vector of the data closest to the input becomes the predicted label. The inverted generator classifier is slow at prediction time because it predicts via gradient descent, but its accuracy is high and it is very robust against adversarial attacks [1] such as noise.
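A PyTorch sketch of the prediction loop, assuming a generator callable as generator(z, condition); latent size, step count and learning rate are placeholders.

    import torch

    def igc_predict(generator, x, num_classes, latent_dim=100, steps=200, lr=0.05):
        best_label, best_err = None, float('inf')
        for c in range(num_classes):
            cond = torch.nn.functional.one_hot(torch.tensor([c]), num_classes).float()
            z = torch.zeros(1, latent_dim, requires_grad=True)
            opt = torch.optim.Adam([z], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                loss = torch.mean((generator(z, cond) - x) ** 2)   # reconstruction error for this class
                loss.backward()
                opt.step()
            with torch.no_grad():
                err = torch.mean((generator(z, cond) - x) ** 2).item()
            if err < best_err:
                best_label, best_err = c, err
        return best_label   # class whose generated sample is closest to x
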
Category: Artificial Intelligence

[1054] viXra:2004.0363 [pdf] submitted on 2020-04-15 08:02:01

Multi-Task Deep Learning Based CT Imaging Analysis for Covid-19: Classification and Segmentation

Authors: Amine Amyar, Romain Modzelewski, Su Ruan
Comments: 7 Pages.

The fast spread of the novel coronavirus COVID-19 has aroused worldwide interest and concern and has caused more than one and a half million confirmed cases to date. To combat this spread, medical imaging such as computed tomography (CT) can be used for diagnosis, and an automatic detection tool is necessary to help screen for COVID-19 pneumonia using chest CT imaging. In this work, we propose a multitask deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Our motivation is to leverage useful information contained in multiple related tasks to help improve both segmentation and classification performance. Our architecture is composed of an encoder, two decoders for reconstruction and segmentation, and a multi-layer perceptron for classification. The proposed model is evaluated and compared with other image segmentation and classification techniques on a dataset of 1044 patients, including 449 patients with COVID-19, 100 normal cases, 98 with lung cancer and 397 with other kinds of pathology. The obtained results show very encouraging performance, with a Dice coefficient higher than 0.78 for the segmentation and an area under the ROC curve higher than 93% for the classification.
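A compact PyTorch sketch of the described layout: one shared encoder, a reconstruction decoder, a segmentation decoder and an MLP classification head. Layer sizes and depths are illustrative and do not reproduce the paper's architecture.

    import torch
    import torch.nn as nn

    class MultiTaskCT(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.decoder_recon = nn.Sequential(       # reconstructs the input slice
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2))
            self.decoder_seg = nn.Sequential(          # predicts the lesion mask
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())
            self.classifier = nn.Sequential(           # COVID / non-COVID logit
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, x):
            h = self.encoder(x)
            return self.decoder_recon(h), self.decoder_seg(h), self.classifier(h)

    model = MultiTaskCT()
    recon, seg_mask, covid_logit = model(torch.randn(2, 1, 128, 128))
    # training would combine a reconstruction loss, a segmentation loss and a classification loss
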
Category: Artificial Intelligence

[1053] viXra:2004.0318 [pdf] submitted on 2020-04-12 21:21:53

Instancenet: Object Instance Segmentation Using DNN

Authors: Yuan Gao
Comments: 11 Pages.

One-stage object detectors like SSD and YOLO are able to speed up existing two-stage detectors like Faster R-CNN by removing the object proposal stage and making up for the lost performance in other ways. Nonetheless, the same approach is not easily transferable to instance segmentation task. Current one-stage instance segmentation methods can be simply classified into segmentation-based methods which segment first then do clustering, and proposal-based methods which detect first then predict masks for each instance proposal. Proposal-based methods always enjoy a better mAP; by contrast, segmentation-based methods are generally faster when inferencing. In this work, we first propose a one-stage segmentation-based instance segmentation solution, in which a pull loss and a push loss are used for differentiating instances. We then propose two post-processing methods, which provide a trade-off between accuracy and speed.
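A PyTorch sketch of the pull/push idea on per-pixel embeddings, in the spirit of discriminative instance-embedding losses: the pull term draws embeddings toward their instance mean, and the push term forces different instance means apart. The margin and weighting are illustrative, not the paper's values.

    import torch

    def pull_push_loss(embeddings, instance_ids, margin=1.5):
        """embeddings: (N, D) pixel embeddings; instance_ids: (N,) integer instance labels."""
        means, pull = [], 0.0
        for inst in instance_ids.unique():
            e = embeddings[instance_ids == inst]
            mu = e.mean(dim=0)
            means.append(mu)
            pull = pull + ((e - mu) ** 2).sum(dim=1).mean()   # pull pixels toward their instance mean
        push = 0.0
        for i in range(len(means)):
            for j in range(i + 1, len(means)):
                d = torch.norm(means[i] - means[j])
                push = push + torch.clamp(margin - d, min=0) ** 2   # push distinct means apart
        return pull / len(means) + push
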
Category: Artificial Intelligence

[1052] viXra:2004.0293 [pdf] submitted on 2020-04-11 22:56:08

Comparative Study of MD5 and SHA Security Algorithm

Authors: Abhishek.B.N
Comments: 4 Pages.

Security algorithms enable secure communication between two parties in the presence of a third party or snooper. They assure the recipient of the genuineness of the received message, protect the message against unauthorized release of its content by a third party, and ensure that only authorized users can access the data. MD5 and SHA are cryptographic hash algorithms: one-way hashing functions that are easy to compute but much harder to reverse, so that recovering the original message content would take an impractically long time. This research paper analyses the two hash algorithms, MD5 and SHA, using various key features. Their features have also been highlighted in order to provide a better comparison, so that readers can understand which algorithm has superseded the other.
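The two digests under comparison can be computed directly with Python's hashlib; note the difference in digest length (128 bits for MD5 versus 256 bits for SHA-256).

    import hashlib

    message = b"transfer 100 GBP to account 12345"
    print("MD5:    ", hashlib.md5(message).hexdigest())      # 32 hex characters (128 bits)
    print("SHA-256:", hashlib.sha256(message).hexdigest())   # 64 hex characters (256 bits)
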
Category: Artificial Intelligence

[1051] viXra:2004.0248 [pdf] submitted on 2020-04-10 16:17:02

Predicting the Likelihood of Mortality in Confirmed Positive COVID-19 Patients

Authors: Rajdeep Singh
Comments: 3 Pages.

The novel coronavirus, COVID-19, has evolved into a global pandemic. It is therefore imperative that countries and medical facilities are equipped with the technology and resources to give every person the greatest chance of surviving. Even developed nations are beginning to run low on medical supplies such as hospital beds, masks, and respirators, and as cases grow in the United States, hospitals will continue to run out of supplies. It is imperative that medical supplies be distributed first to those who need them most. This paper outlines a machine learning approach to predicting which patients are at the highest risk of mortality given a confirmed positive diagnosis of coronavirus. The final results were not conclusive enough to be implemented in a real-world scenario.
Category: Artificial Intelligence

[1050] viXra:2004.0222 [pdf] submitted on 2020-04-10 12:08:02

Decoupling Global and Local Representations From/for Image Generation

Authors: Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
Comments: 24 Pages. Preprint

In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting. The proposed model utilizes the variational auto-encoding framework to learn a (low-dimensional) vector of latent variables to capture the global information of an image, which is fed as a conditional input to a flow-based invertible decoder with architecture borrowed from style transfer literature. Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning. Importantly, this work demonstrates that with only architectural inductive biases, a generative model with a plain log-likelihood objective is capable of learning decoupled representations, requiring no explicit supervision. The code for our model is available at https://github.com/XuezheMax/wolf.
Category: Artificial Intelligence

[1049] viXra:2004.0190 [pdf] submitted on 2020-04-08 01:37:34

AI Predicts Movements of Molecules

Authors: George Rajna
Comments: 41 Pages.

A team of researchers at Google's DeepMind has developed an AI system that is able to predict the movement of glass molecules as the material transitions between liquid and solid states. [25] A research team centered at Osaka University, in collaboration with RIKEN, has developed a system that can overcome these difficulties by automatically searching for, focusing on, imaging, and tracking single molecules within living cells. [24] But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. [23] Researchers at the University of Twente, working with colleagues at the Technical Universities of Delft and Eindhoven, have successfully developed a new and interesting building block. [22] Researchers at the Institut d'Optique Graduate School at the CNRS and Université Paris-Saclay in France have used a laser-based technique to rearrange cold atoms one-by-one into fully ordered 3D patterns. [21] Reduced entropy in a three-dimensional lattice of super-cooled, laser-trapped atoms could help speed progress toward creating quantum computers. [20] Under certain conditions, an atom can cause other atoms to emit a flash of light. At TU Wien (Vienna), this quantum effect has now been measured. [19] A recent discovery by William & Mary and University of Michigan researchers transforms our understanding of one of the most important laws of modern physics. [18] Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first. [17]
Category: Artificial Intelligence

[1048] viXra:2004.0159 [pdf] submitted on 2020-04-07 03:45:06

HyperSpacetime: Complex Algebro-Geometric Analysis of Intelligence Quantum Entanglement Convergent Evolution

Authors: Yang Zhang
Comments: 14 Pages.

Nature is structural instead of random, correlation is just approximation of causality, and data is not science: the more we reveal the more we revere nature on our voyage of unprecedented discovery. We argue that the soul(s) or exotic soul(s) of quotient Hypercomplex arbifold multiscale Spacetime (HyperSpacetime)'s corresponding manifold(s)/general (quotient and non-quotient) HyperSpacetime is the origin of super/general intelligence, and the metric of super/general intelligence is the complexity of quotient/general HyperSpacetime's corresponding generic polynomial. We also argue that the intersecting soul(s) and/or exotic soul(s) as varieties of quotient HyperSpacetime's corresponding manifold(s), when their maximal/minimum sectional curvatures approaching positive infinity and/or negative infinity as singularities, is the origin of quantum entanglement. We further argue that the maximal/minimum sectional curvatures of the same intersecting soul(s) and/or exotic soul(s), is the origin of convergent evolution through conformal transformation. We derive even N-dimensional HyperSpacetime, a M-open (\begin{math} M = C_{_{I+N}}^{^I} \text{, } I, N, M \to \infty \end{math}) arbifold as generalized orbifold with the structure of a algebraic variety $\mathcal{A}$, without or with loop group action as $\mathcal{A}=[\mathcal{M}/\mathcal{LG}]$ ($\mathcal{M}$ as complex manifold, $\mathcal{LG}$ as loop group), it arises from I-degree (power of 2) hypercomplex even N-degree generic polynomial continuous/discrete function/functor as nonlinear action functional in hypercomplex $\mathbb{HC}^{\infty}$ useful for generic neural networks: $\mathcal{F}(S_j,T_j)=\prod_{n=1}^{^{N}}(w_nS_n(T_n)+b_n+ \gamma \sum_{k=1}^{^{j}}\mathcal{F}(S_{k-1},T_{k-1}))$ where $j=1,\dots,N$, $S_{i}=s_0e_0+\sum_{i=1}^{^{{I-1}}}s_{i}e_{i}$, $T_{i}=t_0e_0+\sum_{i=1}^{^{{I-1}}}t_{i}e_{i}$ over noncommutative nonassociative loop group. Its sectional curvature is \begin{math} \kappa = \frac{{\left| {\mathcal{F}''\left(X \right)} \right|}}{{{{\left( {1 + {{\left[ {\mathcal{F}'\left(X \right)} \right]}^2}} \right)}^{\frac{3}{2}}}}} \end{math} if $\mathcal{F}(X)$ is smooth, or \begin{math} \kappa = \kappa_{max}\kappa_{min} \end{math} if nonsmooth, by correlating general relativity with quantum mechanics via extension from 3+1 dimensional spacetime $\mathbb{R}^{4}$ to even N-dimensional HyperSpacetime $\mathbb{HC}^{\infty}$. By directly addressing multiscale, singularities, statefulness, nonlinearity instead of via activation function and backpropagation, HyperSpacetime with its corresponding generic polynomial determining the complexity of ANN, rigorously models curvature-based $2^{nd}$ order optimization in arbifold-equivalent neural networks beyond gradient-based $1^{st}$ order optimization in manifold-approximated adopted in AI. We establish HyperSpacetime generic equivalence theory by synthesizing Generalized Poincar\'{e} conjecture, soul theorem, Galois theory, Fermat's last theorem, Riemann hypothesis, Hodge conjecture, Euler's theorem, Euclid theorem and universal approximation theorem. Our theory qualitatively and quantitatively tackles the black box puzzle in AI, quantum entanglement and convergent evolution. Our future work includes HyperSpacetime refinement, complexity reduction and synthesis as our ongoing multiversal endeavor.
Category: Artificial Intelligence

[1047] viXra:2004.0106 [pdf] submitted on 2020-04-05 05:57:59

Artificial Intelligence Material Formula

Authors: George Rajna
Comments: 51 Pages.

To this end, Ph.D. researcher Lars Banko, together with colleagues from the Interdisciplinary Centre for Advanced Materials Simulation at RUB, Icams for short, modified a so-called generative model. [30] Now, researchers have tested the first artificial intelligence model to identify and rank many causes in real-world problems without time-sequenced data, using a multi-nodal causal structure and Directed Acyclic Graphs. [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence

[1046] viXra:2004.0083 [pdf] submitted on 2020-04-04 01:50:26

An Insight into Ruby Programming Language based Probing of Scientific Imaging Informatics Framework –An Interesting Simple Suggestion for Rapid Prototyping of Algorithms.

Authors: Nirmal Tej Kumar
Comments: 3 Pages. Short Communication & Technical Notes

A Technical Communication on Understanding & Exploring [ Recommender Systems + Machine Learning(ML) + NLP + QRNG/mruby + SmartDevices + IoT/HPC-High Performance Computing ] in the Context of Advanced Scientific Imaging Algorithms towards Software R&D Using Ruby –> [ Designing + Developing + Testing ] Heterogeneous Computing Environments. { https://www.semanticscholar.org/ - COVID 19 Information is our inspiration }
Category: Artificial Intelligence

[1045] viXra:2004.0034 [pdf] submitted on 2020-04-02 04:56:48

Power of ai in Covid-19 Crisis

Authors: George Rajna
Comments: 75 Pages.

Artificial intelligence (AI) may soon have a central role to play in the global battle against COVID-19. [42] Simon Fraser University researchers will use their pioneering imaging technology—called Mango, for its bright colour— to develop coronavirus testing kits. [41] According to the Centers for Disease Control and Prevention, common human coronaviruses usually cause mild to moderate upper-respiratory tract illnesses, like the common cold. [40]
Category: Artificial Intelligence

[1044] viXra:2004.0029 [pdf] submitted on 2020-04-02 08:48:20

AI Spin on Neutron Experiments

Authors: George Rajna
Comments: 26 Pages.

For the first time, a team at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) is using artificial intelligence (AI) to find patterns in neutron scattering data that can lead to an understanding of the physics inside quantum or complex magnetic materials. [15] "As far as we know, this is the first published work showing an application of super resolution to neutrons. We're at the forefront of an exciting new trend that will help other neutron scattering facilities improve their own data resolution as well," said Lin. [14] Coupled with SNS, the world's most powerful pulsed accelerator-based neutron source, VENUS will be the only open research facility platform in the US to provide time-of-flight neutron imaging capabilities to users from academia and industry. [13] A spallation neutron source has been used by physicists in Japan to search for possible violations of the inverse square law of gravity. [12] Physicists have proposed a way to test quantum gravity that, in principle, could be performed by a laser-based, table-top experiment using currently available technology. [11] Now however, a new type of materials, the so-called Weyl semimetals, similar to 3-D graphene, allow us to put the symmetry destructing quantum anomaly to work in everyday phenomena, such as the creation of electric current. [10] Physicist Professor Chunnong Zhao and his recent PhD students Haixing Miao and Yiqiu Ma are members of an international team that has created a particularly exciting new design for gravitational wave detectors. [9] A proposal for a gravitational-wave detector made of two space-based atomic clocks has been unveiled by physicists in the US. [8] The gravitational waves were detected by both of the twin Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA. [7] A team of researchers with the University of Lisbon has created simulations that indicate that the gravitational waves detected by researchers with the LIGO project, and which are believed to have come about due to two black holes colliding, could just have easily come from another object such as a gravaster (objects which are believed to have their insides made of dark energy) or even a wormhole. In their paper published in Physical Review Letters, the team describes the simulations they created, what was seen and what they are hoping to find in the future. [6] In a landmark discovery for physics and astronomy, international scientists said Thursday they have glimpsed the first direct evidence of gravitational waves, or ripples in space-time, which Albert Einstein predicted a century ago. [5] Scientists at the National Institute for Space Research in Brazil say an undiscovered type of matter could be found in neutron stars (illustration shown). Here matter is so dense that it could be 'squashed' into strange matter. This would create an entire 'strange star'-unlike anything we have seen. [4] The changing acceleration of the electrons explains the created negative electric field of the magnetic induction, the electromagnetic inertia, the changing relativistic mass and the Gravitational Force, giving a Unified Theory of the physical forces. Taking into account the Planck Distribution Law of the electromagnetic oscillators also, we can explain the electron/proton mass rate and the Weak and Strong Interactions.
Category: Artificial Intelligence

[1043] viXra:2004.0024 [pdf] submitted on 2020-04-01 10:54:54

Artificial Intelligence Finds 2-D of an Eye

Authors: George Rajna
Comments: 39 Pages.

Researchers at the Institute of Industrial Science, a part of The University of Tokyo, demonstrated a novel artificial intelligence system that can find and label 2-D materials in microscope images in the blink of an eye. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[1042] viXra:2004.0003 [pdf] submitted on 2020-04-01 09:41:22

Deep Learning 3-D Laser Damage

Authors: George Rajna
Comments: 40 Pages.

Recently, a research team from Shanghai Institute of Optics and Fine Mechanics of the Chinese Academy of Sciences (CAS) proposed a three-dimensional damage localization method which was insensitive to the type of damage. [26] A UCLA research team has devised a technique that extends the capabilities of fluorescence microscopy, which allows scientists to precisely label parts of living cells and tissue with dyes that glow under special lighting. [25] Social, economic, environmental and health inequalities within cities can be detected using street imagery. [24]
Category: Artificial Intelligence

[1041] viXra:2003.0652 [pdf] submitted on 2020-03-30 07:45:49

Machine Learning on Spin Models

Authors: George Rajna
Comments: 44 Pages.

Researchers from Tokyo Metropolitan University have used machine learning to analyze spin models, which are used in physics to study phase transitions. [26] We are still far off from achieving Quantum Advantage for machine learning-the point at which quantum computers surpass classical computers in their ability to perform AI algorithms. [25] Physicists in the US have used machine learning to determine the phase diagram of a system of 12 idealized quantum particles to a higher precision than ever before. [24] The research group took advantage of a system at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning-a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data-with experiments that quickly make and screen hundreds of sample materials at a time. [23] Researchers at the UCLA Samueli School of Engineering have demonstrated that deep learning, a powerful form of artificial intelligence, can discern and enhance microscopic details in photos taken by smartphones. [22] Such are the big questions behind one of the new projects underway at the MIT-IBM Watson AI Laboratory, a collaboration for research on the frontiers of artificial intelligence. [21]
Category: Artificial Intelligence

[1040] viXra:2003.0583 [pdf] submitted on 2020-03-26 11:48:51

Sense Theory. Antiderivative

Authors: Egger Mielberg
Comments: 15 Pages.

Just as each neuron of the human brain may be connected to up to 10,000 other neurons, passing signals to each other via as many as 1,000 trillion synaptic connections, in Sense Theory it is possible to connect over 1,000 trillion heterogeneous objects. An object in Sense Theory is like a neuron in the human brain, and the properties of the object are like the dendrites of the neuron. Changing an object by adding or deleting its properties is like forming new knowledge through the synaptic connections of two or more neurons. In Sense Theory, we introduced a mechanism for determining possible semantic relationships between objects by connecting and disconnecting different properties: the Sense Integral. In this article, we describe one of the instruments, the sense antiderivative, which sheds light on the nature of forming new knowledge in the field of Artificial Intelligence.
Category: Artificial Intelligence

[1039] viXra:2003.0565 [pdf] submitted on 2020-03-26 10:23:15

Artificial Intelligence in the Lab

Authors: George Rajna
Comments: 51 Pages.

An Australian-German collaboration has demonstrated fully-autonomous SPM operation, applying artificial intelligence and deep learning to remove the need for constant human supervision. [30] Now, researchers have tested the first artificial intelligence model to identify and rank many causes in real-world problems without time-sequenced data, using a multi-nodal causal structure and Directed Acyclic Graphs. [29] A country that thinks its adversaries have or will get AI weapons will want to get them too. Wide use of AI-powered cyberattacks may still be some time away. [28] Following the old saying that "knowledge is power", companies are seeking to infer increasingly intimate properties about their customers as a way to gain an edge over their competitors. [27] Researchers from Human Longevity, Inc. (HLI) have published a study in which individual faces and other physical traits were predicted using whole genome sequencing data and machine learning. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. [21] We should remain optimistic that quantum computing and AI will continue to improve our lives, but we also should continue to hold companies, organizations, and governments accountable for how our private data is used, as well as the technology's impact on the environment. [20]
Category: Artificial Intelligence

[1038] viXra:2003.0557 [pdf] submitted on 2020-03-25 19:23:46

Covid-19 :Statistical Exploration

Authors: Ayoub Abraich
Comments: 4 Pages. Code : https://github.com/abraich/COVID-19

In this article we present a naive model for the prediction of the number of COVID-19 infections, with illustrations of real data on the evolution of COVID-19 in France.
Category: Artificial Intelligence

[1037] viXra:2003.0508 [pdf] submitted on 2020-03-24 09:40:36

Nanoscale Device Mimic Human Brain

Authors: George Rajna
Comments: 52 Pages.

In a paper published in Nature Nanotechnology on 23 March 2020, the researchers from the NUS Nanoscience and Nanotechnology Initiative (NUSNNI) reported the invention of a nanoscale device based on a unique material platform that can achieve optimal digital in-memory computing while being extremely energy efficient. [35] University of Central Florida researchers are helping to close the gap separating human and machine minds. [34] Brain-machine interfaces provide one way to connect with this puzzling organ system, including the brain. [33]
Category: Artificial Intelligence

[1036] viXra:2003.0484 [pdf] submitted on 2020-03-22 21:53:07

Random Thoughts on Neural Networks from the Views of Data Space Transformation and Ensemble Classification

Authors: Qing Tian, Guangjun Tian
Comments: 4 Pages. in Chinese

This manuscript sketch first describes the effect of neural networks from the perspective of data space transformation, namely transforming data in a complicated raw space into an easily (e.g. linearly) separable space. We use a simple paper-wrapping example to illustrate this point. In addition, this sketch also discusses some similarities between neural networks and ensemble classification.
Category: Artificial Intelligence
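
The point about data space transformation can be illustrated with a toy example (not the paper's paper-wrapping example): two concentric rings are not linearly separable in the raw (x, y) plane, but a single nonlinear feature of the kind a hidden layer can learn makes one threshold sufficient. A minimal Python sketch:

    # Illustration only: two concentric rings are not linearly separable in the
    # raw (x, y) space, but become separable after a simple nonlinear
    # transformation -- the kind of re-mapping a hidden layer can learn.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    angles = rng.uniform(0, 2 * np.pi, n)
    r_inner, r_outer = 1.0, 3.0
    inner = np.c_[r_inner * np.cos(angles), r_inner * np.sin(angles)]
    outer = np.c_[r_outer * np.cos(angles), r_outer * np.sin(angles)]
    X = np.vstack([inner, outer]) + rng.normal(scale=0.1, size=(2 * n, 2))
    y = np.r_[np.zeros(n), np.ones(n)]

    # Transformed space: a single extra coordinate x1^2 + x2^2.
    z = (X ** 2).sum(axis=1)

    # In the transformed space one threshold separates the two classes.
    threshold = (r_inner ** 2 + r_outer ** 2) / 2
    pred = (z > threshold).astype(float)
    print("accuracy after transformation:", (pred == y).mean())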

[1035] viXra:2003.0419 [pdf] submitted on 2020-03-20 05:16:19

AI Enhance Accuracy of CT Scans

Authors: George Rajna
Comments: 51 Pages.

After testing prototype AI software on over 140 patients, a multinational team of researchers found that the algorithm showed very strong correlation with traditional pulmonary function tests. [32] A new artificial-intelligence tool captures strategies used by top players of an internet-based videogame to design new RNA molecules. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[1034] viXra:2003.0378 [pdf] submitted on 2020-03-18 05:10:54

AI Prevent Disruption in Fusion Devices

Authors: George Rajna
Comments: 51 Pages.

This capability will be crucial for ITER, the large international tokamak under construction in France to demonstrate the practicality of fusion energy. [32] An artificial intelligence (AI) algorithm can transform low-dose CT (LDCT) scans into high-quality exams that radiologists may even prefer over LDCT studies produced via commercial iterative reconstruction techniques. [31] A team of EPFL scientists has now written a machine-learning program that can predict, in record time, how atoms will respond to an applied magnetic field. [30] Researchers from the University of Luxembourg, Technische Universität Berlin, and the Fritz Haber Institute of the Max Planck Society have combined machine learning and quantum mechanics to predict the dynamics and atomic interactions in molecules. [29] For the first time, physicists have demonstrated that machine learning can reconstruct a quantum system based on relatively few experimental measurements. [28] AlphaZero plays very unusually; not like a human, but also not like a typical computer. Instead, it plays with "real artificial" intelligence. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[1033] viXra:2003.0373 [pdf] submitted on 2020-03-18 06:16:14

Machines Learn Chemistry

Authors: George Rajna
Comments: 37 Pages.

Models based on artificial intelligence can significantly change the way we approach chemical syntheses. But we are still at the very beginning." [26] A new tool is drastically changing the face of chemical research-artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm-called MPLasso-that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses-so-called retrosyntheses-with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[1032] viXra:2003.0304 [pdf] submitted on 2020-03-14 01:21:27

A Short & Simple Technical Communication on Algorithms Design Using Python Based [ Applied Physics+AI+Imaging Mathematics+Data Bases ] → Image Processing Software R&D.

Authors: Nirmal Tej Kumar
Comments: 7 Pages. Short Communication

A Short & Simple Technical Communication on Algorithms Design Using Python Based [ Applied Physics+AI+Imaging Mathematics+Data Bases ] → Image Processing Software R&D.
Category: Artificial Intelligence

[1031] viXra:2003.0193 [pdf] submitted on 2020-03-09 15:22:53

Machine Learning Odd LHC Data

Authors: George Rajna
Comments: 54 Pages.

This study is part of a larger, coordinated effort across all the LHC experiments to use modern machine techniques to improve how the large data samples are recorded by the detectors and the subsequent data analysis. [28] Machine learning and automation technologies are gearing up to transform the radiation-therapy workflow while freeing specialist clinical and technical staff to dedicate more time to patient care. [27] Navid Borhani, a research-team member, says this machine learning approach is much simpler than other methods to reconstruct images passed through optical fibers, which require making a holographic measurement of the output. [26]
Category: Artificial Intelligence

[1030] viXra:2003.0138 [pdf] submitted on 2020-03-07 07:22:48

Machine Learning Material's Order

Authors: George Rajna
Comments: 39 Pages.

A Cornell collaboration led by physicist Brad Ramshaw, the Dick & Dale Reis Johnson Assistant Professor in the College of Arts and Sciences, used a combination of ultrasound and machine learning to narrow the possible explanations for what happens to this quantum material when it enters this so-called "hidden order." [27] A new study from the U.S. Department of Energy's (DOE) Argonne National Laboratory has achieved a breakthrough in the effort to mathematically represent how water behaves. [26] A new tool is drastically changing the face of chemical research – artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24]
Category: Artificial Intelligence

[1029] viXra:2003.0033 [pdf] submitted on 2020-03-02 08:54:35

Machine Learning with Tomography

Authors: George Rajna
Comments: 31 Pages.

By using machine learning as an image processing technique, scientists can dramatically accelerate the heretofore laborious manual process of quantitatively looking for and at interfaces without having to sacrifice accuracy. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm-called MPLasso-that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses-so-called retrosyntheses-with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[1028] viXra:2003.0032 [pdf] submitted on 2020-03-02 09:04:51

Machine Learning Earthquake Data

Authors: George Rajna
Comments: 34 Pages.

The new method could allow researchers to artificially synthesize the low-frequency waves that are hidden in seismic data, which can then be used to more accurately map the Earth's internal structures. [22] By using machine learning as an image processing technique, scientists can dramatically accelerate the heretofore laborious manual process of quantitatively looking for and at interfaces without having to sacrifice accuracy. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm-called MPLasso-that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses-so-called retrosyntheses-with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[1027] viXra:2003.0019 [pdf] submitted on 2020-03-01 02:57:08

Estimation of Staff Meal Demand Quantity for Businesses Using Artificial Neural Networks

Authors: M. Hanefi CALP
Comments: 13 Pages.

Today, many public and private institutions provide professional catering services for their staff. Planning this service is complicated by the generally large number of employees and by staff being away from the institution for personal or institutional reasons, so determining the daily meal demand becomes difficult, causing losses of cost, time, and labour for institutions. Statistical or heuristic methods are used to eliminate, or at least minimize, these losses. In this study, an artificial-intelligence-based model is proposed that estimates daily meal demand for businesses using artificial neural networks. The data were obtained from the cafeteria database of a private company serving daily meals, with a staff capacity of 110 people working at different levels, and cover the last two years (2016-2018). The model was built in the MATLAB package program. Its performance was assessed using regression (R) values, Mean Absolute Percentage Error (MAPE), and Mean Squared Error (MSE). A feed-forward back-propagation network architecture was used for training. The best model obtained from the trials has a multilayer (8-10-10-1) structure with a training R of 0.9948, a test R of 0.9830, and an error of 0.003783. The experimental results show that the model has a low error rate and high performance, and demonstrate the positive effect of using artificial neural networks for demand estimation.
Category: Artificial Intelligence
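
As a rough illustration of the kind of model the abstract describes (a feed-forward network with an 8-10-10-1 topology regressing daily meal demand), the following Python sketch fits a two-hidden-layer perceptron and reports MSE and MAPE. The features and data are synthetic placeholders, not the company cafeteria data used in the paper:

    # Hedged sketch: a feed-forward regressor with two hidden layers
    # (8 inputs -> 10 -> 10 -> 1). Data below are synthetic placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error  # sklearn >= 0.24

    rng = np.random.default_rng(1)
    n_days, n_features = 700, 8            # roughly two years of daily records, 8 inputs
    X = rng.normal(size=(n_days, n_features))
    true_w = rng.normal(size=n_features)
    y = 100 + X @ true_w * 10 + rng.normal(scale=3, size=n_days)   # demand counts

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("MSE :", mean_squared_error(y_te, pred))
    print("MAPE:", mean_absolute_percentage_error(y_te, pred))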

[1026] viXra:2003.0018 [pdf] submitted on 2020-03-01 02:59:47

Evaluation of Multidisciplinary Effects of Artificial Intelligence with Optimization Perspective

Authors: M. Hanefi CALP
Comments: 10 Pages.

Artificial Intelligence has an important place in the scientific community as a result of its successful outputs in different fields. Over time, the field of Artificial Intelligence has been divided into many sub-fields because of the increasing number of different solution approaches, methods, and techniques. Machine Learning has the most remarkable role, with its ability to learn from samples from the environment. On the other hand, intelligent optimization, inspired by nature and swarms, has its own unique scientific literature, with effective solutions provided for optimization problems from different fields. Because intelligent optimization can be applied effectively in different fields, this study aims to provide a general discussion of the multidisciplinary effects of Artificial Intelligence by considering its optimization-oriented solutions. The study briefly covers the background of intelligent optimization and then gives application examples of intelligent optimization from a multidisciplinary perspective.
Category: Artificial Intelligence

[1025] viXra:2003.0017 [pdf] submitted on 2020-03-01 03:00:58

A Hybrid ANFIS-GA Approach for Estimation of Regional Rainfall Amount

Authors: M. Hanefi CALP
Comments: 18 Pages.

Effective use and management of ever-diminishing water resources are critically important to the future of humanity. Rainfall is one of the most important factors that replenish water resources, but rainfall higher than normal causes many disasters such as floods and erosion. Therefore, the rainfall amount in a region must be analyzed mathematically, statistically or heuristically in order to take precautions. In this study, an Adaptive Neuro-Fuzzy Inference System - Genetic Algorithm (ANFIS-GA) based hybrid model was proposed for estimation of regional rainfall amount. The purpose of the study is to minimize the loss of life and property for people of the region by estimating the amount of annual rainfall, ensuring effective management of water resources and allowing evaluations and preparations for possible climate changes. The estimation model was developed by coding in the MATLAB package program. In the development of the model, 3650 meteorological records from the years 2008-2018 for Basel, a Swiss city, were used. The real data were tested on both an Artificial Neural Network (ANN) and the hybrid ANFIS-GA model. The obtained results demonstrated that the training R-value of the suggested ANFIS-GA model was 0.9920, the testing R-value was 0.9840 and the error ratio was 0.0011. This clearly shows that the predictive performance of the model is high and the error level is low, and therefore that hybrid approaches such as ANFIS-GA can readily be used in predicting meteorological events.
Category: Artificial Intelligence

[1024] viXra:2003.0015 [pdf] submitted on 2020-03-01 03:03:48

Optimization of Project Scheduling Activities in Dynamic CPM and PERT Networks Using Genetic Algorithms

Authors: M. Hanefi CALP
Comments: 13 Pages.

Projects consist of interconnected dimensions such as objective, time, resources and environment. Using these dimensions in a controlled way and scheduling them effectively brings project success. The project scheduling process includes defining project activities and estimating the time and resources to be used for the activities. Project resource-scheduling problems began to attract more attention after the Program Evaluation and Review Technique (PERT) and the Critical Path Method (CPM) were developed one after the other. However, the complexity and difficulty of CPM and PERT processes led to these techniques being implemented through artificial intelligence methods such as Genetic Algorithms (GA). In this study, an algorithm was proposed and developed which determines the critical path, critical activities and project completion duration by using GA instead of the CPM and PERT techniques used for network analysis within the scope of project management. GA was chosen because such algorithms are an effective method for solving complex optimization problems. Therefore, correct decisions can be made for the implemented project activities by using the obtained results. Optimum results were obtained in a shorter time than with the CPM and PERT techniques by using the model based on the dynamic algorithm. It is expected that this study will contribute to the performance (time, speed, low error, etc.) of other studies.
Category: Artificial Intelligence
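
For reference, the quantity the proposed GA searches for (the critical path and project completion duration) can be computed exactly on small networks with the classical CPM forward pass. A minimal Python sketch on a hypothetical activity list (not from the paper):

    # Reference CPM forward pass: earliest finish times and the critical path
    # as the longest path through the precedence network. Activities are a
    # hypothetical example, not data from the paper.
    activities = {            # name: (duration, list of predecessors)
        "A": (3, []),
        "B": (4, ["A"]),
        "C": (2, ["A"]),
        "D": (5, ["B", "C"]),
        "E": (1, ["D"]),
    }

    earliest_finish, critical_pred = {}, {}
    for name in activities:                  # dict preserves insertion order;
        dur, preds = activities[name]        # assumes predecessors are listed first
        start = max((earliest_finish[p] for p in preds), default=0)
        earliest_finish[name] = start + dur
        critical_pred[name] = max(preds, key=lambda p: earliest_finish[p], default=None)

    end = max(earliest_finish, key=earliest_finish.get)
    path, node = [], end
    while node is not None:
        path.append(node)
        node = critical_pred[node]
    print("project duration:", earliest_finish[end])          # 13
    print("critical path   :", " -> ".join(reversed(path)))   # A -> B -> D -> E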

[1023] viXra:2003.0014 [pdf] submitted on 2020-03-01 03:04:53

Solving the Exam Scheduling Problems in Central Exams with Genetic Algorithms

Authors: Murat Dener, M. Hanefi CALP
Comments: 14 Pages.

Efficient use of resources is expected from an exam scheduling application. There are various criteria for using resources efficiently and for carrying out all exams at minimum cost in the shortest possible time. Educational institutions with such criteria aim to carry out central examination organizations successfully. In this study, a two-stage genetic algorithm was developed. In the first stage, courses were assigned to sessions. In the second stage, the students participating in a session were assigned to examination rooms. The goals of the study are to increase the number of students sharing sessions, to use the minimum number of buildings in the same session, and to reduce the number of supervisors by using the minimum possible number of classrooms. This study presents a general-purpose exam scheduling solution for educational institutions. The developed system can be applied to different central examinations. Given the results of the sample application, the proposed genetic algorithm gives successful results.
Category: Artificial Intelligence
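
A minimal single-stage Python sketch of the GA ingredients the abstract relies on: a fitness that counts student conflicts within a session, plus standard crossover and mutation. The data are invented, and the paper's second stage (room and supervisor assignment) is omitted:

    # Toy GA: assign courses to exam sessions while minimizing the number of
    # students who have two courses in the same session. Illustrative only.
    import random

    random.seed(0)
    n_courses, n_sessions, pop_size, generations = 12, 4, 40, 200
    # students[c] is the set of students taking course c (hypothetical data).
    students = [set(random.sample(range(60), 15)) for _ in range(n_courses)]

    def conflicts(assignment):
        total = 0
        for s in range(n_sessions):
            in_session = [students[c] for c in range(n_courses) if assignment[c] == s]
            for i in range(len(in_session)):
                for j in range(i + 1, len(in_session)):
                    total += len(in_session[i] & in_session[j])
        return total

    def crossover(a, b):
        cut = random.randrange(1, n_courses)
        return a[:cut] + b[cut:]

    def mutate(a):
        a = a[:]
        a[random.randrange(n_courses)] = random.randrange(n_sessions)
        return a

    population = [[random.randrange(n_sessions) for _ in range(n_courses)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=conflicts)                    # elitist selection
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children

    best = min(population, key=conflicts)
    print("best assignment:", best, "conflicts:", conflicts(best))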

[1022] viXra:2003.0013 [pdf] submitted on 2020-03-01 03:08:39

Medical Diagnosis with a Novel SVM-CoDOA Based Hybrid Approach

Authors: M. Hanefi CALP
Comments: 11 Pages.

Machine Learning is an important sub-field of Artificial Intelligence, and it has become a very critical task to train Machine Learning techniques via effective methods or techniques. Recently, researchers have tried to use alternative techniques to improve the ability of Machine Learning techniques. Building on this, the objective of this study is to introduce a novel SVM-CoDOA (Cognitive Development Optimization Algorithm trained Support Vector Machines) system for general medical diagnosis. In detail, the system consists of an SVM which is trained by CoDOA, a newly developed optimization algorithm. As is known, the use of optimization algorithms is an essential task to train and improve Machine Learning techniques. In this sense, the study provides a medical-diagnosis-oriented problem scope in order to show the effectiveness of the SVM-CoDOA hybrid formation.
Category: Artificial Intelligence
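
CoDOA itself is not reproduced here; as a hedged stand-in for the "optimizer wrapped around an SVM" pattern the abstract describes, the following Python sketch tunes SVM hyperparameters with SciPy's differential evolution on a standard diagnosis-style dataset:

    # Stand-in sketch: differential evolution (not CoDOA) searches SVM
    # hyperparameters to minimize cross-validated error on a diagnosis task.
    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    def objective(params):
        log_C, log_gamma = params
        clf = make_pipeline(StandardScaler(),
                            SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma))
        # Minimize 1 - mean cross-validated accuracy.
        return 1.0 - cross_val_score(clf, X, y, cv=3).mean()

    result = differential_evolution(objective, bounds=[(-2, 3), (-5, 0)],
                                    maxiter=5, seed=0)
    print("best log10(C), log10(gamma):", result.x, "cv error:", result.fun)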

[1021] viXra:2002.0533 [pdf] submitted on 2020-02-25 20:18:28

A PCB Manufacturing Machine with Protection Features for Milling RF/Microwave Printed Circuit Boards

Authors: Quang Pham Minh, Huynh Le Vinh, Phuoc, Minh Hiếu Đào, An Nguyen Truong, Tran Phuc Hai, Nam, Louis WY LIU
Comments: 6 Pages.

A printed circuit board (PCB) with a non-flat surface is very common in radio frequency applications. For this reason, we have designed and produced a PCB milling machine capable of milling a PCB with non-uniform flatness. Method: by embedding the suggested machine with G-code reconstruction software, the milling device is programmed to implement the following tasks: Step 1, the machine executes the probing procedure to generate the PCB's heightmap; Step 2, the machine transforms the input probing signal into a surface map, which is a 2-dimensional grid, before the actual milling operation; Step 3, when the machine is running, the height of the drill tip is adjusted according to the PCB surface flatness condition. As a result, the proposed machine is capable of milling elastic and unlevelled PCBs at high speed, with height differences ranging up to 150 mm, and it halts when the plane angle varies severely. The proposed machine has been used to fabricate microwave printed circuits. A close agreement has been obtained between the measured and simulated S-parameters. Although the machine works properly and is equipped with all basic safety functions, it only costs around US$1500, while the market price of a comparable German product is about US$70000. Conclusion: a machine able to mill microwave PCBs with an uneven surface and equipped with safety functions has been successfully built at the Vietnamese-German University at a cost of US$1500, which is far less than the market price of similar machines.
Category: Artificial Intelligence
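
One common way to realise the height-adjustment step described above is to bilinearly interpolate the probed heightmap at each tool position and offset the Z coordinate accordingly; the abstract does not state the exact interpolation used, so the following Python sketch is illustrative only, with made-up probe values:

    # Hedged sketch of heightmap-based Z correction via bilinear interpolation.
    import numpy as np

    # Probed heightmap: heights[i, j] is the measured board height (mm) at
    # (xs[j], ys[i]). Values are invented for illustration.
    xs = np.array([0.0, 20.0, 40.0, 60.0])
    ys = np.array([0.0, 15.0, 30.0])
    heights = np.array([[0.00, 0.05, 0.12, 0.20],
                        [0.02, 0.08, 0.15, 0.25],
                        [0.05, 0.10, 0.18, 0.30]])

    def board_height(x, y):
        j = np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2)
        i = np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2)
        tx = (x - xs[j]) / (xs[j + 1] - xs[j])
        ty = (y - ys[i]) / (ys[i + 1] - ys[i])
        top = heights[i, j] * (1 - tx) + heights[i, j + 1] * tx
        bottom = heights[i + 1, j] * (1 - tx) + heights[i + 1, j + 1] * tx
        return top * (1 - ty) + bottom * ty

    nominal_z = -0.10                      # desired cut depth below a flat board
    x, y = 33.0, 12.0
    print("corrected Z at ({}, {}):".format(x, y), nominal_z + board_height(x, y))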

[1020] viXra:2002.0496 [pdf] submitted on 2020-02-25 07:00:48

Global Density Clustering

Authors: Scott T Cohen
Comments: 10 Pages.

This paper presents Global Density Clustering (GDC), an algorithm that has several major advantages over the most popular existing clustering algorithms: (1) No parameters are chosen at the outset of the function; rather, the user can control the desired resolution as clustering proceeds. (2) GDC is efficient enough to work on a large dataset even when there are a sizable number of features. It is O(MN log N), where M is the number of features, i.e. the dimension, and N is the number of data points, i.e. the dataset size. It is suitable for big data. (3) GDC has the advantage of a powerful and intuitive definition of clusters: points within a cluster are closer than distance dist to their nearest neighbor in the cluster (dist is not picked at the outset but rather chosen as the algorithm progresses), and all points outside the cluster are further than dist from any point in the cluster. (4) GDC supports variable density without the plethora of special data structures that HDBSCAN needs. (5) Other advantages are described. An essential reason that GDC has these advantages is that it searches for and considers points whose nearest neighbors are furthest apart before searching for those that are closer together. It is a top-down or "global" consideration of distances where other density algorithms take a bottom-up or "local" view. Other novel approaches to the main problems of clustering, such as noisy backgrounds, are described.
Category: Artificial Intelligence
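
The cluster definition quoted in the abstract can be made concrete: for a fixed threshold dist, the clusters are exactly the connected components of the graph linking points closer than dist. The Python sketch below illustrates that definition on toy data; it is not the GDC algorithm itself, which chooses dist adaptively and avoids the quadratic pair scan:

    # Illustration of the cluster definition (single-linkage at threshold dist),
    # not the GDC algorithm.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    cluster_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
    cluster_b = rng.normal(loc=5.0, scale=0.3, size=(50, 2))
    points = np.vstack([cluster_a, cluster_b])

    dist = 1.0
    tree = cKDTree(points)
    pairs = tree.query_pairs(r=dist)          # all pairs closer than dist

    # Union-find over the within-dist pairs gives the connected components.
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)

    labels = np.array([find(i) for i in range(len(points))])
    print("number of clusters:", len(set(labels)))      # expected: 2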

[1019] viXra:2002.0361 [pdf] submitted on 2020-02-19 01:59:24

Artificial Neural Network Model MaxEnt

Authors: George Rajna
Comments: 57 Pages.

As an example, in the most commonly used method for solving such problems, the so-called maximum entropy (MaxEnt) approach, prior knowledge is added by specifying a default distribution that corresponds to expected results in the absence of data. [32] MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain's visual cortex. [31] For people with hearing loss, it can very difficult to understand and separate voices in noisy environments. This problem may soon be history thanks to a new groundbreaking algorithm that is designed to recognise and separate voices efficiently in unknown sound environments. [30] While researchers have taken steps to HYPERLINK "https://doi.org/10.1017/S0140525X00023992" comprehensively catalogue the preferences of men and women, we still don't know which traits are the most important contributors to a person's attractiveness. [29] A group of researchers from MIT have already developed an AI robot that can assist in a labour room. [28] Researchers at Fukuoka University, in Japan, have recently proposed a design methodology for configurable approximate arithmetic circuits. [27] Researchers at Google have recently developed a new technique for synthesizing a motion blurred image, using a pair of un-blurred images captured in succession. [26] Constructing a neural network model for each new dataset is the ultimate nightmare for every data scientist. [25] Algorithmic fairness is increasingly important because as more decisions of greater importance are made by computer programs, the potential for harm grows. [24] Intel's Gadi Singer believes his most important challenge is his latest: using artificial intelligence (AI) to reshape scientific exploration. [23] Artificial intelligence is astonishing in its potential. It will be more transformative than the PC and the Internet. Already it is poised to solve some of our biggest challenges. [22] In the search for extraterrestrial intelligence (SETI), we've often looked for signs of intelligence, technology and communication that are similar to our own. [21]
Category: Artificial Intelligence

[1018] viXra:2002.0357 [pdf] submitted on 2020-02-19 04:16:40

Intelligent Particle Analyzer

Authors: George Rajna
Comments: 83 Pages.

In a new paper published in Light Science & Application, a team of European scientists and engineers from ICFO and IRIS in Spain, Ipsumio B.V. in the Netherlands, the Technical University of Denmark, the Technische Universität Dresden in Germany and the University of Leeds in the UK, has developed a new micro-particle size analyser by combining consumer electronics products and artificial intelligence. [50] Researchers in Australia have found a way to manipulate laser light at a fraction of the cost of current technology. [49] The proposed design breaks the current bandwidth limit in the transmission-type coding metasurfaces, indicating wide application potentials in radar and wireless communication systems. [48] In a similar vein, scientists are working to create twisting helical electromagnetic waves whose curvature allows more accurate imaging of the magnetic properties of different materials at the atomic level and could possibly lead to the development of future devices. [47] In a recent study, materials scientists Guojin Liang and his coworkers at the Department of Materials Science and Engineering, City University of Hong Kong, have developed a self-healing, electroluminescent (EL) device that can repair or heal itself after damage. [46] A team of researchers based at The University of Manchester have found a low cost method for producing graphene printed electronics, which significantly speeds up and reduces the cost of conductive graphene inks. [45] Graphene-based computer components that can deal in terahertz "could be used, not in a normal Macintosh or PC, but perhaps in very advanced computers with high processing rates," Ozaki says. This 2-D material could also be used to make extremely high-speed nanodevices, he adds. [44] Printed electronics use standard printing techniques to manufacture electronic devices on different substrates like glass, plastic films, and paper. [43] A tiny laser comprising an array of nanoscale semiconductor cylinders (see image) has been made by an all-A*STAR team. [42]
Category: Artificial Intelligence

[1017] viXra:2002.0314 [pdf] submitted on 2020-02-16 11:58:01

Arllecta: A Decentralized Sense-To-Sense Network

Authors: Egger Mielberg
Comments: 27 Pages.

A purely decentralized Internet would allow its users to create or get access to a public or private informational worldwide network with a guarantee not to be spammed, interrupted or attacked by a third party at all. Semantic Normalizer (SN) would improve the user's Internet search experience and diminish search time greatly. SN eliminates double-answering and double meaning problems. It is a part of the architectural solution of ArLLecta and requires no additional pre-installations. We propose a solution to the nontransparent and domain-centered Internet problem using a decentralized sense-to-sense network. S2S network allows creating public or private zones for business or personal needs. The data of each user, individual or corporate, is decoded and published only by direct permission. The architecture of S2S network prevents the centralization of its data by a single user. However, each user can create or join or leave any zone. The main task of the S2S network is to give each user a possibility for a quick sense-focused search and save its data from unauthorized third parties.
Category: Artificial Intelligence

[1016] viXra:2002.0305 [pdf] submitted on 2020-02-16 07:19:52

Intelligent Systems Theory - Review

Authors: Aleksey A. Demidov
Comments: 5 Pages.

Here I provide a short review of the paper "Collectives of Automata for Building of Active Systems of Artificial Intelligence" -- what I find has been done and what remains to be done.
Category: Artificial Intelligence

[1015] viXra:2002.0251 [pdf] submitted on 2020-02-13 07:16:54

Machine Learning Quantum Optics

Authors: George Rajna
Comments: 40 Pages.

As machine learning continues to surpass human performance in a growing number of tasks, scientists at Skoltech have applied deep learning to reconstruct quantum properties of optical systems. [27] To overcome these harsh limitations, the researchers exploited an artificial neural network (ANN) to learn the atomic interactions from quantum mechanics. [26] A new tool is drastically changing the face of chemical research-artificial intelligence. In a new paper published in Nature, researchers review the rapid progress in machine learning for the chemical sciences. [25] A new type of artificial-intelligence-driven chemistry could revolutionise the way molecules are discovered, scientists claim. [24] Tired of writing your own boring code for new software? Finally, there's an AI that can do it for you. [23] Welcome to Move Mirror, where you move in front of your webcam. [22] Understanding how a robot will react under different conditions is essential to guaranteeing its safe operation. [21] Marculescu, along with ECE Ph.D. student Chieh Lo, has developed a machine learning algorithm-called MPLasso-that uses data to infer associations and interactions between microbes in the GI microbiome. [20] A team of researchers from the University of Muenster in Germany has now demonstrated that this combination is extremely well suited to planning chemical syntheses-so-called retrosyntheses-with unprecedented efficiency. [19] Two physicists at ETH Zurich and the Hebrew University of Jerusalem have developed a novel machine-learning algorithm that analyses large data sets describing a physical system and extract from them the essential information needed to understand the underlying physics. [18]
Category: Artificial Intelligence

[1014] viXra:2002.0178 [pdf] submitted on 2020-02-09 01:14:30

Optimal Metamodeling to Interpret Activity-Based Health Sensor Data

Authors: Ali Mehmani, Payam Ghassemi, Souma Chowdhury
Comments: 11 Pages.

Wearable sensors are revolutionizing the health monitoring and medical diagnostics arena. Algorithms and software platforms that can convert the sensor data streams into useful/actionable knowledge are central to this emerging domain, with machine learning and signal processing tools dominating this space. While serving important ends, these tools are not designed to provide functional relationships between vital signs and measures of physical activity. This paper investigates the application of the metamodeling paradigm to health data to unearth important relationships between vital signs and physical activity. To this end, we leverage neural networks and a recently developed metamodeling framework that automatically selects and trains the metamodel that best represents the data set. A publicly available data set is used that provides the ECG data and the IMU data from three sensors (ankle/arm/chest) for ten volunteers, each performing various activities over one-minute time periods. We consider three activities, namely running, climbing stairs, and the baseline resting activity. For the following three extracted ECG features – heart rate, QRS time, and QR ratio in each heartbeat period – models with median error of <25% are obtained. Fourier amplitude sensitivity testing, facilitated by the metamodels, provides further important insights into the impact of the different physical activity parameters on the ECG features, and the variation across the ten volunteers.
Category: Artificial Intelligence

[1013] viXra:2002.0127 [pdf] submitted on 2020-02-07 08:53:52

Nanospiral Formation by Machine Learning

Authors: George Rajna
Comments: 71 Pages.

Important insights into the mechanisms underlying spiral nanostructure formation in solidifying metal alloys have been gained by Ashwin Shahani at the University of Michigan and colleagues. [44] A team of researchers from Bilkent University and Sabanci University SUNUM Nanotechnology Research Center has developed a way to control buckling in a nanoscale beam using electrostatic effects. [43] A nanoscale gold butterfly provides a more precise route for growing/synthesizing nanosized semiconductors that can be used in nano-lasers and other applications. [42] Magnetic vortices are nanoscale whirls that gyrate like spinning tops, tracing out paths in a clockwise or counterclockwise manner in nanometer-thick materials. [41] Now a team of Australian scientists has discovered diamond can be bent and deformed, at the nanoscale at least. [40] Researchers at the Okinawa Institute of Science and Technology Graduate University (OIST) have fabricated a novel glass and synthetic diamond foundation that can be used to create miniscule micro-and nanostructures. [39] Osaka University-led researchers demonstrated that the perturbation of laser imprinting on a capsule for nuclear fusion fuel made from stiff and heavy materials was mitigated. [38] Scientists found that relatively slow electrons are produced when intense lasers interact with small clusters of atoms, upturning current theories. [37] Lasers that emit ultrashort pulses of light are critical components of technologies, including communications and industrial processing, and have been central to fundamental Nobel Prize-winning research in physics. [36] A newly developed laser technology has enabled physicists in the Laboratory for Attosecond Physics (jointly run by LMU Munich and the Max Planck Institute of Quantum Optics) to generate attosecond bursts of high-energy photons of unprecedented intensity. [35]
Category: Artificial Intelligence

[1012] viXra:2002.0099 [pdf] submitted on 2020-02-05 00:12:45

Haskell + Deep Learning Informatics in the Context of Designing Image Processing R&D Algorithms– A Simple Suggestion & Short Technical Communication.

Authors: Nirmal Tej Kumar
Comments: 9 Pages. Short Communication

A Deep Learning (DL) framework using a JIT compiler with Haskell & LLVM, in the context of Medical Image Processing / Electron Microscopy (EM) Image Processing / Satellite Imagery software R&D using Mandelbrot algorithms, is suggested. Exploring Functional Programming + Deep Learning for designing advanced image processing algorithms R&D - is it the right choice? We keep going with Haskell + Grenade to refine image processing tasks based on Deep Learning (DL) concepts. Haskell is a good choice for image processing R&D using Deep Learning.
Category: Artificial Intelligence

[1011] viXra:2001.0426 [pdf] submitted on 2020-01-21 06:54:24

Physics Explain Democratic Elections

Authors: George Rajna
Comments: 44 Pages.

It may seem surprising, but theories and formulas derived from physics turn out to be useful tools for understanding the ways democratic elections work, including how these systems break down and how they could be improved. [28] Electrons whizzing around each other and humans crammed together at a political rally don't seem to have much in common, but researchers at Cornell are connecting the dots. [27] Now a group of actual physicists from Australia and Switzerland have proposed a device which uses the quantum tunneling of magnetic flux around a capacitor, breaking time-reversal symmetry. [26] The arrow of time and the accelerated expansion are two fundamental empirical facts of the universe. [25] The intensive, worldwide search for dark matter, the missing mass in the universe, has so far failed to find an abundance of dark, massive stars or scads of strange new weakly interacting particles, but a new candidate is slowly gaining followers and observational support. [24] "We invoke a different theory, the self-interacting dark matter model or SIDM, to show that dark matter self-interactions thermalize the inner halo, which ties ordinary dark matter and dark matter distributions together so that they behave like a collective unit." [23] Technology proposed 30 years ago to search for dark matter is finally seeing the light. [22] They're looking for dark matter-the stuff that theoretically makes up a quarter of our universe. [21] Results from its first run indicate that XENON1T is the most sensitive dark matter detector on Earth. [20]
Category: Artificial Intelligence

[1010] viXra:2001.0366 [pdf] submitted on 2020-01-19 04:46:28

Machine Learning Ancient Past

Authors: George Rajna
Comments: 46 Pages.

A team of researchers affiliated with several institutions in China and two in the U.S. has developed a way to use machine learning to get a better look at the past. In their paper published in the journal Science, the group describes how they used machine learning to analyze records of the past. [28] Bioinformatics researchers at Heinrich Heine University Düsseldorf (HHU) and the University of California at San Diego (UCSD) are using machine learning techniques to better understand enzyme kinetics and thus also complex metabolic processes. [27] DNA regions susceptible to breakage and loss are genetic hot spots for important evolutionary changes, according to a Stanford study. [26] For the English scientists involved, perhaps the most important fact is that their DNA read was about twice as long as the previous record, held by their Australian rivals. [25] Researchers from the University of Chicago have developed a high-throughput RNA sequencing strategy to study the activity of the gut microbiome. [24] Today a large international consortium of researchers published a complex but important study looking at how DNA works in animals. [23] Asymmetry plays a major role in biology at every scale: think of DNA spirals, the fact that the human heart is positioned on the left, our preference to use our left or right hand ... [22] Scientists reveal how a 'molecular machine' in bacterial cells prevents fatal DNA twisting, which could be crucial in the development of new antibiotic treatments. [21] In new research, Hao Yan of Arizona State University and his colleagues describe an innovative DNA HYPERLINK "https://phys.org/tags/walker/" walker, capable of rapidly traversing a prepared track. [20] Just like any long polymer chain, DNA tends to form knots. Using technology that allows them to stretch DNA molecules and image the behavior of these knots, MIT researchers have discovered, for the first time, the factors that determine whether a knot moves along the strand or "jams" in place. [19]
Category: Artificial Intelligence

[1009] viXra:2001.0318 [pdf] submitted on 2020-01-16 10:59:06

Deep Learning Real-Time Imaging

Authors: George Rajna
Comments: 48 Pages.

Researchers have harnessed the power of a type of artificial intelligence known as deep learning to create a new laser-based system that can image around corners in real time. [28] A team of scientists at Freie Universität Berlin has developed an Artificial Intelligence (AI) method that provides a fundamentally new solution of the "sampling problem" in statistical physics. [27] Deep learning, which uses multi-layered artificial neural networks, is a form of machine learning that has demonstrated significant advances in many fields, including natural language processing, image/video labeling and captioning. [26]
Category: Artificial Intelligence

[1008] viXra:2001.0218 [pdf] submitted on 2020-01-12 00:23:43

Understanding & Exploring -> [ Mandelbrot Algorithms+AI+QRNG Concepts+Hard Problem Concepts based on Python & Haskell ] – A Short Communication.

Authors: Nirmal Tej Kumar
Comments: 5 Pages. Short Communication

[ PART A ] - Python Medical Image Processing & Electron Microscopy Image Processing Informatics Using Python/LLVM. [ PART B ] - Haskell Exploring a JIT Compiler with Haskell and LLVM in the Context of Medical Image Processing & Electron Microscopy Image Processing Software R&D Using Mandelbrot Algorithms.
Category: Artificial Intelligence
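
For concreteness, the Mandelbrot workload the note refers to is just the escape-time iteration; a minimal NumPy version (without the Haskell/LLVM JIT machinery discussed in the note) is:

    # Minimal escape-time Mandelbrot render in NumPy -- the baseline workload
    # referred to above, without any JIT or Haskell machinery.
    import numpy as np

    def mandelbrot(width=400, height=300, max_iter=100):
        xs = np.linspace(-2.0, 1.0, width)
        ys = np.linspace(-1.2, 1.2, height)
        c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
        z = np.zeros_like(c)
        counts = np.zeros(c.shape, dtype=np.int32)
        for _ in range(max_iter):
            mask = np.abs(z) <= 2.0          # points that have not escaped yet
            z[mask] = z[mask] ** 2 + c[mask]
            counts[mask] += 1
        return counts                        # iteration counts form the image

    image = mandelbrot()
    print(image.shape, image.min(), image.max())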

[1007] viXra:2001.0196 [pdf] submitted on 2020-01-11 02:16:36

Deep Learning Create Better Drugs

Authors: George Rajna
Comments: 40 Pages.

Now, Purdue University researchers have designed a novel approach to use deep learning to better understand how proteins interact in the body—paving the way to producing accurate structure models of protein interactions involved in various diseases and to design better drugs that specifically target protein interactions. [26] Researchers, from biochemists to material scientists, have long relied on the rich variety of organic molecules to solve pressing challenges. [25] Social, economic, environmental and health inequalities within cities can be detected using street imagery. [24]
Category: Artificial Intelligence

[1006] viXra:2001.0192 [pdf] submitted on 2020-01-11 04:57:31

Wave Physics Neural Network

Authors: George Rajna
Comments: 51 Pages.

Analog machine learning hardware offers a promising alternative to digital counterparts as a more energy efficient and faster platform. Wave physics based on acoustics and optics is a natural candidate to build analog processors for time-varying signals. [29] Recent advances in optical neural networks, however, are closing that gap by simulating the way neurons respond in the human brain. [28] An international team of scientists from Eindhoven University of Technology, University of Texas at Austin, and University of Derby, has developed a revolutionary method that quadratically accelerates artificial intelligence (AI) training algorithms. [27] Predictions for an AI-dominated future are increasingly common, but Antoine Blondeau has experience in reading, and arguably manipulating, the runes-he helped develop technology that evolved into predictive texting and Apple's Siri. [26] Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology. [25] Now, researchers at Google's DeepMind have developed a simple algorithm to handle such reasoning-and it has already beaten humans at a complex image comprehension test. [24] A marimba-playing robot with four arms and eight sticks is writing and playing its own compositions in a lab at the Georgia Institute of Technology. The pieces are generated using artificial intelligence and deep learning. [23] Now, a team of researchers at MIT and elsewhere has developed a new approach to such computations, using light instead of electricity, which they say could vastly improve the speed and efficiency of certain deep learning computations. [22] Physicists have found that the structure of certain types of quantum learning algorithms is very similar to their classical counterparts-a finding that will help scientists further develop the quantum versions. [21]
Category: Artificial Intelligence

[1005] viXra:2001.0119 [pdf] submitted on 2020-01-08 08:36:17

Neural Network as Anchor Point

Authors: George Rajna
Comments: 49 Pages.

The neural network subsequently identified the relevant parameters as the ones required to calculate the position of Mars on the basis of the heliocentric worldview. [29] A team of researchers from the University of Münster, the University of Oxford and the University of Exeter has built an all-optical neural network on a single chip. [28] Physicists from Petrozavodsk State University have proposed a new method for oscillatory neural network to recognize simple images. Such networks with an adjustable synchronous state of individual neurons have, presumably, dynamics similar to neurons in the living brain. [27]
Category: Artificial Intelligence

[1004] viXra:2001.0106 [pdf] submitted on 2020-01-07 02:09:22

Partitioning Nearest Neighbours Algorithm for Regressions

Authors: Abhinav Mathur, Sunny Verma
Comments: 7 Pages.

Good generalized machine learning models should have high variability post learning [1]. Tree-based approaches [2] are very popular due to their inherent ability to be visually represented for decision consumption, as well as their robustness and reduced training times. However, tree-based approaches lack the ability to generate variations in regression problems. The maximum variation generated by any single tree-based model is limited to the maximum number of training observations, considering each observation to be a terminal node itself. Such a condition is an overfit model. This paper discusses a hybrid approach using two intuitive and explainable algorithms, CART [2] and k-NN [3] regression, to improve the generalizations and sometimes the runtime for regression-based problems. The paper first proposes the use of a shallow CART algorithm (tree depth less than the optimal depth post pruning). Following the initial CART, a k-NN regression is performed at the terminal node to which the observation for prediction generation belongs. This leads to better variation as well as more accurate prediction than the use of just a CART or a k-NN regressor, as well as another level of depth over an OLS regression [1].
Category: Artificial Intelligence
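
A hedged Python sketch of the hybrid the abstract proposes: a shallow CART partitions the space, then a k-NN regression is run only among the training points sharing the query's leaf. The data, tree depth and k are illustrative choices, not the paper's settings:

    # Sketch of the shallow-CART + per-leaf k-NN hybrid on synthetic data.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(3)
    X = rng.uniform(-3, 3, size=(500, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

    tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)  # shallow CART
    train_leaves = tree.apply(X)                 # leaf id of each training point

    def hybrid_predict(x_query, k=5):
        leaf = tree.apply(x_query.reshape(1, -1))[0]
        in_leaf = train_leaves == leaf
        k_eff = min(k, in_leaf.sum())            # a leaf may hold fewer than k points
        knn = KNeighborsRegressor(n_neighbors=k_eff).fit(X[in_leaf], y[in_leaf])
        return knn.predict(x_query.reshape(1, -1))[0]

    x_new = np.array([0.5, -1.0])
    print("tree-only prediction  :", tree.predict(x_new.reshape(1, -1))[0])
    print("tree + k-NN prediction:", hybrid_predict(x_new))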

[1003] viXra:2001.0065 [pdf] submitted on 2020-01-05 05:13:08

Magic: The Gathering in Common Lisp

Authors: Jeff Linahan
Comments: 8 Pages. originally written in 2015, code is available at https://github.com/jeffythedragonslayer/maglisp

Magic: The Gathering is the world's most popular trading card game. So far, attempts to program its 210-page rulebook in order to create an AI for the game have resulted in systems that are very complex, and still not able to compete with humans. I believe one of the main causes of this is the choice of programming language. Most implementations are done in languages which emphasize execution speed or portability rather than development speed or flexibility. Common Lisp was classically the lingua franca for AI research, and its following features mesh well with the challenges of programming Magic: a read-eval-print loop, macros, dynamic typing, scripting, multiple inheritance, and symbolic computation. In this project I present a proof of concept implementation consisting of a command line interface for two humans playing Magic with two hardcoded decks. I will discuss what I have learned from tackling the challenges of the project and how I would proceed if I had years to complete it.
Category: Artificial Intelligence

Replacements of recent Submissions

[145] viXra:2312.0114 [pdf] replaced on 2024-03-19 03:02:48

SKYNET 2023 Conception of the Artificial Super Intelligence Project: A System Approach. Second Edition (v2)

Authors: Alexander Novikov
Comments: 261 Pages. Version 2 (14) with some additions

This Book proposes a Project Conception of Artificial Super Intelligence (ASI), based on a (strong) system approach and a wide theoretical-methodological framework — Cybernetics, Synergetics, Semiotics, Mathematics, Cognitology and Artificial Intelligence. Contents: IDEOLOGY & STRATEGY of the ASI Project; THEORY & METHODOLOGY of ASI Development; CONCEPTUAL MODEL of ASI System; PRE-PROJECT R&D Task Setting; CONCLUSION & DISCUSSION, incl. AI Safety; APPENDICES with reviews of relevant scientific and R&D areas, incl. frontier AI Models. The Book may be useful and interesting for the staff of organizations & enterprises concerned with AI R&D and implementations in different areas, firstly — perspective AGI/ASI systems. In addition — for Customers, Investors and Sponsors of such R&D, private, public and state — its owners & officials. And of course, for all intellectual, educated and ethical people with progressive worldviews who are interested in the problematics presented above.
Category: Artificial Intelligence

[144] viXra:2311.0067 [pdf] replaced on 2023-11-18 08:24:54

A Method for Recommending Consumption Bundles

Authors: Adarsh Senthil
Comments: 4 Pages.

To satiate the demand of a consumer, we can either provide the demanded consumption bundle or recommend similar consumption bundles the consumer may prefer. Similar consumption bundles that are under the budget and supply constraints can be recommended using item embeddings, consumer state embeddings and consumer indifference functions.
Category: Artificial Intelligence
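
A minimal Python sketch of the recommendation idea, with cosine similarity standing in for the consumer indifference functions mentioned in the abstract; all items, prices, embeddings and the budget are invented for illustration:

    # Sketch: embed bundles, keep only those within budget, rank by similarity
    # to the requested bundle. Cosine similarity is a stand-in for the paper's
    # indifference-function ranking.
    import numpy as np

    items = {                 # item: (price, embedding)
        "bread":  (2.0, np.array([1.0, 0.0, 0.2])),
        "bagel":  (2.5, np.array([0.9, 0.1, 0.3])),
        "butter": (3.0, np.array([0.1, 1.0, 0.0])),
        "jam":    (2.8, np.array([0.2, 0.9, 0.1])),
    }

    def bundle_vector(bundle):            # sum of item embeddings
        return sum(items[i][1] for i in bundle)

    def bundle_price(bundle):
        return sum(items[i][0] for i in bundle)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    requested = ["bread", "butter"]               # the consumer's demanded bundle
    budget = 6.0
    candidates = [["bagel", "jam"], ["bread", "jam"], ["bagel", "butter"]]

    feasible = [b for b in candidates if bundle_price(b) <= budget]
    ranked = sorted(feasible,
                    key=lambda b: cosine(bundle_vector(b), bundle_vector(requested)),
                    reverse=True)
    print("recommended bundles (best first):", ranked)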

[143] viXra:2311.0021 [pdf] replaced on 2023-11-13 20:04:51

The First AI Created Will Be The Only AI Ever Created

Authors: Dimiter Dobrev
Comments: 6 Pages.

Our generation is the one that will create the first Artificial Intelligence (AI). We are the ones who will set the rules to which this AI will operate. Once these rules are set, they will be there forever, hence our responsibility is huge. There will be no chance of a second AI because the first one will take control and will not allow the creation of another AI. Our first and foremost concern is not to lose control of the first (and only) AI. Hopefully we will be reasonable enough and not let that happen. However, even if people retain control of AI, the question that comes next is who exactly will those people be? Should they enjoy the absolute power to issue whatever commands to AI they wish? Or should certain restrictions be embedded in AI at its very inception?
Category: Artificial Intelligence

[142] viXra:2309.0082 [pdf] replaced on 2023-11-12 12:05:00

Theory of Electrons System

Authors: Sheng-Ping Wu
Comments: 12 Pages.

Self-consistent Lorentz equation is proposed, and is solved to electrons and the structures of particles and atomic nucleus. The static properties and decay are reasoned, all meet experimental data. The equation of general relativity sheerly with electromagnetic field is discussed as the base of this theory.
Category: Artificial Intelligence

[141] viXra:2308.0137 [pdf] replaced on 2023-09-30 22:42:32

Can Artificial Intelligence be Conscious?

Authors: Victor V. Senkevich
Comments: 16 Pages.

All magic and mystery disappear as soon as an obscure mysterious concept gets a rigorous formal definition. In order to provide an opportunity to talk about the applicability of philosophical / cognitive concepts to the subject area of AI, it is necessary to "ground" these concepts by formulating rigorous formal definitions for them. The fundamental importance of such formal definitions is quite obvious, since any concepts applied to the field of Information Technology must be "codable", i.e. potentially implementable in program code. Thus, the "codable" formal definitions of cognitive terms are the necessary basis on which alone it is possible to build the architecture of AI technology that has the ability to embody these concepts in a real software. The question of the adequacy of such definitions of "reality" and their compliance with existing generally accepted philosophical theories is also very important and quite discussable, but this does not affect the priority and fundamental nature of the requirement for the formulation of "codable" formal definitions. The formulation of "codable" definitions for the concept of "consciousness" and related cognitive concepts and, based on them, statements about their applicability to the subject area of AI is the topic of this publication. Covering questions:Can AI have a Personality / Motivations / Free Will?
Category: Artificial Intelligence

[140] viXra:2308.0116 [pdf] replaced on 2023-11-12 21:54:59

An ADMM Algorithm for a Generic L0 Sparse Overlapping Group Lasso Problem

Authors: Youming Zhao
Comments: 10 pages, fixed two mistakes

We present an alternating direction method of multipliers (ADMM) for a generic overlapping group lasso problem, where the groups can overlap in an arbitrary way. Meanwhile, we prove lower and upper bounds for both the $\ell_1$ sparse group lasso problem and the $\ell_0$ sparse group lasso problem. We also propose algorithms for computing these bounds.
Category: Artificial Intelligence
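
For orientation, a textbook variable-duplication ADMM for the convex overlapping group lasso, min_x 0.5*||Ax - b||^2 + lambda * sum_g ||x_g||_2, is sketched below in Python. This is a generic splitting for illustration; it is not the paper's algorithm, and neither the l0 variant nor the proved bounds are covered:

    # Generic ADMM sketch for the convex overlapping group lasso via variable
    # duplication: z_g = x[G_g] for each (possibly overlapping) group G_g.
    import numpy as np

    rng = np.random.default_rng(4)
    m, n = 60, 10
    A = rng.normal(size=(m, n))
    x_true = np.zeros(n)
    x_true[0:3] = [1.0, -2.0, 1.5]
    b = A @ x_true + rng.normal(scale=0.1, size=m)

    groups = [[0, 1, 2], [2, 3, 4], [4, 5, 6, 7], [7, 8, 9]]   # overlapping groups
    lam, rho, iters = 0.5, 1.0, 300

    def block_soft(v, t):
        norm = np.linalg.norm(v)
        return np.zeros_like(v) if norm <= t else (1 - t / norm) * v

    # x-update system matrix: A^T A + rho * diag(group membership counts).
    counts = np.zeros(n)
    for g in groups:
        counts[g] += 1
    M = A.T @ A + rho * np.diag(counts)
    Atb = A.T @ b

    z = [np.zeros(len(g)) for g in groups]
    u = [np.zeros(len(g)) for g in groups]
    x = np.zeros(n)
    for _ in range(iters):
        rhs = Atb.copy()
        for g, zg, ug in zip(groups, z, u):
            rhs[g] += rho * (zg - ug)
        x = np.linalg.solve(M, rhs)                      # x-update (least squares)
        for k, g in enumerate(groups):
            z[k] = block_soft(x[g] + u[k], lam / rho)    # group soft-thresholding
            u[k] += x[g] - z[k]                          # scaled dual update

    print("estimated x:", np.round(x, 2))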

[139] viXra:2307.0134 [pdf] replaced on 2023-08-14 07:32:41

The Human Optimization Method (PhD)

Authors: Satish Gajawada
Comments: 5 Pages.

This paper is dedicated to everyone who is interested in Artificial Intelligence. In the past, researchers have explored the behavior of chromosomes, birds, fishes, ants, bacteria, bees and so on to create excellent optimization methods for solving complex optimization problems. The author proposes the Human Optimization method in this paper. Humans have progressed remarkably. They help each other. There are many plus points in Humans. In fact, all optimization algorithms based on other beings were created by Humans. There is much to explore in the behavior of Humans for creating new optimization algorithms. Artificial fishes, birds, ants, bees etc. have solved optimization problems. Similarly, an optimization method based on Humans is expected to solve complex problems. This paper sets the trend for all future optimization algorithms based on Humans.
Category: Artificial Intelligence

[138] viXra:2307.0121 [pdf] replaced on 2023-10-23 23:26:22

Training Self-supervised Class-conditional GAN with Virtual Labels

Authors: Jeongik Cho
Comments: 14 Pages.

Class-conditional GAN is a conditional GAN that can generate class-conditional distribution. Among class-conditional GANs, class-conditional InfoGAN can generate class-conditional data through a self-supervised (unsupervised) method without a labeled dataset. Instead, class-conditional InfoGAN requires optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing the optimal categorical latent distribution (prior probability). The proposed model consists of a discriminator, a classifier, and a generator, and uses three losses. The first loss is the cross-entropy classification loss to predict the conditional vector of the fake data. The classifier is trained with the classification loss. The second loss is the CAGAN loss for class-conditional data generation. The conditional vector of the real data predicted by the classifier is used for CAGAN loss. The generator and discriminator are trained with CAGAN loss. The third loss is the classifier gradient penalty loss. The classifier gradient penalty loss regularizes the slope of the classifier's decision boundary so that the decision boundary converges to a local optimum over a wide region. Additionally, the proposed method updates the categorical latent distribution with a predicted conditional vector of real data. As training progresses, the entropy of the categorical latent distribution gradually decreases and converges to the appropriate value. The converged categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, optimal categorical latent distribution, and a good metric to measure the distance between data.
Category: Artificial Intelligence
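As an illustration of the categorical latent update described above, the sketch below keeps an exponential moving average of the classifier's predicted conditional vectors on real data and samples condition vectors from it. The momentum value and the EMA form are assumptions; the abstract only states that the distribution is updated with the classifier's predictions.

```python
import torch

def update_categorical_latent(p_cat, classifier_probs_real, momentum=0.999):
    """EMA update of the categorical latent distribution from the classifier's
    predicted conditional vectors on a batch of real data (assumed update rule)."""
    batch_mean = classifier_probs_real.mean(dim=0)
    p_cat = momentum * p_cat + (1.0 - momentum) * batch_mean
    return p_cat / p_cat.sum()          # keep it a valid probability vector

def sample_condition_vectors(p_cat, batch_size):
    """Sample one-hot condition vectors for the generator from the current
    categorical latent distribution."""
    idx = torch.multinomial(p_cat, batch_size, replacement=True)
    return torch.nn.functional.one_hot(idx, num_classes=p_cat.numel()).float()
```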

[137] viXra:2307.0121 [pdf] replaced on 2023-08-21 03:13:35

Training Self-Supervised Class-Conditional GAN with Virtual Labels

Authors: Jeongik Cho
Comments: 11 Pages.

Class-conditional GAN is a conditional GAN that can generate a class-conditional distribution. Among class-conditional GANs, InfoGAN with categorical latent distribution can generate class-conditional data through a self-supervised (unsupervised) method without a labeled dataset. Instead, InfoGAN requires an optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing the optimal categorical latent distribution. The proposed method uses three losses. The first loss is the cross-entropy classification loss to predict the label of the fake data. The classifier is trained with the classification loss. The second loss is the CAGAN loss for class-conditional data generation. The virtual label of the real data predicted by the classifier is used for the CAGAN loss. The generator and discriminator are trained with the CAGAN loss. The third loss is the classifier gradient penalty loss. The classifier gradient penalty loss regularizes the slope of the classifier's decision boundary so that the decision boundary converges to a local optimum over a wide region. Additionally, the proposed method updates the categorical latent distribution with the output distribution of the classifier on the real data. As training progresses, the entropy of the categorical latent distribution gradually decreases due to the classifier gradient penalty loss and converges to the appropriate value. The converged categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, an optimal categorical latent distribution, or a good metric to calculate the distance between data.
Category: Artificial Intelligence

[136] viXra:2307.0121 [pdf] replaced on 2023-08-07 13:45:40

Training Self-Supervised Class-Conditional GAN with Virtual Labels

Authors: Jeongik Cho
Comments: 11 Pages.

Class-conditional GAN is a conditional GAN that can generate class-conditional distribution. Among class-conditional GANs, InfoGAN with categorical latent distribution can generate class-conditional data through a self-supervised (unsupervised) method without a labeled dataset. Instead, InfoGAN requires optimal categorical latent distribution to train the model. In this paper, we propose a novel GAN that allows the model to perform self-supervised class-conditional data generation and clustering without knowing the optimal categorical latent distribution. The proposed method uses three different losses. The first loss is the cross-entropy classification loss to predict the label of the fake data. The classifier is trained with the classification loss. The second loss is the CAGAN loss for class-conditional data generation. The virtual label of the real data predicted by the classifier is used for CAGAN loss. The generator and discriminator are trained with CAGAN loss. The third loss is the classifier gradient penalty loss. The classifier gradient penalty loss regularizes the slope of the classifier's decision boundary so that the decision boundary converges to a better local optimum. Additionally, the proposed method updates the categorical latent distribution with the output distribution of the classifier on the real data. As training progresses, the entropy of the categorical latent distribution gradually decreases by the classifier gradient penalty loss and converges to the appropriate value. The converged categorical latent distribution becomes appropriate to represent the discrete part of the data distribution. The proposed method does not require labeled data, optimal categorical latent distribution, and a good metric to calculate the distance between data.
Category: Artificial Intelligence

[135] viXra:2306.0055 [pdf] replaced on 2023-10-10 01:20:34

Introducing Proteus: a Mega Prompt with Personality, Skills and Dynamic Logic Based Internal Prompt Manipulation

Authors: Shaun Stoltz
Comments: 10 Pages.

There have been significant improvements in directing large language models (LLMs) to answer logic-based questions such as mathematical reasoning tasks. This has resulted in near-perfect performance on these types of problems, with accuracy levels in the mid-nineties percentile using state-of-the-art models (GPT-4). Achieving this level of accuracy has previously required a multi-prompt approach to elicit better performance from LLMs. This paper introduces a new prompt paradigm termed the "mega prompt" and further introduces Proteus, a state-of-the-art mega prompt that has been used to achieve a new level of accuracy of 97% on the GSM8K math data set.
Category: Artificial Intelligence

[134] viXra:2306.0003 [pdf] replaced on 2023-06-05 10:32:44

Deep Learning for Physics Problems: A Case Study in Continuous Gravitational Waves Detection

Authors: Essam El-Tobgi
Comments: 10 Pages.

Deep learning has become a powerful tool for solving a wide variety of problems, including those in physics. In this paper, we explore the use of deep learning for the detection of continuous gravitational waves. We propose two different approaches: one based on time-domain analysis and the other based on frequency-domain analysis. Both approaches achieve nearly the same performance, suggesting that deep learning is a promising technique for this task. The main purpose of this paper is to provide an overview of the potential of deep learning for physics problems. We do not provide a performance-measured solution, as this is beyond the scope of this paper. However, we believe that the results presented here are encouraging and suggest that deep learning is a valuable tool for physicists.
Category: Artificial Intelligence

[133] viXra:2305.0064 [pdf] replaced on 2023-08-10 14:46:30

Causation and Correlation

Authors: Ait-taleb nabil
Comments: 14 Pages.

In this paper, I will introduce the causation's magnitude, which allows the importance of causes in a cause-and-effect relationship to be computed from a correlation matrix.
Category: Artificial Intelligence

[132] viXra:2304.0089 [pdf] replaced on 2023-06-09 00:50:28

Information, Knowledge and Intelligence as a Hierarchy of Relations

Authors: Friedrich Sösemann
Comments: 12 pages english, 12 pages german

Information, knowledge and intelligence are defined as a hierarchy of relations: information as dependent properties, knowledge as dependent information, and intelligence as dependent knowledge. The same dependency measure applies to all three. Syntax, semantics and pragmatics of descriptions embody information, knowledge and intelligence. The precision and measurability of these terms should reduce vagueness and contradictions in their application.
Category: Artificial Intelligence

[131] viXra:2301.0076 [pdf] replaced on 2023-04-18 00:33:37

Quantum X-entropy in Generalized Quantum Evidence Theory

Authors: Fuyuan Xiao
Comments: 2 Pages.

In this paper, a new quantum model of generalized quantum evidence theory is proposed. Besides, a new quantum X-entropy is proposed to measure the uncertainty in generalized quantum evidence theory.
Category: Artificial Intelligence

[130] viXra:2212.0176 [pdf] replaced on 2023-02-14 09:34:24

Efficient Integration of Perceptual VAE into Dynamic Latent Scale GAN

Authors: Jeongik Cho
Comments: 10 Pages.

Dynamic latent scale GAN is a method to train an encoder that inverts the generator of a GAN with maximum likelihood estimation. In this paper, we propose a method to improve the performance of dynamic latent scale GAN by efficiently integrating a perceptual VAE loss into dynamic latent scale GAN. When dynamic latent scale GAN is trained with a normal i.i.d. latent random variable and the latent encoder is integrated into the discriminator, the sum of the predicted latent random variable of real data and a scaled normal noise follows a normal i.i.d. random variable. This random variable can be used for both VAE and GAN training. By treating the intermediate layer output of the discriminator as a feature encoder output, the generator can be trained to minimize the perceptual VAE loss. Inference and backpropagation for the perceptual VAE loss can also be integrated into those for GAN training, so perceptual VAE training requires no additional computation. In addition, the proposed method does not require a prior loss or variance estimation, unlike VAE.
Category: Artificial Intelligence

[129] viXra:2210.0120 [pdf] replaced on 2023-06-13 14:52:20

The AI Definition and a Program Which Satisfies this Definition

Authors: Dimiter Dobrev
Comments: 28 Pages. English and Bulgarian languages

We will consider all policies of the agent and will prove that one of them is the best performing policy. While that policy is not computable, computable policies do exist in its proximity. We will define AI as a computable policy which is sufficiently proximal to the best performing policy. Before we can define the agent’s best performing policy, we need a language for description of the world. We will also use this language to develop a program which satisfies the AI definition. The program will first understand the world by describing it in the selected language. The program will then use the description in order to predict the future and select the best possible move. While this program is extremely inefficient and practically unusable, it can be improved by refining both the language for description of the world and the algorithm used to predict the future. This can yield a program which is both efficient and consistent with the AI definition.
Category: Artificial Intelligence

[128] viXra:2210.0120 [pdf] replaced on 2023-04-18 06:06:19

The AI Definition and a Program Which Satisfies this Definition

Authors: Dimiter Dobrev
Comments: 25 Pages. English and Bulgarian languages

We will consider all policies of the agent and will prove that one of them is the best performing policy. While that policy is not computable, computable policies do exist in its proximity. We will define AI as a computable policy which is sufficiently proximal to the best performing policy. Before we can define the agent’s best performing policy, we need a language for description of the world. We will also use this language to develop a program which satisfies the AI definition. The program will first understand the world by describing it in the selected language. The program will then use the description in order to predict the future and select the best possible move. While this program is extremely inefficient and practically unusable, it can be improved by refining both the language for description of the world and the algorithm used to predict the future. This can yield a program which is both efficient and consistent with the AI definition.
Category: Artificial Intelligence

[127] viXra:2210.0120 [pdf] replaced on 2022-11-28 19:22:21

The AI Definition and a Program Which Satisfies this Definition

Authors: Dimiter Dobrev
Comments: 16 Pages.

We will consider all policies of the agent and will prove that one of them is the best performing policy. While that policy is not computable, computable policies do exist in its proximity. We will define AI as a computable policy which is sufficiently proximal to the best performing policy. Before we can define the agent's best performing policy, we need a language for description of the world. We will also use this language to develop a program which satisfies the AI definition. The program will first understand the world by describing it in the selected language. The program will then use the description in order to predict the future and select the best possible move. While this program is extremely inefficient and practically unusable, it can be improved by refining both the language for description of the world and the algorithm used to predict the future. This can yield a program which is both efficient and consistent with the AI definition.
Category: Artificial Intelligence

[126] viXra:2210.0071 [pdf] replaced on 2022-10-22 02:05:19

Algorithm for Identification and Classification of Datasets Assisted by kNN

Authors: Marius Heinrich
Comments: 3 Pages.

Tinnitus retraining therapy is to be supported with the help of an algorithm in combination with the kNN algorithm. The neurophysiological model is now used in the training of many audiologists and has found wide application in tinnitus therapy. Tinnitus retraining therapy has been heralded as a major advance in alleviating tinnitus perception. The goal of the research was to reduce the loudness of the tinnitus in study participants for a short period of time so that they could learn to cope with their hearing problems more easily. The algorithm I developed supports the patient's decision making, and the kNN algorithm predicts the next frequency in each iteration.
Category: Artificial Intelligence

[125] viXra:2209.0069 [pdf] replaced on 2022-11-17 03:10:13

Predictive Signals Obtained from Bayesian Network and the Prediction Quality

Authors: Ait-Taleb Nabil
Comments: 14 Pages.

In this paper, we will propose a method for learning signals related to a data frame $D_{1}$. The learning algorithm will be based on the biggest entropy variations of a Bayesian network. The method will make it possible to obtain an optimal Bayesian network having a high likelihood with respect to the signals $D_{1}$. From the learned optimal Bayesian network, we will show how to infer new signals $D_{2}$, and we will also introduce the prediction quality $\Delta_{CR}$ allowing the predictive quality of the inferred signals $D_{2}$ to be evaluated. We will then infer a large number (10000) of candidate signals $D_{2}$ and select the predictive signals $D_{2}^{*}$ having the best prediction quality. Once the optimal signals $D_{2}^{*}$ are obtained, we will impose the same order of scatter (computed from the Mahalanobis distance) on the points of the signals $D_{2}^{*}$ as on the signals $D_{1}$.
Category: Artificial Intelligence

[124] viXra:2209.0069 [pdf] replaced on 2022-11-10 18:09:21

Predictive Signals Obtained from Bayesian Network and the Prediction Quality.

Authors: Ait-Taleb Nabil
Comments: 14 Pages.

In this paper, we will propose a method for learning signals related to a data frame $D_{1}$. The learning algorithm will be based on the biggest entropy variations of a Bayesian network. The method will make it possible to obtain an optimal Bayesian network having a high likelihood with respect to the signals $D_{1}$. From the learned optimal Bayesian network, we will show how to infer new signals $D_{2}$, and we will also introduce the prediction quality $\Delta_{CR}$ allowing the predictive quality of the inferred signals $D_{2}$ to be evaluated. We will then infer a large number (10000) of candidate signals $D_{2}$ and select the predictive signals $D_{2}^{*}$ having the best prediction quality. Once the optimal signals $D_{2}^{*}$ are obtained, we will impose the same order of scatter (computed from the Mahalanobis distance) on the points of the signals $D_{2}^{*}$ as on the signals $D_{1}$.
Category: Artificial Intelligence

[123] viXra:2207.0064 [pdf] replaced on 2022-07-22 00:19:25

Lyrics-Based Music Band and Genre Topic Similarity Analysis

Authors: Dimitrios Geromichalos
Comments: 10 Pages. Updated version

Based on hundreds of thousands of song lyrics from thousands of bands, Word2Vec models have been trained to quantitatively identify similarities between band texts and terms. Using prominent examples, it is demonstrated, for the cases studied, that music bands can be assigned to a similarity network solely on the basis of their song lyrics, which also corresponds to their musical style. Furthermore, using exemplary words, it is demonstrated that semantic term networks vary strongly from genre to genre. In addition, the semantic similarity matrices were studied using network analysis methods. As it turned out, term and band text networks differ significantly: while the former resemble random networks, the latter partly exhibit power-law behavior. Both also exhibit threshold-dependent regimes.
Category: Artificial Intelligence
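A minimal sketch of the lyrics-based similarity idea using gensim's Word2Vec. The toy corpus, the band names, and the choice to average word vectors into a band vector are illustrative placeholders and are not taken from the paper.

```python
import numpy as np
from gensim.models import Word2Vec

# toy corpora: each band is a list of tokenized lyric words (placeholders)
band_lyrics = {
    "toy_pop_band":   ["love", "night", "dance", "heart", "baby", "light"],
    "toy_metal_band": ["darkness", "steel", "fire", "battle", "storm", "blood"],
}

model = Word2Vec(sentences=list(band_lyrics.values()), vector_size=50,
                 window=5, min_count=1, epochs=50, workers=2)

# term-level similarity, as used for the genre-specific term networks
print(model.wv.similarity("love", "heart"))

def band_vector(tokens, wv):
    """Represent a band by the average of its word vectors (one simple choice)."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

va = band_vector(band_lyrics["toy_pop_band"], model.wv)
vb = band_vector(band_lyrics["toy_metal_band"], model.wv)
print(cosine(va, vb))    # band-level similarity derived from lyrics alone
```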

[122] viXra:2206.0086 [pdf] replaced on 2023-11-13 14:52:59

Machine Learning Methods in Chemistry

Authors: James Bonnar
Comments: 77 Pages.

In this book it is shown how to compute the structure of molecules capable of correcting errant genes using machine learning and artificial intelligence algorithms. As an example, a molecule capable of correcting the delta F508 gene mutation of cystic fibrosis is found in the book. This book explores the use of machine learning methods in solving complex biochemical and drug design problems - problems such as correcting the errant DNA responsible for heritable diseases - through the use of computer-designed pharmacological agents. Bonnar fully develops a system for solving the gene therapy drug design problem using machine learning methods, an approach involving the use of SMILES strings and a Markovian text generator as well as Mathematica code. Certain software is necessary to perform the computations in the book such as Mathematica notebooks, a python script Markovian text generator, and a chemical reaction database containing millions of SMILES string entries. This work is the result of decades of research and will aid scientists in the area of drug discovery as well as many other areas of chemistry. The SMILES Chemical Reaction Database is available at http://jamesbonnar.org.s3-website.us-east-2.amazonaws.com/
Category: Artificial Intelligence
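As a rough illustration of the "Markovian text generator over SMILES strings" idea mentioned above, the sketch below trains a character-level Markov model on a toy SMILES list and samples new strings. The model order, the toy molecules, and the absence of validity filtering (which in practice would be done with a chemistry toolkit) are assumptions, not the book's actual pipeline.

```python
import random
from collections import defaultdict

def train_markov(smiles_list, order=3):
    """Build a character-level Markov transition table over SMILES strings."""
    table = defaultdict(list)
    for s in smiles_list:
        padded = "^" * order + s + "$"          # start and end markers
        for i in range(len(padded) - order):
            table[padded[i:i + order]].append(padded[i + order])
    return table

def sample_smiles(table, order=3, max_len=120):
    """Sample a new string from the Markov table; the result is not guaranteed
    to be a chemically valid SMILES and would normally be filtered."""
    state, out = "^" * order, []
    while len(out) < max_len:
        choices = table.get(state)
        if not choices:
            break
        ch = random.choice(choices)
        if ch == "$":
            break
        out.append(ch)
        state = state[1:] + ch
    return "".join(out)

toy_db = ["CCO", "CC(=O)O", "c1ccccc1", "CCN(CC)CC"]   # placeholder molecules
table = train_markov(toy_db)
print(sample_smiles(table))
```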

[121] viXra:2202.0116 [pdf] replaced on 2022-04-12 05:22:42

Self-supervised Out-of-distribution Detection with Dynamic Latent Scale GAN

Authors: Jeongik Cho
Comments: 8 Pages.

Dynamic latent scale GAN proposed a learning-based GAN inversion method with maximum likelihood estimation. In this paper, we propose a method for self-supervised out-of-distribution detection using the encoder of dynamic latent scale GAN. When the dynamic latent scale GAN has converged, since the entropy of the scaled latent random variable is optimal for representing in-distribution data, in-distribution data is densely mapped to latent codes with high likelihood. This enables the log-likelihood of the predicted latent code to be used for out-of-distribution detection. The proposed method does not require mutual information of in-distribution data or additional hyperparameters for prediction. The proposed method also showed better out-of-distribution detection performance than the previous state-of-the-art method.
Category: Artificial Intelligence
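A minimal sketch of the decision rule implied by the abstract above: score each sample by the log-likelihood of its predicted latent code under an i.i.d. standard normal latent prior and flag low-likelihood samples. The per-element scaling of dynamic latent scale GAN and the explicit percentile threshold are simplifications for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def latent_log_likelihood(z):
    """Log-likelihood of predicted latent codes z (shape [batch, dim])
    under an i.i.d. standard normal latent prior."""
    return norm.logpdf(z).sum(axis=-1)

# toy usage: threshold chosen from held-out in-distribution codes
z_in = np.random.normal(size=(1000, 8))       # stand-in for predicted latent codes
threshold = np.percentile(latent_log_likelihood(z_in), 5)

z_query = np.vstack([np.random.normal(size=(3, 8)), np.full((1, 8), 4.0)])
print(latent_log_likelihood(z_query) < threshold)   # True marks likely OOD samples
```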

[120] viXra:2202.0116 [pdf] replaced on 2022-02-22 15:03:45

Unsupervised Out-of-distribution Detection with DLSGAN

Authors: Jeongik Cho
Comments: 4 Pages.

DLSGAN proposed a learning-based GAN inversion method with maximum likelihood estimation. In this paper, I propose a method for unsupervised out-of-distribution detection using the encoder of DLSGAN. When the DLSGAN converged, since the entropy of the scaled latent random variable is optimal to express in-distribution data, in-distribution data is densely mapped to latent codes with high likelihood. This enables the log-likelihood of the predicted latent code to be used for out-of-distribution detection.
Category: Artificial Intelligence

[119] viXra:2202.0106 [pdf] replaced on 2022-06-04 10:23:09

Bayesian Network and Information Theory

Authors: Ait-Taleb Nabil
Comments: 26 Pages.

In this paper, we will present the BIC score expressed as a function of the Bayesian network's entropy. We will then use this BIC score to learn a Bayesian network from an example data frame.
Category: Artificial Intelligence

[118] viXra:2202.0106 [pdf] replaced on 2022-05-18 20:56:22

Bayesian Network and Information Theory

Authors: Ait-Taleb Nabil
Comments: 26 Pages.

In this paper, we will present the BIC score expressed as a function of the Bayesian network's entropy. We will then use this BIC score to learn a Bayesian network from an example data frame.
Category: Artificial Intelligence

[117] viXra:2201.0144 [pdf] replaced on 2023-02-09 18:52:37

Artificial Intelligence — Definition, Implementation and Consequences

Authors: Dimiter Dobrev
Comments: 92 Pages.

Artificial Intelligence — What is it, how can we do it and what shall we do once we do it? This is a PhD thesis.
Category: Artificial Intelligence

[116] viXra:2201.0144 [pdf] replaced on 2022-11-05 01:55:34

Artificial Intelligence Definition, Realization and Consequences

Authors: Dimiter Dobrev
Comments: 109 Pages. In Bulgarian

Artificial Intelligence - What is it, how to do it and what will we do after we do it? This is a PhD thesis.
Category: Artificial Intelligence

[115] viXra:2112.0097 [pdf] replaced on 2022-01-18 17:08:15

Phish: A Novel Hyper-Optimizable Activation Function

Authors: Philip Naveen
Comments: 8 Pages. Critical errors fixed, and additional experiments performed

Deep-learning models estimate values using backpropagation. The activation function within hidden layers is a critical component for minimizing loss in deep neural networks. Rectified Linear (ReLU) has been the dominant activation function for the past decade. Swish and Mish are newer activation functions that have been shown to yield better results than ReLU in specific circumstances. Phish is a novel activation function proposed here. It is a composite function defined as f(x) = x·TanH(GELU(x)), where no discontinuities are apparent in the differentiated graph on the domain observed. Generalized networks were constructed using different activation functions, with SoftMax as the output function. Using images from the MNIST and CIFAR-10 databanks, these networks were trained to minimize sparse categorical crossentropy. A large-scale cross-validation was simulated using stochastic Markov chains to account for the law of large numbers for the probability values. Statistical tests support the research hypothesis that Phish could outperform other activation functions in classification. Future experiments would involve testing Phish in unsupervised learning algorithms and comparing it to more activation functions.
Category: Artificial Intelligence
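The composite definition f(x) = x·TanH(GELU(x)) translates directly into a drop-in activation module. A minimal PyTorch sketch follows; the small surrounding network is illustrative and is not the architecture used in the paper.

```python
import torch
import torch.nn.functional as F

class Phish(torch.nn.Module):
    """Phish activation as defined in the abstract: f(x) = x * tanh(GELU(x))."""
    def forward(self, x):
        return x * torch.tanh(F.gelu(x))

# illustrative usage in a tiny classifier head (SoftMax is applied inside the loss)
model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(28 * 28, 128),
    Phish(),
    torch.nn.Linear(128, 10),
)
```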

[114] viXra:2112.0095 [pdf] replaced on 2022-02-24 21:03:49

TripleRE: Knowledge Graph Embeddings via Triple Relation Vectors

Authors: Long Yu, ZhiCong Luo, Deng Lin, HongZhu Li, HuanYong Liu, YaFeng Deng
Comments: 6 Pages.

Knowledge representation is a classic problem in knowledge graphs. Distance-based models have made great progress. The most significant recent developments in this direction have been RotatE[1] and PairRE[2], which focus on expressing relationships as projections of nodes. The TransX series of models (TransE[3], TransH[4], TransR[5]), however, expresses relationships as translations of nodes. To date, the problem of combining projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method that models relationships by projections and translations. Compared with other knowledge representation models, we achieve the best results on the ogbl-wikikg2 dataset.
Category: Artificial Intelligence
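A hedged sketch of a TripleRE-style scoring function that combines PairRE-style projections with a TransE-style translation, which is the combination the abstract describes. The exact parameterization, norm, and any normalization used in the paper may differ.

```python
import torch

def triplere_score(h, t, r_head, r_mid, r_tail):
    """Score a triple (h, r, t): project head and tail with element-wise relation
    vectors (as in PairRE) and add a translation vector (as in TransE).
    Higher score = more plausible triple. The exact form is an assumption."""
    return -torch.norm(h * r_head + r_mid - t * r_tail, p=1, dim=-1)

# toy usage with random embeddings
dim = 8
h, t = torch.randn(dim), torch.randn(dim)
r_head, r_mid, r_tail = torch.randn(dim), torch.randn(dim), torch.randn(dim)
print(triplere_score(h, t, r_head, r_mid, r_tail))
```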

[113] viXra:2112.0095 [pdf] replaced on 2021-12-25 21:44:48

TripleRE: Knowledge Graph Embeddings via Triple Relation Vectors

Authors: Long Yu, ZhiCong Luo, Deng Lin, HuanYong Liu, YaFeng Deng
Comments: 6 Pages.

Knowledge representation is a classic problem in knowledge graphs. Distance-based models have made great progress. The most significant recent developments in this direction have been RotatE and PairRE, which focus on expressing relationships as projections of nodes. The TransX series of models (TransE, TransH, TransR), however, expresses relationships as translations of nodes. To date, the problem of combining projection and translation has received scant attention in the research literature. Hence, we propose TripleRE, a method that models relationships by projections and translations. Compared with other knowledge representation models, we achieve the best results on the ogbl-wikikg2 dataset.
Category: Artificial Intelligence

[112] viXra:2111.0080 [pdf] replaced on 2021-11-24 17:45:45

Discriminator Variance Regularization for Wasserstein GAN

Authors: Jeongik Cho
Comments: 5 Pages.

In Wasserstein GAN, it is important to regularize the discriminator so that its Lipschitz constant is not too large. In this paper, I introduce discriminator variance regularization to regularize the discriminator of Wasserstein GAN to have a small Lipschitz constant. Discriminator variance regularization simply regularizes the variance of the discriminator's output to be small when the input follows the real data distribution or the generated data distribution. Intuitively, a low variance of the discriminator output implies that the discriminator is more likely to have a low Lipschitz constant. Discriminator variance regularization does not explicitly regularize the Lipschitz constant of the discriminator through differentiation of the discriminator, but it lowers the probability that the Lipschitz constant of the discriminator is high. Discriminator variance regularization is used in Wasserstein GAN with R1 regularization, which reduces the oscillation of GAN training. Discriminator variance regularization requires very little additional computation.
Category: Artificial Intelligence
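The regularizer described above reduces to penalizing the batch variance of the discriminator's outputs on real and generated data. A minimal sketch follows; the weighting and the way it is added to the WGAN + R1 objective are assumptions.

```python
import torch

def discriminator_variance_regularization(d_real_out, d_fake_out, weight=1.0):
    """Penalize the variance of discriminator outputs on a real batch and a
    generated batch, encouraging a small effective Lipschitz constant."""
    return weight * (d_real_out.var() + d_fake_out.var())

# illustrative composition inside a WGAN discriminator step:
# d_loss = d_fake_out.mean() - d_real_out.mean() \
#          + r1_penalty \
#          + discriminator_variance_regularization(d_real_out, d_fake_out)
```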

[111] viXra:2111.0014 [pdf] replaced on 2022-01-11 21:52:43

Granule Description based on Compound Concepts

Authors: Jianqin Zhou, Sichun Yang, Xifeng Wang, Wanquan Liu
Comments: 16 Pages.

Concise granule descriptions for definable granules and approaching descriptions for indefinable granules are challenging and important issues in granular computing. The concept with only common attributes has been intensively studied. To investigate the granules with some special needs, we propose a novel type of compound concepts in this paper, i.e., common-and-necessary concept. Based on the definitions of concept-forming operations, the logical formulas are derived for each of the following types of concepts: formal concept, object-induced three-way concept, object oriented concept and common-and-necessary concept. Furthermore, by utilizing the logical relationship among various concepts, we have derived concise and unified equivalent conditions for definable granules and approaching descriptions for indefinable granules for all four kinds of concepts.
Category: Artificial Intelligence

[110] viXra:2110.0036 [pdf] replaced on 2021-12-30 11:44:46

Directed Dependency Graph Obtained from a Continuous Data Matrix by the Highest Successive Conditionings Method.

Authors: Ait-Taleb Nabil
Comments: 29 Pages.

In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each of the dependency graph's nodes, we will assign a random variable as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we will choose one using the highest successive conditionings method.
Category: Artificial Intelligence

[109] viXra:2110.0036 [pdf] replaced on 2021-12-23 09:35:56

Directed Dependency Graph Obtained from a Continuous Data Matrix by the Highest Successive Conditionings Method.

Authors: Ait-Taleb Nabil
Comments: 29 Pages.

In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each of the dependency graph's nodes, we will assign a random variable as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we will choose one using the highest successive conditionings method.
Category: Artificial Intelligence

[108] viXra:2110.0036 [pdf] replaced on 2021-10-20 13:40:03

Directed Dependency Graph Obtained from a Continuous Data Matrix by the Highest Successive Conditionings Method.

Authors: Ait-Taleb Nabil
Comments: 29 Pages.

In this paper, we propose a directed dependency graph learned from a continuous data matrix in order to extract the hidden oriented dependencies from this matrix. To each of the dependency graph's nodes, we will assign a random variable as well as a conditioning percentage linking parent and child nodes of the graph. Among all the dependency graphs learned from the continuous data matrix, we will choose one using the highest successive conditionings method.
Category: Artificial Intelligence

[107] viXra:2109.0028 [pdf] replaced on 2022-03-30 15:11:58

Dynamic Latent Scale GAN for GAN Inversion

Authors: Jeongik Cho
Comments: 22 Pages.

The generator of a generative adversarial network (GAN) maps a latent random variable into a data random variable. GAN inversion maps the data random variable back to the latent random variable by inverting the generator of the GAN. When training the encoder for generator inversion, using the mean squared error causes the encoder not to converge because there is information loss on the latent random variable in the generator. In other words, it is impossible to train an encoder that inverts the generator as it is, because the generator may ignore some information of the latent random variable. This paper introduces dynamic latent scale GAN, a method for training a generator that does not lose information from the latent random variable, and an encoder that inverts the generator. When the latent random variable is an i.i.d. (independent and identically distributed) random variable, dynamic latent scale GAN dynamically scales each element of the latent random variable during GAN training to adjust the entropy of the latent random variable. As training progresses, the entropy of the latent random variable decreases until there is no information loss on the latent random variable in the generator. If there is no information loss on the latent random variable in the generator, the encoder can converge to invert the generator. The scale of the latent random variable depends on the amount of information that the encoder can recover. It can be calculated from the element-wise variance of the predicted latent random variable from the encoder. Since the scale of the latent random variable changes dynamically in dynamic latent scale GAN, the encoder should be trained with the generator during GAN training. The encoder can be integrated with the discriminator, and the loss for the encoder is added to the generator loss for fast training. Also, dynamic latent scale GAN can be used for continuous attribute editing with InterFaceGAN.
Category: Artificial Intelligence
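A rough sketch of the dynamic scaling step described above: the per-element latent scale is derived from the element-wise variance of the encoder's predicted latent codes, and freshly sampled i.i.d. latent codes are multiplied by it. The normalization used here is one plausible choice; the paper's exact update is not reproduced.

```python
import torch

def update_latent_scale(pred_latent_batch, eps=1e-6):
    """Derive a per-element latent scale from the element-wise variance of the
    encoder's predicted latent codes (assumed normalization)."""
    var = pred_latent_batch.var(dim=0)                    # element-wise variance
    scale = torch.sqrt(var + eps)
    # renormalize so the sum of squared scales equals the latent dimension
    return scale * torch.sqrt(scale.numel() / (scale ** 2).sum())

def sample_scaled_latent(batch_size, scale):
    """Sample i.i.d. standard normal latent codes and apply the dynamic scale."""
    return torch.randn(batch_size, scale.numel()) * scale
```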

[106] viXra:2109.0028 [pdf] replaced on 2021-09-16 12:02:21

Dynamic Latent Scale GAN

Authors: Jeongik Cho
Comments: 20 Pages.

The generator of a generative adversarial network (GAN) maps a latent random variable into a data random variable. GAN inversion maps the data random variable back to the latent random variable by inverting the generator of the GAN. When training the encoder for generator inversion, using the mean squared error causes the encoder not to converge because there is information loss on the latent random variable in the generator. In other words, it is impossible to train an encoder that inverts the generator as it is, because the generator may ignore some information of the latent random variable. This paper introduces dynamic latent scale GAN, a method for training a generator that does not lose information from the latent random variable, and an encoder that inverts the generator. When the latent random variable is a normal i.i.d. (independent and identically distributed) random variable, dynamic latent scale GAN dynamically scales each element of the latent random variable during GAN training to adjust the entropy of the latent random variable. As training progresses, the entropy of the latent random variable decreases until there is no information loss on the latent random variable in the generator. If there is no information loss on the latent random variable in the generator, the encoder can converge to invert the generator. The scale of the latent random variable depends on the amount of information that the encoder can recover. It can be calculated from the element-wise variance of the predicted latent random variable from the encoder. Since the scale of the latent random variable changes dynamically in dynamic latent scale GAN, the encoder should be trained with the generator during GAN training. The encoder can be integrated with the discriminator, and the loss for the encoder is added to the generator loss for fast training.
Category: Artificial Intelligence

[105] viXra:2108.0029 [pdf] replaced on 2021-12-28 16:57:48

Information Theory Applied to Bayesian Network for Learning Continuous Data Matrix

Authors: Ait-Taleb Nabil
Comments: 34 Pages.

In this paper, we are proposing a learning algorithm for a continuous data matrix based on entropy absorption of a Bayesian network. This method consists in losing a little bit of likelihood compared to the chain rule's best likelihood, in order to get a good idea of the higher conditionings that are taking place between the Bayesian network's nodes. We present the known results related to information theory, the multidimensional Gaussian probability, and the AIC and BIC scores for continuous data matrix learning from a Bayesian network, and we show the entropy absorption algorithm using the Kullback-Leibler divergence with an example of a continuous data matrix.
Category: Artificial Intelligence

[104] viXra:2108.0029 [pdf] replaced on 2021-09-16 10:28:57

Information Theory Applied to Bayesian Network for Learning Continuous Data Matrix

Authors: Ait-Taleb Nabil
Comments: 34 Pages.

In this paper, we are proposing a learning algorithm for a continuous data matrix based on entropy absorption of a Bayesian network. This method consists in losing a little bit of likelihood compared to the chain rule's best likelihood, in order to get a good idea of the higher conditionings that are taking place between the Bayesian network's nodes. We present the known results related to information theory, the multidimensional Gaussian probability, and the AIC and BIC scores for continuous data matrix learning from a Bayesian network, and we show the entropy absorption algorithm using the Kullback-Leibler divergence with an example of a continuous data matrix.
Category: Artificial Intelligence

[103] viXra:2108.0029 [pdf] replaced on 2021-08-22 09:19:01

Information Theory Applied to Bayesian Network for Learning Continuous Data Matrix

Authors: Ait-Taleb Nabil
Comments: 33 Pages.

In this article, we are proposing a learning algorithm for a continuous data matrix based on entropy absorption of a Bayesian network. This method consists in losing a little bit of likelihood compared to the chain rule's best likelihood, in order to get a good idea of the higher conditionings that are taking place between the Bayesian network's nodes. We present the known results related to information theory, the multidimensional Gaussian probability, and the AIC and BIC scores for continuous data matrix learning from a Bayesian network, and we show the entropy absorption algorithm using the Kullback-Leibler divergence with an example of a continuous data matrix.
Category: Artificial Intelligence

[102] viXra:2106.0084 [pdf] replaced on 2021-06-17 18:25:12

Analysis of Covid-19 Cases in India Using Seir, Arima and LSTM Models

Authors: Souvik Sengupta
Comments: 6 Pages.

After one year from the start of the COVID-19 pandemic in India, the country is now seeing a steady decay in the number of daily new cases and active cases. Although the vaccination process is about to start from mid-January 2021, it would not affect the number of daily cases for at least the next three to four months, for obvious reasons such as phase-wise implementation and the six to eight weeks required from the first dose to develop immunity. Therefore, the prime question is now: where will we be at the end of the first quarter of 2021, and what could the number of new cases and active cases be before the vaccination immunity starts working? This paper analyzes the growth and decay pattern of Indian COVID-19 cases with the help of SEIR epidemic modeling, ARIMA statistical modeling, and time series analysis by LSTM. The models learn the parameter and hyper-parameter values that are best suited to describing the pattern of the COVID-19 pandemic in India, and then try to predict the numbers for India by the end of March 2021. It is forecast that the number of new cases would come down to near 5000 per day, active cases to near 40,000, and the total number of infected may reach 11.1 million if the current pattern is followed.
Category: Artificial Intelligence
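For the SEIR part of the analysis, a minimal sketch of the standard SEIR ODE system is shown below. The initial conditions and rate parameters are illustrative placeholders, not the values fitted to the Indian data in the paper.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, N):
    """Standard SEIR compartmental model: susceptible, exposed, infectious, recovered."""
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return dS, dE, dI, dR

N = 1.38e9                        # approximate population of India
y0 = (N - 1e5, 5e4, 5e4, 0.0)     # illustrative initial S, E, I, R
t = np.linspace(0, 90, 91)        # 90-day horizon
beta, sigma, gamma = 0.25, 1 / 5.2, 1 / 10   # placeholder rates, not fitted values
S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma, N)).T
print(f"active infectious cases after 90 days (toy parameters): {I[-1]:.0f}")
```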

[101] viXra:2103.0194 [pdf] replaced on 2021-04-01 01:50:20

Uwb-GCN: Accelerating Graph Convolutional Networks Through Runtime Workload Rebalancing

Authors: Tong Geng, Ang Li, Tianqi Wang, Chunshu Wu, Yanfei Li, Antonino Tumeo, Shuai Che, Steve Reinhardt, Martin Herbordt
Comments: 13 Pages.

In this paper, we propose an architecture design called Ultra-Workload-Balanced-GCN (UWB-GCN) to accelerate graph convolutional network inference. To tackle the major performance bottleneck of workload imbalance, we propose two techniques: dynamic local sharing and dynamic remote switching, both of which rely on hardware flexibility to achieve performance auto-tuning with negligible area or delay overhead. Specifically, UWB-GCN is able to effectively profile the sparse graph pattern while continuously adjusting the workload distribution among parallel processing elements (PEs). After converging, the ideal configuration is reused for the remaining iterations. To the best of our knowledge, this is the first accelerator design targeted at GCNs and the first work that auto-tunes workload balance in an accelerator at runtime through hardware, rather than software, approaches. Our methods can achieve near-ideal workload balance in processing sparse matrices. Experimental results show that UWB-GCN can finish the inference of the Nell graph (66K vertices, 266K edges) in 8.1ms, corresponding to speedups of 199x, 16x, and 7.5x, respectively, compared to the CPU, GPU, and the baseline GCN design without workload rebalancing.
Category: Artificial Intelligence

[100] viXra:2101.0122 [pdf] replaced on 2021-07-14 12:55:15

Simplifying Object Segmentation with PixelLib Library

Authors: Ayoola Olafenwa
Comments: 8 Pages.

PixelLib is a library created to allow easy implementation of object segmentation in real-life applications. In this paper we discuss in detail how PixelLib makes it possible for developers to implement semantic segmentation, instance segmentation, extraction of objects, and background editing in images and videos with great simplicity.
Category: Artificial Intelligence

[99] viXra:2012.0023 [pdf] replaced on 2020-12-16 03:11:21

A VR-Based System and Architecture for Computational Modeling of Minds

Authors: Saty Raghavachary, Lurong Lei
Comments: 9 Pages.

Computational modeling of natural cognition is a crucial step towards achieving the grand goal of human-level computational intelligence. Successful ideas from existing models, and possibly newer ones, could be assembled to create a unified computational framework (eg. the Standard Model of the Mind, which attempts to unify three leading cognitive architectures) - this would be of great use in AI, robotics, neuroscience and cognitive science. This short position paper proposes the following: a VR-based system provides the most expedient, scalable and visually verifiable way to implement, test and refine a cognitive mind model (which would always be embodied in a character in a virtual world). Such a setup is discussed in the paper, including advantages and drawbacks over alternative implementations.
Category: Artificial Intelligence

[98] viXra:2010.0220 [pdf] replaced on 2020-11-01 02:24:46

An Empirical Study of Deep Web based on Graph Analysis

Authors: Md Monzur Morshed
Comments: 11 Pages. This is a research proposal.

The internet can broadly be divided into three parts: surface, deep and dark, among which the latter offers anonymity to its users and hosts [1]. The Deep Web refers to an encrypted network that is not detected by search engines like Google. Users must use Tor to visit sites on the dark web [2]. Ninety-six percent of the web is considered deep web because it is hidden. It is like an iceberg: people can see only a small portion above the surface, while the largest part is hidden under the sea [3, 4, 5]. Basic methods of graph theory and data mining that deal with social network analysis can be comprehensively used to understand and learn about the Deep Web and to detect cyber threats [6]. Since the internet is rapidly evolving and it is nearly impossible to censor the deep web, there is a need to develop standard mechanisms and tools to monitor it. In this proposed study, our focus will be to develop a standard research mechanism for understanding the Deep Web, which will support researchers, academicians and law enforcement agencies in strengthening social stability and ensuring peace locally and globally.
Category: Artificial Intelligence

[97] viXra:2007.0085 [pdf] replaced on 2020-12-29 20:10:57

Microscopy Image Processing for the Human Eye

Authors: Zeyue Xia, Mohamad Nadim Barakat, Serri Matula, Zijun Hui, John Stavrakakis
Comments: 7 Pages. Computer Vision

In vivo confocal microscopy allows scientists to better understand eye health and systemic diseases. Microneuromas could play a role; however, monitoring their growth from a mosaic of images is error-prone and time-consuming. We used automated image stitching as a solution, focusing on the accuracy and computational speed of three different feature detection algorithms: SIFT, SURF, and ORB. The results illustrated that SURF was computationally efficient with our data. Future work is to create a global solution that can replace the need for manual image stitching in this application.
Category: Artificial Intelligence
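A minimal OpenCV sketch of feature-based stitching for a pair of overlapping tiles. ORB is shown here (SIFT or SURF can be substituted where available); the matcher settings, RANSAC threshold, and naive overwrite instead of blending are simplifying assumptions rather than the authors' pipeline.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Stitch two overlapping image tiles with ORB features and a homography."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # brute-force Hamming matching, keeping the best correspondences
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # warp the first tile into the second tile's frame and paste the second on top
    h, w = img2.shape[:2]
    canvas = cv2.warpPerspective(img1, H, (w * 2, h))
    canvas[0:h, 0:w] = img2          # naive overwrite instead of blending
    return canvas
```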

[96] viXra:2007.0085 [pdf] replaced on 2020-07-14 11:02:40

Microscopy Image Processing for the Human Eye

Authors: Zeyue Xia, Mohamad Nadim Barakat, Serri Matula, Zijun Hui, John Stravakrakis
Comments: 7 Pages. Computer Vision

In vivo confocal microscopy allows scientists to better understand eye health and systemic diseases. Microneuromas could play a role; however, monitoring their growth from a mosaic of images is error-prone and time-consuming. We used automated image stitching as a solution, focusing on the accuracy and computational speed of three different feature detection algorithms: SIFT, SURF and ORB. The results illustrated that SURF was computationally efficient with our data. Future work is to create a global solution that can replace the need for manual image stitching in this application.
Category: Artificial Intelligence

[95] viXra:2006.0208 [pdf] replaced on 2020-12-10 03:50:56

Statistical Distance Latent Regulation Loss for Latent Code Recovery

Authors: Jeongik Cho
Comments: 17 Pages.

Finding a latent code that can generate specific data by inverting a generative model is called latent code recovery (or latent vector recovery). When performing gradient descent based latent recovery, the probability that the recovered latent code was sampled from the latent random variable can be very low. To prevent this, latent regulation losses or element resampling methods have been used in some papers. In this paper, when the latent random variable is an IID (Independent and Identically Distributed) random variable and gradient descent-based latent code recovery is performed, we propose the statistical distance latent regulation loss to maximize the probability that the latent code was sampled from the latent random variable. The statistical distance latent regulation loss is the distance between the discrete uniform distribution over the elements of the latent code (assuming each element has the same probability) and the one-dimensional distribution that each element of the latent random variable follows in common. Since the statistical distance latent regulation loss considers all elements simultaneously, it maximizes the probability that the latent code was sampled from the latent random variable. We also propose the latent distribution goodness of fit test, an additional test that verifies whether the latent code was sampled from the latent random variable. When the latent random variable is an IID random variable, this additional test verifies whether the distribution of the elements of all recovered latent codes follows the one-dimensional distribution that each element of the latent random variable follows in common. Passing the latent distribution goodness of fit test does not mean that the latent codes were recovered correctly, but when the latent codes are recovered correctly, the latent distribution goodness of fit test should be passed. Compared with other latent regulation losses and element resampling methods, only latent code recovery using the statistical distance latent regulation loss could recover the correct latent code with high performance in gradient descent-based latent code recovery.
Category: Artificial Intelligence
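One concrete realization of the loss described above, assuming a standard normal per-element latent distribution: treat the elements of the latent code as an empirical sample and measure a one-dimensional Wasserstein-style distance to the normal by quantile matching. The choice of distance and of the quantile grid are assumptions for illustration; the paper evaluates several statistical distances.

```python
import torch

def wasserstein_latent_regulation_loss(z):
    """1-D Wasserstein-style distance between the empirical distribution of the
    latent code's elements and a standard normal (the assumed common per-element
    latent distribution), computed by matching sorted elements to normal quantiles."""
    n = z.numel()
    sorted_z, _ = torch.sort(z.flatten())
    probs = (torch.arange(n, dtype=z.dtype, device=z.device) + 0.5) / n
    target_quantiles = torch.distributions.Normal(0.0, 1.0).icdf(probs)
    return (sorted_z - target_quantiles).abs().mean()

# illustrative use during gradient-descent latent recovery (weight is an assumption):
# total_loss = reconstruction_loss + 0.1 * wasserstein_latent_regulation_loss(z)
```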

[94] viXra:2006.0208 [pdf] replaced on 2020-08-12 22:33:53

Statistical Distance Latent Regulation Loss for Latent Vector Recovery

Authors: Jeongik Cho
Comments: 18 Pages.

Finding a latent vector that can generate specific data by inverting a generative model is called latent vector recovery (or latent vector projection). When performing gradient descent based latent recovery, the latent vector being recovered may deviate from the train latent distribution. To prevent this, latent regulation loss or element resampling has been used in some papers. In this paper, we propose a statistical distance latent regulation loss, which is a latent regulation loss that can be used when the generative model is trained with IID (Independent and Identically Distributed) random variables. The statistical distance latent regulation loss is the distance between the distribution followed by train latent random variables and the discrete uniform distribution, assuming that each element of the latent vector has the same probability. Since the statistical distance latent regulation loss considers the correlation between each element of the latent vector, better latent vector recovery is possible. In addition, in this paper, when evaluating the performance of latent vector recovery, we propose latent distribution goodness of fit test, an additional test that checks whether the distribution of all elements of all recovered latent vectors follows the distribution of the train latent random variable. Passing the latent distribution goodness of fit test does not mean that the latent vector recovery is properly performed, but when the latent recovery is properly performed, the latent distribution goodness of fit test must be passed. In this paper, the performance of the statistical distance latent regulation loss was compared with other latent regulation losses and element resampling methods. In conclusion, the performance of the statistical distance latent regulation loss using Wasserstein distance or Energy distance was the best.
Category: Artificial Intelligence

[93] viXra:2006.0208 [pdf] replaced on 2020-07-28 13:22:36

Statistical Distance Latent Regulation Loss for Latent Vector Recovery

Authors: Jeongik Cho
Comments: 9 Pages.

Finding a latent vector that can generate specific data by inverting the generative model is called latent vector recovery(or latent vector projection). When performing gradient descent based latent recovery, the latent vector being recovered may escape the train latent distribution. To prevent this, some papers used latent regulation loss or resampling. In this paper, assuming that the generative model is trained with IID (Independent and Identically Distributed) random variables, I propose statistical distance latent regulation loss, which uses the distance between distribution followed by train latent random variables, and discrete uniform distribution, which assumes that each element of the latent vector has the same probability, as a latent regulation loss. The statistical distance latent regulation loss considers the correlation between each element of the latent vector, so better latent vector recovery is possible. In this paper, I compared the performances of latent regulation losses and resampling methods of other papers as well as statistical distance latent regulation losses using several statistical distances. In conclusion, the performances of Wasserstein distance latent regulation loss and Energy distance latent regulation loss were the best. Also, in this paper, when performing latent vector recovery with a generator trained with an IID random variable, I propose the latent distribution goodness of fit test, an additional test to check whether all elements of all recovered latent vectors follow the distribution of the train latent random variable.
Category: Artificial Intelligence

[92] viXra:2006.0208 [pdf] replaced on 2020-07-22 10:52:23

Statistical Distance Latent Regulation Loss for Latent Vector Recovery

Authors: Jeongik Cho
Comments: 8 Pages.

Finding a latent vector that can generate specific data using a generative model is called latent vector recovery. When performing gradient descent based latent recovery, the latent vector being recovered may escape the train latent distribution. To prevent this, latent regulation loss or resampling was used in some papers. In this paper, assuming that the generative model is trained with IID(Independent and Identically Distributed) random variables, I propose a statistical distance latent regulation loss that considers the train latent distribution as a one-dimensional distribution, the latent vector as a sample distribution, and the distance between the two distributions as a latent regulation loss. The statistical distance latent regulation loss considers the correlation between each element of the latent vector, so better latent vector recovery is possible. In addition, I compared the performance of latent regulation losses and resampling methods of other papers as well as statistical distance latent regulation losses using several statistical distances. In conclusion, the performance of Bhattacharyya latent regulation loss was the best when the train latent vector followed the normal distribution, and the Lukaszyk Karmowski regulation loss showed the best performance otherwise.
Category: Artificial Intelligence

[91] viXra:2006.0208 [pdf] replaced on 2020-07-15 07:54:53

Wasserstein Latent Regulation Loss for Latent Vector Recovery

Authors: Jeongik Cho
Comments: 5 Pages.

Finding a latent vector that can generate specific data using a generative model is called latent vector recovery. When performing gradient descent based latent recovery, the latent vector being recovered may escape the train latent vector distribution. To prevent this, latent regulation loss has been used in many papers. In this paper, I propose a Wasserstein latent regulation loss to improve the performance of latent recovery, assuming that the generative model is trained with IID (Independent and identically distributed) random variables. The proposed Wasserstein latent regulation loss is the Wasserstein distance between the sample distribution of the train probability distribution and the latent vector being recovered. This paper compares the latent regulation loss of several papers, including the proposed Wasserstein latent regulation loss. In conclusion, the Wasserstein regulation loss and the log normal density function proposed in [1] showed the best performance.
Category: Artificial Intelligence

[90] viXra:2006.0208 [pdf] replaced on 2020-06-24 23:04:36

Add Latent Restriction Loss When Recovering Latent Vector

Authors: Jeongik Cho
Comments: 3 Pages.

When a pre-trained generative model is given, the process of finding the latent vector that produces the data closest to the input data is called latent vector recovery. Latent vector recovery takes the difference between the input data and the data generated from the latent vector as a reconstruction loss and repeatedly performs gradient descent on the latent vector to find the optimal latent vector. In this paper, I propose a method to find a better latent vector by adding a latent restriction loss in addition to the reconstruction loss during latent vector recovery. The latent restriction loss is a loss that makes the latent vector follow the distribution of the latent vector used when training the generative model. The distance between the distribution of the latent vector used in training the generative model and the latent vector during latent vector recovery becomes the latent restriction loss.
Category: Artificial Intelligence

[89] viXra:2006.0079 [pdf] replaced on 2021-02-18 16:44:18

Fully Automated Robotic Vehicle with Real Time Image Detection and Collision Avoiding Features

Authors: Al-Akhir Nayan, Md. Obaidur Rahman, Ahamad Nokib Mozumder, Mohammod Abul Kashem
Comments: 13 Pages. Published in Multidisciplinary Journal of European University of Bangladesh, 5(1), 2020 [Corrections made by viXra Admin to conform with the guidelines of viXra.org]

Due to their simplicity and the capability to be altered according to our requirements, robotics and automation are being used widely in industries. The scheme aims to assemble an automatic vehicle using GPS, which depends on a computer to generate its path coordinates. A GPS module is utilized to collect GPS data. When the mobile camera encounters obstacles, a machine learning algorithm assists in avoiding them and performs real-time object detection. The automobile uses electric motors to spin the wheels and has full control of the throttle, steering and braking. An Arduino device pilots the vehicle following the instructions generated by the computer. Traffic has increased enormously, and the excessive number of vehicles leads to a large number of vehicle accidents every day. Driver error is also a great difficulty. The ultimate goal of this work is to minimize the possibility of accidents and to ensure the safety of the passengers. Thus, the vehicle will be useful for blind and handicapped people. The main target, however, is to provide this device to the military so that they can benefit from it in times of danger. The motorized vehicle includes sensors to observe its surroundings. Besides, it can be managed manually by human beings.
Category: Artificial Intelligence

[88] viXra:2004.0611 [pdf] replaced on 2021-11-24 05:55:15

Language for Description of Worlds

Authors: Dimiter Dobrev
Comments: 62 Pages. Bulgarian language

We will reduce the task of creating AI to the task of finding an appropriate language for description of the world. This will not be a programming language because programming languages describe only computable functions, while our language will describe a somewhat broader class of functions. Another specificity of this language will be that the description will consist of separate modules. This will enable us to look for the description of the world automatically such that we discover it module after module. Our approach to the creation of this new language will be to start with a particular world and write the description of that particular world. The point is that the language which can describe this particular world will be appropriate for describing any world.
Category: Artificial Intelligence

[87] viXra:2004.0611 [pdf] replaced on 2020-10-12 12:19:27

Language for Description of Worlds

Authors: Dimiter Dobrev
Comments: 38 Pages.

We will reduce the task of creating AI to the task of finding an appropriate language for description of the world. This will not be a programming language because programming languages describe only computable functions, while our language will describe a somewhat broader class of functions. Another specificity of this language will be that the description will consist of separate modules. This will enable us to look for the description of the world automatically such that we discover it module after module. Our approach to the creation of this new language will be to start with a particular world and write the description of that particular world. The point is that the language which can describe this particular world will be appropriate for the description of any world.
Category: Artificial Intelligence

[86] viXra:2004.0611 [pdf] replaced on 2020-06-14 07:46:50

Language for Description of Worlds

Authors: Dimiter Dobrev
Comments: 38 Pages. Bulgarian language

We will reduce the task of creating AI to the task of finding the right language for describing the world. This language will not be a programming language, because programming languages describe only computable functions, while this language will describe a slightly wider class of functions. Another feature of this language is that the description can be divided into separate modules. This will allow us to search for the world description automatically by discovering it module by module. Our approach to creating this new language is to start from one particular world and write a description of that particular world. Our idea is that a language that can describe this particular world will be appropriate for describing an arbitrary world.
Category: Artificial Intelligence

[85] viXra:2004.0371 [pdf] replaced on 2020-05-07 20:44:02

Inverted Conditional Generator Classifier

Authors: Jeongik Cho
Comments: 12 Pages.

A traditional deep neural network classifier receives input data and passes it through hidden layers to output predicted labels. In this paper, I propose an Inverted Conditional Generator Classifier that uses a conditional generator to find the pair of condition vector and latent vector that generates the data closest to the input data, and thereby predicts the label of the input data. A conditional generator is a generative model that receives a latent vector and a condition vector and generates data with the desired conditions. The decoder of a conditional VAE [1] or the generator of a conditional GAN [2] can serve as the conditional generator. The Inverted Conditional Generator Classifier uses a trained conditional generator as-is. It repeatedly performs gradient descent, treating the latent vector for each condition as a variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label. The Inverted Conditional Generator Classifier is slow at prediction time because prediction relies on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise. In addition, it can measure the degree to which an input is out-of-class through the difference between the nearest generated data and the input data. A high out-of-class score means that the input data lies apart from the cluster of every class, or that the classifier has little confidence in its prediction. Through this, the Inverted Conditional Generator Classifier can classify the input data as out-of-class, or defer classification due to a lack of confidence in the prediction.
Category: Artificial Intelligence
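
A minimal sketch of the inversion procedure described above, assuming a trained conditional generator with the signature generator(latent, condition); the step count, learning rate, and use of a mean-squared reconstruction error are illustrative assumptions rather than the author's settings.

```python
import torch

# Sketch of classification by inverting a trained conditional generator:
# for each candidate class, optimize a latent vector so the generated sample
# matches the input, then pick the class whose best reconstruction is closest.

def invert_classify(generator, x, num_classes, latent_dim, steps=200, lr=0.05):
    best_label, best_err = None, float("inf")
    for c in range(num_classes):
        cond = torch.zeros(1, num_classes)
        cond[0, c] = 1.0                                    # one-hot condition vector
        z = torch.randn(1, latent_dim, requires_grad=True)  # latent is the only variable
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):                              # gradient descent on z only;
            opt.zero_grad()                                 # generator parameters stay fixed
            err = ((generator(z, cond) - x) ** 2).mean()
            err.backward()
            opt.step()
        with torch.no_grad():
            final_err = ((generator(z, cond) - x) ** 2).mean().item()
        if final_err < best_err:
            best_label, best_err = c, final_err
    # best_err can also serve as an out-of-class score: a large value suggests
    # the input lies far from every class, so classification can be deferred.
    return best_label, best_err
```

For example, with a conditional generator G trained on a ten-class image dataset, invert_classify(G, x, num_classes=10, latent_dim=128) would return both the predicted label and a reconstruction error usable as the out-of-class measure.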

[84] viXra:2004.0371 [pdf] replaced on 2020-04-25 10:46:47

Inverted Conditional Generator Classifier

Authors: Jeongik Cho
Comments: 12 Pages.

A traditional deep neural network classifier receives input data and passes it through hidden layers to output predicted labels. A conditional generator such as a Conditional VAE [1] or Conditional GAN [2] receives a latent vector and a condition vector and generates data with the desired conditions. In this paper, I propose an Inverted Conditional Generator Classifier that uses a conditional generator to find the pair of condition vector and latent vector that generates the data closest to the input data, and thereby predicts the label of the input data. The Inverted Conditional Generator Classifier uses a trained conditional generator as-is. It repeatedly performs gradient descent, treating the latent vector for each condition as a variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label. The Inverted Conditional Generator Classifier is slow at prediction time because prediction relies on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise. In addition, it can measure the degree to which an input is out-of-class through the difference between the nearest generated data and the input data.
Category: Artificial Intelligence

[83] viXra:2004.0371 [pdf] replaced on 2020-04-22 23:49:22

Inverted Conditional Generator Classifier

Authors: Jeongik Cho
Comments: 10 Pages.

A traditional deep neural network classifier receives input data and passes it through hidden layers to output predicted labels. A conditional generator such as a Conditional VAE [1] or Conditional GAN [2] receives a latent vector and a condition vector and generates data with the desired conditions. In this paper, I propose an Inverted Conditional Generator Classifier that uses a conditional generator to find the pair of condition vector and latent vector that generates the data closest to the input data, and thereby predicts the label of the input data. The Inverted Conditional Generator Classifier uses a trained conditional generator as-is. It repeatedly performs gradient descent, treating the latent vector for each condition as a variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label. The Inverted Conditional Generator Classifier is slow at prediction time because prediction relies on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise. In particular, it is not vulnerable to gradient-descent-based white-box attacks that assume a traditional deep neural network classifier, and it is also expected to defend well against black-box attacks that make the same assumption. In addition, it can measure the degree to which an input is out-of-class through the difference between the nearest generated data and the input data.
Category: Artificial Intelligence

[82] viXra:2004.0371 [pdf] replaced on 2020-04-17 10:51:58

Inverted Generator Classifier

Authors: Jeongik Cho
Comments: 9 Pages.

In the field of deep learning, a traditional classifier receives input data and passes it through hidden layers to output predicted labels. Conditional generators such as the Conditional VAE [1] and Conditional GAN [2] receive a latent vector and a condition vector and generate data with the desired conditions. In this paper, I propose an Inverted Generator Classifier that uses a conditional generator to find the pair of condition vector and latent vector that generates the data closest to the input data, and thereby predicts the label of the input data. The Inverted Generator Classifier uses a trained conditional generator as-is. It repeatedly performs gradient descent, treating the latent vector for each condition as a variable and the model parameters as constants, to find the data closest to the input data. Then, among the data generated for each condition, the condition vector of the data closest to the input data becomes the predicted label. The Inverted Generator Classifier is slow at prediction time because prediction relies on gradient descent, but it has high accuracy and is very robust against adversarial attacks [3] such as noise. It is also not subject to gradient-descent-based white-box attacks such as FGSM [4].
Category: Artificial Intelligence

[81] viXra:2004.0222 [pdf] replaced on 2021-03-15 16:08:56

Decoupling Global and Local Representations via Invertible Generative Flows

Authors: Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
Comments: 23 Pages. Published in ICLR 2021

In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting, by embedding a generative flow in the VAE framework to model the decoder. Specifically, the proposed model utilizes the variational auto-encoding framework to learn a (low-dimensional) vector of latent variables to capture the global information of an image, which is fed as a conditional input to a flow-based invertible decoder with architecture borrowed from style transfer literature. Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning. Importantly, this work demonstrates that with only architectural inductive biases, a generative model with a likelihood-based objective is capable of learning decoupled representations, requiring no explicit supervision. The code for our model is available at https://github.com/XuezheMax/wolf.
Category: Artificial Intelligence
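
The architecture described above couples a VAE-style global latent with an invertible, flow-based decoder. The following schematic sketch (not the released code at the URL above) shows one conditional affine-coupling layer whose transformation depends on the global latent z, so that z carries global content while the flow's own latent carries local detail; layer sizes and the coupling design are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One invertible coupling layer whose scale and shift depend on the global latent z."""
    def __init__(self, dim, z_dim):
        super().__init__()
        # Network that maps (first half of x, global z) to scale and shift for the second half
        self.net = nn.Sequential(nn.Linear(dim // 2 + z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x, z):
        xa, xb = x.chunk(2, dim=1)
        s, t = self.net(torch.cat([xa, z], dim=1)).chunk(2, dim=1)
        return torch.cat([xa, xb * torch.exp(s) + t], dim=1)   # invertible given xa and z

    def inverse(self, y, z):
        ya, yb = y.chunk(2, dim=1)
        s, t = self.net(torch.cat([ya, z], dim=1)).chunk(2, dim=1)
        return torch.cat([ya, (yb - t) * torch.exp(-s)], dim=1)

# Generation sketch: sample the global latent z from the VAE prior and the local
# latent u from the flow prior, then push u through the conditional flow.
dim, z_dim = 8, 4
flow = ConditionalAffineCoupling(dim, z_dim)
z = torch.randn(1, z_dim)   # global content
u = torch.randn(1, dim)     # local detail
x = flow.forward(u, z)
print(x.shape)
```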

[80] viXra:2004.0222 [pdf] replaced on 2020-04-11 22:06:49

Decoupling Global and Local Representations From/for Image Generation

Authors: Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy
Comments: 22 Pages.

In this work, we propose a new generative model that is capable of automatically decoupling global and local representations of images in an entirely unsupervised setting. The proposed model utilizes the variational auto-encoding framework to learn a (low-dimensional) vector of latent variables to capture the global information of an image, which is fed as a conditional input to a flow-based invertible decoder with architecture borrowed from style transfer literature. Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning. Importantly, this work demonstrates that with only architectural inductive biases, a generative model with a plain log-likelihood objective is capable of learning decoupled representations, requiring no explicit supervision. The code for our model is available at https://github.com/XuezheMax/wolf.
Category: Artificial Intelligence

[79] viXra:2003.0484 [pdf] replaced on 2023-03-17 02:54:10

Random Thoughts on Neural Networks from the Views of Data Space Transformation and Ensemble Classification

Authors: Qing Tian, Guangjun Tian
Comments: 4 Pages. In Chinese

This manuscript sketch first describes the effect of neural networks from the perspective of data space transformation: transforming data from a complicated raw space into an easily (e.g., linearly) separable space. We use a simple paper-wrapping example to illustrate this point. In addition, the sketch also discusses some similarities between neural networks and ensemble classification.
Category: Artificial Intelligence
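
A small numerical illustration of the data-space-transformation view (an assumed example, not taken from the manuscript): a hand-picked ReLU hidden layer maps the XOR points, which no single line separates in the raw 2-D space, into a feature space where one linear threshold separates the two classes.

```python
import numpy as np

# Illustration: a hand-picked hidden layer makes XOR linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

W = np.array([[1.0, 1.0], [1.0, 1.0]])   # hidden weights
b = np.array([0.0, -1.0])                # hidden biases
H = np.maximum(0.0, X @ W + b)           # ReLU features: (x1 + x2, x1 + x2 - 1)

# In the transformed space, the single threshold h1 - 2*h2 > 0.5 separates XOR.
pred = (H @ np.array([1.0, -2.0]) > 0.5).astype(int)
print(H)
print(pred, y)  # the predictions match the XOR labels
```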

[78] viXra:2003.0484 [pdf] replaced on 2021-11-19 17:17:28

Random Thoughts on Neural Networks from the Views of Data Space Transformation and Ensemble Classification

Authors: Qing Tian, Guangjun Tian
Comments: 4 Pages. in Chinese

This manuscript sketch first describes the effect of neural networks from the perspective of data space transformation: transforming data from a complicated raw space into an easily (e.g., linearly) separable space. We use a simple paper-wrapping example to illustrate this point. In addition, the sketch also discusses some similarities between neural networks and ensemble classification.
Category: Artificial Intelligence

[77] viXra:2001.0218 [pdf] replaced on 2020-01-17 01:21:27

Understanding & Exploring -> [ Mandelbrot Algorithms+AI+QRNG Concepts+Hard Problem Concepts based on Python & Haskell ] – A Short Communication.

Authors: Nirmal Tej Kumar
Comments: 5 Pages. Short Communication - Revised

[PART A] Python: medical image processing and electron microscopy image processing informatics using Python/LLVM. [PART B] Haskell: exploring a JIT compiler with Haskell and LLVM in the context of medical image processing and electron microscopy image processing software R&D using Mandelbrot algorithms.
Category: Artificial Intelligence
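
Since the communication centres on Mandelbrot algorithms in Python, the following minimal escape-time sketch shows the core iteration in plain Python; the grid size and iteration cap are arbitrary choices, and the note's Python/LLVM and Haskell/LLVM JIT pipelines are not reproduced here.

```python
# Minimal escape-time Mandelbrot sketch in plain Python (illustrative only).

def mandelbrot(width=60, height=24, max_iter=50):
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            c = complex(-2.0 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            z, n = 0j, 0
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c   # escape-time iteration z <- z^2 + c
                n += 1
            row += "#" if n == max_iter else " "
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())
```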