Latest YouTube Video
Saturday, November 26, 2016
Researchers Show How to Steal Tesla Car by Hacking into Owner's Smartphone
from The Hacker News http://ift.tt/2grcduh
via IFTTT
Beware! Malicious JPG Images on Facebook Messenger Spreading Locky Ransomware
from The Hacker News http://ift.tt/2fOK4xj
via IFTTT
Apollo 17 VIP Site Anaglyph
Friday, November 25, 2016
I have a new follower on Twitter
UserIQ
UserIQ is the first and only Customer Growth Platform™ that empowers SaaS companies to foster growth beyond the funnel.
Atlanta, GA
http://t.co/NsNSrVrIdo
Following: 854 - Followers: 1505
November 25, 2016 at 04:07PM via Twitter http://twitter.com/UserIQ
Ravens: LB Elvis Dumervil (foot), who has not played since Oct. 9, questionable for Sunday's game against the Bengals (ESPN)
via IFTTT
Ravens: CB Jimmy Smith (back) doubtful for Sunday's game against the Bengals (ESPN)
via IFTTT
pendo.io (@pendoio) liked one of your Tweets!
Source: Gmail -> IFTTT-> Blogger
I have a new follower on Twitter
Billy Soden
Always a teacher, and more so a learner. Dreamer, musician lifelong learner, and hope dealer. The most amazing people in the world call me Dad.
USA
https://t.co/rRQ63DbMFt
Following: 13196 - Followers: 12077
November 25, 2016 at 10:12AM via Twitter http://twitter.com/sodenbsoden
Live chat text strikethrough, no notification (anonymous user)
from Google Alert - anonymous http://ift.tt/2gnAvVW
via IFTTT
I have a new follower on Twitter
TTI Wireless
THE Leader for wired and wireless networks, data security and wireless communications.
Sayreville, NJ
http://t.co/wWHOdLnN0o
Following: 3832 - Followers: 4381
November 25, 2016 at 07:57AM via Twitter http://twitter.com/ttiwireless
[FD] NEW VMSA-2016-0021 VMware product updates address partial information disclosure vulnerability
Source: Gmail -> IFTTT-> Blogger
[FD] NEW VMSA-2016-0022 VMware product updates address information disclosure vulnerabilities
Source: Gmail -> IFTTT-> Blogger
[FD] [SYSS-2016-107] EASY HOME Alarmanlagen-Set - Cryptographic Issues (CWE-310)
Source: Gmail -> IFTTT-> Blogger
[FD] [SYSS-2016-106] EASY HOME Alarmanlagen-Set - Missing Protection against Replay Attacks
Source: Gmail -> IFTTT-> Blogger
[FD] [SYSS-2016-072] Olympia Protect 9061 - Missing Protection against Replay Attacks
Source: Gmail -> IFTTT-> Blogger
[FD] [SYSS-2016-071] Blaupunkt Smart GSM Alarm SA 2500 Kit - Missing Protection against Replay Attacks
Source: Gmail -> IFTTT-> Blogger
[FD] [SYSS-2016-066] Multi Kon Trade M2B GSM Wireless Alarm System - Missing Protection against Replay Attacks
Source: Gmail -> IFTTT-> Blogger
[FD] [SYSS-2016-064] Multi Kon Trade M2B GSM Wireless Alarm System - Improper Restriction of Excessive Authentication Attempts (CWE-307)
Source: Gmail -> IFTTT-> Blogger
[FD] Faraday v2.2: Collaborative Penetration Test and Vulnerability Management Platform
Source: Gmail -> IFTTT-> Blogger
[FD] [CVE-2016-7098] GNU Wget < 1.18 Access List Bypass / Race Condition
Source: Gmail -> IFTTT-> Blogger
[FD] The HS-110 Smart Plug aka Projekt Kasa
Source: Gmail -> IFTTT-> Blogger
[FD] CVE-2013-3120 MSIE 10 MSHTML CEditAdorner::Detach use-after-free details
Source: Gmail -> IFTTT-> Blogger
[FD] Microsoft Internet Explorer 11 MSHTML CGeneratedContent::HasGeneratedSVGMarker type confusion
Source: Gmail -> IFTTT-> Blogger
[FD] CVE-2015-1251: Chrome blink SpeechRecognitionController use-after-free details
Source: Gmail -> IFTTT-> Blogger
[FD] CVE-2015-0050: Microsoft Internet Explorer 8 MSHTML SRunPointer::SpanQualifier/RunType OOB read details
Source: Gmail -> IFTTT-> Blogger
[FD] MobSF v0.9.3 is Released: Now supports Windows APPX Static Analysis
Source: Gmail -> IFTTT-> Blogger
Thursday, November 24, 2016
I have a new follower on Twitter
hulu
Bond is now on Hulu. Oh, fine. Bond... James Bond is now on Hulu. Need help? Tweet @hulu_support
Los Angeles, CA, USA
https://t.co/vTv6CeaZbx
Following: 31498 - Followers: 262432
November 24, 2016 at 09:42PM via Twitter http://twitter.com/hulu
I have a new follower on Twitter
MBOSSTRADE
MARTINGALE BINARY OPTIONS SIGNALS SERVICE 📱 Real-time Signals via Whatsapp 💹 Register with our broker and enjoy 🆓 membership
England, United Kingdom
https://t.co/TTx1SQg4si
Following: 1252 - Followers: 615
November 24, 2016 at 05:30PM via Twitter http://twitter.com/mbosstrade1
Microsoft Shares Telemetry Data Collected from Windows 10 Users with 3rd-Party
from The Hacker News http://ift.tt/2g8JhYN
via IFTTT
I have a new follower on Twitter
pendo.io
Pendo delivers product success. Understand and guide the complete user journey through your application with analytics, user feedback, & contextual help.
United States
http://t.co/fZUIdIoxzd
Following: 16400 - Followers: 17466
November 24, 2016 at 12:09PM via Twitter http://twitter.com/pendoio
[FD] [RT-SA-2016-003] Less.js: Compilation of Untrusted LESS Files May Lead to Code Execution through the JavaScript Less Compiler
Source: Gmail -> IFTTT-> Blogger
THN Deal — Learn Wi-Fi Hacking & Penetration Testing [Online Course: 83% OFF]
from The Hacker News http://ift.tt/2gk0CwO
via IFTTT
Antivirus Firm Kaspersky launches Its Own Secure Operating System
from The Hacker News http://ift.tt/2gjxs14
via IFTTT
Binary options anonymous
from Google Alert - anonymous http://ift.tt/2fVcXLj
via IFTTT
FBI Hacked into 8,000 Computers in 120 Countries Using A Single Warrant
from The Hacker News http://ift.tt/2fUJpgH
via IFTTT
NGC 7635: Bubble in a Cosmic Sea
Wednesday, November 23, 2016
Feature Importance Measure for Non-linear Learning Algorithms. (arXiv:1611.07567v1 [cs.AI])
Complex problems may require sophisticated, non-linear learning methods such as kernel machines or deep neural networks to achieve state-of-the-art prediction accuracies. However, high prediction accuracy is not the only objective to consider when solving problems using machine learning. Instead, particular scientific applications require some explanation of the learned prediction function. Unfortunately, most methods do not come with an out-of-the-box, straightforward interpretation. Even linear prediction functions are not straightforward to explain if features exhibit a complex correlation structure.
In this paper, we propose the Measure of Feature Importance (MFI). MFI is general and can be applied to any arbitrary learning machine (including kernel machines and deep learning). MFI is intrinsically non-linear and can detect features that are inconspicuous by themselves and impact the prediction function only through their interactions with other features. Lastly, MFI can be used for both model-based and instance-based feature importance (i.e., measuring the importance of a feature for a particular data point).
from cs.AI updates on arXiv.org http://ift.tt/2g5J9cw
via IFTTT
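The paper's MFI measure itself is not reproduced in the abstract; as a hedged illustration of the general perturbation-based importance idea it belongs to, here is a minimal permutation-importance sketch (the model, data, and function names below are invented for illustration, not taken from the paper):

```python
import random

def permutation_importance(model, X, n_repeats=10, seed=0):
    """Score each feature by shuffling its column and measuring the mean
    absolute change in the model's predictions. Interaction effects show up
    because shuffling one partner of a product term disrupts the output."""
    rng = random.Random(seed)
    baseline = [model(x) for x in X]
    scores = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            Xp = [x[:j] + [c] + x[j + 1:] for x, c in zip(X, col)]
            total += sum(abs(b - model(x)) for b, x in zip(baseline, Xp)) / len(X)
        scores.append(total / n_repeats)
    return scores

# Toy model with an interaction: x0 matters only through its product with x1;
# x3 is irrelevant and should receive zero importance.
model = lambda x: x[0] * x[1] + x[2]
data_rng = random.Random(1)
X = [[data_rng.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
scores = permutation_importance(model, X)
```

Note that the interacting features x0 and x1 still receive non-zero scores here, which is the kind of interaction sensitivity the abstract highlights for MFI.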
Programs as Black-Box Explanations. (arXiv:1611.07579v1 [stat.ML])
Recent work in model-agnostic explanations of black-box machine learning has demonstrated that interpretability of complex models does not have to come at the cost of accuracy or model flexibility. However, it is not clear which kind of explanation, such as linear models, decision trees, or rule lists, is the appropriate family to consider, and different tasks and models may benefit from different kinds of explanations. Instead of picking a single family of representations, in this work we propose to use "programs" as model-agnostic explanations. We show that small programs can be expressive yet intuitive as explanations, and generalize over a number of existing interpretable families. We propose a prototype program induction method based on simulated annealing that approximates the local behavior of black-box classifiers around a specific prediction using random perturbations. Finally, we present preliminary applications on small datasets and show that the generated explanations are intuitive and accurate for a number of classifiers.
from cs.AI updates on arXiv.org http://ift.tt/2g5HnYF
via IFTTT
Efficient Delivery Policy to Minimize User Traffic Consumption in Guaranteed Advertising. (arXiv:1611.07599v1 [cs.DS])
In this work, we study the guaranteed delivery model which is widely used in online display advertising. In the guaranteed delivery scenario, ad exposures (also called impressions in some works) to users are guaranteed by contracts signed in advance between advertisers and publishers. A crucial problem for the advertising platform is how to fully utilize the valuable user traffic to generate as much revenue as possible.
Different from previous works, which usually minimize the penalty of unsatisfied contracts and some other cost (e.g. representativeness), we propose a novel consumption minimization model, in which the primary objective is to minimize the user traffic consumed to satisfy all contracts. Under this model, we develop a near-optimal method to deliver ads to users. The main advantage of our method lies in that it consumes nearly the least possible user traffic to satisfy all contracts, so more contracts can be accepted to produce more revenue. It also enables publishers to estimate how much user traffic is redundant or short so that they can sell or buy this part of traffic in bulk on the exchange market. Furthermore, it is robust with regard to prior knowledge of the user type distribution. Finally, simulations show that our method outperforms the traditional state-of-the-art methods.
from cs.AI updates on arXiv.org http://ift.tt/2gCwkK7
via IFTTT
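The paper's near-optimal delivery policy is not spelled out in the abstract; as a hedged sketch of the setting only, here is a naive greedy baseline that serves each arriving user to its most under-delivered eligible contract and stops consuming traffic once every contract is satisfied (all names and structures are illustrative, not from the paper):

```python
def greedy_deliver(contracts, arrivals):
    """contracts: dict mapping contract name -> guaranteed impressions.
    arrivals: list of lists, each holding the contracts a user is eligible for.
    Returns (impressions delivered per contract, number of users consumed)."""
    remaining = dict(contracts)
    delivered = {c: 0 for c in contracts}
    consumed = 0
    for eligible in arrivals:
        if all(v == 0 for v in remaining.values()):
            break  # all contracts satisfied: leave the remaining traffic unconsumed
        consumed += 1
        open_contracts = [c for c in eligible if remaining.get(c, 0) > 0]
        if not open_contracts:
            continue  # user consumed, but no open contract matches
        # serve the eligible contract with the largest unmet demand
        c = max(open_contracts, key=lambda name: remaining[name])
        delivered[c] += 1
        remaining[c] -= 1
    return delivered, consumed

delivered, consumed = greedy_deliver(
    {"A": 2, "B": 1},
    [["A"], ["A", "B"], ["B"], ["A"]],
)
```

In this toy run the allocator meets both contracts using only three of the four users, leaving the last one unconsumed, which is the "minimize consumed traffic" objective the abstract describes.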
Multi-Modal Mean-Fields via Cardinality-Based Clamping. (arXiv:1611.07941v1 [cs.CV])
Mean Field inference is central to statistical physics. It has attracted much interest in the Computer Vision community as an efficient way to solve problems expressible in terms of large Conditional Random Fields. However, since it models the posterior probability distribution as a product of marginal probabilities, it may fail to properly account for important dependencies between variables. We therefore replace the fully factorized distribution of Mean Field with a weighted mixture of such distributions, which similarly minimizes the KL-divergence to the true posterior. We can perform this minimization efficiently by introducing two new ideas: conditioning on groups of variables instead of single ones, and selecting such groups using a parameter of the conditional random field potentials that we identify with the temperature in the sense of statistical physics. Our extension of the clamping method proposed in previous works allows us both to produce a more descriptive approximation of the true posterior and, inspired by the diverse MAP paradigm, to fit a mixture of Mean Field approximations. We demonstrate that this positively impacts real-world algorithms that initially relied on mean fields.
from cs.AI updates on arXiv.org http://ift.tt/2gCwp0i
via IFTTT
On Design Mining: Coevolution and Surrogate Models. (arXiv:1506.08781v6 [cs.NE] UPDATED)
Design mining is the use of computational intelligence techniques to iteratively search and model the attribute space of physical objects evaluated directly through rapid prototyping to meet given objectives. It enables the exploitation of novel materials and processes without formal models or complex simulation. In this paper, we focus upon the coevolutionary nature of the design process when it is decomposed into concurrent sub-design threads due to the overall complexity of the task. Using an abstract, tuneable model of coevolution we consider strategies to sample sub-thread designs for whole system testing and how best to construct and use surrogate models within the coevolutionary scenario. Drawing on our findings, the paper then describes the effective design of an array of six heterogeneous vertical-axis wind turbines.
from cs.AI updates on arXiv.org http://ift.tt/1Nx7TnZ
via IFTTT
A Brain-like Cognitive Process with Shared Methods. (arXiv:1507.04928v4 [cs.AI] UPDATED)
This paper describes a detailed cognitive structure with related processes. The system is modelled on the human brain, where pattern creation and activation would be automatic and distributed. Methods for carrying out the processes are also suggested. The main purpose of this paper is to reaffirm earlier research on different knowledge-based and experience-based clustering techniques. The overall architecture has stayed essentially the same and so it is the localised processes or smaller details that have been updated. For example, a counting mechanism is used slightly differently, to measure a level of 'cohesion' instead of a 'correct' classification, over pattern instances. The introduction of features has further enhanced the architecture and a new entropy-style of equation is proposed. While an earlier paper defined three levels of functional requirement, this paper re-defines the levels in a more human vernacular, with higher-level goals described in terms of action-result pairs.
from cs.AI updates on arXiv.org http://ift.tt/1LlCiaY
via IFTTT
Beyond knowing that: a new generation of epistemic logics. (arXiv:1605.01995v3 [cs.AI] UPDATED)
Epistemic logic has become a major field of philosophical logic ever since the groundbreaking work by Hintikka (1962). Despite its various successful applications in theoretical computer science, AI, and game theory, the technical development of the field has mainly focused on the propositional part, i.e., the propositional modal logics of "knowing that". However, knowledge is expressed in everyday life by using various other locutions such as "knowing whether", "knowing what", "knowing how" and so on (knowing-wh hereafter). Such knowledge expressions are better captured in quantified epistemic logic, as was already discussed at length by Hintikka (1962) and his sequel works. This paper aims to draw attention back to such a fascinating but largely neglected topic. We first survey what Hintikka and others did in the literature of quantified epistemic logic, and then advocate a new quantifier-free approach to studying the epistemic logics of knowing-wh, which we believe can balance expressivity and complexity, and capture the essential reasoning patterns about knowing-wh. We survey our recent line of work on the epistemic logics of "knowing whether", "knowing what" and "knowing how" to demonstrate the use of this new approach.
from cs.AI updates on arXiv.org http://ift.tt/24EIYeE
via IFTTT
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. (arXiv:1605.09304v5 [cs.NE] UPDATED)
Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right - similar to why we study the human brain - and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization (AM), which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN). The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).
from cs.AI updates on arXiv.org http://ift.tt/1U8Wg8k
via IFTTT
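The deep generator prior is far beyond a short sketch, but the underlying activation-maximization step the abstract builds on can be illustrated with plain gradient ascent on the input, here using finite-difference gradients and a toy differentiable "neuron" (everything below is illustrative, not the paper's method or code):

```python
def activation_maximization(neuron, x, steps=200, lr=0.1, eps=1e-5):
    """Gradient ascent on the input to maximize a scalar neuron activation,
    using central finite differences in place of backpropagation."""
    x = list(x)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            xp, xm = list(x), list(x)
            xp[i] += eps
            xm[i] -= eps
            grad.append((neuron(xp) - neuron(xm)) / (2 * eps))
        # step the input uphill along the activation gradient
        x = [xi + lr * g for xi, g in zip(x, grad)]
    return x

# Toy "neuron" whose activation peaks at x = (1, -2)
neuron = lambda x: -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
x_star = activation_maximization(neuron, [0.0, 0.0])
```

The DGN idea in the paper is to run this ascent not directly in image space but in the latent space of a learned generator, which is what makes the synthesized inputs look natural.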
VIME: Variational Information Maximizing Exploration. (arXiv:1605.09674v3 [cs.LG] UPDATED)
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
from cs.AI updates on arXiv.org http://ift.tt/1RKQdoC
via IFTTT
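VIME's bonus is the variational information gain (a KL divergence) about a Bayesian dynamics model, which needs machinery beyond a short sketch; the reward-shaping pattern it relies on, r' = r + eta * bonus, looks like the following, with a simple count-based novelty term standing in for the KL purely as an assumed illustrative proxy:

```python
import math
from collections import Counter

def augment_rewards(transitions, eta=0.1):
    """Add an exploration bonus to each external reward.
    transitions: list of (state, reward) pairs in visitation order.
    The 1/sqrt(visits) novelty term here is a cheap stand-in for VIME's
    information-gain bonus: novel states get a larger boost."""
    visits = Counter()
    augmented = []
    for state, reward in transitions:
        visits[state] += 1
        bonus = 1.0 / math.sqrt(visits[state])
        augmented.append(reward + eta * bonus)
    return augmented

rs = augment_rewards([("s0", 0.0), ("s0", 0.0), ("s1", 0.0)])
```

As the abstract notes, shaping the reward this way leaves the underlying RL algorithm unchanged, which is why the bonus can be combined with several different learners.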
Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates. (arXiv:1610.00633v2 [cs.RO] UPDATED)
Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.
from cs.AI updates on arXiv.org http://ift.tt/2dDMPQm
via IFTTT
Anonymous read access to a project
from Google Alert - anonymous http://ift.tt/2gClTpU
via IFTTT
anonymous users seeing empty results page, with no URL parameter passed after using /search/site
from Google Alert - anonymous http://ift.tt/2fGFykr
via IFTTT
Ravens: Steve Smith jokes about "losing sleep" over rookie CBs saying they don't respect him; "they don't really get it" (ESPN)
via IFTTT
[FD] Stored Cross-Site Scripting in Gallery - Image Gallery WordPress Plugin
Source: Gmail -> IFTTT-> Blogger
Your Headphones Can Spy On You — Even If You Have Disabled Microphone
from The Hacker News http://ift.tt/2gggcd7
via IFTTT
ISS Daily Summary Report – 11/22/2016
from ISS On-Orbit Status Report http://ift.tt/2g3lIR0
via IFTTT
How to solve this problem? Assertion in void __stdcall
from Google Alert - anonymous http://ift.tt/2g2XExA
via IFTTT
NTP DoS Exploit Released — Update Your Servers to Patch 10 Flaws
from The Hacker News http://ift.tt/2gjsNOb
via IFTTT
Pluto's Sputnik Planum
Massive Lightning Storm Lights up Northern Alabama
from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2f5fl2C
via IFTTT
Tuesday, November 22, 2016
Lulu Jieyao (Anonymous)
from Google Alert - anonymous http://ift.tt/2fpiR6K
via IFTTT
An Efficient Training Algorithm for Kernel Survival Support Vector Machines. (arXiv:1611.07054v1 [cs.LG])
Survival analysis is a fundamental tool in medical research to identify predictors of adverse events and develop systems for clinical decision support. In order to leverage large amounts of patient data, efficient optimisation routines are paramount. We propose an efficient training algorithm for the kernel survival support vector machine (SSVM). We directly optimise the primal objective function and employ truncated Newton optimisation and order statistic trees to significantly lower computational costs compared to previous training algorithms, which require $O(n^4)$ space and $O(p n^6)$ time for datasets with $n$ samples and $p$ features. Our results demonstrate that our proposed optimisation scheme allows analysing data of a much larger scale with no loss in prediction performance. Experiments on synthetic and 5 real-world datasets show that our technique outperforms existing kernel SSVM formulations if the amount of right censoring is high ($\geq85\%$), and performs comparably otherwise.
from cs.AI updates on arXiv.org http://ift.tt/2f4NPCg
via IFTTT
A Deep Learning Approach for Joint Video Frame and Reward Prediction in Atari Games. (arXiv:1611.07078v1 [cs.AI])
Reinforcement learning is concerned with learning to interact with environments that are initially unknown. State-of-the-art reinforcement learning approaches, such as DQN, are model-free and learn to act effectively across a wide range of environments such as Atari games, but require huge amounts of data. Model-based techniques are more data-efficient, but need to acquire explicit knowledge about the environment dynamics or the reward structure.
In this paper we take a step towards using model-based techniques in environments with a high-dimensional visual state space when the system dynamics and the reward structure are both unknown and need to be learned, by demonstrating that it is possible to learn both jointly. Empirical evaluation on five Atari games demonstrates accurate cumulative reward prediction of up to 200 frames. We consider these positive results as opening up important directions for model-based RL in complex, initially unknown environments.
from cs.AI updates on arXiv.org http://ift.tt/2fnecyJ
via IFTTT
Interpreting Finite Automata for Sequential Data. (arXiv:1611.07100v1 [stat.ML])
Automaton models are often seen as interpretable models. Interpretability itself is not well defined: it remains unclear what interpretability means without first explicitly specifying objectives or desired attributes. In this paper, we identify the key properties used to interpret automata and propose a modification of a state-merging approach to learn variants of finite state automata. We apply the approach to problems beyond typical grammar inference tasks. Additionally, we cover several use-cases for prediction, classification, and clustering on sequential data in both supervised and unsupervised scenarios to show how the identified key properties are applicable in a wide range of contexts.
from cs.AI updates on arXiv.org http://ift.tt/2f4KEuA
via IFTTT
CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression. (arXiv:1611.07233v1 [cs.CV])
Lossy image compression algorithms are pervasively used to reduce the size of images transmitted over the web and recorded on data storage media. However, we pay for their high compression rate with visual artifacts degrading the user experience. Deep convolutional neural networks have become a widespread tool to address high-level computer vision tasks very successfully. Recently, they have found their way into the areas of low-level computer vision and image processing to solve regression problems mostly with relatively shallow networks.
We present a novel 12-layer deep convolutional network for image compression artifact suppression with hierarchical skip connections and a multi-scale loss function. We achieve a boost of up to 1.79 dB in PSNR over ordinary JPEG and an improvement of up to 0.36 dB over the best previous ConvNet result. We show that a network trained for a specific quality factor (QF) is resilient to the QF used to compress the input image - a single network trained for QF 60 provides a PSNR gain of more than 1.5 dB over the wide QF range from 40 to 76.
from cs.AI updates on arXiv.org http://ift.tt/2fDoRWW
via IFTTT
Limbo: A Fast and Flexible Library for Bayesian Optimization. (arXiv:1611.07343v1 [cs.LG])
Limbo is an open-source C++11 library for Bayesian optimization which is designed to be both highly flexible and very fast. It can be used to optimize functions for which the gradient is unknown, evaluations are expensive, and runtime cost matters (e.g., on embedded systems or robots). Benchmarks on standard functions show that Limbo is about 2 times faster than BayesOpt (another C++ library) for a similar accuracy.
from cs.AI updates on arXiv.org http://ift.tt/2fDrkAZ
via IFTTT
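Limbo itself is a C++11 library; as a language-agnostic illustration of the Bayesian-optimization loop it implements, here is a minimal 1-D Gaussian-process UCB sketch in pure Python (the kernel lengthscale, candidate grid, and beta value are arbitrary choices for the example, not Limbo defaults):

```python
import math
import random

def rbf(a, b, length=0.5):
    """Squared-exponential kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at query point xq."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    k = [rbf(x, xq) for x in xs]
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v))
    return mean, max(var, 0.0)

def bayes_opt(f, lo, hi, n_init=3, n_iter=8, beta=2.0, seed=0):
    """Maximize an expensive black-box f on [lo, hi] with GP-UCB."""
    rng = random.Random(seed)
    xs = [lo + (hi - lo) * rng.random() for _ in range(n_init)]
    ys = [f(x) for x in xs]
    grid = [lo + (hi - lo) * i / 100 for i in range(101)]
    for _ in range(n_iter):
        def ucb(x):  # favour high predicted mean plus remaining uncertainty
            m, v = gp_posterior(xs, ys, x)
            return m + beta * math.sqrt(v)
        x_next = max(grid, key=ucb)
        xs.append(x_next)
        ys.append(f(x_next))
    best = max(range(len(xs)), key=lambda i: ys[i])
    return xs[best], ys[best]

# Toy objective with its maximum at x = 0.3
x_best, y_best = bayes_opt(lambda x: -(x - 0.3) ** 2, 0.0, 1.0)
```

The point of a tuned library like Limbo is that exactly this loop, with a fast GP implementation and better acquisition handling, runs cheaply enough for embedded systems and robots.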
Randomized Mechanisms for Selling Reserved Instances in Cloud. (arXiv:1611.07379v1 [cs.GT])
Selling reserved instances (or virtual machines) is a basic service in cloud computing. In this paper, we consider a more flexible pricing model for instance reservation, in which a customer can propose the time length and number of resources of her request, while in today's industry, customers can only choose from several predefined reservation packages. Under this model, we design randomized mechanisms for customers coming online to optimize social welfare and providers' revenue.
We first consider a simple case, where the requests from the customers do not vary too much in terms of both length and value density. We design a randomized mechanism that achieves a competitive ratio $\frac{1}{42}$ for both \emph{social welfare} and \emph{revenue}, which is an improvement as there is usually no revenue guarantee in previous works such as \cite{azar2015ec,wang2015selling}. This ratio can be improved up to $\frac{1}{11}$ when we impose a realistic constraint on the maximum number of resources used by each request. On the hardness side, we show an upper bound of $\frac{1}{3}$ on the competitive ratio of any randomized mechanism. We then extend our mechanism to the general case and achieve a competitive ratio of $\frac{1}{42\log k\log T}$ for both social welfare and revenue, where $T$ is the ratio of the maximum request length to the minimum request length and $k$ is the ratio of the maximum request value density to the minimum request value density. This result outperforms the previous upper bound of $\frac{1}{CkT}$ for deterministic mechanisms \cite{wang2015selling}. We also prove an upper bound of $\frac{2}{\log 8kT}$ for any randomized mechanism. All the mechanisms we provide are in a greedy style. They are truthful and easy to integrate into practical cloud systems.
from cs.AI updates on arXiv.org http://ift.tt/2fnfrhr
via IFTTT
Deep Learning Approximation for Stochastic Control Problems. (arXiv:1611.07422v1 [cs.LG])
Many real world stochastic control problems suffer from the "curse of dimensionality". To overcome this difficulty, we develop a deep learning approach that directly solves high-dimensional stochastic control problems based on Monte-Carlo sampling. We approximate the time-dependent controls as feedforward neural networks and stack these networks together through model dynamics. The objective function for the control problem plays the role of the loss function for the deep neural network. We test this approach using examples from the areas of optimal trading and energy storage. Our results suggest that the algorithm presented here achieves satisfactory accuracy and at the same time, can handle rather high dimensional problems.
from cs.AI updates on arXiv.org http://ift.tt/2f4IZFd
via IFTTT
An unexpected unity among methods for interpreting model predictions. (arXiv:1611.07478v1 [cs.AI])
Understanding why a model made a certain prediction is crucial in many data science fields. Interpretable predictions engender appropriate trust and provide insight into how the model may be improved. However, with large modern datasets the best accuracy is often achieved by complex models that even experts struggle to interpret, which creates a tension between accuracy and interpretability. Recently, several methods have been proposed for interpreting predictions from complex models by estimating the importance of input features. Here, we show how a model-agnostic additive representation of the importance of input features unifies current methods. This representation is optimal, in the sense that it is the only set of additive values that satisfies important properties. We show how we can leverage these properties to create novel visual explanations of model predictions. The thread of unity that this representation weaves through the literature indicates that there are common principles to be learned about the interpretation of model predictions that apply in many scenarios.
from cs.AI updates on arXiv.org http://ift.tt/2fneUw3
via IFTTT
Variational Intrinsic Control. (arXiv:1611.07507v1 [cs.LG])
In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.
from cs.AI updates on arXiv.org http://ift.tt/2f4Jr6C
via IFTTT
Associative Adversarial Networks. (arXiv:1611.06953v1 [cs.LG])
We propose a higher-level associative memory for learning adversarial networks. The generative adversarial network (GAN) framework has a discriminator and a generator network. The generator (G) maps white noise (z) to data samples, while the discriminator (D) maps data samples to a single scalar. To do so, G learns how to map from a high-level representation space to data space, and D learns to do the opposite. We argue that higher-level representation spaces need not necessarily follow a uniform probability distribution. In this work, we use Restricted Boltzmann Machines (RBMs) as a higher-level associative memory and learn the probability distribution of the high-level features generated by D. The associative memory samples its underlying probability distribution, and G learns how to map these samples to data space. The proposed associative adversarial networks (AANs) are generative models at the higher levels of learning, and use adversarial non-stochastic models D and G for learning the mapping between data and higher-level representation spaces. Experiments show the potential of the proposed networks.
from cs.AI updates on arXiv.org http://ift.tt/2flLnpO
via IFTTT
Alcoholics anonymous
from Google Alert - anonymous http://ift.tt/2f4o2Kt
via IFTTT
Anonymous Just Took Down Website of Company that Sells Concussion Grenades to DAPL Cops
from Google Alert - anonymous http://ift.tt/2fPcXw2
via IFTTT
Anonymous Report of PCard Fraud or Abuse
from Google Alert - anonymous http://ift.tt/2fP85qU
via IFTTT
Custom Essay Writers Anonymous
from Google Alert - anonymous http://ift.tt/2fo46kg
via IFTTT
Orioles name Roger McDowell pitching coach; spent the last 11 seasons in same role with the Braves (ESPN)
via IFTTT
Ravens (5-5) drop 1 spot to No. 14 in Week 12 NFL Power Rankings; next game Sunday vs. Bengals (3-6-1) (ESPN)
via IFTTT
ISS Daily Summary Report – 11/21/2016
from ISS On-Orbit Status Report http://ift.tt/2fBCnuo
via IFTTT
how can i give anonymous user as a group some permission in odoo?
from Google Alert - anonymous http://ift.tt/2ghzgJ1
via IFTTT
[FD] [CVE-2016-7434] ntpd remote pre-auth DoS
Source: Gmail -> IFTTT-> Blogger
[FD] [ERPSCAN-16-033] SAP NetWeaver AS JAVA icman - DoS vulnerability
Source: Gmail -> IFTTT-> Blogger
[FD] [x33fcon] Call for Papers (and Trainers)
Source: Gmail -> IFTTT-> Blogger
[FD] MSIE8 MSHTML Ptls5::LsFindSpanVisualBoundaries memory corruption
Source: Gmail -> IFTTT-> Blogger
[FD] PHDays VII Call for Papers: How to Stand Up at the Standoff
Source: Gmail -> IFTTT-> Blogger
[FD] Reflected XSS in WonderCMS <= v0.9.8
Source: Gmail -> IFTTT-> Blogger
Hackers Steal Millions From European ATMs Using Malware That Spit Out Cash
from The Hacker News http://ift.tt/2fmu0oM
via IFTTT
Oracle acquires DNS provider Dyn for more than $600 Million
from The Hacker News http://ift.tt/2gx7gUP
via IFTTT
Your prepaid card can no longer be anonymous
from Google Alert - anonymous http://ift.tt/2gx5aUT
via IFTTT
Monday, November 21, 2016
Nassau County Schools Respond To Anonymous Bomb Threat
from Google Alert - anonymous http://ift.tt/2f14CGD
via IFTTT
Advice- Anonymous and Free
from Google Alert - anonymous http://ift.tt/2fMvMAe
via IFTTT
Invertible Conditional GANs for image editing. (arXiv:1611.06355v1 [cs.CV])
Generative Adversarial Networks (GANs) have recently been shown to approximate complex data distributions successfully. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information makes it possible to determine specific representations of the generated images. In this work, we evaluate encoders that invert the mapping of a cGAN, i.e., map a real image into a latent space and a conditional representation. This allows us, for example, to reconstruct and modify real images of faces by conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call an Invertible cGAN (IcGAN), makes it possible to re-generate real images with deterministic complex modifications.
from cs.AI updates on arXiv.org http://ift.tt/2fWM9rD
via IFTTT
A Survey of Credit Card Fraud Detection Techniques: Data and Technique Oriented Perspective. (arXiv:1611.06439v1 [cs.CR])
Credit cards play a very important role in today's economy, having become an unavoidable part of household, business, and global activities. Although credit cards provide enormous benefits when used carefully and responsibly, significant credit and financial damage may be caused by fraudulent activity. Many techniques have been proposed to confront the growth of credit card fraud. While all of these techniques share the goal of preventing credit card fraud, each has its own drawbacks, advantages, and characteristics. In this paper, after investigating the difficulties of credit card fraud detection, we review the state of the art in credit card fraud detection techniques, data sets, and evaluation criteria. The advantages and disadvantages of fraud detection methods are enumerated and compared. Furthermore, a classification of the surveyed techniques into the two main fraud detection approaches, namely misuse (supervised) and anomaly detection (unsupervised), is presented. A further classification of the techniques is proposed based on their ability to process numerical and categorical data sets. The data sets used in the literature are then described and grouped into real and synthesized data, and the effective and common attributes are extracted for further use. Moreover, the evaluation criteria employed in the literature are collected and discussed. Finally, open issues in credit card fraud detection are laid out as guidelines for new researchers.
from cs.AI updates on arXiv.org http://ift.tt/2geNhb3
via IFTTT
Generating machine-executable plans from end-user's natural-language instructions. (arXiv:1611.06468v1 [cs.AI])
It is critical for advanced manufacturing machines to autonomously execute a task by following an end-user's natural language (NL) instructions. However, NL instructions are usually ambiguous and abstract, so the machines may misunderstand and incorrectly execute the task. To address this NL-based human-machine communication problem and enable machines to appropriately execute tasks by following the end-user's NL instructions, we developed a Machine-Executable-Plan-Generation (exePlan) method. The exePlan method conducts task-centered semantic analysis to extract task-related information from ambiguous NL instructions. In addition, the method specifies machine execution parameters to generate a machine-executable plan by interpreting abstract NL instructions. To evaluate the exePlan method, an industrial robot, Baxter, was instructed by NL to perform three types of industrial tasks: 'drill a hole', 'clean a spot', and 'install a screw'. The experimental results showed that the exePlan method was effective in generating machine-executable plans from the end-user's NL instructions. Such a method promises to endow a machine with the ability to execute NL-instructed tasks.
from cs.AI updates on arXiv.org http://ift.tt/2fWS4fY
via IFTTT
Fair Division via Social Comparison. (arXiv:1611.06589v1 [cs.DS])
In the classical cake cutting problem, a resource must be divided among agents with different utilities so that each agent believes they have received a fair share of the resource relative to the other agents. We introduce a variant of the problem in which we model an underlying social network on the agents with a graph, and agents only evaluate their shares relative to their neighbors' in the network. This formulation captures many situations in which it is unrealistic to assume a global view, and also exposes interesting phenomena in the original problem.
Specifically, we say an allocation is locally envy-free if no agent envies a neighbor's allocation, and locally proportional if each agent values her own allocation at least as much as the average value of her neighbors' allocations, with the former implying the latter. While global envy-freeness implies local envy-freeness, global proportionality does not imply local proportionality, nor vice versa. A general result is that for any two distinct graphs on the same set of nodes and any allocation, there exists a set of valuation functions such that the allocation is locally proportional on one graph but not on the other.
We fully characterize the set of graphs for which an oblivious single-cutter protocol -- a protocol that uses a single agent to cut the cake into pieces -- admits a bounded protocol with $O(n^2)$ query complexity for locally envy-free allocations in the Robertson-Webb model. We also consider the price of envy-freeness, which compares the total utility of an optimal allocation to the best utility of an allocation that is envy-free. We show that a lower bound of $\Omega(\sqrt{n})$ on the price of envy-freeness for global allocations in fact holds for local envy-freeness in any connected undirected graph. Thus, sparse graphs surprisingly do not provide more flexibility with respect to the quality of envy-free allocations.
from cs.AI updates on arXiv.org http://ift.tt/2fWQiM4
via IFTTT
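The local fairness notions in the abstract above are straightforward to check for a given allocation. A minimal sketch, assuming a simple adjacency-dict graph and a valuation table (both hypothetical data layouts, not from the paper):

```python
def locally_envy_free(graph, values):
    """graph: dict node -> set of neighbor nodes.
    values[i][j]: agent i's valuation of agent j's allocated piece.
    Locally envy-free: no agent values a neighbor's piece above her own."""
    return all(values[i][i] >= values[i][j]
               for i in graph for j in graph[i])

def locally_proportional(graph, values):
    """Each agent values her own piece at least as much as the average
    value she assigns to her neighbors' pieces."""
    for i, nbrs in graph.items():
        if nbrs and values[i][i] < sum(values[i][j] for j in nbrs) / len(nbrs):
            return False
    return True

# A path graph 0-1-2: agents only compare with their neighbors.
graph = {0: {1}, 1: {0, 2}, 2: {1}}
values = {0: {0: 5, 1: 4},
          1: {0: 3, 1: 3, 2: 2},
          2: {1: 2, 2: 6}}
print(locally_envy_free(graph, values))    # True
print(locally_proportional(graph, values)) # True (implied by local envy-freeness)
```

Raising `values[0][1]` above 5 breaks both properties at once, matching the implication stated in the abstract.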
Non-Local Color Image Denoising with Convolutional Neural Networks. (arXiv:1611.06757v1 [cs.CV])
We propose a novel deep network architecture for grayscale and color image denoising that is based on a non-local image model. Our motivation for the overall design of the proposed network stems from variational methods that exploit the inherent non-local self-similarity property of natural images. We build on this concept and introduce deep networks that perform non-local processing and at the same time significantly benefit from discriminative learning. Experiments on the Berkeley segmentation dataset, comparing several state-of-the-art methods, show that the proposed non-local models achieve the best reported denoising performance for both grayscale and color images at all tested noise levels. It is also worth noting that this increase in performance comes at no extra cost in network capacity compared to existing alternative deep network architectures. In addition, we highlight a direct link between the proposed non-local models and convolutional neural networks. This connection is significant because it allows our models to take full advantage of the latest advances in GPU computing for deep learning and makes them amenable to efficient implementations through their inherent parallelism.
from cs.AI updates on arXiv.org http://ift.tt/2fWNGO3
via IFTTT
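The non-local self-similarity prior that motivates the architecture above goes back to classic non-local means. A minimal 1-D pure-Python sketch of that baseline, to make the idea concrete (parameters are illustrative and unrelated to the paper's trained network):

```python
import math

def nlm_denoise_1d(signal, patch=1, h=0.5):
    """Classic non-local means on a 1-D signal: each sample is replaced by
    a weighted average of *all* samples, weighted by patch similarity
    rather than spatial proximity -- the self-similarity prior that the
    proposed networks embed in a learnable architecture."""
    n = len(signal)

    def patch_at(i):
        # clamp indices at the borders so every patch has the same length
        return [signal[min(max(i + d, 0), n - 1)] for d in range(-patch, patch + 1)]

    out = []
    for i in range(n):
        pi = patch_at(i)
        ws, acc = 0.0, 0.0
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(pi, patch_at(j)))
            w = math.exp(-d2 / (h * h))  # similar patches get large weights
            ws += w
            acc += w * signal[j]
        out.append(acc / ws)
    return out

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
print(nlm_denoise_1d(noisy))  # samples are smoothed toward similar samples
```

Because weights depend on patch similarity, the two plateaus are each averaged mostly with themselves, which is why non-local methods preserve edges that a local blur would smear.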
Generalized Dropout. (arXiv:1611.06791v1 [cs.LG])
Deep Neural Networks often require good regularizers to generalize well. Dropout is one such regularizer that is widely used among Deep Learning practitioners. Recent work has shown that Dropout can also be viewed as performing Approximate Bayesian Inference over the network parameters. In this work, we generalize this notion and introduce a rich family of regularizers which we call Generalized Dropout. One set of methods in this family, called Dropout++, is a version of Dropout with trainable parameters. Classical Dropout emerges as a special case of this method. Another member of this family selects the width of neural network layers. Experiments show that these methods help in improving generalization performance over Dropout.
from cs.AI updates on arXiv.org http://ift.tt/2geLt1C
via IFTTT
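A minimal sketch of the Dropout++ idea described above: each unit gets a trainable retain probability, and classical Dropout falls out as a special case. The sigmoid parameterization shown here is an assumption for illustration, not necessarily the paper's exact formulation:

```python
import math
import random

def dropout_pp_forward(activations, theta, rng=None, train=True):
    """'Dropout++' sketch: unit i has a trainable parameter theta[i], and
    its retain probability is sigmoid(theta[i]). Classical dropout with
    drop rate p is recovered by fixing every theta[i] = log((1-p)/p)."""
    rng = rng or random.Random(0)
    keep = [1.0 / (1.0 + math.exp(-t)) for t in theta]
    if not train:
        # at test time, scale by the expected mask instead of sampling
        return [a * k for a, k in zip(activations, keep)]
    mask = [1.0 if rng.random() < k else 0.0 for k in keep]
    return [a * m for a, m in zip(activations, mask)]

acts = [1.0, 2.0, 3.0]
theta = [10.0, 10.0, 10.0]  # retain probability sigmoid(10) ~ 0.99995
print(dropout_pp_forward(acts, theta))  # almost surely [1.0, 2.0, 3.0]
```

Because the retain probabilities are differentiable functions of `theta`, they can be learned jointly with the network weights, which is what distinguishes this family from fixed-rate Dropout.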
Options Discovery with Budgeted Reinforcement Learning. (arXiv:1611.06824v1 [cs.LG])
We consider the problem of learning hierarchical policies for reinforcement learning that are able to discover options, where an option corresponds to a sub-policy over a set of primitive actions. Different models have been proposed over the last decade, usually relying on a predefined set of options. We specifically address the problem of automatically discovering options in decision processes. We describe a new RL framework called Bi-POMDP and a new learning model called the Budgeted Option Neural Network (BONN), which is able to discover options based on a budgeted learning objective. Since Bi-POMDPs are more general than POMDPs, our model can also be used to discover options for classical RL tasks. The BONN model is evaluated on several classical RL problems, demonstrating interesting quantitative and qualitative results.
from cs.AI updates on arXiv.org http://ift.tt/2fWM0V2
via IFTTT
Learning From Graph Neighborhoods Using LSTMs. (arXiv:1611.06882v1 [cs.LG])
Many prediction problems can be phrased as inferences over local neighborhoods of graphs. The graph represents the interaction between entities, and the neighborhood of each entity contains information that allows the inferences or predictions. We present an approach for applying machine learning directly to such graph neighborhoods, yielding predictions for graph nodes on the basis of the structure of their local neighborhood and the features of the nodes in it. Our approach allows predictions to be learned directly from examples, bypassing the step of creating and tuning an inference model or summarizing the neighborhoods via a fixed set of hand-crafted features. The approach is based on a multi-level architecture built from Long Short-Term Memory neural nets (LSTMs); the LSTMs learn how to summarize the neighborhood from data. We demonstrate the effectiveness of the proposed technique on a synthetic example and on real-world data related to crowdsourced grading, Bitcoin transactions, and Wikipedia edit reversions.
from cs.AI updates on arXiv.org http://ift.tt/2geMZkf
via IFTTT
Memory Lens: How Much Memory Does an Agent Use? (arXiv:1611.06928v1 [cs.AI])
We propose a new method to study the internal memory used by reinforcement learning policies. We estimate the amount of relevant past information by estimating the mutual information between behavior histories and the current action of an agent. We perform this estimation in the passive setting; that is, we do not intervene but merely observe the natural behavior of the agent. Moreover, we provide a theoretical justification for our approach by showing that it yields an implementation-independent lower bound on the minimal memory capacity of any agent that implements the observed policy. We demonstrate our approach by estimating the memory use of DQN policies on concatenated Atari frames, revealing sharply different use of memory across 49 games. The study of memory as information that flows from the past to the current action opens avenues to understand and improve successful reinforcement learning algorithms.
from cs.AI updates on arXiv.org http://ift.tt/2fWV2kE
via IFTTT
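The core quantity in the abstract above, mutual information between behavior history and current action estimated from passive observation, can be illustrated with a simple plug-in estimator over discrete (history, action) pairs. The two toy policies below are hypothetical, chosen only to show the zero-memory versus history-dependent extremes:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(H; A) in bits from observed (history, action)
    pairs -- a proxy for how much past information the policy's memory
    must carry (the Memory Lens idea gives this a lower-bound reading)."""
    n = len(pairs)
    ph, pa = Counter(), Counter()
    pha = Counter(pairs)
    for h, a in pairs:
        ph[h] += 1
        pa[a] += 1
    mi = 0.0
    for (h, a), c in pha.items():
        p = c / n
        # p(h,a) * log2( p(h,a) / (p(h) p(a)) )
        mi += p * math.log2(p * n * n / (ph[h] * pa[a]))
    return mi

# Memoryless policy: the action ignores the history -> MI = 0 bits.
memless = [("L", "go"), ("R", "go"), ("L", "go"), ("R", "go")]
# History-dependent policy: the action is determined by the history.
dep = [("L", "left"), ("R", "right"), ("L", "left"), ("R", "right")]
print(mutual_information(memless))  # 0.0
print(mutual_information(dep))      # 1.0
```

Any agent implementing the second policy must carry at least one bit of the past, which is exactly the implementation-independent lower bound the abstract refers to.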
Enforcing Relational Matching Dependencies with Datalog for Entity Resolution. (arXiv:1611.06951v1 [cs.DB])
Entity resolution (ER) is about identifying and merging records in a database that represent the same real-world entity. Matching dependencies (MDs) have been introduced and investigated as declarative rules that specify ER policies. An ER process induced by MDs over a dirty instance leads to multiple clean instances, in general. General "answer sets programs" have been proposed to specify the MD-based cleaning task and its results. In this work, we extend MDs to "relational MDs", which capture more application semantics, and identify classes of relational MDs for which the general ASP can be automatically rewritten into a stratified Datalog program, with the single clean instance as its standard model.
from cs.AI updates on arXiv.org http://ift.tt/2fWRteo
via IFTTT
Coherent Dialogue with Attention-based Language Models. (arXiv:1611.06997v1 [cs.CL])
We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism. Our attention-RNN language model dynamically increases the scope of attention on the history as the conversation continues, as opposed to standard attention (or alignment) models with a fixed input scope in a sequence-to-sequence model. This allows each generated word to be associated with the most relevant words in its corresponding conversation history. We evaluate the model on two popular dialogue datasets, the open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot dataset, and achieve significant improvements over the state-of-the-art and baselines on several metrics, including complementary diversity-based metrics, human evaluation, and qualitative visualizations. We also show that a vanilla RNN with dynamic attention outperforms more complex memory models (e.g., LSTM and GRU) by allowing for flexible, long-distance memory. We promote further coherence via topic modeling-based reranking.
from cs.AI updates on arXiv.org http://ift.tt/2fWN637
via IFTTT
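A minimal sketch of attention whose scope grows with the conversation history, the mechanism the abstract above describes. The dot-product scoring and hand-made vectors are simplified stand-ins, not the paper's trained model:

```python
import math

def attend(history_vecs, query):
    """Soft attention over the *entire* conversation history: scores are
    dot products between the current decoder query and each history
    vector. The scope grows as utterance vectors are appended, unlike
    fixed-window alignment in a standard sequence-to-sequence model."""
    scores = [sum(q * h for q, h in zip(query, v)) for v in history_vecs]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # context vector = attention-weighted sum of history vectors
    dim = len(query)
    context = [sum(w * v[d] for w, v in zip(weights, history_vecs))
               for d in range(dim)]
    return weights, context

history = [[1.0, 0.0]]
for new_vec in ([0.0, 1.0], [0.9, 0.1]):
    history.append(list(new_vec))            # the conversation continues...
    w, ctx = attend(history, [1.0, 0.0])     # ...and attention covers all of it
    print(len(w), [round(x, 3) for x in w])
```

Each generated word can thus weight the most relevant earlier utterances, however far back they occurred, rather than only a fixed recent window.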
Context Encoders: Feature Learning by Inpainting. (arXiv:1604.07379v2 [cs.CV] UPDATED)
We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need both to understand the content of the entire image and to produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss and a reconstruction-plus-adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
from cs.AI updates on arXiv.org http://ift.tt/1SGbSlQ
via IFTTT
"Knowing value" logic as a normal modal logic. (arXiv:1604.08709v3 [cs.AI] UPDATED)
Recent years have witnessed a growing interest in nonstandard epistemic logics of "knowing whether", "knowing what", "knowing how", and so on. These logics are usually not normal, i.e., the standard axioms and reasoning rules for modal logic may be invalid. In this paper, we show that the conditional "knowing value" logic proposed by Wang and Fan \cite{WF13} can be viewed as a disguised normal modal logic by treating the negation of the Kv operator as a special diamond. Under this perspective, it turns out that the original first-order Kripke semantics can be greatly simplified by introducing a ternary relation $R_i^c$ in standard Kripke models, which associates one world with two $i$-accessible worlds that do not agree on the value of constant $c$. Under intuitive constraints, the modal logic based on such Kripke models is exactly the one studied by Wang and Fan (2013, 2014). Moreover, there is a very natural binary generalization of the "knowing value" diamond which, surprisingly, does not increase the expressive power of the logic. The resulting logic with the binary diamond has a transparent normal modal system, which sharpens our understanding of the "knowing value" logic and simplifies some previously hard problems.
from cs.AI updates on arXiv.org http://ift.tt/1W181Dx
via IFTTT
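The ternary relation $R_i^c$ above has a direct computational reading: agent $i$ knows the value of $c$ at world $w$ iff no two $i$-accessible worlds disagree on $c$. A minimal sketch over finite Kripke models, with a hypothetical dict-based data layout:

```python
def knows_value(acc, val, agent, c, w):
    """Kv_i c holds at world w iff every pair of i-accessible worlds
    agrees on the value of constant c -- equivalently, the special
    diamond (via the ternary relation R_i^c relating w to two
    i-accessible worlds that disagree on c) has no witness.

    acc: dict (agent, world) -> set of accessible worlds
    val: dict (world, constant) -> value at that world
    """
    reachable = acc.get((agent, w), set())
    vals = {val[(u, c)] for u in reachable}
    return len(vals) <= 1  # vacuously true when nothing is accessible

# Worlds w1, w2 are both i-accessible from w; they disagree on c
# but agree on d.
acc = {("i", "w"): {"w1", "w2"}}
val = {("w1", "c"): 0, ("w2", "c"): 1,
       ("w1", "d"): 7, ("w2", "d"): 7}
print(knows_value(acc, val, "i", "c", "w"))  # False: accessible worlds disagree on c
print(knows_value(acc, val, "i", "d", "w"))  # True: all accessible worlds agree on d
```

The pair (w1, w2) here is precisely a witness for the ternary relation $R_i^c$, which is why the Kv diamond can be treated as a normal modal diamond over that relation.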
MCMC assisted by Belief Propagation. (arXiv:1605.09042v5 [stat.ML] UPDATED)
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in graphical models (GMs). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method that is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms that correct the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows us to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pairwise binary GMs and also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximate MCMC scheme for the truncated series of general (non-planar) pairwise binary models. Our main idea is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and to design an appropriate rejection scheme for sampling 2-regular loops. Furthermore, we design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty of our design is in utilizing the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
from cs.AI updates on arXiv.org http://ift.tt/27ZAKgo
via IFTTT
Word Sense Disambiguation using a Bidirectional LSTM. (arXiv:1606.03568v2 [cs.CL] UPDATED)
In this paper we present a clean yet effective model for word sense disambiguation. Our approach leverages a bidirectional long short-term memory network that is shared between all words. This enables the model to share statistical strength and to scale well with vocabulary size. The model is trained end-to-end, directly from raw text to sense labels, and makes effective use of word order. We evaluate our approach on two standard datasets using identical hyperparameter settings, which are in turn tuned on a third, held-out set. We employ no external resources (e.g., knowledge graphs or part-of-speech tagging), language-specific features, or hand-crafted rules, yet still achieve results statistically equivalent to the best state-of-the-art systems, which operate under no such limitations.
from cs.AI updates on arXiv.org http://ift.tt/1PZbHEa
via IFTTT