Latest YouTube Video
Saturday, November 12, 2016
I have a new follower on Twitter
Laurent Dunys 🤓
#Startup #Entrepreneur #Chatbot #NLP #Engineer | Organizer of the #Meetup #voitureconnectee in #Paris
Paris
https://t.co/xnrgnmfn5P
Following: 4492 - Followers: 4465
November 12, 2016 at 05:47AM via Twitter http://twitter.com/LDunys
Russian Court bans LinkedIn in Russia; Facebook and Twitter Could be Next
from The Hacker News http://ift.tt/2fZkWbP
via IFTTT
Facebook Bug Declares Millions of Users Dead, Including Zuckerberg!
from The Hacker News http://ift.tt/2fFWBnE
via IFTTT
Friday, November 11, 2016
Rumor Central: Orioles pursuing free-agent OF Josh Reddick - MLB Network; 10 HR, 37 RBI in 2016 (ESPN)
via IFTTT
Anonymous access to owncloud server
from Google Alert - anonymous http://ift.tt/2fk4Ya6
via IFTTT
[FD] Trango Systems hidden default root login (all models)
Source: Gmail -> IFTTT-> Blogger
[FD] Google Chrome blink Serializer::doSerialize bad cast details
Source: Gmail -> IFTTT-> Blogger
Google Pixel Phone Hacked in 60 Seconds at PwnFest 2016
from The Hacker News http://ift.tt/2fIvpXK
via IFTTT
5 Major Russian Banks Hit With Powerful DDoS Attacks
from The Hacker News http://ift.tt/2fJl0J9
via IFTTT
I have a new follower on Twitter
Nutanix Inc.
Nutanix delivers an enterprise cloud platform that natively converges compute, virtualization and storage into a resilient, software-defined solution.
San Jose, CA
http://t.co/dHATyUqEES
Following: 5430 - Followers: 56716
November 11, 2016 at 10:52AM via Twitter http://twitter.com/nutanix
I have a new follower on Twitter
Solvitur Systems
#Cybersecurity #InformationAssurance #Governance, #Risk & #Compliance #Information #Security #InfoSec #Cloud #HIPAA #PCIDSS #CISO #FedRAMP #Consulting Services
http://t.co/zPCCcB0zsT
Following: 5772 - Followers: 6636
November 11, 2016 at 04:52AM via Twitter http://twitter.com/Solvitursystems
Warning: Beware of Post-Election Phishing Emails Targeting NGOs and Think Tanks
from The Hacker News http://ift.tt/2eIDNC2
via IFTTT
I have a new follower on Twitter
Marcos Battisti
European Growth & Venture Capitalist - Investing in tech since 98 - Unashamed Europhile - Geopolitics lover, science geek and bon vivant - Opinions are my own
London, England
https://t.co/5122Ttdbry
Following: 943 - Followers: 1239
November 11, 2016 at 01:57AM via Twitter http://twitter.com/Marcolino748
Early 2016 Winter Storm Melts Arctic Sea Ice
from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2eXO26m
via IFTTT
Weekly Animation of Arctic Sea Ice Age with Two Graphs: 1984 - 2016
from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2eXMGZc
via IFTTT
Ravens: Breshad Perriman registers first career TD with a 27-yard catch in the 4th quarter against the Browns (ESPN)
via IFTTT
Thursday, November 10, 2016
Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks. (arXiv:1611.03130v1 [cs.CV])
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for rarely occurring but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
from cs.AI updates on arXiv.org http://ift.tt/2eGE3kV
via IFTTT
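The headline metric in the abstract above is per-pixel accuracy. A minimal sketch of how such a number is computed (the label maps and shapes below are invented for illustration; this is not the paper's code):

```python
def per_pixel_accuracy(pred, truth):
    """pred, truth: 2-D lists (H x W) of integer class labels."""
    total = correct = 0
    for row_p, row_t in zip(pred, truth):
        for p, t in zip(row_p, row_t):
            total += 1
            correct += (p == t)
    return correct / total

# Toy 2x3 label maps: one pixel near the object border is mislabeled,
# mirroring the paper's observation that errors cluster at borders.
truth = [[0, 0, 1],
         [0, 1, 1]]
pred  = [[0, 0, 1],
         [0, 0, 1]]
acc = per_pixel_accuracy(pred, truth)  # 5 of 6 pixels correct
```

The paper's 99.1% figure is this ratio computed over all pixels of the evaluation scenes.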
Learning to Play Guess Who? and Inventing a Grounded Language as a Consequence. (arXiv:1611.03218v1 [cs.AI])
Learning your first language is an incredible feat and not easily duplicated. Doing this using nothing but a few pictureless books (a corpus) would likely be impossible even for humans. As an alternative, we propose to use situated interactions between agents as a driving force for communication, and the framework of Deep Recurrent Q-Networks (DRQN) for learning a common language grounded in the provided environment. We task the agents with interactive image search in the form of the game Guess Who?. The images from the game provide a non-trivial environment for the agents to discuss and a natural grounding for the concepts they decide to encode in their communication. Our experiments show that it is possible to learn this task using DRQN and, more importantly, that the words the agents use correspond to physical attributes present in the images that make up the agents' environment.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.03218
via IFTTT
A stochastically verifiable autonomous control architecture with reasoning. (arXiv:1611.03372v1 [cs.RO])
A new agent architecture called Limited Instruction Set Agent (LISA) is introduced for autonomous control. The new architecture is based on previous implementations of AgentSpeak, and it is structurally simpler than its predecessors, with the aim of facilitating design-time and run-time verification methods. The process of abstracting the LISA system to two different types of discrete probabilistic models (DTMC and MDP) is investigated and illustrated. The LISA system provides a tool for complete modelling of the agent and the environment for probabilistic verification. The agent program can be automatically compiled into a DTMC or an MDP model for verification with Prism. The automatically generated Prism model can be used for both design-time and run-time verification. Run-time verification is investigated and illustrated in the LISA system as an internal modelling mechanism for predicting future outcomes.
from cs.AI updates on arXiv.org http://ift.tt/2eGK5SE
via IFTTT
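The kind of property a tool like Prism checks on such a compiled model is, for example, the probability of eventually reaching a goal state. A toy sketch of that computation on a hand-written DTMC (the chain and its probabilities are invented; in the paper the model is compiled from the LISA agent program):

```python
# Fixed-point iteration for P(eventually reach target | start state)
# on a small discrete-time Markov chain (DTMC).
def reach_probability(trans, target, iters=200):
    """trans[s] = list of (next_state, prob) pairs; target is absorbing."""
    p = {s: (1.0 if s == target else 0.0) for s in trans}
    for _ in range(iters):
        for s in trans:
            if s != target:
                p[s] = sum(pr * p[t] for t, pr in trans[s])
    return p

trans = {
    "s0":   [("s1", 0.9), ("s2", 0.1)],   # s2 is an absorbing dead end
    "s1":   [("goal", 0.8), ("s0", 0.2)],
    "s2":   [("s2", 1.0)],
    "goal": [("goal", 1.0)],
}
probs = reach_probability(trans, "goal")
```

Here probs["s0"] converges to 0.72/0.82 ≈ 0.878, the chance the agent ever reaches the goal rather than the dead end; Prism answers such queries exactly on the compiled model.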
XCSP3: An Integrated Format for Benchmarking Combinatorial Constrained Problems. (arXiv:1611.03398v1 [cs.AI])
We propose a major revision of the XCSP 2.1 format, called XCSP3, to build integrated representations of combinatorial constrained problems. This new format is able to deal with single- and multi-objective optimization, many types of variables, cost functions, reification, views, annotations, variable quantification, and distributed, probabilistic and qualitative reasoning. The new format is compact, highly readable, and rather easy to parse. Interestingly, it captures the structure of problem models through the ability to declare arrays of variables and to identify syntactic and semantic groups of constraints. The number of constraints is kept under control by introducing a limited set of basic constraint forms, and producing many of their variations almost automatically through lifting, restriction, sliding, logical combination and relaxation mechanisms. As a result, XCSP3 encompasses practically all constraints that can be found in major constraint solvers developed by the CP community. A website, developed jointly with the format, contains many models and series of instances. The user can make sophisticated queries to select instances by very precise criteria. The objective of XCSP3 is to ease the effort required to test and compare different algorithms by providing a common test-bed of combinatorial constrained instances.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.03398
via IFTTT
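A rough illustration of what consuming such an XML-based format looks like, using a simplified, schematic fragment (the element names and attributes below are an approximation for illustration; the exact schema should be checked against the XCSP3 specification):

```python
# Parse a schematic XCSP3-style instance: one array of variables with a
# shared integer domain, plus one global constraint.
import xml.etree.ElementTree as ET

INSTANCE = """
<instance format="XCSP3" type="CSP">
  <variables>
    <array id="x" size="[3]"> 0..9 </array>
  </variables>
  <constraints>
    <allDifferent> x[] </allDifferent>
  </constraints>
</instance>
"""

root = ET.fromstring(INSTANCE)
array = root.find("variables/array")
domain = array.text.strip()               # shared domain, e.g. "0..9"
n = int(array.get("size").strip("[]"))    # number of variables in the array
constraints = [c.tag for c in root.find("constraints")]
```

The point the abstract makes is visible even here: declaring an array plus a group constraint keeps the instance far smaller than listing every variable and pairwise constraint explicitly.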
Importance Sampling with Unequal Support. (arXiv:1611.03451v1 [cs.LG])
Importance sampling is often used in machine learning when training and testing data come from different distributions. In this paper we propose a new variant of importance sampling that can reduce the variance of importance sampling-based estimates by orders of magnitude when the supports of the training and testing distributions differ. After motivating and presenting our new importance sampling estimator, we provide a detailed theoretical analysis that characterizes both its bias and variance relative to the ordinary importance sampling estimator (in various settings, which include cases where ordinary importance sampling is biased, while our new estimator is not, and vice versa). We conclude with an example of how our new importance sampling estimator can be used to improve estimates of how well a new treatment policy for diabetes will work for an individual, using only data from when the individual used a previous treatment policy.
from cs.AI updates on arXiv.org http://ift.tt/2eGE34p
via IFTTT
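For context, a sketch of the ordinary importance sampling estimator the paper improves upon, in a toy discrete setting where the proposal q places mass outside the target p's support (all numbers invented):

```python
# Ordinary importance sampling: estimate E_p[f(X)] from samples drawn
# under q, weighting each sample by p(x) / q(x).
import random

random.seed(0)

p = {0: 0.5, 1: 0.5, 2: 0.0}    # target: puts no mass on x = 2
q = {0: 0.25, 1: 0.25, 2: 0.5}  # proposal: half its mass outside p's support

def f(x):
    return float(x)

xs = random.choices(list(q), weights=list(q.values()), k=20000)
est = sum(p[x] / q[x] * f(x) for x in xs) / len(xs)

true_value = sum(p[x] * f(x) for x in p)  # exactly 0.5
```

Samples landing where p has zero mass receive weight zero: they contribute nothing to the sum yet still count in the denominator. This unequal-support setting is exactly where the paper's new estimator reduces variance relative to the ordinary one above.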
Song From PI: A Musically Plausible Network for Pop Music Generation. (arXiv:1611.03477v1 [cs.AI])
We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show strong preference of our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.03477
via IFTTT
A fast PC algorithm for high dimensional causal discovery with multi-core PCs. (arXiv:1502.02454v3 [cs.AI] UPDATED)
Discovering causal relationships from observational data is a crucial problem with applications in many research areas. The PC algorithm is the state-of-the-art constraint-based method for causal discovery. However, the worst-case runtime of the PC algorithm is exponential in the number of nodes (variables), so it is inefficient when applied to high-dimensional data, e.g. gene expression datasets. Meanwhile, the advancement of computer hardware in the last decade has resulted in the widespread availability of multi-core personal computers. There is a significant motivation for designing a parallelised PC algorithm that is suitable for personal computers and does not require end users' parallel computing knowledge beyond their competency in using the PC algorithm. In this paper, we develop parallel-PC, a fast and memory-efficient PC algorithm based on parallel computing techniques. We apply our method to a range of synthetic and real-world high-dimensional datasets. Experimental results on a dataset from the DREAM 5 challenge show that the original PC algorithm could not produce any results after running for more than 24 hours, while our parallel-PC algorithm finished within around 12 hours on a 4-core CPU computer, and in less than 6 hours on an 8-core CPU computer. Furthermore, we integrate parallel-PC into a causal inference method for inferring miRNA-mRNA regulatory relationships. The experimental results show that parallel-PC improves both the efficiency and accuracy of the causal inference algorithm.
from cs.AI updates on arXiv.org http://ift.tt/16MqdcX
via IFTTT
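The expensive inner loop of PC-style discovery is running many (conditional) independence tests, and those tests are largely independent of one another, so they parallelise naturally. A minimal sketch of that idea using unconditional correlation tests over variable pairs (the data, threshold and test are toys, not the paper's parallel-PC implementation):

```python
# Run a simple correlation-based dependence test over all variable pairs
# in parallel, the embarrassingly parallel core of PC-style discovery.
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

data = {
    "a": [1.0, 2.0, 3.0, 4.0, 5.0],
    "b": [2.1, 4.0, 6.2, 7.9, 10.1],  # roughly 2*a, so dependent on a
    "c": [5.0, 1.0, 4.0, 2.0, 3.0],   # unrelated to a and b
}

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def dependent(pair, threshold=0.9):
    x, y = pair
    return pair, abs(pearson(data[x], data[y])) > threshold

pairs = list(combinations(data, 2))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(dependent, pairs))

edges = [pair for pair, dep in results.items() if dep]
```

The paper's contribution is doing this (and the conditional tests of the later PC stages) with proper work distribution and memory efficiency across CPU cores, with no parallel-programming effort from the user.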
Learning Natural Language Inference with LSTM. (arXiv:1512.08849v2 [cs.CL] UPDATED)
Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for natural language inference (NLI). In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1512.08849
via IFTTT
Ravens: 5-time Pro Bowl G Marshal Yanda (shoulder) among inactives against the Browns (ESPN)
via IFTTT
Imagination Library Clears Waiting List With Anonymous Donation
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=http://www.pembinavalleyonline.com/local/imagination-library-clears-waiting-list-with-anonymous-donation&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNHjpdrmlLdMzhVzteCkiExpAVYrTQ
via IFTTT
I have a new follower on Twitter
MSFT in Business
The official Twitter page for Microsoft in Business: designed exclusively for business leaders to discover thought leadership content and trends in technology.
Redmond, Washington
https://t.co/8FD8lQw2wV
Following: 97471 - Followers: 198796
November 10, 2016 at 05:14PM via Twitter http://twitter.com/MSFT_Business
I have a new follower on Twitter
Vidao App Android
Start chatting, calling, and sharing for free with all your friends on Vidao Messenger.
https://t.co/FuEMtnOjbU
Following: 1000 - Followers: 707
November 10, 2016 at 03:05PM via Twitter http://twitter.com/VidaoAppBetaBD
[FD] Weak validation of Amazon SNS push messages in W3 Total Cache WordPress Plugin
Source: Gmail -> IFTTT-> Blogger
[FD] Persistent Cross-Site Scripting in WP Google Maps Plugin via CSRF
Source: Gmail -> IFTTT-> Blogger
OpenSSL Releases Patch For "High" Severity Vulnerability
from The Hacker News http://thehackernews.com/2016/11/openssl-patch-update.html
via IFTTT
Facebook Buys Leaked Passwords From Black Market, But Do You Know Why?
from The Hacker News http://thehackernews.com/2016/11/facebook-acccount-password.html
via IFTTT
encoding/json: Doesn't Unmarshal anonymous field with custom unmarshaler
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=https://github.com/golang/go/issues/17877&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNHhmyXHoboz4kHaVYJwCo8XetC-jg
via IFTTT
[FD] [CT-2016-1110] Unauthenticated RCE in Observium network monitor
Source: Gmail -> IFTTT-> Blogger
[FD] e107 CMS <= 2.1.2 Privilege Escalation
[FD] CA20161109-01: Security Notice for CA Unified Infrastructure Management
Source: Gmail -> IFTTT-> Blogger
[FD] CA20161109-02: Security Notice for CA Service Desk Manager
Source: Gmail -> IFTTT-> Blogger
[FD] Vlany: A Linux (LD_PRELOAD) rootkit
Source: Gmail -> IFTTT-> Blogger
Re: [FD] WININET CHttpHeaderParser::ParseStatusLine out-of-bounds read details
Source: Gmail -> IFTTT-> Blogger
[FD] WININET CHttpHeaderParser::ParseStatusLine out-of-bounds read details
Source: Gmail -> IFTTT-> Blogger
[FD] MSIE 9-11 MSHTML PROPERTYDESC::HandleStyleComponentProperty OOB read details
Source: Gmail -> IFTTT-> Blogger
ISS Daily Summary Report – 11/09/2016
from ISS On-Orbit Status Report https://blogs.nasa.gov/stationreport/2016/11/09/iss-daily-summary-report-11092016/
via IFTTT
Ravens: WR Mike Wallace's redemption tour extends beyond just rebounding from career-worst 2015 season - Jamison Hensley (ESPN)
via IFTTT
Anonymous user 954036
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=https://addons.mozilla.org/en-US/thunderbird/user/anonymous-95403676dfbf2c9a5a271f575fce3304/&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNHlAdHDpznXNa5EOHBunvCsrawV6Q
via IFTTT
SWIFT Hack: Bangladesh Bank Recovers $15 Million from a Philippines Casino
from The Hacker News http://thehackernews.com/2016/11/bangladesh-swift-hack-casino_9.html
via IFTTT
I have a new follower on Twitter
Environmental Info
Environmental news. Environmental events. Environmental jobs. Environmental education. LinkedIn http://t.co/zonCZ4Uy6I
Australia
http://t.co/EoICNFBIk2
Following: 7934 - Followers: 9377
November 10, 2016 at 01:57AM via Twitter http://twitter.com/ejn_greencareer
Icesat-2 Measurements Over Antarctica (prelaunch)
from NASA's Scientific Visualization Studio: Most Recent Items http://svs.gsfc.nasa.gov/cgi-bin/details.cgi?aid=4492
via IFTTT
Correlation Between GLOBE Citizen Science and NASA Satellite Observations
from NASA's Scientific Visualization Studio: Most Recent Items http://svs.gsfc.nasa.gov/cgi-bin/details.cgi?aid=4524
via IFTTT
M63: The Sunflower Galaxy from Hubble
Wednesday, November 9, 2016
I have a new follower on Twitter
David Corrigan
I help orgs become truly #customercentric w new tech #CustomerIntelligenceManagement. #CMO @Infotrellis. Disrupting the #customerdata market w @AllSightBigData
Toronto, Canada
https://t.co/nOKsyx0JrD
Following: 4378 - Followers: 4709
November 09, 2016 at 11:12PM via Twitter http://twitter.com/DCorrigan
The New Cyber Bully: Apps Allow Anonymous Threats
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=http://www.centerdigitaled.com/k-12/The-New-Cyber-Bully-Apps-Allow-Anonymous-Threats.html&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNESjMtw0GGQaDPf2W0k8jT8gjeJyA
via IFTTT
I have a new follower on Twitter
Intersog®
We build custom software solutions & provide on-demand IT resources for in-house & offshore software development projects. Email us: contact@intersog.com
Chicago, IL, USA
http://t.co/mDSENJutKt
Following: 2845 - Followers: 4044
November 09, 2016 at 10:52PM via Twitter http://twitter.com/Intersog
Recursive Decomposition for Nonconvex Optimization. (arXiv:1611.02755v1 [cs.AI])
Continuous optimization is an important problem in many areas of AI, including vision, robotics, probabilistic inference, and machine learning. Unfortunately, most real-world optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the objective function have combinatorial structure, and thus ideas from combinatorial optimization can be brought to bear. Based on this, we propose a problem-decomposition approach to nonconvex optimization. Similarly to DPLL-style SAT solvers and recursive conditioning in probabilistic inference, our algorithm, RDIS, recursively sets variables so as to simplify and decompose the objective function into approximately independent sub-functions, until the remaining functions are simple enough to be optimized by standard techniques like gradient descent. The variables to set are chosen by graph partitioning, ensuring decomposition whenever possible. We show analytically that RDIS can solve a broad class of nonconvex optimization problems exponentially faster than gradient descent with random restarts. Experimentally, RDIS outperforms standard techniques on problems like structure from motion and protein folding.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02755
via IFTTT
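A toy illustration of the decomposition idea: fix the variables that couple the parts of the objective (the cutset), and the remainder splits into sub-functions that can be optimised independently (the objective and grids below are invented; RDIS chooses the cutset by graph partitioning rather than by hand, and recurses instead of grid-searching):

```python
# With the coupling variable z fixed, this objective decomposes into an
# x-only term and a y-only term that can be minimised separately.
def objective(x, y, z):
    return (x - z) ** 2 + (y + z) ** 2 + z ** 2

grid = [v / 10 for v in range(-30, 31)]

best = None
for z in [v / 10 for v in range(-20, 21)]:   # crude search over the cutset
    # The two squared terms decouple once z is fixed.
    x = min(grid, key=lambda x: (x - z) ** 2)
    y = min(grid, key=lambda y: (y + z) ** 2)
    val = objective(x, y, z)
    if best is None or val < best[0]:
        best = (val, x, y, z)
```

Each sub-problem here is solved over a small grid; in RDIS the sub-functions are handed to a continuous optimiser such as gradient descent, and the decomposition is applied recursively.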
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning. (arXiv:1611.02779v1 [cs.AI])
Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a "fast" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL$^2$ is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL$^2$ on a vision-based navigation task and show that it scales up to high-dimensional problems.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02779
via IFTTT
Tuning Recurrent Neural Networks with Reinforcement Learning. (arXiv:1611.02796v1 [cs.LG])
Sequence models can be trained using supervised learning and a next-step prediction objective. This approach, however, suffers from known failure modes. For example, it is notoriously difficult to ensure multi-step generated sequences have coherent global structure. Motivated by the fact that reinforcement learning (RL) can be used to impose arbitrary properties on generated data by choosing appropriate reward functions, in this paper we propose a novel approach for sequence training which combines Maximum Likelihood (ML) and RL training. We refine a sequence predictor by optimizing for some imposed reward functions, while maintaining good predictive properties learned from data. We propose efficient ways to solve this by augmenting deep Q-learning with a cross-entropy reward and deriving novel off-policy methods for RNNs from stochastic optimal control (SOC). We explore the usefulness of our approach in the context of music generation. An LSTM is trained on a large corpus of songs to predict the next note in a musical sequence. This Note-RNN is then refined using RL, where the reward function is a combination of rewards based on rules of music theory, as well as the output of another trained Note-RNN. We show that by combining ML and RL, this RL Tuner method can not only produce more pleasing melodies, but that it can significantly reduce unwanted behaviors and failure modes of the RNN.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02796
via IFTTT
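A sketch of the combined-reward idea on a single note (the scale rule, the weighting and the stand-in Note-RNN probabilities are invented for illustration, not the paper's exact formulation):

```python
# Combine a music-theory rule score with the log probability a trained
# Note-RNN assigns to the chosen note, as in the RL Tuner setup.
import math

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def rule_reward(note):
    """Toy theory rule: reward in-scale notes, penalise the rest."""
    return 1.0 if note % 12 in C_MAJOR else -1.0

def combined_reward(note, note_rnn_prob, weight=0.5):
    """log p_RNN(note) plus a weighted rule score; the weighting here
    is a free choice, not the paper's exact formulation."""
    return math.log(note_rnn_prob) + weight * rule_reward(note)

good = combined_reward(60, 0.4)   # middle C: in scale, RNN likes it
bad  = combined_reward(61, 0.05)  # C sharp: out of scale, RNN dislikes it
```

The log-probability term keeps the refined policy close to what the Note-RNN learned from data, while the rule term pushes generated melodies toward the imposed theory constraints.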
Encoding monotonic multi-set preferences using CI-nets: preliminary report. (arXiv:1611.02885v1 [cs.AI])
CP-nets and their variants constitute one of the main AI approaches for specifying and reasoning about preferences. CI-nets, in particular, are a CP-inspired formalism for representing ordinal preferences over sets of goods, which are typically required to be monotonic.
Considering also that goods often come in multi-sets rather than sets, a natural question is whether CI-nets can be used more or less directly to encode preferences over multi-sets. Here we provide some initial ideas on how to achieve this, in the sense that at least a restricted form of reasoning on our framework, which we call "confined reasoning", can be efficiently reduced to reasoning on CI-nets. Our framework nevertheless allows for encoding preferences over multi-sets with unbounded multiplicities. We also show the extent to which it can be used to represent preferences where multiplicities of the goods are not stated explicitly ("purely qualitative preferences"), as well as a potential use of our generalization of CI-nets as a component of a recent system for evidence aggregation.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02885
via IFTTT
Harnessing disordered quantum dynamics for machine learning. (arXiv:1602.08159v2 [quant-ph] UPDATED)
Quantum computers have amazing potential for fast information processing. However, realising a digital quantum computer remains a challenging problem, requiring highly accurate controls and key application strategies. Here we propose a novel platform, quantum reservoir computing, that addresses these issues by exploiting natural quantum dynamics, which is ubiquitous in laboratories nowadays, for machine learning. In this framework, nonlinear dynamics including classical chaos can be universally emulated in quantum systems. A number of numerical experiments show that quantum systems consisting of at most seven qubits possess computational capabilities comparable to conventional recurrent neural networks of 500 nodes. This discovery opens up a new paradigm for information processing with artificial intelligence powered by quantum physics.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1602.08159
via IFTTT
MCMC assisted by Belief Propagation. (arXiv:1605.09042v4 [stat.ML] UPDATED)
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but which in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms that correct the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows the BP error to be expressed as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1605.09042
via IFTTT
I have a new follower on Twitter
Daniel Peter
Salesforce MVP. Bay Area Salesforce Developer User Group Organizer. https://t.co/PEIpb72j3O MacGyver, 20x certified, biz/IT, Ham Radio K6DJP, work at @kenandy
Foster City, CA
https://t.co/Tn8Fg4JzEX
Following: 14372 - Followers: 16265
November 09, 2016 at 08:37PM via Twitter http://twitter.com/danieljpeter
and '!' for JavaScript self-calling anonymous function modular pattern?
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=https://teamtreehouse.com/community/is-there-a-difference-between-and-for-javascript-selfcalling-anonymous-function-modular-pattern&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNHLqlsndfrgxhRlMT657WEWXFipzA
via IFTTT
Thank you for your service
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=http://www.powerathens.com/news/national/thank-you-for-your-service-anonymous-person-buys-fire-stateion-groceries/3N9UhDT1WSxgJoh0x6b7zO/&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNHSh0WE8kaNDPkPY66dB3mR1d5aHg
via IFTTT
Anonymous
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=https://www.worldacceptance.com/testimonial/anonymous/&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNFEe3c_1hSY_wj7PG8flbDER9zp-g
via IFTTT
Microsoft Patches Windows Zero-Day Flaw Disclosed by Google
from The Hacker News http://thehackernews.com/2016/11/microsoft-windows-update.html
via IFTTT
[FD] Avira Antivirus >= 15.0.21.86 Command Execution (SYSTEM)
Source: Gmail -> IFTTT-> Blogger
ISS Daily Summary Report – 11/08/2016
from ISS On-Orbit Status Report https://blogs.nasa.gov/stationreport/2016/11/08/iss-daily-summary-report-11082016/
via IFTTT
Ocean City, MD's surf is Good
Ocean City, MD Summary
Surf: shoulder to head high
Maximum: 1.53m (5.02ft)
Minimum: 1.224m (4.02ft)
Maryland-Delaware Summary
from Surfline http://www.surfline.com/surfdata/spot_forecast.cfm?id=4406
via IFTTT
DDoS Attack Takes Down Central Heating System Amidst Winter In Finland
from The Hacker News http://thehackernews.com/2016/11/heating-system-hacked.html
via IFTTT
Over 300,000 Android Devices Hacked Using Chrome Browser Vulnerability
from The Hacker News http://thehackernews.com/2016/11/chrome-android-virus.html
via IFTTT
Tuesday, November 8, 2016
Normalizing Flows on Riemannian Manifolds. (arXiv:1611.02304v1 [stat.ML])
We consider the problem of density estimation on Riemannian manifolds. Density estimation on manifolds has many applications in fluid mechanics, optics and plasma physics, and it appears often when dealing with angular variables (such as those used in protein folding, robot limbs and gene expression) and in directional statistics generally. In spite of the multitude of algorithms available for density estimation in Euclidean spaces $\mathbf{R}^n$ that scale to large n (e.g. normalizing flows, kernel methods and variational approximations), most of these methods are not immediately suitable for density estimation on more general Riemannian manifolds. We revisit techniques related to homeomorphisms from differential geometry for projecting densities onto sub-manifolds and use them to generalize the idea of normalizing flows to more general Riemannian manifolds. The resulting algorithm is scalable, simple to implement and suitable for use with automatic differentiation. We demonstrate concrete examples of this method on the n-sphere $\mathbf{S}^n$.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02304
via IFTTT
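The building block being generalised is the change-of-variables rule behind normalizing flows. A flat one-dimensional worked example, before any manifold machinery: for an invertible map y = g(x), the transformed density is p_Y(y) = p_X(g_inv(y)) |d g_inv / dy|.

```python
# Change of variables for Y = X**2 with X ~ Uniform(0, 1):
# g_inv(y) = sqrt(y), so p_Y(y) = 1 / (2 * sqrt(y)) on (0, 1).
import math

def uniform_pdf(x):
    """Density of Uniform(0, 1)."""
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def p_y(y):
    x = math.sqrt(y)                   # g_inv(y)
    jac = 1.0 / (2.0 * math.sqrt(y))   # |d g_inv / dy|
    return uniform_pdf(x) * jac

# Sanity check: the transformed density integrates to ~1 (midpoint rule).
n = 100_000
total = sum(p_y((i + 0.5) / n) for i in range(n)) / n
```

On a Riemannian manifold the same bookkeeping must account for the metric, which is what the paper's projection-based construction handles.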
Learning from Untrusted Data. (arXiv:1611.02315v1 [cs.LG])
The vast majority of theoretical results in machine learning and statistics assume that the available training data is a reasonably reliable reflection of the phenomena to be learned or estimated. Similarly, the majority of machine learning and statistical techniques used in practice are brittle to the presence of large amounts of biased or malicious data. In this work we propose two novel frameworks in which to study estimation, learning, and optimization in the presence of significant fractions of arbitrary data. The first framework, which we term list-decodable learning, asks whether it is possible to return a list of answers, with the guarantee that at least one of them is accurate. For example, given a dataset of $n$ points for which an unknown subset of $\alpha n$ points are drawn from a distribution of interest, and no assumptions are made about the remaining $(1-\alpha)n$ points, is it possible to return a list of $poly(1/\alpha)$ answers, one of which is correct? The second framework, which we term the semi-verified learning model, considers the extent to which a small dataset of trusted data (drawn from the distribution in question) can be leveraged to enable the accurate extraction of information from a much larger but untrusted dataset (of which only an $\alpha$-fraction is drawn from the distribution).
We show strong positive results in both settings, and provide an algorithm for robust learning in a very general stochastic optimization setting. This general result has immediate implications for robust estimation in a number of settings, including for robustly estimating the mean of distributions with bounded second moments, robustly learning mixtures of such distributions, and robustly finding planted partitions in random graphs in which significant portions of the graph have been perturbed by an adversary.
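The authors' robust estimation algorithms are involved; purely as a point of reference, a classic simple estimator that is robust to heavy tails and some corrupted samples is the median of block means (this is not the paper's algorithm, just a familiar baseline for the robust-mean problem it addresses):

```python
import statistics

def median_of_means(xs, k):
    """Split the data into k equal blocks (any remainder is dropped),
    average each block, and return the median of the block means.
    A few corrupted blocks cannot move the median far."""
    n = len(xs) // k
    means = [sum(xs[i * n:(i + 1) * n]) / n for i in range(k)]
    return statistics.median(means)

# One wildly corrupted point barely affects the estimate.
est = median_of_means([1, 2, 3, 1000, 2, 1], k=3)
```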
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02315
via IFTTT
Combining observational and experimental data to find heterogeneous treatment effects. (arXiv:1611.02385v1 [cs.AI])
Every design choice will have different effects on different units. However, traditional A/B tests are often underpowered to identify these heterogeneous effects. This is especially true when the set of unit-level attributes is high-dimensional and our priors about which particular covariates matter are weak. At the same time, observational data sets that are orders of magnitude larger are often available. We propose a method to combine these two data sources to estimate heterogeneous treatment effects. First, we use observational time series data to estimate a mapping from covariates to unit-level effects. These estimates are likely biased, but under some conditions the bias preserves unit-level relative rank orderings. If these conditions hold, we only need enough experimental data to identify a monotonic, one-dimensional transformation from observationally predicted treatment effects to real treatment effects. This greatly reduces power demands and makes the detection of heterogeneous effects much easier. As an application, we show how our method can be used to improve Facebook page recommendations.
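If the observational bias preserves rank order, fitting the monotone one-dimensional map reduces to isotonic regression. A minimal pool-adjacent-violators sketch (a standard tool for this, not necessarily the estimator the paper uses):

```python
def pava(y):
    """Pool-adjacent-violators: the non-decreasing sequence closest to y
    in squared error. Input order should follow the observational ranking."""
    blocks = []  # each block: [mean, weight]
    for v in y:
        blocks.append([v, 1])
        # Merge backwards while a block violates monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for m, w in blocks:
        out.extend([m] * w)
    return out
```

Sorting units by their observationally predicted effect and running `pava` on the experimental outcomes yields a monotone calibration from predicted to real effects.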
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02385
via IFTTT
Proceedings of the First International Workshop on Argumentation in Logic Programming and Non-Monotonic Reasoning (Arg-LPNMR 2016). (arXiv:1611.02439v1 [cs.AI])
This volume contains the papers presented at Arg-LPNMR 2016: First International Workshop on Argumentation in Logic Programming and Nonmonotonic Reasoning held on July 8-10, 2016 in New York City, NY.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02439
via IFTTT
The Data Complexity of Description Logic Ontologies. (arXiv:1611.02453v1 [cs.AI])
We analyze the data complexity of ontology-mediated querying where the ontologies are formulated in a description logic (DL) of the ALC family and queries are conjunctive queries, positive existential queries, or acyclic conjunctive queries. Our approach is non-uniform in the sense that we aim to understand the complexity of each single ontology, rather than the complexity across all ontologies formulated in a certain language. While doing so, we quantify over the queries and are interested, for example, in the question of whether all queries can be evaluated in polynomial time w.r.t. a given ontology. Our results include a PTime/coNP-dichotomy for ontologies of depth one in the description logic ALCFI, the equivalence of a PTime/coNP-dichotomy for ALC and ALCI-ontologies of unrestricted depth to the famous dichotomy conjecture for CSPs by Feder and Vardi, and the failure of a PTime/coNP-dichotomy for ALCF-ontologies. Regarding the latter DL, we also show that it is undecidable whether a given ontology admits PTime query evaluation.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02453
via IFTTT
Cognitive Discriminative Mappings for Rapid Learning. (arXiv:1611.02512v1 [cs.AI])
Humans can learn concepts or recognize items from just a handful of examples, while machines require many more samples to perform the same task. In this paper, we build a computational model to investigate the possibility of this kind of rapid learning. The proposed method aims to improve the learning of inputs from sensory memory by leveraging information retrieved from long-term memory. We present a simple and intuitive technique called cognitive discriminative mappings (CDM) to explore this cognitive problem. First, when a sensory input triggers the algorithm, CDM separates and clusters the data instances retrieved from long-term memory into distinct classes with a discrimination method in working memory. CDM then maps each sensory data instance to be as close as possible to the median point of the data group with the same class. The experimental results demonstrate that the CDM approach is effective at learning the discriminative features of supervised classification with few training sensory input instances.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02512
via IFTTT
The Neural Noisy Channel. (arXiv:1611.02554v1 [cs.CL])
We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects during training, noisy channel models must produce outputs that explain their inputs, and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use.
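Decoding in a noisy channel model scores each candidate output by channel plus source log-probability, i.e. Bayes' rule with the input's constant marginal dropped. A toy reranking sketch (the candidate set and the scores are made up for illustration; the paper's decoder additionally uses a latent variable to make beam search over the channel model tractable):

```python
import math

def noisy_channel_rerank(candidates, channel_logp, source_logp):
    """Pick the output y maximizing log p(x | y) + log p(y);
    the constant log p(x) from Bayes' rule is dropped."""
    return max(candidates, key=lambda y: channel_logp(y) + source_logp(y))

# Hypothetical scores: the channel model favours 'a', the source favours 'b'.
channel = {"a": math.log(0.9), "b": math.log(0.1)}  # log p(x | y)
source = {"a": math.log(0.2), "b": math.log(0.8)}   # log p(y)
best = noisy_channel_rerank(["a", "b"], channel.get, source.get)
```

Because the source model `p(y)` is a plain language model, it can be trained on unpaired output data, which is the practical advantage the abstract highlights.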
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02554
via IFTTT
On interestingness measures of formal concepts. (arXiv:1611.02646v1 [cs.AI])
Formal concepts and closed itemsets have proved to be of great importance for knowledge discovery, both as a tool for the concise representation of association rules and as a tool for clustering and constructing domain taxonomies and ontologies. Since exponential explosion makes it difficult to consider the whole concept lattice arising from data, one needs to select the most useful and interesting concepts. In this paper, interestingness measures of concepts are considered and compared with respect to various aspects, such as efficiency of computation, applicability to noisy data, and the rank correlation between the concept rankings they produce.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02646
via IFTTT
Sentence Ordering using Recurrent Neural Networks. (arXiv:1611.02654v1 [cs.CL])
Modeling the structure of coherent texts is a task of great importance in NLP. The task of organizing a given set of sentences into a coherent order has been commonly used to build and evaluate models that understand such structure. In this work we propose an end-to-end neural approach based on the recently proposed set to sequence mapping framework to address the sentence ordering problem. Our model achieves state-of-the-art performance in the order discrimination task on two datasets widely used in the literature. We also consider a new interesting task of ordering abstracts from conference papers and research proposals and demonstrate strong performance against recent methods. Visualizing the sentence representations learned by the model shows that the model has captured high level logical structure in these paragraphs. The model also learns rich semantic sentence representations by learning to order texts, performing comparably to recent unsupervised representation learning methods in the sentence similarity and paraphrase detection tasks.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1611.02654
via IFTTT
Truth Serums for Massively Crowdsourced Evaluation Tasks. (arXiv:1507.07045v3 [cs.GT] UPDATED)
A major challenge in crowdsourcing evaluation tasks like labeling objects, grading assignments in online courses, etc., is that of eliciting truthful responses from agents in the absence of verifiability. In this paper, we propose new reward mechanisms for such settings that, unlike many previously studied mechanisms, impose minimal assumptions on the structure and knowledge of the underlying generating model, can account for heterogeneity in the agents' abilities, require no extraneous elicitation from them, and furthermore allow their beliefs to be (almost) arbitrary. These mechanisms have the simple and intuitive structure of an output agreement mechanism: an agent gets a reward if her evaluation matches that of her peer, but unlike the classic output agreement mechanism, this reward is not the same across evaluations, but is inversely proportional to an appropriately defined popularity index of each evaluation. The popularity indices are computed by leveraging the existence of a large number of similar tasks, which is a typical characteristic of these settings. Experiments performed on MTurk workers demonstrate higher efficacy (with a $p$-value of $0.02$) of these mechanisms in inducing truthful behavior compared to the state of the art.
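The mechanism's core idea, match your peer and earn more for matching an unpopular answer, can be sketched directly. The paper computes popularity indices by leveraging a large pool of similar tasks; this toy version (the `peer_rewards` helper is hypothetical) simply uses the empirical answer frequencies of one task:

```python
from collections import Counter

def peer_rewards(evaluations, pairs, scale=1.0):
    """Reward an agent iff her evaluation matches her peer's, with payoff
    inversely proportional to the popularity of the matched answer.
    evaluations: {agent_id: answer}; pairs: list of (agent_i, agent_j)."""
    counts = Counter(evaluations.values())
    total = sum(counts.values())
    popularity = {ans: c / total for ans, c in counts.items()}
    rewards = {}
    for i, j in pairs:
        r = scale / popularity[evaluations[i]] if evaluations[i] == evaluations[j] else 0.0
        rewards[i] = rewards.get(i, 0.0) + r
        rewards[j] = rewards.get(j, 0.0) + r
    return rewards

# Agents 1 and 2 agree on 'A' (popularity 2/3), so each earns 1 / (2/3) = 1.5.
rewards = peer_rewards({1: "A", 2: "A", 3: "B"}, [(1, 2)])
```

Scaling the reward inversely with popularity is what removes the incentive to herd on the obvious answer in a plain output agreement mechanism.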
from cs.AI updates on arXiv.org http://arxiv.org/abs/1507.07045
via IFTTT
Moving Target Defense for Web Applications using Bayesian Stackelberg Games. (arXiv:1602.07024v2 [cs.CR] UPDATED)
The present complexity in designing web applications makes software security a difficult goal to achieve. Moreover, an attacker can explore a deployed service on the web and attack at his or her leisure. Although Moving Target Defense (MTD) in web applications is an effective mechanism for nullifying this reconnaissance advantage, the framework demands a good strategy for switching between the multiple configurations of its web stack. To address this issue, we propose modeling a real-world MTD web application as a repeated Bayesian game to generate an effective switching strategy. To incorporate this model into a real system, we propose an automated system for generating attack sets of Common Vulnerabilities and Exposures (CVEs) for input attacker types with predefined capabilities. Our framework obtains realistic reward values for the players (defenders and attackers) in this game by using security domain expertise on CVEs obtained from the National Vulnerability Database (NVD). With our model, we also address the issue of prioritizing vulnerabilities that, when fixed, improve the security of the MTD system. Lastly, we demonstrate the robustness of our proposed model by evaluating its performance when there is uncertainty about input attacker information.
from cs.AI updates on arXiv.org http://arxiv.org/abs/1602.07024
via IFTTT
Unifying Count-Based Exploration and Intrinsic Motivation. (arXiv:1606.01868v2 [cs.AI] UPDATED)
We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across observations. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into intrinsic rewards and obtain significantly improved exploration in a number of hard games, including the infamously difficult Montezuma's Revenge.
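The pseudo-count derivation has a compact closed form: if the density model assigns probability rho to an observation before a single update on it and rho' afterwards, the implied pseudo-count is rho(1 − rho')/(rho' − rho). A sketch (the bonus coefficient `beta` and the smoothing constant are illustrative, not taken from the paper's experiments):

```python
def pseudo_count(rho, rho_prime):
    """Pseudo-count implied by a density model's probability of x before
    (rho) and after (rho_prime) observing x once. For an empirical count
    model with rho = c/n and rho_prime = (c+1)/(n+1), this recovers c."""
    assert rho_prime > rho, "model must assign higher probability after the update"
    return rho * (1.0 - rho_prime) / (rho_prime - rho)

def exploration_bonus(n_hat, beta=0.05, eps=0.01):
    """Intrinsic reward decaying with the pseudo-count, count-based style."""
    return beta / ((n_hat + eps) ** 0.5)
```

The sanity check in the docstring is what makes the definition a faithful generalization: with a true counting model the pseudo-count is the real count, and with a learned density model it interpolates counts across similar observations.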
from cs.AI updates on arXiv.org http://arxiv.org/abs/1606.01868
via IFTTT
Safe and Efficient Off-Policy Reinforcement Learning. (arXiv:1606.02647v2 [cs.LG] UPDATED)
In this work, we take a fresh look at some old and new algorithms for off-policy, return-based reinforcement learning. Expressing these in a common form, we derive a novel algorithm, Retrace($\lambda$), with three desired properties: (1) it has low variance; (2) it safely uses samples collected from any behaviour policy, whatever its degree of "off-policyness"; and (3) it is efficient as it makes the best use of samples collected from near on-policy behaviour policies. We analyze the contractive nature of the related operator under both off-policy policy evaluation and control settings and derive online sample-based algorithms. We believe this is the first return-based off-policy control algorithm converging a.s. to $Q^*$ without the GLIE assumption (Greedy in the Limit with Infinite Exploration). As a corollary, we prove the convergence of Watkins' Q($\lambda$), which had been an open problem since 1989. We illustrate the benefits of Retrace($\lambda$) on a standard suite of Atari 2600 games.
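The Retrace($\lambda$) target can be written as a sum of TD errors weighted by products of truncated importance ratios $c_s = \lambda \min(1, \pi(a_s|x_s)/\mu(a_s|x_s))$; the truncation is what keeps the variance bounded for arbitrarily off-policy data. A sketch of that computation (the list-based argument layout is my own, hypothetical convention):

```python
def retrace_target(qs, rewards, pi_probs, mu_probs, exp_q_next,
                   gamma=0.99, lam=1.0):
    """Retrace(lambda) target for Q(x_0, a_0) along one trajectory.
    qs[s]         = Q(x_s, a_s)
    rewards[s]    = r_s
    pi/mu_probs[s]= pi(a_s|x_s), mu(a_s|x_s)
    exp_q_next[s] = E_{a~pi} Q(x_{s+1}, a)  (0 at a terminal step)."""
    g = qs[0]
    c = 1.0        # running product of truncated ratios (empty product at s=0)
    discount = 1.0
    for s in range(len(rewards)):
        if s > 0:
            c *= lam * min(1.0, pi_probs[s] / mu_probs[s])
        delta = rewards[s] + gamma * exp_q_next[s] - qs[s]
        g += discount * c * delta
        discount *= gamma
    return g
```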
from cs.AI updates on arXiv.org http://arxiv.org/abs/1606.02647
via IFTTT
A Fuzzy Logic System to Analyze a Student's Lifestyle. (arXiv:1610.03957v2 [cs.CY] UPDATED)
A college student's life can be broadly categorized into domains such as education, health, social life and other activities, which may include daily chores and travel time. Time management is crucial for every student. Self-awareness of one's daily time expenditure across these domains is therefore essential to maximizing one's effective output. This paper presents how a mobile application using fuzzy logic and the Global Positioning System (GPS) analyzes a student's lifestyle and provides recommendations and suggestions based on the results.
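The abstract gives no implementation detail, but fuzzy-logic systems are typically built from membership functions that map a crisp measurement (e.g. hours spent on a domain per day) to a degree of membership in a linguistic category. A standard triangular membership function, purely illustrative:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), rising linearly to a
    peak of 1 at b, then falling back to 0. Requires a < b < c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical category "moderate study time", peaking at 5 hours/day.
moderate = lambda hours: triangular(hours, 0, 5, 10)
```

A full system would combine several such memberships with fuzzy rules and defuzzify the result into a recommendation.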
from cs.AI updates on arXiv.org http://arxiv.org/abs/1610.03957
via IFTTT
Quasi-Recurrent Neural Networks. (arXiv:1611.01576v1 [cs.NE])
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep's computation on the previous timestep's output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks.
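In a QRNN, only the elementwise pooling is sequential; the candidate vectors z and forget gates f come from convolutions that run in parallel across all timesteps. The simplest variant, f-pooling ($h_t = f_t \odot h_{t-1} + (1 - f_t) \odot z_t$), can be sketched as:

```python
import numpy as np

def f_pooling(z, f):
    """QRNN f-pooling: h_t = f_t * h_{t-1} + (1 - f_t) * z_t, elementwise.
    z, f have shape (T, channels), precomputed by (parallel) convolutions;
    this cheap channel-wise loop is the only sequential part."""
    h = np.zeros_like(z[0])
    out = []
    for z_t, f_t in zip(z, f):
        h = f_t * h + (1.0 - f_t) * z_t
        out.append(h)
    return np.stack(out)
```

Because the recurrence has no matrix multiplies, each step is O(channels) rather than O(hidden²), which is where the reported speedups over LSTMs come from.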
from cs.AI updates on arXiv.org http://ift.tt/2fzc0ot
via IFTTT
A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks. (arXiv:1611.01587v1 [cs.CL])
Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce such a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. All layers include shortcut connections to both word representations and lower-level task predictions. We use a simple regularization term to allow for optimizing all model weights to improve one task's loss without exhibiting catastrophic interference of the other tasks. Our single end-to-end trainable model obtains state-of-the-art results on chunking, dependency parsing, semantic relatedness and textual entailment. It also performs competitively on POS tagging. Our dependency parsing layer relies only on a single feed-forward pass and does not require a beam search.
from cs.AI updates on arXiv.org http://ift.tt/2fVB5xW
via IFTTT
Dynamic Coattention Networks For Question Answering. (arXiv:1611.01604v1 [cs.CL])
Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.
from cs.AI updates on arXiv.org http://ift.tt/2exT6xr
via IFTTT
Anonymous unblock proxy sites
from Google Alert - anonymous https://www.google.com/url?rct=j&sa=t&url=http://jm.cmnamarillo.org/Izb&ct=ga&cd=CAIyGjgxMzAxNTQ0ZWE3M2NhMmQ6Y29tOmVuOlVT&usg=AFQjCNHN1rSU69zkAnviM_KK7t-PFGQTDg
via IFTTT
I have a new follower on Twitter
Codeanywhere
A complete toolset for web development. Enabling you to edit, collaborate and run your projects from any device. Check out our Developer Conference @shiftsplit
San Francisco, CA
https://t.co/i6fjwA54ct
Following: 38220 - Followers: 45875
November 08, 2016 at 04:28PM via Twitter http://twitter.com/Codeanywhere
Ravens get midseason grade of C-plus - Jamison Hensley; who to watch, what to expect in 2nd half and give your own grade (ESPN)
via IFTTT
[FD] Stored Cross-Site Scripting vulnerability in 404 to 301 WordPress Plugin
Source: Gmail -> IFTTT-> Blogger
ISS Daily Summary Report – 11/07/2016
from ISS On-Orbit Status Report http://ift.tt/2fvEcK7
via IFTTT
[FD] Cross-Site Scripting in Calendar WordPress Plugin
Source: Gmail -> IFTTT-> Blogger
I have a new follower on Twitter
Oforth
http://t.co/hTKXLsUvyc
Following: 24 - Followers: 42
November 08, 2016 at 04:46AM via Twitter http://twitter.com/OforthSupport
[FD] Cross Site Scripting Vulnerability In Verint Impact 360
Source: Gmail -> IFTTT-> Blogger
[FD] Crashing Android devices with large Proxy Auto Config (PAC) Files [CVE-2016-6723]
Source: Gmail -> IFTTT-> Blogger
Authenticate with Firebase Anonymously using Unity
from Google Alert - anonymous http://ift.tt/2ehlokq
via IFTTT
Facebook agrees to Stop using UK Users' WhatsApp Data for Targeted Ads
from The Hacker News http://ift.tt/2fWp4s4
via IFTTT
'Web Of Trust' Browser Add-On Caught Selling Users' Data — Uninstall It Now
from The Hacker News http://ift.tt/2eyPMBS
via IFTTT