Saturday, January 2, 2016
I have a new follower on Twitter
Anonymous 24 Italia
Latest news on the activities of the Anonymous group, as reported by users of https://t.co/E4jOMPckbd. Unofficial account.
Italia
https://t.co/5ONXkOtp5Q
Following: 2963 - Followers: 1085
January 02, 2016 at 12:55PM via Twitter http://twitter.com/Anonymous24ita
from Google Alert - anonymous http://ift.tt/1IJAIzY
via IFTTT
I have a new follower on Twitter
Linqapp
Overcome your language barriers with Linqapp. Get free language support and assistance from REAL people, anytime and anywhere.
Taipei
http://t.co/WSbHCrAHXr
Following: 7393 - Followers: 7007
January 02, 2016 at 07:22AM via Twitter http://twitter.com/LINQAPP
PlayStation 4 Hacked to Run Linux
from The Hacker News http://ift.tt/1Oqpm4l
via IFTTT
Mobile Soup Kitchen Receives Anonymous Gift of Gold
from Google Alert - anonymous http://ift.tt/1TuNWky
via IFTTT
Juvenile Sports or Youth's Pastimes by Anonymous
from Google Alert - anonymous http://ift.tt/1RW2fkx
via IFTTT
Comet Catalina Tails
Friday, January 1, 2016
Canto delle Palle (Anonymous)
from Google Alert - anonymous http://ift.tt/1YWbhgg
via IFTTT
Defense opposes anonymous jury in February terrorism trial
from Google Alert - anonymous http://ift.tt/1SqjSs2
via IFTTT
I have a new follower on Twitter
Jerry Katz ツ
Jerry Katz Social Media, Broadcasting, Racing, Camping, Great Friends, Cold Beer & Everything Opelika! My Tweets, My Shared Thoughts, My Opinions!
Opelika, AL
http://t.co/WuT1nAffDF
Following: 1949 - Followers: 3187
January 01, 2016 at 08:37AM via Twitter http://twitter.com/JerryKatz
I have a new follower on Twitter
Sean Burke
CEO of @KiteDesk. Passionate about #Sales, #Marketing, #Entrepreneurship and #Tech. Great family with lots of laughs.
Tampa, Florida
http://t.co/Wgsso2EoLu
Following: 11004 - Followers: 13417
January 01, 2016 at 01:52AM via Twitter http://twitter.com/seanburkeh
Solstice Sun at Lulworth Cove
Thursday, December 31, 2015
I have a new follower on Twitter
Robert Dale Smith
Sumo Developer at @SumoMe. INTJ, hacker, entrepreneur, startup junkie, IT service veteran. Repaired tens of thousands of PCs/Macs in a past life.
Austin, TX
http://t.co/ID3DMqbAH2
Following: 3832 - Followers: 4401
December 31, 2015 at 10:53PM via Twitter http://twitter.com/RobertDaleSmith
I have a new follower on Twitter
Epi Ludvik Nekaj
founder + ceo of @CrowdWeek + @LAdvertising Stay Hungry, Stay Foolish. http://t.co/SNb0iR3hd5
NYC✈ Brussels✈ Singapore✈ SF
http://t.co/HvISlO1BxC
Following: 6562 - Followers: 8143
December 31, 2015 at 09:38PM via Twitter http://twitter.com/LPlus
Bayes-Optimal Effort Allocation in Crowdsourcing: Bounds and Index Policies. (arXiv:1512.09204v1 [cs.LG])
We consider effort allocation in crowdsourcing, where we wish to assign labeling tasks to imperfect homogeneous crowd workers to maximize overall accuracy in a continuous-time Bayesian setting, subject to budget and time constraints. The Bayes-optimal policy for this problem is the solution to a partially observable Markov decision process, but the curse of dimensionality renders the computation infeasible. Based on the Lagrangian Relaxation technique in Adelman & Mersereau (2008), we provide a computationally tractable instance-specific upper bound on the value of this Bayes-optimal policy, which can in turn be used to bound the optimality gap of any other sub-optimal policy. In an approach similar in spirit to the Whittle index for restless multiarmed bandits, we provide an index policy for effort allocation in crowdsourcing and demonstrate numerically that it outperforms other state-of-the-art policies and performs close to the optimal solution.
from cs.AI updates on arXiv.org http://ift.tt/1msMxQo
via IFTTT
Evolving Non-linear Stacking Ensembles for Prediction of Go Player Attributes. (arXiv:1512.09254v1 [cs.AI])
The paper presents an application of non-linear stacking ensembles for prediction of Go player attributes. An evolutionary algorithm is used to form a diverse ensemble of base learners, which are then aggregated by a stacking ensemble. This methodology allows for an efficient prediction of different attributes of Go players from sets of their games. These attributes can be fairly general, in this work, we used the strength and style of the players.
from cs.AI updates on arXiv.org http://ift.tt/1OvA4SG
via IFTTT
An (MI)LP-based Primal Heuristic for 3-Architecture Connected Facility Location in Urban Access Network Design. (arXiv:1512.09354v1 [math.OC])
We investigate the 3-architecture Connected Facility Location Problem arising in the design of urban telecommunication access networks. We propose an original optimization model for the problem that includes additional variables and constraints to take into account wireless signal coverage. Since the problem can prove challenging even for modern state-of-the art optimization solvers, we propose to solve it by an original primal heuristic which combines a probabilistic fixing procedure, guided by peculiar Linear Programming relaxations, with an exact MIP heuristic, based on a very large neighborhood search. Computational experiments on a set of realistic instances show that our heuristic can find solutions associated with much lower optimality gaps than a state-of-the-art solver.
from cs.AI updates on arXiv.org http://ift.tt/1Sp8k8i
via IFTTT
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. (arXiv:1502.05698v10 [cs.AI] UPDATED)
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
from cs.AI updates on arXiv.org http://ift.tt/1ApWocn
via IFTTT
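To make these proxy tasks concrete, here is an illustrative two-fact example in the style the abstract describes (my own construction, not quoted from the released dataset):

Mary moved to the kitchen. Afterwards, Mary travelled to the garden.
Q: Where is Mary? A: garden (answering requires chaining the two facts in order)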
ISS Daily Summary Report – 12/30/15
from ISS On-Orbit Status Report http://ift.tt/1RbaxTW
via IFTTT
Shopper Anonymous - Your own mystery shopper business
from Google Alert - anonymous http://ift.tt/1Ui2gNk
via IFTTT
I have a new follower on Twitter
NFLPicker
We make it simple. You pick the games… we provide fun & easy to read sports pick data. Find out how you stack up against the top players. Play PickingDuck Today
https://t.co/npdxp48lRx
Following: 2671 - Followers: 1068
December 31, 2015 at 05:31AM via Twitter http://twitter.com/NFLPicker
Microsoft will Inform You If Government is Spying on You
from The Hacker News http://ift.tt/1YQoaxS
via IFTTT
The Fox Fur Nebula
Wednesday, December 30, 2015
Anonymous donor leaves 2000 lottery tickets at Tochigi city hall
from Google Alert - anonymous http://ift.tt/1P0ZLem
via IFTTT
Mining Massive Hierarchical Data Using a Scalable Probabilistic Graphical Model. (arXiv:1512.08525v1 [cs.AI])
Probabilistic Graphical Models (PGM) are very useful in the fields of machine learning and data mining. The crucial limitation of these models, however, is scalability. The Bayesian Network, which is one of the most common PGMs used in machine learning and data mining, demonstrates this limitation when the training data consists of random variables, each of which has a large set of possible values. In the big data era, one would expect new extensions to the existing PGMs to handle the massive amount of data produced these days by computers, sensors and other electronic devices. With hierarchical data - data that is arranged in a tree-like structure with several levels - one would expect to see hundreds of thousands or millions of values distributed over even just a small number of levels. When modeling this kind of hierarchical data across large data sets, Bayesian Networks become infeasible for representing the probability distributions. In this paper we introduce an extension to Bayesian Networks to handle massive sets of hierarchical data in a reasonable amount of time and space. The proposed model achieves perfect precision of 1.0 and high recall of 0.93 when it is used as a multi-label classifier for the annotation of mass spectrometry data. On another data set of 1.5 billion search logs provided by CareerBuilder.com the model was able to predict latent semantic relationships between search keywords with accuracy up to 0.80.
from cs.AI updates on arXiv.org http://ift.tt/1kuBogW
via IFTTT
Conditional probability generation methods for high reliability effects-based decision making. (arXiv:1512.08553v1 [cs.AI])
Decision making is often based on Bayesian networks. The building blocks of a Bayesian network are its conditional probability tables (CPTs). These tables are obtained by parameter estimation methods, or they are elicited from subject matter experts (SMEs). Some of these knowledge representations are insufficient approximations. Using knowledge fusion of cause and effect observations leads to better predictive decisions. We propose three new methods to generate CPTs, which even work when only soft evidence is provided. The first two are novel ways of mapping conditional expectations to the probability space. The third is a column extraction method, which obtains CPTs from nonlinear functions such as the multinomial logistic regression. Case studies on military effects and burnt forest desertification have demonstrated that CPTs derived in this way have highly reliable predictive power, including superiority over the CPTs obtained from SMEs. In this context, new quality measures for determining the goodness of a CPT and for comparing CPTs with each other have been introduced. The predictive power and enhanced reliability of decision making based on the novel CPT generation methods presented in this paper have been confirmed and validated within the context of the case studies.
from cs.AI updates on arXiv.org http://ift.tt/1R9uzhB
via IFTTT
On the Foundations of the Brussels Operational-Realistic Approach to Cognition. (arXiv:1512.08710v1 [cs.AI])
The scientific community is becoming more and more interested in the research that applies the mathematical formalism of quantum theory to model human decision-making. In this paper, we provide the theoretical foundations of the quantum approach to cognition that we developed in Brussels. These foundations rest on the results of two decades of studies on the axiomatic and operational-realistic approaches to the foundations of quantum physics. The deep analogies between the foundations of physics and cognition lead us to investigate the validity of quantum theory as a general and unitary framework for cognitive processes, and the empirical success of the Hilbert space models derived by such investigation provides a strong theoretical confirmation of this validity. However, two situations in the cognitive realm, 'question order effects' and 'response replicability', indicate that even the Hilbert space framework could be insufficient to reproduce the collected data. This does not mean that the mentioned operational-realistic approach would be incorrect, but simply that a larger class of measurements would be in force in human cognition, so that an extended quantum formalism may be needed to deal with all of them. As we will explain, the recently derived 'extended Bloch representation' of quantum theory (and the associated 'general tension-reduction' model) precisely provides such extended formalism, while remaining within the same unitary interpretative framework.
from cs.AI updates on arXiv.org http://ift.tt/1kuBmWm
via IFTTT
Combining Fuzzy Cognitive Maps and Discrete Random Variables. (arXiv:1512.08811v1 [cs.AI])
In this paper we propose an extension to Fuzzy Cognitive Maps (FCMs) that aims at aggregating a number of reasoning tasks into one parallel run. The described approach consists of replacing real-valued activation levels of concepts (and, further, influence weights) by random variables. This extension, together with the implemented software tool, allows for determining the ranges reached by concept activation levels, sensitivity analysis, as well as statistical analysis of multiple reasoning results. We replace the multiplication and addition operators appearing in the FCM state equation by appropriate convolutions applicable to discrete random variables. To make the model computationally feasible, it is further augmented with aggregation operations for discrete random variables. We discuss four implemented aggregators and report the results of preliminary tests.
from cs.AI updates on arXiv.org http://ift.tt/1R9uxWW
via IFTTT
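For intuition, the distribution of the sum of two independent discrete random variables is obtained by convolving their probability mass functions. A minimal Python sketch of that operation (my illustration; the paper's aggregation operators for the FCM state equation are more involved):

# distribution of X + Y for independent discrete random variables,
# each represented as a dict mapping value -> probability
from collections import defaultdict

def convolve_sum(px, py):
	pz = defaultdict(float)
	for x, p in px.items():
		for y, q in py.items():
			# every pair of outcomes contributes its joint probability
			pz[x + y] += p * q
	return dict(pz)

# toy usage: two concept activation levels taking values in {0.0, 0.5, 1.0}
px = {0.0: 0.2, 0.5: 0.5, 1.0: 0.3}
py = {0.0: 0.1, 0.5: 0.6, 1.0: 0.3}
print(convolve_sum(px, py))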
Learning Natural Language Inference with LSTM. (arXiv:1512.08849v1 [cs.CL])
Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for the NLI task. In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a matching-LSTM that performs word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. Our experiments on the SNLI corpus show that our model outperforms the state of the art, achieving an accuracy of 86.1% on the test data.
from cs.AI updates on arXiv.org http://ift.tt/1kuBogI
via IFTTT
Modeling Variations of First-Order Horn Abduction in Answer Set Programming using On-Demand Constraints and Flexible Value Invention. (arXiv:1512.08899v1 [cs.AI])
We study abduction in First Order Horn logic theories where all atoms can be abduced and we are looking for preferred solutions with respect to the objective functions cardinality minimality, Coherence, or Weighted Abduction. We represent this reasoning problem in Answer Set Programming (ASP), in order to obtain a flexible framework for experimenting with global constraints and objective functions, and to test the boundaries of what is possible with ASP, because realizing this problem in ASP is challenging as it requires value invention and equivalence between certain constants as the Unique Names Assumption does not hold in general. For permitting reasoning in cyclic theories, we formally describe fine-grained variations of limiting Skolemization. We evaluate our encodings and extensions experimentally on the ACCEL benchmark for plan recognition in Natural Language Understanding. Our encodings are publicly available, modular, and our approach is more efficient than state-of-the-art solvers on the ACCEL benchmark. We identify term equivalence as a main instantiation bottleneck, and experiment with on-demand constraints that were used to eliminate the same bottleneck in state-of-the-art solvers and make them applicable for larger datasets. Surprisingly, experiments show that this method is beneficial only for cardinality minimality with our ASP encodings.
from cs.AI updates on arXiv.org http://ift.tt/1R9uz1c
via IFTTT
Simple, Robust and Optimal Ranking from Pairwise Comparisons. (arXiv:1512.08949v1 [cs.LG])
We consider data in the form of pairwise comparisons of n items, with the goal of precisely identifying the top k items for some value of k < n, or alternatively, recovering a ranking of all the items. We consider a simple counting algorithm that ranks the items in order of the number of pairwise comparisons won, and show it has three important and useful features: (a) Computational efficiency: the simplicity of the method leads to speed-ups of several orders of magnitude in computation time as compared to prior work; (b) Robustness: our theoretical guarantees make no assumptions on the pairwise-comparison probabilities, while prior work is restricted to the specific BTL model and performs poorly if the data is not true to it; and (c) Optimality: we show that up to constant factors, our algorithm achieves the information-theoretic limits for recovering the top-k subset. Finally, we extend our results to obtain sharp guarantees for approximate recovery under the Hamming distortion metric.
from cs.AI updates on arXiv.org http://ift.tt/1JKngGM
via IFTTT
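The counting algorithm in this abstract is simple enough to sketch directly; a minimal version (my illustration, not the authors' code), assuming comparisons arrive as (winner, loser) pairs:

from collections import Counter

def top_k_by_wins(comparisons, k):
	# rank items in decreasing order of pairwise comparisons won;
	# items that never win are omitted, which is fine for top-k recovery
	wins = Counter(winner for winner, _ in comparisons)
	return [item for item, _ in wins.most_common(k)]

# toy usage: "a" beats everyone, "b" beats "c"
print(top_k_by_wins([("a", "b"), ("a", "c"), ("b", "c")], k=2))  # ['a', 'b']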
Evaluating Go Game Records for Prediction of Player Attributes. (arXiv:1512.08969v1 [cs.AI])
We propose a way of extracting and aggregating per-move evaluations from sets of Go game records. The evaluations capture different aspects of the games such as played patterns or statistics of sente/gote sequences. Using machine learning algorithms, the evaluations can be utilized to predict different relevant target variables. We apply this methodology to predict the strength and playing style of the player (e.g. territoriality or aggressivity) with good accuracy. We propose a number of possible applications, including aiding in Go study, seeding real-world ranks of internet players, or tuning of Go-playing programs.
from cs.AI updates on arXiv.org http://ift.tt/1kuBlld
via IFTTT
A Notation for Markov Decision Processes. (arXiv:1512.09075v1 [cs.AI])
This paper specifies a notation for Markov decision processes.
from cs.AI updates on arXiv.org http://ift.tt/1R9uxGn
via IFTTT
Anonymous Functions and MapReduce
from Google Alert - anonymous http://ift.tt/1R9pmpU
via IFTTT
R.I.P Ian Murdock, Founder of Debian Linux, Dead at 42
from The Hacker News http://ift.tt/1SmWlZ1
via IFTTT
SportsCenter Video: Tedy Bruschi says "I'm not" surprised by Ravens WR Steve Smith Sr.'s decision to come back in 2016 (ESPN)
via IFTTT
ProxyBack Malware Targets PCs And Sets Up Anonymous Proxies
from Google Alert - anonymous http://ift.tt/1mRFOAh
via IFTTT
NFL: Ravens WR Steve Smith Sr. posts on Twitter he will return and play in 2016; had announced retirement in August (ESPN)
via IFTTT
Ravens: John Harbaugh says Eagles' job "not even part of the conversation"; assistant in Philadelphia from 1998-2007 (ESPN)
via IFTTT
Final opportunity to support Yellowstone in 2015
Source: Gmail -> IFTTT-> Blogger
Ashley Madison claims to have gained 4.6 million
from Google Alert - anonymous http://ift.tt/1NRibAr
via IFTTT
Google 'Android N' Will Not Use Oracle's Java APIs
from The Hacker News http://ift.tt/1RRrvbq
via IFTTT
Anonymous have apparently foiled their first terror attack
from Google Alert - anonymous http://ift.tt/1Smu9p4
via IFTTT
Re: [FD] Executable installers are vulnerable^WEVIL (case 15): F-SecureOnlineScanner.exe allows arbitrary (remote) code execution and escalation of privilege
Source: Gmail -> IFTTT-> Blogger
[FD] Netduma R1 Router CSRF
Source: Gmail -> IFTTT-> Blogger
Rules: send notifications to all tokens, including the anonymous ones...
from Google Alert - anonymous http://ift.tt/1Ugkhf4
via IFTTT
Tor Project to Start Bug Bounty Program — Get Paid for HACKING!
from The Hacker News http://ift.tt/1VprPNJ
via IFTTT
Anonymous and Delegated Grading Release Notes
from Google Alert - anonymous http://ift.tt/1RR3scI
via IFTTT
North Korea's Red Star OS (Looks Like Mac OS X) Spies on its Own People
from The Hacker News http://ift.tt/1MGp5og
via IFTTT
Dust of the Orion Nebula
[FD] Vulnerabilities in Mobile Safari
Source: Gmail -> IFTTT-> Blogger
Tuesday, December 29, 2015
[FD] Local root vulnerability in DeleGate v9.9.13
Source: Gmail -> IFTTT-> Blogger
Design film festival 2015 by Anonymous
from Google Alert - anonymous http://ift.tt/1PvA62e
via IFTTT
Caleno custure me
from Google Alert - anonymous http://ift.tt/1YQCMIk
via IFTTT
Ravens: Baltimore (5-10) No. 24 in Week 17 NFL Power Rankings, can play spoiler for 2nd straight week Sunday at Bengals (ESPN)
via IFTTT
Municipality to experiment with anonymous job applications
from Google Alert - anonymous http://ift.tt/1JH26JR
via IFTTT
ISS Daily Summary Report – 12/28/15
from ISS On-Orbit Status Report http://ift.tt/1OqI1IW
via IFTTT
Jail Authorities Mistakenly Early Released 3,200 Prisoners due to a Silly Software Bug
from The Hacker News http://ift.tt/1JGHxxc
via IFTTT
Employee Stole 'Yandex Search Engine' Source Code, Tried to Sell it for Just $29K
from The Hacker News http://ift.tt/1Zz1IWU
via IFTTT
I have a new follower on Twitter
ronaldslasfhter
Get Handmade word press website Design only in $249. call +1-800-219-0366 visit: https://t.co/3o7Ek7Lvaz
San Jose, CA
https://t.co/3o7Ek7Lvaz
Following: 33 - Followers: 0
December 29, 2015 at 04:58AM via Twitter http://twitter.com/ronaldslasfhter
Microsoft Keeps Backup of Your Encryption Key on its Server — Here's How to Delete it
from The Hacker News http://ift.tt/1RP91bE
via IFTTT
Patch now! Adobe releases Emergency Security Updates for Flash Player
from The Hacker News http://ift.tt/1PtSc2u
via IFTTT
Monday, December 28, 2015
Probabilistic Model-Based Approach for Heart Beat Detection. (arXiv:1512.07931v1 [cs.AI])
Nowadays, hospitals are ubiquitous and integral to modern society. Patients flow in and out of a veritable whirlwind of paperwork, consultations, and potential inpatient admissions, through an abstracted system that is not without flaws. One of the biggest flaws in the medical system is perhaps an unexpected one: the patient alarm system. One longitudinal study reported an 88.8% rate of false alarms, with other studies reporting numbers of similar magnitudes. These false alarm rates lead to a number of deleterious effects that manifest in a significantly lower standard of care across clinics.
This paper discusses a model-based probabilistic inference approach to identifying variables at a detection level. We design a generative model that complies with an overview of human physiology and perform approximate Bayesian inference. One primary goal of this paper is to justify a Bayesian modeling approach to increasing robustness in a physiological domain.
We use three data sets provided by Physionet, a research resource for complex physiological signals, in the form of the Physionet 2014 Challenge set-p1 and set-p2, as well as the MGH/MF Waveform Database. On the extended data set our algorithm is on par with the other top six submissions to the Physionet 2014 challenge.
from cs.AI updates on arXiv.org http://ift.tt/1UeH3Uv
via IFTTT
Multi-Level Cause-Effect Systems. (arXiv:1512.07942v1 [stat.ML])
We present a domain-general account of causation that applies to settings in which macro-level causal relations between two systems are of interest, but the relevant causal features are poorly understood and have to be aggregated from vast arrays of micro-measurements. Our approach generalizes that of Chalupka et al. (2015) to the setting in which the macro-level effect is not specified. We formalize the connection between micro- and macro-variables in such situations and provide a coherent framework describing causal relations at multiple levels of analysis. We present an algorithm that discovers macro-variable causes and effects from micro-level measurements obtained from an experiment. We further show how to design experiments to discover macro-variables from observational micro-variable data. Finally, we show that under specific conditions, one can identify multiple levels of causal structure. Throughout the article, we use a simulated neuroscience multi-unit recording experiment to illustrate the ideas and the algorithms.
from cs.AI updates on arXiv.org http://ift.tt/1mmV7QF
via IFTTT
Toward a Research Agenda in Adversarial Reasoning: Computational Approaches to Anticipating the Opponent's Intent and Actions. (arXiv:1512.07943v1 [cs.AI])
This paper defines adversarial reasoning as computational approaches to inferring and anticipating an enemy's perceptions, intents and actions. It argues that adversarial reasoning transcends the boundaries of game theory and must also leverage such disciplines as cognitive modeling, control theory, AI planning and others. To illustrate the challenges of applying adversarial reasoning to real-world problems, the paper explores the lessons learned in the CADET - a battle planning system that focuses on brigade-level ground operations and involves adversarial reasoning. From this example of current capabilities, the paper proceeds to describe RAID - a DARPA program that aims to build capabilities in adversarial reasoning, and how such capabilities would address practical requirements in Defense and other application areas.
from cs.AI updates on arXiv.org http://ift.tt/1IzWmGO
via IFTTT
Device and System Level Design Considerations for Analog-Non-Volatile-Memory Based Neuromorphic Architectures. (arXiv:1512.08030v1 [cs.NE])
This paper gives an overview of recent progress in the brain-inspired computing field with a focus on implementation using emerging memories as electronic synapses. Design considerations and challenges such as requirements and design targets on multilevel states, device variability, programming energy, array-level connectivity, fan-in/fan-out, wire energy, and IR drop are presented. Wires are increasingly important in design decisions, especially for large systems, and cycle-to-cycle variations have a large impact on learning performance.
from cs.AI updates on arXiv.org http://ift.tt/1UeH3Ur
via IFTTT
Using Data Analytics to Detect Anomalous States in Vehicles. (arXiv:1512.08048v1 [cs.AI])
Vehicles are becoming more and more connected; this opens up a larger attack surface which not only affects the passengers inside vehicles, but also people around them. These vulnerabilities exist because modern systems are built on the comparatively less secure and old CAN bus framework which lacks even basic authentication. Since a new protocol can only help future vehicles and not older vehicles, our approach tries to solve the issue as a data analytics problem and uses machine learning techniques to secure cars. We develop a Hidden Markov Model to detect anomalous states from real data collected from vehicles. Using this model, while a vehicle is in operation, we are able to detect anomalies and issue alerts. Our model could be integrated as a plug-n-play device in all new and old cars.
from cs.AI updates on arXiv.org http://ift.tt/1IzWmGM
via IFTTT
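As a rough illustration of the detection idea (my sketch using the hmmlearn library, not the authors' model or data), one can fit an HMM to traces of normal driving and flag windows whose log-likelihood falls below a threshold:

import numpy as np
from hmmlearn import hmm

# X_train: feature vectors from normal CAN-bus traffic, shape (n_samples, n_features)
X_train = np.random.randn(500, 4)  # placeholder data for the sketch
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(X_train)

# score a sliding window of new samples; low log-likelihood -> raise an alert
window = np.random.randn(20, 4)
threshold = -200.0  # hypothetical value, tuned on held-out normal data
if model.score(window) < threshold:
	print("anomalous vehicle state detected")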
Regularized Orthogonal Tensor Decompositions for Multi-Relational Learning. (arXiv:1512.08120v1 [cs.LG])
Multi-relational learning has received lots of attention from researchers in various research communities. Most existing methods either suffer from superlinear per-iteration cost, or are sensitive to the given ranks. To address both issues, we propose a scalable core tensor trace norm Regularized Orthogonal Iteration Decomposition (ROID) method for full or incomplete tensor analytics, which can be generalized as a graph Laplacian regularized version by using auxiliary information or a sparse higher-order orthogonal iteration (SHOOI) version. We first induce the equivalence relation of the Schatten p-norm (0 < p < ∞) of a low multi-linear rank tensor and its core tensor. Then we achieve a much smaller matrix trace norm minimization problem. Finally, we develop two efficient augmented Lagrange multiplier algorithms to solve our problems with convergence guarantees. Extensive experiments using both real and synthetic datasets, even though with only a few observations, verified both the efficiency and effectiveness of our methods.
from cs.AI updates on arXiv.org http://ift.tt/1JcTjEo
via IFTTT
GELATO and SAGE: An Integrated Framework for MS Annotation. (arXiv:1512.08451v1 [cs.AI])
Several algorithms and tools have been developed to (semi-)automate the process of glycan identification by interpreting Mass Spectrometric data. However, each has limitations when annotating MSn data with thousands of MS spectra using uncurated public databases. Moreover, the existing tools are not designed to manage MSn data where n > 2. We propose a novel software package to automate the annotation of tandem MS data. This software consists of two major components. The first is a free, semi-automated MSn data interpreter called the Glycomic Elucidation and Annotation Tool (GELATO). This tool extends and automates the functionality of existing open source projects, namely GlycoWorkbench (GWB) and GlycomeDB. The second is a machine learning model called the Smart Annotation Enhancement Graph (SAGE), which learns the behavior of glycoanalysts to select annotations generated by GELATO that emulate human interpretation of the spectra.
from cs.AI updates on arXiv.org http://ift.tt/1UeH6ja
via IFTTT
Bayesian Matrix Completion via Adaptive Relaxed Spectral Regularization. (arXiv:1512.01110v2 [cs.NA] UPDATED)
Bayesian matrix completion has been studied based on a low-rank matrix factorization formulation with promising results. However, little work has been done on Bayesian matrix completion based on the more direct spectral regularization formulation. We fill this gap by presenting a novel Bayesian matrix completion method based on spectral regularization. In order to circumvent the difficulties of dealing with the orthonormality constraints of singular vectors, we derive a new equivalent form with relaxed constraints, which then leads us to design an adaptive version of spectral regularization feasible for Bayesian inference. Our Bayesian method requires no parameter tuning and can infer the number of latent factors automatically. Experiments on synthetic and real datasets demonstrate encouraging results on rank recovery and collaborative filtering, with notably good results for very sparse matrices.
from cs.AI updates on arXiv.org http://ift.tt/1jC8VFg
via IFTTT
On the Min-cost Traveling Salesman Problem with Drone. (arXiv:1512.01503v2 [cs.AI] UPDATED)
Once known to be used exclusively in the military domain, unmanned aerial vehicles (drones) have stepped up to become part of a new logistics method in the commercial sector called "last-mile delivery". In this novel approach, small unmanned aerial vehicles (UAVs), also known as drones, are deployed alongside trucks to deliver goods to customers in order to improve the service quality or reduce the transportation cost. This gives rise to a new variant of the traveling salesman problem (TSP), which we call TSP with drone (TSP-D). In this article, we consider a variant of TSP-D where the main objective is to minimize the total transportation cost. We also propose two heuristics: "Drone First, Truck Second" (DFTS) and "Truck First, Drone Second" (TFDS), to effectively solve the problem. The former constructs the route for the drone first, while the latter constructs the route for the truck first. We solve a TSP to generate the route for the truck and propose a mixed integer programming (MIP) formulation with different profit functions to build the route for the drone. Numerical results obtained on many instances with different sizes and characteristics are presented. Recommendations on promising algorithm choices are also provided.
from cs.AI updates on arXiv.org http://ift.tt/1OJgCXo
via IFTTT
On Voting and Facility Location. (arXiv:1512.05868v1 [cs.GT] CROSS LISTED)
We study mechanisms for candidate selection that seek to minimize the social cost, where voters and candidates are associated with points in some underlying metric space. The social cost of a candidate is the sum of its distances to each voter. Some of our work assumes that these points can be modeled on a real line, but other results of ours are more general.
A question closely related to candidate selection is that of minimizing the sum of distances for facility location. The difference is that in our setting there is a fixed set of candidates, whereas the large body of work on facility location seems to consider every point in the metric space to be a possible candidate. This gives rise to three types of mechanisms which differ in the granularity of their input space (voting, ranking and location mechanisms). We study the relationships between these three classes of mechanisms.
While it may seem that Black's 1948 median algorithm is optimal for candidate selection on the line, this is not the case. We give matching upper and lower bounds for a variety of settings. In particular, when candidates and voters are on the line, our universally truthful spike mechanism gives a [tight] approximation of two. When assessing candidate selection mechanisms, we seek several desirable properties: (a) efficiency (minimizing the social cost), (b) truthfulness (dominant strategy incentive compatibility), and (c) simplicity (a smaller input space). We quantify the effect that truthfulness and simplicity impose on the efficiency.
from cs.AI updates on arXiv.org http://ift.tt/1ZjHUqq
via IFTTT
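As a worked example of the social-cost objective (my sketch, not the paper's mechanism), on the line the best available candidate can be computed directly when voter positions are known:

def best_candidate(voters, candidates):
	# social cost of a candidate = sum of its distances to each voter
	return min(candidates, key=lambda c: sum(abs(c - v) for v in voters))

# toy usage: with voters at 0.0, 0.4 and 1.0, the candidate at 0.1 has
# social cost 1.3 versus 1.5 for the candidate at 0.9
print(best_candidate([0.0, 0.4, 1.0], [0.1, 0.9]))  # 0.1

Note that only the fixed candidate locations are eligible, which is exactly why the median voter position need not be attainable.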
Re: [FD] libtiff: invalid write (CVE-2015-7554)
Source: Gmail -> IFTTT-> Blogger
Boutique Sales Associate/E-Commerce Sales at Anonymous LA
from Google Alert - anonymous http://ift.tt/1mLLhbN
via IFTTT
191 Million US Voters' Personal Info Exposed by Misconfigured Database
from The Hacker News http://ift.tt/1JEvpwG
via IFTTT
Increasing Raspberry Pi FPS with Python and OpenCV
Today is the second post in our three-part series on milking every last bit of performance out of your webcam or Raspberry Pi camera.
Last week we discussed how to:
- Increase the FPS rate of our video processing pipeline.
- Reduce the effects of I/O latency on standard USB and built-in webcams using threading.
This week we’ll continue to utilize threads to improve the FPS/latency of the Raspberry Pi using both the picamera module and a USB webcam.
As we’ll find out, threading can dramatically decrease our I/O latency, thus substantially increasing the FPS processing rate of our pipeline.
Looking for the source code to this post?
Jump right to the downloads section.
Note: A big thanks to PyImageSearch reader, Sean McLeod, who commented on last week’s post and mentioned that I needed to make the FPS rate and the I/O latency topic more clear.
Increasing Raspberry Pi FPS with Python and OpenCV
In last week’s blog post we learned that by using a dedicated thread (separate from the main thread) to read frames from our camera sensor, we can dramatically increase the FPS processing rate of our pipeline. This speedup is obtained by (1) reducing I/O latency and (2) ensuring the main thread is never blocked, allowing us to grab the most recent frame read by the camera at any moment in time. Using this multi-threaded approach, our video processing pipeline is never blocked, thus allowing us to increase the overall FPS processing rate of the pipeline.
In fact, I would argue that it’s even more important to use threading on the Raspberry Pi 2 since resources (i.e., processor and RAM) are substantially more constrained than on modern laptops/desktops.
Again, our goal here is to create a separate thread that is dedicated to polling frames from the Raspberry Pi camera module. By doing this, we can increase the FPS rate of our video processing pipeline by 246%!
In fact, this functionality is already implemented inside the imutils package. To install imutils on your system, just use pip:
$ pip install imutils
If you already have imutils installed, you can upgrade to the latest version using this command:
$ pip install --upgrade imutils
We’ll be reviewing the source code of the video sub-package of imutils to obtain a better understanding of what’s going on under the hood.
To handle reading threaded frames from the Raspberry Pi camera module, let’s define a Python class named PiVideoStream:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread
import cv2

class PiVideoStream:
	def __init__(self, resolution=(320, 240), framerate=32):
		# initialize the camera and stream
		self.camera = PiCamera()
		self.camera.resolution = resolution
		self.camera.framerate = framerate
		self.rawCapture = PiRGBArray(self.camera, size=resolution)
		self.stream = self.camera.capture_continuous(self.rawCapture,
			format="bgr", use_video_port=True)

		# initialize the frame and the variable used to indicate
		# if the thread should be stopped
		self.frame = None
		self.stopped = False
Lines 2-5 handle importing our necessary packages. We’ll import both PiCamera and PiRGBArray to access the Raspberry Pi camera module. If you do not have the picamera Python module already installed (or have never worked with it before), I would suggest reading this post on accessing the Raspberry Pi camera for a gentle introduction to the topic.
On Line 8 we define the constructor of the PiVideoStream class. We can optionally supply two parameters here: (1) the resolution of the frames being read from the camera stream and (2) the desired frame rate of the camera module. These values default to (320, 240) and 32, respectively.
Finally, Line 19 initializes the latest frame read from the video stream, along with a boolean variable used to indicate if the frame-reading process should be stopped.
Next up, let’s look at how we can read frames from the Raspberry Pi camera module in a threaded manner:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread
import cv2

class PiVideoStream:
	def __init__(self, resolution=(320, 240), framerate=32):
		# initialize the camera and stream
		self.camera = PiCamera()
		self.camera.resolution = resolution
		self.camera.framerate = framerate
		self.rawCapture = PiRGBArray(self.camera, size=resolution)
		self.stream = self.camera.capture_continuous(self.rawCapture,
			format="bgr", use_video_port=True)

		# initialize the frame and the variable used to indicate
		# if the thread should be stopped
		self.frame = None
		self.stopped = False

	def start(self):
		# start the thread to read frames from the video stream
		Thread(target=self.update, args=()).start()
		return self

	def update(self):
		# keep looping infinitely until the thread is stopped
		for f in self.stream:
			# grab the frame from the stream and clear the stream in
			# preparation for the next frame
			self.frame = f.array
			self.rawCapture.truncate(0)

			# if the thread indicator variable is set, stop the thread
			# and release camera resources
			if self.stopped:
				self.stream.close()
				self.rawCapture.close()
				self.camera.close()
				return
Lines 22-25 define the start method, which is simply used to spawn a thread that calls the update method.
The update method (Lines 27-41) continuously polls the Raspberry Pi camera module, grabs the most recent frame from the video stream, and stores it in the frame variable. Again, it’s important to note that this thread is separate from our main Python script.
Finally, if we need to stop the thread, Lines 38-40 handle releasing any camera resources.
Note: If you are unfamiliar with using the Raspberry Pi camera and the picamera module, I highly suggest that you read this tutorial before continuing.
Finally, let’s define two more methods used in the PiVideoStream class:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread
import cv2

class PiVideoStream:
	def __init__(self, resolution=(320, 240), framerate=32):
		# initialize the camera and stream
		self.camera = PiCamera()
		self.camera.resolution = resolution
		self.camera.framerate = framerate
		self.rawCapture = PiRGBArray(self.camera, size=resolution)
		self.stream = self.camera.capture_continuous(self.rawCapture,
			format="bgr", use_video_port=True)

		# initialize the frame and the variable used to indicate
		# if the thread should be stopped
		self.frame = None
		self.stopped = False

	def start(self):
		# start the thread to read frames from the video stream
		Thread(target=self.update, args=()).start()
		return self

	def update(self):
		# keep looping infinitely until the thread is stopped
		for f in self.stream:
			# grab the frame from the stream and clear the stream in
			# preparation for the next frame
			self.frame = f.array
			self.rawCapture.truncate(0)

			# if the thread indicator variable is set, stop the thread
			# and release camera resources
			if self.stopped:
				self.stream.close()
				self.rawCapture.close()
				self.camera.close()
				return

	def read(self):
		# return the frame most recently read
		return self.frame

	def stop(self):
		# indicate that the thread should be stopped
		self.stopped = True
The read method simply returns the most recently read frame from the camera sensor to the calling function. The stop method sets the stopped boolean to indicate that the camera resources should be cleaned up and the camera polling thread stopped.
Now that the PiVideoStream class is defined, let’s create the picamera_fps_demo.py driver script:
# import the necessary packages
from __future__ import print_function
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
	help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
	help="Whether or not frames should be displayed")
args = vars(ap.parse_args())

# initialize the camera and stream
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))
stream = camera.capture_continuous(rawCapture, format="bgr",
	use_video_port=True)
Lines 2-10 handle importing our necessary packages. We’ll import the FPS class from last week so we can approximate the FPS rate of our video processing pipeline.
From there, Lines 13-18 handle parsing our command line arguments. We only need two optional switches here: --num-frames, which is the number of frames we’ll use to approximate the FPS of our pipeline, followed by --display, which is used to indicate if the frame read from our Raspberry Pi camera should be displayed to our screen or not.
Finally, Lines 21-26 handle initializing the Raspberry Pi camera stream — see this post for more information.
Now we are ready to obtain results for a non-threaded approach:
# import the necessary packages
from __future__ import print_function
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
	help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
	help="Whether or not frames should be displayed")
args = vars(ap.parse_args())

# initialize the camera and stream
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))
stream = camera.capture_continuous(rawCapture, format="bgr",
	use_video_port=True)

# allow the camera to warmup and start the FPS counter
print("[INFO] sampling frames from `picamera` module...")
time.sleep(2.0)
fps = FPS().start()

# loop over some frames
for (i, f) in enumerate(stream):
	# grab the frame from the stream and resize it to have a maximum
	# width of 400 pixels
	frame = f.array
	frame = imutils.resize(frame, width=400)

	# check to see if the frame should be displayed to our screen
	if args["display"] > 0:
		cv2.imshow("Frame", frame)
		key = cv2.waitKey(1) & 0xFF

	# clear the stream in preparation for the next frame and update
	# the FPS counter
	rawCapture.truncate(0)
	fps.update()

	# check to see if the desired number of frames have been reached
	if i == args["num_frames"]:
		break

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
stream.close()
rawCapture.close()
camera.close()
Line 31 starts the FPS counter, allowing us to approximate the number of frames our pipeline can process in a single second.
We then start looping over frames read from the Raspberry Pi camera module on Line 34.
Lines 41-43 check to see if the frame should be displayed to our screen or not, while Line 48 updates the FPS counter.
Finally, Lines 61-63 handle releasing any camera resources.
The code for accessing the Raspberry Pi camera in a threaded manner follows below:
# import the necessary packages
from __future__ import print_function
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
	help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
	help="Whether or not frames should be displayed")
args = vars(ap.parse_args())

# initialize the camera and stream
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))
stream = camera.capture_continuous(rawCapture, format="bgr",
	use_video_port=True)

# allow the camera to warmup and start the FPS counter
print("[INFO] sampling frames from `picamera` module...")
time.sleep(2.0)
fps = FPS().start()

# loop over some frames
for (i, f) in enumerate(stream):
	# grab the frame from the stream and resize it to have a maximum
	# width of 400 pixels
	frame = f.array
	frame = imutils.resize(frame, width=400)

	# check to see if the frame should be displayed to our screen
	if args["display"] > 0:
		cv2.imshow("Frame", frame)
		key = cv2.waitKey(1) & 0xFF

	# clear the stream in preparation for the next frame and update
	# the FPS counter
	rawCapture.truncate(0)
	fps.update()

	# check to see if the desired number of frames have been reached
	if i == args["num_frames"]:
		break

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
stream.close()
rawCapture.close()
camera.close()

# create a *threaded* video stream, allow the camera sensor to warmup,
# and start the FPS counter
print("[INFO] sampling THREADED frames from `picamera` module...")
vs = PiVideoStream().start()
time.sleep(2.0)
fps = FPS().start()

# loop over some frames...this time using the threaded stream
while fps._numFrames < args["num_frames"]:
	# grab the frame from the threaded video stream and resize it
	# to have a maximum width of 400 pixels
	frame = vs.read()
	frame = imutils.resize(frame, width=400)

	# check to see if the frame should be displayed to our screen
	if args["display"] > 0:
		cv2.imshow("Frame", frame)
		key = cv2.waitKey(1) & 0xFF

	# update the FPS counter
	fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
This code is very similar to the code block above, only this time we initialize and start the threaded PiVideoStream class on Line 68.
We then loop over the same number of frames as with the non-threaded approach, update the FPS counter, and finally print our results to the terminal on Lines 89 and 90.
Raspberry Pi FPS Threading Results
In this section we will review the results of using threading to increase the FPS processing rate of our pipeline by reducing the effects of I/O latency.
The results for this post were gathered on a Raspberry Pi 2:
- Using the picamera module.
- And a Logitech C920 camera (which is plug-and-play compatible with the Raspberry Pi).
I also gathered results using the Raspberry Pi Zero. Since the Pi Zero does not have a CSI port (and thus cannot use the Raspberry Pi camera module), timings were only gathered for the Logitech USB camera.
I used the following command to gather results for the picamera module on the Raspberry Pi 2:
$ python picamera_fps_demo.py
As we can see from the screenshot above, using no threading obtained 14.46 FPS.
However, by using threading, our FPS rose to 226.67, an increase of over 1,467%!
But before we get too excited, keep in mind this is not a true representation of the FPS of the Raspberry Pi camera module — we are certainly not reading a total of 226 frames from the camera module per second. Instead, this speedup simply demonstrates that our for loop pipeline is able to process 226 frames per second.
This increase in FPS processing rate comes from decreased I/O latency. By placing the I/O in a separate thread, our main thread runs extremely fast — faster than the I/O thread is capable of polling frames from the camera, in fact. This implies that we are actually processing the same frame multiple times.
Again, what we are actually measuring is the number of frames our video processing pipeline can process in a single second, regardless if the frames are “new” frames returned from the camera sensor or not.
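One hypothetical way to see this effect for yourself (my sketch, not part of the post or of imutils) is to have the reader thread tag each stored frame with a sequence number, then count how many distinct frames the main loop actually sees:

# hypothetical sketch: assumes PiVideoStream was modified so that update()
# increments a self.seq counter each time it stores a fresh frame
seen = set()
for i in range(1000):
	frame = vs.read()
	seen.add(vs.seq)  # vs.seq is an added attribute, not in imutils
print("loop iterations: 1000, unique frames: {}".format(len(seen)))

On a fast main thread, the number of unique frames will be far smaller than the number of loop iterations.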
Using the current threaded scheme, we can process approximately 226.67 FPS using our trivial pipeline. This FPS number will go down as our video processing pipeline becomes more complex.
To demonstrate this, let’s insert a cv2.imshow call and display each of the frames read from the camera sensor to our screen. The cv2.imshow function is another form of I/O, only now we are both reading a frame from the stream and then writing the frame to our display:
$ python picamera_fps_demo.py --display 1
Using no threading, we reached only 14.97 FPS.
But by placing the frame I/O into a separate thread, we reached 51.83 FPS, an improvement of 246%!
It’s also worth noting that the Raspberry Pi camera module itself can reportedly get up to 90 FPS.
To summarize the results, by placing the blocking I/O call in our main thread, we only obtained a very low 14.97 FPS. But by moving the I/O to an entirely separate thread our FPS processing rate has increased (by decreasing the effects of I/O latency), bringing the FPS rate up to an estimated 51.83.
Simply put: when you are developing Python scripts on the Raspberry Pi 2 using the picamera module, move your frame reading to a separate thread to speed up your video processing pipeline.
For completeness, I’ve also run the same experiments from last week using the fps_demo.py script (see last week’s post for a review of the code) to gather FPS results from a USB camera on the Raspberry Pi 2:
$ python fps_demo.py --display 1
With no threading, our pipeline obtained 22 FPS. But by introducing threading, we reached 36.09 FPS — an improvement of 64%!
Finally, I also ran the fps_demo.py script on the Raspberry Pi Zero as well:
With no threading, we hit 6.62 FPS. And with threading, we only marginally improved to 6.90 FPS, an increase of only 4%.
The reason for the small performance gain is simply that the Raspberry Pi Zero processor has only one core and one thread, so the same thread of execution must be shared by all processes running on the system at any given time.
Given the quad-core processor of the Raspberry Pi 2, suffice it to say that the Pi 2 should be preferred for video processing.
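If you’re unsure how many cores a given board exposes, a quick standard-library check (not from the post):

import multiprocessing
# 4 on the Raspberry Pi 2, 1 on the Raspberry Pi Zero
print(multiprocessing.cpu_count())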
Summary
In this post we learned how threading can be used to increase our FPS processing rate and reduce the effects of I/O latency on the Raspberry Pi.
Using threading allowed us to increase our video processing rate by a nice 246%; however, it’s important to note that as the processing pipeline becomes more complex, the FPS processing rate will go down as well.
In next week’s post, we’ll create a Python class that incorporates last week’s WebcamVideoStream and today’s PiVideoStream into a single class, allowing new video processing blog posts on PyImageSearch to run on either a USB camera or a Raspberry Pi camera module without changing a single line of code!
Sign up for the PyImageSearch newsletter using the form below to be notified when the post goes live.
Downloads:
The post Increasing Raspberry Pi FPS with Python and OpenCV appeared first on PyImageSearch.
from PyImageSearch http://ift.tt/1Vmlvq6
via IFTTT
Preis Ruhm und Ehre, DB: Mus.ms. 30282 (Anonymous)
from Google Alert - anonymous http://ift.tt/1mKHOtQ
via IFTTT
ISS Daily Summary Report – 12/24/15
from ISS On-Orbit Status Report http://ift.tt/1Zx4YSN
via IFTTT