Anonymous 24 Italia. Latest news on the activities of the Anonymous group, reported by users of https://t.co/E4jOMPckbd. Unofficial account.
Italia
https://t.co/5ONXkOtp5Q
Following: 2963 - Followers: 1085
January 02, 2016 at 12:55PM via Twitter http://twitter.com/Anonymous24ita
Linqapp Overcome your language barriers with Linqapp. Get free language support and assistance from REAL people, anytime and anywhere.
Taipei
http://t.co/WSbHCrAHXr
Following: 7393 - Followers: 7007
January 02, 2016 at 07:22AM via Twitter http://twitter.com/LINQAPP
Hackers enjoy playing with PlayStation and Xbox consoles far more than playing on them. And this time, they have done some crazy things with Sony's PlayStation gaming console. It appears that a console-hacking group that goes by the name Fail0verflow has managed to hack the PlayStation 4 (PS4) to run a Linux kernel-based operating system. Fail0verflow announced this week that they successfully
from The Hacker News http://ift.tt/1Oqpm4l
via IFTTT
A new year's treat for binoculars as 2016 begins, Comet Catalina (C/2013 US10) now sweeps through planet Earth's predawn skies near bright Arcturus, alpha star of Bootes. But this telescopic mosaic from December 21 follows the pretty tails of the comet across a field of view as wide as 10 full moons. The smattering of distant galaxies and faint stars in the background are in the constellation Virgo. Trailing behind the comet's orbit, Catalina's dust tail fans out below and left in the frame. Its ion tail is angled toward the top right, away from the Sun and buffeted by the solar wind. On January 17, the outward bound visitor from the Oort Cloud will make its closest approach to Earth, a mere 110 million kilometers away, seen near bright stars along the handle of the Big Dipper. via NASA http://ift.tt/1R3Yx8t
Jerry Katz ツ Jerry Katz Social Media, Broadcasting, Racing, Camping, Great Friends, Cold Beer & Everything Opelika! My Tweets, My Shared Thoughts, My Opinions!
Opelika, AL
http://t.co/WuT1nAffDF
Following: 1949 - Followers: 3187
January 01, 2016 at 08:37AM via Twitter http://twitter.com/JerryKatz
Sean Burke CEO of @KiteDesk. Passionate about #Sales, #Marketing, #Entrepreneurship and #Tech. Great family with lots of laughs.
Tampa, Florida
http://t.co/Wgsso2EoLu
Following: 11004 - Followers: 13417
January 01, 2016 at 01:52AM via Twitter http://twitter.com/seanburkeh
A southern exposure and striking symmetry made Lulworth Cove, along the Jurassic Coast of England, a beautiful setting on planet Earth during December's solstice. Five frames in this dramatic composite view follow the lowest arc of the Sun, from sunrise to sunset, during the shortest day of the year. The solstice arc spans about 103 degrees at this northern latitude. Of course, erosion by wave action has produced the cove's remarkable shape in the coastal limestone layers. The cove's narrow entrance is responsible, creating a circular wave diffraction pattern. The wave pattern is made clearer by the low solstice Sun. via NASA http://ift.tt/1OuqcZp
Robert Dale Smith Sumo Developer at @SumoMe. INTJ, hacker, entrepreneur, startup junkie, IT service veteran. Repaired tens of thousands of PCs/Macs in a past life.
Austin, TX
http://t.co/ID3DMqbAH2
Following: 3832 - Followers: 4401
December 31, 2015 at 10:53PM via Twitter http://twitter.com/RobertDaleSmith
We consider effort allocation in crowdsourcing, where we wish to assign labeling tasks to imperfect, homogeneous crowd workers to maximize overall accuracy in a continuous-time Bayesian setting, subject to budget and time constraints. The Bayes-optimal policy for this problem is the solution to a partially observable Markov decision process, but the curse of dimensionality renders the computation infeasible. Based on the Lagrangian relaxation technique of Adelman & Mersereau (2008), we provide a computationally tractable, instance-specific upper bound on the value of this Bayes-optimal policy, which can in turn be used to bound the optimality gap of any other sub-optimal policy. In an approach similar in spirit to the Whittle index for restless multiarmed bandits, we provide an index policy for effort allocation in crowdsourcing and demonstrate numerically that it outperforms other state-of-the-art policies and performs close to the optimal solution.
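As a rough illustration of index-style effort allocation (not the paper's Lagrangian-relaxation bound or its exact index), the sketch below keeps a Beta posterior per item under an assumed homogeneous worker accuracy and always spends the next label on the item with the largest posterior variance. All names and the accuracy value are illustrative assumptions.

```python
import random

def allocate_labels(true_probs, budget, seed=0):
    """Toy uncertainty-index policy sketch (illustrative, not the paper's
    method): each item's label is 1 with unknown probability; workers report
    the true label correctly with assumed probability q. We track a
    Beta(a, b) posterior per item and label the most uncertain item next."""
    rng = random.Random(seed)
    q = 0.7                                   # assumed worker accuracy
    n = len(true_probs)
    post = [[1.0, 1.0] for _ in range(n)]     # Beta(1, 1) priors

    def variance(a, b):
        return a * b / ((a + b) ** 2 * (a + b + 1))

    for _ in range(budget):
        i = max(range(n), key=lambda j: variance(*post[j]))
        truth = rng.random() < true_probs[i]
        report = truth if rng.random() < q else not truth
        if report:
            post[i][0] += 1                   # observed a positive label
        else:
            post[i][1] += 1                   # observed a negative label

    # final estimate: label 1 iff posterior mean exceeds 0.5
    return [a / (a + b) > 0.5 for a, b in post]
```

With a clearly positive and a clearly negative item, the aggregated labels recover the ground truth once the budget is large enough.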
The paper presents an application of non-linear stacking ensembles for the prediction of Go player attributes. An evolutionary algorithm is used to form a diverse ensemble of base learners, which are then aggregated by a stacking ensemble. This methodology allows for efficient prediction of different attributes of Go players from sets of their games. These attributes can be fairly general; in this work, we used the strength and style of the players.
We investigate the 3-architecture Connected Facility Location Problem arising in the design of urban telecommunication access networks. We propose an original optimization model for the problem that includes additional variables and constraints to take into account wireless signal coverage. Since the problem can prove challenging even for modern state-of-the-art optimization solvers, we propose to solve it by an original primal heuristic which combines a probabilistic fixing procedure, guided by peculiar Linear Programming relaxations, with an exact MIP heuristic based on a very large neighborhood search. Computational experiments on a set of realistic instances show that our heuristic can find solutions associated with much lower optimality gaps than a state-of-the-art solver.
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems cannot currently solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
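To make the "chaining facts" skill concrete, here is a minimal illustration (not the Memory Networks model) of answering a question by following two supporting facts transitively. The triple encoding and relation names are assumptions made for the example.

```python
def chain_answer(facts, question_entity, target_relation):
    """Toy fact-chaining QA sketch: facts are (subject, relation, object)
    triples; starting from the queried entity we follow links until we
    reach an entity satisfying the target relation with no further facts."""
    by_subject = {}
    for s, r, o in facts:
        by_subject.setdefault(s, []).append((r, o))

    current = question_entity
    hops = []                       # the supporting-fact chain
    while current in by_subject:
        r, o = by_subject[current][0]   # toy: follow the first known fact
        hops.append((current, r, o))
        current = o
        if r == target_relation and current not in by_subject:
            break
    return current, hops
```

For "John is in the kitchen. John has the milk. Where is the milk?", the chain milk → John → kitchen yields the answer by combining two facts.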
Cardio Ox Ultrasound: Kopra, with Peake assisting, performed his first (Flight Day 15) of three ultrasound and blood pressure measurement sessions for the Cardio Ox experiment. The objective of Cardio Ox is to determine whether biological markers of oxidative and inflammatory stress are elevated during and after space flight and whether this results in an increased, long-term risk of atherosclerosis in astronauts. Twelve crewmembers provide blood and urine samples to assess biomarkers before launch, 15 and 60 days after launch, 15 days before returning to Earth, and within days after landing. Ultrasound scans of the carotid and brachial arteries are obtained at the same time points, as well as through 5 years after landing, as an indicator of cardiovascular health.

On Board Training (OBT) Emergency Procedures Review: The USOS crew practiced/reviewed emergency mask don/purge technique and demonstrated the ability to communicate with Mission Control Center-Moscow (MCC-M) from the Soyuz wearing an emergency mask. The crew also reviewed locations of equipment and positions of valves used in emergencies. During training, crewmembers consulted and coordinated with specialists at MCC-Houston, MCC-M, Columbus Control Center (COL-CC) and JAXA Space Station Integration and Promotion Center (SSIPC).

Today’s Planned Activities
All activities were completed unless otherwise noted.
NEIROIMMUNITET. Saliva Sample / r/g 1009
HRF – Sample Collection and Prep for Stowage Insertion
CORRECTSIA. Closeout Ops / r/g 1009
HRF – Sample Insertion into MELFI
Eye Imaging (Ocular Health) – OCT Setup
Eye Imaging (Ocular Health) – OCT Exam (Operator)
Eye Imaging (Ocular Health) – OCT Exam (Subject)
Setup and Activation of КСПЭ Equipment for Hatch Closure from MRM1 TV coverage in MPEG2
Crew Prep for PAO
Eye Imaging (Ocular Health) – OCT Exam (Operator)
TV Conference with RCC Energia, IMBP, GCTC Management. New Year Greetings
Eye Imaging (Ocular Health) – OCT Exam (Subject)
DOSETRK – Medication Tracking Update
FINEMOTR – Experiment Test
Crew Onboard Support System (КСПЭ) Equipment Deactivation after TV conference
Treatment of FGB structural elements and shell areas with Fungistat r/g 1013
Kulonovskiy Kristall Experiment Run r/g 1026
Checkout of ВП-2 Pilot’s Sight and Comm Interfaces r/g 1011
USND2 – Hardware Activation
Eye Imaging (Ocular Health) – OCT Equipment Stowage
CARDOX – Setup Ops
CARDOX – Experiment Ops
CARDOX – Scan (Operator)
Intermodular TORU Test with Docked Progress 431 (DC1) r/g 1024
CARDOX – Blood Pressure Operations
Checkout of the Wide Angle Vertical Sight (ВШТВ) r/g 1011
KULONOVSKIY KRISTALL. Copy and Downlink Data / r/g 1026
CARDOX – Doffing and Stowage Ops
DOSETRK – Medication Tracking Update
Compound Specific Analyzer-Combustion (CSA-CP) Checkout Part 2
Total Organic Carbon Analyzer (TOCA) Buffer Container (BC) Changeout
Treatment of FGB structural elements and shell areas with Fungistat r/g 1013
PUMA Health Check r/g 1011
CEVIS Exercise
WRS Water Sample Analysis
Formaldehyde Monitoring Kit (FMK) Stow Operations
JRNL – Journal Entry
USND2 – Hardware Deactivation
Emergency Mask Review
OBT Kazbek Fit Check (Soyuz 718)
OBT ISS Emergency Hardware Familiarization Drill
Preventive Maintenance of FS1 Laptop (Cleaning and rebooting) / r/g 1023
Treatment of FGB structural elements and shell areas with Fungistat r/g 1013
OBT ISS Emergency Hardware Familiarization Drill
BRI Monthly Maintenance r/g 0681 [Aborted]
Cleaning ГЖТ4 (Gas-Liquid Heat Exchanger) ВТ-7 fan screen / FGB System Operations
INTERACTION-2. Experiment Ops / r/g 1015
Verification of ИП-1 Flow Sensor Position / Pressure Control & Atmosphere Monitoring System
IMS Delta File Prep
TOCA Data Recording
Eye Imaging (Ocular Health) – Prep for Fundoscope Ops (Ophthalmoscope)
Crew time for ISS adaptation and orientation
HABIT. Overview Video
Eye Imaging (Ocular Health) – Fundoscope Setup (Ophthalmoscope)
Eye Imaging (Ocular Health) – Prep for Fundoscope Ops (Ophthalmoscope)
Eye Imaging (Ocular Health) – Fundoscope Ops (Ophthalmoscope) (СМО)
On-orbit hearing assessment using EARQ / See OPTIMIS Viewer for Procedure
Eye Imaging (Ocular Health) – Fundoscope Ops (Ophthalmoscope) (Subject)
CONTENT. Experiment Ops / r/g 1016
СОЖ Maintenance
Eye Imaging (Ocular Health) – Fundoscope Ops (Ophthalmoscope) (СМО)
Eye Imaging (Ocular Health) – Fundoscope Ops (Ophthalmoscope) (Subject)
Crew time for ISS adaptation and orientation
Eye Imaging (Ocular Health) – Post-Ops Fundoscope (Ophthalmoscope) Stowage

Completed Task List Items
None

Ground Activities
All activities were completed unless otherwise noted.
Nominal system commanding

Three-Day Look Ahead:
Thursday, 12/31: Ocular Health, EVA Preparations, CIR Operations, ISS Safety Video Survey
Friday, 01/01: Crew holiday
Saturday, 01/02: Crew off duty; housekeeping

QUICK ISS Status – Environmental Control Group:
Component – Status
Elektron – Off
Vozdukh – Manual
[СКВ] 1 – SM Air Conditioner System (“SKV1”) – On
[СКВ] 2 – SM Air Conditioner System (“SKV2”) – Off
Carbon Dioxide Removal Assembly (CDRA) Lab – Standby
Carbon Dioxide Removal Assembly (CDRA) Node 3 – Operate
Major Constituent Analyzer (MCA) Lab – Idle
Major Constituent Analyzer (MCA) Node 3 – Operate
Oxygen Generation Assembly (OGA) – Process
Urine Processing Assembly (UPA) – Norm
Trace Contaminant Control System (TCCS) Lab – Full Up
Trace Contaminant Control System (TCCS) Node 3 – Off
from ISS On-Orbit Status Report http://ift.tt/1RbaxTW
via IFTTT
NFLPicker We make it simple. You pick the games… we provide fun & easy to read sports pick data. Find out how you stack up against the top players. Play PickingDuck Today
Following in the footsteps of Twitter, Facebook and Google, Microsoft promises to notify users of its e-mail (Outlook) and cloud storage (OneDrive) services if government hackers may have targeted their accounts. The company already notifies users if an unauthorized person tries to access their Outlook or OneDrive accounts. But from now on, the company will also inform if it suspects
from The Hacker News http://ift.tt/1YQoaxS
via IFTTT
This interstellar canine is formed of cosmic dust and gas interacting with the energetic light and winds from hot young stars. The shape, visual texture, and color combine to give the region the popular name Fox Fur Nebula. The characteristic blue glow on the left is dust reflecting light from the bright star S Mon, the bright star just below the top edge of the featured image. Textured red and black areas are a combination of the cosmic dust and reddish emission from ionized hydrogen gas. S Mon is part of a young open cluster of stars, NGC 2264, located about 2,500 light years away toward the constellation of the Unicorn (Monoceros). via NASA http://ift.tt/1RRhE5q
Probabilistic Graphical Models (PGMs) are very useful in the fields of machine learning and data mining. The crucial limitation of those models, however, is scalability. The Bayesian network, which is one of the most common PGMs used in machine learning and data mining, demonstrates this limitation when the training data consists of random variables, each of which has a large set of possible values. In the big data era, one would expect new extensions to the existing PGMs to handle the massive amount of data produced these days by computers, sensors and other electronic devices. With hierarchical data - data that is arranged in a treelike structure with several levels - one would expect to see hundreds of thousands or millions of values distributed over even just a small number of levels. When modeling this kind of hierarchical data across large data sets, Bayesian networks become infeasible for representing the probability distributions. In this paper we introduce an extension to Bayesian networks to handle massive sets of hierarchical data in a reasonable amount of time and space. The proposed model achieves perfect precision of 1.0 and high recall of 0.93 when used as a multi-label classifier for the annotation of mass spectrometry data. On another data set of 1.5 billion search logs provided by CareerBuilder.com, the model was able to predict latent semantic relationships between search keywords with accuracy up to 0.80.
Decision making is often based on Bayesian networks. The building blocks of Bayesian networks are their conditional probability tables (CPTs). These tables are obtained by parameter estimation methods, or they are elicited from subject matter experts (SMEs). Some of these knowledge representations are insufficient approximations. Using knowledge fusion of cause and effect observations leads to better predictive decisions. We propose three new methods to generate CPTs, which work even when only soft evidence is provided. The first two are novel ways of mapping conditional expectations to the probability space. The third is a column extraction method, which obtains CPTs from nonlinear functions such as the multinomial logistic regression. Case studies on military effects and burnt forest desertification have demonstrated that CPTs derived in this way have highly reliable predictive power, including superiority over the CPTs obtained from SMEs. In this context, new quality measures for determining the goodness of a CPT and for comparing CPTs with each other have been introduced. The predictive power and enhanced reliability of decision making based on the novel CPT generation methods presented in this paper have been confirmed and validated within the context of the case studies.
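A minimal sketch of the column-extraction idea, under assumptions of my own (a toy integer encoding of parent states and illustrative softmax weights; the paper's exact mapping is not reproduced): each parent configuration produces one CPT column from a multinomial logistic (softmax) model.

```python
from math import exp
from itertools import product

def cpt_from_logistic(weights, parent_states):
    """Sketch: build a CPT where each column P(child | parents) is the
    softmax of linear scores. `weights[c]` holds one coefficient per parent
    for child state c; `parent_states` lists each parent's cardinality.
    Encoding and weights are illustrative assumptions."""
    cpt = {}
    for config in product(*[range(k) for k in parent_states]):
        scores = [sum(w * x for w, x in zip(weights[c], config))
                  for c in range(len(weights))]
        z = sum(exp(s) for s in scores)
        cpt[config] = [exp(s) / z for s in scores]   # one CPT column
    return cpt
```

By construction every extracted column is a proper probability distribution, which is the property a CPT must satisfy.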
The scientific community is becoming more and more interested in research that applies the mathematical formalism of quantum theory to model human decision-making. In this paper, we provide the theoretical foundations of the quantum approach to cognition that we developed in Brussels. These foundations rest on the results of two decades of studies on the axiomatic and operational-realistic approaches to the foundations of quantum physics. The deep analogies between the foundations of physics and cognition lead us to investigate the validity of quantum theory as a general and unitary framework for cognitive processes, and the empirical success of the Hilbert space models derived by such investigation provides a strong theoretical confirmation of this validity. However, two situations in the cognitive realm, 'question order effects' and 'response replicability', indicate that even the Hilbert space framework could be insufficient to reproduce the collected data. This does not mean that the mentioned operational-realistic approach would be incorrect, but simply that a larger class of measurements would be in force in human cognition, so that an extended quantum formalism may be needed to deal with all of them. As we will explain, the recently derived 'extended Bloch representation' of quantum theory (and the associated 'general tension-reduction' model) precisely provides such an extended formalism, while remaining within the same unitary interpretative framework.
In this paper we propose an extension to Fuzzy Cognitive Maps (FCMs) that aims at aggregating a number of reasoning tasks into one parallel run. The described approach consists of replacing real-valued activation levels of concepts (and, further, influence weights) by random variables. Such an extension, supported by the implemented software tool, allows for determining ranges reached by concept activation levels, sensitivity analysis, as well as statistical analysis of multiple reasoning results. We replace the multiplication and addition operators appearing in the FCM state equation by appropriate convolutions applicable to discrete random variables. To make the model computationally feasible, it is further augmented with aggregation operations for discrete random variables. We discuss four implemented aggregators and report the results of preliminary tests.
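The operator replacement described above can be sketched as follows, under the simplifying assumptions that activation levels are independent discrete random variables stored as {value: probability} dicts and that the squashing function and aggregation steps are omitted:

```python
def scale(dist, w):
    """Distribution of w*X: a scalar influence weight rescales each
    support value of the concept's activation distribution."""
    return {w * x: p for x, p in dist.items()}

def convolve_sum(pa, pb):
    """Distribution of X+Y for independent discrete RVs: the convolution
    that replaces '+' in the FCM state equation."""
    out = {}
    for xa, p in pa.items():
        for xb, q in pb.items():
            out[xa + xb] = out.get(xa + xb, 0.0) + p * q
    return out
```

The total weighted influence on a concept from two predecessors would then be, e.g., `convolve_sum(scale(a1, w1), scale(a2, w2))`; note that without the aggregation operations mentioned in the abstract, the support of the result grows with each step.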
Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for the NLI task. In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a matching-LSTM that performs word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. Our experiments on the SNLI corpus show that our model outperforms the state of the art, achieving an accuracy of 86.1% on the test data.
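The core of word-by-word matching can be illustrated with plain dot-product attention (a simplification: the paper's model uses learned attention parameters and an LSTM over the matching results, neither of which is reproduced here). For each hypothesis word vector we attend over the premise word vectors and form an attention-weighted premise summary:

```python
from math import exp

def word_by_word_attention(premise, hypothesis):
    """Sketch of word-by-word matching via dot-product attention:
    `premise` and `hypothesis` are lists of equal-length word vectors.
    Returns one attention-weighted premise summary per hypothesis word."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    summaries = []
    for h in hypothesis:
        scores = [dot(h, p) for p in premise]
        m = max(scores)                          # stabilize the softmax
        weights = [exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        summary = [sum(w * p[d] for w, p in zip(weights, premise))
                   for d in range(len(premise[0]))]
        summaries.append(summary)
    return summaries
```

A hypothesis word that strongly matches one premise word receives a summary dominated by that word, which is the signal a matching-LSTM could then accumulate.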
We study abduction in First Order Horn logic theories where all atoms can be abduced and we are looking for preferred solutions with respect to the objective functions cardinality minimality, Coherence, or Weighted Abduction. We represent this reasoning problem in Answer Set Programming (ASP), in order to obtain a flexible framework for experimenting with global constraints and objective functions, and to test the boundaries of what is possible with ASP, because realizing this problem in ASP is challenging as it requires value invention and equivalence between certain constants, since the Unique Names Assumption does not hold in general. To permit reasoning in cyclic theories, we formally describe fine-grained variations of limiting Skolemization. We evaluate our encodings and extensions experimentally on the ACCEL benchmark for plan recognition in Natural Language Understanding. Our encodings are publicly available and modular, and our approach is more efficient than state-of-the-art solvers on the ACCEL benchmark. We identify term equivalence as a main instantiation bottleneck, and experiment with on-demand constraints that were used to eliminate the same bottleneck in state-of-the-art solvers and make them applicable for larger datasets. Surprisingly, experiments show that this method is beneficial only for cardinality minimality with our ASP encodings.
We consider data in the form of pairwise comparisons of n items, with the goal of precisely identifying the top k items for some value of k < n, or alternatively, recovering a ranking of all the items. We consider a simple counting algorithm that ranks the items in order of the number of pairwise comparisons won, and show it has three important and useful features: (a) Computational efficiency: the simplicity of the method leads to speed-ups of several orders of magnitude in computation time as compared to prior work; (b) Robustness: our theoretical guarantees make no assumptions on the pairwise-comparison probabilities, while prior work is restricted to the specific BTL model and performs poorly if the data is not true to it; and (c) Optimality: we show that up to constant factors, our algorithm achieves the information-theoretic limits for recovering the top-k subset. Finally, we extend our results to obtain sharp guarantees for approximate recovery under the Hamming distortion metric.
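The counting algorithm itself is simple enough to state directly; a minimal sketch (with a tie-breaking rule by item index, which is my own addition for determinism):

```python
def top_k_by_wins(n, comparisons, k):
    """Counting algorithm sketch: rank items 0..n-1 by the number of
    pairwise comparisons won. `comparisons` is a list of (winner, loser)
    pairs; ties are broken by item index for determinism."""
    wins = [0] * n
    for winner, _loser in comparisons:
        wins[winner] += 1
    order = sorted(range(n), key=lambda i: (-wins[i], i))
    return order[:k], order
```

Counting wins takes a single pass over the comparisons plus one sort, which is the source of the computational-efficiency advantage the abstract claims over likelihood-based BTL fitting.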
We propose a way of extracting and aggregating per-move evaluations from sets of Go game records. The evaluations capture different aspects of the games, such as played patterns or statistics of sente/gote sequences. Using machine learning algorithms, the evaluations can be utilized to predict different relevant target variables. We apply this methodology to predict the strength and playing style of the player (e.g., territoriality or aggressiveness) with good accuracy. We propose a number of possible applications, including aiding in Go study, seeding real-world ranks of internet players, or tuning of Go-playing programs.
Ian Murdock, the founder of the Debian Linux operating system and the creator of apt-get, has passed away. Yes, it is very sad to announce that Ian Murdock is no longer with us. His death has touched the entire software community. He was just 42. The announcement of Murdock's death came via a blog post on the Docker website, where Murdock was working as a member of the technical
from The Hacker News http://ift.tt/1SmWlZ1
via IFTTT
Google appears to be dropping Oracle's Java application programming interfaces (APIs) from future versions of its Android mobile operating system, switching to an open-source alternative instead. Google will make use of OpenJDK – an open-source version of Oracle's Java Development Kit (JDK) – for future Android builds. This was first highlighted by a "mysterious Android
from The Hacker News http://ift.tt/1RRrvbq
via IFTTT
After weeks of rickrolls and rubber ducks, Anonymous are finally ready to step into the real world. According to new claims made on Twitter earlier this ...
from Google Alert - anonymous http://ift.tt/1Smu9p4
via IFTTT
Hi Stefan and all, > See the "CWDIllegalInDllSearchPath" setting introduced with KB2264107 > about 5 years ago, after ACROS finally got enough attention for the > vulnerability first published as CVE-2000-0854 (that was 15 years ago, > but the vulnerability is still present in ALL installation programs): > there were^Ware applications that relied^Wy on loading DLLs from the > CWD, so Microsoft CAN'T exclude CWD from the PATH. > Microsoft can only offer support to exclude the CWD from the DLL search > order: developers can call SetDllDirectory(""), administrators can add > the global setting "CWDIllegalInDllSearchPath" or add this setting for > individual programs. While we finally did get CVE-2000-0854 the overdue attention, we apparently didn't promote this enough: http://ift.tt/1KWK7kU (presented at Source Boston in 2012). So now you'll have to do it - good luck :) BTW, Stefan, soon you'll be able to create your own patches for these, and many other bugs, with http://0patch.com. You're welcome. Cheers, Mitja Mitja Kolsek, CEO / @mkolsek ACROS, d.o.o. Makedonska ulica 113, SI - 2000 Maribor, Slovenia Tel +386.2.3000.280 Fax +386.2.3000.282 Web http://ift.tt/1MGLKkb Blg http://ift.tt/1OkGA34 Twt @acrossecurity ACROS Security: Finding Your Digital Vulnerabilities Before Others Do
## Introduction
Affected Product: Netduma R1 Router
Affected Version(s): 1.03.4 and 1.03.5
Link: http://ift.tt/1QZb9xo
Vendor Website: https://netduma.com/
Vulnerability Type: CSRF
Remote Exploitable: Yes
Reported to vendor: 11/19/2015
Disclosed to public: 12/29/2015
Credits: @joshchaney

## Vulnerability Summary
There is no CSRF protection for any administrative actions, which would allow an attacker to modify router settings or reboot the router by getting the victim to visit an attacker-controlled website.

## Proof of Concept
Reboot router:

## Report Timeline
11/19/2015 Informed vendor about issue through email
11/29/2015 Tweeted to vendor about issue
11/30/2015 Vendor tweeted that they would respond to email about issue
12/07/2015 Emailed known customer about issue who forwarded email to CEO
12/08/2015 CEO responded to email explaining he had passed the information to the lead developer
12/29/2015 Disclosed to public for lack of acknowledgement of issue
Hi, I need to send push notification to all the registered tokens via Rules, but the only rule action provided by the module allows only a list of integers ...
from Google Alert - anonymous http://ift.tt/1Ugkhf4
via IFTTT
The non-profit organization behind TOR – the largest online anonymity network that allows people to hide their real identity online – will soon be launching a "Bug Bounty Program" for researchers who find loopholes in Tor apps. The bounty program was announced during the recurring 'State of the Onion' talk by Tor Project at Chaos Communication Congress held in Hamburg, Germany. Bug
from The Hacker News http://ift.tt/1VprPNJ
via IFTTT
North Korea has its own homegrown computer operating system that looks remarkably like Apple's OS X, which not only prevents potential foreign hacking attempts but also provides extensive surveillance capabilities. Two German researchers have just conducted an in-depth analysis of the secretive state's operating system and found that the OS does more than what is known about it.
from The Hacker News http://ift.tt/1MGp5og
via IFTTT
What surrounds a hotbed of star formation? In the case of the Orion Nebula -- dust. The entire Orion field, located about 1600 light years away, is inundated with intricate and picturesque filaments of dust. Opaque to visible light, dust is created in the outer atmosphere of massive cool stars and expelled by a strong outer wind of particles. The Trapezium and other forming star clusters are embedded in the nebula. The intricate filaments of dust surrounding M42 and M43 appear brown in the featured image, while central glowing gas is highlighted in red. Over the next few million years much of Orion's dust will be slowly destroyed by the very stars now being formed, or dispersed into the Galaxy. via NASA http://ift.tt/1PuLd9H
Hello list! There are multiple vulnerabilities in Mobile Safari: Denial of Service and Cross-Site Scripting vulnerabilities. In the middle of December I checked all the exploits for different browsers that I have published, and kept unpublished, since 2006 against Mobile Safari for iOS 6.0.1 and 8.4.1. This is the first part of the vulnerabilities.
Title: Local root vulnerability in DeleGate v9.9.13 Author: Larry W. Cashdollar, @_larry0 Date: 2015-12-17 Advisory: http://ift.tt/1MAvS2B Download Sites: http://ift.tt/1YEikzv http://ift.tt/1MAvSzT Vendor: National Institute of Advanced Industrial Science and Technology Vendor Notified: 2015-12-17 Vendor Contact: y.sato@delegate.org ysato@etl.go.jp Description: DeleGate is a multipurpose proxy server which relays various application protocols on TCP/IP or UDP/IP, including HTTP, FTP, Telnet, NNTP, SMTP, POP, IMAP, LPR, LDAP, ICP, DNS, SSL, Socks, and more. DeleGate mediates communication between servers and clients where direct communication is impossible, inefficient, or inconvenient. Vulnerability: Installation of delegate 9.9.13 sets some binaries setuid root, at least one of these binaries can be used to escalate the privileges of a local user. The binary dgcpnod creates a node allowing a local unprivileged user to create files anywhere on disk. By creating a file in /etc/cron.hourly a local user can execute commands as root. Installation of software via source or binary distribution with option to not run as root results in a script set-subin.sh to run setting the setuid bit on four binaries. In Linux distributions where this software is part of the package list these binaries are not setuid root. (archlinux) From documentation http://ift.tt/1VoVUwE (translated to english): Go is included in the binary distribution, or DGROOT that you can build from the source to the location of preference, and then change the name if necessary. This is the DgRoot. In addition, if needed, you can rename the executable file of DeleGate to the name of the preference. This is the DgExe. "In Unix version subin in if you want to use "(such as when using a privileged port), do the following. (3-2uk) $ cd DgRoot / subin $ Sh setup-subin.sh larry@f4ult:~/dg9_9_13/DGROOT/subin$ ls -l total 1916 -r-sr-
Thermal Radiator Rotary Joint (TRRJ) Repositioning for Alpha Magnetic Spectrometer (AMS)-02: Due to the high negative beta angle, the AMS team requested that the Starboard Thermal Radiator Rotary Joint (S-TRRJ) be moved to a better angle to provide a more optimal temperature for the Transition Radiation Detector (TRD) pump. Ground teams will continue to monitor the TRD pump temperature, and will be required to activate the pump if the temperature reaches 7° Celsius, resulting in a loss of science. AMS-02 has collected and analyzed billions of cosmic ray events, and identified 9 million of these as electrons or positrons (anti-matter). The number of high-energy positrons increases steadily rather than decaying, which conflicts with theoretical models and indicates a yet-to-be-identified source of positrons. Researchers also observed a plateau in the positron growth curve and need additional data to determine why. Results suggest that high-energy positrons and cosmic ray electrons may come from different and mysterious sources. Solving the origin of cosmic rays and antimatter increases understanding of our galaxy. ISS RapidScat: The RapidScat payload, located on the nadir Columbus Exposed Facility Unit (EFU), went into a Digital Interface Bridge (DIB)-only mode on 24 December, with no science collection or antenna spinning, due to a combination of the high negative beta and the ISS attitude with a yaw bias for Service Module shadowing. Ground teams reactivated RapidScat today to the nominal wind-gathering observation mode. ISS RapidScat is a space-based scatterometer that measures wind speed and direction over the ocean, and is useful for weather forecasting, hurricane monitoring, and observations of large-scale climate phenomena. The ISS RapidScat instrument enhances measurements from other international scatterometers by cross-checking their data, and demonstrates a unique way to replace an instrument aboard an aging satellite.
Sprint Ultrasound: Kopra performed his Flight Day (FD) 14 thigh and calf ultrasound scans with assistance from Kelly and guidance from the Sprint ground team. Ultrasound scans are used to evaluate spaceflight-induced changes in muscle volume. The investigation evaluates the use of high intensity, low volume exercise training to minimize loss of muscle, bone, and cardiovascular function in ISS crewmembers during long-duration missions. Upon completion of this study, investigators expect to provide an integrated resistance and aerobic exercise training protocol capable of maintaining muscle, bone and cardiovascular health while reducing total exercise time over the course of a long-duration space flight. This will provide valuable information in support of the long term goal of protecting human fitness for even longer space exploration missions. Skin-B: Peake performed his first Skin-B activity, completing Corneometer measurements of the hydration level of the stratum corneum (outer layer of the skin), Tewameter measurements of the skin barrier function, and Visioscan measurements of skin surface topography. The European Space Agency (ESA) Skin-B investigation aims to improve the understanding of skin aging, which is greatly accelerated in space. The data will also be used to verify the results from previous SkinCare investigation testing on the ISS. Sleep Actiwatch Downlink and Configuration: Kelly downloaded data from his and Kornienko’s Actiwatch Spectrums and configured the devices to continue collecting data. The actiwatches have a photodiode that measures ambient light and an accelerometer to measure the movement of the arm or leg to which the watch is attached. The actiwatch data recorded on the watch supports the Sleep ISS-12 experiment which assesses the effects of space flight and ambient light exposure on sleep during a year-long mission on the ISS.
Education Payloads Operations (EPO) Destination Space: Peake recorded video that will provide the raw film footage for videos created on the ground for use in Destination Space educational shows and workshops for school groups and families.

Individualized Real-Time Neurocognitive Assessment Toolkit for Space Flight Fatigue (Cognition): This afternoon Peake will perform his FD 13 session of the Cognition experiment. The investigation is a battery of tests that measures how spaceflight-related physical changes, such as microgravity and lack of sleep, can affect cognitive performance. Cognition includes ten brief computerized tests that cover a wide range of cognitive functions and provides immediate feedback on current and past test results. The software allows for real-time measurement of cognitive performance while in space.

Biochemical Profile and Cardio Ox: Kopra performed his FD 15 blood and urine collections for the Biochem Profile and Cardio Ox investigations. The Biochemical Profile experiment tests blood and urine samples obtained from astronauts before, during, and after spaceflight. Specific proteins and chemicals in the samples are used as biomarkers, or indicators of health. Post-flight analysis yields a database of samples and test results which scientists can use to study the effects of spaceflight on the body. The objective of Cardio Ox is to determine whether biological markers of oxidative and inflammatory stress are elevated during and after space flight and whether this results in an increased, long-term risk of atherosclerosis in astronauts.

Microbiome and Salivary Markers: Blood and saliva samples were collected from Kopra to support the Microbiome, Telomeres and Salivary Markers investigations. Microbiome investigates the impact of space travel on both the human immune system and an individual’s microbiome (the collection of microbes that live in and on the human body at any given time).
Telomeres investigates how telomeres and telomerase are affected by space travel. Salivary Markers data will be used to identify any risks of an adverse health event in crewmembers due to an impaired immune system.

Portable Emergency Provisions (PEPS) Inspection: Peake conducted a regular inspection of the Portable Fire Extinguisher (PFE), Extension Hose Tee Kit (EHTK), Portable Breathing Apparatus (PBA), and Pre-Breathe Masks. Pre-Breathe Masks are not emergency equipment, but have similar maintenance requirements and are included in this inspection.

Extravehicular Activity (EVA) Preparation: Kelly configured a lithium-ion battery charger and initiated an Extravehicular Mobility Unit (EMU) Long Life Battery (LLB) charge cycle. The activity was performed in preparation for the Sequential Shunt Unit (SSU) EVA planned for January 15th.

Today’s Planned Activities
All activities were completed unless otherwise noted.
- Morning Inspection
- SM ПСС (Caution & Warning Panel) Test
- SLEEP Questionnaire
- Morning Inspection, […]
from ISS On-Orbit Status Report http://ift.tt/1OqI1IW
via IFTTT
Washington State Department of Corrections (DoC) is facing an investigation after it released around 3,200 prisoners early since 2002, when a bug was introduced in the software used to calculate time credits for inmates' good behavior. The software glitch led to a miscalculation of the sentence reductions that US prisoners were receiving for their good behavior. Over the next 13
from The Hacker News http://ift.tt/1JGHxxc
via IFTTT
A former employee of Russian search engine Yandex allegedly stole the source code and key algorithms for its search engine and then attempted to sell them on the black market to fund his own startup. Russian publication Kommersant reports that Dmitry Korobov downloaded a type of software nicknamed "Arcadia" from Yandex's servers, which contained highly critical information, including
from The Hacker News http://ift.tt/1Zz1IWU
via IFTTT
ronaldslasfhter Get Handmade word press website Design only in $249. call +1-800-219-0366 visit: https://t.co/3o7Ek7Lvaz
San Jose, CA
https://t.co/3o7Ek7Lvaz
Following: 33 - Followers: 0
December 29, 2015 at 04:58AM via Twitter http://twitter.com/ronaldslasfhter
Have you recently purchased a Windows computer? Congratulations! Your new Windows computer has a built-in disk encryption feature that is turned on by default in order to protect your data in case your device is lost or stolen. Moreover, in case you lose your encryption keys, don't worry: Microsoft has a copy of your Recovery Key. But wait! If Microsoft already has your Disk
from The Hacker News http://ift.tt/1RP91bE
via IFTTT
The Adobe Flash Player just said goodbye to the year with another bunch of vulnerability patches. Adobe released an out-of-band security update on Monday to address nineteen (19) vulnerabilities in its Flash Player, including one (CVE-2015-8651) that is being exploited in the wild. All the programming loopholes could be abused to execute malicious code (here, a malicious Flash file) on a
from The Hacker News http://ift.tt/1PtSc2u
via IFTTT
Nowadays, hospitals are ubiquitous and integral to modern society. Patients flow in and out of a veritable whirlwind of paperwork, consultations, and potential inpatient admissions, through an abstracted system that is not without flaws. One of the biggest flaws in the medical system is perhaps an unexpected one: the patient alarm system. One longitudinal study reported an 88.8% rate of false alarms, with other studies reporting numbers of similar magnitude. These false alarm rates lead to a number of deleterious effects that manifest in a significantly lower standard of care across clinics.
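To make the false-alarm figure concrete, a short Bayes-rule calculation shows how a high false-alarm fraction emerges even from a fairly accurate monitor. The base rates below are illustrative assumptions of mine, not values from the study:

```python
# Illustrative Bayes-rule calculation. The base rates below are assumed
# for the sake of the example; they are NOT taken from the cited study.
p_event = 0.01                 # assumed prior: 1% of intervals hold a true event
p_alarm_given_event = 0.99     # assumed sensitivity of the monitor
p_alarm_given_no_event = 0.08  # assumed per-interval false-positive rate

# total probability of an alarm firing in a given interval
p_alarm = (p_alarm_given_event * p_event
           + p_alarm_given_no_event * (1 - p_event))

# posterior probability that an alarm reflects a real event
p_event_given_alarm = p_alarm_given_event * p_event / p_alarm
print(round(p_event_given_alarm, 3))   # → 0.111, i.e. ~88.9% of alarms are false
```

Even a monitor with 99% sensitivity and a modest 8% false-positive rate produces alarms that are false almost 89% of the time when true events are rare, which is the order of magnitude the study reports.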
This paper discusses a model-based probabilistic inference approach to identifying variables at a detection level. We design a generative model that complies with an overview of human physiology and perform approximate Bayesian inference. One primary goal of this paper is to justify a Bayesian modeling approach to increasing robustness in a physiological domain.
We use three data sets provided by Physionet, a research resource for complex physiological signals, in the form of the Physionet 2014 Challenge set-p1 and set-p2, as well as the MGH/MF Waveform Database. On the extended data set our algorithm is on par with the other top six submissions to the Physionet 2014 challenge.
We present a domain-general account of causation that applies to settings in which macro-level causal relations between two systems are of interest, but the relevant causal features are poorly understood and have to be aggregated from vast arrays of micro-measurements. Our approach generalizes that of Chalupka et al. (2015) to the setting in which the macro-level effect is not specified. We formalize the connection between micro- and macro-variables in such situations and provide a coherent framework describing causal relations at multiple levels of analysis. We present an algorithm that discovers macro-variable causes and effects from micro-level measurements obtained from an experiment. We further show how to design experiments to discover macro-variables from observational micro-variable data. Finally, we show that under specific conditions, one can identify multiple levels of causal structure. Throughout the article, we use a simulated neuroscience multi-unit recording experiment to illustrate the ideas and the algorithms.
This paper defines adversarial reasoning as computational approaches to inferring and anticipating an enemy's perceptions, intents and actions. It argues that adversarial reasoning transcends the boundaries of game theory and must also leverage such disciplines as cognitive modeling, control theory, AI planning and others. To illustrate the challenges of applying adversarial reasoning to real-world problems, the paper explores the lessons learned in the CADET - a battle planning system that focuses on brigade-level ground operations and involves adversarial reasoning. From this example of current capabilities, the paper proceeds to describe RAID - a DARPA program that aims to build capabilities in adversarial reasoning, and how such capabilities would address practical requirements in Defense and other application areas.
This paper gives an overview of recent progress in the brain inspired computing field with a focus on implementation using emerging memories as electronic synapses. Design considerations and challenges such as requirements and design targets on multilevel states, device variability, programming energy, array-level connectivity, fan-in/fanout, wire energy, and IR drop are presented. Wires are increasingly important in design decisions, especially for large systems, and cycle-to-cycle variations have large impact on learning performance.
Vehicles are becoming more and more connected. This opens up a larger attack surface, which affects not only the passengers inside vehicles but also the people around them. These vulnerabilities exist because modern systems are built on the comparatively less secure and old CAN bus framework, which lacks even basic authentication. Since a new protocol can only help future vehicles and not older ones, our approach treats the issue as a data analytics problem and uses machine learning techniques to secure cars. We develop a Hidden Markov Model to detect anomalous states from real data collected from vehicles. Using this model, we are able to detect anomalies and issue alerts while a vehicle is in operation. Our model could be integrated as a plug-and-play device in both new and old cars.
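The detection idea can be sketched with a toy discrete-observation HMM: score a window of messages under a model of normal traffic and flag windows whose likelihood is unusually low. Everything below (states, symbols, probabilities) is invented for illustration; the paper's actual model and CAN data are not reproduced here.

```python
import math

# Toy 2-state HMM over 3 observation symbols; all numbers are illustrative.
A = [[0.9, 0.1],          # state-transition probabilities
     [0.2, 0.8]]
B = [[0.7, 0.2, 0.1],     # emission probabilities per state
     [0.1, 0.3, 0.6]]
pi = [0.5, 0.5]           # initial state distribution

def log_likelihood(obs):
    """Scaled forward algorithm: returns log P(obs | model)."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
    log_p = math.log(sum(alpha))
    alpha = [a / sum(alpha) for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(2)) * B[t][o]
                 for t in range(2)]
        log_p += math.log(sum(alpha))
        alpha = [a / sum(alpha) for a in alpha]
    return log_p

normal_window = [0, 0, 1, 0, 0, 2, 0, 0]     # plausible "normal" traffic
anomalous_window = [1, 1, 1, 1, 1, 1, 1, 1]  # unlikely under the model

# a window scoring below a threshold learned from normal data would be flagged
print(log_likelihood(normal_window) > log_likelihood(anomalous_window))  # → True
```

In a deployed system the threshold would be calibrated on held-out normal traces so that a chosen false-alarm rate is not exceeded.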
Multi-relational learning has received much attention from researchers in various research communities. Most existing methods either suffer from superlinear per-iteration cost, or are sensitive to the given ranks. To address both issues, we propose a scalable core tensor trace norm Regularized Orthogonal Iteration Decomposition (ROID) method for full or incomplete tensor analytics, which can be generalized as a graph Laplacian regularized version by using auxiliary information or a sparse higher-order orthogonal iteration (SHOOI) version. We first establish the equivalence of the Schatten p-norm (0<p<\infty) of a low multi-linear rank tensor and that of its core tensor. Then we arrive at a much smaller matrix trace norm minimization problem. Finally, we develop two efficient augmented Lagrange multiplier algorithms to solve our problems with convergence guarantees. Extensive experiments using both real and synthetic datasets, even with only a few observations, verified both the efficiency and effectiveness of our methods.
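The equivalence the abstract invokes can be sketched as follows; this is a standard fact about Tucker decompositions with column-orthonormal factors, reconstructed here rather than quoted from the paper:

```latex
% If X = G x_1 U_1 x_2 ... x_N U_N with each U_n column-orthonormal, then
% every mode-n unfolding factors as
X_{(n)} \;=\; U_n\, G_{(n)} \bigl(U_N \otimes \cdots \otimes U_{n+1} \otimes U_{n-1} \otimes \cdots \otimes U_1\bigr)^{\top},
% and multiplication by matrices with orthonormal columns preserves
% singular values, hence for 0 < p < \infty
\|\mathcal{X}\|_{S_p} \;=\; \|\mathcal{G}\|_{S_p}.
% This is what lets the tensor trace-norm problem shrink to a problem
% over the (much smaller) core tensor G.
```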
Several algorithms and tools have been developed to (semi-)automate the process of glycan identification by interpreting mass spectrometric data. However, each has limitations when annotating MSn data with thousands of MS spectra using uncurated public databases. Moreover, the existing tools are not designed to manage MSn data where n > 2. We propose a novel software package to automate the annotation of tandem MS data. This software consists of two major components. The first is a free, semi-automated MSn data interpreter called the Glycomic Elucidation and Annotation Tool (GELATO). This tool extends and automates the functionality of existing open source projects, namely GlycoWorkbench (GWB) and GlycomeDB. The second is a machine learning model called the Smart Annotation Enhancement Graph (SAGE), which learns the behavior of glycoanalysts to select annotations generated by GELATO that emulate human interpretation of the spectra.
Bayesian matrix completion has been studied based on a low-rank matrix factorization formulation with promising results. However, little work has been done on Bayesian matrix completion based on the more direct spectral regularization formulation. We fill this gap by presenting a novel Bayesian matrix completion method based on spectral regularization. In order to circumvent the difficulties of dealing with the orthonormality constraints of singular vectors, we derive a new equivalent form with relaxed constraints, which then leads us to design an adaptive version of spectral regularization feasible for Bayesian inference. Our Bayesian method requires no parameter tuning and can infer the number of latent factors automatically. Experiments on synthetic and real datasets demonstrate encouraging results on rank recovery and collaborative filtering, with notably good results for very sparse matrices.
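For orientation, the spectral (nuclear-norm) regularized matrix completion formulation that the method makes Bayesian is typically written as follows; the notation is standard and mine, not necessarily the paper's:

```latex
% Nuclear-norm (spectral) regularized matrix completion, standard form:
\min_{X}\;\tfrac{1}{2}\,\bigl\|P_{\Omega}(Y - X)\bigr\|_F^2 \;+\; \lambda\,\|X\|_{*},
\qquad \|X\|_{*} \;=\; \sum_{i}\sigma_i(X),
% where P_Omega zeroes the unobserved entries of Y. Writing
% X = U diag(d) V^T and relaxing the orthonormality constraints on U, V
% is the step that makes the objective amenable to Bayesian inference.
```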
Once known to be used exclusively in the military domain, unmanned aerial vehicles (drones) have stepped up to become part of a new logistics method in the commercial sector called "last-mile delivery". In this novel approach, small unmanned aerial vehicles (UAVs), also known as drones, are deployed alongside trucks to deliver goods to customers in order to improve service quality or reduce transportation cost. This gives rise to a new variant of the traveling salesman problem (TSP), which we call the TSP with drone (TSP-D). In this article, we consider a variant of TSP-D where the main objective is to minimize the total transportation cost. We also propose two heuristics: "Drone First, Truck Second" (DFTS) and "Truck First, Drone Second" (TFDS), to effectively solve the problem. The former constructs the route for the drone first, while the latter constructs the route for the truck first. We solve a TSP to generate the route for the truck, and propose a mixed integer programming (MIP) formulation with different profit functions to build the route for the drone. Numerical results obtained on many instances with different sizes and characteristics are presented. Recommendations on promising algorithm choices are also provided.
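Both heuristics contain a plain TSP as a subproblem (the truck route). As a hedged illustration of that building block only, not of the paper's DFTS/TFDS procedures or its MIP, here is a nearest-neighbour tour construction:

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP construction: from the current stop, always visit the
    closest unvisited point next. A simple illustrative baseline, not the
    paper's algorithm."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# hypothetical depot (index 0) and three customers
stops = [(0, 0), (3, 0), (3, 1), (0, 1)]
print(nearest_neighbour_tour(stops))   # → [0, 3, 2, 1]
```

A TSP-D heuristic would then remove drone-eligible customers from such a truck tour and schedule drone sorties against what remains.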
We study mechanisms for candidate selection that seek to minimize the social cost, where voters and candidates are associated with points in some underlying metric space. The social cost of a candidate is the sum of its distances to each voter. Some of our work assumes that these points can be modeled on a real line, but other results of ours are more general.
A question closely related to candidate selection is that of minimizing the sum of distances for facility location. The difference is that in our setting there is a fixed set of candidates, whereas the large body of work on facility location seems to consider every point in the metric space to be a possible candidate. This gives rise to three types of mechanisms which differ in the granularity of their input space (voting, ranking and location mechanisms). We study the relationships between these three classes of mechanisms.
While it may seem that Black's 1948 median algorithm is optimal for candidate selection on the line, this is not the case. We give matching upper and lower bounds for a variety of settings. In particular, when candidates and voters are on the line, our universally truthful spike mechanism gives a [tight] approximation of two. When assessing candidate selection mechanisms, we seek several desirable properties: (a) efficiency (minimizing the social cost) (b) truthfulness (dominant strategy incentive compatibility) and (c) simplicity (a smaller input space). We quantify the effect that truthfulness and simplicity impose on the efficiency.
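As a concrete instance of the line setting (a toy example of mine, not the paper's spike mechanism): place voters and a fixed candidate set on the real line, select the candidate nearest the median voter, and compare its social cost to the optimum.

```python
def social_cost(candidate, voters):
    """Sum of distances from a candidate to every voter."""
    return sum(abs(candidate - v) for v in voters)

def nearest_to_median(candidates, voters):
    """Truthful rule for a fixed candidate set: pick the candidate closest
    to the median voter's location (illustrative, not the paper's mechanism)."""
    median = sorted(voters)[len(voters) // 2]
    return min(candidates, key=lambda c: abs(c - median))

voters = [0.0, 0.1, 0.2, 0.9, 1.0]
candidates = [0.15, 0.95]

chosen = nearest_to_median(candidates, voters)
optimal = min(candidates, key=lambda c: social_cost(c, voters))
print(chosen, optimal)   # → 0.15 0.15 (here the truthful rule is also optimal)
```

In general the two need not coincide; the paper's bounds quantify exactly how much social cost truthfulness and coarser input can force a mechanism to give up.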
look at the image library deployed with php, libgd, if you want to have some more fun. for a start, let's look at http://ift.tt/1OV0f5P http://ift.tt/1IzBgs2 cheers! mar77i
BREAKING: A misconfigured database has resulted in the exposure of around 191 million voter records, including voters' full names, home addresses, unique voter IDs, dates of birth, and phone numbers. The database was discovered on December 20th by Chris Vickery, a white-hat hacker, who was able to access over 191 million Americans’ personal identifying information (PII) that are just
from The Hacker News http://ift.tt/1JEvpwG
via IFTTT
Note: A big thanks to PyImageSearch reader, Sean McLeod, who commented on last week’s post and mentioned that I needed to make the FPS rate and the I/O latency topic more clear.
Increasing Raspberry Pi FPS with Python and OpenCV
In last week’s blog post we learned that by using a dedicated thread (separate from the main thread) to read frames from our camera sensor, we can dramatically increase the FPS processing rate of our pipeline. This speedup is obtained by (1) reducing I/O latency and (2) ensuring the main thread is never blocked, allowing us to grab the most recent frame read by the camera at any moment in time. Using this multi-threaded approach, our video processing pipeline is never blocked, thus allowing us to increase the overall FPS processing rate of the pipeline.
In fact, I would argue that it’s even more important to use threading on the Raspberry Pi 2 since resources (i.e., processor and RAM) are substantially more constrained than on modern laptops/desktops.
Again, our goal here is to create a separate thread that is dedicated to polling frames from the Raspberry Pi camera module. By doing this, we can increase the FPS rate of our video processing pipeline by 246%!
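The pattern itself is independent of the camera hardware. Here is a minimal, self-contained sketch of the same idea, with a slow simulated frame source standing in for the Pi camera; the class and timings are mine, for illustration, while the real implementation is the PiVideoStream class below:

```python
import threading
import time

class FrameGrabber:
    """Sketch of the threaded-polling pattern: a daemon thread keeps
    self.frame up to date while the main thread reads it without blocking.
    The 'camera' is simulated by a counter plus a sleep."""

    def __init__(self, delay=0.01):
        self.delay = delay       # simulated per-frame I/O latency
        self.frame = None
        self.stopped = False

    def start(self):
        threading.Thread(target=self._update, daemon=True).start()
        return self

    def _update(self):
        counter = 0
        while not self.stopped:
            time.sleep(self.delay)   # stand-in for the blocking camera read
            counter += 1
            self.frame = counter

    def read(self):
        # returns immediately with the most recent frame; never blocks
        return self.frame

    def stop(self):
        self.stopped = True

grabber = FrameGrabber().start()
deadline = time.time() + 2.0
while grabber.read() is None and time.time() < deadline:
    time.sleep(0.005)            # wait until the first frame arrives
frame = grabber.read()
grabber.stop()
print(frame is not None)         # → True
```

Because `read` never blocks, the main loop's throughput is decoupled from the source's I/O latency, which is exactly where the FPS gain in this post comes from.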
In fact, this functionality is already implemented inside the imutils package. To install
imutils
on your system, just use
pip
:
$ pip install imutils
If you already have
imutils
installed, you can upgrade to the latest version using this command:
$ pip install --upgrade imutils
We’ll be reviewing the source code to the
video
sub-package of
imutils
to obtain a better understanding of what’s going on under the hood.
To handle reading threaded frames from the Raspberry Pi camera module, let’s define a Python class named
PiVideoStream
:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread
import cv2

class PiVideoStream:
    def __init__(self, resolution=(320, 240), framerate=32):
        # initialize the camera and stream
        self.camera = PiCamera()
        self.camera.resolution = resolution
        self.camera.framerate = framerate
        self.rawCapture = PiRGBArray(self.camera, size=resolution)
        self.stream = self.camera.capture_continuous(self.rawCapture,
            format="bgr", use_video_port=True)

        # initialize the frame and the variable used to indicate
        # if the thread should be stopped
        self.frame = None
        self.stopped = False
Lines 2-5 handle importing our necessary packages. We’ll import both
PiCamera
and
PiRGBArray
to access the Raspberry Pi camera module. If you do not have the picamera Python module already installed (or have never worked with it before), I would suggest reading this post on accessing the Raspberry Pi camera for a gentle introduction to the topic.
On Line 8 we define the constructor to the
PiVideoStream
class. We can optionally supply two parameters here: (1) the
resolution
of the frames being read from the camera stream and (2) the desired frame rate of the camera module. We’ll default these values to
(320, 240)
and
32
, respectively.
Finally, Line 19 initializes the latest
frame
read from the video stream and a boolean variable used to indicate if the frame reading process should be stopped.
Next up, let’s look at how we can read frames from the Raspberry Pi camera module in a threaded manner:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread
import cv2

class PiVideoStream:
    def __init__(self, resolution=(320, 240), framerate=32):
        # initialize the camera and stream
        self.camera = PiCamera()
        self.camera.resolution = resolution
        self.camera.framerate = framerate
        self.rawCapture = PiRGBArray(self.camera, size=resolution)
        self.stream = self.camera.capture_continuous(self.rawCapture,
            format="bgr", use_video_port=True)

        # initialize the frame and the variable used to indicate
        # if the thread should be stopped
        self.frame = None
        self.stopped = False

    def start(self):
        # start the thread to read frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # keep looping infinitely until the thread is stopped
        for f in self.stream:
            # grab the frame from the stream and clear the stream in
            # preparation for the next frame
            self.frame = f.array
            self.rawCapture.truncate(0)

            # if the thread indicator variable is set, stop the thread
            # and release camera resources
            if self.stopped:
                self.stream.close()
                self.rawCapture.close()
                self.camera.close()
                return
Lines 22-25 define the
start
method which is simply used to spawn a thread that calls the
update
method.
The
update
method (Lines 27-41) continuously polls the Raspberry Pi camera module, grabs the most recent frame from the video stream, and stores it in the
frame
variable. Again, it’s important to note that this thread is separate from our main Python script.
Finally, if we need to stop the thread, Lines 38-40 handle releasing any camera resources.
Note: If you are unfamiliar with using the Raspberry Pi camera and the
picamera
module, I highly suggest that you read this tutorial before continuing.
Finally, let’s define two more methods used in the
PiVideoStream
class:
# import the necessary packages
from picamera.array import PiRGBArray
from picamera import PiCamera
from threading import Thread
import cv2

class PiVideoStream:
    def __init__(self, resolution=(320, 240), framerate=32):
        # initialize the camera and stream
        self.camera = PiCamera()
        self.camera.resolution = resolution
        self.camera.framerate = framerate
        self.rawCapture = PiRGBArray(self.camera, size=resolution)
        self.stream = self.camera.capture_continuous(self.rawCapture,
            format="bgr", use_video_port=True)

        # initialize the frame and the variable used to indicate
        # if the thread should be stopped
        self.frame = None
        self.stopped = False

    def start(self):
        # start the thread to read frames from the video stream
        Thread(target=self.update, args=()).start()
        return self

    def update(self):
        # keep looping infinitely until the thread is stopped
        for f in self.stream:
            # grab the frame from the stream and clear the stream in
            # preparation for the next frame
            self.frame = f.array
            self.rawCapture.truncate(0)

            # if the thread indicator variable is set, stop the thread
            # and release camera resources
            if self.stopped:
                self.stream.close()
                self.rawCapture.close()
                self.camera.close()
                return

    def read(self):
        # return the frame most recently read
        return self.frame

    def stop(self):
        # indicate that the thread should be stopped
        self.stopped = True
The
read
method simply returns the most recently read frame from the camera sensor to the calling function. The
stop
method sets the
stopped
boolean to indicate that the camera resources should be cleaned up and the camera polling thread stopped.
Now that the
PiVideoStream
class is defined, let’s create the
picamera_fps_demo.py
driver script:
# import the necessary packages
from __future__ import print_function
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
    help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
    help="Whether or not frames should be displayed")
args = vars(ap.parse_args())

# initialize the camera and stream
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))
stream = camera.capture_continuous(rawCapture, format="bgr",
    use_video_port=True)
Lines 2-10 handle importing our necessary packages. We’ll import the
FPS
class from last week so we can approximate the FPS rate of our video processing pipeline.
From there, Lines 13-18 handle parsing our command line arguments. We only need two optional switches here,
--num-frames
, which is the number of frames we’ll use to approximate the FPS of our pipeline, followed by
--display
, which is used to indicate if the frame read from our Raspberry Pi camera should be displayed to our screen or not.
Now we are ready to obtain results for a non-threaded approach:
# import the necessary packages
from __future__ import print_function
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
    help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
    help="Whether or not frames should be displayed")
args = vars(ap.parse_args())

# initialize the camera and stream
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))
stream = camera.capture_continuous(rawCapture, format="bgr",
    use_video_port=True)

# allow the camera to warmup and start the FPS counter
print("[INFO] sampling frames from `picamera` module...")
time.sleep(2.0)
fps = FPS().start()

# loop over some frames
for (i, f) in enumerate(stream):
    # grab the frame from the stream and resize it to have a maximum
    # width of 400 pixels
    frame = f.array
    frame = imutils.resize(frame, width=400)

    # check to see if the frame should be displayed to our screen
    if args["display"] > 0:
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame and update
    # the FPS counter
    rawCapture.truncate(0)
    fps.update()

    # check to see if the desired number of frames have been reached
    if i == args["num_frames"]:
        break

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
stream.close()
rawCapture.close()
camera.close()
Line 31 starts the FPS counter, allowing us to approximate the number of frames our pipeline can process in a single second.
We then start looping over frames read from the Raspberry Pi camera module on Line 34.
Lines 41-43 make a check to see if the
frame
should be displayed to our screen or not while Line 48 updates the FPS counter.
Finally, Lines 61-63 handle releasing any camera sources.
The code for accessing the Raspberry Pi camera in a threaded manner follows below:
# import the necessary packages
from __future__ import print_function
from imutils.video.pivideostream import PiVideoStream
from imutils.video import FPS
from picamera.array import PiRGBArray
from picamera import PiCamera
import argparse
import imutils
import time
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-n", "--num-frames", type=int, default=100,
    help="# of frames to loop over for FPS test")
ap.add_argument("-d", "--display", type=int, default=-1,
    help="Whether or not frames should be displayed")
args = vars(ap.parse_args())

# initialize the camera and stream
camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 32
rawCapture = PiRGBArray(camera, size=(320, 240))
stream = camera.capture_continuous(rawCapture, format="bgr",
    use_video_port=True)

# allow the camera to warmup and start the FPS counter
print("[INFO] sampling frames from `picamera` module...")
time.sleep(2.0)
fps = FPS().start()

# loop over some frames
for (i, f) in enumerate(stream):
    # grab the frame from the stream and resize it to have a maximum
    # width of 400 pixels
    frame = f.array
    frame = imutils.resize(frame, width=400)

    # check to see if the frame should be displayed to our screen
    if args["display"] > 0:
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

    # clear the stream in preparation for the next frame and update
    # the FPS counter
    rawCapture.truncate(0)
    fps.update()

    # check to see if the desired number of frames have been reached
    if i == args["num_frames"]:
        break

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
stream.close()
rawCapture.close()
camera.close()

# create a *threaded* video stream, allow the camera sensor to warmup,
# and start the FPS counter
print("[INFO] sampling THREADED frames from `picamera` module...")
vs = PiVideoStream().start()
time.sleep(2.0)
fps = FPS().start()

# loop over some frames...this time using the threaded stream
while fps._numFrames < args["num_frames"]:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 400 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=400)

    # check to see if the frame should be displayed to our screen
    if args["display"] > 0:
        cv2.imshow("Frame", frame)
        key = cv2.waitKey(1) & 0xFF

    # update the FPS counter
    fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
This code is very similar to the code block above, only this time we initialize and start the threaded
PiVideoStream
class on Line 68.
We then loop over the same number of frames as with the non-threaded approach, update the FPS counter, and finally print our results to the terminal on Lines 89 and 90.
Raspberry Pi FPS Threading Results
In this section we will review the results of using threading to increase the FPS processing rate of our pipeline by reducing the effects of I/O latency.
The results for this post were gathered on a Raspberry Pi 2:
Using the
picamera
module.
And a Logitech C920 camera (which is plug-and-play capable with the Raspberry Pi).
I also gathered results using the Raspberry Pi Zero. Since the Pi Zero does not have a CSI port (and thus cannot use the Raspberry Pi camera module), timings were only gathered for the Logitech USB camera.
I used the following command to gather results for the
picamera
module on the Raspberry Pi 2:
$ python picamera_fps_demo.py
Figure 1: Increasing the FPS processing rate of the Raspberry Pi 2.
As we can see from the screenshot above, using no threading obtained 14.46 FPS.
However, by using threading, our FPS rose to 226.67, an increase of over 1,467%!
But before we get too excited, keep in mind this is not a true representation of the FPS of the Raspberry Pi camera module — we are certainly not reading a total of 226 frames from the camera module per second. Instead, this speedup simply demonstrates that our
for
loop pipeline is able to process 226 frames per second.
This increase in FPS processing rate comes from decreased I/O latency. By placing the I/O in a separate thread, our main thread runs extremely fast — faster than the I/O thread is capable of polling frames from the camera, in fact. This implies that we are actually processing the same frame multiple times.
Again, what we are actually measuring is the number of frames our video processing pipeline can process in a single second, regardless if the frames are “new” frames returned from the camera sensor or not.
Using the current threaded scheme, we can process approximately 226.67 FPS using our trivial pipeline. This FPS number will go down as our video processing pipeline becomes more complex.
To demonstrate this, let’s insert a
cv2.imshow
call and display each of the frames read from the camera sensor to our screen. The
cv2.imshow
function is another form of I/O, only now we are both reading a frame from the stream and then writing the frame to our display:
$ python picamera_fps_demo.py --display 1
Figure 2: Reducing the I/O latency and improving the FPS processing rate of our pipeline using Python and OpenCV.
Using no threading, we reached only 14.97 FPS.
But by placing the frame I/O into a separate thread, we reached 51.83 FPS, an improvement of 246%!
To summarize the results, by placing the blocking I/O call in our main thread, we only obtained a very low 14.97 FPS. But by moving the I/O to an entirely separate thread our FPS processing rate increased (by decreasing the effects of I/O latency), bringing the FPS rate up to an estimated 51.83.
Simply put: when you are developing Python scripts on the Raspberry Pi 2 using the picamera module, move your frame reading to a separate thread to speed up your video processing pipeline.
As a matter of completeness, I've also run the same experiments from last week using a USB camera on the Raspberry Pi 2:
Figure 3: Obtaining 36.09 FPS processing rate using a USB camera and a Raspberry Pi 2.
With no threading, our pipeline obtained 22 FPS. But by introducing threading, we reached 36.09 FPS — an improvement of 64%!
Finally, I ran the fps_demo.py script on the Raspberry Pi Zero as well:
Figure 4: Since the Raspberry Pi Zero is a single core/single threaded machine, the FPS processing rate improvements are very small.
With no threading, we hit 6.62 FPS. And with threading, we improved marginally to 6.90 FPS, an increase of only 4%.
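For reference, the percentage improvements quoted throughout this post all follow from the same simple calculation on the measured FPS values:

```python
# The percentage improvements quoted in this post, computed from the
# measured FPS values: improvement = (after - before) / before * 100.
def speedup(before, after):
    return (after - before) / before * 100.0

print(round(speedup(14.46, 226.67), 1))  # picamera, no display: 1467.6
print(round(speedup(14.97, 51.83), 1))   # picamera with cv2.imshow: 246.2
print(round(speedup(22.00, 36.09), 1))   # USB camera on the Pi 2: 64.0
print(round(speedup(6.62, 6.90), 1))     # Raspberry Pi Zero: 4.2
```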
The reason for the small performance gain is simply that the Raspberry Pi Zero processor has only one core and one thread, so the same thread of execution must be shared among all processes running on the system at any given time.
Given the quad-core processor of the Raspberry Pi 2, suffice it to say the Pi 2 should be used for video processing.
Summary
In this post we learned how threading can be used to increase our FPS processing rate and reduce the effects of I/O latency on the Raspberry Pi.
Using threading allowed us to increase our video processing rate by a nice 246%; however, it's important to note that as the processing pipeline becomes more complex, the FPS processing rate will go down as well.
In next week's post, we'll create a Python class that incorporates last week's WebcamVideoStream and today's PiVideoStream into a single class, allowing new video processing blog posts on PyImageSearch to run on either a USB camera or a Raspberry Pi camera module without changing a single line of code!
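As a rough preview of how such a unified class might work (the real implementation arrives in next week's post, so treat the class layout and the usePiCamera parameter below as assumptions), it could simply delegate to the appropriate reader based on a flag. The stub classes here stand in for the real WebcamVideoStream and PiVideoStream:

```python
# Hypothetical sketch of a unified video stream: one class that delegates
# to either a USB-camera reader or a Pi-camera reader based on a flag.
# The stubs below stand in for the real threaded implementations.
class WebcamVideoStream:
    def __init__(self, src=0): self.name = "usb"
    def start(self): return self
    def read(self): return None
    def stop(self): pass

class PiVideoStream:
    def __init__(self, resolution=(320, 240), framerate=32): self.name = "picamera"
    def start(self): return self
    def read(self): return None
    def stop(self): pass

class VideoStream:
    def __init__(self, src=0, usePiCamera=False, resolution=(320, 240), framerate=32):
        # Choose the backend once at construction; callers then use the
        # same start()/read()/stop() interface regardless of camera type.
        if usePiCamera:
            self.stream = PiVideoStream(resolution=resolution, framerate=framerate)
        else:
            self.stream = WebcamVideoStream(src=src)
    def start(self): return self.stream.start()
    def read(self): return self.stream.read()
    def stop(self): self.stream.stop()

print(VideoStream(usePiCamera=True).stream.name)  # picamera backend selected
print(VideoStream().stream.name)                  # usb backend selected
```

The appeal of this design is that the calling script never changes: swapping cameras is a single constructor argument.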
Sign up for the PyImageSearch newsletter using the form below to be notified when the post goes live.