Thursday, July 27, 2017

BHUSA17: Behind the Plexiglass Curtain: Stats and the Stories from the BlackHat NOC

Bart Stump, Neil Wyler

Both of the presenters have been on the review boards for DefCon and BlackHat and have been working on the NOC (Network Operations Center) for many years. Additionally there are 21 industry professionals in the NOC.

They used Palo Alto Networks as their core firewall vendor, with 2Gb of bandwidth, fewer wired rooms and fewer APs. Everybody gets segmented to protect you as much as possible. You're paying a lot to be here, so availability is important, too. They are working with RSA and Gigamon this year for analytics.

Most of the gear is in the basement to keep it from being so hot and noisy in the NOC.

The NOC is now on display and has a wifi "cactus". You can look at them, but not come into the actual NOC.

Working with CenturyLink, PAN, RSA, Gigamon, Ruckus and Pwnie Express.

Hit the limit of their networking capacity for the first time this year. It was saturated for the first few hours Monday morning, when people were downloading Windows updates.

Last year, some changes were made to the network outside of their control and it caused a 4 hour outage. This year they had a much better lock down.

Found rogue access points - one in a plant!

They don't block anything or any DNS requests, because many demos and training sessions need to access malware sites.

Over 300,000 DNS queries were observed to domains known to be malicious or to host malware. Over 12,000 queries went to dynamically generated domains, and over 7,800 NEWLY seen domains were queried from here.

The top 2 sites visited were for Windows updates - as were the 5th, 7th, 8th and 10th. Apple and Ubuntu were hit hard, too. They advise users to patch before coming to this conference - many of these hits could be from expo systems, training, VMs, etc.

About 50% of the traffic was encrypted, down from last year. They did see one "VPN" connection that ran in the clear - oops! So, check that your VPN is actually encrypting!

Found a new version of Emotet, after seeing 404 errors from a site that kept returning different data sizes. This version was released on Tuesday and discovered at BlackHat on Wednesday.

There were 500K unique wireless MACs and 65K unique Bluetooth MACs (80% Apple). Many devices moved between the trusted BH wifi network and the open wifi. Next year people won't be allowed to take laptops in and out of the NOC and will use preimaged machines.

Discovered 94 ad hoc wifi networks, 55 APs on non-USA wifi channels and 17 Pineapple APs.

They see lots of old, unpatched OSes, out-of-date iOS and out-of-date apps. They see things like webcams where the authentication was encrypted but the video stream of the home security camera was not...

Going to use the BlackHat wifi? Recommend using VPN (NOTE: I do that at every conference or public wifi).

BHUSA17: Evolutionary Kernel Fuzzing

Richard Johnson

Johnson has been working in fuzzing for a while and releasing new tools over the last few years. His new tool will allow people to fuzz the Windows kernel without modifying the binaries.

Kernels are a critical attack surface, and modern mitigations usually rely on isolation and sandboxing. There are weaponized exploits against the kernel, but little progress in vulndev research.

Evolutionary fuzzing is not a new concept - it was first introduced at BlackHat 2006 (Sparks & Cunningham, Sidewinder), with lots of other papers, presentations and open source projects since.

Evolutionary fuzzing needs a fast tracing engine, fast logging and a fast evolutionary algorithm. It's highly desirable to be easy to use and portable. His new tool is useful out of the box! The first tools only targeted source code. American Fuzzy Lop (AFL) features a variety of mutation strategies, block coverage via compile-time instrumentation and a simplified approach to the genetic algorithm.

AFL has a UI and tracks edge transitions. Lots of demos followed.
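The core loop behind a coverage-guided evolutionary fuzzer like AFL can be sketched in a few lines. This is a toy illustration, not AFL itself: the `run_target` function here is a stand-in for real instrumented execution, and real fuzzers use far richer mutation strategies.

```python
import random

def run_target(data: bytes) -> frozenset:
    """Stand-in for instrumented execution: returns the set of
    edges (branch transitions) this input exercised."""
    edges = set()
    if len(data) > 0 and data[0] == ord('F'):
        edges.add("A->B")
        if len(data) > 1 and data[1] == ord('U'):
            edges.add("B->C")
            if len(data) > 2 and data[2] == ord('Z'):
                edges.add("C->CRASH")
    return frozenset(edges)

def mutate(data: bytes) -> bytes:
    """One random byte change -- AFL layers many more strategies."""
    if not data:
        return bytes([random.randrange(256)])
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 20000):
    corpus = [seed]                  # inputs kept because they added coverage
    seen_edges = set(run_target(seed))
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        edges = run_target(candidate)
        if edges - seen_edges:       # new coverage -> keep it (the "evolution")
            seen_edges |= edges
            corpus.append(candidate)
    return corpus, seen_edges

corpus, edges = fuzz(b"AAAA")
```

Inputs that reach new edges are retained and mutated further, so the corpus "evolves" toward deeper paths - the same feedback loop AFL applies with compile-time (or, for kernels, hardware-assisted) tracing.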

BHUSA2017: Free-Fall: Hacking Tesla from Wireless to CAN Bus

This is their first time going through all of the remote attacks on the Tesla in public. In September 2016 they successfully implemented a remote attack against the Tesla without physical access to the car.

There was an OLD WebKit used in QtCarBrowser on the Tesla. Tesla cars automatically scan for and connect to known SSIDs; there is a "Tesla Guest" network with password abcd123456 used at body shops and Superchargers. QtCarBrowser will automatically reload its current webpage and trigger their WebKit exploit. In cellular mode they can target phishing and mistyped URLs.

Tesla has since patched all of the vulnerabilities found by KeenLab. They had attacked a vuln in JSArray::sort(compareFunction), which they could use to leak some addresses. They took advantage of type confusion and an overlap in array storage: shift the array once in the compareFunction, copy back into JSC::JSArray::sort(), then unshift TWICE to trigger increaseVectorPrefixLength() and fastFree an arbitrary address (payload of JSValue-A).

They took advantage of the powerful CVE-2011-3928 for the leak, with a corrupted HTMLInputElement structure. Arbitrary address read/write: leak the JSCell address of a Uint32Array, get the buffer address from the JSCell, fastFree the address and define a new Uint32Array to achieve AAR/AAW.

Finally they got a shell from the browser, but it was a low-privilege account (browser uid=2222), so they needed to elevate their privileges. The kernel was very old, so there were many well-known kernel attacks - they were able to fully dump the kernel with a corrupted syscall entry in the syscall table.

A brief introduction to the gateway (gw/gtw): it's a PowerPC chip running an RTOS (most likely FreeRTOS), with an SD card. They were able to get their own code into the firmware. This can be done during an ECU upgrade (the details are fuzzy here, as the slides moved by really fast). Best: trigger the ECU upgrade by giving a command to the gw (0x08 - update trigger), get your own boot image loaded and get it to run with taskUpdate.

They then sent messages to other ECUs, including over the CAN bus using diagnosis (though the ability to do this is limited when the car is in drive mode). They could work around this by swapping the handlers. Still, some ECUs will not respond at all in drive mode, and some will notice the speed and disable dangerous functions if necessary. They then focused on the forwarding table to block the forwarding process. They could use UDS to unlock an ECU, because the seed/key for security access to the ECU is fixed.
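The fixed seed/key weakness can be illustrated abstractly. This is a hypothetical seed/key routine (KeenLab did not publish the actual algorithm): because the key is derived from the seed with a fixed transform and no per-vehicle secret, recovering that transform once unlocks every ECU.

```python
import os

def ecu_generate_seed() -> int:
    """ECU side of UDS security access: a fresh 16-bit challenge seed."""
    return int.from_bytes(os.urandom(2), "big")

def fixed_key_from_seed(seed: int) -> int:
    """Hypothetical fixed transform (here, XOR with a constant).
    With no per-vehicle secret mixed in, it is identical on every
    car -- so one reverse-engineering effort unlocks any ECU."""
    return (seed ^ 0x5AA5) & 0xFFFF

def ecu_check_key(seed: int, key: int) -> bool:
    """ECU verifies the tester's response to its seed."""
    return key == fixed_key_from_seed(seed)

# Attacker replays the known transform against any fresh seed:
seed = ecu_generate_seed()
unlocked = ecu_check_key(seed, fixed_key_from_seed(seed))
```

A sounder design would bind the key derivation to a per-vehicle secret (or use proper challenge-response crypto), so a seed observed on one car tells you nothing about another.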

They were able to get access via 3G/Wi-Fi, exploit the WebKit browser, root the in-vehicle systems, patch and disable AppArmor, bypass the ECU firmware integrity verification, reprogram ECUs and do dangerous things while the car was driving. Tesla was very responsive and appreciative about the disclosure and fixed the issues in 10 days. Browser security enhancements and firmware improvements also came out at the same time. There are now more restrictive AppArmor rules (yes, Tesla uses AppArmor instead of SELinux) restricting where you can run executables and where you can write files.


They additionally added code signing everywhere to protect the ECUs, but found some issues due to the ECU not being able to verify the signature itself. They reported these issues to Tesla at the end of June 2017, and all cars had the fixes by July 2017.

Cool video demonstrating the results. Apparently they had to use all of their hacks to do a fun music show with headlights going on and off (single lamp at a time) and doors opening and shutting.

Wednesday, July 26, 2017

BHUSA17: Fighting the Previous War

This talk is brought to us by the folks at Thinkst. Back in 2009, they worked on various attacks in the cloud. Attacks like taking extra resources from Amazon. Amazon limited each account to 20 machines, but then they'd get their 20 machines to each get 20 machines... and so on. Cloud is different and needs different thinking.

People still think of SaaS as "another webapp" or "just a Linux host", but it is very different and should be treated as such.

Footprinting is under-valued. While pentesting a network, they'd spend the bulk of the time finding all the machines - often long forgotten boxes.

Hardly anybody is setting up sendmail anymore, but their apps are sending emails. They use microservices for this, but where do the responsibilities live? Who is processing your email, and who is making sure the services you integrate with are behaving properly (or being used correctly)? (Reference: White Hats - Nepal: Bug Bounty from Uber.)

Canarytokens, a framework released in 2015, makes honeytokens easy to get. It can help you learn about attacks on your systems.
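The idea behind honeytokens is simple enough to sketch. This is a minimal illustration of the concept, not the real Canarytokens service: mint unique values that no legitimate workflow ever touches, scatter them around, and treat any access as a high-signal alert.

```python
import uuid

class HoneytokenRegistry:
    """Mint unique decoy values (fake AWS keys, decoy URLs, dummy DB
    rows) and record where each was planted; any later lookup of one
    is an alert, because nothing legitimate ever uses them."""
    def __init__(self):
        self._tokens = {}    # token -> where it was planted
        self.alerts = []

    def mint(self, planted_at: str) -> str:
        token = uuid.uuid4().hex
        self._tokens[token] = planted_at
        return token

    def observe(self, value: str) -> bool:
        """Call from auth/DNS/web logs; True means a tripwire fired."""
        if value in self._tokens:
            self.alerts.append((value, self._tokens[value]))
            return True
        return False

registry = HoneytokenRegistry()
decoy = registry.mint(planted_at="backup server, fake credentials file")
registry.observe(decoy)   # an attacker touching the decoy records an alert
```

Because the tokens carry no real privileges and produce no false positives in normal operation, they are a cheap way to learn that someone is poking around.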

You have to keep in mind that attacks don't look like they used to - devices are getting harder, but boundaries are fuzzier.

Looking at the Atom editor, there are lots of plugins available - it's easy to put in a malicious plugin. You can then easily steal keystrokes or get a user to unknowingly run commands for you. The plugins are not screened and can be very complicated.

This is no longer just hosted virtual machines. Look at how WordPress hosting on AWS works - it only touches one virtual machine, but behind it sits a very big graph of complicated interactions.

To attack AWS, a good bit of recon is finding account IDs. An account ID is a 12-digit number, considered private (but not secret). You can make one call with a valid account ID and one with an invalid ID and get different responses. It works, but it is very sloooooow.
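The probing trick above is a classic response oracle: the API leaks whether an account ID exists through differing error messages, so enumeration is just "probe and compare". A toy model of the idea (the `simulated_api` endpoint and its error strings are stand-ins - real probing would go through AWS API calls):

```python
# Hidden ground truth -- the attacker cannot read this directly.
VALID_ACCOUNTS = {"123456789012"}

def simulated_api(account_id: str) -> str:
    """Stand-in for an AWS API call whose error message differs
    depending on whether the account ID exists."""
    if account_id in VALID_ACCOUNTS:
        return "AccessDenied"      # account exists, caller lacks permission
    return "NoSuchEntity"          # account does not exist at all

def enumerate_accounts(candidates):
    """Differential probing: a different error reveals a different truth."""
    return [a for a in candidates if simulated_api(a) == "AccessDenied"]

found = enumerate_accounts(["111111111111", "123456789012"])
```

The fix on the service side is to return indistinguishable responses for "exists but forbidden" and "does not exist" - which is exactly why this kind of probing is slow but effective where that discipline is missing.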

But you can find these more easily by looking at github or on help forums. Even though Amazon says to not post them, people do. Easy!

Another area for recon is S3 bucket discovery, with similar discovery vectors. Areas of potential compromise are around the APIs; for example, it is possible to enumerate permissions with a variety of calls without actually knowing what the permissions should be.

Sadly running out of battery.... posting now!

BHUSA2017: Automated Testing of Crypto Software Using Differential Fuzzing

JP Aumasson and Yolan Romailler

JP just released a new book on cryptography - check it out. Yolan is working on his masters.

What do we want to accomplish? We want to prove valid functionality works and that the program cannot be abused and secrets won't leak. They are testing code against code. For example, if you're porting from one language to another, they should be able to do the same things. (the assumption is that the reference code is correct - not always true!).  Additionally, want to test the code against the specifications, though those are sometimes not even complete or leave exercises to the users.

Automated testing can cover static analyzers, test vectors, dumb fuzzing, smart fuzzing and formal verification. With things like test vectors, the more vectors you have, the better your coverage. That's a lot of testing, so you need to think about how to maximize the efficiency (ease of use x coverage).

There are limitations on the current methods, like randomness quality, timing leaks and test vectors focus on valid inputs.

The researchers came up with a new tool, CDF, to do crypto differential fuzzing. It's a command line tool written in Go, portable to Windows/Linux/macOS, and fast so it won't be a bottleneck. The tool checks both the correctness and security of implementations, and interoperability between implementations. It checks for insecure parameters, non-compliance with standards (e.g. FIPS) and edge cases of specific algorithms.
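The essence of differential fuzzing is easy to sketch (in Python here for illustration, though CDF itself is written in Go): feed two implementations of the same function identical inputs and flag any input where the outputs diverge. The two "implementations" below are made-up stand-ins, with the port deliberately mishandling non-ASCII input.

```python
import random

def reference_tolower(s: str) -> str:
    """The trusted reference implementation."""
    return s.lower()

def ported_tolower(s: str) -> str:
    """Hypothetical buggy 'port': only handles ASCII A-Z."""
    return "".join(chr(ord(c) + 32) if "A" <= c <= "Z" else c for c in s)

def differential_fuzz(impl_a, impl_b, trials=1000):
    """Feed both implementations the same random inputs and
    collect every input on which their outputs diverge."""
    random.seed(7)   # deterministic runs for reproducibility
    mismatches = []
    for _ in range(trials):
        s = "".join(chr(random.randrange(0x20, 0x2500)) for _ in range(8))
        if impl_a(s) != impl_b(s):
            mismatches.append(s)
    return mismatches

bugs = differential_fuzz(reference_tolower, ported_tolower)
```

Uppercase letters outside ASCII (Greek, Cyrillic, accented Latin) expose the divergence; no hand-written test vector was needed, which is the whole appeal of the differential approach for crypto code.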

This is similar to WycheProof, but different. The two tools will complement each other.

One of their checks for ECDSA makes sure that sending a 00 hash or 00 secret key will not send it into an infinite loop. For RSA they do various checks for timing leaks. With their testing, they found a potential DoS in OpenSSL.

They discovered that most libraries are not testing the DSA parameters, and found issues with almost every library tested. Even though they did not find issues with DSA in Crypto++, it doesn't mean they don't exist - the tests just didn't trip over them.

Several libraries went into an infinite loop with the generator set to 0 in DSA. That's not allowed by the standard, but it tripped up several libraries.

Still working on adding additional tests and making the suite more robust.

BHUSA17: How We Created the First SHA-1 Collision and What It Means for Hash Security

Elie Bursztein

There are two other types of attacks besides collision attacks - pre-image and second pre-image attacks (not feasible at this time).

When doing this attack, they could not use brute force, as you would need millions of years of brute-forcing with a GPU - we don't have that much time. Better to use cryptanalysis!

Before you can attack, you must understand how a hash function is constructed. SHA-1 uses a Merkle-Damgård construction. This works by taking the first block of the file, combining it with an IV and performing compression. Then go block by block until you get to the end, and you have a short digest that depends on the whole file.
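The chaining described above can be sketched directly. This is a toy Merkle-Damgård skeleton, NOT SHA-1: the compression function here is a made-up mixer, while SHA-1 uses 64-byte blocks, a 160-bit state and an 80-step compression.

```python
def merkle_damgard(message: bytes, compress, iv: int, block_size: int = 8) -> int:
    """Toy Merkle-Damgard: pad, split into blocks, and chain each
    block through the compression function starting from the IV."""
    # Length-strengthening padding: 0x80 marker, zeros, then the bit length.
    bit_len = len(message) * 8
    padded = message + b"\x80"
    while (len(padded) + 8) % block_size:
        padded += b"\x00"
    padded += bit_len.to_bytes(8, "big")

    state = iv
    for i in range(0, len(padded), block_size):
        block = padded[i:i + block_size]
        state = compress(state, block)   # chaining value feeds the next block
    return state

def toy_compress(state: int, block: bytes) -> int:
    """NOT cryptographic -- just mixes each byte into a 32-bit state."""
    for b in block:
        state = ((state * 31) ^ b) & 0xFFFFFFFF
    return state

digest = merkle_damgard(b"hello world", toy_compress, iv=0x67452301)
```

The structure explains why the attack works block by block: if you can find two blocks that compress the same chaining value to (nearly) the same output, every identical block appended afterward preserves the collision.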

We got a cool picture of what SHA-1 compression looks like "unrolled". To do cryptanalysis, you need to look at the message's differential path and the equation system (they solved 16 steps, which makes it predictable to step 24).

It turns out it is easier to do a collision with two blocks instead of one: find two blocks that have almost the same hash for a near collision, then resolve the difference to get a full collision.

So - how do you exploit this? Look at a fixed-prefix attack on SHA-1: find a prefix before you find a collision. If you carefully choose the prefix, you can improve the attack.

Let's take a look at real world attacks exploiting MD5. In 2009 there was an SSL certificate forgery. This was exploited by leveraging a wildcard (instead of a domain name) and adding the old public key and signature as a "comment" in the new certificate.

In 2012 there was massive malware called "Flame" that was used to spy on Iranian computers with fake Windows Update certificates. The collision in practice used 4 blocks, instead of 2 - which shows that the attackers had their own cryptographer.

As of today, MD5 is broken - you can create collisions on your cell phone. For SHA-1 you still need a lot of time with current computing power.

So - how do you create a new collision? Choose carefully what prefix you want, as you cannot change it after the fact. Through 2015 and 2016 they worked on near-collision blocks, using about 3,000 CPUs for about 8 months to calculate a near collision. Throughout 2016 they worked on a full collision attack, and in 2017 the attack was completed.

You have to find a tradeoff between failure and efficiency and continue to scale the computation. This team did it in 1 hour batches.

In 2016 they found their first collision - they had to spend a few days analyzing it, and they found a problem: it wasn't usable for finding a full collision.

The team had to make efficient use of GPUs: work step by step to generate enough solutions for the next step, always try to work at the highest step, and backtrack when the pool is empty. The work was also parallelized: one thread, one solution.

The new attack is called Shattered - it takes 110 GPUs for 1 year, versus the roughly 12 million GPU-years brute force would take. We saw a demo of two different PDF files hashing to the same SHA-1 hash (but still having different SHA-2 hashes).
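Anyone can check a suspected collision pair themselves with stock hashing tools. A small sketch of the comparison the demo showed (the byte strings below are ordinary placeholders, not the actual shattered PDFs):

```python
import hashlib

def collision_report(data_a: bytes, data_b: bytes) -> str:
    """Compare two inputs under SHA-1 and SHA-256. The shattered PDF
    pair agrees on the first comparison but not the second -- which is
    exactly why migrating to the SHA-2 family matters."""
    sha1_match = hashlib.sha1(data_a).digest() == hashlib.sha1(data_b).digest()
    sha256_match = hashlib.sha256(data_a).digest() == hashlib.sha256(data_b).digest()
    if sha1_match and not sha256_match:
        return "SHA-1 collision: distinct files, identical SHA-1"
    return "no SHA-1 collision between these inputs"

print(collision_report(b"file one", b"file two"))
```

Running the same comparison on the published shattered.io PDFs would report the collision; for any pair of ordinary files it reports nothing.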

This has already been exploited, for example on WebKit: a developer submitted a test to prove WebKit was resistant, but tripped an unforeseen bug in SVN and took the site down.

The collision is in the wild - what to do with legacy software? SHA-1 is deeply integrated into Git - how can they protect themselves? They can test for collisions, with negligible false positives.

The takeaway: SHA1 is dead. Do not do new deployments with it and try to move away from it. We need to continue doing counter-cryptanalysis and keep in mind hash diversity.

BHUSA17: When IoT Attacks: Understanding the Safety Risks Associated with Connected Devices

Billy Rios, Jonathan Butts

There will be 26-30 billion connected devices by 2020! We need to worry about confidentiality, integrity and availability - but is that enough? Is there something more important than keeping your credentials safe? Yes - safety! Many of these devices are controlling environmental factors in laboratories, chemical mixtures, etc.

Rios and Butts looked for devices that are connected to the internet, sit in a space accessible to the general public, and where exploitation of the device could result in a safety issue.

Currently, there are only a few devices that meet all 3 criteria. One surprising device they found: car washes! They looked at LaserWash. Car washes are really just industrial control systems (ICS) - and come with all the attitude and controls those systems come with. The car wash is different from most ICSs: it's accessible to the general public, with no screening at all.

The researchers wrote an exploit that can cause a car wash system to physically attack an occupant. Currently there is no patch for the vulnerability - if you own one of these car washes, please contact the manufacturer.

The big takeaway here - you should wear a hard hat to go into a car wash.

When Charlie and Chris attacked a car, they had to buy a car and about $15,000 in tools to analyze it. The systems are so specialized you must buy specialized tools. In that referenced work, the tools they bought were only good for Fiat-Chrysler vehicles.

Their cost considerations: they acquired firmware in 2014 through a compensated operator, but did not find a willing owner until 2017 (whom they also had to compensate, paying for the car washes). Buying a car wash is a large sum ($250K?), so they really had to find people who already owned them and shared the academic interest. Shouldn't there be a better way? Should manufacturers give access to systems? Without this, researchers are looking at live, deployed systems and spending their own money.

They initially disclosed the bug to the vendor in February 2015 and reached out repeatedly through April 2017... still no response. In May they had fully working remote exploit code (PoC) - still no response. Once the talk was posted to the BlackHat schedule, the vendor asked if they had tested against a demo system.

From other vendors, got a lot of comments like "that's not how we designed the system to work", etc. so writing up a vulnerability is not sufficient. Researchers had to do the PoC and prove their exploit worked - very costly and time consuming. Could vendors do better?

You need to remember these devices are just computers - the car wash has storage, cables, disks, programs. Older car washes had a manual, physically connected interface with a joystick to manually control the arm, etc. Now they have close-proximity remotes: if you are within line of sight of the car wash, you can control it.

When these car washes are deployed, they likely come with warnings for the new owner that the car wash is connected to the Internet. But, why?  You can configure the car wash to send emails. Maybe the owner wants to see how many car washes happened in any given day, which packages are the most popular and what times are the busiest - business reasons.

But - this car wash is on facebook, YouTube and LinkedIn. Now, that is perplexing.

At the end of the day, this is a computer running Windows CE with an Intrinsyc Rainbow web server and a Binary Gateway Interface. Windows CE is end of life - there is no more support for any vulnerabilities. The web server's calls map to unmanaged ARM DLLs. There are a lot of DLLs on the system that could be abused.

From the web browser, you can point to various DLLs and access them directly via rbhttp22.dll.

Now... there are credentials. The owner credentials are ...12345. This gives you all access, including free car washes. The engineer creds (PDQENG) are 83340. But the researchers don't think having the default creds is a true exploit.

There is a PLC driving the functionality of the car wash. This is a system of systems, with lots of communication happening. There are 3 key DLLs for the exploits.

They won't be publishing the details of the exploit, because these vulnerabilities are just not fixed.

One of the basic issues is that authentication is handled very simply. The authentication level is set to OWNER before the credentials are checked... just cause an exception in the authentication routine, and you will remain OWNER!
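The shape of that fail-open bug is worth seeing in code. This is a reconstruction of the pattern described in the talk, not the vendor's actual code: privilege is raised before the check, so an exception mid-check leaves the session privileged.

```python
OWNER, GUEST = "OWNER", "GUEST"

def password_matches(supplied, stored: str) -> bool:
    if supplied is None:
        raise TypeError("malformed credential")  # attacker-triggerable
    return supplied == stored

def login_fail_open(session: dict, password, stored: str) -> None:
    """The flawed pattern: elevate FIRST, then check."""
    session["level"] = OWNER                 # bug: privileged before the check
    if password_matches(password, stored):   # an exception here...
        return
    session["level"] = GUEST                 # ...means this never runs

session = {"level": GUEST}
try:
    login_fail_open(session, None, "12345")  # malformed input raises mid-check
except TypeError:
    pass
# session["level"] is now OWNER without any valid credentials
```

The fail-closed version does the opposite: keep the session at GUEST (or reset it at the top) and elevate only after the check succeeds, so any exception leaves you unprivileged.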

It doesn't matter if the owner changes the password - you can read it back.

The researchers identified where the hardware safety mechanisms were - those are difficult to override; it's much easier to defeat the software mechanisms. An example of a hardware mechanism is a welded-on safety stopper - much more difficult to defeat.

Software controls the doors that go up and down, after interacting with sensors that report "all clear". You can exploit the door, for example, to disregard the response from the sensors. A video was shown of a car wash door crushing the hood of a car that was only part way into the car wash.

There is another issue with CVSS scoring. There is a medical infusion device with a vulnerability that can kill the user - it's rated at 7.1. A bug in a medical cabinet that allows people to steal drugs: 9.7. Why should that be "more severe" than death? The speakers have additionally been working on a scoring system specifically for medical devices.

You cannot rely on software solely for physical security, and you should never respond to a vulnerability researcher with "the system wasn't designed to do that" :-)

BHUSA2017: BlackHat USA posts will be out of order

Due to connectivity issues, some posts are being written offline. Hope to get them up by the end of the week, but will live blog what I can.

Friday, May 19, 2017

ICMC17: Thomas Jefferson and Apple versus the FBI

Daniel J. Bernstein, University of Illinois at Chicago & Technische Universiteit Eindhoven

Gutenberg's original printing press was based on a wine press - who knew? If you think beer or wine is dangerous, you may think the best thing to do is prohibit alcohol. In 1919, the Women's Christian Temperance Union requested that the public library remove books and pamphlets on the home production of alcoholic drinks. The librarians would not destroy the books, but did remove them from public access.

Why do censors try to ban instructions? "It might be bad if people follow these instructions" - stop people from acting on information.   We have freedom of speech, though, so we shouldn't accept this. You can try to hide the information, but they will still find it and figure it out. Censorship adds very little benefit, and often causes massive damage.

There are careful exceptions to free speech in the US- you cannot intentionally solicit criminal activity. You also cannot advocate an imminent lawless action if it's likely to produce such an action: "Let's burn down that mosque" - not protected by free speech.  You also can't make false promises (breach of contract), deceive people for profit (fraud), or make false statements that damage reputation with reckless disregard for the truth (defamation).

What about training videos? Is Ocean's Eleven a training video? What about Tom Clancy's Debt of Honor (1994), which described something very similar to the 9/11 attacks? Some people also don't want to see historical documents and books on things like kamikaze pilots - what if terrorists act on these examples? It turns out they will come up with it themselves, even without such inspiration.
So, the court has to look at it from the point of view - are you intentionally aiding and abetting criminal activities?

What if a terrorist stays hidden and alive in the woods by reading a book on "how to fish"? It's clearly not intended to help criminals.  That type of book is protected under free speech.

On software - it's usually (always?) something a human could do or calculate by hand, given time; we're just using the computer to make it faster. If you hear statements from the government about restricting computers, remove the word "computer" and see if the same rationale would justify censoring instructions followed by people.

People are using encryption to protect files and conversations - the FBI calls it "going dark". So, should we be allowed to publish encryption software? Imagine if you remove the computer from this situation.

Jefferson and James Madison communicated via 'encrypted' (encoded) messages. Thomas Jefferson distributed instructions that James Madison used, by hand, to encrypt private letters. No computer involved here - it was done by hand. What if they published how to do this in a book, a criminal used it, and the FBI came and said you can't publish this? Is that allowed? Only if the book is intended to help criminals can the government censor it.

Lawyers will claim that free speech needs a software exception. Imagine software made to destroy navigational systems on airplanes. What if it were a book that described how to do this? The computer is irrelevant to the question. The courts should look at the intent, just as they do when you present them with a book.

According to the FBI, in 1963, the FBI's Domestic Intelligence head thought Martin Luther King Jr. was a Russian agent. In 1964 King won the Nobel Peace Prize - and that same year the FBI sent him an anonymous letter encouraging him to commit suicide. In 1967, the NSA also started surveillance on King.

As far back as 1977, the NSA (Joseph Meyer) threatened  organizers of a crypto conference with prosecution under export laws.

For Dan himself: he sent a crypto paper and crypto software to the NSA asking for permission to publish. The NSA refused, classifying the paper and software as "munitions" subject to export control. In 1995 the NSA told the courts they were trying to protect America - but papers were okay (free speech), so Dan was allowed to publish the paper (but not the software).

Unfortunately for the NSA, Judge Marilyn Hall Patel disagreed with them in 1996, and agreed that software was free speech. It's just language. The court of appeals agreed in 1999.

Now back to modern day - Apple vs. the FBI - but imagine it without the computer. Imagine the FBI coming to Jefferson and demanding that he write new anti-encryption instructions and falsely sign those instructions as legitimate. Jefferson says the instructions are too dangerous to create. The US Supreme Court notes that freedom of speech includes "both what to say and what not to say".

Ask yourself - what is the software doing in this picture?  What if we were doing this ourselves? The courts know how to handle that and you should, too.

ICMC17: Zero Knowledge Doesn't Mean Zero Ethics

Joshua Marpet, SVP, Compliance and Managed Services CyberGRC

Zero knowledge systems rest on mathematical proofs: zero knowledge proofs and verifiable secret sharing are vital for multi-party secure sharing. They can be used in health care, blockchain, etc.

Blockchain can be used in healthcare to exchange information across health care networks (for example, between a hospital in DC and a hospital in California).

How do you know you are working with an ethical party? Is the NSA ethical? What about Geek Squad? If you are building a zero knowledge system, will you be fostering bad ethics?  For example, the blockchain for bitcoin contains child porn.

Think about free speech - you can talk about all things, but not necessarily incite behaviours. For example, you cannot shout fire in a theater.

So, you need a very clear Terms of Service and Acceptable Use policy, and a provisioning checkbox along the lines of "Will you be hosting illegal content?" Yes, they can break it - but then you will not have an ethical conundrum when law enforcement asks for that user's illegal data.

Now, don't be a bad provider. Don't monitor your customer's content, be inconsistent or non responsive. Respect warrants - but use reason. Something doesn't seem right? Consult your lawyer, EFF, etc.

ICMC17: Revisiting Threat Models for Cryptography

Bart Preneel, imec-COSIC KU Leuven, Belgium

Rule #1 of cryptanalysis: search for plaintext first :-)

With the Snowden documents, we learned that the NSA is foiling much of the deployed encryption - using super computers, turnkeys, backdoors, etc.

If you can't get the plaintext, try just asking for the key - then you can do the decryption. About 300,000 national security letters asking for keys have been issued since 2001. Most come with gag orders, so it's difficult to get this information.

Yahoo fought the security letter they received. Others, like Silent Circle and Lavabit just shut down.

So, think about PFS (perfect forward secrecy) - if someone gets one of your keys, can they get your older data as well? You can replace RSA key exchange with DH for perfect forward secrecy. Logjam, though, was able to subvert the system by downgrading the negotiation and then reading your data.
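Why ephemeral DH gives forward secrecy can be sketched in a few lines. This toy uses a deliberately small 32-bit prime to show the shape of the exchange - real deployments use 2048-bit groups or elliptic curves, and these parameters are NOT secure sizes:

```python
import secrets

# Toy finite-field Diffie-Hellman parameters (2^32 - 5 is prime).
P, G = 0xFFFFFFFB, 5

def dh_session_key() -> int:
    """One session: both sides pick FRESH ephemeral exponents, derive
    the shared key, and discard the exponents. There is no long-term
    decryption key for an adversary to seize later, which is the whole
    point of forward secrecy."""
    a = secrets.randbelow(P - 2) + 1   # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1   # Bob's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)  # public values exchanged on the wire
    k_alice, k_bob = pow(B, a, P), pow(A, b, P)
    assert k_alice == k_bob            # both sides derive the same key
    return k_alice

# Two sessions derive independent keys; compromising one session (or a
# long-term signing key) reveals nothing about the other.
k1, k2 = dh_session_key(), dh_session_key()
```

Contrast with static RSA key exchange: there, one long-term private key decrypts every recorded session, so a single seizure retroactively exposes all past traffic.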

If you can't get the private key, try substituting the public key (because you have the private key for your own public key!). The most recent attacks in this area were fake SSL certificates and SSL person-in-the-middle attacks.

This brought about "Let's Encrypt", which has been live since 2015.

If you can't get the key, try cryptovirology (book by Young and Yung).

Or, how about a trapdoor in your PRNG (Dual EC DRBG, found in Juniper's ScreenOS)?

What other technology might be similarly subverted?

If you can't undermine the encryption, how about attacking the end systems?

Hardware hacking: intercepted packages are opened carefully and a "load station" implants a beacon. If you don't want your routers to come with "extra bits", you might want to pick them up from the manufacturer (pictures were shown of this happening to Cisco routers).

There is a chip that can be installed between monitor and keyboard, can be powered up remotely by radar and then the remote attacker can read what's on your screen.

Maybe we need offense over defense?  How many 0-days do our governments have? Are they revealed to vendors? If so, when?  NSA claims that they have released more than 90% of the 0-days to vendors, but didn't say anything about how long they hold onto the attacks before doing the notification.

Another good way to fight encryption: complicated standards! Does anyone really fully understand IPsec, for example? Backdoors are another way, but we saw with Dual EC DRBG how the backdoor itself was backdoored...

There are 18 billion encrypted devices deployed to protect industry - not you. Like DRM to control content.

There are 14 billion encryption devices to protect users, but there are issues. Look at encryption on phones - it's not end-to-end, so there are still issues. Consumers might have "encrypted hard drives", but without key management, the hard drive can just be pulled out, put into another machine and read.

There are issues with many messaging services - they back up your messages in the clear in the cloud.

Secure channels are still a challenge, with lack of forward secrecy, denial of service, lack of secure routing, and lack of control over metadata (which is still data!). Tor hides your IP address but not your location, so it is limited.

When doing design, avoid a single point of trust that becomes a single point of failure. Stop collecting massive amounts of data.

Distributed systems work: root keys of some CAs, Skype (pre-2011) and Bitcoin.

We need new ways to detect fraud and abuse.  We need open source solutions, open standards, effective governance and transparency for service providers.  And finally, deploy more advanced crypto.

ICMC17: Encryption and Cybersecurity Policy Under the New Administration

Neema Singh Gulani, Legislative Counsel (Privacy and Technology), ACLU

We still don't know what the policies are going to be, yet, but she's here to give us her understanding of where we are and where she thinks we're going.

Why should you care, if you're not a lawyer? Look at Lavabit - a company that offered an encrypted email service. All was well and good until it was discovered that Edward Snowden used their service. The US Government requested their encryption keys (under a gag order, so they could not tell their users), and a judge ordered them to give up the keys - not just the keys that protected Snowden's mail, but everyone's. The company shut down, because they no longer felt they could protect their users.

Right now we are seeing a very divided government, polar opposites on a lot of issues - but they will work together on preventing NSA surveillance and protecting encryption keys.

Obama administration considered various technical options to get around the "going dark" problem - so law enforcement could access information they had before encryption became more pervasive. Several things like backdoors, remote access, forced updates, etc - and the administration decided to work with the commercial providers of the products, as opposed to building legislation.

We don't know clearly where the Trump administration stands. We know that Trump was critical of Apple for not wanting to give law enforcement a back door into an iPhone. Jeff Sessions noted once that he was in favor of encryption, but that criminal investigators need to be able to "overcome" encryption.

There is proposed legislation from Burr/Feinstein that requires manufacturers to provide data in "intelligible form" (covering both software and device manufacturers). The ACLU is not in favor of this bill; it has been called "technically tone deaf".

We know that the Obama Administration had an interagency process, run out of the White House, that didn't have any "hard and fast rules" on vulnerability disclosure. It was used to balance risk against intelligence needs. The NSA said most vulnerabilities are disclosed. Is that good enough to protect users of tech?

NSA surveillance: Section 702. This targets 106,000 foreign targets where they collect over 250 million internet transactions annually, about 50% of that information is about a U.S. resident. This is up for review again this year in congress.

Because of Trump's accusations of wire tapping, this may be an opportunity to reform Section 702.

Right now, based on a 6th circuit court decision from 2010, most US companies require a warrant before they will provide content to the FBI or other law enforcement.

The Email Privacy Act was passed by the House 419-0, but the bill got stuck in the Senate, where too many unrelated things were added.

Many users would be surprised at the low bar required to hand over their data to the FBI or local law enforcement, or that they also would not necessarily be notified when it happens.

If you're building products for the government, think about how the product will be used and can you audit that it's being used as intended?

Look at what lobbyists your employer is backing and see if it lines up with their public press releases - if not, say something. Consider also direct lobbying - there is a dearth of technical knowledge on Capitol Hill; they need your knowledge!

ICMC17: Crypto: You're Doing it Wrong

Jon Green, Sr. Director, Security Architecture and Federal CTO, Aruba Networks/HPE

Flaws can be varied and sad - like forgetting to use crypto at all (such as calling a function that was never completed for your DRBG! Jon showed us an example of validated code: an empty function whose comment contained a TODO). Other issues show up in large multi-module products that may contain code written in C, Java, Python, PHP, JavaScript, Go, Bash... while claiming to get FIPS from OpenSSL. Most of those languages aren't going to be using OpenSSL, so they won't be using FIPS-validated crypto.
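
The "empty TODO function" failure mode is easy to picture. Here is a hypothetical Python sketch (names invented, not the code Jon showed) of a stubbed-out DRBG that reads fine at a glance, plus the kind of trivial output check that would catch it:

```python
def drbg_generate(n):
    # TODO: hook up the real DRBG  <-- shipped (and "validated") like this
    return bytes(n)  # all zeros: every "random" value is identical

def generate_session_key():
    return drbg_generate(16)

def startup_rng_check(gen):
    """Minimal power-on sanity check: consecutive outputs must differ."""
    if gen(16) == gen(16):
        raise RuntimeError("DRBG output repeated; refusing to start")

# Callers never notice the bug...
k1, k2 = generate_session_key(), generate_session_key()
assert k1 == k2 == bytes(16)  # catastrophic: all keys are the same

# ...but even this crude check does:
try:
    startup_rng_check(drbg_generate)
    caught = False
except RuntimeError:
    caught = True
assert caught
```

The point is that nothing in normal application flow exercises the randomness, so only a deliberate output-quality check (or an actual code read of the low-level functions) surfaces the problem.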

Developers often don't know where the crypto is happening. They may forget to complete certain code segments, rely on 3rd-party and open source code, and rely on multiple frameworks. Even when they do know, they may not want to dive in because of the amount of work required to make things work correctly, particularly from a FIPS perspective.

Won't the FIPS code review done by the lab catch this? Almost certainly not, as they are typically not looking at the application code - just the low-level crypto and RNG functions. Even the old German scheme for deeper EAL4 code review still missed issues (the TODO code above went through EAL2 and EAL4 review). Testing misses these nuances as well.

Security audits of your code are very fruitful, but very expensive. He's seen success with bug bounty programs, even if the code is closed.

He's also seen problems with FIPS deployments that are leveraging "FIPS inside" where they leverage another module, like OpenSSL, but forgot to turn on FIPS mode and forgot to update the applications so they would not try to use non-FIPS algorithms.

Another problematic approach: the dev follows all the steps to deploy CentOS in FIPS mode by following the Red Hat documentation... except that documentation only applies to Red Hat and *not* CentOS. Yes, it's the same source code, but validations are not transitive. A Red Hat validation only applies to Red Hat deployments.

To get this right, identify the services that really need to be FIPS validated and focus your efforts there.

ICMC17: Keynote: From Heartbleed to Juniper and Beyond

Matthew Green, Johns Hopkins University.

Kleptography - the study of stealing cryptographic secrets. Most people did not think the government was really doing this. But we do know there was a company, Crypto AG, that worked with the NSA on its cipher machines from the 1950s through the 1980s.

Snowden's leak contained documents referring to SIGINT enabling - a plan to insert back doors into commercial encryption devices and add vulnerabilities to standards.

How can the government do this?  We can't really change existing protocols, but you can mandate use of specific algorithms. This brings us to the 'Achilles heel' of crypto - (P)RNG. If that's broken, everything is broken.

There are two ways to subvert an RNG: attack the lower-level TRNG or the PRNG. The TRNG is probabilistic, hardware specific, and has too much variance. The PRNG/DRBG is software, and everyone has to use a specific one to comply with US Government standards - a more appealing target.

Young and Yung predicted an attack against DRBGs, and how it might work, back in the 1990s: given the internal state, a master key acts as a trap door that lets its holder decrypt the data. This sounds a lot like Dual EC DRBG. It was almost immediately identified as having this weakness - if the values were not chosen correctly. NSA chose the values - and we trusted them.
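
The shape of that trapdoor can be shown with a toy discrete-log analogue of Dual EC DRBG. This is a deliberate simplification in a multiplicative group, not the real elliptic-curve construction, and every parameter below is made up:

```python
# Toy "standard" parameters (all invented for illustration)
p = 2**61 - 1            # a Mersenne prime; our toy group is mod p
d = 123456789            # the designer's secret trapdoor
Q = 5                    # published constant
P = pow(Q, d, p)         # also published; its relationship to Q is hidden

def dual_ec_toy(state):
    """One step of a toy Dual-EC-style PRNG: emit output, advance state."""
    output = pow(Q, state, p)
    next_state = pow(P, state, p)   # == pow(output, d, p): the trapdoor
    return output, next_state

# An honest user generates two "random" outputs
s0 = 42424242
out1, s1 = dual_ec_toy(s0)
out2, _ = dual_ec_toy(s1)

# The designer, knowing d, recovers the state from a single output...
recovered_s1 = pow(out1, d, p)
assert recovered_s1 == s1
# ...and can now predict every future output
predicted, _ = dual_ec_toy(recovered_s1)
assert predicted == out2
```

Since P = Q^d, the next state P^s equals (Q^s)^d = output^d: anyone who knows d turns one observed output into the full internal state. In the real construction, recovering the point from the truncated output takes a little brute force, but the principle is the same.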

Snowden leaks found documents referring to "challenge of finesse" regarding pushing this backdoor through a standards body.  Most of us don't have to worry about the government snooping on us, but ... what if someone else could leverage this back door?

This is what happened when Juniper discovered unauthorized code in ScreenOS that allowed an attacker to passively decrypt VPN traffic. Analysis of the code changes discovered that someone had changed Juniper's parameters for Dual EC DRBG. But Juniper said they didn't use Dual EC DRBG, according to their security policy, other than as input into 3DES (which should cover up anything bad from the DRBG). The problem: a global variable was used as a for-loop counter (a bad idea), which in effect means the for loop that was supposed to do the 3DES mixing never runs (the Dual EC DRBG subroutine uses the same global variable and leaves it past the loop bound).

More specifically, there are issues with how IKE was implemented. The impacted version, ScreenOS 6.2 (the version that added Dual EC DRBG), also added nonce pre-generation.

Timeline: 1996, Young and Yung propose the attack; 1998, Dual EC DRBG developed at NSA; 2007, it becomes a final NIST standard; 2008, Juniper adds Dual EC DRBG. It was exploited in 2012 and not discovered until 2015.

Before Dual EC DRBG, people used ANSI X9.31 - which had a problem: if you used a fixed K, an attacker who knows it can recover the state and subvert the system.
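
A sketch of why a fixed K is fatal in the X9.31 construction. Here a SHA-256-based toy PRF stands in for 3DES, and all keys and timestamps are invented:

```python
import hashlib

def E(key, block):
    """Toy 16-byte PRF standing in for 3DES encryption (illustration only)."""
    return hashlib.sha256(key + block).digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def x931_step(K, T, V):
    """One ANSI X9.31 step: timestamp T and state V produce output R."""
    I = E(K, T)
    R = E(K, xor(I, V))       # the "random" output block
    V_next = E(K, xor(R, I))  # the new state
    return R, V_next

K = b"fixed-hardcoded!"       # the fatal design choice: a fixed, known K
V = b"initial seed 0#1"
T1, T2 = b"timestamp 000001", b"timestamp 000002"

R1, V1 = x931_step(K, T1, V)
R2, _ = x931_step(K, T2, V1)

# Attacker: knows K, observes R1, and guesses the timestamps
I1 = E(K, T1)
V1_recovered = E(K, xor(R1, I1))
R2_predicted, _ = x931_step(K, T2, V1_recovered)
assert R2_predicted == R2     # all future output is now predictable
```

With K known, one observed output plus a guessable timestamp yields the internal state, and from there every subsequent "random" block.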

How do we protect ourselves? We should build protocols that are more resilient to bad RNGs (though that's not what is happening). But maybe protocols are not the issue - maybe it's how we're doing the validation. Look at FortiOS, which had a hard-coded key in their devices, used for testing FIPS requirements - and documented in their security policy.

Thursday, May 18, 2017

ICMC17: Evolving Practice in TLS, VPNs, and Secrets Management

Kenneth White (@KennWhite)

A good quote starts: "There is no difference, from the attacker's point of view, between gross and tiny errors. Both of them are equally exploitable."... "This lesson is very hard to internalize. In the real world, if you build a bookshelf and forget to tighten one of the screws all the way, it does not burn down your house."

We look for the following in network transport encryption: data exposure, network intercept, credential theft, identity theft, authenticated cipher suites, etc.

We have learned, the hard way, the problem with unauthenticated block modes. If you don't compute the hash correctly, or compute it in the wrong order, it's useless.
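
The ordering point is worth pinning down: authenticate the ciphertext, and verify before touching it. Here is a toy encrypt-then-MAC sketch built from stdlib primitives (the SHA-256 counter-mode keystream is an illustration, not a vetted cipher; a real design would just use an AEAD mode):

```python
import hashlib
import hmac

def keystream(key, nonce, n):
    """Toy keystream: SHA-256 in counter mode (illustration only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key, mac_key, nonce, plaintext):
    # Encrypt first...
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # ...then MAC the ciphertext (encrypt-then-MAC)
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct + tag

def open_(enc_key, mac_key, nonce, sealed):
    ct, tag = sealed[:-32], sealed[-32:]
    # Verify BEFORE decrypting, with a constant-time compare
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("bad tag")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

enc_key, mac_key, nonce = b"e" * 32, b"m" * 32, b"n" * 12
box = seal(enc_key, mac_key, nonce, b"attack at dawn")
assert open_(enc_key, mac_key, nonce, box) == b"attack at dawn"

tampered = bytes([box[0] ^ 1]) + box[1:]
try:
    open_(enc_key, mac_key, nonce, tampered)
    ok = False
except ValueError:
    ok = True
assert ok  # tampering is rejected before any decryption happens
```

MAC-then-encrypt reverses that order, forcing the receiver to decrypt before verifying - which is exactly the structure the padding-oracle family of attacks exploited.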

After POODLE, SSLv3 is dead. It's still out there, but as a practical matter, it's gone.

Getting good data on who is impacted by a security vulnerability is hard - even Gartner got this wrong, overestimating who was impacted by FREAK by counting how many devices still supported SSLv3 (even if they did not actually have the vulnerability).

Advice going forward: use AEAD!

ICMC17: Crypto++: Past Validations and Future Directions

Jeffrey Walton, Security Consultant.

This is an older toolkit, Jeff fell in love with it when he was in college in the 90s. He's been working in computer security ever since.

Crypto++ is a C++ class library, written by Wei Dai in June 1995. It's a general purpose crypto library, handed over to the community in 2015.

When the library was hit with a CVE in 2015, Wei Dai handed it over to the community to develop. Since Jeffrey had been using it since the 1990s, he was chosen to become one of the maintainers. Wei Dai still advises.

The library supports C++03 through C++17, with heavy use of templates and static polymorphism (yay, C++). That makes things faster, but harder to adopt, especially as there is not excellent documentation. He uses questions on Stack Overflow to figure out where to spend time on the documentation.

Right now, Crypto++ is on the historical validation list, which makes it pretty much useless... (they and everyone else ended up on that list last year, due to DRBG changes).

Crypto++ validations are on Windows only. Includes NIST approved algorithms; RNG, AES, SHA, MAC, RSA, DH. There are non-FIPS routines in other DLLs.

Going forward, he'd like to add C bindings. Would like to add an engine-like interface.  Will they do another validation? Probably not - too expensive. But, could wrap around other validated crypto to take advantage.

Crypto++ now uses OpenSSL's FIPS Object Module, to effectively provide a FIPS validated module - so you can stay on your C++ bindings and not make changes to your application.


ICMC17: Penetration Testing: TLS 1.2 and Initial Research on How to Attack TLS 1.3 Stacks

Scapy TLS: A scriptable TLS stack, Alex Moneger, Citrix Systems

TLS is the protocol that secures the internet, and there are very few alternatives. It's a session layer protocol for other protocols, and it is very complex. Sure, you can implement it in 3 weeks - but will you get it right?

TLS is under scrutiny and there is growth in the number of attacks and their frequency.

We need to make sure we understand an attack properly and understand its practical impact. How reproducible is the attack? How can we fix it and make sure it stays fixed? Customers often don't understand the impact or how to fix it.

Scapy TLS is a scriptable TLS & DTLS stack built on top of Scapy. It's as stateless as possible, and includes packet crafting and dissecting and crypto session handling.

The goals of the project are to make sure it's easy to install and use to simplify discovery and exploitation of TLS vulnerabilities - very customizable.

We then got to see some code - it looks very simple to use.

The theory here is you can use this tool to help work on PoCs faster. It's on GitHub :)

ICMC17: What's new in TLS 1.3 (and OpenSSL as a result)

Rich Salz, Akamai, OpenSSL Developer

TLS 1.0 was a slight modification of the original SSL protocol by Netscape, published in January 1999. Basically the same as SSL3 - it is bad, it is weak, it has no good ciphers and it's still in wide use.

TLS 1.1 came out in April 2006. It would be great to kill both TLS 1.0 and 1.1 off - but it's just too hard.
TLS 1.2 has been around since August 2008. Since then, there's been a bunch of new RFCs and algorithms from all over the world.

TLS 1.3 work was approved in the IETF in October 2015.

Background on the IETF: if it doesn't happen over email, where everyone can see it, it didn't happen. It is divided into areas - Security, Operations, DNS, etc. - and each area has area directors, working groups, and working group chairs. A very well-laid-out process; very egalitarian.

The working groups work by consensus - they demonstrate by humming, so people can't be singled out by raising hands or doing a verbal roll call. Many documents now do their iterations in GitHub.

IETF also has RFC editors who make sure final revisions follow the consistent and correct format for an RFC.

TLS 1.3 had a few goals - encrypt as much of the handshake as possible, reduce the handshake latency - ideally to one or zero roundtrip for repeated handshakes, update record payload and make it more privacy friendly.

Only ephemeral key exchange is supported: ECDHE and DHE (but nobody will use DHE - too expensive). All connections have perfect forward secrecy. Most things are encrypted (SNI is not). The most common curves will be NIST P-256 or X25519.

There will be improvements to bulk encryption. Three ciphers, two with key-size variations: ChaCha20/Poly1305, AES-GCM 128 (or 256), and AES-CCM 128 (or 256). The cipher lists and choices were just getting too long and mistakes were being made; these are the ones you need. The cipher suite no longer specifies the key exchange or authentication mechanism - those are negotiated now.

The bulk encryption ciphers chosen are all modern, secure, and AEAD-only - in addition to confidentiality, you also get integrity (authenticity).

Authentication improvements: DSA was removed; RSA was kept, with RSA-PSS preferred over PKCS#1 v1.5.

There was general cleanup, like the removal of legacy features (e.g., export crypto). Almost every message now has extensions: OCSP or Certificate Transparency stapling on the server and client side. There is now padding at the TLS layer (this was slightly controversial). There is only one key derivation mechanism: HKDF (RFC 5869), and it's used consistently (and correctly). They've done cryptanalysis and attack events, found issues, and fixed them. There will be no SHA-3. Does that mean SHA-3 is no good? No, there were just concerns about performance and newness when the TLS 1.3 work started.

SHA will still be around in certificates, though even SHA1 is being rapidly phased out.
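
The HKDF mechanism mentioned above (RFC 5869) is short enough to write out: an extract step concentrates input keying material into a pseudorandom key, and an expand step stretches it into labeled output keys. TLS 1.3 wraps this in HKDF-Expand-Label; the labels below are illustrative, not the real TLS 1.3 label encoding:

```python
import hashlib
import hmac

def hkdf_extract(salt, ikm):
    """Extract: concentrate input keying material into a pseudorandom key."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk, info, length):
    """Expand: stretch the PRK into `length` bytes, bound to `info`."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

prk = hkdf_extract(b"salt", b"input keying material")
k1 = hkdf_expand(prk, b"toy key", 32)   # e.g., a record-protection key
k2 = hkdf_expand(prk, b"toy iv", 12)    # e.g., a per-connection IV
assert len(k1) == 32 and len(k2) == 12
assert hkdf_expand(prk, b"toy key", 32) == k1  # deterministic per label
```

Binding each derived key to a distinct `info` label is what lets one secret safely fan out into many independent keys - the "used consistently" property the talk highlighted.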

Renegotiation is gone. There were many use cases: request client cert, "step-up" crypto algs and re-key. It was buggy and unclean and a source of problems. Only strong crypto is available.

There is an allowance for session resumption. It prefers sessions over tickets. The server can send session information at any time, and the session acts like a PSK; PSK is like session resumption. If you're reconnecting, send data with the resumption :-) This gets you 0-RTT - zero round trips.

0-RTT: client connects, C&S do the ECDHE dance. Client remembers the server's key share. Next time, client reconnects and sends data encrypted with that key.  This helps to avoid an extra round-trip with less latency.  This makes your web faster and then we can all make more money.

0-RTT has a big "but"... there is no PFS with early data, and that data can be replayed elsewhere (GET is idempotent, but other requests are not). Nobody was really listening to these concerns until Colm from Amazon spoke up and posted two weeks ago. Now everybody's trying to figure out what to do. We want to do the right thing, but one of the primary goals was to improve performance - and browser vendors will use it anyway.
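
The replay worry is concrete: a captured early-data record is just as valid the second time. A toy sketch (invented names; HMAC under the resumption PSK stands in for real 0-RTT record protection):

```python
import hashlib
import hmac

psk = b"resumption-psk-from-last-session"  # shared from the previous session

def seal_early_data(data):
    """Client: authenticate early data under a PSK-derived key (toy)."""
    return data + hmac.new(psk, data, hashlib.sha256).digest()

purchases = 0  # server-side effect of a non-idempotent request

def server_accept(record):
    """Server: verify and act on early data; it cannot tell replays apart."""
    global purchases
    data, tag = record[:-32], record[-32:]
    if hmac.compare_digest(tag, hmac.new(psk, data, hashlib.sha256).digest()):
        purchases += 1  # the request is acted upon

record = seal_early_data(b"POST /buy?item=1")
server_accept(record)
server_accept(record)  # attacker replays the captured record verbatim
assert purchases == 2  # the purchase happened twice
```

Nothing in the record distinguishes the replay, so defenses have to come from elsewhere: single-use tickets, server-side replay caches, or restricting early data to idempotent requests.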

TLS 1.3 status is very close to IESG review. Chrome and FF and some others are at Draft-18. Draft 19 had minor changes, Draft 20 is incompatible.

ICMC17: Control Your Cloud: BYOK is Good, But Not Enough

Matt Landrock, CEO, Cryptomathic

BYOK suggests a one-way mechanism: your key, my cloud.

The word "key" tends to be generally understood in a very broad sense (symmetric and asymmetric); however, cloud service providers give it a somewhat different definition.

The current key management services on offer are from MS Azure, Amazon AWS, and Google Cloud Platform. Azure uses Thales HSMs and Amazon uses Gemalto; Google doesn't appear to use an HSM at this time. Their biggest differences are around the BYOK protocols (key wrapping, etc.).

BYOK is an important tool, but should not be the only tool in your tool box.  It will help you get your own key into the cloud, so you know it meets your standards for generation. The cloud provider will handle things for you, but not in a consistent way - so lots of hurdles to go through to get this done.

MYOK - Manage Your Own Keys! MYOK implies you can manage import/export, lifecycle and generation all by yourself. Provision when you need them, destroy when you're done.  How can you do this in a way that is meaningful for your business?

Centralized key managers are moving into the market space.

Key management is more than just keys - name, algorithm, length, export settings. And, many key formats end up being very vendor specific (some use standards like PKCS#8, but many more are just proprietary).
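
That point - a "key" is really key material plus metadata and lifecycle state - can be made concrete with a sketch of the attributes a key manager has to track (all field names here are illustrative, not any vendor's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ManagedKey:
    name: str            # human-readable identifier
    algorithm: str       # e.g. "AES", "RSA"
    length_bits: int     # 256, 2048, ...
    exportable: bool     # may the raw material ever leave the HSM?
    usage: tuple         # e.g. ("encrypt", "wrap")
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    state: str = "active"  # lifecycle: active / deactivated / destroyed

k = ManagedKey("db-master", "AES", 256, exportable=False, usage=("encrypt",))
assert k.state == "active" and not k.exportable
```

Two providers can both store the same 32 bytes yet disagree on every one of these attributes - which is why moving keys between clouds (the MYOK scenario) is much harder than the raw-bytes view of BYOK suggests.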

ICMC17: TLS Panel Discussion

Tim Hudson - Cryptsoft
Steve Marquess - OpenSSL
David Hook - Bouncy Castle
Kenn White - Open Crypto Audit
Nicko van Someren - Linux Foundation

There are many implementations of TLS, from C and assembly implementations to Java implementations. OpenSSL has many forks - some obvious, some hidden under the covers. More implementations are good, as we don't want to suffer from monoculture.

There is possible value in creating a drop in API that multiple implementations could use. Nicko suggests creating an open process to create that API - it would not necessarily be from an existing implementation. It could possibly allow for more automation. APIs tend to grow out of necessity, and are not always pretty. A common API could help with security and fuzz testing.

There was then a long side discussion on libsodium (NACL), which is only a crypto library at this point as the "N" piece of networking/TLS hasn't been implemented, yet, but there are lots of language bindings out there.

How long does it take to create a new TLS library?  Apparently a new one was created recently in 3 weeks - leveraging someone else's crypto.  Language choice is important.

The older APIs have to consider legacy deployments, for example OpenSSL still supports VAX/VMS.  Another perspective - a new TLS implementation has taken another team 5 months already. When you have a user base you need to support, that seems to add time to the implementation.

There was a question about moving your implementation from TLS 1.0 to TLS 1.3. One of the OpenSSL developers noted that there is a lot of code reuse, but also a lot of #ifdefs. (TLS 1.0 is not yet #ifdef'd out of OpenSSL, but probably will be within the year.)

There was a question about the biggest issue with FIPS validations. The general answer: consistency. OpenSSL helped take multiple validations through at the same time - based on the same code and mostly the same documentation - yet they took different amounts of time and got completely different feedback.

ICMC17: Inside the OpenSSL 1.1 FIPS Module Project

Tim Hudson, CTO Cryptsoft and Mark Minnoch from SafeLogic.

In July 2016, OpenSSL announced the commencement of a fresh attempt to do a FIPS validation of OpenSSL. There are over 244 validated products on the NIST list that obviously use OpenSSL in their validation boundaries and it's included in most (all?) operating systems - it's pervasive!  So, why is it so hard to validate?  It starts out as open source with lots of competitors/stakeholders interested in it.

Unfortunately, stakeholder goals and project goals do not always align. For example, the project wants to support many platforms - stakeholders want to focus on only one or two. The same goes for the number of algorithms supported and validated.

Previously, FIPS 140 work effectively funded the OpenSSL project from 2009-2014, as there was no long-term or major sponsor at the time. The sponsors funding OpenSSL FIPS work all had different goals (other than wanting to sell into the US government), which made it very difficult to manage. This is a hard project, with many people yelling at you with different goals, and it wasn't very rewarding - you can't just expect people to do this for "fun". [note: yes, nothing about FIPS is "fun" - practical, yes, but not fun]

The first validation was very painful for the developers, so OpenSSL knows they have to do it differently if they are ever going to do it again.  OpenSSL started their first FIPS 140-2 validation in June 2002, certificates were not received until March 2006!

There have been a total of 9 unique validations, to keep up with new hardware platforms and implementation guidance changes.

The OpenSSL FIPS 1.0 module, based on OpenSSL 0.9.x, is no longer usable. There is still a bit of life left in the OpenSSL FIPS 2.0 module (#1747, #2389, #2437), as it is based on the OpenSSL 1.0.x code. But a major update is required for a new OpenSSL FIPS module to work with OpenSSL 1.1.x. For this go-round, the goal is to make the FIPS 140-related changes "less intrusive".

Current validations cover dozens (hundreds?) of platforms (OS vs hardware).

For the new validation, the only current sponsor is SafeLogic, but additional sponsors are needed to fund OpenSSL FIPS development and FIPS lab testing - resources are available now to begin work. 

This is a high risk validation, many people will be watching the validation which means people are cautious to enter - which creates a longer timeline. Keep in mind that TLS 1.3 is only available in OpenSSL 1.1.x, so if that's important to your customers, consider helping out financially to get this project going.

It's hard to get the sponsors on board, as they all want to see another sponsor already on board and to share the cost, but they still want to wield great influence over the work.

If this project doesn't happen, there are fewer options for FIPS libraries and will require you to do more of your own FIPS work.  Taking multiple versions from different companies through CAVP/CMVP is a waste of their resources as well.  Also, if everyone develops independently, the federal government will end up with inconsistent implementations.

Originally the team was going to do the FIPS 140-2 work before TLSv1.3, but they swapped the priorities. TLSv1.3 was easier to get a sponsor for, as it's a well-defined project, and now the FIPS work can happen with TLSv1.3 in place.

OpenSSL has refactored its algorithm testing approach, and wants to better support embedded systems and do better with entropy generation. They need to pick up extra NIST work and try to take SHA-3 through CAVP/CMVP.

They will continue to look at improvements to POST (like defining what a power-on self test means for software). They are also considering adding ChaCha/Poly1305.
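
For software, a POST is at its simplest a known-answer test: run the algorithm on a fixed input and compare against a precomputed result, refusing to serve crypto on mismatch. A minimal sketch:

```python
import hashlib

# Known-answer test vector: SHA-256("abc") is a well-known published value
KAT_INPUT = b"abc"
KAT_DIGEST = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

def power_on_self_test():
    """Refuse to provide crypto services if the known-answer test fails."""
    if hashlib.sha256(KAT_INPUT).hexdigest() != KAT_DIGEST:
        raise RuntimeError("SHA-256 KAT failed; entering error state")
    return True

assert power_on_self_test()
```

A real module runs one such test per approved algorithm at startup, plus integrity checks over its own code - the open question the talk alluded to is what "power on" should even mean for a software library.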

They currently cannot commit to many requested features, just to keep a reasonable timeline. The current schedule estimate from "start" to certificate is 18 months, based on their experience taking other modules through.

Please consider sponsoring this project so it can get off of the ground!

ICMC17: Keynote: Driving Security Improvements in Critical Open Source Projects

Nicko van Someren, CTO, Linux Foundation.

Open source is huge and it's here to stay, with nearly 4 million contributors worldwide and 31 billion lines of committed open source code - we aren't getting away from it now! Open source is the "roads and bridges" of the Internet, which runs on open source.

Sometimes open source breaks - things like Heartbleed, Shellshock, POODLE, etc. The Internet runs on open source, but it's not always properly looked after. Linus's Law says "given enough eyeballs, all bugs are shallow" - so why are there still bugs? Well, not enough eyeballs!

Open source software is not more or less secure than closed source - just different. Typically there is a more diverse group of people working on the source, but serially, over a long period of time. There is often a culture of "code is more important than specification" - a cultural difference from most businesses.

Major projects are very under-resourced. OpenSSL is relied on by millions of businesses, but got only $2,000 in support in 2013. NTPD is run by every major stock exchange, but some of the code is 35 years old, maintained by one guy, part time. The same goes for bash, GnuPG, and OpenSSH.

These open source projects are not  given the resources they deserve.

The Linux Foundation created the Core Infrastructure Initiative. The CII aims to substantially improve security outcomes in the OSS projects that underpin the Internet. The CII funds work in security engineering, security architecture, tooling and training on key OSS projects.

This market is changing quite quickly as well - who would've known 4 years ago how important node.js would be?

CII is a non-profit funded by industry partners, like Intel, Microsoft, Google, Hitachi, Dell, Cisco, Amazon, Bloomberg, Fujitsu, etc.

Open source can do all of the same things commercial enterprise does for building secure software - just harder, because there is no way to give a top-down mandate (ala Bill Gates fixing security mindset at Microsoft).

Groups and individuals must think about security early and often, it cannot be just one squeaky wheel mentioning security. It requires buy-in from the entire community. Fostering this culture of security within your open source project is the single most important thing that you can do to improve your security outcomes.  Security needs to be given equal weight with scalability, performance, usability and other design factors.

CII is trying to find out where the risks and problems are with the CII Census Project: discovering the really critical open source projects, how responsive their developers are, historical trends in bug and vulnerability density, and how healthy each development community is. They did a snapshot a couple of years ago and created a scorecard; they are now working on updating it to be a continuous evaluation.

Once critical projects have issues identified, CII is trying to focus their resources on fixing it. Maintenance work is not fun, but it is vital. They are trying to pay developers to work on key projects full time, match willing and able developers to relevant projects and encourage educational establishments to get students involved.

Additionally, working on improving open source security tools. This means funding development of new or improved OSS security tools, make sure they are usable and have a good signal to noise ratio. Problem with some of the existing tools - terrible documentation! So, there is even a need for paying people to write documentation for how to use and deploy continuous security testing.

CII also wants to drive better security process in OSS projects with their CII Badge program - an open process for evaluating security processes in your community. It's a self assessment, with the goal of avoiding security theater, so it only includes items that really improve security.

CII has a travel fund to send developers to security conferences to learn about security and additional funding to get key OSS developer teams to meet face to face to set priorities and collaborate (like OpenSSL).

If your company is building your business on open source software, you should consider funding those projects and CII to help push better security practices, etc.