
A number of security researchers recently discovered that Dell laptops come pre-installed with an additional root certificate called eDellRoot. Since the private key is also available on the machine, this exposes Dell's customers to the risk of a man-in-the-middle (MITM) attack. In a MITM attack, the attacker sits on the network between server and client and uses the eDellRoot certificate to intercept and manipulate HTTPS connections. This vulnerability leaves anyone using these Dell laptops at risk of sensitive data exposure and even infection with a malicious payload, all under the cover of a trusted connection.

 

Dell has released an automatic update to uninstall the certificate; however, we can’t assume that all affected machines will receive this update in a timely fashion. In the meantime, the crucial next step is to know immediately which machines have the eDellRoot certificate, so that they can be fixed. We’d recommend using the power of our Cloud Agent and AssetView query service to instantly determine which machines are at risk and automatically group and tag these assets for remediation.

 

Find Affected Machines Instantly and Continuously

With a simple query, you can instantly find all machines that have eDellRoot installed. You can also convert this query into a dynamic dashboard, to constantly monitor the scope and impact of this vulnerability. Continuous monitoring is essential, because if any of these Dell computers are ever set back to the factory default, the eDellRoot certificate will once again be restored and the vulnerability reinstated.

 

Here’s the specific query syntax you can use to find systems with the eDellRoot certificate:

manufacturer.name=dell and vulnerability.vulnerabilities.qid=1018

 

Qualys Cloud Agent transmits installed Certificate Authorities to the Qualys platform and makes them available for reporting in AssetView. That way, you can continue to monitor the health and validity of all of your SSL certificates.
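To make the same check locally, a quick script can look for the certificate in a machine's root store. The sketch below is illustrative only: it assumes the standard Windows certutil tool is available, and the function names are our own, not part of any Qualys product.

```python
import subprocess

def cert_store_has_edellroot(store_dump: str) -> bool:
    """Return True if a certificate store listing mentions eDellRoot."""
    return "edellroot" in store_dump.lower()

def check_local_root_store() -> bool:
    """On Windows, dump the machine Root store with certutil and scan
    the listing (this call is illustrative and will fail on other
    platforms)."""
    dump = subprocess.run(["certutil", "-store", "Root"],
                          capture_output=True, text=True).stdout
    return cert_store_has_edellroot(dump)
```

At fleet scale this per-machine approach doesn't work, which is exactly why the AssetView query above is the practical route.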

A few days ago, SpiderLabs researcher Asaf Orpani disclosed an important vulnerability targeting Joomla, one of the most popular content management systems (CMS). By exploiting this vulnerability, researchers were able to remotely gain full administrative access to the CMS.

 

Joomla versions 3.2 to 3.4.4 are affected by this major security issue. Since the vulnerability targets the core of the CMS, all websites based on Joomla are vulnerable, regardless of the modules used.

 

Vulnerabilities discovered by Orpani are:

  • CVE-2015-7297
  • CVE-2015-7857
  • CVE-2015-7858

 

Like WordPress did when its market-leading CMS was exposed to multiple vulnerabilities, Joomla reacted by quickly publishing a security fix, version 3.4.5, which we encourage you to apply immediately.

 

What that story doesn't tell is whether Orpani was the first to discover the vulnerability or whether it had been exploited before. With a 9 percent share of the CMS market, Joomla powers around 2.8 million web applications and websites, meaning a lot of websites would be vulnerable if a malicious hacker had already discovered this vulnerability.

 

The Role of Qualys WAF

Qualys Web Application Firewall (WAF) users are already protected, since this exploit is based on generic SQL injection, which WAF already has the ability to recognize and block. This is true not only for this vulnerability, but for many others as well. That's because a large number of web exploits are based on well-known attack vectors such as SQL injection, cross-site scripting, etc., for which Qualys WAF automatically provides protection.

 

As a best practice, we always encourage you to actively protect your web applications from this kind of attack by applying the relevant patch, in this case from Joomla. In all cases, the CMS, and more generally all web applications, must be kept updated to benefit from the latest security fixes.

 

Qualys Web Application Firewall thus not only protects your applications from attacks, including those based on as-yet-undisclosed exploits; it also buys you and your security operations team time to upgrade and patch your applications.

How boring would social networking websites, blogs, forums and other web applications with a social component be if they didn't allow their users to upload rich media like photos, videos and MP3s? The answer is easy: very, very boring! Thankfully, these social sites allow end users to upload rich media and other files, and this makes communication on the World Wide Web more impactful and interesting.

 

But user-uploaded files also give hackers a potential entry-point into the same web apps, making their safe handling an extremely important task for administrators and the security team. If these files are not validated properly, a remote attacker could upload a malicious file on the web server and cause a serious breach.

 

Qualys Web Application Firewall protects against uploads of malicious files by providing automatic validation of uploaded files. Specifically, it inspects the contents of the HTTP request and response associated with the file upload, which allows it to identify specific indicators of whether the contents of the file upload are legitimate or not.

 

This blog post describes how Qualys WAF does its magic.

 

About Unrestricted File Upload Vulnerabilities

Malicious file uploads are the result of improper file validation: OWASP calls it Unrestricted File Upload, and MITRE calls it Unrestricted Upload of File with Dangerous Type. According to OWASP, unrestricted file upload vulnerabilities enable two different types of attacks:

 

1)  Missing proper validation of file name

This can allow an attacker to overwrite application files using a specially crafted request, for example “../../../index.php”. If not handled correctly, the request in this scenario may overwrite the default application home page, or worse, upload the file to a user-accessible location which is outside the file storage sandbox.
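A minimal sanitization step illustrates the defense: strip every directory component before the file name is ever used in a path. This is a sketch under our own naming, not any particular framework's API.

```python
import re

def sanitize_filename(raw: str) -> str:
    """Strip directory components so '../../../index.php' cannot escape
    the upload directory, then drop unsafe characters."""
    # normalize both separator styles, then keep only the last component
    name = raw.replace("\\", "/").split("/")[-1]
    # keep a conservative character set
    name = re.sub(r"[^A-Za-z0-9._-]", "_", name)
    # refuse names that are empty or only dots after cleaning
    if name in ("", ".", ".."):
        raise ValueError("invalid file name")
    return name
```

Combined with storing uploads outside the web root, this removes the traversal vector entirely.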

 

2) Missing proper validation of file content and size

Allowing attackers to upload a file of any size without restriction may allow consuming all storage space on the server, potentially causing a denial of service and even crashing the server in some cases.

 

The allowed file content type is the most critical issue. It should be handled properly; otherwise it may result in arbitrary code execution on the server. Let's discuss this second case in more detail. If we look at software affected by unrestricted file upload issues on exploit-sharing sites such as exploit-db, we can find hundreds of affected applications. Most of these are unauthenticated issues, meaning attackers don't need valid credentials to exploit them. This gives us a rough idea of the scope of the issue: it is quite common.
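Both checks can be sketched together: enforce a size cap, then identify the file by its leading magic bytes rather than anything the client claims. The cap and the format list below are illustrative assumptions.

```python
MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # illustrative 5 MB cap

# leading magic bytes for a few common image formats
MAGIC = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}

def validate_upload(data: bytes) -> str:
    """Reject oversized uploads, then identify the file by its leading
    magic bytes instead of trusting any client-supplied header."""
    if len(data) > MAX_UPLOAD_BYTES:
        raise ValueError("upload too large")
    for prefix, kind in MAGIC.items():
        if data.startswith(prefix):
            return kind
    raise ValueError("unrecognized or disallowed file type")
```

As the rest of this article shows, a correct magic number alone is still not sufficient, but it removes the weakest checks from the attacker's menu.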

 

Common But Ineffective Mitigations

An examination of some common but ineffective mitigation techniques gives insight into how hackers can attack your web apps.

 

File Extension Verification

Blacklisting and whitelisting of file extensions is the most common validation method implemented by developers.

 

To implement blacklisting, the developer needs to gather all executable extensions disallowed by the server, which is obviously a tricky task, and one that can be defeated several ways in practice:

  • Blacklisting can often be bypassed using uncommon executable extensions such as php3, php4, php5, shtml, phtml and cgi, which are understood by the server.
  • Hackers can also use “.htaccess” file tricks to upload a malicious file with any extension and execute it. For a simple example, imagine uploading to the vulnerable server an .htaccess file that contains the AddType application/x-httpd-php .htaccess configuration directive along with PHP shellcode. Because of that directive, the web server treats the .htaccess file itself as an executable PHP file and runs its malicious PHP shellcode. One thing to note: .htaccess configurations apply only to the directory where the .htaccess file is uploaded and its sub-directories.
  • In certain configurations, multiple file extensions such as shell.php.jpg will trick the server into executing the file.
  • The infamous null character injection (%00) attack also falls under this category.
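These bypasses are easy to reproduce against a naive deny-list check. The sketch below (hypothetical helper names, deliberately simplified lists) shows a blacklist waving through php3 and double extensions that a strict, single-extension whitelist refuses:

```python
BLACKLIST = {".php", ".cgi", ".exe"}           # naive deny-list
WHITELIST = {".jpg", ".jpeg", ".png", ".gif"}  # allow-list

def blacklist_allows(filename: str) -> bool:
    """Naive check: reject only if the final extension is deny-listed."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext not in BLACKLIST

def whitelist_allows(filename: str) -> bool:
    """Stricter: the name must have exactly one extension, and it must
    be allow-listed."""
    parts = filename.lower().split(".")
    return len(parts) == 2 and "." + parts[1] in WHITELIST

# bypasses that slip past the blacklist:
assert blacklist_allows("shell.php3")      # uncommon executable extension
assert blacklist_allows("shell.php.jpg")   # double extension
# the whitelist variant refuses both:
assert not whitelist_allows("shell.php3")
assert not whitelist_allows("shell.php.jpg")
assert whitelist_allows("photo.jpg")
```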

 

The whitelisting approach gives developers more control over security as compared to blacklisting, since only allowed file extensions are specified and every other file extension is refused. Still, there are pitfalls in this method: there is a history of server-side bugs that allow bypassing such protections.

 

Content-type Verification

This kind of verification depends entirely upon the content-type header (e.g. Content-Type: image/jpeg), which carries the MIME type. It is a very weak validation mechanism, as this header is supplied by the user (i.e., the attacker).

 

Image Type Content Verification

Many developers believe this is the safest method to prevent malicious file upload issues, but it is not foolproof with certain configurations and can still cause problems. A typical example is the image-processing getimagesize() function, which returns information about an uploaded file, including its file type, size and dimensions, and is therefore helpful in detecting whether an uploaded file is indeed an image.

 

Security researchers have already demonstrated ways to inject executable code into certain sections of images. Examples of such attacks are JPEG EXIF header injections and the encoding of executable code in PNG IDAT chunks. Injecting malicious code inside another file format is not completely new; in the past we have also seen interesting attack methods like GIF/JavaScript polyglots and the GIFAR attack, which bypassed almost all protection mechanisms implemented at the time.
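A content scan therefore has to look past the magic number. A crude sketch follows; the token list is our own illustration and falls far short of a real WAF's signatures:

```python
# token patterns that should never appear inside a benign image
SUSPICIOUS = (b"<?php", b"<script", b"eval(", b"base64_decode(")

def image_carries_code(data: bytes) -> bool:
    """Scan the raw bytes of an upload (e.g. EXIF segments or PNG IDAT
    payloads) for executable-code markers. A valid magic number alone
    proves nothing: code can ride inside metadata sections."""
    lowered = data.lower()
    return any(token in lowered for token in SUSPICIOUS)
```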

 

How Qualys Web Application Firewall Protects

As can be seen in the above examples, most file upload attacks exploit the fact that the application relies on an established protocol for communication with the client. By playing with this protocol, a hacker can deceive the application into thinking that uploaded files are legitimate when in fact they are malicious.

 

When the application relies on content-type verification, the hacker sets an accepted content type while uploading a PHP script. If the application expects a specific extension, the hacker will be happy to rename his payload file with the desired extension. And if the application tries to validate the file format with classic PHP functions, the hacker will insert the code inside a file that respects the desired format (the EXIF header injection described above).

 

Qualys WAF turns standard protection techniques on their heads by applying deep inspection to request bodies instead of performing file validation. Qualys WAF analyzes all parts of the file upload request for signs of trickery or malicious payloads. While parsing a file upload request, it looks for malicious content indicators such as executable code containing classically dangerous functions (system, exec, kill), functions used for code obfuscation (base64_decode, urlencode, preg_replace), or uncommon tricks used to evade detection. When enough signs are present, the WAF blocks the request to prevent the application from being damaged.
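The indicator-scoring idea can be sketched as follows. The tokens, weights and threshold here are invented for illustration; they are not Qualys WAF's actual signatures or scoring:

```python
# each indicator contributes a weight; the request is blocked when the
# accumulated threat level passes the blocking threshold (all values
# below are illustrative)
INDICATORS = {
    b"system(": 40,
    b"exec(": 40,
    b"base64_decode(": 25,
    b"preg_replace(": 25,
    b"<?php": 30,
}
BLOCKING_THRESHOLD = 31

def threat_level(body: bytes) -> int:
    """Sum the weights of every indicator found in the request body."""
    lowered = body.lower()
    return sum(weight for token, weight in INDICATORS.items()
               if token in lowered)

def should_block(body: bytes) -> bool:
    return threat_level(body) > BLOCKING_THRESHOLD
```

A web shell upload trips several indicators at once and sails past the threshold, while an ordinary form post scores zero.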

 

All signs of attack are accurately reported to help the user understand what's happening. When a user tries to upload a malicious file such as a web shell, Qualys WAF blocks the attempt. Detailed information about the malicious action is logged and displayed in the user interface. The event details contain information about the detection type, method, severity, user request details, origin of the request, etc., which provides valuable insight into the attack.

 

Example: Blocked Upload

The screenshot below shows an example of an attempted file upload that was blocked by Qualys WAF.

 

The Qualys WAF administrator can configure what level of threat is reported and what level is blocked. In this case, the threat level of 80 shown in the red box exceeds the blocking threshold, so the event was blocked.

 

The Event Details section shows the detection information details. For the event shown here, QID 150114 is triggered which indicates the user tried to upload a malicious file. QID 226016 indicates the detected event threshold (Threat Level 80) is greater than the blocking threshold (31 in this example), and as a result this request is blocked.

 

The Event Details section also shows what is in the headers and body for both the request and response, which makes it easy for the administrator to analyze the specifics of the threat.

 


The X-Frame-Options HTTP response header is a common method to protect against the clickjacking vulnerability since it is easy to implement and configure, and all modern browsers support it. As awareness of clickjacking has grown in the past several years, I have seen more and more Qualys customers adopt X-Frame-Options to improve the security of their web applications.

 

However, I have also noticed there is a common implementation mistake that causes some web applications to be vulnerable to clickjacking attack even though they have X-Frame-Options configured. In this article, I describe the implementation mistake and show how to check your web applications to ensure X-Frame-Options is implemented correctly.

 

About Clickjacking and X-Frame-Options

As I wrote in my previous article, clickjacking is an attack that tricks a web user into clicking a button, link or picture that the user didn't intend to click, typically by overlaying the legitimate web page with a (typically transparent) iframe. The user thinks he is clicking a link on the legitimate page, but actually clicks an unseen overlaid link or button. This malicious technique can potentially expose confidential information or, less commonly, take control of the user's computer. For example, on Facebook, a clickjack can lead to an unauthorized user spamming your entire network of friends from your account. Clickjacking, also called a “UI redress attack,” has been known for years; Robert Hansen and Jeremiah Grossman originally described it in 2008.

 

So, how does X-Frame-Options work? The X-Frame-Options HTTP response header can be used to specify whether or not the browser should be allowed to render content in a <frame> or <iframe>. If an iframe can't be loaded in the browser and overlaid on the legitimate page, then a clickjacking attack is not possible.
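Emitting the header correctly is a one-line change in most stacks. As a sketch (plain header-list manipulation, no specific framework assumed), note the guard that avoids adding the header a second time when another layer has already set it:

```python
def add_frame_protection(headers: list[tuple[str, str]],
                         value: str = "SAMEORIGIN") -> list[tuple[str, str]]:
    """Append X-Frame-Options exactly once; if any layer already set it,
    leave the existing header alone rather than adding a duplicate."""
    if any(name.lower() == "x-frame-options" for name, _ in headers):
        return headers
    return headers + [("X-Frame-Options", value)]
```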

 

Multiple X-Frame-Options in the Response Header

I have seen claims by Qualys customers that Qualys Web Application Scanning (WAS) flagged false positives for the clickjacking vulnerability during scanning, even though they had deployed X-Frame-Options countermeasures in their web applications. These typically turn out to be true positives because of a common implementation error: more than one X-Frame-Options field present in the response headers.

 

To understand the error, imagine making a request to http://foo.org and getting the following response headers with two X-Frame-Options fields:

 

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: JSESSIONID=E0BF8BA2829148A9D3C5370FB2A03820; Path=/; HttpOnly
X-FRAME-OPTIONS: SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block

 

When more than one X-Frame-Options field is present, browsers follow HTTP RFC 2616 section 4.2: multiple message-header fields with the same field name are combined into one by appending each subsequent field value to the first, separated by commas. This means browsers will transform the previous response headers into the following form:

 

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: JSESSIONID=E0BF8BA2829148A9D3C5370FB2A03820; Path=/; HttpOnly
X-FRAME-OPTIONS: SAMEORIGIN, SAMEORIGIN
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block

 

According to RFC 7034, only three values are valid: DENY, SAMEORIGIN and ALLOW-FROM, and they are mutually exclusive; that is, the header field must be set to exactly one of them. Some browsers treat the header “X-Frame-Options: SAMEORIGIN, SAMEORIGIN” as invalid because the field value “SAMEORIGIN, SAMEORIGIN” is not one of the three. As a consequence, the X-Frame-Options protection is not effective in those browsers, and an attacker could launch clickjacking attacks against victims viewing the website with them. I have tested this with Safari 5.1.7 on a Windows machine and Safari 6.0.5 on a Mac. Although Safari 7 (tested with 7.1.7) has fixed this issue, it still poses a danger to users running older Safari browsers.
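The combining rule and the resulting invalid value can be sketched directly (helper names are our own):

```python
def combine_headers(fields: list[tuple[str, str]]) -> dict[str, str]:
    """Fold repeated header fields into one, comma-joining values in
    order of appearance, as RFC 2616 section 4.2 describes."""
    combined: dict[str, str] = {}
    for name, value in fields:
        key = name.lower()
        combined[key] = value if key not in combined else combined[key] + ", " + value
    return combined

def xfo_is_effective(value: str) -> bool:
    """RFC 7034 allows exactly one of DENY, SAMEORIGIN or ALLOW-FROM
    <origin>; a comma-joined duplicate is not a valid value, and some
    browsers will simply ignore the header."""
    v = value.strip().upper()
    return v in ("DENY", "SAMEORIGIN") or v.startswith("ALLOW-FROM ")
```

Running the duplicated headers from the example above through combine_headers yields “SAMEORIGIN, SAMEORIGIN”, which xfo_is_effective rejects, reproducing exactly the failure mode described here.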

 

How Common Are X-Frame-Options Implementation Errors?

After deciding to write this article, I did some extra research on the Alexa Top 20 to check whether this kind of implementation error could also happen to popular, big websites, or whether it is just a small issue caused by inexperienced developers. The result was surprising: I found that several domains from one website in the Alexa Top 20 suffered from this error.

 

After some investigation, I found I could launch an attack using this vulnerability, and I am sure damage could be done if an attacker combined an attack against this vulnerability with some social engineering work. I've informed the owners of the vulnerable website, and they are working on mitigations.

 

Root Cause of the Implementation Error

Multiple reasons can lead to this kind of implementation error. Based on feedback from our customers who have made these mistakes, and on my own development experience, two conditions typically cause more than one X-Frame-Options field to appear in the response header:

 

Condition 1: The X-Frame-Options header is added in the application source code and is also configured on the Apache or IIS server.

Condition 2: The X-Frame-Options header is added in the source code or configured on the Apache/IIS server, while a load balancer also sets “x-frame-options” in its policy.

 

If you are deploying X-Frame-Options to protect against clickjacking attacks, I would advise you to check whether your response headers contain more than one X-Frame-Options field.

For more than two decades, SSL has ruled the roost as the predominant encryption protocol on the Web. This is unfortunate, not least because in recent years many vulnerabilities have surfaced in SSL. It has had its day, done its job, and is now battle weary. Today, to say the least, early versions of SSL and TLS don't get the job done when it comes to securing website traffic.

 

In fact, earlier this year, the PCI Security Standards Council removed SSL from its list of strong crypto protocols in the PCI Data Security Standard, and as of June 30, 2016 it will no longer be permitted as a security control. “That isn't much time for everyone who needs to become compliant to become compliant,” said Ivan Ristic, director of application security at Qualys, during his presentation The TLS Maturity Model here at Qualys Security Conference 2015 in Las Vegas.

 

“Life was much simpler back when we thought that encrypted communication via TLS was just secure. Not so any longer,” Ristic said.

 

Why does it matter? SSL and TLS, simply put, encrypt traffic between two endpoints, such as a shopper's web browser and an eCommerce provider's server. SSL has shown that it remains vulnerable to all sorts of attacks, from grabbing data in transit to man-in-the-middle attacks, among others.

 

“The SSL protocol (all versions) cannot be fixed; there are no known methods to remediate vulnerabilities such as POODLE. SSL and early TLS no longer meet the security needs of entities implementing strong cryptography to protect payment data over public or untrusted communications channels. Additionally, modern web browsers will begin prohibiting SSL connections in the very near future, preventing users of these browsers from accessing web servers that have not migrated to a more modern protocol,” the PCI Security Standards organization wrote in its report Migrating from SSL and Early TLS.

 

What should organizations do? One would think that it would be very straightforward, but it's not, Ristic explained in his keynote. He developed a TLS Maturity Model that is designed to help enterprises get to where they need to be, not only to be compliant, but to be secure.

 

The model has five levels, ranging from utter chaos at Level 1 to Level 5, which is probably more security than is necessary for most mere mortals. Level 4 is where most organizations will want to be, Ristic said.

 

Here is the model as he described it in his post:

 

 

At level 1, there is chaos. Because you don't have any policies or rules related to TLS, you're leaving your security to chance (e.g., vendor defaults), individuals, and ad-hoc efforts generally. As a result, you don't know what you have or what your security will be. Even if your existing sites have good security, you can't guarantee that your new projects will do equally well. Everyone starts at this level.

 

Level 2, configuration, concerns itself only with the security of the TLS protocol, ignoring higher protocols. This is the level that we spend most time talking about, but it's usually the easiest one to achieve. With modern systems, it's largely a matter of server reconfiguration. Older systems might require an upgrade, or, as a last resort, a more secure proxy installed in front of them.

 

Level 3, application security, is about securing those higher application protocols, avoiding issues that might otherwise compromise the encryption. If we're talking about websites, this level requires avoiding mixing plaintext and encrypted areas in the same application, or within the same page. In other words, the entire application surface must be encrypted. Also, all application cookies must be secure and checked for integrity as they arrive in order to defend against cookie injection attacks.

 

Level 4, commitment, is about long-term commitment to encryption. For web sites, you achieve this level by activating HTTP Strict Transport Security (HSTS), which is a relatively new standard supported by modern browsers (IE support coming in Windows 10). HSTS enforces a stricter TLS security model and, as a result, defeats SSL stripping attacks and attacks that rely on users clicking-through certificate warnings.
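Activating HSTS amounts to emitting one response header. A small sketch follows (our own helper; the 365-day max-age is a common but illustrative choice):

```python
def hsts_header(max_age_days: int = 365,
                include_subdomains: bool = True) -> tuple[str, str]:
    """Build a Strict-Transport-Security header; max-age is expressed
    in seconds and should be long enough to cover the gap between a
    user's visits."""
    value = f"max-age={max_age_days * 86400}"
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)
```

Because the browser remembers the policy for max-age seconds, an SSL-stripping attacker never gets a plaintext request to intercept on return visits.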

 

Finally, at level 5, robust security, you're carving out your own private sliver of the PKI cloud to insulate yourself from the PKI's biggest weakness, which is the fact that any CA can issue a certificate for any website without the owner's permission. You do this by deploying public key pinning. In one approach, you restrict which CAs can issue certificates for your web sites. Or, in a more secure case, you effectively approve each certificate individually.
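Pinning, as standardized for HTTP in HPKP (RFC 7469), boils down to publishing base64-encoded SHA-256 hashes of the SubjectPublicKeyInfo structures you are willing to accept. A minimal sketch (helper names are ours; extracting the DER-encoded SPKI from a certificate is out of scope here):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Compute the HPKP-style pin for a public key: the base64 of the
    SHA-256 digest of the DER-encoded SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def pin_header(pins: list[str], max_age: int = 5184000) -> tuple[str, str]:
    """Assemble a Public-Key-Pins header from precomputed pins."""
    parts = [f'pin-sha256="{p}"' for p in pins] + [f"max-age={max_age}"]
    return ("Public-Key-Pins", "; ".join(parts))
```

A browser that has seen this header will thereafter reject any certificate chain that does not include one of the pinned keys, regardless of which CA signed it.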

Enterprises are having a challenging time securing their data and systems. But it doesn't have to be that way. We recently reached out to Tyler Shields, principal analyst at Forrester, to discuss his presentation at Qualys Security Conference 2015, what it means to be able to secure enterprises at “cloud scale,” and what it's going to take for enterprises to succeed in security in the years ahead.

 

Shields is an expert on mobile and application security. Before joining Forrester, Shields was product owner and manager for mobile solutions at Veracode. Previously, he was a security consultant for the boutique consulting firm @Stake, which was acquired by Symantec in 2004. There, he assessed the security of Fortune 500 customers, financial firms, educational institutions, and segments of the U.S. government.

 

George: A good place to start this discussion would be how mobile, cloud, and all of the network connectivity surrounding the Internet of Things are changing the enterprise threat posture and how enterprises are securing themselves.

 

Tyler: Realistically, it's a completely new paradigm for security, right? You add always-on, always-connected devices, with enough data and bandwidth to make that connectivity useful. That has to be coupled with the fact that we are no longer keeping data on our own premises; we're putting all of our enterprise data into the cloud. It completely changes how we have to do security. The only way to truly do security effectively in this new environment is to do it at cloud scale, meaning you have to be able to capture security data, analyze that data, and then make decisions on it and enforce your security controls, all at cloud scale; because if you do it at anything less, you'll never be able to keep up with the pace of the movement of the data.

 

It's very different now than a decade ago. Take the IDS model of just watching network traffic inside your environment for anomalous behavior. That's not going to do it now. Now the right way to do security is to look at data movement. Look at containers, for example: you have to look at the metadata underneath your containers, look at application events, and look at log files in real time. The quantity of data is now so immense that it's unreal.

 

George: What does it mean for mid and large enterprises to manage security at “cloud scale”?

 

Tyler: The enterprise has to look at security differently than it ever has in the past. It has to look at security in places it never had to before, and in an operational model instead of the CAPEX model. It's an OPEX-versus-CAPEX difference too: you're no longer spending CAPEX on owning and securing the things you own; you're spending OPEX, operational expenditure, on operations resources and the time to secure services. That OPEX spend is going to be much higher than the CAPEX spend we've seen in the past, on the products we use, the services we use, and the security of those services.

 

I think what that means is that the enterprise has to look at things very, very differently. They have to become procurement experts. The CISO needs to understand every service that he buys from a security perspective. That's so weird when the CISO used to have to care about security in the data center and that was it. It's just a very different world.

 

George: This move to continuous integration and continuous development is changing how enterprises handle risk. How do you see it changing the way they secure their internal infrastructure and application development lifecycle?

 

Tyler: It certainly does. It used to be that your development life cycle could be 18 months long. You had security stage gates that would trigger within that life cycle, such as a design security stage gate, a code review stage gate, a pre-production pen test and then a post-production pen test. You had these stage gates across 18 months in which you could run the tests. Once every 3 months you'd have a little project to run, and it wasn't that big a deal. But when you're pushing to production, say, 20, 30, 50 times a day, how do you maintain those 4 traditional stage gates?

 

That model completely flips upside down as well, and now it's less about stage gates and security being the team that sits in the middle and blocks and tackles things. Instead, it's about embedding security right into the developer. Not even the development life cycle, but the developer, the person, so the developer can run security-centric unit tests in real time. It's about actually doing security in real time, and even more than that, it's about having the ability to respond in seconds versus days, weeks, months, or years.

 

George: The first thing that comes to mind is anything that can be automated must be automated if you’re going to survive.

 

Tyler: That's the fundamental piece. Everything needs to be automated. There are two things. Everything needs to be fully automated from a continuous security review perspective; if you're not automated, forget it, you'll never keep up. The other side of that coin is to spend a lot of resources so that when you do find a problem, you handle it in the quickest, most expeditious way possible.

LAS VEGAS – Philippe Courtot, Qualys (QLYS) founder and CEO, in his keynote address today at Qualys Security Conference 2015, spoke to the massive and rapid evolution in business-technology systems currently underway in the enterprise. Enterprises are grappling with the complexities of securing their information in the public and private cloud, on mobile devices, and in the data gathered by all of the sensors associated with the Internet of Things. They are “faced with the challenge of having to retool their entire infrastructure,” Courtot said.

 

While all of these new, emerging, and in some cases rapidly maturing technologies are helping the enterprise be more agile and respond to changing market conditions, all of these efforts need to be done securely.

 

“We still need to secure everything,” Courtot said. “In the old days everything was essentially perimeter-centric, and we were living very happily as the networks were evolving. But the problem with security started to become very critical as we needed to deploy more and more applications. Unfortunately, enterprises are still architected for the old client/server world.”

 

So how do enterprises secure themselves in an “everything is connected to everything” world, Courtot asked. Well, what enterprises have been doing to date has not been working well for anyone. They've been turning to a plethora of point solutions: data leak protection/prevention, anti-virus and anti-malware, intrusion detection/prevention systems, network and next-generation firewalls, vulnerability assessment tools, threat intelligence and more. It's very difficult to adequately protect enterprise systems when those systems, applications, and data are so dispersed across so many cloud services and endpoint devices, Courtot said.

 

Sensors and the Cloud

When it comes to building a security framework that would work for the modern, highly agile enterprise, Courtot pointed to an analogy familiar to almost everyone: home security systems. How are homes secured today? Home security systems rely on sensors and management systems that monitor homes for changes in heat, signs of fire or flooding, motion from intruders, and the status of garage and building doors and windows. “All of that information is beamed up to a cloud platform where all of that data is analyzed. And depending on conditions it then sends alerts and information to incident response, such as the local fire department, police, or perhaps private services. And all of that information is centrally managed [by the homeowner] on their phone,” Courtot said.

 

That's how cloud security services for the enterprise need to work as well: sensors in the enterprise environment gather security and compliance information, asset information, and other data about the state of systems; all of that data is then sent to a cloud service for analysis, which in turn provides security teams the information they need to protect their environments.

 

“Our appliance is unique from others,” Courtot said, and made the parallel to home security systems regarding how the Qualys Cloud Platform gathers all of the information security teams need about the state of their network, and how they can manage their security from anywhere in an app.

 

Going forward, that’s the kind of security capability enterprises will need to manage security at the scale that their clouds services are growing. “You need sensors that are gathering data from everywhere in the enterprise. And you need to integrate that security data with information about your assets, and analyze it all to see if they are secure and in compliance. If not, it needs to be acted upon,” Courtot said. “And today that means it has to scale, it has to be in the cloud,” he said.

Let’s face it: cloud computing, artificial intelligence, mobile, big data, automation, DevOps, and the Internet of Things have all been hyped for some time. While the impact of these trends has likely been overstated in the short run, it has likely been understated over the long run. That is to say, when it comes to the next decade, buckle up: a significant amount of disruption is on its way.

 

Speaking of disruption: when it comes to cybersecurity, the many high-profile government and private sector breaches in the past year and the rapid growth in mobile and cloud computing have created enough disruption for most of us. The research firm IDC expects spending on public cloud alone to reach $70 billion this year.

Brace Yourselves

 

Security spending is also up, and more enterprises are using cloud-based security toolsets to secure their systems. In the CSO story “Survey says enterprises are stepping up their security game,” I covered how the PwC, CIO, and CSO survey showed that enterprises are reaping benefits from cloud-based security services.

 

These benefits include: real-time monitoring and analytics (56%), authentication (55%), identity and access management (48%), threat intelligence (47%), and end-point protection (44%).

 

At Qualys Security Conference 2015, which kicked off today, the increased importance of cloud-based security services will be a central focus. In addition to keeping attendees informed about the latest enhancements to the Qualys Cloud Platform, as well as future Qualys roadmaps, the conference will show how enterprises can obtain more insight through security and compliance data, and how enterprises must evolve as technology trends evolve.

 

Cloud, mobile, and automation were big themes last year, and will be built upon even more this year. Enterprises need to get better at continuously monitoring their systems for security defects and vulnerabilities, policy violations, and intrusions. But as more of the data center is automated, teams will have to get better at automating security policies and enforcing security and compliance controls.

 

The impact all of this disruption is having on enterprise security teams will be a central part of the presentation by Tyler Shields, principal analyst at Forrester Research: Security is Breaking Down... Why Now, and What Can We Do About It? Shields will show how we arrived here and where the security industry and enterprise teams need to improve in order to succeed in the age of mobile, cloud, and IoT. What has to be done to secure applications and data when networks, operating systems, and applications have been so transformed? Shields promises to delve into these trends “and see what they really mean to our security future.”

 

The conference will conclude with a broader view from author and entrepreneur Martin Ford, who will provide a warning about how unjust an automated economy can become and what must be done to avoid a dismal future. Ford is the author of the New York Times Bestselling Rise of the Robots: Technology and the Threat of a Jobless Future and The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future.

Sometimes standard web application scanning techniques are too intrusive. The web application owner may not want to run a scan that tests for a vulnerability by uploading application data because that might have negative side effects for the application. It can be better to use an indirect method like web application fingerprinting which inspects static files in the web app to determine its version, and then reports the known vulnerabilities for that version.

 

Blind Elephant is a trustworthy open-source static-file web application fingerprinter. It attempts to discover the version of a (known) web application by comparing static files at known locations against pre-computed hashes for versions of those files in all available releases. This technique works well when the static files change with every release, allowing the fingerprinter to identify the application version based on the contents of the files. This technique is non-invasive and generic, and the use of pre-computed hashes means it is fast, low-bandwidth and highly automatable.
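To make the technique concrete, here is a minimal sketch of hash-based version fingerprinting. The file paths and hash table are invented for illustration; Blind Elephant's real corpus pre-computes hashes for many static files across every release of each supported application.

```python
# Minimal sketch of static-file fingerprinting, in the spirit of Blind Elephant.
# The hash table and file paths below are hypothetical.
import hashlib

# version -> {static file path: expected SHA-1 of that file's contents}
KNOWN_HASHES = {
    "1.0": {"readme.txt": hashlib.sha1(b"app v1.0").hexdigest()},
    "1.1": {"readme.txt": hashlib.sha1(b"app v1.1").hexdigest()},
}

def fingerprint(fetch):
    """Return the set of versions consistent with the fetched static files.

    `fetch(path)` returns the file contents (bytes) or None if missing.
    """
    candidates = set(KNOWN_HASHES)
    for version, files in KNOWN_HASHES.items():
        for path, expected in files.items():
            body = fetch(path)
            if body is None or hashlib.sha1(body).hexdigest() != expected:
                candidates.discard(version)
    return candidates

# Simulated target whose readme.txt matches the 1.1 release
print(fingerprint(lambda path: b"app v1.1"))  # {'1.1'}
```

Because only static files are fetched and hashed, nothing is ever written to the target application, which is what makes the approach safe for production systems.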

 

Over the five years since the open source Blind Elephant project was introduced, Qualys has maintained it, integrated it with the Qualys Cloud Suite, and added lots and lots of detections. That makes the Qualys integration a useful tool for web application security teams.

 

Qualys Integration & Detections

Qualys has been steadily adding detectable applications to the Qualys integration, which now detects over 200 web applications, plugins and extensions, and this number continues to grow every week. Qualys customers can look up QID 45114 in their scan reports and see a listing of the web applications found in their environment.

 

In order to add a detection, the Qualys team needs access to the source code and a few versions of the web application. With too few versions or files that remain unchanged across versions, it is not possible to create detections.

 

Use Cases

The Blind Elephant engine is most effective in scenarios where direct testing would be too intrusive for the customer's application, and where it's better to determine the existence of the vulnerability indirectly, i.e. via fingerprinting:

  • Post-Authentication Vulnerabilities: For example, persistent cross-site scripting vulnerabilities which need a user with certain rights in order to be successfully exploited.
  • File Upload Vulnerabilities: Vulnerabilities that require the upload of arbitrary data on a customer's application to be successfully exploited.
  • Remote Code Execution Vulnerabilities: Vulnerabilities that require execution of arbitrary code on a targeted system. For example, command injection vulnerabilities can be safely identified via Blind Elephant.
  • SQL Injection: Because the number and names of tables may vary with the implementation of the application, it's not possible to automate table lookups.

 

I remember a particular case when Blind Elephant was really helpful: MediaWiki DjVu and PDF File Upload Remote Code Execution Vulnerability (QID 12832). This was a zero-day affecting the software that powers Wikipedia. The tricky part was that to execute arbitrary code on an affected installation, one needed to upload a legitimate file to the server and then pass shell metacharacters to the application, which would execute arbitrary code. Making it more urgent, a PoC was available! Since Qualys has always treated customer data confidentially, a file upload was out of the question. It was with Blind Elephant that this detection was made possible.

 

More Detections

Keep visiting the Blind Elephant Supported Detections page to read about support added for different web applications and their extensions/plugins. If you want a detection added for a certain open source web application, please post your request to the Blind Elephant community.

Here’s a short story about a simple vulnerability that was easy to fix, but nonetheless could have had serious consequences.

 

Imagine an attacker, who doesn’t even have root access, being able to:

-  Get source code from the community of Pebble watch developers

-  Replace their binaries with malicious ones

-  Deploy the malicious binaries to the developers’ watches when they click the ‘Remote Deployment’ button.

 

The above was possible (until Pebble made a quick fix -- kudos to them!). And Pebble is not alone: researchers at Black Hat and DEF CON this year demonstrated a wide array of device hacks. The lesson for developers is to always include secure coding practices and testing in your software lifecycle.

 

About Pebble

Pebble is a well-known player in the expanding wearables (smart watch) market. One of their key strengths has been their apps market which currently has more than 6000 apps and watch faces. In 2013 Pebble launched the cloudpebble.net portal where developers can code, build and remotely deploy apps to their smartwatch without installing any SDK on their machines.

 

The Vulnerability

While building a Pebble watch app through cloudpebble.net, I observed that the build logs contain output from build commands run on the Linux shell. I was interested to check if I could inject a custom command during the build process and get its output from the build log. After a few tries, I was able to successfully demonstrate the attack. Following Qualys’ responsible disclosure policy, I contacted Pebble and provided details of the attack. Pebble acknowledged the issue and provided a fix within 6 hours, which was quite impressive. As a token of appreciation they added me to their ‘White Hat Hall Of Fame’.

 

Proof of Concept

Following are the details about how I was able to carry out this attack.

 

  1. I created a cloudpebble.net account, logged in at https://cloudpebble.net/ide/, and created a new project with the configuration shown below.

    figure1.jpg
  2. I created a .js source file under this project. The file name field had only client-side validation; surprisingly, there was no server-side validation.

    I removed the client-side validation and tried adding different commands in the filename string.

    figure2.jpg
  3. I was successfully able to execute different commands on this server, and the output of a few of the commands was getting dumped to the buildlog.txt file.

    After a few tries, I figured out that I could get the system to dump the contents of the /etc/passwd file by using the filename “a.js | wget -i /etc/passwd | a.js”. While this resulted in a build failure, buildlog.txt was available and held the contents of the /etc/passwd file. This meant I could log in as any other developer on this build system and access and change their source and binary files.

    figure3.jpg
  4. The cloudpebble.net website also provides a remote deployment facility. Using this feature, files built on this server in a specific developer’s account can be automatically installed on that developer’s Pebble watch. Since I could log in as any developer, I had a very powerful attack.
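To see why an unvalidated file name is dangerous, here is a sketch of the difference between interpolating a name into a shell command line and passing it as a plain argument. (Pebble's actual build code isn't public; we can only assume it shelled out in roughly this way.)

```python
# Illustration of shell command injection via a file name: if a build system
# interpolates an attacker-controlled name into a shell string, shell
# metacharacters such as ';' and '|' are honored.
import subprocess

filename = "a.js; echo INJECTED"

# Vulnerable pattern: the name becomes part of the shell command line.
out = subprocess.run("ls " + filename, shell=True,
                     capture_output=True, text=True)
print("INJECTED" in out.stdout)  # True -- the injected echo ran

# Safe pattern: pass the name as a single argv element; no shell parsing occurs.
out = subprocess.run(["ls", filename], capture_output=True, text=True)
print("INJECTED" in out.stdout)  # False -- treated as a literal file name
```

The vulnerable variant fails the `ls` but still runs the injected `echo`; the safe variant merely reports that no file with that odd name exists.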

 

The Fix

Fixing this issue was straightforward. The Pebble team simply added server-side validation to the file/project name creation pages, alongside the existing client-side JavaScript validation. Now even when someone disables the JavaScript validation in their browser, the server still won’t accept an invalid file name.
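A minimal sketch of such a server-side whitelist check (the exact pattern Pebble uses isn't published; this is purely illustrative):

```python
# Server-side file name validation sketch: accept only names matching a
# strict whitelist, regardless of what client-side JavaScript allowed.
# The pattern below is a hypothetical policy for this example.
import re

SAFE_NAME = re.compile(r"^[A-Za-z0-9_-]{1,64}\.js$")

def is_valid_filename(name: str) -> bool:
    return bool(SAFE_NAME.fullmatch(name))

print(is_valid_filename("app.js"))                      # True
print(is_valid_filename("a.js | wget -i /etc/passwd"))  # False
```

Rejecting anything outside a known-good alphabet, rather than trying to blacklist individual shell metacharacters, is the robust way to do this.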

 

For an extra level of safety, developers should disable the ‘Developer Connection’ option in the Pebble Mobile Application, except when they are trying to deploy and test their applications using cloudpebble.net.

amolsarwate

OpenSSL Vulnerability

Posted by amolsarwate on Jul 8, 2015

The OpenSSL team has announced a fix to resolve a high severity vulnerability (CVE-2015-1793) which allows certificate forgery. During certificate verification, OpenSSL (starting from versions 1.0.1n and 1.0.2b) will attempt to find an alternative certificate chain if the first attempt to build such a chain fails. An error in the implementation of this logic means that an attacker could cause certain checks on untrusted certificates to be bypassed, such as the CA flag, enabling them to use a valid leaf certificate to act as a CA and "issue" an invalid certificate. This issue impacts any application that verifies certificates, including SSL/TLS/DTLS clients and SSL/TLS/DTLS servers using client authentication. It affects OpenSSL versions 1.0.2b, 1.0.2c, 1.0.1n and 1.0.1o.

 

OpenSSL 1.0.2b/1.0.2c users should upgrade to 1.0.2d.

OpenSSL 1.0.1n/1.0.1o users should upgrade to 1.0.1p.
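As a quick triage aid, the affected and fixed versions above can be encoded in a small script. This sketch matches only the exact version strings named in the advisory; it does not parse arbitrary OpenSSL version output.

```python
# Flag OpenSSL version strings in the range affected by CVE-2015-1793,
# per the advisory quoted above.
VULNERABLE = {"1.0.1n", "1.0.1o", "1.0.2b", "1.0.2c"}
FIXED = {"1.0.1": "1.0.1p", "1.0.2": "1.0.2d"}

def advise(version: str) -> str:
    if version in VULNERABLE:
        return "upgrade to " + FIXED[version[:5]]
    return "not affected by CVE-2015-1793"

print(advise("1.0.2c"))  # upgrade to 1.0.2d
print(advise("1.0.1m"))  # not affected by CVE-2015-1793
```

Note that versions prior to 1.0.1n/1.0.2b never contained the alternative-chain logic, which is why they fall through to "not affected".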

 

Stable distributions of many Linux flavors are not affected:

 

Red Hat: No Red Hat products are affected by the CVE-2015-1793 flaw. No actions need to be performed to fix or mitigate this issue in any way.

OpenSUSE: The OpenSSL versions shipped in openSUSE 13.1 and 13.2 are not affected. The openSUSE Tumbleweed distribution never received a vulnerable version and was never affected. The next submission into Factory will skip any vulnerable versions.

Ubuntu: Ubuntu versions 12.04 LTS, 14.04 LTS, 14.10, 15.04 and 15.10 are not affected.

Debian: The stable and old stable versions are not vulnerable. The 'testing' and 'unstable' versions are affected.

 

Qualys has released QID 38104. Please refer to the knowledge base for more information on this check.

Most organizations enforce system configuration policies to reduce the chance of misconfiguration and improve their overall security posture. For Microsoft Windows systems, many organizations rely on guidance from Microsoft Security Compliance Manager (SCM) for proper configuration. For organizations deploying Windows 8.1, this Top 4 list helps you understand and implement the new settings introduced in SCM for Windows 8.1.

 

As an engineer on the Qualys Policy Compliance product team, I routinely compare compliance benchmarks, and have compiled this list based on my work. If you are already familiar with previous versions of Windows, this blog post can help you quickly adopt the new changes.

 

1. Windows Defender

Windows Defender is your first line of defense against spyware, viruses and malicious software, helping to identify and remove them. On Windows 8 and above, it runs in the background and notifies the user when action is needed.

 

 

 

In Windows 8.1, Microsoft has introduced more Windows Defender options related to scanning, reporting, real-time protection and more. Of the over 90 settings in Windows Defender, the following are the most important ones you should enable if Windows Defender is the only anti-malware product on the target system.

  1. Turn on behavior monitoring
  2. Scan removable drives
  3. Scan packed executables
  4. Scan all downloaded files and attachments
  5. Check for the latest virus and spyware definitions before running a scheduled scan

 

2. Local Security Authority (LSA) Protection

The Local Security Authority (LSA), which includes the Local Security Authority Subsystem Service (LSASS) process, validates users for local and remote sign-ins and enforces local security policies. LSA protection prevents code injection that could compromise credentials.

 

 

 

It is recommended to enable the protected process settings in Windows 8.1 (although they are not available in Windows RT 8.1). These settings improve protection for credentials stored and managed by LSA by preventing memory reads and code injection by non-protected processes. Additional protection is achieved when this setting is used along with Secure Boot. The requirements for plug-ins and drivers to load into a protected LSA process are:

 

  1. Signature Verification -
    1. Plug-ins loaded into LSA must be digitally signed with a Microsoft signature.
    2. Plug-ins that are drivers, such as smart card drivers, must be digitally signed using WHQL certification.
    3. Plug-ins that are not signed will fail to load in LSA.
  2. Plug-ins must be compliant with SDL process guidelines; plug-ins that are digitally signed with a Microsoft signature but are not SDL-compliant may still fail to load in LSA.

    To verify which plug-ins will load, enable audit mode for lsass.exe using the steps below:
    1. Open the registry path “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\LSASS.exe”
    2. Create a DWORD value named AuditLevel under that key and set it to 00000008
    3. Reboot the computer
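For convenience, the same registry change can be captured in a .reg file and imported with regedit. This fragment mirrors the documented steps; as always, back up the registry before importing.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\LSASS.exe]
"AuditLevel"=dword:00000008
```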

 

After the reboot you can check events 3065 and 3066 to determine whether plug-ins and drivers were loaded in LSA.

 

Event 3065: This event records that a code integrity check determined that a process (usually lsass.exe) attempted to load a particular driver that did not meet the security requirements for Shared Sections. However, due to the system policy that is set, the image was allowed to load.

 

Event 3066: This event records that a code integrity check determined that a process (usually lsass.exe) attempted to load a particular driver that did not meet the Microsoft signing level requirements. However, due to the system policy that is set, the image was allowed to load.

 

These events will not be generated when the kernel debugger is attached and enabled on the system.

 

Audit mode can be enabled for multiple computers in a domain, by modifying HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\LSASS.exe registry key.

 

Note - For the GPO to take effect when the audit level value setting is created in the Group Policy Object (GPO), the GPO changes must be replicated to all domain controllers.

 

More information related to LSA is available here: https://technet.microsoft.com/en-us/library/dn408187.aspx

 

3. Enhanced Mitigation Experience Toolkit (EMET)

EMET 5.1 is a toolkit for system administrators, compatible with Windows 7 onwards and designed to secure Windows targets against attacks. Much software today is vulnerable in ways that normal users typically cannot see or detect, which creates opportunities for attackers to exploit the target either for data collection or denial of service. EMET is effective in detecting and blocking attack techniques and common actions used to exploit memory corruption vulnerabilities.

 

Once you install EMET you can configure it either through the EMET user interface or through the Group Policy editor. Here are some recommended settings using the Group Policy editor.

 

 

Recommendation 1: Enable the System SEHOP setting. When software suffers from memory corruption vulnerabilities, an exploit may be able to overwrite the data structures that control how the software handles exceptions. Structured Exception Handler Overwrite Protection (SEHOP) verifies the integrity of those structures before the exception handlers are invoked. Keep this setting at “Enable: Application Opt-Out”.

 

 


Recommendation 2: Enable the System DEP setting. Data Execution Prevention (DEP) prevents the execution of code from memory pages that are not explicitly marked as executable; normally this flag needs to be set while compiling the binary. The “System DEP” setting in EMET also prevents execution of code not explicitly marked as executable, even when the flag was not set at compile time. Set it to “Enable: Application Opt-Out”.

 

 


Recommendation 3: Microsoft allows you to specify that applicable EMET protections be applied to two sets of software: recommended software and popular software. Enabling the default protections for these two sets improves the security of your machine.

 

Popular software includes browsers, PDF readers, compression tools, video players and Internet Messenger.

 

 

 

The set of recommended software includes Adobe Reader, Microsoft Office and WordPad.

 

 

 

4. Miscellaneous Settings

 

In addition to the critical new SCM settings for Windows 8.1 above, organizations should also refer to some of the settings below:

    1. Restrict delegation of credentials to remote servers.
      When running in restricted mode, participating apps do not expose credentials to remote computers. You should note, however, that restricted mode may limit access to resources.
    2. Automatically send memory dumps for OS-generated error reports.
      This policy setting controls whether memory dumps in support of OS-generated error reports can be sent to Microsoft automatically. This policy does not apply to error reports generated by 3rd-party products, or additional data other than memory dumps.
    3. Set what information is shared in Search.
      It is recommended to set to Not Configured or Disabled, depending on organization policy, to avoid leakage of user information and location to Bing search.
    4. Configure Group Policy Caching.
      If you enable or do not configure this setting, the Group Policy will cache the policy information after every background processing session. This minimizes bandwidth usage as it just checks the link speed and does not download the latest version of policy information. If this setting is disabled the Group Policy client will not cache applicable GPOs or settings.

     

    Configuration, compliance and ultimately improving your organization's security posture can be a daunting task. But with the help of pre-built compliance benchmarks and tools, this goal can be achieved with reasonable effort.

    Would you buy a cellphone with a hardcoded password? Definitely not. I wouldn’t either.

     

    But as is sometimes the case with non-mass-market devices, security can be overlooked in favor of convenience, even if in retrospect it’s clearly a mistake to do so. Fortunately, this story has a happy ending, thanks to responsible disclosure and quick vendor response.

     

    As a vulnerability research engineer at Qualys, I routinely audit various devices, and today Qualys is releasing information on three new vulnerabilities I found on the Garrettcom Magnum 6k and Magnum 10k Series managed switches.

     

    These devices had the following security issues:

    • The firmware contained a hardcoded password for a privileged account used for support and maintenance operations. A malicious person who discovered the password and had access to the device could execute commands or shut down the device. Even worse, a Shodan search indicated that at least seven of these devices are connected to the Internet and publicly discoverable.
    • The firmware also contained hardcoded RSA private keys and certificate files. An attacker with access to these certificates and keys could not only decrypt HTTPS traffic but also log in via SSH, without a username or password, to any device running the same version of the firmware.
    • Less interesting but still important to fix were some cross-site scripting (XSS) vulnerabilities.

     

    In accordance with Qualys’ responsible disclosure policy, we notified the vendor and ICS-CERT; the vendor has released fixes for the Magnum 6k series and for the Magnum 10k series, and we are now going forward with coordinated disclosure following publication of the ICS-CERT advisory.

     

    About GarrettCom Magnum

    GarrettCom Magnum devices are a range of managed switches specially designed and hardened to withstand some of the most grueling industrial environments, with high EMI, extended temperature ranges and significant atmospheric contamination. GarrettCom estimates the affected products are deployed primarily in the United States, with a small percentage in Europe and Asia. According to GarrettCom, the devices are deployed across several U.S. critical infrastructure sectors including critical manufacturing, defense industrial base, energy, water, and transportation. According to their website, the devices are used in the power industry, smart grid backbones, intelligent traffic systems and other industries and systems.

     

    All tests were performed on firmware version 4.5.0 (Rel_6K_A450.bin) for the Magnum 6k series, as shown in the image below of the strings output for the firmware binary.

     

     

     

    I found three classes of vulnerabilities.

     

    1. CVE-2015-3959: Hardcoded Passwords (A5: Security Misconfiguration)

    CVSS Score: AV:N/AC:L/Au:N/C:C/I:N/A:N (how to read)

     

    Firmware with hardcoded credentials or backdoors leaves any device at huge risk of getting hacked: an attacker who gains knowledge of this information can take complete control of the device, as such accounts are generally high-privilege accounts meant for support or maintenance operations. The firmware contained a hardcoded password for the high-privilege user “factory”, as shown in the snapshot below:

     

     

    The account was meant to provide elevated access to the device, meaning an attacker with access to the device could gain elevated privileges and change device settings or initiate a complete shutdown, causing a denial of service. Search results returned by Shodan reveal 17 active devices accessible over the Internet that are still running vulnerable versions of the firmware. The vulnerability was reported as patched in earlier versions of the firmware, but we found that the account and password still existed. The newer firmware revision 4.5.6 fixes this issue, and users are advised to upgrade to the latest version.

     

    2. CVE-2015-3960: Hardcoded RSA private key (A5: Security Misconfiguration)

    CVSS Score: AV:N/AC:L/Au:N/C:C/I:N/A:N (how to read)

     

    During reverse engineering of the firmware, we discovered that it contained hardcoded RSA private keys and certificate files, which were used by the server for SSH and HTTPS connections. Firmware with hardcoded private keys and certificates poses a greater security risk: because a firmware image is shared across a whole series of devices, every device running the affected (or an earlier) version of that firmware is vulnerable by default, so an attacker only has to obtain the keys/certificates once to exploit any similar device with ease. An attacker with these certificates and keys can not only decrypt HTTPS traffic but can also log in via SSH without a username/password to any device running the same version of the firmware.

     

    A general best practice that should be followed while developing firmware is to:

    • Avoid the use of any hardcoded keys
    • Never store private keys in unencrypted form
    • Protect every private key with a strong passphrase that is long and hard to crack in case anyone gains access to the keys
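As a rough sketch of how one might flag plaintext keys pulled out of a firmware image: legacy PEM-encoded RSA keys that are passphrase-protected carry a "Proc-Type: 4,ENCRYPTED" header (newer PKCS#8 keys instead use a "BEGIN ENCRYPTED PRIVATE KEY" marker), so the absence of either is a quick tell. The key material below is abbreviated placeholder text.

```python
# Flag legacy PEM RSA private keys that are stored without passphrase
# protection. Note: PKCS#8 encrypted keys use a "BEGIN ENCRYPTED PRIVATE KEY"
# marker instead; this sketch only covers the legacy "BEGIN RSA PRIVATE KEY"
# format seen in this firmware.
def pem_key_is_encrypted(pem: str) -> bool:
    return ("BEGIN RSA PRIVATE KEY" in pem
            and "Proc-Type: 4,ENCRYPTED" in pem)

plaintext_key = ("-----BEGIN RSA PRIVATE KEY-----\n"
                 "MIIC...\n"
                 "-----END RSA PRIVATE KEY-----")
encrypted_key = ("-----BEGIN RSA PRIVATE KEY-----\n"
                 "Proc-Type: 4,ENCRYPTED\n"
                 "DEK-Info: DES-EDE3-CBC,58D326A37D2A5F52\n"
                 "...\n"
                 "-----END RSA PRIVATE KEY-----")

print(pem_key_is_encrypted(plaintext_key))  # False -- should be flagged
print(pem_key_is_encrypted(encrypted_key))  # True
```

Of course, as the weak "magnum6k" passphrase shows, an encrypted key is only as strong as the passphrase protecting it.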

     

    In the case of the GarrettCom Magnum 6k and 10k device series, the firmware contained a hardcoded key used for SSH connections to the device. The key is meant to provide SSH access to the device without any password, and it was protected with a passphrase that was easy to guess, as demonstrated in the snapshot below.

     

    Proof-of-Concept:

    The snapshot below shows the key used for SSH connections to the device. The key is protected by the passphrase “magnum6k”, which was verified using PuTTY Key Generator, as shown in the next snapshot.

     

     

     

    The firmware also contained keys and certificates meant for HTTPS communication. Access to these certificates and private keys means that any attacker intercepting traffic between the device and its users can decrypt the communication channel and tamper with the data or conduct replay attacks. The snapshot below shows the key and certificate used for HTTPS connections.

     

     

    The private key above, present in the firmware, was tested against HTTPS traffic captured between the browser and the device, and the traffic was successfully decrypted using the key obtained from the firmware. As the private keys are hardcoded in the firmware, the same keys are used across a range of devices.

     

    3. CVE-2015-3942: Improper Sanitization of user input (A3: Cross-Site Scripting XSS)

    CVSS Score: AV:N/AC:L/Au:N/C:N/I:P/A:N (how to read)

     

    Multiple XSS vulnerabilities were discovered in the web server present on the device. These vulnerabilities exist due to improper sanitization of user input and can be leveraged by an unauthenticated attacker to carry out cross-site scripting (XSS) attacks. As the web server and its files are the same across all the devices, all Magnum 6k and Magnum 10k devices are vulnerable.

     

    Proof-of-concept:

    Below is a demonstration of the vulnerability, showing that the device is vulnerable to XSS.

    http://<server_url>/gc/service.php?a="><img src=x onerror=alert('Xssed')><"

    http://<server_url>/"><img src=x onerror=alert('Xssed')><"

    http://<server_url>/gc/flash"><img src=x onerror=alert('Xssed')><"
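The root cause is that user-supplied input is reflected into the page without output encoding. As an illustration of the fix (not the device's actual server code, which we don't have), HTML-escaping the reflected value neutralizes the payload:

```python
# Output encoding defeats reflected XSS: once escaped, the payload can no
# longer break out of its HTML context.
import html

payload = '"><img src=x onerror=alert(\'Xssed\')><"'
escaped = html.escape(payload, quote=True)
print(escaped)
# &quot;&gt;&lt;img src=x onerror=alert(&#x27;Xssed&#x27;)&gt;&lt;&quot;
```

Encoding must be applied at output time, in the context where the value is rendered; input filtering alone is easy to bypass.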

     

    Below is a snapshot of the vulnerability.

     

     

     

    Private Keys and Certificates

    Below are the private keys and certificates that were extracted from the firmware. The first certificate and RSA key pair is used for SSH connections. The RSA key is protected with the passphrase “magnum6k” and can be verified by loading it in PuTTY Key Generator.

     

    -----BEGIN CERTIFICATE-----

    MIIDcTCCAtqgAwIBAgIBADANBgkqhkiG9w0BAQQFADCBiDELMAkGA1UEBhMCVVMx

    EzARBgNVBAgTCkNhbGlmb3JuaWExEDAOBgNVBAcTB0ZyZW1vbnQxGDAWBgNVBAoT

    D0dhcnJldHRDb20gSW5jLjERMA8GA1UEAxMIbWFnbnVtNmsxJTAjBgkqhkiG9w0B

    CQEWFnN1cHBvcnRAZ2FycmV0dGNvbS5jb20wHhcNMDUwNDI5MDEwOTM1WhcNMDYw

    NDI5MDEwOTM1WjCBiDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWEx

    EDAOBgNVBAcTB0ZyZW1vbnQxGDAWBgNVBAoTD0dhcnJldHRDb20gSW5jLjERMA8G

    A1UEAxMIbWFnbnVtNmsxJTAjBgkqhkiG9w0BCQEWFnN1cHBvcnRAZ2FycmV0dGNv

    bS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAKAjwUUr3aS5O+iuLz7Q

    R6F+pHp+HyUPoWUvxBnKcr2/12FpN25OkBEubTWJ+wwDz9bX/V6Q9RsL6PWdY1OS

    x6KBaN1274r71fQf4wzE0sZq/ThkXon5M1C1mRFGjFBf731A1DDSgYHXlY/Ekn0R

    b4mhUCBmWORhdC7hNyyHTM9XAgMBAAGjgegwgeUwHQYDVR0OBBYEFIcT5dxybegN

    e8G8SnqOXcZYNBs7MIG1BgNVHSMEga0wgaqAFIcT5dxybegNe8G8SnqOXcZYNBs7

    oYGOpIGLMIGIMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEQMA4G

    A1UEBxMHRnJlbW9udDEYMBYGA1UEChMPR2FycmV0dENvbSBJbmMuMREwDwYDVQQD

    EwhtYWdudW02azElMCMGCSqGSIb3DQEJARYWc3VwcG9ydEBnYXJyZXR0Y29tLmNv

    bYIBADAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBAUAA4GBAFzVduVeU2+3BjPt

    a3sAMTJJyGyb+5/HowxQLhlkpWy//CdrD9UTwPIs4qXOjB0ETJMNPCpxoJj5o6od

    x7K3fpPYb0Un+5HLovXvkaWb+Hg1fcfoFucCjmiP2FrkaYdcCYvjwXkVwDADNepu

    Lxq15PtoLcchtQXEob0jJMSDld+7

    -----END CERTIFICATE-----

     

    -----BEGIN RSA PRIVATE KEY-----

    Proc-Type: 4,ENCRYPTED

    DEK-Info: DES-EDE3-CBC,58D326A37D2A5F52

    lfVfiGyCCCg/U1g6U5Exa7E5KpqqyE1ihCbvvPlb9BRpwa0b7ur+YUKWFrnP+/Hc

    qcxa1vTdQkbofkjs2L8FYsnvzq7osXzXi3FhIcdGKgoLR3p5jg2OdwZagj1fBf5Q

    fQu0oYMwved2fdLEdLaJkjfm/S72Z/ESGOyj1zVIdGZC5ltbD9Qp1lvhkLoez6JB

    Z8B0UQ30EFyTPcJ0Auc+NIHpvuKrwcT84hun0QJEvgcn9Z1u28pu25jmIsC0LLz3

    n8zn5TbQELwZF8llEWr0asSsAFsKO02gdah/w7kdaT91CjFbUEFgUQHqkRs2ALwf

    oZqs1ZLvibtEM2rn9Ldq5ZZ9A5IlkecuhbeLshT2vMjW9raBdKutsGuviYWVvSIg

    CF2A36BZdzeGspJuo6J/7DtAvTDsLp1jiumSldf31xiR6KWmbVgJfka89X72c0Lv

    tNdrAv17qRmwxxug6yEoSo/U7CleBIE8ReN6TS7Hi0ZjBU7/kg5XNqDEI1S4Uasr

    tE/cAdb0zxVXn7sVF8F5bJWP3BvTlDa5cMVwtDGPvV0yiPDiv8FUTuRtlUgLTUZ3

    p3A1MfxaWBPO/dhDGC98HjyRlI2Dy5ykHxZRC44EEEn7E9W8b1K+vh1Hu+Ecu2+3

    SCJ0xQZqzl5w4S934vG/M9tqzsnOkyl695nT0HICYeu1fLcN3UvaOVdRF8WQ63PT

    Z4Jsoka+z6xTmX9LUGfd/bKYm+bTMAbog1eaiuP8mk0kaQFDx3NmZLSLleXSnS5I

    Bxdgilak6Gd9sredChTzdGgG0988z+ClXy18CycBANL8U2jVu+j9iQ==

    -----END RSA PRIVATE KEY-----

     

    The key and certificate below are used for HTTPS connections:

     

    -----BEGIN RSA PRIVATE KEY-----
    MIICXQIBAAKBgQC+NtXC4dGI5wf1h8p7hzSiYNlbsdQp68Aih4zFPQSBmcvAh0Cu
    PeATnRiSG4w56Fo6PaDlmCkAg24l01qScyfJDe6t/3spmeZbWzU1k6OtndvNtqPl
    2HfO7wiOthJS/oNq9r2tTkqX+VeZubpvJWZSC7kI6ohHotgRmYKPxfsLOQIDAQAB
    AoGBALIXRSyhoT08kgcgjEP74xvk8Z0YcjyNreamYvaImp99D3fDKpv48sNqYobp
    o/DTyyacbPiJ7lm8tHRV3ocfqi7EOERq4YXCyDFenlWvBuByyUAak6xG6K6zIhIG
    r0xKXosAWiboWYemzDeS81EYQVfVdRTbo/CI7pmbziAj0uPBAkEA9uyqQ2BU5EnG
    b5ddKM5Uk2vmvdK/We7lnlcXl214LBcOcFHvbf+h1VfG/2Lek73xCwHdcj5KcnEu
    VbM1Ix0RlwJBAMU0k+jOD8S03Nox9CGNY79usEjn0Wfzj2pj4Eltb9em0K5RaRax
    9lbqiRonnmfLBg5Ymot6M3kIjekPQQ+6w68CQE0TeN5JLpaH9NoWbGz1Yu8VilQM
    edBvwtsXInURJabVl5s16D/0wKZgnOxRB1skuh4OefpUOVbZv3Xe16JbS4cCQH1K
    qGaS9QW++0pNzpO6pxMrGilXz33CCu5HQmqkcxiKTa9S3fejXaVfIXhSj5vWK6TV
    umq/WxCc1LysCmQZ/tUCQQDexekhrldyve81TuOG0G4tiJjIV/7GEQYsRHPjPqRj
    WULhzmMEdnGnReH4ZY+eiqs94rxwt1FPkkff1/izsGRZ
    -----END RSA PRIVATE KEY-----

     

    -----BEGIN CERTIFICATE-----
    MIICqTCCAhICAQAwDQYJKoZIhvcNAQEEBQAwgZwxCzAJBgNVBAYTAlVTMQswCQYD
    VQQIEwJDQTEQMA4GA1UEBxMHRnJlbW9udDEYMBYGA1UEChMPR2FycmV0dGNvbSBJ
    bmMuMRQwEgYDVQQLEwtFbmdpbmVlcmluZzEXMBUGA1UEAxMOU29mdHdhcmUgR3Jv
    dXAxJTAjBgkqhkiG9w0BCQEWFnN1cHBvcnRAZ2FycmV0dGNvbS5jb20wHhcNMDYx
    MjExMjAzMzA5WhcNMTYxMjA4MjAzMzA5WjCBnDELMAkGA1UEBhMCVVMxCzAJBgNV
    BAgTAkNBMRAwDgYDVQQHEwdGcmVtb250MRgwFgYDVQQKEw9HYXJyZXR0Y29tIElu
    Yy4xFDASBgNVBAsTC0VuZ2luZWVyaW5nMRcwFQYDVQQDEw5Tb2Z0d2FyZSBHcm91
    cDElMCMGCSqGSIb3DQEJARYWc3VwcG9ydEBnYXJyZXR0Y29tLmNvbTCBnzANBgkq
    hkiG9w0BAQEFAAOBjQAwgYkCgYEAvjbVwuHRiOcH9YfKe4c0omDZW7HUKevAIoeM
    xT0EgZnLwIdArj3gE50YkhuMOehaOj2g5ZgpAINuJdNaknMnyQ3urf97KZnmW1s1
    NZOjrZ3bzbaj5dh3zu8IjrYSUv6Dava9rU5Kl/lXmbm6byVmUgu5COqIR6LYEZmC
    j8X7CzkCAwEAATANBgkqhkiG9w0BAQQFAAOBgQCucRNrjIRa+F4cfNoh10fTESzR
    cJw0Uh80JxAued1x1WM5J+RWx8jECSx6xu28QKoqRa5ru9/pngu0TS3eKRmscKSr
    0+ILC6H9gyO2lOKRfhKQDH7Xee57QD141cWkQd4wnKcpSJqMEu305WSQdlF8ma8w
    k6yX4cP+nUyw+/CQ7g==
    -----END CERTIFICATE-----

    As part of my job working on SSL Labs, I spend a lot of time helping others improve their TLS security, both directly and indirectly, by developing tools and writing documentation. Over time, I started to notice that deploying TLS securely is getting more complicated, rather than less. One possibility is that, with so much attention on TLS and so many potential issues to consider, we're losing sight of what's really important.

     

    That's why I would like to introduce a TLS Maturity Model, a conceptual deployment model that describes a journey toward robust TLS security. The model has five maturity levels.

     

    At level 1, there is chaos. Because you don't have any policies or rules related to TLS, you're leaving your security to chance (e.g., vendor defaults), individuals, and generally ad-hoc efforts. As a result, you don't know what you have or how secure it is. Even if your existing sites have good security, you can't guarantee that your new projects will do equally well. Everyone starts at this level.

     

    Level 2, configuration, concerns itself only with the security of the TLS protocol itself, ignoring the higher-level protocols. This is the level we spend the most time talking about, but it's usually the easiest one to achieve. With modern systems, it's largely a matter of server reconfiguration. Older systems might require an upgrade or, as a last resort, a more secure proxy installed in front of them.
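To make "server reconfiguration" concrete, here is a minimal sketch in Python's standard ssl module of a server-side context that refuses the older protocol versions. This is illustrative only; in a real deployment the equivalent setting lives in the web server's own protocol and cipher directives, not in application code.

```python
import ssl

# Build a server-side TLS context that refuses anything older than TLS 1.2.
# The same policy is normally expressed in the web server's configuration
# (protocol and cipher suite directives) rather than in application code.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1
```

Clients that can only speak TLS 1.0 or 1.1 will fail the handshake against such a context, which is exactly the trade-off level 2 asks you to make deliberately rather than by accident.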

     

    Level 3, application security, is about securing the higher-level application protocols, avoiding issues that might otherwise compromise the encryption. For web sites, this level requires that plaintext and encrypted areas are not mixed in the same application, or within the same page; in other words, the entire application surface must be encrypted. Also, all application cookies must be marked secure and checked for integrity as they arrive, to defend against cookie injection attacks.
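One common pattern for the cookie-integrity requirement, sketched here in Python with a hypothetical server-side secret: the server appends an HMAC tag to each cookie value and rejects any incoming cookie whose tag does not verify.

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key; never sent to the client

def sign_cookie(value: str) -> str:
    """Append an HMAC-SHA256 tag so tampering or injection is detectable."""
    tag = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}.{tag}"

def verify_cookie(cookie: str) -> bool:
    """Accept the cookie only if its tag matches (constant-time compare)."""
    value, _, tag = cookie.rpartition(".")
    expected = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

An injected or modified cookie fails verification because the attacker cannot compute a valid tag without the secret. The Secure and HttpOnly attributes should still be set on the cookie itself when it is issued.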

     

    Level 4, commitment, is about a long-term commitment to encryption. For web sites, you achieve this level by activating HTTP Strict Transport Security (HSTS), a relatively new standard supported by modern browsers (IE support coming in Windows 10). HSTS enforces a stricter TLS security model and, as a result, defeats SSL stripping attacks as well as attacks that rely on users clicking through certificate warnings.
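Mechanically, enabling HSTS is just a matter of sending one response header over HTTPS. The sketch below shows the header with a one-year max-age and the optional includeSubDomains directive; the values are illustrative, not a recommendation for any particular site.

```python
# The HSTS response header (RFC 6797). max-age is in seconds (one year here);
# includeSubDomains extends the policy to all subdomains, so deploy it only
# once every subdomain is reachable over HTTPS.
hsts_header = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains")
```

Browsers that have seen this header refuse plaintext connections to the site and treat certificate errors as fatal, which is what defeats stripping and click-through attacks.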

     

    Finally, at level 5, robust security, you carve out your own private sliver of the PKI to insulate yourself from its biggest weakness: the fact that any CA can issue a certificate for any web site without the owner's permission. You do this by deploying public key pinning. In one approach, you restrict which CAs can issue certificates for your web sites; in a more secure variant, you effectively approve each certificate individually.
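In HTTP Public Key Pinning, the browser mechanism for this, a pin is the base64-encoded SHA-256 hash of a certificate's SubjectPublicKeyInfo structure. A minimal sketch of computing one, using placeholder bytes in place of a real SPKI:

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Placeholder bytes for illustration; a real pin is computed over the actual
# SPKI extracted from the certificate (e.g., with the openssl tooling).
pin = spki_pin(b"placeholder-spki-bytes")
header_value = f'pin-sha256="{pin}"; max-age=5184000'
```

Depending on which key you hash, the same mechanism expresses both variants described above: pin a CA's key to restrict issuance to that CA, or pin the leaf keys to approve each certificate individually.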

     

    The conceptual simplicity of the TLS Maturity Model makes it easy to understand where we are and what we need to do to improve, so we can focus our attention on what really matters. Although level 5 provides the best security, it involves the most work and addresses risks that don't exist for most sites. Level 4 is arguably the minimum level that can be called secure, and the level that most organisations should be aiming for.

    Earlier this week we released SSL Labs 1.17.10, whose main purpose was to increase the penalty when RC4 is used with modern protocols (i.e., TLS 1.1 and TLS 1.2). We had announced this change some time ago and put it in place on May 20. The same release introduced another change, increasing the penalty for servers that don't support TLS 1.2 from B to C. This second change is proving somewhat controversial, with many asking us to better explain why we made it.

     

    What initially prompted us to think about changing the grading for the lack of TLS 1.2 was grade harmonisation -- ensuring that a wide range of servers all get grades that make sense, so that better-configured servers get better grades. But that doesn't change the underlying reality: TLS 1.0 is an obsolete security protocol. TLS 1.0 came out in 1999, followed by TLS 1.1 in 2003 and TLS 1.2 in 2008. These new protocol versions were released for a reason: to address security issues in the earlier versions. Yet, despite being obsolete, TLS 1.0 remains the best-supported protocol version on many servers. The situation is not all bad, mind you -- we know from SSL Pulse that about 60% of servers already support TLS 1.2. Client-side, the situation is probably better, because modern browsers have supported TLS 1.2 since 2013. You could say that, overall, server configuration is the weaker link.

     

    In that light, we feel that increasing the penalty for the lack of TLS 1.2 is the natural next step in the deprecation of TLS 1.0. In fact, SSL Labs is probably late in doing so. Just last month, the PCI Security Standards Council deprecated SSL 3 and TLS 1.0 for commercial transactions: no new systems are allowed to use TLS 1.0 for credit card processing, and existing systems must immediately begin to transition to better protocols. In comparison, the SSL Labs grading change is only a mild nudge in the right direction. And while some people are not happy that we're pushing for TLS 1.2, others complain that we're not doing enough. For example, the Chrome browser has been warning about the lack of TLS 1.2 and authenticated (GCM) suites for some time now. Clearly, it's difficult to make everyone happy.

     

    The bottom line is that TLS 1.0 is insecure and we must migrate away from it. In 2011 came the BEAST attack, and, in 2013, the Lucky 13 attack. TLS 1.0 remains vulnerable to these problems, but TLS 1.2 (with authenticated suites) isn't. These attacks are serious enough that some organisations continue to use RC4 in combination with TLS 1.0 just to be sure they are mitigated. We understand that many organisations face significant challenges adding support for TLS 1.2, but that is unavoidable. In computer technology, and in security in particular, it is often necessary to keep running just to stay in place.

     

    We did get one thing wrong, however -- we didn't communicate our grading changes in advance. It was not our intention to surprise anyone; in fact, we would much prefer smoother transitions. To that end, in the future we'll announce all grading changes with at least one month's notice, and hopefully more for the most significant changes.


    Update June 3, 2015: Notification of SSL Labs grading changes (including signups to get notifications by email) is now available at SSL Labs Notifications.