The series title Welcome to 2016 is a bit of a reminder that Moore’s law and the old marketing saw “give the people what they want” are alive and well. There is nothing magic about the cloud: it is a data center that is not under your control, plain and simple. The same argument could be made if the organization outsourced I.T. to a third-party service provider. Some might argue that there are controls in place, but the third party’s first obligation is to its shareholders, so it is unrealistic to expect any level of control beyond what is explicitly contracted or assured.
Fortunately, thanks to market pressure, a multitude of software and hardware advances, and governance bodies like the Cloud Security Alliance and ISACA, today’s cloud is not the cloud of ancient times like 2011. In addition to the standards there are plenty of recent books out there, and as with so much guidance, the benefit needs to be analyzed within the context of the risk that needs to be mitigated.
This article examines the use of two-factor authentication, all too often the source of debate between customer/user convenience and robust security controls. Fortunately, almost everything cloud can be accessed from one’s living room, 24x7, with little more than a few bucks still left on your credit card post Christmas shopping: an independent researcher’s paradise. Read on for real-life testing and proof of concept rather than a theoretical essay.
“Just because you’re paranoid doesn’t mean they are not after you …”
The screen shot below was captured less than 30 minutes after creating a new cloud instance on an obscure (read: really cheap) infrastructure-as-a-service provider. Three servers in three different data centers were created; there is no marketing, no DNS entries and no reason for anyone to be connecting to any of these devices.
The somewhat anonymous deployment did not stop three threat actors (assumed based on the three different IP addresses) from probing this sacrificial test server. In the first and third instances the inbound connection is rejected by the test server; these are the reset flags circled in red. In the second instance TCP port 22 is an active service and the test server responds with a SYN-ACK. Like a mischievous tween playing phone games, the threat actor promptly hangs up ([R] flag underlined in yellow) rather than engaging.
On its own, port scanning has next to no impact on modern computing equipment, and viewed in isolation the intentions of the threat actor are not knowable, but the possibility always exists that this activity is an advance indicator of an attack against a service the threat actor can reach. While it doesn’t always happen on cue, this time the demo gods smiled. The scan of port 22 that occurred at 15:28:18 was followed less than ten minutes later by nine attempts to log in via SSH, as shown in the filtered log output below.
root@west:/var/log# grep 18.104.22.168 auth.log | grep -i password
Dec 19 15:37:32 west sshd: Failed password for root from 22.214.171.124 port 34902 ssh2
… snip …
Dec 19 15:37:59 west sshd: Failed password for invalid user admin from 126.96.36.199 port 56588 ssh2
Dec 19 15:38:03 west sshd: Failed password for root from 188.8.131.52 port 59312 ssh2
Nine attempts later the attacker gives up. Based on the user names attempted, there is no indication this threat actor has any insight into the environment, i.e. the attack is likely opportunistic rather than targeted, and the single-factor root password is strong enough to resist such a weak attempt. We can’t say this scenario is risk free: in the event the threat actor successfully guesses the password, they can install new software, access any data on the server, use the server as an attack point to reach deeper into the internal network, or use the server as a tool to attack other systems on the internet.
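A quick way to spot this pattern without a packet capture is to summarize failed logins per source address. A minimal sketch, assuming a Debian-style auth.log (the log path and message format vary by distribution):

```shell
#!/bin/sh
# Count failed SSH password attempts per source IP.
# Pass an alternate log path as $1; defaults to the Debian location.
LOG="${1:-/var/log/auth.log}"
grep "Failed password" "$LOG" \
  | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1) }' \
  | sort | uniq -c | sort -rn
```

The `sort -rn` at the end puts the noisiest source first; nine failures from a single address inside a minute is the brute-force signature seen in the log above.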
Although the lack of anything meaningful on this test server could be the basis for an argument that there is no impact and therefore no risk, accepting the cloud service provider’s acceptable use policy assigns responsibility for taking reasonable security measures, and therefore the liability in the event a third party is harmed and the source of the attack is identified as this little test instance.
The value of complex passwords and non-standard usernames is obvious to anyone reading this far; it’s a well-proven approach to reducing the likelihood of a successful attack. What we miss as infosec professionals is how the effectiveness of both of these valid controls decays over time, especially when the conventional advice about generic accounts and password reuse is not, or can’t be, followed for any number of reasons.
When a credential is shared by different people in a business unit it becomes more widely known. Over the course of months and years the username and password become widely socialized and exposed to people who should not have them or no longer need them. The main mechanism of the credential control’s effectiveness, its secrecy, diminishes over time.
This detail is often obscured because our inner geek typically focuses on the nature of the network connection used to transport those credentials. Sure, TLS 1.1 is stronger than SSL v2 if crypto is the concern, but any authentication protocol that relies solely on the user’s or program’s knowledge of a credential should not be considered robust, because it is easily defeated by credential disclosure, and the fact that the credentials have been exposed to an unauthorized party is often not known.
One way around the security-by-obscurity problem is to introduce a second element to the authentication scenario. In the screen shot below, the example annotated with yellow arrows shows an attempted file transfer in which the threat actor has already gained access to the victim system and located an SSH key, but is unable to complete the attack because the passphrase for the SSH key is not known.
In the second example, annotated in green, both the passphrase for the key and access to the SSH key are in place, so the transfer operation completes successfully.
An SSH key with a passphrase meets the requirements for true two-factor authentication, and it has the added benefit of being technically enforceable through the SSH daemon config file, stopping opportunistic brute-force password login attacks 100% of the time.
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no
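For completeness, a passphrase-protected key pair can be generated and installed along these lines. This is a sketch: the key type, file names and hostname are illustrative rather than taken from the test environment, and in practice you would omit `-N` and let ssh-keygen prompt so the passphrase never lands in shell history.

```shell
# Generate an ed25519 key pair protected by a passphrase (illustrative values).
ssh-keygen -t ed25519 -N 'use-a-strong-passphrase' -f ~/.ssh/id_ed25519
# Copy the public key to the server before disabling password logins:
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@west.example.com
```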
OpenSSH wins points for being free and providing two-factor authentication that is tough enough to last on the internet. Since many people could have a copy of the key, shared accounts can still be secured with two factors, and the key can be revoked when no longer needed or known to be compromised, so what’s not to love? Two things: complexity and confidentiality. A quick Google search for SSH keys will return any number of people struggling with implementation, and the same issue of unauthorized access that existed with long-lived, well-socialized account credentials remains.
While SSH may not be everyone’s preferred method of interacting with an application or operating system, it’s quick to model, and the principles remain the same when considering a web application that prompts for user credentials shared by a number of people in a business unit. (I have done risk reviews on two such apps used by major commercial ventures in the last six weeks.)
A reasonably well-coded application will separate day-to-day use and data access from administrative functions like adding users, enabling new features and so forth. Using the application as an authenticated, non-privileged user should not result in a negative impact to the organization, i.e. the risk is tolerable. Authorization controls within the application and/or operating system are another mechanism to mitigate the risk presented by a threat actor able to successfully circumvent both the network restrictions and initial authentication as a non-privileged user.
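The same separation can be expressed at the operating system level. As a sketch (the account, command and file names are illustrative, not from the test environment), a sudoers fragment can grant a shared non-privileged account exactly one administrative action and nothing more:

```
# /etc/sudoers.d/appteam -- edit with: visudo -f /etc/sudoers.d/appteam
# The shared appteam account may restart the application service, only.
appteam ALL=(root) /usr/bin/systemctl restart myapp.service
```

Any other command run through sudo by that account is refused and logged, which keeps the blast radius of a compromised shared credential small.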
In the screen shot below we are simulating the scenario where one segmentation boundary has been fully compromised, i.e. the SSH key and the passphrase have been acquired through some means, and the threat actor is now attempting to gain access to a more valuable asset.
(The example below involves the shadow file on the server but it could just as easily be the super admin account on a database or decryption keys needed for accessing confidential data.)
The yellow-annotated actions illustrate the threat actor attempting to circumvent the segmentation boundary around privileged access. On the test system the sudo program allows authorization to be enforced through knowledge of a credential token. If the token needed for privilege escalation is stored neither on the system under attack nor on the one initially compromised by the threat actor, it becomes a robust defense against attack tools that monitor computer memory to defeat many operating system protection mechanisms.
Since RAM scrapers and tools like WCE and mimikatz are readily available to a very large threat community, taking the extra measures needed to protect privilege elevation is more pragmatic than paranoid. Fortunately, top-tier cloud providers include these capabilities as line items during configuration, and SaaS providers like Duo Security enable customized two-factor solutions for custom applications and specific workflow requirements.
Returning to the test Linux server offering shell services to users, the business requirement could very well be simple access for non-privileged users, i.e. all users can initially log in with single-factor authentication, password complexity recommended. The solution would then rely on the operating system to enforce privilege separation and protect data confidentiality through file system access controls. The only challenge remaining is protecting the system admin accounts from malware or the prying eyes of non-privileged users.
It is not documented on the Duo Security web site, but modifying the PAM configuration file for sudo met the requirements remarkably well. The Duo folks have compiled packages for most of the major Linux distros and the step-by-step instructions were easy to follow. Rather than adjusting the PAM config file for SSH, the sudo PAM file was updated as shown below.
root@east:/etc/pam.d# cat sudo
auth required pam_env.so readenv=1 user_readenv=0
auth required pam_env.so readenv=1 envfile=/etc/default/locale user_readenv=0
# Added for duo security
auth required /lib64/security/pam_duo.so
The screen shot below shows the prompt for Duo Security followed by failed attempts.
In addition to the credential being stored completely outside of the system being attacked, failed attempts are quite easy to detect, both on the system administrator’s phone and in the system logs. Leveraging detective controls will be the subject of another article; for now the screen shot of the failed privilege escalation events below will need to suffice.
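In the meantime, the same filtered-log approach used for the SSH attempts works for sudo. A sketch assuming Debian-style logging (the path and message format vary by distribution):

```shell
# Surface failed privilege-escalation attempts recorded by sudo's PAM stack.
grep -E "sudo.*(authentication failure|incorrect password)" /var/log/auth.log
```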