Sunday, August 14, 2016

Evaluating the Value of the (@)Purge Rule

“Only sometimes when we pick and choose among the rules we discover later that we have set aside something precious in the process.”  
― Helen Simonson, Major Pettigrew's Last Stand

Background and Problem Statement:

I was recently asked the following question: "Is there any value in supporting the character purge rule in Hashcat?" The purge rule '@x' removes all instances of a specific character from a password guess. So for example, the rule '@s' would turn 'password' into 'paword'. The full thread can be found on the Hashcat forum here. The reason behind this inquiry was that while the old version of Hashcat implemented the character purge rule, the GPU versions of Hashcat and Hashcat 3.0 dropped support for it. Since then, At0m has added the rule back into the newest build of Hashcat, which makes this question much less pressing. That being said, similar questions pop up all the time and I felt it was worth looking into, if only to talk about the process of investigating problems like this.

Side note: as evidence that any change will break someone's workflow, while researching this topic I did find one user who stored passphrase dictionaries with the spaces left intact. They would then use the purge rule to remove the spaces during a cracking session so they wouldn't have to save a second copy of their passphrase wordlist without spaces. For that reason alone I think there is some value in the purge rule.

The Purge Rule Explained:

Hashcat Rule Syntax: @X where (X) is the character you want to purge from the password guess
Example Rule: @s
Example Input: password
Example Output: paword
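
If you want to see the rule in action without setting up a full cracking session, Hashcat's --stdout mode will print the mangled guesses directly. Here is a quick sketch, (the file names are just ones I made up for this example):

# preview what '@s' does to a single word, no hashes involved
echo 'password' > single_word.txt
echo '@s' > purge_test.rule
./hashcat --stdout -r purge_test.rule single_word.txt
# prints: paword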


Hypothesis:

My gut feeling is that the purge rule will have limited impact on a cracking session. I base that on a rule of thumb that mangling rules work best if they mimic the thought process people use when creating passwords. For example, people often start with a base word and then append digits to it, replace letters with L33t replacements, etc. Therefore rules that mimic these behaviors tend to be more successful. I just don't see many people removing character classes from their password.

Now if you are a Linux fan, you'll realize Linux developers *love* removing characters from commands. Do you want to change your password? Well "passwd" is the command for you! Maybe Linux developers use the same strategy for their passwords? So I certainly could be wrong. That being said, the whole idea of a hypothesis is to go out on a limb and make a prediction on how an existing model will react so here I go:

My hypothesis is that the purge rule will crack fewer than 1 thousand passwords of a 1 million password dataset, (0.1%). Of the passwords that are cracked, the vast majority (95%) will fall due to weaknesses of the input dictionary rather than because the rule models how the user created the password. For example, 'paword' might be a new Pokemon type that didn't show up in the input dictionary, rather than being created by a user taking the word 'password' and then removing the S's.

Short Summary of Results:

The purge ruleset cracked 164 passwords (0.0164% of the test set). This was slightly better than just using random rules, which in a test run cracked 23 passwords, but not by much. Supporting this rule is unlikely to help your cracking sessions to any noticeable degree.

Experimental Setup:

Test Dataset: 1 million passwords from the newest MySpace leak. These were randomly selected from the full set using the 'gshuf -n 1000000' command.

Reason: Truth be told, the main reason I used the MySpace passwords is that I'm getting tired of using the RockYou dataset for everything. That being said, it's useful for this experiment that all of the passwords in that dataset have been converted to lowercase, since I don't have to worry about combining case mangling rules with the purge rules.
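
For reference, the sampling step itself is just a one-liner along these lines. The source file name is an assumption on my part, (I'm reusing the de-duplicated hash file I built in the MySpace post below):

# pull 1 million random hashes out of the full de-duplicated list
gshuf -n 1000000 myspace_big.hsh > myspace_rand_1m_hc.txt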

Tools Used: Hashcat for the cracking, and John the Ripper for the --show option

Rulesets Used: Hashcat's D3ad0ne mangling rules. I broke them up into two different rulesets, with one containing the purge rules, (along with a few append/prepend '@' rules that snuck in), and the other one containing all the other mangling rules.

Reason: D3ad0ne's mangling rules contain about 34 thousand individual mangling rules. Due to its size and the fact that it is included with Hashcat, it should make a good example of a ruleset that many Hashcat users are likely to incorporate in their cracking sessions. I initially split the base ruleset into two different subsets, with all rules including the '@' going into one ruleset called d3ad0ne_purge and all the other rules going into another one called d3ad0ne_base. I then started manually going through d3ad0ne_purge and placing rules such as "append a @" into d3ad0ne_base, but with over 1k rules in d3ad0ne_purge I quickly decided to remove the results of the append/prepend '@' rules after the fact instead of trying to fully isolate only purge rules in their own ruleset.
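
If you want to recreate the initial split yourself, a couple of grep commands will do it. This is a rough sketch, and the exact name/path of the stock rule file may differ depending on your Hashcat install:

# everything containing '@' goes into the purge set, everything else into the base set
grep '@' rules/d3ad0ne.rule > rules/d3ad0ne_purge.rule
grep -v '@' rules/d3ad0ne.rule > rules/d3ad0ne_base.rule
# quick sanity check on the sizes of the two rulesets
wc -l rules/d3ad0ne_purge.rule rules/d3ad0ne_base.rule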


Dictionary Used: I used dic-0294 as my wordlist. Yes, there are better input dictionaries out there, but this is a common one and strikes a good balance between size and coverage, plus it is public, unlike other dictionaries I have that are based on cracked passwords.


Experimental Results:

Step 1) Run a normal cracking session on the 1 million MySpace passwords using dic-0294 and d3ad0ne_base. This is important since the purge rule will likely crack many passwords that would be cracked normally with other rules. Running a normal cracking session first removes those passwords so we can focus on passwords that would only be cracked by the purge rules. The command I ran is below, (note, I'm editing some of the path information out of the commands for clarity's sake).
./hashcat -D1 -m 100 -a 0 --remove myspace_rand_1m_hc.txt -r rules/d3ad0ne_base.rule dic-0294.txt
A couple of notes about the above command. I'm using a version of Hashcat that I updated on August 10th, 2016. I ran it on a very old MacBook Pro, so the -D1 is telling it to use the CPU only, (since the GPU doesn't have enough memory). The -m 100 is telling it to crack unsalted SHA-1 hashes. The -a 0 is to do a basic dictionary attack. --remove was to remove any cracked hashes so they aren't counted twice in future cracking sessions. myspace_rand_1m_hc.txt is my target set, rules/d3ad0ne_base.rule is my ruleset, and dic-0294.txt is my input dictionary. Below are the results of running this first attack.



With 36% of the passwords cracked by a very vanilla attack on a slow computer, that isn't bad. Next up is running the purge rules.
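
As a quick aside, if you want to double check that number yourself rather than eyeballing the status screen, counting the unique hashes in the potfile works. This is just a sketch and assumes Hashcat's default hashcat.pot location:

# each potfile line is hash:plaintext, so count the unique cracked hashes
cut -d':' -f1 hashcat.pot | sort -u | wc -l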

Step 2) Delete the previous hashcat.pot file. Run a cracking session on the remaining passwords using the purge ruleset. The command I ran was very similar to the one above:
./hashcat -D1 -m 100 -a 0 myspace_rand_1m_hc.txt -r rules/d3ad0ne_purge.rule dic-0294.txt
Note, I took off the --remove option since I didn't care about removing cracked hashes for this. I also deleted the previous .pot file of cracked passwords since I only wanted to store passwords associated with this test. Here is a screenshot I took partway through the cracking session:



As you can see, many of the cracked passwords were due to "insert a @ symbol" rules rather than the purge rule. Here are the final results:



The session managed to crack 405 unique hashes. I then went into the pot file and deleted any password containing the '@' character, so what was left was due to the purge rule. This left me with a list containing 128 unique passwords. A screenshot is shown below:



Now it's hard to tell what people were thinking when they created these passwords, but glancing through the list, it certainly appeared that most of the cracked passwords were simply due to limitations in my input dictionary rather than users purging characters from their passwords. I was actually surprised 'jayden' and 'fatguy' weren't in dic-0294, but after double checking they were in fact missing from it.
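
For anyone who would rather script the '@' clean-up than edit the pot file by hand, something like the following works. It's just a sketch; it assumes Hashcat's default hashcat.pot file, and the output file names are ones I made up:

# drop any cracks involving an '@', then pull out just the plaintexts
grep -v '@' hashcat.pot > purge_only.pot
cut -d':' -f2- purge_only.pot | sort -u > purge_only_plains.txt
wc -l purge_only_plains.txt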

Now, input dictionaries are always going to be limited to a certain extent, so these cracks absolutely count. They only represent unique cracked hashes though. For example, if 20 people used the password 'imabear' it would only be counted once. To figure out how many total accounts would have been cracked, I re-ran the above dictionary through John the Ripper against the myspace_1m_rand list. This was to get the results into John's cracked file (pot) format. For example, here is 'imabear' in john.pot:

{SHA}QiPoQuc4sqqs3J+OulWLt3H09kY=:imabear
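
The re-run itself was nothing fancy, just a straight wordlist attack along these lines, (a sketch using the cracked plaintext file from the clean-up step above):

# feed the cracked plaintexts back in as a wordlist so JtR populates its pot file
./john --format=raw-sha1 --wordlist=purge_only_plains.txt myspace_rand_1m_clean.txt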

The reason I did this is that JtR has a really cool feature, '-show', that will match up cracked passwords with the accounts in the target set. Running the command:

./john -format=raw-sha1 -show myspace_rand_1m_clean.txt

resulted in the following output:


Therefore the purge rules cracked a total of 164 passwords from the test set, or 0.0164% of the total. That's a really small amount. Admittedly every password cracked is nice, but I was still curious if the purge rules were better than just running random mangling rules instead. Luckily, Hashcat supports an option to test that out:
./hashcat -D1 -m 100 -a 0 myspace_rand_1m_hc.txt -g 500 dic-0294.txt
The only difference between the above command and the previous Hashcat commands I ran was that instead of a rules file I specified '-g 500'. What that does is tell Hashcat to generate 500 random rules to run on the input dictionary. I chose that number since there were over a thousand rules in my d3ad0ne_purge ruleset and I guesstimated that about half of them were actual purge rules. When I ran the above I ended up cracking 23 more passwords. That's significantly less than the 164 the purge rules cracked, but in the grand scheme of things it was about the same in effectiveness. Considering some of those random rules were likely duplicates of rules in the d3ad0ne_base ruleset as well, I'd argue that running a purge rule is about equivalent to running a random mangling rule. In fact, if you don't already have purge rules in your mangling set, I'd probably recommend not worrying about it and just running a brute force method like Markov mode to stretch your dictionary instead.

Conclusion:

For once my gut feeling was right and the value of Hashcat's purge rule '@' was limited in the tests that were run. That's not to say that it's not useful. It may help when targeting certain users or aid in keeping the size of your dictionary files on disk manageable. But at the same time, it's not a major feature that other password crackers should rush to mimic. I hope this blog post was informative in helping show different ways to evaluate the effectiveness of a mangling technique. If you have any questions, comments or suggestions please feel free to leave them in the comments section.

Thursday, July 7, 2016

Cracking the MySpace List - First Impressions

Alt Title: An Embarrassment of Riches

Backstory:

Sometime around 2008, a hacker or disgruntled employee managed to break into MySpace and steal all the usernames, e-mails, and passwords from the social networking site. This included information covering more than 360 million accounts. Who knows what else they stole or did, but for the purposes of this post I'll be focusing only on the account info. For excellent coverage of why the dataset appears to be from 2008 let me refer you to the always superb Troy Hunt's blog post on the subject. Side note, most of my information about this leak also comes from Troy's coverage.

This dataset has been floating around the underground crime markets since then, but didn't gain widespread notoriety until May 2016 when an advertisement offering it for sale was posted to the "Real Deal" dark market website. Then on July 1st, 2016, another researcher managed to obtain a copy and posted a public torrent of the entire leak for anyone to download. That's where things stand at this moment.

Unpacking the Dataset:

The first thing that stands out about the dataset is how big it is. When uncompressed, the full dump is 33 Gigs. Now, I've dealt with database dumps of similar size, but they always included e-mails, forum posts, website code, etc. The biggest password dataset I previously had the chance to handle was the RockYou set, which weighed in at 33 million passwords and took up 275 MB of disk. Admittedly that didn't include user info and the passwords were stored as plaintext, (plaintexts are generally shorter than the hex representation of hashes), but still, that's a huge leap in the amount of data to process. Heck, even the full RockYou list is a bit of a pain to process.

Let me put this another way. Here is a simple question, "How many accounts are in the MySpace list?" Normally that's quick and easy. Just run:
wc -l
And then you wait ... and wait ... and wait ... and then Google if there is a faster way to count lines ... and then wait. 16 minutes and 24 seconds later, I found out there were 360,213,049 lines in the file. Does that equal the number of total accounts or is there junk in that file? Well, I don't want to spend the 30+ minutes to run a more complicated parser, so that sounds about right to me ¯\_(ツ)_/¯. Long story short, doing anything with this file takes time. Eventually I plan on moving over to a computer with an SSD and beefier hardware, which should help, but it's something to keep in mind.

That being said, the next question is "What does the data look like?" Well here is a screenshot of the first couple of lines.


As you can see, it takes the form of an incrementing unique ID, e-mail address, username, and then two hashes. All of the fields except the unique ID can be blank. To answer the next question, "Why two hashes?" well ... ¯\_(ツ)_/¯. That's something I plan on looking at but I haven't gotten around to it yet.

Update 7/7/16: Just as I was finalizing this post, I ran across CynoSure Prime's analysis where they managed to crack almost every single hash in this dataset. You can find their blog post here. It turns out the second hash is actually a hash of the original password, (full length, with upper case characters preserved), salted with the user_id. I'm going to leave most of this blog entry unmodified even though how to parse the list can certainly be optimized based on this new info. </Update>

Other random tidbits: The final unique ID is 1005290998. That's significantly higher than the number of accounts in this dataset, so there are large chunks of accounts that were deleted at some point in time. My guess is that when a user deleted their MySpace account it really was deleted, in which case kudos to MySpace for doing that! That's just a guess though. As you would expect, the first accounts were administrative accounts and system process accounts. I know I blocked out the user e-mails, but I will admit I googled the first name. When I found his LinkedIn profile my first reaction was, "Wow, he needs to brag about his accomplishments more than just saying:"
Developed, and launched the initial Myspace community which currently has over 100 million members and was acquired by Fox Corp. for $580 million.
I mean if it was me, I would post that database dump on my resume! Of course further googling led me to the book "Stealing MySpace." I started reading about all the drama that went on, and suddenly there went my evening. Needless to say, the general layout of the dataset looks legit, but one more interesting fact was all those Gmail accounts. MySpace was created in 2003, Gmail opened for invitation access in 2004, and the lead engineer of MySpace left in 2003. So employees were able to update their accounts after they had left the company. Once again kudos to MySpace, but that was surprising.

Password Hash Format:

I initially learned from Troy Hunt's posts that the hashes were unsalted SHA1 with the plaintext lowercased and then truncated to 10 characters long. Therefore the password:
123#ThisIsMyPassword
would be saved as:
 123#thisis
I've heard some people say that this means hackers can just brute force the entire key-space. If I were feeling nit-picky I could argue *technically* that's beyond the reach of commercial setups, as 70^10 is still a really big number (26 lowercase letters + 10 digits + 33 special characters + the space). In reality though, by intelligently searching the key-space, (who uses commas in their password?), a vast majority of unsalted password hashes can be cracked under that format. It's a bit of a moot point though since the real issue is using such a fast unsalted hash. Ah 2008, when it was still acceptable to claim ignorance for using a bad hashing set-up.
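
If you want to put an actual number on it, the math is quick to check:

# keyspace for a 70 character alphabet at length 10
echo '70^10' | bc
# 2824752490000000000, or roughly 2.8 * 10^18 candidate guesses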

Long story short, from my experiments so far I can confirm that it appears all the hashes had their plaintexts lowercased and truncated to 10 characters. Also, yes, serious attackers are very likely to crack almost every password in this list.
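
You can sanity check the storage format yourself with a quick shell pipeline. This is just a sketch using the example password from above; shasum defaults to SHA-1 on a Mac, (use sha1sum on most Linux distros):

# lowercase the plaintext, truncate it to 10 characters, then hash it
printf '123#ThisIsMyPassword' | tr 'A-Z' 'a-z' | cut -c1-10 | tr -d '\n' | shasum
# the resulting digest is the unsalted SHA-1 of '123#thisis'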

Cracking MySpace Passwords With John the Ripper (Take 1):

After glancing around the dataset, the next thing I wanted to do was start cracking. To do this, I needed to extract and format the hashes. My first attempt to do this yielded the following script:
cat Myspace.com.txt | awk -F':' '{if (length($2) > 3) {print "myspace_big_hash1:" substr($4,3); if (length($5) > 3) {print "myspace_big_hash2:" substr($5,3)}}}'  > myspace_clean_big.hsh
To point out a couple of features, I was labeling my data-sets so they are correctly identified in my input file, (I maintain different input files for different data sets, but having that name there has still saved me trouble in the past), and I was removing blank hashes. Also I was stripping the usernames and e-mail addresses since I really didn't want to see passwords associated with names. The problem was the resulting file was huge. I didn't save it, but it was bigger than the original list! I couldn't afford the full naming convention. Therefore I switched to the following script:
cat Myspace.com.txt | awk -F':' '{if (length($2) > 3) {print substr($4,3); if (length($5) > 3) {print substr($5,3)}}}'  > myspace_temp.hsh
 And then to remove duplicates I ran:
sort -u myspace_temp.hsh > myspace_big.hsh
The resulting file was a little under 8 gigs, which was better. Problems occurred though when I tried to load the resulting hash file into JtR. More specifically, after letting it run overnight, JtR still hadn't loaded up the password list and started making guesses. That kind of makes sense; that's way more hashes than normal to parse, and my laptop only had 8 gigs of RAM, so even in an ideal case the whole list probably couldn't be stored in memory. That's not an ideal cracking situation. Being curious, I then decided to try and load it up in Hashcat.

Cracking MySpace Passwords With Hashcat:

Loading up the dump in Hashcat was interesting since it gave me warnings about records in the dataset that weren't parsed correctly.


Regardless, once all was said and done, I ended up with the following error:
ERROR: cuMemAlloc() 2


Doing some quick Googling, I found out the cause was that the GPUs ran out of memory trying to load the hashes. Not surprising but it meant I had to take a different approach if I wanted to crack any hashes from this set.

The easiest way to do this was to split the full list up into smaller chunks and then crack each section by itself. One way to do that is with the split command:
split -l 5000000 myspace_big.hsh myspace_split_
This will break up the list into chunks of 5 million hashes each, following the naming pattern myspace_split_aa, myspace_split_ab, and so on. The downside is that since you have to crack each file individually, the total cracking time increases by close to a factor of 40. I'd recommend playing with the chunk size to maximize the total number of hashes per file that your GPU supports. On the plus side, after all that I can now finally crack passwords!
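
From there a simple loop works through the chunks one at a time. The dictionary and rule file below are just placeholders for whatever attack you actually want to run, and --potfile-path keeps all the cracks from the different chunks in one place:

# crack each 5 million hash chunk in turn, collecting results in a single pot file
for chunk in myspace_split_*; do
    ./hashcat -m 100 -a 0 --potfile-path myspace.pot "$chunk" dic-0294.txt -r rules/best64.rule
done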

Finally Cracking Passwords:

One issue I had was that there were so many hashes being cracked all the time that it was hard to see the status of my session. It's not that my attack was particularly effective, but with a list that large it's hard not to crack something. I belatedly realized I could pause Hashcat, print the status, and then resume. Or, as Jeremi Gosney replied on Twitter, I could have used the following switch with Hashcat:
-o /dev/null 

Closing Thoughts:

I'll admit I'm writing this conclusion with CynoSure Prime's analysis fresh in my mind. While the MySpace list is great for giving me a real world challenge to knock my head against, I'm not sure how useful it'll be from a research perspective. The 66 million salted hashes that were created from the original plaintexts will be nice for new training and testing sets so researchers don't have to keep using RockYou for everything. That being said, MySpace is actually an older list than RockYou. Also, I fully expect there to be a lot of overlap in the passwords between the two datasets. RockYou's entire business model was allowing apps to work across multiple social networking sites in the era before federated logins. RockYou was storing MySpace + LiveJournal + Facebook passwords in the clear so its apps could cross-post across all of them. Statistically I expect MySpace and RockYou to be very similar.

What worries me though, and what makes the MySpace list special, is that it has user information associated with all 360 million accounts plus the password hashes. Just about everyone who did any social networking and is between the ages of 24 and 40 is in this dump. I realize this list has been in the hands of criminals for the last eight years and a lot of the damage has already been done. Still, now that this list is public it enables many more targeted attacks to be carried out by malicious actors from all over the internet. How long before we start seeing the top 100 celebrity passwords posted on sites like Gawker? What about exes using this information against former partners? Previous public password dumps have been much more limited or didn't contain e-mail addresses. I really don't know what will happen with this one. Hopefully I'm being overly paranoid, but it's hard not to think about the downsides associated with this dump being widely distributed. On the plus side, hopefully this is the only mega-breach we'll see with weak password storage. Sites like Google and Facebook are now using very strong hashes, which will limit a lot of the damage if their user information is disclosed in the future.

Wednesday, May 11, 2016

Getting Started With Quantum Computing

“More often than not, the only reason we need experiments is that we're not smart enough.” ― Scott Aaronson
IBM is currently offering free time on one of their quantum computers for interested researchers. Yup, you can program a real life quantum computer right now! In fact, I highly recommend signing up which you can do here. Go ahead and check it out. It took me about 24 hours to get my account approved so you can come back here afterwards to finish reading this post.

What got me interested in this opportunity was that while I have tried to keep up on the field of quantum computing, it basically is magic to me.  I've been building up some general rules in my head about quantum systems, but any sort of question about them that did more than scratch the surface left me shrugging my shoulders. Also it was hard to separate fact from fiction.
Quantum Laws (in Matt's head):
  1. Quantum is a system like everything else. 
  2. A quantum state is a configuration of the system.
  3. A quantum state changes; it naturally wants to evolve, but it can always be undone.
  4. Evolution of a closed system is a unitary transformation on its Hilbert space.
  5. Only the Keeper can block quaffle shots thrown by the opposing team
  6. Do not feed your qubits after midnight
That's why IBM's offer interested me so much. Let's be honest, there's always going to be some magic when it comes to quantum systems, but the opportunity to actually get hands on time programming one would at least turn the whole experience into alchemy if not science for me.

Participating in IBM's Quantum Experience:

After your account is approved you immediately have access to a research portal which IBM calls the "Quantum Experience". It's currently in Beta, but beyond a few bugs in the composer, (which I'll talk about in a bit), it's a very well polished site.

Are you ready to experience some quantums!?
The portal is divided into three tabs, "User Guide", "Composer", and "My Scores". The User Guide is fairly self-explanatory, but it actually impressed me more than the quantum computer itself. I'm still making my way through it, but the authors deserve a pat on the back since it's some of the best technical writing I've seen in a while. What's more, there are multiple links to the quantum simulator with examples for each section, so you can read about a particular operation or theory and then run a simulation of it and check the results. You can then modify the example, re-run it, and in general play around with the concept before going back to where you were in the user's guide.

Don't worry, it starts out with simpler concepts.
The Composer is the programming interface for the quantum computer. It is attached to a quantum simulator as well. In it, you write "Scores", which are basically circuits to run on the quantum computer. IBM calls them scores since, with five qubits to work with, a program looks like sheet music. That's also how the Composer got its name.

An example score in the composer. Yes, this is the default example for Grover's algorithm, but I renamed it since it's all about how you frame the problem.
You can simulate a given quantum score as much as you'd like. When doing so, (or creating a new score), you have the option of choosing an ideal or real layout. The difference is that there are physical limitations of the real quantum computer which directly impact how you design your score.

Red pill or blue pill?
Qubit 2 is the gatekeeper
That's one of the neat things about using this service vs a standard quantum simulator. You can see some of the limitations that current implementations have to deal with. For example, Qubit 2 is the only qubit that can talk to the other qubits, so if you want to perform operations like controlled NOTs, (CNOT), that has a huge impact.

Running it For Real:

That's all fun, but the real reason you are probably using IBM's service is to actually run programs on their quantum computer. I'll admit the "good old days" of punch card mainframes were before my time, but the whole setup is somewhat similar. You are given "Units" which are then used up when you run a program vs simulate it. IBM is currently being very generous with giving them out, and you can request more for free. The typical program uses around 3 units to run. The results are probabilistic, so each run can be made up of multiple "shots", and in the end the average of the results is presented to you. Further display options, such as Bloch spheres where the results are plotted as vectors on a 3D sphere, take even more shots to generate.

I feel a bit guilty about not just running this once, but it's the same price!
To further help you save units, as well as get your results sooner, if your quantum score has previously been run by someone, IBM will give you the option of seeing the saved results instead of re-running it yourself.

Well I guess I wasn't that original...
If you do choose to run your program you are added to the queue. So far, most of my results have been available within a couple of minutes.

Someone else is quantuming ahead of me
Looks like the plain-text is '00'. As I said, it's all about framing the problem.
Remember earlier when I said the runs were probabilistic? You can really see that in the results above. The correct answer was '00', but around 24% of the time a different answer was chosen. 

Issues With the Composer:

I need to submit bug reports, (the bug icon is prominently displayed in the lower right corner of the screen on the portal site), but I've been hesitant to since all the issues I've run into have been very minor. Sometimes the Composer gets a bit wonky, (gates get stuck or aren't saved when you run your simulation), but the problem goes away when I refresh my screen. Also, it would be nice if the transitions between the Composer and the User Guide were quicker, or if I could have them open side by side, (opening multiple browser windows does not work). All in all though, I haven't run into any major issues considering this program is currently a beta release.

Summary:

You are not going to be able to hack any Gibsons with IBM's quantum computer. It's very limited, but that is kind of the point. It shows where the field of quantum computing is right now. IBM is providing an amazing free learning opportunity with this service and if you are at all interested in the future of computing I highly recommend checking it out.