Tuesday, December 28, 2010

Breaking PIN for software tokens

Software tokens usually require a PIN to access token functions. Some vendors avoid implementing any PIN validation within the application itself. Instead, the PIN is validated implicitly by validating the dynamic password or response value generated by the application. This is possible because the PIN is hashed and the result is used as a key to decrypt a master key. The master key is then used by the application to generate the OTP. So when a wrong PIN is provided, the application decrypts an incorrect master key and generates an invalid OTP.

Under some conditions, however, it is possible to recover the correct PIN. First of all, we have to obtain at least one valid OTP. For time-based tokens we have to know the time when that OTP was generated; for counter-based tokens we have to find out the counter value. This can be a problem, so the attack is more effective when we at least know an approximate value of the counter for the corresponding OTP. The counter is stored in the local database of the application, and usually we are able to modify its content.

The last step is to perform a brute-force attack over the PIN space and compare the generated dynamic password with the captured one. After every attempt we have to restore the previous counter value (for counter-based tokens) or set the correct time (for time-based tokens). A minimal sketch of this idea is shown below.
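
Purely as an illustration, here is a minimal sketch of such a brute force against a counter-based token, assuming the scheme described above (the PIN hash used as a key-encryption key, HOTP-style OTP generation). All names are mine, the PIN space is assumed to be 4 digits, and the key wrapping is simplified to XOR so the sketch has no external dependencies; a real token would use a proper cipher such as AES.

import hashlib, hmac, struct

def derive_kek(pin):
    # The PIN is hashed and the digest becomes the key-encryption key
    return hashlib.sha256(pin.encode()).digest()

def unwrap_master_key(encrypted_blob, pin):
    # Simplified XOR "decryption" to keep the sketch dependency-free;
    # a wrong PIN silently yields a wrong master key
    kek = derive_kek(pin)
    return bytes(b ^ kek[i % len(kek)] for i, b in enumerate(encrypted_blob))

def hotp(master_key, counter, digits=6):
    # Standard HOTP truncation (RFC 4226) over the unwrapped master key
    mac = hmac.new(master_key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def brute_force_pin(encrypted_blob, captured_otp, counter, pin_length=4):
    # Try every PIN; only the right one unwraps the key that reproduces the captured OTP
    for candidate in range(10 ** pin_length):
        pin = str(candidate).zfill(pin_length)
        if hotp(unwrap_master_key(encrypted_blob, pin), counter) == captured_otp:
            return pin
    return None

# Demo: wrap a key under PIN "4271", then recover the PIN from one captured OTP
master = b"0123456789abcdef"
blob = bytes(b ^ derive_kek("4271")[i % 32] for i, b in enumerate(master))
print(brute_force_pin(blob, hotp(master, 41), counter=41))   # prints 4271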

The attack is difficult to perform against mobile devices with offline software tokens, because we need both access to the local database on the mobile device and a valid OTP captured as it is sent over the network.

Tuesday, June 15, 2010

Two-Factor Transaction Authentication

To mitigate banking fraud, more and more banks implement new methods of authenticating electronic transactions. A few of these methods rely on the customer's mobile phone. Today the most popular seem to be SMS messages and software challenge-response tokens. Both methods can help in detecting a Man-in-the-Browser attack, and in both cases the customer is able to verify the details of a transaction while making the payment.

A discussion about which method is "better" seems to be quite interesting. One such discussion is taking place on the OWASP-POLAND mailing list. But what does "better" mean? It is tricky to decide which method is "better" because banks must consider several factors: security is important, but the system also has to be customer-friendly and inexpensive.

I am not going to cover all attack vectors and pros & cons here, but in my opinion the method based on SMS messages has more advantages.

The customer receives an SMS message with the transaction details, including the full destination account number, the amount and an authentication code (valid only for that particular transaction). The customer uses the code to confirm the transaction. When the customer submits the authentication code to the online banking system, the bank assumes that the customer has verified the transaction details and accepted the payment.

The second method is not customer-friendly enough and can have security issues. I have to emphasize that I do not know all the solutions available on the market, and it is possible that some of them work in a different way and offer a different level of security.

As I mentioned, the second method is based on software challenge-response tokens. An application (the software token) is installed on the customer's mobile phone. The bank and the customer share the same secret key, which is used to generate and verify the challenge and response codes.

When the customer makes a payment, the bank's application generates the challenge from the destination account number, an algorithm and the secret key. Only a few digits of the destination account number are used to create the challenge; the whole number could be used, but then the challenge would be too long. The customer enters the challenge into the mobile phone and, after accepting it, details about the transaction are displayed on the screen (but only the selected digits of the destination account number) - the challenge, the algorithm and the secret key are used to calculate and display this information. Sometimes the bank requires the customer to enter additional information, such as the amount of the payment, and this value is then used to calculate the response. In the next step, the customer submits the response code to the internet banking application. When everything matches, the bank accepts the transaction. It is a little bit complicated, isn't it? A rough sketch of such a scheme is shown below.
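
The exact algorithms are vendor-specific and are not described here, so purely as an illustration, below is a minimal sketch of one possible HMAC-based construction. The shared key, the function names and the digit layout of the challenge are my assumptions, not a description of any real product.

import hmac, hashlib

SECRET = b"shared-secret-provisioned-at-enrolment"   # assumed shared key

def make_challenge(dest_account, nonce):
    # Hypothetical layout: a 2-digit nonce followed by the last 4 digits of the destination account
    return "%02d%s" % (nonce % 100, dest_account[-4:])

def make_response(challenge, amount, digits=6):
    # Response = truncated HMAC over the challenge and the amount typed in by the customer
    mac = hmac.new(SECRET, (challenge + amount).encode(), hashlib.sha1).hexdigest()
    return str(int(mac, 16) % (10 ** digits)).zfill(digits)

challenge = make_challenge("1010000000000000000008765", nonce=37)
print(challenge)                              # shown by the bank, typed into the token
print(make_response(challenge, "1500.00"))    # typed back into the banking site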

I'm going to write more about the algorithms in the next article; here let's just focus on the numbers.
So why can such a system have more security issues than SMS messages? It is all because of the challenge code length. The limited length of the challenge code does not allow all digits of the destination account number to be displayed. For example, a 6-digit challenge can "hide" only 4 digits of the destination account number.

We can imagine an attack vector where an intruder uses a different account number which contains the same 4 digits as the original destination account number. It is important to mention the structure of the IBAN notation of the destination account number: the IBAN consists of the country code, a 2-digit checksum, a bank code and the account number. In Poland the account number part has 16 digits.

Now, let's try to calculate the probability of the above attack vector. I will use the formula for classical probability P(A) = n(A) / n(S).
The calculations below are performed for an attack where the victim and mule accounts have the same bank code, which is a very popular scenario among fraudsters. The calculations are done for 4, 5 and 6 displayed digits. I also checked whether showing the 2-digit checksum of the destination account number decreases the risk of the attack.

In addition to the formula, I created a Python script to compare the results. The script generates valid account numbers; a minimal sketch of such a generator is shown below.
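
My script is not reproduced here, but the following minimal sketch shows the idea: it computes the ISO 13616 / mod 97-10 check digits for a Polish IBAN and enumerates valid account numbers sharing a given suffix. The sort code 10100000 and the suffix 8765 are taken from the examples below.

def pl_iban(bban):
    # bban: 24 digits = 8-digit sort code + 16-digit account number.
    # ISO 7064 mod 97-10 check digits; "PL00" is rearranged to "252100" (P=25, L=21).
    check = 98 - int(bban + "252100") % 97
    return "PL%02d%s" % (check, bban)

def matching_accounts(sort_code, suffix, count=10):
    # Enumerate valid IBANs whose account number ends with the digits shown to the customer
    results = []
    n = 0
    while len(results) < count:
        account = str(n).zfill(16 - len(suffix)) + suffix
        results.append(pl_iban(sort_code + account))
        n += 1
    return results

for iban in matching_accounts("10100000", "8765"):
    print(iban)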

Variant 1A – the last 4 digits are extracted from the challenge and displayed on the mobile phone. The probability is 1:10000, which means we would need about 10,000 accounts to perform a successful attack. A few valid account numbers:

PL50101000000000000000008765
PL98101000000000000000018765
PL49101000000000000000028765
PL97101000000000000000038765
PL48101000000000000000048765
PL96101000000000000000058765
PL47101000000000000000068765
PL95101000000000000000078765
PL46101000000000000000088765
PL94101000000000000000098765


Variant 1B – the first 2 digits (the checksum) and the last 2 digits are extracted from the challenge and displayed on the mobile phone. It is important to emphasize that the checksum is only used to verify whether an account number is well-formed; it adds nothing to security, so the probability of the attack stays the same, 1:10000. The result from my script is presented below:

PL50101000000000000000008765
PL50101000000000000000018465
PL50101000000000000000028165
PL50101000000000000000037865
PL50101000000000000000047565
PL50101000000000000000057265
PL50101000000000000000066965
PL50101000000000000000076665
PL50101000000000000000086365
PL50101000000000000000096065
...
Variant 2A – the last 5 digits. The probability is 1:100000. A few examples of valid account numbers:

PL94101000000000000000098765
PL89101000000000000000198765
PL84101000000000000000298765
PL79101000000000000000398765
PL74101000000000000000498765
PL69101000000000000000598765
PL64101000000000000000698765
PL59101000000000000000798765
PL54101000000000000000898765
PL49101000000000000000998765
...

Variant 2B – the first 2 and the last 3 digits. The probability is 1:100000, the same as for Variant 2A.

PL50101000000000000000008765
PL50101000000000000000105765
PL50101000000000000000202765
PL50101000000000000000299765
PL50101000000000000000396765
PL50101000000000000000493765
PL50101000000000000000590765
PL50101000000000000000687765
PL50101000000000000000784765
...

Variant 3A – the last 6 digits. The probability is 1:1000000.

PL89101000000000000000198765
PL39101000000000000001198765
PL86101000000000000002198765
PL36101000000000000003198765
PL83101000000000000004198765
...

Variant 3B – the first 2 and the last 4 digits. The probability is 1:1000000.

PL50101000000000000000008765
PL50101000000000000000978765
PL50101000000000000001948765
PL50101000000000000002918765
...

The above examples show that the risk of the attack is quite low, but the situation can change when we consider that the destination account may be in a different bank and/or that the last few digits may be zeros (which is sometimes the case).

Thursday, June 03, 2010

Internet Banking Security

For the last few months I have been engaged in projects related to internet banking security. I analyzed various variants of malicious software known as "banking Trojans" and spent a lot of time playing with fraud detection systems. Currently I am involved in another project related to new methods of transaction authentication. In the meantime I was invited to give workshops for police officers and prosecutors.
I would like to share some of the materials (in Polish) that I prepared for these workshops. I hope they will be useful for some of you. The first document is available at http://forensic.seccure.net

Sunday, May 17, 2009

Anti-forensic techniques in malware

It is no surprise that anti-forensic techniques are used by malware writers to increase the examiner's workload. A few weeks ago I was analyzing malware for a customer (the sample was identified by VirusTotal as the Zbot Trojan) and noticed quite interesting behavior of the malicious code.


The malware modifies its own file attributes: the MAC times of the file containing the malicious code are changed during installation and execution (system startup). This is an example of an anti-forensic method which makes an activity timeline less valuable.


The Trojan uses the GetFileTime() and SetFileTime() APIs exported by kernel32.dll. The MAC times of the malware executable are set to the MAC times of an operating system library, the ntdll.dll file. A sketch of this technique is shown below.
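
For illustration only, here is a minimal Python sketch of the same timestomping trick: it copies the creation, last access and last write times from ntdll.dll onto another file using the kernel32 APIs mentioned above. The target path and the wrapper function are mine, not the Trojan's code.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE

GENERIC_READ = 0x80000000
FILE_WRITE_ATTRIBUTES = 0x0100
FILE_SHARE_READ = 0x01
OPEN_EXISTING = 3

def copy_mac_times(source, target):
    ctime, atime, mtime = wintypes.FILETIME(), wintypes.FILETIME(), wintypes.FILETIME()
    # Read the creation, last-access and last-write times of the source file
    h = kernel32.CreateFileW(source, GENERIC_READ, FILE_SHARE_READ, None, OPEN_EXISTING, 0, None)
    kernel32.GetFileTime(h, ctypes.byref(ctime), ctypes.byref(atime), ctypes.byref(mtime))
    kernel32.CloseHandle(h)
    # Stamp the copied times onto the target file
    h = kernel32.CreateFileW(target, FILE_WRITE_ATTRIBUTES, FILE_SHARE_READ, None, OPEN_EXISTING, 0, None)
    kernel32.SetFileTime(h, ctypes.byref(ctime), ctypes.byref(atime), ctypes.byref(mtime))
    kernel32.CloseHandle(h)

copy_mac_times(r"C:\Windows\System32\ntdll.dll", r"C:\malware_sample.exe")  # hypothetical target path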



We can still use time-related information from the MFT, but the above activity can lead to misinterpretation of a reconstructed activity timeline.

Friday, August 24, 2007


Persistence of documents on file systems

Persistence of data on storage media is always an interesting topic. No one can predict how long a deleted file will remain on a hard disk.
Sometimes during an investigation it is necessary to present in court the history of resident or deleted documents. Today I would like to discuss the behavior of the Microsoft Word application and how it influences the creation of a timeline history of information. Some of the described behaviors are the same for other applications, such as AutoCAD. I will focus on a method of creating a timeline history of documents which were edited by users in the past. In this article I use the NTFS file system, but similar behavior can be observed on FATx file systems. Analyzing document metadata is out of the scope of this article.

1. General behavior of Microsoft Word

Let's say that we already have a file on the local file system. This means that several clusters are allocated to store the content of the file. For a better explanation I will use concrete cluster numbers. Our file has at least one run list, which starts at cluster number 0x15ca0 (89248).
When we add or remove at least one character to/from the doc file and save the changes, a new MFT entry and new clusters are allocated to store the metadata and content of the updated file. The newly allocated FILE entry stores the original name of the file. The FILE entry of the previous version of the file is also updated, because that file is renamed to ~WRL????.tmp. At this point we have 2 allocated FILE entries which point to different clusters. (There are more changes in the FILE entry, but they are not so important at this stage – of course MAC times are always useful ;)).
If we close the file, the MFT entry and clusters of the ~WRL????.tmp file are freed. This means that the operating system can overwrite the content of the entry and clusters at any time. The picture below shows clusters (previously reserved for the first version of our file) which are now marked as unused. As I mentioned above, the first cluster number is 0x15ca0.

The content of the updated file is now stored in new clusters. The first cluster of the first run list is 0x15cef (89327).
When we repeat the above activity (1. open the file, 2. change something and, finally, 3. close it), the situation is the same: a new entry in the MFT is allocated (very often the previously freed MFT entry is allocated once again, so only 2 FILE entries are usually used concurrently – I observed this behavior only for MFT entries, not for clusters). New clusters are also allocated for the updated file and the old clusters are freed. In this case we can still recover previous versions of the file, even after hours or days, but as always there is a risk that the free clusters containing those versions will simply be reallocated by the operating system.
Anyhow, we have to use data carving techniques to find all doc files on the file system (as we know, the header of a doc file is well known ;)). A simple carving sketch is shown below.
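
As a simple illustration of such carving, the sketch below scans a raw image for the OLE2 compound document signature (D0 CF 11 E0 A1 B1 1A E1) at cluster boundaries. The image file name and the 4 KB cluster size are assumptions.

# A minimal carving sketch: find possible doc files by looking for the OLE2
# compound document signature at cluster boundaries of a raw image.
OLE2_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"
CLUSTER_SIZE = 4096          # assumed cluster size of the analyzed volume

def find_doc_headers(image_path):
    offsets = []
    with open(image_path, "rb") as img:
        offset = 0
        while True:
            cluster = img.read(CLUSTER_SIZE)
            if not cluster:
                break
            if cluster.startswith(OLE2_MAGIC):
                offsets.append(offset)
            offset += CLUSTER_SIZE
    return offsets

for off in find_doc_headers("ntfs_volume.dd"):    # hypothetical image name
    print("possible doc file at offset 0x%x (cluster %d)" % (off, off // CLUSTER_SIZE))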

2. The save button (ctrl + S) during editing documents

Every "save process" invoked by the user creates a new file on the file system – the previous one is renamed with the ~WRL prefix. For example, the dokument.doc file is edited for some period of time and the user saves changes to the content 4 times. The result is presented below:

The above statements are true only when the user has changed the content of the document before invoking the "save process" (save process = pressing the save button or pressing CTRL + S).
As we can see, all created files are visible during the "editing session". The content of each file is stored in different allocated run lists. It also means that each file has its own (allocated) FILE entry in the MFT.

The part of the MFT is presented below:



The dokument.doc file is allocated in clusters whose first cluster number is 0x15c3b (89147). ~WRL0003.tmp starts at 0xfaa8 (64168), ~WRL0005.tmp starts at 0x15b06 (88838), ~WRL0656.tmp starts at 0x15bee (89070) and the last one, ~WRL1188.tmp, starts at 0x15ba1 (88993).

When the file is closed by the user, only one file stays visible – dokument.doc. The rest of the documents are deleted automatically; here, deleted means that their entries in the MFT and their clusters are freed.

Such behavior allows us to trace the document history. We can easily recover each file because we can identify the FILE entries in the MFT. We can also create a timeline history by analyzing the MAC times written inside those FILE entries. It is worth mentioning that the above entries and clusters can be reallocated to other users or processes (because they are no longer allocated).


3. Auto-save option


There is one more place from which documents edited by users in the past can be recovered. Microsoft Word has an auto-save feature enabled by default. This feature creates a copy of documents being edited. The default settings are presented below:

When the file is open and its content has been modified, Microsoft Word creates a copy in the "safe location" defined in the "File Locations" tab. The name of the file is "AutoRecovery save of <document name>.asd".
When the user modifies the content of the file, then after some period of time (10 minutes by default) Microsoft Word automatically saves the changes in a new file with the same name and frees the clusters which contain the content of the old file. The FILE entry in the MFT is also freed. In brief, the behavior is similar to the activities described in the first part of this article – General behavior of Microsoft Word. The only difference is that Microsoft Word closes and opens the .asd file in the background. It is worth mentioning that new clusters are allocated each time, so the same file content exists in at least 2 different locations on the file system (the original and the backup location).

Friday, September 08, 2006

Partial file matching in host intrusion prevention systems

A few weeks ago Jesse Kornblum released the ssdeep [1] tool. The main purpose of this tool is to identify similar files by calculating hashes and comparing those hash values with known values computed previously and stored in a database.

The big difference between ssdeep and other well-known tools for generating hash values (like md5sum) is that ssdeep calculates hash values for small chunks of the target file. So if someone modifies only a few bytes of a file, the newly calculated hash value will be similar to the previous one [2].

49152:1xY5ndv7xb2OnhONkVCUDNl3lBB6U0ahgyFFvebjj:1xsdvH9RbMU0HyFFve3j,"gg.exe"
49152:PxY5ndv7xb2OnhONkVCUDNl3lBB6U0ahKvFBvebjj:PxsdvH9RbMU0VvFBve3j,"gg.exe"


The main features of ssdeep (altered document matching and partial file matching) are very helpful during forensic analysis. But you can also use the partial file matching feature in host intrusion prevention systems, for example to block execution of specific programs.

Using "normal" hash values in an IPS to prevent execution was rather useless because such a solution can be easily cheated: after changing even one byte in an executable file, the new hash value is completely different. As you can guess, there are a lot of places in an executable file which can be modified while the exe file still executes without any problems.

Before one byte modification:

C:\ssdeep>md5sum gg.exe
\0323de930ed3e8e0552843db7e16dab7 *C:\\ssdeep\\gg.exe

After one byte modification:

C:\ssdeep>md5sum gg.exe
\bfa5aed4078c2a316786c1e7cb1e4f8e *C:\\ssdeep\\gg.exe

As mentioned above, ssdeep generates hash values for small blocks of the target file, so a few modifications of the target file will not change the whole generated hash, as presented below:

Before modifications:

C:\ssdeep>ssdeep -l gg.exe
ssdeep,1.0--blocksize:hash:hash,filename
49152:PxY5ndv7xb2OnhONkVCUDNl3lBB6U0ahKvFBvebjj:PxsdvH9RbMU0VvFBve3j,"gg.exe"

C:\ssdeep>ssdeep gg.exe > sum.txt
C:\ssdeep>ssdeep -m sum.txt gg.exe
C:\ssdeep\gg.exe matches C:\ssdeep\gg.exe (100)

After a few modifications in the .rsrc section:

C:\ssdeep>ssdeep -l gg.exe
ssdeep,1.0--blocksize:hash:hash,filename
49152:1xY5ndv7xb2OnhONkVCUDNl3lBB6U0ahgyFFvebjj:1xsdvH9RbMU0HyFFve3j,"gg.exe"

C:\ssdeep>ssdeep -m sum.txt -p gg.exe
C:\ssdeep\gg.exe matches C:\ssdeep\gg.exe (91)

Additionally, a percentage value of similarity is generated (the value in brackets).
By setting the threshold to 60 or 70 we can implement a quite effective method of blocking execution of specific files.

This solution could block the execution of a particular program and even new versions of it, because new releases are very often based on the previous ones. A small sketch of such a check is shown below.
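
As an illustration, here is a minimal sketch of such a check built on the Python bindings for ssdeep; the block-list file name and the threshold value are assumptions.

# A minimal sketch of an execution check based on fuzzy-hash similarity.
# Requires the Python ssdeep bindings (pip install ssdeep).
import ssdeep

SIMILARITY_THRESHOLD = 70        # assumed cut-off, as discussed above

def load_blocklist(path):
    # One ssdeep signature per line, e.g. the output of "ssdeep gg.exe > sum.txt";
    # the header line written by ssdeep is skipped
    signatures = []
    with open(path) as f:
        for line in f:
            if line.startswith("ssdeep,") or not line.strip():
                continue
            signatures.append(line.rsplit(",", 1)[0].strip())
    return signatures

def should_block(executable_path, blocklist):
    candidate = ssdeep.hash_from_file(executable_path)
    # Block when the candidate is similar enough to any known binary
    return any(ssdeep.compare(candidate, known) >= SIMILARITY_THRESHOLD
               for known in blocklist)

if should_block("gg.exe", load_blocklist("sum.txt")):
    print("execution blocked: file matches a known binary")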

Useful links:

[1] http://ssdeep.sourceforge.net/

[2] http://www.dfrws.org/2006/proceedings/12-Kornblum.pdf

Tuesday, August 22, 2006

Grsecurity and forensic analysis

A few weeks ago a new version of grsecurity, 2.1.9, was released [1]. It is worth mentioning because some of the new features affect how Linux physical memory forensic analysis is performed.

Firstly, all physical memory pages which are freed are overwritten: a new PaX feature zeroes out page frames as they are freed. This means it will be impossible to recover the content of such pages (for example memory-mapped files) from memory images acquired via /dev/mem or /proc/kcore. Still, we can use methods of analysis based on interpreting internal kernel structures or on trying to detect and recover hidden data [2].

Secondly, swap areas can be encrypted. This means that creating a bit-by-bit copy of the swap partition from a hard disk removed from a compromised machine is useless.

Useful links:
[1] http://www.grsecurity.net/news.php#grsec219
[2] http://forensic.seccure.net/pdf/mburdach_digital_forensics_of_physical_memory.pdf