
Friday, February 19, 2016

Computer Forensic: - Forensic Workflow III & IV – Reporting & Testify as Expert Witnesses


As I have mentioned before, computer forensics is largely about storytelling: presenting facts to support the investigation and the judgement of the case. Reporting is therefore one of the most critical areas, demonstrating an examiner's seniority alongside analytical skill. A computer forensic report is usually litigious in nature and is likely to be distributed to both technical and non-technical parties. Accurately presenting the facts in a human-readable, unbiased way is always the key to writing a good report. Below are some notable requirements and concepts drawn from my experience as a computer forensic examiner.

1.      Reporting purpose

The ultimate objective of reporting is to present facts that address the technical questions at hand, and it must be done in an understandable, human-readable manner. Jargon must be carefully identified, assuming readers have zero computer knowledge. This is especially true if the report will be used in litigation, where readers are likely to be non-technical individuals such as attorneys, the judge, or the jury. Since the report may be the only opportunity to present the facts found in the investigation, it must cover, in detail, everything the examiner would testify to for the trier of fact; misrepresenting any finding may carry serious financial and legal consequences.

2.      Report structure and style

Ideally, every examiner report should be capable of standing on its own, providing information clear and accurate enough for anyone who reads it to reach the same conclusions. Subjective terms such as "many", "significantly", or "highly", which can be interpreted in multiple ways, must be avoided. Industry-accepted references should be used wherever possible to substantiate the statements and content presented. Every page should carry a unique identifier including the report title, the date of issue, and the examiner's basic information or company name for reference purposes. More importantly, the examiner's background should be clearly stated and identified at the beginning of the report. The following sections are typically included in an examiner report:-

·         Cover page
·         Executive summary
·         Examiner profile
·         Introduction / Background of the case
·         Scope of work
·         List of supporting documents
·         Observations and analyses conducted
·         Examiner’s log
·         Chain-of-custody records
·         Photographs / reference materials
·         Disclaimers
·         Signature

3.      Quality assurance

When the issues are complex, mistakes and errors may always be present no matter how careful the examiner is. Peer review is, in my view, one of the most effective and essential ways to resolve them. The review should be conducted by someone at the same level of experience as you or more senior, and inviting at least two peer reviewers is suggested. Peer review is not only a general check for grammatical errors and the phrases and wording used, but also quality assurance on any assumptions and analysis made in the report.

The above gives only a basic idea of what a forensic examiner report looks like, and it brings this Computer Forensic Workflow overview to a close. In future computer forensic posts I will try to share some real-life examples. I hope you found this useful, and I am always happy to discuss if you are interested.

Previous Step

Thursday, February 11, 2016

Computer Forensic: - Forensic Workflow II – Forensic Analysis


Following on from data acquisition, the next step is to conduct the actual forensic analysis. There are numerous analyses available; the most common quick analyses are shared below.

1.      Deletion Analysis

This is one of the most common analyses, required in almost every kind of case, and we can normally achieve it easily through forensic software functionality. Depending on the custodian's OS version, the storage device type, and the forensic software used, the high-level results, such as the number of files recovered, can always differ. Deletion analysis may also be unavailable in some situations, such as on SSDs or Linux systems. It is available in mobile forensics as well, but subject to the level of data access granted to the examiner and the mobile device model.
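One technique behind deleted-file recovery, independent of file-system metadata, is file carving: scanning the raw bytes of an image for known file signatures. The sketch below is a naive, illustrative example (contiguous JPEG carving only; real forensic tools handle fragmentation, validation, and many more formats):

```python
def carve_jpegs(image_path):
    """Scan a raw image for JPEG start/end markers and return the byte
    ranges of candidate recoverable files (naive contiguous carving)."""
    SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
    EOI = b"\xff\xd9"      # JPEG end-of-image marker
    with open(image_path, "rb") as f:
        data = f.read()
    ranges, pos = [], 0
    while True:
        start = data.find(SOI, pos)
        if start == -1:
            break
        end = data.find(EOI, start)
        if end == -1:
            break  # header without footer: incomplete, stop here
        ranges.append((start, end + len(EOI)))
        pos = end + len(EOI)
    return ranges
```

Each returned range can then be written out as a candidate recovered file for review.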

2.      Signature Analysis

One of the most common ways to hide data files from scanning is to alter the file extension, for example disguising an Excel file as a text file by changing the extension from xlsx to txt. This can affect file extraction (if it relies on file type) and the subsequent keyword search in e-Discovery or any other forensic data review process. However, the extension is not the only way to identify a file's type: each file carries a header telling the system what type of file it is. Signature analysis confirms whether the file header (signature) ties to the extension and identifies the file's potential real identity.
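The idea can be sketched in a few lines of Python. The signature table below is a tiny illustrative subset I chose for the example; real forensic suites ship databases of hundreds of signatures:

```python
import os

# A few well-known magic-byte prefixes (illustrative subset only).
KNOWN_SIGNATURES = {
    b"\x50\x4b\x03\x04": {".xlsx", ".docx", ".zip"},  # ZIP-based formats
    b"\xff\xd8\xff": {".jpg", ".jpeg"},               # JPEG
    b"\x25\x50\x44\x46": {".pdf"},                    # "%PDF"
}

def signature_mismatch(path):
    """Return True if the file's header is recognised but does not
    match its extension (a possible disguised file)."""
    with open(path, "rb") as f:
        header = f.read(8)
    ext = os.path.splitext(path)[1].lower()
    for magic, extensions in KNOWN_SIGNATURES.items():
        if header.startswith(magic):
            return ext not in extensions  # known header, wrong extension
    return False  # unknown header: cannot conclude a mismatch
```

An xlsx renamed to txt would be flagged, because its ZIP header survives the rename while the extension lies.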

3.      Hash Analysis

Files may be duplicated for backup purposes in general computer usage, or be known to carry no risk because they are in fact system files. Cryptographic hash functions help identify both. According to Wikipedia, "a cryptographic hash function is a hash function which is considered practically impossible to invert, that is, to recreate the input data from its hash value alone." MD5 is one of the most commonly used hash functions for data integrity verification. If two files have the same hash value, they are confirmed and accepted to be identical in content. For the zero-risk files, we can leverage the National Software Reference Library (NSRL), a project that provides a Reference Data Set (RDS) of files from most known and traceable software applications. By comparing file hashes against each other and against the NSRL list, the review population can be reduced effectively.
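Both filters amount to the same operation: hash each file and drop anything whose digest has been seen before, whether in the case data itself or in a known-file reference set. A minimal Python sketch (the helper names are mine, for illustration):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading it in chunks so large
    files do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def deduplicate(paths, known_hashes=frozenset()):
    """Drop exact duplicates and any file whose hash appears in a known
    reference set (e.g. an extract of the NSRL RDS)."""
    seen, unique = set(known_hashes), []
    for p in paths:
        digest = md5_of(p)
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique
```

In practice the known-hash set would be loaded from the published NSRL RDS rather than built by hand.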

4.      Keyword search

There are a number of ways to perform analytics on the acquired data, and keyword search is the most common. The basic idea is similar to searching in Google: input keywords and review the results accordingly. Keyword searches can be run in plenty of ways, such as inside the forensic software, or by extracting the files and running Windows Search. The most effective, traceable, and auditable way is to load the in-scope data into an e-Discovery platform for search and review. Note that not all data normally has to be loaded: advanced analytics and filtering processes, such as filtering by file type or date, or applying analytics to user deletion activities, can trim down the data size before loading, so the subsequent keyword search identifies the high-risk population for review.
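Stripped of the platform around it, a keyword search is a simple matching pass over the extracted text. The toy sketch below shows the core idea on plain text files only (real platforms first extract text from every file format, then index it for fast, repeatable queries):

```python
def keyword_hits(paths, keywords):
    """Case-insensitive keyword search across text files; returns a
    mapping of keyword -> list of matching file paths for triage."""
    hits = {k: [] for k in keywords}
    for p in paths:
        try:
            with open(p, "r", errors="ignore") as f:
                text = f.read().lower()
        except OSError:
            continue  # unreadable file: skip rather than abort the run
        for k in keywords:
            if k.lower() in text:
                hits[k].append(p)
    return hits
```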
 
Please note that the above is only a quick overview of the most common tasks for general investigation purposes. In fact, thousands more analyses are available for deep-dive investigations. I will share more on this in the near future with some real-life examples.

Previous Step | Next Step

Thursday, January 21, 2016

Computer Forensic: - Forensic Workflow I - Data Acquisition And Preservation

Having focused on Data Analytics in the previous posts, it's time for Computer Forensics. As I said earlier, computer forensics is mainly about storytelling: presenting facts to facilitate the investigative work. As such, in-depth technical knowledge of hardware and software, together with proper presentation skills, is always essential.

Back in 2006, when I first jumped into this industry as a law enforcement officer, almost every case was about analyzing a hard disk. Today, however, computer forensics is undoubtedly becoming more and more complicated, and in recent years people have started calling it digital forensics, as the variety of digital devices keeps growing: smartphones, tablets, and so on. That said, the computer forensic workflow remains much the same:-

1.      Data Acquisition And Preservation
2.      Forensic Analysis
3.      Reporting
4.      Testify as Expert Witnesses

The first step is to obtain the relevant data and preserve it through an auditable process with proper chain-of-custody maintenance, regardless of the target device type. A sound forensic process, using a proper forensic kit with industry-accepted verification algorithms such as MD5 and SHA hashes, is required to ensure nothing is altered during acquisition. The preferred acquisition method is full data cloning (also known as a bit-by-bit copy) through a write-blocker connection, ensuring that an identical copy is obtained and no data-integrity concern arises, with the data preserved in a non-alterable format.
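The verification step boils down to hashing the source and the image with the same algorithms and checking that the digests match. A minimal sketch of that check (the function names are mine; imaging tools such as EnCase or FTK Imager do this automatically and record the digests in their logs):

```python
import hashlib

def acquisition_hashes(path, chunk_size=1 << 20):
    """Compute MD5 and SHA-1 of a device/image file in a single pass,
    as a forensic tool does when verifying an acquisition."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

def verify_image(source_path, image_path):
    """True only if source and image hash to identical digests,
    i.e. the copy is bit-for-bit identical."""
    return acquisition_hashes(source_path) == acquisition_hashes(image_path)
```

A single changed byte in the image produces entirely different digests, which is what makes the hash pair usable as evidence of integrity.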

However, subject to technical limitations, sometimes we can only acquire the logical data files or perform a drag-and-drop copy, for example when acquiring a server's email data or imaging an old hard disk with serious bad-sector issues. Worst come to worst, in some rare circumstances the result can only be vouched for by the examiner's personal integrity; in technology, I would say, nothing is impossible.

Throughout the data acquisition process, one Master and one Backup copy are produced, and sometimes an additional Working copy, depending on the nature of the case. In most circumstances the flow is to collect the custodian's device, image the data, and return the device. With this approach, once the device is returned and the custodian starts using it again, the source data is altered and the exact image can never be reproduced. A backup is therefore essential, and all analysis should be performed on the backup or the working copy. The master copy is used only for creating a new backup whenever it is the only workable copy remaining.

On top of the above data acquisition process, I have experienced cases where, due to sensitivity concerns, the original media also had to be seized, with only a cloned copy returned for the custodian's continued use. The main disadvantage of returning only a cloned copy is the additional cost induced, but I believe this is the best and most secure process I have ever performed.

In my forensic career I have used plenty of tools for forensic imaging, such as, but not limited to, EnCase, FTK Imager, Paladin, and Helix for hard disk data acquisition; Oxygen Forensic, XRY, and Cellebrite for mobile forensics; and MacQuisition for macOS data acquisition. My main comment on these tools is that most are similar to one another: 90% of data acquisition work is fairly straightforward and the tools perform very well, but the remaining 10% is full of unexpected issues that rely on the examiner's experience. I will share more on these unexpected issues in future computer forensic posts when sharing real-life examples.