Digital Forensic Tool Testing (DFTT) Results
Results from the test images on this site have been posted to the
CFTT Yahoo! Groups
e-mail list, but this is not an ideal reporting mechanism for several
reasons. First, the archive is sorted by date and e-mail subject
and the threads for each test image can be long. It is tedious to
find the message with the results for a specific tool. Second, it
is not easy to show that a tool has fixed a bug if one is found.
To address these limitations, test results will be saved to a
Test Results Tracker on SourceForge. A fill-in-the-blank form
will be released with each test image, and the contents of that form
will be added to the Tracker. Results from new versions of tools
can be added at any time, which makes it clearer when a bug
has been fixed.
To keep the results accurate, the following must be satisfied for
results to be published in the Tracker:
- Test results must be e-mailed to the CFTT Yahoo! Groups
list. Results cannot be e-mailed only to me, and they must
come from an e-mail account that uses a real name, not only a
handle. It is understood that there is no way to verify that
the name is indeed real.
- Test results must show the version number of the tool and/or
documentation that was tested.
- Test results must be confirmed by a second and independent
user. In other words, at least two people must independently
run the test, obtain the same results, and submit the results
to the CFTT list.
- At least one of the two people that submit test results for
a tool must not have a relationship with the tool's sale
or development. Like the real name requirement, it is
understood that there is no way to verify this for every case.
- If the test results show that a tool does not have any errors,
then the Tracker entry will be set to the closed status.
- If the test results show that a tool has an error, then the
Tracker entry will have an open status. When a new
version of the tool or a new version of the documentation
is released, then new test results should be submitted
subject to the previous requirements. If the new results
do not have any errors, then the Tracker entry will be set
to the closed status.
- If it is not clear whether the test results indicate an error,
then the Tracker entry will be set to the pending status
until the issue is resolved through a discussion on the CFTT
list about what the expected behavior of the test should be.
This procedure uses the CFTT list members as enforcers of the Tracker
because the CFTT archive should contain the same information that
the Tracker does.
A report will be removed from the Tracker only if it is shown to
be incorrect. To show that a report is incorrect, two reports that
contradict the report in question must be submitted, and they must
meet the same requirements that are used when submitting the initial
reports. After the conflict is resolved, the correct report will
remain in the Tracker.
What is an Error?
An error can typically be thought of as a result that is different
from what is expected. Unfortunately, with digital forensics there
are not many documented procedures, so expectations are rarely made
explicit.
The following are the types of errors that I think may be found.
This list may change as more tests are conducted.
- Data Specification Errors: An error where a tool does
not process data according to the data's design specification.
If there is not a published data design specification or a
generally accepted and published method for processing the data,
then this error cannot occur. This applies to the processing of
structured data, such as file systems and file formats.
- User Environment Errors: An error where a tool does
not provide an investigator with access to the same data that the
original user had access to. An example of this type of error is
the Extended Partition Test, where the test image does not meet
the specification but operating systems support it anyway.
Another example is the FAT daylight savings test, because
the behavior is not defined in the FAT specification.
- Analysis Tool Specification Errors: An error where a tool does
not interpret or present the data as it claims to in the
documentation that comes with the tool. There are a couple of
variations of this type of error.
- One variation is when a tool makes a vague statement
about its abilities and there are scenarios that it cannot
handle but that other tools can. If a tool makes
only general claims, then its expected behavior is the same
as that of an "average" tool in the same "category". For example,
consider a tool Q that states that it "can recover deleted
files from file system X". If the tests show that the tool
can recover files allocated to consecutive sectors,
but it cannot recover files allocated to fragmented
sectors, AND there is a trivial recovery technique for
fragmented sectors that tools M and N support, then the
tool has a tool specification error because the vague
statement implies that it handles both consecutive and fragmented
recovery. This error can be fixed by more clearly documenting
the abilities of the tool or by adding the missing
functionality. This will be the most difficult type of
error to identify because the expected behavior is subject
to interpretation.
- A second variation is when a tool makes a specific
statement about its analysis abilities, but its results are
not correct. For example, consider a keyword search that
reports an incorrect address for the keyword. The tool has
presented incorrect data to the user. It may not always be
clear whether an error is a tool specification error or a
data specification error.
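To make the keyword search example concrete, the following is a minimal
sketch (not taken from any tested tool) of what a correct result looks
like: the tool claims to report the byte offset of every hit in a raw
image, so any reported offset that does not match the actual location
of the keyword in the image is an error of this type. The function name
and behavior here are illustrative assumptions.

```python
def keyword_search(image_path, keyword):
    """Scan a raw image file for a keyword and return every byte offset.

    Hypothetical example: a tool that specifically claims "reports the
    byte offset of each hit" has a tool specification error if the
    offsets it reports do not match the keyword's actual locations.
    """
    with open(image_path, "rb") as f:
        data = f.read()
    hits = []
    start = 0
    while True:
        offset = data.find(keyword, start)
        if offset == -1:
            break
        hits.append(offset)
        start = offset + 1  # continue past this hit (allows overlaps)
    return hits
```

A tester can verify such a claim by planting a keyword at known offsets
in a test image and comparing the tool's reported addresses against
those known values.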
Copyright © 2004 by Brian Carrier
Last Updated: June 9, 2004