Valid conclusions?

Posted on July 12, 2011. Filed under: forensic

WARNING: Initial thoughts on a recent situation ahead – incomplete – more to follow, eventually!

Recently, the Casey Anthony trial in the USA has been a source of discussion in many fora, but now a bit of a “spat” seems to be in danger of breaking out between the developers of two of the tools used to analyse the web history.

Leaving aside the case itself, let’s start by looking at what the two developers have to say about the issue that came up during cross-examination:

http://blog.digital-detective.co.uk/2011/07/digital-evidence-discrepancies-casey.html

http://www.cacheback.ca/news/news_release-20110711-1.asp

No preference is implied by the ordering of those links, by the way, it’s just the order in which I became aware of them. I don’t use either tool – I have my own methods for doing these things when necessary.

Two issues arise from these two posts, for me:

i) Both developers admit that there were possible problems with their tools which may have resulted in incorrect results, and no one was aware of this until the two tools were run side by side.

ii) Neither tool seems to have been validated for the case in question. I’m sure they were verified (i.e. checked for conformance to design/specification), but I’m not convinced that they were tested against the requirements of the case.

Here comes the repetitive bit: as far as I’m concerned, under the requirements of current and proposed ISO standards neither tool could be considered reliable. There is no clear documentation of errors, nor is there evidence that either has been subjected to a properly structured validation process. Dual-tooling is not validation. It merely compares two implementations of methods designed to solve the same problem, as the developers understand it. At no point does anyone check that the results are correct, only how similar they are. Two implementations of the same wrong algorithm are more likely than not to come up with the same wrong results.
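To make that concrete, here’s a deliberately contrived sketch in Python – nothing to do with either vendor’s actual code. Two “tools”, written independently, decode a timestamp stored as seconds since 2001-01-01 UTC (the Apple “Cocoa” convention), but both assume it counts from the Unix epoch. Run side by side they agree perfectly, and both are wrong by three decades:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical example: the stored value is seconds since 2001-01-01 UTC,
# but both "tools" below share the same wrong assumption that it counts
# from the Unix epoch (1970-01-01 UTC).

COCOA_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)
UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def tool_a_decode(ts: float) -> datetime:
    # Tool A: assumes a Unix epoch (the shared wrong assumption)
    return datetime.fromtimestamp(ts, tz=timezone.utc)

def tool_b_decode(ts: float) -> datetime:
    # Tool B: written independently, but encodes the same wrong epoch
    return UNIX_EPOCH + timedelta(seconds=ts)

def ground_truth(ts: float) -> datetime:
    # What the stored value actually means
    return COCOA_EPOCH + timedelta(seconds=ts)

visit = 331_128_000.0  # an illustrative recorded visit time

a, b = tool_a_decode(visit), tool_b_decode(visit)
print(a == b)                    # True: the dual-tool check passes
print(a == ground_truth(visit))  # False: both results are decades out
```

The dual-tool comparison passes; only a check against known-correct reference data exposes the fault.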

This is typical of the issues we will see more and more of in the digital forensics world – we depend too much on third-party tools which use algorithms developed through reverse engineering and have not been completely tested.

I’m not suggesting that every tool needs to be tested in every possible configuration on every possible evidence source (that’s plainly impossible), but we do need to get to a position where properly structured validation is carried out, and records which document that validation – including the areas which have NOT been tested – are maintained and made available.
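By way of illustration only – the schema and every name in it are my own invention, not drawn from any ISO standard or vendor documentation – a structured validation record might capture, for each requirement, a reference input with known-correct results, the tool’s observed output, and an explicit list of what was NOT tested:

```python
from dataclasses import dataclass, field

# Sketch of a validation record. All names and figures are illustrative.

@dataclass
class ValidationRecord:
    tool: str
    version: str
    requirement: str        # what the case requires the tool to do
    reference_input: str    # evidence source with known content
    expected: dict          # known-correct results for that input
    observed: dict          # what the tool actually produced
    untested: list = field(default_factory=list)  # documented gaps

    def passed(self) -> bool:
        # Validation compares against known-correct results,
        # not against another tool's output.
        return self.observed == self.expected

record = ValidationRecord(
    tool="ExampleHistoryParser",  # hypothetical tool
    version="2.1",
    requirement="Recover per-URL visit counts from Firefox places.sqlite",
    reference_input="reference_places.sqlite",
    expected={"http://example.com/": 1},
    observed={"http://example.com/": 84},
    untested=["Firefox 2 history.dat", "private-browsing artefacts"],
)

print(record.passed())  # False: a documented, reproducible fault
```

A record like this, kept and disclosed with the case file, shows exactly what the tool was trusted to do and on what basis – including where that trust has no test evidence behind it.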

An examiner should always be free to use new methods, tools & processes, but should be personally responsible for choosing them and justifying their use. Information about usage limits and limitations on testing is vital, and any competent examiner should be able to carry out additional validation where it is needed.

Let the flaming (of this post) begin…

 

P.S. – I’ve been doing a lot of work on models & systems for validation recently – they’re currently commercially confidential, but if you’d like to discuss the issues more please do contact me via n-gate.net


