Valid conclusions?

Posted on July 12, 2011. Filed under: forensic

WARNING: Initial thoughts on a recent situation ahead – incomplete – more to follow, eventually!

Recently, the Casey Anthony trial in the USA has been a source of discussion in many fora, but most recently a bit of a “spat” seems to be in danger of breaking out between the developers of two of the tools used to analyse the web history.

Leaving aside the case itself, let’s start by looking at what the two developers have to say about the issue that came up during cross-examination:

No preference is implied by the ordering of those links, by the way, it’s just the order in which I became aware of them. I don’t use either tool – I have my own methods for doing these things when necessary.

Two issues arise from these two posts, for me:

i) Both developers admit that there were possible problems with their tools which may have resulted in incorrect results, and no-one was aware of this until the two tools were run side by side.

ii) Neither tool seems to have been validated for the case in question. I’m sure they were verified (i.e. checked for conformance to design/specification) but I’m not convinced that they were tested against the requirements for the case.

Here comes the repetitive bit: as far as I’m concerned, under the requirements of current and proposed ISO standards, neither tool could be considered reliable. There is no clear documentation about errors, nor is there evidence that either has been subjected to a proper structured validation process. Dual-tooling is not validation. It merely compares two implementations of methods designed to solve the same problem as the developers understand things. At no point does anyone check that the results are correct, just how similar they are. Two implementations of the same wrong algorithm are more likely than not to come up with the same wrong results.
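To make that last point concrete, here is a deliberately contrived sketch (all names and the timestamp format are invented for illustration, not taken from either tool): two independently written decoders share the same mistaken assumption about a file format, so dual-tooling shows perfect agreement while both disagree with ground truth.

```python
# Hypothetical illustration: two independent implementations of the same
# *incorrect* reverse-engineered algorithm agree with each other, yet both
# are wrong. Assumed scenario: timestamps are really seconds since the Unix
# epoch (1970-01-01), but both developers concluded from reverse engineering
# that the epoch was 2000-01-01.

from datetime import datetime, timedelta, timezone

UNIX_EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)   # the actual format
WRONG_EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)  # shared misunderstanding

def tool_a_decode(seconds: int) -> datetime:
    # Tool A: straightforward offset from the (wrong) epoch.
    return WRONG_EPOCH + timedelta(seconds=seconds)

def tool_b_decode(seconds: int) -> datetime:
    # Tool B: a completely separate implementation of the same flawed spec.
    return datetime.fromtimestamp(WRONG_EPOCH.timestamp() + seconds,
                                  tz=timezone.utc)

raw = 1_000_000
ground_truth = UNIX_EPOCH + timedelta(seconds=raw)

# Dual-tooling "passes": the two tools agree exactly...
assert tool_a_decode(raw) == tool_b_decode(raw)
# ...but checking against independently established truth shows both are wrong.
assert tool_a_decode(raw) != ground_truth
```

The only check that catches the shared error is the comparison against independently established ground truth – which is exactly what a comparison between tools never performs.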

This is typical of the issues we will see more and more of in the digital forensics world – we depend too much on third-party tools which use algorithms developed through reverse engineering and have not been completely tested.

I’m not suggesting that every tool needs to be tested in every possible configuration on every possible evidence source – that’s plainly impossible – but we do need to get to a position where properly structured validation is carried out, and records which document that validation – including areas which have NOT been tested – are maintained and made available.

An examiner should always be free to use new methods, tools & processes, but should be personally responsible for choosing them and justifying their use. Information about usage limits & limitations on testing is vital, and any competent examiner should be able to carry out additional validation where it is needed.

Let the flaming (of this post) begin…


P.S. – I’ve been doing a lot of work on models & systems for validation recently – they’re currently commercially confidential but if you’d like to discuss the issues more please do contact me via


10 Responses to “Valid conclusions?”


Interesting how you talk about “two implementations of the same wrong algorithm”. It strikes me that there are two separate issues here: the correctness of any algorithm used, and the (separate) implementation of any such algorithm. The fact that the two tools provide inconsistent data could be a result of one, or the other, or even both issues combined in wonderfully non-deterministic ways. I’ve only skimmed the links, but did both tools actually use the same algorithm?

Obviously, formal methods for analysis are a huge can of worms (if indeed in some cases they’re even possible). Even implementation is difficult, as checking compliance with a given spec is not without problems. I suppose in some cases providing a set of test suites, including unit tests, might make it possible to test such tools against a given algorithmic spec, but it’s hardly trivial.

I don’t know anything about the detail of the algorithms or the implementations beyond what the developers have published.

My comment about algorithms is drawn from long years of experience with novice programmers and their ability to devise and share incorrect algorithms 😉

Your point about complexity is absolutely right, though. In a paper I wrote for someone recently, I drew a distinction between full validation and adequate validation for that very reason.

“..not convinced that they were tested against the requirements for the case.”

Are you implying that each tool should be validated explicitly with regard to what is required in each individual case? If so, does that not make any standards relating to the process of validation immensely more complicated, by requiring the validating body (or whomever) to carry out some sort of preliminary review of the case before ascertaining the requirements of the tool and then validating it? Or should there be some general standard(s) conferring power on the validating body, putting the process of validation in their hands?

Sorry, I know this is probably horrible to read, but I hope you can make some sense of it – essentially, how can you produce a standard that specifies validating a tool based upon case requirements without being very general, or without putting everything at the discretion of the validator?

Validation can be attempted prior to process deployment. If a proper validation record is maintained the current case can be checked against that record to ensure that the process is validated for the requirements of the case.

If the process isn’t validated for the requirements of the case, then it is out of scope and may require a new validation to extend the scope.

This is almost exactly what happens in other forensic sciences.

N.B. – use of “process” as a concept. A good tool in a poor process doesn’t help anyone.
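The record-checking step described above can be sketched in a few lines. This is a minimal illustration of the idea only – the data structures and the example requirements are hypothetical, not drawn from any real tool, standard, or validation scheme:

```python
# Hypothetical sketch: check a case's requirements against a maintained
# validation record. An empty gap set means the process is validated for
# the case; a non-empty one means it is out of scope and the validation
# may need extending. All names and data are illustrative.

from dataclasses import dataclass, field

@dataclass
class ValidationRecord:
    process: str
    validated_for: set = field(default_factory=set)   # requirements tested
    known_untested: set = field(default_factory=set)  # documented gaps

def check_scope(record: ValidationRecord, case_requirements: set) -> set:
    """Return the case requirements NOT covered by the validation record."""
    return case_requirements - record.validated_for

record = ValidationRecord(
    process="browser history recovery",
    validated_for={"live history file"},
    known_untested={"deleted database fragments"},
)

# In scope: every requirement appears in the record.
assert check_scope(record, {"live history file"}) == set()

# Out of scope: one requirement was never validated, so the examiner
# knows a new validation is needed before relying on the process.
gaps = check_scope(record, {"live history file", "deleted database fragments"})
assert gaps == {"deleted database fragments"}
```

Note that documenting the known-untested areas explicitly – not just the tested ones – is what makes the out-of-scope check possible at all.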

In a hypothetical scene investigation in which the conditions are poor (outdoors, dark, windy, rain, etc.), time becomes a factor to maintain evidential value and integrity. The methods / processes used are already established as being sound based upon set standards / protocols (I assume, I recall being taught *how* to examine a scene, but not really what standards governed them) – You prioritise based on perceived evidential value. A standard process is accepted to take swabs, another for finding and recovering latent prints, etc. But, more importantly, the tools used for these have been validated (since they are standard tools), e.g. swabs are verified as being sterile, etc.

However, I envision this as being different in a comparable DF scenario, such as one in which a live examination, live acquisition, live network investigation, etc. is required. Time becomes a factor, just the same, but in this case, does it mean that the tools you intend to use (specifically for this case) have to be validated before you use them to carry out their intended process (for this case – and thus, would it sacrifice time, and therefore the integrity of the evidence / evidential value)? Or would a general validation occur for that tool (perhaps prior to even taking on the case) which says that the tool can be used for X, Y and Z (where this case requires the process of X)?

Sorry, I realise my previous comment was actually answered. Tools are validated for use for set processes. Tools would then require further validation for any process/es falling outside of what that tool was validated for. Still, in my previous hypothetical scenario, I assume there will be something that allows for the use of a tool for an unvalidated process, providing the investigator can subsequently justify that use, followed by some sort of post-validation?

Yes, Angus, a nice example of how extremely important the validation process is!
But it is extremely complicated, resource-consuming, and demands the right methodology… That is probably why we see so few attempts at this kind of work around the world.

The complexity is a problem – but I *think* I have a solution to that problem now. In the UK, from discussion I’ve had, the biggest barrier to widespread validation is twofold – fear of complexity and fear of the cost.


I am interested to see you have attempted to sensationalise the issues in the Anthony case with your description of a “spat”. I am certainly not aware of any “spat” as you state; our software was discussed on video during the case testimony and I wanted to clarify some of the issues raised as we were not provided with an opportunity to do so. I had no intention of making any further comment on it; however, having read your post, I feel obliged to correct some of the assumptions and incorrect information you have felt necessary to post.

Firstly, you state “Both developers admit that there were possible problems with their tools which may have resulted in incorrect results”; I have never once stated that the data we recovered for that file was incorrect, in fact it was correct and I have verified this manually, hence my post regarding two of the critical records in the trial.

The original data was manually recovered from unallocated clusters and was not completely intact. Yes, not all of the records were recovered from this file with the software used at the time; however, what was recovered on our part was accurate. I am not surprised that some data was filtered out considering the fact it was not from a live file. The data in question had missing dates, missing URL cells and other missing artefacts, so I am not sure how you make such a statement when you have not examined the original file yourself. We did not recover any data which was wrong so your statement regarding inaccurate results (for our part) is incorrect.

You also state, “nor is there evidence that either has been subjected to a proper structured validation process”. This is an assumption on your part, and considering you further state you have been “doing a lot of work on models & systems for validation recently”, I wonder about your motivation in making such a claim. I would also ask you to point me in the direction of any publications from any other forensic software vendors regarding their published validation process.

I agree that the end user of a software tool should be able to manually verify, if necessary, any discrepancies between tools used. One of the testing methods we use is against known data sets to validate output rather than comparing with other software, which of course as you state may not be providing the correct result.
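A known-answer test of that kind might look like the following. This is purely illustrative – the record format, the parser, and the data set are invented stand-ins, not the vendor’s actual code or test data:

```python
# Hypothetical known-answer test: run a parser over a data set whose
# correct contents were established independently (e.g. by manual decoding
# of the raw bytes), then compare output to that ground truth rather than
# to another tool's output.

def parse_history(raw: bytes) -> list:
    """Stand-in parser: records are 'url|visit_count' lines."""
    records = []
    for line in raw.decode("utf-8").splitlines():
        url, count = line.split("|")
        records.append((url, int(count)))
    return records

# Ground truth established by independent manual examination, NOT by
# running a second tool over the same input.
KNOWN_RAW = b"http://example.com/|3\nhttp://example.org/|1"
KNOWN_TRUTH = [("http://example.com/", 3), ("http://example.org/", 1)]

assert parse_history(KNOWN_RAW) == KNOWN_TRUTH
```

The crucial property is that `KNOWN_TRUTH` is derived from the data itself, so a systematic error shared by two tools cannot slip through the comparison.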

You also state “Neither tool seems to have been validated for the case in question.” As this is obviously aimed at the end user, I would be obliged if you could explain what you do to validate a tool for a case. I am also interested in how you would have extracted the data from an incomplete Mork database had you been asked to examine the file for this case and how you would have validated your findings.

I will go into this in detail when I get a chance. I followed this case from the moment I realised that the judge had allowed non-forensic evidence to be portrayed as forensic evidence, which is not the usual practice in the state of Florida – starting with the research “sniffing” project used to detect the “chloroform” in the trunk of the car. As a scientist in the real world, so many things about that made me say, “What? Did I hear what I thought I just heard?”

The internet search was one very important item, because it was the only evidence suggesting premeditation, and the means for the prosecution to get the conviction and potentially the death penalty (no, I do believe she was guilty of being involved – and that is how the defense opened the trial: that she was involved). I honestly do not think you can review the evidence (as I have – not the actual internet file, but all of the discovery) and not have a very bad taste in your mouth about the way the prosecution put on the case. Read the reports on the “sniffing” research – the first ones, not the second ones, which had a totally different view. When the internet file was looked at without the tools, it showed the truth.

I am so dismayed at the way this whole thing was handled. As I have explained to people caught up in the emotion: take Casey out of it and imagine your daughter was accused of something and prosecuted with NON-forensic evidence. This goes way beyond the internet issue, and it is all available to any person who wants to review the evidence. The main analyst for the state needs to be investigated, as does whoever else was involved in the “pressure” on the “sniffer” scientists to rework the report.

I do not even know if this post will go through, as I am not a member here. I am just so upset about the whole trial and how it was handled – so much so that I am writing this late on a Sunday night, because I saw this blog post, which I really liked, because it was at least questioning some things. I am putting together an overview to make my point (and not just for a blog post, as I have been working toward some action being taken against this runaway non-forensic-level prosecution). I am sorry to rant, because in the long run it lowers my professional opinion.

I will be posting a complete analysis of the science. I will not say “junk science”, because it was just “out of scope”, and I truly believe that many people were pressured to report things they did not believe in this case (which, if you read the reports, is actually there in black and white). Have a good night, and I sure hope that real science plays out in the end; the forensics (search) issue seems to be the only one causing people to question this, which is good, because the prosecution will have to answer in the end if this stays “a discussed problem”.



    This is the weblog of Angus M. Marshall, forensic scientist, author of Digital Forensics : digital evidence in criminal investigations and MD at n-gate ltd.

