Validation and decision making

19 Posts
6 Users
0 Likes
921 Views
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

I am trying to capture 'common sense decision making' in a framework, in order to produce a process flow for practitioners to use when determining when to report information from their cases.

I have visualised it HERE

I would be really interested to have your feedback on this; any evaluation, additions or edits would be very much appreciated. I know some of it is very basic, but I want to hone it to the point where engaging with the process prevents misinterpreted or erroneous content from entering DF reports/statements.

Thanks in advance for any help.


*****EDIT - just noticed there are a couple of 'yes' links missing from the top right-hand side of the diagram. It's just a draft at present.

 
Posted : 15/06/2018 2:00 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

I have visualised it HERE

In the top row the two rightmost squares are in a no-win/no-win situation 😯

Most probably these are just a couple of typos, and the meaning can still be understood, but if you could re-check and correct the chart so that each "Decision" always has an "in" arrow and two exits, one labelled Yes and one labelled No, we would avoid possible misunderstandings.

Usually the "decision" is however represented as a diamond, so it would be better if you could redesign the flow chart according to the "standard", *like*
https://www.rff.com/flowchart_shapes.php

Oval= Beginning or End
Rectangle=Process
Diamond=Decision
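
To make the Yes/No rule concrete, here is a minimal Python sketch of the kind of sanity check it implies, assuming the chart were written out as a simple edge list (the node names and edges below are invented for illustration, not taken from the actual diagram):

decision_nodes = {"D1", "D2"}
edges = [
    ("D1", "P1", "Yes"),  # (from_node, to_node, exit label)
    ("D1", "P2", "No"),
    ("D2", "P3", "Yes"),  # D2 deliberately lacks its "No" exit
]

# every decision must have exactly two exits, one "Yes" and one "No"
for node in decision_nodes:
    labels = sorted(label for src, _, label in edges if src == node)
    if labels != ["No", "Yes"]:
        print(node, "- expected one Yes and one No exit, got:", labels)

A check like this, run over the real chart, would have flagged those two squares immediately.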

jaclaz

 
Posted : 15/06/2018 3:43 pm
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

Thanks Jaclaz, amended. Also, I will change the shapes for conformity when it's finalised.

Any comments regarding the actual content of the framework? Any bits missing or in need of adding, etc.? Lack of detail?

 
Posted : 15/06/2018 4:06 pm
jaclaz
(@jaclaz)
Posts: 5133
Illustrious Member
 

Thanks Jaclaz, amended. Also, I will change the shapes for conformity when it's finalised.

Any comments regarding the actual content of the framework? Any bits missing or in need of adding, etc.? Lack of detail?

Generally speaking, I can see a "generic" issue with the reliance on either
1) published (and thus peer reviewed) material
2) peer reviewing in general

We all know that (particularly in recent years) a lot of published material is (IMHO) very poor.

The issue, as I see it, is that often the author is a poor experimenter and their peers are as poor as they are, or maybe they didn't review the article as they were supposed to.

If you prefer, a number of published articles (including those related to CF) are not reproducible.

More generally, peer reviewing seems to be at a low point, basically because finding such peers is not as easy as it seems.

Then, a number of articles are very, very "narrow", so it is rare that BOTH of the following conditions
1) the peer reviewed material exists
2) the peer reviewed material actually applies to the specific OS, version, etc.
are fulfilled.

So, once very "basic" knowledge and unchanged, well-known behaviours are excluded, when it comes to the more "difficult" parts the path through the peer reviewed material is largely impracticable, and you are left only with the "experiment yourself" path.

This latter is likely not doable (because of limited time/resources/etc.), or at least there is a great risk that the experiment won't be "fully" or "fully and properly" executed; and since the results of the experiments (according to the diagram) are not verified by third parties or peer reviewed, they carry somewhat less relevance/authoritativeness.

My doubt is that if the flowchart is followed to the letter, it would either take endless time or too often produce an "unsafe to report" result.

So maybe there is space for "levels of confidence in the report"?
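
As a rough illustration only (the condition names and the levels here are my invention, not something in the diagram), such a graded outcome could look like this in Python:

def report_confidence(reliably_peer_reviewed, applies_to_case,
                      own_test_done, own_test_verified):
    """Graded outcome instead of a binary safe/unsafe decision."""
    if reliably_peer_reviewed and applies_to_case:
        return "high"          # applicable, reliable published material
    if own_test_done and own_test_verified:
        return "medium-high"   # own experiment, independently checked
    if own_test_done:
        return "medium"        # own experiment, not independently checked
    return "unsafe to report"  # no support either way

# e.g. an own experiment that nobody has verified yet:
print(report_confidence(False, False, True, False))  # -> medium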

Finally, supposing that the diagram should represent a sort of guideline, I believe it should be added (of course as a mere, non-binding recommendation) that the results of the experiments should be published (so that they can, formally or informally, be reviewed/commented on/etc.) and hopefully become part of the peer reviewed material.

jaclaz

 
Posted : 15/06/2018 4:33 pm
(@athulin)
Posts: 1156
Noble Member
 

I would be really interested to have your feedback on this; any evaluation, additions or edits would be very much appreciated.

You might want to follow some flow-charting/process standard (unless you are already following one I haven't seen before). <>-boxes are almost always used for decisions, for example … (Already covered?)

Add reference numbers to the boxes. It makes it so much easier to talk about 'decision D5' instead of 'the green box to the left of the yellow box near the center'.

It's a bit odd that testing (to unknown standards) trumps peer-reviewed publication.

It's very much waterfall. The area of 'expand knowledge' should probably loop back to 'do we know enough to report?', which probably makes the outcome of 'expand knowledge' some form of publication. (It makes it less practical, but that is probably not relevant here.)
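
In rough Python terms, the loop I have in mind would look something like this (the function and the knowledge items are invented placeholders):

def know_enough_to_report(knowledge):
    # placeholder test; in practice: is every conclusion supported?
    return "artefact behaviour verified" in knowledge

knowledge = set()
while not know_enough_to_report(knowledge):
    # 'expand knowledge': test, document, ideally publish the result,
    # then return to the same decision instead of falling through
    knowledge.add("artefact behaviour verified")

print("safe to draft the report")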

I know some of it is very basic, but I want to hone it to the point where engaging with the process prevents misinterpreted or erroneous content from entering DF reports/statements.

That requires a decision box 'Am I competent to make any or all the decisions in this flow chart?' before this sequence is entered.

Not sure if 'evidence type' is a good starting point. Perhaps 'conclusions, inferences and assumptions' would be slightly better, as that makes it clear that it's something to do once a preliminary report is available.

 
Posted : 15/06/2018 5:44 pm
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

Generally speaking, I can see a "generic" issue with the reliance on either
1) published (and thus peer reviewed) material
2) peer reviewing in general

We all know that (particularly in recent years) a lot of published material is (IMHO) very poor.

The issue, as I see it, is that often the author is a poor experimenter and their peers are as poor as they are, or maybe they didn't review the article as they were supposed to.

If you prefer, a number of published articles (including those related to CF) are not reproducible.

More generally, peer reviewing seems to be at a low point, basically because finding such peers is not as easy as it seems.

Then, a number of articles are very, very "narrow", so it is rare that BOTH of the following conditions
1) the peer reviewed material exists
2) the peer reviewed material actually applies to the specific OS, version, etc.
are fulfilled.

So, once very "basic" knowledge and unchanged, well-known behaviours are excluded, when it comes to the more "difficult" parts the path through the peer reviewed material is largely impracticable, and you are left only with the "experiment yourself" path.

This latter is likely not doable (because of limited time/resources/etc.), or at least there is a great risk that the experiment won't be "fully" or "fully and properly" executed; and since the results of the experiments (according to the diagram) are not verified by third parties or peer reviewed, they carry somewhat less relevance/authoritativeness.

Interesting point about peer-reviewed material, and I agree in part. But if we can't rely on material like this, we arguably have nothing. However, to cover for this I did add 'reliably peer reviewed' to the decision boxes, because I see your point.

Regarding the latter, in terms of evidence reliability: I know it is burdensome, but the alternative is essentially that people include non-validated content in their reports - leaving us in the position we are in now. Surely this is not good. We essentially need to build up a body of reliable knowledge before this process becomes more efficient.

My doubt is that if the flowchart is followed to the letter, it would either take endless time or too often produce an "unsafe to report" result.

So maybe there is space for "levels of confidence in the report"?

Is this not part of the issue? It might end in 'unsafe to report' because of the lack of validation work the field has done so far - time to start?

Finally, supposing that the diagram should represent a sort of guideline, I believe it should be added (of course as a mere, non-binding recommendation) that the results of the experiments should be published (so that they can, formally or informally, be reviewed/commented on/etc.) and hopefully become part of the peer reviewed material.

Great idea!!

You might want to follow some flow-charting/process standard (unless you are already following one I haven't seen before). <>-boxes are almost always used for decisions, for example … (Already covered?)

Yes, this is just a draft; I will follow a standard for the final one.

Add reference numbers to the boxes. It makes it so much easier to talk about 'decision D5' instead of 'the green box to the left of the yellow box near the center'.

Will do.

It's a bit odd that testing (to unknown standards) trumps peer-reviewed publication.

It doesn't; testing only takes over when the peer reviewed material either doesn't exist for the scenario faced by the practitioner or isn't reliable.

That requires a decision box 'Am I competent to make any or all the decisions in this flow chart?' before this sequence is entered.

Not sure if 'evidence type' is a good starting point. Perhaps 'conclusions, inferences and assumptions' would be slightly better, as that makes it clear that it's something to do once a preliminary report is available.

Good points, I will add them.

Any other stages missing, etc.? I mean, overall, in a perfect world, if followed properly, should this process flow prevent issues?

 
Posted : 15/06/2018 7:37 pm
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

The process flow has now been updated; the original post has been edited and the new version is also posted HERE.

 
Posted : 15/06/2018 8:17 pm
(@athulin)
Posts: 1156
Noble Member
 

It's a bit odd that testing (to unknown standards) trumps peer-reviewed publication.

It doesn't; testing only takes over when the peer reviewed material either doesn't exist for the scenario faced by the practitioner or isn't reliable.

But there's no obvious validation that the test, or the report of it, is reliable? (Perhaps that's covered in some other way.) I had the impression that that was the gateway, so I expected that internal tests would also need to pass either a 'test exit gateway' or a 'DF analysis entry gateway', to ensure that internal work was held to the same standard as external work.
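
Schematically, and with invented field names, what I would expect is a single gateway applied to both kinds of source:

def passes_quality_gateway(source):
    # the same checks apply whether the work is internal or external
    return source["method_documented"] and source["independently_reviewed"]

internal_test = {"method_documented": True, "independently_reviewed": False}
external_paper = {"method_documented": True, "independently_reviewed": True}

for name, src in [("internal test", internal_test),
                  ("external paper", external_paper)]:
    print(name, "->", "usable" if passes_quality_gateway(src) else "fails gateway")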

Any other stages missing, etc.? I mean, overall, in a perfect world, if followed properly, should this process flow prevent issues?

In a perfect world, issues would not be present in the first place. It's difficult to know just how perfect/imperfect we assume the world is.

In real life, the person who follows a flow-chart tends to take additional circumstances into consideration: 'Either I spend a couple of days at the library hunting relevant research results, or I do my own tests, starting now. Which do I prefer?' As there is no obvious quality gateway for tests, doing my own tests may be seen as the simpler path to take. (Some serious quality indoctrination is required to overcome any such personal bias… and there may even be reason to have a separate test panel who approves proposed tests – after all, some other analyst may have done relevant tests only a week ago, so there's no sense in repeating the work. But that depends on the actual organization – a one-man shop won't have that particular problem.)

Also, if the test exit gateway is kept by the same person who performs or manages operations, the current length of the work queue may affect the decision on whether the test results are good. (This is part of the reason why I think the expand knowledge/testing part may be better as a separate flow; it may not even involve the analyst following the main flowchart.)

If the flowchart is a flowchart, it's one person following it. Anything someone else does (such as evaluating quality) is often better treated as a 'subroutine', or as a handover to another flow.

If the flowchart is a process chart, on the other hand, those different roles (DF analyst, DF report QA, test QA, …) probably need to be identified for clarity as part of the 'boxes': 'Who does this part?'
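
For example (box labels and role names invented for illustration), each box would then carry an owner:

process_boxes = {
    "D5: peer reviewed material exists?": "DF analyst",
    "P7: run validation experiment": "test QA",
    "P9: sign off final report": "DF report QA",
}

for box, role in process_boxes.items():
    print(box, "->", role)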

Often, processes or workflows designed to minimize issues also add some kind of monitoring and improvement work. Like 'OK, those test results were successfully challenged by the opposing team. We need to identify what we missed, and ensure that we don't make that kind of mistake again.' Or 'The research that we relied on was not repeatable, thus flawed. How do we ensure that we don't make that mistake again?' (And in a really ideal world, 'We just learned that the research we used as the basis for our report/conclusion last year was flawed. We identified the wrong person. We need to inform our customer that we failed. They may have to decide if they should reverse judgement on that one …'.)

But then, this is more of an 'outer' process, monitoring and modifying the flowchart you have given, so it should perhaps be kept separate.

 
Posted : 16/06/2018 6:32 am
steve862
(@steve862)
Posts: 194
Estimable Member
 

Hi,

I like what you are trying to do, but my concern is the fast-moving nature of the programs we reverse engineer and the sheer number of different artefacts out there.

I also agree that documented information is not at all abundant and that, unfortunately, too high a proportion of what does exist is not reliable for one reason or another.

Based on these considerations I wouldn't expect there to be material I would be willing to rely on very often for anything more than operating systems and file systems.

The rate of change of programs/apps means that, from one version to another, the types of artefacts and their correct interpretation could change several times in a year. It could even be that the artefacts stay the same but their interpretation differs from one version to the next. I've seen this personally on so many occasions.

I also wouldn't want to underestimate the impact OS settings, anti-virus activity and user options could have on causing one device and/or program to function differently from an identical device and/or program in another scenario.

As such I think so much depends on validating the results we have.

Just my two pence worth.

Steve

 
Posted : 16/06/2018 10:40 am
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

But there's no obvious validation that the test, or the report of it, is reliable? (Perhaps that's covered in some other way.) I had the impression that that was the gateway, so I expected that internal tests would also need to pass either a 'test exit gateway' or a 'DF analysis entry gateway', to ensure that internal work was held to the same standard as external work.

Great point; it assumes that someone is competent enough to test properly, and in reality this isn't always going to be the case. That being said, how do we put this issue into a framework - who then is the 'gatekeeper' in this sense? Who can be the testing validator?

As there is no obvious quality gateway for tests, doing my own tests may be seen as the simpler path to take. (Some serious quality indoctrination is required to overcome any such personal bias… and there may even be reason to have a separate test panel who approves proposed tests – after all, some other analyst may have done relevant tests only a week ago, so there's no sense in repeating the work. But that depends on the actual organization – a one-man shop won't have that particular problem.)

Good points. I suppose I need to add the steps in, and then an organisation would need to deal with them as feasibly as it can? Maybe if you can't do these tasks then you aren't in a position to reliably operate in this field? Controversial? Surely quality measures have to be enforced somehow?

If the flowchart is a flowchart, it's one person following it. Anything someone else does (such as evaluating quality) is often better treated as a 'subroutine', or as a handover to another flow.

Yes, I think structurally I have some issues with mapping this all out. I guess at this stage I only have a vague and non-conforming model which just gives an idea.

I like what you are trying to do, but my concern is the fast-moving nature of the programs we reverse engineer and the sheer number of different artefacts out there.

I am trying to keep it high-level, so I might suggest that testing takes place, but the burden of testing, and all of the issues and variables to consider, is then on the practitioner/organisation to deal with. But I very much see your point here.

I also agree that documented information is not at all abundant and that, unfortunately, too high a proportion of what does exist is not reliable for one reason or another.

This clearly needs to change. I'm not sure how we can achieve this just yet, unfortunately.

Based on these considerations I wouldn't expect there to be material I would be willing to rely on very often for anything more than operating systems and file systems.

It's a terrible state of affairs really; we almost need to go back to basics and cover robust testing of the foundational things before we move on to the advanced ones.

Some more great points to include here. I do agree this isn't going to solve world hunger, but who knows, it may help a little.

 
Posted : 16/06/2018 3:28 pm