Intelligent Reasoning


Sunday, January 29, 2006

Explaining the (Design) Explanatory Filter

The Design Explanatory Filter has been getting bad press. However, it is obvious the bad press is due to either misunderstanding or misrepresentation. Some anti-IDists argue that it is an eliminative filter. Well, yeah! All filters eliminate. The EF eliminates via consideration. Would they prefer we started at the design inference and stayed there until it was falsified? Crick’s statement would have changed to “We must remind ourselves that what we are observing was designed.” (as opposed to “…wasn’t designed, rather evolved.”)

Reaching the final decision block, where we attribute to intentional design that which has a small probability of occurring by chance and fits a specified pattern, means we have already looked into the possibility of X having occurred by other means. Might we have dismissed/eliminated some too soon? In a realm where anything is possible, possibly. That is what comes next.
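To make those decision nodes concrete, here is a minimal sketch of the filter as a decision procedure, written in Python. This is my own illustration, not Dembski's formal apparatus: the probability cutoffs, the event inputs, and the specified-pattern check are all hypothetical stand-ins.

# A minimal sketch of the Explanatory Filter's three decision nodes.
# NOTE: the cutoffs and inputs below are hypothetical stand-ins, not
# Dembski's formal apparatus (which uses a universal probability bound).
from enum import Enum

class Verdict(Enum):
    REGULARITY = "regularity (law-like necessity)"
    CHANCE = "chance"
    DESIGN = "design (provisional inference)"

SMALL_PROBABILITY = 1e-9  # hypothetical stand-in for a probability bound

def explanatory_filter(prob_by_law: float,
                       prob_by_chance: float,
                       fits_specified_pattern: bool) -> Verdict:
    """Walk the filter's nodes in order; each node eliminates by consideration."""
    # Node 1: events expected under natural law are ascribed to regularity.
    if prob_by_law > 0.5:  # hypothetical cutoff for "expected under law"
        return Verdict.REGULARITY
    # Node 2: events of intermediate probability are ascribed to chance.
    if prob_by_chance >= SMALL_PROBABILITY:
        return Verdict.CHANCE
    # Node 3: small probability AND a specified pattern -> infer design.
    # Small probability alone is not enough; without a specification the
    # filter defaults back to chance.
    if fits_specified_pattern:
        return Verdict.DESIGN
    return Verdict.CHANCE

# A low-probability event that matches an independently given pattern
# lands at the design node; the same event without a pattern does not.
print(explanatory_filter(0.0, 1e-12, True))   # Verdict.DESIGN
print(explanatory_filter(0.0, 1e-12, False))  # Verdict.CHANCE

Note that the design verdict is the last node reached and is explicitly provisional, which is the point of the paragraph above: the inference is drawn only after regularity and chance have been considered.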

Also, it pertains to a design INFERENCE. That inference is still subject to falsification. It is also subject to confirmation. Counterflow would be such evidence and/or confirmation for the design inference: Del Ratzsch, in his book Nature, Design and Science, describes counterflow as “referring to things running contrary to what, in the relevant sense, would (or might) have resulted or occurred had nature operated freely.”

IOW it took our current understanding to reach that decision node, and it takes our current understanding to make the inference. Future knowledge will either confirm or falsify the inference. The research does not, and was never meant to, stop at the last node. The EF is for detecting design only, and only when agent activity is in question.

Look at it this way: How do forensic scientists approach a crime scene? Do they run in guns blazing, kicking stuff around? No. They pick the place clean looking for clues, macro and micro. The clues lead them to an accidental death, a natural death, or a homicide. Somewhere along the line there may be a key indicator of agent activity, IOW something that has been determined couldn’t have occurred by chance.

If the evidence points to a lava flow causing the fire, then they don’t look any further; we know that when lava flows make contact with buildings, a fire will ensue. In the absence of lava or other natural (unintelligent, undirected) causes, they look for other clues. Only after collecting and examining ALL the evidence can arson be inferred. Arson and homicide imply intent, and that adds to the existing pile of evidence to nab the culprit(s).
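The arson analogy maps onto the same sketch (reusing the hypothetical explanatory_filter function above): a fire following lava contact stops at the regularity node, while a fire with no natural cause and evidence matching an independently specified pattern reaches the design node.

# Lava contacting a building makes fire expected under law:
print(explanatory_filter(prob_by_law=0.9,
                         prob_by_chance=0.9,
                         fits_specified_pattern=False))  # REGULARITY

# No lava, no plausible natural cause, and evidence matching an
# independently specified pattern: the filter reaches design (arson).
print(explanatory_filter(prob_by_law=0.0,
                         prob_by_chance=1e-12,
                         fits_specified_pattern=True))   # DESIGN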

Dembski admits that an intelligent agency may work to mimic regularity or chance. That is another reason the research doesn’t stop after the initial inference.


Finally, as Wm. Dembski states:
"The principal advantage of characterizing design as a complement of regularity and chance is that it avoids committing itself to a doctrine of intelligent agency.
Defining design as the negation of regularity and chance avoids prejudicing the causal stories we associate with the design inference."


Can anyone propose a better way to look at evidence/phenomena? How about a better way to make a design inference?

And one more word from Wm. Dembski:

"The prospect that further knowledge will upset a design inference poses a risk for the Explanatory Filter. But it is a risk endemic to all of scientific inquiry. Indeed, it merely restates the problem of induction, namely, that we may be wrong about the regularities (be they probabilistic or necessitarian) which operated in the past and apply in the present.
