Intelligent Reasoning

Promoting, advancing and defending Intelligent Design via data, logic and Intelligent Reasoning and exposing the alleged theory of evolution as the nonsense it is. I also educate evotards about ID and the alleged theory of evolution one tard at a time and sometimes in groups

Thursday, April 12, 2007

A 2ndclass Jerkoff

Over on the ARN discussion board ID imbecile aiguy has a thread titled Planet of the Intelligent Designers, in which he finishes his OP with:

In the end, all of ID's claims about detecting intelligent causation that transcends fixed law and chance crumble into dust, for the obvious reason that computers can easily pass ID's tests for intelligent agency.


IOW he is saying that since computers can produce CSI, ID is falsified because he alleges that computers operate purely via fixed law and chance. I guess the operating system and application programs have nothing to do with it.

On Monday (April 9th) another imbecile, 2ndclass, chimed in saying that I have a reading comprehension issue because I stated:

Over on the ARN discussion board there is a discussion about that computers can generate CSI that refutes ID. This “aiguy” is clueless to the fact that the CSI generated by the computer can be traced back to its designers.


This is EXACTLY what IDists have been saying since the 1990s! Read page 92 of Science and Evidence for Design in the Universe, first paragraph on that page.

The debate is what nature, operating freely can do-> computer outputs are not in any way, shape or form "nature, operating freely". Nor do computers operate via fixed laws and chance.

Also, given the predictions and data presented in "The Privileged Planet", we know that any silicon-based life, ie aiguy's computers, had to have originated from carbon-based organisms. IOW we would know they were artifacts.

2ndclass's confusion is from thinking that CSI is a probability measure and apparently nothing more. He(?) is fixated on that approach.

Good luck with that.

And when nature, operating freely, can design, build, power up and program a computer, aiguy will have a point. Until then all he has is a strawman.

64 Comments:

  • At 12:34 AM, Blogger R0b said…

    Joe, some questions:

    - If aiguy is clueless to the regress argument, why did he spend two paragraphs addressing it?
    - Did you read his whole post, or just the last sentence?
    - Do you agree that CSI is a probability measure?
    - If so, then what is your objection to what I said?
    - Do you think anyone in the ID camp would agree with your assertion that computers don't operate via fixed law and chance?
    - Why did you disappear from our discussion at Alan Fox's blog, leaving a long list of assertions unsupported?

     
  • At 7:58 AM, Blogger Joe G said…

    If aiguy is clueless to the regress argument, why did he spend two paragraphs addressing it?

    How is that relevant? IOW why does it matter how many paragraphs were used to set up a strawman?

    What is obvious is that aiguy does NOT understand what is being debated.

    Did you read his whole post, or just the last sentence?

    I read the whole post and all subsequent posts.

    Perhaps you both should read Why is a Fly Not a Horse? by geneticist Giuseppe Sermonti. Chapter VIII is titled "I Can Only Tell You What You Already Know"- IOW it is an explanation of the "conduit effect".

    There was an experiment conducted using migrating birds- the diurnal Sylviidae- they become nocturnal at migrating time, become agitated when the stars come out and fly off in a SSW direction.

    What the scientists did was to raise these birds from hatchlings- away from all adult birds. When the night sky was revealed to them for the first time, guess what happened? They became agitated and flew off SSW until the stars became hidden by cloud cover.

    They repeated this in the Spring and the birds flew off in the opposite direction!

    The birds did this even under an artificial sky of a planetarium!

    IOW they already knew or they were able to tap into that pre-existing information, ie act as a conduit.

    Do you agree that CSI is a probability measure?

    I agree that CSI can be determined via probability. But that is about it.

    Do you think anyone in the ID camp would agree with your assertion that computers don't operate via fixed law and chance?

    Everyone should. Even people outside of ID.

    Now perhaps you could support the assertion that computers operate via fixed law and chance.

    Since when are operating systems and application programs mere fixed law and chance? How many programmers would agree with that assessment?

    Since when do design engineers design a computer using only fixed law and chance? And why don't we ever see computers arise via fixed law and chance alone?

    I would love to hear about all of that.

    Why did you disappear from our discussion at Alan Fox's blog, leaving a long list of assertions unsupported?

    I am here. There isn't any need for me to go to another blog and try to educate those who wish to remain ignorant of ID.

    And as far as a long list of unsubstantiated assertions that would be you. It appears that is all you have.

     
  • At 8:07 AM, Blogger Joe G said…

    The debate is what nature, operating freely can do-> computer outputs are not in any way, shape or form "nature, operating freely".

    What part about that is not understood?

     
  • At 10:27 AM, Blogger Joe G said…

    I would like to correct and clarify the following:

    when asked:
    Do you agree that CSI is a probability measure?

    I said:
    I agree that CSI can be determined via probability. But that is about it.

    That is not quite right. Only the C in CSI can be determined via probability. IOW C is a probability measure.

    And this ties into why I stopped going to Alan's blog. I am sure this was explained to you already and you still refuse to grasp the concept.

     
  • At 11:38 AM, Blogger R0b said…

    aiguy explicitly addressed your specific argument. How is that a strawman?

    When computers are working properly, they operate via fixed law only, that is, they're completely deterministic. There are rare chance occurrences in which a transistor opens or closes when it shouldn't. I'm not aware of anything a computer does that isn't fixed law or chance.

    CSI is measured in bits. 1 bit represents a probability of .5, 2 bits represent a probability of .25, etc. I suppose you could measure it in bytes or something else, but whatever unit you choose, the principle is the same. CSI has no other units, so it does not indicate distance, voltage, power, force, or anything else. It's a measure of probability only.
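
    A minimal sketch of that conversion (Python assumed; the helper name is illustrative):

        import math

        def bits(p):
            # Information in bits, in Dembski's sense: I = -log2(p)
            return -math.log2(p)

        print(bits(0.5))   # 1.0 bit
        print(bits(0.25))  # 2.0 bits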

    Moving the discussion from Alan's blog to here would have been fine if you had let my responses go through, which you didn't. I'm happy to continue our conversations here if you guarantee that all of my posts will go through. Otherwise, let's move it to Alan's blog or to ARN.

     
  • At 11:54 AM, Blogger Joe G said…

    When computers are working properly, they operate via fixed law only, that is, they're completely deterministic.

    So that is your definition of fixed law? That is just pure nonsense!

    A computer's operation is determined by its programs. Programs are NOT fixed laws, and they are determined by the programmer. IOW the computer's operation is determined by the programmers.

    So just as Stephen Meyer stated on page 92 of "Science and Evidence for Design in the Universe", the information output by the computer can be traced back to the programmers.

    CSI is measured in bits.

    Wrong. Not just any bits will be considered CSI.

    It's a measure of probability only.

    So you choose willful ignorance. I see.

    Perhaps you could provide a reference to substantiate your nonsense?

    Even the EF demonstrates that CSI is more than probability. That you can't even understand a simple flow chart says quite a bit about your agenda of misrepresentation and distortion.

    Moving the discussion from Alan's blog to here would have been fine if you had let my responses go through, which you didn't.

    I have NEVER held back any of your posts. IOW now you are exposed as a liar.

     
  • At 11:56 AM, Blogger Joe G said…

    The debate is what nature, operating freely can do-> computer outputs are not in any way, shape or form "nature, operating freely".

    What part about that is not understood?

    Now pull your head out of your ass and answer that question.

     
  • At 12:04 PM, Blogger R0b said…

    Since you asked so nicely, I will. I don't understand the "nature acting freely" part.

    When you say nature, are you talking about natural vs. artificial or natural vs. supernatural?

    And what does "acting freely" mean? Is a planet that's constrained to its orbit by the laws of motion and gravity acting freely? How do I go about determining whether something is acting freely or not?

    Will you guarantee that you'll let all of my comments go through on this blog?

     
  • At 12:36 PM, Blogger R0b said…

    Wrong. Not just any bits will be considered CSI.

    Did I say "just any bits will be considered CSI"? No. I said "CSI is measured in bits". How is this "wrong"? I'm talking about the units of CSI. The units aren't volts, amps, or meters; they're bits, and the number of bits indicates a probability.

    I have NEVER held back any of your posts. IOW now you are exposed as a liar.

    I posted at least one response to the "A secondclass lowlife (or secondclass = no class)" thread. Where is it?

    For the third time, will you guarantee that all of my posts will go through?

     
  • At 1:04 PM, Blogger Joe G said…

    For the third time, will you guarantee that all of my posts will go through?

    Yes. That is up until the point you just keep regurgitating that which has been explained and/or refuted.

    CSI

    The coincidence of conceptual information and physical information where the conceptual information is both identifiable independently of the physical information and also complex.

    Question-

    Do YOU see anything about probabilities in that definition of CSI?

    Probabilities are to determine complexity only. That much is clear from reading any ID literature written by IDists.

     
  • At 1:06 PM, Blogger R0b said…

    Yes. That is up until the point you just keep regurgitating that which has been explained and/or refuted.

    Retract the qualifier and I'll continue the discussion. Or we could move it to ARN or Brainstorms, both of which are ID forums.

     
  • At 1:15 PM, Blogger Joe G said…

    I don't understand the "nature acting freely" part.

    Then perhaps you shouldn't be trying to argue against ID. And perhaps you shouldn't be so selective in reading my blog entries, as it has been explained.

    Read Nature, Design and Science by Del Ratzsch and get back to me. It is never a good thing to argue from ignorance.

    When you say nature, are you talking about natural vs. artificial or natural vs. supernatural?

    What nature can do vs. what agencies can do. Whether or not the agency is supernatural can only be determined by following the data. And in the end the origin of nature and the laws that govern it lie beyond nature.

    Is a planet that's constrained to its orbit by the laws of motion and gravity acting freely?

    In the sense it is not governed by some agency, yes.

    How do I go about determining whether something is acting freely or not?

    You start by employing tried and true design detection techniques. IOW we already have processes in place that enable/allow us to do just that! And guess what? We have found out that it makes a huge difference to an investigation whether that which is being investigated arose via an agency or via nature, operating freely.

    see also Counterflow

     
  • At 1:20 PM, Blogger Joe G said…

    Umm I checked and not one of your comments has ever been held back. Never. Not one. At least not by me. And no one else has any control except the guys who run this thing.

    Keep posting. I will post everything you have until we both agree that you just refuse to get it.

    Take your time to respond as I will be gone for several hours.

     
  • At 1:27 PM, Blogger R0b said…

    Keep posting. I will post everything you have until we both agree that you just refuse to get it.

    Why would I agree to that?

    Again, I need an unqualified guarantee that you won't hold back any of my posts. Or we need to move it to ARN or Brainstorms.

     
  • At 5:50 PM, Blogger Joe G said…

    Allow me to explain-

    When I say:

    I will post everything you have until we both agree that you just refuse to get it.

    It means we would BOTH have to agree that YOU just refuse to get it. As with an AND gate, BOTH inputs must be affirmative in order for the output to follow suit.

    IOW if only ONE of us holds that position then your posts will be posted.
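
    To spell out the AND-gate analogy (a minimal sketch, Python assumed; the variable names are illustrative):

        # An AND gate outputs 1 only when BOTH inputs are 1.
        for joe_agrees in (0, 1):
            for rob_agrees in (0, 1):
                print(joe_agrees, rob_agrees, joe_agrees and rob_agrees)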

    And if we BOTH agree that you just refuse to get it then what is the point of posting at all? Shirley* you must have something better to do?

    The bottom line is whether or not your posts show up is up to YOU and YOU alone. I just worded it in a way that would make you think about it.

    Now I remember- you are the one who thinks a flow chart can be used by starting with any decision box. Where's the flow? (Hey Moe, where's the flow? yuck, yuck)

    You have my guarantee that all of your posts will show up as long as you want them to.

    Just realize you are also going to have to produce- IOW you are not going to get to just challenge without providing something to reference.

    So please get started by reading the book by Del Ratzsch.


    * "Airplane!" reference

     
  • At 5:58 PM, Blogger Joe G said…

    BTW I hope by now you understand that IDists do not have an issue with humans being a conduit by which information can flow.

    Giuseppe Sermonti's book "Why is a Fly Not a Horse?" was translated into English by the Discovery Institute- that is they had it translated. It is endorsed by many top-level IDists. Conduits, get over it and learn how to use it and control it. "Feel the force Luke."

    IOW to say otherwise would be a strawman.

    Also seeing that Meyer had already covered aiguy's argument YEARS before it was made, any normal person would understand that aiguy has to address Meyer's point before claiming ID is refuted.

     
  • At 1:18 AM, Blogger R0b said…

    Umm I checked and not one of your comments has ever been held back.

    If I spaced it and forgot to hit "publish", then I sincerely apologize for falsely accusing you of censorship.

     
  • At 1:28 AM, Blogger R0b said…

    Okay, let's recap the questions you need to answer in order to substantiate some of your past assertions:

    1. Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal high or intermediate probability?

    2. Please provide a quote from Dembski showing that not everything that has a small probability is complex according to Dembski's usage of the term.

    3. Please provide a quote from Dembski indicating that a knowledge of designers' capabilities is necessary in order to infer design.

    4. Please provide evidence that I don't know that simplicity of description requires pre-existing knowledge.

    5. I said that the Caputo case exhibits specified complexity according to Dembski, and that you don't know that. Please provide evidence that this statement is a lie.

    6. Provide a quote from Dembski demonstrating that the Caputo court did not share Dembski's inference.

    7. Please tell me what's wrong with my pulsar analysis here (trivially modified to include the first 2 nodes of the EF).



    And since you've made more unsubstantiated claims recently, I'll add the following for starters:

    8. Please provide evidence that the statement "CSI is measured in bits" is wrong.

    9. Point to the strawman in aiguy's post at ARN.


    Which reminds me of something you never answered from way back when:

    10: How is this a strawman: "And by that, Joe means that everyone who claims to have accepted his challenge is, ipso facto, lying."?

    Well, that's an even 10. But don't worry, there are more.

     
  • At 9:03 AM, Blogger Joe G said…

    Wow, I take it that it is just way too difficult for secondclass to stay on topic. No surprise there seeing he got his ass kicked by trying to do so.

    1. Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal high or intermediate probability?

    Umm that is irrelevant. Also that is not how one approaches an investigation.

    Ya see first we make an observation- pulsars, for example. We notice that the signals are very regular. Nothing special about that at all.

    Then we notice that the signal is across all bands. Now if the discoverers of the signal had any brains at all they would contact a communication expert. Someone well versed in RF. That expert would tell them that there isn't anything that we know of that would broadcast over such a wide spectrum, plus the fact that there isn't anything special about the signal- who would design such a transmitter, which would take a good bit of knowledge to do, and only broadcast repeating blips?

    Then we take what we now know and apply that using the EF. Pulsars would not make it past the first node. There isn't anything complex about a signal that repeats on/ off.

    Now your turn:

    CSI

    The coincidence of conceptual information and physical information where the conceptual information is both identifiable independently of the physical information and also complex.


    Question-

    Do YOU see anything about probabilities in that definition of CSI?

    Probabilities are to determine complexity only. That much is clear from reading any ID literature written by IDists.

    Give and take. I answer one of yours and you answer one of mine.

    BTW your continued fixation with Dembski is duly noted. And as I have already told you ID is more than Dembski. IOW to say I have to answer your questions in terms of Dembski just exposes your warped sense of everything.

    Also I have explained your number 5 a few times. And I am sure I have explained a few others- number 10 for example. And I covered #9 IN THIS THREAD!

    If you didn't understand the explanations when I gave them why should I think you will understand them now?

     
  • At 9:09 AM, Blogger Joe G said…

    BTW if simplicity of description requires pre-existing knowledge (which it does) then YOUR point about it (simplicity of description) is MOOT.

    That is because with pre-existing knowledge anything can be described using ONE letter or one number.

    Have you ever eaten at a diner or Chinese restaurant?

    I bet that you will fail to understand that point.

     
  • At 9:32 AM, Blogger Joe G said…

    6. Provide a quote from Dembski demonstrating that the Caputo court did not share Dembski's inference.

    See pages 18-19 of The Design Inference:

    "The court therefore stopped short of charging Caputo with dishonesty." pg 19

     
  • At 10:00 AM, Blogger Joe G said…

    Next secondclass can provide the reference that shows that CSI is a measurement of probability only.

    That's two you need to answer.

    Good luck with that...

     
  • At 11:30 AM, Blogger R0b said…

    1. Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal high or intermediate probability?

    Umm that is irrelevant.


    The first node of the EF asks if the phenomenon in question is highly probable. The second node asks if it has intermediate probability. You have claimed several times that all nodes have to be traversed in order. Now you're saying that checking for high and intermediate probability is irrelevant?
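
    A minimal sketch of that node order (Python assumed; the numeric thresholds are illustrative only):

        def explanatory_filter(p, specified):
            # Node 1: highly probable under the chance hypothesis -> regularity/law
            if p > 0.5:
                return "regularity"
            # Node 2: intermediate probability -> chance
            # (illustrative cutoff; Dembski's universal bound is about 10^-150)
            if p > 1e-150:
                return "chance"
            # Node 3: small probability -> design if specified, chance otherwise
            return "design" if specified else "chance"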

     
  • At 11:35 AM, Blogger R0b said…

    Pulsars would not make it past the first node. There isn't anything complex about a signal that repeats on/ off.

    The first node asks if the phenomenon has a high probability. So I'll ask again: Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal highly probable?

     
  • At 11:39 AM, Blogger R0b said…

    BTW if simplicity of description requires pre-existing knowledge (which it does) then YOUR point about it (simplicity of description) is MOOT.

    That is because with pre-existing knowledge anything can be described using ONE letter or one number.


    Far from making the point moot, it serves as a good example. If your pre-existing knowledge allows you to specify something with a single letter or number, then the phenomenon is very highly specified. Simplicity of description means high specificity.

    Have you ever eaten at a diner or Chinese restaurant?

    I bet that you will fail to understand that point.


    I could guess at what you're getting at, but I would probably be wrong.

     
  • At 11:44 AM, Blogger R0b said…

    Also I have explained your number 5 a few times. And I am sure I have explained a few others- number 10 for example. And I covered #9 IN THIS THREAD!

    No you haven't. Where is the evidence that I was lying, per #5? Where did you explain how my assertion in #10 is a strawman? And what was aiguy's strawman in his ARN post?

     
  • At 11:45 AM, Blogger Joe G said…

    1. Under what chance hypothesis (identifiable when pulsar signals were first discovered) is a pulsar signal high or intermediate probability?

    Umm that is irrelevant.

    Am I going to have to explain EVERY FREAKIN' SENTENCE?!

    It is irrelevant because of your ill-conceived inclusion of "under what chance hypothesis". Any hypothesis would be based on the observation: the signal exists, this is how we observe it, and this is what it does.

    Science asks 3 basic questions:

    1. What’s there?
    The astronaut picking up rocks on the moon, the nuclear physicist bombarding atoms, the marine biologist describing a newly discovered species, the paleontologist digging in promising strata, are all seeking to find out, “What’s there?”

    2. How does it work?
    A geologist comparing the effects of time on moon rocks to the effects of time on earth rocks, the nuclear physicist observing the behavior of particles, the marine biologist observing whales swimming, and the paleontologist studying the locomotion of an extinct dinosaur are all asking, "How does it work?"

    3. How did it come to be this way?
    Each of these scientists tries to reconstruct the histories of their objects of study. Whether these objects are rocks, elementary particles, marine organisms, or fossils, scientists are asking, “How did it come to be this way?”

    This is where the EF would come in.

    The rest of my explanation in that post stands. However I am sure you won't/ didn't understand any of it. Or at least that is the game you're going to play- or perhaps you're not playing...

     
  • At 11:46 AM, Blogger Joe G said…

    BTW I will be gone for the rest of the afternoon. I will post all of your comments when I log on again...

     
  • At 12:08 PM, Blogger R0b said…

    Do YOU see anything about probabilities in that definition of CSI?

    Yes, the word "information". Information is a probability measure in Dembski's paradigm. Specifically, the amount of information in X is -log2(P(X)).

    CSI represents the probability of a given composite event that includes all probabilistic resources. It's a probability measure. Bits measure only probability, nothing else.

     
  • At 12:11 PM, Blogger R0b said…

    It is irrelevant because of your ill-conceived inclusion of "under what chance hypothesis". Any hypothesis would be based on the observation: the signal exists, this is how we observe it, and this is what it does.

    Now you've lost me. Are you saying that the probabilities in the EF are not based on chance hypotheses? Are you saying that the pulsar has a high probability because it was actually observed?

    The question still stands. You can't apply the EF without chance hypotheses.

     
  • At 10:02 PM, Blogger Joe G said…

    As I have already told you the EF is only as good as the people/ person using it.

    The following is how I would apply it:

    The first node deals with regularity or chance. (it has always been my point that pulsars would not get past that node)

    So the first question I would ask when I applied the EF (which I would only do if I thought that which is being observed may be of intelligent origin) is, is it regular? Which would be followed by- Are there any discernable differences? What are they?

    So I guess the bottom line is I would apply the EF much differently than you would/ have.

    I would also say that I am NOT beholden to some Dembski fixation in that I can take data gathered by other people not hindered by materialistic tunnel vision and funnel that into a concept that your Dembski fixation will not allow you to comprehend.

    For example when he states that "Complexity measures that are probability measures transformed by -log2 will henceforth be referred to as information measures", you mistakenly infer that to mean that information is a probability measure.

     
  • At 10:12 PM, Blogger Joe G said…

    Also I have explained your number 5 a few times. And I am sure I have explained a few others- number 10 for example. And I covered #9 IN THIS THREAD!

    No you haven't.

    It is already known that you have a reading problem.

    Where is the evidence that I was lying, per #5?

    I never said the Caputo sequence wasn't an example of SC.

    Where did you explain how my assertion in #10 is a strawman?

    I stated that not everyone who accepted my challenge is lying. I never implied anything of the kind. And I have still yet to see one (a response to the challenge) that has any substance. Bottaro focuses on the production company and some conspiracy. He never offers up any valid scientific explanation for the materialistic anti-ID scenario.

    And what was aiguy's strawman in his ARN post?

    He thinks that since computers can output CSI that somehow invalidates ID. That is the stuff of a babbling lunatic, IMHO.

     
  • At 10:22 PM, Blogger Joe G said…

    BTW if simplicity of description requires pre-existing knowledge (which it does) then YOUR point about it (simplicity of description) is MOOT.

    That is because with pre-existing knowledge anything can be described using ONE letter or one number.


    Far from making the point moot, it serves as a good example.

    Thanks. That pretty much means that I am, once again, correct.

    If your pre-existing knowledge allows you to specify something with a single letter or number, then the phenomenon is very highly specified.

    Obviously that point went right over your head too. If a royal flush has a higher specificity than a pair of threes, just because of its simplicity of description, and yet both can be described as a single pre-set letter, then the point is moot.

    A royal flush was the example given, right? And its value was also tied to the fact it had a higher simplicity of description than that of a pair of threes. But if a royal flush was designated R and a pair of threes was designated G then the value could not be determined by the simplicity of description.

     
  • At 12:17 AM, Blogger R0b said…

    I never said the Caputo sequence wasn't an example of SC.

    Yes, you did. Referring to the Caputo sequence, you said: "It isn't a complex sequence."

    Which would mean that it is not an instance of specified complexity.

    You then went on to say that design was inferred by its specified improbability, which you regard as different from specified complexity (although they both indicate design).

     
  • At 12:39 AM, Blogger R0b said…

    I stated that not everyone who accepted my challenge is lying. I never implied anything of the kind.

    Your original claim was that nobody had accepted your challenge. How does that not imply that those who claimed to have accepted it were lying?

    He thinks that since computers can output CSI that somehow invalidates ID. That is the stuff of a babbling lunatic, IMHO.

    Even if aiguy's argument were completely fallacious, that doesn't mean that it necessarily included a strawman. Where did aiguy attribute a position to IDers that they don't really hold?

    And contrary to your assertion at UD, aiguy was fully aware the CSI could be traced back to the designer, and he discussed the consequences of doing so.

     
  • At 12:50 AM, Blogger R0b said…

    The first node deals with regularity or chance. (it has always been my point that pulsars would not get past that node)

    So the first question I would ask when I applied the EF (which I would only do if I thought that which is being observed may be of intelligent origin) is, is it regular?


    According to the EF, the first question you're supposed to ask is whether it's highly probable. If it is, then you conclude that it's a product of regularity, i.e. fixed law. There is no node that says to ask whether the sequence itself is regular. If there were, then the Caputo sequence would have dropped out at that node. A sequence can be irregular and still be the product of fixed law, e.g. the output of a pseudorandom number generator.
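
    For instance (a minimal sketch, Python assumed):

        import random

        random.seed(42)                                   # fix the initial state
        first = [random.randint(0, 9) for _ in range(10)]
        random.seed(42)                                   # same initial state...
        second = [random.randint(0, 9) for _ in range(10)]
        assert first == second   # ...same irregular-looking output, produced by fixed law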

    The question of whether something is highly probable makes no sense without a hypothesis on which to base that probability. What's the hypothesis for the pulsar?

     
  • At 1:33 AM, Blogger R0b said…

    Obviously that point went right over your head too. If a royal flush has a higher specificity than a pair of threes, just because of its simplicity of description, and yet both can be described as a single pre-set letter, then the point is moot.

    I assume that we're still talking about the point that simplicity of description entails higher specificity. Why do you think that the truth or falsehood of this assertion is irrelevant?

    If you don't like the fact that specificity and simplicity are directly related, you should take it up with Dembski. He's the one who defined specificity as follows:
    σ = –log2[Phi_S(T)·P(T|H)]

    Phi_S(T) goes down as descriptions get simpler, which means that specificity goes up. A royal flush doesn't have higher specificity just because of its simpler description, but it's a factor.
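
    A minimal sketch of that relationship (Python assumed; the Phi_S values are made up for illustration):

        import math

        def specificity(phi_s, p):
            # Dembski's sigma = -log2(Phi_S(T) * P(T|H))
            return -math.log2(phi_s * p)

        p_royal = 4 / 2598960   # 4 royal flushes among the 2,598,960 five-card hands

        # A simpler description leaves fewer patterns ranked ahead of it,
        # so Phi_S(T) is smaller and sigma is larger.
        print(specificity(phi_s=1e5, p=p_royal))   # longer description
        print(specificity(phi_s=1e2, p=p_royal))   # simpler description -> higher sigma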

    A royal flush was the example given, right? And its value was also tied to the fact it had a higher simplicity of description than that of a pair of threes. But if a royal flush was designated R and a pair of threes was designated G then the value could not be determined by the simplicity of description.

    If they're so designated before the hands are dealt, then they will have equal specificational resources, but not equal specificity, according to Dembski. From his Specification paper:

    To see that the specificity so defined corresponds to our intuitions about specificity in general, think of the game of poker and consider the following three descriptions of poker hands: “single pair,” “full house,” and “royal flush.” If we think of these poker hands as patterns denoted respectively by T1, T2, and T3, then, given that they each have the same description length (i.e., two words for each), it makes sense to think of these patterns as associated with roughly equal specificational resources.

    Since specificity is inversely proportional to the log of specificational resources, specificity goes up when a description is reduced to a single word. So predesignating a royal flush and a pair of threes with one-word descriptions raises their specificity, but doesn't make them equally specified.

    Now what was your point that you think went over my head?

     
  • At 1:46 AM, Blogger R0b said…

    "The court therefore stopped short of charging Caputo with dishonesty."

    The court didn't convict Caputo, but they inferred design, according to Dembski:

    In the trial of Nicholas Caputo the New Jersey Supreme Court employed the Explanatory Filter, first rejecting a law explanation, then a chance explanation, and finally inferring a design explanation.

    If you think that they didn't infer design, then you must think that Dembski's reconstruction of their process as spelled out in NFL and elsewhere is wrong. Is that what you think?

     
  • At 1:52 AM, Blogger R0b said…

    For example when he states that "Complexity measures that are probability measures transformed by -log2 will henceforth be referred to as information measures", you mistakenly infer that to mean that information is a probability measure.

    Yes, but it's not an inference, it's explicit. Information measures are probability measures transformed by -log2. How is that a mistake?

     
  • At 7:31 AM, Blogger Joe G said…

    For example when he states that "Complexity measures that are probability measures transformed by -log2 will henceforth be referred to as information measures", you mistakenly infer that to mean that information is a probability measure.

    Yes, but it's not an inference, it's explicit.

    It is explicit. That is if you know how to read.

    Complexity measures that are probability measures

    Did you catch that? Do you understand it?

    Information measures are probability measures transformed by -log2. How is that a mistake?

    It's a mistake because it does not apply in all situations. It can only apply in situations in which complexity measures are probability measures.

    Also it is a given that information can be determined in the absence of any probability calculation. Again your fixation with Dembski causes you to miss the obvious.

     
  • At 7:36 AM, Blogger Joe G said…

    I never said the Caputo sequence wasn't an example of SC.

    Yes, you did. Referring to the Caputo sequence, you said: "It isn't a complex sequence."

    Provide the reference. Ya see it isn't the "Caputo sequence" unless it is explicitly specified. And we both know that all you did was to put down a series of Ds with one R placed in the sequence. A series of Ds with one R is meaningless unless the context is provided.

     
  • At 7:39 AM, Blogger Joe G said…

    I stated that not everyone who accepted my challenge is lying. I never implied anything of the kind.

    Your original claim was that nobody had accepted your challenge. How does that not imply that those who claimed to have accepted it were lying?

    Did you read any one of the alleged accepted challenges? The entries I read did not even address it. Therefore the challenge remains untouched, which is just as good as being unaccepted. That is, no one has mounted a serious challenge to my challenge.

     
  • At 7:44 AM, Blogger Joe G said…

    He thinks that since computers can output CSI that somehow invalidates ID. That is the stuff of a babbling lunatic, IMHO.

    Even if aiguy's argument were completely fallacious, that doesn't mean that it necessarily included a strawman.

    A strawman is a fallacious argument.

    Where did aiguy attribute a position to IDers that they don't really hold?

    Saying that since CSI can be generated by computers, ID is refuted:

    In the end, all of ID's claims about detecting intelligent causation that transcends fixed law and chance crumble into dust, for the obvious reason that computers can easily pass ID's tests for intelligent agency.

    It is also a strawman to say that computers operate via fixed law and chance.

    And contrary to your assertion at UD, aiguy was fully aware the CSI could be traced back to the designer, and he discussed the consequences of doing so.

    And again I showed that he is wrong. IOW his whole argument is based on ID ignorance.

    strawman- a weak or sham argument set up to be easily refuted.

    aiguy set up a weak argument and used it to try to refute ID.

     
  • At 7:57 AM, Blogger Joe G said…

    According to the EF, the first question you're supposed to ask is whether it's highly probable.

    It depends which EF you are using.

    Page 182 of "Signs of Intelligence" says to ask "Is it contingent?" at the first node. See also page 13 of NFL. On page 12 Dembski says that we ask "are we going to attribute it to necessity, chance or design?" and in TDI he uses regularity, chance and agency; see page 11.

    So the bottom line is you are so fixated with Dembski you have tunnel vision.

    So again I would observe that the signal is regular and powerful enough to bleed across the EM band.

    In my mind design is already ruled out. That is because of what I know about transmitters and receivers.

    Ya see to me it would be a waste for someone to build a transmitter to powerfully broadcast over the entire EM spectrum and then only have it transmit a simple repeating pattern. However I do know of celestial bodies that have powerful magnetic fields.

     
  • At 8:01 AM, Blogger Joe G said…

    Page 12 of NFL:

    "One concern is that the filter assigns merely improbable events to design. But this is clearly not the case. In addition to complexity or improbability, the filter needs to assess specification before attributing design."

    That refutes your premise that information is a probability measure.

     
  • At 8:12 AM, Blogger Joe G said…

    If you don't like the fact that specificity and simplicity are directly related, you should take it up with Dembski.

    I don't have to take anything up with Dembski. That is because I know you are bastardizing his ideas. Heck you can't even read a simple sentence without making up some whacked inference about it.

    What part about the following don't you understand?

    A royal flush was the example given, right? And its value was also tied to the fact it had a higher simplicity of description than that of a pair of threes. But if a royal flush was designated R and a pair of threes was designated G then the value could not be determined by the simplicity of description.

     
  • At 8:21 AM, Blogger Joe G said…

    Caputo was acquitted 6-0. The court only suggested he change the way he picked.

    I do not have a Dembski fixation and I will not get caught up in yours.

     
  • At 8:49 AM, Blogger Joe G said…

    Here is another strawman by aiguy:

    "2) ID, and religious traditions in general, teach that human beings were designed by God/Designer. To claim that we humans are necessarily intelligent even though we were designed, then turn around and say computers are not intelligent because they are designed, is blatantly inconsistent. Why don't you credit the Designer with His manufacturing, programming, and debugging ability, instead of humans?"

    First he tries to tie ID with religious traditions- strawman. Then he says that ID claims that humans were designed by God/Designer. That is another strawman as ID does NOT make such a claim.

    And in the end to say that computers run via fixed law and chance is a strawman. It would only be correct if computers arose via fixed law and chance. And we know that isn't so.

     
  • At 1:24 PM, Blogger R0b said…

    It's a mistake because it does not apply in all situations. It can only apply in situations in which complexity measures are probability measures.

    When Dembski talks about descriptional complexity, obviously probability doesn't apply. But the fact remains that when Dembski says information measures, he's referring to complexity measures that are probability measures. IOW, his definition of "information measures" refers to probability measures. He never says that it could refer to something else.

    Also it is a given that information can be determined in the absence of any probability calculation.

    We're talking specifically about Dembski's definition of information, as referenced in his concept of CSI. That particular definition of information always entails a probability calculation.

    Again your fixation with Dembski causes you to miss the obvious.

    We can't discuss CSI without discussing Dembski's definition of it. He developed the concept by himself.

    Provide the reference.

    Are you doubting that you said it? Most people would use Google to find their own quotes on their own blog, but if you want me to find it for you, here it is.

    Ya see it isn't the "Caputo sequence" unless it is explicitly specified.

    Sure it is. High algorithmic compression entails a specification, according to Dembski. Regardless, we were talking specifically about the Caputo sequence, referred to by name, and you said it wasn't complex.

    Did you read any one of the alleged accepted challenges?

    Yes.

    The entries I read did not even address it. Therefore the challenge remains untouched, which is just as good as being unaccepted. That is, no one has mounted a serious challenge to my challenge.

    So when you said that nobody had accepted your challenge, what you meant was that the responses to your challenge were not, in your opinion, serious. Got it.

    A strawman is a fallacious argument.

    Yes, but not all fallacious arguments are strawmen. Even if you were correct in saying that aiguy's argument is fallacious, it still wouldn't be a strawman.

    I asked: Where did aiguy attribute a position to IDers that they don't really hold?

    You answered: Saying that since CSI can be generated by computers, ID is refuted

    Where did aiguy attribute that position to IDers? If he did, he would be saying that IDers believe that ID is refuted.

    It is also a strawman to say that computers operate via fixed law and chance.

    No, in order to be a strawman, aiguy would have had to claim that IDers say that.

    You said: And contrary to your assertion at UD, aiguy was fully aware the CSI could be traced back to the designer, and he discussed the consequences of doing so.

    I answered: And again I showed that he is wrong. IOW his whole argument is based on ID ignorance.

    Do you mean he's wrong that CSI can be traced back to the designer, or he's wrong about the consequences? What you said was that aiguy is "clueless to the fact that the CSI generated by the computer can be traced back to its designers", when, in fact, he explicitly acknowledged that CSI could be traced back to the designer.

    strawman- a weak or sham argument set up to be easily refuted.

    aiguy set up a weak argument and used it to try to refute ID


    There is a difference between setting up an intentionally weak argument in order to refute that same weak argument, and setting up an unintentionally weak argument in order to refute something else.

    It depends which EF you are using.

    Page 182 of "Signs of Intelligence" says to ask "Is it contingent?" at the first node. See also page 13 of NFL. On page 12 Dembski says that we ask "are we going to attribute it to necessity, chance or design?" and in TDI he uses regularity, chance and agency; see page 11.


    Exactly. If an event is non-contingent, then it has a probability of one or very close to one. In other words, it has a high probability. If we attribute an event to necessity, then we're likewise saying that it has a high probability. No matter how you slice it, the first node asks whether the event has a high probability. The probability is based on a chance hypothesis. What's your chance hypothesis?

    So the bottom line is you are so fixated with Dembski you have tunnel vision.

    The EF is defined by Dembski. How can we talk about it without talking about how he defined it?

    In my mind design is already ruled out.

    Your original claim was: "The properly applied DEF would have not allowed design to be the initial inference." The first step in the EF is to check for high probability based on a chance hypothesis. What's your chance hypothesis?

    "One concern is that the filter assigns merely improbable events to design. But this is clearly not the case. In addition to complexity or improbability, the filter needs to assess specification before attributing design."

    That refutes your premise that information is a probability measure.


    The quote says nothing about information. Information is measured in bits, which are units of probability transformed by -log2. In TDI, the probability in question is that of the saturated event: P(D*_omega | H). In his Specification paper, he defines specified complexity as –log2[M·N·Phi_S(T)·P(T|H)], which is an upper bound on P(D*_omega | H). Specificity comes into play when calculating the probability, which means that unspecified events will not result in a low saturated probability, thus Dembski's quote above. The fact remains that CSI is a probability, specifically that of the saturated event, transformed by -log2.
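
    A minimal sketch of that calculation (Python assumed; M, N and Phi_S are hypothetical numbers standing in for the probabilistic resources):

        import math

        def specified_complexity(m, n, phi_s, p):
            # Dembski-style bound: -log2(M * N * Phi_S(T) * P(T|H)).
            # Per TDI's "magic number" of one-half, design is inferred when the
            # saturated probability falls below 1/2, i.e. when this exceeds 1 bit.
            return -math.log2(m * n * phi_s * p)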

    What part about the following don't you understand?

    A royal flush was the example given, right? And its value was also tied to the fact it had a higher simplicity of description than that of a pair of threes. But if a royal flush was designated R and a pair of threes was designated G then the value could not be determined by the simplicity of description.


    I understood it and responded. Again, if we predesignate events with one-word descriptions, we increase the specificity of the event. Dembski defines specificity as –log2[Phi_S(T)·P(T|H)], where Phi_S(T) is the specificational resources. If we simplify the description of an event (before it occurs, of course), Phi_S(T) decreases, which means that specificity increases. According to Dembski's definition, specificity is always directly correlated with simplicity of description.

    Caputo was acquitted 6-0. The court only suggested he change the way he picked.

    I do not have a Dembski fixation and I will not get caught up in yours.


    I'm not following your point. Are you saying that Dembski's reconstruction was wrong?

    Then he says that ID claims that humans were designed by God/Designer. That is another strawman as ID does nOT make such a claim.

    I take "God/Designer" to mean "God or Designer", which would seem to accurately reflect the position most IDers (who say that the designer could be God). If you interpret it to mean that the designer is necessarily God, then I agree that it's a strawman.

    And in the end to say that computers run via fixed law and chance is a strawman.

    Again, in order to be a strawman, aiguy would have had to claim that IDers say that.

    It would only be correct if computers arose via fixed law and chance. And we know that isn't so.

    I see a difference between operating via fixed law and chance and arising via fixed law and chance. Again, computers operate deterministically except for rare glitches. Whether they (including their OS and applications) arise via fixed law and chance depends on whether the human design process is considered to be one of fixed law and chance. I know of no evidence that the human design process is not one of fixed law and chance.

     
  • At 3:01 PM, Blogger R0b said…

    Just to elaborate on determinism, a computer's future states depend only on its current state and the laws of physics. Likewise, the future states of a system of celestial bodies depend only on its current state and the laws of physics. That's what is meant by operating deterministically.

    Computer programs are written by humans, for the most part. In writing a program and hitting "execute", we're setting up the initial state. From that point, the course that the program will run is fully determined by the laws of physics.
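
    To make that concrete (a toy sketch, Python assumed; the update rule stands in for "current state plus the laws of physics"):

        def step(state):
            # The next state is a fixed function of the current state alone.
            return (1103515245 * state + 12345) % 2**31

        state = 2007          # "hitting execute" fixes the initial state
        for _ in range(5):
            state = step(state)
            print(state)      # the same five numbers on every run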

     
  • At 4:01 PM, Blogger R0b said…

    Also, to see that Dembski's design inference consists of calculating P(D*_omega|H) and comparing it to what he calls the "magic number", one-half, see the first three sections of chapter 6 in TDI.

     
  • At 5:59 PM, Blogger Joe G said…

    Just to elaborate on determinism, a computer's future states depend only on its current state and the laws of physics.

    But even those laws of physics can be traced back to a mind- ie agency. However I know the future state of my computer depends on whether or not it is turned on. And then it depends on what programs are run/ running.

    Computer programs are written by humans, for the most part. In writing a program and hitting "execute", we're setting up the initial state. From that point, the course that the program will run is fully determined by the laws of physics.

    No, it is fully determined by the program along with any allowed inputs. Physics may explain the flow of electrons but that flow is fully determined by the design of the hardware coupled with the design of the software.

    And in the end to say that computers run via fixed law and chance is a strawman.

    Again, in order to be a strawman, aiguy would have had to claim that IDers say that.

    It is a strawman because it is false and he is using it to set up his argument.

    It would only be correct if computers arose via fixed law and chance. And we know that isn't so.

    I see a difference between operating via fixed law and chance and arising via fixed law and chance.

    I don't.

    Again, computers operate deterministically except for rare glitches.

    Computer operation is determined by their hardware and software. A healthy power source helps.

    Whether they (including their OS and applications) arise via fixed law and chance depends on whether the human design process is considered to be one of fixed law and chance. I know of no evidence that the human design process is not one of fixed law and chance.

    My apologies but that is just plain nuts. If human design is nothing more than "fixed law and chance" then why have archaeology? Why have forensics?

    If everything is "fixed law and chance" then why do we even care about those things?

    Then he says that ID claims that humans were designed by God/Designer. That is another strawman as ID does NOT make such a claim.

    I take "God/Designer" to mean "God or Designer", which would seem to accurately reflect the position most IDers (who say that the designer could be God). If you interpret it to mean that the designer is necessarily God, then I agree that it's a strawman.

    It is a strawman because ID does NOT claim that humans were designed by Designer. If that is what the data points to then ID would accept it.

     
  • At 6:35 PM, Blogger Joe G said…

    We can't discuss CSI without discussing Dembski's definition of it. He developed the concept by himself.

    Even if that was so, and I am not sure that it is, I have provided his definition of CSI.


    So when you said that nobody had accepted your challenge, what you meant was that the responses to your challenge were not, in your opinion, serious.

    In reality they, the responses I read, didn't address the challenge. However there could be some somewhere that I haven't read.

    I am here.

    The Caputo sequence was as I thought. You just posted the sequence without referencing it. And then you even provided a link to substantiate it.

    Now I admit I originally claimed it was not detachable but then I realized that in my haste to look up an old reference, I was mistaken. I explained this in that thread.

    So the bottom line is you are so fixated with Dembski you have tunnel vision.

    The EF is defined by Dembski.

    Really? Even HE calls it SOP- standard operating procedure.

    How can we talk about it without talking about how he defined it?

    There is a difference between what you are doing and then applying the EF. One can apply the EF without any knowledge of Dembski.

    The first decision box asks if we can attribute X to regularity, necessity, law OR not?

    Again and I can't say this enough- the EF is only as good as the person/ people using it. An anal-retentive, tunnel-visioned, ignorant twit has no business trying to use it.

    The first step in the EF is to check for high probability based on a chance hypothesis.

    The good thing is that YOU don't get to tell me how I have to proceed. And I happen to think your "suggestion" is nonsense.

    The first node should be approached exactly how I just explained it. You ask as many questions and do as much as possible to satisfactorily answer it.

    Then I would do just as explained.

    Do you mean he's wrong that CSI can be traced back to the designer, or he's wrong about the consequences?

    What consequences? You mean like the fact I have already stated that it is OK if we are just conduits?

    Or are there any other strawman consequences I should be aware of?

    It is also obvious that aiguy has a hang-up with the word intelligence. The way IDists use it, intelligence is that which can cause counterflow. IOW it has to do with that nature, operating freely thingy that you refuse to understand.

    6. Provide a quote from Dembski demonstrating that the Caputo court did not share Dembski's inference.

    The court acquitted him 6-0 and Dembski found him guilty. That is the end of it- that is as far as I will go in substantiating my claim on the matter.

    According to Dembski's definition, specificity is always directly correlated with simplicity of description.

    It could but as I demonstrated it doesn't have to. It all depends on predeterminations.

     
  • At 7:07 PM, Blogger R0b said…

    But even those laws of physics can be traced back to a mind- ie agency.

    An interesting position. So all natural processes, including planetary trajectories, chemical reactions, etc. can be traced back to agency.

    However I know the future state of my computer depends on whether or not it is turned on. And then it depends on what programs are run/ running.

    The power-on status of a computer, the programs and data that are loaded in memory, and the registers, including the instruction pointer, are all part of the current state. Since the future state can be predicted solely from the current state and the laws of physics, execution is deterministic.

    It is a strawman because it is false and he is using it to set up his argument.

    No it isn't. You don't understand the meaning of the word strawman.

    My apologies but that is just plain nuts.

    Apologies accepted. Google "compatibilism".

    If human design is nothing more than "fixed law and chance" then why have archaeology? Why have forensics?

    Why not? If chemistry is reducible to physics, why do we study chemistry?

    And if physical laws are artifacts of some unknown agency, why have physicists? Shouldn't we leave the studying of artifacts to the archaeologists?

    It is a strawman because ID does NOT claim that humans were designed by Designer.

    Are you saying that ID does not claim that humans are necessarily designed, or that they do not claim that the design was necessarily done by a designer?

     
  • At 8:00 PM, Blogger R0b said…

    Even if that was so, and I am not sure that it is, I have provided his definition of CSI.

    The definition you provided entailed a probability calculation. There's no way around it.

    The Caputo sequence was as I thought. You just posted the sequence without referencing it. And then you even provided a link to substantiate it.

    Now I admit I originally claimed it was not detachable but then I realized that in my haste to look up an old reference, I was mistaken. I explained this in that thread.


    After we got past the confusion over detachability, and after it was clear that we were talking about the Caputo sequence (I referred to it as such in the snippet that you were responding to), you still said it wasn't complex.

    Really? Even HE calls it SOP- standard operating procedure.

    Yes, and whether that's true is another question. The fact remains that Dembski invented the term "Explanatory Filter", and he alone defined it to refer to a series of steps. The first step is to ask whether the event has high probability under a chance hypothesis. What's your chance hypothesis for the pulsar? I'll keep asking until you answer it.

    One can apply the EF without any knowledge of Dembski.

    Certainly. But we don't know that we're applying it unless we compare our process to Dembski's definition.

    The first decision box asks if we can attribute X to regularity, necessity, law OR not?

    And to attribute it to any of those things, it has to have a high probability under our chance hypothesis. What's the chance hypothesis for the pulsar?

    The good thing is that YOU don't get to tell me how I have to proceed. And I happen to think your "suggestion" is nonsense.

    It's not my suggestion, it's Dembski's definition of the EF. And I agree that the EF is nonsense.

    The first node should be approached exactly how I just explained it.

    Ah, argument by fiat.

    You ask as many questions as necessary and do as much as possible to satisfactorily answer it.

    Absolutely. And the only way to answer it is to determine the probability under a chance hypothesis. What is the chance hypothesis for the pulsar?

    An anal-retentive, tunnel-visioned, ignorant twit has no business trying to use it.

    Since everything I've said comes straight from Dembski, you must be referring to Dembski.

    What consequences? You mean like the fact I have already stated that it is OK if we are just conduits?

    The consequences, as aiguy said in the post that you claimed to have read, are (1) conduits are falsely labeled intelligent by ID's method, and (2) anything that seems intelligent could actually be an unintelligent conduit.

    And I notice you've swept under the rug the fact that your original accusation against aiguy was clearly false.

    It is also obvious that aiguy has a hang-up with the word intelligence. The way IDists use it, intelligence is that which can cause counterflow. IOW it has to do with that nature, operating freely thingy that you refuse to understand.

    I've talked with IDists for years, and very rarely has the term "counterflow" come up. What's more, I've read physics textbooks and primary literature, and never seen the term. Why is that? Why hasn't Ratzsch submitted his work to science journals? Why should I pay any attention to a philosopher at Calvin College who has had zero impact on science?

    The court acquitted him 6-0 and Dembski found him guilty.

    So Dembski was wrong. And since the Caputo case reconstruction is the only example that comes anywhere close to being a full application of the Generic Chance Elimination Argument (which is a formalization of the EF), this means that there is no semi-formal example of Dembski's method anywhere.
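
    For the record, the chance calculation at the core of that reconstruction is easy to reproduce; the 40-of-41 figure below is the one from the published accounts of the Caputo case:

        # Chance hypothesis in the Caputo reconstruction: a fair 50/50 draw
        # for the top ballot line, 41 drawings, with Democrats on top in
        # at least 40 of them.
        from math import comb

        p = (comb(41, 40) + comb(41, 41)) / 2**41
        print(p)  # ~1.9e-11, roughly 1 in 50 billion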

    It could, but as I demonstrated, it doesn't have to. It all depends on predeterminations.

    Again, here is Dembski's definition of specificity: σ = –log₂[φ_S(T)·P(T|H)]. If you think that specificity and simplicity are not correlated, then you're not using Dembski's definition of specificity. Since Dembski's definition of specified complexity is based on the above definition, you're apparently using your own definition of SC also.
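
    Plugging numbers into that definition makes the correlation plain; both values below are hypothetical, chosen only for illustration:

        # Dembski's specificity: sigma = -log2[ phi_S(T) * P(T|H) ].
        # phi_S(T) counts patterns at least as simple to describe as T, so
        # a simpler description means a smaller phi_S(T) and a larger sigma.
        from math import log2

        phi_S = 10**5            # hypothetical descriptive-simplicity rank of T
        P_T_given_H = 2.0**-100  # hypothetical chance probability of T under H
        sigma = -log2(phi_S * P_T_given_H)
        print(sigma)             # ~83.4 bits with these assumed values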

     
  • At 7:44 AM, Blogger Joe G said…

    Do you agree that CSI is a probability measure?

    Only the C in CSI can be determined via probability. IOW C is a probability measure.

    Even the EF demonstrates that CSI is more than probability.

    "One concern is that the filter assigns merely improbable events to design. But this is clearly not the case. In addition to complexity or improbability, the filter needs to assess specification before attributing design." page 12 NFL

    That refutes your premise that CSI is a probability measure, as to determine CSI is to determine design.
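
    A rough sketch of the filter's three decision nodes, for anyone following along- the HIGH cutoff below is an assumption for illustration, while the SMALL cutoff uses Dembski's universal probability bound of 1 in 10^150:

        # Rough sketch of the Explanatory Filter's three decision nodes.
        HIGH = 0.5      # assumed cutoff for "attribute to regularity/law"
        SMALL = 1e-150  # Dembski's universal probability bound

        def explanatory_filter(p_chance: float, specified: bool) -> str:
            if p_chance >= HIGH:
                return "regularity/law"  # node 1: high probability
            if p_chance >= SMALL:
                return "chance"          # node 2: intermediate probability
            # node 3: small probability- design only if also specified
            return "design" if specified else "chance"

        print(explanatory_filter(1e-160, specified=True))   # design
        print(explanatory_filter(1e-160, specified=False))  # chance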

     
  • At 8:03 AM, Blogger Joe G said…

    But even those laws of physics can be traced back to a mind- ie agency.

    An interesting position.

    I don't know how interesting it is, but I know that has been the position for some centuries.

    However I know the future state of my computer depends on whether or not it is turned on. And then it depends on what programs are run/ running.

    The power-on status of a computer, the programs and data that are loaded in memory, and the registers, including the instruction pointer, are all part of the current state.


    Since the future state can be predicted solely from the current state and the laws of physics, execution is deterministic.

    A computer's future state does not depend on its current state. If its current state is off, that does not mean its future state has to also be off.

    A computer's future state is determined by its power source, its hardware and its software. I know. I used to design and program them.

    To say otherwise just exposes your ignorance.

    It is a strawman because it is false and he is using it to set up his argument.

    No it isn't. You don't understand the meaning of the word strawman.

    I posted the meaning. You must be having that reading comprehension issue again.

    If human design is nothing more than "fixed law and chance" then why have archaeology? Why have forensics?

    Why not?

    Lol! Was that supposed to be a rebuttal? If everything is nothing more than "fixed law and chance" then no one should be able to determine what is an artifact. That is because to make that determination one has to know what fixed law and chance are capable of, coupled with knowledge of what agencies are capable of.

    If chemistry is reducible to physics, why do we study chemistry?

    Ask the chemists. Perhaps it isn't reducible to physics. Or perhaps chemistry is just a branch, a specified branch, of physics.

    And if physical laws are artifacts of some unknown agency, why have physicists?

    To study them.

    Shouldn't we leave the studying of artifacts to the archaeologists?

    Archaeologists are only good at studying Earth-bound objects.

    It is a strawman because ID does NOT claim that humans were designed by a Designer.

    Are you saying that ID does not claim that humans are necessarily designed,

    No one knows exactly what was designed. That is why we need science- to help us answer questions like that. IOW ID does not say that humans were directly designed.

     
  • At 8:22 AM, Blogger Joe G said…

    Really? Even HE calls it SOP- standard operating procedure.

    Yes, and whether that's true is another question.

    You had better find an answer.

    The fact remains that Dembski invented the term "Explanatory Filter", and he alone defined it to refer to a series of steps.

    I doubt you are correct.

    The first step is to ask whether the event has high probability under a chance hypothesis.

    That is false.

    What's your chance hypothesis for the pulsar? I'll keep asking until you answer it.

    Ask all you want. All you are doing is exposing your willful ignorance.

    One can apply the EF without any knowledge of Dembski.

    Certainly. But we don't know that we're applying it unless we compare our process to Dembski's definition.

    You might not know but that is only because you have a Dembski fixation.


    The first decision box asks whether we can attribute X to regularity, necessity, or law- OR not.

    And to attribute it to any of those things, it has to have a high probability under our chance hypothesis.

    That could be true but that does not mean that is how one has to proceed.

    One usually has to ask many questions to get the answer.

    The good thing is that YOU don't get to tell me how I have to proceed. And I happen to think your "suggestion" is nonsense.

    It's not my suggestion, it's Dembski's definition of the EF.

    Nope, it is your suggestion and yours alone.

    And I agree that the EF is nonsense.

    The EF is the best process we have for distinguishing design from non-design without being biased toward design.

    IOW the EF is exactly as I stated- only as good as the person/ people using it. And in your hands the EF turns into nonsense.

    What consequences? You mean like the fact I have already stated that it is OK if we are just conduits?

    The consequences, as aiguy said in the post that you claimed to have read, are (1) conduits are falsely labeled intelligent by ID's method,

    That is just an unsubstantiated assertion.

    and (2) anything that seems intelligent could actually be an unintelligent conduit.

    How does he know that conduits are unintelligent?

    And I notice you've swept under the rug the fact that your original accusation against aiguy was clearly false.

    Seeing as you have been wrong so far, what is the accusation that I allegedly swept under the rug?

    It is also obvious that aiguy has a hang-up with the word intelligence. The way IDists use it, intelligence is that which can cause counterflow. IOW it has to do with that nature, operating freely thingy that you refuse to understand.

    I've talked with IDists for years, and very rarely has the term "counterflow" come up.

    That is hardly a refutation.

    What's more, I've read physics textbooks and primary literature, and never seen the term.

    Why would you think that term would be in a physics book?

    Why should I pay any attention to a philosopher at Calvin College who has had zero impact on science?

    Because he explains what is being debated. And by reading his book you wouldn't be arguing from ignorance.

    The court acquitted him 6-0 and Dembski found him guilty.

    So Dembski was wrong.

    One does not follow from the other. The judges could have been wrong.

    All I had to do was:

    6. Provide a quote from Dembski demonstrating that the Caputo court did not share Dembski's inference.

    Been there, done that. I have nothing more to say about that point.

     
  • At 9:05 AM, Blogger Joe G said…

    aiguy has a hang-up on the word "intelligence". He needs to read the following:

    Explaining the "I" in "ID" - Again

    Also Del Ratzsch is a philosopher of science. Philosophers of science set up the rules by which science plays.

     
  • At 11:03 AM, Blogger Joe G said…

    "Complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple formula or algorithm. A specification, on the other hand, is a match or correspondence between an event or object and an independently given pattern or set of functional requirements."-- Stephen C. Meyer in Evidence for Design in Physics and Biology: From the Origin of the Universe to the Origin of Life

    That was an essay built on Wm Dembski's previous essay in "Science and Evidence for Design in the Universe", The Proceedings of the Wethersfield Institute (1999)- Behe, Dembski & Meyer.

    That is why I know the EF was NOT properly applied. Pulsar signals are very regular and there isn't anything improbable about the sequence. It is just a repetition.

    NFL page 15:

    "For a pattern to count as a specification, the important thing is not when it was identified but whether in a certain well-defined sense it is independent of the event it describes."

    signal present -> signal gone -> signal present -> signal gone

    We named them "pulsars" because that describes the event...

    The EF is not a rush to an inference. Each node (decision block) in the filter requires rigorous scientific investigation.

    The design inference depends on us knowing and understanding what designing agencies are capable of coupled with us knowing and understanding what nature, operating freely, is capable of.

    Everything that is complex has a small probability- just like all widgets are gadgets. However, not everything that has a small probability is complex- just like not all gadgets are widgets.
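
    A concrete illustration of that distinction, under an assumed fair-coin chance hypothesis:

        # Under a fair-coin hypothesis every specific 100-bit sequence has
        # the same tiny probability, 2^-100. Only the irregular one "defies
        # expression by a simple formula"; the repetitive one compresses to
        # a one-line rule, like a pulsar's signal.
        import random

        n = 100
        repetitive = "10" * (n // 2)  # improbable, but simple: "repeat '10'"
        irregular = "".join(random.choice("01") for _ in range(n))
        p_each = 2.0**-n              # identical chance probability for both
        print(p_each)                 # ~7.9e-31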

     
  • At 1:25 PM, Blogger R0b said…

    Me: The first step is to ask whether the event has high probability under a chance hypothesis.

    Joe: That is false.

    That pretty much says it all. How do you reason with someone who says that black is white? Answer: You don't.

     
  • At 2:21 PM, Blogger Joe G said…

    That pretty much says it all.

    Yes it does. It says that you refuse to deal with reality and you get upset when others do.

    And I have always said there isn't any reasoning with you. I tried, and you just refuse to get it- which is what I said.

    All you are doing is regurgitating that which has already been explained and/or refuted.

    That you want people to believe I am the issue is laughable. You have tunnel vision brought on by your Dembski fixation.

    I apologize that I couldn't help you get over it.

     
  • At 1:59 PM, Blogger Dazza McTrazza said…

    IOW he is saying that since computers can produce CSI, ID is falsified because he alledges

    And which incredible definition of "alledges" am I missing?

     
  • At 3:17 PM, Blogger Joe G said…

    And which incredible definition of "alledges" am I missing?

    Is this it? Is picking on alledged spelling errors the best you have?

    Oh no, that's right. You can also jump to incorrect conclusions.

    You must be very proud of yourself.

    Get a life...

     
