Report on the Annual Meeting

Well, the first annual meeting (or as some, inspired by thermodynamics, have called it, the “zeroth annual meeting”) has come and gone. Overall I think everyone there had a good time. There were about 35 participants from 10 states, ranging from students to emeritus faculty. Here is my summary of the events:

The debate on Friday night with Martie Poenie and Jed Macosko was indeed very “friendly”. Martie started with a presentation of three lines of evidence for historical continuity: the connection of the bacterial flagellum to the secretion system and to simpler versions of the flagellum; the presence of noncoding (“junk”) DNA, in particular evidence of virus “uploads” into the human genome which have been “silenced”, as well as what appear to be virus uploads that are now active and essential parts of the genome; and the evidence of fusion of two chromosomes in humans. Jed then gave a talk which started with a survey of the wonderfully designed-looking molecular machines in the cell; he then summarized numerical work arguing that the time needed for three to six neutral mutations to become probable may be longer than the number of generations of living things in the whole history of the earth, while these machines need hundreds to thousands of mutations. He also summarized the arguments of Behe and others that most beneficial mutations involve breaking or disabling things for defensive purposes, not making new things.

The two then gave ten-minute responses to each other, followed by an extended Q&A time. My (opinionated) summary of the discussion: 1) Martie argued strongly that point mutations are not the whole story and there are many other ways to have mutations, but for me and some others this was unpersuasive, because these other mechanisms just add new ways to mutate; the question still remains how likely it is that random changes of the genome, by any mechanism, lead to positive changes. Other mechanisms of mutation just increase the overall mutation rate, and we still have to ask what fraction of mutations are likely to be beneficial.

2) Jed conceded that the homologies (similarities of gene patterns) are persuasive evidence of common descent in at least some cases, but argued that the lack of a mechanism to make beneficial changes involving more than a few mutations likely was even stronger evidence against common descent. In my opinion, the two arguments are not mutually exclusive. One could imagine a scenario in which there is a continuous series of extremely unlikely events leading to change. In this scenario there would be many homologies between species, but the changes from one thing to another would involve highly unlikely (but God-directed) events such as virus uploads of new, useful information. Thus, random, undirected common descent could be very unlikely, but common descent could still have happened by a series of essentially miraculous violations of the laws of probability. For example, as Martie discussed, placental animals seem to have acquired the placenta through not one but a whole collection of completely different virus uploads of new information. What is the likelihood that even one virus would carry new information needed to make a uterus, much less ten or so independent different viruses?

Neither speaker addressed arguments I have heard in the past that the homologies don’t all work out to give a single evolutionary tree. Comparing two species, one can often find parts that are similar, but when many species are compared, one often finds that one species will be similar to another in one area, but similar to a third in a different area, and that one similar to a fourth in yet a different area, so that no consistent line of descent can be drawn. I’d like to see a talk on this topic at next year’s meeting. Perhaps some know of papers already out there which address this. Also, neither speaker addressed the question of whether noncoding DNA plays a role in the spatial structure of the DNA, another theory which is out there.

On Saturday morning, Casey Luskin gave an impassioned and well-documented presentation that there really are bad people out there who play hardball with people sympathetic to ID, or even with theistic evolutionists perceived to be sympathetic to ID. Some words of advice from his talk: if someone says something discriminatory to you, get it in writing by sending them an email summarizing what they have said to you (in a polite and nonconfrontational way). Create a paper trail. And don’t put anything in writing yourself that you would not like to see on the front page of the New York Times; only say what you will stick by.

I gave a talk at 1:30 with some handwaving and controversial statements on quantum mechanics. Some may not agree with my bias that the randomness of quantum mechanics is not fundamentally different, from a theological/philosophical perspective, from classical randomness, and doesn’t really help with the freedom of the will problem. Is it better to have my actions determined by random particles than by deterministic particles?

Gary Patterson then gave a talk on why entropy is our friend, and noted the prevalence of negative talk about entropy and the Second Law not only by young-earth creationists but also by prominent evangelical theologians. Since entropy is a count of the number of possibilities available to a system, it can be viewed as part of God’s design to allow many possibilities, instead of a single boring monad.

Everyone enjoyed David Bossard’s final talk on the cave paintings in France, dated to around 17,000 BCE. The pictures he showed made a fairly persuasive case that some of the pictures of animals are star charts (spots on these pictures fit the stars of the night sky at that time fairly well). If this theory is true, it would mean not only that humans were around at that time but that they were pretty intelligent, not the dumb “cavemen” we often see in the popular media.

Lots of fun discussions over meals and breaks, as well! Everyone agreed this was worth doing again, at least once a year. Hopefully by next year we will have the funds to support travel for some people to attend the meeting.

6 responses to “Report on the Annual Meeting”

  1. Andy Walsh

    “the time needed for three to six neutral mutations to become probable may be longer than the number of generations of living things in the whole history of the earth”

    If this is indeed the case (I’d be curious to see the math), then perhaps that simply argues for selection pressure as the dominant force in genetic variation, rather than neutral drift.

    “What is the likelihood that even one virus would carry new information needed to make a uterus, much less ten or so independent different viruses?”

    I would submit that this is the wrong question. It is like flipping a coin 100 times and then remarking on how unlikely it was for that particular sequence of heads and tails to occur. Yes, that particular sequence was very unlikely – but the same could be said of any result. In other words, there was a 100% chance that the outcome of the coin flipping would be a sequence about which you could remark “How unlikely!”
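
    (A toy script to make that concrete: every specific sequence of 100 fair flips, remarkable-looking or not, has exactly the same probability, about 7.9 x 10^-31. This is just an illustration of the point, not an argument by itself.)

    ```python
    import random

    # Any specific sequence of 100 fair-coin flips has probability (1/2)^100,
    # whether it "looks special" or not.
    random.seed(0)
    sequence = "".join(random.choice("HT") for _ in range(100))

    p_specific = 0.5 ** 100
    print(sequence)
    print(f"P(this exact sequence) = {p_specific:.1e}")   # ~7.9e-31
    print(f"P(all heads)           = {p_specific:.1e}")   # exactly the same value
    ```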

    I would suggest that instead of asking about probabilities and likelihoods, we should be looking at a measure more akin to entropy, which takes into account equivalent states.

    “Comparing two species, one can often find parts that are similar, but when many species are compared, one often finds that one species will be similar to another in one area, but similar to a third in a different area, and that one similar to a fourth in yet a different area, so that no consistent line of descent can be drawn”

    This is at least partly because the methods used to assess distance generally assume parsimony as a heuristic – in other words, they assume as few changes as possible to explain the observed differences. But it is just a heuristic; there is no guarantee that mutations actually occurred that way.
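
    (To make concrete what “as few changes as possible” means, here is a minimal sketch of small-parsimony counting in the style of Fitch’s method, on a made-up four-taxon example; the taxa and character states are invented purely for illustration.)

    ```python
    # Fitch-style small-parsimony scoring: given a fixed tree topology and one
    # character (one alignment column), count the fewest state changes needed
    # to explain the observed leaf states.

    def fitch(tree, states):
        """tree: leaf name (str) or a pair of subtrees; returns (state set, change count)."""
        if isinstance(tree, str):                     # leaf node
            return {states[tree]}, 0
        left, right = tree
        lset, lcost = fitch(left, states)
        rset, rcost = fitch(right, states)
        common = lset & rset
        if common:                                    # no extra change needed here
            return common, lcost + rcost
        return lset | rset, lcost + rcost + 1         # one change on this branch

    # One made-up character observed in four taxa, scored on two rival topologies:
    states = {"A": "G", "B": "G", "C": "T", "D": "T"}
    for name, tree in [("((A,B),(C,D))", (("A", "B"), ("C", "D"))),
                       ("((A,C),(B,D))", (("A", "C"), ("B", "D")))]:
        _, cost = fitch(tree, states)
        print(f"{name}: minimum changes = {cost}")
    # The first topology explains this character with 1 change, the second needs 2,
    # so parsimony prefers the first, for this character, under this heuristic.
    ```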

    “He also summarized the arguments of Behe and others that most beneficial mutations involve breaking or disabling things for defensive purposes, not making new things.”

    To me, this is less of a refutation of natural selection and more of a restatement of it. The fitness of each mutation is context-specific; an adaptation to new circumstances may very well read as “broken” in a different context.

  2. David Snoke

    Hi, just a few comments–
    On the 3-6 number, that is a combination of Behe’s mathematical work (with me), Doug Axe’s mathematical work which follows that up with realistic numbers, and the work in Behe’s book The Edge of Evolution. As Jed pointed out in his talk, the basic premise of our work has since been confirmed several times in the literature, and I have seen biophysics talks along the same lines. As you say, it basically implies that there must be selection pressure for any change involving more than 2-3 neutral steps. Before our work, there was a lot of work that assumed you could have much longer neutral drifts, and I think there are still people out there arguing for significant neutral drift. But if you go to a scenario where nearly every change has to be selected for, that changes the game quite a bit. It means you have to have “channels in parameter space” to get from anything to anywhere else, and then you have to ask what is the probability that so many changes have directed channels leading to them?

    Your discussion of probability seems to ignore the whole discussion of “specified” states, which, e.g., Dembski has discussed. If I call a specific sequence of coins in advance, then if I get that, it is fantastically unlikely. But also some “uncalled” sequences are fantastically unlikely because they are effectively “implicitly called” by having special properties. E.g., all heads. Even without calling that series, if you flipped a coin 100 times and got all heads, you would say it is highly unlikely. That is the basis of the 2nd Law: some states lie in regions with special properties that are extremely unlikely.

    One could cast this discussion in terms of entropy, by saying that the DNA which works for living systems has few equivalent states– most small mutations lead to death, and most random combinations of DNA (by far) lead to no function. But it would be better to put the discussion in terms of information. Entropy and information are related but are not in a simple inverse relation. Thus, for example, a crystal at T=0 is a low-entropy state with low information, while white noise has high entropy but low information. Living systems have high information. Dembski’s equivalent of the 2nd law is that information never spontaneously increases. I think that is valid but is not as well nailed down as the 2nd law because there is still debate about the proper definition of information (actually, there is also debate about the proper definition of entropy, but it is better defined).

    On the trees, I agree that is exactly how these models are done, as a most likely assignment. But that undermines what some people want to insist, that the homologies definitely prove common descent.

  3. Andy Walsh

    “It means you have to have ‘channels in parameter space’ to get from anything to anywhere else, and then you have to ask what is the probability that so many changes have directed channels leading to them?”

    I would agree that a better understanding of the nature of sequence space/the fitness landscape is needed. To my mind, the challenge in estimating the probability you are interested in is the fact that the selection pressure on a given sequence is constantly changing.

    “If I call a specific sequence of coins in advance, then if I get that, it is fantastically unlikely.”

    Right, but no one “called” uteri in advance. So they are not unlikely by this criterion.

    “But also some ‘uncalled’ sequences are fantastically unlikely because they are effectively ‘implicitly called’ by having special properties. E.g., all heads. Even without calling that series, if you flipped a coin 100 times and got all heads, you would say it is highly unlikely.”

    I would argue that “all heads” is not a special property; it is merely a description of one possible sequence which is no more or less likely than any other possible sequence. What is arguably a special property is that it is a sequence which can be described succinctly, that is, without writing the entire sequence explicitly. Such sequences are indeed rare among all possible coin flip sequences of length 100.
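
    (A rough way to put numbers on that rarity, assuming a “succinct description” means any binary description of at most b bits: there are at most 2^(b+1) - 1 such descriptions, so at most that many of the 2^100 possible sequences can have one. A sketch of the counting:)

    ```python
    # Counting argument (sketch): at most 2^(b+1) - 1 binary strings have length
    # <= b bits (including the empty string), so at most that many of the 2^100
    # possible flip sequences can be given a description of <= b bits.
    TOTAL_SEQUENCES = 2 ** 100

    for b in (10, 20, 50):
        max_describable = 2 ** (b + 1) - 1
        fraction = max_describable / TOTAL_SEQUENCES
        print(f"descriptions of <= {b:2d} bits: at most {max_describable} sequences "
              f"({fraction:.1e} of all 2^100)")
    ```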

    But I’m still not sure that explains to me what the special property is of genomes that encode for a uterus that makes them implicitly interesting. Can you state what that special property is?

    “One could cast this discussion in terms of entropy, by saying that the DNA which works for living systems has few equivalent states– most small mutations lead to death, and most random combinations of DNA (by far) lead to no function.”

    I’m not sure the first part is correct. Every human has mutations relative to their parents’ genomes; the number of changes was recently estimated to be about 60. Let’s say that “most mutations are lethal” means 51% are (I think “most” implies a majority of some kind). The probability of having 60 nonlethal mutations is thus 0.49 ^ 60, or 2.6*10^-19, which is clearly not consistent with the observed ratio of live births to the various alternatives.
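
    (The arithmetic behind that figure, as a quick back-of-the-envelope check; the 51% “lethal” rate is of course just the hypothetical threshold for “most”, not a measured value.)

    ```python
    # If each of 60 independent new mutations were lethal with probability 0.51,
    # the chance that all 60 are nonlethal would be 0.49^60.
    p_nonlethal = 0.49
    n_mutations = 60

    p_all_nonlethal = p_nonlethal ** n_mutations
    print(f"P(all {n_mutations} mutations nonlethal) = {p_all_nonlethal:.1e}")  # ~2.6e-19
    ```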

    I won’t dispute the notion that random DNA sequences are unlikely to be fit. However, this just highlights the need for a search algorithm which is more effective than random walks – and selection has been shown to be precisely such a search algorithm.

    “Living systems have high information. Dembski’s equivalent of the 2nd law is that information never spontaneously increases.”

    So, if the information in a genome has to come from somewhere, why can’t it come from the environment, or more precisely, be about the environment? In other words, can’t we think of a genome as a way of encoding the history of that organism’s ancestors? In that sense, it is a very lossy encoding, and thus information is not increasing at all.

    “On the trees, I agree that is exactly how these models are done, as a most likely assignment. But that undermines what some people want to insist, that the homologies definitely prove common descent.”

    To me, these are largely different topics. Any set of sequences (of anything, even coin flips) can be represented in a tree regardless of whether they were generated by a process of descent or not. So being able to generate a tree does not suggest common descent.

    What does suggest common descent is the homologies, not just in functional units which could be similar because a common designer repeated a known solution for a common problem, but also in structural features such as chromosomal inversions, gene duplications, etc. Further, there is the fact that trees generated from unrelated regions can predict which genomes will contain a particular structural feature.

    Now, why aren’t all those trees and predictions in complete agreement? There are at least 3 factors that can contribute to that. One is lateral genetic transfer; as you mentioned, some amount of genomic innovation is mediated by viruses. Another is convergent evolution, where a trait arises in multiple lineages. And the third is reversions, which can make two sequences closer in sequence space than they are in chronological natural history.

  4. David Snoke

    Responses:
    “a better understanding of the fitness landscape is needed.” One can never, as a scientist, argue against “more study is needed”, but given what we do know, channels in the fitness landscape that lead directly to beneficial functions by means of only other beneficial mutations are highly unlikely. What the new work has shown is that random walk in parameter space with rewards only after 5-10 neutral changes, i.e., selection after neutral drift, is a highly inefficient way to obtain new function. So the only type of selection that will work is if every change is immediately beneficial. That is a huge paradigm change.

    “no one called uteri in advance”. This, again, misses the whole set of work on specified complexity. It is possible to argue that some states are so special that they belong to what one can call a set of “pre-called” states. E.g., if I saw a number sequence that exactly gave the digits of Pi, even though I was not looking for Pi, I would say it was highly unlikely to be a random string. Or if, in fact, you did come into a room with 100 coins all showing heads, you would “know” that a person set them that way, that they weren’t randomly thrown (I often do this before class– set up dice all showing 1’s and ask the students if any of them has doubt that a person set them that way.) In the same way, in standard statistical mechanics we teach that some states are so special that they will never happen, e.g., all the atoms in a room lining up along one wall, leaving a vacuum in the rest of the room. Even without “calling” that in advance, we can say that states with special properties are highly unlikely; they are effectively “called” by nature itself.

    The general “special property” that you are asking about is functionality. I have published on this general topic. For a quantitative measure one can look at “sensitivity”– how does the function of the thing change with small changes in its internal structure? E.g. a computer will break down with just a small change in a single chip, and a car will break down with just removing a single hose.

    “humans have 60 changes from their parents”. I think you are referring to point mutations. In saying “most mutations are lethal” I was using shorthand to refer to mutations which actually change function. Many point mutations don’t change function as they are a tiny fraction of the billions of locations in DNA. The question is, if enough mutations occur to make a protein which can be called “new”, what is the likelihood that the new protein will be beneficial? I would say that the great majority will be either deleterious or neutral, and as I have discussed, neutral is no help.

    “Why can’t information come from the environment?” This is akin to asking “Why can’t a system lower its entropy by giving entropy to the environment?” One just can’t posit spontaneous giving and taking of things, one has to look at whether the laws of nature and probability allow it. There is a fun story I assign in class called “The Big Bounce” in which a material gains kinetic energy by converting thermal energy into energy of motion, and so bounces higher and higher until it reaches absolute zero temperature. It doesn’t violate energy conservation, but it violates the 2nd Law.

  5. Andy Walsh

    “So the only type of selection that will work is if every change is immediately beneficial. That is a huge paradigm change.”

    That could be a paradigm change from certain neutralist schools of thought. But it is not clear to me that a consensus has formed on the neutral theory of molecular evolution anyway. The field of evolutionary biology has always had a vein which holds that most/all changes are the result of selection pressure.

    “‘no one called uteri in advance’. This, again, misses the whole set of work on specified complexity.”

    Sorry; I wasn’t trying to ignore that work. Rather, I was trying to state explicitly that if we are to describe uteri as unlikely, it has to be because of that line of thinking. I was setting up my question about what property makes uteri “pre-called.”

    “Or if, in fact, you did come into a room with 100 coins all showing heads, you would ‘know’ that a person set them that way, that they weren’t randomly thrown (I often do this before class– set up dice all showing 1’s and ask the students if any of them has doubt that a person set them that way.)”

    Let’s explore this inference a little further. Setting aside the artificiality of finding 100 coins or a group of dice set up regardless of the pattern (even though in reality that certainly factors into the inference), what are we really saying when we infer that a person arranged them that way? The probability of getting 100 heads given a flipping process is 7.9 x 10^-31. The probability of getting 100 heads given a human process is harder to calculate, but let’s say it’s 1/3; any probability significantly higher than 7.9 x 10^-31 will do. Whether we take a frequentist approach and choose the distribution that maximizes the likelihood of the data, or a Bayesian approach to calculating the probability of a human process given the observed 100 heads, we are going to conclude that a human process was involved.
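
    (A sketch of that inference in numbers, with made-up priors purely for illustration:)

    ```python
    # Compare P(100 heads | fair flipping) with P(100 heads | a person arranged
    # them), then form posterior odds.  The 1/3 likelihood and the 50/50 prior
    # are assumptions for illustration, as in the text above.
    p_data_given_flipping = 0.5 ** 100     # ~7.9e-31 for any one specific sequence
    p_data_given_human = 1 / 3             # assumed: "all heads" is a likely human choice
    prior_human = 0.5
    prior_flipping = 0.5

    posterior_odds = (p_data_given_human * prior_human) / (
        p_data_given_flipping * prior_flipping)
    print(f"posterior odds, human vs. flipping: {posterior_odds:.1e} to 1")  # ~4e29 to 1
    ```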

    Now, let’s say instead we observed coins in the following sequence:
    HHTTHTTHTTTTHHHHHHTHHTHTHTHTTTHTTTHTTTTHTHHTHTTTHHTTTTHTTTHHTHTTHHTTTHTTHHTTTHHTTHHTTTHTHTTTHTHHHTTT

    By the same inference, we would likely conclude that it was generated by a flipping process, because we’d determine the probability of getting that sequence given a human process to be less than 7.9 x 10^-31. (If some sequences get a much higher probability than that, the rest must get a lower probability.) But as it turns out, those are the digits of pi in binary, with a head representing a 1.

    So when we say that 100 heads is a special sequence, are we describing something intrinsic to that sequence, or are we saying something about human tendencies? In other words, is it possible that individuals from another culture (possibly from another planet) would make the opposite inferences, because in their culture complex patterns are interesting and repetition is boring, so they’d never think of arranging coins as all heads?

    “The general ‘special property’ that you are asking about is functionality.”

    OK, but then how do we define functionality? Isn’t it dependent on context? When we say it is unlikely that a protein/tissue/organ “functions”, are we saying something about that protein/tissue/organ, or about the environment it is in that makes it functional, or maybe even about what humans consider to be a function?

    “For a quantitative measure one can look at ‘sensitivity’– how does the function of the thing change with small changes in its internal structure? E.g. a computer will break down with just a small change in a single chip, and a car will break down with just removing a single hose.”

    One could say the same thing about a computer program – remove certain instructions, and it will fail to run or generate the desired output. And yet, computer programs can be and have been generated from a process of variation and selection. How would you distinguish between a computer program written by a human programmer, and one generated from variation and selection?
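
    (For what it’s worth, here is a minimal toy of that kind of process: variation plus selection toward a fixed target string. It is a contrived example, since real selection has no pre-specified target; it only illustrates that keeping the fittest variant each round converges vastly faster than blind random guessing would.)

    ```python
    import random

    # Toy variation-and-selection search for a fixed target string.
    random.seed(1)
    TARGET = "SELECTION AS A SEARCH PROCEDURE"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while current != TARGET:
        generations += 1
        candidates = [mutate(current) for _ in range(100)] + [current]
        current = max(candidates, key=fitness)     # selection: keep the fittest
    print(f"reached the target in {generations} generations")
    ```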

    “‘humans have 60 changes from their parents’. I think you are referring to point mutations. In saying ‘most mutations are lethal’ I was using shorthand to refer to mutations which actually change function.”

    Yes, mutation is an overloaded word. To be more precise, I was referring to changes of an individual base of DNA, which can be called point mutations – but so can changes of an individual amino acid in a protein.

    “Many point mutations don’t change function as they are a tiny fraction of the billions of locations in DNA.”

    Do you mean that functional locations are a tiny fraction of all locations in the human genome? I guess that also depends at least partly on what you mean by “functional.”

    “The question is, if enough mutations occur to make a protein which can be called ‘new’, what is the likelihood that the new protein will be beneficial? I would say that the great majority will be either deleterious or neutral, and as I have discussed, neutral is no help.”

    That may very well be the case, but there are at least two ways to ameliorate that problem. The first is to have a massive number of inexpensive progeny, so that the rare beneficial changes have a chance of occurring. This is the approach of viruses; it is estimated that in a person infected with HIV, every possible point mutation in the genome is generated in a single day.
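
    (Rough numbers behind that estimate; these are order-of-magnitude figures recalled from the HIV literature, so treat them as assumptions rather than data from this discussion.)

    ```python
    # Order-of-magnitude sketch with assumed figures:
    genome_length = 10_000                         # HIV genome, roughly 1e4 bases
    possible_point_mutations = 3 * genome_length   # 3 alternative bases per site
    virions_per_day = 1e10                         # assumed new virions per day, untreated host
    mutation_rate = 3e-5                           # assumed substitutions per base per replication

    mutations_per_day = virions_per_day * mutation_rate * genome_length
    coverage = mutations_per_day / possible_point_mutations
    print(f"about {mutations_per_day:.0e} point mutations per day,")
    print(f"roughly {coverage:.0e} times the {possible_point_mutations} possible single changes")
    ```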

    The second approach is to have enough room in the genome for duplication events, so that one copy of a gene can maintain necessary functionality while the second copy can change and potentially develop a new function without disastrous consequences. This seems to be the approach of most multicellular organisms.

    “‘Why can’t information come from the environment?’ This is akin to asking ‘Why can’t a system lower its entropy by giving entropy to the environment?’ One just can’t posit spontaneous giving and taking of things, one has to look at whether the laws of nature and probability allow it.”

    What about the laws of nature and probability disallow information about environmental conditions being encoded in the genome of an organism? How is it fundamentally different from, say, making a sound recording on vinyl? Doesn’t the record have more information afterwards than when it was blank? Didn’t that information come from the sound waves in its environment?

    1. David Snoke

      I am just finishing reading Stephen Meyer’s new book “Darwin’s Doubt”. Most of the issues you raise are well addressed in that book, so I will leave off and strongly encourage you to get a copy and read it (I will post a review on this site soon.)

      Just one comment on “random” computer code. I can confidently say that no useful working code was ever generated by a completely random mechanism– even in code written by an intelligent writer, it is hard to write working code, thus the long, painful process of “debugging” known to every programmer. What I think you are referring to as “random” is an overarching, well-written (non-random) program which runs other programs inside a “shell”, in which sections of well-written code can be switched in order and recombined. That is similar to the “toolkit” approaches that Stephen Meyer critiques in his book. Such approaches look at one small aspect which is random, and ignore the much larger, non-random system that controls the randomization.
