As has been previously and compellingly demonstrated, a Nobel Prize is, in reality, no assurance of anything. Yasser Arafat received the Peace Prize; need I say more?
But be that as it may, there is little doubt that Prof. Robert Aumann is eminently qualified for the award in Game Theory that he now shares with Thomas Schelling (to which Rabbi Adlerstein referred earlier). He has an extraordinary resume, and my own familiarity with the subject matter (one semester) is sufficient to know that the topic is extremely complex and demands a commanding intellect. But the debate about the Codes need not be closed simply because one side’s argument now bears “the imprimatur of a Nobel laureate.”
As Rabbi Adlerstein and others already know, my own history of enthusiasm for the Codes is (or, as will be clarified below, was) roughly as long-standing as his own skepticism. And while I agree in essence with Prof. Aumann’s conclusions, I disagree with Rabbi Adlerstein’s. This may well be a subject for further scientific investigation, regardless of the work of the committee.
One thing is clear — the opponents, no less than the proponents, have often come to the topic in order to validate their “gut response” rather than to perform a sincere investigation. As Prof. Aumann writes:
When I first presented the results of Witztum, Rips, and Rosenberg at the Center for the Study of Rationality at the Hebrew University, Professor Maya Bar-Hillel told me after the presentation, Bob, I won’t believe this no matter what evidence you bring me. She now says — and no doubt believes — that this was not really meant literally; but I believe that it was, and indeed that it remains true today. Many others hold similar views.
Let me rewind a bit, and quote the paragraph from Prof. Aumann that immediately precedes that quoted by Rabbi Adlerstein:
As an observer—not a researcher!—I have been involved with the Codes research for close to twenty years, and have invested in it a tremendous amount of time and energy. Though the basic thesis of the research seems wildly improbable, for many years I thought that an ironclad case had been made for the codes; I did not see how “cheating” could have been possible. Then came the work of the “opponents” (see, for example, McKay, Bar-Natan, Bar-Hillel and Kalai, Statistical Science 14 (1999), 149-173). Though this work did not convince me that the data had been manipulated, it did convince me that it could have been; that manipulation was technically possible. The arguments that ensued—including, on both sides, implicit or explicit accusations of manipulation—eventually became extremely complex, and I was unable to follow them sufficiently well to decide for myself who is right. Having become convinced that the only way to settle the matter to my satisfaction is to conduct an experiment designed and analyzed under my own supervision, I welcomed the suggestion of Eliyahu Rips to chair the committee referred to in Paragraph 1 above. Though fairly sure that the committee’s work would convince almost no one who did not hold the corresponding opinion beforehand, I still thought it worthwhile to conduct the experiment just for the purpose of deciding the issue for myself. And, I decided that that would be the end of my own involvement in the Codes research.
I suggest a careful reading of Isaak Lapides’ critique, and a look at Doron Witztum’s as well, in the Analyses of the “Gans” Committee Report to which Rabbi Adlerstein directed us earlier. The objections of Prof. Lapides, as a member of the committee, are of course the more compelling of the two — and he cites “extreme carelessness, resulting in dozens of errors and… many essential violations of the Committee.” Witztum points out trivial spelling errors and confusion of city names in the data. Even Professor Aumann concedes that “in one case, an expert provided data some of which he himself subsequently acknowledged to be mistaken.”
Now, if the null hypothesis is correct, namely that there is no code, then of course all the errors above would be irrelevant. But if there truly are “codes” after all, then errors in the data would obscure them and lead to a false negative. That being the case, I cannot see a scientific or statistical basis for Prof. Aumann’s assertion that despite all of the errors, “the committee’s work should not be entirely discounted” (and note, even there, a healthy acknowledgement of trouble). If the errors are of a nature that would distort the results (and no one denies that they are), then the negative results were all but guaranteed beforehand; the committee needs to correct the spelling and try again.
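The false-negative point can be sketched with a toy simulation. Every number below is hypothetical, chosen only to illustrate the statistics; it is not a model of the actual Codes methodology:

```python
import random

random.seed(42)  # reproducible toy example

def signal_detected(data, threshold=0.7):
    """Toy 'detector': reports a find if enough entries match the pattern."""
    return sum(data) / len(data) >= threshold

# Hypothetical world in which a genuine code exists: about 90% of the
# 1,000 data points carry the expected pattern.
clean = [1 if random.random() < 0.9 else 0 for _ in range(1000)]

# Now corrupt roughly 40% of the entries (misspellings, confused
# city names, and the like), destroying the pattern in those spots.
noisy = [x if random.random() >= 0.4 else 0 for x in clean]

print(signal_detected(clean))  # True: the code is found in clean data
print(signal_detected(noisy))  # False: errors mask a signal that is really there
```

With the corrupted data the detector misses the signal even though, by construction, the code exists, which is exactly the false-negative scenario described above: errors in the data guarantee a negative result regardless of the truth of the hypothesis.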
I met with Doron Witztum in 1990 or thereabouts, at which time he showed me the research and detailed how long it took to run the experiments in 1986, and the equipment they were using. For those who are familiar with the history of computing, the original runs were done on an IBM PC or compatible machine with an Intel 286 processor. If your computer has a Pentium 4, it is roughly 500 times faster than a 286, and, as Moore’s Law predicted, the fastest microprocessors today are 5,000 times faster than the one they used. As Prof. Aumann said, “I did not see how ‘cheating’ could have been possible.” I echo that, for the computing power necessary to tune or doctor the results was simply beyond their reach.
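For readers who want to check the arithmetic, speedup figures of this order follow from the common Moore’s Law rule of thumb of a doubling roughly every 18 months. The 18-month figure and the roughly 19-year gap from 1986 to the time of writing are my own assumptions for this sketch, not data from the original experiments:

```python
def moore_speedup(years, doubling_months=18):
    """Rule-of-thumb speedup after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# From the 1986 runs on a 286 to roughly the present is about 19 years:
print(round(moore_speedup(19)))  # on the order of several thousand
```

A doubling every 18 months over 19 years compounds to a factor in the thousands, consistent with the figures quoted above.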
When it comes to the work of the committee, I would venture to say that Prof. Aumann’s own conclusion was reached in much the same way as Prof. Bar-Hillel’s several years earlier. He says that from the outset he felt that “the basic thesis of the research seems wildly improbable.” He was, nonetheless, very impressed with the initial work, as he describes. But then, after years of wrangling, he reached a point of frustration with the whole project because of the doubt cast upon it, and allowed the committee’s non-conclusion merely to confirm his uncertainty. The committee’s results, and the codes themselves, deserve scrutiny and further scientific investigation before Prof. Aumann’s report can be deemed conclusive.
But, having said all of the above, I will refer to something the computer industry calls FUD:
FUD (Fear, Uncertainty, and Doubt) is the term for any strategy intended to make a company’s customers insecure about future product plans with the purpose of discouraging them from adopting competitors’ products. For example, “You can try using X instead of our product, but you may lose all your data.”
With regard to the Codes, there is certainly enough FUD in the air to question the value of portraying them as a proof of Divine Authorship, regardless of the actual authenticity (or lack thereof) of the Codes. If a person comes to Torah thanks in part to a compelling presentation of the Codes work, and then stumbles upon the rebuttal of “the opponents,” who is to say that that person will discount the opposition, rather than viewing the Kiruv professionals as snake-oil salesmen (and women)? As Prof. Aumann wrote, “the data is too complex and ambiguous, and its analysis involves too many judgment calls, to allow reaching meaningful scientific conclusions,” at least for now.
It is for this reason that, while approaching the matter from a very different perspective than that of Rabbi Adlerstein, I reach a similar conclusion: at this point it is a matter of faith more than science. Since I don’t view WRR’s conclusion as “wildly improbable,” I find their results more compelling than the refutations. But there are many other demonstrations of the greatness of Torah that have no such FUD factor, and whose value for Kiruv (Jewish outreach) cannot be questioned.